\section{Introduction} The study of trace spaces (on the boundary of a domain) for Sobolev spaces on Euclidean domains is associated with the Dirichlet boundary value problem for elliptic differential equations. Certain types of Dirichlet problems are guaranteed to have solutions when the boundary data arises as the trace of a Sobolev function. The work of Gagliardo identified the classical Besov spaces $B^{1-1/p}_{p, p}(\mathbb{R}^d)$ as the trace spaces for the first order Sobolev spaces $W^{1, p}(\mathbb{R}^{d+1}_+)$, where $1<p<\infty$, $\mathbb{R}^{d+1}_+:=\{(x, t): x\in \mathbb{R}^d, t>0\}$, and $1\leq d\in \mathbb{N}$, the set of all natural numbers; see \cite[Theorem 1.\RNum{1}]{Ga}. In the borderline case $p=1$, it was also shown that the trace space for $W^{1,1}(\mathbb{R}^{d+1}_+)$ is $L^1(\mathbb{R}^d)$; see \cite[Theorem 1.\RNum{2}]{Ga}. Here we say that a function space $\mathbb X(\mathbb{R}^d)$ is the trace space for the function space $\mathbb Y(\mathbb{R}^{d+1}_+)$ if every function in $\mathbb Y(\mathbb{R}^{d+1}_+)$ has a trace in $\mathbb X(\mathbb{R}^d)$ and every function in $\mathbb X(\mathbb{R}^d)$ is the trace of some function in $\mathbb Y(\mathbb{R}^{d+1}_+)$. We refer to \cite{Ga} and the monographs \cite{Pe,T,T4} for more trace results and properties of classical Besov spaces. It is natural to seek the trace spaces for Sobolev spaces associated with weights. The trace spaces for Sobolev spaces with Muckenhoupt $A_p$ weights (in the following, we briefly denote by $A_p$ the set of all Muckenhoupt $A_p$ weights; see Definition \ref{A_p-def} below for the precise definition) have also been well studied. For example, early trace results for Sobolev spaces with weights in $A_p$ were given by Nikolskii, Lizorkin and Va\v{s}arin; see \cite{Li,Ni,Va}.
Some recent results on trace spaces of Sobolev spaces with weights in $A_p$ were obtained in \cite{KSW17,LS-21,Tyu1,Tyu2,Tyu4}. We also refer to \cite{Ar, MirRus, Pe, SlBa, T, T4, IV, HaMa} for more investigations in this direction. The discussion of trace spaces of Sobolev spaces is easier when the weights are in $A_p$, because under this constraint the related trace spaces always exist. However, to the best of our knowledge, there are very few results in the literature about trace spaces for Sobolev spaces with weights not belonging to $A_p$. For $1\leq p<\infty$, $-1<\alpha\leq p-1$ and $\lambda\in \mathbb{R}$, let $\omega^{\lambda}_{\alpha}\colon \mathbb{R}^{d+1}_{+}\rightarrow(0,\infty)$ denote the weight \begin{equation}\label{eq-1.1} (x_{1},x_{2},\ldots,x_{d+1})\mapsto \left\{\begin{array}{cl} |x_{d+1}|^{\alpha}\log^{\lambda}\frac{4}{|x_{d+1}|},&x_{d+1}\in(0,1], \\ \log^{\lambda}4,&\,x_{d+1}\in(1,\infty), \end{array}\right. \end{equation} and let the weighted measure $\mu_{\alpha}^\lambda$ on $\mathbb{R}^{d+1}_+$ be defined by \begin{equation}\label{weight} \mu^{\lambda}_{\alpha}(E)=\frac{1}{\log^{\lambda}4}\int_{E}\omega_{\alpha}^{\lambda}\, dm_{d+1}. \end{equation} It is known that, for each $p\geq 1$ and $-1<\alpha<p-1$, the weight $\omega^{\lambda}_{\alpha}$ belongs to $A_p$ for every $\lambda\in \mathbb{R}$ (cf. Proposition \ref{A_p} below). But when $\alpha=p-1$, the situation is much different. Again by Proposition \ref{A_p}, we see that if $p>1$, the weight $\omega^{\lambda}_{p-1}$ does not belong to $A_p$ for any $\lambda\in \mathbb{R}$; even in the special case $p=1$, the weight $\omega^{\lambda}_{0}$ does not belong to $A_1$ for any $\lambda<0$.
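Since the weight in \eqref{eq-1.1} depends only on the last coordinate, its behavior is easy to probe numerically. The following Python sketch is purely illustrative and not part of the proofs; the function names and the crude midpoint rule are our own assumptions. It evaluates $\omega_\alpha^\lambda$ and checks, for $p=1$, $\alpha=0$ and a sample $\lambda>0$, that the average of the weight over small boundary cubes stays within a bounded factor of its essential infimum, consistent with membership in $A_1$.

```python
import math

def weight(t, alpha, lam):
    """The weight omega_alpha^lambda of (1.1), as a function of the last
    coordinate t = x_{d+1} > 0 (it does not depend on x_1, ..., x_d)."""
    if 0 < t <= 1:
        return t ** alpha * math.log(4 / t) ** lam
    return math.log(4) ** lam

def avg_over_interval(h, lam, n=100000):
    """Midpoint rule for (1/h) * int_0^h log^lam(4/t) dt (alpha = 0)."""
    return sum(weight((i + 0.5) * h / n, 0.0, lam) for i in range(n)) / n

# On Q = (0, h]^d x (0, h] the essential infimum of omega_0^lam is
# log^lam(4/h); a bounded average/infimum ratio is what A_1 requires.
lam = 2.0
for h in (1.0, 0.1, 0.01):
    ratio = avg_over_interval(h, lam) / math.log(4 / h) ** lam
    assert 1.0 <= ratio < 5.0  # bounded ratio, consistent with omega in A_1
```

For $\lambda<0$ the same ratio blows up as the cube approaches the boundary, matching the failure of $A_1$ stated above.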
\begin{figure}[htbp] \begin{center} \includegraphics{fig.1} \end{center} \caption{The set $\Gamma$.} \label{fig-1} \end{figure} Let $$\Pi=\{(p,\lambda)\in \mathbb{R}^2:\; p\geq 1\}$$ and $$\Gamma=\{(p,\lambda)\in \Pi :\; p>1, \lambda>p-1\}\cup \{(p,\lambda)\in \Pi :\; p=1, \lambda\geq 0\}$$ (see Figure \ref{fig-1}). The following example shows that there are $(p,\lambda)\in \Pi$ for which the related trace operator $\mathscr T$ on $W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)$ does not exist. The reader is referred to Definition \ref{trace-defn} below for the precise definition of trace operators. \begin{example}\label{example} Suppose that $(p,\lambda)\in \Pi$ and $\alpha=p-1$. For any pair $(p, \lambda)\notin\Gamma$, there is a function $u\in W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{\alpha}^\lambda)$ such that $\mathscr T u$ does not exist. \end{example} Naturally, one may ask whether for every $(p, \lambda)\in\Gamma$, the related trace operator $\mathscr T$ on $W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)$ exists. The following result shows that the answer to this question is positive. \begin{thm}\label{Thm-1.2} Suppose that $(p,\lambda)\in \Pi$. Then \begin{enumerate} \item[$(i)$] the trace function $\mathscr T u$ belongs to $L^p(\mathbb{R}^d)$ for every $u\in W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)$ if and only if $(p, \lambda)\in \Gamma$; \item[$(ii)$] for every $(p, \lambda)\in \Gamma$, the trace operator $\mathscr T\colon W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)\rightarrow L^p(\mathbb{R}^d)$ is bounded and linear. \end{enumerate} \end{thm} The main purpose of this paper is to characterize the trace spaces for the weighted Sobolev spaces $W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)$ with $(p, \lambda)\in\Gamma$.
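For readers who wish to experiment with the parameter region, membership in $\Gamma$ amounts to a one-line test; the sketch below is our own illustrative encoding, not code from the paper.

```python
def in_Gamma(p, lam):
    """Membership in Gamma = {p > 1, lam > p-1} union {p = 1, lam >= 0}."""
    return (p > 1 and lam > p - 1) or (p == 1 and lam >= 0)

# Interior and boundary behavior of the region Gamma pictured above:
assert in_Gamma(1, 0) and in_Gamma(2, 1.5)       # p = 1 includes lam = 0
assert not in_Gamma(2, 1) and not in_Gamma(1, -0.5)  # p > 1 excludes lam = p-1
```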
In \cite{KSW17}, Koskela, Soto and the third author of this paper considered the weights $\omega^{\lambda}_{\alpha}$ and the related measures $\mu_{\alpha}^\lambda$ when $\lambda=0$ and $\alpha<p-1$, and they obtained a characterization of the corresponding trace spaces with the aid of a class of Besov spaces. But they did not consider the borderline case $\alpha=p-1$, because the arguments in \cite{KSW17} require the constraint $\alpha<p-1$; thus their methods fail in the case $\alpha=p-1$, and new ideas are needed to deal with it. Inspired by the approach of using integral averages over dyadic cubes in \cite{KSW17} and the idea of choosing a system of tilings in \cite{Tyu2}, we construct a new kind of Besov-type space $\mathcal B^{\gamma}_{p}(\mathbb{R}^d)$ based on integral averages over the so-called {\it selected layers of dyadic cubes} (see Definition \ref{Besov-space-0} below for the details). By replacing $L^p(\mathbb{R}^d)$ in Theorem \ref{Thm-1.2}$(ii)$ with $\mathcal B^{\gamma}_{p}(\mathbb{R}^d)$, we get the following related trace result. \begin{thm}\label{thm-1.3} Suppose that $(p,\lambda)\in \Gamma$. Then the trace operator $\mathscr T\colon W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)\rightarrow \mathcal B^{\gamma}_{p}(\mathbb{R}^d)$ is bounded and linear provided that $0<\gamma<\lambda-(p-1)$ if $p>1$ or $0<\gamma\leq \lambda$ if $p=1$. \end{thm} With the aid of $\mathcal B^{\lambda}_{p}(\mathbb{R}^d)$, we also obtain the following extension result. \begin{thm}\label{thm-1.4} Suppose that $(p,\lambda)\in \Gamma$.
Then there exists a bounded and linear extension operator $\mathscr E\colon \mathcal B^{\lambda}_{p}(\mathbb{R}^d) \rightarrow W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)$ such that $\mathscr T\circ \mathscr E={\operatorname{id}}$ on $\mathcal B^{\lambda}_{p}(\mathbb{R}^d)$, where ``$\,{\operatorname{id}}$'' denotes the identity map. \end{thm} Combining Theorem \ref{thm-1.3} and Theorem \ref{thm-1.4}, we obtain, for $p=1$, the full characterization of the trace spaces for the weighted Sobolev spaces $W^{1,1}(\mathbb{R}^{d+1}_{+},\mu_{0}^{\lambda})$, which is formulated in the following corollary. \begin{cor}\label{cor-1.5} Let $\lambda>0$. Then the Besov-type space $\mathcal B^{\lambda}_{1}(\mathbb{R}^{d})$ is the trace space of the weighted Sobolev space $W^{1,1}(\mathbb{R}^{d+1}_{+},\mu_{0}^{\lambda})$. \end{cor} Notice that if $\lambda>0$, the weights $\omega_0^{\lambda}$ are actually in $A_1$ (cf. Proposition \ref{A_p}). Hence the trace spaces $\mathcal B^{\lambda}_{1}(\mathbb{R}^{d})$ are equivalent to the ones defined in \cite{Tyu2}. The trace spaces in \cite{Tyu2} are complicated since they must accommodate all weights in $A_1$; our concrete trace spaces $\mathcal B^{\lambda}_{1}(\mathbb{R}^{d})$, which can be regarded as an example, may therefore be helpful for understanding them. We remark that we do not consider the case $\alpha>p-1$. This is because, when $\alpha>p-1$, by slightly modifying the proof of Proposition \ref{A_p} and the construction of Example \ref{example}, one sees that for any $(p, \lambda)$ with $p\geq 1$ and $\lambda\in \mathbb{R}$, the weight $\omega_\alpha^\lambda$ does not belong to $A_p$, and the trace operator $\mathscr T$ on $W^{1, p}(\mathbb{R}^{d+1}_+, \mu_\alpha^\lambda)$ does not exist either. The paper is organized as follows.
In Section \ref{sec-2}, some necessary terminology is introduced and a proposition is proved which classifies the weights $\omega^\lambda_\alpha$ into two classes: the $A_p$ class and the non-$A_p$ class. The proofs of Example \ref{example}, Theorem \ref{Thm-1.2} and Theorem \ref{thm-1.3} are presented in Section \ref{sec-3}. Theorem \ref{thm-1.4} is proved in Section \ref{sec-4}. \section{Preliminaries}\label{sec-2} Throughout this paper, the letter $C$ (sometimes with a subscript) denotes a positive constant that usually depends only on the given parameters of the spaces and may change at different occurrences; if $C$ depends on $a,$ $b,$ $\ldots$, then we write $C = C(a,b,\ldots)$. The notation $A\lesssim B$ (resp. $A \gtrsim B$) means that there is a constant $C\geq 1$ such that $A \leq C \cdot B$ (resp. $A \geq C \cdot B$). If $A\lesssim B$ and $A \gtrsim B$, then we write $A\approx B$. If $(X, \mu)$ is a measure space, for every function $f\in L^1_{\rm loc}(X, \mu)$ and every measurable subset $A\subset X$, let $\vint_Afd\mu$ stand for the integral average $\frac{1}{\mu(A)}\int_Afd\mu$, i.e., $$\vint_Afd\mu=\frac{1}{\mu(A)}\int_Afd\mu.$$ Let us recall the definition of the Muckenhoupt $A_p$ weights with $1\leq p<\infty$ (see, e.g., \cite[Chapter 15]{HKM} or \cite{DILTV}). \begin{defn}\label{A_p-def} We say that a weight $\omega\colon \mathbb{R}^{d+1}_+\rightarrow [0, \infty)$ belongs to the Muckenhoupt class $A_p$ if $\omega$ is locally integrable and there is a constant $C>0$ such that for any cube $Q\subset \mathbb{R}^{d+1}_+$, \[\left(\vint_Q \omega\, dm_{d+1}\right)\left(\vint_Q \omega^{-1/(p-1)}\, dm_{d+1}\right)^{p-1}\leq C\ \ \ \text{if}\ \ p>1\] and \[\left(\vint_Q \omega\, dm_{d+1}\right)\leq C \,{\operatorname{ess\ inf}}_{x\in Q} \, {\omega(x)} \ \ \ \text{if}\ \ p=1.\] \end{defn} \begin{prop}\label{A_p} \noindent $(a)$ Suppose that $(p, \lambda)\in \Pi$, and let $-1<\alpha<p-1$.
Then the weights $\omega_\alpha^\lambda$ $($see \eqref{eq-1.1} for the definition$)$ belong to $A_p$ for all pairs $(p, \lambda)$ with $p\geq 1$ and $\lambda\in \mathbb{R}$. \noindent $(b)$ Suppose that $(p, \lambda)\in \Pi$, and let $\alpha=p-1$. Then the weights $\omega_\alpha^\lambda$ belong to $A_p$ if and only if $p=1$ and $\lambda\geq 0$. \end{prop} \begin{proof} $(a)$ The proof follows from a direct computation. We omit it here. $(b)$ We first show that if $p=1$ and $\lambda\geq 0$, the weights $\omega_\alpha^\lambda$ belong to $A_p$. If $p=1$ and $\lambda=0$, then $\omega_0^\lambda\equiv 1$, which obviously belongs to $A_1$. If $p=1$ and $\lambda>0$, then for any $t\geq0$ and $h>0$, by letting $Q=(0,h]^{d}\times(t,t+h]$ and by integration by parts, we have \begin{equation}\label{tmp-A_1} \vint_Q \omega_0^\lambda\, dm_{d+1}\leq C_{\lambda}\log^{\lambda}\frac{4}{t+h}=C_{\lambda}\cdot{\operatorname{ess\ inf}}_{x\in Q} \omega_0^\lambda(x), \end{equation} which implies that the weight $\omega_0^\lambda$ belongs to $A_1$. For the converse, it suffices to show that for any pair $(p, \lambda)$ with $p>1$ and $\lambda\in \mathbb{R}$, or $p=1$ and $\lambda<0$, the weight $\omega_\alpha^\lambda$ does not belong to $A_p$. For $p>1$, we choose the cube $Q=(0,2^{-k}]^{d+1}\subset\mathbb{R}^{d+1}_{+}$. Then $$ \vint_Q |x_{d+1}|^{\alpha}\log^{\lambda}\frac{4}{|x_{d+1}|}\, dm_{d+1}=2^{k}\int_{0}^{2^{-k}}t^{p-1}\log^{\lambda}\frac{4}{t}dt $$ and, after the substitution $s=\log\frac{4}{t}$, $$ \vint_Q \left(|x_{d+1}|^{\alpha}\log^{\lambda}\frac{4}{|x_{d+1}|}\right)^{-1/(p-1)}\, dm_{d+1}=2^{k}\int_{(k+2)\log2}^{\infty}s^{-\frac{\lambda}{p-1}}ds.
$$ It follows that if $\lambda>p-1$, then $$ \left(\vint_Q |x_{d+1}|^{\alpha}\log^{\lambda}\frac{4}{|x_{d+1}|}\, dm_{d+1}\right)\left(\vint_Q (|x_{d+1}|^{\alpha}\log^{\lambda}\frac{4}{|x_{d+1}|})^{-1/(p-1)}\, dm_{d+1}\right)^{p-1}\gtrsim(k+2)^{p-1}, $$ and if $\lambda\leq p-1$, then for each $k\geq0$, $$\int_{(k+2)\log2}^{\infty}s^{-\frac{\lambda}{p-1}}ds=\infty.$$ Hence $\omega_\alpha^\lambda$ does not belong to $A_p$ for any $\lambda \in \mathbb{R}$ when $p>1$. For $p=1$, the estimate \eqref{tmp-A_1} fails for any $\lambda<0$, since for $t=0$ we have ${\operatorname{ess\ inf}}_{x\in Q} \omega_0^\lambda(x)=0$. This implies that $\omega_0^\lambda$ does not belong to $A_1$ for any $\lambda<0$. Hence the proposition is proved. \end{proof} Let us recall that $\mu_{\alpha}^\lambda$ denotes the weighted measure on $\mathbb{R}^{d+1}_+$ defined in \eqref{weight}. Then by a direct computation, we know that the measure ${\mu_\alpha^\lambda}$ is doubling on $\mathbb{R}^{d+1}_+$, i.e., there exists a constant $C\geq 1$ such that for all $x\in \mathbb{R}^{d+1}_+$ and $r>0$, \[{\mu_\alpha^\lambda}(\mathbb{B}(x, 2r)\cap \mathbb{R}^{d+1}_+)\leq C{\mu_\alpha^\lambda}(\mathbb{B}(x, r)\cap\mathbb{R}^{d+1}_+ ),\] where $\mathbb{B}(x, r)$ denotes the open ball with center $x$ and radius $r$. \begin{defn} Suppose that $p\in[1,\infty)$.
Then $W^{1,p}(\mathbb{R}^{d+1}_+,\mu^{\lambda}_{\alpha})$ is defined as the normed space of all measurable functions $f\in L^1_{\rm loc}(\mathbb{R}^{d+1}_+)$ such that their first-order distributional derivatives, denoted by $\nabla f$, belong to $L^1_{\rm loc}(\mathbb{R}^{d+1}_+)$, and $$ \|f\|_{W^{1,p}(\mathbb{R}^{d+1}_+,\mu^{\lambda}_{\alpha})}:=\|f\|_{L^{p}(\mathbb{R}^{d+1}_+,\mu^{\lambda}_{\alpha})}+\|\nabla f\|_{L^{p}(\mathbb{R}^{d+1}_+,\mu^{\lambda}_{\alpha})}<+\infty.$$ \end{defn} In order to formulate the dyadic norms of the related Besov-type spaces, let us recall the standard dyadic decompositions of $\mathbb{R}^d$ and $\mathbb{R}^{d+1}_+$, respectively (cf. \cite[Section 2]{KSW17}). Denote by $\mathscr{Q}_d$ the collection of dyadic semi-open cubes in $\mathbb{R}^d$, i.e., the cubes of the form $Q := 2^{-k}\big((0,1]^d + m\big)$, where $k \in \mathbb Z$, the set of all integers, and $m \in {\mathbb Z}^d$; and let $\mathscr{Q}^+_d$ stand for the set of all cubes in $\mathscr{Q}_d$ which are contained in the upper half-space $\mathbb{R}^{d-1}\times(0,\infty)$. Write $\ell(Q)$ for the edge length of $Q \in \mathscr{Q}_d$, i.e., $2^{-k}$ in the preceding representation, and $\mathscr{Q}_{d,k}$ for the set of cubes $Q \in \mathscr{Q}_d$ with $\ell(Q) = 2^{-k}$. Let $Q\in \mathscr{Q}_{d, 2^j}$ for some $j\in \mathbb{N}$. We say that $Q'$ in $\mathscr{Q}_d$ is a {\it selected neighbor} of $Q$, denoted by $Q' \asymp Q$, if $Q'\in \mathscr{Q}_{d, 2^j}\cup \mathscr{Q}_{d, 2^{j-1}}$ and $\overline{Q}\cap\overline{Q'} \neq \emptyset$. Here we unify $\mathscr{Q}_{d, 2^{-1}}$ with $\mathscr{Q}_{d, 1}$, i.e., $\mathscr{Q}_{d, 2^{-1}}=\mathscr{Q}_{d, 1}$. Note that for every $Q\in\mathscr{Q}_d$, the number of its neighbors is uniformly bounded, and for every $Q\in \cup_{j\in \mathbb{N}}\mathscr{Q}_{d, 2^j}$, the number of its selected neighbors is also uniformly bounded.
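To make the combinatorics of selected neighbors concrete, here is a small Python sketch; the encoding of a dyadic cube by its level $k$ and integer translation $m$ is our own convention for this illustration, not notation from the paper.

```python
def closure(k, m):
    """Closure of the dyadic cube 2^{-k}((0,1]^d + m), m an integer tuple."""
    h = 2.0 ** -k
    return tuple((mi * h, (mi + 1) * h) for mi in m)

def touch(c1, c2):
    # Closed boxes intersect iff their intervals overlap on every axis.
    return all(a1 <= b2 and a2 <= b1 for (a1, b1), (a2, b2) in zip(c1, c2))

def selected_neighbors(j, m, candidates):
    """Cubes (k, m') from `candidates` that are selected neighbors of the
    cube Q in level 2^j with translation m: the level of Q' must be 2^j or
    2^{j-1}, and the closures must touch.  For j = 0 the convention
    Q_{d,2^{-1}} = Q_{d,1} identifies both admitted levels with level 1."""
    levels = {2 ** j, 2 ** (j - 1)} if j >= 1 else {1}
    cQ = closure(2 ** j, m)
    return [(k, mm) for (k, mm) in candidates
            if k in levels and touch(cQ, closure(k, mm))]
```

For instance, in dimension $d=1$ the cube $(1/4,1/2]$ of level $2^1=2$ has selected neighbors among the touching cubes of levels $2$ and $1$, while finer cubes are excluded.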
\begin{defn}\label{Besov-space-0} Let $p\in [1, \infty)$ and $\lambda>0$. Then the Besov space $\mathcal B^{\lambda}_{p}(\mathbb{R}^{d})$ is defined as the normed space of all measurable functions $f\in L_{\rm loc}^{1}(\mathbb{R}^{d})$ such that $$ \|f\|_{\mathcal B^{\lambda}_{p}(\mathbb{R}^{d})}:=\|f\|_{L^{p}(\mathbb{R}^{d})}+\|f\|_{\dot{\mathcal B}^{\lambda}_{p}(\mathbb{R}^{d})}<+\infty,$$ where $$\|f\|^p_{\dot{\mathcal B}^{\lambda}_{p}(\mathbb{R}^{d})}:= \sum_{j=0}^{\infty}(2^{j}+2)^{\lambda}\sum_{Q\in\mathscr{Q}_{d,2^{j}}}m_{d}(Q)\sum_{Q'\asymp Q}|f_{Q}-f_{Q'}|^p.$$ \end{defn} Let us recall the standard $(1,1)$-Poincar\'e inequality satisfied by functions that are locally $W^{1,1}$-regular in the upper half-space. If $Q$ is a cube in $\mathbb{R}^{d+1}_+$ such that ${\operatorname{dist}}(Q,\mathbb{R}^d\times\{0\}) > 0$ and $f \in W^{1,1}(Q)$, then there is a constant $C>0$ independent of $Q$ and $f$ such that \begin{equation}\label{poincare} \vint_Q |f-f_{Q}|\, dm_{d+1}\leq C \ell(Q)\vint_Q |\nabla f|\, dm_{d+1}, \end{equation} where ``${\operatorname{dist}}$'' means ``distance''. To formulate the trace and extension operators, we first recall a Whitney decomposition of $\mathbb{R}^{d+1}_+$ related to the dyadic decomposition (cf. \cite{KSW17}). For $Q \in \mathscr{Q}_{d,k}$ and $k \in {\mathbb Z}$, write ${\mathscr W}(Q) := Q \times (2^{-k},2^{-k+1}] \in \mathscr{Q}^+_{d+1,k}$. To simplify the notation in the sequel, we further define $\mathscr{Q}^1_d := \cup_{k \geq 1} \mathscr{Q}_{d,k}$. Then $\{{\mathscr W}(Q) \,:\, Q \in \mathscr{Q}_d\}$ is a {\it Whitney decomposition} of $\mathbb{R}^{d}\times(0,\infty)$ with respect to the boundary $\mathbb{R}^d\times\{0\}$.
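Continuing the same illustrative cube encoding, the Whitney cube $\mathscr W(Q)=Q\times(2^{-k},2^{-k+1}]$ sitting above a dyadic cube $Q$ can be sketched as follows; the function name and tuple representation are ours, not part of the paper.

```python
def whitney_cube(k, m):
    """For Q = 2^{-k}((0,1]^d + m), return (base intervals of Q, vertical
    interval of W(Q) = Q x (2^{-k}, 2^{-k+1}]).

    W(Q) has the same edge length as Q, and its distance to the boundary
    R^d x {0} is comparable to that edge length -- the Whitney property.
    """
    h = 2.0 ** -k
    base = tuple((mi * h, (mi + 1) * h) for mi in m)
    return base, (h, 2 * h)

# The cube (3/4,1] x (1/4,1/2] at level k = 2 sits under the vertical
# slab (1/4, 1/2] in the extra coordinate.
base, vert = whitney_cube(2, (3, 1))
assert base == ((0.75, 1.0), (0.25, 0.5)) and vert == (0.25, 0.5)
```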
For every $Q \in \mathscr{Q}^1_d$, define a smooth function $$\psi_Q \colon \mathbb{R}^{d+1}_{+}\to[0,1]$$ such that \begin{enumerate} \item[(i)] ${\rm Lip}\, \psi_Q \lesssim 1/\ell(Q)$, \item[(ii)] $\inf_{x \in {\mathscr W}(Q)} \psi_Q(x) > 0$ uniformly in $Q \in \mathscr{Q}^1_d$, \item[(iii)] ${\rm supp}\, \psi_Q$ is contained in an $\ell(Q)/4$-neighborhood of ${\mathscr W}(Q)$, and \item[(iv)] $\sum_{Q \in \mathscr{Q}^1_d} \psi_Q \equiv 1$ in $\bigcup_{Q \in \mathscr{Q}^1_d} {\mathscr W}(Q)$, \end{enumerate} where ``${\rm supp}\,$'' means ``support''. We say that $Q$ and $Q'$ in $\mathscr{Q}_d$ are {\it neighbors}, and write $Q \sim Q'$, if $\frac12 \leq \ell(Q)/\ell(Q') \leq 2$ and $\overline{Q}\cap\overline{Q'} \neq \emptyset$. We remark that the sum $\sum_{Q \in \mathscr{Q}^1_d}\psi_Q$ is locally finite -- more precisely, it follows from the definition that \begin{equation}\label{eq:bump-support} {\rm supp}\, \psi_Q \cap {\rm supp}\, \psi_{Q'} \neq \emptyset \quad \text{if and only if} \quad Q \sim Q'. \end{equation} \begin{defn}[Trace]\label{trace-defn} Suppose that $f\in L^1_{\rm loc}(\mathbb{R}^{d+1}_+)$ and $k\in\mathbb{N}$. Define the function $\mathscr{T}_{k}f\colon\mathbb{R}^{d}\rightarrow \mathbb{R}$ by \begin{equation*} \mathscr{T}_{k}f:=\sum_{Q\in\mathscr{Q}_{d,k}}\left(\vint_{\mathscr{N}(Q)}f\, dm_{d+1}\right)\chi_{Q}, \end{equation*} where $\mathscr{N}(Q)=\frac{5}{4}\mathscr{W}(Q):=\{y\in \mathbb{R}^{d+1}_{+}: {\operatorname{dist}}(y,\mathscr{W}(Q))<\frac{1}{4}\ell(Q)\}$, and define the trace function $\mathscr T f$ by setting \begin{equation*}\label{trace} \mathscr T f=\lim_{k\rightarrow \infty} \mathscr T_k f, \end{equation*} if the limit exists $m_d$-a.e. in $\mathbb{R}^d$. \end{defn} Before going to the definition of the extension operator, we introduce one more piece of notation.
For any $Q\in \mathscr{Q}_d$, let ${\mathcal S(Q)}$ be the unique cube such that there is $j\in \mathbb{N}$ satisfying ${\mathcal S(Q)}\in \mathscr{Q}_{d, 2^j}$ and $Q\subset {\mathcal S(Q)}$ whenever $Q\in \bigcup_{k=2^j}^{2^{j+1}-1}\mathscr{Q}_{d, k}$. \begin{defn}[Extension] \label{extension-defn} Suppose that $f \in L^1_{\rm loc}(\mathbb{R}^d)$. Then the {\it selected Whitney extension} $\mathscr E f \colon \mathbb{R}^{d+1}_+\to\mathbb{R}$ is defined by \[ \mathscr E f(x)= \sum_{Q\in \mathscr{Q}^1_d}\left(\vint_{ {\mathcal S(Q)}} f\, dm_d\right) \psi_Q(x).\] \end{defn} It is easy to see that the extension operator $\mathscr E\colon L^1_{\rm loc}(\mathbb{R}^d)\rightarrow C^\infty(\mathbb{R}^{d+1}_+)$ is linear. \section{Trace operators and trace spaces}\label{sec-3} In this section, we give the proofs of Example \ref{example}, Theorem \ref{Thm-1.2} and Theorem \ref{thm-1.3}. \subsection*{Proof of Example \ref{example}} Recall that if $(p, \lambda)\in \Pi$ but $(p, \lambda)\notin \Gamma$, then $\lambda\leq p-1$ if $p>1$, and $\lambda<0$ if $p=1$. Let $\varphi$ be a compactly supported smooth function on $\mathbb{R}^d$ with $\varphi(x')=1$ for $x'\in [-1, 1]^d$ and ${\rm supp}\, \varphi\subset [-2, 2]^d$. Then we define the function $u$ as follows. For $(x', t)\in \mathbb{R}^{d+1}_+$, let \[u(x', t)=\varphi(x')\cdot \max\{ v(t), 0\},\] where $$v(t)=\int_{t}^1 \frac{ds}{s\log(e/s)\Big(1+\log^\beta\big(\log(e/s)\big)\Big)}.$$ Here we choose $\beta$ such that $\beta=0$ if $p=1$ and $0<\beta<1<\beta p$ if $p>1$. Obviously, for any $x'\in [-1, 1]^d$, $\mathscr {T} u(x')=\infty$; this shows that $\mathscr T u$ does not exist. Next, we demonstrate that $u\in W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{\alpha}^\lambda)$.
For any $(x', t)\notin [-2, 2]^d\times (0, 1]$, we have $$|u(x', t)|=|\nabla u(x', t)|=0.$$ For any $(x', t)\in [-2, 2]^d\times (0, 1]$, there is a constant $C>0$ such that $$|u(x', t)| \leq C |v(t)| \leq C \int_t^1 \frac{ds}{s\log(e/s)}=C \log\big(\log(e/t)\big)$$ and \begin{align*}|\nabla u(x', t)|&\leq C|v(t)|+ \frac{C}{t\log(e/t)\Big(1+\log^\beta\big(\log(e/t)\big)\Big)}\\ &\leq C \log\big(\log(e/t)\big)+ \frac{C}{t\log(4/t)\log^\beta\big(\log(4/t)\big)}, \end{align*} where in the last step we used that $\log(e/t)\approx \log(4/t)$ and $1+\log^\beta\big(\log(e/t)\big)\gtrsim \log^\beta\big(\log(4/t)\big)$ for $t\in (0, 1]$. For $p=1$, $\lambda<0$ and $\beta=0$, the above facts guarantee that \[\|u\|_{L^1(\mathbb{R}^{d+1}_+,{\mu_\alpha^\lambda})} \lesssim 4^d\int_{0}^{1} \log^{\lambda}(4/t)\log\big(\log(e/t)\big)\, dt<\infty\] and \begin{align*} \|\nabla u\|_{L^1(\mathbb{R}^{d+1}_+,{\mu_\alpha^\lambda})} &\lesssim \|u\|_{L^1(\mathbb{R}^{d+1}_+,{\mu_\alpha^\lambda})} +4^d\int_0^1 \frac{dt}{t\log(4/t)\log^{-\lambda}(4/t)}\\ &\lesssim \|u\|_{L^1(\mathbb{R}^{d+1}_+,{\mu_\alpha^\lambda})} +\int_0^1 \frac{dt}{t\log^{1-\lambda}(4/t)}<\infty. \end{align*} For $p>1$, $\lambda\leq p-1$ and $1/p <\beta<1$, similarly, we obtain that \[\|u\|^p_{L^p(\mathbb{R}^{d+1}_+,{\mu_\alpha^\lambda})} \leq 4^d\int_{0}^{1} t^{p-1}\log^{\lambda}(4/t)\log^p\big(\log(e/t)\big)\, dt<\infty\] and \begin{align*} \|\nabla u\|^p_{L^p(\mathbb{R}^{d+1}_+,{\mu_\alpha^\lambda})} &\lesssim \|u\|^p_{L^p(\mathbb{R}^{d+1}_+,{\mu_\alpha^\lambda})} +4^d\int_0^1 \frac{dt}{t\log^p(4/t)\log^{-\lambda}(4/t) \log^{\beta p}\big(\log(4/t)\big)}\\ &\lesssim \|u\|^p_{L^p(\mathbb{R}^{d+1}_+,{\mu_\alpha^\lambda})} +\int_0^1 \frac{dt}{t\log^{p-\lambda}(4/t) \log^{\beta p}\big(\log(4/t)\big)}<\infty. \end{align*} Hence $u\in W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{\alpha}^\lambda)$. This completes the construction of the example.
\qed \subsection*{Proof of Theorem \ref{Thm-1.2}} {\bf Proof of (i).} By Example \ref{example}, it suffices to show that if $(p, \lambda)\in \Gamma$, then the trace function $\mathscr T u$ belongs to $L^p(\mathbb{R}^d)$ for every $u\in W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)$. Let $f\in W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{\alpha}^\lambda)$ with $\alpha=p-1$ and $(p, \lambda)\in \Gamma$. The first step is to verify the existence of the limit in Definition \ref{trace-defn}. To reach this goal, it suffices to show that the function \[f^*:=\sum_{k\geq 0} |\mathscr {T}_{k+1} f-\mathscr {T}_k f|+|\mathscr {T}_0 f|\] belongs to $L^p(\mathbb{R}^d)$, because $f^*\in L^p(\mathbb{R}^d)$ implies that $f^*(x)<\infty$ for $m_d$-almost every $x\in \mathbb{R}^d$. For any $P\in \mathscr{Q}_{d, 0}$, noticing that $m_{d+1}(\mathscr N(P))\approx 1$ and ${\omega_\alpha^\lambda}\approx 1$ in $\mathscr N(P)$, it follows from the Minkowski inequality that \begin{align} \left(\int_{\mathbb{R}^d}|f^*|^{p}\, dm_d\right)^{1/p}&\lesssim \left(\int_{\mathbb{R}^d} \left(\sum_{k\geq 0} |\mathscr {T}_{k+1} f-\mathscr {T}_k f|\right)^p\, dm_d\right)^{1/p}+\left(\sum_{P\in\mathscr{Q}_{d, 0}}\int_{\mathscr N(P)} |f|^p\, dm_{d+1}\right)^{1/p}\notag\\ & \lesssim \sum_{k\geq 0}\left(\int_{ \mathbb{R}^d} |\mathscr {T}_{k+1} f-\mathscr {T}_k f|^p\, dm_d\right)^{1/p} + \left(\sum_{P\in\mathscr{Q}_{d, 0}}\int_{\mathscr N(P)} |f|^p\, d{\mu_\alpha^\lambda}\right)^{1/p}\label{integral-P}. \end{align} For any $x\in \mathbb{R}^d$, let $Q_k^x$ be the unique cube in $\mathscr{Q}_{d, k}$ containing $x$. By the definition, we know that the intersection $\mathscr N(Q_k^x)\cap\mathscr N(Q_{k+1}^x)$ contains a cube $\hat Q$ with edge length comparable to $2^{-k}$.
Since ${\omega_\alpha^\lambda}(y)\approx 2^{-k\alpha}(2+k)^{\lambda}$ for all $y\in \mathscr N(Q_k^x)$ and ${\mu_\alpha^\lambda}(\mathscr N(Q_k^x))\approx 2^{-k\alpha} (2+k)^{\lambda} m_{d+1}(\mathscr N(Q_k^x))$, it follows from the Poincar\'e inequality \eqref{poincare} that \begin{align*} |\mathscr {T}_{k+1} f(x)-\mathscr {T}_k f(x)|&=\bigg|\vint_{\mathscr N(Q_k^x)} f\, dm_{d+1}-\vint_{\mathscr N(Q_{k+1}^x)} f\, dm_{d+1}\bigg|\\ &\leq \bigg|\vint_{\mathscr N(Q_k^x)} f\, dm_{d+1}-\vint_{\hat Q} f\, dm_{d+1}\bigg|+\bigg|\vint_{\hat Q} f\, dm_{d+1}-\vint_{\mathscr N(Q_{k+1}^x)} f\, dm_{d+1}\bigg|\\ &\lesssim \vint_{\mathscr N(Q_k^x)}|f-f_{\mathscr N(Q_k^x)}|\, dm_{d+1} +\vint_{\mathscr N(Q_{k+1}^x)}|f-f_{\mathscr N(Q_{k+1}^x) }|\, dm_{d+1}\\ &\lesssim 2^{-k}\vint_{\mathscr N(Q_k^x)}|\nabla f|\, dm_{d+1} + 2^{-k}\vint_{\mathscr N(Q_{k+1}^x)}|\nabla f|\, dm_{d+1}\\ &\approx 2^{-k}\vint_{\mathscr N(Q_k^x)}|\nabla f|\, d{\mu_\alpha^\lambda} + 2^{-k}\vint_{\mathscr N(Q_{k+1}^x)}|\nabla f|\, d{\mu_\alpha^\lambda}. \end{align*} Applying the H\"{o}lder inequality, we arrive at the estimate \begin{equation*}\label{estimate-T} |\mathscr {T}_{k+1} f(x)-\mathscr {T}_k f(x)|\lesssim 2^{-k} \left(\vint_{\mathscr N(Q_k^x)}|\nabla f|^p\, d{\mu_\alpha^\lambda}\right)^{1/p}+2^{-k} \left(\vint_{\mathscr N(Q_{k+1}^x)}|\nabla f|^p\, d{\mu_\alpha^\lambda}\right)^{1/p}. 
\end{equation*} Hence \begin{align} \int_{\mathbb{R}^d} |\mathscr {T}_{k+1} f-\mathscr {T}_k f|^p\, dm_d&=\sum_{Q\in \mathscr{Q}_{d, k}} \int_Q|\mathscr {T}_{k+1} f-\mathscr {T}_k f|^p\, dm_d(x)\notag\\ &\lesssim \sum_{Q\in \mathscr{Q}_{d, k}} m_d(Q)\bigg(2^{-kp} \vint_{\mathscr N(Q)}|\nabla f|^p\, d{\mu_\alpha^\lambda} \bigg.\notag\\ &\bigg.\quad\quad\quad\quad\quad\quad\quad\quad\quad+\sum_{\substack{Q'\in \mathscr{Q}_{d, {k+1}}\\ Q'\subset Q} }2^{-kp} \vint_{\mathscr N(Q')}|\nabla f|^p\, d{\mu_\alpha^\lambda} \bigg)\notag\\ &\lesssim 2^{-k(d+p)} \sum_{Q\in \mathscr{Q}_{d, k}\cup\mathscr{Q}_{d, k+1}} \vint_{\mathscr N(Q)} |\nabla f|^p\, d{\mu_\alpha^\lambda}\notag\\ &\approx (2+k)^{-\lambda}\sum_{Q\in \mathscr{Q}_{d, k}\cup\mathscr{Q}_{d, k+1}} \int_{\mathscr N(Q)} |\nabla f|^p\, d{\mu_\alpha^\lambda},\label{estimate-T_k-T} \end{align} since $\alpha=p-1$ implies that ${\mu_\alpha^\lambda}(\mathscr N(Q))\approx 2^{-k(d+p)} (2+k)^{\lambda}$ for all $Q\in \mathscr{Q}_{d, k}\cup\mathscr{Q}_{d, k+1}$. Plugging this into \eqref{integral-P}, we obtain \begin{align*} \|f^*\|_{L^p(\mathbb{R}^d)} \lesssim \sum_{k\geq 0} (2+k)^{-\lambda/p}\left(\sum_{Q\in \mathscr{Q}_{d, k}\cup\mathscr{Q}_{d, k+1}} \int_{\mathscr N(Q)} |\nabla f|^p\, d{\mu_\alpha^\lambda}\right)^{1/p} +\|f\|_{L^p(\mathbb{R}^{d+1}_{+},{\mu_\alpha^\lambda})}.
\end{align*} If $p=1$ and $\lambda\geq 0$, it is obvious that \[\|f^*\|_{L^1(\mathbb{R}^d)} \lesssim\sum_{k\geq 0}\sum_{Q\in \mathscr{Q}_{d, k}\cup\mathscr{Q}_{d, k+1}} \int_{\mathscr N(Q)} |\nabla f|\, d{\mu_\alpha^\lambda} +\|f\|_{L^1(\mathbb{R}^{d+1}_{+},{\mu_\alpha^\lambda})} \lesssim \|f\|_{W^{1,1}(\mathbb{R}^{d+1}_+, {\mu_\alpha^\lambda})}.\] If $p>1$ and $\lambda>p-1$, since the series $\sum_{k\geq 0}(2+k)^{-\lambda/(p-1)}$ converges, it follows from the H\"{o}lder inequality that \begin{align*} \|f^*\|_{L^p(\mathbb{R}^d)}& \lesssim \|f\|_{L^p(\mathbb{R}^{d+1}_{+},{\mu_\alpha^\lambda})}+\left(\sum_{k\geq 0} \sum_{Q\in \mathscr{Q}_{d, k}\cup\mathscr{Q}_{d, k+1}} \int_{\mathscr N(Q)} |\nabla f|^p\, d{\mu_\alpha^\lambda}\right)^{1/p}\lesssim \|f\|_{ W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{\alpha}^\lambda)}. \end{align*} Thus the estimate \begin{equation}\label{tmp-norm} \|f^*\|_{L^p(\mathbb{R}^d)}\lesssim \|f\|_{ W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)} \end{equation} holds for all $(p, \lambda)\in \Gamma$. Hence $f^*(x)<\infty$ for $m_d$-almost every $x\in \mathbb{R}^d$, so the trace function $\mathscr {T} f$ exists. Since $|\mathscr Tf|\leq f^*$ a.e. in $\mathbb{R}^d$, it follows from \eqref{tmp-norm} that \begin{equation}\label{add-7-1} \|\mathscr Tf\|_{L^p(\mathbb{R}^d)}\lesssim \|f\|_{ W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)}<\infty. \end{equation} Hence the trace function $\mathscr T f$ belongs to $L^p(\mathbb{R}^d)$. {\bf Proof of (ii).} For every $(p, \lambda)\in \Gamma$, it follows from the proof of (i) that the trace function $\mathscr T u$ belongs to $L^p(\mathbb{R}^d)$ for every $u\in W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)$ and that the norm estimate \eqref{add-7-1} holds for every $f\in W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{p-1}^\lambda)$.
These guarantee the boundedness of the trace operator $\mathscr T$. Since the linearity of the trace operator $\mathscr {T}$ is obvious from the definition, the proof is complete. \qed \subsection*{Proof of Theorem \ref{thm-1.3}} Let $f\in W^{1,p}(\mathbb{R}^{d+1}_{+},\mu_{p-1}^{\lambda})$ for $(p, \lambda)\in \Gamma$ (i.e., $\alpha=p-1$). It follows from Theorem \ref{Thm-1.2} that the trace function $\mathscr {T} f$ exists and $$\|\mathscr Tf\|_{L^p(\mathbb{R}^d)}\lesssim \|f\|_{ W^{1, p}(\mathbb{R}^{d+1}_+, \mu_{\alpha}^\lambda)}.$$ The remaining task is to estimate the $\dot {\mathcal B}^{\gamma}_{p}(\mathbb{R}^{d})$-energy of $\mathscr{T}f=:{\tilde f}$. Let $P\in \mathscr{Q}_{d, 2^k}$. Then $$m_d(P)\sum_{Q\asymp P} |{\tilde f}_Q-{\tilde f}_P|^p \leq \sum_{\substack{Q\asymp P\\ Q\in \mathscr{Q}_{d, 2^k}}} \int_P |{\tilde f}-{\tilde f}_Q|^p\, dm_d+\sum_{\substack{Q\asymp P\\ Q\in \mathscr{Q}_{d, 2^{k-1}}}} \int_P |{\tilde f}-{\tilde f}_Q|^p\, dm_d=:H^1_P+H^2_P,$$ and so \begin{align*} \|\mathscr Tf\|^{p}_{\dot {\mathcal B}^{\gamma}_{p}(\mathbb{R}^{d})}=\|\tilde f\|^{ p}_{\dot {\mathcal B}^{\gamma}_{p}(\mathbb{R}^{d})}&=\sum_{k=0}^{\infty}(2^{k}+2)^{\gamma}\sum_{P\in\mathscr{Q}_{d,2^{k}}}m_{d}(P)\sum_{Q\asymp P}|{\tilde f}_{Q}-{\tilde f}_{P}|^p\\ &\leq \sum_{k=0}^{\infty}(2^{k}+2)^{\gamma}\sum_{P\in\mathscr{Q}_{d,2^{k}}}(H^1_P+H^2_P)\\ &=:H^1+H^2, \end{align*} where $$H^i=\sum_{k=0}^{\infty}(2^{k}+2)^{\gamma}\sum_{P\in\mathscr{Q}_{d,2^{k}}}H^i_P\ \ \ \text{for }\ i=1, 2.$$ Towards the estimate of $H^1$, notice that \begin{equation*}\label{measure-relation} m_d(P)\approx m_d(Q) \approx 2^{2^kp}(2^k+2)^{-\lambda}\mu_\alpha^\lambda(\mathscr N(P))\approx 2^{2^kp}(2^k+2)^{-\lambda}\mu_\alpha^\lambda(\mathscr N(Q)) \end{equation*} for any $P, Q\in \mathscr{Q}_{d, 2^k}$ with $P\asymp Q$.
Hence \begin{align*} H^1_P&\leq \sum_{\substack{Q\asymp P\\ Q\in \mathscr{Q}_{d, 2^k}}}\left( \int_P|{\tilde f}-f_{\mathscr{N}(P)}|^p\, dm_d+m_d(P)|f_{\mathscr{N}(P)}- f_{\mathscr{N}(Q)}|^p+\int_Q|{\tilde f}-f_{\mathscr{N}(Q)}|^p\, dm_d\right)\\ &\lesssim \sum_{\substack{Q\asymp P\\ Q\in \mathscr{Q}_{d, 2^k}}} \int_{Q} |{\tilde f}(x)-\mathscr T_{2^k} f(x)|^p\, dm_d(x)+\sum_{\substack{Q\asymp P\\ Q\in \mathscr{Q}_{d, 2^k}}}m_d(P)|f_{\mathscr{N}(P)}-f_{\mathscr{N}(Q)}|^p\\ &\approx \sum_{\substack{Q\asymp P\\ Q\in \mathscr{Q}_{d, 2^k}}} \int_{Q} |{\tilde f}(x)-\mathscr T_{2^k} f(x)|^p\, dm_d(x)+ 2^{2^kp}(2^k+2)^{-\lambda}\sum_{\substack{Q\asymp P\\ Q\in \mathscr{Q}_{d, 2^k}}}\mu_\alpha^\lambda(\mathscr{N}(P))|f_{\mathscr{N}(P)}-f_{\mathscr{N}(Q)}|^p. \end{align*} Using the Poincar\'{e} inequality \eqref{poincare} and the fact that $\#\{Q: Q\asymp P\}$ is uniformly bounded, where ``$\#$'' means ``cardinality'', we get \begin{align*} \sum_{P\in\mathscr{Q}_{d, 2^k}} H^1_P&\lesssim \int_{\mathbb{R}^d} |{\tilde f}(x)-\mathscr T_{2^k}f(x)|^{ p}\, dm_d(x)+ (2^k+2)^{-\lambda}\int_{\bigcup_{2^{-2^k-1}\leq\ell(Q)\leq 2^{-2^k+1}}\mathscr N(Q)} |\nabla f|^p\, d\mu_\alpha^\lambda\\ &=: I_k+ (2^k+2)^{-\lambda} I'_k. \end{align*} Since the domains of integration in the $I'_k$ have bounded overlap, we obtain that $$\sum_{k\geq 0} I'_k\lesssim \|f\|^{ p}_{W^{1,p}(\mathbb{R}^{d+1}_+, \mu_\alpha^\lambda)}.$$ Next, we estimate the term $I_k$.
Since it follows from the triangle inequality and the Minkowski inequality that \begin{align*} (I_k)^{1/p}&\leq \left(\int_{{\operatorname{Re}\,}^d}\left(\sum_{n\geq 2^k}|\mathscr{T}_{n+1}f-\mathscr{T}_{n}f| \right)^p\, dm_d\right)^{1/p}\leq \sum_{n\geq 2^{k}}\left(\int_{\mathbb{R}^{d}}|\mathscr{T}_{n+1}f-\mathscr{T}_{n}f|^pdm_{d}\right)^{1/p}, \end{align*} we know from the estimate \eqref{estimate-T_k-T} that \[\int_{{\operatorname{Re}\,}^d} |\mathscr {T}_{n+1} f-\mathscr {T}_n f|^p\, dm_d\lesssim (2+n)^{-\lambda}\sum_{Q\in \mathscr{Q}_{d, n}\cup\mathscr{Q}_{d, n+1}} \int_{\mathscr N(Q)} |\nabla f|^p\, d{\mu_\alpha^\lambda}.\] For the case when $p>1, \alpha=p-1$ and $0<\gamma<\lambda-(p-1)$, it follows from the H\"{o}lder inequality that \begin{align*} I_k&\lesssim\left(\sum_{n\geq 2^k} (n+2)^{-(\lambda-\gamma)/p}\left((n+2)^{-\gamma}\int_{\cup_{2^{-n-2}\leq\ell(Q)\leq 2^{-n+1}}\mathscr N(Q)}|\nabla f|^p\, d\mu_{\alpha}^{\lambda}\right)^{1/p} \right)^p\\ & \lesssim \sum_{n\geq 2^k}(n+2)^{-\gamma}\int_{\cup_{2^{-n-2}\leq\ell(Q)\leq 2^{-n+1}}\mathscr N(Q)}|\nabla f|^p\, d\mu_{\alpha}^{\lambda}, \end{align*} since the fact $\lambda-\gamma>p-1$ implies that the series $\sum_{n=1}^{\infty}(n+2)^{-\frac{\lambda-\gamma}{p-1}}$ is convergent. 
For the case when $p=1$ and $0<\gamma\leq\lambda$, it is obvious that $$I_k\lesssim \sum_{n\geq 2^k}(n+2)^{-\gamma}\int_{\cup_{2^{-n-2}\leq\ell(Q)\leq 2^{-n+1}}\mathscr N(Q)}|\nabla f|\, d\mu_{0}^{\lambda}.$$ Hence the Fubini theorem and the above estimates guarantee that for $p\geq 1$, \[\begin{split} \sum_{k\geq0}(2^{k}+2)^{\gamma}I_{k} \lesssim&\sum_{k\geq0}(2^{k}+2)^{\gamma}\sum_{n\geq2^{k}}(n+2)^{-\gamma}\int_{\cup_{2^{-n-2}\leq\ell(Q)\leq 2^{-n+1}}\mathscr N(Q)}|\nabla f|^p\, d\mu_{\alpha}^{\lambda}\\ =&\sum_{n\geq1}(n+2)^{-\gamma}\int_{\cup_{2^{-n-2}\leq\ell(Q)\leq 2^{-n+1}}\mathscr N(Q)}|\nabla f|^p\, d\mu_{\alpha}^{\lambda}\sum_{0\leq2^{k}\leq n}(2^{k}+2)^{\gamma}\\ \lesssim&\sum_{n\geq0}\int_{\cup_{2^{-n-2}\leq\ell(Q)\leq 2^{-n+1}}\mathscr N(Q)}|\nabla f|^p\, d\mu_{\alpha}^{\lambda}\\ \lesssim&\|f\|^{ p}_{W^{1, p}(\mathbb{R}^{d+1}_{+},\mu_{\alpha}^{\lambda})}, \end{split}\] where the third line uses the estimate $\sum_{0\leq2^{k}\leq n}(2^{k}+2)^{\gamma}\lesssim(n+2)^{\gamma}$. Thus the estimate of $H^1$ is given by \begin{align*} H^1&= \sum_{k=0}^{\infty} (2^k+2)^\gamma\sum_{P\in \mathscr{Q}_{d, 2^k}} H^1_P\lesssim \sum_{k\geq 0} (2^k+2)^\gamma I_k+ \sum_{k\geq 0} I'_k\lesssim \|f\|^{ p}_{W^{1, p}(\mathbb{R}^{d+1}_{+},\mu_{\alpha}^{\lambda})}.
\end{align*} For the estimate of $H^2$, again, it follows from the Fubini theorem that \begin{align*} H^2&=\sum_{k\geq 0} (2^k+2)^\gamma\sum_{P\in \mathscr{Q}_{d, 2^k}} \sum_{\substack{Q\asymp P\\ Q\in \mathscr{Q}_{d, 2^{k-1}}}} \int_P |{\tilde f}-{\tilde f}_Q|^p\, dm_d\\ &= \sum_{k\geq -1} (2^{k+1}+2)^\gamma \sum_{Q\in \mathscr{Q}_{d, 2^k} } \sum_{\substack{P\asymp Q\\ P\in \mathscr{Q}_{d, 2^{k+1}}}} \int_P |{\tilde f}-{\tilde f}_Q|^p\, dm_d\\ &\leq \sum_{k\geq -1} (2^{k+1}+2)^\gamma \sum_{Q\in \mathscr{Q}_{d, 2^k} } \sum_{\substack{Q'\asymp Q\\ Q'\in \mathscr{Q}_{d, 2^{k}}}} \int_{Q'} |{\tilde f}-{\tilde f}_Q|^p\, dm_d, \end{align*} where in the last inequality, the fact that for any $Q\in \mathscr{Q}_{d, 2^k}$, \[\bigcup_{\substack{P\asymp Q\\ P\in \mathscr{Q}_{d, 2^{k+1}}}} P\subset \bigcup_{\substack{Q'\asymp Q\\ Q'\in \mathscr{Q}_{d, 2^{k}}}} Q' \] is applied. By the symmetry of the relation $Q'\asymp Q$ when $Q', Q\in \mathscr{Q}_{d, 2^k}$, we obtain that \begin{align*} H^2&\leq 3^\gamma\sum_{Q\in \mathscr{Q}_{d, 2^{-1}} } \sum_{\substack{Q'\asymp Q\\ Q'\in \mathscr{Q}_{d, 2^{-1}}}} \int_{Q'} |{\tilde f}-{\tilde f}_Q|^p\, dm_d +H^1\\ &\lesssim \sum_{Q\in \mathscr{Q}_{d, 1}} \int_Q |{\tilde f}|^p\, dm_d+H^1\leq \|\mathscr T f\|^{ p}_{L^p({\operatorname{Re}\,}^d)}+H^1\lesssim \|f\|^{ p}_{W^{1, p}(\mathbb{R}^{d+1}_{+},\mu_{\alpha}^{\lambda})}. \end{align*} By combining the estimates of $H^1$ and $H^2$, we obtain that $$\|\mathscr Tf\|_{\dot{\mathcal B}^{ \gamma}_{p}({\operatorname{Re}\,}^d)}\lesssim\|f\|_{W^{1, p}(\mathbb{R}^{d+1}_{+},\mu_{\alpha}^{\lambda})}=\|f\|_{ W^{1, p}({\operatorname{Re}\,}^{d+1}_+, \mu_{p-1}^\lambda)},$$ which proves the theorem.\qed \section{Extension operators}\label{sec-4} The purpose of this section is to prove Theorem \ref{thm-1.4}. Let $f\in \mathcal B^{\lambda}_{p}(\mathbb{R}^{d})$ and $\alpha=p-1$.
For convenience, we rewrite $\mathscr Ef$ (see Definition \ref{extension-defn}) as follows: \begin{equation*}\label{extension-eq} \mathscr{E}f(x)=\sum_{k\geq0}\sum_{j=2^{k}}^{2^{k+1}-1}\sum_{Q\in \mathscr{Q}_{d, j}}\left(\vint_{ {\mathcal S(Q)}} f\, dm_{d}\right)\psi_{Q}(x). \end{equation*} Here we recall that $ {\mathcal S(Q)} \in \mathscr{Q}_{d, 2^k}$ is the unique cube with $Q\subset {\mathcal S(Q)}$ for any $Q\in \bigcup_{j=2^k}^{2^{k+1}-1}\mathscr{Q}_{d, j}$. The first task is to estimate the $L^{p}(\mathbb{R}^{d+1}_{+},\mu_{\alpha}^{\lambda})$-norm of $\mathscr{E}f$. It follows directly that \begin{align*} \|\mathscr{E}f\|_{L^{p}(\mathbb{R}^{d+1}_{+},\mu_{\alpha}^{\lambda})}^{p}&=\int_{\mathbb{R}^{d+1}_{+}}|\mathscr{E}f|^{p}d\mu_{\alpha}^{\lambda}\leq\sum_{n\geq1}\sum_{P\in\mathscr{Q}_{d,n}}\int_{{\rm supp}\,\psi_{P}}|\mathscr{E}f|^{p}d\mu_{\alpha}^{\lambda}\\ &=\sum_{P\in \mathscr{Q}_{d, 1}} \int_{{\rm supp}\,\psi_{P}}|\mathscr{E}f|^{p}d\mu_{\alpha}^{\lambda}+\sum_{n\geq2}\sum_{P\in\mathscr{Q}_{d,n}}\int_{{\rm supp}\,\psi_{P}}|\mathscr{E}f|^{p}d\mu_{\alpha}^{\lambda}\\ &=: I_1+I_2. \end{align*} To estimate $I_1$, notice that for any $P\in\mathscr{Q}_{d,1}$ and each $x\in {\rm supp}\,\psi_{P}$, we have $\psi_{Q}(x)\neq0$ only for $Q\in\mathscr{Q}_{d,j}$ with $j=1,2$ and $Q\sim P$.
Since ${\mu_\alpha^\lambda}({\rm supp}\, \psi_P)\approx 1$ and $|\psi_Q|\leq 1$ for any $P\in \mathscr{Q}_{d, 1}$ and $Q\in \mathscr{Q}^1_d$, we obtain \begin{align*} I_1=\sum_{P\in\mathscr{Q}_{d,1}}\int_{{\rm supp}\,\psi_{P}}|\mathscr{E}f|^{p}d\mu_{\alpha}^{\lambda} =&\sum_{P\in\mathscr{Q}_{d,1}}\int_{{\rm supp}\,\psi_{P}}\Big|\sum_{\substack{Q\in\mathscr{Q}_{d, 1}\cup\mathscr{Q}_{d, 2}\\ Q\sim P}}f_{ {\mathcal S(Q)}}\,\psi_{Q}(x)\Big|^{p}d\mu_{\alpha}^{\lambda}\\ \lesssim&\sum_{Q\in \mathscr{Q}_{d, 1}\cup\mathscr{Q}_{d, 2}}\int_{Q}|f|^{p}dm_{d}\lesssim\|f\|_{L^{p}(\mathbb{R}^{d})}^{p}, \end{align*} where in the second-to-last inequality we used the fact that $ {\mathcal S(Q)}=Q$ for any $Q\in \mathscr{Q}_{d, 1}\cup\mathscr{Q}_{d, 2}$. Towards the estimate of $I_2$, since \[\bigcup_{n\geq 2}\mathscr{Q}_{d, n}=\bigcup_{k\geq1}\bigcup_{2^k\leq j\leq 2^{k+1}-1} \mathscr{Q}_{d, j},\] we have \begin{align*} I_2&=\sum_{k\geq 1}\sum_{j=2^k}^{2^{k+1}-1}\sum_{P\in \mathscr{Q}_{d, j}} \int_{{\rm supp}\, \psi_P} |\mathscr E f|^p\, d{\mu_\alpha^\lambda}\\ &= \sum_{k\geq 1}\sum_{P\in \mathscr{Q}_{d, 2^k}}\int_{{\rm supp}\, \psi_P} |\mathscr E f|^p\, d{\mu_\alpha^\lambda} +\sum_{k\geq 1}\sum_{P\in \mathscr{Q}_{d, 2^{k+1}-1}}\int_{{\rm supp}\, \psi_P} |\mathscr E f|^p\, d{\mu_\alpha^\lambda}\\ &\quad\quad\quad \quad\quad+ \sum_{k\geq 2}\sum_{j=2^k+1}^{2^{k+1}-2}\sum_{P\in \mathscr{Q}_{d, j}} \int_{{\rm supp}\, \psi_P} |\mathscr E f|^p\, d{\mu_\alpha^\lambda}\\ &=:I_2^A+I_2^B+I_2^C.
\end{align*} For the estimate of $I_2^A$, recall the relation \eqref{eq:bump-support} and notice that for $P\in \mathscr{Q}_{d, 2^k}$, if $Q\sim P$, then $$Q\in\mathscr{Q}_{d, 2^k-1}\cup\mathscr{Q}_{d, 2^k}\cup\mathscr{Q}_{d, 2^k+1}.$$ By the definition of $ {\mathcal S(Q)}$, we have \begin{equation*} \left\{\begin{array}{cl} {\mathcal S(Q)}\in \mathscr{Q}_{d, 2^k}\,\,&\mbox{if}\;\; Q\in \mathscr{Q}_{d, 2^k}\cup\mathscr{Q}_{d, 2^k+1},\\ {\mathcal S(Q)}\in \mathscr{Q}_{d, 2^{k-1}}\,\,&\mbox{if}\;\; Q\in \mathscr{Q}_{d, 2^k-1}. \end{array}\right. \end{equation*} Then it follows from the uniform boundedness of $\#\{Q: Q\sim P\}$ that \begin{align*} I_2^A &\lesssim\sum_{k\geq 1}\sum_{P\in \mathscr{Q}_{d, 2^k}}{\mu_\alpha^\lambda}({\mathscr W}(P))\sum_{Q\sim P}\vint_{ {\mathcal S(Q)}} |f|^{p}\, dm_{d}\\ &=\sum_{k\geq1}\sum_{P\in\mathscr{Q}_{d,2^{k}}}\mu_{\alpha}^{\lambda}(\mathscr{W}(P))\sum_{\substack{Q\sim P\\ Q\in \mathscr{Q}_{d, 2^k}\cup\mathscr{Q}_{d, 2^k+1}}}\vint_{ {\mathcal S(Q)}} |f|^{p}\, dm_{d}\\ &\quad\quad\quad\quad+\sum_{k\geq1}\sum_{P\in\mathscr{Q}_{d,2^{k}}}\mu_{\alpha}^{\lambda}(\mathscr{W}(P))\sum_{\substack{Q\sim P\\ Q\in \mathscr{Q}_{d, 2^k-1}}}\vint_{ {\mathcal S(Q)}} |f|^{p}\, dm_{d}\\ &\lesssim\sum_{k\geq1}2^{-2^{k}p}(2^{k}+2)^{\lambda}\sum_{Q^{\prime}\in\mathscr{Q}_{d,2^{k}}\cup\mathscr{Q}_{d,2^{k-1}}}\int_{Q^{\prime}}|f|^{p}dm_{d}\\ &\lesssim\sum_{k\geq1}2^{-2^{k}p}(2^{k}+2)^{\lambda}\int_{\mathbb{R}^{d}}|f|^{p}dm_{d} \lesssim\|f\|_{L^{p}(\mathbb{R}^{d})}^{p}, \end{align*} since ${\mu_\alpha^\lambda}({\mathscr W}(P))\approx 2^{-2^k(d+p)} (2^{k}+2)^{\lambda}$ for $P\in \mathscr{Q}_{d, 2^k}$ and $\sum_{k\geq1}2^{-2^{k}p}(2^{k}+2)^{\lambda}$ is convergent.
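As a quick numerical sanity check (ours, not part of the proof), the convergence of $\sum_{k\geq1}2^{-2^{k}p}(2^{k}+2)^{\lambda}$ can be made explicit: the double-exponential factor $2^{-2^{k}p}$ dominates the polynomial factor $(2^{k}+2)^{\lambda}$, so the partial sums stabilize after a handful of terms. A minimal Python sketch (the function name is ours):

```python
def partial_sum(p, lam, K):
    """Partial sum of sum_{k>=1} 2^(-2^k * p) * (2^k + 2)^lam up to k = K."""
    return sum(2.0 ** (-(2 ** k) * p) * (2 ** k + 2) ** lam
               for k in range(1, K + 1))

# even for a large polynomial weight lam, the tail beyond k = 10 is negligible
s10 = partial_sum(p=1.0, lam=5.0, K=10)
s20 = partial_sum(p=1.0, lam=5.0, K=20)
tail = s20 - s10
```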
Reasoning similar to the estimate of $I_2^A$ ensures that \[I_2^B\lesssim\sum_{k\geq 1} 2^{-2^{k+1}p}(2^{k+1}+2)^\lambda\sum_{Q'\in \mathscr{Q}_{d, 2^k}\cup\mathscr{Q}_{d, 2^{k+1}}} \int_{Q'}|f|^p\, dm_d \lesssim \|f\|_{L^p({\operatorname{Re}\,}^d)}^p.\] The estimate of $I_2^C$ is obtained by using the Fubini theorem as follows: \begin{align*} I_2^C&\lesssim \sum_{k\geq 2}\sum_{j=2^k+1}^{2^{k+1}-2}\sum_{P\in \mathscr{Q}_{d, j}}{\mu_\alpha^\lambda}({\mathscr W}(P))\sum_{Q\sim P}\vint_{ {\mathcal S(Q)}} |f|^{p}\, dm_{d}\\ & = \sum_{k\geq 2} \sum_{Q'\in \mathscr{Q}_{d, 2^k}} 2^{ 2^kd}\int_{Q'} |f|^p\, dm_d\, \Bigg(\sum_{j=2^k+1}^{2^{k+1}-2}\sum_{\substack{P\in \mathscr{Q}_{d, j}, P\sim Q\\ {\mathcal S(Q)}=Q'}} {\mu_\alpha^\lambda}({\mathscr W}(P)) \Bigg)\\ &\approx \sum_{k\geq 2} \sum_{Q'\in \mathscr{Q}_{d, 2^k}} 2^{ 2^kd}\int_{Q'} |f|^p\, dm_d\, \Bigg(\sum_{j=2^k+1}^{2^{k+1}-2} 2^{-j(d+p)}(2+j)^\lambda\left(\frac{2^{-2^k}}{2^{-j}}\right)^d\Bigg)\\ &\lesssim\sum_{k\geq2}2^{-2^{k}p}(2^k+2)^\lambda\sum_{Q^{\prime}\in\mathscr{Q}_{d,2^{k}}}\int_{Q^{\prime}}|f|^{p}dm_{d}\lesssim\|f\|_{L^{p}(\mathbb{R}^{d})}^{p}. \end{align*} By combining the estimates of $I_1$, $I_2^A$, $I_2^B$ and $I_2^C$, we have \begin{equation}\label{extension-L-norm} \|\mathscr{E}f\|_{L^{p}(\mathbb{R}^{d+1}_{+},\mu_{\alpha}^{\lambda})}\lesssim\|f\|_{L^{p}(\mathbb{R}^{d})}.\end{equation} The next task is to estimate the $L^{p}(\mathbb{R}^{d+1}_{+},{\mu_\alpha^\lambda})$-norm of $\nabla(\mathscr{E}f)$. To reach this goal, we first divide $\mathbb{R}^{d+1}_{+}$ into two parts: $$\mathbb{R}^{d+1}_{+}=X_{1}\cup X_{2},$$ where $X_{1}=\bigcup_{k\geq1}\bigcup_{P\in\mathscr{Q}_{d,k}}\mathscr{W}(P)$ and $X_{2}=\mathbb{R}^{d+1}_{+}\backslash X_{1}.$ If $x\in X_{2}$, we have $\psi_{Q}(x)\neq0$ only for $Q\in\mathscr{Q}_{d,1}$.
So it follows from the Lipschitz continuity of the functions $\psi_Q$ that $$ |\nabla(\mathscr{E}f)(x)|\leq|{\rm Lip}\,(\mathscr{E}f)(x)|\lesssim\sum_{Q\in\mathscr{Q}_{d,1}}|f_{Q}|\chi_{{\rm supp}\,\psi_{Q}}(x). $$ Since ${\mu_\alpha^\lambda}({\rm supp}\,\psi_{Q})\approx1$ for each $Q\in\mathscr{Q}_{d,1}$, the above estimate yields that \begin{equation}\label{eq-add-} \int_{X_{2}}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda}\lesssim\sum_{Q\in\mathscr{Q}_{d,1}}|f_{Q}|^{p}\mu_{\alpha}^{\lambda}({\rm supp}\,\psi_{Q})\lesssim\|f\|_{L^{p}(\mathbb{R}^{d})}^{p}. \end{equation} If on the other hand $x\in X_{1}$, then there exist $k\geq0$ and a unique cube $P\in\mathscr{Q}_{d,j}$ such that $2^{k}\leq j<2^{k+1}$ and $x\in\mathscr{W}(P)$. So we have \begin{align*} \int_{X_{1}}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda}&=\sum_{k\geq0}\sum_{j=2^{k}}^{2^{k+1}-1}\sum_{P\in\mathscr{Q}_{d,j}}\int_{\mathscr{W}(P)}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda}\\ & =\sum_{k\geq0} \sum_{P\in\mathscr{Q}_{d,2^k}}\int_{\mathscr{W}(P)}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda} +\sum_{k\geq1} \sum_{P\in\mathscr{Q}_{d,2^{k+1}-1}}\int_{\mathscr{W}(P)}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda} \\ &\quad\quad\quad\quad+\sum_{k\geq2}\sum_{j=2^k+1}^{2^{k+1}-2} \sum_{P\in\mathscr{Q}_{d,j}}\int_{\mathscr{W}(P)}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda}\\ & =: H^A+H^B+H^C. \end{align*} For the estimate of $H^A$, let $P\in\mathscr{Q}_{d,2^k}$ with $k\geq 0$. Then for each $x\in\mathscr{W}(P)$, \begin{equation}\label{ext} \mathscr{E}f(x)=\sum_{Q\sim P}\left(\vint_{ {\mathcal S(Q)}} f\, dm_{d}\right)\psi_{Q}(x). \end{equation} Again, it follows from the Lipschitz continuity of the functions $\psi_Q$ that $$ |\nabla (\mathscr Ef)(x)|\leq |{\rm Lip}\,(\mathscr{E}f(x)-f_{P})|\lesssim\sum_{{Q\sim P}} \frac{1}{2^{-2^k}}|f_{ {\mathcal S(Q)}}-f_{P}|.
$$ Since for any $P\in \mathscr{Q}_{d, 2^k}$ and $Q\sim P$, the cube $ {\mathcal S(Q)}$ is a selected neighbor of $P$, i.e., $ {\mathcal S(Q)}\asymp P$, we obtain \begin{align*} H^A&\lesssim \sum_{k\geq 0} \sum_{P\in\mathscr{Q}_{d, 2^k}} {\mu_\alpha^\lambda}({\mathscr W}(P)) 2^{2^k p}\sum_{Q\sim P}|f_{ {\mathcal S(Q)}}-f_P|^p\\ &\approx \sum_{k\geq 0} \sum_{P\in\mathscr{Q}_{d, 2^k}} (2^k+2)^\lambda m_d(P)\sum_{Q' \asymp P}|f_{Q'}-f_P|^p \\ &\leq \|f\|^{ p}_{\mathcal B^{\lambda}_{p}(\mathbb{R}^{d})}. \end{align*} For the estimate of $H^B$, let $P\in \mathscr{Q}_{d, 2^{k+1}-1}$. Then ${\mathcal S (P)} \in \mathscr{Q}_{d, 2^k}$ is the unique cube with $P\subset {\mathcal S (P)}$. It follows from \eqref{ext} and the Lipschitz continuity of the functions $\psi_Q$ that \[|\nabla(\mathscr Ef) (x)| \leq |{\rm Lip}\,(\mathscr{E}f(x)-f_{\mathcal S(P)})|\lesssim\sum_{{Q\sim P}} \frac{1}{2^{-2^{k+1}}}|f_{ {\mathcal S(Q)}}-f_{{\mathcal S (P)}}|.\] Since for any $P\in \mathscr{Q}_{d, 2^{k+1}-1}$ and $Q\sim P$, the cube ${\mathcal S (P)}$ is a selected neighbor of $ {\mathcal S(Q)}$, i.e., ${\mathcal S (P)}\asymp {\mathcal S(Q)}$, it follows from the Fubini theorem that \begin{align*} H^B&\lesssim \sum_{k\geq 1} \sum_{P\in \mathscr{Q}_{d, 2^{k+1}-1}}{\mu_\alpha^\lambda}({\mathscr W}(P)) 2^{2^{k+1}p} \sum_{Q\sim P} |f_{ {\mathcal S(Q)}}-f_{{\mathcal S (P)}}|^p\\ &\lesssim \sum_{k\geq 1} \sum_{j=2^{k+1}-2}^{2^{k+1}}\sum_{Q\in \mathscr{Q}_{d, j}} m_d(Q) (2^{k+1}+2)^\lambda\sum_{P'\asymp {\mathcal S(Q)}} |f_{ {\mathcal S(Q)}}-f_{P'}|^p\\ &=\sum_{k\geq 1}(2^{k+1}+2)^\lambda \sum_{Q'\in \mathscr{Q}_{d, 2^{k}}\cup\mathscr{Q}_{d, 2^{k+1}}} \sum_{P'\asymp Q'} |f_{P'}-f_{Q'}|^p\Bigg(\sum_{j=2^{k+1}-2}^{2^{k+1}}\sum_{\substack{Q\in \mathscr{Q}_{d, j}\\ {\mathcal S(Q)}=Q'}} m_d(Q)\Bigg)\\ &\lesssim \sum_{k\geq 1}(2^{k+1}+2)^\lambda \sum_{Q'\in \mathscr{Q}_{d, 2^{k}}\cup\mathscr{Q}_{d, 2^{k+1}}} m_d(Q')\sum_{P'\asymp Q'} |f_{P'}-f_{Q'}|^p\lesssim \|f\|^{ p}_{\mathcal B^{\lambda}_{p}(\mathbb{R}^{d})}.
\end{align*} It remains to estimate $H^C$. For any $2^k+1\leq j\leq 2^{k+1}-2$ with $k\geq 2$, let $$\mathscr{Q}_{d,j}=Y_{1,j}\cup Y_{2,j},$$ where $$ Y_{1,j}:=\{P:P\in\mathscr{Q}_{d,j}, \overline{P} \bigcap\overline{\mathbb{R}^{d}\backslash {\mathcal S(P)}}=\emptyset\} $$ and $$ Y_{2,j}:=\{P:P\in\mathscr{Q}_{d,j}, \overline{P} \bigcap\overline{\mathbb{R}^{d}\backslash {\mathcal S(P)}}\neq\emptyset\}. $$ Let $x\in\mathscr{W}(P)$ for $P\in\mathscr{Q}_{d,j}$. If $P\in Y_{1,j}$, then the definition of $\mathscr{E}f$ implies that $\mathscr{E}f(x)\equiv f_{ {\mathcal S(P)}}$, and hence, $$\int_{\mathscr{W}(P)}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda}=0.$$ For the case when $P\in Y_{2,j}$, notice that for $Q\sim P$, the cube $ {\mathcal S(Q)}$ is a selected neighbor of $ {\mathcal S(P)}$, i.e., $ {\mathcal S(Q)}\asymp {\mathcal S(P)}$. It follows from \eqref{ext} that \[|\nabla(\mathscr Ef) (x)| \leq |{\rm Lip}\,(\mathscr{E}f(x)-f_{\mathcal S(P)})|\lesssim\sum_{{Q\sim P}} \frac{1}{2^{-j}}|f_{ {\mathcal S(Q)}}-f_{{\mathcal S (P)}}|\leq \frac{1}{2^{-j}}\sum_{Q\asymp {\mathcal S(P)}} |f_Q-f_{ {\mathcal S(P)}}| .\] Hence, for any $P\in Y_{2, j}$, we have the estimate \begin{align*} \int_{\mathscr{W}(P)}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda} \lesssim&\mu_{\alpha}^{\lambda}(\mathscr{W}(P))\bigg(\frac{1}{2^{-j}}\sum_{Q\asymp {\mathcal S(P)}} |f_Q-f_{ {\mathcal S(P)}}|\bigg)^{p}\\ \lesssim&2^{-jd}(j+2)^{\lambda}\sum_{Q\asymp {\mathcal S(P)}} |f_Q-f_{ {\mathcal S(P)}}|^{p}.
\end{align*} It follows from the Fubini theorem that \begin{align*} H^C&=\sum_{k\geq 2} \sum_{j=2^{k}+1}^{2^{k+1}-2}\sum_{P\in Y_{1,j}}\int_{\mathscr{W}(P)}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda}+\sum_{k\geq 2} \sum_{j=2^{k}+1}^{2^{k+1}-2}\sum_{P\in Y_{2,j}}\int_{\mathscr{W}(P)}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda}\\ &\lesssim\sum_{k\geq 2} \sum_{j=2^{k}+1}^{2^{k+1}-2}\sum_{P\in Y_{2,j}} 2^{-jd}(j+2)^{\lambda}\sum_{Q\asymp {\mathcal S(P)}} |f_Q-f_{ {\mathcal S(P)}}|^{p}\\ &\approx\sum_{k\geq 2} (2^k+2)^\lambda \sum_{P'\in \mathscr{Q}_{d, 2^k}} m_d(P') \sum_{Q\asymp P'}|f_Q-f_{P'}|^p\Bigg(\sum_{j=2^{k}+1}^{2^{k+1}-2}\sum_{\substack{P\in Y_{2,j}\\ P'= {\mathcal S(P)}}} 2^{-jd}/m_d(P')\Bigg). \end{align*} Note that for each $P'\in\mathscr{Q}_{d,2^{k}}$, there are $2^{(j-2^k)d}-(2^{j-2^k}-2)^d$ cubes $P\in Y_{2,j}$ such that $ {\mathcal S(P)}=P'$. Hence, substituting $i=j-2^k$, \begin{align*} \sum_{j=2^{k}+1}^{2^{k+1}-2}\sum_{\substack{P\in Y_{2,j}\\ P'= {\mathcal S(P)}}} 2^{-jd}/m_d(P')&=\sum_{j=2^{k}+1}^{2^{k+1}-2} 2^{(2^k-j)d}\left(2^{(j-2^k)d}-(2^{j-2^k}-2)^d\right)\\ &\leq \sum_{i=1}^{\infty}\left(1-(1-2^{1-i})^d\right)\\ &\leq \sum_{i=1}^{\infty} d\cdot 2^{1-i}<\infty. \end{align*} This yields that \[H^C\lesssim \sum_{k\geq 2} (2^k+2)^\lambda \sum_{P'\in \mathscr{Q}_{d, 2^k}} m_d(P') \sum_{Q\asymp P'}|f_Q-f_{P'}|^p\leq \|f\|^{ p}_{\mathcal B^{\lambda}_{p}(\mathbb{R}^{d})}.\] By combining the estimates of $H^A$, $H^B$ and $H^C$, we get that \[ \int_{X_{1}}|\nabla(\mathscr{E}f)|^{p}d\mu_{\alpha}^{\lambda} =H^A+H^B+H^C \lesssim\|f\|_{\mathcal B^{\lambda}_{p}(\mathbb{R}^{d})}^{p}.
\] By recalling the estimates \eqref{extension-L-norm} and \eqref{eq-add-}, we finally obtain that $$\|\mathscr{E}f\|_{W^{1, p}(\mathbb{R}^{d+1}_{+},\mu_{p-1}^{\lambda})}=\|\mathscr{E}f\|_{W^{1, p}(\mathbb{R}^{d+1}_{+},\mu_{\alpha}^{\lambda})}\lesssim\|f\|_{\mathcal B^{\lambda}_{p}(\mathbb{R}^{d})}.$$ Since $m_d$-almost every point in ${\operatorname{Re}\,}^d$ is a Lebesgue point of a function $f\in\mathcal B^{\lambda}_{p}(\mathbb{R}^{d})$, it is evident from the definition of the trace operator $\mathscr T$ that $\mathscr T(\mathscr Ef)=f$ at $m_d$-almost every point of ${\operatorname{Re}\,}^d$, and hence the theorem is proved. \qed \section*{Acknowledgments} The first author (Manzi Huang) was partly supported by NNSF of China under the number 11822105. The second author (Xiantao Wang) was partly supported by NNSFs of China under the numbers 12071121 and 11720101003 and the project under the number 2018KZDXM034. The third author (Zhuang Wang) was supported by NNSF of China under the number 12101226.
\section{Introduction} In the celebrated Kondo problem, a seemingly innocuous magnetic impurity coupled to a band of itinerant electrons gives rise to an infrared logarithmic divergence in not only resistivity but also almost all thermodynamic and kinetic properties \cite{kondo,hewson}. To interpret this logarithmic divergence, Anderson\cite{anderson} proposed the idea of the so-called poor man's scaling: the effects of high-energy excitations can be absorbed into renormalized coupling constants at low energies. It was later realized that similar physics is found in many more complicated impurity models with internal degrees of freedom. Among these is the Coqblin--Schrieffer (CS) model motivated by the orbital degeneracy of transition metal ions with unfilled d or f shells \cite{coqblin,cox,hewson}. The CS model has recently attracted renewed interest in various contexts\cite{kikoin,desgranges,figueira,avishai}. The study of spin anisotropy\cite{cox,costi,irkhin2,thomas} has yielded rich results as an offshoot of the original isotropic Kondo problem. Following works on the spin-anisotropic Kondo model\cite{anderson,shiba,yosida,kogan}, one of the authors introduced anisotropic CS models and derived the poor man's scaling equations for these models\cite{kogan2,kogan3}. In this work we return to the consideration of what we previously called the $XYZ$ CS model; the scaling equations are now evaluated to the third order, an error in the previously obtained second-order scaling equations\cite{kogan2} is corrected, and we explore the scaling flow diagrams in detail. We also take into account a possible power-law energy dependence of the density of states of itinerant electrons at the Fermi energy (i.e. a pseudogap density of states)\cite{fradkin,cassanello,gbi,vojtabulla,fritzvojta,mitchellfritz,kogan,shinicaaffleck}, which can arise in semimetals, nodal superconductors, and one-dimensional interacting systems. The rest of the paper is organized as follows.
In Section~\ref{sec:scal} we present the third-order scaling equation for the coupling constants and review the notion of algebraic renormalizability for a rather generic quantum impurity model embedded into an itinerant electron gas. In Section~\ref{sec:kondo} we consider the solutions to the poor man's scaling equations for the $XYZ$ Kondo model, which is the $N=2$ special case of the $XYZ$ CS model, and plot the corresponding three-dimensional weak-coupling flow diagrams. We first review the case of a constant density of states for the itinerant electrons, expanding on the well-known limit of the $XXZ$ Kondo model. We then turn to a pseudogap density of states, and generalize the analysis of the $XXZ$ Kondo model in Refs.~\onlinecite{kogan,kogan2} to the fully anisotropic $XYZ$ case. In Section~\ref{sec:ani} we present the poor man's scaling equations for the $XYZ$ CS model in the more general case $N>2$, again for both constant and pseudogap densities of states, and analyze the three-dimensional flow diagrams. Section~\ref{sec:conclusion} concludes the paper. Appendix~\ref{sec:appv3} describes the derivation of the third-order scaling equation for the generic quantum impurity model. In Appendix~\ref{sec:renorm} we show explicitly the algebraic renormalizability of the $XYZ$ CS model at the second order. Some additional mathematical details are relegated to Appendix~\ref{sec:pfaff}. 
\section{Scaling equation and algebraic renormalizability\label{sec:scal}} The quantum impurity that we consider is coupled to conduction electrons and described by the Hamiltonian\cite{cox,kogan2,kogan3} \begin{eqnarray} \label{hamilto} H=\sum_{{\bf k}\alpha}\epsilon_{\bf k}c_{{\bf k}\alpha}^{\dagger}c_{{\bf k}\alpha} +\sum_{\substack{{\bf k},{\bf k}'\\\alpha\beta,ab}}V_{\beta\alpha,ba}X_{ba}c_{{\bf k}'\beta}^{\dagger}c_{{\bf k}\alpha}, \end{eqnarray} where $c^{\dagger}_{{\bf k}\alpha}$ creates a conduction electron with wave vector ${\bf k}$, channel $\alpha$, and energy $\epsilon_{\bf k}$. The Hubbard $X$-operator is defined as $X_{ba}=|b\rangle \langle a|$, where $|a\rangle,|b\rangle$ are the impurity states. While studying the physics in the vicinity of the Fermi energy, we must account for the virtual transitions from and to electron states at higher energies. In the poor man's scaling formalism \cite{anderson}, one reduces the semi-bandwidth of the conduction electrons from $D$ to $D-|dD|$ ($dD<0$ is infinitesimal), discarding the electronic states in the energy intervals $(D-|dD|,D)$ and $(-D,-D+|dD|)$; however, virtual transitions through these states are retained in the form of a modified coupling constant $V$, such that the impurity scattering matrix elements are the same at low energies. The coupling $V$ is therefore renormalized as the energy scale $D$ is reduced. 
To the order $O(V^3)$, the diagrams in Fig.~\ref{fig:feynman} produce the following scaling equation for the generic Hamiltonian Eq.~(\ref{hamilto}): \begin{eqnarray} \label{v3sceq} &&\frac{dV_{\beta \alpha ,ba}}{d\ln \Lambda } =\rho \sum_{\gamma c}\left( V_{\beta \gamma ,bc}V_{\gamma \alpha ,ca}-V_{\gamma \alpha ,bc}V_{\beta \gamma ,ca}\right) \notag \\ && -\rho ^{2}\sum_{\delta \gamma }\sum_{cd}V_{\delta \gamma ,bc}V_{\beta \alpha ,cd}V_{\gamma \delta ,da} \notag \\ &&+\frac{1}{2}\rho ^{2}\sum_{\gamma \delta cd}\left( V_{\delta \gamma ,bc}V_{\gamma \delta ,cd}V_{\beta \alpha ,da}+V_{\beta \alpha ,bc}V_{\delta \gamma ,cd}V_{\gamma \delta ,da}\right) \text{,} \label{3sc} \end{eqnarray} where $\Lambda=D/D_0$ and $D_0$ is the initial semi-bandwidth. The second-order terms in Eq.~(\ref{v3sceq}) are already given in Refs.~\onlinecite{kogan2,kogan3}; Appendix~\ref{sec:appv3} explains in detail how the third-order terms are obtained. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{feynman} \end{center} \caption{Diagrams contributing to the scaling equation Eq.~(\ref{v3sceq}) at (a) the second order and (b) the third order. Solid lines and double dashed lines represent electron and impurity propagators, respectively.}\label{fig:feynman} \end{figure} A particularly interesting case is when $V_{\beta\alpha,ba}$ can be written as a sum of direct products of Hermitian matrices $\{G^p\}$ and $\{\Gamma^p\}$, which act in the impurity and channel Hilbert spaces, respectively: \begin{eqnarray} \label{producti} V=2\sum_iJ_i\sum_{p\in \{P_i\}}G^p\otimes \Gamma^{\tilde{p}}. \end{eqnarray} Here $\{G^p\}$ ($p\in \{P_i\}$) is a given set of generators corresponding to the coupling constants $J_i$; in most of the cases considered in Ref.~\onlinecite{kogan3}, $\Gamma^{\tilde{p}}$ is just the generator isomorphic to $G^p$, but in general it can be another $i$-specific generator $\Gamma$ corresponding to $G^p$.
Under the assumption of Eq.~(\ref{producti}), there will be fewer coupling constants than the maximum number of entries in the matrix $V_{\beta\alpha,ba}$, and it is clear that not all interactions of the form of Eq.~(\ref{producti}) have this form preserved by scaling (i.e., are ``algebraically renormalizable''), even at the second order. The problem of finding algebraically renormalizable interactions usually requires symmetry considerations of the group corresponding to $\{G^p\}$ and $\{\Gamma^p\}$, and has been discussed in Refs.~\onlinecite{kogan2,kogan3} at the second order. In particular, the search for algebraically renormalizable models has led to the proposal of the $XYZ$ CS model, which we will focus on in the remainder of this paper: \begin{eqnarray} \label{cs01b} V&=&J_S\sum_{m\neq m'}X_{mm'}c_{m'}^{\dagger}c_{m}+J_A\sum_{m\neq m'}X_{mm'}c_{m}^{\dagger}c_{m'}\nonumber\\ &+&J_z\sum_m X_{mm}c_{m}^{\dagger}c_m-\frac{J_z}{N}\sum_{mm'}X_{mm}c_{m'}^{\dagger}c_{m'}. \end{eqnarray} Here we have suppressed all momentum labels for clarity. Below we discuss the scaling flows of Eq.~(\ref{cs01b}) at the third order as predicted by Eq.~(\ref{3sc}). Appendix~\ref{sec:renorm} shows the detailed calculation of the second-order terms, thus demonstrating the algebraic renormalizability of Eq.~(\ref{cs01b}) explicitly at the second order; at the third order we only present the final results.
\section{$XYZ$ Kondo model\label{sec:kondo}} \subsection{From spins to Hubbard operators} To motivate and explain our treatment of the $XYZ$ CS model in Sec.~\ref{sec:ani}, let us start from the analysis of the spin-anisotropic Kondo model \begin{eqnarray} \label{hamiltonian} H=\sum_{{\bf k}\alpha}\epsilon_{\bf k}c_{{\bf k}\alpha}^{\dagger}c_{{\bf k}\alpha} +\sum_{{\bf k}{\bf k}'\alpha\beta} J_{ij}S^i\sigma^j_{\alpha\beta}c_{{\bf k}'\alpha}^{\dagger}c_{{\bf k}\beta}, \end{eqnarray} where $S^x,S^y,S^z$ are the impurity spin operators, $\sigma^x,\sigma^y,\sigma^z$ are the Pauli matrices, $J_{ij}$ is the anisotropic exchange coupling matrix, and summation with respect to any repeated Cartesian index is implied. After the Hamiltonian Eq.~(\ref{hamiltonian}) is reduced to principal axes $J_{ij}=J_{i}\delta_{ij}$, the corresponding scaling equations are \begin{eqnarray} \label{scalingsc02} \frac{dJ_x}{d\ln\Lambda} &=& -2J_yJ_z+J_x (J_y^2+J_z^2),\nonumber\\ \frac{dJ_y}{d\ln\Lambda} &=& -2J_xJ_z+J_y (J_x^2+J_z^2),\\ \frac{dJ_z}{d\ln\Lambda} &=& -2J_xJ_y+J_z (J_x^2+J_y^2).\nonumber \end{eqnarray} (Here and further on we take the constant density of states of the itinerant electrons to be equal to 1). When we neglect the third-order terms (which is justified by the assumption of weak coupling) \cite{shiba,kogan}, the general solution of Eq.~(\ref{scalingsc02}) can be written in terms of elliptic functions \begin{eqnarray} \label{amm9} J_{\alpha} &=&A\;\mathrm{ns}(At+\psi,k)\nonumber\\ J_{\beta}&=&A\;\mathrm{cs}(At+\psi,k)\\ J_{\gamma} &=&A\;\mathrm{ds}(At+\psi,k),\nonumber \end{eqnarray} where $\{\alpha,\beta,\gamma\}$ is an arbitrary permutation of $\{x,y,z\}$, and $t=2\ln\Lambda$. In the general case ($k\neq 0,1$) the flow lines go to infinity both with decreasing and with increasing $t$. (A flow line starting and ending at finite energies corresponds to a finite interval of $At$.) 
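As a sanity check (ours, not part of the original derivation), Eq.~(\ref{amm9}) can be verified numerically: with $t=2\ln\Lambda$, the second-order part of Eq.~(\ref{scalingsc02}) reads $dJ_{\alpha}/dt=-J_{\beta}J_{\gamma}$ (cyclically), and the Jacobi derivative identities $\mathrm{ns}'=-\mathrm{cs}\,\mathrm{ds}$, $\mathrm{cs}'=-\mathrm{ns}\,\mathrm{ds}$, $\mathrm{ds}'=-\mathrm{ns}\,\mathrm{cs}$ make the triple a solution. A Python sketch using scipy (the parameter values are arbitrary; note scipy's `ellipj` uses the parameter $m=k^2$):

```python
import numpy as np
from scipy.special import ellipj

def J_triple(t, A, psi, m):
    """J = A * (ns, cs, ds)(A*t + psi), with scipy's parameter m = k**2."""
    sn, cn, dn, _ = ellipj(A * t + psi, m)
    return np.array([A / sn, A * cn / sn, A * dn / sn])

# arbitrary weak-coupling test data; u = A*t + psi stays inside (0, 2K(m))
A, psi, m, h = 1.3, 0.7, 0.4, 1e-6
errs = []
for t in (0.1, 0.5, 1.0):
    J = J_triple(t, A, psi, m)
    # central finite difference of dJ/dt versus the quadratic right-hand side
    dJdt = (J_triple(t + h, A, psi, m) - J_triple(t - h, A, psi, m)) / (2 * h)
    rhs = -np.array([J[1] * J[2], J[0] * J[2], J[0] * J[1]])
    errs.append(float(np.max(np.abs(dJdt - rhs))))
max_err = max(errs)
```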
A flow line is attracted either to the asymptotic ray $J_{\alpha}=J_{\beta}=J_{\gamma}>0$ as $t$ decreases and to $J_{\alpha}=J_{\beta}=-J_{\gamma}>0$ as $t$ increases, or to the asymptotic ray $J_{\alpha}=J_{\beta}=-J_{\gamma}>0$ as $t$ decreases and to $J_{\alpha}=J_{\beta}=J_{\gamma}<0$ as $t$ increases. We see that there are four strong-coupling phases in total, each corresponding to the three-dimensional attraction region of the appropriate ray. The third-order terms do not change these conclusions, because they would only become important at strong coupling, where we expect the perturbation theory to fail. The scaling equations~(\ref{scalingsc02}) themselves imply the existence of six planes, \begin{eqnarray} \label{sp} J_x&=&J_y,\;\;J_y=J_z,\;\;J_z=J_x\nonumber\\ J_x&=&-J_y,\;\;J_y=-J_z,\;\;J_z=-J_x, \end{eqnarray} each of which is invariant under the scaling flow. These planes form, in some sense, the skeleton of the flow diagram. From Eq.~(\ref{amm9}) it additionally follows that parts of these planes play the role of separatrices. Thus the phase characterized by the attractor $J_{\alpha}=J_{\beta}=J_{\gamma}>0$ is the solid angle with three faces defined by the inequalities $J_x+J_y>0$, $J_x+J_z>0$, $J_y+J_z>0$. The other three phases can be obtained from this one by rotations from the symmetry group of the tetrahedron. The flow lines on the invariant planes should be considered separately. Taking, for example, the plane $J_x=J_y$, we return to the previously well-studied $XXZ$ Kondo model, \begin{eqnarray} \label{lingsc01d} \frac{dJ_x}{d\ln\Lambda}&=&-2J_xJ_z+J_x (J_x^2+J_z^2),\nonumber\\ \frac{dJ_z}{d\ln\Lambda}&=&-2 J_x^2+2J_z J_x^2. \end{eqnarray} These scaling equations exhibit Kosterlitz-Thouless (KT) physics in their range of validity: initial parameters satisfying $0<|J_x|<-J_z$ lead to a flow towards the fixed line $J_x=0$, $J_z<0$; otherwise, either $J_z>0$ or $0<-J_z<|J_x|$ results in a flow to strong coupling $|J_x|\to\infty$ and $J_z\to\infty$.
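The KT structure at the second order can be illustrated numerically (an aside of ours): in the variable $\ell=-\ln\Lambda$, which grows toward the infrared, the flow $dJ_x/d\ell=2J_xJ_z$, $dJ_z/d\ell=2J_x^2$ conserves $J_x^2-J_z^2$, and an initial condition with $0<J_x<-J_z$ flows onto the fixed line at $J_x=0$, $J_z=-\sqrt{J_z(0)^2-J_x(0)^2}$. A sketch using scipy:

```python
import numpy as np
from scipy.integrate import solve_ivp

# second-order XXZ flow rewritten in ell = -ln(Lambda)
def rhs(ell, J):
    Jx, Jz = J
    return [2.0 * Jx * Jz, 2.0 * Jx ** 2]

J0 = [0.1, -0.3]  # 0 < J_x < -J_z: the weak-coupling side of the KT separatrix
sol = solve_ivp(rhs, (0.0, 200.0), J0, rtol=1e-10, atol=1e-12)
Jx_end, Jz_end = sol.y[0, -1], sol.y[1, -1]

# flow invariant J_x^2 - J_z^2 and the predicted endpoint on the fixed line
invariant_drift = abs((Jx_end**2 - Jz_end**2) - (J0[0]**2 - J0[1]**2))
Jz_limit = -np.sqrt(J0[1]**2 - J0[0]**2)
```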
The separatrix line between the two regimes is $|J_x|=-J_z$. Considering, for example, the line $J_x=J_y=J_z$, we see that the ray from the origin to $+\infty$ serves as an attractor, and the ray from the origin to $-\infty$ serves as the separatrix line on the invariant plane. The invariant planes also contain the lines of fixed points, each one corresponding to two of the three $J_i$ being equal to zero. Each fixed point has a one-dimensional attraction region and hence is a critical point. Notice that the flow diagram has a simple geometric meaning at the second order: each flow line can be considered as the intersection of two hyperbolic cylinders, one belonging to the family $J_x^2-J_y^2=C_1$, and the other belonging to the family $J_x^2-J_z^2=C_2$. To motivate the consideration of the $XYZ$ Coqblin-Schrieffer model in Section~\ref{sec:ani}, we now write down the interaction in Eq.~(\ref{hamiltonian}) using Hubbard $X$-operators \begin{eqnarray} \label{huhu} V&=&J_S\left(X_{+-}c_{-}^{\dagger}c_{+}+X_{-+}c_{+}^{\dagger}c_{-}\right)\nonumber\\ &+&J_A\left(X_{+-}c_{+}^{\dagger}c_{-}+X_{-+}c_{-}^{\dagger}c_{+}\right)\nonumber\\ &+&J_z\left(X_{++}c_{+}^{\dagger}c_{+}+X_{--}c_{-}^{\dagger}c_{-}\right)\nonumber\\ &-&\frac{1}{2}J_z\left(X_{++}+X_{--}\right)\left(c_{+}^{\dagger}c_{+}+c_{-}^{\dagger}c_{-}\right), \end{eqnarray} where $J_S=(J_x+J_y)/2$ and $J_A=(J_x-J_y)/2$. (We again omit the wave vector indices.) It turns out that such an interaction is also algebraically renormalizable, and the scaling equation can be written down as \begin{eqnarray} \label{lingsc01c} \frac{dJ_S}{d\ln\Lambda}&=&-2J_SJ_z+J_S (J_S^2-J_A^2+J_z^2),\nonumber\\ \frac{dJ_A}{d\ln\Lambda}&=&2 J_AJ_z+J_A (-J_S^2+J_A^2+J_z^2),\\ \frac{dJ_z}{d\ln\Lambda}&=&-2 (J_S^2-J_A^2)+2J_z (J_S^2+J_A^2).\nonumber \end{eqnarray} Of course, Eq.~(\ref{lingsc01c}) can also be obtained from Eq.~(\ref{scalingsc02}).
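Indeed, the change of variables $J_x=J_S+J_A$, $J_y=J_S-J_A$ turns Eq.~(\ref{scalingsc02}) into Eq.~(\ref{lingsc01c}); this can be confirmed symbolically, as in the following sympy sketch (an editorial aside, not part of the derivation):

```python
import sympy as sp

JS, JA, Jz = sp.symbols('J_S J_A J_z')
Jx, Jy = JS + JA, JS - JA

# right-hand sides of Eq. (scalingsc02) for dJ_i / d ln(Lambda)
fx = -2 * Jy * Jz + Jx * (Jy**2 + Jz**2)
fy = -2 * Jx * Jz + Jy * (Jx**2 + Jz**2)
fz = -2 * Jx * Jy + Jz * (Jx**2 + Jy**2)

# induced flow of J_S = (J_x + J_y)/2 and J_A = (J_x - J_y)/2
dJS = sp.expand((fx + fy) / 2)
dJA = sp.expand((fx - fy) / 2)

# compare with the right-hand sides of Eq. (lingsc01c)
ok = (
    sp.simplify(dJS - (-2*JS*Jz + JS*(JS**2 - JA**2 + Jz**2))) == 0
    and sp.simplify(dJA - (2*JA*Jz + JA*(-JS**2 + JA**2 + Jz**2))) == 0
    and sp.simplify(sp.expand(fz)
                    - (-2*(JS**2 - JA**2) + 2*Jz*(JS**2 + JA**2))) == 0
)
```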
The statements of the present Section are illustrated by the numerical flow diagram Fig.~\ref{fig:flowKondo}, which we plot by numerically integrating the weak-coupling scaling equation. Because of the symmetries $J_S\to -J_S$ and $J_A\to-J_A$, it suffices to focus on the case $J_S\geq0$ and $J_A\geq 0$. Scaling flows in the various limiting cases we have considered are highlighted. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{flow_diagram_XYZ_N=2_v2} \end{center} \caption{Numerical three-dimensional flow diagram of the $XYZ$ Kondo model, obtained from the third-order weak-coupling scaling equation Eq.~(\ref{lingsc01c}). Only the $J_S\geq0$ and $J_A\geq 0$ part is shown; the rest of the diagram follows from the symmetries $J_S\to -J_S$ and $J_A\to-J_A$. The red trajectories lie on the $J_A=0$ plane and the green ones lie on the $J_S=0$ plane; both exhibit Kosterlitz-Thouless physics. The brown trajectories are on the light brown phase boundary $J_z+J_S-J_A=0$, and the gray line is the fixed line $J_S=J_A$ and $J_z=0$. The remaining blue trajectories are between phase boundaries. Two strong-coupling phases are shown: $J_S\to\infty$ and $J_z\to\infty$; $J_A\to\infty$ and $J_z\to-\infty$.}\label{fig:flowKondo} \end{figure} \subsection{Pseudogap density of states} The flow diagram of the $XYZ$ Kondo model is much more interesting when the itinerant electrons have a (local) density of states with a power-law dependence upon the energy because of, for instance, the electron dispersion\cite{fradkin}: \begin{eqnarray} \label{e} \rho(\epsilon)=C|\epsilon|^r,\;\;\;\text{if}\;\;|\epsilon|<D. \end{eqnarray} This model was considered by us previously \cite{kogan,kogan2} but only the flow diagram of the $XXZ$ case ($J_x=J_y$) was discussed. In this section, we clarify the full three-dimensional flow diagram as a special case of the $XYZ$ CS model. 
We limit ourselves to the particle-hole symmetric case, because particle-hole symmetry breaking perturbations are known to change the phase diagram drastically in a way that is difficult to analyze using our weak-coupling method\cite{gbi}. The scaling equation in the appropriate units (for details see Refs.~\onlinecite{kogan,kogan2}) is \begin{eqnarray} \label{scalinga00} \frac{d J_x}{d\ln\Lambda}&=&rJ_x-2J_yJ_z+J_x (J_y^2+J_z^2) \nonumber\\ \frac{d J_y}{d\ln\Lambda}&=&rJ_y-2J_xJ_z+J_y (J_x^2+J_z^2) \\ \frac{d J_z}{d\ln\Lambda}&=&rJ_z-2J_xJ_y+J_z (J_x^2+J_y^2).\nonumber \end{eqnarray} In the weak-coupling regime Eq.~(\ref{scalinga00}) has a trivial fixed point $J_x=J_y=J_z=0$ corresponding to a decoupled impurity spin, and four nontrivial fixed points \begin{eqnarray} \label{odd} \left|J_x\right|=\left|J_y\right|=\left|J_z\right|=\frac{r}{2}+O(r^2);\;\;\; \;\;\;J_xJ_yJ_z>0, \end{eqnarray} describing a finite isotropic Heisenberg exchange. Linear analysis in the vicinity of the fixed points shows that the trivial fixed point is stable, and hence describes the decoupled phase. Nontrivial fixed points are semistable, and hence are critical points of the model. When we ignore the third-order terms, the general solution of Eq.~(\ref{scalinga00}) can be written in terms of elliptic functions and contains three parameters $A,\psi,k$, \cite{kogan,kogan2} \begin{eqnarray} \label{amm0} J_{\alpha} &=&A\lambda\cdot\mathrm{ns}(A\lambda+\psi,k)\nonumber\\ J_{\beta}&=&A\lambda\cdot\mathrm{cs}(A\lambda+\psi,k)\\ J_{\gamma} &=&A\lambda\cdot\mathrm{ds}(A\lambda+\psi,k),\nonumber \end{eqnarray} where $\lambda=2\Lambda^r$, and $\{\alpha,\beta,\gamma\}$ is an arbitrary permutation of $\{x,y,z\}$. Notice that Eq.~(\ref{amm0}) describes a two-parameter family of the flow lines (the parameters being $\psi$ and $k$). Eq.~(\ref{amm9}) also describes a two-parameter family of the flow lines, but the parameters are $A$ and $k$, the former being just a trivial scale parameter. 
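On the isotropic line $J_x=J_y=J_z=J$, Eq.~(\ref{scalinga00}) collapses to $dJ/d\ln\Lambda=rJ-2J^2+2J^3$, whose nontrivial root reproduces the fixed-point coupling of Eq.~(\ref{odd}). A minimal check (the value of $r$ is an arbitrary sample):

```python
import math

# On the isotropic line, Eq. (scalinga00) reduces to
# dJ/dlnL = r J - 2 J^2 + 2 J^3, so the nontrivial fixed point is the root
# J* = (1 - sqrt(1 - 2r))/2 = r/2 + O(r^2), in agreement with Eq. (odd).
r = 0.1                              # sample pseudogap exponent
jstar = 0.5 * (1.0 - math.sqrt(1.0 - 2.0 * r))
residual = r * jstar - 2.0 * jstar ** 2 + 2.0 * jstar ** 3
```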
Eq.~(\ref{amm0}) clearly shows the existence of the decoupled phase and the strong-coupling phases, identical to those obtained in the previous Section. Eq.~(\ref{amm0}) also shows that each nontrivial fixed point belongs to a critical surface $\psi=0$, which separates one of the strong-coupling phases from the decoupled phase. From Eq.~(\ref{amm0}) we see that each flow line lies completely on the elliptic cone\cite{kogan,kogan2} \begin{equation} \label{cone} (1-k^2) J_{\alpha}^2+k^2 J_{\beta}^2-J_{\gamma}^2=0. \end{equation} It should be noted that this property holds only when the third-order terms in the scaling equation are negligible. This family of elliptic cones foliates the phase space, and the cones all touch each other along the isotropic lines $\left|J_x\right|=\left|J_y\right|=\left|J_z\right|$, as shown in Fig.~\ref{fig:cones}. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{cones} \end{center} \caption{Elliptic cones Eq.~(\ref{cone}) containing second-order flow trajectories of the pseudogap Kondo model. Red, green, blue, yellow and magenta surfaces have $k=0$, $k=1/2$, $k=1/\sqrt{2}$, $k=\sqrt{3}/2$ and $k=1$, respectively.\label{fig:cones}} \end{figure} For the degenerate cases $k=0$ or $k=1$ the elliptic cone becomes a pair of planes. The flow trajectories on such planes were presented in our previous publication\cite{kogan}. Here, in Fig.~\ref{fig:onecone}, we show the trajectories on one of the nondegenerate cones, which nevertheless give a representative picture of the flow diagram, because the behavior on different cones is rather similar. When the third-order terms are included, the flow diagram remains qualitatively the same, although we can no longer classify the trajectories by elliptic cones.
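That the cones are preserved can also be seen without solving the equations: along the second-order flow the quadratic form $Q=(1-k^2)J_{\alpha}^2+k^2J_{\beta}^2-J_{\gamma}^2$ obeys $dQ/d\ln\Lambda=2rQ$, so a trajectory starting on a cone ($Q=0$) stays on it. A random-point check of this identity (the values of $r$ and $k^2$ are arbitrary samples):

```python
import random

# Q = (1-k^2) J_a^2 + k^2 J_b^2 - J_c^2 obeys dQ/dlnL = 2 r Q under the
# second-order part of Eq. (scalinga00), so the cone Q = 0 is invariant.
r, k2 = 0.1, 0.5                     # sample r and k^2
random.seed(0)
max_dev = 0.0
for _ in range(200):
    ja, jb, jc = (random.uniform(-1.0, 1.0) for _ in range(3))
    fa = r * ja - 2.0 * jb * jc      # second-order scaling equations
    fb = r * jb - 2.0 * ja * jc
    fc = r * jc - 2.0 * ja * jb
    q = (1.0 - k2) * ja ** 2 + k2 * jb ** 2 - jc ** 2
    dq = 2.0 * ((1.0 - k2) * ja * fa + k2 * jb * fb - jc * fc)
    max_dev = max(max_dev, abs(dq - 2.0 * r * q))
```

The cubic cross terms cancel because $(1-k^2)+k^2-1=0$, which is exactly why every cone of the family is invariant.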
\begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{onecone} \end{center} \caption{Typical flow trajectories of the pseudogap $XYZ$ Kondo model with $r=0.1$ on the surface of a cone Eq.~(\ref{cone}); we have chosen $k=1/\sqrt{2}$ and ignored the third-order terms in Eq.~(\ref{scalinga00}). The trivial fixed point at the origin is painted in black, and the nontrivial fixed points in red. The isotropic orange flow trajectories are shared by all elliptic cones; the green trajectories flow towards weak coupling, the blue ones towards strong coupling, and the red trajectories lie on phase boundaries.\label{fig:onecone}} \end{figure} \section{$XYZ$ Coqblin-Schrieffer model\label{sec:ani}} In this section, we turn our attention to the $XYZ$ CS model with an arbitrary number of channels $N$, thereby generalizing our $N=2$ results in Section~\ref{sec:kondo}. \subsection{Constant density of states} The CS model with full $SU(N)$ symmetry\cite{coqblin} is represented by the interaction \begin{equation} \label{cs0} V=J\sum_{mm'} X_{mm'}c_{m'}^{\dagger}c_m -(J/N)\sum_{mm'}X_{mm}c_{m'}^{\dagger}c_{m'}, \end{equation} where the quantum number $m,m'=1,\dots,N$. The scaling equation for the Hamiltonian Eq.~(\ref{cs0}) has the form \cite{hewson} \begin{eqnarray} \label{is} \frac{dJ}{d\ln\Lambda}=-N J^2+N J^3. \end{eqnarray} For $N=2$ the model coincides with the spin-isotropic Kondo model. While Eq.~(\ref{is}) suggests the existence of a nontrivial fixed point $J=1$, we emphasize again that the equation becomes unreliable in the strong-coupling regime. The anisotropic Kondo model represented in terms of Hubbard $X$-operators Eq.~(\ref{huhu}) has motivated one of us\cite{kogan2,kogan3} to consider the algebraic renormalizability of the anisotropic generalization of the CS model (or the ``$XYZ$ CS model'') for arbitrary $N$, Eq.~(\ref{cs01b}). 
This model proves to be algebraically renormalizable, and the scaling equations are\footnote{The second-order terms were obtained in the previous publication of one of the authors [Eq.~(46) of Ref.~\onlinecite{kogan2}], but due to an elementary algebra mistake, the analog of Eq.~(\ref{calingsc01c}) (and its special case for $N=3$ in Ref.~\onlinecite{kogan3}) contained an error.} \begin{eqnarray} \label{calingsc01c} \frac{dJ_S}{d\ln\Lambda}&=&-(N-2)J_S^2-2J_SJ_z\nonumber\\ &&+J_S [(N-1)J_S^2+(N-3) J_A^2+J_z^2],\nonumber\\ \frac{dJ_A}{d\ln\Lambda}&=&(N-2)J_A^2+2 J_AJ_z\\ &&+J_A [(N-3)J_S^2+(N-1) J_A^2+J_z^2],\nonumber\\ \frac{dJ_z}{d\ln\Lambda}&=&-N (J_S^2-J_A^2)+N J_z(J_S^2+J_A^2). \nonumber \end{eqnarray} The second-order terms are derived explicitly in Appendix~\ref{sec:renorm}. Notice the symmetry of the second-order terms of Eq.~(\ref{calingsc01c}) with respect to the simultaneous transformations $J_S\leftrightarrow J_A$ and $\ln\Lambda \rightarrow -\ln\Lambda$. The $XXZ$ CS model introduced in Ref.~\onlinecite{kogan2} is obtained by setting $J_A=0$ (i.e. $J_x=J_y$) in Eqs.~(\ref{cs01b}) and (\ref{calingsc01c}). In the fully isotropic case $J_x=J_y=J_z$ from Eq.~(\ref{calingsc01c}) we recover the scaling equation for the original CS model Eq.~(\ref{is}). Lacking analytical integration of Eq.~(\ref{calingsc01c}) for $N>2$ (even when the third-order terms are neglected), we concentrate on qualitative analysis of the flow diagram and of the phase diagram. There are three particular solutions of the equation, corresponding to straight lines \begin{eqnarray} \label{s1} &&J_A=0,\; J_S=J_z,\;\;\;\frac{dJ_z}{d\ln\Lambda}=-NJ_z^2+NJ_z^3; \\ \label{s2} &&J_S=0,\;J_A=J_z,\;\;\;\frac{dJ_z}{d\ln\Lambda}=NJ_z^2+NJ_z^3; \\ \label{s5} &&J_z=0,\;\;J_A=-J_S,\;\;\; \nonumber\\ &&\frac{dJ_S}{d\ln\Lambda} =-(N-2) J_S^2+2(N-2) J_S^3.
\end{eqnarray} In addition, if we neglect the third-order terms, two more particular solutions emerge: \begin{eqnarray} \label{s3} &&J_A=0,\;\; J_S=-\frac{2}{N}J_z,\;\;\;\frac{dJ_z}{d\ln\Lambda}=-\frac{4}{N}J_z^2;\\ \label{s4} &&J_S=0,\;\;J_A=-\frac{2}{N}J_z,\;\;\;\frac{dJ_z}{d\ln\Lambda}=\frac{4}{N}J_z^2. \end{eqnarray} Consider, for example, Eq.~(\ref{s1}). The solution which starts at any $J_z>0$ blows up at a finite value of $\ln\Lambda$. In addition, as shown in Appendix~\ref{sec:pfaff}, the ray $J_x=J_y=J_z>0$ is the attractor for the flow lines for arbitrary $N$. We identify the appropriate strong-coupling phase with the attraction region of the solution Eq.~(\ref{s1}). The same applies to the other three rays defined by Eqs.~(\ref{s2}), (\ref{s3}) and (\ref{s4}). Additional support for this assumption comes from the analysis of the separatrices. From Eq.~(\ref{calingsc01c}) it follows that, similarly to the case of the Kondo model, the flow diagram has three invariant planes: \begin{equation} \label{pl6_1} 1)\;J_S=0,\;\;\;2)\;J_A=0,\;\;\;3)\;J_S+J_A=J_z. \end{equation} In addition, when we neglect the third-order terms, there is another set of three invariant planes, \begin{eqnarray} \label{pl6_2} &&4)\;\frac{N}{2}(J_S+J_A)=-J_z\nonumber\\ &&5)\;J_S-\frac{N}{2} J_A=J_z,\\ &&6)\;\frac{N}{2}J_S-J_A=-J_z.\nonumber \end{eqnarray} One can easily see that for $N=2$ these are the invariant planes given by Eq.~(\ref{sp}). As in the case of the Kondo model, parts of these invariant planes are separatrices. Thus, for example, Eq.~(\ref{calingsc01c}) leads to \begin{eqnarray} &&\frac{d}{d\ln\Lambda}\left(\frac{N}{2} J_S-J_A+J_z\right) \nonumber\\ =&&-\left(N J_S+2J_A\right)\left(\frac{N}{2} J_S-J_A+J_z\right) +O(J^3). \end{eqnarray} Hence when $N J_S+2J_A>0$, the last plane from Eq.~(\ref{pl6_2}) is an approximate phase boundary since any infinitesimal deviation from it is a relevant perturbation. A similar analysis can be performed for the other invariant planes.
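Both statements can be confirmed by evaluating the right-hand sides of Eq.~(\ref{calingsc01c}) at sample points: the plane $J_S+J_A=J_z$ is invariant even with the third-order terms kept, and at second order the combination $(N/2)J_S-J_A+J_z$ obeys the factorized equation quoted above identically. A sketch (random points are our own choice):

```python
import random

# (i) P3 = J_S + J_A - J_z has dP3/dlnL = 0 whenever P3 = 0, even with
#     the third-order terms of Eq. (calingsc01c) kept;
# (ii) at second order, P6 = (N/2) J_S - J_A + J_z obeys
#     dP6/dlnL = -(N J_S + 2 J_A) P6 identically.
def f_full(N, s, a, z):
    fs = -(N - 2) * s * s - 2 * s * z + s * ((N - 1) * s * s + (N - 3) * a * a + z * z)
    fa = (N - 2) * a * a + 2 * a * z + a * ((N - 3) * s * s + (N - 1) * a * a + z * z)
    fz = -N * (s * s - a * a) + N * z * (s * s + a * a)
    return fs, fa, fz

random.seed(2)
dev_plane, dev_sep = 0.0, 0.0
for N in (2, 3, 5):
    for _ in range(100):
        s, a = random.uniform(-1, 1), random.uniform(-1, 1)
        fs, fa, fz = f_full(N, s, a, s + a)      # point on the plane P3 = 0
        dev_plane = max(dev_plane, abs(fs + fa - fz))
        z = random.uniform(-1, 1)                # arbitrary point for (ii)
        fs2 = -(N - 2) * s * s - 2 * s * z       # second-order parts only
        fa2 = (N - 2) * a * a + 2 * a * z
        fz2 = -N * (s * s - a * a)
        p6 = 0.5 * N * s - a + z
        dev_sep = max(dev_sep,
                      abs(0.5 * N * fs2 - fa2 + fz2 + (N * s + 2 * a) * p6))
```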
Let us analyze the flow lines on the invariant planes. Though taking $J_A=0$ or $J_S=0$ no longer brings us back to the familiar equations for the anisotropic Kondo model, the KT physics remains in action; it is only the separatrix lines that are different from before \cite{kogan2}. Notice that half of each line given by Eqs.~(\ref{s1}) - (\ref{s4}) serves as an attractor, and the other half as a separatrix line. For instance, provided $J_A=0$, systems whose initial parameters are located between the two separatrix lines [i.e. $J_z<0$, $J_z<J_S<-(2/N)J_z$] will flow to one of the $J_S=0$, $J_z<0$ fixed points, and all other systems flow to strong coupling $J_S\to \pm \infty$, $J_z\to \infty$, depending on the initial sign of $J_S$. Consider now the fixed points of the scaling equation. Similar to the spin-anisotropic Kondo model for $N=2$, Eq.~(\ref{calingsc01c}) indicates that the $XYZ$ CS model has a line of fixed points $J_S=J_A=0$ for any $N$. Since all $N$-dependent terms are quadratic in $J_S$ or $J_A$, linearization again tells us that these fixed points are semistable, with $|J_S|$ relevant and $J_A$ irrelevant for $J_z>0$, and $|J_A|$ relevant and $J_S$ irrelevant for $J_z<0$. However, the Kondo model line of fixed points $J_S=J_A$, $J_z=0$ for $N=2$ is replaced by the line $J_z=-(N/2-1)J_S=-(N/2-1)J_A$, which does not lie on the $J_S$-$J_A$ plane. On the other hand, the line (\ref{s5}) for $N>2$ is not a fixed line, as it was for the Kondo model, but instead describes two flow trajectories: $J_S<0$ flows to the origin, and $J_S>0$ flows to strong coupling. We illustrate our findings of this Section in the flow diagram Fig.~\ref{fig:flowCS} for the $N=3$ case. As shown in the figure, the four strong-coupling phases mentioned above exhaust the phase diagram (apart from the critical points) in the weak-coupling regime, where our perturbative scaling method is applicable.
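The stability pattern along the fixed line can be read off from a finite-difference linearization of Eq.~(\ref{calingsc01c}) about $(0,0,J_z)$; the values of $N$ and $J_z$ below are arbitrary samples:

```python
# Linearize Eq. (calingsc01c) about the fixed line J_S = J_A = 0:
# one finds dJ_S/dlnL ~ (-2 J_z + J_z^2) J_S and
# dJ_A/dlnL ~ (+2 J_z + J_z^2) J_A.  A negative slope means growth as
# ln Lambda decreases, so for J_z > 0 the coupling J_S is relevant while
# J_A is irrelevant, in agreement with the text.
N, z, h = 3, 0.1, 1e-6               # sample channel number and J_z

def f_s(s, a, z):
    return -(N - 2) * s * s - 2 * s * z + s * ((N - 1) * s * s + (N - 3) * a * a + z * z)

def f_a(s, a, z):
    return (N - 2) * a * a + 2 * a * z + a * ((N - 3) * s * s + (N - 1) * a * a + z * z)

lam_s = f_s(h, 0.0, z) / h           # slope in the J_S direction
lam_a = f_a(0.0, h, z) / h           # slope in the J_A direction
```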
\begin{figure} \begin{center} \includegraphics[width=0.8\columnwidth]{flow_diagram_XYZ_N=3_v3} \end{center} \caption{Numerical three-dimensional flow diagram of the $XYZ$ CS model with $N=3$, calculated from the third-order weak-coupling scaling equation Eq.~(\ref{calingsc01c}). The symmetries $J_S\to -J_S$ and $J_A\to-J_A$ are now lost in contrast to the $N=2$ Kondo case. The red trajectories lie on the $J_A=0$ plane and the green ones lie on the $J_S=0$ plane; the KT separatrix lines are no longer reflection-symmetric with respect to the $J_S=0$ or $J_A=0$ planes. The gray line is the approximate fixed line $J_z=-(N/2-1)J_S=-(N/2-1)J_A$ near which the flow is third-order, and the blue trajectories satisfy $J_S=-J_A$, $J_z=0$. The remaining colored curves are typical trajectories which reside on one of the phase boundaries given in the text (magenta) or flow away from that phase boundary (cyan). All four strong-coupling phases are shown: $J_S\to \pm \infty$ and $J_z\to\infty$; $J_A\to \pm \infty$ and $J_z\to-\infty$.}\label{fig:flowCS} \end{figure} \subsection{Pseudogap density of states} When the density of states takes a power-law form Eq.~(\ref{e}), the scaling equation for the $XYZ$ CS model becomes \begin{eqnarray} \label{calingsc01p} \frac{dJ_S}{d\ln\Lambda}&=&r J_S-(N-2)J_S^2-2J_SJ_z\nonumber\\ &&+J_S [(N-1)J_S^2+(N-3) J_A^2+J_z^2],\nonumber\\ \frac{dJ_A}{d\ln\Lambda}&=&r J_A+(N-2)J_A^2+2 J_AJ_z\\ &&+J_A [(N-3)J_S^2+(N-1) J_A^2+J_z^2],\nonumber\\ \frac{dJ_z}{d\ln\Lambda}&=&r J_z-N (J_S^2-J_A^2)+N J_z(J_S^2+J_A^2).\nonumber \end{eqnarray} We plot the corresponding flow diagram in Fig.~\ref{fig:flowCSp}. \begin{figure} \begin{center} \includegraphics[width=0.9\columnwidth]{p_flow_diagram_XYZ_N=3_v2} \end{center} \caption{Numerical three-dimensional flow diagram of the pseudogap $XYZ$ CS model with $N=3$ and $r=0.12$, calculated from the third-order weak-coupling scaling equation Eq.~(\ref{calingsc01p}). 
There are no longer any fixed lines in contrast to the constant-density-of-states model, but all trajectories connecting the trivial fixed point (black) with the nontrivial fixed points (red, green and magenta) are straight lines when $r$ is small. The red and orange trajectories lie on the $J_A=0$ plane, the green and brown ones lie on the $J_S=0$ plane, and the blue trajectories represent perturbations away from these planes. The purple trajectories represent perturbations away from the magenta fixed point with $J_z=0$. The strong-coupling phases are identical to those in the case of a constant density of states.}\label{fig:flowCSp} \end{figure} There is first and foremost a trivial decoupled fixed point $J_S=J_A=J_z=0$ (black). In addition, several nontrivial fixed points $(J_S,J_A,J_z)$ exist where one of the coupling constants vanishes: \begin{eqnarray} \frac{1}{2}(1-\sqrt{1-\frac{4r}{N}})(1,0,1)\;\;\text{(red)},\nonumber\\ \frac{1}{2}(1-\sqrt{1-\frac{4r}{N}})(0,-1,-1)\;\;\text{(green)},\nonumber\\ \frac{1}{2}(1-\sqrt{1-\frac{4r}{N-2}})(1,-1,0)\;\;\text{(magenta)},\\ (-r/2,0,Nr/4)+O(r^2)\;\;\text{(red)},\nonumber\\ (0,r/2,-Nr/4)+O(r^2)\;\;\text{(green)}.\nonumber \end{eqnarray} As before, in order for the perturbative treatment to be valid, these fixed points are only meaningful for small $r$. For the last two fixed points we have only kept the lowest-order terms in $r$; the full expressions are lengthy as they require solving quartic equations. The four $J_A=0$ and $J_S=0$ fixed points are critical points with two stable directions, one in-plane and the other out-of-plane. Therefore, these four fixed points belong to the phase boundaries which separate the strong-coupling phases from the decoupled phase; these phase boundaries intersect the $J_A=0$ ($J_S=0$) plane at the red (green) trajectories, which are themselves separatrix lines on the coordinate planes. 
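As a consistency check, the first of the listed fixed points can be substituted back into Eq.~(\ref{calingsc01p}); the sketch below uses the same $N=3$ and $r=0.12$ as the figure:

```python
import math

# The "red" fixed point (J*, 0, J*) with J* = (1 - sqrt(1 - 4r/N))/2
# annihilates the full third-order right-hand side of Eq. (calingsc01p):
# on the line J_A = 0, J_S = J_z = J it reduces to J (r - N J + N J^2) = 0.
N, r = 3, 0.12
j = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * r / N))
s, a, z = j, 0.0, j
fs = r * s - (N - 2) * s * s - 2 * s * z + s * ((N - 1) * s * s + (N - 3) * a * a + z * z)
fa = r * a + (N - 2) * a * a + 2 * a * z + a * ((N - 3) * s * s + (N - 1) * a * a + z * z)
fz = r * z - N * (s * s - a * a) + N * z * (s * s + a * a)
residual = max(abs(fs), abs(fa), abs(fz))
```

For small $r$ the root indeed approaches $r/N$, so the fixed point merges with the trivial one as $r\to 0$.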
It is plausible that the $J_z=0$ nontrivial fixed point, which has only one stable direction, sits exactly at the intersection of two such phase boundaries. As shown in Fig.~\ref{fig:flowCSp}, all of the following straight lines between the trivial fixed point and the nontrivial ones are valid scaling trajectories: $J_S=J_z, J_A=0$ and $J_z=-NJ_S/2, J_A=0$ (orange), $J_A=J_z, J_S=0$ and $J_z=-NJ_A/2, J_S=0$ (brown) and $J_S=-J_A, J_z=0$ (magenta). Each of these lines is divided into three segments by the trivial, stable fixed point and the nontrivial, unstable fixed point. Based on our observations, as with the $N=2$ case, we expect the phase space for the $N>2$ pseudogap $XYZ$ CS model to be divided into five phases, namely a weak-coupling phase controlled by the trivial fixed point $J_S=J_A=J_z=0$ and four strong-coupling phases $J_S\to \pm \infty, J_z\to \infty$ and $J_A\to \pm \infty, J_z\to -\infty$. Unfortunately, the important linear terms in Eq.~(\ref{calingsc01p}) make it a difficult task in general to determine the phase boundaries or scaling invariants analytically. \section{Conclusions\label{sec:conclusion}} In this work, we have derived the third-order poor man's scaling equation for a quantum impurity model in an itinerant electron gas in the weak-coupling regime. Our theory is applied to the $XYZ$ Coqblin--Schrieffer model which was introduced by one of us earlier and is shown to be algebraically renormalizable. We write down the poor man's scaling equations under constant and pseudogap densities of states, and discuss their solutions for both the $N=2$ case (the anisotropic $XYZ$ Kondo model) and the $N>2$ case in detail. The corresponding three-dimensional weak-coupling flow diagrams are presented. \begin{acknowledgments} The results presented in this paper were obtained during E.K.'s visit to Max-Planck-Institut f\"{u}r Physik komplexer Systeme in December of 2019 and January of 2020. E.K. 
cordially thanks the Institute for the hospitality extended to him during that and all his previous visits. The authors are grateful to V. Yu. Irkhin and P. Zalom for valuable discussions. \end{acknowledgments} \begin{appendix} \section{Third-order scaling equation Eq.~(\ref{v3sceq})\label{sec:appv3}} To obtain the third-order terms in the scaling equation Eq.~(\ref{v3sceq}), following Ref.~\onlinecite{hewson}, Appendix~D, we should take into account the energy dependence of the effective Hamiltonian which is neglected at the second order. For the generic impurity Hamiltonian Eq.~(\ref{hamilto}), when reducing the semi-bandwidth from $D$ to $D-\left\vert \delta D\right\vert $, the $O\left( V^{2}\right) $ correction to the low-energy Hamiltonian as represented by Fig.~\ref{fig:feynman} (a) is written as \begin{align} & \rho \left\vert \delta D\right\vert \sum_{k^{\prime }q}\sum_{\alpha \beta ab}\sum_{\alpha ^{\prime }a^{\prime }}\frac{V_{\beta \alpha ,ba}V_{\alpha \alpha ^{\prime },aa^{\prime }}X_{ba^{\prime }}c_{k^{\prime }\beta }^{\dag }c_{q\alpha ^{\prime }}}{E-D+\epsilon _{q}} \notag \\ & +\rho \left\vert \delta D\right\vert \sum_{kq^{\prime }}\sum_{\alpha \beta ab}\sum_{\beta ^{\prime }a^{\prime }}\frac{V_{\beta \alpha ,ba}V_{\beta ^{\prime }\beta ,aa^{\prime }}X_{ba^{\prime }}c_{k\alpha }c_{q^{\prime }\beta ^{\prime }}^{\dag }}{E-D-\epsilon _{q^{\prime }}}\text{.} \end{align} Tracing out the conduction electrons, we find the $O\left( V^{2}\right) $ correction to the impurity part of the effective Hamiltonian: \begin{align} & \rho ^{2}\left\vert \delta D\right\vert \sum_{\alpha \beta ab}\sum_{a^{\prime }}V_{\beta \alpha ,ba}V_{\alpha \beta ,aa^{\prime }}X_{ba^{\prime }} \notag \\ & \times\left[ \int_{-D+\left\vert \delta D\right\vert }^{0}\frac{d\epsilon _{q}}{E-D+\epsilon _{q}}+\int_{0}^{D-\left\vert \delta D\right\vert }\frac{d\epsilon _{k}}{ E-D-\epsilon _{k}}\right] \notag \\ & =\rho ^{2}\left\vert \delta D\right\vert \sum_{ab}\sum_{\alpha \beta c}V_{\beta \alpha
,bc}V_{\alpha \beta ,ca}X_{ba}\left( -2\ln 2-\frac{E}{D}\right) \text{.} \end{align} The term linear in energy $E$ will play an especially important role in the following: \begin{equation} -E\frac{\left\vert \delta D\right\vert }{D}\rho ^{2}\sum_{ab}\sum_{\alpha \beta c}V_{\beta \alpha ,bc}V_{\alpha \beta ,ca}X_{ba}\text{.} \end{equation} We emphasize that this is an operator in the impurity Hilbert space. We now turn to the third-order diagram in Fig.~\ref{fig:feynman} (b). Contracting the fermion lines, this $O\left( V^{3}\right) $ diagram reads \begin{eqnarray} &&\sum_{kq}\sum_{\alpha \beta ab}\sum_{k_{1}k_{1}^{\prime }}\sum_{\alpha _{1}\beta _{1}}\sum_{a^{\prime }b^{\prime }}c_{q\beta }^{\dag }c_{k\alpha }c_{k_{1}^{\prime }\beta _{1}}^{\dag }c_{k_{1}\alpha _{1}}c_{k\alpha }^{\dag }c_{q\beta }\notag \\ &&\times \frac{V_{\beta \alpha ,ba}V_{\beta _{1}\alpha _{1},ab^{\prime }}V_{\alpha \beta ,b^{\prime }a^{\prime }}X_{ba^{\prime }}}{\left(E-\epsilon _{k}+\epsilon _{q}-\epsilon _{k_{1}^{\prime }}+\epsilon _{k_{1}}\right) \left( E-\epsilon _{k}+\epsilon _{q}\right)} \text{;} \end{eqnarray} here we should contract $c_{k\alpha }$ with $c_{k\alpha }^{\dag }$ and $c_{q\beta }^{\dag }$ with $c_{q\beta }$. If the virtual particle $c_{k\alpha}^{\dag }$ resides in the energy range to be integrated out (i.e.
$\epsilon _{k}\approx D$), this gives a contribution \begin{align} & \rho ^{2}\left\vert \delta D\right\vert \sum_{\alpha \beta ab}\sum_{k_{1}k_{1}^{\prime }}\sum_{\alpha _{1}\beta _{1}}\sum_{a^{\prime }b^{\prime }}V_{\beta \alpha ,ba}V_{\beta _{1}\alpha _{1},ab^{\prime }}V_{\alpha \beta ,b^{\prime }a^{\prime }} \notag \\ &\times \int_{-D+\left\vert \delta D\right\vert }^{0}\frac{d\epsilon _{q} X_{ba^{\prime }} c_{k_{1}^{\prime }\beta _{1}}^{\dag }c_{k_{1}\alpha _{1}}}{\left(E-D+\epsilon _{q}-\epsilon _{k_{1}^{\prime }}+\epsilon _{k_{1}}\right) \left( E-D+\epsilon _{q}\right)} \notag \\ & \approx\rho ^{2}\frac{\left\vert \delta D\right\vert }{2D}\sum_{\alpha \beta ab}\sum_{k_{1}k_{1}^{\prime }}\sum_{\alpha _{1}\beta _{1}}\sum_{a^{\prime }b^{\prime }}V_{\beta \alpha ,ba}V_{\beta _{1}\alpha _{1},ab^{\prime }}V_{\alpha \beta ,b^{\prime }a^{\prime }} \notag \\ & \times X_{ba^{\prime }}c_{k_{1}^{\prime }\beta _{1}}^{\dag }c_{k_{1}\alpha _{1}}\text{;} \end{align} on the other hand, if $\epsilon _{q}\approx -D$, we have the virtual hole contribution \begin{align} & \rho ^{2}\left\vert \delta D\right\vert \sum_{\alpha \beta ab}\sum_{k_{1}k_{1}^{\prime }}\sum_{\alpha _{1}\beta _{1}}\sum_{a^{\prime }b^{\prime }}V_{\beta \alpha ,ba}V_{\beta _{1}\alpha _{1},ab^{\prime }}V_{\alpha \beta ,b^{\prime }a^{\prime }} \notag \\ &\times \int_{0}^{D-\left\vert \delta D\right\vert }\frac{d\epsilon _{k}X_{ba^{\prime }}c_{k_{1}^{\prime }\beta _{1}}^{\dag }c_{k_{1}\alpha _{1}}}{\left(E-D-\epsilon _{k}-\epsilon _{k_{1}^{\prime }}+\epsilon _{k_{1}}\right) \left( E-D-\epsilon _{k}\right) } \notag \\ & \approx\rho ^{2}\frac{\left\vert \delta D\right\vert }{2D}\sum_{\alpha \beta ab}\sum_{k_{1}k_{1}^{\prime }}\sum_{\alpha _{1}\beta _{1}}\sum_{a^{\prime }b^{\prime }}V_{\beta \alpha ,ba}V_{\beta _{1}\alpha _{1},ab^{\prime }}V_{\alpha \beta ,b^{\prime }a^{\prime }}\notag \\ & \times X_{ba^{\prime }}c_{k_{1}^{\prime }\beta _{1}}^{\dag }c_{k_{1}\alpha _{1}}\text{.} \end{align} The virtual particle and
hole contributions are thus identical. To find the total third-order scaling contribution to the coupling constant, we write the effective Hamiltonian as \begin{align} H_{\text{eff}}\left( E\right) &=\left( 1+S\right) ^{-\frac{1}{2}}H_{\text{eff}}\left( 0\right) \left( 1+S\right) ^{-\frac{1}{2}} \notag \\ &\approx \left( 1-\frac{1}{2}S\right) H_{\text{eff}}\left( 0\right) \left( 1-\frac{1}{2}S\right) \text{,} \end{align} where the effective impurity Hamiltonian at $E=0$ is, to $O\left( V^{3}\right) $, \begin{eqnarray} &&H_{\text{eff}}\left( 0\right) =\sum_{\mathbf{kk}^{\prime }}\sum_{\alpha \beta ab}[ V_{\beta \alpha ,ba}-\rho \frac{\left\vert \delta D\right\vert }{D} \sum_{\gamma c} \notag \\ &&\times \left( V_{\beta \gamma ,bc}V_{\gamma \alpha ,ca}-V_{\gamma \alpha ,bc}V_{\beta \gamma ,ca}\right) +\rho ^{2}\frac{\left\vert \delta D\right\vert }{D} \notag \\ && \times \sum_{\delta \gamma }\sum_{cd}V_{\delta \gamma ,bc}V_{\beta \alpha ,cd}V_{\gamma \delta ,da}] X_{ba}c_{\mathbf{k}^{\prime }\beta }^{\dag }c_{\mathbf{k}\alpha }\text{,} \end{eqnarray} and the wave function renormalization $S\equiv -\partial H_{\text{eff}}\left( E\right) /\partial E$ is to $O\left( V^{2}\right) $ \begin{equation} S=\rho ^{2}\frac{\left\vert \delta D\right\vert }{D}\sum_{\alpha \beta }\sum_{abc}V_{\beta \alpha ,bc}V_{\alpha \beta ,ca}X_{ba}\text{.} \end{equation} Putting everything together, we obtain Eq.~(\ref{v3sceq}). \section{Algebraic renormalizability of the $XYZ$ CS model\label{sec:renorm}} The commutation relations necessary for writing down the scaling equation for the interaction Eq.~(\ref{cs01b}) are \begin{eqnarray} [c_{m'}^{\dagger}c_{m},c_{m'''}^{\dagger}c_{m''}]=\delta_{mm'''}c_{m'}^{\dagger}c_{m''}-\delta_{m'm''}c_{m'''}^{\dagger}c_{m}. \end{eqnarray} (and the same holds for the Hubbard operators). Substituting the terms in the R.H.S.
of Eq.~(\ref{cs01b}) into the second-order term in Eq.~(\ref{v3sceq}) we get: \noindent for the $J_S^2$ terms \begin{eqnarray} \label{oducti} &&\sum_{m\neq m'}\sum_{m''\neq m'''}[X_{mm'},X_{m''m'''}] [c_{m'}^{\dagger}c_{m},c_{m'''}^{\dagger}c_{m''}]\nonumber\\ &=&\sum_{m\neq m'}\sum_{m''\neq m'''}\left(X_{mm'''}\delta_{m'm''}-X_{m''m'}\delta_{mm'''}\right)\nonumber\\ &\cdot&\left(c_{m'}^{\dagger}c_{m''}\delta_{mm'''}-c_{m'''}^{\dagger}c_{m}\delta_{m'm''}\right)\nonumber\\ &=&2\left\{-(N-2)\sum_{m\neq m'''}X_{mm'''}c_{m'''}^{\dagger}c_{m}\right.\nonumber\\ &-&\left.(N-1)\sum_{m}X_{mm}c_{m}^{\dagger}c_{m}+\sum_{m\neq m'}X_{mm}c_{m'}^{\dagger}c_{m'}\right\}\nonumber\\ &=&2\left\{-(N-2)\sum_{m\neq m'''}X_{mm'''}c_{m'''}^{\dagger}c_{m}\right.\nonumber\\ &-&\left.N\sum_{m}X_{mm}c_{m}^{\dagger}c_{m}+\sum_{mm'}X_{mm}c_{m'}^{\dagger}c_{m'}\right\}; \end{eqnarray} \noindent for the $J_A^2$ terms \begin{eqnarray} \label{cti} &&\sum_{m\neq m'}\sum_{m''\neq m'''}[X_{mm'},X_{m''m'''}] [c_{m}^{\dagger}c_{m'},c_{m''}^{\dagger}c_{m'''}]\nonumber\\ &=&\sum_{m\neq m'}\sum_{m''\neq m'''}\left(X_{mm'''}\delta_{m'm''}-X_{m''m'}\delta_{mm'''}\right)\nonumber\\ &\cdot&\left(c_{m}^{\dagger}c_{m'''}\delta_{m'm''}-c_{m''}^{\dagger}c_{m'}\delta_{mm'''}\right)\nonumber\\ &=&2\left\{(N-2)\sum_{m\neq m'''}X_{mm'''}c_{m}^{\dagger}c_{m'''}\right.\nonumber\\ &+&\left.(N-1)\sum_{m}X_{mm}c_{m}^{\dagger}c_{m}-\sum_{m\neq m'}X_{mm}c_{m'}^{\dagger}c_{m'}\right\}\nonumber\\ &=&2\left\{(N-2)\sum_{m\neq m'''}X_{mm'''}c_{m}^{\dagger}c_{m'''}\right.\nonumber\\ &+&\left.N\sum_{m}X_{mm}c_{m}^{\dagger}c_{m}-\sum_{mm'}X_{mm}c_{m'}^{\dagger}c_{m'}\right\}; \end{eqnarray} \noindent for the $J_SJ_z$ terms \begin{eqnarray} \label{ctib} &&\sum_{m\neq m',m''}[X_{mm'},X_{m''m''}] [c_{m'}^{\dagger}c_{m},c_{m''}^{\dagger}c_{m''}]\nonumber\\ &=&\sum_{m\neq m',m''}\left(X_{mm''}\delta_{m'm''}-X_{m''m}\delta_{mm''}\right)\nonumber\\ 
&\cdot&\left(c_{m'}^{\dagger}c_{m''}\delta_{mm''}-c_{m''}^{\dagger}c_{m}\delta_{m'm''}\right)\nonumber\\ &=&-2\sum_{m\neq m'}X_{mm'}c_{m'}^{\dagger}c_{m}; \end{eqnarray} \noindent for the $J_AJ_z$ terms \begin{eqnarray} \label{tib} &&\sum_{m\neq m',m''}[X_{mm'},X_{m''m''}] [c_{m}^{\dagger}c_{m'},c_{m''}^{\dagger}c_{m''}]\nonumber\\ &=&\sum_{m\neq m',m''}\left(X_{mm''}\delta_{m'm''}-X_{m''m}\delta_{mm''}\right)\nonumber\\ &\cdot&\left(c_{m}^{\dagger}c_{m''}\delta_{m'm''}-c_{m''}^{\dagger}c_{m'}\delta_{mm''}\right)\nonumber\\ &=&2\sum_{m\neq m'}X_{mm'}c_{m}^{\dagger}c_{m'}. \end{eqnarray} Equations (\ref{oducti}) - (\ref{tib}) combined together give us the scaling equation (\ref{calingsc01c}). \section{Attractors and separatrix lines\label{sec:pfaff}} Scaling flows of Eq.~(\ref{calingsc01c}) in the vicinity of the ray described by Eq.~(\ref{s1}) can be presented as \begin{eqnarray} J_S&=&\left(1+c_1t^{\rho}\right)\frac{1}{Nt}\nonumber\\ J_A&=&c_2t^{\rho}\frac{1}{Nt}\nonumber\\ J_z&=&\left(1+c_3t^{\rho}\right)\frac{1}{Nt}, \end{eqnarray} where $t=\ln\Lambda$; the solution blows up when $t\to +0$. Linearizing with respect to small deviations we obtain the matrix equation \begin{eqnarray} \label{rho} KC=\rho C, \end{eqnarray} where $C = (c_1,c_2,c_3)^T$, and the Kovalevskaya matrix \cite{yoshida2,goriely} (KM) $K$ is \begin{eqnarray} \label{ii2} K=\left(\begin{array}{ccc}\frac{2}{N}-1 & 0 & -\frac{2}{N} \\ 0 & \frac{2}{N}+1 & 0 \\ -2 & 0 & 1 \end{array}\right). \end{eqnarray} After elementary algebra we obtain the eigenvalues of the KM, which are called Kovalevskaya exponents (KEs): $\rho_1=-1$, and the doubly degenerate $\rho_2 = 2/N+1$. Thus we have three independent solutions to Eq.~(\ref{rho}). The solution corresponding to $\rho_1$ is irrelevant; it has the form $C=(1,0,1)^T$ and represents just the shift of the ray $J_S=J_z=1/Nt,J_A=0$ along itself. The KE $\rho_2$ gives us two independent solutions, corresponding to $C=(1,0,-N)^T$ and $C=(0,1,0)^T$.
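These statements are easy to cross-check numerically. The sketch below rebuilds the KM, $K_{ij}=\partial f_i/\partial x_j(C)+\delta_{ij}$, by finite differences from the second-order part of Eq.~(\ref{calingsc01c}) at $C=(1/N,0,1/N)$ and tests the quoted exponents ($N=3$ is an arbitrary sample):

```python
# Rebuild the Kovalevskaya matrix of the second-order flow at
# C = (1/N, 0, 1/N) by finite differences, confirm that it reproduces
# Eq. (ii2), and check that rho = -1 and rho = 2/N + 1 are roots of its
# characteristic polynomial.
N, h = 3, 1e-6                       # sample channel number, FD step

def f(x):
    s, a, z = x
    return (-(N - 2) * s * s - 2 * s * z,
            (N - 2) * a * a + 2 * a * z,
            -N * (s * s - a * a))

C = (1.0 / N, 0.0, 1.0 / N)
balance = max(abs(f(C)[i] + C[i]) for i in range(3))   # f_i(C) + c_i = 0

K = [[(f(tuple(C[k] + (h if k == j else 0.0) for k in range(3)))[i] - f(C)[i]) / h
      + (1.0 if i == j else 0.0) for j in range(3)] for i in range(3)]

K_exact = [[2.0 / N - 1.0, 0.0, -2.0 / N],
           [0.0, 2.0 / N + 1.0, 0.0],
           [-2.0, 0.0, 1.0]]
k_dev = max(abs(K[i][j] - K_exact[i][j]) for i in range(3) for j in range(3))

def char_poly(rho):
    # determinant of K_exact - rho * I for a 3x3 matrix
    m = [[K_exact[i][j] - (rho if i == j else 0.0) for j in range(3)]
         for i in range(3)]
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

res1 = char_poly(-1.0)               # Kovalevskaya exponent rho_1 = -1
res2 = char_poly(2.0 / N + 1.0)      # doubly degenerate rho_2 = 2/N + 1
```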
Thus we see that the ray described by Eq.~(\ref{s1}) is an attractor. From the symmetry of Eq.~(\ref{calingsc01c}) it follows that the behavior of solutions in the vicinity of the ray Eq.~(\ref{s2}) is identical to that in the vicinity of the ray Eq.~(\ref{s1}). Calculation of the KEs can be formalized \cite{yoshida2,goriely}. Consider a system of ordinary differential equations \begin{eqnarray} \label{pf1} \frac{dx_i}{dt}=f_i(x_1,\dots,x_n). \end{eqnarray} Consider a solution $C = (c_1,\dots,c_n)^T\neq(0, \dots, 0)^T$ of the algebraic equations \begin{eqnarray} \label{pf2} f_i(c_1,\dots,c_n)+c_i=0. \end{eqnarray} We define the KM \begin{eqnarray} \label{ii} K_{ij}=\frac{\partial f_i}{\partial x_j}(C)+\delta_{ij}. \end{eqnarray} The eigenvalues $\rho_1,\dots,\rho_n$ of the KM are the KEs. For the second-order part of Eq.~(\ref{calingsc01c}) \begin{eqnarray} \label{ii2b} &&K_{ij}=\\ &&\left(\begin{array}{ccc} 1-2(N-2)c_1 -2c_3 & 0 & -2c_1 \\ 0 & 1+2(N-2)c_2+2c_3 & 2c_2 \\ -2Nc_1 & 2Nc_2 & 1 \end{array}\right), \nonumber \end{eqnarray} where $c_1,c_2,c_3$ are the solutions of the equation \begin{eqnarray} \label{str} -(N-2)c_1^2-2c_1c_3+c_1&=&0 \nonumber\\ (N-2)c_2^2+2c_2c_3+c_2&=&0 \\ -N(c_1^2-c_2^2)+c_3&=&0.\nonumber \end{eqnarray} The ray Eq.~(\ref{s1}) corresponds to the solution of Eq.~(\ref{str}): $C=(1/N,0,1/N)$. Substituting the solution into Eq.~(\ref{ii2b}) we recover Eq.~(\ref{ii2}). The ray Eq.~(\ref{s3}) corresponds to the solution of Eq.~(\ref{str}): $C=(-1/2,0,N/4)$. Substituting the solution into Eq.~(\ref{ii2b}) we obtain \begin{eqnarray} \label{ii2c} K_{ij}=\left(\begin{array}{ccc} \frac{N}{2}-1 & 0 & 1 \\ 0 & \frac{N}{2}+1 & 0 \\ N & 0 & 1 \end{array}\right). \end{eqnarray} After elementary algebra we obtain the KEs: $\rho_1=-1$, and the doubly degenerate $\rho_2=N/2+1$. As is always the case, the solution corresponding to $\rho_1$ is irrelevant; it has the form $C=(1,0,-N/2)^T$ and represents just the shift of the ray $J_S=-(2/N)J_z,J_A=0$ along itself.
The positive value of $\rho_2$ means that the ray in question is an attractor. From the symmetry of Eq.~(\ref{calingsc01c}) it follows that the behavior of solutions in the vicinity of the ray Eq.~(\ref{s4}) is identical to that in the vicinity of the ray Eq.~(\ref{s3}). The ray Eq.~(\ref{s5}) corresponds to the solution of Eq.~(\ref{str}): $C=(1/(N-2),-1/(N-2),0)$. Substituting the solution into Eq.~(\ref{ii2b}) we obtain \begin{eqnarray} \label{ii4} K=\left(\begin{array}{ccc} -1 & 0 & -\frac{2}{N-2} \\ 0 & -1 & -\frac{2}{N-2} \\ -\frac{2N}{N-2} & -\frac{2N}{N-2} & 1 \end{array}\right). \end{eqnarray} After elementary algebra we obtain the KEs: $\rho_1=-1$, $\rho_2=-(N+2)/(N-2)$ and $\rho_3=(N+2)/(N-2)$. The solution corresponding to $\rho_1$ represents just a shift of the ray $J_S=-J_A,J_z=0$ along itself. The second negative KE makes the behavior of the flow lines in the vicinity of the ray Eq.~(\ref{s5}) qualitatively different from that in the vicinity of the rays Eqs.~(\ref{s1}) - (\ref{s4}). This means that, in general, the flow lines diverge from the ray. However, the positive value of $\rho_3$ is the manifestation of the fact that in the plane $(N/2)(J_S+J_A)=-J_z$ the ray is an attractor. The calculated KEs allow us to address the question of the existence of first integrals for Eq.~(\ref{calingsc01c}). Notice that the existence of two such independent integrals makes the analytical integration of Eq.~(\ref{scalingsc02}) possible. All the KEs turn out to be rational, a necessary condition for the existence of polynomial first integrals of Eq.~(\ref{calingsc01c}) \cite{yoshida2,goriely}. Though the condition is not sufficient, one may hope that such integrals can be found. \end{appendix}
\section{Introduction} The pipeline architecture has been known for a very long time and is used to increase the throughput of digital blocks\wzcite{Hallin1972880}. This concept was also adopted early on in signal and data processing systems implemented in FPGAs\wzcite{614780}. The pipeline architecture makes it possible to increase the clock frequency, because complex operations that would result in long critical paths in an FPGA are divided into multiple, significantly simpler operations. Each of those operations may be performed in a single clock cycle even at a much higher clock frequency. The time needed to process a single set of data will be the same or even slightly longer, due to the introduction of additional registers. However, the overall throughput of such a system will increase, because in each clock cycle the system can accept a new set of data, while results of processing of a certain previous data set are produced on the output. Of course, such a system introduces a latency of a certain number of clock cycles between the moment when a data set is delivered to the input and the moment when the results of its processing are available on the output. Implementation of algorithms in a pipelined architecture is more complicated when the processing consists of different operations performed in parallel, each of which requires a different number of elementary single-cycle operations. The latency of each operation is then different, and if we want to produce coherent results on the output, we need to add a delay block (typically consisting of shift registers) at the output of the faster block (see Figure~\ref{fig:pipeline-demo1}). \begin{figure}[t] { \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{pipeline-demo1} \end{tabular} \end{center} \caption { \label{fig:pipeline-demo1} An example of a data processing system performing two pipelined operations in parallel.
a) The output data are incoherent: results of operation B are produced four clock periods (4~$T_{CLK}$) before the results of operation A. b) To ensure output data coherency, it was necessary to add a 4~$T_{CLK}$ delay after the operation B block. } } \end{figure} The problem is even more significant when results of such operations are used by another processing block. In this case, the results will be incorrect because operands of the final operation are calculated from input data originating from different datasets (see Figure~\ref{fig:pipeline-demo2}). Again, the solution is to equalize the latencies by introducing an appropriate delay block. \begin{figure}[t] { \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{pipeline-demo2} \end{tabular} \end{center} \caption { \label{fig:pipeline-demo2} An example of a data processing system where results of two operations calculated with different latencies are used as arguments for a third operation. a) Without additional delays, the output data are incorrect, as the arguments for operation C were calculated from different data sets. b) To ensure correct output data, it was necessary to add a 4~$T_{CLK}$ delay after the operation B block. Now both arguments of operation C are derived from the same data set. } } \end{figure} In real applications, the system may contain multiple paths with different latencies, which must be equalized to ensure proper operation. A practical example of such a system is the Overlap Muon Track Finder trigger for the CMS detector at CERN\wzcite{doi:10.1117/12.2073380, omtf-poster}, where multiple data processing operations are performed in parallel, in pipelined blocks, to produce the muon candidates\footnote{In fact, the problems with proper synchronization of data paths in this design were an inspiration to intensify work on the methodology described in this paper.}. Calculation of latencies in different, often quite complex paths may be a tedious task.
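The balancing rule illustrated by the figures is simple arithmetic: every path feeding a common block must be padded up to the latency of the slowest path. A minimal sketch (plain Python, not part of any of the tools discussed later; the latency values are taken from the figures):

```python
def balancing_delays(latencies):
    """For paths with the given latencies (in clock cycles) feeding one
    common block, return the delay each path needs so that all data
    arrive in the same clock cycle."""
    slowest = max(latencies)
    return [slowest - lat for lat in latencies]

# Operation A takes 5 clock cycles, operation B takes 1:
print(balancing_delays([5, 1]))   # -> [0, 4]: a 4 T_CLK delay after B
```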
Unfortunately, this work often has to be repeated, as the latencies may change during the development of the system. The latencies may vary due to modifications of the algorithm, but their change may also be enforced by modifications of other parts of the IP core. For example, when the occupancy of the FPGA increases (e.g., due to the addition of other blocks), the routing of signals becomes more complicated, and to achieve timing closure it may be necessary to increase the number of pipeline stages\wzcite{xlx-ufast-des-met-guide,xlx-timing-closure-user-guide, alt-timing-closure-met}. Therefore, such designs with manually handcrafted latency equalization are difficult to maintain and reuse, and a method providing automatic delay balancing is needed. The proposed method should not impair portability of the design. Therefore, it should be compatible with a possibly broad range of design tools. In particular, the method should be applicable to systems implemented entirely in VHDL. \section{Available solutions} Of course, the problem of latency equalization between paths in pipelined designs is not new. Graphical tools that allow building data or signal processing systems from predefined blocks implementing basic operations addressed that problem more than 14 years ago. Old versions of Xilinx System Generator for Simulink provided the ``sync'' block, whose operation is described as follows: {\em ``The Xilinx Sync Block synchronizes two to four channels of data so that their first valid data samples appear aligned in time with the outputs. The input of each channel is passed through a delay line and then presented at the output port for that channel. The lengths of the delay lines embedded in this block, however, are adaptively chosen at the start of simulation so that the first valid input samples are aligned. Thus, no data appears on any channel until a first valid sample has been received into each channel.''} (\wzcite{xlx-xsg-2.1}, page 47)\label{sec:xlx-xsg-2.1}.
This sync block was later synthesized using hardware shift registers (\wzcite{xlx-acc-dsp-designs-using-fpgas}, slide 22). Modern block-based tools also provide similar functionality. For example, the Altera DSP Builder can automatically add delays in paths with lower latency ``to ensure that all the input data reaches each functional unit in the same cycle''\wzcite{dsp-bld-adv-blkset, url-eetimes-fpga-tool}. No detailed information revealing the implementation details of this methodology has been disclosed, though. The article~\wzcite{6861592} describes the system-level retiming, automatic pipelining and delay balancing (including multi-rate pipelining) implemented in the MathWorks HDL Coder~\wzcite{del-bal-hdl-coder}. The delay balancing algorithm used by the authors depends on the transformation of the design into the Parallel Implementation Representation (PIR), and further analysis of the PIR graph. There are no known tools able to convert generic HDL code into the PIR form, and, therefore, this solution is not suitable for designs implemented in pure VHDL. Finally, the existing solutions have significant disadvantages: \begin{itemize} \item They are available only for systems built in graphical environments from predefined blocks (however, the user may also add his or her own block with the needed functionality). \item They are closed solutions offered for a particular proprietary environment. Therefore, they are not portable. \item Due to their closed source nature, it is not clear how the latency balancing is implemented and whether it can be reused in designs based entirely on an HDL description. The only exceptions are: \begin{itemize} \item The old ``Xilinx Sync Block'', which uses a simulation-based approach whose main concept is described in the accompanying documentation.
The interesting thing, however, is that this block has been removed from newer versions of Xilinx System Generator (see \wzcite{xlx-xsg-9.1}, page 6); \item The algorithm implemented in the MathWorks HDL Coder, which unfortunately utilizes a special intermediate representation of the design. This representation may be created from the Simulink model, but not from arbitrary VHDL sources. \end{itemize} \end{itemize} As the above review shows, if we want a solution applicable at the level of VHDL sources, a new approach is necessary. \section{Latency analysis and equalization in VHDL based designs} The VHDL source fully describes the behavior of the system. Therefore, one could try to find the data path latencies via analysis of the source code. Unfortunately, calculation of the latency introduced by the VHDL code may be extremely difficult. For example, a single sequential process may introduce either a latency of one clock period ($T_{CLK}$) (see Figure~\ref{fig:hdl-latencies-1} a) or a latency of a few clock periods (see Figure~\ref{fig:hdl-latencies-1} b). \begin{figure} \begin{minipage}{\linewidth} { \begin{multicols}{2} \small a)\\ \scriptsize \verbatiminput{code/proc1.vhd} \vfill \columnbreak \small b)\\ \scriptsize \verbatiminput{code/proc2.vhd} \end{multicols} } \end{minipage} \vspace{3mm} \caption{\label{fig:hdl-latencies-1} A simple process introducing different latencies between sig\_in and sig\_out signals. a) The process introduces a latency of $1~T_{CLK}$. b) The process introduces a latency of $3~T_{CLK}$. } \end{figure} When the structural description is used, the latency results from the serial connection of blocks, each introducing its own latency. As those blocks may be implemented in other files, it would be necessary to find a method to propagate information about the introduced latencies from the file which defines the particular block to the one in which this block is instantiated.
The situation is even more complicated when we consider ``if generate'' and ``for generate'' statements. The final conclusion is that it is impossible to derive different data path latencies from the VHDL code without duplicating a significant part of the functionality of a VHDL compiler. It might be possible to add such a feature to an open source compiler like GHDL\wzcite{url-ghdl}, but this is out of the scope of this paper, as it is obviously not a portable solution. \section{Simulation-based analysis and equalization of latencies - simplified approach} An important part of the development of IP cores for FPGA is the preparation of testbenches that allow verifying the correct operation of the design in simulation. Therefore, a method for checking and equalizing data path latencies in a simulation using a dedicated testbench may be an acceptable solution. Such an approach was employed by the ``Xilinx Sync Block'' mentioned in section~\ref{sec:xlx-xsg-2.1}. However, it seems that a method based only on the time of arrival of the first valid data may not be fully reliable. It is desirable that the latencies of different paths are checked or verified during the whole simulation period. The general idea of the proposed method is to supplement (in simulation only) each data set with a time marker (TM) describing the moment (the clock period number) in which these data were delivered to the analyzed system. Therefore, in simulation the system must be equipped with an additional block generating the current TM value. In the simplest implementation, the TM may be just an integer signal, starting from a certain value (e.g., $-1$) and increased every clock pulse\label{sec:time-markers-1st} (a more detailed description of the time markers implementation is available in Section~\ref{sec:time-markers-2nd}). During the processing, the time markers should travel through the system together with the data and results of their processing.
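The idea can be prototyped in a few lines of ordinary Python (a behavioural toy model added for illustration, not the VHDL implementation): each pipeline stage is a shift register that delays the (data, time marker) pair by one clock, so the markers arriving at a joining block directly expose the latency difference of the paths:

```python
from collections import deque

def pipeline(depth):
    # a 'depth'-stage pipeline modelled as a shift register of (data, tm) pairs
    return deque([None] * depth, maxlen=depth)

def clock(pipe, item):
    # one clock edge: the oldest element is shifted out, a new one is shifted in
    out = pipe[0]
    pipe.append(item)
    return out

path_a = pipeline(5)   # e.g. a 5-stage preprocessing pipeline (operation A)
path_b = pipeline(1)   # a 1-stage pipeline working in parallel (operation B)

for tm in range(8):                       # tm doubles as the time marker
    sample = 100 + tm                     # some input data, tagged with tm
    out_a = clock(path_a, (sample, tm))
    out_b = clock(path_b, (sample, tm))
    if out_a and out_b:                   # both paths deliver valid data
        # unequal markers expose the 4-cycle latency mismatch of the paths
        print(f"cycle {tm}: TM_A={out_a[1]} TM_B={out_b[1]}")
```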
Of course, the time markers and all logic associated with their processing must be included only in simulation. In particular, they should not increase the complexity of the synthesized design. Fortunately, most synthesis tools offer the ``\verb|--pragma translate_off|'' and ``\verb|--pragma translate_on|'' metacomments that allow excluding certain fragments of the VHDL code from synthesis. Using them, we can ensure that only the delay blocks needed to balance latencies in parallel paths will be added to the synthesized design. An example of an adder block implementing the described method is shown in Figure~\ref{fig:adder-block1}. \begin{figure}[t] { \begin{center} \begin{tabular}{c} \includegraphics[width=0.85\linewidth]{processing-block} \end{tabular} \end{center} \cprotect\caption { \label{fig:adder-block1} An adder as an example of the processing block implementing the described method. The data entering the system in the simulation are labeled with time markers. The preprocessing of data involves pipelines with different latencies. Finally, the sum of the three values resulting from the preprocessing is calculated. Equality of time markers on the adder inputs is verified, and the same time marker is produced on the output. Grayed objects are used only for simulation. They are excluded from synthesis using the ``\verb|--pragma translate_off|'' and ``\verb|--pragma translate_on|'' metacomments. } } \end{figure} Whenever an operation on two or more subsets of data is performed, the time markers should be checked, and if they are different, it is a symptom of unequal data path latencies. In such a case, the simulation should be aborted. The difference between the time markers should be written to a file and used to correct the design.
The shorter data path (or data paths, if more than two data subsets were used as operands) should be supplemented with an additional delay block with a latency equal to the detected difference (see Figure~\ref{fig:correction1-3inputs}). \begin{figure}[t] { \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{correction1-3ins} \end{tabular} \end{center} \caption { \label{fig:correction1-3inputs} The idea of simulation-based latency correction. During the simulation, in a certain place of the design three data sets: A, B and C are used to perform operation X. The time markers (TM) associated with the data sets are compared and found to be different. Therefore, in the data paths with the highest values of TM, delay blocks are added. The latency of the added delay is equal to the difference between the TM in the particular path and the minimal TM. } } \end{figure} Such a process should be performed iteratively until the design is found to work properly. Unfortunately, this method may require multiple iterations, because each simulation-analysis-correction cycle allows correcting latencies on the input of one block only. A preferable method should provide equalization of all latencies in a single simulation-analysis-correction cycle. \section{Simulation-based latency analysis and correction - improved approach} In the first approach, the simulation was stopped when the first inconsistency of time markers was detected. However, most pipelined systems may work even with misaligned data. Of course, the results will be incorrect, but the system may be further simulated, and time marker differences between input data in other blocks may be analyzed. There is, however, one problem. If the input time markers are equal, the time marker of the result is simply copied from them. However, what should be the output time marker if the input time markers are not equal?
To allow proper analysis of latencies in the rest of the system, we should imitate the appropriate latency equalization. The latency equalization is achieved by introducing delay blocks, which results in a decrease of the time marker. Therefore, to imitate proper equalization, the output time marker should be set to the lowest of the input time markers. Of course, such a situation must be reported, as the processing results will be incorrect because the data are not properly aligned in time. Additionally, the values of the input time markers must be reported, as they will be used to find the proper latencies of the delay blocks in the correction phase. \begin{figure}[t] { \begin{center} \begin{tabular}{c} \includegraphics[width=0.5\linewidth]{processing-block2} \end{tabular} \end{center} \caption { \label{fig:adder-block2} An adder as an example of the processing block implementing the improved method. A sum of the three input values is calculated. Equality of time markers on the input is verified. If they are equal, the same time marker is produced on the output. In case of inequality, we pretend that latencies are properly equalized. As we can only increase the delay by adding delay blocks, the minimal time marker is produced on the output. In that way, from the point of view of time markers, we can imitate the operation of the system after proper latency balancing in this block (of course, the data are still misaligned, and the results are incorrect). } } \end{figure} Using the described method, we can check the latencies of all paths in the system and calculate the delays of all necessary delay blocks in a single simulation-analysis-correction cycle. Certainly, the testbench should also allow testing the properly latency-balanced design at the end.
Therefore, it must offer two modes of operation: \begin{itemize} \item {\bf The analysis mode}, in which time marker inequalities do not cause a simulation failure and in each block the output time marker is set to the smallest of the input time markers \item {\bf The final test mode}, in which any difference between time markers causes a simulation error \end{itemize} \section{Implementation of the proposed method in VHDL} To allow inclusion of the time marker in the processed data, those data should be encapsulated in a record type, with an optional (used only in simulation) time marker field. An example of code implementing such a record type and an adder using this type is shown in Figure~\ref{fig:adder-impl-example}. \begin{figure} \begin{minipage}{\linewidth} { \begin{multicols}{2} \verbatiminput{code/sample1.vhd} \end{multicols} } \end{minipage} \vspace{3mm} \caption{\label{fig:adder-impl-example} An example definition of the type encapsulating the user data and the time marker and of the adder using this type.} \end{figure} If the user had to modify all his or her processing blocks to include the TM handling (as in Figures~\ref{fig:adder-block1} and~\ref{fig:adder-block2}), the proposed method would be very inconvenient. To simplify its adoption, dedicated ``Latency Checking and Equalizing'' blocks (LCEQ) are introduced. The LCEQ block should offer a configurable number of signal paths and should behave in the following way: \begin{itemize} \item In the analysis mode: \begin{itemize} \item Checks the time markers on its inputs, reporting all detected inequalities; additionally, the time marker values should be recorded for further analysis. \item Verifies the time markers on its outputs (after the delay blocks) and, in the case of their inequality, copies the smallest time marker to all outputs (to allow single-cycle analysis, as described previously).
\end{itemize} \item In the final test mode: \begin{itemize} \item Checks the time markers on its outputs and aborts the simulation in the case of any inequality. \end{itemize} \end{itemize} Additionally, it should be possible to configure the latency introduced by the LCEQ block in each path. The block diagram of the proposed LCEQ block is shown in Figure~\ref{fig:lat-check-eq-1}. \begin{figure}[t] { \begin{center} \begin{tabular}{c} \includegraphics[width=0.95\linewidth]{lat-check-equal1} \end{tabular} \end{center} \cprotect\caption { \label{fig:lat-check-eq-1} The block diagram of the proposed latency checking and equalizing block. a) The block works in the analysis mode. b) The block works in the final test mode. } } \end{figure} A possible implementation of such a block in VHDL is shown in Figure~\ref{fig:lateq-sample-impl1}. \begin{figure} \begin{minipage}{\linewidth} { \begin{multicols}{2} \verbatiminput{code/lateq.vhd} \end{multicols} } \end{minipage} \vspace{1mm} \cprotect\caption {\label{fig:lateq-sample-impl1} A sample implementation of the latency checking and equalizing block. The number of equalized paths is configured with the NCHANS generic parameter. Values of input time markers in each clock cycle are reported by the \verb|lateq_report_delay| function. The end of each set is marked by the \verb|lateq_report_end| function. The latency of the delay block in each path is defined by the \verb|lateq_read_delays| function. Comparison of time markers is performed by the \verb|lateq_mrk_cmp| function. } \end{figure} The presented implementation of the LCEQ block may be used only when all paths carry data of the same type. This can be acceptable in some applications, but to ensure maximal flexibility it should be possible to define the type of data in each path independently. Unfortunately, the VHDL language supported by most simulation and synthesis tools does not allow implementing a port which is an array of records of different types.
VHDL-2008\wzcite{Ashenden:1261178} introduces generic types, but even with that feature the necessary functionality is not available. We must also consider the fact that VHDL-2008 is still not fully supported by most simulation and synthesis tools. Therefore, for the more general case with different types of data, another solution is necessary. Instead of providing a fully versatile block, a tool is provided which generates a dedicated LCEQ block for a given number of paths and given types of data.\label{sec:lateqgen-1st} More details are provided in section~\ref{sec:lceq-generation}. \section{Practical implementation of the proposed method} \label{sec:practical-implementation} The first, ``proof of concept'' version of the proposed method has been implemented and is available under the open source BSD license on the OpenCores website\wzcite{url-opencores-lateq}. This project contains two implementations of the proposed method. The first one uses one type for all processed data (directory hdl\_single\_type), and the second one uses different types for different processed data (directory hdl\_various\_types). Both implementations use the same sample data processing system (described in section~\ref{sec:example-system}) as a demonstration example. \subsection{Generation of time markers} \label{sec:time-markers-2nd} As already mentioned in section~\ref{sec:time-markers-1st}, in the simplest implementation one can use just integer numbers as time markers. For example, the $-1$ value may be set as the initial value for all time markers, and then time markers for input data are generated starting from 0 and increasing by 1 after each clock pulse. That allows special handling of uninitialized blocks. Such an implementation has, however, one significant disadvantage. After $2^{31}$ clock pulses, the time marker will reach the maximum value, and an attempt to increase it will generate a simulation error.
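One remedy is to let the time marker wrap around to 0 and compare markers with modular arithmetic; a sketch of the idea in plain Python (the 16-bit width is arbitrary, chosen for brevity, and the scheme assumes that real latency differences are always far smaller than half of the wrap period):

```python
MARKER_BITS = 16            # 16 bits for brevity; a real marker could use 32
WRAP = 1 << MARKER_BITS

def next_tm(tm):
    # the marker returns to 0 instead of overflowing
    return (tm + 1) % WRAP

def tm_sub(a, b):
    """Difference a - b of two wrapped markers, interpreted as a signed
    value; correct as long as |a - b| is much smaller than WRAP/2."""
    d = (a - b) % WRAP
    return d - WRAP if d >= WRAP // 2 else d

# a marker that has wrapped around still compares correctly with an older one:
print(tm_sub(3, WRAP - 2))   # -> 5 (marker 3 is 5 cycles after marker WRAP-2)
```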
For longer simulations, another approach is needed, in which, after reaching the maximum value, the time marker returns to 0. Of course, that solution needs modified implementations of comparison and subtraction of time marker values to handle the ``wrapped'' values properly. \subsection{Reporting of time markers} An essential part of the proposed methodology is the reporting of time markers from different inputs of the LCEQ blocks and delivering them to the program that calculates the latencies of the necessary delay blocks. In the tested implementation, the time markers are simply written to a file. In each clock cycle, the value from each input of each LCEQ block is written to the file in a line containing: \begin{itemize} \item the unique identifier (LEQ\_ID) of the particular LCEQ block \item the number of the input \item the value of the time marker. \end{itemize} After the markers from each input in that clock cycle are reported, yet another line containing only the LEQ\_ID and the word ``end'' is written to the file. Such a solution allows the analysis tool to check whether the latency difference remains constant during the whole simulation. In future implementations, it may be possible to connect the analysis tool directly to the VHDL simulator via named sockets or the VHPI interface. That will eliminate writing a huge amount of data to the disk. Additionally, the parallel operation of the simulator and the analysis tool may reduce the execution time of the simulation-analysis-correction cycle on a multiprocessor machine. \subsection{Calculation of latencies of necessary delay blocks} The latency of each delay block must be known at elaboration time. Therefore, the analysis tool generates a package implementing a function which accepts two parameters: the unique ID of the LCEQ block (LEQ\_ID) and the number of the path in this block. This function returns the required latency as an integer value. The analysis tool (latreadgen.py) is written in Python.
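The core calculation performed by latreadgen.py can be illustrated with a short stand-alone sketch (the report lines below are made up; the real tool additionally checks that the differences stay constant over the whole simulation and handles the initialization phase):

```python
def required_delays(report_lines):
    """Compute, for each LCEQ block, the extra delay needed in each path:
    the difference between that path's time marker and the smallest
    marker reported in the same clock cycle."""
    markers = {}                      # LEQ_ID -> {input number: time marker}
    delays = {}                       # LEQ_ID -> {input number: needed delay}
    for line in report_lines:
        leq_id, field, *rest = line.split()
        if field == "end":            # all inputs of this block reported
            tms = markers.pop(leq_id)
            lowest = min(tms.values())
            delays[leq_id] = {i: tm - lowest for i, tm in tms.items()}
        else:
            markers.setdefault(leq_id, {})[int(field)] = int(rest[0])
    return delays

report = [                            # one simulated clock cycle (made-up data)
    "ex1_eq_mf 0 11",
    "ex1_eq_mf 1 15",
    "ex1_eq_mf end",
]
print(required_delays(report))        # -> {'ex1_eq_mf': {0: 0, 1: 4}}
```

Input 1 carries the higher (fresher) marker, so it is the path that receives the additional 4-cycle delay, as described above.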
Its calling syntax is as follows: \begin{verbatim} latreadgen.py /file/with_time_markers package_file package_name function_name \end{verbatim} An example call, as used in the demonstration project makefile, is: \begin{verbatim} latreadgen.py /tmp/latrep.txt lateq_read_pkg.vhd lateq_read_pkg lateq_read_delays \end{verbatim} The analysis tool reads the time markers reported in each clock cycle, checks whether their differences remain constant during the whole simulation (except the initialization phase, when at least one of the time markers still has the initial value), and calculates the needed additional delay as the difference between the time marker on the input of the particular path and the lowest time marker. An example of the generated latency configuration function is shown in Figure~\ref{fig:read-lat-ex}. \begin{figure} \begin{minipage}{\linewidth} { \begin{multicols}{2} \verbatiminput{code/lat_read.vhd} \end{multicols} } \end{minipage} \vspace{3mm} \caption{\label{fig:read-lat-ex} An example of the generated function returning the calculated latency of each delay block in each LCEQ block. The function is generated by the latreadgen.py tool from the recorded time marker reports. } \end{figure} This approach has the additional advantage that the final sources with properly balanced latencies (which may be used for synthesis) contain only standard VHDL files. \subsection{Unique identifiers of LCEQ blocks} Both the reporting of time markers and the configuration of latencies of delay blocks require that each LCEQ block in the design has a unique identifier. It must be the same during simulation and synthesis. Theoretically, VHDL offers the INSTANCE\_NAME attribute, which should uniquely identify each instance of each component used in the design. Unfortunately, tests have shown that each simulation or synthesis tool may use a slightly different format of the generated identifier.
Additionally, during the simulation the system is instantiated in the testbench, while during the synthesis it may be either a top entity or a component of a bigger system. That also leads to different INSTANCE\_NAME values during simulation and synthesis. To work around those problems, the LCEQ block is equipped with a generic LEQ\_ID of string type. This generic should be set to the unique LCEQ identifier during the instantiation of the block. If this block is instantiated inside another block, then this ``container'' block should also be equipped with its own unique ID. In such a case, during the instantiation of the internal LCEQ block, its LEQ\_ID should be set to the value: \verb|"ID_of_container_block:ID_of_LCEQ_block"| If the block is instantiated in a for-generate loop, the loop variable should be converted to a string (using the \verb|integer'image| function) and concatenated to the ID of the instantiated block. Such an approach allows creating unique identifiers portable between different simulation and synthesis tools. \subsection{Generation of the dedicated LCEQ blocks for different types of data} \label{sec:lceq-generation} As mentioned in section~\ref{sec:lateqgen-1st}, if the paths analyzed and equalized by the LCEQ blocks do not use the same type of data, it is necessary to generate the source code of a specialized LCEQ block for each combination of data types. The demonstration implementation provides such a tool, named ``lateqgen.py'', which should be called with the following arguments: \begin{itemize} \item Entity name of the generated block. \item Path to the file in which the sources of the block are to be generated. \item List of the types of data used in consecutive data paths of the created block. The length of this list defines the number of data paths in the block.
\end{itemize} For example, to generate the lceq1.vhd file with the sources of the entity lceq1, implementing an LCEQ block with four paths, where the first two handle data of type \verb|T_VOLTAGE|, the third one uses data of type \verb|T_WIDTH|, and the fourth one \verb|T_POSITION|, the user should call the tool as: \verb|lateqgen.py lceq1 lceq1.vhd T_VOLTAGE T_VOLTAGE T_WIDTH T_POSITION| Due to the way the code is generated, there are some limitations on the names of the data types handled by the generated LCEQ blocks. The name of each type should start with \verb|T_|. Additionally, for each such type the user should define a constant providing the initial value of signals of that type. The name of that constant must be derived from the name of the type by replacing the initial \verb|T_| with \verb|C_| and by adding \verb|_INIT| at the end. \section{Results} \label{sec:example-system} To verify the proposed methodology, an example data processing system has been included in the sources~\wzcite{url-opencores-lateq}. \subsection{Test data processing system} The system receives data from ADC converters connected to $M$ readout channels of a particle detector. The voltage level in each channel is proportional to the amount of charge received by that channel in the previous clock period. A particle passing through the detector generates a certain charge, which is distributed between neighbouring channels. The amount of this charge is proportional to the particle's energy, and the center of gravity of the collected charge defines the position of the hit. In each clock cycle, the system finds the number of the channel with the highest signal level, $N_{max}$. This value is treated as the non-interpolated position of the hit: $X=N_{max}$. Then the system selects the signals from this channel and from $K$ neighbouring channels on each side: $V_i$ for $N_{max}-K \le i \le N_{max}+K$.
Next, the system calculates the sum of charges (based on the proportionality between the charge and the voltage): $$S=\sum_{i=N_{max}-K}^{N_{max}+K}{V_i}$$ and the weighted sum of charges: $$S_{W}=\sum_{i=N_{max}-K}^{N_{max}+K}{i \cdot V_i}$$ The calculated values are transmitted to the external system (in the simulation, to the testbench), which calculates the center of gravity of the charge and, finally, the interpolated position of the particle hit: $$X=N_{max}+\frac{S_{W}}{S}$$ The block diagram of the example system is shown in Figure~\ref{fig:sample-sys1}. Please note that this block is not of production quality. For example, it may incorrectly handle the situation where the maximum signal is too near to the edge of the detector (i.e. $N_{max}<K$ or $N_{max}>M-1-K$). \begin{figure}[t] { \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{sample-sys1} \end{tabular} \end{center} \caption { \label{fig:sample-sys1} Block diagram of the example system using the same data type in all paths. } } \end{figure} \begin{figure}[t] { \begin{center} \begin{tabular}{c} \includegraphics[width=0.9\linewidth]{sample-sys2} \end{tabular} \end{center} \caption { \label{fig:sample-sys2} Block diagram of the example system using various types in different data paths. The most important type names are written in the figure. The path numbers are also shown at the inputs of the LCEQ blocks (they are referred to in the section describing the results). } } \end{figure} The latency of different paths in the example system may be modified by adjusting certain parameters in the \verb|ex1_pkg.vhd| and \verb|ex1_trees_pkg.vhd| files. Finding the maximum value is performed in a multi-level tree-based comparator consisting of a certain number of basic comparators. The number of inputs of each basic comparator should be chosen depending on the hardware features of the particular FPGA device and the required speed of operation.
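The interpolation described above can be sketched in Python. This is a toy model, not code from the demonstration system: it assumes the weighted sum is taken over channel offsets relative to $N_{max}$, so that $X=N_{max}+S_W/S$ yields an absolute position, and, like the example block, it does not handle hits near the detector edge:

```python
def interpolate_hit(voltages, k):
    """Toy model of the hit-position interpolation: find the channel with the
    maximum signal, sum the signals in a window of +/- k channels, and return
    X = N_max + S_W / S, with S_W weighted by the channel offset from N_max."""
    n_max = max(range(len(voltages)), key=lambda i: voltages[i])
    window = range(n_max - k, n_max + k + 1)  # assumes n_max is not near an edge
    s = sum(voltages[i] for i in window)                   # sum of charges S
    s_w = sum((i - n_max) * voltages[i] for i in window)   # weighted sum S_W
    return n_max + s_w / s
```

For a symmetric charge distribution the correction term vanishes, e.g. \verb|interpolate_hit([0, 1, 2, 1, 0], 1)| returns \verb|2.0|.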
Each level of the comparator is also equipped with a pipeline register. Therefore, the total latency of the whole ``Maximum Value Finder'' block depends on the number of inputs in the entire system (parameter \verb|C_N_CHANNELS| in \verb|ex1_pkg.vhd|) and also on the number of inputs in a single basic comparator (parameter \verb|EX1_NOF_INS_IN_CMP| in \verb|ex1_trees_pkg.vhd|). Similarly, the adders calculating the sum of charge and the weighted sum of charge have a multilevel tree-based structure, and again their latency depends on the number of channels selected for those calculations (parameter \verb|C_N_SIDE_CHANS| in \verb|ex1_pkg.vhd|) and the number of inputs in a basic adder (parameter \verb|EX1_NOF_INS_IN_ADD| in \verb|ex1_trees_pkg.vhd|). There are two implementations of the demonstration system. The first one, located in the \verb|hdl_single_type| directory, uses one type \verb|T_USER_DATA_MRK| in all paths in the system. That makes it possible to avoid using generated LCEQ blocks, but requires additional effort to find a common representation for different data (the input signal, the sum of charges, the position of the maximum, etc.). The second implementation, located in the \verb|hdl_various_types| directory, shows how to use the proposed methodology with different types, individually suited for the different kinds of information processed in the system. Therefore the LCEQ blocks are generated as follows: \verb|lateqgen.py ex1_eq_mf ex1_eq_mf.vhd T_INPUT_DATA_MRK T_POS_INT_MRK| \verb|lateqgen.py ex1_eq_calc ex1_eq_calc.vhd T_POS_INT_MRK \|\\ \verb| T_CALC_DATA_MRK T_CALC_DATA_MRK| The provided sample implementation is licensed under the BSD license, so it may be used not only to verify and investigate the proposed methodology, but also as a starting point for its adoption in the user's own projects.
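The dependence of the tree latencies on the parameters described above, and the additional delays needed to balance the paths, can be sketched in Python. This is a simplified illustration (assuming one pipeline register per tree level), not the actual analysis code used by the method:

```python
import math


def tree_levels(n_inputs, fan_in):
    """Number of levels (and hence pipeline registers) in a tree built from
    basic blocks with fan_in inputs each, e.g. the comparator or adder trees."""
    levels = 0
    while n_inputs > 1:
        n_inputs = math.ceil(n_inputs / fan_in)
        levels += 1
    return levels


def balancing_delays(path_latencies):
    """Additional delay for each path so that every path matches the slowest
    one -- the kind of quantity the latency analysis computes for LCEQ blocks."""
    slowest = max(path_latencies)
    return [slowest - latency for latency in path_latencies]
```

With 64 channels and 3-input basic comparators, \verb|tree_levels(64, 3)| gives 4 pipeline stages; balancing two paths with latencies 6 and 2 requires additional delays of 0 and 4 clock cycles, respectively.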
\subsection{Tests of the proposed method} In the described parameterized implementation of the test system, each change of its parameters may result in a change of the latency of the corresponding paths. Without the described method, these latencies would afterwards have to be manually balanced by the user. Thanks to the proposed method, the user may perform automatic equalization of latencies. During the tests, the parameters described in the previous subsection were changed, and the additional latencies calculated by the proposed method were checked. The correct operation of the system was also verified, using simulated hit data in the testbench. The obtained results are presented in Table~\ref{tab:results-latencies}. In all cases, the correct operation of the system after latency balancing was confirmed. \begin{table}[tp] \caption[results] { \label{tab:results-latencies} The results of latency adjustments for different values of the parameters of the test system. The upper part of the table shows the parameter values for the different test cases, the lower part - the values of the additional latencies calculated by the method. The path numbers are defined in the sources and shown in Figure~\ref{fig:sample-sys2}. In all cases, the correct operation of the system after latency balancing was confirmed. Only results for the implementation with various data types are shown, as the number of paths in the LCEQ1 block for the single-type implementation is very high.
} \begin{center} {\small \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{Parameter name}} & \multicolumn{7}{c|}{Test case}\\ \cline{3-9} \multicolumn{2}{|c|}{} & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline \multicolumn{2}{|c|}{C\_N\_CHANNELS} & 64 & 64 & 32 & 32 & 64 & 64 & 64 \\ \multicolumn{2}{|c|}{C\_N\_SIDE\_CHANS} & 3 & 3 & 3 & 3 & 5 & 5 & 5 \\ \multicolumn{2}{|c|}{EX1\_NOF\_INS\_IN\_CMP} & 3 & 3 & 2 & 2 & 2 & 3 & 3 \\ \multicolumn{2}{|c|}{EX1\_NOF\_INS\_IN\_ADD} & 3 & 2 & 3 & 2 & 2 & 2 & 3 \\ \hline \hline LCEQ block & Path & \multicolumn{7}{c|}{Calculated additional latency}\\ \hline \multirow{2}{*}{LCEQ1} & 0 & 4 & 4 & 5 & 5 & 6 & 4 & 4 \\ & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \multirow{3}{*}{LCEQ2} & 0 & 4 & 5 & 4 & 5 & 6 & 6 & 5 \\ & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \hline \end{tabular} } \end{center} \end{table} To allow the user to verify the presented results, and to perform experiments with a modified or their own design, a dedicated makefile is provided. To run the provided demonstration, the user must have Python version 3\wzcite{url-python}, the GHDL simulator\wzcite{url-ghdl} and the GTKWave viewer\wzcite{url-gtkwave} installed. The test makefile defines a few targets: \begin{itemize} \item {\bf make clean} - removes the compiled files and simulation results. \item {\bf make initial} - generates the initial version of the latency configuration function, which sets the latency to 0 in all paths of all LCEQ blocks. \item {\bf make final} - performs simulation in the ``final test'' mode. If latencies are not properly balanced, one should expect error messages about unequal latencies\\ (e.g.: \verb|EQ1 inequal latencies: out0=0, out1=-1|). \item {\bf make synchro} - performs the simulation-analysis-correction cycle. After this command, the latencies should be properly equalized, and further running of ``make final'' should not report any errors.
In fact, the testbench should also report two correctly analyzed particle hits, like below:\\ Hit with charge: 2.5e2 at 1.476e1 \\ Hit with charge: 2.65e2 at 2.549e1 \item {\bf make reader} - starts the GTKWave viewer, allowing the user to see the values of the signals in the demonstration system during the last simulation. This target may be used to analyze the internals of the system. \end{itemize} \subsection{Tests of synthesizability of the generated sources} The sources generated by the test makefile with the ``synchro'' target have been successfully synthesized with the Xilinx Vivado\wzcite{url-xlx-vivado} tools. The blocks related to time marker generation and checking have been correctly removed from the synthesized design, and only the additional delay blocks have been inserted. Due to the high number of pins, the xc7vx690tffg1930 Virtex 7 chip was selected for implementation. \section{Conclusions} The method presented in this paper is a solution to the important problem of equalization of latencies between parallel paths in complex pipelined data processing systems implemented in FPGAs. The method extends the concept of the simulation-based pipeline delay balancing method offered by the ``sync'' block in early versions of the Xilinx System Generator for the Simulink environment. The solution described in the paper is suitable for systems implemented entirely in VHDL, and should be compatible with all recent simulation and synthesis tools. The simulation of the designed subsystem makes it possible to calculate the latencies of the necessary additional delay blocks in a single simulation-analysis-correction cycle. The method makes it possible to equalize latency between paths carrying data of different types, which is crucial in complex systems. To achieve that, dedicated tools have been written in Python 3 to overcome limitations of the VHDL language and to generate the source code of the necessary blocks.
The results of latency equalization are implemented in a standard VHDL package with a function defining the latencies of all added delay blocks. The sources of the first ``proof of concept'' implementation of the proposed methodology are published on the OpenCores website\wzcite{url-opencores-lateq}, under the BSD license. The correctness of the method has been verified with the complete example data processing system included in the sources. Further improvements of the proposed method should focus on optimization of the communication between the simulator and the latency analysis tool. Using named sockets or the VHPI interface could significantly improve the simulation and analysis speed. Even in its current state, however, the proposed method may be a useful tool for the design and maintenance of complex pipelined IP cores implemented in VHDL.
\section{Discussion and Future Directions}\label{sec:discussion} Over the past two decades, the field of {\sc nlg} has advanced considerably, and many of these recent advances have not been covered in a comprehensive survey yet. This paper has sought to address this gap, with the following goals: \begin{enumerate} \item to give an update of the core tasks and architectures in the field, with an emphasis on recent data-driven techniques; \item to briefly highlight recent developments in relatively new areas, including vision-to-text generation and the generation of stylistically varied, engaging or creative texts; and \item to extensively discuss the problems and prospects of evaluating {\sc nlg} applications. \end{enumerate} Throughout this survey, various general, related themes have emerged. Probably the central theme has been the gradual shift away from traditional, rule-based approaches to statistical, data-driven ones, which, of course, has been taking place in {\sc ai} in general. In {\sc nlg}, this has had substantial impact on how individual tasks are approached (e.g., moving away from domain-dependent to more general, domain-independent approaches, relying on available data instead) as well as on how tasks are combined in different architectures (e.g., moving away from modular towards more integrated approaches). The trade-off between output quality of the generated text and the efficiency and robustness of an approach is becoming a central issue: data-driven approaches are arguably more efficient than rule-based approaches, but the output quality may be compromised, for reasons we have discussed. Another important theme has been the increased interplay between core {\sc nlg} research and other disciplines, such as computer vision (in the case of vision-to-text) and computational creativity research (in the case of creative language use).
At the conclusion of this comprehensive survey of the state of the art in {\sc nlg}, and given the fast pace at which developments occur both in industry and academia, we feel it is useful to point to some potential future directions, as well as to raise a number of questions which recent research has brought to the fore. \subsection{Why (and How) Should NLG be Used?} Towards the beginning of their influential survey on {\sc nlg}, \citeA{Reiter2000} recommended to the developer that she pose this question before embarking on the design and implementation of a system. Can {\sc nlg} really help in the target domain? Does a cheaper, more standard solution exist and would it work just as well? From the perspective of an engineer or a company, these are obviously relevant questions. As recent industry-based applications of {\sc nlg} show, this technology is typically valuable whenever information that needs to be presented to users is relatively voluminous, and comes in a form which is not easily consumed and does not afford a straightforward mapping to a more user-friendly modality without considerable transformation. This is arguably where {\sc nlg} comes into its own, offering a battery of techniques to select, structure and present the information. However, the question whether {\sc nlg} is worth using in a specific setting should also be accompanied by the question of {\em how} it should be used. Our survey has focussed on techniques for the generation of text, but text is not always presented in isolation. Other important dimensions include document structure and layout, an under-studied problem \cite<but see>{Power2003}. They also include the role of graphics in text, an area where there is the potential for further interaction between the {\sc nlg} and visualisation communities, addressing such questions as which information should be rendered textually and which can be made more accessible in a graphical modality \cite<e.g.,>{demir2012}. 
These questions are of great relevance in some domains, especially those where accurate information delivery is a precursor to decision-making in fault-critical situations \cite<for some examples, see>{Elting1999,Law2005,Meulen2007}. \subsection{Does NLG Include Text-to-Text?} In our introductory section, we distinguished text-to-text generation from data-to-text generation; this survey has focussed primarily on the latter. The two areas have distinguishing characteristics, not least the fact that {\sc nlg} inputs tend to vary widely, as do the goals of {\sc nlg} systems as a function of the domain under consideration. In contrast, the input in text-to-text generation, especially Automatic Summarisation, is comparatively homogeneous, and while its goals can vary widely, the field has also been successful at defining tasks and datasets (for instance, through the {\sc duc} shared tasks), which have set the standard for subsequent research. Yet, a closer look at the two types of generation will show more scope for convergence than the above characterisation suggests. To begin with, if {\sc nlg} is concerned with going from data to text, then surely textual input should be considered as one out of a broad variety of forms in which input data might be presented. Some recent work, such as that of \citeA{Kondadadi2013} (discussed in Section \ref{sec:datadriven}) and \citeA{Mcintyre2009} (discussed in Section \ref{sec:creativity}) has explicitly focussed on leveraging such data to generate coherent text. Other approaches to {\sc nlg}, including some systems that conform to a standard, modular, data-to-text architecture \cite<e.g.,>{Hunter2012}, have had to deal with text as one out of a variety of input types, albeit using very simple techniques. Generation from heterogeneous inputs which include text as one type of data is a promising research direction, especially in view of the large quantities of textual data available, often accompanied by numbers or images.
\subsection{Theories and Models in Search of Applications?} In their overview of the status of evaluation in {\sc nlg} in the late 1990s, \citeA{Mellish1998a} discussed, among the possible ways of evaluating a system, its theoretical underpinnings and in particular whether the theoretical model underlying an {\sc nlg} system or one of its components is adequate to the task and can generalise to new domains. Rather than evaluating an {\sc nlg} system as such, this question targets the theory itself, and suggests that we view {\sc nlg} as a potential testbed for such theories or models. But what are the theories that underlie {\sc nlg}? The prominence of theoretical models in {\sc nlg} tends to depend on the task under consideration. For instance, many approaches to realisation discussed in Section \ref{sec:lr} are based on a specific theory of syntactic structure; research on {\sc reg} has often been based on insights from pragmatic theory, especially the Gricean maxims \cite{Grice1975}; and much research on text structuring has been inspired by Rhetorical Structure Theory \cite{Mann1988}. Relatively novel takes on various sentence planning tasks -- especially those concerned with style, affect and personality -- tend to have a theoretical inspiration, in the form of a model of personality \cite{John1999} or a theory of politeness \cite{BrownLevinson1987}, for example. More often than not, such theories are leveraged in the process of formalising a particular problem to achieve a tractable solution. Treating their implementation in an {\sc nlg} system as an explicit test of the theory, as \citeA{Mellish1998a} seem to suggest, happens far less often. This is perhaps a reflection of a division between `engineering-oriented' and `theoretically-oriented' perspectives in the field: the former perspective emphasises workable solutions, robustness and output quality; the latter emphasises theoretical soundness, cognitive plausibility and so forth.
However, the theory/engineering dichotomy is arguably a false one. While the goal of {\sc nlg} research is often different from, say, that of cognitive modelling (for example, few {\sc nlg} systems seek to model production errors explicitly), it is also true that theory-driven implementations are themselves worthy contributions to theoretical work. Recently, some authors have argued that {\sc nlg} practitioners should pay closer attention to theoretical and cognitive models. The reasons marshalled in favour of this argument are twofold. First, psycholinguistic results and theoretical models can actually help to improve implemented systems, as \citeA{Rajkumar2014} show for the case of realisation. Second, as argued for example by \citeA{VanDeemter2012a}, theoretical models can benefit from the formal precision that is the bread-and-butter of computational linguistic research; a concrete case in point in {\sc nlp} is provided by \citeA{Poesio2004}, whose implementation of Centering Theory \cite{Grosz1995} shed light on a number of underspecified parameters in the original model and subsequent modifications of it. Our argument here is that {\sc nlg} has provided a wealth of theoretical insights which should not be lost to the broader research community; similarly, {\sc nlg} researchers would undoubtedly benefit from an awareness of recent developments in theoretical and experimental work. \subsection{Where do We Go from Here?} Finally, we conclude with some speculations on further directions for future research for which the time seems ripe. Within the field of Natural Language Processing as a whole, a remarkable recent development is the explosion of interest in social media, including online blogs, micro-blogs such as Twitter feeds, and social platforms such as Facebook. In one respect, interest in social media could be seen as a natural extension of long-standing topics in {\sc nlp}, including the desire to deal with language `in the wild'.
However, social media data has given more impetus to the exploration of non-canonical language \cite<e.g.>{Eisenstein2013}; the impact of social and demographic factors on language use \cite<e.g.>{Hovy2015,Johannsen2015}; the prevalence of paralinguistic features such as affect, irony and humour \cite{Pang2008,Lukin2013}; and other variables such as personality \cite<e.g.>{Oberlander2006,Farnadi2013,Schwartz2013a}. Social media feeds are also important data streams for the identification of topical and trending events \cite<see>[for a recent review]{Atefeh2015}. There is as yet little work on generating textual or multimedia summaries of such data \cite<but see, for example,>{Wang2014} or generating text in social media contexts \cite<exceptions include>{Ritter2011,Cagan2014}. Since much of social media text is subjective and opinionated, an increased interest in social media on the part of {\sc nlg} researchers may also give new impetus to research on the impact of style, personality and affect on textual variation (discussed in Section \ref{sec:style}), and on non-literal language (including some of the phenomena discussed in Section \ref{sec:creativity}). A second potential growth area for {\sc nlg} is situated language generation. The term {\em situated} is usually taken to refer to language use in physical or virtual environments where production choices explicitly take into account perceptual and physical properties. Research on situated language processing has advanced significantly in the past several years, with frameworks for language production and understanding in virtual contexts \cite<e.g.,>{Kelleher2005}, as well as a number of contributions within {\sc nlg}, especially for the generation of language in interactive environments \cite{Kelleher2006,Stoia2006,Garoufi2013,Dethlefs2015}. The popular {\sc give} Challenge added further impetus to this research \cite{Striegnitz2011}. 
Clearly, this work is also linked to the enterprise of grounding generated language in the perceptual world, of which the research discussed in Section \ref{sec:image} constitutes one of the current trends. However, there are many fields where situatedness is key, in which {\sc nlg} can still make novel contributions. One of these is gaming. With the exception of a few endeavours to enhance the variety of linguistic expressions used in virtual environments \cite<e.g.,>{Orkin2007}, {\sc nlg} technology is relatively unrepresented in research on games, despite significant progress on dynamic content generation in game environments \cite<e.g.,>{Togelius2011}. This may be due to the perception that linguistic interaction in games is predictable and can rely on `canned' text. However, with the growing influence of gamification as a strategy for enhancing a variety of activities beyond entertainment, such as pedagogy, as well as the development of sophisticated planning techniques for varying the way in which game worlds unfold on the fly, the assumption of predictability where language use is concerned may well be up for revision. Third, there is a growing interest in applying {\sc nlg} techniques to generation from structured knowledge bases and ontologies \cite<e.g.>[some of which were briefly discussed in Section \ref{sec:parsing}]{Ell2012,Duma2013,Gyawali2014,Mrabet2016,Sleimi2016}. The availability of knowledge bases such as {\sc db}pedia, or folksonomies such as Freebase, not only constitute input sources in their own right, but also open up the possibility of exploring alignments between structured inputs and text in a broader variety of domains than has hitherto been the case.
Finally, while there has been a significant shift in the past few years towards data-driven techniques in {\sc nlg}, many of these have not been tested in commercial or real-world applications, despite the growth in commercialisation of text generation services noted in the introductory section. Typically, the arguments for rule-based systems in commercial scenarios, or in cases where input is high-volume and heterogeneous, are that (1) their output is easier to control for target systems; or (2) that data is in any case unavailable in a given domain, rendering the use of statistical techniques moot; or (3) data-driven systems have not been shown to be able to scale up beyond experimental scenarios \cite<some of these arguments are made, for instance, by>{Harris2008}. A response to the first point depends on the availability of techniques which enable the developer to `look under the hood' and understand the statistical relationships learned by a model. Such techniques are, for example, being developed to investigate or visualise the representations learned by deep neural networks. The second point calls for more investment in research on data acquisition and data-text alignment. Techniques for generation which rely on less precise alignments between data and text are also a promising future direction. Finally, scalability remains an open challenge. Many of the systems we have discussed have been developed within research environments, where the aim is of course to push the frontiers of {\sc nlg} and demonstrate feasibility or correctness of novel approaches. While in some cases, research on data-to-text has addressed large-scale problems -- notably in some of the systems that summarise numerical data -- a greater concern with scalability would also focus researchers' attention on issues such as the time and resources required to collect data and train a system and the efficiency of the algorithms being deployed. 
Clearly, developments in hardware will alleviate these problems, as has happened with some statistical methods that have recently become more feasible. \section{Conclusion} Recent years have seen a marked increase in interest in automatic text generation. Companies now offer {\sc nlg} technology for a range of applications in domains such as journalism, weather, and finance. The huge increase in available data and computing power, as well as rapid developments in machine-learning, have created many new possibilities and motivated {\sc nlg} researchers to explore a number of new applications, related to, for instance, image-to-text generation, while applications related to social media seem to be just around the corner, as witnessed, for instance, by the emergence of {\sc nlg}-related techniques for automatic content-creation as well as {\sc nlg} for Twitter and chatbots \cite<e.g.,>{Dale2016}. With developments occurring at a steady pace, and the technology also finding its way into industrial applications, the future of the field seems bright. In our view, research in {\sc nlg} should be further strengthened by more collaboration with kindred disciplines. It is our hope that this survey will serve to highlight some of the potential avenues for such multi-disciplinary work. \section{NLG Tasks}\label{sec:tasks} Traditionally, the {\sc nlg} problem of converting input data into output text was addressed by splitting it up into a number of subproblems.
The following six are frequently found in many {\sc nlg} systems \cite{Reiter1997,Reiter2000}; their role is illustrated in Figure \ref{fig:tasks-example}: \begin{enumerate} \item {\it Content determination}\/: Deciding which information to include in the text under construction, \item {\it Text structuring}\/: Determining in which order information will be presented in the text, \item {\it Sentence aggregation}\/: Deciding which information to present in individual sentences, \item {\it Lexicalisation}\/: Finding the right words and phrases to express information, \item {\it Referring expression generation}\/: Selecting the words and phrases to identify domain objects, \item {\it Linguistic realisation}\/: Combining all words and phrases into well-formed sentences. \end{enumerate} \begin{figure}[!t] \makebox[\linewidth][l] {%
\begin{subfigure}[b]{0.4\textwidth} \includegraphics[width=\textwidth]{img/tasks-cd} \caption{Content Determination } \label{fig:cd} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \begin{minipage}{\textwidth} \begin{tikzpicture} \Tree [.{\sc tsequence} [.\parbox{2.5cm}{{\sc bradycardia} (17:01:15)} ] [.\parbox{2.5cm}{{\sc bradycardia} (17:03:57)} ] [.\parbox{2.5cm}{{\sc bradycardia} (17:06:03)} ] ] \end{tikzpicture} \end{minipage} \caption{Text Structuring } \label{fig:ts} \end{subfigure} }\\\\ \makebox[\linewidth][l] {%
\begin{subfigure}[b]{0.4\textwidth} \begin{minipage}{\textwidth} $\begin{bmatrix} \textit{Event} & \\ \textsc{type} & \textit{existential}\\ \textsc{pred} & \textit{be}\\ \textsc{tense} & \textit{past}\\ \textsc{args} & \begin{bmatrix}\textsc{theme} & \{b_1,b_2,b_3\} \\ \textsc{min-val} & 69\end{bmatrix}\\ \end{bmatrix}$ \end{minipage} \caption{Lexicalisation etc.} \label{fig:agg} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \begin{minipage}{\textwidth} \begin{tikzpicture} \Tree [.S [.PRO [{\textit{there}} ]] [.VP$_{\text{past}}$ [.V [{\textit{be}} ] ] [.NP$_{\text{pl}}$ \edge[roof]; {\em three successive
bradycardias} ] [.PP \edge[roof]; {\em down to 69} ] ] ] \end{tikzpicture} \end{minipage} \caption{Realisation} \label{fig:lr} \end{subfigure} \caption{Tasks in {\sc nlg}, illustrated with a simplified example from the neonatal intensive care domain. First the system has to decide what the important events are in the data (a, content determination), in this case, occurrences of low heart rate (bradycardias). Then it has to decide in which order it wants to present data to the reader (b, text structuring) and how to express these in individual sentence plans (c, aggregation, lexicalisation, reference). Finally, the resulting sentences are generated (d, linguistic realisation).} \label{fig:tasks-example} \end{figure} These tasks can be thought of as ranging from `early' decision processes (which information to convey to the reader?) to `late' ones (which words to use in a particular sentence, and how to put them in their correct order?). Here, we refer to `early' and `late' tasks by way of distinguishing between choices that are more oriented towards the data (such as what to say) and choices that are of an increasingly linguistic nature (e.g., lexicalisation, or realisation). This characterisation reflects a long-running distinction in {\sc nlg} between {\em strategy} and {\em tactics} \cite<a distinction that goes back at least to>{Thompson1977}. This distinction also suggests a temporal order in which the tasks are executed, at least in systems with a modular, pipeline architecture (discussed in Section \ref{sec:modular}): for example, the system first needs to decide which input data to express in the text, before it can order information for presentation. However, such ordering of modules is nowadays increasingly put into question in the data-driven architectures discussed below (Section \ref{sec:architectures}). In this section, we briefly describe these six tasks, illustrating them with examples, and highlight recent developments in each case.
As we shall see, while the `early' tasks are crucial for the development of {\sc nlg} systems, they are often intimately connected to the specific application. By contrast, `late' tasks are more often investigated independently of an application, and hence have resulted in approaches that can be shared between applications. \subsection{Content Determination}\label{sec:content-det} As a first step in the generation process, the {\sc nlg} system needs to decide which information should be included in the text under construction, and which should not. Typically, more information is contained in data than we want to convey through text, or the data is more detailed than we care to express in text. This is clear in Figure \ref{fig:cd}, where the input signal -- a patient's heart rate -- only contains a few patterns of interest. Selection may also depend on the target audience (e.g. does it consist of {\it experts}\/ or {\it novices}\/?) and on the overall communicative intention (e.g. should the text {\it inform}\/ the reader or {\it convince}\/ him to do something). Content determination involves choice. In a soccer report, we may not want to verbalise each pass and foul committed, even though the data may contain this information. In the case of neonatal care, data might be collected continuously from sensors measuring heart rate, blood pressure and other physiological parameters. Data thus needs to be filtered and abstracted into a set of {\em preverbal messages}, semantic representations of information which are often expressed in a formal representation language, such as logical or database languages, attribute-value matrices or graph structures. They can express, among other things, which relations hold between which domain entities, for example, expressing that player X scored the first goal for team Y at time T.
Though content determination is present in most {\sc nlg} systems \cite<cf.>{Mellish2006}, approaches are typically closely related to the domain of application. A notable exception is the work of \citeA{Guhe2007}, which offers a cognitively plausible, incremental account of content determination based on studies of speakers' descriptions of dynamic events as they unfold. This work belongs to a strand of research which considers {\sc nlg} first and foremost as a methodology eminently suitable for understanding {\it human}\/ language production. In recent years, researchers have started exploring data-driven techniques for content determination \cite<see e.g.,>{Barzilay2004,BouayadAgha2013,Kutlak2013,Venigalla2013}. \citeA{Barzilay2004}, for example, used Hidden Markov Models to model topic shifts in a particular domain of discourse (say, earthquake reports), where the hidden states represented `topics', modelled as sentences clustered together by similarity. A clustering approach was also used by \citeA{Duboue2003} in the biography domain, using texts paired with a knowledge base, from which semantic data was clustered and scored according to its occurrence in text. In a similar vein, \citeA{Barzilay2005} used a database of American football records and corresponding text. Their aim was not only to identify bits of information that should be mentioned, but also dependencies between them, since mentioning a certain event (say, a score by a quarterback) may warrant the mention of another (say, another scoring event by a second quarterback). The solution proposed by \citeauthor{Barzilay2005} was to compute both individual preference scores for events, and a link preference score.
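Schematically (and glossing over the details of \citeauthor{Barzilay2005}'s actual formulation, so this rendering is a simplification of our own), collective content selection of this kind can be viewed as solving

```latex
\[
  \max_{x,\,y} \; \sum_{i} \mathit{ind}(e_i)\, x_i
     \;+\; \sum_{i \neq j} \mathit{link}(e_i, e_j)\, y_{ij}
  \qquad \mbox{subject to} \quad y_{ij} \le x_i, \; y_{ij} \le x_j,
\]
```

where $x_i \in \{0,1\}$ indicates whether event $e_i$ is selected and $y_{ij} \in \{0,1\}$ whether the pair $\langle e_i, e_j \rangle$ is linked; further global constraints (such as transitivity of links, or bounds on how much content a document may contain) restrict the feasible solutions, and the resulting problem can be handed to an Integer Linear Programming solver.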
More recently, various researchers have addressed the question of how to automatically learn alignments between data and text, also in the broader context of grounded language acquisition, i.e., modelling how we learn language by looking at correspondences between objects and events in the world and the way we refer to them in language \cite{Roy2002,yu2004multimodal,yu2013grounded}. For example, \citeA{Liang2009} extended the work by \citeA{Barzilay2005} to multiple domains (soccer and weather), relying on weakly supervised techniques; in a similar vein, \citeA{koncel2014multi} presented a weakly supervised multilevel approach, to deal with the fact that there is no one-to-one correspondence between, for example, soccer events in data and sentences in associated soccer reports. We shall return to these methods as part of a broader discussion of data-driven approaches below (Section \ref{sec:datadriven}). \subsection{Text Structuring}\label{sec:text-struct} Having determined what messages to convey, the {\sc nlg} system needs to decide on their order of presentation to the reader. For example, Figure \ref{fig:ts} shows three events of the same type (all bradycardia events, that is, brief drops in heart rate), selected (after abstraction) from the input signal and ordered as a temporal sequence. This stage is often referred to as text (or discourse or document) structuring. In the case of the soccer domain, for example, it seems reasonable to start with general information (where and when the game was played, how many people attended, etc.), before the goals are described, typically in temporal order. In the neonatal care domain, a temporal order can be imposed among specific events, as in Figure \ref{fig:ts}, but larger spans of text may reflect ordering based on importance, and grouping of information based on relatedness (e.g. all events related to a patient's respiration) \cite{Portet2009}. 
Naturally, alternative discourse relations may exist between separate messages, such as {\it contrasts}\/ or {\it elaborations}\/. The result of this stage is a discourse, text or document plan, which is a structured and ordered representation of messages. These examples again imply that the application domain imposes constraints on ordering preferences. Early approaches, such as \citeA{McKeown1985}, often relied on hand-crafted, domain-dependent structuring rules (which McKeown called {\it schemata}\/). To account for discourse relations between messages, researchers have alternatively relied on Rhetorical Structure Theory \cite<{\sc rst}; e.g.,>{Mann1988,Scott1990,Hovy1993}, which also typically involved domain-specific rules. For example, \citeA{Williams2008} used {\sc rst} relations to identify ordering among messages that would maximise clarity to low-skilled readers. Various researchers have explored the possibilities of using machine learning techniques for document structuring \cite<e.g.,>{Dimitromanolaki2003,althaus2004}, sometimes doing this in tandem with content selection \cite{Duboue2003}. General approaches for information ordering \cite{Barzilay2004,Lapata2006} have been proposed, which automatically try to find an optimal ordering of `information-bearing items'. These approaches can be applied to text structuring, where the items to be ordered are typically preverbal messages; however, they can also be applied in (multidocument) summarisation, where the items to be ordered are sentences from the input documents which are judged to be summary-worthy enough to include \cite<e.g.,>{Barzilay2002,Bollegala2010}. 
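As a highly simplified sketch of what a text structuring component might do, the following Python fragment groups preverbal messages by relatedness and orders each group temporally; the {\tt topic} and {\tt time} fields and the fixed topic order are illustrative assumptions of ours:

```python
# Document-planning sketch: group messages by topic (relatedness),
# then order each group temporally.  Field names are illustrative.
MESSAGES = [
    {"topic": "respiration", "time": 3, "content": "desaturation"},
    {"topic": "heart_rate",  "time": 2, "content": "second bradycardia"},
    {"topic": "heart_rate",  "time": 1, "content": "first bradycardia"},
]

def structure(messages, topic_order=("heart_rate", "respiration")):
    """Return a document plan: (topic, temporally ordered messages) pairs."""
    plan = []
    for topic in topic_order:                      # grouping by relatedness
        group = sorted((m for m in messages if m["topic"] == topic),
                       key=lambda m: m["time"])    # temporal ordering
        if group:
            plan.append((topic, group))
    return plan

plan = structure(MESSAGES)
```

Real document planners would of course also record discourse relations between messages, rather than a flat grouping.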
\subsection{Sentence Aggregation}\label{sec:aggregation} Not every message in the text plan needs to be expressed in a separate sentence; by combining multiple messages into a single sentence, the generated text becomes potentially more fluid and readable \cite<e.g.,>{Dalianis1999,Cheng2000}, although there are also situations where it has been argued that aggregation should be avoided (discussed in Section \ref{sec:affect}). For instance, the three events selected in Figure \ref{fig:ts} are shown as `merged' into a single pre-linguistic representation, which will be mapped to a single sentence. The process by which related messages are grouped together in sentences is known as sentence aggregation. To take another example, from the soccer domain, one (unaggregated) way to describe the fastest hat-trick in the English Premier League would be: \begin{examples}\label{ex:aggreg} \item Sadio Mane scored for Southampton after 12 minutes and 22 seconds.\\ \item Sadio Mane scored for Southampton after 13 minutes and 46 seconds.\\ \item Sadio Mane scored for Southampton after 15 minutes and 18 seconds. \end{examples} Clearly, this is rather redundant, not very concise or coherent, and generally unpleasant to read. An aggregated alternative, such as the following, would therefore be preferred: \begin{example}\label{ex:aggreg2} Sadio Mane scored three times for Southampton in less than three minutes. \end{example} In general, aggregation is difficult to define, and has been interpreted in various ways, ranging from redundancy elimination to linguistic structure combination. \citeA{Reape1999} offer an early survey, distinguishing between aggregation at the semantic level (as illustrated in Figure \ref{fig:agg}) and at the level of syntax, illustrated in the transition from (1-3) to (4) above. It is probably fair to say that much early work on aggregation was strongly domain-dependent. This work focussed on domain- and application-specific rules (e.g. 
`if a player scores two consecutive goals, describe these in the same sentence'), that were typically hand-crafted \cite<e.g.,>{Hovy1988,Dalianis1999,Shaw1998}. Once again, more recent work is gradually moving towards data-driven approaches, where aggregation rules are acquired from corpus data \cite<e.g.,>{Walker2001,Cheng2000}. \citeA{Barzilay2006} present a system that learns how to aggregate on the basis of a parallel corpus of sentences and corresponding database entries, by looking for similarities between entries. As was the case with the content selection method of \citeA{Barzilay2005}, \citeA{Barzilay2006} view the problem in terms of global optimisation: an initial classification is done over pairs of database entries which determines whether they should be aggregated or not on the basis of their pairwise similarity. Subsequently, a globally optimal set of linked entries is selected based on transitivity constraints (if $\langle e_{i},e_{j} \rangle$ and $\langle e_{j},e_{k} \rangle$ are linked, then so should $\langle e_{i},e_{k} \rangle$ be) and global constraints, such as how many sentences should be aggregated in a document. Global optimisation is cast in terms of Integer Linear Programming, a well-known mathematical optimisation technique \cite<e.g.,>{nemhauser1988integer}. With syntactic aggregation, it is arguably more feasible to define domain-independent rules to eliminate redundancy \cite{Harbusch2009,Kempen2009}. For example, converting (5) into (6) below \begin{examples} \item Sadio Mane scored in the 12th minute and he scored again in the 13th minute. \item Sadio Mane scored in the 12th minute and again in the 13th. \end{examples} could be achieved by identifying the parallel verb phrases in the two conjoined sentences and eliding the subject and verb in the second. Recent work has explored the possibility of acquiring such rules from corpora automatically.
For example, \citeA{Stent2009} describe an approach to the acquisition of sentence-combining rules from a discourse treebank, which are then incorporated into the {\sc sp}a{\sc rk}y sentence planner described by \citeA{Walker2007}. A more general approach to the same problem is discussed by \citeA{White2015}. Arguably, aggregation on the syntactic level can only account for relatively small reductions, compared to aggregation at the level of messages. Furthermore, syntactic aggregation assumes that the sentence planning process (which includes lexicalisation) is complete. Hence, while traditional approaches to {\sc nlg} view aggregation as part of sentence planning, which occurs prior to syntactic realisation, the validity of this claim depends on the type of aggregation being performed \cite<see also>{theune2006}. \subsection{Lexicalisation}\label{sec:lexicalisation} Once the content of the sentence has been finalised, possibly also as a result of aggregation at the message level, the system can start converting it into natural language. In our example (Figure \ref{fig:agg}), the outcomes of aggregation and lexicalisation are shown together: here, the three events have been grouped, and mapped to a representation that includes a verb ({\em be}) and its arguments, though the arguments themselves still have to be rendered in a referring expression (see below). This reflects an important decision, namely, which words or phrases to use to express the messages' building blocks. A complication is that often a single event can be expressed in natural language in many different ways. A scoring event in a soccer match, for example, can be expressed as `to score a goal', `to have a goal noted', `to put the ball in the net', among many others. The complexity of this lexicalisation process critically depends on the number of alternatives that the {\sc nlg} system can entertain.
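To give a flavour of the one-to-many mapping involved, the following sketch associates an event type with the alternative verbalisations from the soccer example; the dictionary-based design and the selection rule (avoid options already used earlier in the text) are illustrative assumptions of ours:

```python
# Lexicalisation sketch: one event type maps to several candidate
# phrasings; the selection rule simply avoids repetition.
LEXICON = {
    "score": ["to score a goal",
              "to have a goal noted",
              "to put the ball in the net"],
}

def lexicalise(event_type, used=()):
    """Return the first phrasing for event_type not yet used, if any."""
    options = LEXICON[event_type]
    fresh = [o for o in options if o not in used]
    return (fresh or options)[0]

first = lexicalise("score")
second = lexicalise("score", used={first})
```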
Often, contextual constraints play an important role here as well: if the aim is to generate texts with a certain amount of variation \cite<e.g.,>{Theune2001}, the system can decide to randomly select a lexicalisation option from a set of alternatives (perhaps even from a set of alternatives not used earlier in the text). However, stylistic constraints come into play: `to score a goal' is an unfortunate way of expressing an own goal, for example. In other applications, lexical choice may even be informed by other considerations, such as the attitude or affective stance towards the event in question \cite<e.g.,>[and the discussion in Section \ref{sec:style}]{Fleischman2002}. Whether or not {\sc nlg} systems aim for variation in their output depends on the domain. For example, variation in soccer reports is presumably more appreciated by readers than variation in weather reports \cite<on which see>{Reiter2005}; it may also depend on where in a text the variation occurs. For example, variation in expressing timestamps may be less appreciated than variation in referential forms \cite{castro2016towards}. One straightforward model for lexicalisation -- the one assumed in Figure \ref{fig:tasks-example} -- is to operate on preverbal messages, converting domain concepts directly into lexical items. This might be feasible in well-defined domains. More often, lexicalisation is harder, for at least two reasons \cite<cf.>{Bangalore2000}: First, it can involve selection between semantically similar, near-synonymous or taxonomically related words \cite<e.g. {\em animal} vs {\em dog};>{Stede1996,Edmonds2002}. Second, it is not always straightforward to model lexicalisation in terms of a crisp concept-to-word mapping. One source of difficulty is vagueness, which arises, for example, with terms denoting properties that are gradable.
For example, selecting the adjectives `wide' or `tall' based on the dimensions of an entity requires the system to reason about the width or height of similar objects, perhaps using some standard of comparison \cite<since a `tall glass' is shorter than a `short man'; cf.>{Kennedy2005a,VanDeemter2012}. A similar issue has been noted in the context of presenting numerical information, such as timestamps and quantities \cite{Reiter2005,power2012generating}. For example, \citeA{Reiter2005} discussed time expressions in the context of weather-forecast generation, pointing out that a timestamp {\em 00:00} could be expressed as {\em late evening}, {\em midnight}, or simply {\em evening} \cite[p. 143]{Reiter2005}. Not surprisingly, humans (including the professional forecasters that contributed to \citeauthor{Reiter2005}'s evaluation) show considerable variation in their lexical choices. It is interesting to note that many issues related to lexicalisation have also been discussed in the psycholinguistic literature on lexical access \cite{Levelt1989,Levelt1999:lexical}. Among these is the question of how speakers home in on the right word and under what conditions they are liable to make errors, given that the mental lexicon is a densely connected network in which lexical items are connected at multiple levels (semantic, phonological, etc.). This has also been a fruitful topic for computational modelling \cite<e.g.,>{Levelt1999:lexical}. In contrast to cognitive modelling approaches, however, research in {\sc nlg} increasingly views lexicalisation as part of surface realisation (discussed below) \cite<a similar observation is made by>[p.351]{Mellish1998a}. A fundamental contribution in this context is by \citeA{Elhadad1997}, who describe a unification-based approach, unifying conceptual representations (i.e., preverbal messages) with grammar rules encoding lexical as well as syntactic choices.
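A toy illustration of lexical choice for a gradable adjective, where a comparison class fixes the standard of comparison; the threshold rule and margin are simplifying assumptions of ours, not the principled accounts given in the cited work:

```python
# Vague adjective choice: use `tall' or `short' only when an entity's
# height clearly deviates from the average of its comparison class.
def choose_adjective(height, comparison_class, margin=1.1):
    """Return 'tall', 'short' or None relative to a comparison class."""
    average = sum(comparison_class) / len(comparison_class)
    if height > average * margin:
        return "tall"
    if height < average / margin:
        return "short"
    return None  # unremarkable: better to omit the adjective

# A `tall glass' (15 cm) is still far shorter than a `short man' (150 cm):
tall_glass = choose_adjective(15, [10, 12, 14])
short_man = choose_adjective(150, [165, 175, 185])
```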
\subsection{Referring Expression Generation}\label{sec:reg} Referring Expression Generation ({\sc reg}) is characterised by \citeA[p.11]{Reiter1997} as ``the task of selecting words or phrases to identify domain entities''. This characterisation suggests a close similarity to lexicalisation, but \citeA{Reiter2000} point out that the essential difference is that referring expression generation is a ``discrimination task, where the system needs to communicate sufficient information to distinguish one domain entity from other domain entities''. {\sc reg} is among the tasks within the field of automated text generation that has received most attention in recent years \cite{Mellish2006,Siddharthan2011}. Since it can be separated relatively easily from a specific application domain and studied in its own right, various `standalone' solutions for the {\sc reg} problem exist. In our running example, the three bradycardia events shown in Figure \ref{fig:ts} are later represented as a set of three entities under the {\sc theme} argument of {\em be}, following lexicalisation (Figure \ref{fig:agg}). How the system refers to them will depend, among other things, on whether they've already been mentioned (in which case, a pronoun or definite description might work) and if so, whether they need to be distinguished from any other similar entities (in which case, they might need to be distinguished by some properties, such as the time when they occurred). The first choice is therefore related to {\em referential form}: whether entities are referred to using a pronoun, a proper name or an (in)definite description, for example. This depends partly on the extent to which the entity is `in focus' or `salient' \cite<see e.g.,>{Poesio2004} and indeed such notions underlie many computational accounts of pronoun generation \cite<e.g.,>{McCoy1999,Callaway2002,Kibble2004}.
Choosing referential forms has recently been the topic of a series of shared tasks on the Generation of Referring Expressions in Context \cite<{\sc grec};>{Belz2010}, using data from Wikipedia articles, which included choices such as reflexive pronouns and proper names. Many systems participating in this challenge framed the problem in terms of classification among these many options. Still, it is probably fair to say that much work on referential form has focussed on when to use pronouns. Forms such as proper names remain understudied, although recently various researchers have highlighted the problems of proper name generation \cite{Siddharthan2011,deemter2016designing,ferreira2017generating}. \begin{figure}[t] \centering \begin{subfigure}[b]{0.45\textwidth} \includegraphics[width=\textwidth,height=0.2\textheight]{img/gre3d-scene} \caption{Visual domain from the {\sc gre3d} corpus \cite{Viethen2008}} \label{fig:example-domain} \end{subfigure} \hfill \begin{subtable}[b]{0.45\textwidth} \begin{tabular}{l | lll} \hline & \multicolumn{3}{c}{Domain objects}\\ Attr & $d_1$ & $d_2$ & $d_3$\\ \hline Color & blue & green & blue \\ Shape & ball & cube & ball \\ Size & small & large & large \\ Rel & bef($d_2$) & beh($d_1$) & nt($d_2$)\\ \hline \end{tabular} \caption{Table of objects and attributes. {\em beh}: `behind'; {\em bef}: `before'; {\em nt}: `next to'} \label{table:example-domain} \end{subtable} \caption{Visual domain and corresponding tabular representation} \end{figure} Determining the {\it referential content}\/ usually comes into play when the chosen form is a description. Typically, there are multiple entities which have the same referential category or type in a domain (more than one player, for example, or several bradycardias). As a result, other properties of the entity will need to be mentioned if it is to be identified by the reader or hearer. 
Earlier {\sc reg} research often worked with simple visual domains, such as Figure \ref{fig:example-domain} or its corresponding tabular representation, taken from the {\sc gre3d} corpus \cite{Viethen2008}. In this example, the {\sc reg} content selection problem is to find a set of properties for a target (say $d_1$) that singles it out from its two distractors ($d_2$ and $d_3$). {\sc reg} content determination algorithms can be thought of as performing a search through the known properties of the referent for the `right' combination that will distinguish it in context. What constitutes the `right' combination depends on the underlying theory. Too much information in the description (as in {\it the small blue ball before the large green cube}\/) might be misleading or even boring; too little ({\em the ball}) might hinder identification. Much work on {\sc reg} has appealed to the Gricean maxim stating that speakers should make sure that their contributions are sufficiently informative for the purposes of the exchange, but not more so \cite{Grice1975}. How this is interpreted has been the subject of a number of algorithmic interpretations, including: \begin{itemize} \item Conducting an exhaustive search through the space of possible descriptions and choosing the smallest set of properties that will identify the target referent, the strategy incorporated by the Full Brevity procedure \cite{Dale1989}. In our example domain, this would select size. \item Selecting properties incrementally, but choosing the one which rules out most distractors at each step, thereby minimising the possibility of including information that isn't directly relevant to the identification task. This is the underlying idea of the Greedy Heuristic algorithm \cite{Dale1989,Dale1992}, and it has more recently been revived in stochastic utility-based models such as \citeA{Frank2009}. In our example scene, such an algorithm would once again consider size first.
\item Selecting properties incrementally, but based on domain-specific preference or cognitive salience. This is the strategy incorporated in the Incremental Algorithm \cite{Dale1995}, which would predict that color should be preferred over size in our example. \end{itemize} While these heuristics focus exclusively on the requirement that a referent be unambiguously identified, research on reference in dialogue \cite<e.g.,>{Jordan2005} has shown that under certain conditions, referring expressions may also include `redundant' properties in order to achieve other communicative goals, such as confirmation of a previous utterance by an interlocutor. Similarly, \citeA{White-Clark-Moore:2010} present a system which generates user-tailored descriptions in spoken dialogue, arguing that, for example, a frequent flyer would prefer different descriptions of flights than a student who only flies occasionally. These various algorithms compute (possibly different) distinguishing descriptions for target referents (more precisely: they select sets of properties that distinguish the target, but that still need to be expressed in words; see Section \ref{sec:lr} below). Various strands of more recent work can be distinguished \cite<surveyed in>{Krahmer2012}. Some researchers have focussed on extending the expressivity of the `classical' algorithms, to include plurals ({\it the two balls}\/) and relations ({\it the ball in front of a cube}\/) \cite<e.g.,>[among many others]{Horacek1997,Stone2000,Gardent2002,Kelleher2006,Viethen2008}. Other work has cast the problem in probabilistic terms; for example, \citeA{Fitzgerald2013} frame {\sc reg} as a problem of estimating a log-linear distribution over a space of logical forms representing expressions for sets of objects. 
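Returning to the classic heuristics listed above, the last of them can be made concrete with a short Python sketch of the Incremental Algorithm, run on the domain of Figure \ref{fig:example-domain}; this is a simplified rendering (the preference order is stipulated by hand, and the customary inclusion of the type attribute is omitted):

```python
# Simplified Incremental Algorithm (after Dale & Reiter 1995): walk
# through attributes in a fixed preference order, keeping any property
# of the target that rules out at least one remaining distractor.
DOMAIN = {
    "d1": {"color": "blue",  "shape": "ball", "size": "small"},
    "d2": {"color": "green", "shape": "cube", "size": "large"},
    "d3": {"color": "blue",  "shape": "ball", "size": "large"},
}

def incremental(target, distractors, preference=("color", "shape", "size")):
    description, remaining = [], set(distractors)
    for attr in preference:
        value = DOMAIN[target][attr]
        ruled_out = {d for d in remaining if DOMAIN[d][attr] != value}
        if ruled_out:                    # property helps: include it
            description.append((attr, value))
            remaining -= ruled_out
        if not remaining:                # target is now distinguished
            break
    return description

# Colour is selected first; size is then needed to exclude d3.
description = incremental("d1", ["d2", "d3"])
```

Note that, unlike Full Brevity, the result here includes colour as well as size, since properties once included are never retracted.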
Other work has concentrated on evaluating the performance of different {\sc reg} algorithms, by collecting controlled human references and comparing these with the references predicted by various algorithms \cite<e.g.,>[again among many others]{Belz2008,Gatt2010,Jordan2005}. In a similar vein, researchers have also started exploring the relevance of {\sc reg} algorithms as psycholinguistic models of human language production \cite<e.g.,>{VanDeemter2012a}. A different line of work has moved away from the separation between content selection and form, performing these tasks jointly. For example, \citeA{Engonopoulos2014} use a synchronous grammar that directly relates surface strings to target referents, using a chart to compute the possible expressions for a given target. This work bears some relationship to planning-based approaches we discuss in Section \ref{sec:planning} below, which exploit grammatical formalisms as planning operators \cite<e.g.>{Stone1998,Koller2007}, solving realisation and content determination problems in tandem (including {\sc reg} as a special case). Finally, in earlier work visual information was typically `simplified' into a table (as we did above), but there has been substantial progress on {\sc reg} in more complex scenarios. For example, the {\sc give} challenge \cite{koller2010report} provided impetus for the exploration of situated reference to objects in a virtual environment \cite<see also>{Stoia2006,Garoufi2013}. More recent work has started exploring the interface between computer vision and {\sc reg} to produce descriptions of objects in complex, realistic visual scenes, including photographs \cite<e.g.,>{Mitchell2013,Kazemzadeh2014,Mao2016}. This forms part of a broader set of developments focussing on the relationship between vision and language, which we turn to in Section \ref{sec:image}.
\subsection{Linguistic Realisation}\label{sec:lr} Finally, when all the relevant words and phrases have been decided upon, these need to be combined to form a well-formed sentence. The simple example in Figure \ref{fig:lr} shows the structure underlying the sentence {\em there were three successive bradycardias down to 69}\/, the linguistic message corresponding to the portion selected from the original signal in Figure \ref{fig:cd}. Usually referred to as linguistic realisation, this task involves ordering constituents of a sentence, as well as generating the right morphological forms (including verb conjugations and agreement, in those languages where this is relevant). Often, realisers also need to insert function words (such as auxiliary verbs and prepositions) and punctuation marks. An important complication at this stage is that the output needs to include various linguistic components that may not be present in the input (an instance of the `generation gap' discussed in Section \ref{sec:modular} below); thus, this generation task can be thought of in terms of projection between non-isomorphic structures \cite<cf.>{Ballesteros2015}. Many different approaches have been proposed, of which we will discuss \begin{enumerate} \item human-crafted templates; \item human-crafted grammar-based systems; \item statistical approaches. \end{enumerate} \subsubsection{Templates} When application domains are small and variation is expected to be minimal, realisation is a relatively easy task, and outputs can be specified using templates \cite<e.g.,>{Reiter1995,mcroy2003augmented}, such as the following. \begin{example} \$\mbox{\tt{player}} scored for \$\mbox{\tt{team}} in the \$\mbox{\tt{minute}} minute. \end{example} This template has three variables, which can be filled with the names of a player, a team, and the minute in which this player scored a goal. 
It can thus serve to generate sentences like: \begin{examples} \item Ivan Rakitic scored for Barcelona in the 4th minute. \end{examples} An advantage of templates is that they allow for full control over the quality of the output and avoid the generation of ungrammatical structures. Modern variants of the template-based method include syntactic information in the templates, as well as possibly complex rules for filling the gaps \cite{Theune2001}, making it difficult to distinguish templates from more sophisticated methods \cite{VanDeemter2005}. The disadvantage of templates is that they are labour-intensive if constructed by hand \cite<though templates have recently been learned automatically from corpus data, see e.g.,>[ and the discussion in Section \ref{sec:datadriven} below]{angeli2012parsing,Kondadadi2013}. They also do not scale well to applications which require considerable linguistic variation. \subsubsection{Hand-Coded Grammar-Based Systems} An alternative to templates is provided by general-purpose, domain-independent realisation systems. Most of these systems are {\em grammar-based}, that is, they make some or all of their choices on the basis of a grammar of the language under consideration. This grammar can be manually written, as in many classic off-the-shelf realisers such as {\sc fuf/surge} \cite{Elhadad1996}, {\sc mumble} \cite{Meteer1987}, {\sc kpml} \cite{Bateman1997}, {\sc nigel} \cite{Mann1983}, and RealPro \cite{Lavoie1997}. Hand-coded grammar-based realisers tend to require very detailed input. For example, {\sc kpml} \cite{Bateman1997} is based on Systemic-Functional Grammar \cite<{\sc sfg}; >{Halliday2004}, and realisation is modelled as a traversal of a network in which choices depend on both grammatical and semantico-pragmatic information. 
This level of detail makes these systems difficult to use as simple `plug-and-play' or `off the shelf' modules \cite<e.g.,>{Kasper1989}, something which has motivated the development of simple realisation engines which provide syntax and morphology {\sc api}s, but leave choice-making up to the developer \cite{Gatt2009,Vaudry2013,Bollmann2011,DeOliveira2014,Mazzei2016}. One difficulty for grammar-based systems is how to make choices among related options, such as the following, where hand-crafted rules with the right sensitivity to context and input are difficult to design: \begin{examples} \item Ivan Rakitic scored for Barcelona in the 4th minute.\\ \item For Barcelona, Ivan Rakitic scored in minute four.\\ \item Barcelona player Ivan Rakitic scored after four minutes. \end{examples} \subsubsection{Statistical Approaches} Recent approaches have sought to acquire probabilistic grammars from large corpora, cutting down on the amount of manual labour required, while increasing coverage. Essentially, two approaches have been taken to include statistical information in the realisation process. One approach, introduced by the seminal work of Langkilde and Knight \cite{Langkilde2000,Langkilde-Geary2002} on the {\sc halogen/nitrogen} systems, relies on a two-level approach, in which a small, hand-crafted grammar is used to generate alternative realisations represented as a forest, from which a stochastic re-ranker selects the optimal candidate. Langkilde and Knight rely on corpus-based statistical knowledge in the form of n-grams, whereas others have experimented with more sophisticated statistical models to perform reranking \cite<e.g.,>{Bangalore2000,Ratnaparkhi2000,cahill2007stochastic}. The second approach does not rely on a computationally expensive generate-and-filter approach, but uses statistical information directly at the level of generation decisions. 
An example of this approach is the p{\sc cru} system developed by \citeA{Belz2008}, which generates the most likely derivation of a sentence, given a corpus, using a context-free grammar. In this case, the statistics are exploited to control the generator's choice-making behaviour as it searches for the optimal solution. In both approaches, the base generator is hand-crafted, while statistical information is used to filter outputs. An obvious alternative would be to {\it also}\/ rely on statistical information for the base-generation system. Fully data-driven grammar-based approaches have been developed by acquiring grammatical rules from treebanks. For example, the Open{\sc ccg} framework \cite{hypertagging:acl08,white-rajkumar:2009:EMNLP,deplen:2012:EMNLP} presents a broad coverage English surface realiser, based on Combinatory Categorial Grammar \cite<{\sc ccg}; >{Steedman2000}, relying on a corpus of {\sc ccg} representations derived from the Penn Treebank \cite{Hockenmaier2007} and using statistical language models for re-ranking. There are several other approaches to realisation that adopt a similar rationale, based on a variety of grammatical formalisms, including Head-Driven Phrase Structure Grammar \cite<{\sc hpsg}; >{Nakanishi2005,Carroll2005}, Lexical-Functional Grammar \cite<{\sc lfg}; >{Cahill2006} and Tree Adjoining Grammar \cite<{\sc tag}; >{Gardent2015}. In many of these systems, the base generator uses some variant of the chart generation algorithm \cite{Kay1996} to iteratively realise parts of an input specification and merge them into one or more final structures, which can then be ranked \cite<see>[for further discussion]{Rajkumar2014}. The existence of stochastic realisers with wide-coverage grammars has motivated a greater focus on subtle choices, such as how to avoid structural ambiguity, or how to handle choices such as explicit complementiser insertion in English \cite<see e.g.,>{Rajkumar2011}.
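The two-level, generate-and-rerank strategy can be caricatured in a few lines of Python; the bigram-overlap scorer below is a stand-in of our own for a proper n-gram language model, and the toy corpus and candidate realisations are invented:

```python
# Generate-and-rerank sketch: a base generator proposes candidate
# realisations; a corpus-derived fluency score picks the best one.
CORPUS = "rakitic scored for barcelona in the 4th minute".split()
BIGRAMS = set(zip(CORPUS, CORPUS[1:]))

def fluency(sentence):
    """Number of the candidate's bigrams attested in the corpus."""
    words = sentence.split()
    return sum(bigram in BIGRAMS for bigram in zip(words, words[1:]))

# Two candidate realisations of the same input, as a base generator
# might produce them:
candidates = [
    "mane for southampton scored in the 4th minute",
    "mane scored for southampton in the 4th minute",
]
best = max(candidates, key=fluency)
```

The second candidate wins because more of its bigrams ({\em scored for}, {\em in the}, {\em the 4th}, {\em 4th minute}) are attested in the corpus.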
In a somewhat similar vein, the statistical approach to microplanning proposed by \citeA{gardent2017statistical} focuses on interactions between surface realisation, aggregation, and sentence segmentation in a joint model. Other approaches to realisation also rely on one or more classifiers to improve outputs. For example, \citeA{Filippova2007,Filippova2009} describe an approach to linearisation of constituents using a two-step approach with Maximum Entropy classifiers, first determining which constituent should occupy sentence-initial position, then ordering the constituents in the remainder of the sentence. \citeA{Bohnet2010} present a realiser using underspecified dependency structures as input, in a framework based on Support Vector Machines, where classifiers are organised in a cascade. An initial classifier decodes semantic input into the corresponding syntactic features, while two subsequent classifiers first linearise the syntax and then render the correct morphological realisation for the component lexemes. Modelling choices using classifier cascades is not restricted to realisation alone. Indeed, in some cases, it has been adopted as a model for the {\sc nlg} process as a whole, a topic we will return to in Section \ref{sec:classification}. One outcome of this view of {\sc nlg} is that the nature of the input representation also changes: the more decisions that are made within the statistical generation system, the less linguistic and more abstract the input representation becomes, paving the way for integrated, end-to-end stochastic generation systems, such as \citeA{Konstas2013}, which we also discuss in the next section. \subsection{Discussion}\label{sec:tasks-disc} This section has given an overview of some classic tasks that are found in most {\sc nlg} systems.
One of the common trends that can be identified in each case is the steady move from early, hand-crafted approaches based on rules, to the more recent stochastic approaches that rely on corpus data, with a concomitant move towards more domain-independent approaches. Historically, this was already the case for individual tasks, such as referring expression generation or realisation, which became topics of intensive research in their own right. However, as more and more approaches to all {\sc nlg} tasks begin to take a statistical turn, there is increasing emphasis on learning techniques; the domain-specific aspect is, as it were, incidental, a property of the training data itself. As we shall see in the next section, this trend has also influenced the way different {\sc nlg} tasks are organised, that is, the architecture of systems for text generation from data. \section{Evaluation}\label{sec:evaluation} Though we have touched on the subject of evaluation at various points, it deserves a full discussion as a topic which has become a central methodological concern in {\sc nlg}. A factor that contributed to this development was the establishment of a number of {\sc nlg} shared tasks, launched in the wake of an {\sc nsf}-funded workshop held in Virginia in 2007 \cite{Dale2007}. These tasks have focussed on referring expression generation \cite{Belz2010,Gatt2010}; surface realisation \cite{Belz2011}; generation of instructions in virtual environments \cite{Striegnitz2011,Janarthanam2011}; content determination \cite{BouayadAgha2013,Banik2013}; and question generation \cite{Rus2011}. Recent proposals for new challenges extend these to narrative generation \cite{concepcion-EtAl:2016:INLG}, generation from structured web data \cite{colin-EtAl:2016:INLG}, and from pairs of meaning representations and text \cite{novikova-rieser:2016:INLG,May2017}.
In image captioning, shared tasks have helped the development of large-scale datasets and evaluation servers such as {\sc ms-coco}\footnote{\url{http://mscoco.org/dataset/\#captions-upload}} (cf. Section \ref{sec:im-data}). In general, however, {\sc nlg} evaluation is marked by a great deal of variety and it is difficult to compare systems directly. There are at least two reasons why this is the case. \paragraph{Variable input} There is no single, agreed-upon input format for {\sc nlg} systems \cite{McDonald1993,Mellish1998a,evans2002nlg}. Typically, one can only compare systems against a common benchmark if the input is similar. Examples are the image-captioning systems described in Section \ref{sec:image}, or systems submitted to one of the shared tasks mentioned above. Even where a common `standard' dataset is available for evaluation, comparison may not be straightforward, due to input variation or to implicit biases in the input data. For example, \citeA{Rajkumar2014} observe that, despite many realisers being evaluated against the Penn Treebank, they make different assumptions about the input format, including how detailed the pre-syntactic input representation is, a problem also observed in the first Surface Realisation shared task \cite{Belz2011}. As \citeA{Rajkumar2014} note, a comparison of realisers on the basis of scores on the Penn Treebank shows that the highest-ranking is the {\sc fuf/surge} realiser (which is second in terms of coverage), based on experiments by \citeA{Callaway2005}. However, these experiments required painstaking effort to extract the input representations at the level of detail needed by {\sc fuf/surge}; other realisers support more underspecified input. In a related vein, image captioning evaluation studies have shown that many datasets contain a higher proportion of nouns than verbs, and few abstract concepts \cite{Ferraro2015}, making systems that generate descriptions emphasising objects more likely to score better.
The relevance of this observation is shown by \citeA{Elliott2015}, who note that the ranking of their image captioning system based on visual dependency grammar depends in part on the data it is evaluated on, with better performance on data containing more images depicting actions (we return to this study below). \paragraph{Multiple possible outputs} Even for a single piece of input and a single system, the range of possible outputs is open-ended, a problem that arguably holds for any {\sc nlp} task involving textual output, including machine translation and summarisation. Corpora often display a substantial range of variation and it is often unclear, without an independent assessment, which outputs are to be preferred \cite{ReiterSripada2002}. In the image captioning literature, authors who have framed the problem in terms of retrieval have motivated the choice in part based on this problem, arguing that `since there is no consensus on what constitutes a good image description, independently obtained human assessments of different caption generation systems should not be compared directly' \cite[p. 580]{Hodosh2013}. While capturing variation may itself be a goal \cite<e.g.,>{Belz2008,Viethen2010,Hervas2013,castro2016towards}, as we also saw in our discussion of style in Section \ref{sec:style}, this is not always the case. Thus, in a user-oriented evaluation, weather forecasts generated by the S{\sc um}T{\sc ime}-{\sc mousam} system were preferred by readers over those written by forecasters, because the latter's lexicalisation decisions were susceptible to apparently arbitrary variation \cite{Reiter2005}; similar outcomes were more recently reported for statistical {\sc nlg} systems trained on the S{\sc um}T{\sc ime} corpus \cite{Belz2008,Angeli2010}.\\ \ \\ Rather than give an exhaustive review of {\sc nlg} evaluation -- hardly a realistic prospect given the diversity we have pointed out -- the rest of this section will highlight some topical issues in current work.
By way of an overview of these issues, consider the hypothetical scenario sketched in Figure \ref{fig:eval-scenario}, which is loosely inspired by work on various weather-reporting systems developed in the field. This {\sc nlg} system is embedded in the environment of an offshore oil-rig; the relevant features of the {\em setup} \cite<in the sense of>{SparckJones1996} are the system itself and its users, here a group of engineers. While the {\em task} of the system is to generate weather reports from numerical weather prediction data, its ultimate {\em purpose} is to facilitate users' planning of drilling and maintenance operations. Figure \ref{fig:eval-scenario} highlights some of the common questions addressed in {\sc nlg} evaluation, together with a broad typology of the methods used to address them, in particular, whether they are objective -- that is, measurable against an external criterion, such as corpus similarity or experimentally obtained behavioural data -- or subjective, requiring human judgements. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{img/evaluation} \caption{Hypothetical evaluation scenario: a weather report generation system embedded in an offshore oil platform environment. Possible evaluation methods, focussing on different questions, are highlighted at the bottom, together with the typical methodological orientation (subjective/objective) adopted to address them.} \label{fig:eval-scenario} \end{figure} A fundamental methodological distinction, due to \citeA{SparckJones1996}, is between {\em intrinsic} and {\em extrinsic} evaluation methods. In the case of {\sc nlg}, an intrinsic evaluation measures the performance of a system without reference to other aspects of the setup, such as the system's effectiveness in relation to its users.
In our example scenario, questions related to text quality, correctness of output and readability qualify as intrinsic, whereas the question of whether the system actually achieves its goal in supporting adequate decision-making on the offshore platform is extrinsic. \subsection{Intrinsic Methods}\label{sec:intrinsic} Intrinsic evaluation in {\sc nlg} is dominated by two methodologies, one relying on human judgements (and hence subjective), the other on corpora. \subsubsection{Subjective (Human) Judgements} Human judgements are typically elicited by exposing naive or expert subjects to system outputs and asking them to rate the outputs on a number of criteria. Common criteria include: \begin{itemize} \item {\em Fluency} or {\em readability}, that is, the linguistic quality of the text \cite<e.g.,>[{\em inter alia}]{Callaway2002,Mitchell2012,Stent2005a,Lapata2006,Cahill2009,Espinosa2010}; \item {\em Accuracy}, {\em adequacy}, {\em relevance} or {\em correctness} relative to the input, reflecting the system's rendition of the content \cite<e.g.>{Lester1997,Sripada2005,Hunter2012}, a criterion often used in subjective evaluations of image-captioning systems as well \cite<e.g.>{Kulkarni2011,Mitchell2012,Kuznetsova2012,Elliott2013}. \end{itemize} Though they are the most common, these two sets of criteria do not exhaust the possibilities. For example, subjective ratings have also been elicited for argument effectiveness in a system designed to generate persuasive text for prospective house buyers \cite{Carenini2006}. In image captioning, at least one system was evaluated by asking users to judge the creativity of the generated caption, with a view to assessing the contribution of web-scale n-gram language models to the captioning quality \cite{Li2011}. Below, we also discuss judgements of genre compatibility (Section \ref{sec:genre-eval}).
In the case of fictional narrative, some evaluations have elicited judgements on qualities such as novelty \cite<e.g.,>{Perez2011} or believability of characters \cite<e.g.,>{Riedl2005a}. The use of scales to elicit judgements raises a number of questions. One has to do with the nature of the scale itself. While discrete, ordinal scales are the dominant method, a continuous scale -- for example, one involving a visually presented slider \cite{Gatt2010,Belz2011a} -- might give subjects the possibility of giving more nuanced judgements. For example, a text generated by our hypothetical weather report system might be judged so disfluent as to be given the lowest rating on an ordinal scale; if the following text is judged as being worse, a subject would have no way of indicating this. A related question is whether subjects find it easier to {\em compare} items rather than judge each one in its own right. This question has begun to be addressed in the {\sc nlp} evaluation literature, usually with binary comparisons, for example between the outputs of two {\sc mt} systems \cite<see>[for discussion]{Dras2015}. In a recent study evaluating causal connectives produced by an {\sc nlg} system, \citeA{Siddharthan2012a} used Magnitude Estimation, whereby subjects are not given a predefined scale, but are asked to choose their own and proceed to make comparisons of each item to a `modulus', which serves as a comparison point throughout the experiment \cite<see>{Bard1996}.\footnote{The modulus is an item -- a text, or a sentence -- which is selected in advance and which subjects are asked to rate first. All subsequent ratings or judgements are performed in comparison to this modulus item. Though subjects are able to use any scale they choose, this method allows all judgements to be normalised by the judgement given for the modulus.
Typically, normalised judgements are analysed on a logarithmic scale.} \citeA{Belz2010a} compared a preference-based paradigm to a standard rating scale to evaluate systems from two different domains (weather reporting and {\sc reg}), and found that the former was more sensitive to differences between systems, and less susceptible to variance between subjects. An additional concern with subjective evaluations is inter-rater reliability. Multiple judgements by different evaluators may exhibit high variance, a problem that was encountered in the case of Question Generation \cite{Rus2011}. Recently, \citeA{Godwin2016} suggested that such variance can be reduced by an iterative method whereby training of judges is followed by a period of discussion, leading to the updating of evaluation guidelines. This, however, is more costly in terms of time and resources. It is probably fair to state that, these days, subjective, human evaluations are often carried out via online platforms such as Amazon Mechanical Turk and CrowdFlower, though this is probably more feasible for widely-spoken languages such as English. A seldom-discussed issue with such platforms concerns their ethical implications \cite<for example, they involve large groups of poorly paid individuals; see>{Fort2011} as well as the reliability of the data collected, though measures can be put in place to ensure, for instance, that contributors are fluent in the target language \cite<see e.g.,>{goodman2013data,mason2012conducting}. \afterpage{ \begin{table}[!h] \small \centering \begin{tabular}{llp{9cm}l} \hline & Metric & Description & Origins \\ \hline \multirow{7}{*}[-60pt]{\rotatebox[origin=c]{90}{N-gram overlap}} & {\sc bleu} & Precision score over variable-length $n$-grams, with a length penalty \cite{Papineni2002} and, optionally, smoothing \cite{Lin2004}. & {\sc mt}\\ & {\sc nist} & A version of {\sc bleu} with higher weighting for less frequent $n$-grams and a different length penalty \cite{Doddington2002}.
& {\sc mt}\\ & {\sc rouge} & Recall-oriented score, with options for comparing non-contiguous $n$-grams and longest common subsequences \cite{Lin2003}. & {\sc as}\\ & {\sc meteor} & Harmonic mean of unigram precision and recall, with options for handling near-synonymy and stemming \cite{Lavie2007}. & {\sc mt}\\ & {\sc gtm} & General Text Matcher. F-Score based on precision and recall, with greater weight for contiguous matching spans \cite{Turian2003}. & {\sc mt} \\ & {\sc cide}r & Cosine-based $n$-gram similarity score, with $n$-gram weighting using {\sc tf-idf} \cite{Vedantam2015}. & {\sc ic} \\ & {\sc wmd} & Word Mover's Distance, a similarity score between texts, based on the (semantic) distance between words in the texts \cite{Kusner2015}. For {\sc nlp}, distance is operationalised using normalised bag of words ({\sc nbow}) representations \cite{Mikolov2013}. & {\sc ds}; {\sc ic} \\ \hline \multirow{3}{*}[-20pt]{\rotatebox[origin=c]{90}{String distance}} & Edit distance & Number of insertions, deletions, substitutions and, possibly, transpositions required to transform the candidate into the reference string \cite{Levenshtein1966}. & {\sc n/a}\\ & {\sc ter} & Translation edit rate, a version of edit distance \cite{Snover2006}. & {\sc mt} \\ & {\sc terp} & Version of {\sc ter} handling phrasal substitution, stemming and synonymy \cite{Snover2006}. & {\sc mt} \\ & {\sc terpa} & Version of {\sc ter} optimised for correlations with adequacy judgements \cite{Snover2006}. & {\sc mt} \\ \hline \multirow{4}{*}[-30pt]{\rotatebox[origin=c]{90}{Content overlap}} & Dice/Jaccard & Set-theoretic measures of overlap between two unordered sets (e.g.
of predicates or other content units). & {\sc n/a} \\ & {\sc masi} & Measure of agreement between set-valued items, a weighted version of Jaccard \cite{Passonneau2006}. & {\sc as} \\ & {\sc pyramid} & Overlap measure relying on comparison of weighted Summarization Content Units (SCUs) \cite{nenkova2004,yang2016}. & {\sc as} \\ & {\sc spice} & Measure of overlap between candidate and reference texts based on propositional content, obtained by first parsing captions into scene graphs representing objects and relations \cite{Anderson2016}. & {\sc ic}\\ \hline \end{tabular} \caption{Intrinsic, corpus-based metrics based on string overlap, string distance, or content overlap. The last column indicates the {\sc nlp} sub-discipline in which a metric originated, where applicable. Legend: {\sc mt} = Machine translation; {\sc as} = automatic summarisation; {\sc ic} = image captioning; {\sc ds} = document similarity.} \label{table:intrinsic-metrics} \end{table} \clearpage } \subsubsection{Objective Humanlikeness Measures Using Corpora} Intrinsic methods that rely on corpora can generally be said to be addressing the question of `humanlikeness', that is, the extent to which the system's output matches human output under comparable conditions. From the developer's perspective, the selling point of such methods is their cheapness, since they are usually based on automatically computed metrics. A variety of corpus-based metrics, often used earlier in related fields such as Machine Translation or Summarisation, have been used in {\sc nlg} evaluation. Some of the main ones are summarised in Table \ref{table:intrinsic-metrics}, which groups them according to their principal characteristics, and for each adds a key reference.
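To make the flavour of these metrics concrete, the two simplest families in Table \ref{table:intrinsic-metrics}, set-theoretic content overlap (Dice/Jaccard) and edit distance, can be sketched in a few lines of Python. This is an illustrative sketch of the textbook definitions, not the implementation used by any of the systems cited:

```python
def dice(a, b):
    """Dice coefficient between two unordered sets of content units."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))


def jaccard(a, b):
    """Jaccard coefficient: size of the intersection over size of the union."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def edit_distance(candidate, reference):
    """Levenshtein distance: the minimum number of insertions, deletions
    and substitutions needed to turn the candidate into the reference."""
    m, n = len(candidate), len(reference)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if candidate[i - 1] == reference[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]
```

Applied to sets of predicates, `jaccard` gives the kind of score used to compare content selection against semantically annotated corpora; applied to token sequences, `edit_distance` is the core of the {\sc ter} family of string-distance measures.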
Measures of $n$-gram overlap or string edit distance, usually originating in Machine Translation or Summarisation \cite<with some exceptions, such as {\sc cide}r,>{Vedantam2015} are frequently used for evaluating surface realisation \cite<e.g.,>{White2007,Cahill2006,Espinosa2010,Belz2011} and occasionally also to evaluate short texts characteristic of data-driven systems in domains such as weather reporting \cite<e.g.>{Reiter2009a,Konstas2013} and image captioning \cite<see>{Bernardi2016,Kilickaya2017}. Edit distance metrics have been exploited for realisation \cite{Espinosa2010}, but also for {\sc reg} \cite{Gatt2010}. The focus of these metrics is on the output text, rather than its fidelity to the input. In a limited number of cases, surface-oriented metrics have been used to evaluate the adequacy with which output text reflects content \cite{Banik2013,Reiter2009a}. However, if content determination is the focus, a measure of surface overlap is at best a proxy, relying on an assumption of a straightforward correspondence between input and output. This assumption may be tenable if texts are brief and relatively predictable. In some cases, it has been possible to use metrics to measure content determination directly, based on semantically annotated corpora. For instance, {\sc reg} algorithms have been evaluated in this fashion using set overlap metrics \cite{Viethen2007,Deemter2012}. Also relevant in this connection is the {\sc pyramid} method \cite{nenkova2004} for summarisation, which relies on the identification of the content units (which maximally correspond to clauses) in multiple human summaries. These are weighted and ordered by their frequency of mention by human summarisers. A candidate summary is scored as the ratio of the weight of the content units it includes to the weight of an ideal summary containing the same number of content units \cite<see>[for discussion]{Nenkova2011}.
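The {\sc pyramid} scoring rule just described is simple to state directly. The following Python sketch follows the description given here, with content units weighted by the number of human summaries that mention them; it is illustrative only, and abstracts away the hard, manual part, namely identifying and matching the content units themselves:

```python
def pyramid_score(candidate_scus, scu_weights):
    """Pyramid score: the weight of the content units expressed by the
    candidate, relative to the weight of an ideal summary containing the
    same number of units.

    scu_weights maps each Summarization Content Unit (SCU) to the number
    of human summaries mentioning it; candidate_scus is the set of SCUs
    the candidate summary expresses."""
    observed = sum(scu_weights.get(scu, 0) for scu in candidate_scus)
    # An ideal summary of the same size draws on the most heavily
    # weighted units in the pyramid.
    ideal = sum(sorted(scu_weights.values(), reverse=True)[:len(candidate_scus)])
    return observed / ideal if ideal else 0.0
```

A candidate expressing only the most frequently mentioned units thus scores 1.0, regardless of its surface wording, which is what makes the method a direct measure of content determination rather than a surface-overlap proxy.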
Direct measurements of content overlap between generated and candidate outputs will likely increase, as automatic data-text alignment techniques make such `semantically transparent' corpora more readily available for end-to-end {\sc nlg} \cite<see e.g.,>[and the discussion in Section \ref{sec:datadriven}]{Chen2008,Liang2009}. An important development away from pure surface overlap is the use of semantic resources \cite<as in the case of {\sc meteor}, >{Lavie2007}, or word embeddings \cite<as in {\sc wmd, }>{Kusner2015}, to compute the proximity of output to reference texts beyond literal string overlap. In a comparative evaluation of metrics for image captioning, \citeA{Kilickaya2017} found an advantage for {\sc wmd} compared to other metrics. \subsubsection{Evaluating Genre Compatibility and Stylistic Effectiveness}\label{sec:genre-eval} A slightly different question that has occasionally been posed in evaluation studies asks whether the linguistic artefact produced by a system is a recognisable instance of a particular genre or style. As noted in Section \ref{sec:style}, it is difficult to ascertain to what extent readers actually perceive subtle stylistic variation. Thus, \citeA{Mairesse2011} found inconsistent perceptions of personality in the evaluation of {\sc personage}, which was complicated by the fact that stylistic features interact and may cancel each other out. Genre perception is a central question for approaches to generating creative language (see Section \ref{sec:creativity}). For example, \citeA{Hardcastle2008} describe an evaluation of a generation system for cryptic crossword clues based on a Turing test in which the objective was to determine whether the system's outputs were recognisably different from human-authored clues. 
In a related vein, when evaluating the {\sc jape} joke generation system, \citeA[see Section \ref{sec:puns}]{Binsted1997} presented 120 children aged 8--11 with a number of punning riddles, some automatically generated by {\sc jape} and some selected from joke books. They also included a number of non-joke controls, such as: \begin{example} What do you get when you cross a horse and a donkey? \\ {\it A mule} \end{example} For each stimulus that they were exposed to, children were asked to indicate whether they thought it was a joke, and how funny they considered it. The results revealed that computer-generated riddles were recognised as jokes, and considered funnier than non-jokes. Interestingly, the joke children rated highest was automatically generated by {\sc jape} (we urge the reader to inspect the original paper), although in general, human-produced jokes were considered funnier by children than automatically generated ones. In this evaluation study, therefore, an extrinsic aspect of the generated text, concerning its efficacy (here, its `funniness') was found to be correlated with its recognisability as an instance of the target genre. \citeA{petrovic2013unsupervised} evaluated their unsupervised approach to joke generation by harvesting human-written jokes from Twitter, conforming to the {\em I like my X \textellipsis} template used by their system. Blind ratings by human judges of human-written and automatically generated jokes showed that their best-performing model was rated as funny in 16\% of cases, compared to 33\% of the human jokes (itself a relatively low rate).
While the questions posed in these studies clearly have an intrinsic orientation (`Is the text compatible with the expected genre conventions?'), they also have a bearing on extrinsic factors, since the ability to recognise an artefact as an instance of a genre or as exhibiting a certain style or personality is arguably one of the sources of its impact, which in turn includes judgements of whether a text is funny or interesting, for example. Of course, the intention behind variation in style, personality or affect may well be to ultimately increase effectiveness in achieving some ulterior goal. Indeed, any {\sc nlg} system intended to be embedded in a specific environment will need to address stylistic and genre-based issues. For example, our hypothetical weather report generator might use a very brief, technical style given its professional pool of target users \cite<as was the case with S{\sc um}T{\sc ime}>{Reiter2005}; in contrast, weather reports intended for public consumption, such as those in the W{\sc eather}G{\sc ov} corpus, would probably be longer and less technical \cite{Angeli2010}. However, there is a difference between evaluating whether genre constraints or stylistic variation help contribute to a goal, and evaluating whether the text actually exhibits the desired variation. For example, \citeA{Mairesse2011} evaluated the {\sc personage} system (see Section \ref{sec:style}) by asking users to judge personality traits as reflected in generated dialogue fragments (rather than, say, measuring whether users were more likely to eat at a restaurant if this was recommended by a configuration of the system with a high degree of extraversion). This is similar in spirit to the question about jokehood asked by \citeA{Binsted1997}, in contrast to the more explicitly extrinsic evaluation of the {\sc standup} joke generator by \citeA{Waller2009}, which asked whether the system actually helped users improve their interactions with peers.
\subsection{Extrinsic Evaluation Methods}\label{sec:extrinsic} In contrast to intrinsic methods, extrinsic evaluations measure effectiveness in achieving a desired goal. In the example scenario of Figure \ref{fig:eval-scenario}, such an evaluation might address the impact on planning by the engineers who are the target users of the system. Clearly, `effectiveness' is dependent on the application domain and purpose of a system. Examples include: \begin{itemize} \item persuasion and behaviour change, for example, through exposure to personalised smoking cessation letters \cite{Reiter2003}; \item purchasing decision after presentation of arguments for and against options on the housing market based on a user model \cite{Carenini2006}; \item engagement with ecological issues after reading blogs about migrating birds \cite{Siddharthan2012}; \item decision support in a medical setting following the generation of patient reports \cite{Portet2009,Hunter2012}; \item enhancing linguistic interaction among users with complex communication needs via the generation of personal narratives \cite{Tintarev2016}; \item enhancing learning efficacy in tutorial dialogue \cite{Dieugenio2005,Fossati2015,Boyer2011,Lipschultz2011,Chi2014} \end{itemize} While questionnaire-based or self-report studies can be used to address extrinsic criteria \cite<e.g.,>{Hunter2012,Siddharthan2012,Carenini2006}, in many cases evaluation relies on some objective measure of performance or achievement. This can be done with the target users {\em in situ}, enhancing the ecological validity of the study, but can also take the form of a task that models the scenarios for which the {\sc nlg} system has been designed.
Thus, in the {\sc give} Challenge \cite{Striegnitz2011}, in which {\sc nlg} systems generated instructions for a user to navigate through a virtual world, a large-scale task-based evaluation was carried out by having users play the {\sc give} game online, while various indices of success were logged, including the time it took a user to complete the game. {\sc reg} algorithms, whose goal was to generate identifying descriptions of objects in visual domains, were evaluated in part based on the time it took readers to identify a referent based on a generated description, as well as their error rate \cite{Gatt2010}. {\sc skillsum}, a system to generate feedback reports from literacy assessments, was evaluated by measuring how users' self-assessment of their own literacy skills improved after reading generated feedback, compared to control texts \cite{Williams2008}. A potential drawback of extrinsic studies, in addition to time and expense, is a reliance on an adequate user base (which can be difficult to obtain when users have to be sampled from a specific population, such as the engineers in our hypothetical scenario in Figure \ref{fig:eval-scenario}) and the possibility of carrying out the study in a realistic setting. Such studies also raise significant design challenges, due to the need to control for intervening and confounding variables, comparing multiple versions of a system (e.g. in an ablative design; see Section \ref{sec:glassbox} below), or comparing a system against a gold standard or baseline. For example, \citeA{Carenini2006} note that evaluating the effectiveness of arguments presented in text needs to take into account aspects of a user's personality which may impact how receptive they are to arguments in the first place. An example of the trade-off between design and control issues and ecological validity is provided by the BabyTalk family of systems.
A pilot system called {\sc bt-45} \cite{Portet2009}, which generated patient summaries from 45-minute spans of historical patient data, was evaluated in a task involving nurses and doctors, who chose from among a set of clinical actions to take based on the information given. These were then compared to `ground truth' decisions by senior neonatal experts. This evaluation was carried out off-ward; hence, subjects took clinical decisions in an artificial environment without direct access to the patient. On the other hand, in the evaluation of {\sc bt-nurse}, a successor to {\sc bt-45} which summarised patient data collected over a twelve-hour shift \cite{Hunter2012}, the system was evaluated on-ward using live patient data, but ethical considerations precluded a task-based evaluation. For the same reasons, comparison to `gold standard' human texts was also impossible. Hence, the evaluation elicited judgements, both on intrinsic criteria such as understandability and accuracy and on extrinsic criteria such as perceived clinical utility \cite<see>[for a similarly indirect extrinsic measure of impact, this time in an ecological setting]{Siddharthan2012}. \subsection{Black Box Versus Glass Box Evaluation}\label{sec:glassbox} With the exception of evaluations of specific modules or algorithms, as in the case of {\sc reg} or surface realisers, most of the evaluation studies discussed so far would be classified as `black box' evaluations of `end-to-end', or complete, {\sc nlg} systems. In a `glass box' evaluation, on the other hand, it is the contribution of individual components that is under scrutiny, ideally in a setup where versions of a system with and without a component are evaluated in the same manner. Note that the distinction between black box and glass box evaluation is orthogonal to the question of which methods are used. 
An excellent example of a glass-box evaluation is that by \citeA{Callaway2002}, who used an ablative design, eliciting judgements of the quality of the output of their narrative generation system based on different configurations that omitted or included key components. In a related vein, \citeA{Elliott2013} compared image-to-text models that included fine-grained dependency representations of spatial as well as linguistic dependencies, to models with a coarser-grained image representation, finding an advantage for the former. However, exhaustive component-wise comparisons are sometimes difficult to make and may result in a combinatorial explosion of configurations, with a concomitant reduction in data points collected per configuration (assuming subjects are limited and need to be divided among different conditions) and a reduction in statistical power. Alternatives do exist in the literature. \citeA{Reiter2003} elicited judgements on weather forecasts using human and machine-generated texts, together with a `hybrid' version where the content was selected by forecasters, but the language was automatically generated. This enabled a comparison of human and automatic content selection. \citeA{Angeli2010} used corpus-based and subjective measures to assess linguistic quality, coupled with precision and recall-based measures to assess content determination of their statistical system against human-annotated texts. In {\sc bt-nurse} \cite{Hunter2012}, nurses were prompted for free text comments (in addition to answering a questionnaire targeting extrinsic dimensions), which were then manually annotated and analysed to determine which elements of the system were potentially problematic. \subsection{On the Relationship Between Evaluation Methods} To what extent are the plethora of methods surveyed -- from extrinsic, task-oriented to intrinsic ones relying on automatic metrics or human judgements -- actually related? 
It turns out that multiple evaluation methods seldom give converging verdicts on a system, or on the relative ranking of a set of systems under comparison. \subsubsection{Metrics Versus Human Judgements} Although corpus-based metrics used in {\sc mt} and summarisation are typically validated by demonstrating their correlation with human ratings, meta-evaluation studies in these fields have suggested that the correspondence is somewhat weak \cite<e.g.,>{Dorr2004,Callison-Burch2006,Caporaso2008}. Similarly, shared task evaluations on referring expression generation showed that corpus-based, judgement-based and experimental or task-based methods frequently do not correlate \cite{Gatt2010}. In their recent review \citeA{Bernardi2016} note a similar issue in image captioning system evaluation. Thus, \citeA{Kulkarni2013} found that their image description system did not outperform two earlier methods \cite{Farhadi2010,Yang2011} on {\sc bleu} scores; however, human judgements indicated the opposite trend, with readers preferring their system \cite<similar observations are made by>{Kiros2014}. \citeA{Hodosh2013} compared the agreement (measured by Cohen's $\kappa$) between human judgements and {\sc bleu} or {\sc rouge} scores for retrieved captions, finding that outputs were not ranked similarly by humans and metrics, unless the retrieved captions were identical to the reference captions. On occasion, the correlation between a metric and human judgements appears to differ across studies, suggesting that metric-based results are highly susceptible to variation due to generation algorithms and datasets. 
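The agreement statistic mentioned above, Cohen's $\kappa$, corrects the raw agreement rate between two raters for the agreement expected by chance, given each rater's label distribution. A minimal Python sketch of the standard definition (not the evaluation code used in any of the studies cited) is:

```python
from collections import Counter


def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over the same items:
    (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance."""
    assert ratings_a and len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: proportion of items given identical labels.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement under independence, from each rater's marginals.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(freq_a) | set(freq_b))
    if p_e == 1.0:
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

The chance correction is what distinguishes $\kappa$ from simple percent agreement: two raters who agree half the time while using two labels with equal frequency score $\kappa = 0$, not $0.5$.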
For instance, \citeA{Konstas2013} (discussed in Section \ref{sec:parsing} above) find that on corpus-based metrics, the best-performing version of their model does not outperform that of \citeA{Kim2010} on the {\sc robocup} domain, or that of \citeA{Angeli2010} on their weather corpus ({\sc weathergov}), though it performs better than \citeauthor{Angeli2010}'s on the noisier {\sc atis} travel dataset. However, an evaluation of fluency and semantic correctness, based on human judgements, showed that the system outperformed, by a small margin, both \citeauthor{Kim2010}'s and \citeauthor{Angeli2010}'s on both measures in all domains with the exception of {\sc weathergov}, where \citeauthor{Angeli2010}'s system did marginally better. In a related vein, \citeA{Elliott2015} compare their image captioning system, based on visual dependency relations, to the Bidirectional {\sc rnn} developed by \citeA{Karpathy2015}, on two different datasets. The two systems were close to each other on the {\sc vlt2k} dataset, but not on Pascal1k, a result that the authors claim is due to {\sc vlt2k} containing more pictures involving actions. As for the relationship between metrics and human judgements, \citeA{Elliott2013} concluded that {\sc meteor} correlates better than {\sc bleu} \cite<see>[for a systematic comparison of automatic metrics in this domain]{Elliott2014}, a finding also confirmed in their later work \cite{Elliott2015}, as well as in the {\sc ms-coco} Evaluation Challenge, which found that {\sc meteor} was more robust. However, work by \citeA{Kuznetsova2014a} showed variable results; their highest-scoring method as judged by humans, involving tree composition, was ranked higher by {\sc bleu} than by {\sc meteor}. 
In the {\sc ms-coco} Evaluation Challenge, some systems outperformed a human-human upper bound when compared to reference texts using automatic metrics, but no system reached this level in an evaluation based on human judgements \cite<see>[for further discussion]{Bernardi2016}. Some studies have explicitly addressed the relationship between methods as a research question in its own right. An important contribution in this direction is the study by \citeA{Reiter2009a}, which addressed the validity of corpus-based metrics in relation to human judgements, within the domain of weather forecast generation \cite<a similar study has recently been conducted on image captioning; see>{Elliott2014}. In a first experiment, focussing on linguistic quality, the authors found a high correlation between expert and non-expert readers' judgements, but the correlation between human judgements and the automatic metrics varied considerably (from $0.3$ to $0.87$), depending on the version of the metric used and whether the reference texts were included in the comparison by human judges. The second experiment evaluated both linguistic quality, by asking human judges to rate clarity/readability; and content determination, by eliciting judgements of accuracy/appropriateness (by comparing texts to the raw data). The automatic metrics correlated significantly with judgements of clarity, but far less with accuracy, suggesting that they were better at predicting the linguistic quality than correctness. Other studies have yielded similarly inconsistent results. In a study on paraphrase generation, \citeA{Stent2005a} found that automatic metrics correlated highly with judgements of adequacy (roughly akin to accuracy), but not fluency. 
By contrast, \citeA{Espinosa2010} found that automatic metrics such as {\sc nist}, {\sc meteor} and {\sc gtm} correlate moderately well with human fluency and adequacy judgements of English surface realisation quality, while \citeA{Cahill2009} reported only a weak correlation for German surface realisation. \citeA{Wubben2012}, comparing text simplification strategies, found low, but significant correlations between {\sc bleu} and fluency judgements, and a very low, {\em negative} correlation between {\sc bleu} and adequacy. These contrasting findings suggest that the relationship between metrics and human judgements may depend on the purpose and genre of the text under consideration; for example, \citeA{Reiter2009a} used weather reports, while \citeA{Wubben2012} used Wikipedia articles. Various factors can be adduced to explain the inconsistency of these meta-evaluation studies: \begin{enumerate} \item Metrics such as {\sc bleu} are sensitive to the length of the texts under comparison. With shorter texts, n-gram based metrics are likely to result in lower scores. \item The type of overlap matters: for example, many evaluations in image captioning rely on {\sc bleu-1} \cite<though>[were among the first to experiment with longer n-grams]{Elliott2013,Elliott2014}, but longer n-grams are harder to match, though they capture more syntactic information and are arguably better indicators of fluency. \item Semantic variability is an important issue. Generated texts may be similar to reference texts, but differ on some near-synonyms, or subtle word order variations. As shown in Table \ref{table:intrinsic-metrics}, some metrics are designed to partially address these issues. \item Many intrinsic corpus-based metrics are designed to compare against multiple reference texts, but this is not always possible in {\sc nlg}. 
For example, while image captioning datasets typically contain multiple captions per image (usually around 5), this is not the case in other domains, like weather reporting or restaurant recommendations. \end{enumerate} The upshot is that {\sc nlg} evaluations increasingly rely on multiple methods, a trend that is equally visible in other areas of {\sc nlp}, such as {\sc mt} \cite{Callison-Burch2007,Callison-Burch2008}. \subsubsection{Using Controlled Experiments} A few studies have validated evaluation measures against experimental data. For example, \citeA{Siddharthan2012a} compared the outcomes of their magnitude estimation judgement study (see Section \ref{sec:intrinsic} above) to the results from a sentence recall task, finding that the results from the latter are largely consistent with judgements and concluding that they can substitute for task-based evaluations to shed light on breakdowns in comprehension at sentence level. A handful of studies have also used behavioural experiments and compared `online' processing measures, such as reading time of referring expressions, to corpus-based metrics \cite<e.g.>{Belz2010}. Correlations with automatic metrics are usually poor. A somewhat different use of reading times was made by \citeA{Lapata2006}, who used them as an objective measure against which to validate Kendall's $\tau$ as a metric for assessing information ordering in text (an aspect of text structuring). In a recent study, \citeA{Zarriess2015} compared generated texts to human-authored and `filler' texts (which were manually manipulated to compromise their coherence). They found that reading-time measures were more useful to distinguish these classes of texts than offline measures based on elicited judgements of fluency and clarity. 
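The meta-evaluation studies discussed above typically reduce to computing a correlation between a corpus-based metric and human ratings over a set of outputs. The following is a minimal sketch of that procedure; the outputs, references and ratings are invented for illustration, and the metric is a simplified unigram precision in the spirit of {\sc bleu-1} rather than any full metric implementation:

```python
# Sketch of a metric-vs-judgement meta-evaluation, with invented
# system outputs, reference texts and human ratings.
from collections import Counter
from math import sqrt

def ngram_precision(candidate, reference, n=1):
    """Fraction of candidate n-grams also found in the reference
    (clipped counts, as in BLEU's modified n-gram precision)."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return overlap / total if total else 0.0

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical system outputs, references, and human fluency ratings.
outputs = ["cloudy with light rain", "rain expected later today", "sunny spells"]
references = ["cloudy with occasional light rain",
              "light rain expected later in the day",
              "sunny spells in the afternoon"]
ratings = [4.5, 3.8, 4.1]

scores = [ngram_precision(o, r, n=1) for o, r in zip(outputs, references)]
print(pearson(scores, ratings))  # how well does the metric track the judges?
```

Real studies differ mainly in the metric used, the rank- versus value-based correlation statistic (Spearman's $\rho$ is common), and whether correlation is computed per output or per system.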
\subsection{Evaluation: Concluding Remarks} Against the background of this section, three main conclusions can be drawn: \begin{enumerate} \item There is a widespread acceptance of the necessity of using multiple evaluation methods in {\sc nlg}. While these are not always consistent among themselves, they are useful in shedding light on different aspects of quality, from fluency and clarity of output, to adequacy of semantic content and effectiveness in achieving communicative intentions. The choice of method has a direct impact on the way in which results can be interpreted. \item Meta-evaluation studies have yielded conflicting results on the relationship between human judgements, behavioural measures and automatically computed metrics. The correlation among them varies depending on task and application domain. This is a subject of ongoing research, with plenty of studies focussing on the reliability of metrics and their relationship to other measures, especially human judgements. \item A question that remains under-explored concerns the dimensions of quality that are themselves the object of inquiry. (In this connection, it is worth noting that some kindred disciplines have sought to de-emphasise their role on the grounds that they are inconsistent; see \citeauthor{Callison-Burch2008}, 2008, among others). For example, what are people judging when they judge fluency or adequacy and how consistently do they do so? It is far from obvious whether these judgements should really be expected to correlate with other measures, given that the latter are {\em producer-}oriented, focussing on output, while judgements are themselves often {\em receiver-}oriented, focussing on how the output is read or processed \cite<for a related argument, see>{Oberlander1998}. 
Furthermore, while meta-linguistic judgements can be expected to reflect the impact of a text on its readers, there is nevertheless the possibility that behavioural, online methods designed to directly investigate aspects of processing would yield a different picture, a result that has been obtained in some psycholinguistic studies \cite<e.g.>{Engelhardt2006}. \end{enumerate} In conclusion, our principal recommendation to {\sc nlg} practitioners, where evaluation is concerned, is to err in favour of diversity, by using multiple methods, as far as possible, and reporting not only their results, but also the correlation between them. Weak correlations need not imply that the results of a particular method are invalid. Rather, they may indicate that measures focus on different aspects of a system or its output. \section{Variation: Generating Text with Style, Personality and Affect}\label{sec:style} Based on the preceding sections, the reader could be excused for thinking that {\sc nlg} is mostly concerned with delivering factual information, whether this is in the form of a summary of weather data, or a description of an image. This bias was also flagged in the Introduction, where we gave a brief overview of some domains of application, and noted that informing was often, though not always, the goal in {\sc nlg}. Over the past decade or so, however, there has been a growing trend in the {\sc nlg} literature to also focus on aspects of textual information delivery that are arguably non-propositional, that is, features of text that are not strictly speaking grounded in the input data, but are related to the manner of delivery. In this section, we focus on these trends, starting with the broad concept of `stylistic variation', before turning to generation of affective text and politeness. \subsection{Generating with Style: Textual Variation and Personality} What does the term `linguistic style' refer to? 
Most work on what we shall refer to as `stylistic {\sc nlg}' shies away from a rigorous definition, preferring to operationalise the notion in the terms most relevant to the problem at hand. `Style' is usually understood to refer to features of lexis, grammar and semantics that collectively contribute to the identifiability of an instance of language use as pertaining to a specific author, or to a specific situation (thus, one distinguishes between levels of stylistic formality, or speaks of the distinctive characteristics of the style of William Faulkner). This implies that any investigation of style must concern itself, at least in part, with {\em variation} among the features that mark such authorial or situational variables. In line with this usage, this section reviews developments in {\sc nlg} in which variation is the key concern, usually at the {\em tactical}, rather than the strategic, level, the idea being that a given piece of information can be imparted in linguistically distinct ways \cite<cf.>{Sluis2010}. This strategy was, for example, explicitly adopted by \citeA{Power2003}. Given its emphasis on linguistic features, controlling style (however it is defined) is a problem of great interest for {\sc nlg} since it directly addresses issues of {\em choice}, which are arguably the hallmark of any {\sc nlg} system \cite<cf.>{Reiter2010}. Early contributions in this area defined stylistic features using rules to vary generation according to pragmatic or stylistic goals. For example, \citeA{McDonald1985} argued that ``prose style is a consequence of what decisions are made during the transition from the conceptual representation level to the linguistic level'' (p. 61), thereby placing the problem within the domain of sentence planning and realisation. This stance was also adopted by \citeA{Dimarco1993}, who focus on syntactic variation, proposing a stylistic grammar for English and French. 
\citeA{Sheikha2011} proposed an adaptation of the SimpleNLG realiser \cite{Gatt2009} to handle formal versus informal language, via specific features, such as contractions ({\em are not} vs. {\em aren't}) and lexical choice. A related perspective on stylistic variation was adopted by \citeA{Walker2002}, in their description of how the {\sc sp}o{\sc t} sentence planner was adapted to learn strategies for different communicative goals, as reflected in the rhetorical and syntactic structures of the sentence plans. The planner was trained using a boosting technique to learn correlations between features of sentence plans and human ratings of the adequacy of a sample of outputs for different communicative goals. Like \citeA{Walker2002}, contemporary approaches to stylistic variation have tended to eschew rules in favour of data-driven methods to identify relevant features and dimensions of variation from corpora, in what might be thought of as an {\em inductive} view of style, where variation is characterised by the distribution of whatever linguistic features are considered relevant. An important precedent for this view is Biber's corpus-based multidimensional approach to style and register variation \cite{Biber1988}, roughly a contemporary of the grammar-inspired approach of \citeA{Dimarco1993}. Biber's model was at the heart of work by \citeA{Paiva2005}, which exhibits some characteristics in common with the `global' statistical approaches to {\sc nlg} discussed in Section \ref{sec:datadriven}, insofar as it exploits statistics to inform decision-making at the relevant choice points, rather than to filter the outputs of an overgeneration module. \citeA{Paiva2005} used a corpus of patient information leaflets, conducting factor analysis on their linguistic features to identify two stylistic dimensions. They then allowed their system to generate a large number of texts, varying its decisions at a number of choice points (e.g. 
choosing a pronoun versus a full {\sc np}) and maintaining a trace. Texts were then scored on the two stylistic dimensions, and a linear regression model was developed to predict the score on a dimension based on the choices made by the system. This model was used during testing to predict the best choice at each choice point, given a desired style. Style, however, is a global feature of a text, though it supervenes on local decisions. These authors solved the problem by using a best-first search algorithm to identify the {\em series} of local decisions, as scored by the linear models, that was most likely to maximise the desired stylistic effect, yielding variations such as the following \cite<examples from>[p. 61]{Paiva2005}: \begin{examples} \item The dose of the patient's medicine is taken twice a day. It is two grams. \item The two-gram dose of the patient's medicine is taken twice a day. \item The patient takes the two-gram dose of the patient's medicine twice a day. \end{examples} Some authors \cite<e.g.,>[on which more below]{Mairesse2011} have noted that certain features, once selected, may `cancel' or obscure the stylistic effect of other features. This raises the question whether style can in fact be modelled as a linear, additive phenomenon, in which each feature contributes to an overall perception of style independently of others (modulo its weight in the regression equation). A second question is whether stylistic variation could be modelled in a more specific fashion, for example, by tailoring style to a specific author, rather than to generic dimensions related to `formality', `involvement' and so on. For instance, a corpus-based analysis of human-written weather forecasts by \citeA{Reiter2005} found that lexical choice varies in part based on the author. 
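The linear, additive view of style just discussed can be made concrete with a small sketch in the spirit of the regression-and-search approach: each option at a tactical choice point carries a learned weight on a stylistic dimension, and the generator searches for the combination of options whose predicted score is closest to a target. All choice points, weights and target values below are invented, and exhaustive enumeration stands in for the best-first search used in the original work:

```python
# Hypothetical sketch of regression-scored stylistic control: every
# tactical option contributes a learned weight to a predicted style
# score, and we pick the option combination closest to the target.
from itertools import product

# Invented choice points and per-option regression weights on a single
# "formality" dimension (negative = informal, positive = formal).
CHOICES = {
    "reference":   {"pronoun": -0.4, "full_np": 0.5},
    "voice":       {"active": -0.2, "passive": 0.6},
    "aggregation": {"split": -0.1, "merge": 0.3},
}
INTERCEPT = 0.0

def predicted_style(options):
    """Linear, additive style model: intercept plus option weights."""
    return INTERCEPT + sum(CHOICES[cp][opt] for cp, opt in options.items())

def best_plan(target):
    """Enumerate all option combinations (feasible for few choice points)
    and return the one whose predicted score is closest to the target."""
    names = list(CHOICES)
    best, best_gap = None, float("inf")
    for combo in product(*(CHOICES[n] for n in names)):
        plan = dict(zip(names, combo))
        gap = abs(predicted_style(plan) - target)
        if gap < best_gap:
            best, best_gap = plan, gap
    return best

print(best_plan(target=1.4))  # most formal combination
```

With many choice points the combination space explodes, which is precisely why a heuristic search over the series of local decisions is needed in practice.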
One line of work has investigated this using corpora of referring expressions, such as the {\sc tuna} Corpus \cite{Deemter2012}, in which multiple referring expressions by different authors are available for a given input domain. For instance, \citeA{Bohnet2008} and \citeA{DiFabbrizio2008} explore statistical methods to learn individual preferences for particular attributes, a strategy also used by \citeA{Viethen2010}. \citeA{Hervas2013} use case-based reasoning to inform lexical choice when realising a set of semantic attributes for a referring expression, where the case base differentiates between authors in the corpus to take individual lexicalisation preferences into account \cite<see also>{Hervas2016}. A more ambitious view of individual variation is present in the work of \citeA{Mairesse2010,Mairesse2011}, in the context of {\sc nlg} for dialogue systems. Here, the aim is to vary the output of a generator so as to project different personality traits. Similar to the model of \citeA{Biber1988}, personality is here given a multidimensional definition, via the classic `Big 5' model \cite<e.g.,>{John1999}, where personality is a combination of five major traits (e.g. introversion/extraversion). However, while stylistic variation is usually defined as a linguistic phenomenon, the linguistic features of personality are only indirectly reflected in speaking or writing \cite<a hypothesis underlying much work on detection of personality and other features in text, including>{Oberlander2006,Argamon2007,Schwartz2013a,Youyou2015}. \citeauthor{Mairesse2011}'s {\sc personage} system, originally based on rules derived from an exhaustive review of psychological literature \cite{Mairesse2010}, was developed in the restaurant domain. The subsequent, data-driven version of the system \cite{Mairesse2011} takes as input a pragmatic goal and, like the system of \citeA{Paiva2005}, a list of real-valued style parameters, this time representing scores on the five personality traits. 
The system estimates generation parameters for stylistic features based on the input traits, using machine-learned models acquired from a dataset pairing sample utterances with human personality judgements. For example, an utterance reflecting high extraversion might be more verbose and involve more use of expletives (\ref{ex:mairesse1}), compared to a more introverted style, which might demonstrate more uncertainty, for example through the use of stammering and hedging (\ref{ex:mairesse2}). \begin{examples} \item \label{ex:mairesse1} Kin Khao and Tossed are bloody outstanding. Kin Khao just has rude staff. Tossed features sort of unmannered waiters, even if the food is somewhat quite adequate. \item \label{ex:mairesse2}Err... I am not really sure. Tossed offers kind of decent food. Mmhm... However, Kin Khao, which has quite ad-ad-adequate food, is a thai place. You would probably enjoy these restaurants. \end{examples} An interesting outcome of the evaluation with human subjects reported by \citeA{Mairesse2011} is that readers vary significantly in their judgements of what personality is actually reflected by a given text. This suggests that the relationship between such psychological features and their linguistic effects is far from straightforward. \citeA{Walker2011:Arboretum} compared the `Big 5' model incorporated in the rule-based version of {\sc personage}, to a corpus-based model drawn from character utterances in film scripts. These models were used to generate utterances for characters in an augmented reality game; their main finding was that modelling characters' style directly using corpora of utterances results in more specific and easily perceived traits than using a model based on personality traits, where the relationship between personality and individual style is more indirect. 
In another set of experiments on generating utterances for characters in a role-playing game, \citeA{Walker2011:Film} report the successful porting of {\sc personage} to the new domain by tuning some of its parameters on features identified in film dialogue. Models learned from film corpora were found to be close in style to the characters they were actually based on. \subsection{Generating with Feeling: Affect and Politeness}\label{sec:affect} Personality is usually thought of in terms of {\em traits}, which are relatively stable across time. However, language use may vary not only across individuals, as a function of their stable characteristics, but also within individuals across time, as a function of their more transient affective {\em states}. `Affective {\sc nlg}' \cite<a term due to>{Rosis2000} is concerned with variation that reflects emotional states which, unlike personality traits, are relatively transient. In this case, the goals can be twofold: (i) to induce an emotional state in the receiver; or (ii) to reflect the emotional state of the producer. As in the case of personality, the relationship between emotion and language is far from clear, as noted by \citeA{Belz2003}. For one thing, it isn't clear whether only surface linguistic choices need be affected. Some authors have argued that a text's affective impact impinges on content selection; this stance has been adopted, for example, in some applications in e-health where reporting of health-related issues should be sensitive to their potential emotional impact \cite{DiMarco2007,Mahamood2011}. Most work on affective {\sc nlg} has however focussed on tactical choices \cite<e.g.>{Hovy1988,Fleischman2002,Strong2007,VanDeemter2008,Keshtkar2011}. 
Various linguistic features that can have emotional impact have been identified, from the increased use of redundancy to enhance understanding of emotionally laden messages \cite{Walker1995,Rosis2000}, to the increased use of first-person pronouns and adverbs, as well as sentence ordering to achieve emphasis or reduce adverse emotional impact \cite{Rosis2000}. This research on affective {\sc nlg} relies on models of emotion of various degrees of complexity and cognitive plausibility. The common trend underlying all these approaches, however, is that emotional states should impact lexical, syntactic and other linguistic choices. The question then is to what extent such choices are actually perceived by readers or users of a system. In an empirical study, \citeA{Sluis2010} reported on two experiments investigating the effect of various tactical decisions on the emotional impact of text on readers. In one experiment, texts gave a (fake) report to participants on their performance on an aptitude test, with manually induced variations, such as these: \begin{examples} \item {\em Positive slant}: On top of this you also outperformed most people in your age group with your exceptional scores for Imagination and Creativity (7.9 vs 7.2) and Logical-Mathematical Intelligence (7.1 vs. 6.5). \item {\em Neutral/factual slant}: You did better than most people in your age group with your scores for Imagination and Creativity (7.9 vs 7.2) and Logical-Mathematical Intelligence (7.1 vs. 6.5). \end{examples} Evaluation of these texts showed that the extent to which affective tactical decisions influence hearers' emotional states is dependent on a host of other factors, including the degree to which the reader is directly implicated in what the text says (in the case of an aptitude test, the reader would be assumed to feel the outcomes have personal relevance). 
An important question raised by this study is how affect should be measured: \citeA{Sluis2010} used a standardised self-rating questionnaire to estimate changes in affect before and after reading a text, but the best way to measure emotion remains an open question. The emotional slant in the language used by an author or speaker may have implications for the degree to which the listener or reader may feel `impinged upon'. This becomes particularly relevant in interactive systems, where {\sc nlg} components are generating language in the context of dialogue. Consider, for example, the difference between these requests: \begin{examples} \item\label{ex:direct}{\em Direct strategy}: Chop the tomatoes! \item\label{ex:approval}{\em Approval strategy}: Would it be possible for you to chop the tomatoes? \item\label{ex:autonomy}{\em Autonomy strategy}: Could you possibly chop the tomatoes? \item\label{ex:indirect}{\em Indirect strategy}: The tomatoes aren't chopped yet. \end{examples} The four strategies exemplified above come across as having varying degrees of politeness which, according to one influential account \cite{BrownLevinson1987}, depends on {\em face}. {\em Positive face} reflects the speaker's desire that some of her goals be shared with her interlocutors; {\em negative face} refers to the speaker's desire not to have her goals impinged upon by others. The connection with affect that we suggested above hinges on these distinctions: different degrees of politeness reflect different degrees of `threat' to the listener; hence, generating language based on the right face strategy could be seen as a branch of affective {\sc nlg}. In an early, influential proposal, \citeA{Walker1997a} proposed an interpretation of the framework of \citeA{BrownLevinson1987} in terms of the four dialogue strategies exemplified in (\ref{ex:direct} -- \ref{ex:indirect}) above. 
Subsequently, \citeA{Moore2004} used this framework in the generation of tutorial feedback, where a discourse planner used a Bayesian network to inform linguistic choices compatible with the target politeness/affect value in a given context \cite<see>[for a related approach]{Johnson2004}. \citeA{Gupta2007} also used the four dialogue strategies identified by \citeA{Walker1997a} in the {\sc p}o{\sc lly} system, which used {\sc strips}-based planning to generate a plan distributed among two agents in a collaborative task \cite<see also>{Gupta2008}. An interesting finding in their evaluation is that perception of face-threat depends on the speech act; for example, requests can be more threatening. \citeA{Gupta2007} also note possible cultural differences in perception of face threat (in this case, between {\sc uk} and Indian participants). \subsection{Stylistic Control as a Challenge for Neural {\sc nlg}} In the past few years, stylistic -- and especially affective -- {\sc nlg} has witnessed renewed interest by researchers working on neural approaches to generation. The trends that can be observed here mirror those outlined in our general overview of deep learning approaches (Section \ref{sec:deep-learning}). A number of models focus on {\em response generation} (in the context of dialogue, or social media exchanges), where the task is to generate a response, given an utterance. Thus, these models fit well within the {\em seq2seq} or Encoder-Decoder framework (see Section \ref{sec:deep-learning} for discussion). Often, these models exploit social media data, especially from Twitter, a trend which goes back at least to \citeA{Ritter2011}, who adapted a Phrase-Based Machine Translation model to response generation. For example \citeA{Li2016} proposed a persona-based model in which the decoder {\sc lstm} is conditioned on embeddings obtained from tweets pertaining to individual speakers/authors. 
An alternative model conditions on both speaker and addressee profiles, with a view to incorporating not only the `persona' of the generator, but its variability with respect to different interlocutors. \citeA{Herzig2017}, also working on Twitter data, condition their decoder on personality features extracted from tweets based on the `Big Five' model, rather than on speaker-specific embeddings. This has the advantage of enabling the generator to be tuned to specific personality settings without re-training to adapt to a particular speaker style. While their personality-based model does not beat \citeauthor{Li2016}'s model, a human evaluation showed that judges were able to identify high-trait responses as more expressive than low-trait responses, suggesting that the conditioning was having a noticeable impact on style. In a dialogue context, \citeA{Ashgar2017} proposed to achieve affective responses on three levels: (a) by augmenting word embeddings with data from an affective dictionary; (b) by decoding with an affect-sensitive beam search; and (c) by training with an affect-sensitive loss function. On the other hand, a number of models condition an {\sc lstm} on attributes reflecting affective or personality traits, with a view to generating strings that express such traits. \citeA{Ghosh2017} used {\sc lstm}s trained on speech corpora conditioned on affect category and emotional intensity to drive lexical choice. \citeA{Hu2017} used variational auto-encoders and attribute discriminators to control the stylistic parameters of generated texts individually. They experimented on controlling sentiment and tense, but restricted the generation to sentences of up to 16 words. 
By contrast, \citeA{Ficler2017} extend the range of parameters used to condition the {\sc lstm}, with two content-related attributes (sentiment and theme) and four stylistic parameters (length, whether the text is descriptive, whether it has a personal voice, and whether the style is professional). Their generator is trained on a corpus of movie reviews. Similarly, \citeA{Dong2017} propose an attribute-to-sequence model for product review generation based on a corpus of Amazon user reviews \cite<see also>[for neural models for product review generation]{Lipton2016,Tang2016}. The conditioning includes the reviewer {\sc id}, reminiscent of the persona-based response model of \citeA{Li2016}; however, they also include the rating, which functions to modulate the affect in the output. Their model incorporates an attentional mechanism to concentrate on different parts of the input encoding when predicting the next word during decoding. For example, for a specific reviewer and a specific product, changing the input rating from 1 to 5 yields the following difference: \begin{examples} \item (Rating: 1) i’m sorry to say this was a very boring book. i didn’t finish it. i’m not a new fan of the series, but this was a disappointment \item (Rating: 5) this was a very good book. i enjoyed the characters and the story line. i’m looking forward to reading more in this series. \end{examples} \subsection{Style and Affect: Concluding Remarks} Controlling stylistic, affective and personality-based variation in {\sc nlg} is still in a rather fledgling state, with several open questions of both theoretical and computational import. 
Among these is the question of how best to model complex, multi-dimensional constructs such as personality or emotion; this question speaks both to the cognitive plausibility of the models informing linguistic choices, and to the practical viability of different machine learning strategies that could be leveraged for the task (for example, linear, additive models versus more `global' models of personality or style). Also important here is the kind of data used to inform generation strategies: as we have seen above, a lot of affective {\sc nlg} work relies on ratings by human judges. However, some recent work in affective computing has questioned the use of ratings, comparing them to ranking-based and physiological methods \cite<e.g.>{Martinez2014,Yannakakis2015}. This and similar research is probably of high relevance to {\sc nlg} researchers. Some recent work relied on automatic extraction of personality features using tools such as {\sc ibm}'s Personality Insights \cite{Herzig2017}. As such tools \cite<another example of which is Linguistic Inquiry and Word Count or {\sc liwc}, >{Pennebaker2007} become more reliable and widely available, we may see a turn towards less reliance on human elicitation. A second important question is which linguistic choices truly convey the intended variation to the reader or listener. While current systems use a range of devices, from aggregation strategies to lexical choice, it is not clear which ones are actually perceived as having the desired effect. A third important research avenue, which is especially relevant to interactive systems, is adaptivity, that is, the way speakers (or systems) alter their linguistic choices as a result of their interlocutors' utterances \cite{Clark1996a,Niederhoffer2002,Pickering2004}, a theme that has also begun to be explored in {\sc nlg} \cite{Isard2006,Herzig2017}. 
\section{Introduction}\label{sec:intro} In his intriguing story {\it The Library of Babel}\/ ({\it La biblioteca de Babel}\/, 1941), Jorge Luis Borges describes a library in which every conceivable book can be found. It is probably the wrong question to ask, but readers cannot help wondering: who wrote all these books? Surely, this could not be the work of human authors? The emergence of automatic text generation techniques in recent years provides an interesting twist to this question. Consider Philip M. Parker, who offered more than 100,000 books for sale via Amazon.com, including for example his {\it The 2007-2012 Outlook for Tufted Washable Scatter Rugs, Bathmats, and Sets That Measure 6-Feet by 9-Feet or Smaller in India}\/. Obviously, Parker did not write these 100,000 books by hand. Rather, he used a computer program that collects publicly available information, possibly packaged in human-written texts, and compiles these into a book. Just like the library of Babel contains many books that are unlikely to appeal to a broad audience, Parker's books need not find many readers. In fact, even if only a small percentage of his books get sold a few times, this would still make him a sizeable profit. Parker's algorithm can be seen to belong to a research tradition of so-called {\bf text-to-text generation} methods, applications that take existing texts as their input, and automatically produce a new, coherent text as output.
Other example applications that generate new texts from existing (usually human-written) text include: \begin{itemize} \item machine translation, from one language to another \cite<e.g.,>{hutchins1992,och2003}; \item fusion and summarization of related sentences or texts to make them more concise \cite<e.g.,>{Clarke2010}; \item simplification of complex texts, for example to make them more accessible for low-literacy readers \cite<e.g.,>{Siddharthan2014} or for children \cite{Macdonald2016}; \item automatic spelling, grammar and text correction \cite<e.g.,>{kukich1992techniques,dale2012hoo}; \item automatic generation of peer reviews for scientific papers \cite{bartoli2016your}; \item generation of paraphrases of input sentences \cite<e.g.,>{bannard2005paraphrasing,kauchak2006paraphrasing}; and \item automatic generation of questions, for educational and other purposes \cite<e.g.,>{brown2005automatic,rus2010overview}. \end{itemize} Often, however, it is necessary to generate texts which are not grounded in existing ones. Consider, as a case in point, the minor earthquake that took place close to Beverly Hills, California on March 17, 2014. The Los Angeles Times was the first newspaper to report it, within 3 minutes of the event, providing details about the time, location and strength of the quake. This report was automatically generated by a `robo-journalist', which converted the incoming automatically registered earthquake data into a text, by filling gaps in a predefined template text \cite{Slate2014}. Robo-journalism and associated practices, such as data journalism, are examples of what is usually referred to as {\bf data-to-text generation}. They have had a considerable impact in the fields of journalism and media studies \cite{VanDalen2012,Clerwall2014,Hermida2015}. 
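To make the idea concrete, the gap-filling strategy behind such robo-journalism can be sketched in a few lines of Python; the template wording and field names below are our own invention for illustration, not those of the actual Los Angeles Times system:

```python
from string import Template

# Hypothetical report template; wording and field names are invented
# for illustration and do not reproduce the actual 'Quakebot' system.
QUAKE_TEMPLATE = Template(
    "A magnitude $magnitude earthquake was reported $day at $time, "
    "$distance miles from $place, according to the USGS."
)

def generate_report(data):
    """Fill the predefined template with incoming sensor data."""
    return QUAKE_TEMPLATE.substitute(data)

report = generate_report({
    "magnitude": "4.4", "day": "Monday", "time": "6:25 a.m.",
    "distance": "5", "place": "Westwood, California",
})
print(report)
```

The text itself is fixed in advance; only the data-dependent gaps vary, which is what makes such reports fast to produce but repetitive in style.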
The technique used by the Los Angeles Times was not new; many applications have been developed over the years which automatically generate text from non-linguistic data including, but not limited to, systems which produce: \begin{itemize} \item soccer reports \cite<e.g.,>{Theune2001,Chen2008}; \item virtual `newspapers' from sensor data \cite{Molina2011} and news reports on current affairs \cite{Lepp2017}; \item text addressing environmental concerns, such as wildlife tracking \cite{Siddharthan2012,Ponnamperuma2013}, personalised environmental information \cite{Wanner2015}, and enhancing engagement of citizen scientists via generated feedback \cite{VanderWal2016}; \item weather and financial reports \cite{Goldberg1994,Reiter2005,Turner2008a,Ramos-Soto2015,Plachouras2016}; \item summaries of patient information in clinical contexts \cite{Huske-Kraus2003,Harris2008,Portet2009,Gatt2009,Banaee2013}; \item interactive information about cultural artefacts, for example in a museum context \cite<e.g.,>{ODonnell2001,Stock2007}; and \item text intended to persuade \cite{Carenini2006}, or motivate behaviour modification \cite{Reiter2003}. \end{itemize} These systems may differ considerably in the quality and variety of the texts they produce, their commercial viability and the sophistication of the underlying methods, but all are examples of data-to-text generation. Many of the systems mentioned above focus on imparting information to the user. On the other hand, as shown by the examples cited above of systems focussed on persuasion or behaviour change, informing need not be the exclusive goal of {\sc nlg}. Nor is it a trivial goal in itself, since in order to successfully impart information, a system needs to select what to say, distinguishing it from what can be easily inferred (possibly also depending on the target user), before expressing it coherently. Generated texts need not have a large audience.
There is no need to automatically generate a report of, say, the Champions League European football final, which is covered by many of the best journalists in the field anyway. However, there are many other games, less important to the general public (but presumably very important to the parties involved). Typically, all sports statistics (who played? who scored? etc.) for these games are stored, but such statistics are not as a rule perused by sports reporters. Companies like Narrative Science\footnote{\url{https://www.narrativescience.com}} fill this niche by automatically generating sports reports for these games. Automated Insights\footnote{\url{https://automatedinsights.com}} even generates reports based on user-provided `fantasy football' data. In a similar vein, the automatic generation of weather forecasts for offshore oil platforms \cite{Sripada2003}, or from sensors monitoring the performance of gas turbines \cite{Yu2006}, has proven to be a fruitful application of data-to-text techniques. Such applications are now the mainstay of companies like Arria-NLG.\footnote{\url{http://www.arria.com}} Taking this idea one step further, data-to-text generation paves the way for tailoring texts to specific audiences. For example, data from babies in neonatal care can be converted into text differently, with different levels of technical detail and explanatory language, depending on whether the intended reader is a doctor, a nurse or a parent \cite{Mahamood2011}. One could also easily imagine that different sports reports are generated for fans of the respective teams; the winning goal of one team is likely to be considered a lucky one from the perspective of the losing team, irrespective of its `objective' qualities \cite{van2017pass}.
A human journalist would not dream of writing separate reports about a sports match (if only for lack of time), but for a computer this is not an issue and this is likely to be appreciated by a reader who receives a more personally appropriate report. \subsection{What is Natural Language Generation?} Both text-to-text generation and data-to-text generation are instances of {\bf Natural Language Generation} ({\sc nlg}). In the most widely-cited survey of {\sc nlg} methods to date \cite{Reiter1997,Reiter2000}, {\sc nlg} is characterized as `the subfield of artificial intelligence and computational linguistics that is concerned with the construction of computer systems that can produce understandable texts in English or other human languages from some underlying non-linguistic representation of information' \cite[p.1]{Reiter1997}. Clearly this definition fits data-to-text generation better than text-to-text generation, and indeed \citeA{Reiter2000} focus exclusively on the former, helpfully and clearly describing the rule-based approaches that dominated the field at the time. It has been pointed out that precisely defining {\sc nlg} is rather difficult \cite<e.g.,>{evans2002nlg}: everybody seems to agree on what the output of an {\sc nlg} system should be (text), but what the exact input is can vary substantially \cite{McDonald1993}. Examples include flat semantic representations, numerical data and structured knowledge bases. More recently, generation from visual input such as image or video has become an important challenge \cite<e.g.,>[among many others]{Mitchell2012,Kulkarni2013,Thomason2014}. A further complication is that the boundaries between different approaches are themselves blurred. For example, text summarisation was characterized above as a text-to-text application.
However, many approaches to text-to-text generation (especially abstractive summarisation systems, which do not extract content wholesale from the input documents) use techniques which are also used in data-to-text, as when opinions are extracted from reviews and expressed in completely new sentences \cite<e.g.,>{labbe2012towards}. Conversely, a data-to-text generation system could conceivably rely on text-to-text generation techniques for learning how to express pieces of data in different or creative ways \cite{Mcintyre2009,Gatt2009,Kondadadi2013}. Considering other applications of {\sc nlg} similarly highlights how blurred boundaries can get. For example, the generation of spoken utterances in dialogue systems \cite<e.g.,>{Walker2007,Rieser2009,Dethlefs2014} is another application of {\sc nlg}, but typically it is closely related to dialogue management, so that management and realisation policies are sometimes learned in tandem \cite<e.g.,>{Rieser2011}. The position taken in this survey is that what distinguishes data-to-text generation is ultimately its input. Although this varies considerably, it is precisely the fact that such input is not -- or isn't exclusively -- linguistic that is the main challenge faced by most of the systems and approaches we will consider. In what follows, unless otherwise specified in context, the terms `Natural Language Generation' and `{\sc nlg}' will be used to refer to systems that generate text from non-linguistic data. \subsection{Why a Survey on Natural Language Generation?} Arguably \citeA{Reiter2000} is still the most complete available survey of {\sc nlg}. However, the field of {\sc nlg} has changed drastically in the last 15 years, with the emergence of successful applications generating tailored reports for specific audiences, and with the emergence of text-to-text as well as vision-to-text generation applications, which also tend to rely more on statistical methods than traditional data-to-text. 
None of these are covered by \citeA{Reiter2000}. Also notably absent are discussions of applications that move beyond standard, `factual' text generation, such as those that account for personality and affect, or creative text such as metaphors and narratives. Finally, a striking omission by \citeA{Reiter2000} is the lack of discussion of evaluation methodology. Indeed, evaluation of {\sc nlg} output has only recently started to receive systematic attention, in part due to a number of shared tasks that were conducted within the {\sc nlg} community. Since \citeA{Reiter2000} published their book, various other {\sc nlg} overview texts have also appeared. \citeA{Bateman2005} cover the cognitive, social and computational dimensions of {\sc nlg}. \citeA{McDonald2010} offers a general characterization of {\sc nlg} as `the process by which thought is rendered into language' (p. 121). \citeA{Wanner2010} zooms in on automatic generation of reports, while \citeA{DiEugenio2010} look at specific applications, especially in education and health-care. Various specialized collections of articles have also been published, including that by \citeA{Krahmer2010a}, which targets data-driven approaches; and by \citeA{Bangalore2014}, which focusses on interactive systems. The web offers various unpublished technical reports, such as the survey by \citeA{Theune2003} on dialogue systems; the reports by \citeA{Piwek2003} and \citeA{Belz2003} on affective {\sc nlg}; and the survey by \citeA{Gkatzia2016} on content selection. While useful, these resources do not discuss recent developments or offer a comprehensive review. This indicates that a new state-of-the-art survey is highly timely. 
\subsection{Goals of this Survey} The goal of the current paper is to present a comprehensive overview of {\sc nlg} developments since 2000, both in order to provide {\sc nlg} researchers with a synthesis and pointers to relevant research, and to introduce the field to researchers who are less familiar with {\sc nlg}. Though {\sc nlg} has been a part of {\sc ai} and {\sc nlp} from the early days \cite<see e.g.,>{Winograd1972,Appelt1985}, as a field it has arguably not been fully embraced by these broader communities, and has only recently begun to take full advantage of recent advances in data-driven, machine learning and deep learning approaches. Following \citeA{Reiter2000}, our main focus, especially in the first part of the survey, will be on data-to-text generation. In any case, doing full justice to recent developments in the various text-to-text generation applications is beyond the scope of a single survey, and many of these are covered in other surveys, including those by \citeA{Mani2001} and \citeA{Nenkova2011} for summarisation; \citeA{androutsopoulos2010survey} on paraphrasing; and \citeA{piwek2012varieties} on automatic question generation. However, we will in various places discuss connections between data-to-text and text-to-text generation, both because -- as noted above -- the boundaries are blurred, but also, and perhaps more importantly, because text-to-text systems have long been couched in the data-driven frameworks that are becoming increasingly popular in data-to-text generation, also giving rise to some hybrid systems that combine rule-based and statistical techniques \cite<e.g.,>{Kondadadi2013}. Our review will start with an updated overview of the core {\sc nlg} tasks that were introduced by \citeA{Reiter2000}, followed by a discussion of architectures and approaches, where we pay special attention to those not covered in the \citeA{Reiter2000} survey. These two sections constitute the `foundational' part of the survey.
Beyond these, we highlight several new developments, including approaches where the input data is visual; and research aimed at generating more varied, engaging or creative and entertaining texts, taking {\sc nlg} beyond the factual, repetitive texts it is sometimes accused of producing. We believe that these applications are not only interesting in themselves, but may also inform more `utility'-driven text generation applications. For example, by including insights from narrative generation we may be able to generate more engaging reports, and by including insights from metaphor generation we may be able to phrase information in these reports in a more original manner. Finally, we will discuss recent developments in evaluation of natural language generation applications. In short, the goals of this survey are: \begin{itemize} \item To give an up-to-date synthesis of research on the core tasks in {\sc nlg}, as well as the architectures adopted in the field, especially in view of recent developments exploiting data-driven techniques (Sections \ref{sec:tasks} and \ref{sec:architectures}); \item To highlight a number of relatively recent research issues that have arisen partly as a result of growing synergies between {\sc nlg} and other areas of artificial intelligence, such as computer vision, stylistics and computational creativity (Sections \ref{sec:image}, \ref{sec:style} and \ref{sec:creativity}); \item To draw attention to the challenges in {\sc nlg} evaluation, relating them to similar challenges faced in other areas of {\sc nlp}, with an emphasis on different evaluation methods and the relationships between them (Section \ref{sec:evaluation}).
\end{itemize} \section{The Vision-Language Interface: Image Captioning and Beyond}\label{sec:image} Over the past few years, there has been an explosion of interest in the task of automatically generating captions for images, as part of a broader endeavour to investigate the interface between vision and language \cite{Barnard2016}. Image captioning is arguably a paradigm case of data-to-text generation, where the input comes in the form of an image. The task has become a research focus not only in the {\sc nlg} community but also in the computer vision community, raising the possibility of more effective synergies between the two groups of researchers. Apart from its practical applications, the grounding of language in perceptual data has long been a matter of scientific interest in {\sc ai} \cite<see>[for a variety of theoretical views on the computational challenges of the perception-language interface]{Winograd1972,Harnad1990,Roy2005}. \begin{figure}[!h] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{img/captions-challenge2015.png} \caption{The man at bat readies to swing at the pitch while the umpire looks on \cite<Human-authored caption from the {\sc ms-coco} dataset>{Lin2014}} \end{subfigure} % \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{img/kulkarni2011.png} \caption{This picture shows one person, one grass, one chair, and one potted plant. The person is near the green grass, and in the chair. 
The green grass is by the chair, and near the potted plant \cite{Kulkarni2011}} \label{fig:kulkarni2011} \end{subfigure} % \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{img/elliott2015.png} \caption{A person is playing a saxophone \cite{Elliott2015}} \label{fig:elliott2015} \end{subfigure} % \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{img/mitchell2012.png} \caption{A bus by the road with a clear blue sky \cite{Mitchell2012}} \label{fig:mitchell2012} \end{subfigure} % \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{img/mao2015.png} \caption{A bus is driving down the street in front of a building \cite{Mao2015}} \label{fig:mao2015} \end{subfigure} % \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{img/hendricks2016.png} \caption{A gecko is standing on a branch of a tree \cite{Hendricks2016}} \label{fig:hendricks2016} \end{subfigure} \caption{Some caption generation examples} \label{fig:img2text} \end{figure} Figure \ref{fig:img2text} shows some examples of caption generation, sampled from publications spanning approximately 6 years. Current caption generation research focusses mainly on what \citeA{Hodosh2013} refer to as {\em concrete conceptual} image descriptions of elements directly depicted in a scene. As \citeA{Donahue2015} put it, image captioning is a task whose input is static and non-sequential (an image, rather than, say, a video), whereas the output is sequential (a multi-word text), in contrast to non-sequential outputs such as object labels \cite<e.g.>[among others]{Duygulu2002,Ordonez2016}. Our discussion will be brief, since image captioning has recently been the subject of an extensive review by \citeA{Bernardi2016}, and has also been discussed against the background of broader issues in research on the vision-language interface by \citeA{Barnard2016}. 
While the present section draws upon these sources, it is organised in a somewhat different manner, also bringing out the connections with {\sc nlg} more explicitly. \subsection{Data}\label{sec:im-data} A detailed overview of datasets is provided by \citeA{Bernardi2016}. \citeA{Ferraro2015} offer a systematic comparison of datasets for both caption generation and visual question answering with an accompanying online resource.\footnote{The resource provided by \citeA{Ferraro2015} can be found at \url{http://visionandlanguage.net}.} Datasets typically consist of images paired with one or more human-authored captions (mostly in English) and vary from artificially created scenes \cite{Zitnick2013} to real photographs. Among the latter, the most widely used are Flickr8k \cite{Hodosh2013}, Flickr30k \cite{Young2014} and {\sc ms-coco} \cite{Lin2014}. Datasets such as the {\sc sbu1m} Captioned Photo Dataset \cite{Ordonez2011} include naturally-occurring captions of user-shared photographs on sites such as Flickr; hence the captions included therein are not restricted to the concrete conceptual. There are also a number of specialised, domain-specific datasets, such as the Caltech {\sc ucsd} Birds dataset \cite<{\sc cub}; >{Wah2011}. There have also been a number of shared tasks in this area, including the {\sc coco} (`Common Objects in Context') Captioning Challenge\footnote{\url{http://mscoco.org/dataset/\#captions-challenge2015}}, organised as part of the Large-Scale Scene Understanding Challenge ({\sc lsun})\footnote{\url{http://lsun.cs.princeton.edu/2016/}} and the Multimodal Machine Translation Task \cite{Elliott2016}. We defer discussion of evaluation of image captioning systems to Section \ref{sec:evaluation} of this paper, where it is discussed in the context of {\sc nlg} evaluation as a whole. \subsection{The Core Tasks} There are two logically distinguishable sub-tasks in an image captioning system, namely, image analysis and text generation.
This is not to say that they need to be organised separately or sequentially. However, prior to discussing architectures as such, it is worth briefly giving an overview of the methods used to deal with these two tasks. \subsubsection{Image Analysis} There are three main groups of approaches to treating visual information for captioning purposes. \paragraph{Detection} Some systems rely on computer vision methods for the detection and labelling of objects, attributes, `stuff' (typically mapped to mass nouns, such as {\em grass}), spatial relations, and possibly also action and pose information. This is usually followed by a step mapping these outputs to linguistic structures (`sentence plans' of the sort discussed in Sections \ref{sec:tasks} and \ref{sec:architectures}), such as trees or templates \cite<e.g.>{Kulkarni2011,Yang2011,Mitchell2012,Elliott2015,Yatskar2014,Kuznetsova2014a}. Since performance depends on the coverage and accuracy of detectors \cite{Kuznetsova2014a,Bernardi2016}, some work has also explored generation from gold standard image annotations \cite{Elliott2013,Wang2015,Muscat2015} or artificially created scenes in which the components are known in advance \cite{Ortiz2015}. \paragraph{Holistic scene analysis} Here, a more holistic characterisation of a scene is used, relying on features that do not typically identify objects, attributes and the like. Such features include {\sc rgb} histograms, scale-invariant feature transforms \cite<{\sc sift};>{Lowe2004}, or low-dimensional representations of spatial structure \cite<as in {\sc gist};>{Oliva2001}, among others. This kind of image processing is often used by systems that frame the task in terms of retrieval, rather than caption generation proper.
Such systems either use a unimodal space to compare a query image to training images before caption retrieval \cite<e.g.>{Ordonez2011,Gupta2012}, or exploit a multimodal space representing proximity between images and captions \cite<e.g.>{Hodosh2013,Socher2014}. \paragraph{Dense image feature vectors} Given the success of convolutional neural networks ({\sc cnn}) for computer vision tasks \cite<cf. e.g.,>{LeCun2015}, many deep learning approaches use features from a pre-trained {\sc cnn} such as AlexNet \cite{Krizhevsky2012}, {\sc vgg} \cite{Simonyan2015} or Caffe \cite{Jia2014}. Most commonly, caption generators use an activation layer from the pre-trained network as their input features \cite<e.g.>{Kiros2014,Karpathy2014,Karpathy2015,Vinyals2015,Mao2015,Xu2015,Yagcioglu2015,Hendricks2016}. \subsubsection{Text Generation or Retrieval} Depending on the type of image analysis technique, captions can be generated using a variety of different methods, of which the following are well-established. \paragraph{Using templates or trees} Systems relying on detectors can map the output to linguistic structures in a sentence planning stage. For example, objects can be mapped to nouns, spatial relations to prepositions, and so on. \citeA{Yao2010} use semi-supervised methods to parse images into graphs and then generate text via a simple grammar. Other approaches rely on sequence classification algorithms, such as Hidden Markov Models \cite{Yang2011} and conditional random fields \cite{Kulkarni2011,Kulkarni2013}. \citeA[see the example in Figure \ref{fig:kulkarni2011}]{Kulkarni2013} experiment with both templates and web-derived $n$-gram language models, finding that the former are more fluent, but suffer from lack of variation, an issue we also addressed earlier, in connection with realisation (Section \ref{sec:lr}). 
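To illustrate the detection-based approach, a minimal sketch of template realisation from detector output might look as follows; the detection format and template wording are our own, loosely imitating the caption style of the Kulkarni et al. example in Figure \ref{fig:kulkarni2011}:

```python
# A toy sketch of template-based realisation from detector output.
# The input format and template wording are invented for illustration;
# real systems additionally handle determiners, pluralisation, etc.
def realise(detections, relations):
    """Map (attribute, object) detections and spatial relations to text."""
    # Noun phrases: prepend the detected attribute (adjective) if present.
    nps = ["the %s %s" % (attr, obj) if attr else "the %s" % obj
           for attr, obj in detections]
    # First sentence enumerates the detected objects.
    first = "This picture shows " + ", ".join(
        "one %s" % obj for _, obj in detections) + "."
    # One sentence per spatial relation between detected objects.
    rels = " ".join("%s is %s %s." % (nps[i].capitalize(), rel, nps[j])
                    for i, rel, j in relations)
    return first + " " + rels

caption = realise(
    detections=[(None, "person"), ("green", "grass"), (None, "chair")],
    relations=[(0, "near", 1), (0, "in", 2)],
)
print(caption)
```

As the sketch makes clear, fluency is guaranteed by the templates, but every caption follows the same rigid pattern, which is precisely the lack of variation noted above.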
In the Midge system \cite[see Figure \ref{fig:mitchell2012} for an example caption]{Mitchell2012}, input images are represented as triples consisting of object/stuff detections, action/pose detections and spatial relations. These are subsequently mapped to $\langle \textit{noun, verb, preposition} \rangle$ triples and realised using a tree substitution grammar. This is further enhanced with the ability to `hallucinate' likely words using a probabilistic model, that is, to insert words which are not directly grounded in the detections performed on the image itself, but have a high probability of occurring, based on corpus data. In a human evaluation, Midge was shown to outperform both the system by \citeA{Kulkarni2011} and \citeA{Yang2011} on a number of criteria, including humanlikeness and correctness. \citeA{Elliott2013} use {\em visual dependency representations} ({\sc vdr}), a dependency grammar-like formalism to describe spatial relations between objects based on physical features such as proximity and relative position. Detections from an image are mapped to their corresponding {\sc vdr} relations prior to generation \cite<see also>[and the example in Figure \ref{fig:elliott2015}]{Elliott2015}. \citeA{Ortiz2015} use {\sc ilp} to identify pairs of objects in abstract scenes \cite{Zitnick2013a} before mapping them to a {\sc vdr}. Realisation is framed as a machine translation task over {\sc vdr}-text pairs. A similar concern with identifying spatial relations is found in the work of \citeA{Lin2015}, who use scene graphs as input to a grammar-based realiser. \citeA{Muscat2015} propose a naive Bayes model to predict spatial prepositions based on image features such as object proximity and overlap. \paragraph{Using language models} Using language models has the potential advantage of facilitating joint training from image-language pairs. 
It may also yield more expressive or creative captions if it is used to overcome the limitations of grammars or templates \cite<as shown by the example of Midge;>{Mitchell2012}. In some cases, $n$-gram models are trained on out-of-domain data, the approach taken by \citeA{Li2011} using web-scale $n$-grams, and by \citeA{Fang2015}, who used a maximum entropy language model. Most deep learning architectures use language models in the form of vanilla {\sc rnn}s or long short-term memory networks \cite<e.g.>{Kiros2014,Vinyals2015,Donahue2015,Karpathy2015,Xu2015,Hendricks2016,Hendricks2016a,Mao2016}. These architectures model caption generation as a process of predicting the next word in a sequence. Predictions are biased both by the caption history generated so far (or the start symbol for initial words) and by the image features which, as noted above, are typically features extracted from a {\sc cnn} trained on the object detection task. \paragraph{Caption retrieval and recombination} Rather than generate captions, some systems retrieve them based on training data. The advantage of this is that it guarantees fluency, especially if retrieval is of whole, rather than partial, captions. \citeA{Hodosh2013} used a multimodal space to represent training images and captions, framing retrieval as a process of identifying the nearest caption to a query image. The idea of `wholesale' caption retrieval has a number of precedents. For example \citeA{Farhadi2010} use Markov random fields to parse images into $\langle \textit{object, action, scene} \rangle$ triples, paired with parsed captions. A caption for a query image is retrieved by comparing it to the parsed images in the training data, finding the most similar based on WordNet. Similarly, the {\tt Im2Text} \cite{Ordonez2011} system ranks candidate captions for a query image.
\citeA{Devlin2015} use a $k$ nearest neighbours approach, with caption similarity quantified using {\sc bleu} \cite{Papineni2002} and {\sc cide}r \cite{Vedantam2015}. A different view of retrieval is proposed by \citeA{Feng2010}, who use extractive summarisation techniques to retrieve descriptions of images and associated narrative fragments from their surrounding text in news articles. A potential drawback of wholesale retrieval is that captions in the training data may not be well-matched to a query image. For instance, \citeA{Devlin2015} note that the less similar a query is to training images, the more generic the caption returned by the system. A possible solution is to use partial matches, retrieving and recombining caption fragments. \citeA{Kuznetsova2014a} use detectors to match query images to training instances, retrieving captions in the form of parse tree fragments which are then recombined. \citeA{Mason2014} use a domain-specific dataset to extract descriptions and adapt them to a query image using a joint visual and textual bag-of-words model. In the deep learning paradigm, both \citeA{Socher2014} and \citeA{Karpathy2014} use word embeddings derived from dependency parses, which are projected, together with {\sc cnn} image features, into a multimodal space. Subsequent work by \citeA{Karpathy2015} showed that this fine-grained pairing works equally well with word sequences, eschewing the need for dependency parsing. Recently, \citeA{Devlin2015a} compared nearest-neighbour retrieval approaches to different types of language models for caption generation, specifically, the Maximum Entropy approach of \citeA{Fang2015}, an {\sc lstm}-based approach and {\sc rnn}s which are coupled with a {\sc cnn} for image analysis \cite<e.g.>{Vinyals2015,Donahue2015,Karpathy2015}. 
A comparison of the linguistic quality of captions suggested that there was a significant tendency for all models to reproduce captions observed in the training set, repeating them for different images in the test set. This could be due to a lack of diversity in the data, which might also explain why the nearest neighbour approach compares favourably with language model-based approaches. \subsection{How is Language Grounded in Visual Data?} As the foregoing discussion suggests, views on the relationship between visual and linguistic data depend on how each of the two sub-tasks is dealt with. Thus, systems which rely on detections tend to make a fairly clear-cut distinction between input processing and content selection on the one hand, and sentence planning and realisation on the other \cite<e.g.>{Kulkarni2011,Mitchell2012,Elliott2013}. The link between linguistic expressions and visual features is mediated by the outcomes of the detectors. For example, Midge \cite{Mitchell2012} uses the object detections to determine which nouns to mention, before fleshing out the caption with attributes (mapped to adjectives) and verbs. Similarly, \citeA{Elliott2013} use {\sc vdr}s to determine spatial expressions. Retrieval-based systems relying on unimodal or multimodal similarity spaces represent the link between linguistic expressions and image features more indirectly. Here, similarity plays the dominant role. In a unimodal space \cite{Ordonez2011,Gupta2012,Mason2014,Kuznetsova2012,Kuznetsova2014a}, it is images which are compared, with (partial) captions retrieved based on image similarity. A number of deep learning approaches also broadly conform to this scheme. For example, both \citeA{Yagcioglu2015} and \citeA{Devlin2015} retrieve and rank captions for a query image, using a {\sc cnn} for the representation of the visual space.
By contrast, multimodal spaces involve a direct mapping between visual and linguistic features \cite<e.g.>{Hodosh2013,Socher2014,Karpathy2014}, enabling systems to map from images to `similar' -- that is, related or relevant -- captions. Much interesting work on vision-language integration is being carried out with deep learning models. \citeA{Kiros2014} introduced multimodal neural language models ({\sc mrnn}), experimenting with two main architectures. Their Modality-Biased Log-Bilinear Model ({\sc mlbl-b}) uses an additive bias to predict the next word in a sequence based on both the linguistic context and {\sc cnn} image features. The Factored 3-way Log-Bilinear Model ({\sc mlbl-f}) also gates the representation matrix for a word with image features. In a related vein, \citeA{Donahue2015} propose a combined {\sc cnn} $+$ {\sc lstm} architecture \cite<also used in>[for video captioning]{Venugopalan2015,Venugopalan2015a} where the next word is predicted as a function of both previous words and image features. In one version of the architecture, they inject {\sc cnn} features into the {\sc lstm} at each time-step. In a second version, they use two stacked {\sc lstm}s, the first of which takes {\sc cnn} features and produces an output which constitutes the input to the next {\sc lstm} to predict the word. Finally, \citeA{Mao2015} experiment with various {\sc mrnn} configurations, obtaining their best results with an architecture in which there are two word embedding layers preceding the recurrent layer, which is in turn projected into a multimodal layer where linguistic features are combined with {\sc cnn} features. An example caption is shown in Figure \ref{fig:mao2015} above. These neural network models shed light on the consequences of combining the two modalities at different stages, reflecting the point made by \citeA[cf. Section \ref{sec:deep-learning}]{Manning2015} that this paradigm encourages a focus on architectures and design. 
In particular, image features can be used to bias the recurrent, language generation layer -- at the start, or at each time-step of the {\sc rnn} -- as in the work of \citeA{Donahue2015}. Alternatively, the image features can be combined with linguistic features at a stage following the {\sc rnn}, as in the work of \citeA{Mao2015}. \subsection{Vision and Language: Current and Future Directions for NLG} Image to text generation is one area of {\sc nlg} where there is a clear dominance of deep learning methods. Current work focusses on a number of themes: \begin{enumerate} \item Generalising beyond training data is still a challenge, as shown by the work of \citeA{Devlin2015a}. More generally, dealing with novel images remains difficult, though experiments have been performed on using out-of-domain training data to expand vocabulary \cite{Ordonez2013}, learn novel concepts \cite{Mao2015a} or transfer features from image regions containing known labels, to similar, but previously unattested ones \cite[from which an example caption is shown in Figure \ref{fig:hendricks2016}]{Hendricks2016}. Progress in zero-shot learning, where the aim is to identify or categorise images for which little or no training data is available, is likely to contribute to the resolution of data sparseness problems \cite<e.g.>{Antol2014,Elhoseiny2015}. \item Attention is also being paid to what \citeA{Barnard2016} refers to as {\em localisation}, that is, the association of linguistic expressions with parts of images, and the ability to generate descriptions of specific image regions. Recent work includes that of \citeA{Karpathy2015}, \citeA{Johnson2016} and \citeA{Mao2016}, who focus on unambiguous descriptions of specific image regions and/or objects in images (see Section \ref{sec:reg} above for some related work). Attention-based models are a further development on this front. These have been exploited in various {\sc seq2seq} tasks, notably for machine translation \cite{Bahdanau2015}. 
In the case of image captioning, the idea is to allocate variable weights to portions of captions in the training data, depending on the current context, to reflect the `relevance' of a word given previous words and an image region \cite{Xu2015}. \item Recent work has also begun to explore generation from images that goes beyond the concrete conceptual, for instance, producing explanatory descriptions \cite{Hendricks2016a}. A further development is work on Visual Question Answering, where rather than descriptive captions, the aim is to produce responses to specific questions about images \cite{Antol2015,Geman2015,Malinowski2015,Barnard2016,mostafazadeh2016}. Recently, a new dataset was proposed providing both concrete conceptual and `narrative' texts coupled with images \cite{Huang2016}, a promising new direction for this branch of {\sc nlg}. \item There is a growing body of work that generalises the task from static inputs to sequential ones, especially videos \cite<e.g.>{Kojima2002,Regneri2013,Venugopalan2015,Venugopalan2015a}. Here, the challenges include handling temporal dependencies between scenes, but also dealing with redundancy. \end{enumerate} \section{Generating Creative and Entertaining Text}\label{sec:creativity} `Good' writers not only present their ideas in coherent and well-structured prose. They also succeed in keeping the attention of the reader through narrative techniques, and in occasionally surprising the reader, for example, through creative language use such as small jokes or well-placed metaphors \cite<see e.g., among many others, >{flower1981cognitive,nauman2011makes,veale2015distributed}. The {\sc nlg} techniques and applications discussed so far in this survey arguably do not simulate good writers in this sense, and as a result automatically generated texts can be perceived as somewhat boring and repetitive. 
This lack of attention to creative aspects of language production within {\sc nlg} is not due to a general lack of scholarly interest in these phenomena. Indeed, computational research into creativity has a long tradition, with roots that go back to the early days of {\sc ai} \cite<as>[notes, the first story generation algorithm on record, Novel Writer, was developed by Sheldon Klein in 1973]{Gervas2013}. However, it is fair to say that, so far, there has been little interaction between researchers from the computational creativity and {\sc nlg} communities respectively, even though both groups in our opinion could learn a lot from each other. In particular, {\sc nlg} researchers stand to benefit from insights into what constitutes creative language production, as well as structural features of narrative that have the potential to improve {\sc nlg} output even in data-to-text systems \cite<see>[for an argument to this effect in relation to a medical text generation system]{Reiter2008}. At the same time, researchers in computational creativity could also benefit from the insights provided by the {\sc nlg} community where the generation of fluent language is concerned since, as we shall see, a lot of the focus in this research, especially where narrative is concerned, is on the generation of plans and on content determination. In what follows, we give an overview of automatic approaches to creative language production, starting from relatively simple jokes and metaphors to more advanced forms, such as narratives. \subsection{Generating Puns and Jokes}\label{sec:puns} Consider: \begin{example} What's the difference between money and a bottom?\\ {\it One you spare and bank, the other you bare and spank.} \end{example} \begin{example} What do you call a weird market?\\ {\it A bizarre bazaar.} \end{example} These two (pretty good!) {\it punning riddles}\/ were automatically generated by the {\sc jape} system developed by \citeA{Binsted1994,Binsted1997a}. 
Punning riddles form a specific joke genre and have received considerable attention in the context of computational humor, presumably because they are relatively straightforward to define, often relying on spelling or word sense ambiguities. Many good, human-produced examples have been collected in joke books and sites and may thus act as a source of inspiration or training data. Simplifying somewhat, {\sc jape} (Joke Analysis and Production Engine) relies on a template-based {\sc nlg} system, combining fixed text ({\em What's the difference between X and Y?} or {\em What do you call X?}) with slots, which are the source of the riddle. Various standard lexical resources are used for joke production, including a British pronunciation dictionary (to find different words with a similar pronunciation, such as `bizarre' and `bazaar') and WordNet \cite[to find words with a similar meaning, such as {\em bazaar} and {\em market}]{Miller1995}. {\sc jape} uses various techniques to create the punning riddles, such as juxtaposition, in which related words are simply placed next to each other and treated as a normal construction, while making sure that the combination is novel (i.e., not in the {\sc jape} database already). It is interesting to observe that in this way {\sc jape} may automatically come up with existing jokes (a quick Google search reveals that many bizarre bazaars, as well as bazaar bizarres, exist). Following the seminal work of Binsted and Ritchie, various other systems have been developed which can automatically generate jokes, including for example the {\sc haha}cronym system of \citeA{Stock2005}, which produces humorous acronyms, and the system of \citeA{Binsted2003}, which focusses on the generation of referential jokes (``It was so cold, I saw a lawyer with his hands in his own pockets.''). \citeA{petrovic2013unsupervised} offer an interesting, unsupervised alternative to this earlier work, which does not require labelled examples or hard-coded rules.
Like their predecessors, \citeauthor{petrovic2013unsupervised} also start from a template -- in their case {\em I like my X like I like my Y, Z} -- where $X$ and $Y$ are nouns (e.g., {\em coffee} and {\em war}) and $Z$ is an attribute (e.g., {\em cold}). Clearly, linguistic realisation is not an issue, but content selection -- finding `funny' triples $X$, $Y$ and $Z$ -- is a challenge. Interestingly, the authors postulate a number of guiding principles for `good' triples. In particular, they hypothesize that (a) the joke is funnier if the attribute $Z$ can be used to describe both nouns $X$ and $Y$; (b) the joke is funnier if attribute $Z$ is both common and ambiguous; and (c) the joke is funnier the more dissimilar $X$ and $Y$ are. These three statements can be quantified relying on standard resources such as WordNet and the Google n-gram corpus \cite{Brants2006}, and using these measures their system outputs, for example: \begin{example} I like my relationships like I like my source, open. \end{example} It is probably fair to say that computational joke generation research to date has mostly focussed on laying bare the basic structure of certain relatively simple puns and exploiting these to good effect \cite<e.g.,>{Ritchie2009}. However, many other kinds of jokes exist, often requiring sophisticated, hypothetical reasoning. Presumably, many of the central problems within {\sc ai} need to be solved first before generation systems will be capable of producing these kinds of advanced jokes. \subsection{Generating Metaphors and Similes}\label{sec:metaphor} Whether you think something is funny or not may be subjective, but in any case insights from joke generation can be useful as a stepping stone towards a better understanding of creative language use, including metaphor, simile and analogy.
In all of these, a mapping is made between two conceptual domains, in such a way that terminology from the source domain is used to say something about the target domain, typically in a nonliteral fashion, which can be helpful in computer-generated texts to illustrate complex information. For example, \citeA{hervas2006cross} study analogies in narrative contexts, such as {\em Luke Skywalker was the King Arthur of the Jedi Knights}, which immediately clarifies an important aspect of Luke Skywalker for those not in the know. In a simile, the two domains are compared (A `is like' B); in a metaphor they are equated. Jokes and metaphors/similes are related: the automatically generated jokes of \citeauthor{petrovic2013unsupervised} are comparable to similes, while \citeA{kiddon2011thats}, for example, frame the problem of identifying double entendre jokes as a type of metaphor identification. Nevertheless, one could argue that generating jokes is more complex because of the extra funniness constraint. Like computational humor, the automatic recognition and interpretation of metaphorical, non-literal language has received considerable attention since the early days of {\sc ai} \cite<see>[for an overview]{Shutova2013}. \citeA{Martin1990,Martin1994}, for example, focussed on the recognition of metaphor in the context of Unix Support, as in the following examples: \begin{examples} \item How can I kill a process? \item How can I enter lisp? \end{examples} The first one, for example, makes a mapping between `life' (source) and `processes' (target), and is by now so common that it is almost a dead metaphor, but this was not the case in the early days of Unix. Clearly, understanding of the metaphors is a prerequisite for automatically answering these questions. Early research on the computational interpretation of metaphor already recognised that metaphors rely on semantic conventions that are exploited (`broken') to express new meanings.
A system for metaphor understanding, as well as one for metaphor generation, therefore requires knowledge about what literal meanings are, and how these can be stretched or translated into metaphoric meanings \cite<e.g.,>{Wilks1978,Fass1991}. Recent work by Veale and Hao \citeyear{Veale2007,Veale2008} has shown that this kind of knowledge can be acquired from the web, and used for the generation of new metaphors and similes (comparisons). Their system, called Sardonicus, is capable of generating metaphors for user-provided targets ({\sc t}), such as the following, expressing that Paris Hilton (``the person, not the hotel, though the distinction is lost on Sardonicus'', Veale \& Hao, 2007, p. 1474) is skinny: \begin{example} Paris Hilton is a stick \end{example} Sardonicus searches the web for nouns ({\sc n}) that are associated with skinniness, which are included in a case-base and range from {\em pole, pencil}, and {\em stick} to {\em snake} and {\em stick insect}. Inappropriate ones (like {\em cadaver}) are ruled out, based on the theory of category-inclusion of \citeA{Glucksberg2001}. This list of potential similes is then used to create Google queries, inspired by the work of \citeA{Hearst1992}, of the form {\em {\sc n}-like {\sc t}} (e.g., {\em stick insect-like Paris Hilton}, which actually occurs on the web), giving a ranking of the potential similes to be generated. A comparable technique is used by \citeA{Veale2013} to generate metaphors with an affective component, as in `Steve Jobs was a great leader, but he could be such a tyrant'. The Google $n$-gram corpus is used to find stereotypes suitable for simile generation (e.g., `lonesome as a cowboy'), a strategy reminiscent of the use of web-scale $n$-gram data to smooth the output of image-to-text systems (see Section \ref{sec:image}).
Next, an affective dimension is added, based on the assumption that properties occurring in a conjunction (`as {\it lush and green}\/ as a jungle') are more likely to have the same affect than properties that do not. Using positive (e.g., `happy', `wonderful') and negative (e.g., `sad', `evil') seeds, coordination queries (e.g., `happy and X') are used to collect positive and negative labels for stereotypes, indicating, for instance, that babies are positively associated with qualities such as `smiling' and `cute', and negatively associated with `crying' and `sniveling'. This enables the automatic generation of positive (`cute as a baby') and negative (`crying like a baby') similes. Veale even points out that by collecting, for example, a number of negative metaphors for Microsoft being a monopoly, and using these in a set of predefined tropes, it becomes possible to automatically generate a poem such as the following: \begin{quote} {\bf No Monopoly Is More Ruthless}\\ Intimidate me with your imposing hegemony \\ No crime family is more badly organized, or controls more ruthlessly\\ Haunt me with your centralized organization\\ Let your privileged security support me\\ O Microsoft, you oppress me with your corrupt reign \end{quote} In fact, automatic generation of poetry is an emerging area at the crossroads of computational creativity and natural language generation \cite<see for example>[for variations on this theme]{Lutz1959,Gervas2001,Wong2008,Netzer2009,Greene2010,Colton2012,Manurung2012,zhang2014chinese}. See the recent review by \citeA{Oliveira2017}. \subsection{Generating Narratives}\label{sec:narrative} Computational narratology is concerned with computational models for the generation and interpretation of narrative texts \cite<e.g.,>{Gervas2009,Mani2010,Mani2013}. 
The starting point for many approaches to narrative generation is a view of narrative coming from classical narratology, a branch of literary studies with roots in the Formalist and Structuralist traditions \cite<e.g.,>{Propp1968,Genette1980,Bal2009}. This field has been concerned with analysing both the defining characteristics of narrative, such as plot or character, and more subtle features, such as the handling of time and temporal shifts, focalisation (that is, the ability to convey to the reader that a story is being recounted from a specific point of view), and the interaction of multiple narrative threads, in the form of sub-plots, parallel narratives, etc. An important recent development is the interest, on the part of narratologists, in bringing to bear insights from Cognitive Science and {\sc ai} on their literary work, making this field ripe for multi-disciplinary interaction \cite<see especially>[for programmatic statements to this effect, as well as theoretical contributions]{Herman1997,Herman2007,Meister2003}. Classical narratology makes a fundamental distinction between the `story world' and the text that narrates the story. In line with the formalist and structuralist roots of this tradition, the distinction is usually articulated as a dichotomy between {\em fabula} (or story) and {\em sjuzhet} (or discourse). There is a parallel between this distinction and that between a text plan in {\sc nlg}, versus the actual text which articulates that plan. However, the crucial difference is that in producing a plan for a narrative, a story generation system typically does not use input data of the sort required by most of the {\sc nlg} systems reviewed thus far, since the story is usually fictional. On the other hand, narratological tools have also been successfully applied to real-world narratives, including oral narratives of personal experience \cite<e.g.,>{Herman2001,Labov2010}.
The focus of most work on narrative generation has been on the pre-linguistic stage, that is, on generating plans within a story world for fictional narratives, usually within a specific genre whose structural properties are well-understood, for example, fairy tales or Arthurian legends \cite<see>[for a review]{Gervas2013}. There are however links between the techniques used for such stories and those we have discussed above in relation to {\sc nlg} (see especially Section \ref{sec:planning}). Prominent among these are planning and reasoning techniques to model the creative process as a problem-solving task. For example, {\sc minstrel} \cite{Turner1992} uses reasoning to model creativity from the author's perspective, producing narrative plans based on authorial goals, such as the goal of introducing drama into a narrative, while ensuring thematic consistency. More recently, {\sc brutus} \cite{Bringsjord1999} used a knowledge base of story schemas, from which one is selected and elaborated using planning techniques to link causes and effects \cite<see also>[among others, for recent examples of the use of planning techniques to model the creative process in narrative generation]{Young2008,Riedl2010}. \begin{figure}[!h] \begin{subfigure}[b]{0.5\textwidth} \fbox{\begin{minipage}{0.85\textwidth} John Bear is somewhat hungry. John Bear wants to get some berries. John Bear wants to get near the blueberries. John Bear walks from a cave entrance to the bush by going through a pass through a valley through a meadow. John Bear takes the blueberries. John Bear eats the blueberries. The blueberries are gone. John Bear is not very hungry. \end{minipage}} \caption{Excerpt from {\sc TaleSpin}} \label{fig:talespin} \end{subfigure} ~\hfill \begin{subfigure}[b]{0.5\textwidth} \fbox{\begin{minipage}{0.85\textwidth} Once upon a time a woodman and his wife lived in a pretty cottage on the borders of a great forest. 
They had one little daughter, a sweet child, who was a favorite with every one. She was the joy of her mother's heart. To please her, the good woman made her a little scarlet cloak and hood. She looked so pretty in it that everybody called her Little Red Riding Hood. \end{minipage}} \caption{Excerpt from {\sc storybook}} \label{fig:storybook} \end{subfigure} \caption{Examples of automatically generated narratives. The left panel shows an excerpt from a story produced by {\sc TaleSpin} \cite{Meehan1977}; the right panel is an excerpt from the {\em Little Red Riding Hood} fairy-tale, generated by the {\sc storybook} system \cite{Callaway2002}.} \end{figure} As \citeA{Gervas2010} notes, the focus on planning story worlds and modelling creativity has often implied a sidelining of linguistic issues, so that rendering a story plan into text has often been viewed as a secondary consideration. For example, Figure \ref{fig:talespin} shows an excerpt of a story produced by the {\sc talespin} system \cite{Meehan1977}: here, the emphasis is on using problem-solving techniques to produce a narrative in which events follow from each other in a coherent fashion, rather than on telling it in a fluent way. An important exception to this trend is the work of \citeA{Callaway2002}, who explicitly addressed the gap between computational narratology and {\sc nlg}. Their system took a narrative plan as a starting point, but focussed on the process of rendering the narrative in fluent English, handling time shifts, aggregation, anaphoric {\sc np}s and many other linguistic phenomena, as the excerpt in Figure \ref{fig:storybook} shows. It is worth noting that this system has since been re-used in the context of generating interactive text for a portable museum guide by \citeA{Stock2007}.
In addition, there have been a number of contributions from the generation community on more specific issues related to narrative, such as how to convey the temporal flow of narrative discourse \cite{Oberlander1992,Dorr1995,Elson2010}. This is a problem that deserves more attention in {\sc nlg}, since texts with a complex narrative structure often narrate events in a different order from which they occurred. For example, a narrative or narrative-like text may recount events in order of importance rather than in temporal order, even when they are grounded in real-world data \cite<e.g.>{Portet2009}. This makes the use of the right choices for tense, aspect and temporal adverbials crucial to ensure clarity for the reader. This type of complexity in narrative structure also emerges in interactive narrative fiction \cite<for example, in games; cf.,>{montfort2007ordering}. Beyond the focus on specific linguistic issues, there has also been some work that leverages data-driven techniques to generate stories. For example, \citeA{Mcintyre2009} propose a story generation system whose input is a database of entities and their interactions, extracted from a corpus of stories by parsing them, retrieving grammatical dependencies, and building chains of events in which specific entities play a role. The outcome is a graph encoding a partial order of events, with edges weighted by mutual information to reflect the degree of association between nodes. Sentence planning then takes place using template-like grammar rules specifying verbs with subcategorisation information, followed by realisation using {\sc realpro} \cite{Lavoie1997}. One of the most interesting features of this work is the coupling of the generation model with an interest model to predict which stories would actually be rated as interesting by readers. 
This was achieved by training a kernel-based classifier on shallow lexical and syntactic features of stories, a novel take on an old problem in narratology, namely, what makes a story `tellable', thereby distinguishing it from a mere report \cite<e.g.,>{Herman1997,Norrick2005,Bruner2011}. Most story generation work is restricted to (very) short stories. It is certainly true that planning a book-length narrative along the lines sketched above is extremely challenging, but researchers have recently started exploring the possibilities, for instance in the context of NaNoGenMo (National Novel Generation Month), in which participants write a computer program capable of generating a `novel'. Perhaps the best known example is {\it World Clock}\/ \cite{montfort2013world}, which describes 1440 (24 $\times$ 60) events taking place around the world, one randomly selected minute at a time. These are the first two: \begin{quote} It is now exactly 05:00 in Samarkand. In some ramshackle dwelling a person who is called Gang, who is on the small side, reads an entirely made-up word on a box of breakfast cereal. He turns entirely around. It is now right about 18:01 in Matamoros. In some dim yet decent structure a man named Tao, who is no larger or smaller than one would expect, reads a tiny numeric code from a recipe clipping. He smiles a tiny smile. \end{quote} The book was fully generated by 165 lines of Python code, written by the author in a few hours, and later published (together with the software) by Harvard Book Store press. There is even a Polish translation (by Piotr Marecki), created by translating the terms and phrases used in the Python implementation of the original algorithm. \subsection{Generating Creative Language: Concluding Remarks} In this section we have highlighted recent developments in the broad area of creative language generation, a topic which is rather understudied in {\sc nlg}.
Nevertheless, we would like to argue that {\sc nlg} researchers can improve the quality of their output by taking insights from computational creativity on board. Work that exploits corpora and other lexical resources for the automatic generation of jokes, puns, metaphors and similes has revealed different ways in which words are related and can be juxtaposed to form unexpected and possibly even `funny' or `poetic' combinations. Given that, for example, metaphor is pervasive in everyday language \cite<as argued, for example, by>{Lakoff1980}, not just in overtly creative uses, {\sc nlg} researchers interested in enhancing the readability -- and especially the variability -- of the text-generating capability of their models would benefit from a closer look at work in poetry, joke and metaphor generation. In a similar vein, work on narratology is rich in insights on the interaction of multiple threads in a single narrative, and how the choice of events and their ordering can give rise to interesting stories \cite<e.g.,>{Gervas2012}. These insights are valuable, for example, in the development of more elaborate text planners in domains where time and causality play a role. Similarly, narratological work on character and focalisation can also help in the development of better {\sc nlg} techniques to vary output according to specific points of view, an area that we touched on in Section \ref{sec:style}. We have deferred discussion of evaluation of creative {\sc nlg} to Section \ref{sec:evaluation}, which deals with evaluation in general. Anticipating some of that discussion, it is worth noting that evaluation of creative language generation remains something of a bottleneck. In part, this is because it is not always easy to determine the `right' question to ask in an evaluation of creative text.
For instance, in the case of joke and poetry generators, demonstrating genre compatibility and recognition (`Is this a joke?') is arguably already an achievement, insofar as it suggests that a system is producing artefacts that conform to normative expectations (this is discussed further in Section \ref{sec:genre-eval} below). In other types of creative language generation, evaluation is more challenging because it is difficult to carry out without ensuring quality at {\em all} levels of the generation process, from planning to realisation. In the case of narrative generation, for example, if the emphasis is placed entirely on story planning, the perceived quality of the narrative will be compromised if story plans are rendered using an excessively simple realisation strategy (as is the case in Figure \ref{fig:talespin}). This is an area where the consensus in the field is that much further research effort is required \cite<see>[for a recent argument to this effect]{Zhu2012}. It is also an area in which {\sc nlg} can potentially offer much to computational creativity researchers, including in the use of techniques to render text fluently and consistently, facilitating the evaluation of generated artefacts with human subjects. \section{NLG Architectures and Approaches}\label{sec:architectures} Having given an overview of the most common sub-tasks that {\sc nlg} systems incorporate, we now turn to the way such tasks can be organised.
Broadly speaking, we can distinguish between three dominant approaches to {\sc nlg} architectures: \begin{enumerate} \item {\it Modular}\/ architectures: By design, such architectures involve fairly crisp divisions among sub-tasks, though with significant variations among them; \item {\it Planning}\/ perspectives: Viewing text generation as planning links it to a long tradition in {\sc ai} and affords a more integrated, less modular perspective on the various sub-tasks of {\sc nlg}; \item {\it Integrated} or {\it global} approaches: Now the dominant trend in {\sc nlg} (as it is in {\sc nlp} more generally), such approaches cut across task divisions, usually by placing a heavy reliance on statistical learning of correspondences between (non-linguistic) inputs and outputs. \end{enumerate} The above typology of {\sc nlg} is based on architectural considerations. An orthogonal question concerns the extent to which a particular approach relies on symbolic or knowledge-based methods, as opposed to stochastic, data-driven methods. It is important to note that none of the three architectural types listed above is inherently committed to one or the other of these. Thus, it is possible for a system to have a modular design but incorporate stochastic methods in several, or even all, sub-tasks. Indeed, our survey of the various tasks in Section \ref{sec:tasks} included several examples of stochastic approaches. Below, we will also discuss a number of data-driven systems whose design is arguably modular. Similarly, it is possible for a system to take a non-modular perspective, but eschew the use of data-driven models (this is a feature of some planning-based {\sc nlg} systems discussed in Section \ref{sec:planning} below, for instance). The fact that many modular {\sc nlg} systems are not data-driven is largely due to historical reasons since, of the three designs outlined above, the modular one is the oldest. 
As we will show below, however, challenges to the classical modular pipeline architecture -- once designated by \citeA{Reiter1994} as the consensus at the time -- have included blackboard and revision-based architectures that were not stochastic. At the same time, it must be acknowledged that the large-scale adoption of integrated, non-modular approaches has been impacted significantly by the uptake of data-driven techniques within the {\sc nlg} community and the development of repositories of data to support training and evaluation. In summary, there are at least two orthogonal ways of classifying {\sc nlg} systems, based on their {\em design} or on the {\em methods} adopted in their development. Our survey in this section follows the typology outlined above for convenience of exposition. The caveats raised here should, however, be borne in mind by the reader, and will in any case be brought up repeatedly in what follows, as we discuss different approaches under each heading. \subsection{Modular Approaches}\label{sec:modular} Existing surveys of {\sc nlg}, including those by \citeA{Reiter1997,Reiter2000} and \citeA{Reiter2010}, typically refer to some version of the pipeline architecture displayed in Figure \ref{fig:modular-arch} as the `consensus' architecture in the field. Originally introduced by \citeA{Reiter1994}, the pipeline was a generalisation based on actual practice and was claimed to have the status of a `de facto standard'. This, however, has been contested repeatedly, as we shall see. 
\begin{figure}[t] \centering \begin{minipage}[t][2cm]{\textwidth} \begin{center} \smartdiagramset{ border color=none, set color list={gray!60!black,white!60!black, gray!60!black, white!60!black, gray!60!black, white!60!black}, back arrow disabled=true} \smartdiagramadd[flow diagram:horizontal]{% Text Planner, text plan, Sentence Planner, sentence plan, Realiser, text% }{} \end{center} \end{minipage} \caption{Classical three-stage NLG architecture, after \citeA{Reiter2000}. Darker segments illustrate the three main modules; lighter segments show the outputs.} \label{fig:modular-arch} \end{figure} Different modules in the pipeline incorporate different subsets of the tasks described in Section \ref{sec:tasks}. The first module, the {\it Text Planner} (or Document Planner, or Macroplanner), combines content selection and text structuring (or document planning). Thus, it is concerned mainly with strategic generation \cite{McDonald1993}, the choice of `what to say'. The resulting text plan, a structured representation of messages, is the input to the {\it Sentence Planner}\/ (or microplanner), which typically combines sentence aggregation, lexicalisation and referring expression generation \cite{Reiter2000}. If text planning amounts to {\it deciding what to say}\/, sentence planning can be understood as deciding {\it how to say it}. All that remains then is to actually say it, i.e., generate the final sentences in a grammatically correct way, by applying syntactic and morphological rules. This task is performed by the {\it Linguistic Realiser}. Together, sentence planning and realisation encompass the set of tasks traditionally referred to as {\em tactical generation}. 
The pipeline architecture shares some characteristics with a widely-used architecture in text summarisation \cite{Mani2001,Nenkova2011}, where the process is sub-divided into (a) analysis of source texts and selection of information; (b) transformation of the selected information to enhance fluency; and (c) synthesis of the summary. A second related architecture, also noted by \citeA{Reiter1994}, comes from psycholinguistic accounts of human speech production: the most influential of these, proposed by \citeA{Levelt1989,Levelt1999}, makes a similar distinction between deciding what to say and determining how to say it. Levelt's model allows for a limited degree of self-monitoring through feedback loops, a feature that is absent in Reiter's {\sc nlg} pipeline, but continues to play an important role in psycholinguistics \cite<cf.>{Pickering2013}, though here too there has been increasing emphasis on more integrated models. A hallmark of the architecture in Figure \ref{fig:modular-arch} is that it represents clear-cut divisions among tasks that are traditionally considered to belong to the `what' (strategic) and the `how' (tactical). However, this division is not universally accepted in practice. In an earlier survey, \citeA{Mellish2006} concluded that while several {\sc nlg} systems incorporate many of the core tasks outlined in Section \ref{sec:tasks}, their organisation varies considerably from system to system. Indeed, some tasks may be split up across modules. For example, the content determination part of referring expression generation might be placed in the sentence planner, but decisions about form (such as whether to use an anaphoric {\sc np}, and if so, what kind of {\sc np} to produce) may have to wait until at least some realisation-related decisions have been taken.
Based on these observations, \citeauthor{Mellish2006} proposed an alternative formalism, the `objects-and-arrows' framework, within which different types of information flow between {\sc nlg} sub-tasks can be accommodated. Rather than offering a specific architecture, this framework was intended as a formalism within which high-level descriptions of different architectures can be specified. However, it retains the principle that the tasks, irrespective of their organisation, are relatively well-defined and distinguished. Another recent development in relation to the pipeline architecture in Figure \ref{fig:modular-arch} is a proposal by \citeA{Reiter2007} to accommodate systems in which input consists of raw (often numeric) data that requires some preprocessing before it can undergo the kind of selection and planning that the Text Planner is designed to execute. The main characteristic of these systems is that input is unstructured, in contrast to systems which operate over logical forms, or database entries. Examples of application domains where this is the case include weather reporting \cite<e.g.,>{Goldberg1994,Buseman1997,Coch1998,Turner2008a,Sripada2003,Ramos-Soto2015}, where the input often takes the form of numerical weather predictions; and generation of summaries from patient data \cite<e.g.,>{Hueske-kraus2003,Harris2008,Gatt2009,Banaee2013}. In such cases, {\sc nlg} systems often need to perform some form of {\em data abstraction} (for example, identifying broad trends in the data), followed by {\em data interpretation}. The techniques used to perform these tasks range from extensions of signal processing techniques \cite<e.g.,>{Portet2009} to the application of reasoning formalisms based on fuzzy set theory \cite<e.g.,>{Ramos-Soto2015}. \citeauthor{Reiter2007}'s \citeyear{Reiter2007} proposal accommodates these steps by extending the pipeline `backwards', incorporating stages prior to Text Planning. 
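The data abstraction step that this backward-extended pipeline adds can be illustrated with a toy trend detector over a numeric series; the threshold value and trend labels below are invented for illustration:

```python
def abstract_trends(series, threshold=0.5):
    """Toy data abstraction: label each step in a numeric series as
    rising, falling or steady, merging runs of the same label."""
    trends = []
    for prev, curr in zip(series, series[1:]):
        delta = curr - prev
        label = ("rising" if delta > threshold
                 else "falling" if delta < -threshold
                 else "steady")
        if trends and trends[-1][0] == label:
            trends[-1] = (label, trends[-1][1] + 1)  # extend current run
        else:
            trends.append((label, 1))                # start a new run
    return trends

# Invented hourly temperature readings: a rise, a plateau, then a drop.
print(abstract_trends([10, 11, 12, 12.2, 11]))
# [('rising', 2), ('steady', 1), ('falling', 1)]
```

The resulting symbolic trends, rather than the raw readings, would then be passed on for data interpretation and content selection.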
Notwithstanding its elegance and simplicity, there are challenges associated with a pipeline {\sc nlg} architecture, of which two are particularly worth highlighting: \begin{itemize} \item The {\em generation gap}\/ \cite{Meteer1991} refers to mismatches between strategic and tactical components, so that early decisions in the pipeline have unforeseen consequences further downstream. To take an example from \citeA{Inui1992}, a generation system might determine a particular sentence ordering during the sentence planning stage, but this might turn out to be ambiguous once sentences have actually been realised and orthography has been inserted; \item {\em Generating under constraints}: Itself perhaps an instance of the generation gap, this problem can occur when the output of a system has to match certain requirements, for example, it cannot exceed a certain length \cite<see>[for discussion]{Reiter2000a}. Formalising this constraint might appear possible at the realisation stage -- by stipulating the length constraint in terms of number of words or characters, for instance -- but it is much harder at the earlier stages, where the representations are pre-linguistic and their mapping to the final text is potentially unpredictable. \end{itemize} These, and related problems, motivated the development of alternative architectures. For instance, some early {\sc nlg} systems were based on an interactive design, in which a module's initially incomplete output could be fleshed out based on feedback from a later module \cite<the {\sc pauline} system is an example of this;>{Hovy1988}. An even more flexible stance is taken in blackboard architectures, in which task-specific procedures are not rigidly pre-organised, but perform their tasks reactively as the output, represented in a data structure shared between tasks, evolves \cite<e.g.,>{Nirenburg1989}.
Finally, revision-based architectures allow a limited form of feedback between modules under monitoring, with the possibility of altering choices which prove to be unsatisfactory \cite<e.g.,>{Mann1981,Inui1992}. This has the advantage of not requiring `early' modules to be aware of the consequences of their choices for subsequent modules, since something that goes wrong can always be revised \cite{Inui1992}. Revision need not be carried out exclusively to rectify shortcomings. For instance, \citeA{Robin1993} used revision in the context of sports summaries; an initial draft was revised to add historical background information that was made relevant by the events reported in the draft, also taking decisions as to where to place them in relation to the main text. The price that all of these alternatives potentially incur is, of course, a reduction in efficiency, as noted by \citeA{Smedt1996}. Despite early criticisms of the modular approach, the strategic versus tactical division continues to influence recent data-driven approaches to {\sc nlg}, including a number of those discussed in Sections \ref{sec:datadriven} and \ref{sec:deep-learning} below \cite<e.g.>[among others]{Dusek2015,Dusek2016}. However, other alternatives to pipelines often end up blurring the boundaries between modules in the {\sc nlg} system. This is a feature that is more evident in some planning-based and integrated approaches proposed in recent years. It is to these that we now turn. \subsection{Planning-Based Approaches}\label{sec:planning} In {\sc ai}, the planning problem can be described as the process of identifying a sequence of one or more actions to satisfy a particular goal. An initial goal can be decomposed into sub-goals, satisfied by actions each of which has its preconditions and effects. In the classical planning paradigm \cite<{\sc strips};>{Fikes1971}, actions are represented as tuples of such preconditions and effects. 
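A {\sc strips}-style operator, as a tuple of preconditions and effects, can be rendered minimally as follows; the communicative action shown (informing a hearer of a fact) is an invented illustration, not drawn from any of the systems cited:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Operator:
    """A STRIPS-style action: a tuple of preconditions and effects."""
    name: str
    preconditions: frozenset
    add: frozenset
    delete: frozenset = frozenset()

    def applicable(self, state):
        # The action may fire only if all preconditions hold in the state.
        return self.preconditions <= state

    def apply(self, state):
        assert self.applicable(state)
        # Effects: delete old facts, add new ones.
        return (state - self.delete) | self.add

# Invented communicative operator: informing the hearer of a fact.
inform = Operator(
    name="inform(hearer, fact)",
    preconditions=frozenset({"speaker_knows(fact)", "channel_open"}),
    add=frozenset({"hearer_knows(fact)"}),
)

state = frozenset({"speaker_knows(fact)", "channel_open"})
new_state = inform.apply(state)  # now also contains hearer_knows(fact)
```

A planner then searches for a sequence of such operators whose cumulative effects satisfy the communicative goal.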
The connection between planning and {\sc nlg} lies in the fact that text generation can be viewed as the execution of planned behaviour to achieve a communicative goal, where each action leads to a new state, that is, a change in a context that includes not only the linguistic interaction or discourse history to date, but also the physical or situated context and the user's beliefs and actions \cite<see>[for some recent perspectives on this topic]{Lemon2008,Rieser2009,Dethlefs2014,Garoufi2013,Garoufi2014}. This perspective on {\sc nlg} is therefore related to the view of `language as action' \cite{Clark1996a}, itself rooted in a philosophical tradition inaugurated by the work of \citeA{Austin1962} and \citeA{Searle1969}. Indeed, some of the earliest {\sc ai} work in this tradition \cite<especially>{Cohen1979,Cohen1985} sought an explicit formulation of preconditions (akin to Searle's felicity conditions) for speech acts and their consequences. Given that there is in principle no restriction on what types of actions can be incorporated in a plan, it is possible for plan-based approaches to {\sc nlg} to cut across the boundaries of many of the tasks that are normally encapsulated in the classic pipeline architecture, combining both tactical and strategic elements by viewing the problems of what to say and how to say it as part and parcel of the same set of operations. Indeed, there are important precedents in early work for a unified view of {\sc nlg} as a hierarchy of goals, the {\sc kamp} system \cite{Appelt1985} being among the best known examples.
For instance, to generate referring expressions in {\sc kamp}, the starting point was reasoning about interlocutors' beliefs and mutual knowledge, whereupon the system generated sub-goals that percolated all the way down to property choice and realisation, finally producing a referential {\sc np} whose predicted effect was to alter the hearer's belief state about the referent \cite<see>[for a similar approach to the generation of referring expressions in dialogue]{Heeman1995}. One problem with these perspectives, however, is that deep reasoning about beliefs, desires and intentions \cite<or {\sc bdi}, as it is often called following>{Bratman1987} requires highly expressive formalisms and incurs considerable computational expense. One solution is to avoid general-purpose reasoning formalisms and instead adapt a linguistic framework to the planning paradigm for {\sc nlg}. \subsubsection{Planning through the Grammar} The idea of interpreting linguistic formalisms in planning terms is again prefigured in early {\sc nlg} work. For example, some early systems \cite<e.g. {\sc kpml}, which we briefly discussed in the context of realisation in Section \ref{sec:lr};>{Bateman1997} were based on Systemic-Functional Grammar \cite<{\sc sfg; }>{Halliday2004}, which can be seen as a precursor to contemporary planning-based approaches, since {\sc sfg} models linguistic constructions as the outcome of a traversal through a decision network that extends backwards to pragmatic intentions. In a similar vein, both \citeA{Hovy1991} and \citeA{Moore1993} interpreted the relations of Rhetorical Structure Theory \cite{Mann1988} as operators for text planning. Some recent approaches integrate much of the planning machinery into the grammar itself, viewing linguistic structures as planning operators. This requires grammar formalisms which integrate multiple levels of linguistic analysis, from pragmatics to morpho-syntax. 
It is common for contemporary planning-based approaches to {\sc nlg} to be couched in the formalism of Lexicalised Tree Adjoining Grammar \cite<{\sc ltag}; >{Joshi1997}, though other formalisms, such as Combinatory Categorial Grammar \cite{Steedman2000} have also been shown to be adequate to the task \cite<see especially>[for an approach to generation using Discourse Combinatory Categorial Grammar]{Nakatsu2010}. In an {\sc ltag}, pieces of linguistic structure (so-called elementary trees in a lexicon) can be coupled with semantic and pragmatic information that specify (a) what semantic preconditions need to obtain in order for the item to be felicitously used; and (b) what pragmatic goals the use of that particular item will achieve \cite<see>[for planning-based work using {\sc ltag}]{Stone1998,Garoufi2013,Koller2002}. As an example of how such a formalism could be deployed in a planning framework, let us focus on the task of referring to a target entity. \citeA{Koller2007} formulated the task in a way that obviates the need to distinguish between the content determination and realisation phases \cite<an approach already taken in>{Stone1998}. Furthermore, they do not separate sentence planning, {\sc reg} and realisation, as is done in the traditional pipeline. Consider the sentence {\em Mary likes the white rabbit}. Simplifying the formalism for ease of presentation, we can represent the lexical item {\em likes} as follows \cite<this example is based on>[albeit with some simplifications]{Garoufi2014}: \begin{example} {\bf likes($u$, $x$, $y$)} action:\\ {\sc preconditions}: \begin{itemize} \item The proposition that $x$ likes $y$ is part of the knowledge base (i.e. 
the statement is supported); \item $x$ is animate; \item The current utterance $u$ can be substituted into the derivation $S$ under construction; \end{itemize} {\sc effects}: \begin{itemize} \item $u$ is now part of $S$ \item New {\sc np} nodes for $x$ in agent position and $y$ in patient position have been set up (and need to be filled). \end{itemize} \end{example} As in {\sc strips}, an operator consists of preconditions and effects. Note that the preconditions associated with the lexical item require support in the knowledge base (thus making reference to the input {\sc kb}, which normally would not be accessible to the realiser), and include semantic information (such as that the agent needs to be animate). Having inserted {\em likes} as the sentence's main verb, we have two noun phrases which need to be filled by generating {\sc np}s for the arguments $x$ and $y$. Rather than deferring this task to a separate {\sc reg} module, \citeauthor{Koller2007} build referring expressions by associating further pragmatic preconditions on the linguistic operators (elementary trees) that will be incorporated in the referential {\sc np}. First, the entity must be part of the hearer's knowledge state, since an identifying description (say, to $x$) presupposes that the hearer is familiar with it. Second, an effect of adding words to the {\sc np} (such as the predicates {\em rabbit} or {\em white}) is that the phrase excludes distractors, i.e. entities of which those properties are not true. In a scenario with one human being and two rabbits, only one of which (the $y$ in our example) is white, the derivation would proceed by first updating the {\sc np} corresponding to $y$ with {\em rabbit}, thereby excluding the human from the distractor set, but leaving the goal to distinguish $y$ unsatisfied (since $y$ is not the only rabbit). The addition of another predicate to the {\sc np} ({\em white}) does the trick. 
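Abstracting away from the grammar formalism, the distractor-elimination logic in this derivation amounts to an incremental content-selection loop: each added property has the effect of removing distractors, and the goal is satisfied when none remain. The following sketch uses an invented domain mirroring the white-rabbit example:

```python
def describe(target, domain):
    """Incrementally add the target's properties until the description
    excludes every distractor; None if the target can't be singled out."""
    distractors = {e for e in domain if e != target}
    description = []
    for prop in domain[target]:
        if not distractors:
            break  # goal satisfied: target uniquely identified
        excluded = {e for e in distractors if prop not in domain[e]}
        if excluded:                  # effect: distractors are removed
            description.append(prop)
            distractors -= excluded
    return description if not distractors else None

# One human and two rabbits, only one of which is white.
domain = {
    "e1": ["human"],
    "e2": ["rabbit", "brown"],
    "e3": ["rabbit", "white"],
}
print(describe("e3", domain))  # ['rabbit', 'white']
```

In the planning formulation these same checks appear as preconditions and effects attached to grammar operators, rather than as a stand-alone {\sc reg} procedure.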
A practical advantage to planning-based approaches is the availability of a significant number of off-the-shelf planners. Once the {\sc nlg} task is formulated in an appropriate plan description language, such as the Planning Domain Definition Language \cite<{\sc pddl}; >{McDermott2000}, it becomes possible in principle to use any planner to generate text. However, planners remain beset by problems of efficiency. In a set of experiments on {\sc nlg} tasks of differing complexity, \citeA{Koller2011} noted that planners tend to spend significant amounts of time on preprocessing, though solutions could often be found efficiently once preprocessing was complete. \subsubsection{Stochastic Planning under Uncertainty using Reinforcement Learning}\label{sec:reinforcement} The approaches to planning we have discussed so far are largely rule-based and tend to view the relationship between a planned action and its consequences (that is, its impact on the context), as fixed \cite<though exceptions exist, as in {\em contingency planning}, which generates multiple plans to address different possible outcomes;>{Steedman2007}. As \citeA{Rieser2009} note, this view is unrealistic. Consider a system that generates a restaurant recommendation. The consequences of its output (that is, the new state it gives rise to) are subject to noise arising from several sources of uncertainty. In part, this is due to trade-offs, for example, between needing to include the right amount of information while avoiding excessive prolixity. Another source of uncertainty is the user, whose actions may not be the ones predicted by the system. 
An instance of Meteer's \citeyear{Meteer1991} generation gap can rear its head, for instance if a stochastic realiser renders the content of a message in an ambiguous or excessively lengthy utterance \cite{Rieser2009}, a problem that could be addressed by allowing different sub-tasks to share knowledge sources and be guided by overlapping constraints \cite[discussed below]{Dethlefs2015}. In short, planning a good solution to reach a communicative goal could be viewed as a stochastic optimisation problem (a theme we revisit in Section \ref{sec:classification} below). This view is shared by many recent approaches based on Reinforcement Learning \cite<{\sc rl};>{Lemon2008,Rieser2009,Rieser2011}, especially those that tackle {\sc nlg} within a dialogue context. In this framework, generation can be modelled as a Markov decision process where states are associated with possible actions and each state-action pair has a probability of moving from a state at time $t$ to a new state at $t+1$ via action $a$. Crucially for the learning algorithm, transitions are associated with a reinforcement signal, via a reward function that quantifies the optimality of the generated output. Learning usually involves simulations in which different generation strategies or `policies' -- essentially, plans corresponding to possible paths through the state space -- come to be associated with different rewards. The {\sc rl} framework has been argued to be better at handling uncertainty in dynamic environments than supervised learning or classification, since these do not enable adaptation in a changing context \cite{Rieser2009}. \citeA{Rieser2011a} showed that this approach is effective in optimising information presentation when generating restaurant recommendations. \citeA{Janarthanam2014} used it to optimise the choice of information to select in a referring expression, given a user's knowledge.
The system learns to adapt its user model as the user acquires new knowledge in the course of a dialogue. An important contribution of this work has been in exploring joint optimisation, where the policy learned satisfies multiple constraints arising from different sub-tasks of the generation process, by sharing knowledge across the sub-tasks. \citeA{Lemon2011a} showed that joint optimisation can learn a policy that determines when to generate informative utterances or queries to seek more information from a user. Similarly, \citeA{Cuay2011} used hierarchical {\sc rl} to jointly optimise the problem of finding and describing a short route description, while adapting to a user's prior knowledge, giving rise to a strategy whereby the user is guided past landmarks that they are familiar with, while avoiding potentially confusing junctions. Also in a route-finding setting, \citeA{Dethlefs2015} develop a hierarchical model comprising a set of learning agents whose tasks range from content selection through realisation. They show that a joint framework in which agents share knowledge, outperforms an isolated learning framework in which each task is modelled separately. For example, the joint policy learns to give high-level navigation instructions, but switches to low-level instructions if the user goes off-track. Furthermore, utterances produced by the joint policy are less verbose and lead to shorter interactions overall. The joint optimisation framework is of course not unique to Reinforcement Learning and planning-based approaches. A number of approaches to content determination discussed in earlier sections, including the work of \citeA{Marciniak2005} and \citeA{Barzilay2005}, also use joint optimisation in their approach to content determination and realisation (see Sections \ref{sec:content-det}), as does the work of \citeA{Lampouras2013}. We return to optimisation in Section \ref{sec:classification} below. 
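The Markov decision process view underlying these approaches can be made concrete with a toy tabular Q-learning loop. The states, actions and reward function below are invented stand-ins for a real user simulation: stopping before any detail has been given earns a low reward, so the learned policy should add detail first.

```python
import random

def q_learn(states, actions, step, episodes=2000,
            alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning; `step` maps (state, action) to
    (next_state, reward, done)."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s, done = states[0], False
        while not done:
            if random.random() < epsilon:              # explore
                a = random.choice(actions)
            else:                                      # exploit
                a = max(actions, key=lambda act: Q[(s, act)])
            s2, r, done = step(s, a)
            best_next = 0.0 if done else max(Q[(s2, act)] for act in actions)
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    return Q

# Toy generation policy: decide whether to add detail before stopping.
states = ["start", "detailed", "end"]
actions = ["add_detail", "stop"]

def step(s, a):
    if a == "stop":
        # Stopping without having given any detail earns a low reward.
        return "end", (1.0 if s == "detailed" else 0.2), True
    return "detailed", 0.0, False

random.seed(0)
Q = q_learn(states, actions, step)
policy = {s: max(actions, key=lambda a: Q[(s, a)])
          for s in ("start", "detailed")}
print(policy)  # the learned policy adds detail first, then stops
```

In a real system the state would encode the dialogue context and user model, and the reward would come from a trained user simulation rather than a hand-written `step` function.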
In summary, {\sc nlg} research within the planning paradigm has highlighted the desirability of developing unified formalisms to represent constraints on the generation process at multiple levels, whether this is done using {\sc ai}-based planning formalisms \cite{Koller2011}, or stochastically via Reinforcement Learning. Among its contributions, the latter line of work has shed light on the value of (a) hierarchical relationships among sub-problems; and (b) joint optimisation of different sub-tasks. Indeed, the latter trend belongs to a much broader range of research on integrated approaches to {\sc nlg}, to which we turn our attention immediately below. \subsection{Other Stochastic Approaches to NLG}\label{sec:datadriven} As we noted at the start of this section, whether a system is data-driven or not is independent of its architectural organisation. Indeed, some of the earliest challenges to a modular or pipeline approach described in Section \ref{sec:modular} above, including revision-based and blackboard architectures, were symbolic in their methodological orientation. At the same time, the shift towards data-driven methods and the availability of data sources has given greater impetus to integrated approaches to {\sc nlg}, although this shift began somewhat later than in other areas of {\sc nlp}. As a result, a discussion of integrated approaches will necessarily tend to emphasise statistical methods. In the remainder of this section, we start with an overview of methods used to acquire training data for {\sc nlg} -- in particular, pairings of inputs (data) and outputs (text) -- before turning to an overview of techniques and frameworks. One of the themes that will emerge from this overview is that, as in the case of planning, statistical methods often take a unified or `global', rather than a modularised, view of the {\sc nlg} process.
\subsubsection{Acquiring Data}\label{sec:data} As noted in Section \ref{sec:tasks}, some {\sc nlg} tasks support the transition to a stochastic approach fairly easily. For example, research on realisation often exploits the existence of treebanks from which input-output correspondences can be learned. Similarly, the emergence of corpora of referring expressions representing both input domains and output descriptions \cite<e.g.,>{Gatt2007a,Viethen2011a,Kazemzadeh2014,Gkatzia2015} has facilitated the development of probabilistic {\sc reg} algorithms. Shared tasks have also contributed to the development of both data sources and methods (see Section \ref{sec:evaluation}). As we show in Section \ref{sec:image} below, recent work on image-to-text generation has also benefited from the availability of large datasets. For statistical, end-to-end generation in other domains, there is less of an embarrassment of riches. However, this situation is improving as methods to automatically align input data with output text are developed. Still, it is worth emphasising that many of these alignment approaches use data which is semi-structured, rather than the raw, numerical input (e.g., signals) used by the data-to-text systems that \citeA{Reiter2007}, among others, drew attention to. Currently, there are a number of data-text corpora in specific domains, notably weather forecasting \cite{Reiter2005,Belz2008,Liang2009} and sports summaries \cite{Barzilay2005,Chen2008}. These usually consist of database records paired with free text. A promising recent trend is the introduction of statistical techniques that seek to automatically segment and align such data and text \cite<e.g.,>{Barzilay2005,Liang2009,Konstas2013}. 
In an influential paper, \citeA{Liang2009} described this framework in terms of a generative model that defines a distribution $p(w \mid s)$, for sequences of words $w$ and input states $s$, with latent variables specifying the correspondence between $w$ and $s$ in terms of three main components: (i) the likelihood of database records being selected, given $s$; (ii) the likelihood of certain fields being chosen for some record; (iii) the likelihood that a string of a certain length is generated given the records, fields and states. The parameters of the model can be found using the Expectation Maximization ({\sc em}) algorithm. An example alignment is shown in Figure \ref{fig:liang-example}. \begin{figure}[t] \centering \begin{minipage}{\textwidth} \begin{tabular}{|l|l|lllll|} \hline {\em Events:} & {\sc skycover$_1$} & \multicolumn{5}{c|}{{\sc temperature}$_1$} \\ {\em Fields:} & {\tt percent=0-25} & & \multicolumn{2}{c}{\tt time=6am-9pm} & {\tt min=9} & {\tt max=21} \\ {\em Text:} & cloudy, & with & \multicolumn{2}{l}{temperatures between} & 10 & 20 degrees. [\textellipsis] \\ \hline \end{tabular} \\ \begin{tabular}{|l|lll|ll|} \hline {\em Events:} & & \multicolumn{2}{c|}{{\sc winddir}$_1$} & \multicolumn{2}{c|}{{\sc windspeed}$_1$} \\ {\em Fields:} & & {\tt mode=S} & & & {\tt mean=20} \\ {\em Text:} & [\textellipsis] & south & wind & around & 20mph. \\ \hline \end{tabular} \end{minipage} \caption{Database records aligned with text using minimal supervision \cite<after>{Liang2009}.} \label{fig:liang-example} \end{figure} These models perform alignment by identifying regular co-occurrences of segments of data and text. \citeA{koncel2014multi} go beyond this by proposing a model that exploits linguistic structure to align at varying resolutions.
For example, (\ref{ex:kedziorski}) below is related to two observations in a soccer game log (an aerial pass and a miss), but can be further analysed into two sub-parts (indicated by indices 1 and 2 in our example), which individually map to these two sub-events. \begin{example}\label{ex:kedziorski} (Chamakh rises highest)$_{1}$ and (aims a header towards goal which is narrowly wide)$_{2}$. \end{example} A different approach to data acquisition is described by \citeA{Mairesse2014}, who use crowd-sourcing techniques to elicit realisations for semantic/pragmatic inputs describing dialogue acts in the restaurant domain \cite<see>[for another recent approach to crowd-sourcing in a similar domain]{Novikova2016}. The key to the success of this technique is the development of a semantics that is sufficiently transparent for use with non-specialists. In an earlier paper, \citeA{MairesseEtAl2010} describe a method to cut down on the amount of training data required for generation by using {\em uncertainty sampling} \cite{Lewis1994}, whereby a system can be trained on a relatively small amount of input data; subsequently, the learned model is applied to new data, from which the system samples the cases of which it is least certain, forwarding these to a (possibly human) oracle for feedback, which potentially leads to a new training cycle. Many of the stochastic end-to-end systems we discuss below rely on well-defined formalisms and typically need fairly precise alignments between inputs and portions of the output. One of the limitations of these approaches is that the reliance on alignment makes such systems highly domain-specific, as noted by \citeA{Angeli2010}. More recent stochastic methods obviate the need for alignment between input data and output strings. 
This is the case for many systems based on neural networks \cite<e.g.,>[discussed in Section \ref{sec:deep-learning}]{Wen2015,Dusek2016,Lebret2016,Mei2016} as well as other machine-learning approaches \cite<e.g.,>{Dusek2015,Lampouras2016}. For example, \citeA{Dusek2015} use the dialogue acts from the {\sc bagel} dataset \cite{MairesseEtAl2010} as meaning representations; the {\sc bagel} reference texts are parsed using an off-the-shelf deep syntactic analyser. They define a stochastic sentence planner, a variant of the $A^{*}$ algorithm, which builds optimal sentence plans using a base generator and a scoring function to rank candidates. Realisation is conducted using a rule-based realiser. The approach of \citeA{Lampouras2016} also uses unaligned {\sc mr}-text pairs from {\sc bagel}, as well as the related {\sc sf} hotel and restaurant dataset by \citeA{Wen2015}. Here, content determination and realisation are both conceived as classification problems (choosing an attribute from the {\sc mr}, or choosing a word for the output), but are optimised jointly in an iterative training algorithm using imitation learning. \subsubsection{NLG as a Sequential, Stochastic Process}\label{sec:lm} Given an alignment between data and text, one way of modelling the {\sc nlg} process is to remain faithful to the division between strategic and tactical choices, using the statistical alignment to inform content selection, while deploying {\sc nlp} techniques to acquire rules, templates or schemas \cite<\`{a} la >{McKeown1985} to drive sentence planning and realisation. Recall that the generative model of \citeA{Liang2009} pairs data to text based on a sequential, Markov process, combining strategic choices (of {\sc db} records and fields) with tactical choices (of word sequences) into a single probabilistic model. In fact, Markov-based language modelling approaches continue to feature prominently in data-driven {\sc nlg}.
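A Markov (bigram) language model used generatively can be sketched as follows; the utterance class and training sentences below are invented for illustration:

```python
import random
from collections import defaultdict

def train_bigrams(utterances):
    """Bigram counts over tokenised utterances, with <s>/</s> markers."""
    counts = defaultdict(lambda: defaultdict(int))
    for words in utterances:
        for w1, w2 in zip(["<s>"] + words, words + ["</s>"]):
            counts[w1][w2] += 1
    return counts

def generate(counts, max_len=20):
    """Sample from the bigram model until </s> (or max_len words)."""
    word, out = "<s>", []
    while len(out) < max_len:
        nxt = counts[word]
        word = random.choices(list(nxt), weights=list(nxt.values()))[0]
        if word == "</s>":
            break
        out.append(word)
    return " ".join(out)

# One model would be trained per utterance class; here, a single
# invented class of departure-city queries.
query_depart = train_bigrams([
    ["what", "city", "are", "you", "leaving", "from"],
    ["where", "are", "you", "leaving", "from"],
])
random.seed(0)
print(generate(query_depart))
```

Each sampled word depends only on its predecessor, which is precisely the local-history limitation of such models discussed below.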
One of the earliest examples is the work of \citeA{Oh2002} in the context of a dialogue system in the travel domain, where the input takes the form of a dialogue act (e.g. a query that the system needs to make to obtain information about the user's travel plans) with the attributes to include (e.g. the departure city). \citeauthor{Oh2002}'s approach encompasses both content planning and realisation. It relies on dialogue corpora annotated with utterance classes, that is, the type of dialogue act that each utterance is intended to fulfil. On this basis, they construct separate $n$-gram language models for each utterance class, as well as for word-classes that can appear in the input (for example, words corresponding to {\tt departure city}). Content planning is handled by a model that predicts which attributes should be included in an utterance on the basis of recent dialogue history. Realisation is handled using a combination of templates and $n$-gram models. Thus, generation is conceived as a two-step (planning followed by realisation) process. The reliance on standard language models has one potential drawback, in that such models are founded on a local history assumption, limiting the extent to which prior selections can influence current choices. An alternative, discriminative model \cite<known to the {\sc nlp} community at least since>{Ratnaparkhi96} is logistic regression (Maximum Entropy). The foundations for this approach in {\sc nlg} can be found in the work of \citeA{Ratnaparkhi2000}, who focussed primarily on realisation (albeit combined with elements of sentence planning). He compared two stochastic {\sc nlg} systems based on a maximum entropy learning framework to a baseline {\sc nlg} system.
The first of these ({\sc nlg2} in Ratnaparkhi's paper) uses a conditional language model that generates sentences in an incremental, left-to-right fashion, by predicting the best word given both the preceding history (as in standard n-gram models) and the semantic attributes that remain to be expressed. The second ({\sc nlg3}) augments the model with syntactic dependency relations, performing generation by recursively predicting the left and right children of a given constituent. In an evaluation based on judgements of correctness, \citeauthor{Ratnaparkhi2000} found that the system augmented with dependencies was generally preferred. In later work, \citeA{Angeli2010} describe an end-to-end {\sc nlg} system that maintains a separation between content selection, sentence planning and realisation, modelling each process as a sequence of decisions in a log-linear framework, where choices can be conditioned on arbitrarily long histories of previous decisions. This enables them to handle long-range dependencies, such as coherence relations, more flexibly \cite<e.g., a model can incorporate the information that a weather report which describes wind speed should do so after mentioning wind direction; see>[for similar insights based on global optimisation]{Barzilay2005}. The separation of tasks is maintained insofar as a different set of features can be used to inform decisions at each stage of the process. Sentence planning and realisation decisions are based on templates acquired from corpus texts: a template is selected based on its likelihood given the database fields selected during content selection. \begin{figure}[t] \centering \includegraphics[width=0.9\textwidth,height=0.2\textheight]{img/mairesseyoung-tree} \caption{Tree structure for a dialogue act, after \citeA{Mairesse2014}. Leaves correspond to word sequences. Non-terminal nodes are semantic attributes, shown at the bottom as semantic stacks. 
Stacks in bold represent mandatory content.} \label{fig:mairesseyoung} \end{figure} \citeA{Mairesse2014} describe a different approach, which also relies on alignments between database records and text, and seeks a global solution to generation, without a crisp distinction between strategic and tactical components. In this case, the basic representational framework is a tree of the sort shown in Figure \ref{fig:mairesseyoung}. The root indicates a dialogue act type (in the example, the dialogue act seeks to {\tt inform}). Leaves in the tree correspond to words or word sequences, while nonterminals are {\em semantic stacks}, that is, the pieces of input to which the words correspond. In this framework, content selection and realisation can be solved jointly by searching for the optimal stack sequence for a given dialogue act, and the optimal word sequence corresponding to that stack sequence. \citeauthor{Mairesse2014} use a factored language model ({\sc flm}), which extends $n$-gram models by conditioning probabilities on different utterance contexts, rather than simply on word histories. Given an input dialogue act, generation works by applying a Viterbi search through the {\sc flm} at each of the following stages: (a) mandatory semantic stacks are identified for the dialogue act; (b) these are enriched with possible non-mandatory stacks (those which are not in boldface in Figure \ref{fig:mairesseyoung}), usually corresponding to function words; (c) realisations are found for the stack sequence. The approach is also extended to deal with $n$-best realisations, as well as to handle variation, in the form of paraphrases for the same input.
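The Viterbi search at the core of this approach is standard dynamic programming. The sketch below decodes a best stack sequence for a word sequence under toy transition and emission probabilities; the stacks, probabilities and {\sc hmm}-style factorisation are invented for illustration, whereas the original system searches through a factored language model:

```python
import math

def viterbi(words, states, start_p, trans_p, emit_p):
    """Log-space Viterbi: the most probable sequence of hidden states
    (here: semantic stacks) for an observed word sequence."""
    def log(p):
        return math.log(max(p, 1e-9))  # floor to avoid log(0)
    # layer maps each state to (best score so far, best path so far).
    layer = {s: (log(start_p.get(s, 0)) + log(emit_p[s].get(words[0], 0)), [s])
             for s in states}
    for word in words[1:]:
        layer = {s: max((score + log(trans_p[p].get(s, 0))
                         + log(emit_p[s].get(word, 0)), path + [s])
                        for p, (score, path) in layer.items())
                 for s in states}
    return max(layer.values())[1]

# Hypothetical stacks and probabilities for a restaurant recommendation.
stacks = ["inform(name)", "inform(pricerange)"]
start_p = {"inform(name)": 0.8, "inform(pricerange)": 0.2}
trans_p = {"inform(name)": {"inform(name)": 0.6, "inform(pricerange)": 0.4},
           "inform(pricerange)": {"inform(name)": 0.1, "inform(pricerange)": 0.9}}
emit_p = {"inform(name)": {"charlie": 0.5, "chan": 0.5},
          "inform(pricerange)": {"is": 0.4, "cheap": 0.6}}
print(viterbi(["charlie", "chan", "is", "cheap"], stacks, start_p, trans_p, emit_p))
```

The same dynamic program applies whether one searches for the best stack sequence given words or the best word sequence given stacks; only the roles of the tables change.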
\subsubsection{NLG as Classification and Optimisation}\label{sec:classification} An alternative way to think about {\sc nlg} decisions at different levels is in terms of classification, already encountered in the context of specific tasks, such as content determination \cite<e.g.,>{Duboue2003} and realisation \cite<e.g.,>{Filippova2007}. Since generation is ultimately about choice-making at multiple levels, one way to model the process is by using a cascade of classifiers, where the output is constructed incrementally, so that any classifier $C_{i}$ uses as (part of) its input the output of a previous classifier $C_{i-1}$. Within this framework, it is still possible to conceive of {\sc nlg} in terms of a pipeline. As \citeA{Marciniak2005} note, an alternative way of thinking about it is in terms of a weighted, multi-layered lattice, where generation amounts to a best-first traversal: at any stage $i$, classifier $C_{i}$ produces the most likely output, which leads to the next stage $C_{i+1}$ along the most probable path. This generalisation is conceptually related to the view of {\sc nlg} in terms of policies in the Reinforcement Learning framework (see Section \ref{sec:reinforcement} above), which define a traversal through sequences of states which may be hierarchically organised \cite<as in the work of>[for example]{Dethlefs2015}. \citeA{Marciniak2004} start from a small corpus of manually annotated texts of route descriptions, dividing generation into a series of eight classification problems, from determining the linear precedence of discourse units, to determining the lexical form of verbs and the type of their arguments. Generation decisions are taken using the instance-based KStar algorithm, which is shown to outperform a majority baseline on all classification decisions.
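A best-first traversal of such a classifier cascade can be sketched generically. The stages, labels and scores below are invented for illustration; a real system would plug in trained classifiers (KStar, maximum entropy, etc.) as the scoring functions:

```python
def cascade(stages, candidates):
    """Greedy best-first traversal of a decision lattice: at each stage,
    pick the highest-scoring label given the decisions taken so far."""
    history = []
    for stage, options in zip(stages, candidates):
        history.append(max(options, key=lambda label: stage(history, label)))
    return history

# Two hypothetical stages for a route description.
def order_scorer(history, label):
    return {"landmark-first": 0.7, "action-first": 0.3}[label]

def verb_scorer(history, label):
    # Conditioned on the earlier decision in the cascade.
    if history[0] == "landmark-first":
        return {"finite": 0.8, "imperative": 0.2}[label]
    return {"finite": 0.2, "imperative": 0.8}[label]

stages = [order_scorer, verb_scorer]
candidates = [["landmark-first", "action-first"], ["finite", "imperative"]]
print(cascade(stages, candidates))  # ['landmark-first', 'finite']
```

The sketch also makes the error-propagation problem concrete: a wrong choice at stage $i$ silently changes the conditioning history for every later stage.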
Instance-based approaches to {\sc nlg} are also discussed by \citeA{Varges2010}, albeit in an overgenerate-and-rank approach where rules overgenerate candidates, which are then ranked by comparison to the instance base. A similar framework was recently adopted by \citeA{Zarriess2013}, once again taking as their starting point textual data annotated with a dependency representation, as shown in (\ref{ex:zarriess}) below, where referents are marked {\em v} and {\em p} and the implicit head of the dependency is underlined. \begin{examples}\label{ex:zarriess} \item \gll Junge Familie$_{v:0}$ auf \underline{dem Heimweg}$_{poss:v}$ \underline{ausgeraubt}$_{ag:p}$ Young family on {the way home} robbed \glt `A young family was robbed on their way home.' \glend \end{examples} These authors use a sequence of classifiers to perform referring expression generation and realisation. They use a ranking model based on Support Vector Machines which, given an input dependency representation extracted from annotated text such as (\ref{ex:zarriess}), performs two tasks in either order: (a) mapping the input to a shallow syntactic tree for linearisation; and (b) inserting referring expressions. Interestingly, \citeA{Zarriess2013} observe that the performance of either task is order-dependent, in that both classification tasks perform worse when they are second in the sequence. They observe a marginal improvement when the tasks are performed in parallel, but achieve the best performance in a {\em revision-based} architecture, where syntactic mapping is followed by referring expression insertion, followed by a revision of the syntax. Classification cascades for {\sc nlg} maintain a clean separation between tasks, but research in this area has echoed earlier concerns about pipelines in general (see Section \ref{sec:modular}), the main problem being error propagation. Infelicitous choices will of course impact classification further downstream, a situation analogous to the problem of the generation gap.
The conclusion by \citeA{Zarriess2013} in favour of a revision-based architecture brings our account full circle, in that a well-known solution is shown to yield improvements in a new framework. Our discussion so far has repeatedly highlighted the fact that a sequential organisation of {\sc nlg} tasks is susceptible to error propagation, whether this takes the form of classifier errors, or decisions in a rule-based module that have a negative impact on downstream components. A potential solution is to view generation as an optimisation problem, where the best combination of decisions is sought in an exponentially large space of possible combinations. We have encountered the use of optimisation techniques, such as Integer Linear Programming ({\sc ilp}) in the context of aggregation and content determination (Section \ref{sec:aggregation}). For example, \citeA{Barzilay2006} group content units based on their pairwise similarity, with an optimisation step to identify a set of pairs that are maximally similar. {\sc ilp} has also been exploited by \citeA{Marciniak2004,Marciniak2005}, as a means to counteract the error propagation problem in their original classification-based approach. Similar solutions have been undertaken by \citeA{Lampouras2013}, in the context of generating text from {\sc owl} ontologies. \citeauthor{Lampouras2013} show that using Integer Linear Programming to jointly determine content selection, lexicalisation and aggregation produces more compact verbalisations of ontology facts, compared to a pipeline system \cite<which the authors presented earlier in>{Androtsopoulos2013}.
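Stripped of the {\sc ilp} machinery, the joint objective amounts to minimising unary label costs plus pairwise coupling costs over all assignments. The brute-force sketch below (toy tasks and costs, with exhaustive search standing in for an {\sc ilp} solver) also shows why joint decoding can beat a greedy pipeline: the per-task optima differ from the joint optimum.

```python
from itertools import product

def joint_decode(label_sets, unary_cost, pair_cost):
    """Exhaustively find the assignment (one label per task) minimising
    unary costs plus pairwise coupling costs."""
    best, best_cost = None, float("inf")
    for assignment in product(*label_sets):
        cost = sum(unary_cost[i][lab] for i, lab in enumerate(assignment))
        cost += sum(pair_cost.get((i, j, assignment[i], assignment[j]), 0.0)
                    for i in range(len(assignment))
                    for j in range(i + 1, len(assignment)))
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best

# Hypothetical tasks: 0 = syntactic frame, 1 = referring expression form.
label_sets = [["active", "passive"], ["pronoun", "full_np"]]
unary_cost = [{"active": 0.2, "passive": 0.4},
              {"pronoun": 0.3, "full_np": 0.25}]
# Coupling: passive voice pairs cheaply with a full NP.
pair_cost = {(0, 1, "passive", "full_np"): -0.5}
print(joint_decode(label_sets, unary_cost, pair_cost))  # ('passive', 'full_np')
```

A greedy pipeline would select {\tt active} then {\tt full\_np} (total cost 0.45), whereas the joint search finds {\tt passive} + {\tt full\_np} at cost 0.15; an {\sc ilp} solver reaches the same optimum without enumerating the exponential space.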
Conceptually, the optimisation framework is simple: \begin{enumerate} \item Each {\sc nlg} task is once again modelled as classification or label-assignment, but this time, labels are modelled as binary choices (either a label is assigned or not), associated with a cost function, defined in terms of the probability of a label in the training data; \item Pairs of tasks which are strongly inter-dependent \cite<for example, syntactic choices and {\sc reg} realisations, in the example from>{Zarriess2013} have a cost based on the joint probability of their labels; \item An {\sc ilp} model seeks the global labelling solution that minimises the overall cost, with the added constraint that if one of a pair of correlated labels $\langle l_{i},l_{j} \rangle$ is selected, the other must be too. \end{enumerate} Optimisation solutions have been shown to outperform different versions of the classification pipeline \cite<e.g., that of>{Marciniak2004}, much as the results of \citeA{Dethlefs2015}, discussed above, showed that reinforcement learning of a joint policy produces better dialogue interactions than learning isolated policies for separate {\sc nlg} tasks. The imitation learning framework of \citeA{Lampouras2016} (discussed earlier in Section \ref{sec:data}), which seeks to jointly optimise content determination and realisation, was also shown to achieve competitive results, approaching the performance of the systems of \citeA{Wen2015} on {\sc sf} and of \citeA{Dusek2015} on {\sc bagel}. \subsubsection{NLG as `Parsing'}\label{sec:parsing} In recent years, there has been a resurgence of interest in viewing generation in terms of probabilistic context-free grammar ({\sc cfg}) formalisms, or even as the `inverse' of semantic parsing. 
For example, \citeA{Belz2008} formalises the {\sc nlg} problem entirely in terms of {\sc cfg}s: a base generator expands inputs (bits of weather data in this case) by applying {\sc cfg} rules; corpus-derived probabilities are then used to control the choice of which rules to expand at each stage of the process. The base generator in this work is hand-crafted. However, it is possible to extract rules or templates from corpora, as has been done for aggregation rules \cite[and Section \ref{sec:aggregation}]{Stent2009,White2015}, and also for more general statistical approaches to sentence planning and realisation in a text-to-text framework \cite<e.g.,>{Kondadadi2013}. Similarly, approaches to {\sc nlg} from structured knowledge bases, expressed in formalisms such as {\sc rdf}, have described techniques to extract lexicalised grammars or templates from such inputs paired with textual descriptions \cite{Ell2012,Duma2013,Gyawali2014}. The work of Mooney and colleagues \cite{Wong2007,Chen2008,Kim2010} has compared a number of different generation strategies inspired by the {\sc wasp} semantic parser \cite{Wong2007}, which uses probabilistic synchronous {\sc cfg} rules learned from pairs of utterances and their semantic representations using statistical machine translation techniques. \citeA{Chen2008} use this framework for generation both by adapting {\sc wasp} in a generation framework, and by further adapting it to produce a new system, {\sc wasper-gen}. While {\sc wasp} seeks to maximise the probability of a meaning representation ({\sc mr}) given a sentence, {\sc wasper-gen} does the opposite, seeking the maximally probable sentence given an input {\sc mr}, as it were, learning a translation model from meaning to text. 
When trained on a dataset of sportscasts (the {\sc robocup} dataset), {\sc wasper-gen} outperforms {\sc wasp} on corpus-based evaluation metrics, and is shown to achieve a level of fluency and semantic correctness which approaches that of human text, based on subjective judgements by experimental participants. Note, however, that this framework focusses mainly on tactical generation. Content determination is performed separately, using a variant of the {\sc em}-algorithm to converge on a probabilistic model that predicts which events or predicates should be mentioned. By contrast, the work of \citeA{Konstas2012,Konstas2013}, which also relies on {\sc cfg}s, uses a unified framework throughout. The starting point is an alignment of text with database records, extending the proposal by \citeA{Liang2009}. The process of converting input data to output text is modelled in terms of rules which implicitly incorporate different types of decisions. For example, given a database of weather records, the rules might take the (somewhat simplified) form shown below: \begin{examples} \item $R(\textit{windSpeed}) \rightarrow FS(\textit{temperature}), R(\textit{rain})$\label{ex:konstas1} \item $FS(\textit{windSpeed,min}) \rightarrow F(\textit{windSpeed,max}) FS(\textit{windSpeed,max})$\label{ex:konstas2} \item $FS(\textit{windSpeed,min}) \rightarrow W(\textit{windSpeed,min})$\label{ex:konstas3} \end{examples} where $R$ stands for a database record, $FS$ is a set of fields, $F(x,y)$ stands for field $y$ in record $x$, $W$ is a word sequence, and all rules have associated probabilities that condition the {\sc rhs} on the {\sc lhs}, akin to the {\sc pcfg}s used in parsing. These rules specify that a description of {\em windSpeed} (\ref{ex:konstas1}) should be followed in the text by a temperature and a rain report. According to rule (\ref{ex:konstas2}), minimum windspeed should be followed by a mention of the maximum windspeed with a certain probability. 
Rule (\ref{ex:konstas3}) expands the minimum windspeed rule to a sequence of words according to a bigram language model \cite{Konstas2012}. \citeA{Konstas2012} pack the set of rules acquired from the alignment stage into a hypergraph, and treat generation as decoding to find the maximally likely word sequence. Under this view, generation is akin to inverted parsing. Decoding proceeds using an adaptation of the {\sc cyk} algorithm. Since the model defining the mapping from input to output does not incorporate fluency heuristics, the decoder is interleaved with two further sources of linguistic knowledge by \citeA{Konstas2013}: (a) a weighted finite-state automaton (representing an $n$-gram language model); and (b) a dependency model \cite<cf.>[, also discussed above]{Ratnaparkhi2000}. \subsubsection{Deep Learning Methods}\label{sec:deep-learning} We conclude our discussion of data-driven {\sc nlg} with an overview of applications of deep neural network ({\sc nn}) architectures. The decision to dedicate a separate section is warranted by the recent, renewed interest in these models \cite<see>[for an {\sc nlp}-focussed overview]{Goldberg2016}, as well as the comparatively small (but steadily growing) range of {\sc nlg} models couched within this framework to date. We will also revisit {\sc nn} models for {\sc nlg} under more specific headings in the following sections, especially in discussing stylistic variation (Section \ref{sec:style}) and image captioning (Section \ref{sec:image}), where they are now the dominant approach. As a matter of fact, applications of {\sc nn}s in {\sc nlg} hark back at least to \citeA{Kukich1987}, though her work was restricted to small-scale examples. Since the early 1990s, when interest in neural approaches waned in the {\sc nlp} and {\sc ai} communities, cognitive science research has continued to explore their application to syntax and language production \cite<e.g.,>{Elman1990,Elman1993,Chang2006}.
The recent resurgence of interest in {\sc nn}s is in part due to advances in hardware that can support resource-intensive learning problems \cite{Goodfellow2016}. More importantly, {\sc nn}s are designed to learn representations at increasing levels of abstraction by exploiting backpropagation \cite{LeCun2015,Goodfellow2016}. Such representations are dense, low-dimensional, and distributed, making them well-suited to capturing grammatical and semantic generalisations \cite<see>[{\em inter alia}]{Mikolov2013,Luong2013,Pennington2014}. {\sc nn}s have also scored notable successes in sequential modelling using feedforward networks \cite{Bengio2003,Schwenk2005}, log-bilinear models \cite{Mnih2007} and recurrent neural networks \cite<{\sc rnn}s, >{Mikolov2010}, including {\sc rnn}s with long short-term memory units \cite<{\sc lstm}, >{Hochreiter1997}. The latter are now the dominant type of {\sc rnn} for language modelling tasks. Their main advantage over standard language models is that they handle sequences of varying lengths, while avoiding both data sparseness and an explosion in the number of parameters through the projection of histories into a low-dimensional space, so that similar histories share representations. A demonstration of the potential utility of recurrent networks for {\sc nlg} was provided by \citeA{Sutskever2011}, who used a character-level {\sc lstm} model for the generation of grammatical English sentences. This, however, focussed exclusively on their potential for realisation. Models that generate from semantic or contextual inputs cluster around two related types of models, described below. \subsubsection{Encoder-Decoder Architectures} An influential architecture is the Encoder-Decoder framework \cite{Sutskever2014}, where an {\sc rnn} is used to encode the input into a vector representation, which serves as the auxiliary input to a decoder {\sc rnn}.
This decoupling between encoding and decoding makes it possible in principle to share the encoding vector across multiple {\sc nlp} tasks in a multi-task learning setting \cite<see>[for some recent case studies]{Dong2015,Luong2016}. Encoder-Decoder architectures are particularly well-suited to Sequence-to-Sequence ({\sc seq2seq}) tasks such as Machine Translation, which can be thought of as requiring the mapping of variable-length input sequences in the source language, to variable-length sequences in the target \cite<e.g.,>{Kalchbrenner2013,Bahdanau2015}. It is easy to adapt this view to data-to-text {\sc nlg}. For example, \citeA{CastroFerreira2017} adapt {\sc seq2seq} models for generating text from abstract meaning representations ({\sc amr}s). A further important development within the Encoder-Decoder paradigm is the use of attention-based mechanisms, which force the encoder, during training, to weight parts of the input encoding more when predicting certain portions of the output during decoding \cite<cf.>{Bahdanau2015,Xu2015}. This mechanism obviates the need for direct input-output alignment, since attention-based models are able to learn input-output correspondences based on loose couplings of input representations and output texts \cite<see>[for discussion]{Dusek2016}. In {\sc nlg}, many approaches to response generation in an interactive context (such as dialogue or social media posts) adopt this architecture. For example, \citeA{Wen2015} use semantically-conditioned {\sc lstm}s to generate the next act in a dialogue; a related approach is taken by \citeA{Sordoni2015}, who use {\sc rnn}s to encode both the input utterance and the dialogue context, with a decoder to predict the next word in the response \cite<see also>{Serban2016}. \citeA{Goyal2016} found an improvement in the quality of generated dialogue acts when using a character-based, rather than a word-based {\sc rnn}. 
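In its simplest, dot-product form (no learned projection matrices), the attention computation reduces to a softmax-weighted average of encoder states. A dependency-free numerical sketch, with invented two-dimensional states:

```python
import math

def attend(query, encoder_states):
    """Dot-product attention: score each encoder state against the decoder
    query, softmax the scores, and return the weighted context vector."""
    scores = [sum(q * h for q, h in zip(query, state)) for state in encoder_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    weights = [e / z for e in exps]
    context = [sum(w * state[d] for w, state in zip(weights, encoder_states))
               for d in range(len(query))]
    return weights, context

# Three toy encoder states; the query is most similar to the first and third.
states = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attend([2.0, 0.0], states)
print([round(w, 3) for w in weights])  # most weight on states aligned with the query
```

During training, gradients flow through these soft weights, which is how the loose input-output coupling noted above can be learned without explicit alignment.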
\citeA{Dusek2016} also use a {\sc seq2seq} model with attention for dialogue generation, comparing an end-to-end model where content selection and realisation are jointly optimised (so that outputs are strings), to a model which outputs deep syntax trees, which are then realised using an off-the-shelf realiser \cite<as done in>{Dusek2015}. Like \citeA{Wen2015}, they use a reranker during decoding to rank beam search outputs, penalising those that omit relevant information or include irrelevant information. Their evaluation, on {\sc bagel}, shows that the joint optimisation setup is superior to the {\sc seq2seq} model that generates trees for subsequent realisation. \citeA{Mei2016} also explicitly address the division into content selection and realisation, using {\sc weathergov} data \cite{Angeli2010}. They use a bidirectional {\sc lstm} encoder to map input records to a hidden state, followed by an attention-based aligner which models content selection, determining which records to mention as a function of their prior probability and the likelihood of their alignment with words in the vocabulary; a further refinement step weights the outcomes of the alignment with the priors, making it more likely that more important records will be verbalised. In this approach, {\sc lstm}s are able to learn long-range dependencies between records and descriptors, which the log-linear model of Angeli factored in explicitly (see Section \ref{sec:lm} above). Comparable approaches are now also used for automatic generation of poetry \cite<see e.g., >{zhang2014chinese}, a topic to which we will return below. \subsubsection{Conditioned Language Models} A related view of the data-to-text process views the generator as a conditioned language model, where output is generated by sampling words or characters from a distribution conditioned on input features, which may include semantic, contextual or stylistic attributes.
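Conditioning can be sketched minimally as a next-word distribution computed as a softmax over a base bigram score plus feature-dependent bonuses. The vocabulary, scores and feature names below are hand-set for illustration; real systems learn such weights with feedforward or recurrent networks:

```python
import math

def next_word_probs(prev, vocab, bigram_score, features, feature_bonus):
    """P(word | prev, features): softmax over a base bigram score plus
    feature-conditioned bonuses."""
    logits = [bigram_score.get((prev, w), 0.0)
              + sum(feature_bonus.get((f, w), 0.0) for f in features)
              for w in vocab]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return dict(zip(vocab, (e / z for e in exps)))

# Hypothetical review generator: sentiment shifts the word distribution.
vocab = ["good", "bad", "ok"]
bigram_score = {("was", "good"): 0.5, ("was", "bad"): 0.5, ("was", "ok"): 0.2}
probs_pos = next_word_probs("was", vocab, bigram_score,
                            ["sentiment=positive"],
                            {("sentiment=positive", "good"): 2.0})
print(max(probs_pos, key=probs_pos.get))  # "good"
```

With the feature absent, {\tt good} and {\tt bad} are equally likely after {\tt was}; the stylistic or semantic attribute tilts the distribution, which is the essence of conditioned generation.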
For example, \citeA{Lebret2016} restrict generation to the initial sentence of Wikipedia biographies from the corresponding wiki fact table and model content selection and realisation jointly in a feedforward {\sc nn} \cite{Bengio2003}, conditioning output word probabilities on both local context and global features obtained from the input table. This biases the model towards full coverage of the contents of a field. For example, a field in the table containing a person's name typically consists of more than one word and the model should concatenate the words making up the entire name. While simpler than some of the models discussed above, this model can also be thought of as incorporating an attentional mechanism. \citeA{Lipton2016} use character-level {\sc rnn}s conditioned on semantic information and sentiment, to generate product reviews, while \citeA{Tang2016} generate such reviews using an {\sc lstm} conditioned on input `contexts', where contexts incorporate both discrete (user, location, etc.) and continuous information. Similar approaches have been adopted in a number of models for stylistic and affective generation \cite<see>[and the discussion in Section \ref{sec:style} below]{Li2016,Herzig2017,Ashgar2017,Hu2017,Ficler2017}. \subsection{Discussion} An important theme that has emerged from recent work is the blurring of boundaries between tasks that are encapsulated in traditional architectures. This is evident in planning-based approaches, but perhaps the most radical break from this perspective arises in stochastic data-to-text systems which capitalise on alignments between input data and output text, combining content-oriented and linguistic choices within a unified framework. Among the open questions raised by research on stochastic {\sc nlg} is the extent to which sub-tasks need to be jointly optimised and, if so, which knowledge sources should be shared among them.
This is also seen in recent work using neural models, where joint learning of content selection and realisation has been claimed to yield superior outputs, compared to models that leave the tasks separate \cite<e.g.,>{Dusek2016}. An outstanding issue is the balancing act between achieving adequate textual output versus doing so efficiently and robustly. Early approaches that departed from a pipeline architecture tended to sacrifice the latter in favour of the former; this was the case in revision-based and blackboard architectures. The same is to some extent true of planning-based approaches which are rooted in paradigms with a long history in {\sc ai}: As recent empirical work has shown \cite{Koller2011}, these too are susceptible to considerable computational cost, though this comes with the advantage of a unified view of language generation that is also compatible with well-understood linguistic formalisms, such as {\sc ltag}. Stochastic approaches present a different problem, namely, that of acquiring the right data to construct the necessary statistical models. While plenty of datasets have become available, for tasks such as recommendations in the restaurant or hotel domains, brief weather reports, or sports summaries, it remains to be seen whether data-driven {\sc nlg} models can be scaled up to domains where large volumes of heterogeneous data (numbers, symbols etc) are the norm, and where longer texts need to be generated. While such data is not easy to come by, crowd-sourcing techniques can presumably be exploited \cite{Mairesse2014,Novikova2016}. As we have seen, systems vary in whether they require aligned data (by which we mean data where strings are paired with the portion of the input to which they correspond), or not. 
As deep learning approaches become more popular -- and, as we shall see in the next section, they are now the dominant approach in certain tasks, such as generating image captions -- the need for alignment is becoming less acute, as looser input-output couplings can constitute adequate training data, especially in models that incorporate attentional mechanisms. As these techniques become better understood, they are likely to feature more heavily in a broader range of {\sc nlg} tasks, as well as end-to-end {\sc nlg} systems. A second possible outcome of the renewed interest in deep learning is its impact on representation learning and architectures. In a recent opinion piece, \citeA{Manning2015} suggested that the contribution of deep learning to {\sc nlp} has to date been mainly due to the power of distributed representations, rather than the exploitation of the `depth' of multi-layered models. Yet, as Manning also notes, greater depth can confer representational advantages. As researchers begin to define complex architectures that `self-organise' during training by minimising a loss function, it might turn out that different components of such architectures acquire core representations pertaining to different aspects of the problem at hand. This raises the question whether such representations could be reusable, in the same way that the layers of deep convolutional networks in computer vision learn representations at different levels of granularity which turn out to be reusable in a range of tasks \cite<not just object recognition, for instance, even though networks such as {\sc vgg} are typically trained for such tasks; see>{Simonyan2015}. A related aim, suggested by recent attempts at transfer learning, especially in the {\sc seq2seq} paradigm, is to attempt to learn domain-invariant representations that carry over from one task to another. 
Could {\sc nlp}, and the field of {\sc nlg} in particular, be about to witness a renewed emphasis on multi-levelled approaches to {\sc nlg}, with `deep' architectures whose components learn optimal representations for different sub-tasks, perhaps along the lines detailed in Section \ref{sec:tasks} above? And to what extent would such representations be reusable? As a number of other commentators have pointed out, the prospect of learning domain-invariant linguistic representations that facilitate transfer learning in {\sc nlp}, remains somewhat elusive, despite certain notable successes, not least those scored in the development of distributed word representations.\footnote{For some remarks on this topic, see for example the blog entry by \citeA{Ruder2017}. A recent note of caution against unrealistic claims of success of neural methods in {\sc nlg} was sounded by \citeA{Goldberg2017}.} This could well be the next frontier in research on statistical {\sc nlg}. In the following sections, we turn our attention away from standard tasks and the way they are organised, focussing on three broad topics -- image-to-text generation, stylistic variation and computational creativity -- in which {\sc nlg} research has also intersected with research in other areas of Artificial Intelligence and {\sc nlp}. \section*{Acknowledgements} We thank the four reviewers for their detailed and constructive comments. In addition, we have greatly benefitted from discussions with and comments from Grzegorz Chrupala, Robert Dale, Raquel Herv\'as, Thiago Castro Ferreira, Ehud Reiter, Marc Tanti, Mari\"et Theune, Kees van Deemter, Michael White and Sander Wubben. EK received support from RAAK-PRO SIA (2014-01-51PRO) and The Netherlands Organization for Scientific Research (NWO 360-89-050), which is gratefully acknowledged. \vskip 0.2in \bibliographystyle{theapa}
\section{Introduction} Although much has been learnt from the study of Galactic classical novae (CNe), it is clear that Galactic data are not ideal for establishing the population characteristics of novae because these data are often heavily biased by selection effects. To-date no recurrent novae (RNe) have been identified outside the Milky Way and its companions, but CNe have been studied in about a dozen galaxies. To gain further insight into the population of novae, and specifically to explore further the question of whether there exist two distinct nova populations \citep[see e.g.][]{1992A&A...266..232D}, we are involved in a number of campaigns to study a large sample of novae in the Local Group and beyond. The following sections briefly describe the current status of the three parts of our extragalactic CN work: the POINT-AGAPE survey; our Local Group CN follow-up project; and the Liverpool Extragalactic Nova Survey. \section{The POINT-AGAPE Survey} The Pixel-lensing Observations with the Isaac Newton Telescope -- Andromeda Galaxy Amplified Pixels Experiment (POINT-AGAPE) survey (Calchi Novati et al. 2003) was an optical search for gravitational microlensing events towards the Andromeda Galaxy (M31). As well as microlensing, the survey was sensitive to many different classes of variable stars and other transients, including Classical Novae \citep{2002AIPC..637..481D}. Previous work with the POINT-AGAPE dataset included the development of an automated CN detection pipeline, which led to the discovery of 20 CNe \citep{2004MNRAS.353..571D}. Using the results from the catalogue, a global CN rate for M31 of $65^{+16}_{-15}$~yr$^{-1}$ was derived \citep{2006MNRAS.369..257D}. Separate M31 bulge and disk rates of $38^{+15}_{-12}$~yr$^{-1}$ and $27^{+19}_{-15}$~yr$^{-1}$ respectively were also determined.
The derived global rate is a factor of around two higher than the most robust previous result of $37^{+12}_{-8}$~yr$^{-1}$ \citep{2001ApJ...563..749S} and strong evidence in favour of two separate CN populations was provided: one associated with the M31 bulge, the other with the disk. Darnley et al. (2006) were able to use the M31 dataset and various assumptions about the Milky Way \citep[see][]{2002AIPC..637..462S} to deduce a Galactic bulge nova rate of $14^{+6}_{-5}$~yr$^{-1}$, a disk rate of $20^{+14}_{-11}$~yr$^{-1}$ and a global Galactic rate of $35^{+15}_{-12}$~yr$^{-1}$. This rate is remarkably similar to independent estimates computed from direct observations of the Milky Way's CN population \citep{1997ApJ...487..226S}. Recently, an additional fourth season of POINT-AGAPE legacy data has been obtained. These data are being analysed with the expectation of additional CN detections and hence further refining the M31 bulge and disk rates, and strengthening the evidence in favour of two distinct M31 CN populations. \section{The Local Group} As part of ongoing work observing Local Group CNe, programmes with guaranteed observing time are in place on a number of telescopes to follow up novae discovered in the Andromeda Galaxy, its companion (M32) and the Triangulum Galaxy (M33). To provide optical follow-up observations the 1m telescope at the Mount Laguna Observatory (MLO), the Steward 2.3m and the LT are employed. Time has also been granted on the Hobby-Eberly Telescope (HET) to obtain low-resolution optical spectra and Spitzer Space Telescope time to perform IR photometry and spectroscopy. Systematic studies of M31 (and the Local Group) have become feasible for the first time in recent years due in part to the advent of robotic telescopes, such as the Liverpool Telescope \citep[LT,][]{2004SPIE.5489..679S} and Faulkes Telescopes. However, a large number of novae are discovered in M31 each year by amateurs and professionals alike.
Over the past two years, a total of 31 CNe have been discovered during the $\sim8$ month M31 observing season. This number neglects any contribution from routine optical imaging of M31 with the LT (undertaken by the Angstrom project \citep[see][and Figure~\ref{Angstrom_lightcurve}]{2006MNRAS.365.1099K}), MLO and Steward, amongst others. The Local Group nova sample is also being supplemented by CN alerts from the Angstrom M31 bulge micro-lensing survey \citep{2007ApJ...661L..45D,ATel1192} and serendipitous nova discoveries made by the Lick Observatory Supernova Search (LOSS) and the ROTSE IIIb programme. \begin{figure} \center\includegraphics[keepaspectratio=true,clip=true,angle=270,width=4in]{darnley2_fig1.eps} \caption{Angstrom DIA light-curve of Classical Nova 2006 \#8 \citep{2007ApJ...661L..45D}, first announced by Burwitz et al. (2006). This nova has $t_{2}\leq10$~days and is classed as a ``very fast'' nova. This is the best-sampled light-curve of an extragalactic CN to date.} \label{Angstrom_lightcurve} \end{figure} During the last M31/M33 observing season (August 2006 -- February 2007) six novae in M31 and one in each of M32 and M33 have been followed up with all three optical telescopes, and low-resolution spectroscopy has been performed with the HET \citep{2006ATel..923....1S}. Figure~\ref{HET_spectra} shows HET spectra of three Local Group novae. Infrared photometry and spectroscopy for four CNe in M31 have recently been obtained from Spitzer; these data are still being analysed. \begin{figure} \center\includegraphics[keepaspectratio=true,clip=true,width=4in,viewport=60 15 430 285]{darnley2_fig2.eps} \caption{HET spectra of three of our target Local Group novae, one from each of M31, M32 and M33 \citep{2006AAS...209.0920S}. These spectra show strong hydrogen emission lines as well as prominent Fe II emission lines with characteristic P Cygni profiles \citep{1992AJ....104..725W}.
All three are classic examples of Fe II CNe.} \label{HET_spectra} \end{figure} \section{Liverpool Extragalactic Nova Survey} The Liverpool Extragalactic Nova Survey (LENS) is a high cadence extragalactic CN monitoring survey. Conceived to expand upon the results of the POINT-AGAPE CN survey, LENS studies three more distant galaxies, covering a range of Hubble types; namely, M81, NGC 2403 and M64. This survey operates primarily on the robotic 2m LT and also utilises some RoboNet-1.0 time on the Faulkes Telescope North \citep[FTN,][]{2007P&SS...55..582B}. A primary objective is to determine how the nova rate varies with Hubble type. LENS has to date taken three seasons of data (including an initial pilot season during the commissioning of the LT) for each galaxy, with guaranteed time on the LT to conduct a fourth observing season. These data are reduced using a fully automated difference-image-analysis (DIA) pipeline \citep{Angstrom_Pipeline}, with nova detection via the POINT-AGAPE algorithm \citep{2004MNRAS.353..571D}. Variable object detection and classification are currently being performed on the LENS dataset, and a number of CN candidates have already been identified. \acknowledgements The Liverpool Telescope is operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council (STFC). FTN is operated by the Las Cumbres Observatory Global Telescope network. AWS acknowledges support through NSF grant AST-0607682.
\section{Introduction} \label{sec:introduction} Predictions for structure functions in deep-inelastic scattering (DIS) including perturbative corrections in Quantum Chromodynamics (QCD) have recently been advanced to an unprecedented level of precision over a wide kinematical region of Bjorken $x$ and $Q^2 = -q^2$, with $q$ being the momentum of the exchanged gauge boson. The knowledge of the complete three-loop splitting functions for the scale evolution of unpolarized parton distributions of hadrons~\cite{Moch:2004pa,Vogt:2004mw} together with the second-order coefficient functions~\cite{vanNeerven:1991nn,Zijlstra:1991qc,Zijlstra:1992kj,Zijlstra:1992qd,Moch:1999eb} has completed the next-to-next-to-leading order (NNLO) approximation of massless perturbative QCD for the DIS structure functions $F_1$, $F_2$ and $F_3$. In addition, for electromagnetic (photon-exchange) DIS the three-loop coefficient functions for both $F_{\,2}$ and $F_L = F_{\,2} - 2x F_1$ have become available~\cite{Moch:2004xu,Vermaseren:2005qc}, the latter being actually required to complete the NNLO predictions, since the leading contribution to its coefficient function is of first order in the strong coupling constant $\alpha_{\rm s}$. In the present article, we extend the program of calculating higher order perturbative QCD corrections to the structure functions of charged current DIS. Our studies are motivated by the increasingly accurate measurements of neutral and charged current cross sections at HERA with a polarised beam of electrons and positrons~\cite{Chekanov:2003vw,Adloff:2003uh,Aktas:2005ju}. At the same time we are also able to quantitatively improve predictions for physics at the front-end of a neutrino-factory, see e.g. Ref.~\cite{Mangano:2001mj}. To be specific, we consider neutrino-proton scattering in the combination $\nu P - \bar \nu P$, which corresponds to charged lepton-proton DIS as far as QCD corrections are concerned.
Following Refs.~\cite{Larin:1994vu,Larin:1997wd,Retey:2000nq,Moch:2001im,Blumlein:2004xt} we compute the perturbative QCD predictions to three-loop accuracy for a number of fixed Mellin moments of the structure functions $F_2$, $F_L$ and $F_3$. Within the framework of the operator product expansion (OPE), and working in Mellin space, $F_2^{ \nu P - \bar \nu P}$ and $F_L^{ \nu P - \bar \nu P}$ are functions of odd Mellin moments only, while only even moments contribute to $F_3^{\nu P -\bar \nu P}$. This distinction between odd and even Mellin moments is opposite to the case of the neutral current structure functions and also to the case of charged current structure functions for neutrino-proton scattering in the combination $\nu P + \bar \nu P$. In the latter case, the three-loop results for $F_2^{ \nu P + \bar \nu P}$ and $F_L^{ \nu P + \bar \nu P}$ can be directly checked in electromagnetic DIS and taken over from Refs.~\cite{Moch:2004xu,Vermaseren:2005qc}. Also $F_3^{\nu P + \bar \nu P}$ is known to three-loop accuracy~\cite{MVV7} with parametrizations for the respective coefficient functions given in Ref.~\cite{Vogt:2006bt}. Having available a limited number of fixed Mellin moments for $F_2$, $F_L$ and $F_3$ for both combinations of neutrino-proton scattering, i.e. $\nu P \pm \bar \nu P$, is a prerequisite for a subsequent complete calculation of the respective quantity to three loops. With the methods of Refs.~\cite{Moch:2004pa,Vogt:2004mw,Moch:2004xu,Vermaseren:2005qc} at hand we have all ingredients for a future computation of the ``all-$n$'' results in Mellin-$n$ space, or equivalently the complete expression in Bjorken-$x$ space after an inverse Mellin transform. However, applying the present results we can already comment on a number of phenomenological issues, which we do in a companion paper~\cite{MRV1}. The outline of this article is as follows.
In Section~\ref{sec:formalism} we briefly recall our formalism, which is based on the optical theorem, the forward Compton amplitude and the OPE. Specifically we emphasize the symmetry properties of the Compton amplitude for neutral and charged current processes and show how these select either odd or even Mellin moments for the structure functions $F_2$, $F_L$ and $F_3$ depending on the process under consideration, i.e. $\nu P \pm \bar \nu P$. In Section~\ref{sec:renormalization} we recall details of the renormalization and give all relevant details of the calculation in Section~\ref{sec:calculation}. Section~\ref{sec:results} contains our results for the Mellin moments of $F_2^{ \nu P - \bar \nu P}$, $F_L^{ \nu P - \bar \nu P}$ and $F_3^{\nu P -\bar \nu P}$ in numerical form. Finally, we conclude in Section~\ref{sec:conclusions}. The lengthy full expressions for the new moments of the coefficient functions are deferred to Appendix~\ref{sec:appA} and some details on the OPE are given in Appendix~\ref{sec:appB}. \setcounter{equation}{0} \section{General formalism} \label{sec:formalism} The subject of our calculation is unpolarized inclusive deep-inelastic lepton-nucleon scattering, \begin{eqnarray} \label{eq:dis} l(k) \:+\: {\rm nucl}(p) \:\:\rightarrow\:\: l^{\, \prime}(k^{\,\prime}) \:+\: X\, , \end{eqnarray} where $l(k),\, l^{\,\prime}(k^{\,\prime})$ are leptons of momenta $k$ and $k^{\, \prime}$, ${\rm nucl}(p)$ denotes a nucleon of momentum $p$ and $X$ stands for all hadronic states allowed by quantum number conservation. In this article we are concentrating on charged current neutrino($\nu$)-proton($P$) scattering, i.e. $\nu P$, $\bar \nu P$ via $W^{\pm}$ boson exchange. As is well known, the differential cross section for reaction~(\ref{eq:dis}) can be written as a product of leptonic $L_{\mu\nu}$ and hadronic $W_{\mu\nu}$ tensors \begin{eqnarray} \label{eq:diffcrosssec} d \sigma \propto L^{\mu\nu} W_{\mu\nu}\, .
\end{eqnarray} The leptonic tensor $L^{\mu\nu}$ for electroweak or pure electromagnetic gauge boson exchange is detailed in the literature, see e.g. Ref.~\cite{Yao:2006px}, and will not be considered here. The hadronic tensor in Eq.~(\ref{eq:diffcrosssec}) is given by \begin{eqnarray} \label{eq:htensor} W_{\mu\nu}(p,q) &=& \frac{1}{4\pi} \int d^4z\, {\rm{e}}^{{\rm{i}}q \cdot z}\langle {{\rm nucl}, p}\vert J^{\dagger}_{\mu}(z)J_{\nu}(0)\vert {{\rm nucl},p}\rangle \\ &=& e_{\mu\nu}\, \frac{1}{2x}F_{L}(x,Q^2) + d_{\mu\nu}\, \frac{1}{2x}F_{2}(x,Q^2) + {\rm{i}}\, \varepsilon_{\mu\nu\alpha\beta} \frac{p^\alpha q^\beta}{2 p\!\cdot\! q} F_{3}(x,Q^2)\, ,\nonumber \end{eqnarray} where $J_{\mu}$ is either an electromagnetic or a weak current and $\vert{{\rm nucl},p}\rangle$ is the unpolarized hadronic state with momentum $p$. The boson transfers momentum $q$, $Q^2=-q^2 > 0$, and the Bjorken scaling variable is defined as $x=Q^2/(2p\cdot q)$ with $0 < x \leq 1$.
The tensors $e_{\mu\nu}$ and $d_{\mu\nu}$ are given by \begin{eqnarray} \label{eq:tensordef} e_{\mu\nu} &=& g_{\mu\nu}-\frac{q_{\mu} q_{\nu}}{q^2} \, ,\\ d_{\mu\nu} &=& -g_{\mu\nu}-p_{\mu}p_{\nu}\frac{4x^2}{q^2} -(p_{\mu}q_{\nu}+p_{\nu}q_{\mu})\frac{2x}{q^2} \, , \end{eqnarray} and $\varepsilon_{\mu\nu\alpha\beta}$ is the totally antisymmetric tensor. The hadron structure functions $F_{i}$, $i=L,1,2,3$ are the main subject of our investigations in the present paper, with $F_{1}$ being related to $F_{L}$ and $F_{2}$ by the Callan-Gross relation, \begin{eqnarray} \label{eq:callangross} F_{L}(x,Q^2) = F_{2}(x,Q^2) - 2xF_{1}(x,Q^2)\, . \end{eqnarray} The structure function $F_{3}$ describes parity-violating effects that arise from vector and axial-vector interference and vanishes for pure electromagnetic interactions. We are interested in the Mellin moments of the structure functions, defined as \begin{eqnarray} \label{eq:mellindefF2L} \displaystyle F_{i}(n,Q^2) &=& \int\limits_0^1 dx\, x^{n-2} F_{i}(x,Q^2)\, ,\quad i = 2,L\, ; \\ \label{eq:mellindefF3} F_{3}(n,Q^2) &=& \int\limits_0^1 dx\, x^{n-1} F_{3}(x,Q^2)\, .
\end{eqnarray} The optical theorem relates the hadronic tensor in Eq.~(\ref{eq:htensor}) to the imaginary part of the forward scattering amplitude of boson-nucleon scattering, $T_{\mu\nu}$, \begin{eqnarray} \label{eq:opticaltheorem} W_{\mu\nu}(p,q) &=& \frac{1}{2\pi}\, {\rm{Im}}\, T_{\mu\nu}(p,q)\, . \end{eqnarray} The forward Compton amplitude $T_{\mu\nu}$ involves a time-ordered product of two local currents, to which standard perturbation theory applies, \begin{eqnarray} \label{eq:forwardcompton} T_{\mu\nu}(p,q) &=& {\rm{i}} \int d^4z\, {\rm{e}}^{{\rm{i}}q \cdot z} \langle {{\rm nucl},p} \vert\, T \left( J^{\dagger}_{\mu}(z)J_{\nu}(0) \right) \vert {{\rm nucl},p}\rangle\, . \end{eqnarray} In the Bjorken limit, $Q^2 \rightarrow \infty$, $x$ fixed, the integral in Eq.~(\ref{eq:forwardcompton}) is dominated by the integration region near the light-cone $z^2 \sim 0$. In this region the phase in the exponent in Eq.~(\ref{eq:forwardcompton}) becomes stationary for the external momentum $q$ being deep in the Euclidean region. Thus, we can use the OPE for a formal expansion of the current product in Eq.~(\ref{eq:forwardcompton}) around $z^2 \sim 0$ into a series of local composite operators of leading twist (see e.g. Ref.~\cite{Muta:1998vi} for details).
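Before proceeding, the moment definitions~(\ref{eq:mellindefF2L}), (\ref{eq:mellindefF3}) can be illustrated numerically. The sketch below (an illustration only, not part of the calculation in this paper) uses a hypothetical toy input $F(x)=x^{1/2}(1-x)^3$, for which the Mellin moments of Eq.~(\ref{eq:mellindefF2L}) reduce to Euler Beta functions:

```python
import math

def mellin_moment(f, n, num=100000):
    """Midpoint-rule evaluation of F(n) = int_0^1 x^(n-2) f(x) dx."""
    h = 1.0 / num
    return sum(f((k + 0.5) * h) * ((k + 0.5) * h) ** (n - 2)
               for k in range(num)) * h

def beta(a, b):
    # Euler Beta function B(a,b) = Gamma(a) Gamma(b) / Gamma(a+b)
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# toy structure function (an arbitrary assumption for illustration)
f = lambda x: math.sqrt(x) * (1.0 - x) ** 3

for n in (2, 4, 6):
    numeric = mellin_moment(f, n)
    exact = beta(n - 0.5, 4.0)   # int_0^1 x^(n-3/2) (1-x)^3 dx
    assert abs(numeric - exact) < 1e-6
```

Any smooth test function with integrable endpoint behaviour works equally well here; the exact QCD inputs are of course the structure functions themselves.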
In terms of local operators for a time-ordered product of the two electromagnetic or weak hadronic currents the OPE for Eq.~(\ref{eq:forwardcompton}) can be written in the following form \begin{eqnarray} \label{eq:OPE} {\lefteqn{ {\rm{i}} \int d^4z\, {\rm{e}}^{{\rm{i}}q \cdot z}\, T\left( J^{\dagger}_{\mu}(z)J_{\nu}(0)\right) \,=\, 2 \sum_{n,j} \biggl(\frac{2}{Q^2}\biggr)^n \biggl[ \biggl(g_{\mu\nu}-\frac{q_{\mu}q_{\nu}}{q^2}\biggr) q_{\mu_1}q_{\mu_2}\, C_{L,j}\biggl(n, \frac{Q^2}{\mu^2},\alpha_{\rm s}\biggr) }} \\ & & -\biggl(g_{\mu\mu_1}g_{\nu\mu_2}q^2 -g_{\mu\mu_1}q_{\nu}q_{\mu_2} -g_{\nu\mu_2}q_{\mu}q_{\mu_1} +g_{\mu\nu}q_{\mu_1}q_{\mu_2} \biggr) C_{2,j}\biggl(n,\frac{Q^2}{\mu^2},\alpha_{\rm s}\biggr) \nonumber\\ & & + {\rm{i}}\, \varepsilon_{\mu\nu\mu_1\beta}\, g^{\beta\gamma} q_{\gamma}q_{\mu_2}\, C_{3,j}\biggl(n, \frac{Q^2}{\mu^2},\alpha_{\rm s}\biggr) \biggr] q_{\mu_3}...q_{\mu_n}\, O^{j,\{\mu_1,...,\mu_n\}}(\mu^2) + {\rm{higher\,\, twists,}} \nonumber \end{eqnarray} where $j=\alpha,{\rm{q}},{\rm{g}}$ and all quantities are assumed to be renormalized, $\mu$ being the renormalization scale. Higher twist contributions are omitted in Eq.~(\ref{eq:OPE}) as they are less singular near the light-cone $z^2 \sim 0$ and suppressed by powers of $1/Q^2$. Therefore, the sum over $n$ in Eq.~(\ref{eq:OPE}) extends to infinity and runs only over the standard set of the spin-$n$ twist-2 irreducible symmetrical and traceless operators.
In the general case three kinds of operators contribute (these correspond to the index $j$ in Eq.~(\ref{eq:OPE})): the flavor non-singlet quark operators $O^\alpha$, the flavor singlet quark operator $O^{\rm{q}}$ and the flavor singlet gluon operator $O^{\rm{g}}$. These are defined by \begin{eqnarray} \label{eq:defoperatorns} O^{\alpha,\{\mu_1,\cdots ,\mu_n\}} & = & \overline{\psi}\lambda^{\alpha} \gamma^{\{\mu_1}D^{\mu_2}\cdots D^{\mu_n\}}\psi,~~\alpha=1,2,...,(n_f^2-1)\, ,\\ \label{eq:defoperatorquark} O^{{\rm{q}},\{\mu_1,\cdots ,\mu_n\}} & = & \overline{\psi} \gamma^{\{\mu_1}D^{\mu_2}\cdots D^{\mu_n\}}\psi, \\ \label{eq:defoperatorgluon} O^{{\rm{g}},\{\mu_1,\cdots ,\mu_n\}} & = & F^{\nu\{\mu_1} D^{\mu_2}\cdots D^{\mu_{n-1}} F^{\mu_n\}\nu}\, . \end{eqnarray} Here, $\psi$ denotes the quark field and $F^{\mu\nu}$ the gluon field strength tensor. The generators of the flavor group $SU(n_f)$ are denoted by $\lambda^{\alpha}$, and the covariant derivative by $D^\mu$. It is understood that the symmetrical and traceless part is taken with respect to the indices in curly brackets. The spin-averaged operator matrix elements (OMEs) in Eqs.~(\ref{eq:defoperatorns})--(\ref{eq:defoperatorgluon}) sandwiched between some hadronic state are given by \begin{eqnarray} \label{eq:OME} \langle {{\rm nucl},p} \vert O^{j, \{\mu_1,...,\mu_n\}}\vert {{\rm nucl},p} \rangle =p^{\{\mu_1}...p^{\mu_n\}}A_{{\rm{nucl}}}^j\left(n, {\mu^2}\right)\, , \end{eqnarray} where hadron mass effects have been neglected. The OMEs themselves as given in Eq.~(\ref{eq:OME}) are not calculable in perturbative QCD, but they can be related to the quark and anti-quark distributions of a given flavor and to the gluon distribution in the hadron.
The scale evolution of the OMEs, governed by anomalous dimensions, is accessible to perturbative predictions, as are the coefficient functions $C_{i,j}$ multiplying the OMEs according to Eq.~(\ref{eq:OPE}). Both the anomalous dimensions and the coefficient functions are calculable order by order in perturbative QCD in an expansion in the strong coupling constant $\alpha_{\rm s}$. In order to do so, we replace the nucleon state $\vert {\rm nucl,}p \rangle$ in Eqs.~(\ref{eq:forwardcompton}), (\ref{eq:OPE}) by partonic states. In complete analogy to Eq.~(\ref{eq:forwardcompton}) we define the forward Compton amplitude $t_{\mu\nu}$ at parton level and the corresponding partonic OMEs, \begin{eqnarray} \label{eq:OMEpartons} \langle {{\rm parton},p} \vert O^{j, \{\mu_1,...,\mu_n\}}\vert {{\rm parton},p} \rangle =p^{\{\mu_1}...p^{\mu_n\}}A_{{\rm{parton}}}^j\left(n, {\mu^2}\right)\, . \end{eqnarray} As the OPE in Eq.~(\ref{eq:OPE}) represents an operator relation, we derive the following parton level expression \begin{eqnarray} \label{eq:OPEpartons} t_{\mu\nu} &\equiv& {\rm{i}} \int d^4z\, {\rm{e}}^{{\rm{i}}q \cdot z} \langle {{\rm parton},p} \vert\, T \left( J^{\dagger}_{\mu}(z)J_{\nu}(0) \right) \vert {{\rm parton},p}\rangle\, \\ &=& 2 \sum_{n,j} {\omega}^n \biggl[ e_{\mu\nu}\, C_{L,j}\biggl(n,\frac{Q^2}{\mu^2},\alpha_{\rm s}\biggr) + d_{\mu\nu}\, C_{2,j}\biggl(n,\frac{Q^2}{\mu^2},\alpha_{\rm s}\biggr) \nonumber\\ & &\mbox{} + {\rm{i}}\, \varepsilon_{\mu\nu\alpha\beta} \frac{p^\alpha q^\beta}{p\!\cdot\! q}\, C_{3,j}\biggl(n,\frac{Q^2}{\mu^2},\alpha_{\rm s}\biggr) \biggr] A_{{\rm{parton}}}^{j}\left(n,{\mu^2}\right) + {\rm{higher\,\, twists}} \, , \nonumber \end{eqnarray} which is an expansion in terms of the variable $\omega = (2 p\cdot q)/Q^2 = 1/x$ for unphysical $\omega \rightarrow 0$ ($x \rightarrow \infty$). The coefficient functions $C_{i,j}$ with $i=2,3,L$ are of course the same as those appearing in Eq.~(\ref{eq:OPE}), and the scale evolution of the OMEs in Eq.~(\ref{eq:OMEpartons}) is controllable in perturbation theory. Let us in the following recall a few aspects of the flavor (isospin) symmetry of the DIS structure functions which are relevant to neutrino-nucleon scattering. The composite operators in Eqs.~(\ref{eq:defoperatorns})--(\ref{eq:defoperatorgluon}) are either singlet or non-singlet operators, referring to the representation of the $SU(n_{f})$ flavor group. In particular, the non-singlet operator $O^{\alpha,\{\mu_1,\cdots ,\mu_n\}}$ in Eq.~(\ref{eq:defoperatorns}) contains the generators $\lambda^{\alpha}$ of the flavor group $SU(n_{f})$. It is well known that for the separation of the singlet and non-singlet contributions to structure functions, Wilson coefficients, etc., one considers the sum and the difference of matrix elements for a proton $P$ and a neutron $N$, e.g. \begin{equation} \label{eq:isospin} F_{i}^{eP\pm eN}\equiv F_{i}^{eP} \pm F_{i}^{eN} \, , \quad\quad F_{i}^{\nu P\pm \nu N}\equiv F_{i}^{ \nu P} \pm F_{i}^{ \nu N} \, , \quad\quad\quad\quad i = 2,3,L \, . \end{equation} The combination $P+N$ singles out contributions to the singlet (isoscalar) operators and $P-N$ the corresponding ones to the non-singlet (isovector) operators, as can be seen readily as follows. To that end, let us specialize for simplicity to the case of two flavors ($n_{f}=2$) only, i.e. to a $SU(2)$-isospin symmetry, the generalization to an arbitrary number $n_f$ of flavors being straightforward.
Then, in the $SU(2)$ example, the twist-two term $\Theta$ of the OPE consists of an isoscalar ($\theta_{0}$) and an isovector ($\theta_{\alpha}$) part, i.e., \begin{equation} \label{eq:twist-two-piece} \Theta= \theta_{0}\, {\bf 1}+\theta_{\alpha}\lambda_{\alpha} , \quad \alpha=1,2,3 \, , \end{equation} where ${\bf 1}$ is the unit matrix and $\lambda_{\alpha} = \sigma_{\alpha}/2$, with $\sigma_{\alpha}$ the usual Pauli matrices in the fundamental representation. Sandwiching Eq.~(\ref{eq:twist-two-piece}) between the proton $|P \rangle$ and neutron $|N\rangle$ states, one gets directly \begin{eqnarray} \label{eq:PNexample} \langle P| \Theta |P \rangle + \langle N|\Theta|N\rangle &=& \theta_{0}\langle P|P\rangle + \theta_{\alpha}\langle P|\lambda_{\alpha}|P\rangle + \theta_{0}\langle N|N\rangle + \theta_{\alpha}\langle N|\lambda_{\alpha}|N\rangle \\ &=& \theta_0 + \frac{1}{2} \theta_{3} + \theta_{0}-\frac{1}{2} \theta_{3} = 2\, \theta_{0} \, , \nonumber \\ \langle P|\Theta|P\rangle - \langle N|\Theta|N\rangle &=& \theta_0 +\frac{1}{2} \theta_{3} - (\theta_{0}-\frac{1}{2} \theta_{3}) = \theta_{3} \, . \end{eqnarray} Here one uses the fact that proton and neutron are eigenvectors of the $\lambda_{3}$ isospin operator with eigenvalues $+1/2$ and $-1/2$, respectively. Hence combinations of the OMEs such as $A_{ P \pm N }^j(n)=A_{P}^j(n) \pm A_{N}^j(n)$ correspond to the isoscalar part (singlet contribution) and the isovector part (non-singlet contribution), respectively. As an upshot one can conclude that the OPE for the $P-N$ combination receives contributions from the non-singlet quark operator $O^\alpha$ of Eq.~(\ref{eq:defoperatorns}), i.e. $j=\alpha$, on the r.h.s. of Eq.~(\ref{eq:OPE}). On the other hand, for the $P+N$ combination both the singlet quark operator $O^{\rm q}$ of Eq.~(\ref{eq:defoperatorquark}) and the singlet gluon operator $O^{\rm g}$ of Eq.~(\ref{eq:defoperatorgluon}) contribute in the OPE, i.e. the sum on the r.h.s. of Eq.~(\ref{eq:OPE}) runs over $j={\rm q}, {\rm g}$.
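The projection above can be made explicit with $2\times 2$ matrices. The sketch below is purely illustrative: the values of $\theta_0$ and $\theta_3$ are arbitrary, and only the $\lambda_3$ term is kept, since $\lambda_1$ and $\lambda_2$ have vanishing diagonal matrix elements in the $(P,N)$ basis:

```python
import numpy as np

# Pauli matrix sigma_3 and the isospin generator lambda_3 = sigma_3 / 2
sigma3 = np.array([[1.0, 0.0], [0.0, -1.0]])
lam3 = sigma3 / 2.0

theta0, theta3 = 0.7, 0.4          # arbitrary isoscalar/isovector coefficients
Theta = theta0 * np.eye(2) + theta3 * lam3

P = np.array([1.0, 0.0])           # proton:  lambda_3 eigenvalue +1/2
N = np.array([0.0, 1.0])           # neutron: lambda_3 eigenvalue -1/2

exp_P = P @ Theta @ P              # <P|Theta|P>
exp_N = N @ Theta @ N              # <N|Theta|N>

# P+N projects onto the isoscalar part, P-N onto the isovector part
assert abs((exp_P + exp_N) - 2.0 * theta0) < 1e-12
assert abs((exp_P - exp_N) - theta3) < 1e-12
```

The same bookkeeping extends directly to $SU(n_f)$ with the diagonal generators of the larger flavor group.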
Since in the present article we are considering charged current DIS in the combination $\nu P - \bar \nu P$, we have, due to isospin symmetry, \begin{eqnarray} \label{eq:FnuPFnuN} \left. \begin{array}{c} F_{i}^{\bar \nu P} = F_{i}^{ \nu N} \\[1ex] F_{i}^{\bar \nu N} = F_{i}^{\nu P} \end{array} \right\} &\Rightarrow& F_{i}^{ \nu P} - F_{i}^{\bar \nu P} = F_{i}^{ \nu P} -F_{i}^{\nu N} = F_{i}^{\bar \nu N} - F_{i}^{\nu N} \, . \end{eqnarray} Thus, we are entirely restricted to non-singlet quark operators for the structure functions $F_2^{ \nu P - \bar \nu P}$, $F_3^{ \nu P - \bar \nu P}$ and $F_L^{ \nu P - \bar \nu P}$. \begin{figure}[ht] \begin{center} \includegraphics[width=5.0cm]{./notcrossed.eps} \hspace{2.0cm} \includegraphics[width=5.0cm]{./crossed.eps} \caption[]{\label{fig:crossed-notcrossed} Leading order diagrams contributing to the forward Compton amplitude in deep-inelastic boson$(V)$-quark scattering.} \end{center} \end{figure} Next, we would like to address the symmetry properties of the partonic forward Compton amplitude~(\ref{eq:OPEpartons}) $t_{\mu\nu}$ and explain how these translate into selection rules for either even or odd Mellin $n$-moments of the different DIS structure functions. To that end, let us inspect the Feynman diagrams for $t_{\mu\nu}$ at leading order with initial quarks in Fig.~\ref{fig:crossed-notcrossed}. There the right diagram is simply the crossed diagram of the left one. In Fig.~\ref{fig:crossed-notcrossed} we denote the gauge bosons by $V_\mu$ and $V_\nu$. For the latter there are various choices, as they can either be photons $\gamma$ or weak gauge bosons $Z^{0}$ and $W^\pm$.
The matrix element of the left diagram is proportional to \begin{equation} \label{eq:tmunuvert} t_{\mu\nu}\propto{}\Gamma_\nu\, \sla{D}(p+q)\, \Gamma_\mu \, , \end{equation} where $\Gamma_\mu$ and $\Gamma_{\nu}$ denote the vertices of the vector boson-fermion coupling, while $\sla{D}(p+q)$ is the quark propagator of momentum $p+q$, $\sla{D}(p+q)=-1/(\sla{p}+\sla{q})$. For the right diagram one has \begin{equation} \label{eq:tmunuvertc} t_{\mu\nu}\propto{}\Gamma_\mu\, \sla{D}(p-q)\, \Gamma_\nu \, . \end{equation} Under the simultaneous transformation $\mu\leftrightarrow\nu$ and $q\rightarrow -q$ the matrix element of the crossed diagram is equal to the uncrossed one, provided both vertices $\Gamma_{\mu}$ and $\Gamma_{\nu}$ have the same structure. Let us detail this situation for the case of neutral current DIS first. The external bosons $V_\mu$ and $V_\nu$ in Fig.~\ref{fig:crossed-notcrossed}, being photons $\gamma$ or $Z^{0}$-bosons, couple to the vertices $\Gamma_{\mu}$ and $\Gamma_{\nu}$. The latter are either proportional to $e_{q}\gamma_{\mu}$ and $e_{q}\gamma_{\nu}$ with the fractional quark charge $e_q$ ($\gamma$-boson) or to $(v_{f}\gamma_{\mu}+ a_{f}\gamma_{\mu}\gamma_{5})$ and $(v_{f}\gamma_{\nu}+ a_{f}\gamma_{\nu}\gamma_{5})$ with the (flavor-dependent) vector and axial-vector current coupling constants $v_{f}$ and $a_{f}$ ($Z^{0}$-boson). In the case of $\gamma-Z^{0}$-interference one has to consider both $\gamma$ and $Z^{0}$ in the initial state in Fig.~\ref{fig:crossed-notcrossed} with a different gauge boson in the final state. In the end the effective number of diagrams for the interference contributions will be doubled. For all neutral current DIS cases the quark flavor, of course, remains conserved.
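The stated symmetry of the crossed plus uncrossed sum can be verified numerically for a pure vector coupling $\gamma_\mu$ in the massless, spin-averaged case. The sketch below is an illustrative check only, not the setup of the actual computation: the momenta are arbitrary test values and all couplings are set to one.

```python
import numpy as np

# Dirac matrices in the Dirac representation
s1 = np.array([[0, 1], [1, 0]], complex)
s2 = np.array([[0, -1j], [1j, 0]], complex)
s3 = np.array([[1, 0], [0, -1]], complex)
z2 = np.zeros((2, 2), complex)
g0 = np.block([[np.eye(2), z2], [z2, -np.eye(2)]]).astype(complex)
gam = [g0] + [np.block([[z2, s], [-s, z2]]) for s in (s1, s2, s3)]
metric = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(v):
    # v-slash = v_mu gamma^mu (index lowered with the metric)
    vd = metric @ v
    return sum(vd[m] * gam[m] for m in range(4))

def t_amp(p, q):
    """Spin-averaged crossed + uncrossed LO amplitude, vector coupling only."""
    t = np.zeros((4, 4), complex)
    for m in range(4):
        for n in range(4):
            un = np.trace(slash(p) @ gam[n] @ slash(p + q) @ gam[m]) \
                 / ((p + q) @ metric @ (p + q))
            cr = np.trace(slash(p) @ gam[m] @ slash(p - q) @ gam[n]) \
                 / ((p - q) @ metric @ (p - q))
            t[m, n] = un + cr
    return t

p = np.array([1.0, 0.0, 0.0, 1.0])     # light-like quark momentum (test value)
q = np.array([0.3, 0.2, 0.0, -1.1])    # space-like momentum transfer (test value)

# symmetry under mu <-> nu together with q -> -q
assert np.allclose(t_amp(p, q), t_amp(p, -q).T)
```

The check works for any non-exceptional momenta; with an axial-vector admixture the same invariance holds for the sum of diagrams, which is the content of the argument above.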
At this point, we can relate the action of simultaneously transforming $\mu\leftrightarrow\nu$ and $q\rightarrow -q$ in all Feynman diagrams contributing to $t_{\mu\nu}$ to the parameters of the OPE in Eq.~(\ref{eq:OPEpartons}), namely the coefficient functions $C_2$, $C_3$ and $C_L$. It is clear that the full matrix element for $t_{\mu\nu}$ (l.h.s. of Eq.~(\ref{eq:OPEpartons})) remains unchanged, since the transformation $\mu\leftrightarrow\nu$ and $q\rightarrow -q$ maps the crossed and uncrossed diagrams onto each other, even in the case of $\gamma-Z^{0}$-interference due to the doubled number of diagrams. On the r.h.s. of the OPE in Eq.~(\ref{eq:OPEpartons}) the tensors $e_{\mu\nu}$ and $d_{\mu\nu}$ remain invariant under $\mu\leftrightarrow\nu$, while the antisymmetric tensor $\varepsilon_{\mu\nu\alpha\beta}$ picks up a factor $(-1)$. The coefficients $C_2$, $C_3$ and $C_L$ as well as the OMEs $A_{\rm parton}$, being Lorentz scalars, are at most functions of $Q^{2}=-q^{2}$. Therefore they are invariant as well. Finally $\omega$ will be transformed to $-\omega$ (recall its definition $\omega = (2 p\cdot q)/Q^2$). Thus, in the series expansion in spin $n$ in Eq.~(\ref{eq:OPEpartons}) the coefficient functions $C_2$ and $C_L$ are weighted by a factor $(-1)^{n}$, whereas $C_3$ is multiplied by $(-1)^{n+1}$. In other words, the sum in Eq.~(\ref{eq:OPEpartons}) runs for $C_2$ and $C_L$ only over even Mellin moments $n$ and only over odd moments for $C_3$. The coefficients for other $n$ have to vanish in Eq.~(\ref{eq:OPEpartons}). The same choice of $n$ holds for the Mellin moments of the structure functions $F_2$ and $F_L$ (even $n$), Eq.~(\ref{eq:mellindefF2L}), and $F_3$ (odd $n$), Eq.~(\ref{eq:mellindefF3}), of neutral current DIS because of the relations Eqs.~(\ref{eq:F2mellinMR})--(\ref{eq:F3mellinMR}) which will be discussed later.
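At bottom, the selection mechanism is the elementary fact that the symmetric (antisymmetric) part of an amplitude in $\omega$ retains only the even (odd) Taylor coefficients; a minimal sketch with arbitrary model coefficients:

```python
# toy expansion t(w) = sum_n c_n w^n with arbitrary coefficients
coeffs = [0.0, 1.3, -0.7, 2.1, 0.4, -1.9]

def t(w):
    return sum(c * w ** n for n, c in enumerate(coeffs))

def even_part(w):
    return 0.5 * (t(w) + t(-w))   # invariant piece: keeps only even n

def odd_part(w):
    return 0.5 * (t(w) - t(-w))   # sign-flipping piece: keeps only odd n

w = 0.37
even_expected = sum(c * w ** n for n, c in enumerate(coeffs) if n % 2 == 0)
odd_expected = sum(c * w ** n for n, c in enumerate(coeffs) if n % 2 == 1)
assert abs(even_part(w) - even_expected) < 1e-12
assert abs(odd_part(w) - odd_expected) < 1e-12
```

In the physical case the parity in $\omega$ is fixed diagram by diagram, so the coefficients of the wrong parity must vanish identically, which is the selection rule stated above.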
\begin{figure}[ht] \begin{center} \includegraphics[width=4.0cm]{./fl2a.eps} \includegraphics[width=4.0cm]{./fl02a.eps} \includegraphics[width=4.0cm]{./fl11a.eps} \includegraphics[width=4.0cm]{./fl011a.eps} \caption[]{\label{fig:various_flavors} Representative three-loop diagrams for the various flavor classes in charged current neutrino-proton DIS (see text).} \end{center} \end{figure} So far our discussion has been based on leading order Feynman diagrams, but the previous arguments carry over to higher orders as well. Up to three-loop accuracy all Feynman diagrams fall into one of the so-called flavor classes $fl_2$, $fl_{02}$, $fl_{11}$ or $fl_{011}$ displayed in Fig.~\ref{fig:various_flavors}. The class $fl_2$ corresponds to all diagrams with both gauge bosons $V_{\mu}$ and $V_{\nu}$ attached to the open fermion line of the initial (final) state quark. Class $fl_{02}$ collects the diagrams with both gauge bosons attached to an internal closed fermion loop, while $fl_{11}$ contains the diagrams with one boson attached to the closed loop and the other to the open line of the external quark. Finally the class $fl_{011}$ denotes diagrams with both bosons attached to different closed quark loops. Depending on the process under consideration some flavor classes vanish identically. It is easy to see that the neutral current DIS assignments for $C_2$ and $C_L$ (even Mellin moments) and $C_3$ (odd Mellin moments) from above persist, and the same holds true for the structure functions. \begin{figure}[ht] \begin{center} \includegraphics[width=5.0cm]{./dWplus.eps}\hspace{2cm} \includegraphics[width=5.0cm]{./uWplus.eps} \caption[]{\label{fig:u-d-game} Leading order diagrams contributing to the forward Compton amplitude of charged current $\nu P$ and $\nu N$ scattering. The right diagram is the crossed diagram of the left one, but with an incoming quark of different flavor.} \end{center} \end{figure} Let us next turn to the case of charged current DIS.
We have the structure functions $F_2$, $F_3$ and $F_L$ for both an isoscalar and an isovector target, i.e. ${\nu P \pm \nu N}$, which we have to distinguish (see also Eq.~(\ref{eq:FnuPFnuN})). On the partonic level this implies that we sum the contributions of $u$ and $d$ quarks in the singlet case and take their difference in the non-singlet case. For simplicity, we restrict ourselves here again to $SU(2)$-isospin symmetry with flavors $u$ and $d$ only. The generalization to $s$, $c$ and more flavors should be clear. In charged current $\nu P$ or $\nu N$ DIS we are considering initial and final gauge bosons $V_{\mu}=W^{+}_{\mu}$ and $V_{\nu}=W^{+}_{\nu}$ (cf. Fig.~\ref{fig:crossed-notcrossed}), the coupling of $d$-quarks to $W^{-}$ being excluded by electroweak theory. Say we take $d$ as the incoming and outgoing quark in the left diagram of Fig.~\ref{fig:crossed-notcrossed}. Then, the scattering of $W^{+}$ with the incoming $d$ quark yields a $u$ quark (or $c$ and $t$ if more flavors are considered). On the other hand, the crossed diagram on the right in Fig.~\ref{fig:crossed-notcrossed} simply does not exist for an incoming $d$ quark, because it is not allowed by the electroweak Standard Model couplings. Rather, the incoming quark should be a $u$ quark. In Fig.~\ref{fig:u-d-game} we display explicitly the appropriate pair of Feynman diagrams at leading order. Thus, for the partonic forward Compton amplitude~(\ref{eq:OPEpartons}) $t_{\mu\nu}$ in the combination $t_{\mu\nu}^{u+d}\equiv t_{\mu\nu}^{u}+t_{\mu\nu}^{d}$ we effectively sum the contributions of both crossed and uncrossed diagrams in Fig.~\ref{fig:u-d-game}, whereas for $t_{\mu\nu}^{u-d}\equiv t_{\mu\nu}^{u}-t_{\mu\nu}^{d}$ we subtract them.
Then, we arrive at the following properties for simultaneous transformations $\mu\leftrightarrow\nu$ and $q\rightarrow-q$, \begin{eqnarray} \label{eq:tmunu-isospin-upd} t_{\mu\nu}^{u+d}& \rightarrow & t_{\mu\nu}^{u+d}\, , \\ \label{eq:tmunu-isospin-umd} t_{\mu\nu}^{u-d}& \rightarrow & (-1)\, t_{\mu\nu}^{u-d}. \end{eqnarray} Eq.~(\ref{eq:tmunu-isospin-upd}) implies that the forward Compton amplitude $t_{\mu\nu}^{u+d}$ has the same symmetry property as in the case of neutral current DIS. For the corresponding coefficient functions and their dependence on the Mellin variable $n$ we may repeat exactly the same line of arguments as before, leading to the conclusion that $C_2$, $C_L$ ($C_3$) are governed by even (odd) $n$ only. In the other case, Eq.~(\ref{eq:tmunu-isospin-umd}) shows that $t_{\mu\nu}^{u-d}$ is antisymmetric under the transformation $\mu\leftrightarrow\nu$ simultaneously with $q\rightarrow-q$, which gives an additional factor $(-1)$ for the l.h.s. of Eq.~(\ref{eq:OPEpartons}). This alters the Mellin-$n$ dependence of the coefficient functions so that we have precisely the opposite assignments, $C_2$, $C_L$ ($C_3$) being entirely odd (even) functions of $n$ only. Before moving on, let us briefly comment on the higher order diagrams for charged current DIS as illustrated in Fig.~\ref{fig:various_flavors}. For the flavor class $fl_2$ our tree level arguments from above may be repeated literally. In the flavor class $fl_{02}$, on the other hand, crossed diagrams with the same external quark flavor do contribute. However, this does not destroy the symmetry properties of the singlet and non-singlet combinations. The complete $t_{\mu\nu}^{u+d}$ simply sums the crossed and uncrossed diagrams and is therefore still symmetric under $\mu \leftrightarrow \nu$ simultaneously with $q\rightarrow-q$; thus Eq.~(\ref{eq:tmunu-isospin-upd}) holds for the $u+d$ combination. In contrast, the contributions from the $u-d$ combination to the flavor class $fl_{02}$ vanish.
The flavor classes $fl_{11}$ and $fl_{011}$ are excluded in charged current DIS, because the flavor changes and, as a consequence, the coupling of one single $W^{+}$-boson to a quark loop is not possible. Finally, it remains to relate the coefficient functions $C_2$, $C_3$ and $C_L$ and the OMEs of Eq.~(\ref{eq:OME}) to the Mellin moments of the structure functions. To that end, it is convenient to project Eq.~(\ref{eq:htensor}) and the analogue of Eq.~(\ref{eq:OPEpartons}) for the hadron forward Compton amplitude $T_{\mu\nu}$ onto the respective Lorentz structure using the projectors \begin{eqnarray} \label{eq:projL} P_{L}^{\mu\nu} &\equiv& - \frac{q^2}{(p\!\cdot\! q)^2}\, p^\mu p^\nu \, , \\ \label{eq:proj2} P_{2}^{\mu\nu} &\equiv& - \left( \frac{3-2\epsilon}{2-2\epsilon} \hspace{1mm} \frac{q^2}{(p\!\cdot\! q)^2}\, p^\mu p^\nu + \frac{1}{2-2\epsilon} \hspace{1mm} g^{\mu\nu} \right) \, , \\ \label{eq:proj3} P_{3}^{\mu\nu} &\equiv& - {\rm{i}}\, \frac{1}{(1-2\epsilon)(2-2\epsilon)} \hspace{1mm} \epsilon^{\mu\nu\alpha\beta}\, \frac{p_\alpha q_\beta}{p\!\cdot\! q} \, , \end{eqnarray} where all expressions are exact in $D=4-2\epsilon$ dimensions. With the help of Eqs.~(\ref{eq:projL})--(\ref{eq:proj3}) one arrives at relations between Mellin moments of DIS structure functions~(\ref{eq:mellindefF2L}), (\ref{eq:mellindefF3}) and the parameters of the OPE. On a technical level, this implies a Cauchy integration of the analogue of Eq.~(\ref{eq:OPEpartons}) for $T_{\mu\nu}$ in the complex $\omega$-plane and we recall the necessary details in Appendix~\ref{sec:appB}.
\begin{eqnarray} \label{eq:F2mellinMR} \int\limits_0^1 dx\, x^{n-2} F_{i}(x,Q^2) &=& \sum\limits_{j} C_{i,j}\left(n,\frac{Q^2}{\m^2},\a_s\right) A_{\rm nucl}^{j}\left(n,{\m^2}\right)\, , \quad\quad\quad i=2,L\, , \\ \label{eq:F3mellinMR} \int\limits_0^1 dx\, x^{n-1} F_{3}(x,Q^2) &=& \sum\limits_{j} C_{3,j}\left(n, \frac{Q^2}{\m^2},\a_s\right) A_{\rm nucl}^j\left(n,{\m^2}\right) \, . \end{eqnarray} To summarize, Eqs.~(\ref{eq:F2mellinMR}) and (\ref{eq:F3mellinMR}) provide the basis to obtain Mellin moments of DIS structure functions in our approach relying on the OPE and the optical theorem. Furthermore, from the careful examination of the symmetry properties of the forward Compton amplitude $T_{\mu\nu}$ and of the underlying Feynman diagrams, we have deduced the corresponding rules for the Mellin variable $n$. In the case of neutral current and charged current ${\nu P+\nu N}$ DIS the structure functions $F_2$ and $F_L$ involve only even Mellin-$n$, while $F_3$ involves only odd $n$. For the case of interest in this paper, charged current ${\nu P-\nu N}$ DIS, we encounter only odd functions in $n$ for $F_2$ and $F_L$, and only even functions in $n$ for $F_3$. \setcounter{equation}{0} \section{Renormalization} \label{sec:renormalization} In this Section we briefly recall the necessary steps in renormalizing the operators in the OPE and, following the discussion above, we restrict ourselves here entirely to the non-singlet case. Starting, say, from the partonic expression~(\ref{eq:OPEpartons}), i.e. partonic matrix elements of $t_{\mu\nu}$, we express the renormalized OMEs $A^{j}_{\rm parton}$ (see Eq.~(\ref{eq:OMEpartons})) in terms of matrix elements of bare composite operators, \begin{equation} \label{eq:Orenns} O^{\alpha,{\rm \,ren }} \: = \: Z_{\rm ns}\,O^{\alpha, \,{\rm bare}} \, .
\end{equation} Here and later we suppress other indices and the explicit dependence on $n$ for the operators~(\ref{eq:defoperatorns}). The scale dependence of the operator $O^{\alpha}$ is governed by the anomalous dimension $\gamma_{\rm ns}$, \begin{equation} \label{eq:gamma_ns} \frac{d}{d \ln \mu^2 }\, O^{\alpha, {\rm\, ren} } \:\equiv \: - \,\gamma_{\rm ns}\, O^{\alpha , {\rm\, ren}} \, , \end{equation} and is connected to the renormalization constant $Z_{\rm ns}$ in Eq.~(\ref{eq:Orenns}) by \begin{equation} \label{eq:gamZns} \gamma_{\rm ns} \: = \: -\,\left( \frac{d }{d\ln\mu^2 }\, Z_{\rm ns} \right) Z^{-1}_{\rm ns} \, . \end{equation} In order to arrive at explicit expressions for $Z_{\rm ns}$ or Eq.~(\ref{eq:gamZns}), one has to make use of a regularization procedure and a renormalization scheme. We choose dimensional regularization~\cite{'tHooft:1972fi,Bollini:1972ui,Ashmore:1972uj,Cicuta:1972jf} in $D = 4 - 2\epsilon$ dimensions and the modified minimal subtraction \cite{'tHooft:1973mm,Bardeen:1978yd} scheme, $\overline{\rm{MS}}$. The running coupling evolves according to \begin{equation} \label{eq:arun} \frac{d}{d \ln \mu^2}\: \frac{\alpha_{\rm s}}{4\pi} \:\: \equiv \:\: \frac{d\,a_{\rm s}}{d \ln \mu^2} \:\: = \:\: - \epsilon\, a_{\rm s} - \beta_0\, a_{\rm s}^2 - \beta_1\, a_{\rm s}^3 - \beta_2\, a_{\rm s}^4 - \ldots \:\: , \end{equation} and we have introduced the common short hand notation $a_{s}\equiv \alpha_{s}/(4 \pi)$. The usual four-dimensional expansion coefficients $\beta_{\rm n}$ of the beta function in QCD read $\,\beta_0 = 11 - 2/3\,{n^{}_{\! f}}\,$ etc., with ${n^{}_{\! f}}$ representing the number of active quark flavors.
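As a numerical aside (our illustration, not part of the paper's calculation), the expansion coefficients and the truncated evolution equation~(\ref{eq:arun}) can be coded directly; the two-loop value $\beta_1 = 102 - 38/3\,{n^{}_{\! f}}$ is the standard result in this normalization and is quoted here for completeness:

```python
from fractions import Fraction

def beta0(nf):
    # one-loop QCD beta coefficient: beta_0 = 11 - 2/3 nf
    return Fraction(11) - Fraction(2, 3) * nf

def beta1(nf):
    # two-loop coefficient: beta_1 = 102 - 38/3 nf (standard MS-bar value)
    return Fraction(102) - Fraction(38, 3) * nf

def das_dlnmu2(a_s, nf, eps=0.0):
    # r.h.s. of the evolution equation truncated at two loops:
    # d a_s / d ln mu^2 = -eps a_s - beta_0 a_s^2 - beta_1 a_s^3 - ...
    return -eps * a_s - float(beta0(nf)) * a_s**2 - float(beta1(nf)) * a_s**3

# for nf = 4 one finds beta0 = 25/3 and beta1 = 154/3
```

For ${n^{}_{\! f}}=4$ this reproduces the familiar values $\beta_0 = 25/3$ and $\beta_1 = 154/3$, and the negative sign of the four-dimensional r.h.s. reflects asymptotic freedom.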
The bare and the renormalized coupling, $\a^{\rm{bare}}_s$ and $\a_s$ are related by \begin{eqnarray} \label{eq:alpha-s-renorm} \a^{\rm{bare}}_s &=& Z_{\a_s}\, \a_s^{\rm{}}\, , \end{eqnarray} where we have put the factor $S_\epsilon = \exp( \epsilon\{\ln(4\p) - \g_{\rm{E}}\}) = 1$ in the $\overline{\rm{MS}}$-scheme and the renormalization constant $Z_{\a_s}$ reads \begin{eqnarray} \label{eq:Z-alpha-s} Z_{\a_s} &=& 1 - \frac{\b_0}{\epsilon} {a_s}\, + \left( \frac{\b_0^2}{\epsilon^2} - \frac{\b_1}{2\epsilon} \right) {a_s}^2 + \dots\,\, . \end{eqnarray} In this framework, the renormalization factor $Z_{\rm ns}$ in Eq.~(\ref{eq:Orenns}) is a series of poles in $1/\epsilon$, expressed in terms of $\beta_{\rm n}$ and the coefficients $\gamma^{\,(l)}$ of the anomalous dimensions from an expansion in $a_{\rm s}$, \begin{equation} \label{eq:gam-exp} \gamma(n) \: = \: \sum_{l=0}^{\infty}\, a_{\rm s}^{\,l+1}\, \gamma^{\,(l)}(n) \, . \end{equation} Up to the third order in the coupling constant the expansion of $Z_{\rm ns}$ reads \begin{eqnarray} \label{eq:Zns3} Z_{\rm ns} & = & 1 \: + \: \:a_{\rm s}\, \frac{1}{\epsilon}\,\gamma_{\,\rm ns}^{\,(0)} \: + \: a_{\rm s}^2 \,\left[\, \frac{1}{2\epsilon^2}\, \left\{ \left(\gamma_{\,\rm ns}^{\,(0)} - \beta_0 \right) \gamma_{\,\rm ns}^{\,(0)} \right\} + \frac{1}{2\epsilon}\, \gamma_{\,\rm ns}^{\,(1)} \right] \nonumber \\[1mm] & & \mbox{} + \: a_{\rm s}^3 \,\left[\, \frac{1}{6\epsilon^3}\, \left\{ \left( \gamma_{\,\rm ns}^{\,(0)} - 2 \beta_0 \right) \left( \gamma_{\,\rm ns}^{\,(0)} - \beta_0 \right) \gamma_{\,\rm ns}^{\,(0)} \right\} \right. \nonumber \\[1mm] & & \left. \mbox{} \quad\quad \! + \: \frac{1}{6\epsilon^2}\, \left\{ 3\, \gamma_{\,\rm ns}^{\,(0)} \gamma_{\,\rm ns}^{\,(1)} - 2 \beta_0\, \gamma_{\,\rm ns}^{\,(1)} - 2 \beta_1\, \gamma_{\,\rm ns}^{\,(0)} \right\} \: + \: \frac{1}{3\epsilon}\, \gamma_{\,\rm ns}^{\,(2)} \right] \:\: . 
\end{eqnarray} The anomalous dimensions $\gamma^{\,(l)}$ can thus be read off from the $\epsilon^{-1}$ terms of the renormalization factors at order $a_{\rm s} ^{\,l+1}$, while the higher poles in $1/\epsilon$ can serve as checks for the calculation. The coefficient functions in Eqs.~(\ref{eq:OPE}), (\ref{eq:OPEpartons}), on the other hand, have an expansion in positive powers of $\epsilon$, \begin{equation} \label{eq:cf-exp} C_{i,{\rm ns}} \: = \: \sum_{l=0}^{\infty} \, a_{\rm s}^{\, l} \left( c_{i,{\rm ns}}^{\,(l)} + \epsilon a_{i,{\rm ns}}^{\,(l)} + \epsilon^2 b_{i,{\rm ns}}^{\,(l)} + \ldots \right) \, , \end{equation} where $i =2, 3, L$ and we have again suppressed the dependence on $n$ (and $Q^2/\mu^2$). Here $C_{i,{\rm ns}}$ is our generic notation for the non-singlet contributions obtained from $C_{i,\alpha}$ in Eqs.~(\ref{eq:OPE}), (\ref{eq:OPEpartons}). Due to the presence of $\gamma_5$ in the vertices, the axial-vector coupling in dimensional regularization requires additional renormalizations to restore the axial Ward identities. This is extensively described in the literature and for the associated renormalizations we use the prescription of Refs.~\cite{Larin:1991tj,Larin:1993tq} based on relating vector and axial-vector currents. The necessary constant $Z_A$ for the axial renormalization $Z_5$ and the finite renormalization due to the treatment of $\gamma_5$ in the $\overline{\rm{MS}}$-scheme are known to three loops~\cite{Larin:1991tj,Larin:1993tq}. The actual calculation of the anomalous dimension~(\ref{eq:gamma_ns}) and the coefficient functions $C_{i,{\rm ns}}$ in perturbative QCD proceeds as follows.
Using the Lorentz projectors~(\ref{eq:projL})--(\ref{eq:proj3}) we obtain from the forward partonic Compton amplitude Eq.~(\ref{eq:OPEpartons}) the partonic invariants \begin{equation} \label{eq:def-partoninv} t_{i,{\rm ns} }=P^{\mu\nu}_{i} t_{\mu\nu} \, , \quad\quad\quad i=2,3,L \, , \end{equation} see also Eqs.~(\ref{eq:invarTmunu2L}), (\ref{eq:invarTmunu3}). These invariants can be written in terms of the bare operator matrix elements as \begin{eqnarray} \lefteqn{ \label{eq:TmunuPartonRenNS} t_{ i,{\rm ns} } (x,Q^2,\a_s,\epsilon) \,=} \\ & & 2 \sum_{n} \left( \frac{1}{x}\right)^n C^{{\rm{}}}_{i,{\rm ns}}\left(n,\frac{Q^2}{\m^2},\a_s,\epsilon\right) Z_{\rm{ns}}\left(\a_s,\frac{1}{\epsilon}\right) A^{ {\rm ns} ,{\rm bare } }_{{\rm q}}\left(n,\a_s,\frac{p^2}{\m^2},\epsilon\right) + O(p^2) \, , \nonumber \end{eqnarray} where $i=2,3,L$ and the l.h.s. of Eq.~(\ref{eq:TmunuPartonRenNS}) is renormalized by expressing the bare coupling constant in terms of the renormalized one, see Eq.~(\ref{eq:alpha-s-renorm}). The wave function renormalization for the external quark lines is an overall factor on both sides of Eq.~(\ref{eq:TmunuPartonRenNS}) and drops out. The terms $O(p^2)$ on the r.h.s. of Eq.~(\ref{eq:TmunuPartonRenNS}) indicate higher twist contributions, which we neglect. Starting with the partonic invariant $t_{i,{\rm{ns}}}$ from Eq.~(\ref{eq:TmunuPartonRenNS}), the renormalization constants $Z_{\rm ns}$ and the coefficient functions $C_{i,{\rm ns}}$ are calculated using the method of projection developed in Ref.~\cite{Gorishnii:1983su}, which consists of applying the following projection operator, \begin{eqnarray} \label{eq:projectionoperator} {\cal P}_n[f(p,q)] \equiv \left. \Biggl[ \frac{q^{ \{\m_1}\cdots q^{\m_n \}}}{2 n !} \frac{\partial ^n}{\partial p^{\m_1} \cdots \partial p^{\m_n}} f(p,q) \Biggr] \right|_{p=0} \, , \end{eqnarray} to both sides of Eq.~(\ref{eq:TmunuPartonRenNS}). Here $q^{ \{\m_1}\cdots q^{\m_n \}}$ is symmetric and traceless, i.e.
the harmonic part of the tensor $q^{\m_1}\cdots q^{\m_n }$. On the r.h.s. of Eq.~(\ref{eq:TmunuPartonRenNS}), it is obvious that the $n$-th order differentiation in the projection operator ${\cal P}_n$ singles out precisely the $n$-th moment, i.e. the coefficient of $1/x^{n}$. All other powers of $1/x$ vanish either by differentiation or after nullifying the momentum $p$. The operator ${\cal P}_n$ does not act on the renormalization constant $Z_{\rm ns}$ and the coefficient functions on the r.h.s. of Eq.~(\ref{eq:TmunuPartonRenNS}), as they are only functions of $n$, $\a_s$ and $\epsilon$. However, ${\cal P}_n$ does act on the partonic bare OMEs $A^{j,{\rm bare}}_{{\rm{parton}}}$, where the nullification of $p$ effectively eliminates all but the tree level diagrams $A^{j,{\rm{tree}}}_{{\rm parton}}$, because any diagram with loops becomes a massless tadpole and is put to zero in dimensional regularization. Finally, the $O(p^2)$ terms in Eq.~(\ref{eq:TmunuPartonRenNS}), which denote higher twist contributions, become proportional to the metric tensor after differentiation. They are removed by the harmonic tensor $q^{ \{\m_1}\cdots q^{\m_n \}}$. On the l.h.s. of Eq.~(\ref{eq:TmunuPartonRenNS}), ${\cal P}_{n}$ is applied to the integrands of all Feynman diagrams contributing to the invariants $t_{i,{\rm{parton}}}$. The momentum $p$ is nullified before taking the limit $\epsilon \rightarrow 0$, so that all infrared divergences as $p\rightarrow 0$ are dimensionally regularized for individual diagrams. This reduces the 4-point diagrams that contribute to $t_{\mu\nu}$ to self-energy type diagrams (2-point functions) accessible to reduction algorithms such as {\sc Mincer}~\cite{Larin:1991fz} (see Section~\ref{sec:calculation}).
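The moment-extracting mechanism can be illustrated in a scalar toy analogue of Eq.~(\ref{eq:projectionoperator}) (our sketch, ignoring the tensorial structure): differentiating a power series $n$ times and then nullifying the expansion variable isolates exactly the $n$-th coefficient, just as ${\cal P}_n$ picks out the coefficient of $1/x^{n}$:

```python
import sympy as sp

w = sp.Symbol('w')  # toy analogue of 1/x (or 2 p.q / Q^2)

def project(f, n, var):
    # scalar analogue of the projector P_n: differentiate n times,
    # divide by n!, and nullify the expansion variable
    return sp.diff(f, var, n).subs(var, 0) / sp.factorial(n)

# toy "amplitude": a finite sum of moments m_k times w^k,
# mimicking the sum over (1/x)^n in the OPE
m = sp.symbols('m1:6')  # symbols m1 ... m5
t = sum(mk * w**k for k, mk in enumerate(m, start=1))

# the projector singles out exactly the coefficient of w^3
assert project(t, 3, w) == m[2]
```

All other powers of $w$ are annihilated either by the differentiation (too low a power) or by setting $w=0$ afterwards (too high a power), mirroring the discussion above.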
To summarize, we find after application of the projection operator ${\cal P}_n$ to Eq.~(\ref{eq:TmunuPartonRenNS}) \begin{eqnarray} \label{eq:TmunuPartonMomNS} t_{i,{\rm{ns}}} \left(n,\frac{Q^2}{\m^2},\a_s,\epsilon\right) &\equiv& \cp_n\, t_{i,\rm ns}(x,Q^2,\a_s,\epsilon) \\ &=& C_{i,{\rm ns}}\left(n,\frac{Q^2}{\m^2},\a_s,\epsilon\right) Z_{\rm{ns}}\left(\a_s,\frac{1}{\epsilon}\right) A^{{\rm ns},{\rm tree}}_{\rm q}(n,\epsilon) \, , \nonumber \end{eqnarray} where $i = 2,3,L$. This is our starting point for an iterative determination of the anomalous dimensions and coefficient functions via the OPE, since the $C_{i,{\rm ns}}$ ($Z_{\rm{ns}}$) are expanded in positive (negative) powers of $\epsilon$, while the OME $A_{{\rm q}}^{\rm ns, tree}$ factorizes after application of the projector $\cp_n$. In a series expansion in terms of the renormalized coupling $a_{\rm s}$ at the scale $\mu^2 = Q^2$ we can write \begin{eqnarray} \label{eq:partinv-exp} t_{i,{\rm ns}}(n) &=& \left( t^{(0)}_{i,{\rm ns}}(n) +a_{s}\, t^{(1)}_{i,{\rm ns}}(n) + a_{s}^{2}\, t^{(2)}_{i,{\rm ns}}(n) + a_{s}^{3}\, t^{(3)}_{i,{\rm ns}}(n) + \dots \right) A^{{{\rm ns}},{\rm{tree}}}_{{\rm q}}(n) \, , \end{eqnarray} with $i = 2,3,L$; recall that we use $a_{s}\equiv \alpha_{s}/(4 \pi)$. Then we normalize the leading order contributions as follows, \begin{eqnarray} \label{eq:F-0} t^{(0)}_{i,{\rm ns}}(n) = 1 \, , \quad i = 2,3 \, , \quad\quad \mbox{and} \quad\quad t^{(0)}_{L,{\rm ns}}(n) = 0 \, , \end{eqnarray} where the OME $A_{{\rm q}}^{\rm ns, tree}$ (being a constant) has been absorbed into the normalization of Eq.~(\ref{eq:F-0}). With the normalization~(\ref{eq:F-0}) one has \begin{eqnarray} \label{eq:F2L3-NS-0} c^{(0)}_{i,{\rm ns}}(n)=1 \, , \quad i = 2,3 \, , \quad\quad \mbox{and} \quad\quad c^{(0)}_{L,{\rm ns}}(n)=0 \, .
\end{eqnarray} At first order in $\a_s$, expanding up to order $\epsilon^{2}$ and suppressing the $n$-dependence from now on for brevity, we find \begin{eqnarray} \label{eq:F2-NS-1} t^{(1)}_{i,{\rm ns}} &=& \frac{1}{\epsilon} \g^{(0)}_{\rm ns} + c^{(1)}_{i,{\rm ns}} + \epsilon a^{(1)}_{i,{\rm ns}} + \epsilon^{2} b^{(1)}_{i,{\rm ns}} +{\cal O}(\epsilon^{3}) \, , \quad i = 2,3 \, , \\[1ex] \label{eq:FL-NS-1} t^{(1)}_{L,{\rm ns}} &=& c^{(1)}_{L,{\rm ns}} + \epsilon a^{(1)}_{L,{\rm ns}} + \epsilon^{2} b^{(1)}_{L,{\rm ns}} +{\cal O}(\epsilon^{3}) \, . \end{eqnarray} Performing the expansion at $\alpha_{s}^{2}$ up to order $\epsilon$ we arrive at the equations: \begin{eqnarray} \label{eq:F2-NS-2} t^{(2)}_{i,{\rm ns}} &=& \frac{1}{2 \epsilon^2} \left\{ \left( \g^{(0)}_{\rm ns } - \b_0 \right) \g^{(0)}_{\rm ns } \right\} + \frac{1}{2 \epsilon} \left\{ \g^{(1)}_{\rm ns } + 2 c^{(1)}_{i,{\rm ns}}\, \g^{(0)}_{\rm ns} \right\} \\ & & + c^{(2)}_{i,{\rm ns}} + a^{(1)}_{i,{\rm ns}}\, \g^{(0)}_{\rm ns} + \epsilon \left\{ a^{(2)}_{i,{\rm ns}} + b^{(1)}_{i,{\rm ns}}\, \g^{(0)}_{\rm ns } \right\} +{\cal O}(\epsilon^{2}) \, , \quad i = 2,3 \, , \nonumber \\[1ex] \label{eq:FL-NS-2} t^{(2)}_{L,{\rm ns}} &=& \frac{1}{ \epsilon} \left\{ c^{(1)}_{L,{\rm ns}}\, \g^{(0)}_{\rm ns} \right\} + c^{(2)}_{L,{\rm ns}} + a^{(1)}_{L,{\rm ns}}\, \g^{(0)}_{\rm ns} + \epsilon \left\{ a^{(2)}_{L,{\rm ns}} + b^{(1)}_{L,{\rm ns}}\, \g^{(0)}_{\rm ns } \right\} + {\cal O}(\epsilon^{2}) \, . 
\nonumber \end{eqnarray} Finally, for the third order $\alpha_{s}^{3}$ we obtain \begin{eqnarray} \label{eq:F2-NS-3} t^{(3)}_{i,{\rm ns}} &=& \frac{1}{6 \epsilon^3} \left\{ \left( \g^{(0)}_{\rm ns} - 2 \b_{0} \right)\left( \g^{(0)}_{\rm ns}-\b_{0} \right) \g^{(0)}_{\rm ns} \right\} \\ & & + \frac{1}{6 \epsilon^2} \left\{ 3 \g^{(0)}_{\rm ns}\, \g^{(1)}_{\rm ns}- 2 \b_{0}\, \g^{(1)}_{\rm ns} - 2 \b_{1}\, \g^{(0)}_{\rm ns} + 3c^{(1)}_{i,{\rm ns}}\left( \g^{(0)}_{\rm ns}-\b_{0}\right) \g^{(0)}_{\rm ns} \right\} \nonumber \\ & & + \frac{1}{6 \epsilon} \left\{2 \g^{(2)}_{\rm ns} +3 c^{(1)}_{i,{\rm ns}}\, \g^{(1)}_{\rm ns} + 6 c^{(2)}_{i,{\rm ns}}\, \g^{(0)}_{\rm ns} + 3 a^{(1)}_{i,{\rm ns}} \left( \g^{(0)}_{\rm ns}-\b_{0}\right)\g^{(0)}_{\rm ns} \right\} \nonumber \\ & & + \frac{1}{2}\left\{ 2 c^{(3)}_{i,{\rm ns}} + a^{(1)}_{i,{\rm ns}}\, \g^{(1)}_{\rm ns} + 2 a^{(2)}_{i,{\rm ns}}\, \g^{(0)}_{\rm ns} + b^{(1)}_{i,{\rm ns}}\left( \g^{(0)}_{\rm ns} - \b_{0}\right)\, \g^{(0)}_{\rm ns} \right\} + {\cal O}(\epsilon) \, , \quad i = 2,3 \, , \nonumber \\[1ex] \label{eq:FL-NS-3} t^{(3)}_{L,{\rm ns}} &=& \frac{1}{2 \epsilon^2} \left\{ c^{(1)}_{L,{\rm ns}}\left( \g^{(0)}_{\rm ns}-\b_{0}\right) \g^{(0)}_{\rm ns} \right\} \\ & & + \frac{1}{2 \epsilon} \left\{ c^{(1)}_{L,{\rm ns}}\, \g^{(1)}_{\rm ns} +2 c^{(2)}_{L,{\rm ns}}\, \g^{(0)}_{\rm ns} + a^{(1)}_{L,{\rm ns}} \left( \g^{(0)}_{\rm ns}-\b_{0}\right)\g^{(0)}_{\rm ns} \right\}\nonumber \\ & & + \frac{1}{2}\left\{ 2 c^{(3)}_{L,{\rm ns}} + a^{(1)}_{L,{\rm ns}}\, \g^{(1)}_{\rm ns} + 2 a^{(2)}_{L,{\rm ns}}\, \g^{(0)}_{\rm ns}+ b^{(1)}_{L,{\rm ns}}\left( \g^{(0)}_{\rm ns} - \b_{0}\right)\, \g^{(0)}_{\rm ns} \right\} + {\cal O}(\epsilon) \, . \nonumber \end{eqnarray} Eqs.~(\ref{eq:F2-NS-1})--(\ref{eq:FL-NS-3}) hold for both even and odd Mellin moments alike and we did not distinguish these in our notation.
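The fixed pole pattern of these expansions can be checked symbolically. The sketch below, a cross-check we add for illustration (not part of the original computation), multiplies the series for $C_{i,{\rm ns}}$ and $Z_{\rm ns}$ through order $a_s^2$ and confirms the pole structure of $t^{(2)}_{i,{\rm ns}}$ in Eq.~(\ref{eq:F2-NS-2}):

```python
import sympy as sp

a, eps = sp.symbols('a epsilon')
g0, g1, b0 = sp.symbols('gamma0 gamma1 beta0')
c1, a1, b1, c2, a2 = sp.symbols('c1 a1 b1 c2 a2')

# Z_ns through order a^2, cf. Eq. (Zns3)
Z = 1 + a*g0/eps + a**2*((g0 - b0)*g0/(2*eps**2) + g1/(2*eps))
# coefficient function through order a^2, cf. Eq. (cf-exp)
C = 1 + a*(c1 + eps*a1 + eps**2*b1) + a**2*(c2 + eps*a2)

# t = C * Z; pick out the order-a^2 part
t2 = sp.expand(C*Z).coeff(a, 2)

# displayed pole structure of t^(2), Eq. (F2-NS-2)
expected = ((g0 - b0)*g0/(2*eps**2) + (g1 + 2*c1*g0)/(2*eps)
            + c2 + a1*g0 + eps*(a2 + b1*g0))
assert sp.simplify(t2 - expected) == 0
```

The $1/\epsilon^2$ term thus follows entirely from lower-order quantities, which is the sense in which the higher poles serve as checks while only the single pole carries the new anomalous dimension.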
However, from the discussion of the preceding Sections it is clear that the respective anomalous dimensions $\gamma^{(l)}$ and coefficient functions $c^{(l)}_{i,{\rm ns}}$ describe different physical processes. In fact, it is well known that starting from $\gamma^{(1)}$ and $c^{(2)}_{i,{\rm ns}}$ (and $a^{(2)}_{i,{\rm ns}}$, $b^{(2)}_{i,{\rm ns}}$, etc.) they differ. The new results of the present paper from Eqs.~(\ref{eq:F2-NS-3}), (\ref{eq:FL-NS-3}) at third order in $\alpha_{s}$ consist of odd Mellin moments for $c^{(3)}_{2,{\rm ns}}$ and $c^{(3)}_{L,{\rm ns}}$ and even moments for $c^{(3)}_{3,{\rm ns}}$. Below in Sec.~\ref{sec:calculation} we present numerical results for them, while the complete expressions in terms of rational numbers are deferred to Appendix~\ref{sec:appA}. \setcounter{equation}{0} \section{Calculation} \label{sec:calculation} In the previous Sections, we have laid the foundations for our calculation of Mellin moments of the DIS charged current structure functions $F_2^{\nu P -\bar \nu P}$, $F_3^{\nu P -\bar \nu P}$ and $F_L^{\nu P -\bar \nu P}$ together with their respective coefficient functions and anomalous dimensions. To that end, following Refs.~\cite{Moch:2004pa,Vogt:2004mw,Moch:2004xu,Vermaseren:2005qc}, we have calculated the Lorentz invariants of the parton Compton amplitude $t^{(l)}_{i,{\rm ns}}$, $l=0,1,2,3$, $i=2,3,L$, as given on the l.h.s. of Eqs.~(\ref{eq:F2-NS-1})--(\ref{eq:FL-NS-3}) from first principles. All contributing Feynman diagrams were generated and then projected by one of the Lorentz projections~(\ref{eq:projL})--(\ref{eq:proj3}). Subsequently, the application of Eq.~(\ref{eq:projectionoperator}) for the harmonic projection ${\cal P}_n$ extracts all contributions to the given Mellin moment, which are finally solved in terms of rational numbers, values of the Riemann zeta function and $SU(N_c)$ color coefficients $C_A$, $C_F$ and $n_f$.
Due to the large number of diagrams involved in the calculations up to order $\alpha_{s}^{3}$, a sufficient degree of automation is necessary. Therefore, the calculations are organized in detail as follows: \begin{table} \begin{center} \begin{tabular}{ccccccc} \hline\hline & & & & & & \\[-3mm] Lorentz &Structure & tree & one-loop & two-loop &three-loop & sum\\ invariant &function &$ {\cal O}(\alpha_{s}^{0})$ & ${\cal O}(\alpha_{s}^{1})$ & ${\cal O}(\alpha_{s}^{2})$ & ${\cal O}(\alpha_{s}^{3})$ & \\ & & & & & & \\[-3mm] \hline & & & & & & \\[-4mm] $t_{2,{\rm ns}}$ & $F_{2}^{ \nu P - \bar \nu P}$ & 1 & 4 & 55 & 1016 & 1076 \\ & & & & & & \\[-4mm] \hline & & & & & & \\[-4mm] $t_{L,{\rm ns}}$ & $F_{L}^{ \nu P - \bar \nu P}$ & 1 & 4 & 55 & 1016 & 1076 \\ & & & & & & \\[-4mm] \hline & & & & & & \\[-4mm] $t_{3,{\rm ns}}$ & $F_{3}^{ \nu P - \bar \nu P}$ & 1 & 4 & 63 & 1246 & 1314 \\ & & & & & & \\[-4mm] \hline\hline & & & & & & \\[-3mm] in total & & & & & & 3466\\ & & & & & & \\[-3mm] \hline \hline \end{tabular} \caption{The number of diagrams involved in the calculation of the $\nu P - \bar \nu P$ charged current DIS structure functions $F_2$, $F_L$ and $F_3$ at tree level, one-loop, two-loop and three-loop, respectively.} \label{t:tab1} \end{center} \end{table} \begin{itemize} \item All Feynman diagrams are generated automatically with the program {\sc Qgraf}~\cite{Nogueira:1991ex}. This program generates all possible Feynman diagrams (and topologies) for a given process in a special format. The program works very effectively, producing a database with thousands of diagrams within seconds. For charged current DIS we have obtained from {\sc Qgraf} 2, 10, 153 and 3468 diagrams for the tree, one-loop, two-loop and three-loop contributions, respectively. \item For all further calculations we have relied on the latest version of the symbolic manipulation program {\sc Form}~\cite{Vermaseren:2002rp,Vermaseren:2006ag}.
For the further treatment of the {\sc Qgraf} output, such as the analysis of the topologies, the explicit implementation of the Feynman rules etc., we have adapted a dedicated {\sc Form} procedure {\it conv.prc} from previous work, e.g. Ref.~\cite{Vermaseren:2005qc}. Most importantly, this procedure exploits as many symmetry properties of the original Feynman diagrams as possible in order to reduce their total number. The upshot of these efforts is presented in Table~\ref{t:tab1} order by order for the structure functions corresponding to the different Lorentz projections. As one can see, the number of diagrams obtained for $F_3^{\nu P -\bar \nu P}$ is always larger than for $F_2^{\nu P -\bar \nu P}$ or $F_L^{\nu P -\bar \nu P}$. The reason is that in the former case we cannot apply certain symmetry transformations due to the presence of $\gamma_{5}$ in the vertices. The database for $F_2^{\nu P -\bar \nu P}$ and $F_L^{\nu P -\bar \nu P}$ produced by us almost coincides with the one used in Ref.~\cite{Retey:2000nq}, except for small modifications. The database for $F_3^{\nu P -\bar \nu P}$ is completely new. \item For the calculation of the color factors for each Feynman diagram we have used the {\sc Form} package {\it color.h}~\cite{vanRitbergen:1998pn}. \item The actual calculation of the Mellin moments of the Feynman integrals has made use of {\sc Mincer}. A detailed description of this program, the {\sc Form} package {\it mincer.h}, can be found in Ref.~\cite{Larin:1991fz}. To organize the work of (a slightly modified version of) {\sc Mincer} with the input databases we have used the dedicated database program {\sc Minos}~\cite{Larin:1994vu,Larin:1997wd}. \item Finally, on top of {\sc Mincer} and {\sc Minos} some shell scripts managed the automatic runs of both programs for different parts of the calculation.
This facilitates the bookkeeping of the different input parameters for {\sc Minos} and {\sc Mincer} due to the different Lorentz projections, orders of $\alpha_{s}$ etc. in distributed running. Moreover, the shell scripts also organized the final summations over the flavor classes as well as the output of all final results. \end{itemize} Next, let us discuss the various checks we performed on the results of our calculations. First of all, we have tested our set-up by a recalculation of some known even Mellin moments for $F_2$, $F_L$ and odd moments for $F_3$ to find agreement with the published results of Refs.~\cite{Larin:1994vu,Larin:1997wd,Retey:2000nq,Vermaseren:2005qc}. Then, most importantly, we have checked gauge invariance, i.e. we calculated all diagrams for all Mellin moments presented in this article with a general gauge parameter $\xi$ in the gluon propagator, \begin{equation} \label{eq:gluonprop} {\rm i}\, \frac{-g^{\mu\nu}+(1-\xi)\, q^{\mu } q^{\nu}/q^2}{q^2} \, . \end{equation} We kept all powers of $\xi$ (up to $\xi^{4}$ at three loops in this calculation) through the entire calculation. Since parton structure functions are physical quantities, any dependence on the gauge parameter $\xi$ must disappear in the final result. This was indeed the case after summing all diagrams in a given flavor class. Furthermore, we have compared the anomalous dimensions $\gamma_{\rm ns}$~Eq.~(\ref{eq:gamma_ns}) as calculated by us from Eqs.~(\ref{eq:F2-NS-1})--(\ref{eq:FL-NS-3}) up to three loops with the results available in the literature~\cite{Moch:2004pa} and found complete agreement for both even and odd Mellin moments. In addition, the coefficient functions for the structure functions $F_{i}^{ \nu P - \bar \nu P},\,\, i=2,3,L$ at two loops are known from earlier work. Our two-loop results as obtained from Eqs.~(\ref{eq:F2-NS-1})--(\ref{eq:FL-NS-2}) coincide with Refs.~\cite{vanNeerven:1991nn,Zijlstra:1991qc,Zijlstra:1992kj,Zijlstra:1992qd,Moch:1999eb}.
Finally, let us say a few words about the hardware requirements. All calculations are CPU-time and disk space consuming, especially for the higher Mellin moments (higher $n$ values). They were typically performed on a 64-bit AMD Opteron 2.2 GHz Linux machine with 4 GByte of memory. For example, the calculation of $t_{3,{\rm ns}}$ for $n=10$ took 56 days with the gauge parameter included, while the calculation of both $t_{2,{\rm ns}}$ and $t_{L,{\rm ns}}$ for $n=9$ on the same machine needed 33 days. For comparison, the calculation of the lowest Mellin moment $n=1$ for both projections $t_{2,{\rm ns}}$ and $t_{L,{\rm ns}}$ takes less than an hour, whereas $n=2$ for $t_{3,{\rm ns}}$ needs approximately a couple of hours, always with the full gauge parameter dependence. At intermediate stages the calculations also required a large amount of disk space. Although the programs calculate the diagrams one at a time, the size of the intermediate algebraic expressions for some diagrams can grow up to 20 GByte of disk space (for instance for $n=10$ three-loop diagrams). On the other hand, the final result for any of the Lorentz invariants occupies only some kBytes. With access to improved hardware, we plan to push the calculation further to $n=16$, cf. Ref.~\cite{Blumlein:2004xt}. \setcounter{equation}{0} \section{Results} \label{sec:results} Following the steps outlined above in Sec.~\ref{sec:calculation} we arrive at the results for the coefficient functions $C_{2, {\rm ns}}\,$, $C_{L,{\rm ns}}$ at the odd-integer values $n = 1,\, \ldots,\, 9$, and for $C_{3,{\rm ns}}$ at the even-integer values $n = 2,\, \ldots ,\, 10$, up to order $\alpha_{s}^3$. The third order expressions represent new results of this article.
Using $a_{s}\equiv \alpha_{s}/(4 \pi)$ and the shorthand notation for the $n$-th moment $C_{i,n}^{\rm ns} \equiv C_{i, {\rm ns}}(n)$ we find the following numerical values at the scale $\mu_r = \mu_f = Q$, \begin{eqnarray} \label{eq:c2ns1} C_{2,1}^{\rm ns} & = & 1 \, , \quad \\ C_{2,3}^{\rm ns} & = & 1 + 3.222222222\, a_{\rm s} + a_{\rm s}^2\, ( 72.32720288 - 11.125 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 1948.031519 - 496.5427343 \, {n^{}_{\! f}} + 14.20173594 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{2,5}^{\rm ns} & = & 1 + 8.725925925\, a_{\rm s} + a_{\rm s}^2\, ( 220.4151827 - 22.64048559 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3 \, ( 6925.814438 - 1347.125829 \, {n^{}_{\! f}} + 32.94421923 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{2,7}^{\rm ns} & = & 1 + 13.43677248\, a_{\rm s} + a_{\rm s}^2 \, ( 386.2911104 - 33.10212484 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3 \, ( 13505.16600 - 2298.472900 \, {n^{}_{\! f}} + 52.34745652 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ \label{eq:c2ns9} C_{2,9}^{\rm ns} & = & 1+ 17.47820105\, a_{\rm s} + a_{\rm s}^2\, ( 555.2720117 - 42.50367619 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 20990.73668 - 3278.689323 \, {n^{}_{\! f}} + 71.31040423 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{L,1}^{\rm ns} & = & + 2.666666666\, a_{\rm s} + a_{\rm s} ^2 \, ( 61.33333333 - 4.740740740 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 2313.911655 - 405.2001359 \, {n^{}_{\! f}} + 10.20576131 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{L,3}^{\rm ns} & = & 1.333333333\, a_{\rm s} + a_{\rm s}^2\, ( 52.40384466 - 3.925925925 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 2584.178446 - 406.0509532 \, {n^{}_{\! f}} + 11.59670781 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{L,5}^{\rm ns} & = & + 0.8888888888\, a_{\rm s} + a_{\rm s}^2\, ( 44.23466187 - 3.012345679 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 2451.068575 - 360.6487058 \, {n^{}_{\! f}} + 10.15089163 \, {n^{}_{\! 
f}}^2 ) \nonumber\, , \quad \\ C_{L,7}^{\rm ns} & = & 0.6666666666\, a_{\rm s} + a_{\rm s}^2\, ( 38.25090234 - 2.440740740 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 2290.679208 - 321.7773285 \, {n^{}_{\! f}} + 8.868930041 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{L,9}^{\rm ns} & = & 0.5333333333\,a_{\rm s} + a_{\rm s}^2\, ( 33.82305394 - 2.056719576 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 2146.302724 - 290.9906309 \, {n^{}_{\! f}} + 7.868039976 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{3,2}^{\rm ns} & = & 1 - 1.777777777\, a_{\rm s} + a_{\rm s}^2\, ( - 47.07704646 - 0.09876543209 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( - 2359.001407 + 305.2538856 \, {n^{}_{\! f}} - 6.864103442 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{3,4}^{\rm ns} & = & 1 + 4.866666666\, a_{\rm s} + a_{\rm s}^2\, ( 90.15322509 - 13.25902469 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 1478.747872 - 491.0449098 \, {n^{}_{\! f}} + 11.77903924 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{3,6}^{\rm ns} & = & 1 + 10.35132275\, a_{\rm s} + a_{\rm s}^2\, ( 258.8595696 - 25.14210054 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 7586.717646 - 1458.855783 \, {n^{}_{\! f}} + 32.73965909 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ C_{3,8}^{\rm ns} & = & 1 + 14.90026455\, a_{\rm s} + a_{\rm s}^2\, ( 433.2106396 - 35.58166191 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 14862.60949 - 2469.591886 \, {n^{}_{\! f}} + 53.25812942 \, {n^{}_{\! f}}^2 ) \nonumber\, , \quad \\ \label{eq:c3ns10} C_{3,10}^{\rm ns} & = & 1 + 18.79152477\, a_{\rm s} + a_{\rm s}^2\, ( 605.9424494 - 44.87506803 \, {n^{}_{\! f}} ) \\ & &\mbox{} + a_{\rm s}^3\, ( 22806.38215 - 3482.933316 \, {n^{}_{\! f}} + 72.83344233 \, {n^{}_{\! f}}^2 ) \nonumber\, . \end{eqnarray} Exact analytical expressions of these moments also with complete dependence on the color coefficients are given in Appendix~\ref{sec:appA}. 
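To give a feel for the size of these corrections, the following sketch (our illustration) evaluates the series for $C_{2,3}^{\rm ns}$ with the numerical coefficients quoted above for ${n^{}_{\! f}}=4$; the input $\alpha_s(Q)=0.118$ is an assumed example value, not taken from the paper:

```python
import math

def c2_n3(alpha_s, nf):
    # perturbative series for C_{2,3}^{ns} with the numerical
    # coefficients quoted in the text; a_s = alpha_s / (4 pi)
    a_s = alpha_s / (4.0 * math.pi)
    lo   = 1.0
    nlo  = 3.222222222 * a_s
    nnlo = (72.32720288 - 11.125 * nf) * a_s**2
    n3lo = (1948.031519 - 496.5427343 * nf + 14.20173594 * nf**2) * a_s**3
    return lo + nlo + nnlo + n3lo

val = c2_n3(0.118, 4)  # successive orders shrink rapidly
```

With these inputs the NLO, NNLO and third-order terms contribute roughly $3\%$, $0.2\%$ and $0.02\%$ respectively, illustrating the good convergence of the expansion at this scale.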
As was mentioned before, the two-loop coefficient functions in Eqs.~(\ref{eq:c2ns1})--(\ref{eq:c3ns10}) agree with the results in Refs.~\cite{vanNeerven:1991nn,Zijlstra:1991qc,Zijlstra:1992kj,Zijlstra:1992qd,Moch:1999eb}. In addition, Eq.~(\ref{eq:c2ns1}) for $C_{2,1}^{\rm ns}$ is nothing else but a manifestation of the Adler sum rule for DIS structure functions, \begin{equation} \label{eq:adler-sumrule} \int\limits_0^1 \frac{dx}{x} \biggl(F_2^{\nu P}(x,Q^{2}) - F_2^{\nu N}(x,Q^{2}) \biggr) = 2 \, , \end{equation} which measures the isospin of the nucleon in the quark-parton model and does not receive any perturbative or non-perturbative corrections in QCD, see e.g. the discussion in Ref.~\cite{Dokshitzer:1995qm}. Therefore, Eq.~(\ref{eq:c2ns1}) is another important check of the correctness of our results. \begin{figure}[ht] \begin{center} \includegraphics[width=8.15cm]{./GraphC2.eps} \includegraphics[width=8.15cm]{./GraphCL.eps} \caption[]{\label{fig:plotC2CL} The first ten integer Mellin moments of the third order non-singlet coefficient functions $c^{(3)}_{2,{\rm ns}}$ (left) and $c^{(3)}_{L,{\rm ns}}$ (right) for charged current DIS with $n_f=4$ flavors. For the even moments, the flavor class $fl_{02}$ has been omitted, i.e. $fl_{02}=0$ (see also the discussion in Sec.~\ref{sec:formalism}). } \end{center} \end{figure} \begin{figure}[ht] \begin{center} \includegraphics[width=8.15cm]{./GraphC3.eps} \caption[]{\label{fig:plotC3} The first ten integer Mellin moments of the third order non-singlet coefficient functions $c^{(3)}_{3,{\rm ns}}$ for charged current DIS with $n_f=4$ flavors. For the odd moments, the flavor class $fl_{02}$ has been omitted, i.e. $fl_{02}=0$ (see also the discussion in Sec.~\ref{sec:formalism}). } \end{center} \end{figure} For illustration, let us plot the Mellin moments of the coefficient functions $c^{(3)}_{2,{\rm ns}}$, $c^{(3)}_{3,{\rm ns}}$ and $c^{(3)}_{L,{\rm ns}}$ at three loops. 
The new non-singlet results (blue diamonds in Figs.~\ref{fig:plotC2CL} and \ref{fig:plotC3}) exhibit a smooth pattern similar to that of the known results (red squares in Figs.~\ref{fig:plotC2CL} and \ref{fig:plotC3}). Thus, it is feasible to use these moments for an approximate analytic reconstruction of the yet unknown coefficient functions $c^{(3)}_{i,{\rm ns}}$ for $(\nu P - {\bar \nu} P)$-DIS prior to an ``all-$n$'' calculation, similar to e.g. Refs.~\cite{vanNeerven:2001pe,Vermaseren:2006ag}. We will do so in a companion paper~\cite{MRV1}. Furthermore, we see that the respective values for odd and even moments, for instance on the left in Fig.~\ref{fig:plotC2CL}, do confirm that the differences between $c^{(3)\, {\nu P + {\bar \nu} P}}_{2,{\rm ns}}$ and $c^{(3)\, {\nu P - {\bar \nu} P}}_{2,{\rm ns}}$ are numerically small. This observation (see Fig.~\ref{fig:plotC3}) also provides an a posteriori justification for the extrapolation procedure from odd to even moments for $C_3$ in Refs.~\cite{Kataev:1999bp,Kataev:2001kk}. There, the available information on odd moments~\cite{Retey:2000nq} was used in fits of CCFR data~\cite{Seligman:1997mc} to the structure function $xF_3$ at NNLO in QCD and beyond. A further discussion of this and related issues is given in Ref.~\cite{MRV1}. \setcounter{equation}{0} \section{Conclusions} \label{sec:conclusions} In the present paper we have presented new results for Mellin moments of the charged current DIS structure functions $F_2^{\nu P - \bar \nu P}$, $F_L^{\nu P - \bar \nu P}$ and $F_3^{\nu P - \bar \nu P}$ including the perturbative QCD corrections to three loops. In the former case ($F_2$, $F_L$) we have computed the first five odd-integer Mellin moments, while in the latter case ($F_3$) the first five even-integer moments have been given.
Our efforts are part of an ongoing program~\cite{% Moch:2004pa,Vogt:2004mw,vanNeerven:1991nn,Zijlstra:1991qc,Zijlstra:1992kj,Zijlstra:1992qd,% Moch:1999eb,Moch:2004xu,Vermaseren:2005qc,Larin:1994vu,Larin:1997wd,Retey:2000nq,Moch:2001im,Blumlein:2004xt% } to calculate perturbative QCD corrections of all DIS structure functions to three-loop accuracy. Within the framework of the OPE and the optical theorem we have calculated Feynman diagrams of the forward Compton amplitude $T_{\mu\nu}$ in Mellin space. In our presentation we have emphasized the symmetry properties of $T_{\mu\nu}$ and their relation to charged current ${\nu P \pm \bar \nu P}$ DIS, which was a crucial point in setting up the databases of Feynman diagrams. We have performed various checks on our computation. Most prominently, we have kept all powers of the gauge parameter $\xi$ throughout the entire calculation to check that any $\xi$-dependence vanishes in our final results. Furthermore, we agree with the literature as far as the two-loop coefficient functions~\cite{vanNeerven:1991nn,Zijlstra:1991qc,Zijlstra:1992kj,Zijlstra:1992qd,Moch:1999eb} and the three-loop anomalous dimensions~\cite{Moch:2004pa} are concerned. The discussion of phenomenological consequences of our Mellin space results is deferred to Ref.~\cite{MRV1}. Future research will be devoted to the calculation of some higher Mellin moments, potentially $n=11,\dots,16$, depending on the available hardware infrastructure. Subsequently, we will also focus on an ``all-$n$'' calculation in Mellin-$n$ space with methods of Refs.~\cite{Moch:2004pa,Vogt:2004mw,Moch:2004xu,Vermaseren:2005qc}, since all databases for Feynman diagrams contributing to $F_2^{ \nu P - \bar \nu P}$, $F_3^{ \nu P - \bar \nu P}$ and $F_L^{\nu P -\bar \nu P}$ are available now. {\sc Form} files of these results can be obtained from the preprint server {\tt http://arXiv.org} by downloading the source of this article. 
Furthermore, they are available from the authors upon request. {\bf{Acknowledgments:}} We are grateful to J.~Vermaseren and A.~Vogt for useful discussions and to A.~Vogt for valuable comments on the manuscript. The figures have been prepared with the packages {\sc Axodraw}~\cite{Vermaseren:1994je} and {\sc Jaxo\-draw}~\cite{Binosi:2003yf}. We acknowledge support by the Helmholtz Gemeinschaft under contract VH-NG-105 and in part by the Deutsche Forschungsgemeinschaft in Sonderforschungs\-be\-reich/Transregio~9.
\section*{\bf 1 Introduction} \end{center} $$ $$ When, in a gauge theory, the local generators of gauge transformations are reducible, the quantization rules are known only in the case where the reducibility has a finite number of stages [1]. The case of infinite reducibility is also of interest [1-4] but in this case the usual methods of covariant quantization don't work. An important subcase of infinite reducibility is the one where the generators of gauge transformations are nilpotent. Only this subcase is considered in the present paper, and, furthermore, a solution to the quantization problem is obtained only for a theory with equal numbers of boson and fermion variables. This is the theory whose action solves the master equation [1]. The master equation has two aspects. On the one hand, it yields a universal formulation of any gauge theory (at least finite-reducible). A given gauge theory is quantized by building a dynamically equivalent solution of the master equation. On the other hand, a solution of the master equation itself is the action of a gauge theory with nilpotent generators [1]. The quantization rules in this theory have been known for one special class of gauges [1] which thus far was sufficient for all practical purposes. However, going beyond this class was notoriously difficult because the master equation, being a theory with nilpotent generators, has never been quantized properly. This problem is solved below. The solution is simple but has novel and unexpected features which are probably common for all cases of nilpotent generators. It is known that the higher the tower of reducibility the more ghost fields are needed for unitarity [1]. Therefore, naively, an infinite tower requires an infinite number of ghosts. The correct answer is that, for a generation of the gauge algebra with nilpotent generators, no ghosts are needed at all. 
The solution is that, because of the nilpotency, one can't write down the functional integral for a single copy of the system. One has to double the system and write down a product of two coupled functional integrals for the doubled system. One is then to take a square root. Only in the special class of gauges where the gauge conditions are commuting do the functional integrals decouple, and one recovers the previously known result. \begin{center} \section*{\bf 2 The master equation and the commuting gauge} \end{center} The master equation is the following equation: \begin{equation} (\S,\S)\equiv 2\frac{\partial_r\S}{\partial\phi^i} \frac{\partial_l\S}{\partial{\bar\phi}_i}=0 \end{equation} for a boson function $\S(\phi,{\bar\phi})$ of fields $\phi^i$ and the ``antifields'' ${\bar\phi}_i$ having the statistics opposite to the statistics of $\phi^i$. The notation $(\S,\S)$ refers to the operation of antibrackets [1]. Introducing the collective notation for the set of fields and antifields: \begin{equation} \varphi^a =\phi^i , {\bar\phi}_i\;\;,\quad a=1,\ldots 2n \end{equation} \begin{equation} \S(\varphi)=\S(\phi,{\bar\phi})\quad , \end{equation} one can rewrite the master equation as follows [1]: \begin{equation} (\S,\S)\equiv \frac{\partial_r\S}{\partial\varphi^a} \xi^{ab}\frac{\partial_l\S}{\partial\varphi^b}=0\quad , \end{equation} \begin{equation} \xi^{ab}= \left( \begin{array}{cc} {\displaystyle 0}&{\displaystyle\delta^i_k}\nonumber\\ {\displaystyle -\delta^k_i}&{\displaystyle 0}\nonumber\\ \end{array} \right)\quad . \end{equation} Differentiating eq. (4) yields the Noether identities \begin{equation} \frac{\partial_r\S}{\partial\varphi^a}R^a_c=0\quad , \end{equation} \begin{equation} R^a_c=\xi^{ab}\frac{\partial_l\partial_r\S}{\partial\varphi^b\partial\varphi^c} \end{equation} which show that every solution of the master equation is a gauge action with the generators (7). Differentiating eq.
(6) shows that these generators are nilpotent on shell: \begin{equation} R^a_bR^b_c\:\biggl |_{\frac{\partial\S}{\partial\varphi}=0}=0\quad . \end{equation} The solution $\S$ of the master equation is assumed proper [1]: \begin{equation} \mbox{rank}\:\partial^2\S=\frac{2n}{2} \end{equation} in which case half of the variables $\varphi$ are redundant and need to be gauged away: \begin{equation} \chi_i(\varphi)=0\quad . \end{equation} For the special class of gauge conditions mentioned above, the gauge-fixing functions $\chi_i(\varphi)$ are of the form [1] \begin{equation} \chi_i(\varphi)=\chi_i(\phi,{\bar\phi})={\bar\phi}_i- \frac{\partial\Psi(\phi)}{\partial\phi^i} \end{equation} with an arbitrary fermion function $\Psi(\phi)$. Below, these gauge conditions are referred to as $\chi^{\mbox{\scriptsize com}}(\varphi)$ where \mbox{``com''} stands for ``commuting''. Their canonical-invariant characterization is \begin{equation} (\chi^{\mbox{\scriptsize com}}_i,\chi^{\mbox{\scriptsize com}}_j)=0\quad , \end{equation} i.e., the gauge-fixing functions commute in the sense of antibrackets [1]. In this class of gauges, the functional integral generating the Green's functions is of the form [1] \begin{equation} Z(J)=\int d\varphi\,\exp {\rm i}\Bigl(\S(\varphi)+\varphi^aJ_a\Bigr)\, \delta\Bigl(\chi^{\mbox{\scriptsize com}}(\varphi)\Bigr)\quad . \end{equation} Here and below, the measure [1] is omitted but can always be restored in the usual way. No ghosts are needed in the commuting gauge. It is shown in the appendix that the master equation always admits a gauge of the form (11) at least locally. Apart from that, it has always been a mystery what makes this gauge distinguished, and attempts at a generalization didn't go through. The answers come with quantizing the master equation. The procedure below is an example of quantization of a gauge theory with nilpotent generators.
\begin{center} \section*{\bf 3 Quantizing the master equation} \end{center} For quantizing the master equation I shall build the master equation for the master equation. Taking $\S(\varphi)$ for the original action of a gauge field $\varphi^a$, I shall introduce the antifield $\varphi^*_a$ and look for an action \begin{equation} {\cal M}(\varphi,\varphi^*) \end{equation} that would have $\S(\varphi)$ as its classical limit but would also satisfy the master equation \begin{equation} \frac{\partial_r{\cal M}}{\partial\varphi^a}\frac{\partial_l{\cal M}}{\partial\varphi^*_a}=0 \end{equation} and be its proper solution: \begin{equation} \mbox{rank}\:\partial^2{\cal M}=\frac{4n}{2}\quad . \end{equation} The key point is that one needs no ghosts for building the solution for ${\cal M}$. To see this, recall why ghosts are needed at all [1]. They are needed for including the gauge generators in the hessian of the action but, in the present case, the gauge generators of the original $\S(\varphi)$ are already contained in its hessian, eq. (7). Therefore, it should be possible to satisfy all the conditions for ${\cal M}$ without introducing new fields. Indeed, here is the solution for ${\cal M}$: \begin{equation} {\cal M}(\varphi,\varphi^*)=\S(\varphi^a+\xi^{ab}\varphi^*_b)+ \S(\varphi^a-\xi^{ab}\varphi^*_b)\quad . \end{equation} It is easy to check that, with this expression, eq. (15) is satisfied by virtue of the original master equation (4): the diagonal terms reproduce $(\S,\S)$ at the two shifted arguments and vanish by (4), while the cross terms cancel pairwise owing to the symmetry of the antibracket of two boson functions. The rank condition (16) is satisfied because the arguments of the two $\S$ 's in (17) are independent. It remains to insert in (17) an overall factor of 1/2 to satisfy the condition of the classical limit, but {\it this is precisely what should not be done}. It is another key point that the classical limit should be kept doubled (see below). In terms of the original fields and antifields, eq.
(3), the solution obtained is of the form \begin{equation} {\cal M}(\varphi,\varphi^*)=\S(\phi+{\bar\phi}^*\, ,\,{\bar\phi}-\phi^*)+ \S(\phi-{\bar\phi}^*\, ,\,{\bar\phi}+\phi^*)\quad , \end{equation} \begin{equation} \varphi=\phi,{\bar\phi}\;\quad ,\;\quad \varphi^*=\phi^*,{\bar\phi}^* \end{equation} where $\phi^*$ is the new antifield to the original field $\phi$ , and ${\bar\phi}^*$ is the new antifield to the original antifield ${\bar\phi}$ . The antifield to the antifield has, of course, the statistics of the field. The remaining procedure is standard. For the introduction of gauge conditions one needs ghosts of the auxiliary sector [1]. One extends ${\cal M}(\varphi,\varphi^*)$ in the usual way: \begin{equation} {\cal M}(\varphi,\varphi^*)+{\bar C}^*_i\pi^i= {\cal M}_{\mbox{\scriptsize tot}} (\varphi,\pi,{\bar C}\: ;\:\varphi^*,\pi^*,{\bar C}^*) \end{equation} where the bar over $C$ is needed only to hold to the conventional terminology [1] since now there is only the ghost ${\bar C}$ ; there is no $C$ . Then, by construction [1], the following functional integral: \begin{eqnarray} Z^2(J)&=&\int d\varphi d\pi d{\bar C} d\varphi^* d\pi^* d{\bar C}^*\, \exp {\rm i}\Bigl({\cal M}_{\mbox{\scriptsize tot}} +2\varphi^aJ_a\Bigr)\nonumber\\ &\times&{}\delta\Bigl(\varphi^*-\frac{\partial\Psi}{\partial\varphi}\Bigr) \delta\Bigl({\bar C}^*-\frac{\partial\Psi}{\partial{\bar C}}\Bigr) \delta\Bigl(\pi^*-\frac{\partial\Psi}{\partial\pi}\Bigr) \end{eqnarray} does not depend on the choice of \begin{equation} \Psi=\Psi(\varphi,\pi,{\bar C})\quad . \end{equation} It is not a misprint that, in (21), $Z(J)$ appears squared, and the source term $\varphi^aJ_a$ is doubled. The $Z(J)$ is the correct generating functional for the master equation. 
Indeed, if one decomposes $\varphi^a$ into the original $\phi^i,{\bar\phi}_i$ , and chooses \begin{equation} \Psi(\varphi,\pi,{\bar C})={\bar\phi}_i{\bar C}^i -\frac{1}{2} \psi_1(\phi+{\bar C})+\frac{1}{2}\psi_2(\phi-{\bar C}) \end{equation} with arbitrary functions $\psi_1,\psi_2$ , then upon the redesignations \begin{eqnarray} \varphi^a+\xi^{ab}\varphi^*_b&=&\varphi_1^a\; ,\\ \varphi^a-\xi^{ab}\varphi^*_b&=&\varphi_2^a \end{eqnarray} one obtains \begin{equation} Z^2(J)=\int d\varphi_1 d\varphi_2\,\exp {\rm i}\Bigl(\S(\varphi_1)+ \S(\varphi_2)+\varphi_1^aJ_a+\varphi_2^aJ_a\Bigr)\, \delta\Bigl(\chi^{\mbox{\scriptsize com}}(\varphi_1)\Bigr) \delta\Bigl(\chi^{\mbox{\scriptsize com}}(\varphi_2)\Bigr)\; . \end{equation} This is precisely the square of the functional integral (13) in the commuting gauge! \begin{center} \section*{\bf 4 The master equation in the general gauge} \end{center} By setting in (22) \begin{equation} \Psi(\varphi,\pi,{\bar C})={\bar C}^i\chi_i(\varphi) \end{equation} with {\it arbitrary} $\chi_i(\varphi)$ one obtains the generating functional $Z(J)$ in the general gauge: \begin{equation} Z^2(J)=\int d\varphi d{\bar C}\,\exp {\rm i}\biggl( \S\Bigl(\varphi^a+\xi^{ab}{\bar C}^i\frac{\partial_r\chi_i}{\partial\varphi^b}\Bigr)+ \S\Bigl(\varphi^a-\xi^{ab}{\bar C}^i\frac{\partial_r\chi_i}{\partial\varphi^b}\Bigr)+ 2\varphi^aJ_a\biggr)\,\delta\Bigl(\chi(\varphi)\Bigr)\;\; . \end{equation} In the general gauge, the $Z(J)$ itself cannot be written as a functional integral. One has to write down a product of two coupled functional integrals and take a square root. The mystery of the commuting gauge is in the fact that, in this gauge, the functional integrals decouple. The explanation of the fact that, in the commuting gauge, there are no ghosts is that the ghosts make another copy of the original field. After the two copies decouple, each copy remains without ghosts. 
Expanding in ${\bar C}$ in (28), one has \begin{eqnarray} Z^2(J)&=&\int d\varphi d{\bar C}\,\exp {\rm i}\biggl( 2\S(\varphi)+2\varphi^aJ_a -{\bar C}^k\frac{\partial_r\chi_k}{\partial\varphi^e}\xi^{ea} \frac{\partial_l\partial_r\S}{\partial\varphi^a\partial\varphi^b}\xi^{bd} \frac{\partial_r\chi_i}{\partial\varphi^d}{\bar C}^i (-1)^{\varepsilon_i\varepsilon_b}\nonumber\\ &&\hspace{3cm}{}+O({\bar C}^4)\biggr)\,\delta\Bigl(\chi(\varphi)\Bigr) \end{eqnarray} with the obvious sign factor. It is seen that, owing to the doubling in (17) and in the source term, the classical limit obtains correctly but also, at the one-loop level, the ghost contribution doubles the contribution of the gauge field. Indeed, the statistics of the ghosts ${\bar C}^i$ is opposite to the statistics of the redundant variables (say, antifields) and is, therefore, the same as the statistics of the independent variables (fields). Moreover, instead of the usual ${\bar C}C$ in the quadratic term, one now has ${\bar C}{\bar C}$. The ghost determinant will, therefore, appear to the power 1/2. Finally, the inverse ghost propagator contains the hessian of the original gauge action, which is the basic consequence of the nilpotency of the gauge generators. There are, of course, also the higher-order ghost couplings. To summarize, the master equation possesses Baron M\"unchhausen's gift of lifting itself up by the hair. The master equation (for the master equation) in the commuting gauge yields the master equation in the general gauge. Hence the notation ${\cal M}$ for the M\"unchhausen master solution. \begin{center} \section*{\bf Acknowledgments} \end{center} This work is a product of the research program ``Quantization, Generalized BRS Cohomology, and Anomalies'' organized by the Erwin Schr\"odinger International Institute for Mathematical Physics in autumn of 1998. The author is grateful to the Erwin Schr\"odinger Institute and the Technical University of Vienna for hospitality and support.
Special thanks to Anton Rebhan for his organizational efforts, the physics discussions, and his enjoyable company in Vienna. When in the black hole, the author is supported in part by the Russian Foundation for Fundamental Research Grant 99-02-18107 and INTAS Grant 93-493-ext. \newpage
\section{Introduction\label{sec:intro}} The partially filled 5$f$ electronic shell exhibits an itinerant-localized dual nature, which remains a long-standing issue in condensed matter physics~\cite{LAReview}. The itinerant 5$f$ states, embodied by the noticeable hybridization with conduction bands, and the degree of localization, encoded in magnetic order, make the electronic structure sensitive to external parameters such as temperature, pressure and chemical doping. Since plutonium (Pu) sits on the edge between the itinerant and localized 5$f$ states of the light and heavy actinides~\cite{RevModPhys.81.235}, its 5$f$ states tend to participate in active chemical bonding and form abundant Pu-based compounds~\cite{Bauer2015Plutonium}, which exhibit novel quantum phenomena including unconventional superconductivity, magnetic order, nontrivial topology, and heavy-fermion behavior, to name a few~\cite{PhysRevLett.108.017001,PhysRevLett.111.176404,Chudo2013}. PuSn$_3$ crystallizes in the cubic AuCu$_{3}$ structure (space group $Pm$-3$m$) [see Fig.~\ref{fig:tstruct}(a)] with lattice constant 4.63 {\AA}~\cite{SARI1983301}. The temperature-independent paramagnetic order~\cite{Handbookferromag,PhysRevB.39.13115} is rather anomalous because the Pu-Pu distance is much larger than the Hill limit of 3.40 {\AA}~\cite{Hilllimit}, which would indicate localized 5$f$ electrons. In addition, its low-temperature electrical resistivity displays a power-law relation~\cite{Brodsky1978}, which suggests the predominant distribution of conduction $s$ and $p$ states instead of 5$f$ states at the Fermi level. Hence the observed pseudogap at the Fermi level is attributed to the spin-orbit coupling, which enables the fully occupied 5$f_{5/2}$ states and partially distributed 5$f_{7/2}$ states. It is proposed that the $c$-$f$ hybridization becomes the dominant mechanism for Pu-5$f$ electron delocalization~\cite{PhysRevB.39.13115}.
So far, photoemission spectroscopy, angle-resolved photoemission spectroscopy, x-ray absorption spectroscopy and de Haas-van Alphen quantum oscillation experiments, which together would provide the subtle electronic structure of the 5$f$ states, are still lacking. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{T_struct.pdf} \caption{(Color online). (a) Crystal structure of PuSn$_3$. (b) Schematic picture of the first Brillouin zone of PuSn$_3$. Some high-symmetry $k$ points $X$ [0.5, 0.0, 0.0], $\Gamma$ [0.0, 0.0, 0.0], $M$ [0.5, 0.5, 0.0], and $R$ [0.5, 0.5, 0.5] are marked. \label{fig:tstruct}} \end{figure} On the theoretical side, investigations concerning the structural stability, bulk modulus, band structure, electronic density of states and Fermi surface~\cite{PhysRevB.39.13115,BAIZAEE2005247,BAIZAEE2007287,PhysRevB.88.125106} attempt to elaborate on the narrow 5$f$ bands, the Fermi surface topology and the pseudogap in the density of states at the Fermi level. Even though the Fermi surface and quantum oscillations are exhaustively interpreted, the narrow flat 5$f$ bands missing within traditional density functional theory are ascribed to the underestimation of the strong correlation among 5$f$ electrons~\cite{PhysRevB.88.125106}. Moreover, the unphysical density of states is mostly attributed to the exclusion of the spin-orbit coupling~\cite{BAIZAEE2007287}. Particularly, the underlying mechanism for the paramagnetic ground state has not been clearly understood. More importantly, the strongly correlated electronic states have not been fully quantified. Above all, the temperature-dependent electronic structure is rarely touched upon in the previous literature. Consequently, it is crucial to study the temperature dependence of the 5$f$ electronic structure and the electronic correlation to disclose the relationship between the electronic structure and the paramagnetic ground state of PuSn$_3$.
In this paper, we present the temperature dependence of the electronic structure of PuSn$_3$ using the density functional theory in combination with the single-site dynamical mean-field theory. We endeavor to elucidate the itinerant-localized nature of the 5$f$ states and to uncover the strongly correlated 5$f$ states. We calculate the momentum-resolved spectral functions, density of states, Fermi surface, self-energy functions, valence state fluctuations and hybridization dynamics of PuSn$_3$. It is found that the 5$f$ states become itinerant at low temperature, accompanied by remarkable valence state fluctuations. Additionally, the onset of atomic multiplets and prominent $c$-$f$ hybridization at low temperature imply a temperature-driven 5$f$ itinerant-localized crossover. Especially, the change in Fermi surface topology suggests a possible temperature-induced Lifshitz transition. Finally, the orbital-selective electronic correlation and hybridization dynamics are addressed. The rest of this paper is organized as follows. In Sec.~\ref{sec:method}, the computational details are introduced. In Sec.~\ref{sec:results}, the electronic band structures, Fermi surface topology, total and partial 5$f$ density of states, 5$f$ self-energy functions, and probabilities of atomic eigenstates are presented. In Sec.~\ref{sec:dis}, we attempt to clarify some important topics about the 5$f$ itinerant-localized crossover and the hybridization gaps. Finally, Sec.~\ref{sec:summary} serves as a brief conclusion. \section{Methods\label{sec:method}} The well-established DFT + DMFT method combines the realistic band structure calculation of DFT with the non-perturbative many-body treatment of local interaction effects in DMFT~\cite{RevModPhys.68.13,RevModPhys.78.865}. The strong electronic correlation and the large spin-orbit coupling are treated on the same footing.
Here we perform charge fully self-consistent calculations to explore the temperature-dependent electronic structure of PuSn$_3$ using the DFT + DMFT method. The self-consistent implementation of this method is divided into DFT and DMFT parts, which are solved separately by using the \texttt{WIEN2K} code~\cite{wien2k} and the \texttt{EDMFTF} package~\cite{PhysRevB.81.195107}. In the DFT calculation, the experimental crystal structure of PuSn$_3$~\cite{SARI1983301} was used and thermal expansion was ignored. The generalized gradient approximation was adopted to formulate the exchange-correlation functional~\cite{PhysRevLett.77.3865}. Additionally, the spin-orbit coupling was taken into account in a second-order variational manner. The muffin-tin radii for Pu and Sn were chosen as 2.7 au and 2.4 au, respectively. The $k$-point mesh was $15 \times 15 \times 15$ and $R_{\text{MT}}K_{\text{MAX}} = 8.0$. In the DMFT part, the 5$f$ orbitals of plutonium were treated as correlated. The four-fermion interaction matrix was parameterized using the Coulomb interaction $U = 5.0$~eV and the Hund's exchange $J_H=0.6$~eV via the Slater integrals~\cite{PhysRevB.59.9903}. The fully localized limit scheme was used to calculate the double-counting term for the impurity self-energy function~\cite{jpcm:1997}. The vertex-corrected one-crossing approximation (OCA) impurity solver~\cite{PhysRevB.64.115111} was employed to solve the resulting multi-orbital Anderson impurity models. Note that we not only utilized the good quantum numbers $N$ (total occupancy) and $J$ (total angular momentum) to classify the atomic eigenstates, but also made a severe truncation ($N \in$ [3, 7]) of the local Hilbert space~\cite{PhysRevB.75.155113} to reduce the computational burden. The convergence criteria for charge and energy were $10^{-5}$ e and $10^{-5}$ Ry, respectively.
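As a rough illustration of why the truncation $N \in$ [3, 7] matters (a back-of-the-envelope count, not part of the \texttt{EDMFTF} workflow itself), one can count the many-body Fock states of the 14 spin-orbitals of the 5$f$ shell that survive the occupancy cut:

```python
from math import comb

n_spin_orbitals = 14  # 5f shell: 6 (5f_{5/2}) + 8 (5f_{7/2}) spin-orbitals

# Full local Hilbert space: all 2^14 Fock states.
full_dim = 2 ** n_spin_orbitals

# Truncated space: keep only total occupancies N = 3 ... 7.
truncated_dim = sum(comb(n_spin_orbitals, n) for n in range(3, 8))

print(full_dim)       # 16384
print(truncated_dim)  # 9802
print(truncated_dim / full_dim)  # ~0.60
```

The cut keeps roughly 60\% of the Fock states and, more importantly, discards high-energy charge states far from the nominal $5f^{5}$ configuration, which substantially reduces the cost of the OCA impurity solver.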
It is worth noting that the direct output of the OCA impurity solver is the real-axis self-energy $\Sigma (\omega)$, which was used to calculate the momentum-resolved spectral functions $A(\mathbf{k},\omega)$ and the density of states $A(\omega)$, as well as other physical observables. \section{Results\label{sec:results}} \begin{figure*}[th] \centering \includegraphics[width=\textwidth]{T_akw.pdf} \caption{(Color online). Momentum-resolved spectral functions $A(\mathbf{k},\omega)$ of PuSn$_3$ as a function of temperature under ambient pressure calculated by the DFT + DMFT method. The horizontal lines denote the Fermi level. (a) 580 K. (b) 290 K. (c) 116 K. (d) 29 K. \label{fig:akw}} \end{figure*} \begin{figure*}[th] \centering \includegraphics[width=\textwidth]{T_dos.pdf} \caption{(Color online). Electronic density of states of PuSn$_3$ obtained by the DFT + DMFT method. Total density of states (thick solid lines) and partial 5$f$ density of states (color-filled regions) at 580 K (a), 290 K (b), 29 K (c). The peaks resulting from the quasiparticle multiplets are denoted ``P1'', ``P2'', ``P3'', and ``P4''. The $j$-resolved 5$f$ partial density of states, with the $5f_{5/2}$ and $5f_{7/2}$ components represented by purple and green lines, respectively, at 580 K (d), 290 K (e), 29 K (f). (g) The evolution of the 5$f$ density of states with temperature in the vicinity of the Fermi level. (h) The height of the central quasiparticle peak h$_{\rm QP}$(5$f_{5/2}$) and the 5$f$ spectral weight at the Fermi level $A_{5f}$($\omega$ = 0) as a function of temperature. \label{fig:dos}} \end{figure*} \begin{figure*}[th] \centering \includegraphics[width=\textwidth]{T_fs.pdf} \caption{(Color online). Three-dimensional and two-dimensional Fermi surfaces of PuSn$_3$ calculated by the DFT + DMFT method at 580 K (a, b, c) and 116 K (d, e, f). The two-dimensional Fermi surfaces are shown on the $k_x$-$k_y$ plane (with $k_z = \pi/2$). They are visualized with different colors.
\label{fig:FS}} \end{figure*} \begin{figure*}[th] \centering \includegraphics[width=\textwidth]{T_prob.pdf} \caption{(Color online). Probabilities of $5f^{4}$ (blue), $5f^{5}$ (red), $5f^{6}$ (green) configurations (a) and $5f^{3}$ (purple), $5f^{7}$ (cyan) (b) for PuSn$_3$ derived by DFT + DMFT calculations. (c) 5$f$ occupancy as a function of temperature. \label{fig:prob}} \end{figure*} \subsection{Momentum-resolved spectral functions} It is illuminating to inspect the momentum-resolved spectral functions which encode interesting features of PuSn$_3$ [see Fig.~\ref{fig:akw}]. We examine the quasiparticle bands as a function of temperature to uncover the inherent nature of 5$f$ states. First of all, it is necessary to evaluate the reliability of our calculated band structures. In comparison to the band structure of paramagnetic ground state in previous DFT calculations~\cite{PhysRevB.88.125106}, the low temperature momentum-resolved spectral functions [see Fig.~\ref{fig:akw}(d)] share similar characteristics. For example, a striking electron pocket locates at $\Gamma$ point about --3 eV with the hole pockets at $X$ and $M$ points below the Fermi level. Generally, the band dispersion around $X$ and $M$ points are reasonably in line with the previous DFT calculations~\cite{PhysRevB.88.125106}. In special, the conduction bands along $\Gamma$ - $M$ high-symmetry line are very much alike. Even though the flat bands parallel to the vicinity of Fermi level at low temperature [see Fig.~\ref{fig:akw}(d)] are absent in the DFT calculations, the basic features of energy bands are consistent with each other, evincing the validity of our results. The discrepancy in the existence of narrow 5$f$ bands around the Fermi level indicates the underestimation of strong correlation among 5$f$ electrons within traditional density functional theory, which demands a more rigorous and sophisticated quantum many-body approach. 
Next we turn to the temperature effects on the quasiparticle bands of PuSn$_3$. At high temperature [see Fig.~\ref{fig:akw}(a)], only conduction bands with noticeable dispersions cross the Fermi level, accompanied by the emergence of the lower and upper Hubbard bands in the energy ranges of --3 eV $\sim$ --1 eV and 1 eV $\sim$ 3 eV, respectively. When the temperature decreases, the 5$f$ electrons gradually become coherent and certain quasiparticle bands start to build up near the Fermi level. These bands are not evident at first because of their small quasiparticle weight and tiny band intensity. Combined with the density of states in Fig.~\ref{fig:dos}, these flat narrow bands mainly stem from the 5$f$ states, signifying the itinerant tendency of the 5$f$ states. As the temperature is lowered further, the quasiparticle weight becomes appreciable and apparent narrow 5$f$ bands develop at --0.9 eV, --0.47 eV and the Fermi level, which hybridize with the 5$p$ conduction bands of the Sn atoms to open evident hybridization gaps. It is noted that the nearly dispersionless quasiparticle bands are split by the spin-orbit coupling with an energy gap of about 0.43 eV. Meanwhile, the profiles of the conduction bands above and below the Fermi level remain almost unchanged with respect to temperature. Accordingly, the significant quasiparticle bands, the remarkable $c-f$ hybridization and the salient spectral weight of the 5$f$ states jointly imply an intensifying itinerancy of the 5$f$ bands with decreasing temperature. Thus, it is proposed that a localized-to-itinerant crossover of the 5$f$ correlated electronic states is induced by temperature in PuSn$_3$. \subsection{Density of states} To elucidate the evolution of the itinerancy of the 5$f$ correlated electronic states with temperature, we explore the density of states and the quasiparticle multiplets in a wide temperature range of 29 K $\sim$ 580 K. It is noticed that the basic features persist over the whole temperature range [see Fig.~\ref{fig:dos}(a)-(c)].
Firstly, the overall profiles of the total density of states resemble each other, including broad ``humps'' from --3 eV $\sim$ --1 eV and 1 eV $\sim$ 3 eV, which are mainly assigned to the lower and upper Hubbard bands of plutonium's 5$f$ orbitals. Secondly, the fourteen-fold degenerate 5$f$ states are split into six-fold degenerate 5$f_{5/2}$ and eight-fold degenerate 5$f_{7/2}$ subbands [see Fig.~\ref{fig:dos}(e)-(f)] because of spin-orbit coupling~\cite{RevModPhys.81.235,PhysRevB.102.245111,PhysRevB.101.125123}. A sharp and narrow quasiparticle (QP) peak develops at the Fermi level, mostly belonging to the 5$f_{5/2}$ orbital. Meanwhile, the two satellite peaks ``P1'' and ``P2'' at --0.9 eV and --0.47 eV with an energy gap of about 0.43 eV are ascribed to the 5$f_{5/2}$ and 5$f_{7/2}$ orbitals, respectively. Above the Fermi level, the reflected peaks ``P3'' and ``P4'' are located at 0.47 eV and 0.9 eV with respect to the central quasiparticle peak. So the five representative peaks are called ``quasiparticle multiplets''~\cite{2102.02034,2102.03085}. Tracing the origin of these quasiparticle multiplets, it is deduced that the ``P3'' and ``P4'' peaks are formed by a mix of 5$f_{5/2}$ and 5$f_{7/2}$ orbitals. Overall, it is mentioned that the atomic multiplets are induced by 5$f$ valence fluctuations, which leave fingerprints on the 5$f$ photoemission spectroscopy of PuSn$_3$. As to the temperature-dependent coherence and itinerancy of 5$f$ electrons, it is instructive to examine the behavior of the quasiparticle multiplets around the Fermi level. It reveals that the quasiparticle peak is too small to be seen at the Fermi level at high temperature, implying mostly localized and mainly incoherent 5$f$ states. When temperature slowly descends, the coherence of the 5$f$ valence electrons gradually builds up. As the spectral weights of the upper and lower Hubbard bands transfer to the Fermi level, the quasiparticle peak around the Fermi level rises up quickly.
It is apparent that the height of the 5$f_{5/2}$ quasiparticle peak [see Fig.~\ref{fig:dos}(g)] ascends swiftly and acutely, resulting in a sharp and intense quasiparticle peak at the Fermi level. Meanwhile, the quasiparticle weight at the Fermi level progressively magnifies, giving rise to coherent 5$f$ states. In a word, the significant hybridization between the 5$f$ electrons of the Pu atoms and the 5$p$ bands of the Sn atoms suggests the increasing itinerancy of the 5$f$ states. The intensifying $c-f$ hybridization helps to unveil the paramagnetic ground state. Hence the valence state fluctuations become predominant concurrently, which manifests the mixed-valence nature and hints at a potential temperature-induced localized to itinerant crossover of the 5$f$ states. \subsection{Fermi surface topology} The Fermi surface topology is an effective quantity for capturing the temperature-dependent electronic structure of PuSn$_3$. Figure~\ref{fig:FS} depicts the Fermi surface topology at two typical temperatures, 580 K and 116 K, respectively. There are two doubly degenerate bands crossing the Fermi level (No. of bands: 18 and 19, 20 and 21), which are labeled by $\alpha$ and $\beta$, respectively. Note that the Fermi surface of the $\alpha$ band takes an ellipsoidal shape, which is in coincidence with those in previous DFT calculations~\cite{PhysRevB.88.125106}. Nevertheless, the Fermi surface of the $\beta$ band takes an anisotropic form, which differs from the previous results~\cite{PhysRevB.88.125106}. The discrepancy of the $\beta$ band might arise from the temperature effect on the Fermi surface, because the DFT calculations~\cite{PhysRevB.88.125106} are carried out at zero temperature while we perform the calculations at finite temperature. As expected, the Fermi surface is sensitive to varying temperature.
It is detected that the volume of the Fermi surface for the $\alpha$ band expands evidently [see Fig.~\ref{fig:FS} (a) and (d)] when temperature diminishes, providing sufficient evidence for the itinerant 5$f$ electrons at low temperature. Since the inner structure of the three-dimensional Fermi surface of the $\beta$ band is hard to discern, the two-dimensional Fermi surfaces of the $\alpha$ and $\beta$ bands are visualized in Fig.~\ref{fig:FS} (c) and (f). Obviously, both the $\alpha$ and $\beta$ bands intersect the $\Gamma$ - $X$ line and the distance between the intersections of the two bands remains almost unchanged with decreasing temperature, which is in conformity with the momentum-resolved spectral functions [see Fig.~\ref{fig:akw}]. On the contrary, it appears that the distance between the $\beta$ band intersections with the Fermi level along the $M$ - $X$ line shrinks as temperature lowers. In particular, the anisotropic Fermi surface topology of the $\beta$ band undergoes a variation in its surface envelope along the $\Gamma$ - $M$ line on the two-dimensional Fermi surface. In consequence, the Fermi surface topology actually changes dramatically with decreasing temperature, which demonstrates a probable Lifshitz transition for the 5$f$ states and the advent of the 5$f$ localized to itinerant crossover driven by temperature. Since the Fermi surface could be probed by subsequent dHvA quantum oscillation experiments, the experimental results could clarify the underlying mechanism behind the paramagnetic ground state of PuSn$_3$ provided that no signature of Fermi surface nesting is observed. \begin{figure*}[th] \centering \includegraphics[width=\textwidth]{T_sig.pdf} \caption{(Color online). Real-frequency self-energy functions of PuSn$_3$ obtained by the DFT + DMFT method. (a)-(b) $Z|{\rm Im}\Sigma (\omega)|$ for the 5$f_{5/2}$ and 5$f_{7/2}$ states. $Z$ is the renormalization factor.
(c) Electron effective masses for the 5$f_{5/2}$ and 5$f_{7/2}$ states as a function of temperature, where the green arrow denotes the maximum electron effective mass for the 5$f_{5/2}$ states. \label{fig:sig}} \end{figure*} \subsection{Valence state fluctuations} It is worth mentioning that $\delta$-Pu displays obvious mixed-valence behavior, with a noninteger occupation number deviating from the nominal value 5.0. The 5$f$ electron atomic eigenstates derived from the output of the DMFT many-body states shed light on the valence state fluctuations and the related mixed-valence behavior. Here $p_\Gamma$ is used to quantify the probability that the 5$f$ electrons stay in each atomic eigenstate $\Gamma$. Then the average 5$f$ valence electron count is defined as $\langle n_{5f} \rangle = \sum_\Gamma p_\Gamma n_\Gamma$, where $n_\Gamma$ denotes the number of electrons in each atomic eigenstate $\Gamma$. Finally, the probability of the 5$f^n$ electronic configuration can be expressed as $\langle w(5f^{n}) \rangle = \sum_\Gamma p_\Gamma \delta (n-n_\Gamma)$. The calculated probabilities of the 5$f^n$ electronic configurations for PuSn$_3$ are visualized in Fig.~\ref{fig:prob}(a) and (b). Apparently, the probability of the 5$f^5$ electronic configuration is dominant, while the contributions of the 5$f^3$ and 5$f^7$ electronic configurations are almost negligible. As is shown in Table~\ref{tab:prob}, at 580 K, the probability of the 5$f^5$ electronic configuration accounts for 83.1\%, followed by the probabilities of the 5$f^4$ and 5$f^6$ electronic configurations of about 9.3\% and 7.4\%, respectively. Meanwhile, the occupation of 5$f$ electrons is approximately 4.98. When temperature gradually diminishes, the probability of the 5$f^5$ electronic configuration declines slowly. At the same time, the probability of the 5$f^4$ electronic configuration slightly decreases and the probability of the 5$f^6$ electronic configuration appreciably augments.
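The averaging rule $\langle n_{5f} \rangle = \sum_\Gamma p_\Gamma n_\Gamma$ can be checked directly from the probabilities listed in Table~\ref{tab:prob}; the short sketch below (restricted to the dominant 5$f^3$--5$f^7$ configurations) reproduces the occupancies quoted in this subsection:

```python
# Configuration probabilities p_Gamma from Table [tab:prob], keeping
# only the dominant 5f^3 ... 5f^7 configurations.
n_config = [3, 4, 5, 6, 7]
p_580K = [1.357e-3, 0.093, 0.831, 0.074, 4.934e-4]
p_29K = [9.921e-4, 0.074, 0.760, 0.163, 1.511e-3]

def mean_occupancy(n, p):
    """<n_5f> = sum_Gamma p_Gamma * n_Gamma."""
    return sum(ni * pi for ni, pi in zip(n, p))

print(round(mean_occupancy(n_config, p_580K), 2))  # 4.98
print(round(mean_occupancy(n_config, p_29K), 2))   # 5.09
```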
At 29 K, it is noticed that the probability of the 5$f^5$ electronic configuration drops to 76.0\%, with the 5$f^4$ and 5$f^6$ electronic configurations at about 7.4\% and 16.3\%, respectively. At this stage, the occupation of 5$f$ electrons experiences a minor increase to 5.09. In this scenario, the 5$f$ valence electrons are prone to spend more time in the 5$f^6$ configuration, enhancing the valence fluctuations and promoting the quasiparticle multiplets. In the low-temperature regime, it is speculated that PuSn$_3$ is a mixed-valence metal. It is instructive to refer to the isostructural compound PuIn$_3$~\cite{2102.03085}, whose electronic configuration appears insensitive to varying temperature. Analogous to cubic phase Pu$_3$Ga~\cite{2102.02034}, strong valence fluctuations play a key role in generating the quasiparticle multiplets at the Fermi level and regulating the effective 5$f$ valence electrons. Interestingly, the temperature-dependent patterns of the 5$f^4$ and 5$f^6$ electronic configurations intersect at about 387 K, demonstrating the initiating itinerancy of 5$f$ states and the appearance of electron coherence. Coincidently, the electron effective mass for the 5$f_{5/2}$ states obtained from the self-energy functions according to Eq.~\ref{eqsigma} reaches its maximum value at 387 K [see Fig.~\ref{fig:sig}(c)]. Generally speaking, electronic correlation is usually strong for localized 5$f$ electrons and grows weaker for itinerant 5$f$ electrons. The coincidence between the 5$f$ localized-itinerant nature and strong electronic correlation unravels a hidden connection between the itinerant degree and the electronic correlation, which requires further exploration. \begin{table}[th] \caption{Probabilities of $5f^{3}$, $5f^{4}$, $5f^{5}$, $5f^{6}$, and $5f^{7}$ for PuSn$_3$ at temperatures 580 K and 29 K, respectively.
\label{tab:prob}} \begin{ruledtabular} \begin{tabular}{cccccc} Temperatures & $5f^{3}$ & $5f^{4}$ & $5f^{5}$ & $5f^{6}$ & $5f^{7}$ \\ \hline 580 K & 1.357$\times 10^{-3}$ & 0.093 & 0.831 & 0.074 & 4.934$\times 10^{-4}$ \\ 29 K & 9.921$\times 10^{-4}$ & 0.074 & 0.760 & 0.163 & 1.511$\times 10^{-3}$ \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Self-energy functions} In general, the electronic correlations are encapsulated in the electron self-energy functions~\cite{RevModPhys.68.13,RevModPhys.78.865}. Figure~\ref{fig:sig} illustrates the renormalized imaginary part of the self-energy functions $Z|{\rm Im}\Sigma(\omega)|$ for the 5$f_{5/2}$ and 5$f_{7/2}$ states, as well as the electron effective mass. Here $Z$ denotes the quasiparticle weight or renormalization factor, which quantifies the electronic correlation strength and can be obtained from the real part of the self-energy functions via the following equation~\cite{RevModPhys.68.13}: \begin{equation} Z^{-1} = \frac{m^\star}{m_e} = 1 - \frac{\partial \text{Re} \Sigma(\omega)}{\partial \omega} \Big|_{\omega = 0}. \label{eqsigma} \end{equation} As stated above, $Z|{\rm Im}\Sigma(0)|$ embodies the low-energy electron scattering rate~\cite{PhysRevB.99.125113}. Both $Z|{\rm Im}\Sigma_{5f_{5/2}}(0)|$ and $Z|{\rm Im}\Sigma_{5f_{7/2}}(0)|$ approach zero at low temperature, suggesting itinerant 5$f$ states and the metallic feature. They develop finite values at high temperatures, indicating an increasing low-energy electron scattering rate. With increasing temperature, $Z|{\rm Im}\Sigma_{5f_{5/2}}(\omega)|$ grows rapidly and saturates in the energy range of [--0.5 eV, --0.25 eV]. Concurrently, $Z|{\rm Im}\Sigma_{5f_{7/2}}(\omega)|$ climbs up sharply in the high-energy regime ($|\omega|$ $>$ 0.5 eV).
Therefore the enhancement of $Z|{\rm Im}\Sigma_{5f_{7/2}}(\omega)|$ becomes more remarkable than that of $Z|{\rm Im}\Sigma_{5f_{5/2}}(\omega)|$, which corroborates the growing localization of the 5$f_{7/2}$ states and explains their incoherent nature at high temperature. Now that the self-energy functions of the 5$f_{5/2}$ and 5$f_{7/2}$ states evince differentiated temperature-dependent patterns, it is proposed that the 5$f$ electronic correlations are orbital dependent, which commonly occurs in actinide compounds. The evaluated electron effective masses $m^*$ for the 5$f_{5/2}$ and 5$f_{7/2}$ states~\cite{RevModPhys.68.13} according to Eq.~\ref{eqsigma} are given in Fig.~\ref{fig:sig}(c). With decreasing temperature, the electron effective mass for the 5$f_{5/2}$ states surges up rapidly to reach a maximum value of up to 40 $m_e$ at 387 K, followed by a monotonically decreasing trend below 387 K. The diminishing electron effective mass for the 5$f_{5/2}$ states implies weakening electronic correlations at low temperature, which is associated with the enhanced itinerancy of the 5$f$ states. Conversely, the electron effective mass for the 5$f_{7/2}$ states increases steadily until 387 K and then saturates to a finite value at low temperature. The large electron effective mass for the 5$f_{7/2}$ states at low temperature hints at strong electronic correlations and localized 5$f$ electrons. The distinct behaviors of the electron effective masses for the 5$f_{5/2}$ and 5$f_{7/2}$ states are associated with the intrinsic electron correlated characteristics, which confirms the orbital selective correlated states. \subsection{Discussions\label{sec:dis}} In this section, we discuss the heavy-fermion state and the temperature-dependent itinerant 5$f$ states to unveil the strongly correlated electronic structure of PuSn$_3$ and the underlying mechanism of the paramagnetic ground state.
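The orbital-resolved effective masses discussed here follow from a numerical derivative of ${\rm Re}\,\Sigma(\omega)$ at $\omega = 0$, cf. Eq.~\ref{eqsigma}. A minimal finite-difference sketch is given below; the linear model self-energy is purely illustrative, chosen so that the enhancement matches the quoted maximum of about 40 $m_e$:

```python
import numpy as np

def mass_enhancement(omega, re_sigma):
    """m*/m_e = 1 - d ReSigma/d omega at omega = 0, cf. Eq. (eqsigma)."""
    slope = np.gradient(re_sigma, omega)   # central differences
    i0 = np.argmin(np.abs(omega))          # grid point closest to omega = 0
    return 1.0 - slope[i0]

# Hypothetical linear self-energy ReSigma(w) = -39 w near the Fermi
# level, which would give m*/m_e = 40, i.e. Z = 1/40 = 0.025.
omega = np.linspace(-0.1, 0.1, 2001)       # eV
re_sigma = -39.0 * omega
m_ratio = mass_enhancement(omega, re_sigma)
Z = 1.0 / m_ratio                          # renormalization factor
```

In practice the real part of the analytically continued self-energy replaces the linear model, but the extraction of $Z$ and $m^\star/m_e$ proceeds in the same way.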
\textbf{Electronic heat capacity.} To explore the heavy-fermion behavior of PuSn$_3$, we evaluate the specific heat coefficient within the framework of Fermi-liquid theory. Taking into account the electronic degrees of freedom and lattice vibrations, the heat capacity of a solid contains two parts, $C_v(T)=\gamma T+\beta T^3$. The linear part of the heat capacity, $\gamma T$, gives the linear specific heat coefficient $\gamma$, which is expressed by the following equation: \begin{equation} \gamma = \pi k^2_{B} \sum_{\alpha} \frac{A_{\alpha}(0)}{Z_{\alpha}}, \end{equation} where $\alpha$ is the orbital index, $A_{\alpha}(0)$ is the spectral weight at the Fermi level, and $Z_{\alpha}$ is the orbital-resolved renormalization factor~\cite{PhysRevLett.101.056403,RevModPhys.68.13}. Since the effective mass enhancement is usually not quite substantial in Pu-based compounds, a specific heat coefficient $\gamma >$ 100 mJ/(mol\,K$^2$) is defined as a threshold for a Pu-based heavy-fermion compound~\cite{Bauer2015Plutonium}. Neglecting the contributions from the conduction bands, the calculated electronic specific heat coefficient of the 5$f$ states at 387 K is approximately 96 mJ/(mol\,K$^2$). Referring to the typical Pu-based heavy-fermion compound PuIn$_3$ with a specific heat coefficient of 307 mJ/(mol\,K$^2$), it is conjectured that PuSn$_3$ might be a promising candidate for a Pu-based heavy-fermion compound. \textbf{Orbital selectivity.} Here we analyze the temperature-driven localized to itinerant crossover and the orbital dependent electronic correlations of the 5$f$ states of PuSn$_3$. Primarily, flat narrow quasiparticle bands of 5$f$ states emerge at the Fermi level with growing quasiparticle weight, and they hybridize with the conduction bands to open obvious gaps. Then the itinerancy and coherence of the 5$f$ electrons are strengthened at low temperature.
Due to spin-orbit coupling, the 5$f$ states are split into 5$f_{5/2}$ and 5$f_{7/2}$ states, which become itinerant asynchronously. Combined with the density of states, it is deduced that the quasiparticle peak rises up quickly at low temperature, accompanied by predominant quasiparticle multiplets around the Fermi level. In addition, it is pointed out that the quasiparticle multiplets are evoked by valence state fluctuations. It is observed that the central quasiparticle peak is mainly contributed by the 5$f_{5/2}$ states, which develop coherence at a higher temperature than the 5$f_{7/2}$ states. Ultimately, the evolution pattern of the 5$f^n$ electronic configurations demonstrates remarkable valence state fluctuations and distinct mixed-valence behavior, which is closely related to the enhancing itinerancy and developing coherence of the 5$f$ states. Briefly, it is speculated that the temperature-induced localized to itinerant crossover of the 5$f$ states exhibits orbital differentiation. On the other hand, the self-energy functions and electron effective masses of the 5$f_{5/2}$ and 5$f_{7/2}$ states display differentiated temperature-dependent behaviors, which signifies the orbital dependent 5$f$ electronic correlation, reminiscent of Pu$_3$Ga~\cite{2102.02034}. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{T_hyb.pdf} \caption{(Color online). Temperature dependent quasiparticle band structures (along the $\Gamma$ - $M$ direction) of cubic phase PuSn$_3$ acquired by the DFT + DMFT method. (a) $T$ = 580 K. (b) $T$ = 116 K. (c) $T$ = 29 K. The white horizontal dashed lines denote the Fermi level. Here, we utilized a periodic Anderson model [see Eq.~\ref{eq:pam}] to fit the low-energy band structures. The colored dashed lines are the fitting results. The renormalized $5f$ energy level $\epsilon_0$ is denoted by the purple line, and the unrenormalized band dispersion for conduction electrons $\epsilon(k)$ is plotted as the black line.
The upper and lower branches of the hybridized bands are denoted by green and blue lines, respectively. \label{fig:hyb}} \end{figure*} \textbf{Evolution of hybridization gap.} As is already illustrated in Fig.~\ref{fig:akw}(a)-(d), strong hybridization exists between the 5$f$ bands and the conduction bands at low temperature. The $c-f$ hybridization will open a hybridization gap for the conduction bands. Since this gap is adjacent to the Fermi level, the physical properties of PuSn$_{3}$ should be largely affected by it. Hence, it is worthwhile to determine the size of the hybridization gap and elucidate its temperature dependence. Phenomenologically, the low-energy hybridized bands can be well described by a simple mean-field hybridization band picture (i.e. the periodic Anderson model)~\cite{RevModPhys.92.011002}. According to this picture, the energy dispersions are expressed as: \begin{equation} \label{eq:pam} E_{\pm}(k) = \frac{[\epsilon_0 + \epsilon(k)] \pm \sqrt{[\epsilon_0 - \epsilon(k)]^2 + 4|V_k|^2}}{2}, \end{equation} where $\epsilon_0$ denotes the renormalized $5f$ energy level, $\epsilon(k)$ is the unrenormalized band dispersion for conduction electrons, and $V_k$ denotes the strength of the hybridization. On the left side of this equation, the ``+'' and ``--'' symbols denote the upper and lower branches of the hybridized bands, respectively. In Fig.~\ref{fig:hyb}(a) the band dispersion data at $T = 580$~K are shown. At such a high temperature the hybridization is negligible (i.e. $|V_k| = 0$ and $\epsilon_0 = 0$), so Eq.~(\ref{eq:pam}) is simplified to $E_{\pm}(k) = \epsilon(k)$. Thus, we used the data at $T = 580$~K to calibrate $\epsilon(k)$ (see the black dashed lines in Fig.~\ref{fig:hyb}). The band structures at $T = 116$~K and $29$~K are shown in Fig.~\ref{fig:hyb}(b) and (c), respectively.
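The hybridized dispersions of Eq.~(\ref{eq:pam}) and the resulting direct gap $\Delta \approx 2|V_k|$ can be sketched numerically. In the sketch below, the parabolic conduction band is an illustrative stand-in for the calibrated $\epsilon(k)$, while $\epsilon_0$ and $|V_k|$ take the fitted values quoted in the text:

```python
import numpy as np

def pam_bands(eps_k, eps0, V):
    """Upper/lower hybridized branches of Eq. (eq:pam)."""
    avg = 0.5 * (eps0 + eps_k)
    root = 0.5 * np.sqrt((eps0 - eps_k) ** 2 + 4.0 * abs(V) ** 2)
    return avg + root, avg - root

# Illustrative parabolic conduction band in meV (a stand-in for the
# calibrated eps(k)); eps0 and |V_k| are the fitted values in meV.
k = np.linspace(-1.0, 1.0, 4001)
eps_k = 800.0 * k**2 - 200.0

for T, eps0, V in [(116, -26.5, 42.5), (29, -19.0, 50.0)]:
    E_up, E_dn = pam_bands(eps_k, eps0, V)
    direct_gap = np.min(E_up - E_dn)   # minimized where eps(k) = eps0
    print(T, round(direct_gap, 1))     # ~ 2|V_k|: 85.0 and 100.0 meV
```

The branch separation $E_+(k) - E_-(k) = \sqrt{[\epsilon_0 - \epsilon(k)]^2 + 4|V_k|^2}$ is minimized at the momentum where $\epsilon(k) = \epsilon_0$, which is why the direct gap equals $2|V_k|$ regardless of the detailed shape of the conduction band.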
After fitting Eq.~(\ref{eq:pam}), we derive $\epsilon_0 = -26.5$~meV and $|V_k| = 42.5$~meV for $T = 116$~K, and $\epsilon_0 = -19$~meV and $|V_k| = 50$~meV for $T = 29$~K. Thus, the direct hybridization gaps ($\Delta \approx 2|V_k|$) are 85~meV and 100~meV at 116~K and 29~K, respectively. The results demonstrate that the 5$f$ energy level is pulled toward the Fermi level at low temperature, enhancing the $c-f$ hybridization strength, so as to enlarge the hybridization gap $\Delta$. Similarly, the quasiparticle multiplets also hybridize with the conduction bands. Therefore, multiple hybridization gaps will open at low temperature, which is quite complex, and further work is ongoing. \section{conclusion\label{sec:summary}} In summary, we studied the detailed electronic structure of PuSn$_3$ by employing a state-of-the-art first-principles many-body approach. The temperature dependence of the localized to itinerant crossover of the 5$f$ states and the correlated electronic states was addressed systematically. As temperature declines, the 5$f$ electrons develop coherence and become itinerant, exhibiting outstanding quasiparticle multiplets and pronounced valence state fluctuations, accompanied by conspicuous spectral weight and noteworthy $c-f$ hybridization. Especially, the change in Fermi surface topology induced by temperature hints at a Lifshitz transition which could be detected by quantum oscillation experiments. Accordingly, the 5$f$ states experience a temperature-driven localized to itinerant crossover. Above all, the 5$f$ states manifest orbital selective electronic correlation, expressing themselves as orbital dependent electron effective masses and renormalized bands. Our calculated results not only provide a comprehensive picture of how the 5$f$ correlated electronic states evolve with respect to temperature, but also offer deep insight into the complex electronic structure and mysterious paramagnetic state of PuSn$_3$.
Further studies of other Pu-based compounds are under way. \begin{acknowledgments} This work was supported by the National Natural Science Foundation of China (No.~11874329, No.~11934020, No.~22025602), and the Science Challenge Project of China (No.~TZ2016004). \end{acknowledgments}
\section{Introduction} Solar irradiance varies on short timescales from minutes to hours as well as long time scales of days, years, decades and beyond. The variations at the different time scales originate from different processes in the solar atmosphere. The short time scales - other than reconnection processes such as flares - are mostly determined by convection and the longer ones are largely driven by the changing solar surface magnetic field \citep[see e.g., ][]{Domingo2009,Solanki2013ARAA,yeo2017b}. The results over the past decades clearly indicate that solar variability has an influence on the Earth's climate; for an overview see e.g. \cite{Matthes2017} and \cite{Shindell2020}. However, the exact quantification of the solar influence on climate - besides other natural forcings and the anthropogenic contribution - is still debated. \cite{Egorova2018FrEaS} conclude that the Sun must have varied substantially in order to explain the temperature increase in the early 19th century. To quantify the solar contribution to climate change more precisely, robust solar irradiance datasets are needed as input to the climate models. For those times when space observations provide a decent temporal and spectral coverage, a number of observational composite datasets are available \citep[see e.g., ][]{Haberreiter2017,Coddington2019,Marchenko2019}. However, when no satellite observations are available, irradiance reconstruction models using proxy datasets to describe the state of solar activity back in time are key for our understanding of its impact on climate. The extent to which solar variability has changed over long time scales is still an open question. \cite{Shapiro2011} proposed a reconstruction of the {Total Solar Irradiance (TSI) and Spectral Solar Irradiance (SSI)}, which accounts for a variable quiet Sun intensity. This approach leads to a significant change of the radiative forcing between the Maunder Minimum and the space era of about 6$\pm$3 W m$^{-2}$.
A later update of this approach suggests a smaller change in long-term irradiance variability \citep{Egorova2018AA}. However, other reconstruction methods give a rather flat long-term trend; see e.g. \cite{Solanki2013ARAA}. Recently, {\cite{Rempel2020}} determined a linear dependence between the outgoing radiative energy flux and the mean magnetic field strength for the quiet Sun. {While the strong sensitivity of TSI to the quiet Sun field strength implies that potential variations of the magnetic field over longer timescales could make a significant contribution to solar irradiance variations, such magnetic variations are not expected if the quiet Sun magnetic field originates primarily from a small-scale dynamo}. \begin{figure*}[th!] \begin{center} \includegraphics[width=0.9\linewidth, angle=0]{Fig1.pdf}\\ \hspace*{0.cm}\includegraphics[width=0.9\linewidth]{Fig2.pdf} \caption{\label{fig:magz} Top panels: {Photospheric} magnetic field strength $B_{z,\tau=1}$ for the snapshots of the pure hydrodynamic case (left panel), and the cases with a magnetic field strength of 100 G (middle panel), and 200 G (right panel); bottom panels: Horizontal temperature variation of the snapshots {at the top of the simulation box} for the pure hydrodynamical case (left panel), and the cases with a magnetic field strength of 100 G (middle panel), and 200 G (right panel).} \end{center} \end{figure*} Nevertheless, to determine the irradiance variations correctly, it is key to quantify the emergent spectrum for various solar surface magnetic elements with a radiative transfer code. State-of-the-art radiative transfer codes are however inherently different with respect to the atomic input data and {the numerical schemes used to solve the radiative transfer equation}. This introduces some uncertainty for irradiance reconstructions. \cite{Criscuoli2020} recently validated the performance of a set of commonly used codes, i.e.
the radiative transfer codes COSI \citep{Haberreiter2008b,shapiro2010,Criscuoli2020}, the RH code \citep{Uitenbroek2003}, and the radiation scheme of the MURaM code \citep{Rempel2014}. The authors find a good agreement of the spectral synthesis using 1D solar atmosphere structures. In the present paper we go beyond the study by \cite{Criscuoli2020} and use the vertical temperature and density profiles from three 3D MHD simulations, a hydrodynamic (HD) case with a 0-G magnetic field, and 100-G and 200-G MHD snapshots, as input to the above-mentioned radiative transfer codes for the spectral synthesis in order to {assess} their performance. This is deemed particularly necessary, as radiative transfer codes developed to work in 3D geometry may differ not only for the reasons mentioned above, but also in the numerical schemes employed to resolve discontinuities and strong gradients typically present in atmospheres generated by MHD codes \citep[see e.g.][]{Janett2019}. Similar work has already been carried out by \cite{afram2011} and \cite{Norris2017} by applying a set of simulations with different magnetic field strengths from the MURaM Code \citep{Voegler2005} as input for the spectral synthesis. To our knowledge, this is the first time that {spectral syntheses from 3D MHD simulations carried out by different radiative transfer codes are compared in a quantitative manner.} The paper is organized as follows. First, in Sec.\,\ref{sec:mhd} we introduce the 3D MHD simulations used to calculate the synthetic spectra. Second, in Sec.\,\ref{sec:codes} we describe the radiative transfer codes and in Sec.\,\ref{sec:results} we discuss the results. Finally, in Sec.\,\ref{sec:concl} we summarize our findings. \begin{figure*}[th!]
\sidecaptionvpos{}{} \begin{center} \includegraphics[width=0.33\linewidth]{Fig3_panel1.pdf} \includegraphics[width=0.33\linewidth]{Fig3_panel2.pdf}\\ \includegraphics[width=0.33\linewidth]{Fig3_panel3.pdf} \includegraphics[width=0.33\linewidth]{Fig3_panel4.pdf} \caption{\label{fig:fringes} Relative intensity calculated with the MURaM (top left), COSI (top right), RH based on the original MHD grid (bottom left), and RH based on the 8-pt interpolated MHD grid (bottom right). Each snapshot was normalized to its respective mean intensity.} \end{center} \end{figure*} \section{MHD simulations}\label{sec:mhd} For this study we calculate the emergent intensities for {different} snapshots from the simulation runs with the MURaM code \citep{Rempel2014} for the non-magnetic, or hydrodynamic (HD) case, and for a magnetic field of $100$~G and $200$~G, respectively. The magnetic cases were branched from the HD setup by adding a uniform vertical field of 100 and 200 G. The simulations ran for about an hour {simulated solar time} until a statistically relaxed state was reached. These simulations consist of cubes of $384\times384\times96$~pixels$^3$, corresponding to an area on the solar surface of $8.8 \times 8.8$ arcsec$^2$. {The vertical domain extent is 1536 km, with the average $\tau=1$ level located at about 684 km beneath the top boundary}. For further details we refer to \cite{Criscuoli2020}. Figure\,\ref{fig:magz}, top panels, shows the horizontal distribution of the line-of-sight magnetic field strength at the $\tau=1$ layer. The bottom panels give an example of the horizontal variation of the temperature at the top of the simulation box. The {depth stratification of the} temperature and density are used as input to the calculations discussed in Sec.\,\ref{sec:codes}. \begin{figure*}[th!]
\begin{center} \includegraphics[width=0.33\linewidth]{Fig4_HD_MURaM_panel1.pdf} \includegraphics[width=0.33\linewidth]{Fig4_P100_MURaM_panel2.pdf} \includegraphics[width=0.33\linewidth]{Fig4_P200_MURaM_panel3.pdf}\\ \includegraphics[width=0.33\linewidth]{Fig4_HD_RH_panel4.pdf} \includegraphics[width=0.33\linewidth]{Fig4_P100_RH_panel5.pdf} \includegraphics[width=0.33\linewidth]{Fig4_P200_RH_panel6.pdf}\\ \includegraphics[width=0.33\linewidth]{Fig4_HD_COSI_panel7.pdf} \includegraphics[width=0.33\linewidth]{Fig4_P100_COSI_panel8.pdf} \includegraphics[width=0.33\linewidth]{Fig4_P200_COSI_panel9.pdf}\\ \caption{\label{fig:contrast_separate1} Relative intensity for the HD, 100 G and 200 G snapshots calculated from the MURaM radiation scheme (top row), the RH code (middle row) and COSI (bottom row). The intensities of all snapshots were normalized to the mean of the MURaM HD snapshot, which is 2.71$\times$10$^{7}$ erg s$^{-1}$ cm$^{-2}$ nm$^{-1}$ sr$^{-1}$. For better visibility of the low-medium contrast features, the color scale only covers the range between 0.5 and 1.5 and is kept constant outside that range.} \end{center} \end{figure*} \begin{figure*}[th!] \begin{center} \includegraphics[width=0.33\linewidth]{Fig5_histo_HD_384x384_panel1.pdf} \includegraphics[width=0.33\linewidth]{Fig5_histo_100_384x384_panel2.pdf} \includegraphics[width=0.33\linewidth]{Fig5_histo_200_384x384_panel3.pdf} \\ \includegraphics[width=0.33\linewidth]{Fig5_histo_HD_384x384_panel4.pdf} \includegraphics[width=0.33\linewidth]{Fig5_histo_100_384x384_panel5.pdf} \includegraphics[width=0.33\linewidth]{Fig5_histo_200_384x384_panel6.pdf} \caption{\label{fig:distribution} Top panel: intensity distribution function for the full $384\times384$ snapshot calculated with the RH code using the original grid in the MHD simulation ({purple dot-dashed} lines) and the 8-point interpolated MHD grid (red dashed lines) and the MURaM calculation (blue lines).
Bottom panel: intensity distribution function for the HD (left panel), 100 G (middle panel) and 200 G (right panel) snapshots using the COSI (black lines), MURaM solver (blue lines), and RH (red lines) codes.} \end{center} \end{figure*} \section{Spectral synthesis codes}{\label{sec:codes}} For the spectral synthesis based on the 3D MHD simulations we use three different radiative transfer codes. First, the COde for Solar Irradiance \citep[COSI,][]{Haberreiter2008a,Haberreiter2008b,shapiro2010,Criscuoli2020} allows us to calculate the atomic level populations and the emergent intensity, taking into account non-local thermodynamic equilibrium (non-LTE), which is a key element for the calculation of the UV spectral range. {The atomic data used in COSI are explained in detail in \cite{shapiro2010} and the numerical scheme of the radiative transfer goes back to \cite{HamannSchmutz1987} and \cite{Hubeny1981}. The scheme was first applied to solar studies by \cite{Haberreiter2008a,Haberreiter2008b}.} {So far}, COSI {calculations have been based on} vertical atmosphere structures, such as the 1D atmosphere structures by \cite{fontenla1999}. {In this work}, we use for the first time 3D MHD simulations as input to COSI. \begin{figure*}[th!] \begin{center} \includegraphics[width=0.33\linewidth]{Fig6_HD_RH-MURaM_diff_panel1_shifted.pdf} \includegraphics[width=0.33\linewidth]{Fig6_P100_RH-MURaM_diff_panel2_shifted.pdf} \includegraphics[width=0.33\linewidth]{Fig6_P200_RH-MURaM_diff_panel3_shifted.pdf}\\ \includegraphics[width=0.33\linewidth]{Fig6_HD_COSI-MURaM_diff_panel4_shifted.pdf} \includegraphics[width=0.33\linewidth]{Fig6_P100_COSI-MURaM_diff_panel5_shifted.pdf} \includegraphics[width=0.33\linewidth]{Fig6_P200_COSI-MURaM_diff_panel6_shifted.pdf}\\ \caption{\label{fig:diff} Top panels: relative pixel-to-pixel difference, (RH$_i$ - MURaM$_i$)/<MURaM,HD>, for the HD, 100 G and 200 G snapshots; Bottom panels: same as top panels but for (COSI$_i$ - MURaM$_i$)/<MURaM,HD>.
For better visibility of the low-medium contrast features, the color scale only covers the ranges given in the top and bottom panels and is kept constant outside of that range.} \end{center} \end{figure*} \begin{figure*}[th!] \begin{center} \includegraphics[width=0.33\linewidth]{Fig7_HD_RH-MURaM_ratio_panel1_shifted.pdf} \includegraphics[width=0.33\linewidth]{Fig7_P100_RH-MURaM_ratio_panel2_shifted.pdf} \includegraphics[width=0.33\linewidth]{Fig7_P200_RH-MURaM_ratio_panel3_shifted.pdf}\\ \includegraphics[width=0.33\linewidth]{Fig7_HD_COSI-MURaM_ratio_panel4_shifted.pdf} \includegraphics[width=0.33\linewidth]{Fig7_P100_COSI-MURaM_ratio_panel5_shifted.pdf} \includegraphics[width=0.33\linewidth]{Fig7_P200_COSI-MURaM_ratio_panel6_shifted.pdf}\\ \caption{\label{fig:ratio} Top panels: ratio of the intensities, RH$_i$/MURaM$_i$, with $i$ referring to the HD, 100 G and 200 G snapshots, respectively; Bottom panels: same as top panels but for COSI$_i$/MURaM$_i$. For better visibility of the low-medium contrast features, the color scale only covers the ranges given in the top and bottom panels and is kept constant outside of that range.} \end{center} \end{figure*} \begin{figure*}[th!] \begin{center} \hspace{-0.cm}\includegraphics[width=0.33\linewidth]{Fig8_Btau1_P100_panel1.pdf} \hspace*{-0.cm}\includegraphics[width=0.33\linewidth]{Fig8_Btau1_P200_panel2.pdf}\\ \hspace*{0.3cm}\includegraphics[width=0.33\linewidth]{Fig8_Btau1_contour_P100_panel3.pdf} \hspace{0.3cm}\includegraphics[width=0.33\linewidth]{Fig8_Btau1_contour_P200_panel4.pdf} \caption{\label{fig:Btau} Top panels: Magnetic field strengths at the height where $\tau=1$ for the 100-G and 200-G snapshots. Bottom panels: Masks of the 100-G and 200-G snapshots identifying which pixels fall into 100-G intervals of the absolute magnetic field strength for each snapshot.} \end{center} \end{figure*} Second, we {make use of} the {RH} code \citep{Uitenbroek2001}.
This radiative transfer code {can compute} emergent intensities at different viewing angles in different geometries. It allows the computation of several atomic and molecular transitions in both LTE and non-LTE under complete or partial redistribution. Because of its versatility, RH is widely employed for spectral and spectro-polarimetric syntheses of atomic and molecular lines in solar and stellar atmospheres, and more recently it has been employed for solar irradiance reconstructions \citep{criscuoli2018, criscuoli2019, berrilli2020}. A {massively parallel version, RH1.5D}, allows us to compute emergent radiation on a column-by-column basis \citep{pereira2015} and has been used for syntheses {from 3D MHD simulations \citep[e.g.][]{Pereira2013, Antolin2018, Peck2019}}. {We make use of RH1.5D to allow for the efficient calculation of intensities from the simulations. Because we focus on LTE calculations for the vertically emergent intensity, there is no difference between a 1.5D calculation and a full 3D calculation.} Third, the MURaM code is a radiative MHD code that was {originally} developed by \citet{Voegler2005}. {We use here the version of \citet{Rempel2014}, which uses a different formulation of the numerical diffusivities and has been optimized for computational performance. The radiative transfer scheme is identical to \citet{Voegler2005}, but has been expanded to allow for additional diagnostic radiative transfer as described in \citet{Rempel2020}}. In this study we use MURaM in two capacities: (1) to compute the MHD snapshots analysed here and (2) to compute diagnostic intensities for comparison using the radiative transfer solver of MURaM. To this end we use the approach detailed in \citet{Rempel2020} in combination with an RH-based opacity table from \citet{Criscuoli2020}. {The MURaM RT scheme \citep{Voegler2005} uses short characteristics integration as described in \citet{KUNASZ198867}.
Since gradients in opacity, density and source function are steep, MURaM uses a linear interpolation for enhanced stability. This can lead to artificial broadening of inclined rays as discussed by \citet{Peck2017SC}. Since we focus in this paper only on the intensity for vertical rays (i.e. coordinate-axis-aligned rays), intensity diffusion is not a concern. Finally, as already indicated above, MURaM uses the same opacities as RH.} The main purpose of this paper is to benchmark the radiative transfer codes and to validate their performance. {For the present study to be consistent, all radiative transfer calculations are done in LTE. The basic concept of the radiative transfer equation and its solution is given in the Appendix.} We are specifically interested in {validating} the codes at a continuum wavelength. We have identified the wavelength of 665.01 nm to be free of spectral lines, and {therefore adopt it for this study, hereafter referring to it as 665 nm}. \section{Results}\label{sec:results} For each column in the MHD simulation box we have calculated the emergent intensity at 665 nm in LTE. For the comparison of the results it turned out to be important to use an identical grid for all three radiative transfer codes. This is particularly important as the MURaM radiation scheme is inherently set up to use a grid that is shifted {from the MHD grid} by half a grid point in all three dimensions (i.e. MHD quantities are cell-centred while intensities are computed on cell corners). Data are interpolated onto this grid by using the 8-pt average of the surrounding grid cells. Figure\,\ref{fig:fringes} shows the normalized intensities for the 100-G snapshot for MURaM (left panel), RH based on the original MHD grid (middle panel), and RH using the 8-pt averaged grid (right panel). The RH intensities based on the original MHD grid show spurious fringes that disappear when using the 8-pt averaged MHD grid.
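The 8-pt averaging of cell-centred quantities onto the half-shifted corner grid described above can be sketched as follows (a minimal numpy illustration under our own naming, not code from MURaM or RH):

```python
import numpy as np

# Illustrative 8-point grid interpolation: MHD quantities live at cell
# centres, while intensities are evaluated on cell corners, i.e. on a grid
# shifted by half a cell in all three dimensions.  Averaging the eight
# surrounding cell centres places a quantity onto that corner grid.
def eight_point_average(q):
    """Average a 3D cell-centred array onto the interior cell corners."""
    return 0.125 * (q[:-1, :-1, :-1] + q[1:, :-1, :-1] +
                    q[:-1, 1:, :-1] + q[:-1, :-1, 1:] +
                    q[1:, 1:, :-1] + q[1:, :-1, 1:] +
                    q[:-1, 1:, 1:] + q[1:, 1:, 1:])

T = np.random.rand(8, 8, 8)          # stand-in for a cell-centred MHD cube
T_corners = eight_point_average(T)   # shape (7, 7, 7), corner-centred
```

The averaging is a mild low-pass filter, which is consistent with the observed removal of the strongest gradients (and of the spurious fringes) in the interpolated syntheses.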
We note that the decrease of the strong gradients when using interpolated atmospheres mostly results from a better sampling of the regions around optical depth unity, from where most of the radiation originates. To confirm this point, we repeated the RH syntheses using snapshots interpolated on a vertical grid with twice the resolution of the original one while keeping the original horizontal grid, and found that the intensity distribution is very similar to the one obtained on the 8-pt averaged snapshots. Overall, using interpolated grids reduces the peak intensity and increases the width of the intensity distribution, which is qualitatively what one would expect when increasing the spatial resolution. Figure\,\ref{fig:contrast_separate1} shows the normalized intensity as calculated with the MURaM radiative transfer scheme (top row), the RH code (middle row), and COSI (bottom row). From visual inspection it is not possible to identify any difference. In Fig.\,\ref{fig:distribution}, top panels, we show the corresponding intensity distributions for all three snapshots. The {purple} lines show the intensity distribution of the RH spectral synthesis using the original MHD grid values, the red dashed lines the calculations with the 8-point interpolation of the grid cells, and the blue lines the MURaM intensities. While there are some small deviations, overall, the distributions show consistent results. Since the MHD cubes were computed with the MURaM RT scheme in the first place, and as the MHD grid interpolation removes spurious intensity fringes, while still producing consistent intensity distributions, we apply the 8-pt grid interpolation scheme for the RH and COSI calculations presented here.
To validate the code performance in more detail, in Fig.\,\ref{fig:distribution}, bottom panels, we compare the distributions of the intensity calculated with the 8-pt interpolated snapshots for COSI (black lines), RH (red lines) and MURaM (blue lines) obtained for the HD (left panel), 100-G (middle panel), and 200-G (right panel) snapshots. The comparison shows that the distributions cover the same range in absolute intensity. Furthermore, the overall shape of the distributions agrees very well, while some systematic differences can be identified. A key result of this comparison is that in the low-intensity wing of the distribution RH and MURaM agree very well, while COSI and RH reproduce the same intensities at the high-intensity wing of the distribution. {This finding is systematically present in all three snapshots. As such, it can be concluded that RH generally produces slightly wider intensity distributions than MURaM and COSI. Comparing the COSI and MURaM calculations more closely, these two distributions appear systematically shifted, with COSI producing slightly brighter intensities than MURaM. Furthermore, taking into account the shift between COSI and MURaM, the features with more abundant intensity values at the peak of the distributions are consistently reproduced in all three codes. In summary, COSI, MURaM and RH consistently reproduce the same distribution envelope as well as details with small intensity variations for all snapshots.} We are further interested in studying where the largest differences in the intensities come from. Fig.~\ref{fig:diff} and Fig.~\ref{fig:ratio} show the difference and the ratio, respectively, between emergent intensities (normalized to their average) obtained with MURaM, RH, {and COSI}. The images indicate that in the HD snapshots differences of roughly the same amplitude between the {three} codes are found in both dark intergranular lanes and bright upflow regions.
Here, the pixel-to-pixel difference between the RH and MURaM calculations ranges from -0.04 to 0.12, and the respective ratio from 0.96 to 1.10. For COSI and MURaM the difference ranges from -0.24 to 0.46, and the ratio between 0.77 and 1.58. For the 100-G snapshot, the differences between RH and MURaM range from -0.08 to 0.09 and the ratio from 0.92 to 1.10, while for the case of COSI and MURaM the difference ranges from -0.58 to 0.63, and the ratio between 0.60 and 1.73. For RH and MURaM, the difference and ratio for the 200-G simulation range from -0.05 to 0.16 and 0.95 to 1.13, respectively. For COSI and MURaM the difference for the 200-G simulation ranges from -0.88 to 0.56, and the ratio between 0.55 and 1.71. {The larger pixel-to-pixel variations of the COSI calculations compared to RH and MURaM stem from higher intensity values, mostly in the intergranular lanes, which will be further investigated.} Comparing the averages of the intensities, the ratios of RH and MURaM give 1.012, 1.013, and 1.014 {and those of COSI and MURaM give 1.042, 1.041, and 1.040 for the HD, 100-G, and 200-G snapshots, respectively}. From this we conclude that, while the pixel-to-pixel differences between the codes can vary by several percent, the averaged snapshots agree to 1.4\,\% for {the codes using the same opacities and to about 3-4\,\% for the codes with different opacity sets}. {Understanding how the intensity scales with the magnetic field strength is especially important in the context of irradiance studies, as some irradiance reconstruction models make use of the magnetic flux as proxy for the radiative intensity \citep[e.g.][]{krivova2003, foukal2011,yeo2017b}}. As such, the relation between brightness and magnetic flux has been the subject of several observational \citep[e.g.][]{ortiz2002,criscuoli2017} and theoretical studies \citep[e.g.][]{rohrbein2011, Peck2019}. \begin{figure}[t!]
\begin{center} \includegraphics[width=0.95\linewidth]{Fig9_Contrast_cosi_rh_muram.pdf} \\ \caption{\label{fig:Int_B} Normalized intensity as a function of absolute magnetic field strength $B_{\tau=1}$ for COSI (black), MURaM (blue), and RH (red). The diamonds give the normalized intensity determined from the 100-G simulation, while the stars give the ones from the 200-G run. The last bin contains all magnetic field strengths $>1000$\,G.} \end{center} \end{figure} Therefore, we are further interested {in finding} how well the intensities calculated by the different codes agree for different magnetic field strengths. Figure\,\ref{fig:Btau}, top panels, shows the magnetic field strengths for each snapshot, interpolated to the height where $\tau=1$. The bottom panels show the respective absolute field strength for both snapshots for the same layer, {segmented at 500\,G intervals}. We then investigate how the normalized intensity depends on the absolute magnetic field strength at $\tau=1$ in successive 100-G bins in both snapshots. Figure\,\ref{fig:Int_B} compares the mean intensity for each of the bins for the 100-G and 200-G snapshots calculated with COSI (black), MURaM (blue) and RH (red). We note that the shape of these curves is heavily influenced by realization noise, since they are based on single MHD snapshots {(see \citet{Rempel2020} for a detailed discussion of the influence of realization noise on the emergent intensity). Consequently,} while we cannot {meaningfully} compare the differences between the 100 and 200 G snapshots, we can compare the differences of the three radiative transfer codes for each of the setups, {which is the goal of this work}. Nonetheless, the overall dependence of the intensity on magnetic field strength is in line with the findings by \cite{Rempel2020}. Specifically, the intensities in regions with small-medium magnetic field strength (100-900 G) are darker than in the non-magnetic areas.
Second, the shape of the intensity variation as a function of magnetic field strength shows systematic differences for the 100-G and 200-G snapshots. In particular, the turning point of the intensity in the 100-G snapshot is found at around 500\,G, while for the 200-G snapshot it is at around 350\,G. This difference can be explained by the fact that the stronger magnetic field in the 200-G snapshot pushes the weak field, which is naturally confined to the intergranular lanes, towards {the edges of granular cells} and as such to areas with higher intensity. Third, while all three codes agree very well in their response to the varying plasma properties, some detailed performance differences can still be identified. The scatter amongst the codes is systematically lower for the 100-G than for the 200-G snapshots. This is in line with the differences found in Fig.\,\ref{fig:diff} and Fig.\,\ref{fig:ratio}, where the RH and MURaM calculations show the largest deviations in difference and ratio for the 200-G snapshot, and specifically in the intergranular lanes. Looking at the codes individually, in the 100-G snapshots all three codes agree to within about 1-2\,\% and the codes do not seem to produce systematic differences for that individual snapshot. This is somewhat different for the 200-G snapshot. Here the codes differ by about 2-3\,\%. Moreover, the MURaM code tends to give systematically brighter intensity for the weak-medium magnetic field strengths. An indication of that can also be found in Fig.\,\ref{fig:distribution}, top panels, where the MURaM calculations give consistently higher intensities at the low-intensity side of the peak of the distributions than the RH code. Finally, the average intensities for all magnetic field strengths larger than 1000\,G lead to a normalized intensity above unity.
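The binning of the normalized intensity into 100-G bins of $|B|$ at $\tau=1$, with a final bin collecting all fields above 1000\,G, can be sketched as follows (synthetic stand-in arrays and our own variable names, not code from any of the three packages):

```python
import numpy as np

# Sketch of the binning underlying Fig. 9: mean normalized intensity in
# successive 100-G bins of |B| at tau = 1; the last bin collects all
# pixels with |B| > 1000 G.  B_tau1 and intensity are synthetic stand-ins
# for the maps of a single snapshot.
rng = np.random.default_rng(0)
B_tau1 = np.abs(rng.normal(0.0, 300.0, size=(64, 64)))  # |B| at tau=1 [G]
intensity = rng.normal(1.0, 0.1, size=(64, 64))         # normalized intensity

edges = np.append(np.arange(0.0, 1001.0, 100.0), np.inf)  # 100-G bins + overflow
idx = np.digitize(B_tau1.ravel(), edges)                  # bin index per pixel
mean_I = np.array([intensity.ravel()[idx == k].mean() if np.any(idx == k)
                   else np.nan for k in range(1, len(edges))])
```

With real snapshot maps in place of the synthetic arrays, `mean_I` corresponds to one of the curves in Fig.\,\ref{fig:Int_B}; sparsely populated high-field bins inherit the realization noise discussed above.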
\section{Conclusions}\label{sec:concl} We have determined the emergent intensity from 3D MHD simulation snapshots for a non-magnetic case and for 100 G and 200 G simulations using the radiative transfer codes COSI, RH and the MURaM radiation scheme. We compared the difference, ratio and distribution of the intensities and find an overall good agreement amongst the codes. {We find that while the absolute intensities produced by the codes agree very well, systematic differences in the code performance could be identified from the distribution functions. In particular, the RH code produces slightly wider distributions. The RH and MURaM calculations agree very well in the low-intensity range, while RH and COSI match very well at the higher intensities.} While the pixel-to-pixel differences of the codes can vary by several percent, the averaged intensities for RH versus MURaM differ by up to 1.4\,\%, and those for COSI versus RH and MURaM by about 3-4\,\%. Furthermore, we investigated how well the codes reproduce the intensities for different magnetic field strengths at the $\tau=1$ layer. Here, we find the differences between the codes to be about 1-2\,\% for the 100-G snapshot and about 2-3\,\% for the 200-G snapshot. The overall shape of the change in intensity as a function of magnetic field strength is in line with previous work. For the COSI and RH calculations we carried out an 8-point averaging of the cell-centred MHD quantities onto the cell corners, as is done in the MURaM code. In addition to ensuring consistency, this technique also removes fringes that appear when using the original MHD grid with RH and COSI. While we focused in this study on computing intensities, this technique should likely also be considered for polarized radiative transfer. Tests performed using snapshots interpolated on the vertical grid suggest that MHD simulations with substantially higher resolution on the vertical $\tau$-scale might not suffer from this issue.
A detailed study investigating resolution dependence would need to follow. \begin{acknowledgements} We kindly acknowledge the support by the International Space Science Institute (ISSI), Bern, Switzerland during the International Team on Modeling Solar Irradiance with 3D MHD simulations, led by Serena Criscuoli. MH kindly acknowledges support by Daniel Karbacher. TMDP's research is supported by the Research Council of Norway through its Centres of Excellence scheme, project number 262622. The National Solar Observatory is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the National Science Foundation. This material is based upon work supported by the National Center for Atmospheric Research, which is a major facility sponsored by the National Science Foundation under Cooperative Agreement No. 1852977. \end{acknowledgements} \begin{appendix} \section{Basic concept of radiative transfer} In the following we describe the basic concept of how the radiative transfer is solved in the case of local thermodynamic equilibrium and then discuss the differences in the implementation in the case of MURaM, RH and COSI. In local thermodynamic equilibrium (LTE) the velocity distribution of the atoms, ions and electrons is Maxwellian. Furthermore, the ionization rates and population numbers are a function of the local temperature and density. The assumption of LTE is only valid if the collisional processes dominate over the radiative processes, e.g. for high densities, which is a suitable assumption for our study.
The standard radiative transfer equation, Eq.\,\ref{equ:transp}, \begin{equation} \centering \mu \frac{dI_{\nu\mu}}{dz} = \eta_{\nu} -\chi_{\nu} I_{\nu\mu} \label{equ:transp} \end{equation} describes the change of the specific intensity $I_{\nu\mu}$ at frequency $\nu$ along the path $dz$ in a plane-parallel atmosphere due to emission and absorption, where $\eta_{\nu}$ is the emissivity and $\chi_{\nu}$ the extinction coefficient. In our study $\mu = \cos\Theta = 1$ as we consider the radiative transfer in the vertical direction. Using the optical depth \begin{equation} \centering \tau_{\nu} = \int_0^{z'}\chi_{\nu}\, dz, \label{equ:tau} \end{equation} and the source function, defined as \begin{equation} \centering S_{\nu} \equiv \frac{\eta_{\nu}}{\chi_{\nu}}, \label{equ:source} \end{equation} the intensity change can be described as a function of the change of the optical depth as \begin{equation} \centering \frac{dI_{\nu}}{d\tau_{\nu}} = I_{\nu}(\tau_{\nu}) - S_{\nu}(\tau_{\nu}). \label{equ:inttau} \end{equation} \noindent Multiplying Eq.\,\ref{equ:inttau} by $e^{-\tau_{\nu}}$ gives \begin{equation} \centering \frac{d\left(I_{\nu} e^{-\tau_{\nu}}\right)}{d\tau_{\nu}} = - S_{\nu} e^{-\tau_{\nu}}. \label{eq:fac} \end{equation} \noindent If $S_{\nu}$ is known, the integration of Eq.\,\ref{eq:fac} gives \begin{equation} \centering I_{\nu}(\tau_1) = I_{\nu}(\tau_2) e^{-(\tau_2-\tau_1)} + \int_{\tau_1}^{\tau_2}S_{\nu}(t)\,e^{-(t-\tau_1)}\, dt. \label{equ:solution} \end{equation} \noindent It is assumed that $S_{\nu}$ is a linear function of $\tau_{\nu}$, i.e. $S_{\nu}(t)=S_1+(S_2-S_1)(t-\tau_1)/(\tau_2-\tau_1)$ with $S_1=S_{\nu}(\tau_1)$ and $S_2=S_{\nu}(\tau_2)$. Then Eq.\,\ref{equ:solution} can be written as: \begin{equation}\label{eq:formal} \centering I_{\nu}(\tau_1) = I_{\nu}(\tau_2) e^{-(\tau_2-\tau_1)} + \int_{\tau_1}^{\tau_2} e^{-(t-\tau_1)}\left[S_1+(S_2-S_1)\frac{t-\tau_1}{\tau_2-\tau_1}\right] dt. \end{equation} Clearly, the determination of the opacities and the source function has an impact on the emergent intensity.
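For a source function linear in optical depth, the single-interval formal solution has a closed form; the following sketch (our own illustration, not code from any of the three packages) evaluates it and cross-checks it against direct numerical quadrature:

```python
import numpy as np

# Formal solution over one interval with dtau = tau2 - tau1 and a source
# function linear in optical depth, S(t) = S1 + (S2 - S1)*t/dtau for t
# measured from tau1.  Integrating S(t)*exp(-t) analytically gives the
# closed form below.
def formal_step(I2, S1, S2, dtau):
    """Attenuated incoming intensity plus the linear-source contribution."""
    w0 = 1.0 - np.exp(-dtau)
    w1 = w0 - dtau * np.exp(-dtau)
    return I2 * np.exp(-dtau) + w0 * S1 + w1 * (S2 - S1) / dtau

# cross-check against trapezoidal quadrature of S(t)*exp(-t) over [0, dtau]
I2, S1, S2, dtau = 2.0, 1.0, 3.0, 0.7
t = np.linspace(0.0, dtau, 200001)
f = (S1 + (S2 - S1) * t / dtau) * np.exp(-t)
quad = I2 * np.exp(-dtau) + np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t))
```

For a full column, this step is applied interval by interval from the bottom of the atmosphere upwards.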
In the following, we give the basic concept of the numerical scheme of all three codes that is used to determine the emergent intensity in LTE and vertical direction ($\mu=1$). \section{MURaM} MURaM interpolates the opacity from an RH opacity table $\kappa=f(\rho, T)$ using a bi-linear table interpolation. Starting with the values at the positions $z_1$ ($\kappa_1$, $\varrho_1$, $S_1$) and $z_2$ ($\kappa_2$, $\varrho_2$, $S_2$), we compute the $\tau$ scale using $\Delta \tau=\tau_2-\tau_1>0$: \begin{equation} \Delta \tau=-\Delta z [(\kappa_1 \varrho_1+\kappa_2 \varrho_2)/3+(\kappa_1 \varrho_2+\kappa_2 \varrho_1)/6] \end{equation} The outgoing intensity at the top of the domain is computed solely for diagnostics using a scheme that is separate from the radiative transfer that is used to compute the intensity throughout the simulation domain (needed for radiative heating/cooling). The contribution to the outgoing intensity at the top of the domain from the interval $[\tau_1, \tau_2]$ in regions with $\Delta\tau>10^{-5}$ is given by: \begin{eqnarray} c&=& (1-e^{-\Delta \tau})/\Delta \tau\\ \Delta I &=& [S_1 (1-c) + S_2 (c-e^{-\Delta \tau})] e^{-\tau_1} \end{eqnarray} otherwise: \begin{eqnarray} \Delta I &=& 0.5\Delta\tau[S_1 + S_2] e^{-\tau_1} \end{eqnarray} \section{RH} In RH, the formal solution of Eq.\,(\ref{equ:solution}) is obtained by piecewise integration using the method of short characteristics \citep[see][]{KUNASZ198867}. In this work we used the solver with linear interpolation of the source function between points $\tau_1$ and $\tau_2$, according to the expression: \begin{equation} I_\nu(\tau_1) = I_\nu(\tau_2) e^{-\Delta\tau} + w_0 S_1 + w_1 \frac{\Delta S}{\Delta \tau}, \end{equation} where \begin{eqnarray*} \Delta\tau & = & 0.5 \Delta z (\chi_1 + \chi_2),\\ \Delta S &=& S_2 - S_1,\\ w_0 & = & 1 - e^{-\Delta\tau},\\ w_1 & = & w_0 - \Delta\tau e^{-\Delta\tau}.
\end{eqnarray*} To preserve numerical accuracy, when $\Delta\tau > 50$, $w_0=w_1=1$ and when $\Delta\tau < 5\cdot10^{-4}$, $w_0$ and $w_1$ are approximated by \begin{eqnarray*} w_0 & = & \Delta\tau \left(1 - \frac{\Delta\tau}{2}\right),\\ w_1 & = & \Delta\tau^2 \left(\frac{1}{2} - \frac{\Delta\tau}{3}\right). \end{eqnarray*} In RH the extinction coefficients are explicitly calculated, while in MURaM they are interpolated from the RH opacity table ($\kappa\equiv\chi/\rho=f(\rho, T)$). For consistency, both the explicit RH synthesis and the opacity table employed in MURaM were computed under LTE using the same set of input parameters. In summary, the synthesis included the full list of atomic and molecular bound-free transitions from the Kurucz website, photo-ionization of 12 atomic species (including updated Fe atom cross-sections) and photo-dissociation of 52 diatomic molecules. The computation also took into account Thomson and Rayleigh scattering, although the scattering contribution is expected to be negligible in continua along the vertical line-of-sight \citep[e.g.][]{Fabbian2015}. \section{COSI} The numerical scheme of COSI consists of two modules. The first, also referred to as {\it hminus} \citep{Haberreiter2008b,shapiro2010}, calculates the level populations of the atoms, which under the assumption of LTE follow the Boltzmann distribution \begin{equation} \frac{n_{\mathrm{u}}}{n_{\mathrm{l}}} = \frac{g_{\mathrm{u}}}{g_{\mathrm{l}}}e^{-(\chi_{\mathrm{u}}-\chi_{\mathrm{l}})/kT} = \frac{g_{u}}{g_{l}}e^{-(h\nu/kT)}. \label{equ:boltz} \end{equation} The second module of the code, also referred to as {\it fioss}, uses these level populations to calculate the emergent intensity, in our case in the vertical direction ($\mu=1$). In COSI the opacities are determined taking into account all radiative (negligible in the case of LTE) and collisional processes for absorption, emission and ionization.
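As a numerical illustration of the Boltzmann ratio in Eq.\,\ref{equ:boltz} (our own sketch, not COSI code; the hydrogen values below are standard textbook numbers):

```python
import numpy as np

# LTE (Boltzmann) population ratio n_u/n_l of two bound levels from their
# statistical weights and excitation energies.
K_B = 1.380649e-23    # Boltzmann constant [J/K]
EV = 1.602176634e-19  # electron volt [J]

def boltzmann_ratio(g_u, g_l, chi_u_eV, chi_l_eV, T):
    """Ratio of upper to lower level populations in LTE."""
    return (g_u / g_l) * np.exp(-(chi_u_eV - chi_l_eV) * EV / (K_B * T))

# hydrogen n=2 vs n=1 (g = 2n^2, excitation energy 10.2 eV) at 6000 K:
# the upper level is populated only at the ~1e-8 level.
ratio = boltzmann_ratio(g_u=8, g_l=2, chi_u_eV=10.2, chi_l_eV=0.0, T=6000.0)
```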
In COSI the solution of Eq.\,(\ref{equ:solution}) is obtained via consideration of the change of the intensity between three successive depth points, $n$=1,2,3 with $\tau_3 > \tau_2 > \tau_1$:\\ if $\tau > 10^{-10}$ then \begin{eqnarray} I_{\nu} &=& I_{\nu}e^{-\Delta \tau} + dS_{\nu}/d\tau \\ &=& I_{\nu}e^{-\Delta \tau} + S_{\nu}w_\tau \end{eqnarray} with \begin{eqnarray} w_{\tau} & = & w_{a}+w_{b}\\ w_{a} & = & (e^{-\tau_1} - e^{-\tau_2})/\Delta \tau_1 \\ w_{b} & = & (e^{-\tau_3} - e^{-\tau_2})/\Delta \tau_2 \\ \Delta\tau_1&=&0.5 \,\Delta z_1 (\chi_1+\chi_2) \\ \Delta\tau_2&=&0.5\,\Delta z_2(\chi_2+\chi_3). \end{eqnarray} Note that the $S_2 e^{-\tau_2}$ term cancels for consecutive depth points with the next term, which is then equal to the new $S_1$-term. \end{appendix}
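The threshold switching used for the RH weights $w_0$ and $w_1$ above can be sketched as follows (our own illustration of the quoted limits, not RH source code):

```python
import numpy as np

# Numerically stable evaluation of the linear short-characteristics
# weights w0 = 1 - exp(-dtau) and w1 = w0 - dtau*exp(-dtau): saturation
# for dtau > 50 (exp(-dtau) underflows) and leading-order Taylor
# expansions for dtau < 5e-4 (avoids floating-point cancellation).
def rh_weights(dtau):
    if dtau > 50.0:
        return 1.0, 1.0
    if dtau < 5.0e-4:
        return dtau * (1.0 - dtau / 2.0), dtau**2 * (0.5 - dtau / 3.0)
    e = np.exp(-dtau)
    return 1.0 - e, 1.0 - e - dtau * e
```

Near the lower switch point the two branches agree to third order in $\Delta\tau$, so no visible discontinuity is introduced in the emergent intensity.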
\section{Introduction} \label{sec:intro} The ongoing Karl G. Jansky Very Large Array (VLA) Sky Survey \citep[VLASS;][]{VLASS} is the first radio all-sky survey that is sensitive to several classes of slowly evolving extragalactic radio transients, including flares from active galactic nuclei (AGN), core-collapse supernova afterglows, the orphan afterglows of off-axis $\gamma$-ray bursts, and tidal disruption events (TDEs) of stars by supermassive black holes (SMBHs). VLASS is an interferometric survey in the 2--4\,GHz band of the entire 33,885\,deg$^{2}$ of sky north of a declination of $-40^{\circ}$, with an angular resolution of 2.5\arcsec. When complete, the sky will be surveyed over three epochs spaced by 32 months, to a continuum-image rms of $120\,\mu$Jy per beam per epoch. The exceptional survey grasp of VLASS provides the opportunity to assemble samples of radio transients without the need for external triggers, enabling radio-selected populations to be compared with those from other wavelengths. Here we report on a remarkable radio transient discovered by jointly searching the first half of the first epoch of VLASS (VLASS\,1.1) and the VLA Faint Images of the Radio Sky at Twenty centimeters (FIRST) survey \citep{FIRST}.\footnote{A similar search yielded the discovery of the luminous extragalactic radio transient FIRST\,J141918.9$+$394036 \citep{j1419}.} The FIRST survey was conducted at a frequency of 1.4\,GHz, and covered $\sim10,000$\,deg$^{2}$ of the northern sky mostly between 1994 and 1999 with an angular resolution of 5\arcsec~and a continuum-image rms of $150\,\mu$Jy per beam. We discovered a source (see \S\ref{sec:sample}), FIRST\,J153350.8$+$272729 (hereafter J1533$+$2727), which was detected in FIRST in 1995 with a flux density of 9.7\,mJy, but was undetected in VLASS in 2017.
Further searches of archival radio data revealed that J1533$+$2727 was detected in 1986 and 1987 by the Green Bank 300-foot telescope \citep{gb6} at 4.85\,GHz, with a mean flux density of 51\,mJy. We performed multi-band observations with the VLA on 2019 May 14 that re-detected the source at a level consistent with the VLASS upper limit (\S\ref{sec:archival}), notably with a flux density of just $132\,\mu$Jy at 5\,GHz. J1533$+$2727 is associated with the nucleus of a galaxy (SDSS\,J153350.89$+$272729) at a (luminosity) distance of 147\,Mpc (see \S\ref{sec:host}). The fading of J1533$+$2727 by nearly a factor of 400 over 33 years at $\sim5$\,GHz is strong evidence for its transient nature, and the preferred interpretation for its origin is a TDE (see \S\ref{sec:Interpretation}). Although nearly a hundred TDE candidates are now cataloged\footnote{See http://tde.space.}, only $\sim10\%$ of TDEs exhibit radio emission \citep{tde_sample,marin_tde}. The three most distant of the radio TDEs (Sw\,J1644$+$57, Sw\,J2058$+$05, and Sw\,J1112$-$82 at redshifts of 0.35, 1.18, and 0.89, respectively) were first discovered through transient $\gamma$-ray emission corresponding to the launch of a nascent relativistic jet. The remaining radio TDEs are powered by mildly relativistic outflows that drive shocks into the circum-nuclear medium, and peak at radio luminosities that are two to three orders of magnitude below the three relativistic radio TDEs. Only one TDE candidate, CNSS\,J0019$+$00 \citep{marin_tde}, has previously been first discovered through its radio emission. In general, the radio emission generated by extragalactic explosions (e.g., supernovae, $\gamma$-ray bursts, and TDEs) is enhanced in the presence of more energetic outflows, and denser circum-explosion material. TDEs discovered through their radio emission, rather than through X-ray or optical emission associated with accreting material, may offer a novel selection of TDE phenomena and host galaxies.
Throughout this work, we adopt the following cosmological parameters: $H_{0}=67.7$\,km\,s$^{-1}$\,Mpc$^{-1}$, $\Omega_{M}=0.3089$, and $\Omega_{\Lambda}=0.6911$ \citep{planck15}. \section{Discovery of J1533+2727}\label{sec:sample} Two independent efforts discovered J1533$+$2727 by comparing catalogs of sources from VLASS\,1.1 and FIRST. Each effort first generated source catalogs from the VLASS\,1.1 quick-look images\footnote{https://archive-new.nrao.edu/vlass/quicklook/}, using either the Aegean \citep{aegean1,aegean2} or PyBDSF \citep{pybdsf} source finding software. Details of how the source finding algorithms were applied will be presented in future works that describe larger samples of transients (e.g., Dong et al., in prep.). Once the VLASS\,1.1 source catalogs were made, we performed cross-matches between unresolved sources in the VLASS\,1.1 and FIRST catalogs, with a specific focus on finding FIRST sources not present in VLASS\,1.1. In one of our efforts we only considered sources with a more than $75\%$ decrease in measured flux densities between FIRST and VLASS. In the other effort we only considered sources detected at $>2.3$\,mJy in FIRST and undetected in VLASS (i.e., with 3\,GHz flux densities $<0.5$\,mJy), which were additionally coincident with the nuclei of spectroscopically detected galaxies in SDSS\,DR14 \citep{sdss14} at redshifts $z<0.1$. J1533+2727 was noteworthy as one of the brightest sources to pass all our selection thresholds. The position of this source in the FIRST catalog is (R.A. J2000, decl. J2000) = (15:33:50.884, $+$27:27:29.57), with uncertainties of 0.4\arcsec~in each coordinate \citep{FIRST}. \begin{deluxetable*}{cccc} \tabletypesize{\footnotesize} \tablewidth{0pt} \centering \tablecaption{ Radio Observations of FIRST\,J153350.8$+$272729. 
\label{tab:sample}} \tablehead{ \colhead{Epoch} & \colhead{Survey} & \colhead{Frequency (GHz)} & \colhead{Flux density (mJy)} } \startdata 1968 & Bologna & 0.408 & $< 750$ \\ 1974 -- 1983 & Texas & 0.365 & $< 1200$ \\ 1983 April 2 -- 21 & GBNSS & 1.4 & $< 300$ \\ 1986 November 6 -- December 13 & GB6 & 4.85 & $65\pm8$ \\ 1987 September 30 -- November 1 & GB6 & 4.85 & $42\pm8$ \\ 1987 January & GEETEE & 0.0345 & $<$ 15000 \\ 1995 April 16 & NVSS & 1.4 & $9.1\pm0.5$ \\ 1995 November 6 & FIRST & 1.4 & $9.7\pm0.1$ \\ 1998 May 22 & CLASS & 8.46 & $<1.5$ \\ 2001 September 14 & CLASS & 1.425 & $<3$ \\ 2001 September 14 & CLASS & 4.86 & $<1.5$ \\ 2001 September 14 & CLASS & 8.46 & $<1.5$ \\ 2001 September 14 & CLASS & 22.46 & $<7.5$ \\ 2006 & VLSSr & 0.074 & $< 300$\\ 2010 April -- 2012 March & TGSS &0.15 & $< 15$\\ 2017 October 2 & VLASS & 3 & $<0.46$ \\ 2019 May 14 & 19A-470 & 1.52 & $0.36\pm0.03$ \\ 2019 May 14 & 19A-470 & 5 & $0.132\pm0.009$ \\ 2019 May 14 & 19A-470 & 7 & $0.100\pm0.009$ \\ \enddata \tablecomments{All upper limits are at the $3\sigma$ level. References: Bologna Sky Survey \citep[Bologna;][]{BOLOGNA}, Texas Survey of Radio Sources at 365 MHz \citep[Texas;][]{TEXAS}, Green Bank Northern Sky Survey \citep[GBNSS;][]{GBNSS}, Green Bank 6 cm survey \citep[GB6;][]{gb6cat}, Gauribidanur Telescope \citep[GEETEE;][]{GTEE}, Faint Images of the Radio Sky at Twenty centimeters \citep[FIRST;][]{FIRST}, NRAO VLA Sky Survey \citep[NVSS;][]{nvss}, Cosmic Lens All-Sky Survey \citep[CLASS;][]{class}, VLA Low-frequency Sky Survey \citep[VLSSr;][]{VLSSr}, TIFR GMRT Sky Survey Alternative Data Release \citep[TGSS;][]{TGSS}, VLA Sky Survey \citep[VLASS;][]{VLASS}.} \end{deluxetable*} \section{Archival and follow-up radio observations}\label{sec:archival} We searched a selection of existing radio-survey catalogs and data sets for detections of J1533+2727. The results are summarized in Table~\ref{tab:sample}.
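The transient selection of \S\ref{sec:sample} is, in essence, a nearest-neighbour cross-match followed by flux cuts; a toy sketch with synthetic catalogs (flat-sky approximation; the positions, fluxes, and matching radius below are illustrative choices of ours, not the actual pipeline) is:

```python
import numpy as np

# Toy cross-match between a "FIRST" and a "VLASS" catalog (positions in
# degrees), flagging FIRST sources brighter than 2.3 mJy with no VLASS
# counterpart within a 2.5 arcsec matching radius -- a simplified
# stand-in for the selection described in Sect. 2.
first_pos = np.array([[233.5, 27.45], [233.9, 27.10], [234.2, 26.80]])
first_flux = np.array([9.7, 3.1, 1.0])                 # mJy at 1.4 GHz
vlass_pos = np.array([[233.9001, 27.1001], [234.2002, 26.7999]])

radius_deg = 2.5 / 3600.0                               # 2.5 arcsec
candidates = []
for (ra, dec), s in zip(first_pos, first_flux):
    d = np.hypot((vlass_pos[:, 0] - ra) * np.cos(np.radians(dec)),
                 vlass_pos[:, 1] - dec)
    if s > 2.3 and d.min() > radius_deg:                # bright in FIRST, gone in VLASS
        candidates.append((ra, dec, s))
```

In this toy example only the first source survives both cuts; a real search would use proper spherical matching (e.g. on the celestial sphere rather than the flat-sky approximation) and the measured VLASS flux-density limits.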
J1533+2727 is cataloged in the FIRST survey with a flux density of $9.7\pm0.1$\,mJy, with an observing epoch of 1995 November 06. The source is also present in the NVSS catalog with a flux density of $9.1\pm0.5$\,mJy on 1995 April 16. Although the formal $3\sigma$ upper limit on the flux density of J1533+2727 in the VLASS quick-look images is 0.38\,mJy (based on the per-pixel rms at the source location), we adopt an upper limit of 0.46\,mJy to account for errors in the flux scale \citep{pb17}. The VLASS observation epoch was 2017 October 02. We next searched the VLA archive\footnote{\url{https://archive.nrao.edu/archive/advquery.jsp}} for data obtained at the position of J1533+2727, and found that this source had been observed (in a targeted observation, at the center of the primary beam) by the Cosmic Lens All-Sky Survey \citep[CLASS;][]{class} on 1998 May 22 at 8.46\,GHz (VLA project AM0593), and by a wideband survey of sources with rising spectra between 1.4\,GHz and 4.8\,GHz on 2001 September 14 (VLA project AG0617). We re-analyzed these data using standard tasks from the Common Astronomy Software Applications \citep[CASA, version 5.1.1;][]{casa}, and, finding no source at the position of J1533+2727, derived the upper limits on its flux density reported in Table~\ref{tab:sample}. As above, these upper limits were derived using the per-pixel rms at the source location. The selection of J1533+2727 as a CLASS source implies that J1533+2727 is present in the Green Bank 300-foot telescope 6\,cm survey catalog \citep[GB6;][]{gb6cat} with a flux density in excess of 30\,mJy at 4.85\,GHz \citep{class}. Indeed, the GB6 catalog lists a $51\pm6$\,mJy source (GB6\,J1533$+$2728) at a position of (R.A. J2000, decl. J2000) = (15:33:49.9$\pm$0.8, $+$27:28:12$\pm$12), where a substantial component of the error is due to pointing errors of order 8\arcsec~\citep{gb6cat}. 
Despite the 46.8\arcsec~offset between the FIRST and GB6 sources, the association is considered likely because the next closest source in FIRST or VLASS to the GB6 position is offset by 379\arcsec, which is greater than the 3.5\,arcmin full width at half maximum of the GB6 survey beam.\footnote{The final CLASS sample was chosen by associating GB6 sources with NVSS sources within a 70\arcsec~separation cut, a cut motivated by the much greater positional uncertainty of NVSS relative to FIRST.} The GB6 survey catalog comprises observations obtained over two epochs, between 1986 November 6 and December 13, and between 1987 September 30 and December 1.\footnote{Survey operations in late 1988 with the 300-foot telescope were increasingly affected by pointing errors that rendered the data unusable.} Single-epoch maps were converted into source catalogs by \citet{gb6var}\footnote{Currently available at \url{https://phas.ubc.ca/~gregory/RadioAstronomy.html}.}. The 1986 observations detected J1533+2727 with a flux density of $65\pm8$\,mJy, and the 1987 observations yielded a flux density of $42\pm8$\,mJy. The remarkable fading of J1533+2727 over 31 years between 1986 and 2017 motivated follow-up VLA observations (VLA program 19A-470). We obtained data in the B configuration (antenna separations between 0.21\,km and 11.1\,km) on 2019 May 14 in the L (1--2\,GHz) and C (4--8\,GHz) bands, using standard continuum observing setups and CASA data-reduction procedures. The absolute flux-scale and bandpass calibrator was 3C286, and time-variable complex gain calibration was accomplished using J1513+2338. No self-calibration was conducted. We detected J1533+2727 in both bands at a position consistent with the FIRST position to within 0.2\arcsec. The measured flux densities were $0.36\pm0.03$\,mJy at 1.52\,GHz, $0.132\pm0.009$\,mJy at 5\,GHz, and $0.100\pm0.009$\,mJy at 7\,GHz. 
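As a quick consistency check, these three flux densities can be fit with a single power law $S(\nu)\propto\nu^{\alpha}$ by unweighted least squares in log space (a minimal sketch; a proper weighted fit is needed to obtain the published uncertainty):

```python
import math

# 2019 May 14 VLA flux densities (frequency in GHz, flux density in mJy)
freqs = [1.52, 5.0, 7.0]
fluxes = [0.36, 0.132, 0.100]

# Unweighted least-squares fit of log S = alpha * log nu + const
x = [math.log10(nu) for nu in freqs]
y = [math.log10(s) for s in fluxes]
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
alpha = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
print(round(alpha, 2))  # -0.84
```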
The measurements are consistent with a single power law (flux density $S(\nu)\propto\nu^{\alpha}$, where $\nu$ is the frequency) with spectral index $\alpha=-0.840\pm0.003$. These measurements do not include a 3--5\% uncertainty in the VLA flux-density scale \citep{pb17}. \section{Host-galaxy properties}\label{sec:host} \begin{figure*}[ht] \centering \includegraphics[width=0.35\textwidth]{galaxy.png} \hspace{0.6cm} \includegraphics[width=0.5\textwidth]{spec.pdf} \caption{{\em Left:} Three-color composite in the SDSS $g$, $r$ and $i$ bands of the host galaxy of J1533+2727, SDSS\,J153350.89+272729. The radio position of J1533+2727 is shown as a blue cross, and the spatial extent of the SDSS fiber input on the sky is shown as a red circle \citep{sdss_spec}. {\em Right:} SDSS spectrum \citep{sdss16} of SDSS\,J153350.89+272729 obtained on 2007 March 21. Emission lines indicative of weak Type II Seyfert activity are labeled.} \label{fig:host} \end{figure*} The centroid of the FIRST position of J1533+2727 is located 0.150\arcsec~from the optical center of the galaxy SDSS\,J153350.89+272729. With a FIRST positional uncertainty of 0.400\arcsec, J1533+2727 is therefore consistent with being coincident with the galaxy nucleus. Figure~\ref{fig:host} shows the SDSS DR16 \citep{sdss16} image and spectrum of this galaxy, which we adopt as the host of J1533+2727. The host galaxy lies at a redshift of $z=0.03243\pm0.00001$ (luminosity distance of $147.14\pm0.01$\,Mpc), and the difference between the radio position of J1533+2727 and the center of light of the host galaxy corresponds to a projected separation of just $107$\,pc. An inspection of the SDSS optical spectrum indicates weak Type II Seyfert activity, according to standard line-ratio diagnostics \citep{kewley}, with $\log{\rm ([NII]/H\alpha)}=-0.05$ and $\log{\rm ([OIII]/H\beta)}=0.78$. 
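The quoted distance and projected separation follow directly from the adopted cosmology; a minimal numerical check (a sketch, integrating the comoving distance by hand rather than with a cosmology library; the small-angle conversion here uses the luminosity distance, which reproduces the quoted offset at this low redshift):

```python
import math

H0, Om, OL = 67.7, 0.3089, 0.6911   # adopted flat LambdaCDM parameters
c = 299792.458                      # speed of light, km/s
z = 0.03243                         # host redshift

# Comoving distance via trapezoidal integration of c dz' / H(z')
E = lambda zp: math.sqrt(Om * (1 + zp) ** 3 + OL)
N = 10000
dz = z / N
d_C = (c / H0) * sum(0.5 * (1 / E(i * dz) + 1 / E((i + 1) * dz)) * dz
                     for i in range(N))
d_L = d_C * (1 + z)                        # luminosity distance, ~147.1 Mpc

arcsec = math.pi / (180 * 3600)            # radians per arcsecond
offset_pc = d_L * 1e6 * 0.150 * arcsec     # 0.150 arcsec -> ~107 pc
```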
Stellar population synthesis fits to the SDSS photometry indicate a stellar mass between $10^{10.45}M_{\odot}$ and $10^{10.49}M_{\odot}$, and an ongoing star-formation rate of $\sim0.6\,M_{\odot}$\,yr$^{-1}$ \citep[`stellarMassFSPSGranEarlyDust' table;][]{sdssspec}. The quoted uncertainties are purely statistical, and do not reflect the range of possible parameterizations. The galaxy was classified morphologically as a spiral by \citet{class_SPIRAL}. The absolute rest-frame magnitudes of the bulge in the $g$ and $r$ bands were calculated by \citet{Abs_Mag} using Galaxy IMage 2D \citep{GIM2D}, assuming extinction values obtained from the SDSS database. We transformed these bulge absolute magnitudes, $M_{\rm g,bulge}=-18.52$ and $M_{\rm r,bulge}=-19.39$, to the $V$ band, obtaining $M_{\rm V,bulge}=-19.04$, according to formulas from \citet{JESTER}. We then applied the relation between SMBH mass and bulge luminosity from \cite{Lum_to_mass} to estimate the black hole mass to be $\log (M_{\rm BH}/M_{\odot})=7.6 \pm 0.2$. \section{Discussion}\label{sec:Interpretation} \subsection{The nature of J1533+2727} We now interpret these observations in terms of three hypotheses for J1533+2727: \begin{itemize} \item AGN variability. \item An engine-driven transient associated with a supernova.\footnote{We do not consider standard supernovae because the peak radio luminosity of J1533+2727 ($1.6\times10^{30}$\,erg\,s$^{-1}$\,Hz$^{-1}$) is a factor of eight greater than even the most luminous radio supernova \citep[PTF\,11qcj;][]{11qcj}.} \item A jet or outflow powered by a TDE. \end{itemize} We first augment the observations presented above with archival ROSAT/PSPC pointings that included J1533+2727 in the field of view. 
We highlight two observations in particular: a 415\,s pointing on MJD~48102 (1990 July 30; sequence id rs931238n00; around three years after the last Green Bank detection) obtained as part of the ROSAT All-Sky Survey, and a 13932\,s pointing a year later on MJD~48449 (1991 July 12; sequence id RP201103N00) obtained as part of a long exposure on $\alpha$ Cor Bor. We used the \texttt{sosta} tool in the \texttt{XIMAGE} package, and the exposure maps associated with the observations, to derive $3\sigma$ upper limits on the count rates at the position of J1533+2727. These were 0.125\,cts\,s$^{-1}$ and 0.0108\,cts\,s$^{-1}$ on the respective dates (in the standard PSPC 0.1--2.4\,keV band). We then converted these to upper limits on the 2--10\,keV luminosity assuming a photon index of 2, and a Galactic neutral-hydrogen column density of $n_{H}=2.9\times10^{20}$\,cm$^{-2}$ derived from the HI4PI neutral hydrogen column map \citep{hi4pi}. These upper limits were $L_{X}<3.0\times10^{42}$\,erg\,s$^{-1}$ and $L_{X}<2.6\times10^{41}$\,erg\,s$^{-1}$ on the respective dates. These X-ray upper limits are low in comparison with the radio luminosity of J1533+2727, if J1533+2727 represents an active black hole. The black hole fundamental plane relates the mass, 5\,GHz radio luminosity, and 2--10\,keV X-ray luminosity of actively accreting objects across nine orders of magnitude in black hole mass, and hints at a fundamental link between accretion rate and jet power. We used the latest iteration of this relation \citep{gkc+19} to derive predicted upper limits on the 5\,GHz radio luminosity of J1533+2727 at the epochs of the X-ray observations. 
Given the derived SMBH mass, by assuming Poisson statistics for the X-ray upper limits, and by using a Monte Carlo technique to account for the uncertainty and intrinsic scatter in the black hole fundamental plane, we calculated 95\% confidence upper limits on the expected radio luminosity of $4.8\times10^{38}$\,erg\,s$^{-1}$ and $9.8\times10^{37}$\,erg\,s$^{-1}$ corresponding to the two X-ray observations. These in turn imply radio flux-density upper limits of 3.7\,mJy and 0.8\,mJy respectively, where we divided the luminosities by the frequency to derive the spectral luminosities following \citet{gkc+19}. If J1533+2727 was indeed this faint during the X-ray observations, a much more rapid evolution is implied between the Green Bank 4.85\,GHz detection and these epochs than at later times. Furthermore, unless the source re-brightened between the X-ray observations and the NVSS and FIRST detections, an unrealistic spectral index steeper than $-2$ is implied between 1.4\,GHz and 5\,GHz for the FIRST detection. We conclude that J1533+2727 was inconsistent with the black hole fundamental plane when the ROSAT X-ray observations were undertaken. This suggests that J1533+2727 was not actively accreting as an AGN at this time, which is in tension with the hypothesis of ongoing AGN variability. This is however not in tension with the TDE hypothesis, because the accretion could have ceased \citep[e.g.,][]{levan15}. When detected in the Green Bank survey, J1533+2727 was also more luminous at a wavelength of 6\,cm than the cores of any of the 52 nearby Seyfert galaxies observed by \citet{hu01}, besides Perseus~A (NGC\,1275; Seyfert~1.5) and NGC\,1167 (Seyfert~II; beyond the magnitude-completeness limit of the survey). Additionally, J1533+2727 is likely more variable than any of the 12 Seyferts observed by \citet{mfn+09}, within whose sample the maximum variability in seven years at 8.4\,GHz was a factor of three. 
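The conversion from these fundamental-plane luminosity limits to flux-density limits is direct: the spectral luminosity is the quoted luminosity divided by the frequency, and the flux density is $L_{\nu}/(4\pi d_{L}^{2})$. A minimal sketch:

```python
import math

d_L = 147.14 * 3.0857e24     # luminosity distance in cm
nu = 5e9                     # observing frequency, Hz
MJY = 1e-26                  # erg s^-1 cm^-2 Hz^-1 per mJy

# 95% upper limits on the 5 GHz radio luminosity (erg/s)
# at the two ROSAT epochs
limits = [4.8e38, 9.8e37]
flux_mjy = [L / nu / (4 * math.pi * d_L**2) / MJY for L in limits]
# flux_mjy is approximately [3.7, 0.8] mJy, as quoted
```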
We therefore proceed to consider hypotheses (2) and (3) given above. Some insight can be gained by analyzing J1533+2727 as a synchrotron transient, despite the lack of detailed spectral information. That J1533+2727 represents synchrotron emission is evident given the brightness temperature ($\gtrsim5.3\times10^{9}$\,K) implied by the extreme variability of the source.\footnote{A useful working definition of a variability timescale that can be used to calculate a light-crossing time is a timescale over which the modulation index, defined as the variability range divided by the mean source flux density, is greater than unity. Adopting a timescale of six years (between the FIRST/NVSS observations and L-band VLA observations in 2001), we find a brightness temperature in excess of $5.3\times10^{9}$\,K.} We can also derive rough constraints on the source radius, $R$, the energy required to power the source, $E$, and the electron number density, $n_{e}$, of the medium into which the source is expanding. The constraints are based on a fiducial $65$\,mJy maximum flux density measured at $4.85$\,GHz, and the assumption that the optically thin spectral index of $\alpha=-0.84$ observed in our 2019 observations is representative of a non-evolving relativistic electron energy distribution $N(E)\propto E^{-p}$ with $p=-2\alpha+1=2.68$. 
In the following, we assume (\textit{a}) equipartition between the energy in relativistic electrons and in magnetic fields within the source, (\textit{b}) that the source is expanding sub-relativistically, (\textit{c}) that the source is spherically symmetric with a filling factor of unity, and (\textit{d}) that the relativistic electrons are accelerated in a strong (forward) shock that deposits 10\% of its energy in the electrons, and 10\% of its energy in magnetic fields (i.e., $\epsilon_{e}=\epsilon_{B}=0.1$ in usual terms).\footnote{In our calculation, we adopt $c_{5}=9.68\times10^{-24}$ and $c_{6}=8.10\times10^{-41}$ from \citet{Pacholczyk1970}.} The non-relativistic assumption further implies that the spectral peak was associated with synchrotron self-absorption rather than the minimum relativistic-electron energy \citep[e.g.,][]{c98}. In this scenario, following \citet{c98}, $R\propto S_{p}^{(p+6)/(2p+13)}\nu_{p}^{-1}$, and $E\propto S_{p}^{(3p+14)/(2p+13)}\nu_{p}^{-1}$, where $\nu_{p}$ is the peak frequency and $S_{p}$ is the peak flux density. If we are simply constraining the values of $R$ and $E$ when $\nu_{p}=4.85$\,GHz, we are setting lower limits on both quantities. This is also essentially the case if we are constraining the values of $R$ and $E$ during the 1986 Green Bank observations.\footnote{If the spectrum was optically thick and $\nu_{p}>\nu=4.85$\,GHz, $S_{p}\propto(\nu_{p}/\nu)^{2}$ implies a nearly fixed estimate of $R$ regardless of the true value of $\nu_{p}$, and a larger value of $E$. If the spectrum was optically thin, $R$ and $E$ would clearly both be larger.} The electron number density is derived by applying the Rankine-Hugoniot jump conditions in the strong shock limit \citep[Equation~(15) of][]{18cow}, with the further assumption of the shock velocity being given by the source size divided by its lifetime, $T$. In this case, $n_{e}\propto S_{p}^{-(2p+16)/(2p+13)}\nu_{p}^{4}$. 
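For $p=2.68$, these scalings evaluate to the numerical exponents that appear in the equations below (a quick check):

```python
# Evaluate the scaling exponents of the Chevalier-type relations for p = 2.68
p = 2.68
exp_R = (p + 6) / (2 * p + 13)        # R   ∝ S_p^0.47 nu_p^-1
exp_E = (3 * p + 14) / (2 * p + 13)   # E   ∝ S_p^1.20 nu_p^-1
exp_n = -(2 * p + 16) / (2 * p + 13)  # n_e ∝ S_p^-1.16 nu_p^4 T^2
print(round(exp_R, 2), round(exp_E, 2), round(exp_n, 2))  # 0.47 1.2 -1.16
```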
In summary, using the 1986 measurement, we find \begin{align} R &=\begin{aligned}& 2.3\times10^{17} \left(\frac{S_{p}}{\rm 65\,mJy}\right)^{0.47}\left(\frac{\nu_{p}}{\rm 4.85\,GHz}\right)^{-1}\,{\rm cm}\end{aligned} \\[\jot] E &=\begin{aligned}& 7.5\times10^{50} \left(\frac{S_{p}}{\rm 65\,mJy}\right)^{1.20}\left(\frac{\nu_{p}}{\rm 4.85\,GHz}\right)^{-1}\,{\rm erg}\end{aligned} \\[\jot] n_{e} &=\begin{aligned}& 1.6\times10^{-3} \left(\frac{S_{p}}{\rm 65\,mJy}\right)^{-1.16}\left(\frac{T}{\rm 1\,day}\right)^{2} \\ &\left(\frac{\nu_{p}}{\rm 4.85\,GHz}\right)^{4}\,{\rm cm}^{-3}.\end{aligned} \end{align} These equations, including the constants of proportionality, directly reproduce Equations (8), (12), and (16) of \citet{18cow}, for our value of $p$. Although the radius estimate is only mildly sensitive to the assumptions above, a departure from equipartition like that observed for Sw\,J1644$+$57 \citep{tarraneh}, where $\epsilon_{B}=0.001$ was inferred assuming $\epsilon_{e}=0.1$, would increase the energy estimate by two orders of magnitude. \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{comparison_lightcurve.pdf} \caption{Lightcurves of all TDEs detected at radio wavelengths. All TDE data are at 5\,GHz, except for the 1.4\,GHz data on J1533$+$2727, and 8.4\,GHz data on AT2019dsg and Arp\,299. Data were collated from \citet{tde_sample} and references therein. The time since the event for the first data point on J1533$+$2727 (88 days) was derived assuming expansion at the speed of light to that time (see text for details). Also shown are representative radio lightcurves of four luminous supernovae: the relativistic-outflow events 1998bw \citep[4.8\,GHz;][]{1998bw} and 2009bb \citep[8.46\,GHz;][]{2009bb}, and the energetic Type Ic broad line events 2003bg \citep[8.5\,GHz;][]{2003bg} and 2007bg \citep[8.5\,GHz;][]{2007bg}. 
In the latter two supernovae the late-time radio emission was enhanced by a dense, structured circumstellar medium.} \label{fig:lightcurve} \end{figure*} These results provide evidence that J1533+2727 represents a relativistic jet/outflow from a TDE. First, the lower limit on the energy in the outflow is greater than that of any known stellar cataclysm \citep[e.g.,][]{229B_m} besides classical, on-axis long gamma-ray bursts (LGRBs).\footnote{Radio calorimetry of the ejecta of cosmic explosions traces the fastest ejecta, and therefore cannot be directly related to the total energies in outflows with a range of velocities, like supernovae \citep[e.g.,][]{berger03}.} Additionally, most LGRBs have apparent expansion velocities of $\Gamma\beta\gtrsim3$ (here, $\Gamma$ is the bulk Lorentz factor of the emitting material, and $\beta=v/c$ is the normalized expansion velocity). However for J1533+2727, following \citet{sari98}, the characteristic frequency corresponding to radiation from the lowest-energy relativistic electrons, $\nu_{m}$, was likely lower than 4.85\,GHz in 1986, because the source declines between 1986 and 1987. This frequency is related to $E$ and $T$ in the case of adiabatic evolution, which implies a lifetime: \begin{equation} T\gtrsim22\left(\frac{E}{7.5\times10^{50}\,{\rm erg}}\right)^{1/2}\,{\rm days}. \end{equation} This in turn implies $\Gamma\beta\lesssim4$, and $n_{e}\lesssim 0.8\,{\rm cm}^{-3}$. Although this makes the LGRB scenario somewhat fine-tuned, the derived parameters are consistent with some GRBs \citep[e.g., GRB\,980703;][]{perley17}. Indeed, GRB\,980703 exhibits late-time emission 16 years post-burst that is similar to J1533+2727 \citep{perley17}. However, the projected offset between J1533+2727 and the center of light of its host galaxy of $\approx 100$\,pc is inconsistent with more than 95\% of the LGRB population \citep{grboffset}. 
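The limits on $\Gamma\beta$ and $n_{e}$ quoted above follow from the equipartition radius, the lifetime limit, and the $T^{2}$ scaling of the density (a sketch, taking the apparent expansion velocity to be $R/T$):

```python
R = 2.3e17               # equipartition radius, cm
T = 22 * 86400           # lifetime lower limit of 22 days, in s
c = 2.998e10             # speed of light, cm/s

# Since T is a lower limit, both quantities below are upper limits
gamma_beta = R / (c * T)      # apparent expansion speed, ~4
n_e = 1.6e-3 * 22**2          # ambient density from the T^2 scaling, ~0.8 cm^-3
```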
Additionally, the high stellar mass and lack of evidence for an ongoing starburst in the host galaxy of J1533+2727 are inconsistent with typical LGRB hosts \citep{taggart}. We therefore favor the TDE scenario. A comparison between the radio lightcurve of J1533+2727 and the remainder of the TDE population is shown in Figure~\ref{fig:lightcurve}. The post-explosion time of the 1986 epoch (88 days) was derived assuming a nominal expansion velocity of $c$, which would imply $n_{e}\sim12$\,cm$^{-3}$. Note, however, that sub-relativistic expansion was assumed to derive the constraints above, so this epoch is for illustrative purposes only. The high radio luminosity and outflow energy relative to several radio-detected TDEs are suggestive of a relativistic jet, rather than a wide-angle outflow \citep{tde_sample}. We note that if the emission region were significantly aspherical, with a non-unity filling factor, some of the above conclusions would be altered by factors of a few \citep{duran}. \subsection{Implications for the TDE population} We have established that J1533+2727 is a remarkable radio transient and a likely TDE afterglow. Using the FIRST survey, and assuming the detection of just one such source, we can calculate a lower limit on the occurrence rate of sources like J1533+2727. The 1\,mJy minimum flux density of the FIRST catalog and the peak luminosity of J1533+2727 in FIRST imply a detectable distance of 452\,Mpc. The FIRST and VLASS\,1.1 sky surveys have an overlapping sky coverage of $\sim6000$\,deg$^{2}$, and thus the detectable distance corresponds to an observable volume of 0.056\,Gpc$^3$. J1533+2727 emitted above its detected FIRST luminosity for at least 8 years between the Green Bank and FIRST detections. We can therefore infer a lower limit on the volumetric rate of approximately 2.2\,Gpc$^{-3}$yr$^{-1}$, or $\sim1$\% of the observed TDE rate \citep{rate}. 
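The volume and rate estimates above can be reproduced with a simple Euclidean calculation (a sketch; 41253\,deg$^{2}$ is the area of the full sky):

```python
import math

d_max = 0.452             # Gpc; distance at which the FIRST peak reaches 1 mJy
sky_frac = 6000 / 41253   # FIRST-VLASS overlap as a fraction of the full sky
volume = (4 / 3) * math.pi * d_max**3 * sky_frac   # ~0.056 Gpc^3

years = 8                 # time spent above the FIRST detection threshold
rate = 1 / (volume * years)                        # ~2.2 Gpc^-3 yr^-1
```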
We do not take this analysis further because the search for TDEs detected in FIRST is ongoing. Our results add to the emerging picture of the diversity of TDE-driven jets/outflows from supermassive black holes. Although it has long been known that $\sim50\%$ of the mass of a disrupted star is likely to be unbound \citep[e.g.,][]{rees1988}, the geometry and kinematics of such outflows are poorly constrained, as are any jets/outflows powered by the accretion of the remaining 50\% of the mass. The radio luminosity and derived outflow energy of J1533+2727 fill the gap between the three relativistic TDEs identified through their prompt high-energy emission, and the remaining TDE sample (Figure~\ref{fig:lightcurve}). The host galaxy of J1533+2727 and its central supermassive black hole appear otherwise unremarkable with respect to the TDE population \citep{french_host,BH_mass}. Although the black hole mass is somewhat high relative to optically selected TDEs \citep{BH_mass}, stars with a wide range of masses ($\gtrsim0.3M_{\odot}$) are expected to be disrupted by such black holes \citep{kochanek}. The optical spectrum of the host of J1533+2727 shows emission lines characteristic of the narrow-line regions of Type II Seyferts; this nuclear activity must have been ongoing prior to the transient event, given the large expected sizes of narrow-line regions \citep[e.g.][]{bennert06}. This is similar to the host of the radio-discovered TDE CNSS\,J0019$+$00 \citep{marin_tde}. The presence of nuclear activity in the spectrum of the J1533+2727 host makes it difficult to determine whether or not it is a post-starburst galaxy, although we note that TDEs are found to be over-represented in galaxies that are evidently post-starburst from their optical spectra \citep[Figure~\ref{fig:host_distribution};][]{decker_2016,french_host}. 
We speculate that TDEs discovered in radio transient surveys will have substantially different selection effects, especially with regards to AGN activity and extinction, than the selection effects present in optical and soft X-ray surveys that dominate TDE discoveries today. \begin{figure}[!ht] \centering \includegraphics[width=0.45\textwidth]{host_comp.png} \caption{Plot adapted from \citet{french_host} (their Figure~5 -- see \citet{french_host} and \citet{decker_2016} for details) showing spectral indices of a sample of SDSS galaxies, and selected TDE hosts including J1533+2727. H$\alpha$ emission traces current star formation while H$\delta$ absorption traces star-formation activity in the past $\sim$Gyr. Post-starburst / quiescent Balmer-strong galaxies comprising 0.2\% (solid box) and 2\% (dashed box) of the parent SDSS sample are at the lower right of the plot. The hosts of optically and X-ray selected TDEs are over-represented among post-starburst galaxies. Although only two radio-selected TDE candidates have been identified so far (J1533+2727 and CNSS\,J0019+00), neither is hosted by a post-starburst galaxy.} \label{fig:host_distribution} \end{figure} \section{Conclusions}\label{sec:Conclusions} We present the discovery of the candidate TDE FIRST\,J153350.8$+$272729 using the GB6, FIRST, and VLASS radio surveys. The source was first detected in 1986 with a flux density of 65\,mJy at 4.85\,GHz, and has been monotonically fading ever since. This is the second TDE candidate to be solely identified at radio wavelengths. Its radio luminosity (observed maximum of $8\times10^{39}$\,erg\,s$^{-1}$), and the implied energy in the outflow generated by the TDE ($\gtrsim7\times10^{50}$\,erg), fill a gap between most radio-detected TDEs and the three high-redshift events that were first discovered through their prompt $\gamma$-ray emission. 
Little more can be said about the nature of the outflow and the medium into which it propagates, because we have only observed the optically thin component of the radio spectral energy distribution. The host galaxy, at a distance of 147\,Mpc, is largely unremarkable (inferred supermassive black hole mass of $4\times10^{7}M_{\odot}$), and shows signatures of Type II Seyfert activity. We anticipate that ongoing radio transient surveys such as VLASS, surveys with the Australian Square Kilometre Array Pathfinder \citep{vast} and the Aperture Tile In Focus system at the Westerbork Synthesis Radio Telescope \citep{apertif}, and dedicated radio follow-up observations will together yield several more such TDEs, which may help untangle the selection effects of surveys in other bands. \acknowledgements Contributions from GZ and JC were made through the Harvard Science Research Mentoring Program \citep[SRMP;][]{graur2018}. Support for this program is provided by the National Science Foundation under award AST-1602595, City of Cambridge, the John G. Wolbach Library, Cambridge Rotary, and generous individuals. We thank Or Graur for useful discussions on the science, and for coordinating the 2018--2019 SRMP that made this research possible. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto. H.D. and B.M.G. acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) through grant RGPIN-2015-05948, and of the Canada Research Chairs program. M.R.D. acknowledges support from the NSERC through grant RGPIN-2019-06186, the Canada Research Chairs Program, the Canadian Institute for Advanced Research (CIFAR), and the Dunlap Institute at the University of Toronto. C.J.L. acknowledges support under NSF grant 2022546. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. 
The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, and others. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is http://www.sdss.org. This research has made use of: the SIMBAD database, operated at Centre de Donn\'{e}es astronomiques de Strasbourg, France; the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA; NASA’s Astrophysics Data System; and the VizieR catalog access tool, CDS, Strasbourg, France. This research has made use of data and/or software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory.
\section{Introduction} Classification of C*-algebras is a research programme initiated by the work of Glimm, Dixmier, Bratteli and Elliott. After some recent breakthroughs, the combined work of many mathematicians over several decades has culminated in the complete classification of unital separable simple nuclear $\mathcal Z$-stable C*-algebras satisfying the UCT (see \cite{KP, Phi, GLN, GLNa, GLNb, EGLN, TWW} and the references therein). Further classification results are expected to cover the stably projectionless case as well (see for instance \cite{EN,EGLN17a,EGLN17b,GLI,GLII}). All in all, the final result classifies all separable simple nuclear $\mathcal Z$-stable C*-algebras satisfying the UCT (which we refer to as \an{classifiable C*-algebras} in this paper) by their Elliott invariants. Recently, it was shown in \cite{Li18} that every classifiable C*-algebra has a Cartan subalgebra. The interest here stems from the observation in \cite{Kum,Ren} that once a Cartan subalgebra has been found, it automatically produces an underlying topological groupoid such that the ambient C*-algebra can be written as the corresponding groupoid C*-algebra. Therefore, the results in \cite{Li18} build a strong connection between classification of C*-algebras and generalized topological dynamics (in the form of topological groupoids and their induced orbit structures). This connection has already proven to be very fruitful, for instance in the classification of Cantor minimal systems up to orbit equivalence \cite{GPS, GMPS08, GMPS10} or in the context of approximation properties \cite{Ker,KS}. Generally speaking, the notion of Cartan subalgebras in C*-algebras has attracted attention recently due to links to topological dynamics \cite{Li16,Li17,Li_DQH} and the UCT question \cite{BL16,BL17}. 
More precisely, the construction in \cite{Li18} produces Cartan subalgebras in all the C*-algebra models from \cite{Ell, EV, Tho, GLN, GLNa} which exhaust all possible Elliott invariants of classifiable stably finite C*-algebras. Actually, we obtain C*-diagonals in this case (i.e., the underlying topological groupoid has no non-trivial stabilizers). Together with groupoid models (and hence Cartan subalgebras) which have already been constructed in the purely infinite case (see \cite{Spi} and also \cite[\S~5]{LR}), this produces Cartan subalgebras in all classifiable C*-algebras. An alternative approach to constructing groupoid models, based on topological dynamics, has been developed in \cite{DPS18,Put,DPS19a,DPS19b} and covers large classes of classifiable C*-algebras. In special cases, groupoid models have also been constructed in \cite{AM}. The goal of this paper is to start a more detailed analysis of the C*-diagonals and the corresponding groupoids constructed in \cite{Li18}. A motivating question is whether the construction in \cite{Li18} produces a one-dimensional C*-diagonal of the Jiang-Su algebra $\mathcal Z$ which is distinguished in some sense (here one-dimensional C*-diagonal means C*-diagonal whose spectrum has covering dimension one), or, put differently (see \cite[Problem~3]{BS_MFO}): \begin{question} \label{q:UniqueCartanZ} Does the Jiang-Su algebra $\mathcal Z$ have any distinguished (one-dimensional) Cartan subalgebras? \end{question} Note that such uniqueness questions cannot have an affirmative answer without restrictions such as bounds on the dimension because every classifiable C*-algebra is $\mathcal Z$-stable, so that taking tensor products produces Cartan subalgebras whose spectra have arbitrarily large covering dimension (see \cite[Proposition~5.1]{LR}). 
Instead of fixing the covering dimension, an even stronger restriction would be to fix the homeomorphism type of the spectrum and to look for a unique or distinguished Cartan subalgebra whose spectrum coincides with a given topological space. This leads to the question of what we can say about the spectra of Cartan subalgebras of classifiable C*-algebras. In general, not much is known. Before the work in \cite{Li18}, it was, for instance, not even known whether $\mathcal Z$ has any Cartan subalgebra with one-dimensional spectrum. Another example is the following question (see \cite[Problem~11]{BS_MFO}): \begin{question} \label{q:UHF_connCartan} Does the CAR algebra have a Cartan subalgebra with connected spectrum? \end{question} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} This question is motivated by a construction, due to Phillips and Wassermann \cite{PW_MFO}, of uncountably many pairwise non-conjugate MASAs (which are not Cartan subalgebras) in the CAR algebra whose spectra are all homeomorphic to the unit interval. In this context, we would like to mention that Kumjian \cite{Kum88} had constructed a Cartan subalgebra in an AF algebra with spectrum homeomorphic to the unit circle. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} The following are the main results of this paper, which shed some light on the above-mentioned questions. \begin{theorem} \label{thm:main1} Every classifiable stably finite C*-algebra which is unital or stably projectionless with continuous scale (in the sense of \cite{Lin91,Lin04,GLI,GLII}) has a C*-diagonal with connected spectrum. \end{theorem} \begin{theorem} \label{thm:main2_unital} Every classifiable stably finite unital C*-algebra with torsion-free $K_0$ and trivial $K_1$ has continuum many pairwise non-conjugate C*-diagonals whose spectra are all homeomorphic to the Menger curve. 
\end{theorem} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} The Menger curve is also known as Menger universal curve, Menger cube, Menger sponge, Sierpinski cube or Sierpinski sponge. It was constructed by Menger \cite{Men} as a universal one-dimensional space, in the sense that every separable metrizable space of dimension at most one embeds into it. Anderson \cite{And58_1,And58_2} characterized the Menger curve by abstract topological properties. The reader may consult \cite{MOT} for more information about the Menger curve, including a concrete construction. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} In order to obtain a version of Theorem~\ref{thm:main2_unital} in the stably projectionless setting, we need to replace the Menger curve $\bm{M}$ by another Menger manifold (a topological space locally homeomorphic to $\bm{M}$) of the form $\bm{M} \setminus \iota(C)$, where $\iota$ is an embedding of the Cantor space $C$ into $\bm{M}$ such that $\iota(C)$ is a non-locally-separating subset of $\bm{M}$, in the sense that for every connected open subset $U$ of $\bm{M}$, $U \setminus \iota(C)$ is still connected. Up to homeomorphism, the space $\bm{M} \setminus \iota(C)$ does not depend on the choice of $\iota$ (see \cite{MOT}), and we denote this space by $\bm{M}_{\setminus C} \defeq \bm{M} \setminus \iota(C)$. \begin{theorem} \label{thm:main2_spl} Every classifiable stably projectionless C*-algebra with continuous scale, torsion-free $K_0$ and trivial $K_1$ has continuum many pairwise non-conjugate C*-diagonals whose spectra are all homeomorphic to $\bm{M}_{\setminus C}$. \end{theorem} Theorem~\ref{thm:main1} answers Question~\ref{q:UHF_connCartan}. Note that in the stably projectionless case, the absence of projections only guarantees the absence of compact open subsets in the spectrum, but it does not automatically lead to a single connected component (see \cite[\S~8]{Li18}). 
Theorems~\ref{thm:main2_unital} and \ref{thm:main2_spl} show that the uniqueness question for Cartan subalgebras in classifiable C*-algebras has a negative answer unless we impose further conditions. Hence the problem asking for classification of Cartan subalgebras in classifiable C*-algebras (see \cite[Problem~3]{BS_MFO}) seems to be challenging (which perhaps makes it all the more interesting). It is interesting to point out that the situation for classifiable C*-algebras is very different from the corresponding one for von Neumann algebras. Theorems~\ref{thm:main2_unital} and \ref{thm:main2_spl} also tell us that, in general, there is not much we can say about the induced map on K-theory of the natural inclusion of a Cartan subalgebra (see Remark~\ref{rem:KMenger}, which sheds some light on \cite[Problem~8]{BS_MFO}). \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} In particular, Theorem~\ref{thm:main2_unital} applies to all infinite-dimensional unital separable simple AF algebras, for instance all UHF algebras, and to $\mathcal Z$. Theorem~\ref{thm:main2_spl} applies in particular to the Razak-Jacelon algebra $\mathcal W$ and the stably projectionless version $\mathcal Z_0$ of the Jiang-Su algebra of \cite[Definition~7.1]{GLII}. Even restricted to these special cases, Theorems~\ref{thm:main2_unital} and \ref{thm:main2_spl} yield new results (and we do not need the full strength of the classification theorem for all classifiable C*-algebras; for instance, the results in \cite{Rob} suffice). \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} The constructions we develop in order to prove our main theorems work in general, but only produce C*-diagonals with the desired properties under the conditions we impose in our main theorems. There are several reasons: In \cite{Li18}, C*-diagonals are constructed in all classifiable C*-algebras using the method of cutting down by suitable elements. This procedure, however, might not preserve connectedness. 
This is why Theorem~\ref{thm:main1} only covers unital C*-algebras and stably projectionless C*-algebras with continuous scale. Note, however, that this class of C*-algebras covers all classifiable C*-algebras up to stable isomorphism. The reason we further restrict to the case of torsion-free $K_0$ and trivial $K_1$ in Theorems~\ref{thm:main2_unital} and \ref{thm:main2_spl} is twofold: It is shown in \cite{Li18} that the spectra of the C*-diagonals constructed there will have dimension at least two as soon as torsion appears in K-theory. This rules out $\bm{M}$ or $\bm{M}_{\setminus C}$ as the spectrum in general. Even more serious is the obstruction that the path-lifting property established in Proposition~\ref{prop:path} for the connecting maps at the groupoid level, which plays a crucial role in establishing Theorems~\ref{thm:main2_unital} and \ref{thm:main2_spl}, does not hold anymore in the case where $K_0$ contains torsion or $K_1$ is non-trivial. In order to prove our main results, the strategy is to adjust the constructions of C*-algebra models in \cite{Ell, EV, Tho, GLN, GLNa}, which arise as inductive limits of simpler building blocks and which exhaust all possible Elliott invariants, in such a way that the new, modified constructions produce C*-algebra models with C*-diagonals having various desired properties. The reader may find the corresponding versions of our main results in Theorems~\ref{thm:conn_Ell}, \ref{thm:ManyMenger_GPD_Ell} and \ref{thm:ManyMenger_Diag_Ell}, which do not depend on general classification results for all classifiable C*-algebras. These versions in combination with general classification results then yield our main theorems as stated above. To construct Cartan subalgebras in inductive limit C*-algebras, an important tool has been developed in \cite{Li18}. 
However, in \cite{Li18}, we were merely interested in existence results for C*-diagonals, whereas the present work requires several further modifications as well as a finer analysis of the construction of C*-diagonals in \cite{Li18} in order to ensure topological properties of the spectrum such as connectedness, the abstract topological properties characterizing $\bm{M}$, or the further properties characterizing $\bm{M}_{\setminus C}$. At the technical level, a crucial role is played by a new path-lifting property (see Proposition~\ref{prop:path}) of the connecting maps at the groupoid level. This is particularly powerful in combination with inverse limit descriptions of the spectra of the C*-diagonals we construct. Further fine-tuning of the construction is required to produce C*-diagonals for which we can completely determine the spectra up to homeomorphism. In order to show that the construction yields continuum many pairwise non-conjugate C*-diagonals, the key idea is to exploit connectedness not only of the spectra but also of (parts of) the groupoid models themselves. This aspect of the construction seems to be interesting in its own right, because many important groupoid models which have been previously studied (for instance for AF algebras, Kirchberg algebras or coming from Cantor minimal systems) have totally disconnected unit spaces. Important building blocks leading to the C*-algebra models in \cite{Ell, EV, Tho, GLN, GLNa} are given by one-dimensional non-commutative CW complexes and their generalizations. Therefore, as a starting point, we develop a complete classification of C*-diagonals in one-dimensional non-commutative CW complexes. Roughly speaking, the conjugacy class of a C*-diagonal in these building blocks encodes a particular set of data which can be used to construct the ambient non-commutative CW complex and which we can view as a one-dimensional CW complex in the classical sense (i.e., a graph). 
We refer to Theorem~\ref{thm:ClassAB} for more details. Our classification theorem generalizes the corresponding results for C*-diagonals in dimension drop algebras in \cite{BR}. It also puts into context the observation in \cite{BR} that in special cases, these C*-diagonals are classified up to conjugacy by the homeomorphism type of their spectra (see Theorem~\ref{thm:AB-B} for a generalization and Remark~\ref{rem:appBR}, Example~\ref{ex:appBR} for a clarification). \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} Several of the ideas and techniques leading to our main theorems already feature in the discussion of C*-diagonals in one-dimensional non-commutative CW complexes. However, even though a good understanding of these C*-diagonals played an important role in developing our main results, the actual classification results for this class of C*-diagonals are not needed in the proofs of Theorems~\ref{thm:main1}, \ref{thm:main2_unital} and \ref{thm:main2_spl}. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \section{Classification of C*-diagonals in 1-dimensional NCCW complexes} \label{s:nccw} We set out to classify C*-diagonals in 1-dimensional non-commutative CW (NCCW) complexes up to conjugacy. The reader may find more about NCCW complexes in \cite{ET, Ell, ELP, Dea, Rob}. Let us start by introducing notation and some standing assumptions. Throughout this section, $\beta_0, \, \beta_1: \: F \to E$ denote *-homomorphisms between finite-dimensional C*-algebras $F$ and $E$. Let $F = \bigoplus_{i \in I} F^i$ and $E = \bigoplus_{p \in P} E^p$ denote the decompositions of $F$ and $E$ into matrix algebras and $DF^i$, $DE^p$ the canonical C*-diagonals of diagonal matrices. The 1-dimensional NCCW complex $A = A(E,F,\beta_0,\beta_1)$ is given by $A = \menge{(f,a) \in C([0,1],E) \oplus F}{f(\mathfrak r) = \beta_\mathfrak r(a) \ \text{for} \ \mathfrak r = 0,1}$. 
For $\mathfrak r = 0,1$, we write $\beta_\mathfrak r^p$ for the composition $F \overset{\beta_\mathfrak r}{\longrightarrow} E \onto E^p$ where the second map is the canonical projection. We also write $\beta_\mathfrak r^{p,i} \defeq \beta_\mathfrak r^p \vert_{F^i}$ for the restriction of $\beta_\mathfrak r^p$ to $F^i \subseteq F$. Throughout this section, we make the following assumptions: \begin{enumerate} \item[(A1)] For all $i$, $p$ and $\mathfrak r = 0,1$, $\beta_\mathfrak r^{p,i}$ is given by the composition \begin{equation} \label{e:beta=} F^i \overset{1 \otimes {\rm id}_{F^i}}{\longrightarrow} 1_{m_\mathfrak r(p,i)} \otimes F^i \subseteq M_{m_\mathfrak r(p,i)} \otimes F^i \tailarr E^p. \end{equation} \item[(A2)] $(\beta_0, \beta_1): \: F \to E \oplus E$ is injective. \end{enumerate} In \eqref{e:beta=}, an arrow $\tailarr$ denotes a *-homomorphism of multiplicity $1$, i.e., one which preserves ranks of projections and which sends diagonal matrices to diagonal matrices (in our case $DM_{m_\mathfrak r(p,i)} \otimes DF^i$ to $DE^p$). Note that (A1) implies that $\beta_\mathfrak r$ sends $DF$ to $DE$. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} There is no loss of generality in assuming (A1) and (A2): Up to unitary equivalence, every *-homomorphism $F \to E$ is of the form in \eqref{e:beta=}, so that we can always replace $\beta_\mathfrak r^{p,i}$ by a map of the form \eqref{e:beta=} without changing the isomorphism class of $A$. And if (A2) does not hold, then $A$ decomposes as $A = A' \oplus F'$ where $A'$ is a 1-dimensional NCCW complex for which (A2) holds and $F' = \ker(\beta_0,\beta_1)$. Then the study of C*-diagonals in $A' \oplus F'$ reduces to the study of C*-diagonals in $A'$ and $F'$, and C*-diagonals in $F'$ are well-understood. (A2) allows us to identify $A$ with the sub-C*-algebra $\menge{f \in C([0,1],E)}{(f(0), f(1)) \in {\rm im\,}(\beta_0,\beta_1)}$ of $C([0,1],E)$. We will do so frequently without explicitly mentioning it. 
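To fix ideas, we note how the dimension drop algebras, whose C*-diagonals were studied in \cite{BR}, fit into this framework (this example is purely illustrative and will not be used later). For integers $k, l \geq 1$, take $E \defeq M_{kl}$ and $F \defeq M_k \oplus M_l$, fix an identification $M_k \otimes M_l \cong M_{kl}$ sending $DM_k \otimes DM_l$ to $DM_{kl}$, and define $\beta_0(a,b) \defeq a \otimes 1_l$ and $\beta_1(a,b) \defeq 1_k \otimes b$. Then $(\beta_0,\beta_1)$ is injective, so (A2) holds, and up to unitary equivalence (reordering the tensor factors), (A1) holds with multiplicities $m_0(p,1) = l$, $m_0(p,2) = 0$, $m_1(p,1) = 0$ and $m_1(p,2) = k$, where $P = \gekl{p}$ and $I = \gekl{1,2}$. The resulting 1-dimensional NCCW complex is the dimension drop algebra
$$
A(E,F,\beta_0,\beta_1) \cong \menge{f \in C([0,1],M_k \otimes M_l)}{f(0) \in M_k \otimes 1_l, \, f(1) \in 1_k \otimes M_l}.
$$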
\setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Before we start to develop our classification results, we give an overview. If we let $\mathcal X^i \defeq \Spec DF^i$, $\mathcal X \defeq \Spec DF$ and $\mathcal Y^p \defeq \Spec DE^p$, $\mathcal Y \defeq \Spec DE$, then for $\mathfrak r = 0, 1$, $\beta_{\mathfrak r}$ corresponds to a collection $(\bm{b}_\mathfrak r^p)_p$ of maps $\bm{b}_\mathfrak r^p: \: \mathcal Y_\mathfrak r^p \to \mathcal X$ for some $\mathcal Y_\mathfrak r^p \subseteq \mathcal Y^p$. Viewing $\mathcal Y^p$ as edges, $\mathcal X$ as vertices and $\bm{b}_0^p$, $\bm{b}_1^p$ as source and target maps, this data gives rise to a collection of directed graphs $\Gamma^p$, or 1-dimensional CW complexes in the classical sense. (Strictly speaking, this is only correct when $A$ is unital; in the non-unital case, we obtain non-compact 1-dimensional CW complexes obtained by removing finitely many points from compact 1-dimensional CW complexes.) Moreover, given a permutation $\bm{\sigma} = \coprod \bm{\sigma}^p$ of $\mathcal Y = \coprod \mathcal Y^p$, we obtain twisted graphs $\Gamma_{\bm{\sigma}}^p$ with the same edge set $\mathcal Y^p$, the same vertex set $\mathcal X$, the same source map $\bm{b}_0^p$ and twisted target map $\bm{b}_1^p \circ \bm{\sigma}^p$. Now it turns out that every C*-diagonal of a 1-dimensional NCCW complex corresponds to a permutation $\bm{\sigma}$ as above, and for two such permutations $\bm{\sigma}$ and $\bm{\tau}$, the corresponding C*-diagonals are conjugate if and only if the collections of oriented graphs $(\Gamma_{\bm{\sigma}}^p)_p$ and $(\Gamma_{\bm{\tau}}^p)_p$ are isomorphic in the sense that there exist isomorphisms of the individual graphs which are either orientation-preserving or orientation-reversing for each $p$. We refer to Theorem~\ref{thm:ClassAB} for more details. As a first step, we provide models for C*-diagonals in $A$ up to conjugacy. 
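The following toy example, which we include for illustration only, shows the graph picture at work. Take $E = M_2$ and $F = \mathbb{C} \oplus \mathbb{C}$, with $\beta_0 = \beta_1$ the diagonal embedding $(a,b) \ma \rukl{\begin{smallmatrix} a & 0 \\ 0 & b \end{smallmatrix}}$, so that $\mathcal Y$ consists of two edges and $\mathcal X$ of two vertices. For the trivial permutation, $\Gamma$ consists of two loops, one at each vertex, whereas for the transposition $\bm{\sigma}$ of $\mathcal Y$, the twisted graph $\Gamma_{\bm{\sigma}}$ is a single cycle of length two. These two graphs are not isomorphic, so the C*-diagonals corresponding to the trivial permutation and to $\bm{\sigma}$ are not conjugate; indeed, their spectra are homeomorphic to two disjoint circles and to a single circle, respectively. (Here $\rukl{\cdot}$ stands for ordinary parentheses; if this macro is not available, plain parentheses may be used.)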
Given a permutation matrix $\sigma$ in $E$, set $$ A_{\sigma} \defeq A(E,F,\beta_0,{\rm Ad\,}(\sigma) \circ \beta_1) = \menge{(f,a) \in C([0,1],E) \oplus F}{f(0) = \beta_0(a), \, f(1) = \sigma \beta_1(a) \sigma^*}. $$ Moreover, define $$ B_{\sigma} \defeq \menge{(f,a) \in A_{\sigma}}{f(t) \in DE \ \forall \, t \in [0,1]}. $$ Note that given $(f,a) \in A_{\sigma}$, the condition $f(t) \in DE$ for all $t \in [0,1]$ implies $a \in DF$ by (A1) and (A2). The following observation is a straightforward generalization of \cite[Proposition~5.1]{BR}. \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{lemma} $B_{\sigma}$ is a C*-diagonal of $A_{\sigma}$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Conversely, it turns out that up to conjugacy, every C*-diagonal of $A$ is of this form. \begin{proposition} \label{prop:AB=AsBs} For every C*-diagonal $B$ of $A$, there exists a permutation matrix $\sigma \in E$ such that $(A,B) \cong (A_{\sigma},B_{\sigma})$, i.e., there exists an isomorphism $A \isom A_{\sigma}$ sending $B$ onto $B_{\sigma}$. \end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} For a subset $S \subseteq [0,1]$, let $A_S \defeq \menge{f \vert_S}{f \in A} \subseteq C(S,E)$ and $B_S \defeq \menge{f \vert_S}{f \in B} \subseteq A_S$. It is easy to see (compare \cite[Proposition~4.1]{BR}) that for every $t \in (0,1)$, $B_{\gekl{t}}$ is a C*-diagonal of $A_{\gekl{t}} = E$, and that $B_{\gekl{0,1}}$ is a C*-diagonal of $A_{\gekl{0,1}}$. By (A2), $(\beta_0,\beta_1)$ defines an isomorphism $F \isom A_{\gekl{0,1}}$. Hence $(\beta_0,\beta_1)^{-1}(B_{\gekl{0,1}})$ is a C*-diagonal of $F$. Thus there is a unitary $u_F \in U(F)$ such that $u_F (\beta_0,\beta_1)^{-1}(B_{\gekl{0,1}}) u_F^* = DF$. Applying $(\beta_0,\beta_1)$ on both sides, we get $$ (\beta_0(u_F),\beta_1(u_F)) (B_{\gekl{0,1}}) (\beta_0(u_F),\beta_1(u_F))^* = (\beta_0,\beta_1)(DF) \subseteq DE \oplus DE. 
$$ Here we used that (A1) implies $\beta_\mathfrak r(DF) \subseteq DE$ for $\mathfrak r = 0,1$. Therefore, for $\mathfrak r = 0,1$, $u_\mathfrak r \defeq \beta_\mathfrak r(u_F) + (1_E - \beta_\mathfrak r(1_F))$ is a unitary in $E$ such that $u_\mathfrak r B_{\gekl{\mathfrak r}} u_\mathfrak r^* = \beta_\mathfrak r(u_F) B_{\gekl{\mathfrak r}} \beta_\mathfrak r(u_F)^* \subseteq DE$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Using \cite[Corollary~2.5 and Lemma~3.4]{BR}, it is straightforward to find $u: \: [0,\tfrac{1}{2}] \to U(E)$ with $u(0) = u_0$ and $u \vert_{(0,1/2]} \in C((0,\tfrac{1}{2}],U(E))$ such that ${\rm Ad\,}(u)$ induces an isomorphism $A_{[0,1/2]} \isom A_{[0,1/2]}$ sending $B_{[0,1/2]}$ to $\menge{f \in A_{[0,1/2]}}{f(t) \in DE \ \forall \, t \in [0,\tfrac{1}{2}]}$. Similarly, find $v: \: [\tfrac{1}{2},1] \to U(E)$ satisfying $v(1) = u_1$ and $v \vert_{[1/2,1)} \in C([\tfrac{1}{2},1),U(E))$ such that ${\rm Ad\,}(v)$ induces $A_{[1/2,1]} \isom A_{[1/2,1]}$ sending $B_{[1/2,1]}$ to $\menge{f \in A_{[1/2,1]}}{f(t) \in DE \ \forall \, t \in [\tfrac{1}{2},1]}$. Now consider $\sigma = u(\tfrac{1}{2}) v(\tfrac{1}{2})^* \in U(E)$. We have $\sigma DE \sigma^* = u(\tfrac{1}{2}) v(\tfrac{1}{2})^* DE v(\tfrac{1}{2}) u(\tfrac{1}{2})^* = u(\tfrac{1}{2}) B_{\gekl{1/2}} u(\tfrac{1}{2})^* = DE$. Thus $\sigma$ normalizes $DE$. This implies that $\sigma$ is the product of a unitary in $DE$ and a permutation matrix in $E$. By multiplying $u$ by a suitable unitary in $C([0,\tfrac{1}{2}],U(DE))$, we can arrange that $\sigma$ is given by a permutation matrix in $E$. Define $w: \: [0,1] \to U(E)$ by $w(t) \defeq u(t)$ for $t \in [0,\tfrac{1}{2}]$ and $w(t) \defeq \sigma v(t)$ for $t \in [\tfrac{1}{2},1]$. Then $w(t) B_{\gekl{t}} w(t)^* \subseteq DE$ for all $t \in [0,1]$. 
Hence ${\rm Ad\,}(w)$ induces an isomorphism $A \isom A_{\sigma}, \, (f,a) \ma (wfw^*, u_F a u_F^*)$ sending $B$ to $B_{\sigma} = \menge{f \in A_{\sigma}}{f(t) \in DE \ \forall \, t \in [0,1]}$. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} By Proposition~\ref{prop:AB=AsBs}, the classification problem for C*-diagonals in $A$ reduces to the classification problem for Cartan pairs of the form $(A_{\sigma}, B_{\sigma})$. Our next goal is to further reduce to the situation where no index in $P$ is redundant. Let $A = A(E,F,\beta_0,\beta_1)$ be a 1-dimensional NCCW complex and $B = \menge{f \in A}{f(t) \in DE \ \forall \, t \in [0,1]}$. \begin{definition} An index $q \in P$ is called redundant if there exists $\bar{q} \in P$ with $\bar{q} \neq q$ and $j \in I$, $\mathfrak r, \mathfrak s \in \gekl{0,1}$ such that $\beta_\mathfrak r^{\bar{q},j}$ and $\beta_\mathfrak s^{q,j}$ are isomorphisms and $\beta_\bullet^{p,j} = 0$ for all $p \notin \gekl{q,\bar{q}}$ and $\bullet = 0,1$. \end{definition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} Note that we must have $\beta_\mathfrak r^{\bar{q},i} = 0$ and $\beta_\mathfrak s^{q,i} = 0$ for all $i \neq j$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Given a redundant index $q$ as above, assume first that $\mathfrak r = \mathfrak s$, say $\mathfrak r = \mathfrak s = 0$ (the case $\mathfrak r = \mathfrak s = 1$ is treated analogously). Set $\check{\beta}_\bullet^p \defeq \beta_\bullet^p$ for all $p \neq q, \bar{q}$ and $\bullet = 0,1$, $\check{\beta}_0^{\bar{q}} \defeq \beta_1^{\bar{q}}$, write $\gamma = \beta_0^{\bar{q},j} (\beta_0^{q,j})^{-1}$ and set $\check{\beta}_1^{\bar{q}} \defeq \gamma \beta_1^q$. Set $\check{E} \defeq \bigoplus_{p \in P \setminus \gekl{q}} E^p$ and let $\check{\beta}_\bullet: \: F \to \check{E}$ be given by $\check{\beta}_\bullet = (\check{\beta}_\bullet^p)_{p \in P \setminus \gekl{q}}$ for $\bullet = 0,1$. 
Let $\check{A} \defeq A(\check{E},F,\check{\beta}_0,\check{\beta}_1)$ and $\check{B} \defeq \menge{f \in \check{A}}{f(t) \in D\check{E} \ \forall \, t \in [0,1]}$. The following is straightforward to check. \begin{lemma} \label{lem:A=vA_00} We have an isomorphism $A \isom \check{A}, \, (f^p)_p \ma (\check{f}^p)_p$, where for $f^p \in C([0,1],E^p)$, $\check{f}^p = f^p$ if $p \neq q, \bar{q}$, $\check{f}^{\bar{q}}(t) = f^{\bar{q}}(1-2t)$ for $t \in [0,\tfrac{1}{2}]$ and $\check{f}^{\bar{q}}(t) = \gamma(f^q(2t-1))$ for $t \in (\tfrac{1}{2},1]$. This isomorphism sends $B$ to $\check{B}$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} If $\mathfrak r \neq \mathfrak s$, say $\mathfrak r = 0$ and $\mathfrak s = 1$ (the other case is analogous), define $\check{\beta}_\bullet^p \defeq \beta_\bullet^p$ for all $p \neq q, \bar{q}$ and $\bullet = 0,1$, $\check{\beta}_0^{\bar{q}} \defeq \beta_0^q$, and $\check{\beta}_1^{\bar{q}} \defeq \gamma \beta_1^{\bar{q}}$, where $\gamma \defeq \beta_1^{q,j} (\beta_0^{\bar{q},j})^{-1}$, set $\check{E} \defeq \bigoplus_{p \in P \setminus \gekl{q}} E^p$, $\check{\beta}_\bullet \defeq (\check{\beta}_\bullet^p)_{p \in P \setminus \gekl{q}}$ for $\bullet = 0,1$, $\check{A} \defeq A(\check{E},F,\check{\beta}_0,\check{\beta}_1)$ and $\check{B} \defeq \menge{f \in \check{A}}{f(t) \in D\check{E} \ \forall \, t \in [0,1]}$. Then the following analogue of Lemma~\ref{lem:A=vA_00} is straightforward: \begin{lemma} \label{lem:A=vA_01} We have an isomorphism $A \isom \check{A}, \, (f^p)_p \ma (\check{f}^p)_p$, where for $f^p \in C([0,1],E^p)$, $\check{f}^p \defeq f^p$ if $p \neq q, \bar{q}$, $\check{f}^{\bar{q}}(t) = f^q(2t)$ for $t \in [0,\tfrac{1}{2}]$ and $\check{f}^{\bar{q}}(t) = \gamma(f^{\bar{q}}(2t-1))$ for $t \in (\tfrac{1}{2},1]$. This isomorphism sends $B$ to $\check{B}$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{definition} We say that $A$ is in reduced form if no index in $P$ is redundant. 
\end{definition} Lemmas~\ref{lem:A=vA_00} and \ref{lem:A=vA_01} allow us to assume that $A$ is in reduced form from now on. In the following, let us develop direct sum decompositions so that we can reduce our discussion to individual summands, i.e., to the case where $A$ is indecomposable. Let $\sim_P$ be the equivalence relation on $P$ generated by $q \sim_P \bar{q}$ if there are $i \in I$, $\mathfrak r, \mathfrak s \in \gekl{0,1}$ such that $\beta_\mathfrak r^{q,i} \neq 0$ and $\beta_\mathfrak s^{\bar{q},i} \neq 0$. Let $P = \coprod_{l \in L} P_l$ be the decomposition of $P$ into equivalence classes with respect to $\sim_P$. For each $l \in L$, let $E_l \defeq \bigoplus_{p \in P_l} E^p$, $I_l \defeq \{ i \in I: \: \beta_\bullet^{p,i} \neq 0 \ \text{for some} \ \bullet = 0,1 \ \text{and} \ p \in P_l \}$ and $F_l \defeq \bigoplus_{i \in I_l} F^i$. Define $\beta_{\bullet; l} \defeq (\beta_\bullet^{p,i})_{p \in P_l, \, i \in I_l}: \: \bigoplus_{i \in I_l} F^i \to \bigoplus_{p \in P_l} E^p$ for $\bullet = 0, 1$. Set $A_l \defeq A(E_l,F_l,\beta_{0; l},\beta_{1; l})$. The following is straightforward. \begin{lemma} \label{lem:dirsum} We have $A = \bigoplus_{l \in L} A_l$, and for each $l \in L$, $A_l$ cannot be further decomposed into (non-trivial) direct summands. Moreover, the decomposition $A = \bigoplus_{l \in L} A_l$ is the unique direct sum decomposition of $A$ into indecomposable direct summands. \end{lemma} \begin{remark}\em \label{rem:dirsumAB} The direct sum decomposition in Lemma~\ref{lem:dirsum} is compatible with C*-diagonals in the sense that if $B = \menge{f \in A}{f(t) \in DE \ \forall \, t \in [0,1]}$, then under the direct sum decomposition $A = \bigoplus_{l \in L} A_l$ from Lemma~\ref{lem:dirsum}, we have $B = \bigoplus_{l \in L} B_l$, where $B_l = \menge{f \in A_l}{f(t) \in DE_l \ \forall \, t \in [0,1]}$. 
\end{remark} \begin{corollary} \label{cor:dirsum} Every isomorphism $A_{\sigma} \isom A_{\tau}$ restricts to isomorphisms $A_{\sigma; l} \isom A_{\tau; \lambda(l)}$ for all $l \in L$, where $A_{\sigma; l}$ and $A_{\tau; \lambda(l)}$ are the direct summands of $A_{\sigma}$ and $A_{\tau}$ provided by Lemma~\ref{lem:dirsum}, and $\lambda: \: L \isom L$ is a permutation of $L$. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} If the isomorphism $A_{\sigma} \isom A_{\tau}$ above sends $B_{\sigma}$ onto $B_{\tau}$, then for all $l \in L$, the isomorphism $A_{\sigma; l} \isom A_{\tau; \lambda(l)}$ above must send $B_{\sigma; l}$ onto $B_{\tau; \lambda(l)}$, where $B_{\sigma; l}$ and $B_{\tau; \lambda(l)}$ are as in Remark~\ref{rem:dirsumAB}. \end{corollary} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} Here, we are implicitly using that the equivalence relation $\sim_P$ does not depend on $\sigma$, $\tau$, i.e., it coincides for $A$, $A_{\sigma}$ and $A_{\tau}$. This is because $\sigma$ and $\tau$ decompose as $\sigma = (\sigma^p)$, $\tau = (\tau^p)$ for permutation matrices $\sigma^p$, $\tau^p$ in $E^p$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Lemma~\ref{lem:dirsum} and Corollary~\ref{cor:dirsum} allow us to reduce our discussion to the case where $A$ is indecomposable. So let us assume that we have $p_1 \sim_P p_2$ for all $p_1, p_2 \in P$. Let us now describe the centre $Z(A)$ and its spectrum $\Spec Z(A)$. Let $\sim_Z$ be the equivalence relation on $[0,1] \times P$ generated by $(\mathfrak r,q) \sim_Z (\mathfrak s,\bar{q})$ if $\mathfrak r, \mathfrak s \in \gekl{0,1}$ and there exists $i \in I$ with $\beta_\mathfrak r^{q,i} \neq 0$ and $\beta_\mathfrak s^{\bar{q},i} \neq 0$. Note that on $(0,1) \times P$, $\sim_Z$ is trivial. We write $[\cdot]_Z$ for the canonical projection map $[0,1] \times P \onto ([0,1] \times P) / {}_{\sim_Z}$. 
Let $[0,1] \times_\bullet P \defeq \menge{(t,p) \in [0,1] \times P}{\beta_\mathfrak r^q \ \text{is unital for all} \ (\mathfrak r,q) \in [t,p]_Z \ \text{if} \ t \in \gekl{0,1}}$. \begin{lemma} \label{lem:Z} The centre of $A$ is given by \begin{align} \label{e:Z} Z(A) = \big\{ &(f^p) = (g^p \cdot 1_{E^p}) \in C([0,1],Z(E)) = \bigoplus_p C([0,1],Z(E^p)):\\ &g^p \in C[0,1], \, g^q(\mathfrak r) = g^{\bar{q}}(\mathfrak s) \ \text{if} \ (\mathfrak r,q) \sim_Z (\mathfrak s,\bar{q}), \, g^q(\mathfrak r) = 0 \ \text{if} \ (\mathfrak r,q) \notin [0,1] \times_\bullet P \big\}. \nonumber \end{align} We have a homeomorphism $([0,1] \times_\bullet P) / {}_{\sim_Z} \isom \Spec Z(A)$ sending $[t,q]$ to the character $Z(A) \to \mathbb{C}, \, (f^p) = (g^p \cdot 1_{E^p}) \ma g^q(t)$. Here $[0,1]$ is given the usual topology and $P$ the discrete topology. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} If $f = (f^p)$ lies in $Z(A)$, then $f^p$ lies in $Z(C([0,1],E^p)) = C([0,1],Z(E^p))$ for all $p$, hence $f^p = g^p \cdot 1_{E^p}$ for some $g^p \in C[0,1]$. Moreover, if $a \in F$ satisfies $(f(0),f(1)) = (\beta_0(a),\beta_1(a))$, then $a \in Z(F)$, i.e., $a = (\alpha^i \cdot 1_{F^i})$ with $\alpha^i \in \mathbb{C}$. Now $g^q(\mathfrak r) \cdot 1_{E^q} = f^q(\mathfrak r) = \beta_\mathfrak r^q(a)$ and $\beta_\mathfrak r^{q,i}(\alpha^i \cdot 1_{F^i}) = \alpha^i \beta_\mathfrak r^{q,i}(1_{F^i})$ imply that $g^q(\mathfrak r) = \alpha^i$ if $\beta_\mathfrak r^{q,i} \neq 0$. Hence $g^q(\mathfrak r) = \alpha^i = g^{\bar{q}}(\mathfrak s)$ if both $\beta_\mathfrak r^{q,i} \neq 0$ and $\beta_\mathfrak s^{\bar{q},i} \neq 0$. In addition, we see that $g^q(\mathfrak r) = 0$ and $(\alpha^i) = 0$ if $\beta_\mathfrak r^q$ is not unital. This shows \an{$\subseteq$} in \eqref{e:Z}. 
For \an{$\supseteq$}, let $f = (g^p \cdot 1_{E^p})$ satisfy $g^p \in C[0,1]$, $g^q(\mathfrak r) = g^{\bar{q}}(\mathfrak s)$ if $(\mathfrak r,q) \sim_Z (\mathfrak s,\bar{q})$ and $g^q(\mathfrak r) = 0$ if $(\mathfrak r,q) \notin [0,1] \times_\bullet P$. For $i \in I$ take any $(\mathfrak r,q) \in \gekl{0,1} \times P$ with $\beta_\mathfrak r^{q,i} \neq 0$ and set $\alpha^i \defeq g^q(\mathfrak r)$. This is well-defined by our assumption on $(g^p)$. Let $a \defeq (\alpha^i \cdot 1_{F^i}) \in F$. Then it is straightforward to see that $(f(0),f(1)) = (\beta_0(a),\beta_1(a))$. Hence $f \in A$, and thus $f \in Z(A)$. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} The second part describing $\Spec Z(A)$ is an immediate consequence. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} In the following, we will always identify $\Spec Z(A)$ with $([0,1] \times_\bullet P) / {}_{\sim_Z}$ using the explicit homeomorphism from Lemma~\ref{lem:Z}. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Let us show that the points in $\partial \defeq \menge{[\mathfrak r,p]_Z \in \Spec Z(A)}{(\mathfrak r,p) \in \gekl{0,1} \times P}$ are special. Suppose that $A$ is in reduced form, i.e., no index in $P$ is redundant. Further assume that $A$ is indecomposable, so that for all $p_1, p_2 \in P$, we have $p_1 \sim_P p_2$. Let $\sigma$ and $\tau$ be permutation matrices in $E$. Let $\phi: \: A_{\sigma} \isom A_{\tau}$ be an isomorphism. We denote its restriction to $Z(A_{\sigma})$ also by $\phi$, and let $\phi_Z^*$ be the induced homeomorphism $\Spec Z(A_{\tau}) \isom \Spec Z(A_{\sigma})$. Let $\partial_{\sigma} \defeq \menge{[\mathfrak r,p]_Z \in \Spec Z(A_{\sigma})}{(\mathfrak r,p) \in \gekl{0,1} \times P}$ and $\partial_{\tau} \defeq \menge{[\mathfrak r,p]_Z \in \Spec Z(A_{\tau})}{(\mathfrak r,p) \in \gekl{0,1} \times P}$. 
\begin{lemma} \label{lem:01special} We have $\phi_Z^*(\partial_{\tau}) = \partial_{\sigma}$ unless $\# P = 1 = \# I$ and $\beta_0$, $\beta_1$ are isomorphisms. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Assume that $\phi^*_Z [\mathfrak r,p]_Z = [t,\bar{q}]_Z$ for some $(\mathfrak r,p) \in \gekl{0,1} \times P$, $(t,\bar{q}) \in (0,1) \times P$. Let $$ I_{[\mathfrak r,p]_Z} \defeq \big\{ i \in I: \: \beta_\mathfrak s^{q,i} \neq 0 \ \text{for some} \ (\mathfrak s,q) \sim_Z (\mathfrak r,p) \big\}. $$ $\phi$ induces the following commutative diagram with exact rows: $$ \xymatrix{ 0 \ar[r] & \spkl{\ker ([t,\bar{q}]_Z)} \ar[d]_{\phi} \ar[r] & A_{\sigma} \ar[d]_{\phi} \ar[r] & E^{\bar{q}} \ar[d]^{\cong} \ar[r] & 0\\ 0 \ar[r] & \spkl{\ker ([\mathfrak r,p]_Z)} \ar[r] & A_{\tau} \ar[r] & \bigoplus_{i \in I_{[\mathfrak r,p]_Z}} F^i \ar[r] & 0 } $$ Here the map $ A_{\tau} \to \bigoplus_{i \in I_{[\mathfrak r,p]_Z}} F^i$ sends $f \in A_{\tau}$ to the uniquely determined $a \in \bigoplus_{i \in I_{[\mathfrak r,p]_Z}} F^i$ with $f^q(\mathfrak s) = \beta_\mathfrak s^q(a)$ for all $(\mathfrak s,q) \in [\mathfrak r,p]_Z$. $E^{\bar{q}} \cong \bigoplus_{i \in I_{[\mathfrak r,p]_Z}} F^i$ implies that $\# I_{[\mathfrak r,p]_Z} = 1$, say $I_{[\mathfrak r,p]_Z} = \gekl{i}$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Moreover, for every sufficiently small open neighbourhood $U$ of $[t,\bar{q}]_Z$ in $\Spec Z(A_{\sigma})$, $U \setminus \gekl{[t,\bar{q}]_Z}$ is homeomorphic to $(0,1) \amalg (0,1)$, while for every sufficiently small neighbourhood $V$ of $[\mathfrak r,p]_Z$ in $\Spec Z(A_{\tau})$, $V \setminus \gekl{[\mathfrak r,p]_Z}$ is homeomorphic to $\coprod_{(\mathfrak s,q) \in [\mathfrak r,p]_Z} (0,1)$. Hence we must have $\# [\mathfrak r,p]_Z = 2$. 
Furthermore, if $U$ and $V$ are as above, then for all $\bm{u} \in U$, $A_{\sigma} / \spkl{\ker(\bm{u})}$ has the same dimension as $A_{\sigma} / \spkl{\ker ([t,\bar{q}]_Z)}$, whereas $A_{\tau} / \spkl{\ker(\bm{v})}$ has the same dimension as $A_{\tau} / \spkl{\ker ([\mathfrak r,p]_Z)}$ for all $\bm{v} \in V$ only if $\beta_\mathfrak s^{q,i}$ is an isomorphism $F^i \isom E^q$ for all $(\mathfrak s,q) \in [\mathfrak r,p]_Z$. Now if there exists $(\mathfrak s,q) \in [\mathfrak r,p]_Z$ with $q \neq p$, then $q$ (and equivalently $p$) would be a redundant index in $P$, which is impossible because $A$ is in reduced form. Hence we must have $[\mathfrak r,p]_Z = \gekl{(0,p), (1,p)}$. But this implies that $\gekl{p}$ is an equivalence class with respect to $\sim_P$. Since $A$ is indecomposable, we must have $P = \gekl{p}$, and thus $I = I_{[\mathfrak r,p]_Z}$. Thus, indeed, $\#P = 1 = \# I$, and $\beta_0$, $\beta_1$ are isomorphisms. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} It is straightforward to deal with the remaining case where $\#P = 1 = \# I$ and $\beta_0$, $\beta_1$ are isomorphisms: \begin{lemma} \label{lem:PI=11} If $\#P = 1 = \# I$ and $\beta_0$, $\beta_1$ are isomorphisms, then for all $\dot{t} \in (0,1)$, $A_{\tau} \to A_{\tau}, \, f \ma \tilde{f}$, with $\tilde{f}(t) \defeq \beta_0 \beta_1^{-1} f(t + (1-\dot{t}))$ for $t \in [0,\dot{t}]$ and $\tilde{f}(t) \defeq f(t - \dot{t})$ for $t \in [\dot{t},1]$, is an isomorphism sending $B_{\tau}$ onto $B_{\tau}$ such that the induced map $\Spec Z(A_{\tau}) \isom \Spec Z(A_{\tau})$ sends $[\dot{t}]_Z$ to $[0]_Z = [1]_Z$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} Here we identify $[0,1] \times P$ with $[0,1]$, so that there is no need to carry around the $P$-coordinate. 
\setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{corollary} \label{cor:01special} If $(A_{\sigma},B_{\sigma}) \cong (A_{\tau},B_{\tau})$, then there exists an isomorphism $A_{\sigma} \isom A_{\tau}$ sending $B_{\sigma}$ onto $B_{\tau}$ such that the induced map $\Spec Z(A_{\tau}) \isom \Spec Z(A_{\sigma})$ sends $\partial_{\tau}$ to $\partial_{\sigma}$. \end{corollary} Let $A = A(E,F,\beta_0,\beta_1)$ and $B = \menge{f \in A}{f(t) \in DE \ \forall \, t \in [0,1]}$. To describe $\Spec B$, let $\mathcal Y \defeq \Spec DE$, $\mathcal Y^p \defeq \Spec DE^p$, $\mathcal X \defeq \Spec DF$, $\mathcal X^i = \Spec DF^i$, and for $\mathfrak r = 0,1$, let $\mathcal Y_\mathfrak r = \Spec(DE \cdot \beta_\mathfrak r(1_F) \cdot DE) = \menge{y \in \mathcal Y}{y(\beta_\mathfrak r(1_F)) = 1}$. Let $\bm{b}_\mathfrak r$ be the map $\mathcal Y_\mathfrak r \to \mathcal X$ dual to $\beta_\mathfrak r \vert_{DF}: \: DF \to DE$, i.e., $\bm{b}_\mathfrak r (y) = y \circ \beta_\mathfrak r$. We have $\mathcal Y = \coprod_p \mathcal Y^p$, $\mathcal X = \coprod_i \mathcal X^i$, and with $\mathcal Y_\mathfrak r^p \defeq \mathcal Y^p \cap \mathcal Y_\mathfrak r$, the restriction $\bm{b}_\mathfrak r^p \defeq \bm{b}_\mathfrak r \vert_{\mathcal Y_\mathfrak r^p}$ is dual to $\beta_\mathfrak r^p \vert_{DF}: \: DF \to DE^p$. Define an equivalence relation $\sim_B$ on $[0,1] \times \mathcal Y$ by setting $(\mathfrak r,y) \sim_B (\mathfrak s,\bar{y})$ if $\mathfrak r, \mathfrak s \in \gekl{0,1}$, $y \in \mathcal Y_\mathfrak r$, $\bar{y} \in \mathcal Y_\mathfrak s$ and $\bm{b}_\mathfrak r(y) = \bm{b}_\mathfrak s(\bar{y})$. Note that on $(0,1) \times \mathcal Y$, $\sim_B$ is trivial. We write $[\cdot]_B$ for the canonical projection map $[0,1] \times \mathcal Y \onto ([0,1] \times \mathcal Y) / {}_{\sim_B}$. Set $[0,1] \times_\bullet \mathcal Y \defeq \menge{(t,y) \in [0,1] \times \mathcal Y}{y \in \mathcal Y_t \ \text{if} \ t \in \gekl{0,1}}$. 
Let $\bar{\Pi}: \: ([0,1] \times \mathcal Y) / {}_{\sim_B} \onto ([0,1] \times P) / {}_{\sim_Z}, \, [t,y] \ma [t,p]$ for $y \in \mathcal Y^p$ be the canonical projection. The following is straightforward: \begin{lemma} \label{lem:SpecB} We have a homeomorphism $([0,1] \times_\bullet \mathcal Y) / {}_{\sim_B} \isom \Spec B$ sending $[t,y]$ to the character $$ B \to \mathbb{C}, \, f \ma \bfa y(f(t)) & \rm{if} \ t \in (0,1);\\ \bm{b}_t(y) \big( (\beta_0,\beta_1)^{-1}(f(0),f(1)) \big) & \rm{if} \ t \in \gekl{0,1}. \end{cases} $$ Here $[0,1]$ is given the usual topology and $\mathcal Y$ the discrete topology. Moreover, with respect to this description of $\Spec B$ and the description of $\Spec Z(A)$ from Lemma~\ref{lem:Z}, the map $\Pi: \: \Spec B \to \Spec Z(A)$ induced by the canonical inclusion $Z(A) \into B$ is given by the restriction of $\bar{\Pi}$ to ${\rm dom\,} \Pi \defeq \bar{\Pi}^{-1} (\Spec Z(A))$. \end{lemma} We are now ready for our main classification theorem. Let $A = A(E,F,\beta_0,\beta_1)$ be in reduced form. Let $\sigma = (\sigma^p)$ and $\tau = (\tau^p)$ be permutation matrices in $E$. Write $\tensor*[_\sigma]{\beta}{_1} \defeq {\rm Ad\,}(\sigma) \circ \beta_1$ and $\tensor*[_{\tau}]{\beta}{_1} \defeq {\rm Ad\,}(\tau) \circ \beta_1$, and let $\tensor*[_\sigma]{\bm{b}}{_1^p}: \: \tensor*[_\sigma]{\mathcal Y}{_1^p} \to \mathcal X$, $\tensor*[_\tau]{\bm{b}}{_1^p}: \: \tensor*[_\tau]{\mathcal Y}{_1^p} \to \mathcal X$ be the maps dual to $\tensor*[_\sigma]{\beta}{_1^p}, \, \tensor*[_\tau]{\beta}{_1^p}: \: DF \to DE^p$. 
\begin{theorem} \label{thm:ClassAB} We have $(A_\sigma,B_\sigma) \cong (A_\tau,B_\tau)$ if and only if there exist \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{itemize} \item a permutation $\rho$ of $P$ and for each $p \in P$ a bijection $\Theta^p: \: \mathcal Y^p \isom \mathcal Y^{\rho(p)}$, \item a permutation $\kappa$ of $I$ and for each $i \in I$ a bijection $\Xi^i: \: \mathcal X^i \isom \mathcal X^{\kappa(i)}$ giving rise to the bijection $\Xi = \coprod_i \Xi^i: \: \mathcal X = \coprod_i \mathcal X^i \isom \coprod_i \mathcal X^{\kappa(i)} = \mathcal X$, \item a map $o: \: P \to \gekl{\pm 1}$ \end{itemize} such that for every $p \in P$, we have commutative diagrams \begin{equation} \label{e:CDYYXX+} \begin{tikzcd} \mathcal Y_0^p \arrow[d, "\bm{b}_0^p"'] \arrow[r, "\sim"', "\Theta^p"] & \mathcal Y_0^{\rho(p)} \arrow[d, "\bm{b}_0^{\rho(p)}"] & & \tensor*[_\tau]{\mathcal Y}{_1^p} \arrow[d, "{\tensor*[_\tau]{\bm{b}}{_1^p}}"'] \arrow[r, "\sim"', "\Theta^p"] & \tensor*[_\sigma]{\mathcal Y}{_1^{\rho(p)}} \arrow[d, "{\tensor*[_\sigma]{\bm{b}}{_1^{\rho(p)}}}"] \\ \mathcal X \arrow[r, "\sim"', "\Xi"] & \mathcal X & & \mathcal X \arrow[r, "\sim"', "\Xi"] & \mathcal X \end{tikzcd} \end{equation} if $o(p) = +1$, \begin{equation} \label{e:CDYYXX-} \begin{tikzcd} \mathcal Y_0^p \arrow[d, "\bm{b}_0^p"'] \arrow[r, "\sim"', "\Theta^p"] & \tensor*[_\sigma]{\mathcal Y}{_1^{\rho(p)}} \arrow[d, "{\tensor*[_\sigma]{\bm{b}}{_1^{\rho(p)}}}"] & & \tensor*[_\tau]{\mathcal Y}{_1^p} \arrow[d, "{\tensor*[_\tau]{\bm{b}}{_1^p}}"'] \arrow[r, "\sim"', "\Theta^p"] & \mathcal Y_0^{\rho(p)} \arrow[d, "\bm{b}_0^{\rho(p)}"] \\ \mathcal X \arrow[r, "\sim"', "\Xi"] & \mathcal X & & \mathcal X \arrow[r, "\sim"', "\Xi"] & \mathcal X \end{tikzcd} \end{equation} if $o(p) = -1$. 
\end{theorem} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} \an{$\Larr$}: The commutative diagrams \eqref{e:CDYYXX+} and \eqref{e:CDYYXX-} induce commutative diagrams \begin{equation*} \begin{tikzcd} \mathcal Y_0^p \times \mathcal Y_0^p \arrow[r, "\sim"', "\Theta^p \times \Theta^p"] & \mathcal Y_0^{\rho(p)} \times \mathcal Y_0^{\rho(p)} \\ (\bm{b}_0^p \times \bm{b}_0^p)^{-1}(\coprod_i \mathcal X^i \times \mathcal X^i) \arrow[u, hook] \arrow[d, "\bm{b}_0^p \times \bm{b}_0^p"'] \arrow[r, "\sim"', "\Theta^p \times \Theta^p"] & (\bm{b}_0^{\rho(p)} \times \bm{b}_0^{\rho(p)})^{-1}(\coprod_i \mathcal X^{\kappa(i)} \times \mathcal X^{\kappa(i)}) \arrow[u, hook] \arrow[d, "\bm{b}_0^{\rho(p)} \times \bm{b}_0^{\rho(p)}"] \\ \coprod_i \mathcal X^i \times \mathcal X^i \arrow[r, "\sim"', "\Xi \times \Xi"] & \coprod_i \mathcal X^{\kappa(i)} \times \mathcal X^{\kappa(i)} \end{tikzcd} \end{equation*} if $o(p) = +1$, \begin{equation*} \begin{tikzcd} \mathcal Y_0^p \times \mathcal Y_0^p \arrow[r, "\sim"', "\Theta^p \times \Theta^p"] & \tensor*[_\sigma]{\mathcal Y}{_1^{\rho(p)}} \times \tensor*[_\sigma]{\mathcal Y}{_1^{\rho(p)}} \\ (\bm{b}_0^p \times \bm{b}_0^p)^{-1}(\coprod_i \mathcal X^i \times \mathcal X^i) \arrow[u, hook] \arrow[d, "\bm{b}_0^p \times \bm{b}_0^p"'] \arrow[r, "\sim"', "\Theta^p \times \Theta^p"] & (\tensor*[_\sigma]{\bm{b}}{_1^{\rho(p)}} \times \tensor*[_\sigma]{\bm{b}}{_1^{\rho(p)}})^{-1}(\coprod_i \mathcal X^{\kappa(i)} \times \mathcal X^{\kappa(i)}) \arrow[u, hook] \arrow[d, "{\tensor*[_\sigma]{\bm{b}}{_1^{\rho(p)}} \times \tensor*[_\sigma]{\bm{b}}{_1^{\rho(p)}}}"] \\ \coprod_i \mathcal X^i \times \mathcal X^i \arrow[r, "\sim"', "\Xi \times \Xi"] & \coprod_i \mathcal X^{\kappa(i)} \times \mathcal X^{\kappa(i)} \end{tikzcd} \end{equation*} if $o(p) = -1$. 
\setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Applying the groupoid C*-algebra construction, and using \cite[Proposition~5.4]{Li18} (see also \cite[Lemmas~3.2 and 3.4]{BL17}), we obtain the commutative diagram \begin{equation} \label{e:CDbeta0} \begin{tikzcd} E^p & \arrow[l, "\sim", "\theta^p"'] E^{\rho(p)} \\ F \arrow[u, "\beta_0^p"] & \arrow[l, "\sim", "\xi"'] F \arrow[u] \end{tikzcd} \end{equation} where $\theta^p = (\Theta^p \times \Theta^p)^*$ is the map induced by $\Theta^p \times \Theta^p$, $\xi = (\Xi \times \Xi)^*$ is the map induced by $\Xi \times \Xi$, and the right vertical map is given by $\beta_0^{\rho(p)}$ if $o(p) = +1$ and $\tensor*[_\sigma]{\beta}{_1^{\rho(p)}}$ if $o(p) = -1$. Similarly, we obtain a commutative diagram \begin{equation} \label{e:CDbeta1} \begin{tikzcd} E^p & \arrow[l, "\sim", "\theta^p"'] E^{\rho(p)} \\ F \arrow[u, "{\tensor*[_\tau]{\beta}{_1^p}}"] & \arrow[l, "\sim", "\xi"'] F \arrow[u] \end{tikzcd} \end{equation} where the right vertical map is given by $\tensor*[_\sigma]{\beta}{_1^{\rho(p)}}$ if $o(p) = +1$ and $\beta_0^{\rho(p)}$ if $o(p) = -1$. Now denote by $\theta$ the isomorphism $E \isom E$ given by $\bigoplus_p \theta^p: \: E = \bigoplus_p E^{\rho(p)} \isom \bigoplus_p E^p = E$. For $f = (f^p) \in C([0,1],E)$, $f^p \in C([0,1],E^p)$, define $\tilde{f} \in C([0,1],E)$ by $\tilde{f} \defeq (\tilde{f}^p)$, $\tilde{f}^p \defeq f^p$ if $o(p) = +1$ and $\tilde{f}^p \defeq f^p \circ (1 - {\rm id})$ if $o(p) = -1$. We claim that $A_{\sigma} \to A_{\tau}, \, (f,a) \ma (\theta(\tilde{f}),\xi(a))$ is an isomorphism sending $B_\sigma$ to $B_\tau$. All we have to show is that this map is well-defined, because we can construct an inverse by replacing $\theta$ by $\theta^{-1}$ and $\xi$ by $\xi^{-1}$, and the map clearly sends $B_\sigma$ to $B_\tau$. 
To see that it is well-defined, we compute \begin{align*} (\theta(\tilde{f})(0))^p &= \theta^p (\tilde{f}^{\rho(p)}(0)) = \theta^p (f^{\rho(p)}(0)) = \theta^p (\beta_0^{\rho(p)}(a)) \overset{\eqref{e:CDbeta0}}{=} \beta_0^p(\xi(a)) && {\rm if} \ o(p) = +1,\\ (\theta(\tilde{f})(0))^p &= \theta^p (\tilde{f}^{\rho(p)}(0)) = \theta^p (f^{\rho(p)}(1)) = \theta^p ( \tensor*[_\sigma]{\beta}{_1^{\rho(p)}} (a)) \overset{\eqref{e:CDbeta0}}{=} \beta_0^p(\xi(a)) && {\rm if} \ o(p) = -1. \end{align*} Similarly, $\theta(\tilde{f})(1) = \tensor*[_\tau]{\beta}{_1} (\xi(a))$. This shows that $(\theta(\tilde{f}),\xi(a)) \in A_\tau$, as desired. \an{$\Rarr$}: By Lemmas~\ref{lem:A=vA_00} and \ref{lem:A=vA_01}, we may assume that $A$ is in reduced form, i.e., no index in $P$ is redundant. By Corollary~\ref{cor:dirsum}, we may assume that $A$ is indecomposable, i.e., we have $p_1 \sim_P p_2$ for all $p_1, p_2 \in P$. Let $\phi: \: A_\sigma \isom A_\tau$ be an isomorphism with $\phi(B_\sigma) = B_\tau$. Let $\phi^*_B$ be the induced homeomorphism $\Spec B_\tau \isom \Spec B_\sigma$ and $\phi^*_Z$ the induced homeomorphism $\Spec Z(A_\tau) \isom \Spec Z(A_\sigma)$. By Corollary~\ref{cor:01special}, we may assume that $\phi^*_Z(\partial_\tau) = \partial_\sigma$. We have a commutative diagram \begin{equation*} \begin{tikzcd} {\rm dom\,} \Pi_\tau \arrow[d, "\Pi_\tau"'] \arrow[r, "\sim"', "\phi^*_B"] & {\rm dom\,} \Pi_\sigma \arrow[d, "\Pi_\sigma"] \\ \Spec Z(A_\tau) \arrow[r, "\sim"', "\phi^*_Z"] & \Spec Z(A_\sigma) \end{tikzcd} \end{equation*} where the maps $\Pi_\tau$ and $\Pi_\sigma$ are the ones from Lemma~\ref{lem:SpecB}. $\phi^*_Z$ restricts to a homeomorphism $\Spec Z(A_\tau) \setminus \partial_\tau \isom \Spec Z(A_\sigma) \setminus \partial_\sigma$. 
As $\Spec Z(A_\tau) \setminus \partial_\tau \cong (0,1) \times P$ and $\Spec Z(A_\sigma) \setminus \partial_\sigma \cong (0,1) \times P$, there must exist a permutation $\rho$ of $P$ and for each $p \in P$ a homeomorphism $\lambda^p$ of $(0,1)$ such that $\phi^*_Z([t,p]_Z) = [\lambda^p(t),\rho(p)]_Z$. Set $o(p) \defeq +1$ if $\lambda^p$ is orientation-preserving and $o(p) \defeq -1$ if $\lambda^p$ is orientation-reversing. For fixed $p$, $\Pi_\tau^{-1} ((0,1) \times \gekl{p}) = (0,1) \times \mathcal Y^p$ and $\Pi_\sigma^{-1} ((0,1) \times \gekl{\rho(p)}) = (0,1) \times \mathcal Y^{\rho(p)}$, so that we obtain the commutative diagram \begin{equation*} \begin{tikzcd} (0,1) \times \mathcal Y^p \arrow[d, "\Pi_\tau"'] \arrow[r, "\sim"', "\phi^*_B"] & (0,1) \times \mathcal Y^{\rho(p)} \arrow[d, "\Pi_\sigma"] \\ (0,1) \times \gekl{p} \arrow[r, "\sim"', "\phi^*_Z"] & (0,1) \times \gekl{\rho(p)} \end{tikzcd} \end{equation*} It follows that there exists a bijection $\Theta^p: \: \mathcal Y^p \isom \mathcal Y^{\rho(p)}$ such that $\phi^*_B([t,y]_B) = [\lambda^p(t),\Theta^p(y)]_B$ for all $y \in \mathcal Y^p$. Now consider $\partial \bm{B}_\tau \defeq \menge{[\mathfrak r,y]_B}{\mathfrak r \in \gekl{0,1}, \, y \in \mathcal Y_{\mathfrak r}}$, and define $\partial \bm{B}_\sigma$ analogously. $\phi^*_B$ restricts to a bijection $\partial \bm{B}_\tau \isom \partial \bm{B}_\sigma$ because $\partial \bm{B}_\tau = \Spec B_\tau \setminus \Pi_\tau^{-1}( \Spec Z(A_\tau) \setminus \partial_\tau )$ and similarly for $\partial \bm{B}_\sigma$. In addition, we have a bijection $\partial \bm{B}_\tau \isom \mathcal X$ sending $[0,y]_B$ to $\bm{b}_0(y)$ and $[1,y]_B$ to $\tensor*[_\tau]{\bm{b}}{_1}(y)$, and an analogous bijection $\partial \bm{B}_\sigma \isom \mathcal X$. 
Thus we obtain a bijection $\mathcal X \isom \mathcal X$ which fits into the commutative diagram \begin{equation} \label{e:CDBdBdXX} \begin{tikzcd} \partial \bm{B}_\tau \arrow[d, "\cong"'] \arrow[r, "\sim"', "\phi^*_B"] & \partial \bm{B}_\sigma \arrow[d, "\cong"] \\ \mathcal X \arrow[r, "\sim"] & \mathcal X \end{tikzcd} \end{equation} As this bijection $\mathcal X \isom \mathcal X$ corresponds to an isomorphism $F \isom F$ which fits into the commutative diagram \begin{equation*} \begin{tikzcd} A_\sigma \arrow[d, "{(\beta_0, \tensor*[_\sigma]{\beta}{_1})^{-1} \circ (\ev_0, \ev_1)}"'] \arrow[r, "\sim"', "\phi"] & A_\tau \arrow[d, "{(\beta_0, \tensor*[_\tau]{\beta}{_1})^{-1} \circ (\ev_0, \ev_1)}"] \\ F \arrow[r, "\sim"] & F \end{tikzcd} \end{equation*} there must exist a permutation $\kappa$ of $I$ and bijections $\Xi^i: \: \mathcal X^i \isom \mathcal X^{\kappa(i)}$ such that the bijection $\mathcal X \isom \mathcal X$ in \eqref{e:CDBdBdXX} is given by $\Xi \defeq \coprod_i \Xi^i: \: \coprod_i \mathcal X^i \isom \coprod_i \mathcal X^{\kappa(i)}$. Now take $p \in P$ with $o(p) = +1$. For $y \in \mathcal Y^p$, $[0,\Theta^p(y)]_B$ is mapped under the right vertical map in \eqref{e:CDBdBdXX} to $\bm{b}_0^{\rho(p)}(\Theta^p(y))$. At the same time $[0,\Theta^p(y)]_B = \lim_{t \, {\scriptscriptstyle \searrow} \, 0} \, [\lambda^p(t), \Theta^p(y)]_B = \lim_{t \, {\scriptscriptstyle \searrow} \, 0} \, \phi^*_B([t,y]_B) = \phi^*_B([0,y]_B)$. By commutativity of \eqref{e:CDBdBdXX}, the right vertical map in \eqref{e:CDBdBdXX} sends $\phi^*_B([0,y]_B)$ to $\Xi(\bm{b}_0^p(y))$. Hence $\Xi \circ \bm{b}_0^p = \bm{b}_0^{\rho(p)} \circ \Theta^p$. Similarly, $\Xi \circ \tensor*[_\tau]{\bm{b}}{_1^p} = \tensor*[_\sigma]{\bm{b}}{_1^{\rho(p)}} \circ \Theta^p$. If $o(p) = -1$, then an analogous argument shows that $\Xi \circ \bm{b}_0^p = \tensor*[_\sigma]{\bm{b}}{_1^{\rho(p)}} \circ \Theta^p$ and $\Xi \circ \tensor*[_\tau]{\bm{b}}{_1^p} = \bm{b}_0^{\rho(p)} \circ \Theta^p$. 
\end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} In \cite{BR}, the authors identify particular 1-dimensional NCCW complexes $A$ (certain dimension drop algebras) with the property that given any two C*-diagonals $B_1$ and $B_2$ of $A$, we have $(A,B_1) \cong (A,B_2)$ if and only if $\Spec B_1 \cong \Spec B_2$ (i.e., $B_1 \cong B_2$). As the latter is obviously a necessary condition, this can be viewed as a rigidity result. Moving towards a rigidity result in our general setting, let us first prove a weaker statement. \begin{theorem} \label{thm:AB-BZ} Suppose that $A$ is a 1-dimensional NCCW complex such that for all $(\mathfrak r,p) \in \gekl{0,1} \times P$, $\# \{i \in I: \: \beta_\mathfrak r^{p,i} \neq 0\} \leq 1$. Given two C*-diagonals $B_1$ and $B_2$ of $A$, we have $(A,B_1) \cong (A,B_2)$ if and only if there exists an isomorphism $B_1 \isom B_2$ sending $Z(A)$ onto $Z(A)$. \end{theorem} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} \an{$\Rarr$} is clear. Let us prove \an{$\Larr$}. By Proposition~\ref{prop:AB=AsBs}, it suffices to show that given permutation matrices $\sigma$ and $\tau$ in $E$, $(B_\sigma,Z(A_\sigma)) \cong (B_\tau,Z(A_\tau))$ implies that $(A_\sigma,B_\sigma) \cong (A_\tau,B_\tau)$. As in the proof of Theorem~\ref{thm:ClassAB}, we may assume that $A$ is in reduced form and that $A$ is indecomposable. Suppose that we have an isomorphism $\phi: \: B_\sigma \isom B_\tau$ with $\phi(Z(A_\sigma)) = Z(A_\tau)$. Let $\phi^*: \: \Spec B_\tau \isom \Spec B_\sigma$ be the homeomorphism induced by $\phi$. Define $\partial \bm{B}_\tau$ and $\partial \bm{B}_\sigma$ as in the proof of Theorem~\ref{thm:ClassAB}. Using Lemma~\ref{lem:PI=11} as for Theorem~\ref{thm:ClassAB}, we can without loss of generality assume that $\phi^*(\partial \bm{B}_\tau) = \partial \bm{B}_\sigma$. 
\setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} As in the proof of Theorem~\ref{thm:ClassAB}, we get a bijection $\Xi: \: \mathcal X \isom \mathcal X$ which fits into a commutative diagram \begin{equation*} \begin{tikzcd} \partial \bm{B}_\tau \arrow[d, "\cong"'] \arrow[r, "\sim"', "\phi^*_B"] & \partial \bm{B}_\sigma \arrow[d, "\cong"] \\ \mathcal X \arrow[r, "\sim"] & \mathcal X \end{tikzcd} \end{equation*} It remains to show that there exist a permutation $\kappa$ of $I$ and bijections $\Xi^i: \: \mathcal X^i \isom \mathcal X^{\kappa(i)}$ such that $\Xi = \coprod_i \Xi^i$. This follows from the observation --- which is a consequence of our assumption --- that $[\mathfrak r,y]_B$, $[\mathfrak s,\tilde{y}]_B$ in $\partial \bm{B}_\tau$ are mapped to elements in $\mathcal X^i$ for the same index $i \in I$ if and only if we have for all open neighbourhoods $U$ and $V$ of $[\mathfrak r,y]_B$ and $[\mathfrak s,\tilde{y}]_B$ that $\Pi_\tau( U \cap {\rm dom\,} \Pi_\tau ) \cap \Pi_\tau( V \cap {\rm dom\,} \Pi_\tau ) \neq \emptyset$. Now the rest of the proof proceeds in exactly the same way as the proof of Theorem~\ref{thm:ClassAB}. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Now let us present a strong rigidity result in our general context. \begin{theorem} \label{thm:AB-B} Suppose that $A$ is a 1-dimensional NCCW complex such that \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{itemize} \item $\beta_0$ and $\beta_1$ are unital, \item for all $i \in I$, there exists exactly one $(\mathfrak r,p) \in \gekl{0,1} \times P$ such that $\beta_\mathfrak r^{p,i} \neq 0$, \item for all $(\mathfrak r,p) \in \gekl{0,1} \times P$, there exists exactly one $i \in I$ such that $\beta_\mathfrak r^{p,i} \neq 0$, \item for all these triples $(\mathfrak r, p, i)$, we have $m_\mathfrak r(p,i) \neq 2$, \item the map defined on such triples sending $(\mathfrak r, p, i)$ to $m_\mathfrak r(p,i)$ is injective. 
\end{itemize} Then given any two C*-diagonals $B_1$ and $B_2$ of $A$, we have $(A,B_1) \cong (A,B_2)$ if and only if $\Spec B_1 \cong \Spec B_2$. \end{theorem} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} \an{$\Rarr$} is clear. To prove \an{$\Larr$}, by Proposition~\ref{prop:AB=AsBs}, it suffices to show that for any two permutation matrices $\sigma$ and $\tau$ in $E$, $B_\sigma \cong B_\tau$ implies that $(A_\sigma,B_\sigma) \cong (A_\tau,B_\tau)$. Let $\phi: \: B_\sigma \isom B_\tau$ be an isomorphism and $\phi^*: \: \Spec B_\tau \isom \Spec B_\sigma$ its dual map. Our condition $m_\mathfrak r(p,i) = 1$ or $m_\mathfrak r(p,i) \geq 3$ implies that $\phi^*(\partial \bm{B}_\tau) = \partial \bm{B}_\sigma$, where $\partial \bm{B}_\tau = \menge{[\mathfrak r,y]_B}{(\mathfrak r,y) \in \gekl{0,1} \times \mathcal Y}$ and similarly for $\partial \bm{B}_\sigma$. Thus $\phi^*$ induces a permutation $\Xi: \: \mathcal X \isom \mathcal X$, which must be of the desired form as in the proof of Theorem~\ref{thm:ClassAB} by our assumptions on $\beta_\mathfrak r^{p,i}$. Similarly, $\phi^*$ induces homeomorphisms $(0,1) \times \mathcal Y = \Spec B_\tau \setminus \partial \bm{B}_\tau \isom \Spec B_\sigma \setminus \partial \bm{B}_\sigma = (0,1) \times \mathcal Y$ as in the proof of Theorem~\ref{thm:ClassAB}, again using our assumptions on $\beta_\mathfrak r^{p,i}$. Now proceed in exactly the same way as the proof of Theorem~\ref{thm:ClassAB}. 
\end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{remark}\em \label{rem:appBR} The hypotheses of Theorem~\ref{thm:AB-B} are for instance satisfied in the case of stabilized dimension drop algebras in the sense of \cite{BR}, i.e., where $P = \gekl{p}$, $I = \gekl{i_0,i_1}$, $E = E^p = M_m \otimes M_n \otimes M_o$, $F^{i_0} = M_m \otimes M_o$, $F^{i_1} = M_n \otimes M_o$, $\beta_0 = \beta_0^{p,i_0}: \: M_m \otimes M_o \to M_m \otimes 1_n \otimes M_o \subseteq E^p$ is given by ${\rm id} \otimes 1_n \otimes {\rm id}$, $\beta_1 = \beta_1^{p,i_1}: \: M_n \otimes M_o \to 1_m \otimes M_n \otimes M_o \subseteq E^p$ is given by $1_m \otimes {\rm id} \otimes {\rm id}$, and $m, n \geq 3$, $m \neq n$. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} The conclusion of Theorem~\ref{thm:AB-B} is also shown to be true in \cite{BR} using ad-hoc methods in the case where exactly one of $m$ or $n$ is equal to $2$ or when $(m,n,o) = (2,2,1)$. However, contrary to what is claimed in \cite[Theorem~7.8]{BR}, the conclusion of Theorem~\ref{thm:AB-B} is not true in case $m=n$ and $m, n \geq 3$. The problem is that \cite[Remark~7.7]{BR} is not true in this case, as the following example shows. \end{remark} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{example}\em \label{ex:appBR} Let $\nu \geq 6$ be an integer and suppose that $\nu$ is not prime, so that $\nu$ has a divisor $\delta \in \gekl{3, \dotsc, \nu - 3}$. Let $M$ be a $\nu \times \nu$-matrix with two identical rows and pairwise distinct columns such that each row and each column has exactly $\delta$ ones, and zeros everywhere else. Such a matrix has been constructed for example in \cite{NJLSB}. 
Now consider the matrices $$ M_\sigma \defeq \rukl{ \begin{smallmatrix} \frac{2 \nu}{\delta} \cdot M & 0 \\ 0 & \frac{2 \nu}{\delta} \cdot M \end{smallmatrix} }, \qquad M_\tau \defeq \rukl{ \begin{smallmatrix} \frac{2 \nu}{\delta} \cdot M & 0 \\ 0 & (\frac{2 \nu}{\delta} \cdot M)^t \end{smallmatrix} } $$ Then each row and each column of $M_\sigma$ and $M_\tau$ has sum equal to $2\nu$. The columns of $M_\sigma$ are pairwise distinct, whereas $M_\tau$ has two identical rows and two identical columns. Hence we cannot find permutation matrices $P$ and $Q$ such that $M_\sigma = P M_\tau Q$ or $M_\sigma = P M_\tau^t Q$. Thus $M_\sigma$ and $M_\tau$ are not congruent in the language of \cite{BR}. However, it is straightforward to see that the bipartite graphs $\Gamma_\sigma$ and $\Gamma_\tau$ attached to $M_\sigma$ and $M_\tau$ are isomorphic (though not in a way which either consistently preserves orientation or consistently reverses orientation), where the bipartite graphs $\Gamma_\bullet = (V, E_\bullet, E_\bullet \to V \times V)$, for $\bullet = \sigma,\tau$, are defined as follows: Let $V \defeq \gekl{1, \dotsc, 2\nu} \times \gekl{0,1}$, $E_\bullet \defeq \menge{(v_0,v_1,\mu)}{v_0, v_1 \in \gekl{1, \dotsc, 2\nu}, \, \mu \in \gekl{1, \dotsc, (M_\bullet)_{v_0,v_1}}}$ for $\bullet = \sigma, \tau$, and define $E_\bullet \to V \times V, \, (v_0,v_1,\mu) \ma ((v_0,0), (v_1,1))$. This shows that \cite[Remark~7.7]{BR} is not true. 
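The combinatorial claims above can be checked by machine. The following Python sketch is a verification aid only, not part of the argument: it uses one concrete hand-built instance of $M$ for $\nu = 6$, $\delta = 3$ (the text cites \cite{NJLSB} for the general construction, so this particular matrix is our own illustrative choice), builds $M_\sigma$ and $M_\tau$, confirms the row/column-sum and distinctness properties that obstruct congruence, and exhibits an explicit isomorphism $\Gamma_\sigma \cong \Gamma_\tau$ which is the identity on the first block and a side-swap on the second, hence neither consistently orientation-preserving nor consistently orientation-reversing.

```python
from collections import Counter

# Hand-built instance of M for nu = 6, delta = 3: two identical rows,
# pairwise distinct columns, exactly delta ones in every row and column.
nu, delta = 6, 3
M = [
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],  # identical to the first row
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],
]
cols = [tuple(row[j] for row in M) for j in range(nu)]
assert all(sum(row) == delta for row in M)
assert all(sum(col) == delta for col in cols)
assert M[0] == M[1] and len(set(cols)) == nu

# M_sigma = diag(c*M, c*M) and M_tau = diag(c*M, (c*M)^t), c = 2*nu/delta.
c = 2 * nu // delta
cM = [[c * v for v in row] for row in M]
cMt = [[cM[j][i] for j in range(nu)] for i in range(nu)]
block_diag = lambda A, B: [r + [0] * nu for r in A] + [[0] * nu + r for r in B]
Msig, Mtau = block_diag(cM, cM), block_diag(cM, cMt)

# All row and column sums equal 2*nu; the columns of M_sigma are pairwise
# distinct, while M_tau has two identical rows and two identical columns,
# which rules out M_sigma = P M_tau Q and M_sigma = P M_tau^t Q.
col = lambda A, j: tuple(r[j] for r in A)
assert all(sum(r) == 2 * nu for r in Msig + Mtau)
assert all(sum(col(A, j)) == 2 * nu for A in (Msig, Mtau) for j in range(2 * nu))
assert len({col(Msig, j) for j in range(2 * nu)}) == 2 * nu
assert Mtau[0] == Mtau[1] and col(Mtau, nu) == col(Mtau, nu + 1)

def edges(Mat):
    """Undirected edge multiset of the bipartite multigraph attached to Mat."""
    count = Counter()
    for v0, row in enumerate(Mat):
        for v1, mult in enumerate(row):
            if mult:
                count[frozenset({(v0, 0), (v1, 1)})] += mult
    return count

# Candidate isomorphism Gamma_sigma -> Gamma_tau: identity on the first
# block, side-swap (v, s) -> (v, 1 - s) on the second block, so orientation
# is preserved on one block and reversed on the other.
phi = {(v, s): (v, s) if v < nu else (v, 1 - s)
       for v in range(2 * nu) for s in (0, 1)}
mapped = Counter()
for e, mult in edges(Msig).items():
    mapped[frozenset(phi[v] for v in e)] += mult
assert mapped == edges(Mtau)
```

Since every assertion passes, the two multigraphs are isomorphic even though the matrices are not congruent in the sense of \cite{BR}.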
\setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} This leads to an example of a 1-dimensional NCCW complex in the same form as in Remark~\ref{rem:appBR} with $m = n = 2\nu$, $o = 1$, and permutation matrices $\sigma, \tau \in E^p$ such that $B_\sigma \cong B_\tau$ but $(A_\sigma, B_\sigma) \not\cong (A_\tau, B_\tau)$: For $\mathcal Y = \Spec DE$, $\mathcal X = \Spec DF$, we have canonical identifications $\mathcal Y \cong \gekl{1, \dotsc, 2\nu} \times \gekl{1, \dotsc, 2\nu}$, $\mathcal X = \mathcal X^{i_0} \amalg \mathcal X^{i_1}$ with $\mathcal X^{i_0} \cong \gekl{1, \dotsc, 2\nu}$ and $\mathcal X^{i_1} \cong \gekl{1, \dotsc, 2\nu}$ such that the maps $\bm{b}_\bullet$ dual to $\beta_\bullet$ are given by $\bm{b}_0: \: \mathcal Y \to \mathcal X^{i_0} \subseteq \mathcal X, \, (y_0,y_1) \ma y_0$ and $\bm{b}_1: \: \mathcal Y \to \mathcal X^{i_1} \subseteq \mathcal X, \, (y_0,y_1) \ma y_1$. Let $\bm{\sigma}$ be the permutation of $\mathcal Y$ such that for all $x_0, x_1 \in \mathcal X$, we have $\# \menge{y \in \mathcal Y}{(\bm{b}_0(y), \bm{b}_1(\bm{\sigma}(y))) = (x_0,x_1)} = (M_{\sigma})_{x_0,x_1}$, and let $\bm{\tau}$ be a permutation of $\mathcal Y$ with the analogous property for $M_\tau$ instead of $M_\sigma$. The proof of \cite[Proposition~6.9]{BR} gives a precise recipe to find such $\bm{\sigma}$ and $\bm{\tau}$. Now let $\sigma$ be the permutation matrix in $E$ given by $\sigma_{\bar{y},y} = 1$ if and only if $\bar{y} = \bm{\sigma}^{-1}(y)$, and define $\tau$ similarly. Then $\Gamma_\sigma \cong \Gamma_\tau$ implies that $\Spec B_\sigma \cong \Spec B_\tau$, hence $B_\sigma \cong B_\tau$, while we cannot have $(A_\sigma, B_\sigma) \cong (A_\tau, B_\tau)$ since otherwise Theorem~\ref{thm:ClassAB} would imply that $M_\sigma$ and $M_\tau$ would have to be congruent in the sense of \cite{BR}. 
\end{example} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \section{Construction of C*-diagonals with connected spectra} We set out to construct C*-diagonals with connected spectra in classifiable stably finite C*-algebras. \subsection{Construction of C*-diagonals in classifiable stably finite C*-algebras} \label{ss:CCCC} We recall the construction in \cite[\S~4]{Li18} which is a modified version of the constructions in \cite{Ell, EV, GLN} (see \cite[\S~2]{Li18}). The construction provides a model for every classifiable stably finite C*-algebra which is unital or stably projectionless with continuous scale, with prescribed Elliott invariant $\mathcal E = (G_0, G_0^+, u, T, r, G_1)$ as in \cite[Theorem~1.2]{Li18} or $\mathcal E = (G_0, \gekl{0}, T, \rho, G_1)$ as in \cite[Theorem~1.3]{Li18}, in the form of an inductive limit $\ilim_n \gekl{A_n, \varphi_n}$. In addition, the crucial point in \cite{Li18} is to identify C*-diagonals $B_n$ of $A_n$ which are preserved under the connecting maps and which satisfy the hypothesis of \cite[Theorem~1.10]{Li18} so that $\ilim_n \gekl{B_n, \varphi_n}$ becomes a C*-diagonal of $\ilim_n \gekl{A_n, \varphi_n}$. 
Here $A_n = \menge{(f,a) \in C([0,1],E_n) \oplus F_n}{f(\mathfrak r) = \beta_{n, \mathfrak r}(a) \ {\rm for} \ \mathfrak r = 0,1}$ where $E_n = \bigoplus_p E_n^p$, $E_n^p = M_{\gekl{n,p}}$, $F_n = \bigoplus_i F_n^i$, $F_n^{\bm{i}} = P_n^{\bm{i}} M_{\infty}(C(Z_n)) P_n^{\bm{i}}$ for a distinguished index $\bm{i}$, $Z_n$ is a path-connected space with base point $\theta^{\bm{i}}_n$, $P^{\bm{i}}_n$ is a projection corresponding to a vector bundle over $Z_n$ of dimension $[n,\bm{i}]$, $P^{\bm{i}}_n(\theta^{\bm{i}}_n)$ is up to conjugation by a permutation matrix given by $1_{[n,\bm{i}]}$, $F_n^i = M_{[n,i]}$ for $i \neq \bm{i}$, $\hat{F}_n = \bigoplus \hat{F}_n^i$, $\hat{F}_n^{\bm{i}} = M_{[n,\bm{i}]}$, $\hat{F}_n^i = F_n^i$ if $i \neq \bm{i}$, $\pi_n: \: F_n \onto \hat{F}_n$ is given by $\pi_n = \ev_{\theta_n^{\bm{i}}} \oplus \bigoplus_{i \neq \bm{i}} {\rm id}_{F_n^i}$, and $\beta_{n, \bullet} = \hat{\beta}_{n, \bullet} \circ \pi_n$, where $\hat{\beta}_{n, \bullet}: \: \hat{F}_n \to E_n$ is of the same form as in \eqref{e:beta=}. In the stably projectionless case, we can (and will) always arrange that for all $n$, there exists exactly one index $\grave{p}$ such that $\beta_{n,0}^{\grave{p}}$ is unital and $\beta_{n,1}^{\grave{p}}$ is non-unital, while $\beta_{n,\bullet}^p$ is unital for all other $p \neq \grave{p}$. The connecting maps $\varphi \defeq \varphi_n: \: A_n \to A_{n+1}$ are determined by $\varphi_C: \: A_n \overset{\varphi}{\longrightarrow} A_{n+1} \to C([0,1],E_{n+1})$ and $\varphi_F: \: A_n \overset{\varphi}{\longrightarrow} A_{n+1} \to F_{n+1}$. 
$\varphi_C(f,a)$ is of block diagonal form, with block diagonal entries of the form \begin{equation} \label{e:fplp} f^p \circ \lambda, \end{equation} for a continuous map $\lambda: \: [0,1] \to [0,1]$ with $\lambda^{-1}(\gekl{0,1}) \subseteq \gekl{0,1}$, where $f^p$ is the image of $f$ under the canonical projection $C([0,1],E_n) \onto C([0,1],E_n^p)$ (see \cite[Equation~(16)]{Li18}), or of the form \begin{equation} \label{e:tauax} [0,1] \to E_{n+1}^q, \, t \ma \tau(t) a(x(t)), \end{equation} where $x: \: [0,1] \to Z_n$ is continuous and $\tau(t): \: P_n(x(t)) M_{\infty} P_n(x(t)) \cong P_n(\theta_n^i) M_{\infty}P_n(\theta_n^i)$ is an isomorphism depending continuously on $t$, with $\theta_n^i$ in the same connected component as $x(t)$, and $\tau(t) = {\rm id}$ if $x(t) = \theta_n^i$ (see \cite[Equation~(17)]{Li18}). Moreover, we can always arrange the following conditions: \begin{eqnarray} \label{e:phiCfp} &&\forall \ p, \, q \ \exists \ \text{a block diagonal entry in} \ C([0,1],E_{n+1}^q) \ \text{of the form} \ f^p \circ \lambda^p \ \text{as in} \ \eqref{e:fplp} \ \text{with} \ \lambda^p(0) = 0, \, \lambda^p(1) = 1. \\ \label{e:phiCl} &&\forall \ \lambda \ \text{as in} \ \eqref{e:fplp} \ \text{and} \ \mathfrak r \in \gekl{0,1}, \, \lambda(\mathfrak r) \notin \gekl{0,1} \ \Rarr \ \lambda(\mathfrak r^*) \in \gekl{0,1}, \ \text{where} \ \mathfrak r^* = 1 - \mathfrak r.\\ \label{e:phiCx} &&\forall \ x \ \text{as in} \ \eqref{e:tauax}, \ \text{if} \ {\rm im\,}(x) \subseteq Z_n^{\bm{i}}, \ \text{then} \ x(0) = \theta_n^{\bm{i}} \ \text{or} \ x(1) = \theta_n^{\bm{i}}. \end{eqnarray} Note that a crucial (though basic) modification of the constructions in \cite{Ell, EV, GLN} is to push unitary conjugation all into $\beta_{n+1,\bullet}$, so that $\varphi_C$ can be arranged to be always of this block diagonal form (see \cite[Remark~4.1]{Li18} for details). 
$\varphi_F(f,a)$ is up to permutation given by \begin{equation} \label{e:phiF} \varphi_F(f,a) = \rukl{ \begin{smallmatrix} \varphi_{F,C}(f) & 0 \\ 0 & \varphi_{F,F}(a) \end{smallmatrix} }, \quad \text{where} \ \varphi_{F,F}(a) = (\varphi^{j,i}(a^i))_j \ \text{for} \ F_n = \bigoplus_i F_n^i, \, F_{n+1} = \bigoplus_j F_{n+1}^j. \end{equation} Moreover, with $\pi \defeq \pi_{n+1}$, $\pi \circ \varphi^{j,i}$ is given by the composition \begin{equation} \label{e:piphiji} F_n^i = \hat{F}_n^i \overset{1 \otimes {\rm id}_{\hat{F}_n^i}}{\longrightarrow} 1_{m(j,i)} \otimes \hat{F}_n^i \subseteq M_{m(j,i)} \otimes \hat{F}_n^i \tailarr \hat{F}_{n+1}^j \quad \text{if} \ i \neq \bm{i}, \end{equation} $\pi \circ \varphi^{j,\bm{i}}$ is given by \begin{equation} \label{e:piphijbi} \rukl{ \begin{smallmatrix} \pi \circ \varphi_\theta^{j,\bm{i}} & 0 \\ 0 & \pi \circ \varphi_{\bm{Z}}^{j,\bm{i}} \end{smallmatrix} } \end{equation} where $\pi \circ \varphi_\theta^{j,\bm{i}}$ is given by the composition \begin{equation} \label{e:piphijbitheta} F_n^{\bm{i}} \overset{\ev_{\theta_n^{\bm{i}}}}{\longrightarrow} \hat{F}_n^{\bm{i}} \overset{1 \otimes {\rm id}_{\hat{F}_n^{\bm{i}}}}{\longrightarrow} 1_{m(j,\bm{i})} \otimes F_n^{\bm{i}} \subseteq M_{m(j,\bm{i})} \otimes F_n^{\bm{i}} \tailarr \hat{F}_{n+1}^j, \end{equation} and $\pi \circ \varphi_{\bm{Z}}^{j,\bm{i}}$ consists of block diagonals of a similar form as $\pi \circ \varphi_\theta^{j,\bm{i}}$, but starting with $\ev_{\bm{z}}$ instead of $\ev_{\theta_n^{\bm{i}}}$, for $\bm{z} \in \bm{Z} \subseteq Z_n \setminus \gekl{\theta_n^{\bm{i}}}$. As in \eqref{e:beta=}, the arrow $\tailarr$ denotes a *-homomorphism of multiplicity $1$ sending diagonal matrices to diagonal matrices. 
It is convenient to collect $\pi \circ \varphi^{j,i}$ for all $j$ into a single map $\pi \circ \varphi^{-,i}: \: F_n^i \to \hat{F}_{n+1}$ given by \begin{equation} \label{e:piphi-i} F_n^i = \hat{F}_n^i \to (1_{m(j,i)} \otimes \hat{F}_n^i)_j \subseteq \Big( \bigoplus_j M_{m(j,i)} \Big) \otimes \hat{F}_n^i \tailarr \hat{F}_{n+1} \quad \text{if} \ i \neq \bm{i}, \end{equation} and in a similar way, we obtain $\pi \circ \varphi_{\theta}^{-,\bm{i}}: \: F_n^{\bm{i}} \to \hat{F}_{n+1}$ given by \begin{equation} \label{e:piphi-bi} F_n^{\bm{i}} \overset{\ev_{\theta_n^{\bm{i}}}}{\longrightarrow} \hat{F}_n^{\bm{i}} \to (1_{m(j,{\bm{i}})} \otimes \hat{F}_n^{\bm{i}})_j \subseteq \Big( \bigoplus_j M_{m(j,{\bm{i}})} \Big) \otimes \hat{F}_n^{\bm{i}} \tailarr \hat{F}_{n+1}. \end{equation} \subsection{Modification (conn)} \label{ss:(conn)} We modify the construction described in \S~\ref{ss:CCCC} to obtain C*-diagonals with connected spectra. We start with the inductive limit decomposition as in \cite[\S~2]{Li18} and construct C*-algebras $F_n$ as in \cite[\S~3,4]{Li18}. Now the original construction recalled in \S~\ref{ss:CCCC} produces $\bfdot{A}_n$ and $\bfdot{\varphi}_{n-1}$ inductively on $n$. Suppose that the original construction starts with the C*-algebra $\bfdot{A}_1$ of the form as in \S~\ref{ss:CCCC}. Let us explain how to modify it. We will use the same notation as in \S~\ref{s:nccw}. Let $[1,I] \defeq \sum_i [1,i]$. Choose an index $\mathfrak p$ and define $E_1^{\mathfrak p} \defeq M_{\gekl{1,\mathfrak p} + [1,I]}$. View $\bfdot{E}_1^{\mathfrak p}$ and $\hat{F}_1$ as embedded into $E_1^{\mathfrak p}$ via $\bfdot{E}_1^{\mathfrak p} \oplus \hat{F}_1 = M_{\gekl{1,\mathfrak p}} \oplus (\bigoplus_i M_{[1,i]}) \subseteq M_{\gekl{1,\mathfrak p}} \oplus M_{[1,I]} \subseteq E_1^{\mathfrak p}$. Define $E_1^p \defeq \bfdot{E}_1^p$ for all $p \neq \mathfrak p$ and $E_1 \defeq \bigoplus_p E_1^p$. 
Let $d_l$, $1 \leq l \leq [1,I]$, be the rank-one projections in $DM_{[1,I]}$ and $w \in M_{[1,I]}$ the permutation matrix such that $w d_l w^* = d_{l+1}$ if $1 \leq l \leq [1,I] - 1$ and $w d_{[1,I]} w^* = d_1$. Define $\beta_{1,\mathfrak r}^p \defeq \bfdot{\beta}_{1,\mathfrak r}^p$ for $\mathfrak r = 0,1$, $p \neq \mathfrak p$, $\beta_{1,0}^{\mathfrak p} \defeq (\bfdot{\beta}_{1,0}^{\mathfrak p}, \pi_1)$ and $\beta_{1,1}^{\mathfrak p} \defeq {\rm Ad\,}(1_{\bfdot{E}_1^{\mathfrak p}}, w) \circ (\bfdot{\beta}_{1,1}^{\mathfrak p}, \pi_1)$ as maps $F_1 \to \bfdot{E}_1^{\mathfrak p} \oplus \hat{F}_1 \subseteq E_1^{\mathfrak p}$. Now define $A_1 \defeq \menge{(f,a) \in C([0,1],E_1) \oplus F_1}{f(\mathfrak r) = \beta_{1,\mathfrak r}(a) \ {\rm for} \ \mathfrak r = 0,1}$. Suppose now that our new construction has produced $$ A_1 \overset{\varphi_1}{\longrightarrow} A_2 \overset{\varphi_2}{\longrightarrow} \dotso \overset{\varphi_{n-1}}{\longrightarrow} A_n. $$ Let $\bfdot{A}_{n+1}$ and $\bfdot{\varphi}_n: \: A_n \to \bfdot{A}_{n+1}$ be given by the original construction as recalled in \S~\ref{ss:CCCC}. In order to modify $\bfdot{A}_{n+1}$ and $\bfdot{\varphi}_n$, we use the same notation for $\bfdot{\varphi} \defeq \bfdot{\varphi}_n$ as in \S~\ref{ss:CCCC}. Let $[n+1,J] \defeq \sum_j [n+1,j]$. Choose an index $\mathfrak q$. Define $E_{n+1}^{\mathfrak q} \defeq M_{\gekl{n+1,\mathfrak q} + [n+1,J]}$. View $\bfdot{E}^{\mathfrak q}_{n+1}$ and $\hat{F}_{n+1}$ as embedded into $E^{\mathfrak q}_{n+1}$ via $\bfdot{E}^{\mathfrak q}_{n+1} \oplus \hat{F}_{n+1} = M_{\gekl{n+1,\mathfrak q}} \oplus (\bigoplus_j M_{[n+1,j]}) \subseteq M_{\gekl{n+1,\mathfrak q}} \oplus M_{[n+1,J]} \subseteq E^{\mathfrak q}_{n+1}$. Define $E^q_{n+1} \defeq \bfdot{E}^q_{n+1}$ for all $q \neq \mathfrak q$ and $E_{n+1} \defeq \bigoplus_q E^q_{n+1}$. 
Set $\beta_\mathfrak r^q \defeq \bfdot{\beta}_\mathfrak r^q$ for $\mathfrak r = 0,1$, $q \neq \mathfrak q$ and $\beta_0^{\mathfrak q} \defeq (\bfdot{\beta}_0^{\mathfrak q}, \pi)$ as a map $F_{n+1} \to \bfdot{E}^{\mathfrak q}_{n+1} \oplus \hat{F}_{n+1} \subseteq E^{\mathfrak q}_{n+1}$. Let us now define $\beta_1$. Consider the descriptions of $\pi \circ \bfdot{\varphi}^{j,i}$ for $i \neq \bm{i}$ in \eqref{e:piphiji} and \eqref{e:piphi-i} and of $\pi \circ \bfdot{\varphi}^{j,\bm{i}}_\theta$ in \eqref{e:piphijbitheta} and \eqref{e:piphi-bi}. Let $d_l^i$, $1 \leq l \leq \sum_j m(j,i)$, be the rank-one projections in $\bigoplus_j DM_{m(j,i)}$ and $w^i, w^{\bm{i}}_\theta \in M_{[n+1,J]}$ permutation matrices such that, if we identify $d_l^i \otimes \mathfrak f$ with its image in $E_{n+1}^\mathfrak q$ under the compositions of the embeddings $\big( \bigoplus_j M_{m(j,i)} \big) \otimes \hat{F}_n^i \tailarr \hat{F}_{n+1}$ from \eqref{e:piphi-i}, \eqref{e:piphi-bi} and $\hat{F}_{n+1} = \bigoplus_j M_{[n+1,j]} \subseteq M_{[n+1,J]}$ from above, we have $w^i (d_l^i \otimes \mathfrak f) (w^i)^* = d_{l+1}^i \otimes \mathfrak f$ if $1 \leq l \leq \sum_j m(j,i) - 1$, $w^i (d_l^i \otimes \mathfrak f) (w^i)^* = d_1^i \otimes \mathfrak f$ if $l = \sum_j m(j,i)$, $w^{\bm{i}}_\theta (d_l^{\bm{i}} \otimes \mathfrak f) (w^{\bm{i}}_\theta)^* = d_{l+1}^{\bm{i}} \otimes \mathfrak f$ if $1 \leq l \leq \sum_j m(j,\bm{i}) - 1$ and $w^{\bm{i}}_\theta (d_l^{\bm{i}} \otimes \mathfrak f) (w^{\bm{i}}_\theta)^* = d_1^{\bm{i}} \otimes \mathfrak f$ if $l = \sum_j m(j,\bm{i})$, for all $\mathfrak f \in D\hat{F}_n^i$ and all $\mathfrak f \in D\hat{F}_n^{\bm{i}}$, respectively. Let $\mathfrak e^i$ be the unit of $\big( \bigoplus_j M_{m(j,i)} \big) \otimes \hat{F}_n^i$, viewed as a projection in $M_{[n+1,J]}$ via the above embedding into $M_{[n+1,J]}$, and let $\mathfrak e^{\bm{i}}_\theta$ be the unit of $\big( \bigoplus_j M_{m(j,\bm{i})} \big) \otimes \hat{F}_n^{\bm{i}}$, viewed as a projection in $M_{[n+1,J]}$ via the above embedding into $M_{[n+1,J]}$. 
Let $\mathfrak e_{F,C}$ and $\mathfrak e_{F,F}$ be the projections in $\hat{F}_{n+1}$ corresponding to the decomposition of $\bfdot{\varphi}_F$ (or rather $\pi \circ \bfdot{\varphi}_F$) in \eqref{e:phiF} so that $1_{\hat{F}_{n+1}} = \mathfrak e_{F,C} + \mathfrak e_{F,F}$, and set $\mathfrak e_{\bm{Z}} \defeq \mathfrak e_{F,F} - (\sum_{i \neq \bm{i}} \mathfrak e^i) - \mathfrak e_\theta^{\bm{i}}$. Now define $w_{F,F} \defeq (\sum_{i \neq \bm{i}} \mathfrak e^i w^i \mathfrak e^i) + \mathfrak e_\theta^{\bm{i}} w_\theta^{\bm{i}} \mathfrak e_\theta^{\bm{i}} + \mathfrak e_{\bm{Z}}$ and \begin{equation} \label{e:w=1w} w \defeq \rukl{ \begin{smallmatrix} \mathfrak e_{F,C} & 0 \\ 0 & w_{F,F} \end{smallmatrix} } \end{equation} with respect to the decomposition of $\bfdot{\varphi}_F$ in \eqref{e:phiF}. Set $$ \beta_1^{\mathfrak q} \defeq {\rm Ad\,} \rukl{ \begin{smallmatrix} 1_{\bfdot{E}_{n+1}^{\mathfrak q}} & 0 \\ 0 & w \end{smallmatrix} } \circ \rukl{ \begin{smallmatrix} \bfdot{\beta}_1^{\mathfrak q} & 0 \\ 0 & \pi \end{smallmatrix} } : \: F_{n+1} \to \bfdot{E}_{n+1}^{\mathfrak q} \oplus \hat{F}_{n+1} \subseteq E_{n+1}^{\mathfrak q}. $$ Finally, define $A_{n+1} \defeq \menge{(f,a) \in C([0,1],E_{n+1}) \oplus F_{n+1}}{f(\mathfrak r) = \beta_\mathfrak r(a) \ \text{for} \ \mathfrak r = 0,1}$ and $\varphi = \varphi_n: \: A_n \to A_{n+1}$ by $\varphi_F \defeq \bfdot{\varphi}_F$, $\varphi_C^q \defeq \bfdot{\varphi}_C^q$ for $q \neq \mathfrak q$, and \begin{equation} \label{e:dotphibqC} \varphi_C^{\mathfrak q} \defeq \rukl{ \begin{smallmatrix} \bfdot{\varphi}_C^{\mathfrak q} & 0 \\ 0 & \pi \circ \bfdot{\varphi}_F \end{smallmatrix} } : \: A_n \to C([0,1], \bfdot{E}_{n+1}^{\mathfrak q} \oplus \hat{F}_{n+1}) \subseteq C([0,1], E_{n+1}^{\mathfrak q}). \end{equation} By construction, $\varphi_n$ is well-defined, i.e., $\varphi_n(f,a)$ satisfies the defining boundary conditions for $A_{n+1}$ for all $(f,a) \in A_n$. 
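The effect of the cyclic conjugation built into $\beta_1^{\mathfrak q}$ can be made explicit in a toy case (for illustration only; the size $3$ is not part of the construction): for three rank-one diagonal projections $d_1, d_2, d_3$, the permutation matrix \begin{equation*} w = \rukl{ \begin{smallmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{smallmatrix} } \quad \text{satisfies} \quad w d_1 w^* = d_2, \quad w d_2 w^* = d_3, \quad w d_3 w^* = d_1, \end{equation*} so ${\rm Ad\,}(w)$ sends diagonal matrices to diagonal matrices while cyclically rotating the minimal diagonal projections. This is precisely the behaviour required of $w$, $w^i$ and $w^{\bm{i}}_\theta$ above, blockwise in \eqref{e:w=1w}.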
Proceeding in this way, we obtain an inductive system $\gekl{A_n, \varphi_n}_n$. \begin{lemma} \label{lem:conn:Ell} $A \defeq \ilim_n \gekl{A_n, \varphi_n}$ is a classifiable C*-algebra with ${\rm Ell}(A) \cong \mathcal E$, where $\mathcal E = (G_0, G_0^+, u, T, r, G_1)$ as in \cite[Theorem~1.2]{Li18} or $\mathcal E = (G_0, \gekl{0}, T, \rho, G_1)$ as in \cite[Theorem~1.3]{Li18}. In the latter case, $A$ has continuous scale. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} If we set $B_n \defeq \menge{(f,a) \in A_n}{f(t) \in DE_n \ \forall \ t \in [0,1], \, a \in DF_n}$, then $B \defeq \ilim_n \gekl{B_n, \varphi_n}$ is a C*-diagonal of $A$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} Here $DF_n$ is the C*-diagonal of $F_n$ defined in \cite[\S~6.1]{Li18}. \begin{proof} $A$ is classifiable and unital or stably projectionless with continuous scale for the same reasons why the original construction recalled in \S~\ref{ss:CCCC} yields C*-algebras with these properties (see \cite{Li18, Ell, EV, GLN} for details). We also have ${\rm Ell}(A) \cong \mathcal E$ for the same reasons as for the original construction. This is straightforward for K-theory, as $A_{n+1}$ and $\bfdot{A}_{n+1}$ have the same K-theory and $\varphi_n$ induces the same map on K-theory as $\bfdot{\varphi}_n$. It is also straightforward to see that modification (conn) yields the desired trace simplex and pairing between $K_0$ and traces. Indeed, we can think of our modification taking place already at the first stage of the construction summarized in \cite[\S~2]{Li18}, where a non-simple C*-algebra with the prescribed Elliott invariant is constructed. 
That this non-simple C*-algebra has the desired trace simplex and pairing is enforced in the construction summarized in \cite[\S~2]{Li18} by making sure that for the analogues of $\bfdot{\varphi}_C$ and $\bfdot{\varphi}_F$, the block diagonal entries of the form $t \ma \tau(t) a(x(t))$ as in \eqref{e:tauax} and $\bfdot{\varphi}_{F,F}$ as in \eqref{e:phiF} take up larger and larger portions of $C([0,1],\bfdot{E}_{n+1})$. But our modification only increases these portions. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} Finally, the connecting maps $\varphi_n$ are of the same form as in \cite[\S~4]{Li18}, and hence admit groupoid models as in \cite[\S~6]{Li18}. Hence $B$ is indeed a C*-diagonal of $A$ by the same argument as in \cite[\S~5 -- 7]{Li18}. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \subsection{Building block C*-diagonals with path-connected spectra} \label{ss:BBCDiagPathConn} Let us now show that modification (conn) yields C*-diagonals with connected spectra. We need the following notation: Let $\mathcal Y_n \defeq \Spec DE_n$, $\bfdot{\mathcal Y}_n^p \defeq \Spec D\bfdot{E}_n^p$, $\mathcal Y_n^{\mathfrak p} \defeq \Spec DE_n^{\mathfrak p}$ so that $\mathcal Y_n = \mathcal Y_n^{\mathfrak p} \amalg \coprod_{p \neq \mathfrak p} \bfdot{\mathcal Y}_n^p$, $\mathcal X_n \defeq \Spec D\hat{F}_n$, $\mathcal X_n^i \defeq \Spec D\hat{F}_n^i$ and $\mathcal F_n^{(0)} \defeq \Spec DF_n$. We have $\mathcal F_n^{(0)} \cong (Z_n \times \mathcal X^{\bm{i}}_n) \amalg (\coprod_{i \neq \bm{i}} \gekl{\theta_n^i} \times \mathcal X_n^i)$. $\pi: \: F_n \onto \hat{F}_n$ restricts to $DF_n \onto D\hat{F}_n$, which induces $\mathcal X_n \into \mathcal F_n^{(0)}$ given by $\mathcal X_n^i \into \gekl{\theta_n^i} \times \mathcal X_n^i, \, x \ma (\theta_n^i, x)$ with respect to the identification of $\mathcal F_n^{(0)}$ we just mentioned. 
We identify $\mathcal X_n$ with a subset of $\mathcal F_n^{(0)}$ in this way. Let $\bm{b}_{n,\mathfrak r}^p: \: \mathcal Y_{n,\mathfrak r}^p \to \mathcal X_n$, where $\mathcal Y_{n,\mathfrak r}^p \defeq {\rm dom\,} \bm{b}_{n,\mathfrak r}^p \subseteq \mathcal Y_n^p$, be the map inducing $\beta_{n,\mathfrak r}^p$, define $\bm{b}_{n,\mathfrak r}: \: \mathcal Y_{n,\mathfrak r} \to \mathcal X_n$ correspondingly, and let $\bfdot{\bm{b}}_{n,\mathfrak r}^{\mathfrak p}: \: \bfdot{\mathcal Y}_{n,\mathfrak r}^{\mathfrak p} \to \mathcal X_n$, with $\bfdot{\mathcal Y}_{n,\mathfrak r}^{\mathfrak p} \defeq {\rm dom\,} \bfdot{\bm{b}}_{n,\mathfrak r}^{\mathfrak p} \subseteq \bfdot{\mathcal Y}_n^{\mathfrak p}$, be the map inducing $\bfdot{\beta}_{n,\mathfrak r}^{\mathfrak p}$. Let $\sim$ be the equivalence relation on $([0,1] \times \mathcal Y_n) \amalg \mathcal F_n^{(0)}$ generated by $(\mathfrak r,y) \sim \bm{b}_{n,\mathfrak r}(y) \in \mathcal X_n \subseteq \mathcal F_n^{(0)}$ for $\mathfrak r \in \gekl{0,1}$ and $y \in \mathcal Y_{n,\mathfrak r}$. We write $[\cdot]$ for the canonical projection map $([0,1] \times \mathcal Y_n) \amalg \mathcal F_n^{(0)} \onto \big( ([0,1] \times \mathcal Y_n) \amalg \mathcal F_n^{(0)} \big) / {}_{\sim}$ and identify $\mathcal F_n^{(0)}$ with its image under $[\cdot]$. Set $[0,1] \times_\bullet \mathcal Y_n \defeq \menge{(t,y) \in [0,1] \times \mathcal Y_n}{y \in \mathcal Y_{n,t} \ \text{if} \ t \in \gekl{0,1}}$. The following generalization of Lemma~\ref{lem:SpecB} is straightforward: \begin{lemma} \label{lem:SpecB_gen} We have a homeomorphism $\big( ([0,1] \times_\bullet \mathcal Y_n) \amalg \mathcal F_n^{(0)} \big) / {}_{\sim} \isom \Spec B_n$ sending $[t,y]$ (for $(t,y) \in [0,1] \times \mathcal Y_n$) to the character $B_n \to \mathbb{C}, \, (f,a) \ma y(f(t))$ and $\bm{x} \in \mathcal F_n^{(0)}$ to the character $B_n \to \mathbb{C}, \, (f,a) \ma \bm{x}(a)$. 
\end{lemma} Moreover, we always have $\mathcal Y_n^{\mathfrak p} = \bfdot{\mathcal Y}_n^{\mathfrak p} \amalg \mathcal X_n$, $\mathcal Y_{n,\mathfrak r}^{\mathfrak p} = \bfdot{\mathcal Y}_{n,\mathfrak r}^{\mathfrak p} \amalg \mathcal X_n$ and \begin{equation} \label{e:bn0bp} \bm{b}_{n,0}^{\mathfrak p} = \bfdot{\bm{b}}_{n,0}^{\mathfrak p} \amalg {\rm id}_{\mathcal X_n}. \end{equation} For $n=1$, $\bm{b}_{1,1}$ is given by $\bm{b}_{1,1}^{\mathfrak p} \vert_{\bfdot{\mathcal Y}_{1,1}^{\mathfrak p}} = \bfdot{\bm{b}}_{1,1}^{\mathfrak p}$, and if $\gekl{x_l}_{1 \leq l \leq [1,I]} = \mathcal X_1$ according to the enumeration of rank-one projections in $DM_{[1,I]}$ in \S~\ref{ss:(conn)}, we have \begin{equation} \label{e:b11bp} \bm{b}_{1,1}^{\mathfrak p}(x_l) = x_{l-1} \ \text{if} \ 2 \leq l \leq [1,I] \qquad \text{and} \ \bm{b}_{1,1}^{\mathfrak p}(x_1) = x_{[1,I]}. \end{equation} Now we need to describe the groupoid model $\bm{p} \defeq \bm{p}_n$ for $\varphi_n$. Let us drop the index $n+1$ and write $\mathcal Y \defeq \mathcal Y_{n+1}$, $\mathcal X \defeq \mathcal X_{n+1}$ and so on. By construction of $E^{\mathfrak q}$, we have a decomposition $\mathcal Y^{\mathfrak q} = \bfdot{\mathcal Y}^{\mathfrak q} \amalg \mathcal X$. Moreover, according to the decomposition of $\pi \circ \bfdot{\varphi}_F$ in \S~\ref{ss:CCCC} (see \eqref{e:phiF} -- \eqref{e:piphijbitheta} in combination with the definition of $\mathfrak e^i$, $\mathfrak e_\theta^{\bm{i}}$, $\mathfrak e_{\bm{Z}}$, $\mathfrak e_{F,C}$ in \S~\ref{ss:CCCC}), we have a decomposition of $\mathcal X \subseteq \mathcal Y^{\mathfrak q}$ as $\mathcal X = (\coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i]) \amalg (\mathcal X[\mathfrak e_\theta^{\bm{i}}] \amalg \mathcal X[\mathfrak e_{\bm{Z}}]) \amalg \mathcal X[\mathfrak e_{F,C}]$, where $\mathcal X[\mathfrak e] = \menge{x \in \mathcal X}{x(\mathfrak e) = 1}$. 
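To see how the cyclic formula \eqref{e:b11bp} creates connectedness at the first stage, consider the toy case $[1,I] = 3$ (for illustration only): by \eqref{e:bn0bp} and \eqref{e:b11bp}, the boundary identifications defining $\sim$ on $[0,1] \times \mathcal X_1$ become \begin{equation*} (0, x_l) \sim x_l \ \ (1 \leq l \leq 3), \qquad (1, x_3) \sim x_2, \quad (1, x_2) \sim x_1, \quad (1, x_1) \sim x_3, \end{equation*} so the three arcs $[0,1] \times \gekl{x_l}$ are glued end to end into a single closed path. In particular, any two of the points $[0,x_l]$ are joined by a continuous path, which is exactly the mechanism used for the case $n = 1$ in the proof of Proposition~\ref{prop:conn}.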
Define $\mathcal Y_{\rm conn}^{\mathfrak q} \defeq (\coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i]) \amalg \mathcal X[\mathfrak e_\theta^{\bm{i}}]$ and $\mathcal Y_{\rm rest}^{\mathfrak q} \defeq \mathcal X[\mathfrak e_{\bm{Z}}] \amalg \mathcal X[\mathfrak e_{F,C}]$. We have $\mathcal Y_{\rm conn}^{\mathfrak q} \subseteq \mathcal Y_\mathfrak r^{\mathfrak q}$ for $\mathfrak r = 0,1$. According to the construction of $\beta_0^{\mathfrak q} \defeq \beta_{n+1,0}^{\mathfrak q}$ in \S~\ref{ss:(conn)}, $\bm{b}_0 \defeq \bm{b}_{n+1,0}$ sends $x \in \mathcal Y_{\rm conn}^{\mathfrak q}$ to $x \in \mathcal X$. To describe $\bm{b}_1 \defeq \bm{b}_{n+1,1}$ on $\mathcal Y_{\rm conn}^{\mathfrak q}$, note that we have an identification \begin{equation} \label{e:X1X1=MX} \Big( \coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i] \Big) \amalg \mathcal X[\mathfrak e_\theta^{\bm{i}}] \isom \Big( \coprod_{i \neq \bm{i}} \Big( \coprod_j \mathcal M(j,i) \times \mathcal X_n^i \Big) \Big) \amalg \Big( \coprod_j \mathcal M(j,\bm{i}) \times \mathcal X_n^{\bm{i}} \Big) = \Big( \coprod_{i \neq \bm{i}} \mathcal M^i \times \mathcal X_n^i \Big) \amalg (\mathcal M^{\bm{i}} \times \mathcal X_n^{\bm{i}}) \end{equation} corresponding to the decomposition of $\pi \circ \bfdot{\varphi}_F$ in \S~\ref{ss:CCCC} (see \eqref{e:phiF} -- \eqref{e:piphi-bi} in combination with the definition of $\mathfrak e^i$, $\mathfrak e_\theta^{\bm{i}}$, $\mathfrak e_{\bm{Z}}$, $\mathfrak e_{F,C}$ in \S~\ref{ss:CCCC}), where $\mathcal M^i = \coprod_j \mathcal M(j,i)$ and $\mathcal M^{\bm{i}} = \coprod_j \mathcal M(j,\bm{i})$. 
With respect to \eqref{e:X1X1=MX}, if we write $\mathcal M^i = \big\{ \mu^i_1, \dotsc, \mu^i_{\sum_j m(j,i)} \big\}$ according to the enumeration of rank-one projections in $\bigoplus_j DM_{m(j,i)}$ in \S~\ref{ss:(conn)}, we have \begin{equation} \label{e:b1mux} \bm{b}_1(\mu^i_l,x) = (\mu^i_{l-1},x) \ {\rm if} \ 2 \leq l \leq \sum_j m(j,i) \qquad \text{and} \ \bm{b}_1(\mu^i_1,x) = (\mu^i_{\sum_j m(j,i)},x) \qquad \qquad \forall \ x \in \mathcal X_n^i, \end{equation} according to the construction of $\beta_1^{\mathfrak q} \defeq \beta_{n+1,1}^{\mathfrak q}$ in \S~\ref{ss:(conn)}. We also have $\mathcal Y_{\rm rest}^{\mathfrak q} \subseteq \mathcal Y_{\mathfrak r}^{\mathfrak q}$ for $\mathfrak r = 0,1$, and $\bm{b}_\mathfrak r$ sends $x \in \mathcal Y_{\rm rest}^{\mathfrak q}$ to $x \in \mathcal X$ for $\mathfrak r = 0,1$ according to the construction of $\beta_\mathfrak r^{\mathfrak q}$ in \S~\ref{ss:(conn)}. On $( \coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i] ) \amalg \mathcal X[\mathfrak e_\theta^{\bm{i}}]$, using the identification \eqref{e:X1X1=MX}, we have \begin{equation} \label{e:pmux} \bm{p}(\mu,x) = x \in \mathcal X_n^i \qquad \forall \ \mu \in \mathcal M^i, \, x \in \mathcal X_n^i \end{equation} according to the descriptions of the components of $\pi \circ \bfdot{\varphi}_F$ in \eqref{e:piphiji}, \eqref{e:piphijbitheta}, \eqref{e:piphi-i} and \eqref{e:piphi-bi}. 
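As a toy illustration of \eqref{e:b1mux} and \eqref{e:pmux} (for illustration only), suppose $\sum_j m(j,i) = 2$ and fix $x \in \mathcal X_n^i$. The boundary identifications then read \begin{equation*} (0, (\mu^i_l, x)) \sim (\mu^i_l, x) \ \ (l = 1,2), \qquad (1, (\mu^i_2, x)) \sim (\mu^i_1, x), \qquad (1, (\mu^i_1, x)) \sim (\mu^i_2, x), \end{equation*} so the two arcs $[0,1] \times \gekl{(\mu^i_l, x)}$ are glued into a loop, and by \eqref{e:pmux} this whole loop lies over the single point $x \in \mathcal X_n^i$ under $\bm{p}$. This is the kind of path used in the proof of Proposition~\ref{prop:conn} and in Remark~\ref{rem:Yconnconn} below.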
Furthermore, note that condition \eqref{e:phiCfp} implies that we have embeddings \begin{equation} \label{e:YnYn+1} \mathcal Y_n = \mathcal Y_n^{\mathfrak p} \amalg \coprod_{p \neq \mathfrak p} \bfdot{\mathcal Y}_n^p \into \mathcal Y_\mathfrak r^{\mathfrak q}, \qquad \mathfrak r = 0,1, \end{equation} sending $\mathcal Y_{n,\mathfrak r}$ into $\bm{b}_\mathfrak r^{-1}(( \coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i] ) \amalg \mathcal X[\mathfrak e_\theta^{\bm{i}}])$ such that the following diagram commutes for $\mathfrak r = 0,1$: \begin{equation} \label{e:CDbnb} \begin{tikzcd} \mathcal Y_{n,\mathfrak r} \arrow[d, "\bm{b}_{n,\mathfrak r}"'] \arrow[r, hook] & \bm{b}_\mathfrak r^{-1}((\coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i] ) \amalg \mathcal X[\mathfrak e_\theta^{\bm{i}}]) \arrow[d, "\bm{b}_\mathfrak r"] \\ \mathcal X_n & \arrow[l, "\bm{p}"'] ( \coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i] ) \amalg \mathcal X[\mathfrak e_\theta^{\bm{i}}] \end{tikzcd} \end{equation} \begin{proposition} \label{prop:conn} The C*-diagonals $B_n$ as in Lemma~\ref{lem:conn:Ell} have path-connected spectra for all $n = 1, 2, 3, \dotsc$. \end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} In the following, for two points $x_1$ and $x_2$, we write $x_1 \sim_{\rm conn} x_2$ if there exists a continuous path from $x_1$ to $x_2$, in a space which will be clear from the context or specified otherwise. We start with the observation that given $\bm{x} \in \mathcal F_n^{(0)} \setminus \mathcal X_n$, i.e., $\bm{x} \in (Z_n \setminus \gekl{\theta^{\bm{i}}_n}) \times \mathcal X_n^{\bm{i}}$, since $Z_n$ is path-connected, we have $\bm{x} \sim_{\rm conn} x \in \gekl{\theta_n^{\bm{i}}} \times \mathcal X_n^{\bm{i}} \subseteq \mathcal X_n$. Hence, to show that $\Spec B_n$ is path-connected, it suffices to show that $[[0,1] \times_\bullet \mathcal Y_n]$ is path-connected. 
Let us prove inductively on $n$ that $[[0,1] \times_\bullet \mathcal Y_n] \subseteq \Spec B_n$ is path-connected. Note that we can always make the following reduction: For all $(t,y) \in [0,1] \times \mathcal Y_n$, we have $y \in \mathcal Y_{n,0}$ because $\beta_{n,0}$ is always unital, and $[t,y] \sim_{\rm conn} [0,y]$. Moreover, given $\mathfrak r \in \gekl{0,1}$ and $y \in \mathcal Y_{n,\mathfrak r}$, since $\bm{b}_{n,0}: \: \mathcal X_n \subseteq \mathcal Y_{n,0}^{\mathfrak p} \to \mathcal X_n$ is surjective by \eqref{e:bn0bp}, there exists $\bar{y} \in \mathcal X_n \subseteq \mathcal Y_{n,0}^{\mathfrak p}$ such that $\bm{b}_{n,0}(\bar{y}) = \bm{b}_{n,\mathfrak r}(y)$ and thus $[0,\bar{y}] = [\mathfrak r,y]$. Hence it is enough to show for $y, \bar{y} \in \mathcal X_n \subseteq \mathcal Y_{n,0}^{\mathfrak p}$ that $[0,y] \sim_{\rm conn} [0,\bar{y}]$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} For $n=1$, this follows from the observation that we have $[0,x_{l+1}] \sim_{\rm conn} [1,x_{l+1}] = [0,x_l]$ for all $1 \leq l \leq [1,I] - 1$ because of \eqref{e:b11bp}. Now let us assume that $[[0,1] \times_\bullet \mathcal Y_n]$ is path-connected, and let us show that $[[0,1] \times_\bullet \mathcal Y_{n+1}]$ is path-connected. We use the same notation as in the description of $\bm{p}$ above (we also drop the index $n+1$). It suffices to show for all $y, \bar{y} \in \mathcal X \subseteq \mathcal Y_0^{\mathfrak q}$ that $[0,y] \sim_{\rm conn} [0,\bar{y}]$. 
We further reduce to $y, \bar{y} \in \mathcal Y_{\rm conn}^{\mathfrak q}$: If $y \in \mathcal X[\mathfrak e_{\bm{Z}}]$, then there exist $\mathfrak r \in \gekl{0,1}$ and $\tilde{y} \in \mathcal Y_\mathfrak r$ with $\bm{b}_\mathfrak r(\tilde{y}) = \bm{b}_0(y) \ (= y)$, and by \eqref{e:phiCx}, we must have $\bm{b}_{\mathfrak r^*}(\tilde{y}) \in \bm{p}^{-1}(\mathcal X_n) \cap \mathcal X = ( \coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i] ) \amalg \mathcal X[\mathfrak e_\theta^{\bm{i}}] = \mathcal Y_{\rm conn}^{\mathfrak q}$, where $\mathfrak r^* = 1 - \mathfrak r$. Hence $[0,y] = [\mathfrak r,\tilde{y}] \sim_{\rm conn} [\mathfrak r^*,\tilde{y}] = [0,y']$ for some $y' \in \mathcal Y_{\rm conn}^{\mathfrak q}$. If $y \in \mathcal X[\mathfrak e_{F,C}]$, then there must exist $\mathfrak r \in \gekl{0,1}$ and $\tilde{y} \in \mathcal Y_\mathfrak r$ with $\bm{b}_\mathfrak r(\tilde{y}) = \bm{b}_0(y) \ (= y)$, and by \eqref{e:phiCl}, we must have $\bm{b}_{\mathfrak r^*}(\tilde{y}) \in \bm{p}^{-1}(\mathcal X_n) \cap \mathcal X = ( \coprod_{i \neq \bm{i}} \mathcal X[\mathfrak e^i] ) \amalg \mathcal X[\mathfrak e_\theta^{\bm{i}}] = \mathcal Y_{\rm conn}^{\mathfrak q}$ (here $\mathcal X$ is viewed as a subset of $\Spec B_{n+1}$), where $\mathfrak r^* = 1 - \mathfrak r$. Hence $[0,y] = [\mathfrak r,\tilde{y}] \sim_{\rm conn} [\mathfrak r^*,\tilde{y}] = [0,y']$ for some $y' \in \mathcal Y_{\rm conn}^{\mathfrak q}$. Moreover, given $y \in \mathcal Y_{\rm conn}^{\mathfrak q}$ (for which we have $\bm{b}_0(y) = y \in \mathcal X$), by \eqref{e:CDbnb} there exists $y' \in \mathcal Y_{n,0} \subseteq \mathcal Y_0$ such that $\bm{p}(\bm{b}_0(y')) = \bm{p}(\bm{b}_0(y))$. 
Viewing $\bm{b}_0(y')$ as an element in $\mathcal Y_{\rm conn}^{\mathfrak q}$, let us now show that \begin{equation} \label{e:0bypc0by'} [0,\bm{b}_0(y')] = [0,\bm{b}_0(y)]: \end{equation} Under the bijection \eqref{e:X1X1=MX}, we have $y = (\mu,x)$ and $\bm{b}_0(y') = (\mu',x)$, where $x = \bm{p}(\bm{b}_0(y')) = \bm{p}(\bm{b}_0(y))$. Hence \eqref{e:0bypc0by'} follows from the following claim: \begin{equation} \label{e:0mxpc0m'x} \text{Under the bijection} \ \eqref{e:X1X1=MX}, \ \text{we have} \ [0,(\mu,x)] \sim_{\rm conn} [0,(\mu',x)] \ \text{for all} \ \mu, \mu' \in \mathcal M^i, \, x \in \mathcal X_n^i. \end{equation} This in turn follows from the observation that for all $l \in \gekl{1, \dotsc, (\sum_j m(j,i)) - 1}$ and $x \in \mathcal X_n^i$, we have $[0,(\mu_{l+1},x)] \sim_{\rm conn} [1,(\mu_{l+1},x)] = [0,(\mu_l,x)]$. The last equation follows from \eqref{e:b1mux}. So we have $[0,y] \sim_{\rm conn} [0,\bm{b}_0(y')] = [0,y']$. Hence it suffices to show $[0,y] \sim_{\rm conn} [0,\bar{y}]$ in $[[0,1] \times_\bullet \mathcal Y_{n+1}] \subseteq \Spec B_{n+1}$ for all $y, \bar{y} \in \mathcal Y_{n,0}$. By induction hypothesis, we have $[0,y] \sim_{\rm conn} [0,\bar{y}]$ in $[[0,1] \times_\bullet \mathcal Y_n] \subseteq \Spec B_n$. Hence there exist $(\mathfrak r_k,y_k) \in \gekl{0,1} \times \mathcal Y_n$, $0 \leq k \leq K$, such that $(\mathfrak r_0,y_0) = (0,y)$, $(\mathfrak r_K,y_K) = (0,\bar{y})$ and for all $0 \leq k \leq K - 1$, we have $[\mathfrak r_k,y_k] = [\mathfrak r_{k+1},y_{k+1}]$ in $\Spec B_n$ or $y_k = y_{k+1}$, $\mathfrak r_{k+1} = \mathfrak r_k^*$ (where $\mathfrak r_k^* = 1 - \mathfrak r_k$). Clearly, in the latter case, we have $[\mathfrak r_k,y_k] \sim_{\rm conn} [\mathfrak r_k^*,y_k] = [\mathfrak r_{k+1},y_{k+1}]$ in $[[0,1] \times_\bullet \mathcal Y_{n+1}] \subseteq \Spec B_{n+1}$. 
To treat the former case, we need to show that $[\mathfrak r_k,y_k] = [\mathfrak r_{k+1},y_{k+1}]$ in $\Spec B_n$ (i.e., $\bm{b}_{n,\mathfrak r_k}(y_k) = \bm{b}_{n,\mathfrak r_{k+1}}(y_{k+1})$) implies $[\mathfrak r_k,y_k] \sim_{\rm conn} [\mathfrak r_{k+1},y_{k+1}]$ in $[[0,1] \times_\bullet \mathcal Y_{n+1}] \subseteq \Spec B_{n+1}$, where we view $y_k$ and $y_{k+1}$ as elements of $\mathcal Y_{\mathfrak r_k}^{\mathfrak q}$ and $\mathcal Y_{\mathfrak r_{k+1}}^{\mathfrak q}$ using \eqref{e:YnYn+1}. We have $$ \bm{p}(\bm{b}_{\mathfrak r_k}(y_k)) \overset{\eqref{e:CDbnb}}{=} \bm{b}_{n,\mathfrak r_k}(y_k) = \bm{b}_{n,\mathfrak r_{k+1}}(y_{k+1}) = \bm{p} (\bm{b}_{\mathfrak r_{k+1}}(y_{k+1})). $$ Thus, viewing $\bm{b}_{\mathfrak r_k}(y_k)$, $\bm{b}_{\mathfrak r_{k+1}}(y_{k+1})$ as elements of $\mathcal Y_{\rm conn}^{\mathfrak q}$, we have $\bm{b}_{\mathfrak r_k}(y_k) = (\mu,x)$ and $\bm{b}_{\mathfrak r_{k+1}}(y_{k+1}) = (\mu',x)$ for some $\mu, \mu' \in \mathcal M^i$ and $x \in \mathcal X_n^i$ with respect to \eqref{e:X1X1=MX}. Hence \eqref{e:0mxpc0m'x} implies that, in $[[0,1] \times_\bullet \mathcal Y_{n+1}] \subseteq \Spec B_{n+1}$, we have \begin{equation*} [\mathfrak r_k,y_k] = [0, \bm{b}_{\mathfrak r_k}(y_k)] = [0, (\mu,x)] \sim_{\rm conn} [0, (\mu',x)] = [0, \bm{b}_{\mathfrak r_{k+1}}(y_{k+1})] = [\mathfrak r_{k+1},y_{k+1}]. \qedhere \end{equation*} \end{proof} \begin{remark}\em \label{rem:Yconnconn} The proof of \eqref{e:0mxpc0m'x} yields that for all $y, \bar{y} \in \mathcal Y_{\rm conn}^{\mathfrak q}$ with $\bm{p}[0,y] = \bm{p}[0,\bar{y}]$, there exists a continuous path $\xi$ in $[[0,1] \times_\bullet \mathcal Y_{n+1}]$ with $\xi(0) = [0,y]$, $\xi(1) = [0,\bar{y}]$ and $\bm{p} \circ \xi \equiv \bm{p}[0,y] = \bm{p}[0,\bar{y}]$. \end{remark} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{corollary} \label{cor:conn_unital} In the unital case, modification (conn) yields C*-diagonals with connected spectra. 
\end{corollary} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} The C*-diagonal is given by $B = \ilim_n \gekl{B_n, \varphi_n}$, so that its spectrum is $\Spec B \cong \plim_n \gekl{\Spec B_n, \bm{p}_n}$. In the unital case, $B_n$ is unital for all $n$, so that $\Spec B_n$ is compact for all $n$. By Proposition~\ref{prop:conn}, $\Spec B_n$ is path-connected, in particular connected. Now our claim follows from the general fact that inverse limits of compact connected spaces are again connected (see for instance \cite[Theorem~6.1.20]{Eng}). \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} In the stably projectionless case, we cannot argue as for Corollary~\ref{cor:conn_unital} because it is no longer true in general that inverse limits of locally compact, non-compact, connected spaces are again connected. Instead, by conjugating $\beta_{n+1,\bullet}$ by suitable permutation matrices and adjusting $\varphi$ accordingly, we can always arrange that the $\lambda$s in \eqref{e:fplp} are monotone and that, in addition to \eqref{e:phiCfp} -- \eqref{e:phiCx}, we have the following: \begin{align} \label{e:phiCl*} & \forall \ \lambda, \ \text{corresponding block diagonal entry} \ f^p \circ \lambda \ \text{in} \ \varphi_C(f,a) \ \text{as in} \ \eqref{e:fplp}, \ \mathfrak r \ \text{as in} \ \eqref{e:phiCl} \ \text{with} \ \lambda(\mathfrak r) \notin \gekl{0,1} \tag{11$^*$} \\ & \exists \ \text{a block diagonal entry} \ f^p \circ \lambda^* \ \text{in} \ \varphi_C(f,a) \ \text{as in} \ \eqref{e:fplp} \ \text{with} \ \lambda^*(\mathfrak r^*) = \lambda(\mathfrak r), \, \lambda^*(\mathfrak r) = \lambda(\mathfrak r^*)^*, \nonumber \\ & \text{unless} \ p = \grave{p}, \ \text{in which case} \ \lambda(\mathfrak r^*) = 0; \nonumber \\ \label{e:phiClx} & \forall \ \lambda, \mathfrak r \ \text{as in} \ \eqref{e:phiCl} \ \text{with} \ \bm{t} \defeq \lambda(\mathfrak r) \notin \gekl{0,1} \ \text{and the corresponding block diagonal entry} \ f^p \circ 
\lambda \ \text{in} \ \eqref{e:fplp}, \tag{11$^{**}$}\\ & \text{we have that} \ f^p(\bm{t}) \ \text{appears as exactly one block diagonal entry in} \ \varphi_{F,C}(f) \ \text{in} \ \eqref{e:phiF}. \nonumber \end{align} \begin{proposition} \label{prop:conn_spl} In the stably projectionless case, modification (conn) with the above-mentioned adjustments yields C*-diagonals with connected spectra. \end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} The C*-diagonal is given by $B = \ilim_n \gekl{B_n, \varphi_n}$, so that its spectrum is $\Spec B \cong \plim_n \gekl{\Spec B_n, \bm{p}_n}$. Let $\bm{p}_{n,\infty}: \: \Spec B \onto \Spec B_n$ be the canonical map from the inverse limit structure of $\Spec B$, and denote by $\bm{p}_{n,\bar{n}}: \: \Spec B_{\bar{n}+1} \onto \Spec B_n$ the composition $\bm{p}_{\bar{n}} \circ \dotso \circ \bm{p}_n$. Now define for each $N \geq 1$ the intervals $I_y \defeq [0,1]$ for $y \in \mathcal Y_{1,1}$, $I_y \defeq [0,1 - \tfrac{1}{N}]$ for $y \notin \mathcal Y_{1,1}$, and the subset $K_{N,1} \defeq [(\bigcup_{y \in \mathcal Y_1} I_y \times \gekl{y}) \amalg \mathcal F_1^{(0)}] \subseteq \Spec B_1$. Now it is straightforward to check by induction on $n$ that $\bm{p}_{1,n}^{-1}(K_{N,1}) = (\bigcup_{y \in \mathfrak Y} [\mathfrak I_y \times \gekl{y}]) \cup (\bigcup_{x \in \mathfrak X} [Z_n \times \gekl{x}])$ where $\mathfrak Y$ is a subset of $\mathcal Y_n$, $\mathfrak I_y$ is of the form $[0,1]$, $[0,t]$ or $[t,1]$ for some $t \in [0,1]$, $\mathfrak X$ is a subset of $\mathcal X_n^{\bm{i}}$, for all $\tilde{y} \in \mathfrak Y$ with $\mathfrak I_{\tilde{y}} \neq [0,1]$ there exists $y \in \mathfrak Y$ with $\mathfrak I_y = [0,1]$ and $[\mathfrak I_{\tilde{y}} \times \gekl{\tilde{y}}] \cap [\mathfrak I_y \times \gekl{y}] \neq \emptyset$, and for all $x \in \mathfrak X$ there exists $y \in \mathfrak Y$ with $\mathfrak I_y = [0,1]$ and $[Z_n \times \gekl{x}] \cap [\mathfrak I_y \times \gekl{y}] \neq \emptyset$. 
Now we proceed inductively on $n$ to show that $\bm{p}_{1,n}^{-1}(K_{N,1})$ is path-connected for all $n$. The case $n=1$ is checked as in Proposition~\ref{prop:conn}. For the induction step, first reduce as in Proposition~\ref{prop:conn} to showing that $\bigcup_y [\mathfrak I_y \times \gekl{y}]$ is path-connected, where the union is taken over all $y \in \mathfrak Y$ with $\mathfrak I_y = [0,1]$. Further reduce as in Proposition~\ref{prop:conn} to the statement that for all $y, \bar{y} \in \mathfrak Y$ with $\mathfrak I_y = [0,1]$, $\mathfrak I_{\bar{y}} = [0,1]$ and $y, \bar{y} \in \mathcal Y_{\rm conn}^\mathfrak q$ that $[0,y] \sim_{\rm conn} [0,\bar{y}]$ in $\bm{p}_{1,n}^{-1}(K_{N,1})$. Here the case $y \in \mathcal X[\mathfrak e_{\bm{Z}}]$ is treated as in Proposition~\ref{prop:conn}, while the case $y \in \mathcal X[\mathfrak e_{F,C}]$ uses \eqref{e:phiCl*} and \eqref{e:phiClx}. Now use the induction hypothesis as in Proposition~\ref{prop:conn} to show that we indeed have $[0,y] \sim_{\rm conn} [0,\bar{y}]$ in $\bm{p}_{1,n}^{-1}(K_{N,1})$ for all $y, \bar{y} \in \mathfrak Y$ with $\mathfrak I_y = [0,1]$, $\mathfrak I_{\bar{y}} = [0,1]$ and $y, \bar{y} \in \mathcal Y_{\rm conn}^\mathfrak q$. As $\bm{p}_{1,n}$ is proper, $\bm{p}_{1,n}^{-1}(K_{N,1})$ is compact. Hence it follows that $K_N \defeq \bm{p}_{1,\infty}^{-1}(K_{N,1}) \cong \plim_n \big\{ \bm{p}_{1,n}^{-1}(K_{N,1}), \bm{p}_n \big\}$ is connected (see for instance \cite[Theorem~6.1.20]{Eng}). Therefore $\Spec B = \bigcup_N K_N$ is connected as it is the increasing union of connected subsets. 
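Here we used the following standard fact: since the intervals $[0, 1 - \tfrac{1}{N}]$ increase with $N$, we have $K_{N,1} \subseteq K_{N+1,1}$ and hence $K_N \subseteq K_{N+1}$, so the $K_N$ form a nested sequence of connected sets, and \begin{equation*} K_1 \subseteq K_2 \subseteq \dotso \ \text{connected} \ \Longrightarrow \ \bigcup_N K_N \ \text{connected}; \end{equation*} indeed, if $\bigcup_N K_N$ were partitioned into two non-empty relatively open subsets, then by nestedness some $K_N$ would meet both of them and would itself be disconnected.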
\end{proof} All in all, we obtain the following \begin{theorem} \label{thm:conn_Ell} For every prescribed Elliott invariant $(G_0, G_0^+, u, T, r, G_1)$ as in \cite[Theorem~1.2]{Li18}, our construction produces a twisted groupoid $(G,\Sigma)$ with the same properties as in \cite[Theorem~1.2]{Li18} (in particular, $C^*_r(G,\Sigma)$ is a classifiable unital C*-algebra with ${\rm Ell}(C^*_r(G,\Sigma)) \cong (G_0, G_0^+, u, T, r, G_1)$) such that $G$ has connected unit space. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} For every prescribed Elliott invariant $(G_0, T, \rho, G_1)$ as in \cite[Theorem~1.3]{Li18}, our construction produces a twisted groupoid $(G,\Sigma)$ with the same properties as in \cite[Theorem~1.3]{Li18} (in particular, $C^*_r(G,\Sigma)$ is classifiable stably projectionless with continuous scale, and ${\rm Ell}(C^*_r(G,\Sigma)) \cong (G_0, \gekl{0}, T, \rho, G_1)$) such that $G$ has connected unit space. \end{theorem} This theorem, in combination with classification results for all classifiable C*-algebras, implies Theorem~\ref{thm:main1}. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \section{Further modification of the construction leading to the path-lifting property} \label{s:FurtherMod} Let us now present a further modification of the construction recalled in \S~\ref{ss:CCCC} which will allow us to produce C*-diagonals with Menger manifold spectra. We focus on constructing classifiable C*-algebras (unital or stably projectionless with continuous scale) with torsion-free $K_0$ and trivial $K_1$. In that case, the construction recalled in \S~\ref{ss:CCCC} simplifies because $F_n = \hat{F}_n$ for all $n$, so that we can (and will) think of $A_n$ as a sub-C*-algebra of $C([0,1],E_n)$. 
\subsection{Modification (path)} \label{ss:(path)} Suppose that we are given a tuple $\mathcal E = (G_0, G_0^+, u, T, r, G_1)$ as in \cite[Theorem~1.2]{Li18} or $\mathcal E = (G_0, \gekl{0}, T, \rho, G_1)$ as in \cite[Theorem~1.3]{Li18} which we want to realize as the Elliott invariant of a classifiable C*-algebra, with $G_0$ torsion-free and $G_1 = \gekl{0}$. As explained in \cite[\S~2]{Li18}, the construction recalled in \S~\ref{ss:CCCC} proceeds in two steps. First an inductive system $\{ \bfdot{A}_n, \accir{\varphi}_n \}$ is constructed so that $\ilim_n \{ \bfdot{A}_n, \accir{\varphi}_n \}$ has the desired Elliott invariant, but is not simple, and then a further modification yields an inductive system $\{ \bfdot{A}_n, \bfdot{\varphi}_n \}$ such that $\ilim_n \{ \bfdot{A}_n, \bfdot{\varphi}_n \}$ has the same Elliott invariant and in addition is simple. The first step in our modification (path) is as in the previous modification (conn) (see \S~\ref{ss:(conn)}) and produces the first building block $A_1$. Now suppose that we have produced $$ A_1 \overset{\varphi_1}{\longrightarrow} A_2 \overset{\varphi_2}{\longrightarrow} \dotso \overset{\varphi_{n-1}}{\longrightarrow} A_n, $$ and that the first step of the original construction as in \cite[\S~2]{Li18} yields $\accir{\varphi}_n: \: A_n \to \bfdot{A}_{n+1}$. We modify $\accir{\varphi}_n$ in two steps, first to $\bfdot{\varphi}_n: \: A_n \to \bfdot{A}_{n+1}$, then to $\varphi_n: \: A_n \to A_{n+1}$. Let us start with the first step. We use the same notation as in \S~\ref{ss:CCCC} and \S~\ref{ss:(conn)}. Recall the description of $\beta_{n,\mathfrak r}^{p,i}$ in \eqref{e:beta=}; it is a composition of the form $$ F_n^i \overset{1 \otimes {\rm id}_{F_n^i}}{\longrightarrow} 1_{m_\mathfrak r(p,i)} \otimes F_n^i \subseteq M_{m_\mathfrak r(p,i)} \otimes F_n^i \tailarr E_n^p. 
$$ Here and in the sequel, an arrow $\tailarr$ denotes a *-homomorphism of multiplicity $1$ sending diagonal matrices to diagonal matrices as before. Let $\psi_n: \: F_n \to F_{n+1}$ be as in \cite[\S~2]{Li18}. The map $$ \psi_n^{j,i}: \: F_n^i \into F_n \overset{\psi_n}{\longrightarrow} F_{n+1} \onto F_{n+1}^j $$ is given by the following composition: \begin{equation*} F_n^i \overset{1 \otimes {\rm id}_{\hat{F}_n^i}}{\longrightarrow} 1_{m(j,i)} \otimes F_n^i \subseteq M_{m(j,i)} \otimes F_n^i \tailarr F_{n+1}^j. \end{equation*} By choosing $G'$ in \cite[\S~2]{Li18} suitably and because of \cite[Inequality~(2)]{Li18}, we can always arrange that there exist pairwise distinct indices $\gekl{j_0^p}_p \cup \gekl{j_1^p}_{p \neq \grave{p}}$ such that we have $m(j_\mathfrak r^p,i) \geq m_\mathfrak r(p,i)$ for all $p$, $i$, $\mathfrak r = 0,1$ ($p \neq \grave{p}$ if $\mathfrak r = 1$). Then for suitable embeddings $E_n^p \tailarr F_{n+1}^{j_\bullet^p}$ sending $DE_n^p$ into $DF_{n+1}^{j_\bullet^p}$, $\psi^{j_\bullet^p}$ is of the form $F_n \to E_n^p \oplus \bar{F}_{n+1}^{j_\bullet^p} \tailarr F_{n+1}^{j_\bullet^p}$, for some finite-dimensional algebra $\bar{F}_{n+1}^{j_\bullet^p}$, where the first map is given by $ \rukl{ \begin{smallmatrix} \beta_{n,\bullet}^p & 0 \\ 0 & \bar{\psi}^{j_\bullet^p} \end{smallmatrix} } $ for some map $\bar{\psi}^{j_\bullet^p}: \: F_n \to \bar{F}_{n+1}^{j_\bullet^p}$. Let $\varepsilon_\beta^{j_\bullet^p} \defeq \rukl{ \begin{smallmatrix} \beta_{n,\bullet}^p(1_{F_n}) & 0 \\ 0 & 0 \end{smallmatrix} } $, viewed as a projection in $F_{n+1}^{j_\bullet^p}$ via the second embedding $E_n^p \oplus \bar{F}_{n+1}^{j_\bullet^p} \tailarr F_{n+1}^{j_\bullet^p}$. We start discussing the connecting map and will drop indices whenever convenient. 
$\accir{\varphi} \defeq \accir{\varphi}_n: \: A_n \to \bfdot{A}_{n+1}$ is given by $\accir{\varphi}_F: \: A_n \overset{\accir{\varphi}}{\longrightarrow} \bfdot{A}_{n+1} \to F_{n+1}$, $\accir{\varphi}_F(f,a) = \psi(a)$, and $\accir{\varphi}_C: \: A_n \overset{\accir{\varphi}}{\longrightarrow} \bfdot{A}_{n+1} \to C([0,1],\bfdot{E}_{n+1})$, $\accir{\varphi}_C(f,a) = \rukl{ \begin{smallmatrix} \Phi(f) & 0 \\ 0 & \Phi_F(a) \end{smallmatrix} } $. Let $\varepsilon_\Phi$ be the smallest projection in $D\bfdot{E}$ such that $\Phi(f)(t) = \varepsilon_\Phi \cdot \Phi(f)(t) \cdot \varepsilon_\Phi$ for all $t \in [0,1]$, and let $\varepsilon_{C,F} \in D\bfdot{E}$ be such that $\Phi_F(1_{F_n}) \equiv \varepsilon_{C,F}$. We have a decomposition $\varepsilon_\Phi = \sum_{q,p} \varepsilon_\Phi^{q,p}$, $\varepsilon_\Phi^{q,p} = \varepsilon^+_{q,p} + \varepsilon_+^{q,p} + \varepsilon^-_{q,p} + \varepsilon_-^{q,p}$ into pairwise orthogonal projections in $D\bfdot{E}$ such that, for all $q, p$, \begin{align*} \varepsilon^+_{q,p} \cdot \Phi(f) \cdot \varepsilon^+_{q,p} &= e^+_{q,p} \otimes f^p, &\varepsilon_+^{q,p} \cdot \Phi(f) \cdot \varepsilon_+^{q,p} &= e_+^{q,p} \otimes f^p,\\ \varepsilon^-_{q,p} \cdot \Phi(f) \cdot \varepsilon^-_{q,p} &= e^-_{q,p} \otimes f^p \circ (1 - {\rm id}), &\varepsilon_-^{q,p} \cdot \Phi(f) \cdot \varepsilon_-^{q,p} &= e_-^{q,p} \otimes f^p \circ (1 - {\rm id}), \end{align*} for some finite-rank projections $e^+_{q,p}, e_+^{q,p}, e^-_{q,p}, e_-^{q,p}$ encoding multiplicities of block diagonal entries in $\Phi$. In the unital case, we can always arrange \begin{equation} \label{e:e>1_unital} {\rm rk}\, e^+_{q,p}, {\rm rk}\, e_+^{q,p}, {\rm rk}\, e^-_{q,p}, {\rm rk}\, e_-^{q,p} \geq 1 \quad \forall \ q, p. 
\end{equation} In the stably projectionless case, we can always arrange that \begin{equation} \label{e:e>1_spl1} {\rm rk}\, e^+_{q,p}, {\rm rk}\, e_+^{q,p}, {\rm rk}\, e^-_{q,p}, {\rm rk}\, e_-^{q,p} \geq 1 \quad \forall \ q \neq \grave{q}, \, p \neq \grave{p}, \qquad \text{and} \ {\rm rk}\, e^+_{\grave{q},\grave{p}} \geq 1, \end{equation} as well as ${\rm rk}\, e^+_{q,p}, {\rm rk}\, e_+^{q,p}, {\rm rk}\, e^-_{q,p}, {\rm rk}\, e_-^{q,p} = 0$ for all $q = \grave{q}, p \neq \grave{p}$ or $q \neq \grave{q}, p = \grave{p}$, and ${\rm rk}\, e_+^{\grave{q},\grave{p}}, {\rm rk}\, e^-_{\grave{q},\grave{p}}, {\rm rk}\, e_-^{\grave{q},\grave{p}} = 0$. $\beta_\mathfrak r^{q,j}$ is a composition as in \eqref{e:beta=} of the form $F^j \overset{1 \otimes {\rm id}_{F^j}}{\longrightarrow} 1_{m_\mathfrak r(q,j)} \otimes F^j \subseteq M_{m_\mathfrak r(q,j)} \otimes F^j \tailarr \bfdot{E}^q$. By replacing $\bfdot{E}^q$ by $M_{{n+1,q} + N \cdot [n+1,J]}$ containing $\bfdot{E}^q \oplus F^{\oplus N}$ in the canonical way, and by replacing $\beta_\mathfrak r^q$ by $\beta_\mathfrak r^q \oplus {\rm id}_{F^{\oplus N}}$ as in modification (conn), we can arrange that, for all $q,p$, \begin{equation*} m_0(q,j_0^p) \geq {\rm rk}\, e^+_{q,p}, \quad m_1(q,j_1^p) \geq {\rm rk}\, e_+^{q,p}, \quad m_1(q,j_0^p) \geq {\rm rk}\, e^-_{q,p}, \quad m_0(q,j_1^p) \geq {\rm rk}\, e_-^{q,p}. 
\end{equation*} By further enlarging $\bfdot{E}^q$ as above, and by conjugating $\beta_\mathfrak r^q$ by suitable permutation matrices if necessary, we can arrange that there exist a decomposition $\varepsilon_{C,F} = (\sum_{q,p} \underline{\varepsilon}^{q,p}) + (\sum_{q,p} \bar{\varepsilon}_{q,p}) + \varepsilon_{\rm const}$ into pairwise orthogonal projections in $D\bfdot{E}$ such that for all $q,p$ and $\mathfrak r = 0,1$, \begin{align*} \beta_\mathfrak r^q \circ (\varepsilon_\beta^{j_\mathfrak s^p} \cdot \psi^{j_\mathfrak s^p} \cdot \varepsilon_\beta^{j_\mathfrak s^p}) =& \ \varepsilon^+_{q,p} \cdot ( \beta_\mathfrak r^q \circ (\varepsilon_\beta^{j_\mathfrak s^p} \cdot \psi^{j_\mathfrak s^p} \cdot \varepsilon_\beta^{j_\mathfrak s^p}) ) \cdot \varepsilon^+_{q,p} + \varepsilon_+^{q,p} \cdot ( \beta_\mathfrak r^q \circ (\varepsilon_\beta^{j_\mathfrak s^p} \cdot \psi^{j_\mathfrak s^p} \cdot \varepsilon_\beta^{j_\mathfrak s^p}) ) \cdot \varepsilon_+^{q,p}\\ &+ \varepsilon^-_{q,p} \cdot ( \beta_\mathfrak r^q \circ (\varepsilon_\beta^{j_\mathfrak s^p} \cdot \psi^{j_\mathfrak s^p} \cdot \varepsilon_\beta^{j_\mathfrak s^p}) ) \cdot \varepsilon^-_{q,p} + \varepsilon_-^{q,p} \cdot ( \beta_\mathfrak r^q \circ (\varepsilon_\beta^{j_\mathfrak s^p} \cdot \psi^{j_\mathfrak s^p} \cdot \varepsilon_\beta^{j_\mathfrak s^p}) ) \cdot \varepsilon_-^{q,p}\\ &+ \underline{\varepsilon}^{q,p} \cdot ( \beta_\mathfrak r^q \circ (\varepsilon_\beta^{j_\mathfrak s^p} \cdot \psi^{j_\mathfrak s^p} \cdot \varepsilon_\beta^{j_\mathfrak s^p}) ) \cdot \underline{\varepsilon}^{q,p} + \bar{\varepsilon}_{q,p} \cdot ( \beta_\mathfrak r^q \circ (\varepsilon_\beta^{j_\mathfrak s^p} \cdot \psi^{j_\mathfrak s^p} \cdot \varepsilon_\beta^{j_\mathfrak s^p}) ) \cdot \bar{\varepsilon}_{q,p}, \end{align*} and pairwise orthogonal finite-rank projections $\underline{e}^{q,p}, e_{\scriptscriptstyle (\diagup)}^{q,p}, e_{\scriptscriptstyle (\diagdown)}^{q,p}, \bar{e}_{q,p}, e^{\scriptscriptstyle (\diagup)}_{q,p}, 
e^{\scriptscriptstyle (\diagdown)}_{q,p}$ encoding multiplicities of block diagonal entries in $\Phi$, such that we have, for all $q,p$, \begin{align*} &\varepsilon^+_{q,p} \cdot ( \beta_0^q \circ (\varepsilon_\beta^{j_0^p} \cdot \psi^{j_0^p} \cdot \varepsilon_\beta^{j_0^p}) ) \cdot \varepsilon^+_{q,p} = e^+_{q,p} \otimes \beta_{n,0}^p, \ \ \underline{\varepsilon}^{q,p} \cdot ( \beta_0^q \circ (\varepsilon_\beta^{j_0^p} \cdot \psi^{j_0^p} \cdot \varepsilon_\beta^{j_0^p}) ) \cdot \underline{\varepsilon}^{q,p} = \underline{e}^{q,p} \otimes \beta_{n,0}^p + e_{\scriptscriptstyle (\diagdown)}^{q,p} \otimes \beta_{n,0}^p,\\ &\varepsilon^-_{q,p} \cdot ( \beta_1^q \circ (\varepsilon_\beta^{j_0^p} \cdot \psi^{j_0^p} \cdot \varepsilon_\beta^{j_0^p}) ) \cdot \varepsilon^-_{q,p} = e^-_{q,p} \otimes \beta_{n,0}^p, \ \ \underline{\varepsilon}^{q,p} \cdot ( \beta_1^q \circ (\varepsilon_\beta^{j_0^p} \cdot \psi^{j_0^p} \cdot \varepsilon_\beta^{j_0^p}) ) \cdot \underline{\varepsilon}^{q,p} = \underline{e}^{q,p} \otimes \beta_{n,0}^p + e_{\scriptscriptstyle (\diagup)}^{q,p} \otimes \beta_{n,0}^p,\\ &\varepsilon_-^{q,p} \cdot ( \beta_0^q \circ (\varepsilon_\beta^{j_1^p} \cdot \psi^{j_1^p} \cdot \varepsilon_\beta^{j_1^p}) ) \cdot \varepsilon_-^{q,p} = e_-^{q,p} \otimes \beta_{n,1}^p, \ \ \bar{\varepsilon}_{q,p} \cdot ( \beta_0^q \circ (\varepsilon_\beta^{j_1^p} \cdot \psi^{j_1^p} \cdot \varepsilon_\beta^{j_1^p}) ) \cdot \bar{\varepsilon}_{q,p} = \bar{e}_{q,p} \otimes \beta_{n,1}^p + e^{\scriptscriptstyle (\diagup)}_{q,p} \otimes \beta_{n,1}^p,\\ &\varepsilon_+^{q,p} \cdot ( \beta_1^q \circ (\varepsilon_\beta^{j_1^p} \cdot \psi^{j_1^p} \cdot \varepsilon_\beta^{j_1^p}) ) \cdot \varepsilon_+^{q,p} = e_+^{q,p} \otimes \beta_{n,1}^p, \ \ \bar{\varepsilon}_{q,p} \cdot ( \beta_1^q \circ (\varepsilon_\beta^{j_1^p} \cdot \psi^{j_1^p} \cdot \varepsilon_\beta^{j_1^p}) ) \cdot \bar{\varepsilon}_{q,p} = \bar{e}_{q,p} \otimes \beta_{n,1}^p + e^{\scriptscriptstyle (\diagdown)}_{q,p} \otimes 
\beta_{n,1}^p, \end{align*} and $\varepsilon \cdot ( \beta_\mathfrak r^q \circ (\varepsilon_\beta^{j_\mathfrak s^p} \cdot \psi^{j_\mathfrak s^p} \cdot \varepsilon_\beta^{j_\mathfrak s^p}) ) \cdot \varepsilon = 0$ for all remaining choices of $\mathfrak r, \mathfrak s \in \gekl{0,1}$ and $\varepsilon \in \gekl{\varepsilon^+_{q,p}, \varepsilon_+^{q,p}, \varepsilon^-_{q,p}, \varepsilon_-^{q,p}, \bar{\varepsilon}_{q,p}, \underline{\varepsilon}^{q,p}}$. In the stably projectionless case, we have $\bar{\varepsilon}_{q,\grave{p}} = 0$ for all $q$ by arrangement. Moreover, we can always arrange that \begin{equation} \label{e:e>1_spl2} {\rm rk}\, e_{\scriptscriptstyle (\diagdown)}^{\grave{q},\grave{p}} + {\rm rk}\, e_{\scriptscriptstyle (\diagup)}^{\grave{q},\grave{p}} \geq 1. \end{equation} Now define $\bfdot{\varphi} = \bfdot{\varphi}_n: \: A_n \to \bfdot{A}_{n+1}$ by setting $\bfdot{\varphi}_F^j: \: A_n \overset{\bfdot{\varphi}}{\longrightarrow} \bfdot{A}_{n+1} \to F_{n+1} \onto F_{n+1}^j$ and $\bfdot{\varphi}_C: \: A_n \overset{\bfdot{\varphi}}{\longrightarrow} \bfdot{A}_{n+1} \onto C([0,1],\bfdot{E}_{n+1})$ as follows: \begin{eqnarray} \label{e:phiF=1/2} \bfdot{\varphi}_F^{j_\mathfrak r^p}(f,a) &\defeq& \rukl{ \begin{smallmatrix} f^p(\tfrac{1}{2}) & 0 \\ 0 & \bar{\psi}^{j_\mathfrak r^p}(a) \end{smallmatrix} }, \quad {\rm and} \ \bfdot{\varphi}_F^j(f,a) \defeq \accir{\varphi}_F^j(f,a) \ {\rm for} \ j \notin \gekl{j_0^p, j_1^p};\\ \label{e:phiC_path} \bfdot{\varphi}_C &=& \sum_{q,p} \Big( \varepsilon^+_{q,p} \cdot \bfdot{\varphi}_C \cdot \varepsilon^+_{q,p} + \varepsilon_+^{q,p} \cdot \bfdot{\varphi}_C \cdot \varepsilon_+^{q,p} + \varepsilon^-_{q,p} \cdot \bfdot{\varphi}_C \cdot \varepsilon^-_{q,p} + \varepsilon_-^{q,p} \cdot \bfdot{\varphi}_C \cdot \varepsilon_-^{q,p}\\ &&+ \ \underline{\varepsilon}^{q,p} \cdot \bfdot{\varphi}_C \cdot \underline{\varepsilon}^{q,p} + \bar{\varepsilon}_{q,p} \cdot \bfdot{\varphi}_C \cdot \bar{\varepsilon}_{q,p} \Big) + \varepsilon_{\rm 
const} \cdot \bfdot{\varphi}_C \cdot \varepsilon_{\rm const}; \nonumber \\ \varepsilon^+_{q,p} \cdot \bfdot{\varphi}_C(f,a) \cdot \varepsilon^+_{q,p} &\defeq& e^+_{q,p} \otimes f^p \circ (\tfrac{1}{2} + \tfrac{1}{2} \cdot {\rm id}), \quad \varepsilon_+^{q,p} \cdot \bfdot{\varphi}_C(f,a) \cdot \varepsilon_+^{q,p} \defeq e_+^{q,p} \otimes f^p \circ (\tfrac{1}{2} \cdot {\rm id}), \nonumber \\ \varepsilon^-_{q,p} \cdot \bfdot{\varphi}_C(f,a) \cdot \varepsilon^-_{q,p} &\defeq& e^-_{q,p} \otimes f^p \circ (1 - \tfrac{1}{2} \cdot {\rm id}), \quad \varepsilon_-^{q,p} \cdot \bfdot{\varphi}_C(f,a) \cdot \varepsilon_-^{q,p} \defeq e_-^{q,p} \otimes f^p \circ (\tfrac{1}{2} - \tfrac{1}{2} \cdot {\rm id}); \nonumber \\ \underline{\varepsilon}^{q,p} \cdot \bfdot{\varphi}_C(f,a) \cdot \underline{\varepsilon}^{q,p} &\defeq& \underline{e}^{q,p} \otimes f^p (\tfrac{1}{2}) + e_{\scriptscriptstyle (\diagdown)}^{q,p} \otimes f^p \circ (\tfrac{1}{2} - \tfrac{1}{2} \cdot {\rm id}) + e_{\scriptscriptstyle (\diagup)}^{q,p} \otimes f^p \circ (\tfrac{1}{2} \cdot {\rm id}), \nonumber \\ \bar{\varepsilon}_{q,p} \cdot \bfdot{\varphi}_C(f,a) \cdot \bar{\varepsilon}_{q,p} &\defeq& \bar{e}_{q,p} \otimes f^p (\tfrac{1}{2}) + e^{\scriptscriptstyle (\diagup)}_{q,p} \otimes f^p \circ (\tfrac{1}{2} + \tfrac{1}{2} \cdot {\rm id}) + e^{\scriptscriptstyle (\diagdown)}_{q,p} \otimes f^p \circ (1 - \tfrac{1}{2} \cdot {\rm id}); \nonumber \\ \varepsilon_{\rm const} \cdot \bfdot{\varphi}_C \cdot \varepsilon_{\rm const} &\defeq& \varepsilon_{\rm const} \cdot \accir{\varphi}_C \cdot \varepsilon_{\rm const}. \nonumber \end{eqnarray} Let us now continue with the second step and modify $\bfdot{\varphi}_n$ to $\varphi_n: \: A_n \to A_{n+1}$. 
This second step in our modification proceeds in exactly the same way as modification (conn), with the following difference: the embeddings $E_n^p \subseteq F_{n+1}^{j_\bullet^p}$ lead to the embedding $(\bigoplus_p E_n^p) \oplus (\bigoplus_{p \neq \grave{p}} E_n^p) \subseteq (\bigoplus_p F_{n+1}^{j_0^p}) \oplus (\bigoplus_{p \neq \grave{p}} F_{n+1}^{j_1^p}) \subseteq F_{n+1} \subseteq E_{n+1}^\mathfrak q$, where $E_{n+1}^\mathfrak q$ and the embedding $F_{n+1} \subseteq E_{n+1}^\mathfrak q$ are constructed as in modification (conn). Let $\mathfrak e_{EE} \in E_{n+1}^\mathfrak q$ be the image of the unit of $(\bigoplus_p E_n^p) \oplus (\bigoplus_{p \neq \grave{p}} E_n^p)$ under the above embedding. Let $w_{EE}$ be a permutation matrix in $E_{n+1}^\mathfrak q$ inducing the flip automorphism on $E_n^p \oplus E_n^p$ (i.e., the automorphism $E_n^p \oplus E_n^p \isom E_n^p \oplus E_n^p, \, (e,e') \ma (e',e)$) for all $p \neq \grave{p}$. Using the same notation as in modification (conn), note that $\mathfrak e_{EE} \leq \mathfrak e_{F,C}$, and define $w_{F,C} \defeq \mathfrak e_{EE} \cdot w_{EE} \cdot \mathfrak e_{EE} + (\mathfrak e_{F,C} - \mathfrak e_{EE})$. Define $w_{F,F}$ as in modification (conn). Now replace $w$ defined by \eqref{e:w=1w} in modification (conn) by $w \defeq \rukl{ \begin{smallmatrix} w_{F,C} & 0 \\ 0 & w_{F,F} \end{smallmatrix} } $. Furthermore, define $\beta_{n+1}^\mathfrak q$, $A_{n+1}$ and $\varphi_n$ in the same way as in modification (conn). Now it is straightforward to check that $\varphi_n$ is well-defined, i.e., $\varphi_n(f,a)$ satisfies the defining boundary conditions for $A_{n+1}$ for all $(f,a) \in A_n$. Proceeding recursively in this way, we obtain an inductive system $\gekl{A_n, \varphi_n}_n$. \begin{lemma} \label{lem:path:Ell} $A = \ilim_n \gekl{A_n, \varphi_n}$ is a classifiable C*-algebra with ${\rm Ell}(A) \cong \mathcal E$. In the stably projectionless case, $A$ has continuous scale. 
If we define $B_n \defeq \menge{(f,a) \in A_n}{f(t) \in DE_n \ \forall \ t \in [0,1], \, a \in DF_n}$, then $B \defeq \ilim_n \gekl{B_n, \varphi_n}$ is a C*-diagonal of $A$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} $A$ is classifiable and unital or stably projectionless with continuous scale for the same reasons that the original construction recalled in \S~\ref{ss:CCCC} yields classifiable C*-algebras with these properties (see \cite{Li18, Ell, EV, GLN} for details). Indeed, to see for instance that $A$ is simple, note that with $\varphi_{N,n}$ denoting the composition $$ A_n \overset{\varphi_n}{\longrightarrow} A_{n+1} \to \dotso \to A_{N-1} \overset{\varphi_{N-1}}{\longrightarrow} A_N, $$ we have for $f \in A_n \subseteq C([0,1],E_n)$ and $t \in [0,1]$ that $(\varphi_{N,n}(f))^q(t) = 0$ for some $q$ only if $f^p(\bar{t}) = 0$ for all $\bar{t} \in \menge{\tfrac{t+k}{2^{N-n}}}{0 \leq k \leq 2^{N-n}-1}$ for all $p$ in the unital case and for all $p \neq \grave{p}$ in the stably projectionless case, and similarly for $p = \grave{p}$ in the stably projectionless case. Hence we see that for each $p$, the evaluation point $\bar{t}$ runs through subsets of $[0,1]$ which become arbitrarily dense in $[0,1]$ as $N$ grows. This shows simplicity of $A$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} It is clear that $A_n$ has the same K-theory as $\bfdot{A}_n$ and that $\varphi_n$ induces the same map on K-theory as $\bfdot{\varphi}_n$. 
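The density claim in the simplicity argument above can be checked numerically; the following minimal sketch (the helper name is ours) computes the evaluation points $\{(t+k)/2^{N-n}\}$ and their mesh:

```python
# Numerical illustration (ours) of the density claim: the evaluation points
# {(t + k) / 2**(N - n) : 0 <= k <= 2**(N - n) - 1} have mesh 2**-(N - n)
# in [0, 1], so they become arbitrarily dense as N - n grows.
def eval_points(t, levels):
    m = 2 ** levels
    return [(t + k) / m for k in range(m)]

for levels in (1, 3, 5):
    pts = eval_points(0.3, levels)
    gaps = [b - a for a, b in zip([0.0] + pts, pts + [1.0])]
    assert max(gaps) <= 2.0 ** (-levels) + 1e-12  # mesh shrinks like 2^-(N-n)
```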
To see that $\bfdot{\varphi}_n$ induces the same K-theoretic map as $\accir{\varphi}_n$, we construct a homotopy between $\accir{\varphi}_n$ and $\bfdot{\varphi}_n$ as follows: For $s \in [0,1]$, define $\bfdot{\varphi}_s: \: A_n \to \bfdot{A}_{n+1}$ by setting $\bfdot{\varphi}_{s,F}^j: \: A_n \to \bfdot{A}_{n+1} \to F_{n+1}^j$ and $\bfdot{\varphi}_{s,C}: \: A_n \to \bfdot{A}_{n+1} \to C([0,1],E_{n+1})$ as \begin{align*} \bfdot{\varphi}_{s,F}^{j_0^p}(f,a) &\defeq \rukl{ \begin{smallmatrix} f^p(s \cdot \tfrac{1}{2}) & 0 \\ 0 & \bar{\psi}^{j_0^p}(a) \end{smallmatrix} }, \ \bfdot{\varphi}_{s,F}^{j_1^p}(f,a) \defeq \rukl{ \begin{smallmatrix} f^p(1 - s \cdot \tfrac{1}{2}) & 0 \\ 0 & \bar{\psi}^{j_1^p}(a) \end{smallmatrix} }, \ \bfdot{\varphi}_{s,F}^j(f,a) \defeq \accir{\varphi}_F^j(f,a) \ {\rm for} \ j \notin \gekl{j_0^p, j_1^p};\\ \bfdot{\varphi}_{s,C} &= \sum_{q,p} \Big( \varepsilon^+_{q,p} \cdot \bfdot{\varphi}_{s,C} \cdot \varepsilon^+_{q,p} + \varepsilon_+^{q,p} \cdot \bfdot{\varphi}_{s,C} \cdot \varepsilon_+^{q,p} + \varepsilon^-_{q,p} \cdot \bfdot{\varphi}_{s,C} \cdot \varepsilon^-_{q,p} + \varepsilon_-^{q,p} \cdot \bfdot{\varphi}_{s,C} \cdot \varepsilon_-^{q,p}\\ &+ \ \underline{\varepsilon}^{q,p} \cdot \bfdot{\varphi}_{s,C} \cdot \underline{\varepsilon}^{q,p} + \bar{\varepsilon}_{q,p} \cdot \bfdot{\varphi}_{s,C} \cdot \bar{\varepsilon}_{q,p} \Big) + \varepsilon_{\rm const} \cdot \bfdot{\varphi}_{s,C} \cdot \varepsilon_{\rm const};\\ \end{align*} \vspace*{-1.25cm} \begin{align*} \varepsilon^+_{q,p} \cdot \bfdot{\varphi}_{s,C}(f,a) \cdot \varepsilon^+_{q,p} &\defeq e^+_{q,p} \otimes f^p \circ (s \cdot \tfrac{1}{2} + (1 - s \cdot \tfrac{1}{2}) \cdot {\rm id}),\\ \varepsilon_+^{q,p} \cdot \bfdot{\varphi}_{s,C}(f,a) \cdot \varepsilon_+^{q,p} &\defeq e_+^{q,p} \otimes f^p \circ ((1 - s \cdot \tfrac{1}{2}) \cdot {\rm id}),\\ \varepsilon^-_{q,p} \cdot \bfdot{\varphi}_{s,C}(f,a) \cdot \varepsilon^-_{q,p} &\defeq e^-_{q,p} \otimes f^p \circ (1 - (1 - s \cdot 
\tfrac{1}{2}) \cdot {\rm id}),\\ \varepsilon_-^{q,p} \cdot \bfdot{\varphi}_{s,C}(f,a) \cdot \varepsilon_-^{q,p} &\defeq e_-^{q,p} \otimes f^p \circ (1 - s \cdot \tfrac{1}{2} - (1 - s \cdot \tfrac{1}{2}) \cdot {\rm id});\\ \underline{\varepsilon}^{q,p} \cdot \bfdot{\varphi}_{s,C}(f,a) \cdot \underline{\varepsilon}^{q,p} &\defeq \underline{e}^{q,p} \otimes f^p (s \cdot \tfrac{1}{2}) + e_{\scriptscriptstyle (\diagdown)}^{q,p} \otimes f^p \circ (s \cdot \tfrac{1}{2} - s \cdot \tfrac{1}{2} \cdot {\rm id}) + \ e_{\scriptscriptstyle (\diagup)}^{q,p} \otimes f^p \circ (s \cdot \tfrac{1}{2} \cdot {\rm id}),\\ \bar{\varepsilon}_{q,p} \cdot \bfdot{\varphi}_{s,C}(f,a) \cdot \bar{\varepsilon}_{q,p} &\defeq \bar{e}_{q,p} \otimes f^p (1 - s \cdot \tfrac{1}{2}) + e^{\scriptscriptstyle (\diagup)}_{q,p} \otimes f^p \circ (1 - s \cdot \tfrac{1}{2} + s \cdot \tfrac{1}{2} \cdot {\rm id}) + e^{\scriptscriptstyle (\diagdown)}_{q,p} \otimes f^p \circ (1 - s \cdot \tfrac{1}{2} \cdot {\rm id});\\ \varepsilon_{\rm const} \cdot \bfdot{\varphi}_{s,C} \cdot \varepsilon_{\rm const} &\defeq \varepsilon_{\rm const} \cdot \accir{\varphi}_C \cdot \varepsilon_{\rm const}. \end{align*} Then $s \ma \bfdot{\varphi}_s$ is a continuous path connecting $\accir{\varphi}_n$ with $\bfdot{\varphi}_n$. Hence $\accir{\varphi}_n$ and $\bfdot{\varphi}_n$ induce the same map on K-theory. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} The same argument as for modification (conn) (see Lemma~\ref{lem:conn:Ell}) shows that our modification (path) yields a C*-algebra $A$ with the desired trace simplex and prescribed pairing between $K_0$ and traces. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Finally, the connecting maps $\varphi_n$ are of the same form as in \cite[\S~4]{Li18}, and hence admit groupoid models as in \cite[\S~6]{Li18}. Hence $B$ is indeed a C*-diagonal of $A$ by the same argument as in \cite[\S~5 -- 7]{Li18}. 
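As a sanity check on the homotopy above, note that its reparametrizations interpolate between the identity and flip at $s = 0$ (recovering $\accir{\varphi}_n$) and the halving maps at $s = 1$ (recovering $\bfdot{\varphi}_n$); a minimal numerical sketch (the function names are ours):

```python
# Sanity check (our function names): the reparametrizations in the homotopy
# interpolate between the identity / flip at s = 0 and the halving maps at
# s = 1, matching the two connecting maps being compared.
def rep_up(s):    return lambda t: s * 0.5 + (1.0 - s * 0.5) * t
def rep_plus(s):  return lambda t: (1.0 - s * 0.5) * t
def rep_down(s):  return lambda t: 1.0 - (1.0 - s * 0.5) * t
def rep_minus(s): return lambda t: 1.0 - s * 0.5 - (1.0 - s * 0.5) * t

for t in (0.0, 0.25, 1.0):
    assert rep_up(0)(t) == t and rep_plus(0)(t) == t            # s = 0: identity
    assert rep_down(0)(t) == 1.0 - t and rep_minus(0)(t) == 1.0 - t  # s = 0: flip
    assert rep_up(1)(t) == 0.5 + 0.5 * t                        # s = 1: halving maps
    assert rep_plus(1)(t) == 0.5 * t
    assert rep_minus(1)(t) == 0.5 - 0.5 * t
```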
\end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \subsection{Groupoid models for building blocks and connecting maps} \label{ss:GPDModels} Before we establish the path-lifting property for our connecting maps, let us first develop a groupoid model for them. Suppose that modification (path) gives us the inductive system $$ A_1 \overset{\varphi_1}{\longrightarrow} A_2 \overset{\varphi_2}{\longrightarrow} A_3 \overset{\varphi_3}{\longrightarrow} \dotso, $$ with $A_n = \menge{(f,a) \in C([0,1],E_n) \oplus F_n}{f(\mathfrak r) = \beta_{n, \mathfrak r}(a) \ {\rm for} \ \mathfrak r = 0,1}$ for finite-dimensional algebras $E_n$ and $F_n$ as in \S~\ref{ss:(path)} (we use the same notation as in \S~\ref{ss:CCCC}). To describe the connecting map $\varphi \defeq \varphi_n$, we describe $$ \varphi_C^q: \: A_n \overset{\varphi}{\longrightarrow} A_{n+1} \to C([0,1],E_{n+1}) \onto C([0,1],E_{n+1}^q) \quad \text{and} \quad \varphi_F^j: \: A_n \overset{\varphi}{\longrightarrow} A_{n+1} \to F_{n+1} \onto F_{n+1}^j. $$ For $q \neq \mathfrak q$, $\varphi_C^q$ is given by the following composition: \begin{align} \label{e:varphiC} (f,a) \ma & \ \big(1_{m^+(q,p)} \otimes f^p \circ \lambda^+, 1_{m_+(q,p)} \otimes f^p \circ \lambda_+, 1_{m^-(q,p)} \otimes f^p \circ \lambda^-, 1_{m_-(q,p)} \otimes f^p \circ \lambda_- )_p,\\ & \ (1_{\underline{m}(q,p)} \otimes f^p(\tfrac{1}{2}))_p, (1_{\overline{m}(q,p)} \otimes f^p(\tfrac{1}{2}))_p , (1_{m^{q,i}} \otimes a^i)_i \big) \nonumber \\ \in & \ C \Big( [0,1], \big(\bigoplus_p (M_{m^+(q,p)} \oplus M_{m_+(q,p)} \oplus M_{m^-(q,p)} \oplus M_{m_-(q,p)}) \otimes E_n^p \big) \nonumber \\ & \ \oplus \big(\bigoplus_p (M_{\underline{m}(q,p)} \oplus M_{\overline{m}(q,p)}) \otimes E_n^p \big) \oplus \big( \bigoplus_i M_{m^{q,i}} \otimes F_n^i \big) \Big) \nonumber \\ \tailarr & \ C([0,1], E_{n+1}^q). 
\nonumber \end{align} Here $\lambda^+ = \tfrac{1}{2} + \tfrac{1}{2} \cdot {\rm id}$, $\lambda_+ = \tfrac{1}{2} \cdot {\rm id}$, $\lambda^- = 1 - \tfrac{1}{2} \cdot {\rm id}$ and $\lambda_- = \tfrac{1}{2} - \tfrac{1}{2} \cdot {\rm id}$. The last arrow is induced by an embedding $\big(\bigoplus_p (M_{m^+(q,p)} \oplus M_{m_+(q,p)} \oplus M_{m^-(q,p)} \oplus M_{m_-(q,p)}) \otimes E_n^p \big) \oplus \big(\bigoplus_p (M_{\underline{m}(q,p)} \oplus M_{\overline{m}(q,p)}) \otimes E_n^p \big) \oplus \big( \bigoplus_i M_{m^{q,i}} \otimes F_n^i \big) \tailarr E_{n+1}^q$ of multiplicity $1$ sending diagonal matrices to diagonal matrices as in \eqref{e:beta=}. Note that $m^+(q,p) = {\rm rk}\, e^+_{q,p} + {\rm rk}\, e^{\scriptscriptstyle (\diagup)}_{q,p}$, $m_+(q,p) = {\rm rk}\, e_+^{q,p} + {\rm rk}\, e_{\scriptscriptstyle (\diagup)}^{q,p}$, $m^-(q,p) = {\rm rk}\, e^-_{q,p} + {\rm rk}\, e^{\scriptscriptstyle (\diagdown)}_{q,p}$ and $m_-(q,p) = {\rm rk}\, e_-^{q,p} + {\rm rk}\, e_{\scriptscriptstyle (\diagdown)}^{q,p}$. By \eqref{e:e>1_unital} -- \eqref{e:e>1_spl2}, we have \begin{align} \label{e:m>1} & m^+(q,p), m_+(q,p), m^-(q,p), m_-(q,p) \geq 1 \quad \forall q,p && \text{in the unital case};\\ & m^+(q,p), m_+(q,p), m^-(q,p), m_-(q,p) \geq 1 \quad \forall q \neq \grave{q}, p \neq \grave{p}, && \nonumber\\ & m^+(\grave{q},\grave{p}) \geq 1 \quad \text{and} \ m_+(\grave{q},\grave{p}) \ \text{or} \ m_-(\grave{q},\grave{p}) \geq 1 && \text{in the stably projectionless case}.\nonumber \end{align} $\varphi_C^\mathfrak q$ is of a similar form, but has an additional component given by $\varphi_F(f,a)$ going into $C([0,1],F_{n+1}) \subseteq C([0,1],E_{n+1}^\mathfrak q)$ (see the second step of modification (path)). 
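The four reparametrizations $\lambda^+$, $\lambda_+$, $\lambda^-$, $\lambda_-$ just defined halve the interval; the following minimal check (illustration only, with our own variable names) records their endpoint behaviour:

```python
# Endpoint check (our variable names) for the four reparametrizations:
# the increasing maps lam_up, lam_plus and the decreasing maps lam_down,
# lam_minus halve [0, 1] and send {0, 1} into {0, 1/2, 1}.
lam_up    = lambda t: 0.5 + 0.5 * t   # corresponds to lambda^+
lam_plus  = lambda t: 0.5 * t         # corresponds to lambda_+
lam_down  = lambda t: 1.0 - 0.5 * t   # corresponds to lambda^-
lam_minus = lambda t: 0.5 - 0.5 * t   # corresponds to lambda_-

assert (lam_up(0.0), lam_up(1.0)) == (0.5, 1.0)
assert (lam_plus(0.0), lam_plus(1.0)) == (0.0, 0.5)
assert (lam_down(0.0), lam_down(1.0)) == (1.0, 0.5)
assert (lam_minus(0.0), lam_minus(1.0)) == (0.5, 0.0)
# the two halves meet at the midpoint 1/2:
assert {lam_up(0.0), lam_plus(1.0)} == {0.5}
```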
$\varphi_F^j$ is given by the following composition: \begin{equation} \label{e:varphiFj_NEW} (f,a) \ma \bfa (f^p(\tfrac{1}{2}), (1_{m(j,i)} \otimes a^i)_i) \in E_n^p \oplus \bigoplus_i M_{m(j,i)} \otimes F_n^i \tailarr F_{n+1}^j & \text{if} \ j = j_\bullet^p,\\ (1_{m(j,i)} \otimes a^i)_i \in \bigoplus_i M_{m(j,i)} \otimes F_n^i \tailarr F_{n+1}^j & \text{if} \ j \notin \gekl{j_0^p, j_1^p}. \end{cases} \end{equation} Recall that $\beta_{n,\mathfrak r} = (\beta^{p,i}_{n,\mathfrak r})_{p,i}$ and that $\beta^{p,i}_{n,\mathfrak r}$ is a composition of the form \begin{equation} \label{e:betanpi} F_n^i \overset{1 \otimes {\rm id}_{F_n^i}}{\longrightarrow} 1_{m_\mathfrak r(p,i)} \otimes F_n^i \subseteq M_{m_\mathfrak r(p,i)} \otimes F_n^i \tailarr E_n^p. \end{equation} The groupoid morphism $\bm{b}_{n,\mathfrak r}$ inducing $\beta_{n,\mathfrak r}$ is given on $\mathcal E_{n,\mathfrak r}^p$, the intersection of the domain $\mathcal E_{n,\mathfrak r}$ of $\bm{b}_{n,\mathfrak r}$ with $\mathcal E_n^p$, by \begin{equation} \label{e:bmbnp} \bm{b}_{n,\mathfrak r}^p: \: \mathcal E_{n,\mathfrak r}^p \cong \coprod_i \mathcal M_\mathfrak r(p,i) \times \mathcal F_n^i \to \coprod_i \mathcal F_n^i = \mathcal F_n, \end{equation} where $\mathcal E_n$, $\mathcal E_n^p$, $\mathcal F_n$ and $\mathcal F_n^i$ are groupoid models for $E_n$, $E_n^p$, $F_n$ and $F_n^i$. Now a groupoid model for $(A_n,B_n)$ is given by $G_n \defeq \big( ([0,1] \times_\bullet \mathcal E_n) \amalg \mathcal F_n \big) / {}_\sim$, where $[0,1] \times_\bullet \mathcal E_n \defeq \menge{(t,\gamma) \in [0,1] \times \mathcal E_n}{\gamma \in \mathcal E_{n,t} \ \text{if} \ t = 0,1}$, and $\sim$ is the equivalence relation on $([0,1] \times_\bullet \mathcal E_n) \amalg \mathcal F_n$ generated by $(\mathfrak r,\gamma) \sim \bm{b}_{n,\mathfrak r}(\gamma)$ for all $\mathfrak r = 0,1$, $\gamma \in \mathcal E_{n,\mathfrak r}$. For details, we refer to \cite[\S~6.1]{Li18}. 
Note that we have $G_n = [[0,1] \times_\bullet \mathcal E_n]$ just as in \S~\ref{s:nccw}, i.e., the extra copy of $\mathcal F_n$ is not needed; it is just convenient to describe the groupoid model $\bm{p}_n$ for $\varphi_n$. Let us now describe a groupoid model $\bm{p} \defeq \bm{p}_n$ for the connecting map $\varphi_n$ (see \cite[\S~6.2]{Li18} for details). Let $H_n$ be the subgroupoid of $G_{n+1}$ given by $H_n \defeq \big( ([0,1] \times_\bullet \mathcal E_{n+1}[\bm{p}]) \amalg \mathcal F_{n+1}[\bm{p}] \big) / {}_{\sim}$, where, with $\lambda_\mu \defeq \lambda^+$ if $\mu \in \mathcal M^+(q,p)$, $\lambda_\mu \defeq \lambda_+$ if $\mu \in \mathcal M_+(q,p)$, $\lambda_\mu \defeq \lambda^-$ if $\mu \in \mathcal M^-(q,p)$, $\lambda_\mu \defeq \lambda_-$ if $\mu \in \mathcal M_-(q,p)$, $\lambda_\mu \equiv \tfrac{1}{2}$ if $\mu \in \underline{\mathcal M}(q,p) \amalg \overline{\mathcal M}(q,p)$, $\mathcal E_{n+1}[\bm{p}] = \coprod_q \mathcal E_{n+1}^q[\bm{p}]$, and we have identifications \begin{align} \label{e:Eqp=} \mathcal E_{n+1}^q[\bm{p}] \cong \ & {} \big( \coprod_p (\mathcal M^+(q,p) \amalg \mathcal M_+(q,p) \amalg \mathcal M^-(q,p) \amalg \mathcal M_-(q,p)) \times \mathcal E_n^p \big)\\ &\amalg \big( \coprod_p (\underline{\mathcal M}(q,p) \amalg \overline{\mathcal M}(q,p)) \times \mathcal E_n^p \big) \amalg \big( \coprod_i \mathcal M^{q,i} \times \mathcal F_n^i \big) \qquad \text{if} \ q \neq \mathfrak q, \nonumber \end{align} and similarly for $\mathcal E_{n+1}^\mathfrak q[\bm{p}]$, but with an additional copy of $\mathcal F_{n+1}[\bm{p}]$, and for $\mathfrak r = 0,1$, we have with respect to \eqref{e:Eqp=}: \begin{eqnarray*} \mathcal E_{n+1,\mathfrak r}^q[\bm{p}] &=& \big\{ (\mu,\gamma) \in \mathcal E_{n+1}^q[\bm{p}]: \: \gamma \in \mathcal E_{n,\lambda_\mu(\mathfrak r)}^p \ \text{if} \ \mu \in \mathcal M^+(q,p) \amalg \mathcal M_+(q,p) \amalg \mathcal M^-(q,p) \amalg \mathcal M_-(q,p) \big\},\\ \mathcal E_{n+1,\mathfrak r}[\bm{p}] &=& \coprod_q \mathcal 
E_{n+1,\mathfrak r}^q[\bm{p}]; \ [0,1] \times_\bullet \mathcal E_{n+1}[\bm{p}] \defeq \menge{(t,(\mu,\gamma)) \in [0,1] \times \mathcal E_{n+1}[\bm{p}]}{(\mu,\gamma) \in \mathcal E_{n+1,t}[\bm{p}] \ \text{if} \ t \in \gekl{0,1}}. \end{eqnarray*} Now $\bm{p}$ is given by $\bm{p}[t,(\mu,\gamma)] = [\lambda_\mu(t),\gamma]$ for $\gamma \in \mathcal E_n^p$ and $\bm{p}(\mu,\gamma) = \gamma$ for $\gamma \in \mathcal F_n^i$. Moreover there are identifications \begin{equation} \label{e:F=EMForMF} \mathcal F_{n+1}^j[\bm{p}] \cong \mathcal E_n^p \amalg \big( \coprod_i \mathcal M(j,i) \times \mathcal F_n^i \big) \quad \text{if} \ j = j_\bullet^p; \qquad \mathcal F_{n+1}^j[\bm{p}] \cong \coprod_i \mathcal M(j,i) \times \mathcal F_n^i \quad \text{if} \ j \notin \gekl{j_0^p, j_1^p}, \end{equation} such that $\bm{p}(\mu,\gamma) = \gamma$ for $(\mu,\gamma) \in \mathcal M(j,i) \times \mathcal F_n^i$, and \begin{equation} \label{e:bpgamma_FFinE} \bm{p}(\gamma) = [\tfrac{1}{2},\gamma] \quad \forall \ \gamma \in \mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_\bullet^p}[\bm{p}] \qquad \text{and} \ \bm{p}[t,\gamma] = \bm{p}(\gamma) \quad \forall \ \gamma \in \mathcal F_{n+1}[\bm{p}] \subseteq \mathcal E_{n+1}^\mathfrak q[\bm{p}], \, t \in [0,1]. \end{equation} We will often work with the identifications \eqref{e:Eqp=} and \eqref{e:F=EMForMF} without explicitly mentioning them. 
That $\varphi_n(f,a)$ satisfies the defining boundary condition for $A_{n+1}$ for all $(f,a) \in A_n$ translates to the following compatibility conditions for $\bm{b}_\bullet$ and $\bm{p}$: We have a commutative diagram \begin{equation} \label{e:EEFF} \begin{tikzcd} \mathcal E_{n+1,\mathfrak r}^q[\bm{p}] \arrow[d, "\bm{b}_\mathfrak r"'] \arrow[r, "\subseteq"] & \mathcal E_{n+1,\mathfrak r}^q \arrow[d, "\bm{b}_\mathfrak r"] \\ \mathcal F_{n+1}[\bm{p}] \arrow[r, "\subseteq"] & \mathcal F_{n+1} \end{tikzcd} \end{equation} For every $\mu \in \mathcal M^+(q,p) \amalg \mathcal M_+(q,p) \amalg \mathcal M^-(q,p) \amalg \mathcal M_-(q,p)$, $\mathfrak r, \mathfrak s \in \gekl{0,1}$ with $\lambda_\mu(\mathfrak r) = \mathfrak s$, the restriction of \eqref{e:EEFF} to $\gekl{\mu} \times \mathcal E_{n,\mathfrak s}^p \subseteq \mathcal E_{n+1,\mathfrak r}^q[\bm{p}]$ fits into the following commutative diagram \begin{equation} \label{e:BiggestCD} \begin{tikzcd} \gekl{\mathfrak s} \times \mathcal E_{n,\mathfrak s}^p \arrow[d, "\cong"'] & \arrow[l] \gekl{\mathfrak r} \times \gekl{\mu} \times \mathcal E_{n,\mathfrak s}^p \arrow[d, "\cong"] \arrow[r, "\subseteq"] & \gekl{\mathfrak r} \times \mathcal E_{n+1,\mathfrak r}^q \arrow[d, "\cong"]\\ \mathcal E_{n,\mathfrak s}^p \arrow[d, "\cong"'] \arrow[dd, "\bm{b}_{n,\mathfrak s}"', bend right=90] & \arrow[l, "{\gamma \mapsfrom (\mu,\gamma)}"'] \gekl{\mu} \times \mathcal E_{n,\mathfrak s}^p \arrow[dd, "\bm{b}_\mathfrak r"] \arrow[r, "\subseteq"] & \mathcal E_{n+1,\mathfrak r}^q \arrow[d, "\cong"] \arrow[dd, "\bm{b}_\mathfrak r", bend left=90]\\ \coprod_i \mathcal M_\mathfrak s(p,i) \times \mathcal F_n^i \arrow[d] & & \coprod_j \mathcal M_\mathfrak r(q,j) \times \mathcal F_{n+1}^j \arrow[d]\\ \coprod_i \mathcal F_n^i & \arrow[l, "\bm{p}"'] \coprod_j ( \coprod_i \mathcal M(j,i) \times \mathcal F_n^i ) \arrow[r, "\subseteq"] & \coprod_j \mathcal F_{n+1}^j \end{tikzcd} \end{equation} where we identify $\gekl{\mu} \times \mathcal E_{n,\mathfrak 
s}^p$ and $\coprod_j ( \coprod_i \mathcal M(j,i) \times \mathcal F_n^i )$ with subsets of $\mathcal E_{n+1,\mathfrak r}^q[\bm{p}]$ and $\coprod_j \mathcal F_{n+1}^j[\bm{p}]$ via \eqref{e:Eqp=} and \eqref{e:F=EMForMF}, and the lower vertical arrows on the left and right are given by the canonical projection maps. Moreover, for all $q, p$, we have $\bm{b}_\mathfrak r(\mu,\gamma) = \gamma \in \mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_\mathfrak r^p}[\bm{p}]$ for all $\mu \in \mathcal M^+(q,p) \amalg \mathcal M_+(q,p)$ and $\bm{b}_\mathfrak r(\mu,\gamma) = \gamma \in \mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_{\mathfrak r^*}^p}[\bm{p}]$ for all $\mu \in \mathcal M^-(q,p) \amalg \mathcal M_-(q,p)$, where $\mathfrak r \in \gekl{0,1}$ satisfies $\lambda_\mu(\mathfrak r) = \tfrac{1}{2}$, and $\mathfrak r^* = 1 - \mathfrak r$, $\bm{b}_\mathfrak r(\mu,\gamma) = \gamma \in \mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_0^p}[\bm{p}]$ for all $\mu \in \underline{\mathcal M}(q,p)$ and $\mathfrak r = 0,1$, and $\bm{b}_\mathfrak r(\mu,\gamma) = \gamma \in \mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_1^p}[\bm{p}]$ for all $\mu \in\overline{\mathcal M}(q,p)$ and $\mathfrak r = 0,1$. On $\mathcal F_{n+1} \subseteq \mathcal E_{n+1}^\mathfrak q$, $\bm{b}_0$ is given by ${\rm id}$ and $\bm{b}_1$ is of a similar form as in \S~\ref{ss:BBCDiagPathConn} and in addition sends $\mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_0^p}$ identically onto $\mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_1^p}$ and $\mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_1^p}$ identically onto $\mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_0^p}$ for all $p \neq \grave{p}$. 
Finally, the restriction of \eqref{e:EEFF} to $\coprod_i \mathcal M^{q,i} \times \mathcal F_n^i \subseteq \mathcal E_{n+1,\mathfrak r}^q[\bm{p}]$ fits into the following commutative diagram \begin{equation*} \begin{tikzcd} & \arrow[dl, "\bm{p}"'] \coprod_i \mathcal M^{q,i} \times \mathcal F_n^i \arrow[d, "\bm{b}_\mathfrak r"] \arrow[r, "\subseteq"] & \mathcal E_{n+1,\mathfrak r}^q \arrow[d, "\bm{b}_\mathfrak r"]\\ \coprod_i \mathcal F_n^i & \arrow[l, "\bm{p}"] \coprod_j ( \coprod_i \mathcal M(j,i) \times \mathcal F_n^i ) \arrow[r, "\subseteq"] & \coprod_j \mathcal F_{n+1}^j \end{tikzcd} \end{equation*} \subsection{The path-lifting property for connecting maps} \label{ss:path-lift} We now establish a path-lifting property for $\bm{p} = \bm{p}_n$. \begin{proposition} \label{prop:path} Suppose that $\xi_n: \: [0,1] \to G_n$ is a continuous path with the following properties: \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{enumerate} \item[(P1)] There exist $0 = \mathfrak t_0 < \mathfrak t_1 < \dotso < \mathfrak t_D < \mathfrak t_{D+1} = 1$, $D \geq 0$, such that for all $0 \leq d \leq D$ and $I = [\mathfrak t_d,\mathfrak t_{d+1}]$, there exist $\gamma_{n,I} \in \mathcal E_n$ and a continuous, monotone function $\omega_{n,I}: \: I \to [0,1]$ with stop values at $\omega_{n,I}(I) \cap \mathbb{Z}[\tfrac{1}{2}]$, i.e., such that, for all $t \in I$, $\xi_n(t) = [\omega_{n,I}(t),\gamma_{n,I}]$. \item[(P2)] There exist $d$ and $t \in I = [\mathfrak t_d,\mathfrak t_{d+1}]$ such that $\omega_{n,I}(t) \in \gekl{0,\tfrac{1}{2},1}$ is a stop value of $\omega_{n,I}$. \end{enumerate} Let $\xi_{n+1}^0, \xi_{n+1}^1 \in H_n$ satisfy $\bm{p}(\xi_{n+1}^\mathfrak r) = \xi_n(\mathfrak r)$ for $\mathfrak r = 0,1$. Then there exists a continuous path $\xi_{n+1}: \: [0,1] \to H_n$ with properties (P1) and (P2) such that $\xi_{n+1}(\mathfrak r) = \xi_{n+1}^\mathfrak r$ for $\mathfrak r = 0,1$ and $\bm{p} \circ \xi_{n+1} = \xi_n$.
\setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \textbf{Variation:} Suppose that $\xi_n$ has properties (P1) and (P2), with the following exception: \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{enumerate} \item[(P3a)] $\omega_{n,[\mathfrak t_0,\mathfrak t_1]}(0) \in \gekl{0,1}$ is not a stop value for $\omega_{n,[\mathfrak t_0,\mathfrak t_1]}$, \item[(P3b)] there exist $w_{n+1}^0 \in \gekl{0,1}, \mu_{n+1}^0, \gamma_n^0$ such that $\xi_{n+1}^0 = [w_{n+1}^0,(\mu_{n+1}^0,\gamma_n^0)]$ and $\omega_{n,[\mathfrak t_d,\mathfrak t_{d+1}]}(t) \in {\rm im\,}(\lambda_{\mu_{n+1}^0})$ for all $t \in [0,\mathfrak t] \cap [\mathfrak t_d,\mathfrak t_{d+1}]$ if $\mathfrak t_d < \mathfrak t$, where $\mathfrak t \defeq \min \menge{t>0}{t \in [\mathfrak t_d,\mathfrak t_{d+1}], \, \omega_{n,[\mathfrak t_d,\mathfrak t_{d+1}]}(t) \in \gekl{0,\tfrac{1}{2},1}}$. \end{enumerate} Then we can arrange that $\xi_{n+1}$ has (P3a). We allow for a similar variation for $\omega_{n,[\mathfrak t_D,\mathfrak t_{D+1}]}(1)$ instead of $\omega_{n,[\mathfrak t_0,\mathfrak t_1]}(0)$. \end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} Here $w$ is called a stop value of $\omega_{n,I}$ if $\omega_{n,I}$ takes the constant value $w$ on some closed subinterval of $I$ with positive length (see for instance \cite{FR}). \begin{proof} By assumption, there are $0 = r_0 \leq t_0 < r_1 < t_1 < \dotso < r_c < t_c < r_{c+1} \leq t_{c+1} = 1$, $c \geq 0$, such that for every interval $I$ of the form $[t_b,r_{b+1}]$, we have $\xi_n(t) = [\omega_{n,I}(t),\gamma_{n,I}]$ for all $t \in I$ for some $\gamma_{n,I} \in \mathcal E_n$ and $\omega_{n,I} \equiv 0, \tfrac{1}{2}$ or $1$, and every interval of the form $[r_b,t_b]$ of positive length splits into finitely many subintervals $I$ for which there are $\gamma_{n,I} \in \mathcal E_n$ and continuous maps $\omega_{n,I}: \: I \to [0,1]$ as in (P1) and (P2) such that $\xi_n(t) = [\omega_{n,I}(t),\gamma_{n,I}]$ for all $t \in I$.
Moreover, for $1 \leq b \leq c$ and $I \subseteq [r_b,t_b]$ as above, $\omega_{n,I}$ does not take the values $0$, $\tfrac{1}{2}$ or $1$ on $(r_b,t_b)$. Set $\xi_{n+1}[0] \defeq \xi_{n+1}^0$, $\xi_{n+1}[c+1] \defeq \xi_{n+1}^1$, and write $\xi_{n+1}[0] = [w[0],(\mu[0],\gamma[0])]$, $\xi_{n+1}[c+1] = [w[c+1],(\mu[c+1],\gamma[c+1])]$. If $w[0] \in (0,1)$, we arrange $t_0 > 0$ by replacing $t_0$ by $\tfrac{1}{2} \cdot r_1$ if necessary. If $w[c+1] \in (0,1)$, we arrange $r_{c+1} < 1$ by replacing $r_{c+1}$ by $\tfrac{1}{2} \cdot (t_c + 1)$ if necessary. As a result, if $w[0] \in (0,1)$, we must have $t_0 > 0$, and either $\omega_{n,I}$ does not take the values $0$, $\tfrac{1}{2}$ or $1$ on $I \cap [r_0,t_0)$ for each $I \subseteq [r_0,t_0]$ as above, or $\omega_{n,[r_0,t_0]}$ is constant with value $0$, $\tfrac{1}{2}$ or $1$. If $w[0] \in \gekl{0,1}$, then we must have $t_0 = 0$. For the variation, we must have $t_0 > 0$, and for each $I \subseteq [r_0,t_0]$ as above, $\omega_{n,I}$ does not take the values $0$, $\tfrac{1}{2}$ or $1$ on $I \cap (r_0,t_0)$, $\omega_{n,I}(0) \in \gekl{0,1}$ is not a stop value, and $w[0] \in \gekl{0,1}$. A similar statement holds for $w[c+1]$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} For $1 \leq b \leq c$, take $s_b \in (r_b,t_b)$ and choose $\xi_{n+1}[b] = [w[b],(\mu[b],\gamma[b])] \in H_n$ such that $\bm{p}(\xi_{n+1}[b]) = \xi_n(s_b)$. Such $\xi_{n+1}[b]$ exist because of \eqref{e:m>1}. Define $s_0 \defeq 0$ and $s_{c+1} \defeq 1$. Now let $0 \leq b \leq c+1$. Suppose that $\xi_n(s_b)$ is of the form $[w,\gamma]$ with $w \notin \gekl{0,1}$, which is always the case if $1 \leq b \leq c$. Let $I \subseteq [r_b,t_b]$ be as above. Define $\gamma_{n+1,I} \defeq \gamma[b]$. 
If $\lambda_{\mu[b]} = \lambda^+$, define $\omega_{n+1,I} \defeq -1 + 2 \cdot \omega_{n,I}$, if $\lambda_{\mu[b]} = \lambda_+$, define $\omega_{n+1,I} \defeq 2 \cdot \omega_{n,I}$, if $\lambda_{\mu[b]} = \lambda^-$, define $\omega_{n+1,I} \defeq 2 - 2 \cdot \omega_{n,I}$, and if $\lambda_{\mu[b]} = \lambda_-$, define $\omega_{n+1,I} \defeq 1 - 2 \cdot \omega_{n,I}$. If $b=0$, $I = [r_0,t_0]$, $t_0 > 0$, i.e., $w[0] \in (0,1)$, and if $\omega_{n,I} \equiv 0, \tfrac{1}{2}$ or $1$, set $\gamma_{n+1,I} \defeq (\mu[0],\gamma[0])$ and let $\omega_{n+1,I}$ be a continuous path as in (P1) with $\omega_{n+1,I}(0) = w[0]$, $\omega_{n+1,I}(t_0) = 1$ ((P2) is then automatic). Such a path exists by \cite[Lemma~2.10]{FR}. Define $\gamma_{n+1,I}$ and $\omega_{n+1,I}$ similarly for $b=c+1$, $I = [r_{c+1},t_{c+1}]$, $r_{c+1} < 1$ and $\omega_{n,I} \equiv 0, \tfrac{1}{2}$ or $1$. For the variation, note that (P3b) implies that we can define $\gamma_{n+1,I}$ and $\omega_{n+1,I}$ for $I \subseteq [r_0,t_0]$ and $I \subseteq [r_{c+1},t_{c+1}]$ as above in the same way as for $I \subseteq [r_b,t_b]$, where $\xi_n(s_b)$ is of the form $[w,\gamma]$ with $w \notin \gekl{0,1}$. Now set $\xi_{n+1}(t) \defeq [\omega_{n+1,I}(t),\gamma_{n+1,I}]$ for all $t \in I$. Next, consider $I = [t_b,r_{b+1}]$ for $0 \leq b \leq c$. First assume that $\omega_{n,I} \equiv \tfrac{1}{2}$. Let $\gamma_{n,I} = \gamma$. Set $w \defeq \omega_{n+1,[r_b,t_b]}(t_b)$, $\bar{w} \defeq \omega_{n+1,[r_{b+1},t_{b+1}]}(r_{b+1})$ and let $\gamma_{n+1,[r_b,t_b]}(t_b) = (\mu,\gamma)$, $\gamma_{n+1,[r_{b+1},t_{b+1}]}(r_{b+1}) = (\bar{\mu},\gamma)$. Note that $w, \bar{w} \in \gekl{0,1}$. If $[w,(\mu,\gamma)] = [\bar{w},(\bar{\mu},\gamma)]$, then set $\omega_{n+1,I} \equiv w$ and $\gamma_{n+1,I} \defeq (\mu,\gamma)$.
If $[w,(\mu,\gamma)] \neq [\bar{w},(\bar{\mu},\gamma)]$, then $\bm{b}_w(\mu,\gamma) = \gamma^{j_0^p} \in \mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_0^p}$ and $\bm{b}_{\bar{w}}(\bar{\mu},\gamma) = \gamma^{j_1^p} \in \mathcal E_n^p \subseteq \mathcal F_{n+1}^{j_1^p}$ for $p \neq \grave{p}$ (or with $j_0^p$ and $j_1^p$ swapped), where $\gamma^{j_\mathfrak r^p}$ denotes the copy of $\gamma$ in $\mathcal F_{n+1}^{j_\mathfrak r^p}$. Let $\omega_{n+1,I}$ be a continuous path as in (P1) such that $\omega_{n+1,I}(t_b) = 0$, $\omega_{n+1,I}(r_{b+1}) = 1$ (such a path exists by \cite[Lemma~2.10]{FR}, and (P2) is automatic), and define $\gamma_{n+1,I} \defeq \gamma^{j_0^p}$. Set $\xi_{n+1}(t) \defeq [\omega_{n+1,I}(t), \gamma_{n+1,I}]$ for all $t \in I$. Then by \eqref{e:bpgamma_FFinE}, we have $\bm{p}(\xi_{n+1}(t)) = [\tfrac{1}{2},\gamma] = \xi_n(t)$ for all $t \in I$, as well as $\xi_{n+1}(t_b) = [0,\gamma^{j_0^p}] = [w,(\mu,\gamma)]$ since $\bm{b}_0(\gamma^{j_0^p}) = \gamma^{j_0^p}$ and $\xi_{n+1}(r_{b+1}) = [1,\gamma^{j_0^p}] = [0,\gamma^{j_1^p}] = [\bar{w},(\bar{\mu},\gamma)]$ as $\bm{b}_1(\gamma^{j_0^p}) = \gamma^{j_1^p} = \bm{b}_0(\gamma^{j_1^p})$. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} Now assume that $\omega_{n,I} \equiv 0$. Set $w \defeq \omega_{n+1,[r_b,t_b]}(t_b)$, $\bar{w} \defeq \omega_{n+1,[r_{b+1},t_{b+1}]}(r_{b+1})$ and let $\gamma_{n+1,[r_b,t_b]}(t_b) = (\mu,\gamma)$, $\gamma_{n+1,[r_{b+1},t_{b+1}]}(r_{b+1}) = (\bar{\mu},\bar{\gamma})$. We have $\xi_n(t) = [0,\gamma_{n,I}] = \bm{p}[w,(\mu,\gamma)] = \bm{p}[\bar{w},(\bar{\mu},\bar{\gamma})]$ for all $t \in I$. Note that $w, \bar{w} \in \gekl{0,1}$.
Now $[w,(\mu,\gamma)] = [0,\bm{b}_w(\mu,\gamma)]$ and $[\bar{w},(\bar{\mu},\bar{\gamma})] = [0, \bm{b}_{\bar{w}}(\bar{\mu},\bar{\gamma})]$, where we view $\bm{b}_w(\mu,\gamma)$ and $\bm{b}_{\bar{w}}(\bar{\mu},\bar{\gamma})$ as elements of $\mathcal E^{\mathfrak q}_{\rm conn}$ (the analogue of $\mathcal Y^{\mathfrak q}_{\rm conn}$ in \S~\ref{ss:BBCDiagPathConn}). We have $\bm{p}[0,\bm{b}_w(\mu,\gamma)] = \bm{p}[w,(\mu,\gamma)] = \bm{p}[\bar{w},(\bar{\mu},\bar{\gamma})] = \bm{p}[0, \bm{b}_{\bar{w}}(\bar{\mu},\bar{\gamma})]$, so that by the analogue of Remark~\ref{rem:Yconnconn} for $\mathcal E^{\mathfrak q}_{\rm conn}$ instead of $\mathcal Y^{\mathfrak q}_{\rm conn}$, after possibly splitting $I$ into finitely many subintervals, we can find $\gamma_{n+1,I}$ and $\omega_{n+1,I}$ as in (P1) and (P2) such that, if we define $\xi_{n+1}(t) \defeq [\omega_{n+1,I}(t),\gamma_{n+1,I}]$ for all $t \in I$, then we have $\bm{p}(\xi_{n+1}(t)) = \bm{p}[0,\bm{b}_w(\mu,\gamma)] = \bm{p}[w,(\mu,\gamma)] = \xi_n(t)$ for all $t \in I$, and $\xi_{n+1}(t_b) = [0,\bm{b}_w(\mu,\gamma)] = [w,(\mu,\gamma)]$, $\xi_{n+1}(r_{b+1}) = [0, \bm{b}_{\bar{w}}(\bar{\mu},\bar{\gamma})] = [\bar{w},(\bar{\mu},\bar{\gamma})]$. The case $\omega_{n,I} \equiv 1$ is similar. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \section{Constructing C*-diagonals with Menger manifold spectra} \label{s:CDiagMenger} Suppose that modification (path) produces the C*-algebra $A = \ilim_n \gekl{A_n,\varphi_n}$ with prescribed Elliott invariant $\mathcal E$ as in \S~\ref{ss:(path)} and the C*-diagonal $B = \ilim_n \gekl{B_n,\varphi_n}$ of $A$ as in Lemma~\ref{lem:path:Ell}. In the following, we write $X_n \defeq \Spec B_n$, $X \defeq \Spec B$. Note that $X$ is metrizable, Hausdorff and compact (in the unital case) or locally compact (in the stably projectionless case), $X \cong \plim_n \gekl{X_n,\bm{p}_n}$ and $\dim X \leq 1$ (see \cite{Li18}). Our goal now is to determine $X$ further. 
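Before doing so, we record a quick consistency check on the lifting step. The four formulas for $\omega_{n+1,I}$ in the proof of Proposition~\ref{prop:path} are exactly the inverses of the corresponding affine maps, so that $\lambda \circ \omega_{n+1,I} = \omega_{n,I}$. The following sketch verifies this numerically; the explicit formulas $\lambda^+(t) = \tfrac{1}{2} + \tfrac{1}{2} t$, $\lambda_+(t) = \tfrac{1}{2} t$, $\lambda^-(t) = 1 - \tfrac{1}{2} t$, $\lambda_-(t) = \tfrac{1}{2} - \tfrac{1}{2} t$ are inferred assumptions, read off from these relations (and from $\lambda_{\nu_+} = \tfrac{1}{2} \cdot {\rm id}$, $\lambda_{\nu^+} = \tfrac{1}{2} + \tfrac{1}{2} \cdot {\rm id}$ appearing later in \S~\ref{s:CDiagMenger}), not quoted definitions.

```python
# Hedged sanity check: in the proof of Proposition prop:path, the lifted
# reparametrization omega_{n+1,I} is obtained from omega_{n,I} by one of
# four affine formulas, one for each lambda.  Each formula should invert
# the corresponding lambda, i.e. lambda(omega_{n+1,I}(t)) = omega_{n,I}(t).
# The explicit lambdas below are inferred assumptions, not quoted text.

LAMBDAS = {
    "lambda^+": (lambda t: (1 + t) / 2, lambda w: -1 + 2 * w),
    "lambda_+": (lambda t: t / 2,       lambda w: 2 * w),
    "lambda^-": (lambda t: 1 - t / 2,   lambda w: 2 - 2 * w),
    "lambda_-": (lambda t: (1 - t) / 2, lambda w: 1 - 2 * w),
}

def check_inversion(samples=101):
    """Verify lambda(lift(w)) == w for sampled w in [0,1]."""
    for name, (lam, lift) in LAMBDAS.items():
        for k in range(samples):
            w = k / (samples - 1)
            assert abs(lam(lift(w)) - w) < 1e-12, name
    return True

print(check_inversion())  # -> True
```

The identity is purely algebraic, so sampling is only a convenience; it confirms that each lifted path projects back to the original path under $\bm{p}[s,(\mu,\gamma)] = [\lambda_\mu(s),\gamma]$.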
Let $\bm{p}_{n,\infty}: \: X \to X_n$ be the map given by the inverse limit structure of $X$ and $\bm{p}_{n,N}: \: X_{N+1} \to X_n$ the composition $\bm{p}_{n,N} \defeq \bm{p}_n \circ \dotso \circ \bm{p}_N$. Moreover, the groupoid model $G_n$ for $A_n$ in \S~\ref{ss:GPDModels} yields descriptions $X_n \cong \big( ([0,1] \times_\bullet \mathcal Y_n) \amalg \mathcal X_n \big) / {}_\sim$, where $\mathcal Y_n = \mathcal E_n^{(0)}$, $\mathcal X_n = \mathcal F_n^{(0)}$, and with $\mathcal Y_{n,\mathfrak r} \defeq \mathcal Y_n \cap \mathcal E_{n,\mathfrak r}$, $[0,1] \times_\bullet \mathcal Y_n \defeq \menge{(t,y) \in [0,1] \times \mathcal Y_n}{y \in \mathcal Y_{n,t} \ \text{if} \ t = 0,1}$, and $\sim$ is the equivalence relation on $([0,1] \times_\bullet \mathcal Y_n) \amalg \mathcal X_n$ generated by $(\mathfrak r,y) \sim \bm{b}_{n,\mathfrak r}(y)$ for all $\mathfrak r = 0,1$, $y \in \mathcal Y_{n,\mathfrak r}$. \begin{proposition} \label{prop:pathconn} The C*-diagonal $B$ has path-connected spectrum $X$. \end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Let $\eta = (\eta_n)_n$, $\zeta = (\zeta_n)_n$ be two points in $X$. The base case of the induction in the proof of Proposition~\ref{prop:conn} shows that there exists a continuous path $\xi_1: \: [0,1] \to X_1$ with $\xi_1(0) = \eta_1$, $\xi_1(1) = \zeta_1$. Using \cite[Lemma~2.10]{FR}, it is straightforward to see that $\xi_1$ can be chosen with properties (P1) and (P2). Applying Proposition~\ref{prop:path} recursively, we obtain continuous paths $\xi_n: \: [0,1] \to X_n$ with $\xi_n(0) = \eta_n$, $\xi_n(1) = \zeta_n$ and $\bm{p}_n \circ \xi_{n+1} = \xi_n$. Hence $\xi(t) \defeq (\xi_n(t))_n$ defines a continuous path $[0,1] \to X \cong \plim_n \gekl{X_n, \bm{p}_n}$ with $\xi(0) = \eta$ and $\xi(1) = \zeta$. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{proposition} \label{prop:lpc} The spectrum $X$ of $B$ is locally path-connected.
\end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Consider a point $\bm{c} = ([w_n,y_n])_n \in X$ with $w_n \in [0,1]$, $y_n \in \mathcal Y_n$, and an open set $V$ of $X$ with $\bm{c} \in V$. First suppose that there is $\underline{n}$ such that $w_n \notin \gekl{0,1}$ for all $n \geq \underline{n}$. Then there exists $n \geq \underline{n}$, an open interval $I_n \subseteq (0,1)$, $\alpha, e \in \mathbb{Z}_{\geq 0}$ such that $\tfrac{\alpha}{2^e} < w_n < \tfrac{\alpha + 1}{2^e}$, $[\tfrac{\alpha}{2^e}, \tfrac{\alpha + 1}{2^e}] \subseteq I_n$ and $\bm{p}_{n,\infty}^{-1}[I_n \times \gekl{y_n}] \subseteq V$. It is straightforward to see that if there exists an open interval $I_{n+m} \subseteq (0,1)$ of length at least $\frac{1}{2^{e-m}}$ with $w_{n+m} \in I_{n+m}$ and $\tfrac{1}{2} \notin I_{n+m}$ such that $[I_{n+m} \times \gekl{y_{n+m}}] \subseteq \bm{p}_{n,n+m}^{-1}[I_n \times \gekl{y_n}]$, then there exists an open interval $I_{n+m+1} \subseteq (0,1)$ of length at least $\frac{1}{2^{e-m-1}}$ with $w_{n+m+1} \in I_{n+m+1}$ and such that $[I_{n+m+1} \times \gekl{y_{n+m+1}}] \subseteq \bm{p}_{n,n+m+1}^{-1}[I_n \times \gekl{y_n}]$. Thus there exists $m \leq e-1$ and an open interval $I_{n+m} \subseteq (0,1)$ with $w_{n+m}, \tfrac{1}{2} \in I_{n+m}$ such that $[I_{n+m} \times \gekl{y_{n+m}}] \subseteq \bm{p}_{n,n+m}^{-1}[I_n \times \gekl{y_n}]$. Hence $U \defeq \bm{p}_{n+m,\infty}^{-1}[I_{n+m} \times \gekl{y_{n+m}}]$ satisfies $\bm{c} \in U \subseteq V$. Set $\bar{n} \defeq n+m$, so that $U = \bm{p}_{\bar{n},\infty}^{-1}[I_{\bar{n}} \times \gekl{y_{\bar{n}}}]$. Now assume that for all $\underline{n}$ there is $n \geq \underline{n}$ such that $w_n \in \gekl{0,1}$. 
Then, since $V$ is open, there exists $\bar{n} \geq \underline{n}$ and, for every $(\mathfrak r,y) \sim (w_{\bar{n}},y_{\bar{n}})$, half-open intervals $I(\mathfrak r,y)$ containing $\mathfrak r$ such that $U \defeq \bigcup_{(\mathfrak r,y) \sim (w_{\bar{n}},y_{\bar{n}})} \bm{p}_{\bar{n},\infty}^{-1}[I(\mathfrak r,y) \times \gekl{y}] \subseteq V$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} We claim that in both cases above, $U$ is path-connected. Let $\eta = (\eta_n), \zeta = (\zeta_n) \in U$. We construct a path $\xi_{\bar{n}}: \: [0,1] \to X_{\bar{n}}$ with (P1) and (P2) such that $\xi_{\bar{n}}(0) = \eta_{\bar{n}}$ and $\xi_{\bar{n}}(1) = \zeta_{\bar{n}}$. Let us treat the first case ($w_{\bar{n}} \notin \gekl{0,1}$). We have $\eta_{\bar{n}} = [w_{\bar{n}}^0,y_{\bar{n}}]$, $\zeta_{\bar{n}} = [w_{\bar{n}}^1,y_{\bar{n}}]$. Define $\xi_{\bar{n}}$ as in (P1), with $D=1$, $\mathfrak t_1 = \tfrac{1}{2}$, for $I = [\mathfrak t_0,\mathfrak t_1] = [0,\tfrac{1}{2}]$, $\gamma_{\bar{n},I} \defeq y_{\bar{n}}$, $\omega_{\bar{n},I}: \: [0,\tfrac{1}{2}] \to [0,1]$ as in (P1) with $\omega_{\bar{n},I}(0) = w_{\bar{n}}^0$, $\omega_{\bar{n},I}(\tfrac{1}{2}) = \tfrac{1}{2}$, and for $I = [\mathfrak t_1,\mathfrak t_2] = [\tfrac{1}{2},1]$, $\gamma_{\bar{n},I} \defeq y_{\bar{n}}$, $\omega_{\bar{n},I}: \: [\tfrac{1}{2},1] \to [0,1]$ as in (P1) with $\omega_{\bar{n},I}(\tfrac{1}{2}) = \tfrac{1}{2}$, $\omega_{\bar{n},I}(1) = w_{\bar{n}}^1$ (such paths exist by \cite[Lemma~2.10]{FR} and have (P2)). In the second case ($w_{\bar{n}} \in \gekl{0,1}$), let $\eta_{\bar{n}} = [w_{\bar{n}}^0,y_{\bar{n}}^0]$, $\zeta_{\bar{n}} = [w_{\bar{n}}^1,y_{\bar{n}}^1]$. There must exist $\mathfrak r^0, \mathfrak r^1 \in \gekl{0,1}$ with $(\mathfrak r^0,y_{\bar{n}}^0) \sim (w_{\bar{n}},y_{\bar{n}}) \sim (\mathfrak r^1,y_{\bar{n}}^1)$. 
Define $\xi_{\bar{n}}$ as in (P1), with $D=1$, $\mathfrak t_1 = \tfrac{1}{2}$, for $I = [\mathfrak t_0,\mathfrak t_1] = [0,\tfrac{1}{2}]$, $\gamma_{\bar{n},I} \defeq y_{\bar{n}}^0$, $\omega_{\bar{n},I}: \: [0,\tfrac{1}{2}] \to [0,1]$ as in (P1) with $\omega_{\bar{n},I}(0) = w_{\bar{n}}^0$, $\omega_{\bar{n},I}(\tfrac{1}{2}) = \mathfrak r^0$, and for $I = [\mathfrak t_1,\mathfrak t_2] = [\tfrac{1}{2},1]$, $\gamma_{\bar{n},I} \defeq y_{\bar{n}}^1$, $\omega_{\bar{n},I}: \: [\tfrac{1}{2},1] \to [0,1]$ as in (P1) with $\omega_{\bar{n},I}(\tfrac{1}{2}) = \mathfrak r^1$, $\omega_{\bar{n},I}(1) = w_{\bar{n}}^1$ (such paths exist by \cite[Lemma~2.10]{FR} and have (P2)). Now apply Proposition~\ref{prop:path} to obtain paths $\xi_n$ and thus a path $\xi$ connecting $\eta$ and $\zeta$ as for Proposition~\ref{prop:pathconn}. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{corollary} \label{cor:Peano} $X$ is a Peano continuum in the unital case and a generalized Peano continuum in the stably projectionless case (see for instance \cite[Chapter~I, \S~9]{BQ} for the definition of a generalized Peano continuum). \end{corollary} Our next goal is to show that we can always arrange $X$ to have no local cut points. In the following, we keep the same notations as in \S~\ref{ss:GPDModels}. 
First, we observe that in modification (path), because multiplicities in the original C*-algebra models in \cite{Ell, EV, GLN} can be chosen bigger than a fixed constant, by conjugating by suitable permutation matrices, we can always arrange the following conditions for all $n$: \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{enumerate} \item[(nlc$_1$)] For all $p$ and $m = m^+, m_+, m^-$ or $m_-$, we either have $\sum_q m(q,p) = 0$ or $\sum_q m(q,p) \geq 2$, and $\sum_q (\underline{m}(q,p) + \overline{m}(q,p)) \geq 2$; and for all $i$, we have $\sum_q m^{q,i} \geq 2$; \item[(nlc$_2$)] For all $p$, $m = m^+, m_+, m^-$ or $m_-$, $\lambda = \lambda^+, \lambda_+, \lambda^-$ or $\lambda_-$ correspondingly, $\mathfrak r, \mathfrak s \in \gekl{0,1}$ with $\lambda(\mathfrak r) = \mathfrak s$, rank-one projections $d \in DM_{m(p,i)}$ and $\delta$ the image of $d \otimes 1_{F_n^i}$ under $\tailarr$ in the description \eqref{e:betanpi} of $\beta_{n,\mathfrak s}^p$, and rank-one projections $\mathfrak d \in DM_{m(q,p)}$, there exists a rank-one projection $\mathfrak d' \in DM_{m(q',p)}$ orthogonal to $\mathfrak d$, and orthogonal projections $\mathfrak f, \mathfrak f' \in DF_{n+1}$ such that, if $\Delta$ is the image of $\mathfrak d \otimes \delta$ under $\tailarr$ in the description \eqref{e:varphiC} of $\varphi_C$, then $$ \mathfrak d \otimes (\delta \cdot \beta_{n,\mathfrak s}^p(a) \cdot \delta) = \mathfrak d \otimes (\delta \cdot f^p(\mathfrak s) \cdot \delta) \tailarr \Delta \cdot \varphi_C^q(f,a)(\mathfrak r) \cdot \Delta = \Delta \cdot \beta_\mathfrak r(\mathfrak f \cdot \varphi_F(f,a) \cdot \mathfrak f) \cdot \Delta $$ for all $(f,a) \in A_n$ with respect to the description \eqref{e:varphiC} of $\varphi_C$, and similarly for $\mathfrak d'$ and $\mathfrak f'$. 
\end{enumerate} On the groupoid level, with the same notation as in \S~\ref{ss:GPDModels}, (nlc$_2$) means that for all $\gamma \in \mathcal M(p,i) \times \mathcal F_n^i \into \mathcal E_{n,\mathfrak s}^p$, $\mu \in \mathcal M(q,p)$, where $\mathcal M = \mathcal M^+, \mathcal M_+, \mathcal M^-$ or $\mathcal M_-$, $\lambda_\mu(\mathfrak r) = \mathfrak s$, there exists $\nu \in \mathcal M(q',p)$ such that $\bm{b}_\mathfrak r(\mu,\gamma) \neq \bm{b}_\mathfrak r(\nu,\gamma)$, i.e., $[\mathfrak r,(\mu,\gamma)] \neq [\mathfrak r,(\nu,\gamma)]$, and $\lambda_\mu = \lambda_\nu$. \begin{proposition} \label{prop:nlc} If we arrange (nlc$_1$), (nlc$_2$) in modification (path), then we obtain a C*-diagonal $B$ whose spectrum $X$ has no local cut points (i.e., for all $\bm{c} \in X$ and open connected sets $V \subseteq X$ containing $\bm{c}$, $V \setminus \gekl{\bm{c}}$ is still connected). \end{proposition} \begin{proof} Let $\bm{c}$, $V$ and $U$ be as in the proof of Proposition~\ref{prop:lpc}. It suffices to show that $U \setminus \gekl{\bm{c}}$ is path-connected. Let $\eta, \zeta \in U \setminus \gekl{\bm{c}}$ and $\xi$ a path in $U$ connecting $\eta$ and $\zeta$ as in the proof of Proposition~\ref{prop:lpc}. If $\xi$ hits $\bm{c}$, our goal is to modify $\xi$ to obtain a path in $U$ from $\eta$ to $\zeta$ which avoids $\bm{c}$. First of all, we may assume that $\xi$ hits $\bm{c}$ only once, i.e., there exists $\check{t} \in [0,1]$ such that $\xi(\check{t}) = \bm{c}$ and $\xi(t) \neq \bm{c}$ for all $t \in [0,1] \setminus \gekl{\check{t}}$. Otherwise we could define $t^{\min} \defeq \min \menge{t \in [0,1]}{\xi(t) = \bm{c}}$, $t^{\max} \defeq \max \menge{t \in [0,1]}{\xi(t) = \bm{c}}$, and concatenate $\xi \vert_{[0,t^{\min}]}$ with $\xi \vert_{[t^{\max},1]}$ (and re-parametrize to get a map defined on $[0,1]$). Let $\xi_n$ be as in the proof of Proposition~\ref{prop:lpc}, obtained from Proposition~\ref{prop:path}, and let $\gamma_{n,I}$ and $\omega_{n,I}$ be as in (P1) for $\xi_n$.
Choose $n$ such that, with $\bm{c}_n = [w_n,y_n]$, we have $[w_n,y_n] \neq \eta_n = [w_n^0,y_n^0]$ and $[w_n,y_n] \neq \zeta_n = [w_n^1,y_n^1]$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} If $w_n^0 \notin \gekl{0,1}$, then either $y_n \neq y_n^0$, in which case $\omega_n([0,\check{t}]) \defeq \bigcup_I \omega_{n,I}(I \cap [0,\check{t}])$ must contain either $[0,w_n^0]$ or $[w_n^0,1]$, or $w_n \neq w_n^0$, in which case $\omega_n([0,\check{t}])$ must contain the interval between $w_n$ and $w_n^0$. If $w_n^0 \in \gekl{0,1}$ and $w_n \notin \gekl{0,1}$, then $\omega_n([0,\check{t}])$ must contain the interval between $w_n$ and $w_n^0$ (or $1 - w_n^0$). If $w_n^0, w_n \in \gekl{0,1}$, then since $[w_n,y_n] \neq [w_n^0,y_n^0]$, we must have $\omega_n([0,\check{t}]) = [0,1]$. We conclude that in any case, $\omega_n([0,\check{t}]) \cap \mathbb{Z}[\tfrac{1}{2}] \neq \emptyset$ and $\omega_n([0,\check{t}]) \cap (\mathbb{Z}[\tfrac{1}{2}])^c \neq \emptyset$. Similarly, with $\omega_n([\check{t},1]) \defeq \bigcup_I \omega_{n,I}(I \cap [\check{t},1])$, $\omega_n([\check{t},1]) \cap \mathbb{Z}[\tfrac{1}{2}] \neq \emptyset$ and $\omega_n([\check{t},1]) \cap (\mathbb{Z}[\tfrac{1}{2}])^c \neq \emptyset$. By increasing $n$, we can arrange that $0, \tfrac{1}{2} \ \text{or} \ 1 \in \omega_n([0,\check{t}])$ and $\omega_n([0,\check{t}]) \cap (\mathbb{Z}[\tfrac{1}{2}])^c \neq \emptyset$, as well as $0, \tfrac{1}{2} \ \text{or} \ 1 \in \omega_n([\check{t},1])$ and $\omega_n([\check{t},1]) \cap (\mathbb{Z}[\tfrac{1}{2}])^c \neq \emptyset$. If we now let $0 = r_0 \leq t_0 < r_1 < t_1 < \dotso < r_c < t_c < r_{c+1} \leq t_{c+1} = 1$ be as in the proof of Proposition~\ref{prop:path}, then we must have $\check{t} \in (r_1,t_c)$. First assume that $w_n \neq 0, \tfrac{1}{2}, 1$ and $w_{n+1} \neq 0, 1$. Then $\check{t} \in I \subseteq [r_b,t_b]$, and $\check{t}$ must lie in the interior of $I$. 
Let $s_b$ and $\xi_{n+1}[b] = [w[b],(\mu[b],y[b])]$ be as in the proof of Proposition~\ref{prop:path}. By condition (nlc$_1$), we can find $\bar{\mu}[b] \neq \mu[b]$ with $\lambda_{\bar{\mu}[b]} = \lambda_{\mu[b]}$, so that we can replace $\xi_{n+1}[b]$ by $[w[b],(\bar{\mu}[b],y[b])]$ since we still have $\bm{p}_n[w[b],(\bar{\mu}[b],y[b])] = \xi_n(s_b)$. Now let $\gamma_{n+1,I} \defeq (\bar{\mu}[b],y[b])$ and follow the recipe in the proof of Proposition~\ref{prop:path} to get $\omega_{n+1,I}$. Recursive application of Proposition~\ref{prop:path} gives us the desired path, which will not hit $\bm{c}$ in $(r_b,t_b)$ by construction, on $[t_{b-1},r_b]$ and $[t_b,r_{b+1}]$, we have $\omega_{n,I} \equiv 0, \tfrac{1}{2} \ \text{or} \ 1$, so that we will not hit $\bm{c}$ there, either, and on the rest of $[0,1]$, we keep our path $\xi$ and hence will not hit $\bm{c}$ there, either. Secondly, assume that $w_n = 0, \tfrac{1}{2}, \ \text{or} \ 1$ and $w_{n+1} \neq 0, 1$. Then $\check{t} \in I = [r_b,t_b]$, and $\check{t}$ must lie in the interior of $I$. Let $\gamma_{n+1,I} = (\mu,y)$. We must have $\lambda_{\mu} \equiv w_n$. By condition (nlc$_1$), there exists $\bar{\mu} \neq \mu$ with $\lambda_{\bar{\mu}} = \lambda_\mu$. Now replace $\gamma_{n+1,I}$ by $(\bar{\mu},y)$ and follow the recipe in the proof of Proposition~\ref{prop:path} to get $\omega_{n+1,I}$. Recursive application of Proposition~\ref{prop:path} gives us the desired path, which will not hit $\bm{c}$ in $I$ by construction, and on $[0,1] \setminus I$, we keep our path $\xi$ and hence will not hit $\bm{c}$ there, either. Thirdly, assume $w_n \in \gekl{0, \tfrac{1}{2}, 1}$ and $w_{n+1} \in \gekl{0,1}$. By increasing $n$ if necessary, we may assume $w_n \in \gekl{0,1}$. We have $\check{t} \in I \defeq [t_b,r_{b+1}]$. 
If $\check{t} \in (t_b,r_{b+1})$, then choose a different path between $[w,(\mu,y)]$ and $[\bar{w},(\bar{\mu},\bar{y})]$ (here we are using the same notation as in the proof of Proposition~\ref{prop:path}). There are always two such paths only overlapping at their end points (see Remark~\ref{rem:Yconnconn} and the proof of Proposition~\ref{prop:conn} it refers to). Complete the construction of $\xi$ on $[t_b,r_{b+1}]$ using Proposition~\ref{prop:path} repeatedly. Keep $\xi$ on $[0,1] \setminus (t_b,r_{b+1})$. This yields the desired path which does not hit $\bm{c}$ anymore. Now assume that $\check{t} = t_b < r_{b+1}$. By condition (nlc$_2$), there exists $\nu$ with $\lambda_\nu = \lambda_\mu$ such that $[w,(\nu,y)] \neq [w,(\mu,y)]$. Construct a path as in the proof of Proposition~\ref{prop:path} connecting $[w,(\nu,y)]$ and $[\bar{w},(\bar{\mu},\bar{y})]$ not hitting $\bm{c}_{n+1}=[w_{n+1},y_{n+1}]$. This is possible because there are always two paths connecting these points and only overlapping at their end points (see Remark~\ref{rem:Yconnconn} and the proof of Proposition~\ref{prop:conn} it refers to). On the interval $\dot{I}$ before $t_b$, re-define $\xi_{n+1}$ by setting $\gamma_{n+1,\dot{I}} \defeq (\nu,y)$ and following the recipe in the proof of Proposition~\ref{prop:path} for $\omega_{n+1,\dot{I}}$. On the interval $\ddot{I}$ before $\dot{I}$, either $\omega_{n,\ddot{I}} \equiv \tfrac{1}{2}$, in which case simply following the recipe in the proof of Proposition~\ref{prop:path} for $\omega_{n+1,\ddot{I}}$ will make sure that we do not hit $\bm{c}$, or $\omega_{n,\ddot{I}} \equiv w$, in which case we construct $\xi_{n+1}$ on $\ddot{I}$ avoiding $\bm{c}_{n+1}$ using as before that Remark~\ref{rem:Yconnconn} (and the proof of Proposition~\ref{prop:conn} it refers to) always provides two paths we can choose from to connect end points as required. 
Complete the construction of $\xi$ on $I$, $\dot{I}$ and $\ddot{I}$ using Proposition~\ref{prop:path} repeatedly, and keep $\xi$ on the remaining part of $[0,1]$. This yields the desired path not hitting $\bm{c}$. Finally, suppose that $\check{t} = t_b = r_{b+1}$ (this can happen due to our re-parametrisation). By condition (nlc$_2$), there exist $\nu$, $\bar{\nu}$ with $\lambda_\nu = \lambda_\mu$, $\lambda_{\bar{\nu}} = \lambda_{\bar{\mu}}$ and $[w,(\nu,y)] \neq [w,(\mu,y)]$, $[\bar{w},(\bar{\nu},\bar{y})] \neq [\bar{w},(\bar{\mu},\bar{y})]$. As in the previous cases, we can construct a path connecting $[w,(\nu,y)]$ and $[\bar{w},(\bar{\nu},\bar{y})]$ not hitting $[w_{n+1},y_{n+1}]$. Complete the construction of $\xi$ on $I$ using Proposition~\ref{prop:path} repeatedly. On the two intervals before and after $t_b = r_{b+1}$, construct $\xi$ as in the previous case so that we do not hit $\bm{c}$ there. On the remaining part of $[0,1]$, keep the path $\xi$, which does not hit $\bm{c}$ by assumption. This yields the desired path not hitting $\bm{c}$. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Next, we show that we can always arrange $X$ so that no non-empty open subset of $X$ is planar. 
For this purpose, we observe that in modification (path), with the same notations as in \S~\ref{ss:GPDModels}, for the same reasons that allow us to arrange (nlc$_1$) and (nlc$_2$), we can always arrange the following conditions for all $n$: \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{enumerate} \item[(nop$_1$)] For all $p$, $\sum_q m^+(q,p) \geq 1$ and $\sum_q m_+(q,p) \geq 1$, and $\sum_q \underline{m}(q,p) \geq 9$ or $\sum_q \overline{m}(q,p) \geq 9$; \item[(nop$_2$)] The analogue of (nlc$_2$), implying on the groupoid level that for all $\gamma \in \mathcal E_{n,0}^p$, there exist $\nu_+^i \in \mathcal M_+(q^i,p)$, $i = 1, 2, 3$, such that $\bm{b}_0(\nu_+^i,\gamma)$ are pairwise distinct, i.e., $[0,(\nu_+^i,\gamma)]$ are pairwise distinct, and for all $\gamma \in \mathcal E_{n,1}^p$, there exist $\nu^+_i \in \mathcal M^+(q_i,p)$, $i = 1, 2, 3$, with $\bm{b}_1(\nu^+_i,\gamma)$ pairwise distinct, i.e., $[1,(\nu^+_i,\gamma)]$ pairwise distinct. \end{enumerate} \begin{proposition} \label{prop:nop} If we arrange (nop$_1$) and (nop$_2$) in modification (path), then we obtain a C*-diagonal $B$ whose spectrum $X$ has the property that no non-empty open subset of $X$ is planar. \end{proposition} \begin{proof} Let $\emptyset \neq V \subseteq X$ be open. An argument similar to the one at the beginning of the proof of Proposition~\ref{prop:lpc} shows that there exist $n$ and an open subset $U_n$ of $X_n$ such that $[\tfrac{1}{2},y] \in U_n$ for some $y \in \mathcal E_n$ and $\bm{p}_{n,\infty}^{-1}(U_n) \subseteq V$. By condition (nop$_1$), there exist $\mu^{ij}$, $1 \leq i, j \leq 3$ with $\lambda_{\mu^{ij}} \equiv \tfrac{1}{2}$ and $[0,(\mu^{ij},y)] = [1,(\mu^{kl},y)]$ for all $i, j, k, l$.
Let $\xi_{n+1}^{ij}(t) \defeq [\omega_{n+1}(t),(\mu^{ij},y)]$, where $\omega_{n+1}: \: [0,1] \to [0,1]$ is as in (P1), with $\omega_{n+1}(0) = 0$, $\omega_{n+1}(1) = 1$ and $\omega_{n+1}(0)$, $\omega_{n+1}(1)$ are not stop values ($\omega_{n+1}$ exists by \cite[Lemma~2.10]{FR} and automatically has (P2)). Set $y_{n+1} \defeq (\mu^{11},y)$. By condition (nop$_2$), we can find $\nu_+^i$, $i = 1, 2, 3$, such that $\lambda_{\nu_+^i} = \tfrac{1}{2} \cdot {\rm id}$ and, with $y_{n+2,i}^0 \defeq (\nu_+^i,y_{n+1})$, we have that $[0, y_{n+2,i}^0]$ are pairwise distinct for $i = 1, 2, 3$. Similarly, by condition (nop$_2$), we can find $\nu^+_j$, $j = 1, 2, 3$, such that $\lambda_{\nu^+_j} = \tfrac{1}{2} + \tfrac{1}{2} \cdot {\rm id}$ and, with $y_{n+2,j}^1 \defeq (\nu^+_j,y_{n+1})$, we have that $[1, y_{n+2,j}^1]$ are pairwise distinct for $j = 1, 2, 3$. By the variation of Proposition~\ref{prop:path}, we can find a path $\xi_{n+2}^{ij}$ satisfying (P1), (P2) and (P3a) such that $\xi_{n+2}^{ij}(0) = [0, y_{n+2,i}^0]$, $\xi_{n+2}^{ij}(1) = [1, y_{n+2,j}^1]$ and $\bm{p}_{n+1} \circ \xi_{n+2}^{ij} = \xi_{n+1}^{ij}$. Now define recursively $y_{N,i}^0$ and $y_{N,j}^1$ for all $N \geq n+2$ by setting $y_{N+1,i}^0 \defeq (\mu_+,y_{N,i}^0)$ for some $\mu_+$ with $\lambda_{\mu_+} = \tfrac{1}{2} \cdot {\rm id}$ and $y_{N+1,j}^1 \defeq (\mu^+,y_{N,j}^1)$ for some $\mu^+$ with $\lambda_{\mu^+} = \tfrac{1}{2} + \tfrac{1}{2} \cdot {\rm id}$. We can find such $\mu_+$ and $\mu^+$ by the first part of condition (nop$_1$). By the variation of Proposition~\ref{prop:path}, we can find paths $\xi_N^{ij}$ satisfying (P1), (P2) and (P3a) such that $\xi_N^{ij}(0) = [0, y_{N,i}^0]$, $\xi_N^{ij}(1) = [1, y_{N,j}^1]$ and $\bm{p}_{N-1} \circ \xi_N^{ij} = \xi_{N-1}^{ij}$ for all $N \geq n+2$. This gives rise to paths $\xi^{ij}$, $1 \leq i, j \leq 3$, with $\xi^{ij}(t) \defeq (\xi_N^{ij}(t))_N$.
As $\bm{p}_{n,\infty}(\xi^{ij}(t)) = \bm{p}_n(\xi_{n+1}^{ij}(t)) \in U_n$ for all $t \in [0,1]$, we must have ${\rm im\,}(\xi^{ij}) \subseteq \bm{p}_{n,\infty}^{-1}(U_n) \subseteq V$ for all $i$, $j$. Now define $v_i^0 \defeq ([0,y_{N,i}^0])_N$ and $v_j^1 \defeq ([1,y_{N,j}^1])_N$. By construction, we have ${\rm im\,}(\xi^{ij}) \cap {\rm im\,}(\xi^{kl}) = \{ v^0_i \}$ if $i = k$ and $j \neq l$, ${\rm im\,}(\xi^{ij}) \cap {\rm im\,}(\xi^{kl}) = \{ v^1_j \}$ if $i \neq k$ and $j = l$, and ${\rm im\,}(\xi^{ij}) \cap {\rm im\,}(\xi^{kl}) = \emptyset$ if $i \neq k$ and $j \neq l$. As ${\rm im\,}(\xi^{ij})$ is a compact, connected, locally connected metric space, it is arcwise connected (see for instance \cite[\S~31]{Wil}). Hence we can find arcs $\xi^{(i \to j)}$ such that $\xi^{(i \to j)}(0) = v^0_i$, $\xi^{(i \to j)}(1) = v^1_j$, and ${\rm im\,}(\xi^{(i \to j)}) \subseteq {\rm im\,}(\xi^{ij})$ for $i, j \in \gekl{1,2,3}$. Then we still have that ${\rm im\,}(\xi^{(i \to j)}) \cap {\rm im\,}(\xi^{(k \to l)}) = \{ v^0_i \}$ if $i = k$ and $j \neq l$, ${\rm im\,}(\xi^{(i \to j)}) \cap {\rm im\,}(\xi^{(k \to l)}) = \{ v^1_j \}$ if $i \neq k$ and $j = l$, and ${\rm im\,}(\xi^{(i \to j)}) \cap {\rm im\,}(\xi^{(k \to l)}) = \emptyset$ if $i \neq k$ and $j \neq l$. Now let $K_{3,3}$ be the bipartite graph consisting of vertices $e^i(0)$, $e^j(1)$, $1 \leq i,j \leq 3$, and edges $e^{(i \to j)}$, $1 \leq i, j \leq 3$, connecting $e^i(0)$ to $e^j(1)$ for all $i,j$, such that we have $e^{(i \to j)} \cap e^{(k \to l)} = \gekl{e^i(0)}$ if $i = k$ and $j \neq l$, $e^{(i \to j)} \cap e^{(k \to l)} = \gekl{e^j(1)}$ if $i \neq k$ and $j = l$, and $e^{(i \to j)} \cap e^{(k \to l)} = \emptyset$ if $i \neq k$ and $j \neq l$. By construction, we obtain a continuous map $K_{3,3} \to V$ which is a homeomorphism onto its image by sending $e^i(0)$ to $v^0_i$, $e^j(1)$ to $v^1_j$ and $e^{(i \to j)}$ to $\xi^{(i \to j)}$.
Since $K_{3,3}$ is not planar by \cite{Kur}, this shows that $V$ is not planar. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{corollary} \label{cor:Menger_unital} Suppose that we are in the unital case and that we arrange (nlc$_1$), (nlc$_2$), (nop$_1$) and (nop$_2$) in modification (path). Then we obtain a C*-diagonal $B$ whose spectrum $X$ is homeomorphic to the Menger curve. \end{corollary} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Anderson characterized the Menger curve as the (up to homeomorphism) unique Peano continuum with no local cut points and for which no non-empty open subset is planar (see \cite{And58_1,And58_2}). Our result thus follows from Corollary~\ref{cor:Peano} combined with Propositions~\ref{prop:nlc} and \ref{prop:nop}. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Our next goal is to identify $X = \Spec B$ in the stably projectionless case. We show that $X \cong \bm{M} \setminus \iota(C)$, where $\iota$ is an embedding of the Cantor space $C$ into the Menger curve $\bm{M}$ such that $\iota(C)$ is a non-locally-separating subset of $\bm{M}$. By \cite{MOT}, the homeomorphism type of $\bm{M} \setminus \iota(C)$ does not depend on the choice of $\iota$, and hence we denote the space by $\bm{M}_{\setminus C} \defeq \bm{M} \setminus \iota(C)$. More precisely, we will show that the Freudenthal compactification $\overline{X}{}^{^F}$ of $X$ is homeomorphic to $\bm{M}$, that the space of Freudenthal ends ${\rm End}_F(X)$ is homeomorphic to $C$, and that ${\rm End}_F(X)$ is a non-locally-separating subset of $\overline{X}{}^{^F}$. It follows that $X$ is homeomorphic to $\bm{M}_{\setminus C}$. We refer the reader to \cite{Fre} and \cite[Chapter~I, \S~9]{BQ} for details about the Freudenthal compactification. We follow the exposition in \cite[Chapter~I, \S~9]{BQ}. 
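As an aside to the proof of Proposition~\ref{prop:nop}: besides Kuratowski's theorem, the non-planarity of $K_{3,3}$ follows from the Euler-formula bound $e \leq 2v - 4$ for simple planar bipartite graphs (bipartite graphs have no triangles, so every face of a planar embedding has at least $4$ boundary edges). A minimal Python sketch checking that $K_{3,3}$ violates this bound; the edge encoding is purely illustrative:

```python
# K_{3,3} violates the planar bipartite edge bound e <= 2v - 4.
# (A simple bipartite graph has no triangles, so in a planar embedding
# every face has >= 4 edges; Euler's formula v - e + f = 2 gives e <= 2v - 4.)

def complete_bipartite_edges(m, n):
    """Edges of K_{m,n} with vertex parts {0,...,m-1} and {m,...,m+n-1}."""
    return [(i, m + j) for i in range(m) for j in range(n)]

edges = complete_bipartite_edges(3, 3)
v = 3 + 3                 # number of vertices
e = len(edges)            # number of edges: 9
bound = 2 * v - 4         # planar bipartite bound: 8

print(e, bound, e <= bound)  # 9 8 False, so K_{3,3} cannot be planar
```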
First of all, in the stably projectionless case, we define $\overline{X}_n \defeq \big( ([0,1] \times \mathcal Y_n) \amalg \mathcal X_n \big) / {}_\sim$, where we extend the equivalence relation describing $X_n$ (see the beginning of \S~\ref{s:CDiagMenger}) trivially from $([0,1] \times_\bullet \mathcal Y_n) \amalg \mathcal X_n$ to $([0,1] \times \mathcal Y_n) \amalg \mathcal X_n$. By our arrangement, for all $n$, there exists exactly one index $\grave{p}$ such that $\beta_{n,0}^{\grave{p}}$ is unital and $\beta_{n,1}^{\grave{p}}$ is non-unital, while $\beta_{n,\bullet}^p$ is unital for all other $p \neq \grave{p}$. This means that $\mathcal Y_{n,0} = \mathcal Y_n$ and $\mathcal Y_n \setminus \mathcal Y_{n,1} = \mathcal Y_n^{\grave{p}} \setminus \mathcal Y_{n,1}^{\grave{p}}$. Hence $\overline{X}_n \setminus X_n = \big\{ [1,y_n]: \: y_n \in \mathcal Y_n^{\grave{p}} \setminus \mathcal Y_{n,1}^{\grave{p}} \big\}$. Let $\bar{\bm{p}}_n: \: \overline{X}_{n+1} \to \overline{X}_n$ be the unique continuous extension of $\bm{p}_n$. Every $y_{n+1} \in \mathcal Y_{n+1}^{\grave{q}} \setminus \mathcal Y_{n+1,1}^{\grave{q}}$ is of the form $y_{n+1} = (\mu,y_n)$ for some $y_n \in \mathcal Y_n^{\grave{p}} \setminus \mathcal Y_{n,1}^{\grave{p}}$, $\mu \in \mathcal M^+(\grave{q},\grave{p})$, and we have $\bar{\bm{p}}_n[1,y_{n+1}] = [1,y_n]$. Define $\overline{X} \defeq \plim_n \big\{ \overline{X}_n,\bar{\bm{p}}_n \big\}$. \begin{lemma} \label{lem:XXF} ${\rm id}_X$ extends to a homeomorphism $\overline{X} \isom \overline{X}{}^{^F}$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} For $y \notin \mathcal Y_{n,1}$, define $I_y \defeq [0,\tfrac{1}{3}]$, and for $y \in \mathcal Y_{n,1}$, set $I_y \defeq [0,1]$. Define $K_n \defeq \bm{p}_{n,\infty}^{-1}[\bigcup_{y \in \mathcal Y_n} I_y \times \gekl{y}]$. 
Then $K_n$ is compact because $K_n \cong \plim_N \gekl{\bm{p}_{n,N}^{-1}[\bigcup_{y \in \mathcal Y_n} I_y \times \gekl{y}], \bm{p}_N}$ and $\bm{p}_{n,N}$ is proper (see \cite[\S~7]{Li18}). Every $y_{n+1} \notin \mathcal Y_{n+1,1}$ is of the form $y_{n+1} = (\mu,y_n)$ for some $y_n \notin \mathcal Y_{n,1}$ with $\lambda_\mu = \tfrac{1}{2} + \tfrac{1}{2} \cdot {\rm id}$, so that $\bm{p}_n[t,y_{n+1}] = [\tfrac{1}{2} + \tfrac{t}{2},y_n] \notin [[0,\tfrac{1}{3}] \times \gekl{y_n}]$ for all $t \in [0,1]$. Hence $\bm{p}_n^{-1}[\bigcup_{y_n \in \mathcal Y_n} I_{y_n} \times \gekl{y_n}] \subseteq [\bigcup_{y \in \mathcal Y_{n+1,1}} [0,1] \times \gekl{y}] \subseteq {\rm int}([\bigcup_{y \in \mathcal Y_{n+1}} I_y \times \gekl{y}])$. Thus $K_n \subseteq {\rm int}(K_{n+1})$ for all $n$. Moreover, $X \setminus K_n = \bm{p}_{n,\infty}^{-1}[\bigcup_{y \notin \mathcal Y_{n,1}} (\tfrac{1}{3},1) \times \gekl{y}] = \bigcup_{y \notin \mathcal Y_{n,1}} \bm{p}_{n,\infty}^{-1}[(\tfrac{1}{3},1) \times \gekl{y}]$. Using Proposition~\ref{prop:path}, the same argument as for Proposition~\ref{prop:pathconn} shows that $\bm{p}_{n,\infty}^{-1}[(\tfrac{1}{3},1) \times \gekl{y}]$ is path-connected, and we obtain $\menge{[1,y]}{y \notin \mathcal Y_{n,1}} \isom \Pi_0(X \setminus K_n), \, [1,y] \ma \bm{p}_{n,\infty}^{-1}[(\tfrac{1}{3},1) \times \gekl{y}]$. This induces a homeomorphism $ \overline{X} \setminus X = \plim_n \gekl{\menge{[1,y_n]}{y_n \notin \mathcal Y_{n,1}}, \bm{p}_n} \isom \plim_n \Pi_0(X \setminus K_n) = {\rm End}_F(X)$ and hence a (set-theoretic) bijection $\overline{X} \isom \overline{X}{}^{^F}$ extending ${\rm id}_X$. For this description of ${\rm End}_F(X)$, we are using that $X$ is a generalized Peano continuum (see Corollary~\ref{cor:Peano}). It is now straightforward to see that this bijection is a homeomorphism. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} To study properties of $\overline{X}$, we need the following observation.
\begin{remark}\em \label{rem:path-lift_spl} In $\overline{X}$, the analogue of Proposition~\ref{prop:path} holds for a path $\xi_n$ with $\xi_n(0) \in \overline{X}_n \setminus X_n$, $\xi_n(1) \in X_n$ and $\xi_{n+1}^0 \in \overline{X}_{n+1} \setminus X_{n+1}$, $\xi_{n+1}^1 \in X_{n+1}$ with $\bm{p}_n(\xi_{n+1}^{\mathfrak r}) = \xi_n(\mathfrak r)$ for $\mathfrak r = 0,1$. We also have the analogue of the variation, but we only need (P3a) because (P3b) is automatic in the present situation since we must have $\lambda_{\mu_{n+1}^0} = \tfrac{1}{2} + \tfrac{1}{2} \cdot {\rm id}$, and we get the additional statement that if $\xi_n(t) \in X_n \ \forall \ t \in (0,1]$, then $\xi_{n+1}(t) \in X_{n+1} \ \forall \ t \in (0,1]$. \end{remark} \begin{proposition} \label{prop:clX...} $\overline{X}$ is compact, path-connected and locally path-connected. If we arrange (nlc$_1$) and (nlc$_2$) in modification (path), then $\overline{X}$ has no local cut points. If we arrange (nop$_1$) and (nop$_2$) in modification (path), then no non-empty open subset of $\overline{X}$ is planar. \end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Clearly, $\overline{X}$ is compact. To see that $\overline{X}$ is path-connected, consider $\eta, \zeta \in \overline{X}$. If both $\eta$ and $\zeta$ lie in $X$, then Proposition~\ref{prop:pathconn} provides a path connecting them. If $\eta \in \overline{X} \setminus X$ and $\zeta \in X$, we produce a path connecting them as in the proof of Proposition~\ref{prop:pathconn} using the analogue of Proposition~\ref{prop:path} from Remark~\ref{rem:path-lift_spl}. If both $\eta$ and $\zeta$ lie in $\overline{X} \setminus X$, just connect them to some point in $X$ and concatenate the two paths. To see that $\overline{X}$ is locally path-connected, we follow the same strategy as for Proposition~\ref{prop:lpc}. We only need to consider $\bm{c} = ([1,y_n])_n \in \overline{X} \setminus X$.
Choose $U$ as in the proof of Proposition~\ref{prop:lpc}, of the form $U = \bm{p}_{n,\infty}^{-1}[I \times \gekl{y_n}]$, where $I$ is a half-open interval containing $\tfrac{1}{2}$ and $1$. Then the same proof as for Proposition~\ref{prop:lpc}, using the analogue of Proposition~\ref{prop:path} from Remark~\ref{rem:path-lift_spl}, shows that $U$ is path-connected. To show that $\overline{X}$ has no local cut points if (nlc$_1$) and (nlc$_2$) hold, we again only need to consider $\bm{c} = ([1,y_n])_n \in \overline{X} \setminus X$. Choose $U$ as before and take $\eta, \zeta \in U \setminus \gekl{\bm{c}}$. If both $\eta$ and $\zeta$ lie in $X$, then Proposition~\ref{prop:nlc} yields a path in $U \setminus \gekl{\bm{c}}$ connecting $\eta$ and $\zeta$ because $U \cap X$ is of the form as in Proposition~\ref{prop:nlc}. If $\eta \in \overline{X} \setminus X$ and $\zeta \in X$, then we can construct a path $\xi$ in $U$ with $\xi(0) = \eta$, $\xi(1) = \zeta$ and $\xi(t) \in X$ for all $t \in (0,1]$, using the analogue of the variation in Proposition~\ref{prop:path} from Remark~\ref{rem:path-lift_spl}. Then $\xi(t) \neq \bm{c}$ for all $t \in (0,1]$, and we also have $\xi(0) = \eta \neq \bm{c}$. If both $\eta$ and $\zeta$ lie in $\overline{X} \setminus X$, then pick a point $u \in U \cap X$, connect $\eta$ and $\zeta$ to $u$ in $U \setminus \gekl{\bm{c}}$ as in the previous case, and concatenate the two paths. Finally, to see that no non-empty open subset of $\overline{X}$ is planar if (nop$_1$) and (nop$_2$) hold, just observe that every non-empty open subset $V$ of $\overline{X}$ gives rise to a non-empty open subset $V \cap X$ of $X$, and apply Proposition~\ref{prop:nop} to $V \cap X$. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} The same reasoning as for Corollary~\ref{cor:Menger_unital} yields \begin{corollary} If we arrange (nlc$_1$), (nlc$_2$), (nop$_1$) and (nop$_2$) in modification (path), then $\overline{X}$ is homeomorphic to the Menger curve.
\end{corollary} \begin{lemma} \label{lem:Cantor_spl} If (nlc$_1$) holds, then $\overline{X} \setminus X$ is homeomorphic to the Cantor space. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} (nlc$_1$) implies that we always have $m^+(\grave{q},\grave{p}) \geq 2$, so that for all $y_n \notin \mathcal Y_{n,1}$, $\# \bm{p}_n^{-1}[1,y_n] \geq 2$. Now it is straightforward to see that $ \overline{X} \setminus X = \plim_n \gekl{\menge{[1,y_n]}{y_n \notin \mathcal Y_{n,1}}, \bm{p}_n}$ is homeomorphic to the Cantor space. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{proposition} $\overline{X} \setminus X$ is a non-locally-separating subset of $\overline{X}$, i.e., for every connected open subset $V \subseteq \overline{X}$, $V \setminus (\overline{X} \setminus X) = V \cap X$ is connected. \end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} $V$ is open and connected, hence locally path-connected by Proposition~\ref{prop:clX...} and thus path-connected. Take $\eta, \zeta \in V \cap X$ and a continuous path $\xi: \: [0,1] \to \overline{X}$ with $\xi(0) = \eta$ and $\xi(1) = \zeta$. It is straightforward to see that we can find $0 = t_0 < t_1 < \dotso < t_l < t_{l+1} = 1$ and for each $0 \leq k \leq l$ an open subset $\bm{U}_k \subseteq V$ as in the proof that $X$ and $\overline{X}$ are locally path-connected (see Propositions~\ref{prop:lpc} and \ref{prop:clX...}) such that $\xi([t_k,t_{k+1}]) \subseteq \bm{U}_k$ for all $0 \leq k \leq l$. Set $\xi[0] \defeq \xi(0)$, $\xi[1] \defeq \xi(1)$, and for $1 \leq k \leq l$, set $\xi[t_k] \defeq \xi(t_k)$ if $\xi(t_k) \in X$ and pick some $\xi[t_k] \in \bm{U}_{k-1} \cap \bm{U}_k \cap X$ otherwise. Since $\bm{U}_k \cap X$ is an open set of the form as in the proof of Proposition~\ref{prop:lpc}, it is path-connected, so that we can find paths connecting $\xi[t_k]$ and $\xi[t_{k+1}]$ in $\bm{U}_k \cap X$ for all $0 \leq k \leq l$.
Now concatenate these paths to obtain a path in $V \cap X$ connecting $\eta$ and $\zeta$. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} All in all, we obtain the following consequence. \begin{corollary} \label{cor:M-C_spl} Suppose that we are in the stably projectionless case and that we arrange (nlc$_1$), (nlc$_2$), (nop$_1$) and (nop$_2$) in modification (path). Then we obtain a C*-diagonal $B$ whose spectrum $X$ is homeomorphic to $\bm{M}_{\setminus C}$. \end{corollary} \begin{remark}\em \label{rem:KMenger} The K-groups of $C(\bm{M})$ and $C_0(\bm{M}_{\setminus C})$ are given as follows: We have $K_0(C(\bm{M})) = \mathbb{Z}[1]$, $K_1(C(\bm{M})) \cong \bigoplus_{i=1}^\infty \mathbb{Z}$ (see for instance \cite[Equation~(32)]{Li18}), and it follows that $K_0(C_0(\bm{M}_{\setminus C})) \cong \gekl{0}$, $K_1(C_0(\bm{M}_{\setminus C})) \cong \bigoplus_{i=1}^\infty \mathbb{Z}$. \end{remark} \section{Constructing continuum many non-conjugate C*-diagonals with Menger manifold spectra} \label{s:ManyCDiagMenger} Let us present two further modifications of our constructions of C*-diagonals in classifiable C*-algebras which will allow us to produce continuum many pairwise non-conjugate C*-diagonals in all our classifiable C*-algebras. First of all, we recall the construction of the groupoid model $\bar{G}$ for the pair $(A,B)$, where $A$ is our classifiable C*-algebra with prescribed Elliott invariant $\mathcal E$ as in \S~\ref{ss:(path)} and $B$ the C*-diagonal of $A$ produced by modification (path). Let $G_n$, $H_n$ and $\bm{p}_n$ be as in \S~\ref{ss:GPDModels}. Following \cite[\S~5]{Li18}, we define $G_{n,0} \defeq G_n$ and $G_{n,m+1} \defeq \bm{p}_{n+m}^{-1}(G_{n,m}) \subseteq H_{n+m}$ for all $n$ and $m = 0, 1, \dotsc$, and we set $\bar{G}_n \defeq \plim_m \gekl{G_{n,m}, \bm{p}_{n+m}}$ for all $n$.
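The recursion $G_{n,0} = G_n$, $G_{n,m+1} = \bm{p}_{n+m}^{-1}(G_{n,m})$ is simply an iterated-preimage construction along the tower of bonding maps. The following minimal Python sketch illustrates it on toy finite data; all sets and maps below are hypothetical, chosen only to make the recursion concrete:

```python
# Toy illustration of the iterated-preimage recursion
# G_{n,0} = G_n and G_{n,m+1} = p_{n+m}^{-1}(G_{n,m}).
# All data is hypothetical: levels are finite sets, and each bonding
# map p_k (level k+1 -> level k) is encoded as a dict.

def preimage(p, subset):
    """Return all elements of the domain of p that p maps into `subset`."""
    return {x for x, px in p.items() if px in subset}

p = [
    {"a0": "x", "a1": "x", "b0": "y"},             # p_0: level 1 -> level 0
    {"u": "a0", "v": "a1", "w": "b0", "z": "b0"},  # p_1: level 2 -> level 1
]

G = [{"x"}]                        # G_{0,0}: a chosen subset of level 0
for k, p_k in enumerate(p):
    G.append(preimage(p_k, G[k]))  # G_{0,k+1} = p_k^{-1}(G_{0,k})

# The chain grows as {'x'} -> {'a0', 'a1'} -> {'u', 'v'}; the projective
# limit of such chains is the analogue of an element description of G-bar_0.
```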
Moreover, the inclusions $H_n \into G_{n+1}$ induce embeddings with open image $\bm{i}_n: \: \bar{G}_n \into \bar{G}_{n+1}$, allowing us to define $\bar{G} \defeq \ilim \gekl{\bar{G}_n, \bm{i}_n}$. We will identify $\bar{G}_n$ with its image in $\bar{G}$. As explained in \cite[\S~5]{Li18}, $\bar{G}$ is a groupoid model for $(A,B)$ in the sense that we have a canonical isomorphism $A \isom C^*_r(\bar{G})$ sending $B$ to $C_0(\bar{G}^{(0)})$. In the following, we let $\bm{p}_{n+m,\infty}: \: \bar{G}_n \onto G_{n,m}$ be the canonical projection from the inverse limit structure of $\bar{G}_n$, and $\bm{p}_{n+m,n+\bar{m}}: \: G_{n,\bar{m}+1} \onto G_{n,m}$ denotes the composition $\bm{p}_{n+m} \circ \dotso \circ \bm{p}_{n+\bar{m}}$. \subsection{Constructing closed subgroupoids} Recall the description of $\beta_{n,\mathfrak r}^{p,i}$ in \eqref{e:betanpi}. We observe that in modification (path), with the same notations as in \S~\ref{ss:GPDModels}, we can always arrange the following condition for all $n$ by adding ${\rm id}_{F_{n+1}^j}$ to $\beta_{n+1,\bullet}^q$, enlarging $E_{n+1}^q$ accordingly, and conjugating $\beta_{n+1,\bullet}^q$ by suitable permutation matrices as in modification (conn) without changing the properties or Elliott invariant of the classifiable C*-algebra we construct: \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{enumerate} \item[(clsg)] For all $q, p$, $m = m^+, m_+, m^-$ or $m_-$ and $\lambda = \lambda^+, \lambda_+, \lambda^-$ or $\lambda_-$ correspondingly, $\mathfrak r, \mathfrak s \in \gekl{0,1}$ with $\lambda(\mathfrak r) = \mathfrak s$, $\mathfrak d \in DM_{m(q,p)}$, if we denote by $\mathfrak D(p,i)$ the set of rank-one projections in $DM_{m_\mathfrak s(p,i)}$ and by $\mathfrak D(q,j)$ the set of rank-one projections in $DM_{m_\mathfrak r(q,j)}$, then there is an injective map $\coprod_i \mathfrak D(p,i) \into \coprod_j \mathfrak D(q,j), \, d(p,i) \ma d(q,j)$, together with an element $d \in DM_{m(j,i)}$ attached to each pair $d(p,i)$ and $d(q,j)$
such that, if we denote by $\delta(p,i)$ the image of $d(p,i) \otimes 1_{F_n^i}$ under $\tailarr$ in the description of $\beta_{n,\mathfrak s}^p$ in \eqref{e:betanpi}, by $\Delta$ the image of $\mathfrak d \otimes \delta(p,i)$ under $\tailarr$ in the description \eqref{e:varphiC} of $\varphi_C$, and by $\delta$ the image of $d \otimes 1_{F_n^i}$ under $\tailarr$ in the description of $\varphi_F^j \vert_{F_n^i}$ as in \eqref{e:varphiFj_NEW}, we have that for all $(f,a) \in A_n$ $$ d(p,i) \otimes a \tailarr \delta(p,i) \cdot f^p(\mathfrak s) \cdot \delta(p,i) $$ under $\tailarr$ in the description of $\beta_{n,\mathfrak s}^p$ in \eqref{e:betanpi}, $$ \mathfrak d \otimes (\delta(p,i) \cdot f^p(\mathfrak s) \cdot \delta(p,i)) \tailarr \Delta \cdot \varphi_C^q(f,a)(\mathfrak r) \cdot \Delta = \Delta \cdot \beta_\mathfrak r(\varphi_F(f,a)) \cdot \Delta $$ under $\tailarr$ in the description \eqref{e:varphiC} of $\varphi_C$, and $$ d(q,j) \otimes (\delta \cdot \varphi_F^j(a) \cdot \delta) \tailarr \Delta \cdot \beta_\mathfrak r(\varphi_F(f,a)) \cdot \Delta $$ under $\tailarr$ in the description of $\beta_{n+1,\mathfrak r}^q$ in \eqref{e:betanpi}. \end{enumerate} On the groupoid level, with the same notation as in \S~\ref{ss:GPDModels}, (clsg) means that in \eqref{e:BiggestCD}, if we start at $\coprod_i \mathcal M_\mathfrak s(p,i) \times \mathcal F_n^i$, go up to $\mathcal E_{n,\mathfrak s}^p$, follow the horizontal arrows to $\mathcal E_{n+1,\mathfrak r}^q$ and go down to $\coprod_j \mathcal M_\mathfrak r(q,j) \times \mathcal F_{n+1}^j$, we get a map of the form $(\mu(p,i),\gamma_n^i) \ma (\mu(q,j),\tilde{\gamma}_n^i)$ such that the assignment $\mu(p,i) \ma \mu(q,j)$ is injective (and $\gamma_n^i \ma \tilde{\gamma}_n^i$ corresponds to the composition $a \ma d(j,i) \otimes a \tailarr \delta(j,i) \cdot \varphi_F^j(a) \cdot \delta(j,i)$ as in the description of $\varphi_F^j \vert_{F_n^i}$ in \eqref{e:varphiFj_NEW}).
\setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{lemma} \label{lem:HnClosed} If we arrange (clsg) in modification (path), then $H_n$ is a closed subgroupoid of $G_{n+1}$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} We make use of the descriptions of $H_n$ and $G_{n+1}$ in \S~\ref{ss:GPDModels}. Suppose $[t_k,\mu_k,\gamma_k] \in H_n$ converges in $G_{n+1}$ to $[t,\gamma_{n+1}] \in G_{n+1}$. Our goal is to show that $[t,\gamma_{n+1}]$ lies in $H_n$. As there are only finitely many possibilities for $(\mu_k,\gamma_k)$, we may assume that $(\mu_k,\gamma_k) = (\mu,\gamma)$ is constant (independent of $k$), and thus $\gamma_{n+1} = (\mu,\gamma)$. If $t \in (0,1)$, then we have $[t,\gamma_{n+1}] \in H_n$. Now suppose that $t \in \gekl{0,1}$. If $\mu \in \overline{\mathcal M}(q,p) \amalg \underline{\mathcal M}(q,p)$, $\gamma \in \mathcal E_n^p$, or $\mu \in \mathcal M(q,i)$, $\gamma \in \mathcal F_n^i$, then $(\mu,\gamma) \in \mathcal E_{n+1,t}^q$ and hence $[t,\gamma_{n+1}] \in H_n$. Finally, assume that $\mu \in \mathcal M^+(q,p) \amalg \mathcal M_+(q,p) \amalg \mathcal M^-(q,p) \amalg \mathcal M_-(q,p)$ and $\gamma \in \mathcal E_n^p$. Let $\lambda = \lambda^+, \lambda_+, \lambda^-, \ \text{or} \ \lambda_-$ accordingly. Since $[t,(\mu,\gamma)] \in G_{n+1}$, $s(\mu,\gamma)$ and $r(\mu,\gamma)$ must be mapped to elements in $\coprod_j \mathcal M_t(q,j) \times \mathcal X_{n+1}^j$ with the same $\mathcal M_t(q,j)$-component in \eqref{e:BiggestCD}, which then implies by (clsg) that $s(\gamma)$ and $r(\gamma)$ are mapped to elements in $\coprod_i \mathcal M_{\lambda(t)}(p,i) \times \mathcal X_n^i$ with the same $\mathcal M_{\lambda(t)}(p,i)$-component in \eqref{e:BiggestCD}, which in turn implies that $\gamma \in \mathcal E_{n,\lambda(t)}^p$ and thus $[t,(\mu,\gamma)] \in H_n$. 
\end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{corollary} \label{cor:barGn-clopen} If we arrange (clsg) in modification (path), then $\bar{G}_n$ is a clopen subset of $\bar{G}$ for all $n = 1, 2, \dotsc$. \end{corollary} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} An easy induction on $m$ shows that $G_{n,m}$ is an open subset of $G_{n+m}$: $G_{n,1} = H_n$ is open in $G_{n+1}$ by construction (see \cite[\S~6.2]{Li18}), and for the induction step, use the recursive definition of $G_{n,m}$ together with continuity of $\bm{p}_{n+m}$ and the observation that $H_{n+m}$ is open in $G_{n+m+1}$. Hence, for all $n$, $\bar{G}_n = \bm{p}_{n+m,\infty}^{-1}(G_{n,m})$ is an open subset of $\bar{G}_{n+m}$ for all $m = 0, 1, \dotsc$. By definition of the inductive limit topology, this shows that $\bar{G}_n$ is open in $\bar{G}$. \setlength{\parindent}{0.5cm} \setlength{\parskip}{0cm} To see that $\bar{G}_n$ is closed in $\bar{G}$, let $(\bm{g}_k)_k$ be a sequence in $\bar{G}_n$ converging to $\bm{g} \in \bar{G}$. Suppose $\bm{g} \notin \bar{G}_n$. Then let $m \geq 1$ be minimal with $\bm{g} \in \bar{G}_{n+m}$. We have $\bm{g}_k \in \bar{G}_n = \bm{p}_{n+m,\infty}^{-1}(G_{n,m}) \subseteq \bm{p}_{n+m,\infty}^{-1}(H_{n+m-1})$ for all $k$. Since $H_{n+m-1}$ is closed in $G_{n+m}$ by Lemma~\ref{lem:HnClosed}, we must have $\bm{g} \in \bm{p}_{n+m,\infty}^{-1}(H_{n+m-1}) = \bm{p}_{n+m,\infty}^{-1}(G_{n+m-1,1}) = \bar{G}_{n+m-1}$. But this contradicts minimality of $m$. Hence $\bm{g} \in \bar{G}_n$, and thus $\bar{G}_n$ is closed in $\bar{G}$. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \subsection{Modification (sccb)} We now present a further modification, (sccb), which works in a similar way to modification (conn) or the second step in modification (path) and for the same reasons will not change the properties or Elliott invariant of the classifiable C*-algebra we construct.
Let us use the same notations as in \S~\ref{ss:GPDModels}. Suppose that the first step in modification (path) produces the first building block $A_1$. For all $d \in DF_1^i$, choose a permutation matrix $w_{i,d} \in F_1^i$ such that $w_{i,d} d w_{i,d}^* = d$ and $w_{i,d} \hat{d} w_{i,d}^* \neq \hat{d}$ for all $\hat{d} \in DF_1^i$ with $\hat{d} \neq d$. Let $w \defeq (w_{i,d})_{i,d}$. Choose an index $\tilde{q}$ and replace $E_1^{\tilde{q}}$ by $M_{ \gekl{1,\tilde{q}} + \sum_{i,d} [1,i] }$, $\beta_{1,0}^{\tilde{q}}$ by $ \rukl{ \begin{smallmatrix} \beta_{1,0}^{\tilde{q}} & 0 \\ 0 & {\rm id}_{\big( \bigoplus_{i,d} F_1^i \big)} \end{smallmatrix} } $ and $\beta_{1,1}^{\tilde{q}}$ by $ \rukl{ \begin{smallmatrix} \beta_{1,1}^{\tilde{q}} & 0 \\ 0 & {\rm Ad\,}(w) \circ {\rm id}_{\big( \bigoplus_{i,d} F_1^i \big)} \end{smallmatrix} } $. Now suppose that we have produced $A_1 \overset{\varphi_1}{\longrightarrow} A_2 \overset{\varphi_2}{\longrightarrow} \dotso \overset{\varphi_{n-1}}{\longrightarrow} A_n$, and that the next step of modification (path) yields $\varphi_n: \: A_n \to A_{n+1}$. We use the description of $\varphi_F^j \vert_{F_n^i}$ in \eqref{e:varphiFj_NEW}. Let $\mathfrak D(j,i)$ be the set of rank-one projections in $DM_{m(j,i)}$. For each $d \in \mathfrak D(j,i)$, define a permutation matrix $w_{j,i,d} \in F_{n+1}^j$ such that, identifying $d' \otimes \mathfrak f$ (for $d' \in \mathfrak D(j,i')$ and $\mathfrak f \in DF_n^{i'}$) with its image under $\tailarr$ in the description of $\varphi_F^j \vert_{F_n^{i'}}$ in \eqref{e:varphiFj_NEW}, we have $w_{j,i,d} (d \otimes \mathfrak f) w_{j,i,d}^* = d \otimes \mathfrak f$ and, for all $\hat{d} \in \mathfrak D(j,\hat{i})$ with $\hat{d} \neq d$, $w_{j,i,d} (\hat{d} \otimes \mathfrak f) w_{j,i,d}^* = \check{d} \otimes \mathfrak f$ for some $\check{d} \in \mathfrak D(j,\hat{i})$ with $\check{d} \neq \hat{d}$, for all $\mathfrak f \in DF_n^{\hat{i}}$. Set $w \defeq (w_{j,i,d})_{j,i,d}$.
Choose an index $\tilde{q}$ and replace $E_{n+1}^{\tilde{q}}$ by $M_{ \gekl{n+1,\tilde{q}} + \sum_{j, i, d} [n+1,j] }$, $\beta_{n+1,0}^{\tilde{q}}$ by $ \rukl{ \begin{smallmatrix} \beta_{n+1,0}^{\tilde{q}} & 0 \\ 0 & {\rm id}_{\big( \bigoplus_{j,i,d} F_{n+1}^j \big)} \end{smallmatrix} } $ and $\beta_{n+1,1}^{\tilde{q}}$ by $ \rukl{ \begin{smallmatrix} \beta_{n+1,1}^{\tilde{q}} & 0 \\ 0 & {\rm Ad\,}(w) \circ {\rm id}_{\big( \bigoplus_{j,i,d} F_{n+1}^j \big)} \end{smallmatrix} } $. Modify $A_{n+1}$ and $\varphi_n$ accordingly as in modification (conn) or the second step of modification (path). Recursive application of this procedure completes modification (sccb). \begin{lemma} \label{lem:eta-zeta} After modification (path) combined with modification (sccb), we have the following: For all $\eta \in \mathcal F_1^i$, there exists a continuous path $\xi: \: [0,1] \to G_1$ of the form $\xi(t) = [\omega(t),\gamma]$ with (P1) and (P2) such that $\omega(0) = 0$, $\omega(1) = 1$, $\xi(0) = \eta$, and $\zeta \defeq \xi(1)$ lies in $\mathcal F_1^i$ and satisfies $s(\zeta) = s(\eta)$ but $r(\zeta) \neq r(\eta)$ or $r(\zeta) = r(\eta)$ but $s(\zeta) \neq s(\eta)$. For all $n \geq 1$, $j$ and $\eta \in \mathcal F_{n+1}^j \setminus \mathcal F_{n+1}^j[\bm{p}]$, there exists a continuous path $\xi: \: [0,1] \to G_{n+1}$ of the form $\xi(t) = [\omega(t),\gamma]$ with (P1) and (P2) such that $\omega(0) = 0$, $\omega(1) = 1$, $\xi(0) = \eta$ and $\zeta \defeq \xi(1)$ lies in $\mathcal F_{n+1}^j$ and satisfies $s(\zeta) = s(\eta)$ but $r(\zeta) \neq r(\eta)$ or $r(\zeta) = r(\eta)$ but $s(\zeta) \neq s(\eta)$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Let us start with the first part ($n=1$). Suppose that $s(\eta) = x$ and $r(\eta) = y$ with $x, y \in \mathcal X_1^i$. We think of $x$ corresponding to $d$, $y$ corresponding to $\hat{d}$ and take $\nu$ corresponding to $(i,d)$ in the notation of modification (sccb). 
Let $\gamma \defeq (\nu,\eta) \in \mathcal E_1^{\tilde{q}}$, let $\omega$ be as in (P1) and (P2) with $\omega(0) = 0$ and $\omega(1) = 1$, and define $\xi(t) \defeq [\omega(t),\gamma]$. Then we have $\xi(0) = [0,(\nu,\eta)] = \eta$ as $\bm{b}_{1,0}(\nu,\eta) = \eta$, $s(\xi(1)) = s[1,(\nu,\eta)] = [1,(\nu,x)] = x = s(\eta)$ as $\bm{b}_{1,1}(\nu,x) = x$, but $r(\xi(1)) = r[1,(\nu,\eta)] = [1,(\nu,y)] \neq y = r(\eta)$ as $\bm{b}_{1,1}(\nu,y) \neq y$ by construction. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Now we treat the second part. First suppose that $s(\eta) = (\mu,x)$ and $r(\eta) = (\hat{\mu},y)$ with $\mu \in \mathcal M(j,i)$, $\hat{\mu} \in \mathcal M(j,\hat{i})$, $x \in \mathcal X_n^i$, $y \in \mathcal X_n^{\hat{i}}$. We think of $\mu$ corresponding to $d$, $\hat{\mu}$ corresponding to $\hat{d}$ and take $\nu$ corresponding to $(j,i,d)$ in the notation of modification (sccb). Let $\gamma \defeq (\nu,\eta) \in \mathcal E_{n+1}^{\tilde{q}}$, let $\omega$ be as in (P1) and (P2) with $\omega(0) = 0$ and $\omega(1) = 1$, and define $\xi(t) \defeq [\omega(t),\gamma]$. Then we have $\xi(0) = [0,(\nu,\eta)] = \eta$ as $\bm{b}_{n+1,0}(\nu,\eta) = \eta$, $s(\xi(1)) = s[1,(\nu,\eta)] = [1,(\nu,(\mu,x))] = (\mu,x) = s(\eta)$ as $\bm{b}_{n+1,1}(\nu,(\mu,x)) = (\mu,x)$, but $r(\xi(1)) = r[1,(\nu,\eta)] = [1,(\nu,(\hat{\mu},y))] = (\check{\mu},y) \neq (\hat{\mu},y) = r(\eta)$ as $\bm{b}_{n+1,1}(\nu,(\hat{\mu},y)) = (\check{\mu},y)$ for some $\check{\mu} \in \mathcal M(j,\hat{i})$ with $\check{\mu} \neq \hat{\mu}$ by construction. Now suppose that $s(\eta) = (\hat{\mu},x)$ and $r(\eta) = y$ with $\hat{\mu} \in \mathcal M(j,i)$, $x \in \mathcal X_n^i$ and $y \in \mathcal E_n^p \subseteq \mathcal F_{n+1}^j$ (or the other way round, with $s$ and $r$ swapped). We think of $\hat{\mu}$ corresponding to $\hat{d}$ and take $\nu$ corresponding to $(j,i,d)$ for some $d \neq \hat{d}$ in the notation of modification (sccb). 
Let $\gamma \defeq (\nu,\eta) \in \mathcal E_{n+1}^{\tilde{q}}$, let $\omega$ be as in (P1) and (P2) with $\omega(0) = 0$ and $\omega(1) = 1$, and define $\xi(t) \defeq [\omega(t),\gamma]$. Then we have $\xi(0) = [0,(\nu,\eta)] = \eta$ as $\bm{b}_{n+1,0}(\nu,\eta) = \eta$, $r(\xi(1)) = r[1,(\nu,\eta)] = [1,(\nu,y)] = y = r(\eta)$ as $\bm{b}_{n+1,1}(\nu,y) = y$, but $s(\xi(1)) = s[1,(\nu,\eta)] = [1,(\nu,(\hat{\mu},x))] = (\check{\mu},x) \neq (\hat{\mu},x) = s(\eta)$ as $\bm{b}_{n+1,1}(\nu,(\hat{\mu},x)) = (\check{\mu},x)$ for some $\check{\mu} \in \mathcal M(j,i)$ with $\check{\mu} \neq \hat{\mu}$ by construction. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{proposition} \label{prop:eta-zeta_bG} If we combine modification (path) with modification (sccb) and arrange condition (clsg), then the following holds: Let $\check{\bm{\eta}} \in \bar{G}$, and let $n \geq 0$ be such that $\check{\bm{\eta}} \in \bar{G}_{n+1} \setminus \bar{G}_n$. Suppose that $\bm{p}_{n+1,\infty}(\check{\bm{\eta}})$ is not of the form $[t,\gamma]$ for some $t \in (0,1)$ and $\gamma \in \mathcal E_{n+1}$ with $\gamma \notin \mathcal E_{n+1,0}$ and $\gamma \notin \mathcal E_{n+1,1}$. Then there exist $\bm{\eta}, \bm{\zeta} \in \bar{G}$ such that $\check{\bm{\eta}} \sim_{\rm conn} \bm{\eta} \sim_{\rm conn} \bm{\zeta}$ in $\bar{G}$, and $s(\bm{\zeta}) = s(\bm{\eta})$ but $r(\bm{\zeta}) \neq r(\bm{\eta})$ or $r(\bm{\zeta}) = r(\bm{\eta})$ but $s(\bm{\zeta}) \neq s(\bm{\eta})$. \end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Write $\check{\bm{\eta}} = (\check{\eta}_N)_N$. We have $\check{\eta}_{n+1} = [t,\gamma]$ for some $\gamma$ in $\mathcal E_{n+1,\mathfrak r}$ for $\mathfrak r = 0 \ \text{or} \ 1$.
Construct a path in $G_{n+1}$ connecting $[t,\gamma]$ with $[\mathfrak r,\gamma]$, and, using Proposition~\ref{prop:path}, lift it to a path in $\bar{G}_{n+1}$ connecting $\check{\bm{\eta}}$ to an element $\bm{\eta} = (\eta_N)_N \in \bar{G}_{n+1}$ with $\eta_{n+1} = [\mathfrak r,\gamma]$. Corollary~\ref{cor:barGn-clopen} implies that $\bar{G}_{n+1} \setminus \bar{G}_n$ is clopen. Since $\check{\bm{\eta}}$ lies in $\bar{G}_{n+1} \setminus \bar{G}_n$ and $\check{\bm{\eta}} \sim_{\rm conn} \bm{\eta}$, it follows that $\bm{\eta}$ lies in $\bar{G}_{n+1} \setminus \bar{G}_n$, too. Therefore, if $n=0$, we must have $\eta_1 = \eta_{n+1} \in \mathcal F_1$, and if $n \geq 1$, we must have $\eta_{n+1} \in \mathcal F_{n+1} \setminus \mathcal F_{n+1}[\bm{p}]$. In both cases, Lemma~\ref{lem:eta-zeta} provides an element $\zeta_{n+1}$ such that $\zeta_{n+1} \in \mathcal F_{n+1}^j$ if $\eta_{n+1} \in \mathcal F_{n+1}^j$, and $s(\zeta_{n+1}) = s(\eta_{n+1})$ but $r(\zeta_{n+1}) \neq r(\eta_{n+1})$ or $r(\zeta_{n+1}) = r(\eta_{n+1})$ but $s(\zeta_{n+1}) \neq s(\eta_{n+1})$, together with a path $\xi_{n+1}$ in $G_{n+1}$ with (P1) and (P2) connecting $\eta_{n+1}$ and $\zeta_{n+1}$. Since $\bm{\eta}$ lies in $\bar{G}_{n+1}$, we must have $\eta_{N+1} = (\mu_N, \eta_N)$ for all $N \geq n+1$. Define $\bm{\zeta} \in \bar{G}$ by setting $\zeta_{N+1} \defeq (\mu_N, \zeta_N)$ for all $N \geq n+1$ and $\bm{\zeta} \defeq (\zeta_N)_{N \geq n+1}$. Then $\bm{\zeta}$ inherits the property from $\zeta_{n+1}$ that $s(\bm{\zeta}) = s(\bm{\eta})$ but $r(\bm{\zeta}) \neq r(\bm{\eta})$ or $r(\bm{\zeta}) = r(\bm{\eta})$ but $s(\bm{\zeta}) \neq s(\bm{\eta})$. Now apply Proposition~\ref{prop:path} recursively to construct paths $\xi_N$, $N \geq n+1$, which connect $\eta_N$ and $\zeta_N$ and satisfy $\bm{p}_N \circ \xi_{N+1} = \xi_N$. It follows that $\xi(t) \defeq (\xi_N(t))_{N \geq n+1}$ defines the desired path connecting $\bm{\eta}$ and $\bm{\zeta}$.
\end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} We now start to study connected components of $\bar{G}$. \begin{lemma} \label{lem:FiniteCC} After modification (path), $\bar{G}_n$ has only finitely many connected components for all $n$. \end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Given $\gamma \in \mathcal E_n$, let $I_{\gamma} \defeq [0,1]$ if $\gamma \in \mathcal E_{n,0}$ and $\gamma \in \mathcal E_{n,1}$, $I_{\gamma} \defeq [0,1)$ if $\gamma \in \mathcal E_{n,0}$ and $\gamma \notin \mathcal E_{n,1}$, $I_{\gamma} \defeq (0,1]$ if $\gamma \notin \mathcal E_{n,0}$ and $\gamma \in \mathcal E_{n,1}$, and $I_{\gamma} \defeq (0,1)$ if $\gamma \notin \mathcal E_{n,0}$ and $\gamma \notin \mathcal E_{n,1}$. Using Proposition~\ref{prop:path} as in the proof of Proposition~\ref{prop:pathconn}, it is straightforward to see that $\bm{p}_{n,\infty}^{-1}[I_\gamma \times \gekl{\gamma}]$ is path-connected in $\bar{G}_n$. Moreover, it is clear that $\bar{G}_n = \bigcup_{\gamma \in \mathcal E_n} \bm{p}_{n,\infty}^{-1}[I_\gamma \times \gekl{\gamma}]$. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} The following is an immediate consequence of Lemma~\ref{lem:FiniteCC} and Corollary~\ref{cor:barGn-clopen}. \begin{corollary} If we arrange condition (clsg) in modification (path), then the connected components in $\bar{G}$ are open. \end{corollary} \begin{proposition} \label{prop:sccb} If we combine modification (path) with modification (sccb) and arrange condition (clsg), then the only connected components of $\bar{G}$ which are also bisections (i.e., source and range maps restrict to bijections) are precisely of the form $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}] \subseteq \bar{G}_n$ for some $n$ and $\gamma \in \mathcal E_n$ with $\gamma \notin \mathcal E_{n,0}$ and $\gamma \notin \mathcal E_{n,1}$. 
\end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Let us first show that sets of the form $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]$ for $\gamma$ as in the proposition are indeed connected components and bisections. First of all, an application of Proposition~\ref{prop:path} as in the proof of Proposition~\ref{prop:pathconn} shows that $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]$ is path-connected, hence connected. If $C$ is a connected subset of $\bar{G}$ containing $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]$, then we must have $C \subseteq \bar{G}_n$ because $\bar{G}_n$ is clopen by Corollary~\ref{cor:barGn-clopen}. Moreover, $[(0,1) \times \gekl{\gamma}] \subseteq \bm{p}_{n,\infty}(C)$. As $[(0,1) \times \gekl{\gamma}]$ is a connected component in $G_n$, it follows that $\bm{p}_{n,\infty}(C) \subseteq [(0,1) \times \gekl{\gamma}]$ and thus $C \subseteq \bm{p}_{n,\infty}^{-1}(\bm{p}_{n,\infty}(C)) \subseteq \bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]$. This shows that $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]$ is a connected component. It is also a bisection: Let $\bm{\eta}, \bm{\zeta} \in \bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]$ with $s(\bm{\eta}) = s(\bm{\zeta})$. (The case of equal range is analogous.) Write $\bm{\eta} = (\eta_N)_N$, $\bm{\zeta} = (\zeta_N)_N$. It follows that $\eta_n = \zeta_n$. So we have for all $N \geq n$ that $s(\eta_N) = s(\zeta_N)$ and $\bm{p}_{n,N}(\eta_N) = \bm{p}_{n,N}(\zeta_N)$. Since $\bm{p}_{n,N}$ is a fibrewise bijection (see \cite[\S~6.2 and \S~7]{Li18}), it follows that $\eta_N = \zeta_N$ for all $N \geq n$ and hence $\bm{\eta} = \bm{\zeta}$. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Now let $C$ be a connected component in $\bar{G}$.
As $\bar{G}_n$ is clopen for each $n$ by Corollary~\ref{cor:barGn-clopen}, there exists $n$ such that $C \subseteq \bar{G}_{n+1}$ and $C \not\subseteq \bar{G}_n$, so that we must have $C \subseteq \bar{G}_{n+1} \setminus \bar{G}_n$. Suppose that $C$ is not of the form $\bm{p}_{n+1,\infty}^{-1}[(0,1) \times \gekl{\gamma}]$ for some $\gamma$ as in the proposition. It follows that $C$ must contain some element $\check{\bm{\eta}}$ as in Proposition~\ref{prop:eta-zeta_bG}, and hence it follows that there exist $\bm{\eta}, \bm{\zeta} \in \bar{G}$ such that $\check{\bm{\eta}} \sim_{\rm conn} \bm{\eta} \sim_{\rm conn} \bm{\zeta}$ in $\bar{G}$, and $s(\bm{\zeta}) = s(\bm{\eta})$ but $r(\bm{\zeta}) \neq r(\bm{\eta})$ or $r(\bm{\zeta}) = r(\bm{\eta})$ but $s(\bm{\zeta}) \neq s(\bm{\eta})$. As $C$ is a connected component, we must have $\bm{\eta}, \bm{\zeta} \in C$. Thus $C$ cannot be a bisection. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} Let $\mathfrak C_{\rm bi}$ be the set of connected components of $\bar{G}$ which are bisections. For two bisections $U$ and $V$ in $\bar{G}$, we define the product $U \cdot V$ only if $s(U) = r(V)$, and in this case $U \cdot V \defeq \menge{\bm{u} \bm{v}}{\bm{u} \in U, \, \bm{v} \in V}$ is another bisection. Let $\spkl{\mathfrak C_{\rm bi}}$ be the smallest collection of bisections in $\bar{G}$ closed under products and containing $\mathfrak C_{\rm bi}$, i.e., the set of all finite products of elements in $\mathfrak C_{\rm bi}$. \begin{lemma} \label{lem:<Cbi>} If we combine modification (path) with modification (sccb) and arrange condition (clsg) as well as \begin{equation} \label{e:np>4ni} \gekl{n,p} > 4 [n,i] \quad \forall \ n,p,i, \end{equation} then $\spkl{\mathfrak C_{\rm bi}} = \menge{\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]}{n \in \mathbb{Z}_{\geq 1}, \, \gamma \in \mathcal E_n}$. 
\end{lemma} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Clearly, $\menge{\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]}{n \in \mathbb{Z}_{\geq 1}, \, \gamma \in \mathcal E_n}$ is a collection of bisections in $\bar{G}$ closed under products and containing $\mathfrak C_{\rm bi}$. This shows \an{$\subseteq$}. To prove \an{$\supseteq$}, we show for all $n, p$ and $\gamma \in \mathcal E_n^p$ with $s(\gamma) = y^s$ and $r(\gamma) = y^r$ that $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}] \in \spkl{\mathfrak C_{\rm bi}}$. Recall that $\bm{b}_{n,\bullet}^p$ is given by a composition of the form $\mathcal E_{n,\bullet}^p \isom \coprod_i (\mathcal M_\bullet(p,i) \times \mathcal F_n^i) \onto \coprod_i \mathcal F_n^i = \mathcal F_n$ (see \eqref{e:betanpi}). Let us denote the induced bijections $\mathcal Y_{n,\bullet}^p \isom \coprod_i (\mathcal M_\bullet(p,i) \times \mathcal X_n^i)$ by $y \ma (y)_\bullet$. Suppose that $(y^*)_\bullet = (\mu_\bullet^*,x_\bullet^*)$, with $x_\bullet^* \in \mathcal X_n^{i_\bullet^*}$, for $* = r,s$ and $\bullet = 0,1$. We claim that there exists $y \in \mathcal Y_n^p$ such that $(y)_\bullet \notin \gekl{\mu_\bullet^*} \times \mathcal X_n^{i_\bullet^*}$ for all $* = r,s$ and $\bullet = 0,1$. Indeed, since $\# \big\{ y \in \mathcal Y_n^p: \: (y)_\bullet \in \gekl{\mu_\bullet^*} \times \mathcal X_n^{i_\bullet^*} \big\} = \# \mathcal X_n^{i_\bullet^*} = [n,i_\bullet^*]$ for all $* = r,s$ and $\bullet = 0,1$, we have that $\# \big\{ y \in \mathcal Y_n^p: \: (y)_\bullet \in \gekl{\mu_\bullet^*} \times \mathcal X_n^{i_\bullet^*} \ \text{for some} \ * = r,s, \bullet = 0,1 \big\} \leq [n,i_0^s] + [n,i_0^r] + [n,i_1^s] + [n,i_1^r] \leq 4 \max \menge{[n,i_\bullet^*]}{* = r,s, \bullet = 0,1} < \gekl{n,p} = \# \mathcal Y_n^p$ by condition \eqref{e:np>4ni}.
Now take $y$ with these properties, and let $\gamma_1, \gamma_2 \in \mathcal E_n^p$ be such that $s(\gamma_1) = y^s$, $r(\gamma_1) = y$, $s(\gamma_2) = y$, $r(\gamma_2) = y^r$. Then $\gamma_1, \gamma_2 \notin \mathcal E_{n,0}$ and $\gamma_1, \gamma_2 \notin \mathcal E_{n,1}$, so that $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma_i}] \in \mathfrak C_{\rm bi}$ for $i=1,2$. It follows that $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}] = \bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma_2}] \cdot \bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma_1}] \in \spkl{\mathfrak C_{\rm bi}}$. \end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} \begin{definition}[compare {\cite[Definition~3.1]{Nek}}] Let $\mathcal Y$ be a finite set. We call a multisection in $\spkl{\mathfrak C_{\rm bi}}$ the image of an injective map $\mathcal Y \times \mathcal Y \to \spkl{\mathfrak C_{\rm bi}}, \, (x,y) \ma U_{x,y}$ such that $U_{x,y} \cdot U_{y',z}$ is only defined if $y = y'$, and in that case $U_{x,y} \cdot U_{y,z} = U_{x,z}$. We call $\# \mathcal Y$ the degree of $\gekl{U_{x,y}}$. \end{definition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{corollary} In the situation of Lemma~\ref{lem:<Cbi>}, multisections in $\spkl{\mathfrak C_{\rm bi}}$ are precisely of the form $\bm{p}_{n,\infty}^{-1}[(0,1) \times \gekl{\gamma}]$ for some $n, p$ and $\gamma \in \mathcal E_n^p$. \end{corollary} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} As it is clear that the degree can be read off from a multisection, Lemma~\ref{lem:<Cbi>} implies the following. \begin{corollary} \label{cor:GG'} Suppose that we combine modification (path) with modification (sccb) and arrange (clsg), \eqref{e:np>4ni} to obtain classifiable C*-algebras $A$ and $A'$ with the same prescribed Elliott invariant together with C*-diagonals $B$ and $B'$. 
Let $\bar{G}$ and $\bar{G}'$ be the groupoid models for $(A,B)$ and $(A',B')$ and $\mathcal Y_n^p(\bar{G})$, $\mathcal Y_n^p(\bar{G}')$ the analogues of $\mathcal Y_n^p$ above for $\bar{G}$, $\bar{G}'$. If $\bar{G} \cong \bar{G}'$ as topological groupoids (i.e., if $(A,B) \cong (A',B')$), then we must have $\gekl{\# \mathcal Y_n^p(\bar{G})}_{n,p} = \gekl{\# \mathcal Y_n^p(\bar{G}')}_{n,p}$. \end{corollary} Let us now construct, for every sequence $\mathfrak m = (\mathfrak m_n)$ of non-negative integers, a groupoid model $\bar{G}(\mathfrak m)$ for our classifiable C*-algebra such that for any two sequences $\mathfrak m$ and $\mathfrak n$, we have $\bar{G}(\mathfrak m) \not\cong \bar{G}(\mathfrak n)$ if $\mathfrak m \neq \mathfrak n$. First combine modification (path) with modification (sccb) and arrange (clsg), \eqref{e:np>4ni} to obtain a classifiable C*-algebra $A$ with prescribed Elliott invariant $\mathcal E$ as in \S~\ref{ss:(path)} and C*-diagonal $B$. Now we modify the construction. For all $n \geq 1$, choose a direct summand $F_n^j$ of $F_n$ such that $F_{n+1}^j \neq F_{n+1}^{j_\mathfrak r^p}$ for all $p$ and $\mathfrak r = 0,1$. Given a sequence $\mathfrak m = (\mathfrak m_n)$, we modify $(A,B)$ by adding ${\rm id}_{(F_n^{j_n})^{\oplus \mathfrak m_n}}$ to $\beta_{n,\bullet}^p$ for all $p$ and enlarging $E_n^p$ correspondingly. In this way, we obtain for each $\mathfrak m$ a classifiable C*-algebra $A(\mathfrak m)$ with the same prescribed Elliott invariant $\mathcal E$ and the same properties as $A$, together with a C*-diagonal $B(\mathfrak m)$ of $A(\mathfrak m)$. Let $\bar{G}(\mathfrak m)$ be the groupoid model of $(A(\mathfrak m),B(\mathfrak m))$. \begin{proposition} If $\mathfrak m \neq \mathfrak n$, then $\bar{G}(\mathfrak m) \not\cong \bar{G}(\mathfrak n)$, i.e., $(A(\mathfrak m),B(\mathfrak m)) \not\cong (A(\mathfrak n),B(\mathfrak n))$.
\end{proposition} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} \begin{proof} Let $\bar{G} \defeq \bar{G}(\mathfrak m)$ and $\bar{G}' \defeq \bar{G}(\mathfrak n)$. Suppose that $\mathfrak m_n = \mathfrak n_n$ for all $n \leq N-1$ and that $\mathfrak m_N \neq \mathfrak n_N$, say $\mathfrak m_N < \mathfrak n_N$. As the first $N-1$ steps of the construction coincide, we have $\menge{\# \mathcal Y_n^p(\bar{G})}{n \leq N-1,p} = \menge{\# \mathcal Y_n^p(\bar{G}')}{n \leq N-1,p}$. Now we have $\# \mathcal Y_N^p(\bar{G}') = \# \mathcal Y_N^p(\bar{G}) + (\mathfrak n_N - \mathfrak m_N) \cdot \# \mathcal X_N^{j_N}$ for all $p$. Hence $\# \mathcal Y_N^{p'}(\bar{G}') > \min_p \# \mathcal Y_N^p(\bar{G})$ for all $p'$. As $\# \mathcal Y_{\bar{n}+1}^{q'}(\bar{G}') > \# \mathcal Y_{\bar{n}}^{p'}(\bar{G}')$ for all $\bar{n}$, $q'$ and $p'$, it follows that $\min_p \# \mathcal Y_N^p(\bar{G})$ does not appear in $\gekl{\# \mathcal Y_n^p(\bar{G}')}_{n,p}$, while it appears in $\gekl{\# \mathcal Y_n^p(\bar{G})}_{n,p}$. Hence Corollary~\ref{cor:GG'} implies that $\bar{G} \not\cong \bar{G}'$, i.e., $(A(\mathfrak m),B(\mathfrak m)) \not\cong (A(\mathfrak n),B(\mathfrak n))$.
\end{proof} \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm} All in all, in combination with Corollaries~\ref{cor:Menger_unital} and \ref{cor:M-C_spl}, we obtain \begin{theorem} \label{thm:ManyMenger_GPD_Ell} For every sequence $\mathfrak m$ in $\mathbb{Z}_{\geq 0}$ and every prescribed Elliott invariant $(G_0, G_0^+, u, T, r, G_1)$ as in \cite[Theorem~1.2]{Li18} with torsion-free $G_0$ and trivial $G_1$, our construction produces topological groupoids $\bar{G}(\mathfrak m)$ with the same properties as in \cite[Theorem~1.2]{Li18} (in particular, $C^*_r(\bar{G}(\mathfrak m))$ is a classifiable unital C*-algebra satisfying ${\rm Ell}(C^*_r(\bar{G}(\mathfrak m))) \cong (G_0, G_0^+, u, T, r, G_1)$), such that $\bar{G}(\mathfrak m)^{(0)} \cong \bm{M}$, and $\bar{G}(\mathfrak m) \not\cong \bar{G}(\mathfrak n)$ if $\mathfrak m \neq \mathfrak n$. For every sequence $\mathfrak m$ in $\mathbb{Z}_{\geq 0}$ and every prescribed Elliott invariant $(G_0, T, \rho, G_1)$ as in \cite[Theorem~1.3]{Li18} with torsion-free $G_0$ and trivial $G_1$, our construction produces topological groupoids $\bar{G}(\mathfrak m)$ with the same properties as in \cite[Theorem~1.3]{Li18} (in particular, $C^*_r(\bar{G}(\mathfrak m))$ is a classifiable stably projectionless C*-algebra with continuous scale satisfying ${\rm Ell}(C^*_r(\bar{G}(\mathfrak m))) \cong (G_0, \gekl{0}, T, \rho, G_1)$), such that $\bar{G}(\mathfrak m)^{(0)} \cong \bm{M}_{\setminus C}$, and $\bar{G}(\mathfrak m) \not\cong \bar{G}(\mathfrak n)$ if $\mathfrak m \neq \mathfrak n$. 
\end{theorem} In combination with the classification result in \cite{Rob}, this yields the following \begin{theorem} \label{thm:ManyMenger_Diag_Ell} For every prescribed Elliott invariant $(G_0, G_0^+, u, T, r, G_1)$ as in \cite[Theorem~1.2]{Li18} with torsion-free $G_0$ and trivial $G_1$, our construction produces a classifiable unital C*-algebra $A$ with ${\rm Ell}(A) \cong (G_0, G_0^+, u, T, r, G_1)$ and continuum many pairwise non-conjugate C*-diagonals of $A$ whose spectra are all homeomorphic to $\bm{M}$. For every Elliott invariant $(G_0, T, \rho, G_1)$ as in \cite[Theorem~1.3]{Li18} with torsion-free $G_0$ and $G_1 = \gekl{0}$, our construction produces a classifiable stably projectionless C*-algebra $A$ having continuous scale with ${\rm Ell}(A) \cong (G_0, \gekl{0}, T, \rho, G_1)$ and continuum many pairwise non-conjugate C*-diagonals of $A$ whose spectra are all homeomorphic to $\bm{M}_{\setminus C}$. \end{theorem} \setlength{\parindent}{0cm} \setlength{\parskip}{0cm} This theorem, combined with classification results for all classifiable C*-algebras, implies Theorems~\ref{thm:main2_unital} and \ref{thm:main2_spl}. \setlength{\parindent}{0cm} \setlength{\parskip}{0.5cm}
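The counting bound in condition \eqref{e:np>4ni} used in the proof of Lemma~\ref{lem:<Cbi>} is a plain pigeonhole argument: if a finite set has strictly more elements than four times the size of the largest of four excluded subsets, some element avoids their union. A toy numerical sketch (the sets below are hypothetical stand-ins for $\mathcal Y_n^p$ and the four excluded fibres, not data from the construction):

```python
# Toy illustration of the pigeonhole bound behind condition {n,p} > 4[n,i]:
# if #Y exceeds four times the size of each of four excluded subsets,
# then some y in Y lies outside their union.
def find_avoiding_element(Y, excluded):
    """Return an element of Y avoiding all four excluded subsets, if the bound holds."""
    assert len(excluded) == 4
    if len(Y) > 4 * max(len(E) for E in excluded):
        bad = set().union(*excluded)      # the union is at most the summed size
        survivors = [y for y in Y if y not in bad]
        assert survivors                  # guaranteed non-empty by the bound
        return survivors[0]
    return None

# Hypothetical stand-ins: Y plays the role of Y_n^p, the E's of {mu} x X_n^i.
Y = list(range(17))                       # {n,p} = 17 > 4 * 4
excluded = [set(range(4)), set(range(4, 8)),
            set(range(8, 12)), set(range(12, 16))]
y = find_avoiding_element(Y, excluded)
```

Here the bound $17 > 4 \cdot 4$ guarantees a surviving element, mirroring the choice of $y \in \mathcal Y_n^p$ in the proof.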
\section{Introduction} When studying stellar nucleosynthesis and chemical enrichment, it is difficult to optically distinguish between isotopes of a given element, since their atomic lines are blended. However, microwave lines from rare isotopic substitutions of a given molecular species, so-called ``isotopologues'', are well separated from their parent molecule, typically by a few percent of their rest frequency. Thus, the frequencies of the main and rare species are close enough to be observed with the same technical equipment but without the problem of blending. A few years ago, it became apparent (Wouterloot et al. 2008; Wang et al. 2009) that with respect to its composition, the metal poor outer Galaxy does not provide a ``bridge'' between the solar neighborhood and the even more metal poor Large Magellanic Cloud (LMC). This can be explained by the different age of the bulk of the stellar populations of the outer Galaxy and the LMC and can be exemplified by one of the most thoroughly studied isotope ratios, that of carbon. The two stable isotopes, $^{12}$C and $^{13}$C, have been measured throughout the Galaxy, in prominent star forming regions of the LMC, and in a large number of stellar objects (e.g., Milam et al. 2005; Wang et al. 2009; Abia et al. 2012; Mikolaitis et al. 2012). The $^{12}$C/$^{13}$C ratio is a measure of ``primary'' versus ``secondary'' processing. $^{12}$C is produced on rapid timescales primarily via He burning in massive stars. $^{13}$C is mainly produced via CNO processing of $^{12}$C seeds from earlier stellar generations. This occurs on a slower time scale during the red giant phase in low and intermediate mass stars or novae (for reviews, see Henkel et al. 1994; Wilson \& Rood 1994). Previous observations (e.g., Henkel et al. 1985; Stahl et al. 1989; Wouterloot \& Brand 1996; Milam et al. 2005; Sheffer et al. 2007) have demonstrated that the $^{12}$C/$^{13}$C ratio can vary strongly within the Galaxy. 
In the outer Galaxy very high ratios of $^{12}$C/$^{13}$C $>$100 are found; in the local interstellar medium $^{12}$C/$^{13}$C $\sim$ 70, while in the inner Galactic disk and Large Magellanic Cloud (LMC) $^{12}$C/$^{13}$C $\sim$ 50. The solar system ratio is 89. Within the framework of ``biased infall'' (e.g., Chiappini \& Matteucci 2001), the Galactic disk is slowly formed from inside out, which causes gradients in the abundances across the disk. The stellar $^{13}$C ejecta, reaching the interstellar medium with a time delay, are less dominant in the young stellar disk of the outer Galaxy than in the inner Galaxy and the old stellar body of the LMC (see, e.g., Hodge 1989 for the star formation history of the LMC). The solar system ratio, referring to a younger more $^{13}$C deficient disk, is therefore higher than that measured in the present local interstellar medium. Consistent with this idea, $^{12}$C/$^{13}$C ratios are particularly low ($\sim$25) in the Galactic center region with its old bulge (e.g., G{\"u}sten et al. 1985), while inflowing or infalling gas from outside appears to be characterized by higher ratios (Riquelme et al. 2010). We note that this scenario also explains other isotope ratios based on differences of primary and secondary nucleosynthesis, like that of $^{16}$O (a product of massive stars, $\ga$8\,M$_{\odot}$) and $^{17}$O (a product of lower mass stars), while $^{18}$O is apparently most efficiently synthesized in metal rich stars of large mass (e.g., Wouterloot et al. 2008). With respect to isotope ratios the extragalactic space beyond the Magellanic Clouds is almost unexplored and therefore very interesting to investigate (for previous pioneering efforts, see Aalto et al. 1991; Casoli et al. 1992; Henkel et al. 1993; Henkel \& Mauersberger 1993; Wang et al. 2004; Muller et al. 2006; Henkel et al. 2010; Mart\'{\i}n et al. 2010; Gonz{\'a}lez-Alfonso et al. 2012; Danielson et al. 2013). 
What ratios can be found when observing objects outside the Local Group of galaxies at low and high redshifts and in environments that drastically differ from those in the Milky Way and the LMC? Is the Galaxy typical for its class or are its isotopic properties exceptional? And what kind of isotopic compositions can be expected in optical lines, when trying to determine high precision redshifts and to constrain variations in physical constants through time and space (e.g., Levshakov et al. 2006)? In the following we present and analyze new CN and CO data from the nearby prototypical starburst galaxy NGC~253 and the ultraluminous merger Mrk~231, in order to derive and to compare the carbon isotope ratios in these different environments. Sect.\,2 describes observations and data reduction. Sect.\,3 presents the CN and CO measurements and data analysis toward NGC~253, including carbon and oxygen isotope ratios and CN excitation temperatures. In Sect.\,4 we discuss our data from Mrk~231 and provide a general overview of extragalactic carbon isotope determinations in targets beyond the Magellanic Clouds. Sect.\,5 summarizes the main results. \section{Observations} The $\lambda$$\sim$3 and 1.3\,mm measurements toward NGC~253 were obtained with the IRAM 30-m telescope (project 078--12) at Pico Veleta, Spain\footnote{Based on observations carried out with the IRAM 30-m telescope. IRAM is supported by INSU/CNRS (France), the MPG (Germany), and IGN (Spain)} during August 5 and 6, 2012. Full width to half power beam widths (FWHPs) were about 22$''$ and 11$''$. The EMIR SIS receivers were employed with system temperatures of $T_{\rm sys}$ $\sim$ 130, 240, 240, and 260~K at 109.6, 113.4 ($\lambda$$\sim$3\,mm), 221.3, and 225.0\,GHz ($\lambda$$\sim$1.3\,mm) on an antenna temperature scale. Adopted beam and forward hemisphere efficiencies are 0.80 and 0.95 at $\lambda$ $\sim$ 3\,mm and 0.62 and 0.92 at $\lambda$ $\sim$1.3\,mm.
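The quoted beam widths are consistent with a diffraction-limited 30-m dish, for which the FWHP scales inversely with frequency. A quick sketch using the generic single-dish approximation FWHP $\approx$ 1.2\,$\lambda/D$ (the coefficient 1.2 is a textbook value and not taken from this paper):

```python
import math

C = 299792458.0          # speed of light (m/s)
D = 30.0                 # IRAM 30-m dish diameter (m)

def fwhp_arcsec(freq_ghz, coeff=1.2):
    """Approximate diffraction-limited FWHP in arcsec: theta ~ coeff * lambda / D."""
    lam = C / (freq_ghz * 1e9)                    # wavelength in m
    return math.degrees(coeff * lam / D) * 3600.0

theta_3mm = fwhp_arcsec(113.4)    # ~22'' as quoted for the lambda ~ 3 mm lines
theta_1mm = fwhp_arcsec(226.5)    # ~11'' at twice the frequency
```

The factor-of-two beam ratio between the two bands is what drives the beam dilution correction applied in Sect.\,3.6.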
As backend we used Fast Fourier Transform spectrometers with a channel spacing of 195.3\,kHz, covering two contiguous 4\,GHz segments in dual linear polarization at $\lambda$ $\sim$ 3 and also at $\lambda$ $\sim$ 1.3\,mm. Channel spacings are 0.53 and 0.26\,km\,s$^{-1}$, respectively. The spectra were obtained with a wobbling secondary mirror using a switch cycle of a few seconds (2\,s on-source, 2\,s off-source) and a beam throw of $\pm$100$''$. No absorption features, which could potentially be caused by too small a beam throw, are seen in any spectrum. The pointing, checked on nearby continuum sources, was accurate to $\sim$5$''$. For calibration, see Sect.\,3.6.2. Table~\ref{tab1} displays some essential parameters of the observations. \begin{table} \caption[]{Observational parameters$^{\rm a)}$} \begin{flushleft} \begin{tabular}{ccccccc} \hline Band & $\nu$ & $T_{\rm sys}$ & $\theta_{\rm b}$ & $f_{\rm mb}$ & $f_{\rm fh}$ & $S$/$T_{\rm mb}$ \\ (mm) & (GHz) & (K) & ($''$) & & & (Jy/K) \\ \hline & & & & & \\ 3 & 109.635 & 130 & 23 & 0.80 & 0.95 & 5.2 \\ 3 & 113.365 & 240 & 22 & 0.80 & 0.95 & 5.1 \\ 1.3 & 221.315 & 240 & 11 & 0.62 & 0.92 & 4.8 \\ 1.3 & 225.045 & 260 & 11 & 0.62 & 0.92 & 4.8 \\ \hline \end{tabular} \end{flushleft} a) Receiver band ($\lambda$ $\sim$ 3 or 1.3\,mm), frequencies ($\nu$), system temperatures ($T_{\rm sys}$) on an antenna temperature scale ($T_{\rm A}^*$), full width to half power (FWHP) beam widths in units of arcseconds ($\theta_{\rm b}$), and adopted main beam efficiencies ($f_{\rm mb}$) and forward hemisphere efficiencies ($f_{\rm fh}$) for the four 4\,GHz wide frequency intervals (2$\times$8\,GHz), simultaneously observed by the 30-m telescope. The last column provides conversion factors from main beam brightness temperatures (in Kelvin) to flux density (Jansky) units.
\label{tab1} \end{table} Complementary $\lambda$ $\sim$ 3 and 1.3\,mm observations of Mrk~231 were taken in January and May 2011 (project 233--11) with the IRAM 30-m telescope using the same receivers and observing mode with the WILMA backend under varying weather conditions. This backend provided channel spacings of 2\,MHz. Data analysis was performed with the GILDAS data reduction package\footnote{Grenoble Image and Line Data Analysis Software: http://www.ira.inaf.it/~brand/gag.html}, revealing excellent baselines that required only the subtraction of polynomials of order $\leq$2 for both galaxies. The $\lambda$ = 3 and 1.3\,mm data were taken simultaneously and the pointing difference between the two EMIR receivers was found to be $\la$2$''$. \begin{figure}[h] \vspace{0.0cm} \centering \resizebox{23.0cm}{!}{\rotatebox[origin=br]{-90}{\includegraphics{fig1-ngc253.ps}}} \vspace{-1.0cm} \caption{CN $N$=1$\rightarrow$0 spectra (black) and Gaussian fits (green) from NGC~253 on a Local Standard of Rest (LSR) $V_{\rm LSR}$ = 0\,km\,s$^{-1}$ frequency scale. Both spectra are smoothed to a channel spacing of $\sim$8.5\,km\,s$^{-1}$ (3.125\,MHz). {\it Lower panel}: The $J$ = 1/2$\rightarrow$1/2 (left) and $J$ = 3/2$\rightarrow$1/2 (right) groups of CN lines. {\it Upper panel}: The strongest feature is the 0$_0$$\rightarrow$1$_{-1}$ E line of CH$_3$OH (methanol). Far left: the $F_1$=0, $F_2$=1$\rightarrow$0 and $F_1$=1, $F_2$=1$\rightarrow$1 group of $^{13}$CN $N$ = 1$\rightarrow$0 lines. In between this spectral feature and the methanol line: the $F_1$=1, $F_2$=2$\rightarrow$1 group of $^{13}$CN transitions (for CN and $^{13}$CN rest frequencies, see Skatrud et al. 1983 and Bogey et al. 1984). The emission on the right hand side of the methanol line near 108.9\,GHz might be caused by SiS 6$\rightarrow$5.
Numbers at the foot of each spectral CN or $^{13}$CN feature provide expected relative intensities with respect to the weaker group of lines in case of optically thin emission under conditions of local thermodynamical equilibrium. For less sensitive CN spectra obtained with smaller bandwidths, see Fig.~1 of Henkel et al. (1993).} \label{fig1} \end{figure} \section{CN and CO toward NGC~253} \subsection{The galaxy NGC~253} The Sculptor galaxy NGC~253, an almost edge-on barred spiral ($i$=72$^{\circ}$--78$^{\circ}$; Pence 1981; Puche et al. 1991), is one of the most prolific infrared and molecular lighthouses of the entire extragalactic sky. At a distance of $D$ $\sim$ 3\,Mpc (e.g., Mouhcine et al. 2005; Rekola et al. 2005), it is a prime example of a galaxy with a nuclear starburst devoid of an active galactic nucleus (e.g., Ulvestad \& Antonucci 1997; Henkel et al. 2004). Because of the exceptional strength of its molecular lines, NGC~253 was selected as the target of choice for the first unbiased molecular line survey of an extragalactic source (Mart\'{\i}n et al. 2006). It is therefore a highly suitable target for this study. 
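As a small consistency check of the setup in Sect.\,2, the velocity channel spacings quoted there follow directly from the 195.3\,kHz channel width via $\Delta v = c\,\Delta\nu/\nu$:

```python
C_KMS = 299792.458                 # speed of light in km/s

def channel_kms(delta_nu_hz, freq_hz):
    """Velocity width of a frequency channel: dv = c * dnu / nu."""
    return C_KMS * delta_nu_hz / freq_hz

# Band-edge frequencies from Table 1:
dv_3mm = channel_kms(195.3e3, 109.635e9)   # ~0.53 km/s at lambda ~ 3 mm
dv_1mm = channel_kms(195.3e3, 225.045e9)   # ~0.26 km/s at lambda ~ 1.3 mm
```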
\begin{table*} \caption[]{CN line parameters of NGC~253, obtained from Gaussian fits$^{\rm a)}$} \begin{flushleft} \begin{tabular}{cccccc} \hline Line &$\int T_{\rm mb}$& $\nu$ &$\Delta\nu_{\rm 1/2}$& $T_{\rm mb}$\\ & (K MHz) & (MHz) & (MHz) & (mK) \\ \hline & & & & & \\ CN $N = 1\rightarrow$0 &21.637$\pm$0.083 &113069.244$\pm$0.166& 88.721$\pm$0.388 & 244$\pm$1 \\ &33.537$\pm$0.061 &113393.540$\pm$0.063& 70.041$\pm$0.170 & 479$\pm$1 \\ & & & & \\ $^{13}$CN $N = 1\rightarrow$0 & 0.736$\pm$0.086 &108555.278$\pm$3.931& 70.565$\pm$9.396 &10.4$\pm$1.8 \\ & 0.632$\pm$0.091 &108686.718$\pm$4.555& 61.822$\pm$10.458 &10.2$\pm$2.3 \\ CH$_3$OH 0$_0$$\rightarrow$1$_{-1}$ E& 1.569$\pm$0.108 &108785.756$\pm$1.333& 56.561$\pm$6.061 &27.7$\pm$3.5 \\ & & & & \\ CN $N = 2\rightarrow$1 &15.009$\pm$0.160 &226138.445$\pm$0.556&116.099$\pm$1.369 & 129$\pm$2 \\ &29.210$\pm$0.167 &226518.547$\pm$0.324&115.337$\pm$0.750 & 253$\pm$2 \\ &47.099$\pm$0.164 &226676.359$\pm$0.174&105.801$\pm$0.463 & 445$\pm$2 \\ & & & & & \\ \hline \end{tabular} \end{flushleft} a) Because of large bandwidths (4--8\,GHz), yielding potentially complex velocity-frequency correlations, all fits were obtained on a frequency scale. All fitted CN components refer to groups of individual lines. Col.\,3 displays observed frequencies, referring to the Local Standard of Rest (LSR) $V_{\rm LSR}$ = 0\,km\,s$^{-1}$ frequency scale. All given errors are standard deviations obtained from Gaussian fits. The $T_{\rm mb}$ values (last column) were obtained from the values given in Cols.\,2 and 4. Calibration uncertainties are not considered here but are discussed in Sect\,3.6.2. \label{tab2} \end{table*} \subsection{Our data} CN spectra are complex. Each CN rotational energy level with $N$$>$0 is split into a doublet by spin-rotation interaction. Because of the spin of the nitrogen nucleus ($I_1$=1), each of these components is further split into a triplet of states. 
The $^{13}$CN spectrum is further complicated by the spin of the $^{13}$C ($I_2$=1/2) nucleus. Figure~\ref{fig1} shows our $\lambda$$\sim$3\,mm CN spectra. The lower panel displays the $N$ = 1$\rightarrow$0 $J$=1/2$\rightarrow$1/2 (left) and $J$=3/2$\rightarrow$1/2 (right) groups of lines of $^{12}$C$^{14}$N (hereafter CN). The upper panel visualizes the blended $N$ = 1$\rightarrow$ 0 $F_2$ = 1$\rightarrow$0 and 1$\rightarrow$1 (far left) and the slightly weaker $F_2$ = 2$\rightarrow$1 (next feature to the right) transitions of $^{13}$C$^{14}$N (hereafter $^{13}$CN). This represents one of the first detections of $^{13}$CN in extragalactic space (cf. Aladro et al. 2013; for the first detection in the local interstellar medium, see Gerin et al. 1984). The $^{13}$CN spectrum has an rms noise level of 2\,mK (channel width: 8.5\,km\,s$^{-1}$). While the upper panel of Fig.~\ref{fig1} only shows a small spectral segment, we note that the entire spectrum has a width of 4\,GHz. Therefore the (flat) baseline and noise level are well defined. Table~\ref{tab2} provides the corresponding line parameters. Again it should be emphasized that the Gaussian fit result for $^{13}$CN, fitting the two $^{13}$CN profiles and the dominant CH$_3$OH feature simultaneously, is very robust. A comparison with the approximate rest frequencies of the different groups of lines shows that we mainly see the high velocity component of NGC~253 with a recessional velocity of $V_{\rm LSR}$ $\sim$ +290\,km\,s$^{-1}$ (e.g., Mart\'{\i}n et al. 2006), located several arcseconds south-west of the kinematical center with an extent of order 10$''$ (Mauersberger et al. 1996; Peng et al. 1996; Garc\'{\i}a-Burillo et al. 2000; Paglione et al. 2004; G{\"u}sten et al. 2006; Lebr{\'o}n et al. 2011; Sakamoto et al. 2011; Bolatto et al. 2013). 
The similarly extended lower velocity component near 170\,km\,s$^{-1}$, mainly arising from several arcseconds north-east of the dynamical center, is too weak to be detected at significant levels (but see Sect.\,3.8). For the dominant 0$_0$ $\rightarrow$ 1$_{-1}$ E line of methanol (CH$_3$OH), we obtain $V_{\rm LSR}$ = (294$\pm$4)\,km\,s$^{-1}$. While this inhibits a comparison of the two major molecular lobes near the center of NGC~253, the dominance of the high velocity component in the spectra is nevertheless positive. It considerably reduces the line widths and thus the blending of nearby spectral features. A search for vibrationally excited CN turned out to be unsuccessful. The higher frequency fine structure components of the $v$ = 1 $N$ = 1$\rightarrow$0 and 2$\rightarrow$1 CN transitions are blended by $^{12}$C$^{17}$O $J$ = 1$\rightarrow$0 and 2$\rightarrow$1. For the lower frequency fine structure components, we obtain, with a channel width of 8.5\,km\,s$^{-1}$, $\lambda$ $\sim$ 3 and 1.3\,mm 1$\sigma$ noise levels of 3 and 4\,mK (15 and 20\,mJy). \subsection{On the importance of CN} Toward NGC~253, the $^{12}$C/$^{13}$C carbon isotope ratio has been previously estimated from CS (Henkel et al. 1993) and C$_2$H (Mart{\'i}n et al. 2010). While the former authors propose $^{12}$C/$^{13}$C $\sim$40, the latter find $^{12}$C/$^{13}$C $>$ 81. In view of the scarcity of $^{12}$C/$^{13}$C determinations from extragalactic sources, this discrepancy is an important issue to resolve. This is one of the main goals of this paper. As explained in Sect.\,3.2, mm-wave CN spectra contain a multitude of individual features. Therefore, a comparison between the tracer species with highest intensities, CO, HCN, HCO$^+$, HNC, and CN, clearly favors CN, when attempting to determine optical depths from relative line intensities to derive carbon isotope ratios (Henkel et al. 1998). This also holds, when including C$_2$H (Mart\'{\i}n et al.
2010), because CN shows the broadest frequency coverage of spectral fine structure within its mm-wave transitions, which is sufficient even in the case of rotationally broadened lines from an edge-on spiral galaxy (e.g., Fig.~\ref{fig1}). \subsection{Problems related to previous $^{12}$C/$^{13}$C determinations} The $^{12}$C/$^{13}$C $\sim$ 40 estimate from CS by Henkel et al. (1993) for NGC~253 was based on the assumption of a $^{32}$S/$^{34}$S ratio of 23 as measured in the solar system and the local interstellar medium (Penzias 1980; Wannier 1980). More recent data, however, indicate a strong positive $^{32}$S/$^{34}$S gradient in the Galactic disk (Chin et al. 1996) with $^{32}$S/$^{34}$S of order 13.5 in the inner disk and a possibly similar ratio in the nuclear starburst environment of another active nearby spiral galaxy, NGC4945 (Wang et al. 2004). For a ratio of $^{32}$S/$^{34}$S = 13.5, following the procedure outlined by Henkel et al. (1993), we would obtain $^{12}$C/$^{13}$C $\sim$ 23 for NGC~253. More recently, Mart\'{\i}n et al. (2005, 2006) obtained $I$($^{12}$C$^{32}$S 3--2)/$I$($^{12}$C$^{34}$S 3--2) $\sim$ 5.7 and $I$($^{12}$C$^{32}$S 3--2)/$I$($^{13}$C$^{32}$S 3--2) $\sim$ 27 (the ratios were calculated from their Tables~1, giving integrated intensities, while Mart\'{\i}n et al. (2005) discuss $I$($^{12}$C$^{32}$S 3--2)/$I$($^{13}$C$^{32}$S 3--2) = 21$\pm$3 in their Sect.\,3.1, derived from peak flux densities). With the data from their Tables~1 and $^{32}$S/$^{34}$S = 13.5, we obtain, following the same procedure (Henkel et al. 1993), from CS a carbon isotope ratio of $^{12}$C/$^{13}$C $\sim$ 70 for NGC~253. In this context it has to be mentioned that $^{32}$S/$^{34}$S = 8$\pm$2 as suggested by Mart\'{\i}n et al. (2005) cannot be adopted because it has been derived from the $^{12}$C/$^{13}$C $\sim$ 40 ratio proposed by Henkel et al. (1993) under the assumption of $^{32}$S/$^{34}$S = 23. 
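For illustration, the double isotope bootstrap sketched above can be reproduced numerically. The following assumes, as a plausible reconstruction of the procedure rather than its published details, that C$^{34}$S is optically thin, so that the CS opacity follows from $I$($^{12}$C$^{32}$S)/$I$($^{12}$C$^{34}$S) = 5.7 with $^{32}$S/$^{34}$S = 13.5, and that the observed $I$($^{12}$C$^{32}$S)/$I$($^{13}$C$^{32}$S) = 27 is then corrected for saturation by the factor $\tau/(1-{\rm e}^{-\tau})$:

```python
import math

def solve_tau(intensity_ratio, s_ratio, lo=1e-3, hi=50.0, tol=1e-10):
    """Bisection for tau in (1 - exp(-tau)) / (1 - exp(-tau/s_ratio)) = intensity_ratio.

    The left-hand side decreases monotonically from s_ratio (tau -> 0) toward 1,
    so a simple sign-change bisection suffices.
    """
    f = lambda t: (1.0 - math.exp(-t)) / (1.0 - math.exp(-t / s_ratio)) - intensity_ratio
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Line intensity ratios from the Tables 1 of Martin et al. (2005, 2006):
tau = solve_tau(5.7, 13.5)                           # CS 3-2 opacity for 32S/34S = 13.5
ratio_12_13 = 27.0 * tau / (1.0 - math.exp(-tau))    # saturation-corrected 12C/13C
```

This reproduces the $^{12}$C/$^{13}$C $\sim$ 70 quoted above, but inherits the full uncertainty of the assumed $^{32}$S/$^{34}$S ratio.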
However, the sulfur isotope ratio of $>$16, suggested more recently by Mart\'{\i}n et al. (2010), is free of such a contradiction. To summarize, the poorly constrained sulfur isotope ratio in NGC~253 and the (initially unknown) strong Galactic $^{32}$S/$^{34}$S gradient (Chin et al. 1996), allowing for a wide range of ratios at least in the Galaxy, inhibit any reliable determination of NGC~253's carbon isotope ratio based on CS. In view of the qualitative nature of the above-mentioned ratios, a systematic analysis of error budgets is not feasible. \subsection{Estimating $^{12}$CN/$^{13}$CN, excitation temperature, and column density} With two detected groups of features in the CN and $^{13}$CN $N$=1$\rightarrow$0 lines (Fig.~\ref{fig1}) and three in the CN $N$=2$\rightarrow$1 transition (Fig.~\ref{fig2}), we are able to quantify opacity effects without having to rely on a second isotope ratio (like that of $^{32}$S/$^{34}$S). Therefore, CN provides a good database to directly estimate the $^{12}$C/$^{13}$C ratio in NGC~253. If local thermodynamical equilibrium (LTE) holds and lines are optically thin, line intensity ratios should be 1:2 (CN $N$=1$\rightarrow$0), 1.225:1 ($^{13}$CN $N$=1$\rightarrow$0) and 1:5:9 (CN $N$=2$\rightarrow$1), when moving from left to right with increasing frequency in Figs.~\ref{fig1} and \ref{fig2}. Dividing the integrated intensities of the CN and $^{13}$CN $N$=1$\rightarrow$0 transitions, we obtain $I$(CN)/$I$($^{13}$CN) = 40. Also accounting for the $F_1$, $F_2$=0$\rightarrow$1 $^{13}$CN components near 108.4\,GHz, which are not seen but contribute 7.6\% to the total $^{13}$CN emission in the case of optically thin lines and prevailing LTE conditions (see Bogey et al. 1984), the ratio drops to $I$(CN)/$I$($^{13}$CN) = 37.5 for the $N$ = 1$\rightarrow$0 transition. 
To estimate the CN excitation temperature, we note that the $N$ = 2$\rightarrow$1/1$\rightarrow$0 line intensity ratio is 1.655 (Table~\ref{tab2}) on a frequency scale and 0.83 on a velocity scale. The latter is relevant here. Accounting for the fact that the CN $N$ = 2$\rightarrow$1 linear beam size $\theta_{\rm b}$ is half that of the $N$ = 1$\rightarrow$0 line and assuming that the source size is small with respect to $\theta_{\rm b, CN 2-1}$, beam dilution is four times higher for the $N$ = 1$\rightarrow$0 than for the 2$\rightarrow$1 transition. The corrected line intensity ratio then becomes 0.21. Fig.~2c of Mauersberger et al. (1996) suggests that the high velocity CO $J$ = 2$\rightarrow$1 emission has an extent comparable to $\theta_{\rm b, CN 2-1}$. Since CN may arise from an even more compact region, this provides an upper limit to the possible extent of the CN emission. Therefore, the real CN $N$ = 2$\rightarrow$1/1$\rightarrow$0 intensity ratio may be close to 0.25, but below we will account for the entire range of possible values. For optically thin emission we use the $N$ = 2$\rightarrow$1/1$\rightarrow$0 ratio to constrain the excitation temperature via $$ 0.25 = 4\,e^{-x} \times \frac{1-e^{-2x}}{1-e^{-x}} \times \frac{(e^{2x}-1)^{-1} - (e^{2y}-1)^{-1}}{(e^{x}-1)^{-1} - (e^{y}-1)^{-1}} $$ (e.g., Wang et al. 2004), where $x = h\nu_{10}/(kT_{\rm ex})$, $\nu_{10}$ = 113.386\,GHz (an averaged CN $N$ = 1$\rightarrow$0 rest frequency), and $y = h\nu_{10}/(2.73\,{\rm K}\,k) = 1.99$. In this way we derive an excitation temperature of $T_{\rm ex}$ = 3.5$^{+3.0}_{-0.3}$\,K. The error limits account for the entire range of possible line intensity ratios from 0.21 to 0.83. For an estimate also discussing effects of line saturation and non-LTE excitation, see Sect.\,3.7. 
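As a numerical cross-check, the relation above can be inverted for $T_{\rm ex}$ by simple bisection. The following Python sketch (the value of $h/k$ and the bracketing interval are our own choices, not taken from the text) recovers an excitation temperature close to the quoted 3.5\,K for a line ratio of 0.25:

```python
import math

# h*nu/k for the averaged CN N = 1->0 rest frequency of 113.386 GHz,
# with h/k = 4.799e-2 K/GHz.
H_NU_OVER_K = 4.799243e-2 * 113.386   # K
Y = H_NU_OVER_K / 2.73                # cosmic background term, ~1.99

def line_ratio(t_ex):
    """CN N = 2->1 / 1->0 intensity ratio (velocity scale) for a given T_ex."""
    x = H_NU_OVER_K / t_ex
    stat = 4.0 * math.exp(-x) * (1.0 - math.exp(-2.0 * x)) / (1.0 - math.exp(-x))
    src = ((1.0 / (math.exp(2.0 * x) - 1.0) - 1.0 / (math.exp(2.0 * Y) - 1.0)) /
           (1.0 / (math.exp(x) - 1.0) - 1.0 / (math.exp(Y) - 1.0)))
    return stat * src

def solve_tex(ratio, lo=2.9, hi=20.0):
    """Bisection; line_ratio increases monotonically with T_ex."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if line_ratio(mid) < ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_ex = solve_tex(0.25)   # ~3.4-3.5 K, matching the quoted value
```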
\subsection{Potential uncertainties in the derived carbon isotope ratio} Toward NGC~253, $^{12}$C/$^{13}$C line intensity ratios from carbon-bearing species with well detected $^{13}$C-containing isotopologues are given by Henkel et al. (1993) for CO $J$ = 1$\rightarrow$0 and 2$\rightarrow$1, HCN $J$=1$\rightarrow$0, HCO$^+$ $J$=1$\rightarrow$0, and CS $J$=3$\rightarrow$2. These ratios do not surpass 20 and can be taken as lower limits in view of the unknown but certainly substantial optical depths of the main isotopologues. Mart\'{\i}n et al. (2005, 2006) find (as already mentioned) a ratio of 27 for CS $J$=3$\rightarrow$2. In view of all these data, our determination of the CN/$^{13}$CN $N$=1$\rightarrow$0 line intensity ratio of order 40 is a major step forward. However, questions related to fractionation, isotope selective photodissociation, calibration, and optical depths of the main CN species still remain to be discussed. \begin{figure}[t] \vspace{0.0cm} \centering \resizebox{13.3cm}{!}{\rotatebox[origin=br]{-90}{\includegraphics{fig2-ngc253.ps}}} \vspace{-0.4cm} \caption{The CN $N$=2$\rightarrow$1 spectrum (black) and Gaussian fit (green) of NGC~253 in units of main beam brightness temperature on a Local Standard of Rest $V_{\rm LSR}$ = 0\,km\,s$^{-1}$ frequency scale. The profile has been smoothed to a channel spacing of $\sim$2.1\,km\,s$^{-1}$ (1.5625\,MHz). Numbers at the foot of each spectral feature provide expected relative intensities with respect to the weakest group of lines in case of optically thin emission under conditions of local thermodynamical equilibrium.} \label{fig2} \end{figure} \subsubsection{Chemical fractionation and isotope selective photodissociation} Langer et al. 
(1984) modeled the fractionation of oxygen and carbon in dense interstellar clouds with time-dependent chemistry, involving cloud lifetimes up to 10$^8$\,yr, kinetic temperatures of 6--80\,K, and densities of 5$\times$10$^2$--10$^5$\,cm$^{-3}$ for a wide range of metal abundances. While oxygen isotope fractionation is insignificant under all considered conditions, carbon fractionation occurs, resulting in CO providing too low, HCO$^+$ delivering quite accurate, and CN, CS, HCN, and H$_2$CO yielding too high $^{12}$C/$^{13}$C ratios. Observations of H$_2$CO, C$^{18}$O, and CN, including the corresponding $^{13}$C-bearing species, across the Galactic plane (Henkel et al. 1982; Langer \& Penzias 1990; Milam et al. 2005) demonstrate how large the numerically predicted discrepancies may become in the real world. Observationally, the carbon isotope ratios from H$_2$CO turn out to be larger by $\sim$30\% than those from C$^{18}$O and CN, which are quite similar. Milam et al. (2005), synthesizing all these data sets, conclude that there is good agreement between all measured $^{12}$C/$^{13}$C values at a given galactocentric radius, independent of the kinetic temperature of the cloud observed. Thus, chemical fractionation and isotope selective photodissociation do not play a dominant role. In addition to the advantage of CN showing a peculiar spectral fine structure (see Sect.\,3.3), we thus also find that there exists an exemplary Galactic data set on CN/$^{13}$CN (Milam et al. 2005). There is no such analog for C$_2$H, encompassing the Galactic disk and center, which would be relevant for the analysis of NGC~253 by Mart\'{\i}n et al. (2010). 
Since C$_2$H is a molecule that may also permit the derivation of line opacities of the main isotopologue in galaxies with moderate line widths, a systematic survey of C$_2$H, $^{13}$CCH, and C$^{13}$CH across the Galaxy would be highly desirable, providing interesting insights into astrochemistry, possibly delivering a benchmark for extragalactic data, and allowing us to critically compare carbon ratios derived from CN and C$_2$H also in extragalactic sources. In view of similar results from H$_2$CO, C$^{18}$O, and CN in the Galaxy (Milam et al. 2005), we expect compatible results from C$_2$H as well, but this has still to be demonstrated in a rigorous way. Although it would be unexpected (see, e.g., Wang et al. 2009, who find that the LMC is well mixed with respect to the carbon isotope ratio), we also cannot yet firmly exclude that CN and C$_2$H trace different regions with different $^{12}$C/$^{13}$C values. Deviations from relative LTE intensities of different features within the $N$ = 1$\rightarrow$0 or 2$\rightarrow$1 transitions of C$_2$H have been found to be small (e.g., Padovani et al. 2009), suggesting that C$_2$H will be useful for studies of isotope ratios. Furthermore, CN and C$_2$H appear to be chemically related. Both are common tracers of dark clouds (e.g., Padovani et al. 2009 and references therein) and are also probes of photon dominated regions (PDRs; e.g., Simon et al. 1997; Rimmer et al. 2012). Because of high critical densities (for C$_2$H, see Spielfiedel et al. 2012; for CN, see Sect.\,3.7), their molecular line emission should be weak in clouds of low density. In view of all these common properties, significantly different $^{12}$C/$^{13}$C ratios from CN and C$_2$H would be a great surprise. \subsubsection{Calibration} Table~\ref{tab1} shows a drastic difference between the system temperatures of the two $\lambda$ $\sim$ 3\,mm spectra centered at 109.635 and 113.365\,GHz. 
This is mainly a consequence of the extinction caused by atmospheric O$_2$ near 118\,GHz. While previous studies were made with comparatively small bandwidths, here we face the problem that a single system temperature stands for a spectrum with supposedly quite different atmospheric extinctions at its low and high frequency edges. Because of this, we have to compare our measured main beam brightness temperatures with those of previous studies. Only data from the 30-m IRAM telescope are considered here. With respect to $^{13}$C$^{16}$O (hereafter $^{13}$CO) and $^{12}$C$^{18}$O (hereafter C$^{18}$O) $J$ = 1$\rightarrow$0 near 110\,GHz (Fig.~\ref{fig3}), agreement with the profiles of Harrison et al. (1999) is excellent. When compared with Fig.~1 of Henkel et al. (1993), this also holds for the two main groups of features of the CN $N$ = 2$\rightarrow$1 transition near 226\,GHz. While our signal-to-noise ratios are much higher, there is also no notable discrepancy between Fig.~1 of Henkel et al. (1993) and our $^{13}$CN spectrum near 108.6\,GHz (Fig.~\ref{fig1}, upper panel). This comparison mainly refers to the CH$_3$OH 0$_0$$\rightarrow$1$_{-1}$ line, because the noise level in the previously published spectrum is too high to detect $^{13}$CN. Here we should note that all lines considered so far are well displaced from the atmospheric 118\,GHz O$_2$ feature, so that atmospheric extinction is not expected to vary strongly within the individually observed 4\,GHz wide frequency bands. Nevertheless, there are also inconsistencies. Henkel et al. (1988) measured only $T_{\rm mb}$ $\sim$ 250\,mK for CN $N$ = 2$\rightarrow$1, which is lower than what Henkel et al. (1993) report five years later and what we have observed in this study ($\sim$450\,mK; cf. Table~\ref{tab2}). The IRAM 30-m telescope has been greatly improved during the years between 1988 and 1993, so the later observation should be preferred, providing support for our spectrum displayed in Fig.~\ref{fig2}. 
More critical is CN $N$ = 1$\rightarrow$0, because it has been observed in the 4\,GHz wide band also covering CO $J$ = 1$\rightarrow$0 at the edge of the atmospheric 118\,GHz O$_2$ feature. Henkel et al. (1988) and Henkel et al. (1993) find $T_{\rm mb}$ $\sim$ 300 and 350\,mK for CN $N$ = 1$\rightarrow$0, which is $\sim$40\% and 25\% below our value (Table~\ref{tab2}). With the CN $N$ = 1$\rightarrow$0 frequencies being located in between those of $^{13}$CO and C$^{18}$O $J$ = 1$\rightarrow$0 and CO $J$ = 1$\rightarrow$0, our CO peak intensity is also relevant for an evaluation of calibration uncertainties. With $T_{\rm mb}$ $\sim$ 5.7\,K (Fig.~\ref{fig4}), we obtain a value well above that shown in Fig.~5c of Mauersberger et al. (1996), reaching only $\sim$4\,K for the $V_{\rm LSR}$ $\sim$ +290\,km\,s$^{-1}$ velocity component seen by us (Sect.\,3.2). This difference of $\sim$30\% relative to our result is similar to the deviation obtained for CN $N$ = 1$\rightarrow$0 and will be implemented as the main uncertainty in our estimate of the carbon isotope ratio (see Sect.\,3.7). \subsubsection{Optical depths} CN has been observed in several clouds of the Galactic center region (Henkel et al. 1998), which is as close as we can get to the physical conditions prevailing in the nuclear region of NGC~253. Toward cloud cores, deviations of the individual spectral features from LTE were found to be moderate. The $I$(CN)/$I$($^{13}$CN) ratio is 9--15 in the $N$=1$\rightarrow$0 line, below the canonical Galactic center ratio of $^{12}$C/$^{13}$C $\sim$ 25, indicating moderate CN saturation. In this context, we should also mention the interferometric CN observations of the innermost 4\,pc of our Galaxy (Mart\'{\i}n et al. 2012), which yield carbon isotope ratios of 15--45. We note, however, that this study addresses a very small region compared to those discussed here. 
In NGC~253, the line intensity ratio of the two $^{13}$CN $N$~= 1$\rightarrow$0 features is 1.165$\pm$0.215, which is consistent with LTE and optically thin emission (Fig.~\ref{fig1}, upper panel). We may, however, face some moderate saturation in the CN $N$~= 1$\rightarrow$0 line. Instead of the 2-to-1 ratio expected under LTE conditions with optically thin lines, the ratio of integrated intensities between the $J$ = 3/2$\rightarrow$1/2 and 1/2$\rightarrow$1/2 groups of lines is 1.55$\pm$0.01 (see Fig.~\ref{fig1} and Table~\ref{tab2}). We note, however, that the peak line intensity ratio between the two groups of line components is almost exactly two. The stronger spectral component is narrower, a consequence of the relative frequencies of its individual hyperfine components. To reach twice the integrated intensity, the stronger component would need a peak temperature of $T_{\rm mb}$ = 606\,mK, $\sim$2.5 times the $T_{\rm mb}$ value of the weaker component. With the observed 479\,mK instead (Table~\ref{tab2}) and assuming equal excitation temperatures for all individual features, we can apply the radiative transfer equation to determine the optical depth via $$ 606/479\ \sim\ 1.265\ =\ \frac{\tau}{1 - e^{-\tau}}. $$ This gives a peak optical depth of $\tau$ $\sim$ 0.5. For the other measured CN 1$\rightarrow$0 feature, we then obtain an optical depth of $\tau$ $\sim$ 0.2, and effects due to saturation should amount to $\sim$26.5\% and 10\%, respectively. \subsection{Consequences} To determine the carbon isotope ratio, we thus multiply the integrated intensity $I_{113.1}$ of the weaker CN $N$ = 1$\rightarrow$0 feature by $f_{113.1}$ = 1.1 and that of the stronger one, $I_{113.4}$, by $f_{113.4}$ = 1.265 to account for line saturation (here and below, the indices refer to redshifted frequencies in units of GHz). 
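The optical depths and correction factors above can be verified by inverting $\tau/(1-e^{-\tau})$ numerically; a minimal Python sketch (the bisection bounds are our own choice):

```python
import math

def saturation_factor(tau):
    """Correction factor tau/(1 - exp(-tau)) for peak optical depth tau."""
    return tau / (1.0 - math.exp(-tau))

def solve_tau(factor, lo=1e-6, hi=10.0):
    """Invert the saturation factor by bisection; it increases monotonically with tau."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if saturation_factor(mid) < factor:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau_strong = solve_tau(606.0 / 479.0)   # ~0.5 for the stronger 1->0 group
f_weak = saturation_factor(0.2)         # ~1.10, i.e. the ~10% correction
```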
With the values given in Table~\ref{tab2} and the correction to the measured $I_{\rm 13CN}$ intensity mentioned in Sect.\,3.5, we then obtain a $^{12}$CN/$^{13}$CN $N$ = 1$\rightarrow$0 ratio of $$ \frac{(I_{113.1} \times f_{113.1}) + (I_{113.4} \times f_{113.4})}{1.082 \times I_{\rm 13CN}} = 45. $$ This should be consistent with the carbon isotope ratio (e.g., Milam et al. 2005). The main error could be caused by an overestimate of our CN 1$\rightarrow$0 main beam brightness temperature scale by $\sim$30\%, as outlined in Sect.\,3.6.2. This would reduce the isotope ratio to a value of approximately 30. Since all other errors appear negligible relative to this one, we conclude that {\it the carbon isotope ratio is 30--50 in the south-western starburst core of NGC~253}. This is well below the lower limit proposed by Mart\'{\i}n et al. (2010), while it is perfectly consistent with the ratio derived by Henkel et al. (1993) from CS. Nevertheless, we consider this latter agreement as fortuitous. It implies that the $^{32}$S/$^{34}$S sulfur isotope ratio in the central part of NGC~253 is close to the local interstellar value, in agreement with $>$16, the value suggested by Mart\'{\i}n et al. (2010). While all this is highly consistent and straightforward, we still have to emphasize that the derived carbon isotope ratio is based on the assumption that the intrinsic CN $N$ = 1$\rightarrow$0 relative line strengths are, like those of $^{13}$CN 1$\rightarrow$0, close to their LTE values (for further support for this assumption, see Sect.\,4.1). \begin{figure}[t] \vspace{0.0cm} \centering \resizebox{25.0cm}{!}{\rotatebox[origin=br]{-90}{\includegraphics{fig3-ngc253.ps}}} \vspace{-1.0cm} \caption{C$^{18}$O and $^{13}$CO $J$=1$\rightarrow$0 spectra from NGC~253 (lower panel) and Mrk~231 (upper panel) on a Local Standard of Rest (LSR) $V_{\rm LSR}$ = 0\,km\,s$^{-1}$ frequency scale. 
The spectra were smoothed to channel spacings of $\sim$4.26 and $\sim$60.0\,km\,s$^{-1}$ (1.53 and 21.0\,MHz), respectively. In the lower panel, the features at 109.8 and 110.27\,GHz belong to HNCO $J$=5$\rightarrow$4 and CH$_3$CN $J$=6$\rightarrow$5.} \label{fig3} \end{figure} \begin{figure}[t] \vspace{0.0cm} \centering \resizebox{25.0cm}{!}{\rotatebox[origin=br]{-90}{\includegraphics{fig4-ngc253.ps}}} \vspace{-1.0cm} \caption{CO $J$=1$\rightarrow$0 spectra from NGC~253 (lower panel) and Mrk~231 (upper panel) on a Local Standard of Rest (LSR) $V_{\rm LSR}$ = 0\,km\,s$^{-1}$ frequency scale. The spectra were smoothed to channel spacings of $\sim$8.52 and 10.84\,km\,s$^{-1}$ (3.125 and 4.0\,MHz), respectively. The frequency range of the upper panel is wider to show the molecular outflow at the foot of the main spectral component, seen in CO and other species (e.g., Feruglio et al. 2010; Aalto et al. 2012).} \label{fig4} \end{figure} Accounting with $f_{113.1}$ and $f_{113.4}$ for saturation effects in the CN 1$\rightarrow$0 transition and neglecting those possibly present in the $N$ = 2$\rightarrow$1 line, we can now also re-evaluate the CN excitation temperature from a modified CN $N$ = 2$\rightarrow$1/1$\rightarrow$0 intensity ratio, which drops from 0.83 to 0.69 on a velocity scale (see Sect.\,3.5). This line ratio would be reduced to 0.172 in case of a point source due to the different beam filling factors of the two lines. Following the discussion in Sect.\,3.5, and adopting a ratio of 0.2, this results in $T_{\rm ex}$ = 3.2$^{+2.4}_{-0.2}$\,K with the given errors covering the entire range of allowed line ratios from 0.172 to 0.69. 
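The saturation-corrected isotope ratio of 45 quoted above can be reproduced from the numbers in the text alone, by weighting the two correction factors with the 1.55:1 integrated intensity ratio of the CN 1$\rightarrow$0 groups and applying the mean factor to the optically thin ratio of 37.5. A short Python sketch (the relative intensity scale is our own simplification):

```python
# Relative integrated intensities of the weaker and stronger CN 1->0 groups
# (ratio 1.55:1) and their saturation correction factors f = 1.1 and 1.265.
i_weak, i_strong = 1.0, 1.55
f_weak, f_strong = 1.1, 1.265

# Intensity-weighted mean correction, applied to the optically thin
# CN/13CN ratio of 37.5 derived earlier.
mean_f = (i_weak * f_weak + i_strong * f_strong) / (i_weak + i_strong)
ratio = 37.5 * mean_f   # ~45
```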
\begin{table*} \caption[]{CO line parameters for NGC~253 and CO and CN line parameters for Mrk~231, obtained from Gaussian fits$^{\rm a)}$} \begin{flushleft} \begin{tabular}{crccrr} \hline Line &\multicolumn{1}{c}{$\int T_{\rm mb}$} & $V$ & $\nu$ & $\Delta\nu_{\rm 1/2}$ & $T_{\rm mb}$ \\ &\multicolumn{1}{c}{(K MHz)} &(km\,s$^{-1}$) & (MHz) & (MHz) & (mK) \\ \hline & & & & & \\ {\it NGC~253} & & & & & \\ $^{12}$C$^{16}$O $J = 1\rightarrow$0 &403.040$\pm$0.125& 285$\pm$1 &115161.571$\pm$0.010 & 68.931$\pm$0.025 & 5847$\pm$3 \\ Comp. 1 &360.970$\pm$0.149& 293$\pm$1 &115158.298$\pm$0.002 & 59.460$\pm$0.029 & 6070$\pm$4 \\ Comp. 2 & 45.321$\pm$0.121& 155$\pm$1 &115211.774$\pm$0.032 & 35.844$\pm$0.092 & 1264$\pm$5 \\ $^{13}$C$^{16}$O $J = 1\rightarrow$0 & 28.840$\pm$0.070& 285$\pm$1 &110096.675$\pm$0.075 & 63.038$\pm$0.184 & 458$\pm$2 \\ Comp. 1 & 24.349$\pm$0.075& 300$\pm$1 &110092.448$\pm$0.072 & 49.949$\pm$0.172 & 487$\pm$2 \\ Comp. 2 & 4.593$\pm$0.066& 167$\pm$1 &110139.946$\pm$0.179 & 32.367$\pm$0.358 & 142$\pm$3 \\ $^{12}$C$^{18}$O $J = 1\rightarrow$0 & 8.009$\pm$0.101& 285$\pm$1 &109677.878$\pm$0.372 & 60.921$\pm$0.949 & 131$\pm$3 \\ Comp. 1 & 6.652$\pm$0.084& 297$\pm$1 &109673.695$\pm$0.264 & 46.421$\pm$0.691 & 143$\pm$3 \\ Comp. 
2 & 1.379$\pm$0.073& 169$\pm$2 &109720.494$\pm$0.600 & 29.242$\pm$1.412 & 47$\pm$3 \\ $^{12}$C$^{17}$O $J = 1\rightarrow$0 & 0.629$\pm$0.056& 286$\pm$5 &112252.191$\pm$1.894 & 44.837$\pm$5.007 & 14$\pm$2 \\ & & & & \\ $^{13}$C$^{16}$O $J = 2\rightarrow$1 &123.010$\pm$0.261& 278$\pm$1 &220194.289$\pm$0.107 & 105.105$\pm$0.272 & 1170$\pm$4 \\ $^{12}$C$^{18}$O $J = 2\rightarrow$1 & 33.905$\pm$0.212& 278$\pm$1 &219356.938$\pm$0.273 & 92.815$\pm$0.734 & 365$\pm$4 \\ $^{12}$C$^{17}$O $J = 2\rightarrow$1 & 4.216$\pm$0.147& 273$\pm$2 &224509.430$\pm$1.437 & 86.434$\pm$3.724 & 49$\pm$3 \\ & & & & & \\ {\it Mrk~231} & & & & & \\ $^{12}$C$^{16}$O $J = 1\rightarrow$0 & 4.687$\pm$0.034& 12658$\pm$01&110601.313$\pm$00.268& 75.254$\pm$00.650 & 62.3$\pm$0.7 \\ $^{13}$C$^{16}$O $J = 1\rightarrow$0 & 0.178$\pm$0.041& 12723$\pm$28&105714.712$\pm$09.762& 89.608$\pm$31.488 & 2.0$\pm$0.8 \\ $^{12}$C$^{18}$O $J = 1\rightarrow$0 & 0.142$\pm$0.034& 12685$\pm$31&105325.492$\pm$10.778& 75.964$\pm$26.302 & 1.9$\pm$0.8 \\ & & & & & \\ $^{12}$C$^{16}$O $J = 2\rightarrow$1 & 33.178$\pm$0.135& 12661$\pm$01&221196.545$\pm$00.299& 151.903$\pm$00.731 &218.4$\pm$1.4 \\ $^{13}$C$^{16}$O $J = 2\rightarrow$1 & 0.693$\pm$0.142& 12682$\pm$13&211453.470$\pm$09.194& 91.493$\pm$21.645 & 7.6$\pm$2.4 \\ $^{12}$C$^{18}$O $J = 2\rightarrow$1 & 0.544$\pm$0.131& 12691$\pm$16&210643.401$\pm$11.551& 73.979$\pm$22.168 & 7.4$\pm$2.8 \\ & & & & & \\ CN $N = 1\rightarrow$0 & 0.295$\pm$0.053& -- &108590.836$\pm$06.522& 84.815$\pm$22.957 & 3.5$\pm$1.1 \\ CN $N = 1\rightarrow$0 & 0.600$\pm$0.040& -- &108898.443$\pm$02.392& 73.780$\pm$05.832 & 8.1$\pm$0.8 \\ & & & & & \\ CN $N = 2\rightarrow$1 & 1.714$\pm$0.189& -- &217691.713$\pm$07.215& 139.615$\pm$18.827 & 12.3$\pm$2.1 \\ CN $N = 2\rightarrow$1 & 1.544$\pm$0.197& -- &217442.541$\pm$11.022& 177.146$\pm$25.067 & 8.7$\pm$1.7 \\ CN $N = 2\rightarrow$1 & 0.499$\pm$0.117& -- &217165.557$\pm$08.030& 70.279$\pm$19.209 & 7.1$\pm$2.6 \\ & & & & & \\ \hline \end{tabular} 
\end{flushleft} a) Given frequencies refer to the Local Standard of Rest (LSR) $V_{\rm LSR}$ = 0 km\,s$^{-1}$ frequency scale. All errors are standard deviations obtained from Gaussian fits, except those in the last column. The $T_{\rm mb}$ values and errors were derived by combining the values of columns 2 and 5. Outflow components (see Turner et al. 1985; Feruglio et al. 2010; Aalto et al. 2012; Bolatto et al. 2013), not apparent in the weaker isotopic lines, are not included in the fits. For potential calibration errors, see Sect.\,3.6.2. Because of the large bandwidths of the spectra, yielding potentially complex velocity-frequency correlations, frequencies and not velocities are emphasized. For the CO lines, however, optical (c$z$) $V_{\rm LSR}$ values are also given, mainly to show the relative importance of the two main spectral features in NGC~253. Since each CN component represents a group of hyperfine components of different strength, no velocities are given in this case. \label{tab3} \end{table*} The intensity ratio of the two main CN $N$ = 2$\rightarrow$1 spectral features (Fig.~\ref{fig2}) is close to the LTE value of 1.8:1. Their ratio of peak intensities is 1.76$\pm$0.02 and the ratio of integrated intensities is 1.61$\pm$0.01, while line widths are similar. This suggests that saturation plays only a minor role, affecting line intensities by $\la$10\%, as is also suggested by the moderate opacities of the 1$\rightarrow$0 lines and the low excitation temperature derived above, which effectively reduces the populations of the higher $N$-levels. While all this is perfectly consistent, the intensity of the weakest 2$\rightarrow$1 feature (Fig.~\ref{fig2}) deviates significantly. This component appears to be far too strong relative to the others. An alternative view of the $N$ = 2$\rightarrow$1 line (see Fig.~\ref{fig2}) would be that the strongest and weakest $N$ = 2$\rightarrow$1 features are in LTE, while the central feature is depleted by a non-LTE effect. 
Then saturation effects would reduce the main beam brightness temperature of the strongest feature from 1274\,mK (nine times the peak intensity of the weakest feature multiplied by the ratio of the two line widths, 1.097; see Fig.~\ref{fig2} and Table~\ref{tab2}) to the observed 445\,mK, implying a peak optical depth of $\tau$ $\sim$ 3.7. To calculate the excitation temperature in this way, we adopt for the weaker central feature half this peak optical depth and obtain, with the values of Table~\ref{tab2} (i.e., integrating over frequency), a $N$ = 2$\rightarrow$1/1$\rightarrow$0 line intensity ratio of $$ \frac{(I_{226.7}\times f_{226.7}) + (I_{226.5}\times f_{226.5}) + I_{226.1}} {(I_{113.1}\times f_{113.1}) + (I_{113.4}\times f_{113.4})} = 3.23. $$ For the quantities in the denominator, see the previous equation; $f_{226.7}$ = 2.86 and $f_{226.5}$ = 2.20. Integrating instead over velocity, which is the proper scale, yields half the CN line intensity ratio, 1.614, or, for a point source (see Sect.\,3.5), 0.403. The resulting excitation temperature then becomes 4.2\,K $<$ $T_{\rm ex}$ $<$ 11.3\,K. Because the CN emission is likely not extended with respect to the beam (Sect.\,3.5), the actual excitation temperature should be close to 4\,K, still a very low value in spite of all the corrections we have made. It is also small in view of the excitation temperatures derived from other species in NGC~253 (Mart\'{\i}n et al. 2006), strongly indicating subthermal excitation. The critical density of CN $N$=1$\rightarrow$0, where collisional excitation rates match those for spontaneous radiative decay, is high ($n_{\rm crit}$ $\sim$ 10$^6$\,cm$^{-3}$) and almost as large as that for HCN $J$=1$\rightarrow$0. Therefore, subthermal emission from a predominantly lower density medium is no surprise. 
The corresponding CN column density is 1.7$\times$10$^{15}$\,cm$^{-2}$ but could be up to a factor of 20 higher and almost an order of magnitude lower for the limiting cases $T_{\rm ex}$ = 11.3 and 3.0\,K (the latter obtained prior to correcting for CN $N$ = 2$\rightarrow$1 line saturation). We further note that these column density estimates are speculative, because there may exist a hot, dense component with $T_{\rm ex}$ well above 10\,K, which might only become visible when observing higher $N$ transitions (for CN chemistry, see, e.g., Simon et al. 1997; Liszt \& Lucas 2001). Adopting exclusively collisional excitation and using RADEX (van der Tak et al. 2007), $T_{\rm ex}$ = 4.0\,K corresponds for kinetic temperatures of 50--100\,K to a density of $n$(H$_2$) $\sim$ 2.5 $\times$ 10$^4$\,cm$^{-3}$. This involves Einstein coefficients from Klisch et al. (1995), a dipole moment of 1.45\,Debye (Thomson \& Dalby 1968) and He-impact rates from Lique et al. (2010) scaled by 1.37 to simulate H$_2$. \begin{figure}[t] \vspace{-3.8cm} \centering \resizebox{25.0cm}{!}{\rotatebox[origin=br]{-90}{\includegraphics{fig6-ngc253.ps}}} \vspace{-1.0cm} \caption{$^{13}$CO, C$^{18}$O, and C$^{17}$O spectra from NGC~253 (see also Figs.~\ref{fig3} and \ref{fig4}) on a Local Standard of Rest (LSR) $V_{\rm LSR}$ = 0\,km\,s$^{-1}$ frequency scale. The profiles were smoothed to channel spacings of $\sim$8.5, 8.5, and 17\,km\,s$^{-1}$ from top to bottom, which corresponds to 6.25\,MHz.} \label{fig5} \end{figure} \subsection{Beyond CN: Oxygen isotope ratios in NGC~253} Due to bandwidths of 2$\times$2$\times$4\,GHz (Sect.\,2), our spectra do not only contain CN but also a number of CO lines (Figs.~\ref{fig3}--\ref{fig5} and Sect.\,3.6.2). 
Because of their strength and because line shapes are reflecting individual components and not groups of hyperfine components (the exception from this rule is $^{12}$C$^{17}$O, hereafter C$^{17}$O), not only the $V_{\rm LSR}$ $\sim$ 290\,km\,s$^{-1}$ feature is seen in the $J$ = 1$\rightarrow$0 lines (see Sect.\,3.2), but also the lower velocity $V_{\rm LSR}$ $\sim$ 170\,km\,s$^{-1}$ component (e.g., Mart\'{\i}n et al. 2006), which contributes 10\%--20\% to the total line emission in our $\lambda$ = 3\,mm spectra. The $J$ = 2$\rightarrow$1 lines obtained with a smaller beam size (Table~1) are best fit by a single velocity component. Table~\ref{tab3} summarizes the parameters of the Gaussian fits to the line profiles. Noteworthy are the $I$(C$^{18}$O)/$I$(C$^{17}$O) $J$ = 1$\rightarrow$0 and 2$\rightarrow$1 ratios of 12.7$\pm$1.2 and 8.0$\pm$0.3, which are extremely high with respect to the Galactic interstellar medium ($\sim$3.5; Wouterloot et al. 2008). These were already noted before (e.g., Henkel \& Mauersberger 1993), but the rotational lines of the different isotopologues could previously not be observed simultaneously. The high ratios of order 10 were interpreted in terms of vigorous massive star formation in a nuclear starburst with a metal-rich gaseous composition. Adopting $^{12}$C/$^{13}$C = $X$ = 40$\pm$10 from CN (Sect.\,3.7) and keeping in mind that at least in Galactic star forming regions $^{12}$C/$^{13}$C ratios from CN and C$^{18}$O are similar (Milam et al. 2005), we can also estimate the optical depths of the various CO isotopologues. Only accounting for the errors obtained from Gaussian fits, the CO/$^{13}$CO $J$ = 1$\rightarrow$0 line intensity ratio becomes 13.98$\pm$0.34 (Table~\ref{tab3}). Since the intensity of the CO $J$ = 1$\rightarrow$0 line may be overestimated by 30\% (Sect.\,3.6.2), we estimate a ratio of $R$ = 10--14 and obtain with $$ R = \frac{1 - e^{-X\tau}}{1 - e^{-\tau}} $$ $\tau$(CO 1$\rightarrow$0) = $X\tau$ = 1.8--5. 
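The quoted opacity range can be recovered by inverting the expression for $R$ numerically, letting both $R$ = 10--14 and $X$ = 30--50 vary; a minimal Python sketch (the bisection bounds and the choice of extreme $R$, $X$ combinations are our own):

```python
import math

def intensity_ratio(tau13, x):
    """CO/13CO line intensity ratio for 13CO optical depth tau13 and 12C/13C = x."""
    return (1.0 - math.exp(-x * tau13)) / (1.0 - math.exp(-tau13))

def solve_tau13(r, x, lo=1e-5, hi=2.0):
    """Bisection; the ratio decreases monotonically from x toward 1 with tau13."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if intensity_ratio(mid, x) > r:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# CO 1->0 peak opacity X*tau for the extreme combinations of R and X:
tau_co_min = 30.0 * solve_tau13(14.0, 30.0)   # ~1.9
tau_co_max = 50.0 * solve_tau13(10.0, 50.0)   # ~5.2
```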
This result implies that $^{13}$CO should be optically thin, not only in the $J$ = 1$\rightarrow$0 but also in the 2$\rightarrow$1 transition. Testing this by comparing the $I$($^{13}$CO)/$I$(C$^{18}$O) line intensity ratios in the two lowest rotational transitions, we obtain 3.601$\pm$0.046 and 3.628$\pm$0.024, respectively. Within the limits of accuracy, the ratios agree with each other, as expected in the case of optically thin emission. With the $I$($^{13}$CO)/$I$(C$^{18}$O) value and with 8.9$\pm$1.2 being the weighted mean of the $I$(C$^{18}$O)/$I$(C$^{17}$O) ratio from the $J$ = 1$\rightarrow$0 and 2$\rightarrow$1 transitions, $^{16}$O/$^{18}$O = CO/$^{13}$CO $\times$ $^{13}$CO/C$^{18}$O = (40$\pm$10) $\times$ (3.62$\pm$0.05) = 145$\pm$36 and $^{16}$O/$^{17}$O = $^{16}$O/$^{18}$O $\times$ C$^{18}$O/C$^{17}$O = (145$\pm$36) $\times$ (8.9$\pm$1.2) = 1290$\pm$365 (see also Harrison et al. 1999). \begin{figure}[t] \vspace{0.0cm} \centering \resizebox{25.0cm}{!}{\rotatebox[origin=br]{-90}{\includegraphics{fig5-ngc253.ps}}} \vspace{-1.0cm} \caption{CO $J$=2$\rightarrow$1 and CN spectra from Mrk~231 on a Local Standard of Rest (LSR) $V_{\rm LSR}$ = 0\,km\,s$^{-1}$ frequency scale. The spectra were smoothed to channel spacings of $\sim$2.7, 22, 23, and 60\,km\,s$^{-1}$ ($\sim$2, 16, 16, and 21\,MHz) from top to bottom. } \label{fig6} \end{figure} \section{Other galaxies} \subsection{CN in ULIRGs} In view of the extreme usefulness of CN as a tracer of the carbon isotope ratio (Sects.\,3.3 and 3.6.1) and to complement our discussion of the relatively weak starburst in NGC~253, it is of interest to gain some idea of the $N$ = 1$\rightarrow$0 and 2$\rightarrow$1 CN emission from a truly luminous local ULIRG (UltraLuminous InfraRed Galaxy). With $L_{\rm IR}$ $\sim$ 2.5$\times$10$^{12}$\,L$_{\odot}$ at $z$ = 0.0422 (e.g., Aalto et al. 2012), Mrk~231 is the target of choice. The two lower panels of Fig.~\ref{fig6} show the spectra, while Table~\ref{tab3} displays the line parameters. 
With the enormous infrared luminosity of Mrk~231, hinting at a large mass of dense star forming molecular gas, one might naively expect that the CN lines should be more saturated than in NGC~253. However, this is not entirely the case. The two components of the 1$\rightarrow$0 transition have relative intensities exactly as expected for optically thin lines under LTE conditions. Excluding the unlikely case that this is an unfortunate combination of intrinsic non-LTE line strengths and high optical depth, this provides support for our assumption (Sect.\,3.7) that ``intrinsic intensities'' (i.e., intensities after removing saturation effects) follow LTE conditions also in NGC~253. Nevertheless, the CN spectra from Mrk~231 appear to originate in a different environment than those from NGC~253. While the peak line temperatures of the $N$ = 1$\rightarrow$0 and 2$\rightarrow$1 lines are similar toward NGC~253, the 2$\rightarrow$1 features are significantly stronger than the 1$\rightarrow$0 lines in Mrk~231. Furthermore, the integrated line intensity ratios of the three 2$\rightarrow$1 components in Mrk~231 unambiguously indicate saturation. Instead of ratios of 1:5:9, expected in the optically thin case (Fig.~\ref{fig2}), the integrated intensity ratios are roughly 1:3:3.5 (Fig.~\ref{fig6} and Table~\ref{tab3}). Therefore, it is clear that the CN excitation temperature must be higher in Mrk~231 than in NGC~253. For a quantitative estimate, we assume again that the intrinsic intensities (i.e., intensities in the absence of line saturation) follow LTE conditions (there is no indication of non-LTE effects, because features expected to be stronger are stronger and features expected to be weaker are indeed weaker; see Fig.~\ref{fig6}). Using the same approach as in Sect.\,3.7, this yields opacities of $\tau$ = 0.3$\pm$0.2, 1.4$^{+1.2}_{-0.7}$ and 2.9$^{+1.5}_{-0.9}$ for the three CN $N$ = 2$\rightarrow$1 components. 
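These opacities, together with the Mrk~231 line intensities of Table~\ref{tab3}, fix the saturation-corrected $N$ = 2$\rightarrow$1/1$\rightarrow$0 intensity ratio. A minimal Python sketch (pairing each opacity with its component in order of increasing intensity is our reading of the 1:3:3.5 ratios):

```python
import math

def f_sat(tau):
    """Saturation correction tau/(1 - exp(-tau))."""
    return tau / (1.0 - math.exp(-tau))

# Integrated CN N = 2->1 intensities of Mrk 231 (K MHz, Table 3), ordered from
# weakest to strongest, and the central opacities estimated for each component.
i_21 = [0.499, 1.544, 1.714]
tau_21 = [0.3, 1.4, 2.9]

# Intensity-weighted correction factor (~2.3) and the corrected 2->1/1->0 ratio.
corr = sum(i * f_sat(t) for i, t in zip(i_21, tau_21)) / sum(i_21)
i_10 = 0.295 + 0.600                    # CN N = 1->0 integrated intensities
ratio_freq = corr * sum(i_21) / i_10    # ~9.7 on a frequency scale
ratio_final = ratio_freq / 2.0 / 4.0    # velocity scale, then beam dilution: ~1.2
```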
Correcting relative column densities for these optical depths by the factor $\tau$/(1--e$^{-\tau}$), we obtain an overall factor of 2.3$^{+1.0}_{-0.6}$, yielding a modified intensity ratio of $I'$(CN 2$\rightarrow$1)/$I$(CN 1$\rightarrow$0) = 9.73$^{+4.35}_{-2.31}$. Since we integrated over frequency rather than velocity, this value has to be reduced by a factor of two. Accounting for the different beam sizes at $\lambda$ = 3 and 1.3\,mm and realizing that Mrk~231 is spatially unresolved in all of the 30-m beams (e.g., Aalto et al. 2012), the final ratio is $I$(CN 2$\rightarrow$1)/$I$(CN 1$\rightarrow$0) = 1.22$^{+0.54}_{-0.29}$. Adopting the procedure outlined in Sect.\,3.5, this yields an excitation temperature of $T_{\rm ex}$ = 8.4$^{+3.8}_{-1.6}$\,K and a tentative 22$''$ beam averaged column density of $N$(CN) $\sim$ 3.4$^{+7.4}_{-1.7}$ $\times$ 10$^{14}$\,cm$^{-2}$. Using RADEX (see Sect.\,3.7), the corresponding density becomes $n$(H$_2$) $\sim$ 8$\times$10$^4$\,cm$^{-3}$. While uncertainties are large, this hints at an excitation temperature about twice as large as that in NGC~253 (Sect.\,3.5) and NGC~4945 (Wang et al. 2004) and indicates that CN transitions with quantum numbers $N$ $>$ 2 may be of interest, at least in ULIRGs. \subsection{The carbon isotope ratio in starbursts across the universe} As we have seen (Sect.\,3.7), the $^{12}$C/$^{13}$C ratio of the nuclear starburst in NGC~253 is, at least for the high velocity ($V_{\rm LSR}$ $\sim$ 290\,km\,s$^{-1}$; Sect.\,3.2) component, higher than in our Galactic center region. The presence of a bar (e.g., Engelbracht et al. 1998), likely providing significant inflow, may be a crucial factor that directs large quantities of fresh and poorly processed gas toward the nuclear region of NGC~253. While such gas is also reaching the central part of our Galaxy (Riquelme et al.
2010), this process may occur on a much larger scale in NGC~253, where even indications of massive molecular feedback have already been detected (Turner 1985; Bolatto et al. 2013). For ongoing star formation, such inflowing gas with high $^{12}$C/$^{13}$C ratios may then become even more enriched in $^{12}$C by the material ejected from young massive stars. Are all starburst galaxies alike with respect to their carbon isotope ratio? Here we may differentiate between starbursts in their early and late stages of evolution as well as between weak and strong starbursts (see, e.g., Fig.~1 in Mao et al. 2010), the latter leading to the presence of (ultra)luminous infrared galaxies ((U)LIRGs). Finally, we may also distinguish between galaxies in the local and in the early universe. NGC~253 has been believed to host a young starburst (e.g., Garc\'{\i}a-Burillo et al. 2000; Wang et al. 2004), but in view of detected large-scale outflows (Turner 1985; Bolatto et al. 2013), an intermediate stage of evolution appears to be more likely. With a total infrared luminosity of $L_{\rm IR}$ $\sim$ 3$\times$10$^{10}$\,L$_{\odot}$ (e.g., Henkel et al. 1986), its level of activity is at the low end of the range observed in starbursts of spiral galaxies. A comparison of the carbon isotope ratio in NGC~253 with a starburst in a late stage of evolution of similar infrared luminosity, M~82, is not yet possible. Henkel et al. (1998) studied M~82 and proposed an isotope ratio of $^{12}$C/$^{13}$C $>$ 40 based on CN, and Mart\'{\i}n et al. (2010) reported a value $>$138 from C$_2$H, but both results should be taken with some degree of scepticism, because $^{13}$CN and $^{13}$CCH or C$^{13}$CH were not detected. Mrk~231, one of the most luminous galaxies within a billion light-years from Earth, contains an active galactic nucleus (AGN) and may host a starburst in a late stage of evolution.
This can be deduced from the presence of only one nucleus in this galaxy merger and intense outflows of ionized and molecular gas (e.g., Feruglio et al. 2010; Aalto et al. 2012), rapidly exhausting the molecular star forming fuel in the central region. Figs.~\ref{fig3}, \ref{fig4}, and \ref{fig6} show CO, $^{13}$CO, and C$^{18}$O profiles. Comparing NGC~253 with Mrk~231, we note that the CO and C$^{18}$O $J$ = 1$\rightarrow$0 peak temperatures are $\sim$100 times higher in NGC~253, while for $^{13}$CO $J$ = 1$\rightarrow$0, the ratio exceeds 200. $I$($^{12}$CO)/$I$(C$^{18}$O) values are 30--40 in both sources. However, the $^{12}$CO/$^{13}$CO line ratios are quite different, with $I$($^{12}$CO)/$I$($^{13}$CO) = 10--14 in NGC~253 (Sect.\,3.8) and 25--50 in Mrk~231 (Table~\ref{tab3}). While the $^{12}$C/$^{13}$C ratio in the nuclear region of NGC~253 is already higher than in our Galactic center region, it appears to be even higher in Mrk~231. In NGC~253, $^{13}$CO is much stronger than C$^{18}$O in both ground rotational lines. In Mrk~231 both lines show similar intensities (Figs.~\ref{fig3} and \ref{fig6}) and should be optically thin in view of their weakness relative to CO. Overall, assuming that the CO/C$^{18}$O abundance ratios are the same, Mrk~231 should have a deficit in $^{13}$C by a factor of almost three relative to NGC~253 (for a statistical evaluation comprising many galaxies, see Taniguchi \& Ohyama 1998; Taniguchi et al. 1999), possibly yielding $^{12}$C/$^{13}$C $\sim$100 and thus also $^{16}$O/$^{18}$O $\sim$100. Interestingly, Greve et al. (2009) and Mart\'{\i}n et al. (2011) also find $I$($^{13}$CO) $\sim$ $I$(C$^{18}$O) for the less evolved merger Arp~220. Furthermore, Gonz{\'a}lez-Alfonso et al. (2012) derive $^{16}$O/$^{18}$O $\sim$ 100 from OH Herschel data for the same source. Arp~220 still possesses two well separated nuclei and does not yet show any outflow that could match the one seen in Mrk~231.
In view of their different stages of evolution, it is therefore surprising that Arp~220 and Mrk~231 can be characterized by quite similar carbon and $^{16}$O/$^{18}$O ratios. Even the ultraluminous eyelash galaxy at redshift 2.3, the first high-$z$ galaxy with detected C$^{18}$O emission, shows similar $^{13}$CO and C$^{18}$O intensities (Danielson et al. 2013), indicating a $^{13}$C depletion with respect to local, more quiescent galaxies. The LIRG NGC~1068 with an uncorrected $I$($^{12}$CN)/$I$($^{13}$CN) ratio of $\sim$50 (Aladro et al. 2013) might be an intermediate case (however, its $^{13}$CN features are weaker and therefore show lower signal-to-noise ratios than those displayed in our Fig.~\ref{fig1}). The Cloverleaf quasar at redshift $z$ $\sim$ 2.5 appears to be even more extreme. Based on a large number of CO data and the first detection of $^{13}$CO at high redshift, the $^{12}$C/$^{13}$C ratio should be well above 100 (Henkel et al. 2010). A summary of these results is given in Table~\ref{tab4}, where sources with different properties are listed together with their carbon isotope ratio. This is a first attempt to set up such a table. In the Galaxy, there is not only a carbon isotope ratio gradient, but there are also indications of dispersion at a given galactocentric radius (e.g., Milam et al. 2005), which is not unexpected in view of radial gas streaming and potential cloud-to-cloud variations due to local supernovae or ejecta by late-type stars. With respect to external galaxies, we are still far away from such a level of precision. More accurate determinations of the carbon isotope ratio in the galaxies listed in Table~\ref{tab4} as well as in other extragalactic targets, also including other classes of objects, are urgently needed. An obvious example would be M~82 as the prototype for a weak starburst at a late stage of evolution. Submillimeter galaxies (SMGs) would also be attractive.
The few determined values (Table~\ref{tab4}) indicate a trend with high-$z$ ULIRGs showing the highest carbon isotope ratios. Low-$z$ ULIRGs appear to contain more processed material but still show values near $^{12}$C/$^{13}$C $\sim$ 100. The large values relative to those found in most parts of the Milky Way indicate that (1) the bulk of the material originates from a massive inflow of poorly processed $^{13}$C deficient gas and/or that (2) there is a large input of $^{12}$C-rich gas from ejecta of massive stars. The latter may become more dominant if the number of such stars were enhanced by a top-heavy stellar initial mass function, a result of the high kinetic temperatures expected in extreme cosmic-ray dominated environments (Papadopoulos et al. 2011). Weaker starbursts may show a moderate enhancement over the classical value for the Galactic center region, but all these statements are still based on a rather small number of sources. Right now, we are only beginning to collect the required data for a comprehensive understanding of CNO and S/Si nucleosynthesis based on extragalactic molecular spectra. \begin{table} \caption[]{Extragalactic carbon isotope ratios$^{\rm a)}$} \begin{flushleft} \begin{tabular}{ccrc} \hline Class & Target & $^{12}$C/$^{13}$C & Ref. \\ \hline & & & \\ Quiescent spiral, center & Milky Way & $\sim$25 & 1 \\ Low level starburst & NGC~253 & $\sim$40 & 2 \\ Evolved local ULIRG & Mrk~231/Arp~220& $\sim$100 & 2,3 \\ Redshift 2.5 ULIRG & Cloverleaf & $>$100 & 4 \\ & & & \\ \hline \end{tabular} \end{flushleft} a) Note that the carbon isotope ratios become less certain from top to bottom. While the ratio in the central molecular zone of our Galaxy is well established, the ratio for the Cloverleaf quasar is much less constrained. References (last column): [1] G{\"u}sten et al. (1985), [2] this paper, [3] Gonz{\'a}lez-Alfonso et al. (2012), [4] Henkel et al. (2010).
\label{tab4} \end{table} \section{Conclusions} Using the IRAM 30-m telescope at Pico Veleta, we have detected two CN isotopologues toward the nearby starburst galaxy NGC~253 and four and three CO isotopologues toward NGC~253 and the ultraluminous merger galaxy Mrk~231, respectively. CN $N$=1$\rightarrow$0 and 2$\rightarrow$1 spectra from Mrk~231 are also presented. The main results of this study are: \begin{itemize} \item CN appears to be the best tracer to determine carbon isotope ratios in nearby external galaxies. \item Toward NGC~253, the measured $^{13}$CN $N$ = 1$\rightarrow$0 line intensities are compatible with local thermodynamical equilibrium (LTE) under optically thin conditions. The relative line intensities of the $^{12}$CN $N$ = 1$\rightarrow$0 features are best explained by LTE conditions modified by moderate saturation, affecting the peak intensity of the weaker component by $\sim$10\% and the stronger component by $\sim$25\%. For $^{12}$CN 2$\rightarrow$1, either the weakest of the three observed line components is enhanced or the feature of intermediate intensity is depleted relative to the expected LTE intensity under optically thin conditions. \item Accounting for calibration uncertainties and moderate saturation in the $^{12}$CN 1$\rightarrow$0 line, the $^{12}$C/$^{13}$C isotope ratio becomes 40$\pm$10 for the molecular core peaking some arcseconds south-west of the dynamical center of NGC~253. Combined with data from several CO isotopologues and adopting this $^{12}$C/$^{13}$C ratio also for the CO emitting gas (which is supported by results from Galactic CN and C$^{18}$O data), this yields $^{16}$O/$^{18}$O = 145$\pm$36 and $^{16}$O/$^{17}$O = 1290$\pm$365. \item CN and C$_2$H both show a number of hyperfine components, which allows us to determine optical depths even in extragalactic spectra covering a broad velocity range.
A systematic survey of C$_2$H and its $^{13}$C bearing isotopologues in star forming clouds of the Galaxy would thus be essential to check whether resulting carbon isotope ratios are consistent with those already derived from H$_2$CO, C$^{18}$O, and CN. \item Toward NGC~253, there is no indication of vibrationally excited CN. The lower frequency fine structure components in the $v$ = 1 $N$ = 1$\rightarrow$0 and 2$\rightarrow$1 transitions are not seen down to rms levels of 3 and 4\,mK (15 and 20\,mJy) in 8.5\,km\,s$^{-1}$ wide channels. Those at higher frequency are blended with C$^{17}$O. \item The CN excitation temperature in NGC~253, derived from the $N$ = 1$\rightarrow$0 and 2$\rightarrow$1 lines, is 3--11\,K, with a most likely value of $T_{\rm ex}$ $\sim$ 4\,K. With this value, the column density becomes $N$(CN) = 2 $\times$ 10$^{15}$\,cm$^{-2}$ and the density, assuming purely collisional excitation, becomes $n$(H$_2$) $\sim$ 2.5$\times$10$^4$\,cm$^{-3}$. \item CN data from the ultraluminous merger Mrk~231 indicate that the excitation temperature is enhanced by a factor of two with respect to NGC~253 and NGC~4945. In Mrk~231, relative CN line intensities within the $N$ = 1$\rightarrow$0 and 2$\rightarrow$1 transitions are compatible with local thermodynamical equilibrium. While the 1$\rightarrow$0 transitions appear to be optically thin, the 2$\rightarrow$1 lines show significant saturation effects. In view of the excitation temperature, which indicates a density of almost 10$^{5}$\,cm$^{-3}$ assuming exclusively collisional excitation, it would make sense to observe CN transitions with higher quantum numbers $N$ in Mrk~231 and other ultraluminous infrared galaxies (ULIRGs). \item A comparison between NGC~253 and Mrk~231 shows that $^{13}$C$^{16}$O is underabundant in Mrk~231 relative to $^{12}$C$^{16}$O and $^{12}$C$^{18}$O by almost a factor of three.
This would yield $^{12}$C/$^{13}$C $\sim$ 100 and, because $^{13}$CO and C$^{18}$O show similar intensities in both the $J$ = 1$\rightarrow$0 and 2$\rightarrow$1 lines, also $^{16}$O/$^{18}$O $\sim$ 100. This is similar to the values determined for Arp~220, even though Arp~220 is a much less evolved ultraluminous merger. \item Synthesizing the carbon isotope ratios obtained so far from the central regions of actively star forming galaxies, we find that the observed values span a full order of magnitude. From ultraluminous galaxies at high redshift to local ULIRGs, to weaker local starbursting galaxies, and to the central molecular zone of the Milky Way, the ratios are $>$100, $\sim$100, $\sim$40, and 25, respectively. While this matches qualitative expectations of decreasing $^{12}$C/$^{13}$C values with time and metallicity, we note that (1) the extragalactic values are based on an extremely small data base and that (2) the ratios for the ULIRGs at high and low $z$ are still rather uncertain. Furthermore, it still has to be evaluated to what extent $^{13}$C-deficient gas from the outer galactic regions and $^{12}$C-rich ejecta from massive stars in a nuclear starburst (the latter possibly enhanced by a top-heavy initial mass function) are contributing to raise the carbon isotope ratios during the lifetime of a starburst. \end{itemize} \acknowledgements We wish to thank the IRAM staff at the 30-m for their help with the observations and C.~M. Walmsley and an anonymous referee for carefully reading the manuscript. Some of the work by CH has been carried out while visiting the ESO-ALMA group in Santiago de Chile. ALRD acknowledges an STFC studentship (ST/F007299/1).
\section{Introduction} Supersymmetry (SUSY) provides an elegant solution to naturally resolve the gauge hierarchy problem within the Standard Model (SM), and presuming $R$ parity conservation, the lightest supersymmetric particle (LSP) neutralino serves as a viable cold dark matter (CDM) candidate~\cite{Goldberg:1983nd,Ellis:1983ew}. The empirical search for a weakly interacting massive particle (WIMP) currently evolves on multiple fronts. For instance, the Large Hadron Collider (LHC) at CERN sifts through trillions of proton-proton collisions for a rare glimpse of an anomalous missing transverse energy component of hypothetical supersymmetric interactions, where the SUSY LSP escapes the detector without direct observation as a consequence of its neutral $U(1)_{EM}$ charge and status as an $SU(3)_C$ singlet. Sharing an equivalent objective, the XENON~\cite{Aprile:2012nq}, CDMS~\cite{Agnese:2013rvf}, and LUX~\cite{Akerib:2013tjd} experiments parse through statistics gathered from ionization and scintillation of inert gases and semiconductors to potentially uncover direct observation of elastic collisions of a WIMP within the scintillating material. Likewise, the Fermi Space Telescope~\cite{Atwood:2009ez} strives toward this goal through latent observation of photon decay relics from WIMP annihilations. The status of observability of this latter conjectural phenomenon, primarily within the context of a well defined model named No-Scale $\cal{F}$-$SU(5)$, presides as the motivating intent of this work; this approach offers a viable link between SUSY bino dark matter and a recently observed marginal sharp line spectrum, and perhaps more pertinently, crafts a roadmap for future discovery of bino dark matter utilizing current and forthcoming sky-scanning surveys. The annihilation of WIMPs within inner galactic regions can be a prospective source of gamma ray emissions that compete with the astrophysical background.
SUSY LSP neutralinos can annihilate directly to gamma rays mono-energetically, yielding a (quasi-) monochromatic energy spectrum via the annihilation processes $\widetilde{\chi} \widetilde{\chi} \to \gamma \gamma$ ($E_{\gamma} = m_{\chi}$), $\widetilde{\chi} \widetilde{\chi} \to \gamma Z$, and $\widetilde{\chi} \widetilde{\chi} \to \gamma h$. These processes occur at 1-loop, since WIMPs cannot couple directly to photons, thereby suppressing the cross-section of thermally produced dark matter. Internal bremsstrahlung (IB) photons can also produce sharp spectral features with annihilation into charged particles via $\widetilde{\chi} \widetilde{\chi} \to f \overline{f} \gamma$, with the benefit that IB processes occur at tree level, thus providing a larger annihilation rate for bino neutralinos and amplifying observability. In 2012, a tentative 130 GeV monochromatic gamma ray line was observed~\cite{Bringmann:2012vr,Weniger:2012tx} in the Fermi-LAT all sky surveys, exhibiting a local signal significance of 4.3--4.6$\sigma$ (3.1--3.3$\sigma$ global). After reprocessing of the data by the Fermi Collaboration, the budding signal shifted closer to 133 GeV with a diminished local signal significance of 3.3$\sigma$ (global 1.6$\sigma$)~\cite{FERMI-LAT:2013uma}, somewhat dampening the enthusiasm for a prospective indirect discovery of dark matter. Additionally, a deviation at this same $E_{\gamma} \sim 133$ GeV has been observed by the LAT instrument in a control sample of gamma rays from the Earth's limb, elevating the likelihood that the reported effects are systematic in origin. Therefore, the jury remains out on the validity of the signal, and a conclusive judgment may not be pronounced for as much as two additional years, pending additional data acquisition and analysis. Yet, this tentative observation highlights the importance of a model dependent analysis of the Fermi-LAT's reach into the supersymmetric parameter space.
Due to the small bino annihilation cross-section of $\langle \sigma v \rangle _{\gamma \gamma} \sim 10^{-30}~{\rm cm^3/sec}$, in comparison to the best fit of the deviation in the Fermi-LAT data of $\langle \sigma v \rangle _{\gamma \gamma} \sim 10^{-27}~{\rm cm^3/sec}$~\cite{Bringmann:2012vr,Weniger:2012tx}, the supersymmetric origins of the 130 GeV monochromatic gamma ray signal were quickly dismissed~\cite{Cohen:2012me}. Absent an extraordinarily large boost factor $(BF)$ of $BF \sim 1000$, the cross-section of the observed 130 GeV signal seemed far too large for bino dark matter annihilations to two gamma rays to be a serious candidate. Despite these objections to solicitation of a supersymmetric explanation for the 133 GeV gamma ray line, it was shown that a WIMP mass capable of producing $\gamma \gamma$ emission at 133 GeV and $\gamma Z$ emission at $\sim 145$ GeV can be naturally explained~\cite{Li:2012jf} in the supersymmetric grand unified theory (GUT) model No-Scale flipped $SU(5)$ with extra vector-like matter multiplets called ``{\it flippons}''~\cite{ Li:2010ws, Li:2010mi,Li:2010uu,Li:2011dw, Li:2011hr, Maxin:2011hy, Li:2011xu, Li:2011in,Li:2011gh,Li:2011rp,Li:2011fu,Li:2011ex,Li:2011av, Li:2011ab,Li:2012hm,Li:2012tr,Li:2012ix,Li:2012yd,Li:2012qv,Li:2012jf,Li:2012mr,Li:2013hpa,Li:2013naa,Li:2013bxh}, the model referred to as $\cal{F}$-$SU(5)$. When considering a dominant contribution from the IB final states, the No-Scale $\cal{F}$-$SU(5)$ upper 2$\sigma$ limit on the WIMP mass for the observed monochromatic gamma ray line is about $M_{1/2} \sim$ 775--800 GeV. While this particular SUSY mass is currently experiencing some tension from the LHC SUSY search~\cite{ATLAS-CONF-2013-061}, sufficient uncertainty remains in our spectrum calculations and Monte-Carlo simulations to likewise caution against its definitive exclusion until the 13 TeV LHC energizes in 2015.
Nonetheless, the more pressing question facing association of this result, or the prospect of a feasible near-term future Fermi-LAT observation at some heavier energy scale, with a genuine SUSY signal is that of whether an abnormally large boost factor is necessary to generate the observed photon flux. Moreover, this must be accomplished without overboosting fermion channels in the continuum; for instance, the stau-mediated channel with a $\widetilde{\chi} \widetilde{\chi} \to \tau^+ \tau^-$ final state, where the latest upper limit on the annihilation cross-section from observation of cosmic rays is $\langle \sigma v \rangle _{\tau \tau} \lesssim 5 \times 10^{-25}~{\rm cm^3/sec}$~\cite{Bergstrom:2013jra}, though parallel studies suggest the current limit could be as low as $\langle \sigma v \rangle _{\tau \tau} \lesssim 3 \times 10^{-26} - 10^{-25}~{\rm cm^3/sec}$~\cite{Egorov:2013exa}. However, the SUSY mass $M_{1/2} \sim 775$~GeV in No-Scale $\cal{F}$-$SU(5)$ has an annihilation cross-section of $\langle \sigma v \rangle _{\tau \tau} = 6.8 \times 10^{-29}~{\rm cm^3/sec}$, placing the necessarily required large boost factor of $BF \sim 1000$ near the fringe of the upper limit allowed on any extraneous boost in the cross-section. It is thus preferable to pursue another course that does not rely upon such a large boost factor. Because the IB photon flux is about an order of magnitude (to be concrete, around $\sim 8-12\times$ [see Table~\ref{tab:flux}]) larger than the $\gamma \gamma$ flux, the IB cross section is roughly 20 times larger than the $\gamma \gamma$ cross section. Therefore, the No-Scale $\cal{F}$-$SU(5)$ IB cross section is on the order of $5 \times 10^{-29}~{\rm cm^3/sec}$. Thus, the corresponding boost factor that is needed to explain the 133 GeV Fermi-LAT gamma ray line in this scenario is substantively smaller, on the order of 50 to 100.
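The chain of estimates in this paragraph amounts to a few lines of arithmetic. The Python sketch below uses the cross-section values quoted above; the way the IB flux enhancement reduces the required boost is our reading of the argument, not an explicit formula from the text:

```python
# All cross sections in cm^3/s, values as quoted in the text
sigma_needed = 1e-27       # best fit for the 133 GeV Fermi-LAT line
sigma_bino_gg = 1e-30      # bino chi chi -> gamma gamma (loop level)

# Boost factor if only the gamma-gamma channel produced the line
bf_gg = sigma_needed / sigma_bino_gg           # ~1000

# The IB photon flux is ~8-12x the gamma-gamma flux, so the required
# boost drops by roughly that factor (our reading of the argument),
# landing at order 100
bf_ib_low = bf_gg / 12
bf_ib_high = bf_gg / 8
```

The resulting range of roughly 80--125 is broadly consistent with the "order of 50 to 100" quoted above.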
If one allows that the dark matter density, which enters into the pairwise interaction as a square, is seven to ten times larger than what is traditionally used in the dark matter subhalo, this mechanism can explain the observed gamma ray line. Regardless of whether the existing marginal 133 GeV gamma ray line eventually is shown to be a systematic or statistical effect, upcoming data from the Fermi Space Telescope (or future projects including Gamma-400, DAMPE and HERD) may provide exclusive insights into the SUSY parameter space in the No-Scale $\cal{F}$-$SU(5)$ model. A central task confronted by this document is classification of the gamma ray signatures associable with $\cal{F}$-$SU(5)$, and quantification of their detection prospects across the model space, especially in the context of an additional six years of data collection by the Fermi-LAT instrument. Given the reality that, barring upward revisions in estimates of the dark matter density profile, the present-generation gamma ray telescope will not achieve the sensitivity required to observe bino dark matter at annihilation cross-sections of $\langle \sigma v \rangle _{\gamma \gamma} \sim 10^{-30}~{\rm cm^3/sec}$, we highlight a phenomenologically viable scenario where the probability of uncovering an observable indirect detection signature is somewhat more appreciable; in particular, we shall consider increasing the photon yield from annihilation via compression of the lightest slepton and LSP neutralino mass difference to near degeneracy, thereby establishing upward pressure on the annihilation rate, which can further elevate the advantage of the already dominant tree level IB effects over monochromatic loop level dark matter annihilation. This methodology can be quite naturally accommodated in No-Scale $\cal{F}$-$SU(5)$ with no effect on the spectrum calculations and experimental constraints established in the model space~\cite{Li:2013naa}.
The one unavoidable consequence of such a maneuver manifests itself in a suppressed bino neutralino relic density for $M_{1/2} \lesssim 1500$ GeV, transitioning to below the recent Planck measurements~\cite{Ade:2013zuv}, thereby compelling a non-thermal mechanism to generate the correct dark matter density. When the mass difference between the LSP neutralino and light stau is small, the LSP--light stau coannihilation cross section will be large, resulting in a dark matter relic density that is smaller than the observed value. Interestingly, cosmologically late decay of string-theoretic moduli fields provides an alternative mechanism for generating the correct dark matter relic density~\cite{Moroi:1999zb}. As the gaugino mass is increased from smaller values of $M_{1/2}$ in No-Scale $\cal{F}$-$SU(5)$, a naturally occurring linear compression in the light stau and LSP mass difference counteracts this bino relic density suppression in~$\cal{F}$-$SU(5)$~\cite{Li:2013naa} ({\it i.e.} elevation in the annihilation rate induced by mass degeneracy is counteracted by simple mass suppression), eventually generating the Planck measured CDM relic density $\Omega h^2 = 0.1199 \pm 0.0027$~\cite{Ade:2013zuv} at $M_{1/2} \sim 1500$ GeV for a nearly degenerate light stau and LSP ($\Delta M(\widetilde{\chi}_1^0,~\widetilde{\tau}_1) \simeq 2$ GeV). The No-Scale $\cal{F}$-$SU(5)$ framework suggested here as a vehicle for interpreting Fermi-LAT observations has already been well developed.
The model is based upon the tripodal foundations of the dynamically established boundary conditions of No-Scale Supergravity, the Flipped $SU(5)$ Grand Unified Theory (GUT), and the pair of TeV-scale hypothetical flippon vector-like super-multiplets~\cite{ Li:2010ws, Li:2010mi,Li:2010uu,Li:2011dw, Li:2011hr, Maxin:2011hy, Li:2011xu, Li:2011in,Li:2011gh,Li:2011rp,Li:2011fu,Li:2011ex,Li:2011av, Li:2011ab,Li:2012hm,Li:2012tr,Li:2012ix,Li:2012yd,Li:2012qv,Li:2012jf,Li:2012mr,Li:2013hpa,Li:2013naa,Li:2013bxh} derived within local F-theory model building. The convergence of these features has been shown to naturally resolve many longstanding theoretical issues, whilst comparing positively with real-world experimental observation. Moreover, a recent analysis~\cite{Ellis:2013xoa,Ellis:2013nxa,Ellis:2013nka} suggests that a cosmological model based upon the No-Scale supergravity sector yields compatibility with the Planck satellite measurements. With convenient superpotential parameter choices, the new cosmological model compatible with Planck data is a No-Scale supergravity realization of the Starobinsky model of inflation~\cite{Starobinsky:1980te,Mukhanov:1981xt,Starobinsky:1983zz}. This prospective empirical evidence of the existence of a ubiquitous No-Scale supergravity sector amplifies our motivation for implementing No-Scale $\cal{F}$-$SU(5)$ as a realistic framework appropriate for evaluation against formerly recorded and forthcoming Fermi-LAT gamma ray emission statistics. The structure of this paper is as follows. First we provide a brief review of the No-Scale $\cal{F}$-$SU(5)$ model, and then elaborate on the interesting empirical correlation between recent Planck satellite data and cosmological models based upon No-Scale Supergravity that realize inflation in the Starobinsky mode.
Next we shall present more detailed aspects of the IB effects on the annihilation rate and, finally, we present some benchmark models with SUSY spectra linked to neutralino annihilation cross-sections testable by the Fermi Space Telescope in the upcoming years, as well as benchmarks consistent with a No-Scale $\cal{F}$-$SU(5)$ explanation of the observed 133 GeV monochromatic gamma ray line. \section{The No-Scale $\cal{F}$-$SU(5)$ Model} Mass degeneracy of the superpartners has not been observed, indicating that SUSY breaking occurs near the TeV scale. Supergravity models are GUTs with gravity mediated supersymmetry breaking, where we can fully characterize the supersymmetry breaking soft terms by a limited set of universal parameters: universal gaugino mass $M_{1/2}$, universal scalar mass $M_0$, Higgsino mixing $\mu$-parameter, Higgs bilinear $B_{\mu}$-parameter, and universal trilinear coupling $A_0$. The $B_{\mu}$ and $|\mu|$ parameters are then determined at low energy through minimization of the Higgs potential triggering radiative electroweak symmetry breaking (REWSB), with the sign of $\mu$ remaining undetermined. Equivalently, we can trade $B_{\mu}$ at low energy for the low energy ratio of the Higgs vacuum expectation values (VEVs) $\tan\beta$. Subsequently remaining are the high-energy boundary conditions $M_{1/2}$, $M_0$, $B_{\mu}$, $A_0$, and the low energy boundary condition $\tan\beta$, plus the undetermined sign of $\mu$, which we always take to be sgn$(\mu) > 0$, as suggested by the results of $(g_{\mu}-2)/2$ of the muon.
In order to address the cosmological flatness problem, No-Scale Supergravity was proposed~\cite{Cremmer:1983bf} as the subspace of supergravity models which fulfill three constraints: i) the vacuum energy vanishes automatically due to the appropriate K\"ahler potential; ii) there exist flat directions that leave the gravitino mass $M_{3/2}$ undetermined at the minimum of the scalar potential; iii) the quantity ${\rm Str} {\cal M}^2$ is zero at the minimum. Large one-loop corrections would force $M_{3/2}$ to be either identically zero or of the Planck scale if the third condition were violated. A minimal K\"ahler potential that meets the first two conditions is~\cite{Ellis:1984bm,Cremmer:1983bf} \begin{eqnarray} K &=& -3 {\rm ln}( T+\overline{T}-\sum_i \overline{\Phi}_i \Phi_i)~,~ \label{NS-Kahler} \end{eqnarray} where $T$ is a modulus field and $\Phi_i$ are matter fields, which parameterize the non-compact $SU(N,1)/SU(N) \times U(1)$ coset space. The third condition can always be satisfied in principle and is model dependent~\cite{Ferrara:1994kg}. From the K\"ahler potential in Eq.~(\ref{NS-Kahler}) we automatically attain the No-Scale boundary condition $M_0 = A_0 = B_{\mu} = 0$, while $M_{1/2}$ is allowed to be non-zero and hence evolve naturally, and in fact, is necessary for SUSY breaking. Moreover, the high-energy boundary condition $B_{\mu} = 0$ in principle determines $\tan\beta$ at low energy. The gravitino mass $M_{3/2}$ is determined by the equation $d(V_{EW})_{min}/dM_{3/2}=0$ due to the fact that the minimum of the electroweak (EW) Higgs potential $(V_{EW})_{min}$ depends on $M_{3/2}$, and consequently, the supersymmetry breaking scale is determined dynamically. We are thus left with a natural $one$-$parameter$ model, with the sole degree of freedom being the gaugino mass $M_{1/2}$. 
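For completeness, the statement that the vacuum energy vanishes automatically can be made explicit with the standard textbook computation for the simplest case: a single modulus $T$, a $T$-independent superpotential $W$, and the matter fields $\Phi_i$ switched off (a sketch, not spelled out in the original text):

```latex
% Tree-level scalar potential for K = -3 ln(T + \bar{T}),
% with W independent of T and matter fields set to zero.
\begin{eqnarray}
K_{T\bar{T}} &=& \frac{3}{(T+\bar{T})^2}\, , \qquad
D_T W \;=\; W_T + K_T W \;=\; -\frac{3W}{T+\bar{T}}\, , \nonumber \\
V &=& e^{K}\left( K^{T\bar{T}} \left|D_T W\right|^2 - 3\left|W\right|^2 \right)
\;=\; e^{K}\left( \frac{(T+\bar{T})^2}{3}\cdot
\frac{9\left|W\right|^2}{(T+\bar{T})^2} - 3\left|W\right|^2 \right)
\;=\; 0 \, . \nonumber
\end{eqnarray}
```

The potential vanishes identically for any $\langle T \rangle$, which is precisely the flat direction that leaves $M_{3/2}$ undetermined at tree level, as stated in conditions i) and ii).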
As a deep fundamental correlation to string theory, No-scale supergravity can be realized in the compactification of the weakly coupled heterotic string theory~\cite{Witten:1985xb}, as well as the compactification of M-theory on $S^1/Z_2$ at the leading order~\cite{Li:1997sk}. Precise string-scale gauge coupling unification while also evading the Landau pole problem can be realized by supplementing the standard ${\cal F}$-lipped $SU(5)\times U(1)_X$~\cite{Nanopoulos:2002qk,Barr:1981qv,Derendinger:1983aj,Antoniadis:1987dx} SUSY field content with the following TeV-scale vector-like multiplets (flippons)~\cite{Jiang:2006hf} \begin{eqnarray} \hspace{-.3in} & \left( {XF}_{\mathbf{(10,1)}} \equiv (XQ,XD^c,XN^c),~{\overline{XF}}_{\mathbf{({\overline{10}},-1)}} \right)\, ,& \nonumber \\ \hspace{-.3in} & \left( {Xl}_{\mathbf{(1, -5)}},~{\overline{Xl}}_{\mathbf{(1, 5)}}\equiv XE^c \right)\, ,& \label{z1z2} \end{eqnarray} where $XQ$, $XD^c$, $XE^c$, $XN^c$ have the same quantum numbers as the quark doublet, the right-handed down-type quark, charged lepton, and neutrino, respectively. Models of this nature can be realized in ${\cal F}$-ree ${\cal F}$-ermionic string constructions~\cite{Lopez:1992kg} and ${\cal F}$-theory model building~\cite{Jiang:2009zza,Jiang:2009za}, and have been appropriately designated ${\cal F}$-$SU(5)$~\cite{Jiang:2009zza}. The split-unification framework of $\cal{F}$-$SU(5)$~\cite{Nanopoulos:2002qk,Barr:1981qv,Derendinger:1983aj,Antoniadis:1987dx} provides for fundamental GUT scale Higgs representations (not adjoints), natural doublet-triplet splitting, suppression of dimension-five proton decay~\cite{Antoniadis:1987dx,Harnik:2004yp}, and a two-step see-saw mechanism for neutrino masses~\cite{Ellis:1992nq,Ellis:1993ks}. 
Adjustments to the one-loop gauge $\beta$-function coefficients $b_i$ induced by inclusion of the vector-like flippon multiplets generate the required flattening of the $SU(3)$ Renormalization Group Equation (RGE) running ($b_3 = 0$)~\cite{Li:2010ws}, which manifests as a wide separation between the primary $SU(3)_C \times SU(2)_L$ unification near $10^{16}$~GeV and the secondary $SU(5) \times U(1)_X$ unification near the Planck mass. The corresponding baseline extension for logarithmic running of the No-Scale boundary conditions, especially that of $B_\mu = 0$, permits ample scale for natural dynamic evolution into phenomenologically favorable values consistent with experiment at the EW scale. The $SU(3)_C$ gaugino mass scale flattening generates a stable characteristic mass texture of $M(\widetilde{t}_1) < M(\widetilde{g}) < M(\widetilde{q})$, engendering a light stop and gluino that are lighter than all other squarks~\cite{Li:2010ws}. The No-Scale $\cal{F}$-$SU(5)$ model space satisfies a minimal set of necessary constraints from theory and phenomenology~\cite{Li:2011xu,Li:2013naa}. 
The constraints are: i) consistency with the dynamically established boundary conditions of No-Scale supergravity (most significantly the strict enforcement of a vanishing $B_{\mu}$ parameter at the ultimate flipped $SU(5)$ GUT unification near $M_{\rm Pl}$, imposed as $\left|B_{\mu}(M_{\cal F})\right| \leq 1$ GeV, about the scale of the EW radiative corrections); ii) radiative electroweak symmetry breaking (REWSB); iii) the centrally observed Planck CDM relic density $\Omega h^2 = 0.1199 \pm 0.0027$~\cite{Ade:2013zuv}; iv) the world average top-quark mass $m_t = 173.3 \pm 1.1$~GeV~\cite{:1900yx}; v) precision LEP constraints on the light SUSY chargino and neutralino mass content~\cite{LEP}; and vi) production of a lightest CP-even Higgs boson mass of $m_{h} = 125.5 \pm 1.5$ GeV, accomplished through additional tree level and one-loop contributions to the Higgs boson mass by the flippon supermultiplets~\cite{Li:2011ab,Li:2012jf,Li:2013naa}, supplementing the Minimal Supersymmetric Standard Model (MSSM) Higgs boson mass by the essential additional 3--5 GeV required to attain $m_{h} \sim 125$ GeV, while also preserving a testably light SUSY spectrum that does not reintroduce, via very heavy scalars, the gauge hierarchy problem that SUSY was originally intended to solve. A two-dimensional parameterization in the vector-like flippon super-multiplet mass scale $M_V$ and the universal gaugino boundary mass scale $M_{1/2}$ is excised from a larger four-dimensional hyper-volume that also includes the top quark mass $m_t$ and the ratio $\tan \beta$. The enduring model space after application of these minimal constraints is capable of maintaining the delicate balance needed to realize the two conditions $B_\mu = 0$ and $\Omega h^2 = 0.1199 \pm 0.0027$.
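As a cross-check of the $b_3 = 0$ flattening invoked earlier, the one-loop $SU(3)_C$ coefficient can be assembled from standard group-theory factors, $b = -3C_2(G) + \sum_R T(R)$ with $T(\mathbf{3}) = 1/2$; the following sketch is our own bookkeeping, not the authors' code.

```python
# One-loop SUSY beta-function coefficient for SU(3)_C:
#   b_3 = -3*C2(G) + sum over chiral superfields of T(R),
# with C2(SU(3)) = 3 and Dynkin index T(triplet) = 1/2.

def b3_coefficient(include_flippons):
    gauge = -3 * 3.0                      # vector multiplet contribution
    # MSSM colored matter per generation: Q (SU(2) doublet -> two triplets),
    # u^c, d^c; times three generations:
    matter = 3 * (2 * 0.5 + 0.5 + 0.5)    # = 6
    flippons = 0.0
    if include_flippons:
        # vector-like (XQ, XQbar): SU(2) doublets of color triplets,
        # vector-like (XD^c, XDbar^c): color triplet pair
        flippons = 2 * (2 * 0.5) + 2 * 0.5    # = 3
    return gauge + matter + flippons

print(b3_coefficient(False))   # MSSM: -3.0
print(b3_coefficient(True))    # with TeV-scale flippons: 0.0
```

The vanishing coefficient is what freezes the $SU(3)_C$ coupling and the gluino mass scale between the flippon threshold and the unification scale.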
The No-Scale $\cal{F}$-$SU(5)$ model space surviving direct application of the constraints noted above consists of a diagonal wedge ({\it cf.} Ref.~\cite{Li:2013naa}) in the ($M_{1/2}$, $M_V$) space, bounded in width by the LEP constraints at small $M_{1/2}$ and small $M_V$, and by the CDM constraints and the transition to a charged stau LSP at large $M_{1/2}$ and large $M_V$. Conversely, the upper limit at large $M_V$ and the lower limit at small $M_V$ are constrained by the central experimental range on the top quark mass. The intersection of all constraints yields a net experimentally viable model space extending from $M_{1/2} \simeq 400$ GeV to $M_{1/2} \simeq 1500$ GeV, with an associated vector-like flippon mass of $M_V \simeq 1$ TeV to $M_V \simeq 180$ TeV. \section{No-Scale Supergravity Inflation} The elegantly minimalistic formalism of No-Scale Supergravity~\cite{Cremmer:1983bf,Ellis:1983sf, Ellis:1983ei, Ellis:1984bm, Lahanas:1986uc} allows for a deep fundamental correlation to string theory in the infrared limit, the natural inclusion of general coordinate invariance (general relativity), a supersymmetry breaking mechanism that preserves a vanishing cosmological constant at tree level (facilitating the observed longevity and cosmological flatness of our Universe~\cite{Cremmer:1983bf}), natural suppression of CP violation and flavor-changing neutral currents, dynamic stabilization of the compactified spacetime by minimization of the loop-corrected scalar potential, and a powerful contraction in parameterization freedom. Recently, an added phenomenological boost has been given to No-Scale Supergravities by detailed measurement of the Cosmic Microwave Background (CMB) perturbations (the structural seeds of galactic supercluster formation residually imprinted upon the faint afterglow of the big bang) from the Planck~\cite{Ade:2013uln} satellite.
Many important features predicted qualitatively by the cosmological inflationary paradigm have been borne out; for instance, there are no significant signs of non-Gaussian fluctuations or hints of non-trivial topological features such as cosmic strings. Additionally, these observations verified a highly statistically significant tilt $n_s \simeq 0.960 \pm 0.007$ in the spectrum of scalar perturbations, as expected if the effective scalar energy density decreased gradually during inflation, and set stronger upper limits on the ratio $r < 0.08$ of tensor (directional) to scalar (isotropic) perturbations. These measurements, particularly of $n_s$, place many leading models of cosmic inflation in jeopardy (cf. Fig.~1 of Ref.~\cite{Ade:2013uln}), although a curious scenario suggested by Starobinsky~\cite{Starobinsky:1980te} in 1980 is known~\cite{Mukhanov:1981xt} to match the data effortlessly. This model is a rather ad hoc modification of Einstein's description of gravity, which combines a quadratic power of the Ricci scalar with the standard linear term. At face value, this $(R+R^2)$ model is rather difficult to take seriously, but there is substantial enthusiasm for the observation by John Ellis, Keith Olive and one of the authors (D.V.N.) that this esoteric model is in fact conformally equivalent to No-Scale supergravity with an $SU(2,1)/SU(2) \times U(1)$ K\"ahler potential~\cite{Ellis:2013xoa,Ellis:2013nxa,Ellis:2013nka}, which is a subcase of Eq.~(\ref{NS-Kahler}). To be specific, the algebraic equations of motion corresponding to an auxiliary scalar field $\Phi$ with a quadratic potential that couples to a conventional Einstein term may be freely substituted back into the action, resulting in the phenomenologically favorable quadratic power of the scalar curvature~\cite{Stelle:1977ry,Whitt:1984pd}. In short, inflation in our $\cal{F}$-$SU(5)$ No-Scale $SU(N,1)$ framework can be realized naturally and is consistent with the Planck results.
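To make the auxiliary-field substitution explicit (a standard manipulation reproduced here for convenience, with $\alpha$ denoting the coefficient of the quadratic curvature term), consider \begin{eqnarray} {\cal L} &=& \frac{1}{2} \left( 1 + 2 \alpha \Phi \right) R - \frac{\alpha}{2} \Phi^2~.~ \nonumber \end{eqnarray} The algebraic equation of motion for $\Phi$ gives $\alpha R - \alpha \Phi = 0$, i.e., $\Phi = R$, and substitution back into ${\cal L}$ yields \begin{eqnarray} {\cal L} &=& \frac{1}{2} \left( R + \alpha R^2 \right)~,~ \nonumber \end{eqnarray} precisely the Starobinsky form; a conformal rescaling of the metric then trades the $R^2$ term for a canonically normalized scalar with the characteristic plateau potential.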
\section{Testing No-Scale $\cal{F}$-$SU(5)$ with Fermi-LAT} Monochromatic line signals are not the only mechanism capable of generating gamma rays visible to the Fermi-LAT instrument. In fact, dark matter annihilation into two Standard Model particles with a radiated photon, a process known as internal bremsstrahlung (IB), can also give sharp spectral features in the gamma-ray spectrum close to the dark matter mass~\cite{Bringmann:2007nk}. The photon can arise from final state radiation (FSR) or from radiation off virtual charged particles, i.e., virtual internal bremsstrahlung (VIB). The total IB photon yield thus comprises contributions from both FSR and VIB. \begin{table*}[htp] \centering \caption{Ten No-Scale $\cal{F}$-$SU(5)$ benchmarks, with points that can satisfy the Planck satellite relic density measurements, points with $\Delta M(\widetilde{\chi}_1^0,\widetilde{\tau}_1) \simeq 2$ GeV, and points imposing degeneracy, $\Delta M(\widetilde{\chi}_1^0,\widetilde{\tau}_1) \simeq 0$ GeV, between the light stau and LSP mass in order to increase the annihilation rate and raise the IB contributions. Given are the gaugino mass $M_{1/2}$, flippon mass $M_V$, $\tan\beta$, top quark mass $m_t$, relic density $\Omega h^2$, EM $f \overline{f}$, $\gamma \gamma$, and $\gamma Z$ annihilation cross-sections, SUSY masses, and light Higgs boson mass $m_h$. All benchmark LSP compositions are greater than 99\% bino. The $\Omega h^2$ shown is the thermal neutralino density calculated with {\tt MicrOMEGAs~2.4}. For those benchmarks with $\Omega h^2 < 0.1199 \pm 0.0027$, the Planck satellite measured relic density can be generated via non-thermal mechanisms. The annihilation cross-sections given here are the average between the {\tt MicrOMEGAs~2.4} and {\tt DarkSUSY~5.1.1} calculations.
The total $\langle \sigma v \rangle _{f \overline{f}}$ annihilation cross-section is composed of $\langle \sigma v \rangle _{f \overline{f}} = \langle \sigma v \rangle _{\tau^+ \tau^-} + \langle \sigma v \rangle _{t\overline{t}} + \langle \sigma v \rangle _{b \overline{b}}$. The $\Delta M$ value refers to the lightest bino neutralino and light stau mass difference. The light Higgs boson mass includes both the tree level+1-loop+2-loop+3-loop+4-loop and flippon contributions. All masses are in GeV and all cross-sections in ${\rm cm^3/sec}$.} \begin{tabular}{|c|c|c|c||c|c|c|c||c|c|c|c|c|c|c|c|c} \hline $M_{1/2}$&$M_{\rm V}$&$\tan\beta$&$m_{t}$&$\Omega h^2$&$\langle \sigma v \rangle _{f \overline{f}} $&$\langle \sigma v \rangle _{\gamma \gamma} $&$\langle \sigma v \rangle _{\gamma Z} $&$m_{\chi^0_1}$&$m_{\widetilde{\tau}_{1}}$&$\Delta M$&$m_{\chi^{0}_{2},\chi^{\pm}_{1}}$&$m_{\widetilde{t}_{1}}$&$m_{\widetilde{g}}$&$m_{\widetilde{u}_{R}}$&$m_h$ \\ \hline \hline $ 775 $&$ 4800 $&$ 22.5 $&$ 174.4 $&$ 0.122 $&$ 68.5 \times 10^{-30} $&$ 2.61 \times 10^{-30} $&$ 0.98 \times 10^{-30} $&$ 161 $&$ 169 $&$ 7.87 $&$ 342 $&$ 861 $&$1047$&$1475$&$124.4$ \\ \hline $ 774 $&$ 4821 $&$ 23.0 $&$ 174.4 $&$0.036$&$ 77.9 \times 10^{-30} $&$3.01 \times 10^{-30} $&$1.11 \times 10^{-30} $&$ 160 $&$ 162 $&$ 1.95 $&$ 341 $&$ 860 $&$ 1046 $&$ 1473 $&$124.4$ \\ \hline $ 774 $&$ 4851 $&$ 23.1 $&$ 174.4 $&$ 0.020 $&$ 81.1 \times 10^{-30} $&$ 3.19 \times 10^{-30} $&$ 1.16 \times 10^{-30} $&$ 160 $&$ 160 $&$0.06 $&$ 341 $&$ 860 $&$1046$&$1473$&$124.4$ \\ \hline\hline $ 990 $&$ 8044 $&$ 23.3 $&$ 174.4 $&$ 0.120 $&$ 45.6 \times 10^{-30} $&$ 1.73 \times 10^{-30} $&$ 0.69 \times 10^{-30} $&$ 214 $&$ 220 $&$ 6.43 $&$ 449 $&$ 1104 $&$1328$&$1824$&$125.1$ \\ \hline $ 990 $&$ 8070 $&$ 23.6 $&$ 174.4 $&$ 0.056 $&$ 47.6 \times 10^{-30} $&$ 1.88 \times 10^{-30} $&$ 0.76 \times 10^{-30} $&$ 213 $&$ 216 $&$ 2.06 $&$ 449 $&$ 1104 $&$1328$&$1824$&$125.1$ \\ \hline $ 1000 $&$ 8083 $&$ 23.7 $&$ 174.4 $&$ 0.036 $&$ 
46.7 \times 10^{-30} $&$1.93 \times 10^{-30} $&$ 0.77 \times 10^{-30} $&$ 216 $&$ 216 $&$ 0.21 $&$ 454 $&$ 1116 $&$1341$&$1841$&$125.2$ \\ \hline\hline $ 1200 $&$ 30,830 $&$ 24.3 $&$ 173.3 $&$ 0.122 $&$ 20.5 \times 10^{-30} $&$ 1.26 \times 10^{-30} $&$ 0.51 \times 10^{-30} $&$ 276 $&$ 281 $&$ 4.54 $&$ 572 $&$ 1335 $&$1633$&$2102$&$124.1$ \\ \hline $ 1200 $&$ 30,830 $&$ 24.4 $&$ 173.3 $&$ 0.084 $&$ 20.8 \times 10^{-30} $&$ 1.31 \times 10^{-30} $&$0.53 \times 10^{-30} $&$ 276 $&$ 279 $&$ 2.07 $&$ 572 $&$ 1335 $&$1634$&$2102$&$124.1$ \\ \hline $ 1200 $&$ 30,830 $&$ 24.5 $&$ 173.3 $&$ 0.056 $&$ 21.1 \times 10^{-30} $&$ 1.36 \times 10^{-30} $&$ 0.55 \times 10^{-30} $&$ 276 $&$ 277 $&$ 0.08 $&$ 572 $&$ 1335 $&$1634$&$2102$&$124.1$ \\ \hline\hline $ 1500 $&$ 27,636 $&$ 24.7 $&$ 174.4 $&$ 0.122 $&$ 8.88 \times 10^{-30} $&$ 0.85 \times 10^{-30} $&$ 0.35 \times 10^{-30} $&$ 349 $&$ 351 $&$ 2.06 $&$ 717 $&$ 1661 $&$2009$&$2602$&$126.3$ \\ \hline $ 1500 $&$ 27,636 $&$ 24.8 $&$ 174.4 $&$ 0.086 $&$ 8.93 \times 10^{-30} $&$ 0.88 \times 10^{-30} $&$0.36 \times 10^{-30} $&$ 349 $&$ 349 $&$ 0.07 $&$ 717 $&$ 1661 $&$2009$&$2602$&$126.3$ \\ \hline \end{tabular} \label{tab:benchmarks} \end{table*} \begin{table*}[htp] \centering \caption{The ten No-Scale $\cal{F}$-$SU(5)$ benchmarks of Table~\ref{tab:benchmarks}, with the IB photon flux $\Phi _{IB}$ from $\widetilde{\chi} \widetilde{\chi} \to f \overline{f} \gamma$ events, the photon flux $\Phi _{\gamma \gamma}$ from $\widetilde{\chi} \widetilde{\chi} \to \gamma \gamma$ events, and the photon flux $\Phi _{\gamma Z}$ from $\widetilde{\chi} \widetilde{\chi} \to \gamma Z$ events. The IB flux has been integrated across energy relative to the differential flux plotted in Figure~\ref{fig:ibyield}. 
All fluxes are also integrated over the solid line-of-sight angle from the center of our galaxy, taking a detector acceptance of 2.5 steradians corresponding to the LAT instrument's 20\% sky field of view, and are in units of photons ${\rm cm^{-2}~sec^{-1}}$. All fluxes are calculated with {\tt DarkSUSY~5.1.1}. The $\gamma \gamma$ flux includes the factor of 2 for the two photons. For the local dark matter relic density, we use the value $\rho_0 = 0.3$ GeV/${\rm cm^3}$, with the spherically symmetric NFW halo profile. The column entry $\Phi _{IB}/\Phi _{\gamma \gamma}$ is indicative of the increase in the magnitude of the IB flux over the gamma pair flux, and the adjacent column $\Phi _{\gamma \gamma}/ \Phi _{\gamma Z}$ likewise compares the gamma pair flux to that of the photon plus Z-boson. The final two columns provide the gamma radiation energy in GeV at the IB spectrum peak and its relation to the LSP mass in GeV.} \begin{tabular}{|c|c|c|c||c|c|c|c|c||c|c|} \hline $M_{1/2}$&$M_{\rm V}$&$\tan\beta$&$m_{t}$&$\Phi _{IB}$&$\Phi _{\gamma \gamma} $&$\Phi _{\gamma Z} $&$\Phi _{IB}/\Phi _{\gamma \gamma}$&$\Phi _{\gamma \gamma}/\Phi _{\gamma Z}$& ${\rm IB~Peak}$&$m_{\chi^0_1}$ \\ \hline \hline $ 775 $&$ 4800 $&$ 22.53 $&$ 174.4 $&$ 3.9 \times 10^{-12} $&$ 4.9\times 10^{-13} $&$ 5.5\times 10^{-14} $&$8.1$ & $8.7$& $148$&$161$\\ \hline $ 774 $&$ 4821 $&$ 22.95 $&$ 174.4 $&$ 5.8 \times 10^{-12} $&$ 5.6\times 10^{-13} $&$ 6.2\times 10^{-14} $&$10.3$ & $9.0$& $153$&$160$\\ \hline $ 774 $&$ 4851 $&$ 23.08 $&$ 174.4 $&$ 4.6 \times 10^{-12} $&$ 4.0\times 10^{-13} $&$ 4.4\times 10^{-14} $&$11.5$ & $9.1$& $156$&$160$\\ \hline\hline $ 990 $&$ 8044 $&$ 23.34 $&$ 174.4 $&$ 1.5 \times 10^{-12} $&$ 1.8\times 10^{-13} $&$ 2.3\times 10^{-14} $&$8.4$ & $8.0$& $200$&$214$\\ \hline $ 990 $&$ 8070 $&$ 23.61 $&$ 174.4 $&$ 1.9 \times 10^{-12} $&$ 2.0\times 10^{-13} $&$ 2.4\times 10^{-14} $&$9.8$ & $8.1$& $204$&$213$\\ \hline $ 1000 $&$ 8083 $&$ 23.73 $&$ 174.4 $&$ 2.1 \times 10^{-12} $&$
2.0\times 10^{-13} $&$ 2.4\times 10^{-14} $&$10.7$ & $8.1$& $211$&$216$\\ \hline\hline $ 1200 $&$ 30,830 $&$ 24.26 $&$ 173.3 $&$ 6.5 \times 10^{-13} $&$ 7.9\times 10^{-14} $&$ 1.0\times 10^{-14} $&$8.2$ & $7.8$& $262$&$276$\\ \hline $ 1200 $&$ 30,830 $&$ 24.41 $&$ 173.3 $&$ 7.3 \times 10^{-13} $&$ 8.2\times 10^{-14} $&$ 1.1\times 10^{-14} $&$8.9$ & $7.8$& $266$&$276$\\ \hline $ 1200 $&$ 30,830 $&$ 24.53 $&$ 173.3 $&$ 8.1 \times 10^{-13} $&$ 8.5\times 10^{-14} $&$ 1.1\times 10^{-14} $&$9.6$ & $7.8$& $271$&$276$\\ \hline\hline $ 1500 $&$ 27,636 $&$ 24.67 $&$ 174.4 $&$ 3.0 \times 10^{-13} $&$ 3.4\times 10^{-14} $&$ 4.3\times 10^{-15} $&$8.8$ & $7.8$& $336$&$349$\\ \hline $ 1500 $&$ 27,636 $&$ 24.77 $&$ 174.4 $&$ 3.3 \times 10^{-13} $&$ 3.5\times 10^{-14} $&$ 4.4\times 10^{-15} $&$9.4$ & $7.8$& $343$&$349$\\ \hline \end{tabular} \label{tab:flux} \end{table*} It is well known that the annihilation cross section of the LSP neutralinos into a pair of light SM fermions is strongly suppressed by a factor $m_f^2/m_{\chi_1^0}^2$ due to the helicity properties of a highly non-relativistic pair of Majorana neutralinos. However, such suppression can be evaded if the fermion final states contain an additional photon $f {\bar f} \gamma$, particularly when the photon is emitted from virtual sfermions with a mass close to the LSP neutralino. Therefore, the IB effects may explain the 133 GeV Fermi-LAT gamma ray line~\cite{Bringmann:2012ez, Shakya:2012fj}, or may predict a higher energy (for example 200 GeV) gamma ray line in No-Scale $\cal{F}$-$SU(5)$. Furthermore, the EW or strong gauge boson IBs have considerably larger rates due to the larger gauge coupling constants. Recently, a complete calculation of the leading EW corrections to the LSP neutralino annihilations for various final states~\cite{Bringmann:2013oja} shows that such corrections may significantly enhance the annihilation rates. 
Although those processes do not generate the pronounced spectral features in gamma rays like the corresponding electromagnetic (EM) corrections, the integrated photon yield may be enhanced up to two orders of magnitude compared to the tree level results, which may also be probed by the ongoing Fermi Space Telescope experiment. As such, we have ample motivation to study those regions of the viable parameter space with small mass differences between the LSP neutralino and light stau. Our mission here then is to augment the SUSY neutralino annihilation rates to enhance detection opportunity for a nearly pure bino LSP ($> 99\%$ bino). Through near degeneracy amongst the lightest slepton and light bino masses, we can certainly increase the annihilation rate and boost IB effects to a dominant contribution, albeit with downward pressure on the bino relic density. For a SUSY bino, this requires a compressed $\Delta M(\widetilde{\chi}_1^0,\widetilde{\tau}_1) \simeq 0-2$ GeV, with associated decays proceeding through an off-shell or on-shell tau accordingly. Compression of the light stau mass to $\Delta M(\widetilde{\chi}_1^0,\widetilde{\tau}_1) \simeq 0-2$ GeV can be achieved in No-Scale $\cal{F}$-$SU(5)$ quite naturally via slight shifts of the low energy boundary condition $\tan\beta$. The resultant minor increase in $\tan\beta$ does lead to marginally enhanced light stau mixing effects in the stau sector, slightly lowering the light stau mass. Satisfaction of the CDM relic density in a traditional thermal manner leads to an intrinsic escalation in the baseline value of this parameter, from $\tan\beta \simeq 19.5$ to $\tan\beta \simeq 25$ for a corresponding upward escalation in the gaugino mass from $M_{1/2} \simeq 400$ to $M_{1/2} \simeq 1500$~\cite{Li:2013naa}. Because of this, the supplemental incrementation of $\tan\beta$ required to squeeze the light stau mass and LSP to near degeneracy recedes with an inflating SUSY mass scale. 
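The $m_f^2/m_{\chi_1^0}^2$ helicity suppression noted above is easily quantified for the $\tau^+\tau^-$ channel at the benchmark LSP masses of Table~\ref{tab:benchmarks}; the arithmetic below is our own illustration.

```python
# Helicity suppression factor (m_f/m_chi)^2 for s-wave annihilation of a
# non-relativistic Majorana neutralino pair into f fbar, for the tau channel.
m_tau = 1.777  # GeV

for m_chi in (161.0, 214.0, 276.0, 349.0):  # benchmark LSP masses (GeV)
    factor = (m_tau / m_chi) ** 2
    print(f"m_chi = {m_chi:5.0f} GeV  suppression ~ {factor:.1e}")
```

With suppressions at the $10^{-4}$--$10^{-5}$ level, emission of an additional photon from a nearly degenerate virtual stau can lift the rate substantially, which is precisely the IB enhancement exploited here.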
The positive deviation in $\tan\beta$, together with possibly small shifts in the gaugino mass $M_{1/2}$ and flippon mass $M_V$, is all that is required to achieve the 0--2 GeV delta between the light stau mass and the LSP in the large unprobed region of the parameter space. In particular, no variation of the top quark mass $m_t$ (within its experimental uncertainty) is necessary. As a result, the SUSY spectrum undergoes only a negligible transition, and thus the rich phenomenology (setting aside the relic density constraint, which must now be satisfied through non-thermal mechanisms) is wholly preserved. Indeed, the wedge of model space remains relatively static and persists in the form of Ref.~\cite{Li:2013naa}, the lone exception being small shifts in the $\tan\beta$ contours and indiscernible shifts in $M_{1/2}$ and $M_{V}$. \begin{figure*}[htp] \centering \includegraphics[width=0.75\textwidth]{ibyield.eps} \caption{No-Scale $\cal{F}$-$SU(5)$ electromagnetic IB spectrum, given in terms of photons per annihilation (top frame) and differential flux (bottom frame), as a function of energy. All curves represent the benchmarks given in Tables~\ref{tab:benchmarks}-\ref{tab:flux}. The thin curves (lower) in both frames satisfy the Planck satellite CDM relic density measurements $\Omega h^2 = 0.1199 \pm 0.0027$, while the thicker curves (middle) possess $\Delta M(\widetilde{\chi}_1^0,\widetilde{\tau}_1) \simeq 2$ GeV and the thickest curves (upper) have $\Delta M(\widetilde{\chi}_1^0,\widetilde{\tau}_1) \simeq 0$ GeV. Inclusion of the EW IB photon flux enhancement is reserved for a future work. The $M_{1/2} = 774-775$ GeV benchmarks are consistent with the previously observed 133 GeV monochromatic gamma ray line. The $\Delta M$ value given in the plot legend refers to the lightest neutralino and light stau mass difference. All IB photon counts and fluxes are calculated with {\tt DarkSUSY~5.1.1}.
For the local dark matter relic density, we use the value $\rho_0 = 0.3$ GeV/${\rm cm^3}$. All differential fluxes are in units of photons ${\rm cm^{-2}~sec^{-1}~GeV^{-1}}$ and all masses are in GeV. The $\Omega h^2$ shown in the plot legend is the thermal neutralino relic density calculated with {\tt MicrOMEGAs~2.4}. For those benchmarks with $\Omega h^2 < 0.1199 \pm 0.0027$, the Planck satellite measured relic density can be generated via non-thermal mechanisms. The curves demonstrate that compression of the lightest bino neutralino and light stau mass delta does in fact enhance the EM IB effects.} \label{fig:ibyield} \end{figure*} From this perspective, the No-Scale $\cal{F}$-$SU(5)$ SUSY spectra corresponding to the wedge of viable model space provided in Ref.~\cite{Li:2013naa}, duly suppressing the light stau mass, are potentially testable by the Fermi Space Telescope or a future gamma ray telescope; moreover, the two variations in determination of the light stau mass may be observationally distinguished. Crucially, experimental results from both the LHC and the LAT can be connected to the same SUSY spectrum, providing the type of cross-correlation testing which may play a significant role in substantiating any SUSY GUT model. In particular, probing of a specific $(\widetilde{\chi}_1^0, \widetilde{t}_1, \widetilde{g}, \widetilde{q})$ point in the SUSY parameter space may potentially be achieved via dual experimental methodologies. This is possible since the No-Scale $\cal{F}$-$SU(5)$ SUSY spectrum exhibits the rather special attribute of leading order $en~ masse$ proportionality to only $M_{1/2}$. Specifically, the internal physics of $\cal{F}$-$SU(5)$ are predominantly invariant under a numerical rescaling of only $M_{1/2}$. Consequently, each sparticle within the SUSY spectrum can be multiplicatively adjusted by an identical trivial rescaling of only $M_{1/2}$, though the linear slope relationship between $M_{1/2}$ and each sparticle can vary. 
From a practical point of view, this property of No-Scale $\cal{F}$-$SU(5)$ permits the SUSY spectrum to be approximately determined from only a given value of $M_{1/2}$, or alternatively, from only a given value of any other sparticle mass, exhibiting the pragmatic predictive elegance of the model. The final ingredient of our strategy involves derivation of a suitable set of benchmarks for comparison to experiment. We present ten benchmarks in Table~\ref{tab:benchmarks}, with gaugino mass $M_{1/2}$, flippon mass $M_V$, $\tan\beta$, top quark mass $m_t$, relic density $\Omega h^2$, EM $f \overline{f}$, $\gamma \gamma$, and $\gamma Z$ annihilation cross-sections, SUSY masses, and light Higgs boson mass. All benchmark LSP compositions are greater than 99\% bino. The points have been extracted from a broad numerical scan, utilizing {\tt MicrOMEGAs~2.1}~\cite{Belanger:2008sj} for the dark matter observables, with the SUSY mass spectra computed by a proprietary modification of the {\tt SuSpect~2.34}~\cite{Djouadi:2002ze} codebase that runs the flippon enhanced RGEs. To be consistent with previous No-Scale $\cal{F}$-$SU(5)$ parameter space analyses~\cite{Li:2011xu,Li:2013naa}, we show in Table~\ref{tab:benchmarks} the thermal relic density as computed by the updated routines in {\tt MicrOMEGAs~2.4}~\cite{Belanger:2010gh}. Serving as a secondary verification, we further compute the thermal relic density with {\tt DarkSUSY~5.1.1}~\cite{Gondolo:2004sc,DarkSUSY}, reading as input an SLHA~\cite{Skands:2003cj,Allanach:2008qq} mass file generated from the flippon enhanced RGEs in our proprietary version of the {\tt SuSpect~2.34}~\cite{Djouadi:2002ze} codebase, finding only a small variation in the respective relic density computations.
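Returning to the $M_{1/2}$ scaling property described above, a simple least-squares fit to the benchmark LSP masses of Table~\ref{tab:benchmarks} illustrates the near-linear dependence of the spectrum on $M_{1/2}$; the fit itself is a sketch of our own, not the authors' procedure.

```python
# Least-squares line m_chi ~ a*M_half + b through the Table I benchmarks,
# illustrating the approximately linear M_1/2 dependence of the LSP mass.
M = [775.0, 990.0, 1200.0, 1500.0]   # gaugino mass M_1/2 (GeV)
m = [161.0, 214.0, 276.0, 349.0]     # LSP mass (GeV)

n = len(M)
sx, sy = sum(M), sum(m)
sxx = sum(x * x for x in M)
sxy = sum(x * y for x, y in zip(M, m))
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope ~ 0.26
b = (sy - a * sx) / n                           # intercept ~ -42 GeV

for x, y in zip(M, m):
    print(x, y, round(a * x + b, 1))            # fits within a few GeV
```

Analogous fits hold for the other sparticles, with different slopes, which is the sense in which one value of $M_{1/2}$ fixes the whole spectrum at leading order.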
The annihilation cross-sections $\langle \sigma v \rangle _{f \overline{f}}$, $\langle \sigma v \rangle _{\gamma \gamma}$, and $\langle \sigma v \rangle _{\gamma Z}$ are calculated with both {\tt MicrOMEGAs~2.4} and {\tt DarkSUSY~5.1.1}, where we show the average of the two calculations in Table~\ref{tab:benchmarks}. The total $\langle \sigma v \rangle _{f \overline{f}}$ annihilation cross-section includes the only three non-negligible contributions in No-Scale $\cal{F}$-$SU(5)$ for a nearly pure SUSY bino: $\langle \sigma v \rangle _{f \overline{f}} = \langle \sigma v \rangle _{\tau^+ \tau^-} + \langle \sigma v \rangle _{t\overline{t}} + \langle \sigma v \rangle _{b \overline{b}}$. The $\Delta M$ value in Table~\ref{tab:benchmarks} refers specifically to the light neutralino and light stau mass difference, which we are compressing to increase the annihilation rate and IB effects. The light Higgs boson mass $m_h$ in Table~\ref{tab:benchmarks} includes both the tree level+1-loop+2-loop+3-loop+4-loop contributions and the additional vector-like flippon contribution~\cite{Li:2013naa}. Expected photon flux rates are listed in Table~\ref{tab:flux} for the annihilation channels $\widetilde{\chi} \widetilde{\chi} \to f \overline{f} \gamma$, $\widetilde{\chi} \widetilde{\chi} \to \gamma \gamma$, and $\widetilde{\chi} \widetilde{\chi} \to \gamma Z$, for the same ten No-Scale $\cal{F}$-$SU(5)$ benchmarks of Table~\ref{tab:benchmarks}. For the local dark matter relic density, we use the value $\rho_0 = 0.3$ GeV/${\rm cm^3}$, adopting the spherically symmetric NFW dark matter halo profile. The square of the dark matter density is integrated along the line of sight for each orientation within an angular detector acceptance of 2.5 steradians (sr) about the galactic center. This value is selected in correspondence with the LAT instrument's field of view, which encompasses about 20\% of the sky at any given moment. 
Results are not overly sensitive to this parameter, given a value sufficiently wide to encapsulate the region of primary density. Since the IB scenario represents a continuum of radiation frequencies, the differential fluxes plotted in the lower panel of Figure~\ref{fig:ibyield} are integrated across energy to yield consistent units of photon counts per square centimeter per second in Table~\ref{tab:flux}. All fluxes are computed with {\tt DarkSUSY~5.1.1}. The $\gamma \gamma$ flux includes the factor of 2 for the two photons. The ratio $\Phi _{IB}/\Phi _{\gamma \gamma}$ in Table~\ref{tab:flux} represents the magnitude of the integrated IB flux relative to the $\gamma \gamma$ line flux, which provides an advantage of about a factor of 10 across the full model space. Likewise, the column $\Phi _{\gamma \gamma} / \Phi _{\gamma Z}$ reports the ratio of monochromatic flux rates for a gamma pair relative to a gamma plus Z-boson, which similarly yields an advantage of one order of magnitude across the model space. It is evident from Figure~\ref{fig:ibyield} that compressing the light bino neutralino and light stau does indeed enhance the EM IB effects for the benchmarks of Table~\ref{tab:benchmarks}. The curves in the top frame of Figure~\ref{fig:ibyield} depict the number of IB photons per annihilation resulting from annihilation into charged particles. The bottom frame illustrates the IB flux $\Phi_{IB}$ energy spectrum for the same ten benchmarks. The thin curves (lower) in both frames represent a region of the No-Scale $\cal{F}$-$SU(5)$ model space where the thermal LSP relic density can satisfy the Planck satellite CDM measurements $\Omega h^2 = 0.1199 \pm 0.0027$~\cite{Ade:2013zuv}. The thicker curves (middle) in both frames possess an LSP and light stau mass difference of about 2 GeV, with the thickest curves (upper) having a degenerate LSP and light stau, with possibly a long-lived light stau in this degenerate scenario.
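The line-of-sight integration described above can be sketched numerically. The NFW parameters below ($r_s = 20$ kpc, with the profile normalized to $\rho_0 = 0.3$ GeV/${\rm cm^3}$ at the solar radius $r_\odot = 8.5$ kpc) are common illustrative choices we assume here; they are not values quoted in the text.

```python
import math

# Sketch of the line-of-sight integral J(psi) = integral of rho^2(r) ds used
# in the annihilation flux, with NFW rho(r) = rho_s / ((r/r_s)(1 + r/r_s)^2).
KPC_CM = 3.0857e21            # cm per kpc
r_s, r_sun, rho_0 = 20.0, 8.5, 0.3   # kpc, kpc, GeV/cm^3 (assumed values)

def nfw_shape(r):             # NFW profile shape, arbitrary normalization
    x = r / r_s
    return 1.0 / (x * (1.0 + x) ** 2)

rho_s = rho_0 / nfw_shape(r_sun)     # normalize to rho_0 at the solar radius

def rho(r):
    return rho_s * nfw_shape(r)      # GeV/cm^3

def J(psi, s_max=100.0, n=20000):
    """Integral of rho^2 ds along viewing angle psi from the galactic
    center, in GeV^2 cm^-5 (midpoint rule over s in kpc)."""
    ds = s_max / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * ds
        r = math.sqrt(s * s + r_sun * r_sun
                      - 2.0 * s * r_sun * math.cos(psi))
        total += rho(r) ** 2 * ds
    return total * KPC_CM

print("%.2e" % J(math.radians(1.0)))   # one degree off the galactic center
print("%.2e" % J(math.radians(90.0)))  # perpendicular direction, much smaller
```

Summing $J(\psi)$ over the solid angle of the detector acceptance and multiplying by $\langle \sigma v \rangle / (8 \pi m_{\chi}^2)$ and the photon yield per annihilation gives the fluxes of the kind tabulated in Table~\ref{tab:flux}.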
All IB photon counts in Figure~\ref{fig:ibyield} are computed with {\tt DarkSUSY~5.1.1}, as are the IB fluxes. Clearly, the EM IB photon count, and hence the flux, increases for smaller $\Delta M$, an effect we presume will be enhanced when also including the EW contributions~\cite{Bringmann:2013oja}. We leave the numerical results of the EW IB photon yield and additional flux for a future work~\cite{LMNW-P}. At this juncture, we are content with a projection that the photon counts and fluxes in Figure~\ref{fig:ibyield} could be amplified via the additional EW IB contributions~\cite{Bringmann:2013oja}. Our scale for the benchmarks in Tables~\ref{tab:benchmarks}-\ref{tab:flux} and Figure~\ref{fig:ibyield} begins at $M_{1/2} = 775$ GeV, which lies near the threshold below which the No-Scale $\cal{F}$-$SU(5)$ model space may be considered firmly excluded by the LHC SUSY search, as based upon a Monte Carlo event analysis~\cite{Li:2013hpa}. We select sufficient points to provide thorough coverage of the entire viable model space. We direct attention to the region of the parameter space exemplified by the $M_{1/2} = 774-775$ GeV benchmarks of Tables~\ref{tab:benchmarks}-\ref{tab:flux}, which is consistent, within an upper 2$\sigma$ limit on the WIMP mass, with the previously observed 133 GeV monochromatic gamma ray line. Comparing Figure~\ref{fig:ibyield} with Table~\ref{tab:flux}, it is apparent that compression of the $\Delta M(\widetilde{\chi}_1^0,\widetilde{\tau}_1)$ mass gap substantially strengthens the IB signal in the narrowly peaked spectral range close to the LSP mass, whereas the advantage in integrated photon flux is less pronounced; this distinction is relevant given the higher experimental sensitivity to signals that more closely approximate a line spike.
\section{Summary of Experimental Prospects} In this final section, we attempt to make a quantitative, if in some regards na\"{\i}ve, assessment of the experimental prospects of the various $\cal{F}$-$SU(5)$ model benchmarks previously described. The primary metric for assessment will be the integrated photon flux, {\it i.e.} the area under each differential flux curve displayed in the lower element of Figure~\ref{fig:ibyield}, in units of photons ${\rm cm^{-2}~sec^{-1}}$, as reported in Table~\ref{tab:flux}. Since both background (following a power law with spectral index $-2$) and the internal bremsstrahlung signal accrue in linear proportion with time, the $S/\sqrt{B}$ signal to background discriminant may be expected to scale as the square root of time. Based upon four years of data collection in whole-sky survey mode (achieving a full $4\pi$ steradian coverage once per two earth orbits), the Fermi collaboration has established sensitivity at five standard deviations to gamma flux rates above about $3-4 \times 10^{-9}~{\rm cm^{-2}~sec^{-1}}$ for line sources positioned at high galactic latitudes~\cite{LATsensitivity}; the sensitivity is diminished by about half an order of magnitude in the highly active galactic center. Taking an active Fermi mission lifetime of ten years, one sees that the data doubling advantage has already been largely depleted in the existing results, although the remaining multiple of 2.5 in integrated time may yet garner an improvement of around 1.6 deviations in sensitivity; in other words, any potential discovery apparent by the end of the Fermi mission should already be showing evidence above three standard deviations. Likewise, the expected end of mission line sensitivity may be projected at about $2 \times 10^{-9}~{\rm cm^{-2}~sec^{-1}}$. The root-$t$ scaling is actually a bit pessimistic for signals approximating a line width, and better sensitivity is possible. 
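The $\sqrt{t}$ projections above, together with the working threshold assembled from the factors of $\sim 2$, $\sim 2$, and $3/5$ discussed in this and the following paragraph, can be laid out explicitly; the midpoint value $3.2 \times 10^{-9}$ within the quoted $3$--$4 \times 10^{-9}$ range is our own choice for illustration.

```python
import math

# Root-t scaling of the S/sqrt(B) line sensitivity for the Fermi-LAT.
four_year = 3.2e-9                # 4-yr 5-sigma line sensitivity, cm^-2 s^-1
gain = math.sqrt(10.0 / 4.0)      # 10-yr mission vs. 4-yr data, ~1.58
end_of_mission = four_year / gain
print(round(gain, 2))             # ~1.6 standard deviations of improvement
print(f"{end_of_mission:.1e}")    # ~2e-9 projected end-of-mission sensitivity

# Folding in the further factors discussed in the text: ~2 from targeted
# galactic-center observation, ~2 from analysis upgrades, and a residual
# 3/5 reduction in signal flux still yielding strong evidence:
threshold = end_of_mission / 2.0 / 2.0 * (3.0 / 5.0)
print(f"{threshold:.0e}")         # ~3e-10 working flux threshold
```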
Additionally, the Fermi instrument has begun a transition toward more targeted observation of the galactic center for the remainder of its mission, which may garner an additional factor of about two in sensitivity, admitting however that baseline sensitivities are lower in this region. Likewise, substantial improvements in understanding of the detector and relevant analysis techniques are poised to reduce background contamination and improve overall instrument sensitivity~\cite{Atwood:2013rka}; we likewise assign a factor of about two to processing upgrades of this type, which are retroactive to already collected data. Holding backgrounds constant, a further reduction in the signal flux by a factor around $3/5$ would still be capable of presenting strong evidence for a scale-localized excess. Together, then, we set a working threshold around $3 \times 10^{-10}~{\rm cm^{-2}~sec^{-1}}$ on any potentially visible gamma flux. Given continuum dispersion of the IB gamma signal, it is somewhat over optimistic to apply sensitivities extrapolated from line-signal searches, and this deficiency becomes more pronounced at higher mass scales with widening and flattening of the signal profile, as is visible in Figure~\ref{fig:ibyield}. Nevertheless, it is important to recognize that any IB gamma signal may be compounded with line signals from loop order neutralino annihilation to gamma pairs and/or gamma Z, in the same basic spectral range, although potentially substantially suppressed, as indicated in Table~\ref{tab:flux}. Without an appreciable boost factor $\mathcal{O}$(50--100) in the computed annihilation rate, the $\cal{F}$-$SU(5)$ IB gamma flux, while more favorable for detection than the flux associated with mono-energetic line sources, may remain obscured by background processes to the LAT instrument. However, if there is any validity to the existing 130~GeV signal, then it becomes quite likely that some undiagnosed boost factor is actually in play. 
Plausible sources of this upward shift in the flux include underestimation of the local dark matter density (or corrections to the assumption of a smooth profile distribution), and internal bremsstrahlung contributions from EW or strong gauge bosons. As a closing note, we draw attention to the increase in the thermally produced bino relic density in Table~\ref{tab:benchmarks} for those points with $\Delta M(\widetilde{\chi}_1^0,\widetilde{\tau}_1) \simeq 0$--$2$ GeV, as the gaugino mass $M_{1/2}$ is lifted; this is due primarily to the incrementally larger LSP mass, and a corresponding slow increase in the value of $\tan\beta$, which tracks the elevation in $M_{1/2}$, automatically enhancing the light stau mixing for larger SUSY mass scales. Interestingly, the viable No-Scale $\cal{F}$-$SU(5)$ parameter space terminates near $M_{1/2} \sim 1500$ GeV, with a nearly degenerate light stau and LSP, while concurrently maintaining the Planck observed relic density. Furthermore, if we consider an off-shell tau, the parameter space can be extended up to $M_{1/2} \sim 1700$ GeV before incurring a charged light stau as the LSP. In this uppermost region of the model space, no alternate measures, such as non-thermally produced WIMPs, need be invoked to generate the correct relic density. This very large $M_{1/2} \sim 1500$ GeV region may be probed by future gamma ray experiments, and any possible gamma ray line signals could be directly correlated to LHC results, where, given the strong light stau and LSP neutralino mass degeneracy in this portion of the model, one may make an additional intriguing prediction for LHC phenomenology: in light stau production, the tau and LSP neutralino missing momentum signal will be collinear. \section{Conclusions} We presented here a methodology for testing No-Scale Supergravity with the FERMI satellite's Large Area Telescope, and similar future gamma ray telescopes.
For our testing vehicle, we chose the supersymmetric grand unified model No-Scale Flipped $SU(5)$ with extra vector-like flippon multiplets derived from F-Theory, dubbed $\cal{F}$-$SU(5)$. Building upon ample extant phenomenological motivation for No-Scale $\cal{F}$-$SU(5)$, we discussed the potentially significant empirical support recently provided to cosmological models of inflation based upon No-Scale Supergravity by intrinsic Starobinsky-like conformance with the Planck measurements, for a suitable choice of superpotential parameters. Given this impetus, we discussed how compressing the light stau and LSP mass difference can increase the internal bremsstrahlung effects and thus enhance the photon count from annihilation to elevate detection probabilities, albeit with a reduced bino relic density. We additionally explained how the Planck satellite observed relic density can nevertheless be generated through a non-thermal mechanism. For concrete examples, we gave several benchmark points with light stau and LSP mass differences of 0--2 GeV, achieved by slight upward shifts in the low energy boundary condition $\tan\beta$, in conjunction with negligible variations in the gaugino mass $M_{1/2}$ and flippon mass $M_{V}$; these modifications leave the SUSY spectrum, aside from the light stau mass, unchanged, preserving the rich phenomenology (modulo appeal to non-thermal mechanisms of relic density generation) that is currently being probed by the LHC and several other Beyond the Standard Model (BSM) experiments. While the IB mechanism emerges as a more favorable context for observing a gamma ray signal generated consistently with the $\cal{F}$-$SU(5)$ model than monochromatic sources, a clear signal in the present generation instrument still requires a boost of order $\mathcal{O}$(50--100) in the expected rate of flux. 
\begin{acknowledgments} This research was supported in part by the DOE grant DE-FG03-95-ER-40917 (DVN) and by the Natural Science Foundation of China under grant numbers 10821504, 11075194, 11135003, and 11275246 (TL). We also thank Sam Houston State University for providing high performance computing resources. \end{acknowledgments}
\section*{Introduction} One of the reasons that $C^*$-algebras are so well studied is that they have a very deep representation theory. Understanding the spectrum or primitive ideal space of a $C^*$-algebra, and in particular the topology on these spaces, can reveal a great deal of information about the underlying algebra. For example, if a separable $C^*$-algebra $A$ has Hausdorff spectrum $\widehat{A}$ then $A$ is naturally isomorphic to the section algebra of an upper-semicontinuous bundle over $\widehat{A}$ such that each fiber of the bundle is isomorphic to the compact operators. The continuous trace $C^*$-algebras, which can be classified by a cohomology element, are then algebras with Hausdorff spectrum whose associated bundles are ``locally trivial'' in an appropriate sense \cite[Chapter 5]{tfb}. Given a class of $C^*$-algebras it is an interesting problem to characterize those algebras which have Hausdorff spectrum. For example, in \cite{tghs} the author proves the following result. Suppose we are given a transformation group $(H,X)$ such that $H$ is abelian and the group action satisfies any of the conditions in the Mackey-Glimm dichotomy \cite{groupoiddichotomy}. Then the transformation group $C^*$-algebra will have Hausdorff spectrum if and only if the stabilizer subgroups of the action vary continuously with respect to the Fell topology and the orbit space $X/H$ is Hausdorff. In this paper we would like to extend the work of \cite{tghs} from transformation groups to groupoids. The most straightforward generalization is the conjecture that, given a groupoid $G$ with abelian stabilizer subgroups which satisfies the conditions of the Mackey-Glimm dichotomy, the groupoid $C^*$-algebra will have Hausdorff spectrum if and only if the stabilizers vary continuously in $G$ and $G^{(0)}/G$ is Hausdorff. 
Interestingly, we will show that this ``naive'' generalization fails and that characterizing the groupoid $C^*$-algebras with Hausdorff spectrum requires a third condition. Furthermore, the correct generalization, presented in Section \ref{sec:groupoid-c-algebras} as Theorem \ref{thm:groupoidresult}, is in some ways stronger than the results of \cite{tghs}, even for transformation groups. We finish the paper by providing some further examples in Section \ref{sec:hausd-spectr-dual}. We also prove that, unlike the $T_0$ or $T_1$ case, in the Hausdorff case the spectrum cannot be studied using only the stabilizer subgroupoid. Before we get started we should review some preliminary material. Throughout the paper we will let $G$ denote a second countable, locally compact Hausdorff groupoid with a Haar system $\{\lambda_u\}$. We will use $G^{(0)}$ to denote the unit space, $r$ to denote the range map, and $s$ to denote the source map. We will let $S = \{\gamma\in G : s(\gamma) = r(\gamma)\}$ be the stabilizer, or isotropy, subgroupoid of $G$. Observe that on $S$ the range and source maps are equal and that $r=s:S\to G^{(0)}$ gives $S$ a bundle structure over $G^{(0)}$. Given $u\in G^{(0)}$ the fiber $S_u = r|_S^{-1}(u)$ is a group and is called the stabilizer subgroup at $u$. Since $S$ is a closed subgroupoid of $G$, it is always second countable, locally compact, and Hausdorff. However, $S$ will have a Haar system if and only if the stabilizers vary continuously. That is, if and only if the map $u\mapsto S_u$ is continuous with respect to the Fell topology on closed subsets of $S$ \cite[Lemma 1.3]{renaultgcp}. One of the primary examples of groupoids is the class built from transformation groups. If a second countable locally compact Hausdorff group $H$ acts on a second countable locally compact Hausdorff space $X$ then we can form the transformation groupoid $H\ltimes X$ in the usual fashion.
The properties of the transformation groupoid are closely tied to those of the group action. For instance, the orbit space $(H\ltimes X)^{(0)}/(H\ltimes X)$ is homeomorphic to the orbit space of the action $X/H$. Furthermore, the stabilizer groups $S_x$ of $H\ltimes X$ can be naturally identified with the stabilizer subgroups $H_x$ of $H$ with respect to the group action and the stabilizers will vary continuously in $H\ltimes X$ if and only if they vary continuously in $H$. Given a groupoid $G$ we can construct the groupoid $C^*$-algebra $C^*(G)$ as a universal completion of the convolution algebra $C_c(G)$ \cite{groupoidapproach, coords}. Of particular interest to us will be the spectrum $C^*(G)^{\wedge}$ of the groupoid algebra. One special case which will play a key role in our results is the spectrum of the stabilizer subgroupoid. Suppose that $G$ has abelian stabilizer subgroups, that is, suppose the fibers of $S$ are all abelian. If the stabilizers vary continuously so that $S$ has a Haar system then we may construct the groupoid algebra $C^*(S)$. It turns out that in this case $C^*(S)$ is abelian and the spectrum of $C^*(S)$, denoted by $\widehat{S}$, is a second countable locally compact Hausdorff space which is naturally fibered over $G^{(0)}$. Furthermore the fiber of $\widehat{S}$ over $u\in G^{(0)}$, which we will write as $\widehat{S}_u$, is the Pontryagin dual of the fiber $S_u$ \cite[Section 3]{ctgIII}. We refer to $\widehat{S}$ as the dual stabilizer groupoid. One of the things that makes $\widehat{S}$ so useful is that its topology is relatively well understood; \cite{ctgIII} gives a complete description of the convergent sequences in $\widehat{S}$. Since we will use this characterization quite a bit we have restated it below.
\begin{prop}[{\cite[Proposition 3.3]{ctgIII}}] \label{prop:3} Suppose the groupoid $G$ has continuously varying abelian stabilizers and that $\{\chi_n\}$ is a sequence in $\widehat{S}$ with $\chi_n \in \widehat{S}_{u_n}$ for all $n$. Given $\chi\in \widehat{S}_u$ we have $\chi_n \to \chi$ if and only if \begin{enumerate} \item $u_n \to u$ in $G^{(0)}$, and \item given $s_n \in S_{u_n}$ for all $n$ and $s\in S_u$ if $s_n\to s$ then $\chi_n(s_n)\to \chi(s)$. \end{enumerate} \end{prop} The final thing we need to review is the notion of a groupoid action. A groupoid $G$ can only act on spaces $X$ which are fibered over $G^{(0)}$. If there is a surjective function $r_X:X\to G^{(0)}$ then we define a groupoid action via a map $\{(\gamma,x):s(\gamma)=r_X(x)\}\to X$ such that for composable $\gamma$ and $\eta$ we have $\gamma\cdot(\eta\cdot x) = \gamma\eta\cdot x$. Among other things, this implies that $r_X(x)\cdot x = x$ for all $x\in X$ and $r_X(\gamma\cdot x) = r(\gamma)$. We will use the following three actions in this paper. Any groupoid $G$ has actions on its unit space $G^{(0)}$ and its stabilizer subgroupoid $S$ which are defined as follows \begin{align*} \gamma\cdot u = \gamma u \gamma^{-1} = r(\gamma)\quad\text{on $G^{(0)}$, and} \quad \gamma\cdot s = \gamma s \gamma^{-1} \quad\text{on $S$.} \end{align*} Furthermore if $S$ has abelian fibers which vary continuously then there is an action of $G$ on $\widehat{S}$. For $\gamma\in G$, $\chi\in \widehat{S}_{s(\gamma)}$ we define \[ \gamma\cdot \chi(s) = \chi(\gamma^{-1} s \gamma)\quad\text{for $s\in S_{r(\gamma)}$.} \] Given an action of $G$ on a space $X$ we will use $G\cdot x$ to denote the orbit of $x$ in $X$ and $[x]$ to denote the corresponding element of $X/G$. We would also like to recall that the orbit space $X/G$ is locally compact, but not necessarily Hausdorff, and that the quotient map $q:X\to X/G$ is open as long as $G$ has a Haar system \cite[Lemma 2.1]{groupoidcohom}. 
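For completeness, one can check directly (the verification below is ours and elementary) that the formula just given does define an action of $G$ on $\widehat{S}$: for composable $\gamma,\eta\in G$, $\chi\in\widehat{S}_{s(\eta)}$, and $s\in S_{r(\gamma)}$,

```latex
\[
  \bigl(\gamma\cdot(\eta\cdot\chi)\bigr)(s)
    = (\eta\cdot\chi)(\gamma^{-1} s \gamma)
    = \chi\bigl(\eta^{-1}\gamma^{-1} s \gamma\eta\bigr)
    = \chi\bigl((\gamma\eta)^{-1} s (\gamma\eta)\bigr)
    = \bigl((\gamma\eta)\cdot\chi\bigr)(s),
\]
```

so the dual action is compatible with composition, as required of a groupoid action.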
\section{Groupoid $C^*$-algebras with Hausdorff Spectrum} \label{sec:groupoid-c-algebras} As mentioned in the introduction, we would like to generalize the main result of \cite{tghs}, which has been restated below, from transformation groups to groupoids. \begin{theorem}[{\cite[Page 320]{tghs}}] \label{thm:transresult} Suppose that $(H,X)$ is an abelian transformation group and that the maps of $H/H_x$ onto $H\cdot x$ are homeomorphisms for each $x\in X$. Then the spectrum of the transformation group $C^*$-algebra $C^*(H,X)$ is Hausdorff if and only if the map $x\mapsto H_x$ is continuous with respect to the Fell topology and $X/H$ is Hausdorff. \end{theorem} \begin{remark} The condition that the maps of $H/H_x$ onto $H\cdot x$ are homeomorphisms for each $x\in X$ is one of the equivalent conditions in the Mackey-Glimm dichotomy \cite{groupoiddichotomy}. Following \cite{specpaper} we will refer to groupoids and transformation groups which satisfy one, and hence all, of the conditions of the Mackey-Glimm dichotomy as {\em regular}. \end{remark} An important question is how to generalize the hypothesis that the group $H$ is abelian. The most natural replacement is to assume that the stabilizer subgroups $S_u$ are abelian for all $u\in G^{(0)}$. Since, as we will see, the regularity hypothesis can be removed completely, this leaves us with the following conjecture. \begin{conj} \label{conj-1}Suppose the groupoid $G$ has abelian stabilizers. Then $C^*(G)$ will have Hausdorff spectrum if and only if the stabilizers vary continuously and $G^{(0)}/G$ is Hausdorff. \end{conj} However, we will find that this conjecture fails and the assumption that $G$ has abelian stabilizers is a weaker condition, even for transformation groups. Let us start by assuming $G$ is a second countable, locally compact Hausdorff groupoid with abelian stabilizers and that $C^*(G)^{\wedge}$ is Hausdorff. 
It then follows from \cite[Proposition 3.1]{ctgIII} that the stabilizers must vary continuously. Next consider the following useful lemma. \begin{lemma} \label{lem:1} Suppose $G$ is a second countable locally compact Hausdorff groupoid with continuously varying abelian stabilizers. Then the following are equivalent: \begin{enumerate} \item $C^*(G)$ has $T_0$ spectrum. \item $C^*(G)$ is GCR. \item $G^{(0)}/G$ is $T_0$. \end{enumerate} Furthermore, if any of these conditions hold then the map $[\gamma]\to r(\gamma)$ from $G_u/S_u$ to $G\cdot u$ is a homeomorphism for all $u\in G^{(0)}$ and $G$ is regular. \end{lemma} \begin{proof} The groupoid algebra is separable since $G$ is second countable. In this case the equivalence of the first two conditions follows from \cite[Theorem 6.8.7]{pedersenauto}. Since the stabilizers are abelian, and therefore amenable and GCR, the equivalence of the second two conditions now follows from the main result of \cite{ccrgca}. Finally, if $G^{(0)}/G$ is $T_0$ then it follows from \cite{groupoiddichotomy} that the map $[\gamma]\mapsto r(\gamma)$ from $G_u/S_u$ onto $G\cdot u$ is a homeomorphism for all $u\in G^{(0)}$ and hence $G$ is regular in the sense of \cite{specpaper}. \end{proof} Since we have assumed $C^*(G)^{\wedge}$ is Hausdorff, Lemma \ref{lem:1} implies that $G$ is regular. We may now use \cite[Theorem 3.5]{specpaper} to conclude that $C^*(G)^{\wedge}$ is homeomorphic to $\widehat{S}/G$. A brief argument shows that $G^{(0)}/G$ is homeomorphic to its image in $\widehat{S}/G$ equipped with the relative topology. Thus $G^{(0)}/G$ is Hausdorff. This demonstrates one direction of our conjecture. On the other hand, suppose that $G$ has continuously varying abelian stabilizers and that $G^{(0)}/G$ is Hausdorff. Then $G^{(0)}/G$ is certainly $T_0$ so that $G$ is regular. It then follows from \cite[Theorem 3.5]{specpaper} that $C^*(G)^{\wedge}$ is homeomorphic to $\widehat{S}/G$. 
So we will have proven our conjecture if we can show that $\widehat{S}/G$ is Hausdorff. What is more, setting aside the issue of continuously varying stabilizers for the moment, we also have the following suggestive proposition. \begin{prop} \label{prop:1} Suppose $G$ is a second countable, locally compact Hausdorff groupoid with continuously varying abelian stabilizers. Then $C^*(G)^{\wedge}$ is $T_1$ (resp. $T_0$) if and only if $G^{(0)}/G$ is $T_1$ (resp. $T_0$). \end{prop} \begin{proof} It follows from Lemma \ref{lem:1} that $C^*(G)^{\wedge}$ is $T_0$ if and only if $G^{(0)}/G$ is. Now suppose $C^*(G)^{\wedge}$ is $T_1$. Then Lemma \ref{lem:1} and \cite[Theorem 3.5]{specpaper} imply that $C^*(G)^{\wedge}$ is homeomorphic to $\widehat{S}/G$. As noted above, $G^{(0)}/G$ is homeomorphic to its image in $\widehat{S}/G$, and as such $G^{(0)}/G$ is $T_1$. Next suppose that $G^{(0)}/G$ is $T_1$. Again using Lemma \ref{lem:1} and \cite[Theorem 3.5]{specpaper} we have $C^*(G)^{\wedge} \cong \widehat{S}/G$. Thus, mirroring the Hausdorff case, we will be done if we can show that $\widehat{S}/G$ is $T_1$. Suppose that we are given elements $[\rho],[\chi]\in \widehat{S}/G$ such that $[\rho]\ne[\chi]$. Let $p:\widehat{S}\to G^{(0)}$ be the bundle map and $\tilde{p}:\widehat{S}/G\to G^{(0)}/G$ its factorization. Set $[u] = \tilde{p}([\rho])$ and $[v] = \tilde{p}([\chi])$. Suppose $[u]\ne [v]$. Since $G^{(0)}/G$ is $T_1$ we can find open sets $U$ and $V$ such that $[u]\in U$, $[v]\in V$ and $[u]\not\in V$, $[v]\not\in U$. Then $\tilde{p}^{-1}(U)$ is an open set containing $[\rho]$ and not $[\chi]$ and $\tilde{p}^{-1}(V)$ is an open set containing $[\chi]$ and not $[\rho]$. Next suppose $[u] = [v]$. 
Since the fibers of $S$ are abelian we have \begin{equation} \label{eq:2} s \cdot \chi(t) = \chi(s^{-1} t s) = \chi(t)\quad\text{for all $s\in S$.} \end{equation} Hence the action of $G$ on $S$ is trivial when fixed to a single fiber and we can assume without loss of generality that $\rho,\chi\in \widehat{S}_u$ with $\rho \ne \chi$. Let $q:\widehat{S}\to \widehat{S}/G$ be the quotient map and recall that it is open. Fix a neighborhood $U$ of $\rho$. If $\chi\not\in G\cdot U$ then $[\chi]\not \in q(U)$ and $q(U)$ separates $[\rho]$ from $[\chi]$. Now suppose $\chi\in G\cdot U$ for all neighborhoods $U$ of $\rho$. Then for each $U$ there exists $\gamma_U\in G$ and $\rho_U \in U$ such that $\rho_U = \gamma_U\cdot \chi$. If we direct $\rho_U$ by decreasing $U$ then it is clear that $\rho_U \to \rho$. This implies that \( \gamma_U \cdot u = r(\gamma_U) = p(\rho_U) \to u. \) Since $G$ is regular $[\gamma]\mapsto r(\gamma)$ is a homeomorphism and we must have $[\gamma_U] \to [u]$ in $G_u/S_u$. However, the quotient map on $G_u/S_u$ is open so that we may pass to a subnet, relabel, and choose $r_U\in S_u$ such that $\gamma_Ur_U \to u$. Using \eqref{eq:2} \[ \gamma_Ur_U\cdot\chi = \gamma_U\cdot \chi = \rho_U \to u\cdot \chi = \chi. \] Thus $\rho = \chi$, which is a contradiction. It follows that we must have been able to separate $[\rho]$ from $[\chi]$. This argument is completely symmetric so that we can also find an open set around $[\chi]$ which does not contain $[\rho]$. It follows that $\widehat{S}/G$, and hence $C^*(G)^{\wedge}$, is $T_1$. \end{proof} The essential component of this proof is the argument that $\widehat{S}/G$ is $T_1$ if $G^{(0)}/G$ is $T_1$. We would like to extend this to the Hausdorff case but there are topological obstructions. We start by recalling Green's famous example of a free group action that is not proper. 
\begin{example}[\cite{tgasos}] \label{ex:1} The space $X\subset\mathbb{R}^3$ will consist of countably many orbits, with the points $x_0=(0,0,0)$ and $x_n = (2^{-2n},0,0)$ for $n\in\mathbb{N}$ as a family of representatives. The action of $\mathbb{R}$ on $X$ is described by defining maps $\phi_n :\mathbb{R}\rightarrow X$ such that $\phi_n(s) = s\cdot x_n$. In particular we let \( \phi_0(s) = (0,s,0) \) and for $n\geq 1$ \[ \phi_n(s) = \begin{cases} (2^{-2n},s,0) & s \leq n \\ (2^{-2n}-(s-n)2^{-2n-1}, n\cos(\pi(s-n)),n\sin(\pi(s-n))) & n < s < n+1 \\ (2^{-2n-1}, s-1-2n,0) & s \geq n+1. \end{cases} \] For instance, brief computations show that \begin{equation} \label{eq:3} (2n+1)\cdot (2^{-2n},0,0) = (2^{-2n-1},0,0) \end{equation} for all $n$. It is straightforward to observe that the orbit space $X/\mathbb{R}$ is homeomorphic to the subset $\{x_n\}_{n=0}^\infty$ of $\mathbb{R}^3$. \end{example} In the following we build an example of a transformation groupoid $G$ with continuously varying abelian stabilizers such that $G^{(0)}/G$ is Hausdorff and $\widehat{S}/G$ is not. This shows that, even in the transformation group case, our conjecture fails and that we cannot use the straightforward generalization of Theorem \ref{thm:transresult}. \begin{example} \label{ex:3} Let $\mathbb{R}$ act on $X$ as in Example \ref{ex:1}. Now restrict this action to the action of $\mathbb{Z}$ on the subset \( Y= \{\phi_n(m):n\in \mathbb{N}, m\in \mathbb{Z}\}. \) Let $H = \mathbb{Q}_D\rtimes_\phi\mathbb{Z}$ be the semidirect product, where $\mathbb{Q}_D$ denotes the rationals equipped with the discrete topology and where we define \begin{equation} \phi(n)(r) = r2^n \end{equation} for all $n\in \mathbb{Z}$ and $r\in \mathbb{Q}$. It is easy to show that $\phi$ is a homomorphism from $\mathbb{Z}$ into the automorphism group of $\mathbb{Q}_D$. Thus $H$ is a locally compact Hausdorff group which is second countable because it is a countable discrete space.
Recall that the group operations are given by \begin{align*} (q,n)(p,m) & = (q + 2^n p, n + m) & (q,n)^{-1} &= (-2^{-n}q,-n). \end{align*} Let the second factor of $H$ act on $Y$ as in Example \ref{ex:1}. In other words, let $(q,n)\cdot x := n \cdot x$. It is straightforward to show that this is a continuous group action. It follows that the transformation groupoid $G=H\ltimes Y$ is a second countable, locally compact Hausdorff groupoid with a Haar system. Furthermore, the stabilizer subgroup of $H$ at $x$ is $H_x = \{(q,0):q\in\mathbb{Q}\}$ for all $x\in Y$. Since \( (q,0)(r,0) = (q + r2^{0},0) = (q+r,0), \) the stabilizers are abelian, and since the stabilizers are also constant, they must vary continuously in both $H$ and $G$. It will be important for us to observe that $S$ is isomorphic to $\mathbb{Q}_D \times Y$ via the map \(((q,0),x)\mapsto (q,x)\). Finally, $\{x_n\}_{n=0}^\infty$ forms a set of representatives for the orbit space and it is not difficult to show that $Y/G$ is actually homeomorphic to $\{x_n\}_{n=0}^\infty$ and is therefore Hausdorff. To show that $\widehat{S}/G$ is not Hausdorff we must first compute the dual. Since $S$ is isomorphic to $\mathbb{Q}_D\times Y$ we can identify $\widehat{S}$ with $\widehat{\mathbb{Q}_D}\times Y$. While $\widehat{\mathbb{Q}_D}$ is fairly mysterious we do know that since \( \hat{r}(s) = e^{ i r s} \) is a character on $\mathbb{R}$ for all $r\in \mathbb{R}$ it must also be a character on $\mathbb{Q}_D$. Now suppose $((q,n),x)\in G$ and $(\hat{r},-n\cdot x)\in \mathbb{Q}_D\times Y$. We have \begin{align*} ((q,n),x)\cdot (\hat{r},-n\cdot x)(p,x) &= (\hat{r},-n\cdot x)(((q,n),x)^{-1} ((p,0),x)((q,n),x)) \\ &=(\hat{r},-n\cdot x)((-2^{-n}q,-n)(p,0)(q,n),-n\cdot x) \\ &=(\hat{r},-n\cdot x)((2^{-n}p,0),-n\cdot x) \\ &= e^{irp2^{-n}} = (\widehat{2^{-n}r},x)(p,x). \end{align*} Or, more succinctly, \begin{equation} \label{eq:1} ((q,n),x) \cdot (\hat{r},-n\cdot x) = (\widehat{2^{-n}r},x). 
\end{equation} Next let \( \gamma_n = ((0,2n+1),(2^{-2n-1},0,0))\) for all $n$. Using the inverse of \eqref{eq:3} we have \[ r(\gamma_n) = (2^{-2n-1},0,0)\quad\text{and}\quad s(\gamma_n) = (2^{-2n},0,0). \] If we set \(\chi_n = (\hat{1},(2^{-2n},0,0))\) then clearly $\chi_n \to \chi = (\hat{1},(0,0,0))$. Using \eqref{eq:1} we compute \( \gamma_n\cdot\chi_n = (\widehat{2^{-(2n+1)}},(2^{-2n-1},0,0)). \) A quick calculation shows that $\gamma_n\cdot\chi_n \to \omega = (\hat{0},(0,0,0))$. Hence $[\chi_n] \to [\chi]$ and $[\chi_n]\to [\omega]$. Since the action of $G$ is trivial on fixed fibers this implies that $\widehat{S}/G$, and hence $C^*(G)^{\wedge}$, is not Hausdorff. \end{example} Even though our conjecture fails, we still know that if $G$ has continuously varying abelian stabilizers and $G^{(0)}/G$ is Hausdorff then $C^*(G)^{\wedge}\cong \widehat{S}/G$. What we need is an additional hypothesis which, when taken in conjunction with $G^{(0)}/G$ being Hausdorff, will imply that $\widehat{S}/G$ is Hausdorff. The appropriate condition is given below and forms the main result of the paper. \begin{theorem} \label{thm:groupoidresult} Suppose $G$ is a second countable locally compact Hausdorff groupoid with a Haar system and abelian stabilizers. Then $C^*(G)$ has Hausdorff spectrum if and only if the following conditions hold: \begin{enumerate} \item the stabilizers vary continuously, i.e. $u \mapsto S_u$ is continuous with respect to the Fell topology, \item the orbit space $G^{(0)} /G$ is Hausdorff, and, \item given sequences $\{\chi_i\}\subset \widehat{S}$ and $\{\gamma_i\}\subset G$ with $\chi_i \in \widehat{S}_{s(\gamma_i)}$, if $\chi_i\to \chi$ and $\gamma_i\cdot \chi_i \to \omega$ such that $\chi$ and $\omega$ are in the same fiber then $\chi = \omega$. \end{enumerate} \end{theorem} In essence the third condition prevents the kind of ``looping'' behavior we see in Example \ref{ex:3} and is enough to guarantee that $\widehat{S}/G$ is Hausdorff. 
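Both the geometry of Example \ref{ex:1} and the character algebra of Example \ref{ex:3} are concrete enough to machine-check. The sketch below (the function names are ours and purely illustrative) transcribes the orbit maps $\phi_n$, verifies \eqref{eq:3}, implements the semidirect-product operations with exact rationals to confirm the conjugation identity behind \eqref{eq:1}, and exhibits the pointwise drift $\gamma_n\cdot\chi_n\to\hat 0$ responsible for the failure of Hausdorffness:

```python
import cmath
import math
from fractions import Fraction

def phi(n, s):
    """Orbit map phi_n from Example ex:1, transcribed verbatim."""
    if n == 0:
        return (0.0, s, 0.0)
    if s <= n:
        return (2.0 ** (-2 * n), s, 0.0)
    if s < n + 1:
        return (2.0 ** (-2 * n) - (s - n) * 2.0 ** (-2 * n - 1),
                n * math.cos(math.pi * (s - n)),
                n * math.sin(math.pi * (s - n)))
    return (2.0 ** (-2 * n - 1), s - 1 - 2 * n, 0.0)

# eq:3 -- acting by 2n+1 carries (2^{-2n},0,0) to (2^{-2n-1},0,0)
for k in range(1, 6):
    assert phi(k, 2 * k + 1) == (2.0 ** (-2 * k - 1), 0.0, 0.0)

def mult(a, b):
    """(q,n)(p,m) = (q + 2^n p, n + m) in Q_D semidirect Z."""
    (q, n), (p, m) = a, b
    return (q + Fraction(2) ** n * p, n + m)

def inv(a):
    """(q,n)^{-1} = (-2^{-n} q, -n)."""
    q, n = a
    return (-(Fraction(2) ** (-n)) * q, -n)

# Conjugating a stabilizer element (p,0) by (q,n) scales p by 2^{-n},
# which is the computation underlying eq:1.
q, n, p = Fraction(3, 7), 4, Fraction(5, 2)
assert mult(mult(inv((q, n)), (p, 0)), (q, n)) == (Fraction(2) ** (-n) * p, 0)

# The looping obstruction: translating chi_k = 1^ by gamma_k gives the
# character (2^{-(2k+1)})^, and r^(p) = exp(irp) tends pointwise to the
# trivial character 0^ as r -> 0.
p0 = 2.25  # any fixed rational works
drift = [cmath.exp(1j * 2.0 ** (-(2 * k + 1)) * p0) for k in range(1, 30)]
assert abs(drift[-1] - 1.0) < 1e-9
```

The drift computation is exactly the convergence criterion of Proposition \ref{prop:3}: pointwise convergence of the characters on matching stabilizer elements.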
\begin{remark} Even in the case of transformation groups Theorem \ref{thm:groupoidresult} is in some ways stronger than Theorem \ref{thm:transresult}. The main advantage is that we only require the stabilizer groups to be abelian, and not the whole group. Furthermore, we also removed the regularity hypothesis. The price is that we have added a slightly technical condition that, while not easy to say, is simple enough to check in practice. \end{remark} \begin{proof} In the discussion following our conjecture at the beginning of the section on page \pageref{conj-1} we showed that if $C^*(G)^{\wedge}$ is Hausdorff then conditions (a) and (b) hold and that $\widehat{S}/G$ is Hausdorff. Now suppose we have $\chi_i \to \chi$ and $\gamma_i\cdot \chi_i \to \omega$ as in condition (c). Then $[\chi_i]\to [\chi]$ and $[\chi_i]\to[\omega]$. Since $\widehat{S}/G$ is Hausdorff this implies $[\omega] = [\chi]$. However, $\chi$ and $\omega$ live in the same fiber and the action of $G$ on a single fixed fiber is free so that $\chi = \omega$. Now suppose conditions (a)-(c) are satisfied. Then again following the discussion on page \pageref{conj-1}, the first two conditions imply that $C^*(G)^{\wedge}$ is homeomorphic to $\widehat{S}/G$. Now suppose $[\chi_i]\to [\chi]$ and $[\chi_i]\to [\omega]$ in $\widehat{S}/G$. Using the fact that the quotient map is open we can pass to a subsequence, relabel, and choose new representatives $\chi_i$ so that $\chi_i \to \chi$. As before let $p:\widehat{S}\to G^{(0)}$ be the bundle map and let $\tilde{p}:\widehat{S}/G \to G^{(0)}/G$ be the natural factorization. Define $u_i = p(\chi_i)$ and $u = p(\chi)$ and observe that $[u_i] \to [u]$. Furthermore if $p(\omega) = v$ then $[u_i] \to [v]$ as well. Since $G^{(0)}/G$ is Hausdorff we have $[u] = [v]$ and we may assume, without loss of generality, that $u = v$. Now pass to a subsequence again, relabel, and find $\gamma_i\in G$ such that $\gamma_i \cdot \chi_i \to \omega$. 
These sequences satisfy the hypothesis of (c) so $\omega = \chi$. It follows $[\omega] = [\chi]$ and that $\widehat{S}/G$, and hence $C^*(G)^{\wedge}$, is Hausdorff. \end{proof} It should be noted that there are a variety of situations in which condition (c) is guaranteed to hold. \begin{prop} Let $G$ be a second countable, locally compact Hausdorff groupoid with continuously varying abelian stabilizers. Then condition (c) of Theorem \ref{thm:groupoidresult} automatically holds if $G$ satisfies any of the following: \begin{enumerate} \item $G=H\ltimes X$ is an abelian transformation groupoid, \item $G$ is principal, \item $G$ is proper, \item $G$ is Cartan, or \item $G$ is transitive. \end{enumerate} \end{prop} \begin{proof} Let $\chi_i \to \chi$ and $\gamma_i\cdot\chi_i \to \omega$ be as in condition (c). Set $u_i = s(\gamma_i)$, $v_i = r(\gamma_i)$, $u = p(\chi) = p(\omega)$ and observe that $u_i \to u$ and $v_i \to u$. Now suppose $G = H\ltimes X$ where $H$ is abelian. Then we must have $\gamma_i = (t_i, v_i)$ with $u_i = t_i^{-1}\cdot v_i$. Given $s$ in the stabilizer subgroup $H_u$ we can use the fact that the stabilizers vary continuously to pass to a subsequence, relabel, and find $s_i\in H_{u_i}$ such that $s_i \to s$ in $H$. Consequently $(s_i,u_i) \to (s,u)$ and by Proposition \ref{prop:3} \( \chi_i(s_i,u_i) \to \chi(s,u). \) On the other hand, since the group is abelian, we also have $s_i \in H_{v_i} = H_{t_i \cdot u_i}$ for all $i$. It follows that $(s_i,v_i)\to (s,u)$ in $S$ and therefore \[ (t_i,v_i)\cdot \chi_i(s_i,v_i) = \chi_i(t_i^{-1} s_i t_i, u_i) = \chi_i(s_i,u_i) \to \omega(s,u). \] Hence $\chi = \omega$ and condition (c) automatically holds for abelian transformation groups. Moving on, condition (c) trivially holds if $G$ is principal. For the next two conditions observe the following. Suppose we can pass to a subsequence, relabel, and find $\gamma \in G$ such that $\gamma_i \to \gamma$. 
It follows that $\gamma_i \cdot \chi_i \to \gamma \cdot \chi$ and therefore $\gamma\cdot \chi = \omega$. However, the range and source maps are continuous so we must have $r(\gamma) = s(\gamma) = u$ and hence $\gamma\in S_u$. The fibers of $S$ are abelian so that by \eqref{eq:2} $\omega = \gamma\cdot \chi = \chi$. Thus it will suffice to show that $\gamma_i$ has a convergent subsequence. However, if $G$ is either proper or Cartan then this follows almost by definition. Finally, suppose $G$ is transitive. Since $G$ is also second countable \cite[Theorem 2.2]{groupoidequiv} implies that the map $\gamma \mapsto (r(\gamma),s(\gamma))$ is open. Thus we can pass to a subsequence, relabel, and find $\eta_i \in G$ such that $r(\eta_i) = v_i$, $s(\eta_i) = u_i$ and $\eta_i \to u$. Observe that $\eta_i ^{-1} \gamma_i \in S_{u_i}$ for all $i$ so that \( \gamma_i \cdot \chi_i = \eta_i \cdot (\eta_i^{-1} \gamma_i \cdot \chi_i) = \eta_i \cdot \chi_i. \) Thus $\gamma_i \cdot \chi_i = \eta_i \cdot \chi_i \to u\cdot \chi = \chi$. It follows that $\chi = \omega$ and condition (c) holds in this case as well. \end{proof} \section{Examples and Duality} \label{sec:hausd-spectr-dual} In this section we would like to begin by applying Theorem \ref{thm:groupoidresult} to several examples. \begin{example} Let $H = SO(3,\mathbb{R})$, $X = \mathbb{R}^3\setminus\{(0,0,0)\}$ and let $H$ act on $X$ by rotation. It is clear that $H$ is not abelian, and therefore we cannot apply Theorem \ref{thm:transresult}. However, it does have abelian stabilizer subgroups. Given a vector $v\in X$ it is easy to see that $S_v$ is the set of rotations about the line described by $v$. In particular, this is isomorphic to the circle group and is therefore abelian. What is more, some computations show that the stabilizers vary continuously and that the stabilizer subgroupoid $S$ is homeomorphic to $X\times \mathbb{T}$. This in turn implies that the dual groupoid is homeomorphic to $X\times \mathbb{Z}$.
Now suppose $(U_i^{-1} v_i,\chi_i)\to (v,\chi)$ and $(U_i,v_i)\cdot (U_i^{-1} v_i,\chi_i)\to (v,\omega)$ as in condition (c). Given $\theta_i\to \theta$ in $\mathbb{T}$ we have from Proposition \ref{prop:3} that \[ (U_i^{-1} v_i,\chi_i)(U_i^{-1} v_i,\theta_i) = \chi_i(\theta_i) \to (v,\chi)(v,\theta) = \chi(\theta). \] Using the fact that conjugating rotation about an axis $w$ by $V\in H$ gives us the corresponding rotation about $Vw$, we also have \begin{align*} (U_i, v_i)\cdot(U_i^{-1} v_i, \chi_i)(v_i,\theta_i) &=(U_i^{-1} v_i,\chi_i)(U_i^{-1} v_i,\theta_i) =\chi_i(\theta_i) \to (v,\omega)(v,\theta) = \omega(\theta). \end{align*} It follows that $\chi = \omega$ and condition (c) of Theorem \ref{thm:groupoidresult} holds. Finally, the orbit space $X/H$ is homeomorphic to the open half-line and is therefore Hausdorff. Thus we can conclude that $C^*(H\ltimes X)$ has Hausdorff spectrum. In fact \cite[Theorem 3.5]{specpaper} shows that $C^*(H\ltimes X)^{\wedge}$ is homeomorphic to $\widehat{S}/(H\ltimes X) \cong (0,\infty)\times \mathbb{Z}$. \end{example} \begin{example} Let $E$ be a row finite directed graph with no sources. Recall that we can build the graph groupoid $G$ as in \cite{graphgroupoid}. Elements of $G$ are triples $(x,n,y)$ where $x$ and $y$ are infinite paths which are shift equivalent with lag $n$, and elements of $G^{(0)}$ are infinite paths.\footnote{We will be using the Raeburn convention for path composition \cite{cbmsgraph}.} Furthermore, the groupoid $C^*$-algebra $C^*(G)$ is isomorphic to the graph $C^*$-algebra. Let us consider the conditions of Theorem \ref{thm:groupoidresult}. First, the stabilizers are all subgroups of $\mathbb{Z}$ and hence abelian. Furthermore, the groupoid $G$ will have nontrivial stabilizers if and only if there exists an infinite path which is shift equivalent to itself. In other words, if and only if there is a cycle. Suppose a cycle on the graph has an entry.
Let $x$ be the path created by following the cycle an infinite number of times. For each $i\in \mathbb{N}$ let $x_i$ be the path which, at its head, follows the cycle $i$ times and then has a non-cyclic tail leading off from the entry. Because $x_i$ eventually agrees with $x$ on any finite segment we have $x_i \to x$. However, none of the $x_i$ are cycles so that $S_{x_i}$ is trivial for all $i$. On the other hand $S_{x} \cong n\mathbb{Z}$ where $n$ is the length of the cycle. Thus the stabilizers do not vary continuously. This shows that in order for the stabilizers to vary continuously no cycles in the graph can have entries. A similar argument shows that the converse holds as well. For the second condition we require that the orbit space $G^{(0)}/G$ be Hausdorff. In this case the orbit space is the space of shift equivalence classes. Recall that the basic open sets in $G^{(0)}$ are the cylinder sets $V_a$. More specifically, $a$ is a finite path and $V_a$ is the set of all infinite paths which are initially equal to $a$. Given $[x]\in G^{(0)}/G$ we will have $x\in G\cdot V_a$ if and only if $x$ is shift equivalent to a path with initial segment $a$. This is equivalent to there being a path from any vertex on $x$ to the source of $a$. Conversely, $y\not\in G\cdot V_a$ if and only if there is no path from any vertex on $y$ to the source of $a$. Using these facts it follows from a brief argument that $G^{(0)}/G$ will be Hausdorff if and only if given non-shift equivalent paths $x$ and $y$ there exist vertices $u$ and $v$ such that there is a path from a vertex on $x$ to $u$, a path from a vertex on $y$ to $v$, and there is no vertex $w$ which has a path to both $u$ and $v$. Finally, for the third condition we observe that given $(y,n,x)\in G$, $(y,m,y)\in S$ and $\chi \in \widehat{S}_x$ we have \begin{equation} \label{eq:5} (y,n,x)\cdot \chi(y,m,y) = \chi((x,-n,y) (y,m,y) (y,n,x)) = \chi(x,m,x).
\end{equation} Now suppose $\chi_i\to \chi$ and $(y_i,n_i,x_i)\cdot \chi_i\to \omega$ in $\widehat{S}$ with $\chi,\omega \in \widehat{S}_x$. Notice that this implies that we must have $x_i\to x$ and $y_i\to x$ in $G^{(0)}$. Let $(x,n,x)\in S_x$. Then $(x_i,n,x_i) \to (x,n,x)$ and by Proposition \ref{prop:3} \( \chi_i(x_i,n,x_i) \to \chi(x,n,x). \) On the other hand we also know $(y_i,n,y_i) \to (x,n,x)$ so that, using \eqref{eq:5} and Proposition \ref{prop:3}, \[ (y_i,n_i,x_i)\cdot \chi_i(y_i,n,y_i) = \chi_i(x_i,n,x_i) \to \omega(x,n,x). \] This implies that $\chi(x,n,x)= \omega(x,n,x)$. Hence $\chi = \omega$ and condition (c) is automatically satisfied. Put together this shows that the graph groupoid algebra, and therefore the graph algebra, will have Hausdorff spectrum if and only if \begin{itemize} \item no cycle has an entry and, \item given non-shift equivalent paths $x$ and $y$ we can find vertices $u$ and $v$ such that there is a path from a vertex on $x$ to $u$, a path from a vertex on $y$ to $v$, and there is no vertex $w$ which has a path to both $u$ and $v$. \end{itemize} \end{example} One annoyance of Theorem \ref{thm:groupoidresult} is that condition (c) requires us to deal with the dual stabilizer groupoid. Using the same technique as the proof of Theorem \ref{thm:groupoidresult} one can show that if $G^{(0)}/G$ is Hausdorff and if condition (c) holds for sequences in $S$ (not $\widehat{S}$) then $S/G$ is Hausdorff. This raises the question of whether $\widehat{S}/G$ is Hausdorff if and only if $S/G$ is Hausdorff, which is interesting in its own right. Similar to the previous section we find that this question can be answered in the affirmative in the $T_0$ and $T_1$ cases. More specifically, using the topological argument given in Proposition \ref{prop:1}, one can prove the following result. \begin{prop} \label{prop:2} Let $G$ be a second countable, locally compact Hausdorff groupoid with continuously varying abelian stabilizers. 
Then either $G^{(0)}/G$, $S/G$, and $\widehat{S}/G$ are all $T_1$ (resp. $T_0$) or none of them is $T_1$ (resp. $T_0$). \end{prop} Unfortunately, again similar to the previous section, this proposition doesn't extend to the Hausdorff case either, as we demonstrate below. This example also shows that it is not enough to verify (c) on $S$ and that working with the dual is necessary. \begin{example} Let $H$, $Y$ and $G$ be as in Example \ref{ex:3}. Recall that we have already shown that in this case $\widehat{S}/G$ is not Hausdorff. The computations from Example \ref{ex:3} also show that condition (c) does not hold on $\widehat{S}$. Now we will show that $S/G$ is Hausdorff and that $S$ does satisfy condition (c). First, given $((q,n),y)\in G$ and $(r,x)\in S$ a computation similar to the one preceding \eqref{eq:1} shows that \begin{equation} \label{eq:151} ((q,n),y)\cdot (r,x) = (r2^n,y). \end{equation} Suppose $[s_i]\rightarrow [s]$ and $[s_i]\rightarrow [t]$ in $S/G$. Since $Y/G$ is Hausdorff we can follow the same argument given in Theorem \ref{thm:groupoidresult} to pass to subsequences, choose new representatives, and find $\gamma_i\in G$ so that $s_i\rightarrow s$ and $\gamma_i\cdot s_i \rightarrow t$ where $s,t\in S_u$. In particular this implies $s = (r,u)$ and $t = (q,u)$ for $r,q\in\mathbb{Q}$. Suppose $s_i = (r_i,x_i)$ and $\gamma_i = ((p_i,n_i),y_i)$. Then it follows from \eqref{eq:151} that $\gamma_i\cdot s_i = (r_i 2^{n_i},y_i)$. Hence $r_i\rightarrow r$ and $r_i2^{n_i}\rightarrow q$. However, we gave $\mathbb{Q}_D$ the discrete topology so that, eventually, \( q = 2^{n_i}r_i = 2^{n_i}r. \) Now, if either $r = 0$ or $q=0$ then $s=t$. If $r,q\ne 0$ we know that eventually $n_i = n =\log_2(q/r)$. We may as well pass to a subsequence and assume this is always true. But then $n_i \cdot x_i \rightarrow n\cdot x$. However, we also have $n_i\cdot x_i = \gamma_i\cdot x_i = y_i \rightarrow x$. Thus $n\cdot x = x$.
But the action of $\mathbb{Z}$ is free which implies $n=0$. Thus $\log_2(q/r) = 0$ and $q=r$. It follows that $s=t$ and that $S/G$ is Hausdorff. What is more, the above argument also shows that condition (c) holds for sequences in $S$. \end{example}
\section{Introduction} Quantum Phase Estimation (QPE) plays a core role in many quantum algorithms \cite{Hallgren:02, Shor:94, Shor:05, Szegedy:04, WCNA:09}. Some interesting algebraic and theoretic problems can be addressed by QPE, such as prime factorization \cite{Shor:94}, discrete-log finding \cite{Shor:05}, and order finding. \begin{problem}{\bf [Phase Estimation]} Let $U$ be a unitary matrix with eigenvalue $e^{2\pi i \varphi}$ and corresponding eigenvector $|u\>$. Assume only a single copy of $\ket{u}$ is available; the goal is to find $\widetilde{\varphi}$ such that \begin{equation} \Pr(|\widetilde{\varphi}-\varphi|<\frac{1}{2^{n}})> 1-c, \end{equation} where $c$ is a constant less than $\frac{1}{2}$. \end{problem} In this paper we investigate a more general approach to the QPE algorithm. This approach completes the transition from Kitaev's original approach, which requires no controlled phase shift operators, to QPE with the approximate quantum Fourier transform (AQFT). The standard QPE algorithm utilizes the complete version of the inverse QFT. The disadvantage of the standard phase estimation algorithm is the high degree of phase shift operators required. Since implementing exponentially small phase shift operators is costly or physically not feasible, we need an alternative way to use lower precision operators. This was the motivation for introducing the AQFT: lowering the cost of implementation while preserving a high success probability. In the AQFT the number of required phase shift operators drops significantly, at the cost of a lower success probability. Such a compromise demands repeating the process extra times to achieve the final result. The QPE algorithm has a success probability of at least $\frac{8}{\pi^2}$ \cite{KLM:07}. Phase estimation using the AQFT instead, with phase shift operators up to degree $m$ where $m>\log_{2}(n) +2$, has success probability at least $\frac{4}{\pi^2}-\frac{1}{4n}$ \cite{BEST:96, Cheung:04}.
On the other hand, Kitaev's original approach requires only the first phase shift operator (as a single qubit gate, not controlled). Comparing the existing methods, there is a gap between Kitaev's original approach and QPE with the AQFT in terms of the degree of phase shift operators needed. In this paper our goal is to fill this gap and introduce a more general phase estimation algorithm, so that it is possible to realize a phase estimation algorithm with any degree of phase shift operators in hand. In physical implementations of the phase estimation algorithm, the depth of the circuit should be small to avoid decoherence. Also, higher degree phase shift operators are costly to implement and in many cases are not physically feasible. In this paper, we assume only one copy of the eigenvector $\ket{u}$ is available. This implies a restriction on the use of controlled-$U$ gates: all controlled-$U$ gates should be applied on one register. Thus, the entire process is a single circuit that cannot be divided into parallel processes. Due to results by Griffiths and Niu, who introduced the semiclassical quantum Fourier transform \cite{GN:96}, quantum circuits implementing the different approaches discussed in this paper would require the same number of qubits. This paper is organized as follows. In Sec.~\ref{KnownApproaches} we give a brief overview of existing approaches, such as Kitaev's original algorithm and the standard phase estimation algorithm based on the QFT and AQFT. In Sec.~\ref{OurApproach} we introduce our new approach and discuss the requirements to achieve the same performance output (success probability) as the methods above. Finally, we make our conclusion and compare with other methods. \section{Quantum phase estimation algorithms}\label{KnownApproaches} \subsection{Kitaev's original approach}\label{sec:kitaev} Kitaev's original approach is one of the first quantum algorithms for estimating the phase of a unitary matrix \cite{KSV:02}.
Let $U$ be a unitary matrix with eigenvalue $e^{2\pi i \varphi}$ and corresponding eigenvector $\ket{u}$ such that \begin{equation} U\ket{u}=e^{2\pi i \varphi}\ket{u}. \end{equation} In this approach, a series of Hadamard tests are performed. In each test the phase $2^{k-1}\varphi$ ($1\leq k\leq n$) will be computed up to precision $1/16$. Assume an $n$-bit approximation is desired. Starting from $k=n$, in each step the $k$th bit position is determined consistently from the results of previous steps. For the $k$th bit position, we perform the Hadamard test depicted in Figure~\ref{QPE_with_K_Operator}, where the gate $K=I_2$. Denoting $\varphi_k = 2^{k-1}\varphi$, the probabilities of the post-measurement states are \begin{equation}\label{Eq:P1} \Pr(0|k)= \frac{1 + \cos(2\pi \varphi_k)}{2}, \quad \Pr(1|k) = \frac{1 - \cos(2\pi \varphi_k)}{2}. \end{equation} By iterating the process we obtain more precise estimates of $\varphi_k$ with higher probabilities. However, this does not allow us to distinguish between $\varphi_k$ and $-\varphi_k$. This can be solved by the same Hadamard test in Figure~\ref{QPE_with_K_Operator}, but instead we use the gate \begin{equation} K = \left( {\begin{array}{cc} 1 & 0 \\ 0 & i \\ \end{array} } \right). \end{equation} The probabilities of the post-measurement states based on the modified Hadamard test become \begin{figure} \[ \Qcircuit @C=1em @R=1em { \lstick{\ket{0}} & \gate{H} & \gate{K} &\ctrl{1} &\gate{H} &\qw &\meter\\ \lstick{\ket{u}} & \qw & \qw &\gate{U^{2^{k-1}}} &\qw &\qw & & \lstick{\ket{u}} }\] \caption{Hadamard test with extra phase shift operator.} \label{QPE_with_K_Operator} \end{figure} \begin{equation}\label{Eq:P2} \Pr(0|k)=\frac{1 - \sin(2\pi \varphi_k)}{2}, \quad \Pr(1|k) = \frac{1 + \sin(2\pi \varphi_k)}{2}. \end{equation} Hence, we have enough information to recover $\varphi_k$ from the estimates of the probabilities.
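For concreteness, the recovery of $\varphi_k$ from the two test statistics can be sketched in a few lines of Python (an illustration, not part of Kitaev's formulation; the function name is ours and the two $\Pr(1|k)$ values are assumed to be known exactly):

```python
import math

def recover_phase(p1_plain, p1_k):
    # p1_plain = Pr(1|k) from the plain Hadamard test: (1 - cos(2*pi*phi_k)) / 2
    # p1_k     = Pr(1|k) from the test with the K gate: (1 + sin(2*pi*phi_k)) / 2
    c = 1.0 - 2.0 * p1_plain   # cos(2*pi*phi_k)
    s = 2.0 * p1_k - 1.0       # sin(2*pi*phi_k)
    # atan2 uses the signs of both components, so phi_k and -phi_k
    # are distinguished, unlike a plain inverse cosine
    return (math.atan2(s, c) / (2.0 * math.pi)) % 1.0
```

Using both the sine and cosine estimates in a two-argument arctangent resolves the sign ambiguity noted above.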
In Kitaev's original approach, after performing the Hadamard tests, some classical post-processing is also necessary. Suppose $\varphi = 0.x_1 x_2\ldots x_n$ is an exact $n$-bit binary fraction. If we are able to determine the values of $\varphi$, $2\varphi, \ldots ,$ $2^{n-1} \varphi$ with some constant precision ($1/16$ to be exact), then we can determine $\varphi$ with precision $1/2^n$ efficiently \cite{Kitaev:95, KSV:02}. Starting with $\varphi_n$ we increase the precision of the estimated fraction as we proceed toward $\varphi_1$. The approximated values of $\varphi_k \,(k = n, \ldots, 1)$ will allow us to make the right choices. For $k=1,\ldots,n$ the value of $\varphi_k$ is replaced by $\beta_k$, where $\beta_k$ is the closest number chosen from the set $\{\frac{0}{8},\frac{1}{8},\frac{2}{8},\frac{3}{8},\frac{4}{8},\frac{5}{8},\frac{6}{8},\frac{7}{8} \}$ such that \begin{equation}\label{eqn:pa} |\varphi_k - \beta_k|_{\text{mod 1}} < \frac{1}{8}. \end{equation} The result follows by a simple iteration. Let $\beta_n=\overline{0.x_n x_{n+1} x_{n+2}}$ and proceed by the following iteration: \begin{equation} x_k = \left\{ \begin{array}{l l} 0 & \quad \mbox{if $|\overline{0.0x_{k+1}x_{k+2}} - \beta_{k}|_{\text{mod 1}} < 1/4$ }\\ 1 & \quad \mbox{if $|\overline{0.1x_{k+1}x_{k+2}} - \beta_{k}|_{\text{mod 1}} < 1/4$}\\ \end{array} \right. \end{equation} \noindent for $k= n-1, \ldots, 1$. By simple induction, the result satisfies the following inequality: \begin{equation} |\overline{0.x_1x_2\ldots x_{n+2}} - \varphi|_{\text{mod 1}} < 2^{-(n+2)}. \end{equation} In Eq.~\ref{eqn:pa}, we do not have the exact value of $\varphi_k$. So, we have to estimate this value and use the estimate to find $\beta_k$. Let $\widetilde{\varphi_k}$ be the estimated value and \begin{equation} \epsilon=|\widetilde{\varphi_k}-\varphi_k|_{\text{mod 1}} \end{equation} be the estimation error. Now we use the estimate to find the closest $\beta_k$.
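The iteration above can be sketched as follows (an illustrative implementation; the function name and the convention that `betas[k]` holds $\beta_{k+1}$ are ours, and each $\beta_k$ is assumed to satisfy Eq.~\ref{eqn:pa}):

```python
def kitaev_postprocess(betas):
    # betas[k] (k = 0 .. n-1) is beta_{k+1}: the multiple of 1/8 closest
    # to the estimate of 2^k * phi (mod 1).  Returns the bits x_1 .. x_n.
    n = len(betas)
    # beta_n fixes the three bits x_n x_{n+1} x_{n+2}.
    tail = round(betas[-1] * 8) % 8
    bits = [(tail >> 2) & 1, (tail >> 1) & 1, tail & 1]
    for k in range(n - 2, -1, -1):          # recover x_{k+1} from beta_{k+1}
        frac = bits[0] / 4 + bits[1] / 8    # 0.0 x_{k+2} x_{k+3}
        for b in (0, 1):
            # circle distance mod 1 between 0.b x_{k+2} x_{k+3} and beta
            d = abs((b / 2 + frac) - betas[k]) % 1.0
            if min(d, 1.0 - d) < 0.25:
                bits.insert(0, b)
                break
    return bits[:n]
```

Because the two candidate fractions differ by exactly $1/2$, at most one of them can lie within $1/4$ of $\beta_k$ modulo 1, so the choice of each bit is unambiguous.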
Since we know the exact binary representation of the estimate $\widetilde{\varphi_k}$, we can choose $\beta_k$ such that \begin{equation} |\widetilde{\varphi_k}-\beta_k|_{\text{mod 1}}\leq\frac{1}{16}. \end{equation} By the triangle inequality we have \begin{equation} |\varphi_k-\beta_k|_{\text{mod 1}}\leq |\widetilde{\varphi_k}-\varphi_k| _{\text{mod 1}}+ |\widetilde{\varphi_k}-\beta_k|_{\text{mod 1}}\leq\epsilon +\frac{1}{16}. \end{equation} To satisfy Eq.~\ref{eqn:pa}, we need to have $\epsilon<1/16$, which implies \begin{equation}\label{eqn:16} |\widetilde{\varphi_k}-\varphi_k|_{\text{mod 1}}<\frac{1}{16}. \end{equation} Therefore, it is required for the phase to be estimated with precision $1/16$ at each stage. In the first Hadamard test (Eq.~\ref{Eq:P1}), in order to estimate $\Pr(1|k)$ an iteration of Hadamard tests should be applied to obtain the required precision of $1/16$ for $\varphi_k$. This is done by counting the number of states $\ket{1}$ among the post-measurement states and dividing that number by the total number of iterations performed. The Hadamard test outputs $\ket{0}$ or $\ket{1}$ with a fixed probability. We can model an iteration of Hadamard tests as Bernoulli trials with success probability (obtaining $\ket{1}$) $p_k$. The best estimate for the probability of obtaining the post-measurement state $\ket{1}$ with $t$ samples is \begin{equation}\label{eqn:est1} \widetilde{p_k}=\frac{h}{t}, \end{equation} where $h$ is the number of ones in $t$ trials. This can be proved by Maximum Likelihood Estimation (MLE) methods \cite{HS:98}. In order to find $\sin(2\pi\varphi_k)$ and $\cos(2\pi\varphi_k)$, we can use the estimates of the probabilities in Eq.~\ref{Eq:P1} and Eq.~\ref{Eq:P2}. Let $s_k$ be the estimate of $\sin(2\pi\varphi_k)$ and $t_k$ the estimate of $\cos(2\pi\varphi_k)$.
It is clear that if \begin{equation} |\widetilde{p_k}-p_k|<\epsilon_0, \end{equation} then \begin{equation} |s_k-\sin(2\pi\varphi_k)|<2\epsilon_0,\quad |t_k-\cos(2\pi\varphi_k)|<2\epsilon_0. \end{equation} Since the inverse tangent function is more robust to error than the inverse sine or cosine functions, we use \begin{equation} \widetilde{\varphi_k}=\frac{1}{2\pi}\arctan\left(\frac{s_k}{t_k}\right) \end{equation} as the estimate of $\varphi_k$. By Eq.~\ref{eqn:16} we should have \begin{equation} \left|\varphi_k-\frac{1}{2\pi}\arctan\left(\frac{s_k}{t_k}\right)\right|_{\text{mod 1}}<\frac{1}{16}. \end{equation} The inverse tangent function cannot distinguish between the two values $\varphi_k$ and $\varphi_k \pm 1/2$. However, because we find estimates of the sine and cosine functions as well, it is easy to determine the correct value. The inverse tangent function is most susceptible to error when $\varphi_k$ is in the neighborhood of zero, because its derivative is maximized at zero. Thus, if \begin{equation} |s_k-\sin(2\pi\varphi_k)|=\epsilon_1\quad\text{and}\quad |t_k-\cos(2\pi\varphi_k)|=\epsilon_2, \end{equation} considering the case where $\varphi_k=0$, we have \begin{equation} \frac{1}{2\pi}\left|\arctan\left(\frac{\epsilon_1}{1\pm\epsilon_2}\right)\right|<\frac{1}{16}. \end{equation} By simplifying the above inequality, we have \begin{equation} \left|\frac{\epsilon_1}{1\pm\epsilon_2}\right|<\tan(\frac{\pi}{8}). \end{equation} The inequality above is always satisfied under the following upper bounds for $\epsilon_1$ and $\epsilon_2$: \begin{equation} |\epsilon_1|<1-\frac{1}{\sqrt{2}}\quad\text{and}\quad |\epsilon_2|<1-\frac{1}{\sqrt{2}}. \end{equation} Therefore, in order to estimate the phase $\varphi_k$ with precision $1/16$, the probabilities in Eq.~\ref{Eq:P1} and Eq.~\ref{Eq:P2} should be estimated with error at most $(2-\sqrt{2})/4$, which is approximately 0.1464.
In other words, it is necessary to find the estimate of $\Pr(1|k)$ such that \begin{equation} \left|\Pr(1|k)-\frac{h}{t}\right|<\frac{2-\sqrt{2}}{4}\approx 0.1464. \end{equation} There are different ways we can guarantee an error bound with constant probability. The first method, used in \cite{KSV:02}, is based on the Chernoff bound. Let $X_1,\ldots,X_m$ be Bernoulli random variables; by Chernoff's bound we have \begin{equation}\label{eqn:chernoff} \mathrm{Pr}\left(\left|\frac{1}{m}\sum_{i=1}^{m}X_i-p_k\right|\geq \delta\right)\leq 2e^{-2\delta^2 m}, \end{equation} where in our case the estimate is $\widetilde{p_k}=\frac{1}{m}\sum_{i=1}^{m}X_i$. Since we need an accuracy up to $0.1464$, we get \begin{equation}\label{eqn:est2} \mathrm{Pr}\left(|\widetilde{p_k}-p_k|> 0.1464\right)< 2e^{-(0.0429)m}. \end{equation} In order to obtain \begin{equation}\label{eqn:est3} \mathrm{Pr}\left(\left|\widetilde{p_k}-p_k\right|< 0.1464\right)> 1-\frac{\varepsilon}{2}, \end{equation} a minimum of $m_1$ trials is sufficient, where \begin{eqnarray}\label{eqn:est4} m_1&\approx&24\ln \frac{4}{\varepsilon}\nonumber\\ &\approx &33+24\ln \frac{1}{\varepsilon}. \end{eqnarray} This is the number of trials for each Hadamard test, and we have two Hadamard tests at each stage. Therefore, in order to have \begin{equation} \mathrm{Pr}\left(|\widetilde{\varphi_k}-\varphi_k|< \frac{1}{16}\right)> 1-\varepsilon, \end{equation} we require a minimum of \begin{eqnarray} m &= & 2m_1\nonumber\\ &\approx & 47\ln \frac{4}{\varepsilon}\nonumber\\ &\approx & 66+47\ln \frac{1}{\varepsilon} \end{eqnarray} many trials. In the analysis above, we used the Chernoff bound, which is not a tight bound. If we want to obtain the result with high probability, we need to apply a large number of Hadamard tests. In this case, we can use an alternative method to analyze the process by employing methods of statistics \cite{Sivia:96}.
Iterations of Hadamard tests have a binomial distribution, which can be approximated by a normal distribution. This is a good approximation when $p$ is close to $1/2$ or when $mp>10$ and $m(1-p)>10$, where $m$ is the number of iterations and $p$ the success probability. In other words, if we see $10$ successes and $10$ failures in our process, we can use this approximation to obtain a better bound. In Kitaev's algorithm each Hadamard test has to be repeated a sufficient number of times to achieve the required accuracy with high probability. Because only one copy of $\ket{u}$ is available, all controlled-$U$ gates have to be applied to one register. Therefore, all the Hadamard tests have to be performed in sequence, rather than in parallel, during one run of the circuit. A good example of this case is the order finding algorithm. We refer the reader to \cite{NC:00} for more details. In Kitaev's approach, there are $n$ different Hadamard tests that should be performed. Thus, if the probability of error in each Hadamard test is $\varepsilon_0$, by applying the union bound, the error probability of the entire process is at most $\varepsilon=n\varepsilon_0$. Therefore, in order to obtain \begin{equation} \Pr(|\varphi-\widetilde{\varphi}|<\frac{1}{2^n})> 1-\varepsilon, \end{equation} for approximating each bit we need $m$ trials where \begin{equation}\label{m:kitaev} m =47\ln \frac{4n}{\varepsilon}. \end{equation} Since all of these trials have to be done in one circuit, the circuit consists of $mn$ Hadamard tests. Therefore the circuit involves $mn$ controlled-$U^{2^k}$ operations. As a result, if a constant success probability is desired, the depth of the circuit will be $O(n\log n)$. \subsection{Approach based on QFT}\label{StandardPE} One of the standard methods to approximate the phase of a unitary matrix is QPE based on the QFT. The structure of this method is depicted in Figure~\ref{QPEfig}. The QPE algorithm requires two registers and contains two stages.
If an $n$-bit approximation of the phase $\varphi$ is desired, then the first register is prepared as a composition of $n$ qubits initialized in the state $|0\>$. The second register is initially prepared in the state $\ket{u}$. The first stage prepares a uniform superposition over all possible states and then applies controlled-$U^{2^k}$ operations. Consequently, the state will become \begin{equation}\label{stateStage1} \frac{1}{2^{n/2}}\sum_{k=0}^{2^n-1}e^{2 \pi i \varphi k}|k\>. \end{equation} The second stage in the QPE algorithm is the QFT$^\dag$ operation. \begin{figure} \[\scalebox{1}{ \Qcircuit @C=.7em @R=0.7em { \lstick{|0\>} & \qw & \gate{H} & \qw & \qw & \qw & & & \qw & \ctrl{4} & \qw & \multigate{3}{{\rm QFT}^\dagger} & \qw \\ & & & \vdots & & & \cdots & & & & & & \\ \lstick{|0\>} & \qw & \gate{H} & \qw & \ctrl{2} & \qw & & & \qw & \qw & \qw & \ghost{{\rm QFT}^\dagger} & \qw \\ \lstick{|0\>} & \qw & \gate{H} & \ctrl{1} & \qw & \qw & & & \qw & \qw & \qw & \ghost{{\rm QFT}^\dagger} & \qw \\ \lstick{|u\>} & \qw & \qw & \gate{U^{2^0}} & \gate{U^{2^1}} & \qw & & & \qw & \gate{U^{2^{n-1}}} & \qw & \qw & \qw } }\] \caption {Standard Quantum Phase Estimation.} \label{QPEfig} \end{figure} There are different ways to interpret the inverse Fourier transform. In the QPE algorithm, the post-measurement state of each qubit in the first register represents a bit in the final approximated binary fraction of the phase. Therefore, we can consider computing each bit as a step. 
The inverse Fourier transform can be interpreted such that at each step (starting from the least significant bit), using the information from previous steps, it transforms the state \begin{equation} \frac{1}{\sqrt{2}}(\ket{0}+e^{2\pi i 2^k\varphi}\ket{1}) \end{equation} to get closer to one of the states \begin{eqnarray}\frac{1}{\sqrt{2}}(\ket{0}+e^{2\pi i 0.0}\ket{1})&=&\frac{1}{\sqrt{2}}(\ket{0}+\ket{1}) \nonumber\\ &\text{or}&\notag\\ \frac{1}{\sqrt{2}}(\ket{0}+e^{2\pi i 0.1}\ket{1})&=& \frac{1}{\sqrt{2}}(\ket{0}-\ket{1}). \end{eqnarray} Assume we are at step $k$ in the first stage. By applying controlled-$U^{2^k}$ operators, due to phase kickback we obtain the state \begin{equation}\label{state1} \frac{\ket{0}+e^{2\pi i 0.x_{k+1}x_{k+2}\ldots x_n}\ket{1}}{\sqrt{2}}. \end{equation} As shown in Figure~\ref{fig:InverseQFT_3qubit}, each step (dashed-line box) uses the result of previous steps, where the phase shift operators are defined as \begin{equation}\label{eq:phaseshift} R_k \equiv \left[ {\begin{array}{cc} 1 & 0 \\ 0 & e^{2\pi i/2^k} \\ \end{array} } \right] \end{equation} for $2\leq k \leq n$. \begin{figure} \[\scalebox{1}{ \Qcircuit @C=.4em @R=1em { \lstick{|y_3\>} &\qw &\gate{H} &\qw &\qw &\qw &\qw &\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\ctrl{2} &\qw &\qw &\qw&\qw & \rstick{|x_3\>} \\ \lstick{|y_2\>} &\qw &\qw &\qw &\qw &\qw&\qw &\gate{R_2^{-1}} &\qw &\gate{H} &\qw &\qw &\qw &\qw &\ctrl{1} &\qw &\qw &\qw & \rstick{|x_2\>} \\ \lstick{|y_1\>} &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw&\gate{R_3^{-1}} &\gate{R_2^{-1}} &\qw &\gate{H} &\qw & \rstick{|x_1\>} \gategroup{1}{3}{1}{3}{1.1em}{--} \gategroup{1}{8}{2}{10}{1.3em}{--} \gategroup{1}{14}{3}{17}{1.1em}{--} } } \] \caption{3-qubit inverse QFT where $1 \leq i \leq 3$, $|y_i\>=\frac{1}{\sqrt{2}}(\left|0\right>+e^{2\pi i(0.x_i\ldots x_3)}\left|1\right>)$.
} \label{fig:InverseQFT_3qubit} \end{figure} By using the previously determined bits $x_{k+2},\ldots, x_n$ and the action of the corresponding controlled phase shift operators (as depicted in Figure~\ref{fig:InverseQFT_3qubit}), the state in Eq.~\ref{state1} becomes \begin{equation} \frac{\ket{0}+e^{2\pi i 0.x_{k+1}0\ldots 0}\ket{1}}{\sqrt{2}}=\frac{\ket{0}+(-1)^{x_{k+1}}\ket{1}}{\sqrt{2}}. \end{equation} Thus, by applying a Hadamard gate to the state above we obtain $\ket{x_{k+1}}$. Therefore, we can consider the inverse Fourier transform as a series of Hadamard tests. If $\varphi$ has an exact $n$-bit binary representation, the success probability at each step is $1$. If $\varphi$ cannot be expressed exactly as an $n$-bit binary fraction, the success probability $P$ of the post-measurement state at step $k$ is \begin{equation} P=\cos^2(\pi \theta) \quad \text{for} \quad |\theta|<\frac{1}{2^{k+1}}. \end{equation} A detailed analysis obtaining similar probabilities is given in Sec.~\ref{OurApproach}. Therefore, the success probability increases as we proceed. The following theorem gives the success probability of the QPE algorithm. \begin{theorem}[\cite{KLM:07}]\label{PElowerbound} If $\frac{x}{2^n} \le \varphi \le \frac{x+1}{2^n}$, then the phase estimation algorithm returns one of $x$ or $x+1$ with probability at least $\frac{8}{\pi^2}$.
\end{theorem} \subsection{Approach based on AQFT}\label{sec:AQFT} \begin{figure}[t] \[\scalebox{1}{ \Qcircuit @C=0.7em @R=0.7em { \lstick{|x_1\>} & \gate{H} & \gate{R_2} & \qw &\cdots & &\gate{R_{m-1}} & \qw & \gate{R_m} &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &&& &\lstick{|y_1\>} \\ \lstick{|x_2\>} & \qw & \ctrl{-1} & \qw &\qw & \qw & \qw & \qw & \qw &\gate{H} &\qw &\cdots & &\gate{R_{m-1}} &\gate{R_{m}} &\qw &\qw &\qw &\qw &&& &\lstick{|y_2\>} \\ & & \vdots & &\cdots & &\ctrl{-2} & \qw & \cdots & & & & & & & & & & &&& &\\ & & \vdots & & & & & & \ctrl{-3} & \qw &\qw &\cdots & & \ctrl{-2} & \qw &\qw & \cdots & & &&& &\\ & & \vdots & & & & & & & & & & \cdots & & \ctrl{-3} &\qw &\cdots & & &&& &\\ \lstick{|x_{n-1}\>} & \qw & \qw & \qw &\cdots & & \qw & \qw & \qw &\qw &\qw &\qw &\qw &\qw &\qw &\gate{H} &\gate{R_2} &\qw &\qw &&&& &\lstick{|y_{n-1}\>} \\ \lstick{|x_n\>} & \qw & \qw & \qw &\cdots & & \qw & \qw & \qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\ctrl{-1} &\gate{H} &\qw &&& &\lstick{|y_n\>} } }\] \caption{Quantum circuit for AQFT.} \label{AQFT} \end{figure} The AQFT was first introduced by Barenco et al.~\cite{BEST:96}. It is advantageous in algorithms that involve periodicity estimation. Its structure is similar to the regular QFT but differs by eliminating higher precision phase shift operators. The circuit of the AQFT is shown in Figure~\ref{AQFT}. On the right-hand side of the circuit, for $n-m < i \leq n$, \begin{equation} \ket{y_i} = \frac{1}{\sqrt{2}}(\left|0\right>+e^{2\pi i(0.x_i\ldots x_n)}\left|1\right>) \end{equation} and for $1<i \leq n-m$, \begin{equation} \ket{y_i}= \frac{1}{\sqrt{2}} (\left|0\right>+e^{2\pi i(0.x_i\ldots x_{i+m-1})}\left|1\right>). \end{equation} Let $0.x_1x_2\ldots x_n$ be the binary representation of the eigenphase $\varphi$. For estimating each $x_p$, where $1 \leq p \leq n$, AQFT$_m$ requires at most $m$ phase shift operations. Here $m$ is defined as the degree of the AQFT$_m$.
Therefore, the phase shift operators in AQFT$_m$ require precision only up to $e^{2 \pi i /2^m}$. The probability $P$ of obtaining an accurate output using AQFT$_m$, when $ m \geq \log_2 n + 2$, is at least \cite{BEST:96} \begin{equation}\label{eq1} P \geq \frac{8}{\pi^2}(\sin^2(\frac{\pi}{4}\frac{m}{n})). \end{equation} The accuracy of AQFT$_{m}$ approaches the lower bound for the accuracy of the full QFT, which is $\frac{8}{\pi^2}$. A better lower bound was achieved by Cheung in \cite{Cheung:04}: \begin{equation} P \geq \frac{4}{\pi^2}-\frac{1}{4n}. \end{equation} Moreover, this indicates that the logarithmic-depth AQFT provides an alternative approach to replace the regular QFT in many quantum algorithms. The total number of phase shift operator invocations in AQFT$_m$ is $O(n \log_2 n)$, instead of $O(n^2)$ in the QFT. The phase shift operator precision requirement is only up to $e^{2 \pi i /4n}$, instead of $e^{2 \pi i /2^n}$. By using the AQFT instead of the QFT, we trade a smaller success probability for lower-degree phase shift operators and a shorter circuit. \section{New approach with constant degree phase shift operators }\label{OurApproach} In this section we introduce our new approach for QPE. Our approach draws a trade-off between the highest degree of phase shift operators being used and the depth of the circuit. As a result, when smaller degrees of phase shift operators are used, the depth of the circuit increases, and vice versa. As pointed out in Sec.~\ref{StandardPE}, by using information from previous qubits, the full-fledged inverse QFT transforms the phase such that the phase of the corresponding qubit gets closer to one of the states $\ket{+}$ or $\ket{-}$. For our approach, we first consider the case where only the controlled phase shift operators $R_2$ and $R_3$ are used (Eq.~\ref{eq:phaseshift}). In this case, we only use the information of the two previous qubits (see Figure~\ref{QFT_Ours}).
In such a setting, we show that it is possible to perform the QPE algorithm with arbitrary success probability. \begin{figure} \[\scalebox{0.85}{ \Qcircuit @C=0.7em @R=0.7em { \lstick{|y_n\>} &\gate{H} &\ctrl{1} &\qw &\ctrl{2} &\qw &\qw &\qw & \qw & \qw & \qw &\qw & \qw & \qw &\qw &&& &\lstick{|x_n\>} \\ \lstick{|y_{n-1}\>} &\qw &\gate{R^{-1}_2} &\gate{H} &\qw &\ctrl{1} &\qw &\ctrl{2} &\qw & \qw & \qw & \qw & \qw & \qw &\qw &&&& &\lstick{|x_{n-1}\>} \\ \lstick{|y_{n-2}\>} &\qw &\qw & \qw &\gate{R^{-1}_{3}} &\gate{R^{-1}_{2}} &\gate{H} & \qw & \ctrl{1} & \qw & \qw &\qw & \qw & \qw &\qw &&&& &\lstick{|x_{n-2}\>}\\ \lstick{|y_{n-3}\>} &\qw &\qw &\qw & \qw & \qw & \qw &\gate{R^{-1}_{3}} &\gate{R^{-1}_{2}} &\gate{H} &\qw & \qw &\qw & \qw &\qw &&&& &\lstick{|x_{n-3}\>}\\ & & \vdots & & & & & & &\cdots & & \ctrl{2} & \qw & &\vdots & \\ & &\vdots & & \cdots & & & & &\qw &\qw &\qw &\ctrl{1} &\qw & &&& & \\ \lstick{|y_{1}\>} &\qw &\qw &\qw &\qw & \qw & \qw & \qw & \qw &\qw &\qw &\gate{R^{-1}_{3}} &\gate{R^{-1}_{2}} &\gate{H} &\qw &&& &\lstick{|x_1\>} } }\] \caption{QPE with only two controlled phase shift operations.} \label{QFT_Ours} \end{figure} The first stage of our algorithm is similar to the first stage of QPE based on QFT. Assume the phase is $\varphi=0.x_1x_2x_3\ldots$ with an infinite binary representation. 
At step $k$, the phase after the action of the controlled gate $U^{2^k}$ is $2^k \varphi=0.x_{k+1}x_{k+2}\ldots$ and the corresponding state is \begin{equation}\ket{\psi_k}=\frac{1}{\sqrt{2}}(\ket{0}+e^{2\pi i 2^k\varphi}\ket{1}).\end{equation} By applying the controlled phase shift operators $R_2$ (controlled by the $(k-1)$th qubit) and $R_3$ (controlled by the $(k-2)$th qubit) to the state above, we obtain \begin{equation}\ket{\widetilde{\psi_k}}=\frac{1}{\sqrt{2}}(\ket{0}+e^{2\pi i \widetilde{\varphi}}\ket{1}),\end{equation} where \begin{equation}\widetilde{\varphi}=0.x_{k+1}00x_{k+4}\ldots.\end{equation} It is easy to see that \begin{equation}|\widetilde{\varphi}-0.x_{k+1}|<\frac{1}{8}.\end{equation} Hence, we can write \begin{equation}\widetilde{\varphi}=0.x_{k+1}+\theta,\end{equation} where $|\theta|<\frac{1}{8}$. Therefore, the state $\ket{\widetilde{\psi_k}}$ can be rewritten as \begin{equation}\ket{\widetilde{\psi_k}}=\frac{1}{\sqrt{2}}(\ket{0}+e^{2\pi i (0.x_{k+1}+\theta)}\ket{1}).\end{equation} In order to approximate the phase $\varphi$ at this stage (the $k$th step), we need to find the value of $x_{k+1}$ by measuring the $k$th qubit. To this end, we first apply a Hadamard gate to the state $\ket{\widetilde{\psi_k}}$ before the measurement. The post-measurement state will determine the value of $x_{k+1}$ correctly with high probability. The post-measurement probabilities of obtaining $\ket{0}$ or $\ket{1}$ in the case where $x_{k+1}=0$ are \begin{eqnarray} \Pr(0|k)&=&\cos^2(\pi\theta)\nonumber \\ \Pr(1|k)&=&\sin^2(\pi\theta). \end{eqnarray} Therefore, \begin{eqnarray} \Pr(0|k)&\geq&\cos^2(\frac{\pi}{8} )\approx 0.85 \nonumber \\ \Pr(1|k)&\leq&\sin^2(\frac{\pi}{8} )\approx 0.15. \end{eqnarray} In the case where $x_{k+1}=1$, the success probability is similar. By iterating this process a sufficient number of times and then letting the majority decide, we can achieve any desired accuracy. The analysis is similar to Sec.~\ref{sec:kitaev}.
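Since each trial identifies $x_{k+1}$ correctly with probability at least $\cos^2(\pi/8)\approx 0.85$, the effect of the majority vote can be checked with an exact binomial sum (a numerical sketch; the function name is ours):

```python
import math

# Worst-case single-trial success probability for the new approach
P_TRIAL = math.cos(math.pi / 8) ** 2   # about 0.8536

def majority_success(p, m):
    # Probability that the majority of m independent trials (m odd,
    # to avoid ties) reports the correct bit, where each trial is
    # correct with probability p: the upper binomial tail beyond m/2.
    return sum(math.comb(m, k) * p ** k * (1 - p) ** (m - k)
               for k in range(m // 2 + 1, m + 1))
```

Already at 15 trials per bit the failure probability drops well below one percent, which is the quantitative content of the Chernoff estimate below.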
In this case, all we require is to find the majority. Therefore, a simple application of the Chernoff bound gives \begin{equation} \mathrm{Pr}\left(\frac{1}{m}\sum_{i=1}^{m}X_i\leq \frac{1}{2}\right)\leq e^{-2m(p-\frac{1}{2})^2 }, \end{equation} where $X_i$ is the indicator of success in the $i$th trial and, in this case, $p=\cos^2(\pi/8)$. It is easy to see that if a success probability of $1-\varepsilon$ is required, then we need at least \begin{equation}\label{m:ours} m=4\ln(\frac{1}{\varepsilon}) \end{equation} many trials for approximating each bit. By comparing Eq.~\ref{m:kitaev} and Eq.~\ref{m:ours} (Table~\ref{table:3}), we see that, while preserving the success probability, our new algorithm differs only by a constant, requiring about 12 times fewer Hadamard tests than Kitaev's original approach (Figure \ref{KitaevVSOurs}). In physical implementations this is very important, especially in the case where only one copy of the eigenvector $\ket{u}$ is available and all Hadamard tests must be performed during one run of the circuit. \begin{figure}[ht] \begin{center} \includegraphics[scale=1.4]{KitaevVSOurs.eps} \caption{Required trials for estimating each bit in Kitaev's original approach and our new approach.}\label{KitaevVSOurs} \end{center} \end{figure} In the algorithm introduced above, only phase shift operators $R_2$ and $R_3$ are used. When higher phase shift operators are used in our algorithm, the success probability of each Hadamard test will increase. As a result, fewer trials are required in order to achieve similar success probabilities. As pointed out in Sec.~\ref{sec:AQFT}, the QPE based on AQFT requires phase shift operators of degree at least $2 +\log n$. With this precision of phase shift operators in hand, the success probability at each step would be high enough that there is no need to iterate each step. In such a scenario, one trial is sufficient to achieve an overall success probability of a constant.
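Inverting the Chernoff bound gives the trial count directly. The sketch below (mine, for verification) reproduces the constant-precision column of Table~\ref{table:3}, using the fact that $(p-\tfrac{1}{2})^2=\tfrac{1}{8}$ for $p=\cos^2(\pi/8)$:

```python
import math

P = math.cos(math.pi / 8) ** 2            # per-trial success probability

def trials(eps, p=P):
    """Smallest m with exp(-2 m (p - 1/2)^2) <= eps (inverted Chernoff bound)."""
    return math.ceil(math.log(1 / eps) / (2 * (p - 0.5) ** 2))

# Here (p - 1/2)^2 = 1/8 exactly, so trials(eps) reduces to ceil(4 ln(1/eps)).
ms = [trials(1 - s) for s in (0.5, 0.68269, 0.95450, 0.99730, 0.99993)]
# ms == [3, 5, 13, 24, 39]
```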
\begin{table}[ht] \begin{center} \begin{tabular}{ |c | c|c| } \hline {\bf Success} &{\bf Kitaev's} &{\bf Constant}\\ {\bf Probability} &{\bf Original Approach} & {\bf Precision}\\ \hline 0.50000 & 98& 3 \\ \hline 0.68269 & 120 & 5 \\ \hline 0.95450& 211 & 13 \\ \hline 0.99730& 344 & 24 \\ \hline 0.99993 &515 & 39\\ \hline \end{tabular} \caption{Required trials for estimating each bit by using the Chernoff bound. }\label{table:3} \end{center} \end{table} Recall the phase estimation problem stated in the introduction. If a constant success probability greater than $\frac{1}{2}$ is required, the depth of the circuit for all the methods mentioned in this paper (except the QPE based on the full-fledged QFT, which is $O(n^2)$) would be $O(n\log n)$ (assuming the cost of implementing the controlled-$U^{2^k}$ gates is the same for all $k$). This means the depths of the circuits differ only by a constant. However, the disadvantage of Kitaev's original approach relative to our new approach is the large number of Hadamard tests required for each bit of the approximated fraction. Therefore, the new method introduced in this paper provides the flexibility of using any available degree of controlled phase shift operators while preserving the success probability and the length of the circuit up to a constant. \label{conclusions} \section{Acknowledgments} We would like to thank Pawel Wocjan for useful discussions and Stephen Fulwider for helpful comments. H.~A. and C.~C. gratefully acknowledge the support of NSF grants CCF-0726771 and CCF-0746600. \\
\section{Introduction} An important problem in the social sciences is estimating the effect of a discrete intervention on a continuous outcome over time. When interventions take place at an aggregate level (e.g., a state), researchers make causal inferences by comparing the post-intervention (``post-period'') outcomes of affected (``treated'') units against the outcomes of unaffected (``control'') units. A common approach to the problem is the synthetic control method (SCM) \citep{abadie2010synthetic}, which predicts the counterfactual outcomes of treated units by finding a convex combination of control units that matches the treated units in terms of lagged outcomes or pre-intervention (``pre-period'') covariates. The SCM has several limitations. First, the convexity restriction of the synthetic control estimator precludes dynamic, nonlinear interactions between multiple control units. Intuitively, one can expect that the treated unit may exhibit nonlinear or negative correlations with the control units. \citet{ferman2016revisiting} demonstrate that the convexity restriction implies that the SCM estimator may be biased even if selection into treatment is only correlated with time-invariant unobserved covariates. Second, \citet{ferman2018synthetic} demonstrate that the SCM is generally biased if treatment assignment is correlated with unobserved confounders, even when the number of pre-periods grows. Moreover, the authors show that while the SCM minimizes imbalance in pre-period outcomes, the likelihood of finding exact balancing weights vanishes as the number of time periods increases, which results in bias. While the strength of the SCM lies in its simplicity in setup and implementation, several problems arise from the lack of guidance on how to specify the SCM estimator.
The specification of the estimator can produce very different results: \citet{ferman2018cherry} show, for example, how cherry-picking between common SCM specifications can facilitate $p$-hacking. \citet{kaul2015synthetic} show that the common practice of including lagged outcomes as model inputs can render all other covariates irrelevant. Lastly, \citet{klossner2017comparative} demonstrates that the common practice of using cross-validation to select importance weights can yield multiple values and consequently different results. This paper proposes an alternative to the SCM that is capable of automatically selecting appropriate control units at each time-step, allows for nonconvex combinations of control units, and does not rely on pre-period covariates. The method uses recurrent neural networks (RNNs) to predict the counterfactual outcomes of treated units using only control unit outcomes as model inputs. RNNs are a class of neural networks that take advantage of the sequential nature of temporal data by sharing model parameters across multiple time-steps \citep{el1995}. RNNs are nonparametric in that they do not assume a functional form when fitting the data. In addition, RNNs can learn the most useful nonconvex combination of control unit outcomes at each time-step for generating counterfactual predictions. Relaxing the convexity restriction is useful when the data-generating process underlying the outcome of interest depends nonlinearly on the history of its inputs. RNNs have been shown to outperform various linear models on time-series prediction tasks \citep{cinar2017position}. RNNs are end-to-end trainable and adapt flexibly to a given sequential prediction problem. For example, they are capable of sharing learned parameters across time-steps and multiple treated units.
While the SCM can be generalized to handle multiple treated units \citep[e.g.,][]{dube2015pooling,xu2017generalized}, the generalized SCM is not capable of sharing model weights when predicting the outcomes of multiple treated units. Regularization methods such as dropout can easily be incorporated into RNN architectures to prevent overfitting during the training process, which occurs when the networks learn an overreliance on a few model inputs. The proposed method builds on a new literature that uses machine learning methods for data-driven counterfactual prediction, such as matrix completion \citep{athey2017matrix}, or two-stage estimators that reduce data dimensionality via L1-regularized regression \citep{doudchenko2016balancing,carvalho2018arco} or matrix factorization \citep{amjad2018robust} prior to regressing the outcomes on the reduced data. These methods are data-driven in the sense that they are capable of finding an appropriate subset of control units for comparison in the absence of domain knowledge or pre-period covariates. In the section immediately below, I describe the problem of counterfactual prediction and its relationship to matrix completion and the problem of covariate shift; Section \ref{RNNs-section} introduces the approach of using RNNs for counterfactual prediction; Section \ref{placebo} presents the results of the placebo tests; Section \ref{schooling-app} details the procedure for hypothesis testing and applies the RNN-based method and inferential procedure to the problem of estimating the impact of homestead policy on long-run state government investment in public schooling; Section \ref{conclusion} concludes and offers potential avenues for future research.
\section{Counterfactual prediction} \label{prediction} The proposed method estimates the causal effect of a discrete intervention in observational panel data; i.e., settings in which treatment is not randomly assigned and there exists both pre- and post-period observations of the outcome of interest. Let $\boldsymbol{Y}$ denote a $\text{N} \times \text{T}$ matrix of outcomes for each unit $i =1, \ldots, \text{N}$, at time $t = 1, \ldots, \text{T}$. $\boldsymbol{Y}$ is incomplete because we observe each element $Y_{it}$ for only the control units and the treated units prior to time of initial treatment exposure, $\text{T}_0 < \text{T}$. Let $\mathcal{O}$ denote the set of $(it)$ values that are observed and $\mathcal{M}$ the set of $(it)$ missing values. Let the values of the $\text{N} \times \text{T}$ complete matrix $\boldsymbol{W}$ be $W_{it} =1$ if $(it) \in \mathcal{M}$ and $W_{it} = 0$ if $(it) \in \mathcal{O}$. The pattern of missing data is assumed throughout this paper to follow a simultaneous treatment adoption setting, where treated units are exposed to treatment at time $\text{T}_0$ and every subsequent period. This setup is motivated by the \citet{neyman1923} potential outcomes framework, where for each $it$ value there exists a pair of potential outcomes, $Y_{it}(1)$ and $Y_{it}(0)$, representing the response to treated and control regimes, respectively. The observed outcomes are \begin{align*} Y_{it} = \begin{cases} Y_{it}(0) & \mbox{if } W_{it} = 0 \text{ or } t < \text{T}_0 \\ Y_{it}(1) & \mbox{if } W_{it} = 1 \text{ and } t \geq \text{T}_0. \end{cases} \end{align*} The problem of counterfactual prediction is that we cannot directly observe the missing potential outcomes and instead wish to impute the missing values in $\boldsymbol{Y}(0)$ for treated units with $W_{it} =1$. The potential outcomes framework explicitly assumes unconfoundedness. 
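As a concrete illustration of this missingness pattern (my own sketch; the helper name `make_mask` is not from the paper), the matrix $\boldsymbol{W}$ for simultaneous adoption can be built as:

```python
import numpy as np

def make_mask(n_units, n_periods, treated_units, t0):
    """W[i, t] = 1 for treated units at t >= T0 (counterfactual missing), else 0."""
    W = np.zeros((n_units, n_periods), dtype=int)
    for i in treated_units:
        W[i, t0:] = 1
    return W

W = make_mask(n_units=6, n_periods=10, treated_units=[0, 1, 2], t0=7)
observed = np.argwhere(W == 0)   # the set O of observed (i, t) pairs
missing = np.argwhere(W == 1)    # the set M of entries to impute
# len(missing) == 3 treated units * (10 - 7) post-periods == 9
```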
In an observational setting, this assumption requires $$ \left(\boldsymbol{Y}(0), \boldsymbol{Y}(1) \right) \independent \boldsymbol{W}| \boldsymbol{Y}(\mathcal{O}), $$ where $\boldsymbol{Y}(\mathcal{O})$ is the observed data. The potential outcomes framework also implicitly assumes treatment is well-defined to ensure that each unit has the same number of potential outcomes \citep{imbens2015causal}. It also excludes interference between units, which would undermine the framework by creating more than two potential outcomes per unit, depending on the treatment status of other units \citep{rubin1990}. \subsection{Relationship to matrix completion and covariate shift} The intuition behind the proposed approach to counterfactual prediction is similar to that of the method of matrix completion via nuclear norm minimization (MC-NNM) proposed by \citet{athey2017matrix}. Matrix completion methods attempt to impute missing entries in a low-rank matrix by solving a convex optimization problem via NNM, even when relatively few values are observed in $\boldsymbol{Y}$ \citep{candes2009exact,candes2010matrix}. The estimator recovers a $\text{N} \times \text{T}$ low-rank matrix by minimizing the sum of squared errors via nuclear norm regularized least squares. The estimator reconstructs the matrix by iteratively replacing missing values with those recovered from a singular value decomposition \citep{mazumder2010spectral}. \citet{athey2017matrix} note two drawbacks of MC-NNM. First, the errors may be autocorrelated because the estimator does not account for temporal dependencies in the observed data. The estimator estimates patterns row- and column-wise, but treats the data as perfectly synchronized \citep{yoon2018estimating}. In contrast, the SCM assumes that correlations across units are stable over time, while the RNN-based approach exploits the temporal component of the data and therefore does not have the problem of autocorrelated errors.
Second, the MC-NNM estimator penalizes the errors for each observed value equally without regard to the fact that the probability of missingness (i.e., the propensity score) increases with $t$. \citet{athey2017matrix} suggest weighting the loss function by the propensity score, which is similar to the importance weighting scheme proposed by \citet{cortes2008sample} to address the problem of covariate shift, which is a special case of domain adaptation \citep{huang2007correcting,ben2007analysis,bickel2009discriminative,cortes2010learning,2015arXiv150507818G}. \citet{schnabel2016recommendations} first connected the matrix completion problem with causal inference in observational settings in the context of recommender systems under confounding. \citet{johansson2016learning} formulates the general problem of counterfactual inference as a covariate shift problem. The covariate shift problem occurs when training and test data are drawn from different distributions. For notational ease, define the training set input-output pair as $$\left(\boldsymbol{X}^{\text{train}}, \boldsymbol{Y}^{\text{train}}\right) = \left(\boldsymbol{Y}(\boldsymbol{W})^{\left(t < \text{T}_0\right)}, \boldsymbol{Y}(\boldsymbol{W})^{\left(t \geq \text{T}_0\right)}\right)$$ \noindent for units with $\boldsymbol{W}=0$ and the test set pair $\left(\boldsymbol{X}^{\text{test}}, \boldsymbol{Y}^{\text{test}}\right)$ for units with $\boldsymbol{W}=1$. In the proposed approach, the model weights learned on the training set are applied to $\boldsymbol{X}^{\text{test}}$ to predict $\boldsymbol{Y}^{\text{test}}$. The approach therefore assumes similarity between the distributions of $\boldsymbol{X}^{\text{train}}$ and $\boldsymbol{X}^{\text{test}}$.
In order to minimize the discrepancy between the training and test set input distributions, I estimate the propensity score $\hat{e}_{it} = \Pr(W_{it}=1 | Z_{it})$, conditional on covariate matrix $\boldsymbol{Z}$ and then weight the training loss by the estimated propensity scores. \subsection{Nonparametric regression} In its most basic form, counterfactual prediction can be represented as a nonparametric regression of the training set outputs on the inputs, \begin{equation}\label{eq:np} \boldsymbol{\hat{\boldsymbol{Y}}^{\text{train}}} = \hat{f_0} \left(\boldsymbol{X}^{\text{train}}\right) + \upepsilon^{(t)}, \end{equation} \noindent where the noise variables $\upepsilon^{(t)}$ are assumed to be i.i.d. standard normal and independent of the observed data. The nonlinear function $\hat{f_0}$ is estimated by minimizing the weighted mean squared error on the training set outputs, \begin{equation} \label{eq:mse} \text{WMSE} = \sum \left(\boldsymbol{Y}^{\text{train}} - \boldsymbol{\hat{Y}}^{\text{train}} \right)^2 \cdot \frac{\boldsymbol{\hat{E}}^\text{train}}{|\boldsymbol{X}^\text{train}|}, \end{equation} \noindent where $\boldsymbol{\hat{E}}^\text{train}$ is a matrix of estimated propensity scores. At test time, the estimated function is used to predict $\boldsymbol{\hat{Y}}^{\text{test}} = \hat{f_0} \left(\boldsymbol{X}^{\text{test}}\right)$. The estimated causal effect of the intervention is then \begin{equation}\label{eq:pointwise} \boldsymbol{\hat{\upphi}} = \boldsymbol{Y}^{\text{test}} - \boldsymbol{\hat{Y}}^{\text{test}}. \end{equation} The estimated average causal effect of the intervention on treated units is calculated by averaging over the time dimension, resulting in the vector $\boldsymbol{\bar{\upphi}}^{(t)}$ of length $\text{T}_\star = \text{T}-\text{T}_0$. 
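Eq.~(\ref{eq:mse}) is an elementwise propensity-weighted squared loss. A direct numpy transcription (my own; I read $|\boldsymbol{X}^\text{train}|$ as the number of training entries) is:

```python
import numpy as np

def weighted_mse(y_true, y_pred, e_hat):
    """Propensity-weighted training loss: squared errors scaled elementwise
    by the estimated propensity scores, averaged over the training entries."""
    sq_err = (y_true - y_pred) ** 2
    return float(np.sum(sq_err * e_hat) / y_true.size)
```

With uniform weights the loss reduces to the ordinary MSE; observations with higher estimated propensity of missingness contribute more, mimicking the test-time distribution.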
\section{RNNs for counterfactual prediction} \label{RNNs-section} RNNs \citep{graves2012,goodfellow2016deep} consist of an input $\boldsymbol{X} = \left(\boldsymbol{x}^{(1)}, \ldots, \boldsymbol{x}^{(n_x)}\right)$, an output $\boldsymbol{Y} = \left(\boldsymbol{y}^{(1)}, \ldots, \boldsymbol{y}^{(n_y)}\right)$, and a hidden state $\boldsymbol{h}^{(t)}$. In the plain vanilla RNN it is assumed that $n_x = n_y = T$; in the encoder-decoder network architecture described below, $n_x$ and $n_y$ can vary in length. At each $t$, RNNs input $\boldsymbol{x}^{(t)}$ and pass it to the hidden state $\boldsymbol{h}^{(t)}$, which is updated with a function $g^{(t)}$ using the entire history of the input, which is unfolded backwards in time: \begin{align} \boldsymbol{h}^{(t)} &= g^{(t)} \left(\boldsymbol{x}^{(t)}, \boldsymbol{x}^{(t-1)}, \ldots, \boldsymbol{x}^{(1)} \right) \\ &= f_1 \left( \boldsymbol{h}^{(t-1)}, \boldsymbol{x}^{(t)}; \, \theta \right). \label{eq:hidden} \end{align} The activation function $f_1 (\cdot)$, parameterized by $\theta$, is shared for all $t$. Parameter sharing is particularly useful in the current application because it allows for better generalization when the dimension of the training data is relatively small. The updated hidden state (\ref{eq:hidden}) is used to generate a sequence of values $\boldsymbol{o}^{(t)}$ in the form of log probabilities corresponding to the output. The loss function computes $\boldsymbol{\hat{y}}^{(t)} = f_2 \left(\boldsymbol{o}^{(t)}\right)$ and compares it against the target output. The total loss for the input-output pair is the sum of the losses over all $t$. The RNNs are trained to estimate the conditional distribution of $\boldsymbol{y}^{(t)}$ given the past inputs and also the previous output. This is accomplished by offsetting the input-output pairs by one time-step so that the networks receive $\boldsymbol{y}^{(t)}$ as input at $t + 1$ to be conditioned on for predicting subsequent outputs.
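The shared-parameter recursion (\ref{eq:hidden}) is compact enough to write out. The sketch below is illustrative only, with $\tanh$ standing in for $f_1$ and small dense matrices for $\theta$:

```python
import numpy as np

def rnn_step(h_prev, x_t, Wh, Wx, b):
    """One application of the recursion h_t = f1(h_{t-1}, x_t; theta).
    The same parameters (Wh, Wx, b) are reused at every time-step."""
    return np.tanh(Wh @ h_prev + Wx @ x_t + b)

def run_rnn(xs, Wh, Wx, b, h0):
    """Unfold the recursion over an input sequence; return all hidden states."""
    hs, h = [], h0
    for x_t in xs:
        h = rnn_step(h, x_t, Wh, Wx, b)
        hs.append(h)
    return hs
```

Because `Wh`, `Wx`, and `b` are the same at every step, the parameter count is independent of the sequence length, which is the generalization advantage noted above.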
This popular training procedure is known as teacher forcing because it forces the networks to stay close to the ground-truth output $\boldsymbol{y}^{(t)}$ \citep{lamb2016professor}. Specifically, the RNNs are trained to maximize the log-likelihood \begin{equation} \label{rnn-obj} \text{log} \Pr \left(\boldsymbol{y}^{(t)} | \boldsymbol{x}^{(1)} \ldots \boldsymbol{x}^{(t)},\boldsymbol{y}^{(1)}, \ldots, \boldsymbol{y}^{(t-1)} \right). \end{equation} \subsection{Encoder-decoder networks} Encoder-decoder networks are the standard for neural machine translation (NMT) \citep{cho2014learning,bahdanau2014neural,vinyals2014grammar} and are also widely used for predictive tasks, including speech recognition \citep{chorowski2015attention} and time-series forecasting \citep{zhu2017deep}. The encoder RNN reads in $\boldsymbol{x}^{(t)}$ sequentially and the hidden state of the network updates according to (\ref{eq:hidden}). The hidden state of the encoder is a context vector $\boldsymbol{c}$ that summarizes the input sequence, which is copied over to the decoder RNN. The decoder generates a variable-length output sequence by predicting $\boldsymbol{y}^{(t)}$ given the encoder hidden state and the previous element of the output sequence. Thus, the hidden state of the decoder is updated recursively by \begin{equation} \boldsymbol{h}^{(t)} = f_1 \left( \boldsymbol{h}^{(t-1)}, \boldsymbol{y}^{(t-1)}, \boldsymbol{c}; \theta \right), \label{eq:decoder} \end{equation} and the conditional probability of the next element of the sequence is \begin{equation} \Pr (\boldsymbol{y}^{(t)} | \boldsymbol{y}^{(1)}, \ldots, \boldsymbol{y}^{(t-1)}, \boldsymbol{c}) = f_2 \left( \boldsymbol{h}^{(t)}, \boldsymbol{y}^{(t-1)}, \boldsymbol{c}; \, \theta \right). \end{equation} Effectively, the decoder learns to generate outputs $\boldsymbol{y}^{(t)}$ given the previous outputs, conditioned on the input sequence.
\subsection{Recurrent variational autoencoder} While the encoder-decoder architecture is effective for many sequential prediction tasks, the model does not learn a vector representation of the entire input. The variational autoencoder (VAE) \citep{kingma2013auto} is a generative model that learns a latent variable model for $\boldsymbol{x}^{(t)}$ such that new sequences $\boldsymbol{x'}^{(t)}$ can be generated by sampling from the latent space $q$. Similar to encoder-decoder networks, the VAE has an encoder that learns a latent representation of the input sequence and a decoder that maps the representation back to the inputs. The VAE architecture differs from encoder-decoder networks in that the VAE does not have a final dense layer that compares the decoder outputs to $\boldsymbol{x'}^{(t)}$; i.e., it is a ``self-supervised'' technique. Another difference is that the VAE learns parameter weights by mapping the inputs to a distribution over the parameters of $q$. The recurrent VAE (RVAE) \citep{fabius2014variational, chung2015recurrent,bowman2015generating} consists of an encoder RNN that maps $\boldsymbol{x}^{(t)}$ to a distribution over the parameters of $q$. The model then randomly samples $\boldsymbol{z}$ from the latent distribution, \begin{equation} q(\boldsymbol{z} | \boldsymbol{x}^{(t)}) = q (\boldsymbol{z}; f_3 (\boldsymbol{x}^{(t)};\, \theta)). \end{equation} Finally, a decoder RNN takes the form of a conditional probability model $\Pr (\boldsymbol{x}^{(t)} | \boldsymbol{z})$. The parameters of the model are learned by maximizing the variational objective, which is the log-likelihood of the decoder outputs $\boldsymbol{x'}^{(t)}$ against $\boldsymbol{x}^{(t)}$ minus the relative entropy between $q(\boldsymbol{z} | \boldsymbol{x}^{(t)})$ and the model prior $\Pr (\boldsymbol{z})$. The latter component of the objective acts as a regularizer by forcing the learned latent distribution to be similar to the model prior.
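When $q$ is Gaussian with a standard-normal prior, as is typical for VAEs, the relative-entropy regularizer has a well-known closed form (a standard identity, not specific to this paper):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) )
       = 0.5 * sum( exp(log_var) + mu^2 - 1 - log_var )."""
    return 0.5 * float(np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var))
```

The term is zero exactly when the encoder's latent distribution matches the prior, and grows as the posterior drifts away, which is the regularizing pull described above.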
\section{Placebo tests} \label{placebo} I conduct placebo tests on actual datasets in order to benchmark the accuracy of RNN-based estimators. There are no actual treated units in the placebo tests, so the estimators are evaluated on their ability to recover a null effect. For each trial run, I randomly select half of the units in the dataset to be treated and predict their counterfactual outcomes for periods following a selected $\text{T}_0$. I compare the predicted values to the observed values by calculating the root-mean squared error $(\text{RMSE})$. I benchmark the encoder-decoder networks and RVAE against the following estimators: \begin{description} {\setlength\itemindent{1mm} \item[(a) DID] Regression of $\textbf{Y}$ on $\textbf{W}$ and unit and time fixed effects \item[(b) MC-NNM] Matrix completion via nuclear norm minimization, with the regularization term on the nuclear norm selected by cross-validation \citep{athey2017matrix} \item[(c) SCM] Approached via exponentiated gradient descent \citep{abadie2010synthetic} \item[(d) VT-EN] Vertical regression with elastic-net regularization, with the regularization and mixing parameters selected by cross-validation \citep{zou2005regularization,athey2017matrix}. } \end{description} Implementation details for the encoder-decoder networks and RVAE are provided in Supporting Materials (SM) Section SM-\ref{imp}. In the placebo tests, the networks are trained using an unweighted MSE loss function for 500 epochs on a 12GB NVIDIA Titan Xp GPU. 
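The placebo protocol can be summarized schematically as follows; `predict_fn` is a placeholder for any of the estimators listed above, and the helper name is my own:

```python
import numpy as np

def placebo_rmse(Y, t0, predict_fn, rng):
    """Randomly assign half the units as placebo-treated, predict their
    post-T0 outcomes from the controls, and score against the truth."""
    n = Y.shape[0]
    treated = rng.choice(n, size=n // 2, replace=False)
    control = np.setdiff1d(np.arange(n), treated)
    preds = predict_fn(Y[control], Y[treated, :t0])  # shape (n//2, T - t0)
    truth = Y[treated, t0:]
    return float(np.sqrt(np.mean((preds - truth) ** 2)))
```

Since no unit is actually treated, an ideal estimator recovers the observed post-period outcomes, so lower RMSE indicates a better-calibrated counterfactual.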
\subsection{Synthetic control datasets} \label{synth-placebo} I first conduct placebo tests on three datasets common to the synthetic control literature, with the actual treated unit removed from each dataset: \possessivecite{abadie2003economic} study of the economic impact of terrorism in the Basque Country during the late 1960s ($\text{N}=16$, $\text{T}=43$); \possessivecite{abadie2010synthetic} study of the effects of a large-scale tobacco control program implemented in California in 1988 ($\text{N}=38$, $\text{T}=31$); and \possessivecite{abadie2015comparative} study of the economic impact of the 1990 German reunification on West Germany ($\text{N}=16$, $\text{T}=44$). Each dataset is log-transformed to alleviate exponential effects. Figure \ref{california-sim} reports the estimated average prediction error on the California smoking dataset, with the estimates jittered horizontally to reduce overlap. Figures SM-\ref{basque-sim} and SM-\ref{germany-sim} report the estimates for the Basque Country and West Germany datasets, respectively. Error bars are calculated using the standard deviation of the error distribution generated by multiple runs. The RNN-based estimators yield comparable error rates vis-à-vis the alternatives only for high ratios of $\text{T}_0/\text{T}$, which reflects the need for sizeable training sets for the RNN-based approach. The RVAE performs the worst on comparatively small training sets since it learns from less information than the encoder-decoder networks; i.e., without the post-period observations of the control units. The MC-NNM estimator does comparatively well in the simulations because it can use additional information in the form of pre-period observations of the treated units, whereas the other estimators train only on the control observations.
\begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{plots/california-sim.png} \caption{Placebo tests on California smoking data: {\protect\tikz \protect\draw[color={rgb:red,4;green,0;yellow,1}] (0,0) -- plot[mark=o, mark options={scale=2}] (0.25,0) -- (0.5,0);}, DID; {\protect\tikz \protect\draw[color={rgb:red,244;green,226;blue,66}] (0,0) -- plot[mark=triangle*, mark options={scale=2,fill=white}] (0.25,0) -- (0.5,0);}, ED; {\protect\tikz \protect\draw[color={rgb:red,0;green,5;blue,1}] (0,0) -- plot[mark=+, mark options={scale=2}] (0.25,0) -- (0.5,0);}, MC-NNM; {\protect\tikz \protect\draw[color={rgb:red,66;green,200;blue,244}] (0,0) -- plot[mark=x, mark options={scale=2}] (0.25,0) -- (0.5,0);}, RVAE; {\protect\tikz \protect\draw[color={rgb:red,66;green,107;blue,244}] (0,0) -- plot[mark=diamond, mark options={scale=2}] (0.25,0) -- (0.5,0);}, SCM; {\protect\tikz \protect\draw[color={rgb:red,244;pink,66;blue,223}] (0,0) -- plot[mark=triangle, mark options={scale=2, rotate=180}] (0.25,0) -- (0.5,0);}, VT-EN.\label{california-sim}} \end{figure} \subsection{Stock market data} The second battery of placebo tests draws on a dataset of stock market returns compiled by \citet{athey2017matrix}. The dataset consists of daily returns for 2,453 stocks over 3,082 days. In order to track how the error rates vary according to the dimensionality of the data, I create six sub-samples of the first $T$ daily returns of $N$ randomly selected stocks for the pairs $(\text{N}, \text{T}) = $ (10, 490), (20, 245), (50, 98), (70, 70), (100, 49), and (140, 35). In each sub-sample, half of the units are randomly selected as treated, and $\text{T}_0 = \text{T}/2$. Figure \ref{stock-sim} reports the average RMSE for each pair with standard errors informed by the error distribution generated by five trial runs. 
The average RMSE is the lowest for all estimators in the sub-sample $(\text{N}, \text{T}) = (10, 490)$, which reflects the benefit of training on a large number of time periods. Within this sub-sample, encoder-decoder networks and RVAE achieve the lowest average RMSE, followed by MC-NNM, SCM, DID, and lastly, vertical regression. The RNN-based estimators do comparatively less well when $N \gg T$ since there is not an adequate number of training set pre-periods to learn a concise representation of the inputs. \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{plots/stock-sim.png} \caption{Placebo tests on stock market data: {\protect\tikz \protect\draw[color={rgb:red,4;green,0;yellow,1}] (0,0) -- plot[mark=o, mark options={scale=2}] (0.25,0) -- (0.5,0);}, DID; {\protect\tikz \protect\draw[color={rgb:red,244;green,226;blue,66}] (0,0) -- plot[mark=triangle*, mark options={scale=2,fill=white}] (0.25,0) -- (0.5,0);}, ED; {\protect\tikz \protect\draw[color={rgb:red,0;green,5;blue,1}] (0,0) -- plot[mark=+, mark options={scale=2}] (0.25,0) -- (0.5,0);}, MC-NNM; {\protect\tikz \protect\draw[color={rgb:red,66;green,200;blue,244}] (0,0) -- plot[mark=x, mark options={scale=2}] (0.25,0) -- (0.5,0);}, RVAE; {\protect\tikz \protect\draw[color={rgb:red,66;green,107;blue,244}] (0,0) -- plot[mark=diamond, mark options={scale=2}] (0.25,0) -- (0.5,0);}, SCM; {\protect\tikz \protect\draw[color={rgb:red,244;pink,66;blue,223}] (0,0) -- plot[mark=triangle, mark options={scale=2, rotate=180}] (0.25,0) -- (0.5,0);}, VT-EN.\label{stock-sim}} \end{figure} \section{Application: Homestead policy and public schooling} \label{schooling-app} Sociologists and political economists \citep[e.g,][]{meyer1979public,alesina2013nation,bandiera2018nation} have viewed the rapid development of public schooling in the U.S. during the 19th century as a nation-building policy. It is argued that states across the U.S. 
adopted compulsory primary education as a means to homogenize the population during the `Age of Mass Migration', when tens of millions of foreign migrants arrived in the country between 1850 and 1914. An alternative explanation for the rise of public schooling is the view of \citet{engerman2005evolution} that frontier state governments sought to increase public investments in order to attract eastern migrants following the passage of the Homestead Act (HSA) of 1862, which opened for settlement hundreds of millions of acres of frontier land. Any adult citizen could apply for a homestead grant of 160 acres of land, provided that they lived and made improvements on the land for five years. According to the authors, the sparse population on the frontier meant that state and local governments competed with each other to attract migrants in order to lower local labor costs and to increase land values and tax revenues. Frontier governments offered migrants broad access to cheap land and property rights, unrestricted voting rights, and a more generous provision of schooling and other public goods. The HSA may have also increased state schooling expenditures by reducing the degree of land inequality on the frontier. Policies that led to the decentralization of public land are expected to lower land inequality by fixing land grants to 160 acres, thereby encouraging farm sizes to approach their ideal scale. Political economy frameworks \citep[e.g.,][]{acemoglu2008persistence, besley2009origins} emphasize that greater economic power of the ruling class reduces public investments. In the model of \citet{galor2009inequality}, wealthy landowners block education reforms because public schooling favors industrial labor productivity and decreases the value of farm rents. Inequality in this context can be thought of as a proxy for the amount of \emph{de facto} political influence elites have to block reforms.
In the empirical application below, I apply the RNN-based approach to the problem of estimating the long-run impacts of the HSA on state government public education spending. \subsection{Data and assumptions} \label{educ-data} I create a state-level measure of state government education spending from the records of 48 state governments during the period of 1783 to 1932 \citep{sylla1993sources} and the records of 16 state governments during the period of 1933 to 1937 \citep{sylla1995sourcesa,sylla1995sourcesb}. Comparable measures for 48 states are drawn from U.S. Census special reports for the years 1902, 1913, 1932, 1942, 1962, 1972, and 1982 \citep{haines2010}. The data pre-processing steps are as follows. The measure is inflation-adjusted according to the U.S. Consumer Price Index \citep{williamson2017seven} and scaled by the total free population in the decennial census \citep{haines2010}. Missing values are imputed separately in the pre- and post-periods by carrying the last observation forward, and remaining missing values are imputed by carrying the next observation backward. The data are log-transformed to alleviate exponential effects. Lastly, I remove states with no variance in the pre-period outcomes, resulting in a complete matrix of size $(\text{N} \times \text{T})= (32 \times 156)$. In this application, public land states --- i.e., states crafted from the public domain --- serve as treated units (i.e., the test set). State land states, which include states of the original 13 colonies, Maine, Tennessee, Texas, Vermont, and West Virginia, were not directly affected by homestead policies and therefore serve as control units (i.e., the training set). The RNN-based approach assumes the distributions of $\boldsymbol{X}^{\text{train}}$ and $\boldsymbol{X}^{\text{test}}$ are similar. I weight the training loss by propensity scores in order to minimize the distributional discrepancy between the training and test set inputs.
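Returning to the pre-processing steps above, the imputation rule (last observation carried forward, then next observation carried backward, applied separately within the pre- and post-periods) can be sketched as follows. This is my reading of the text, not the author's code:

```python
import numpy as np

def locf_nocb(y):
    """Impute NaNs in a 1-D series: carry the last observed value forward,
    then carry the next observed value backward for any remaining leading gaps."""
    y = np.array(y, dtype=float)
    # forward pass (LOCF)
    for t in range(1, len(y)):
        if np.isnan(y[t]):
            y[t] = y[t - 1]
    # backward pass (NOCB) for values still missing at the start
    for t in range(len(y) - 2, -1, -1):
        if np.isnan(y[t]):
            y[t] = y[t + 1]
    return y

nan = float("nan")
locf_nocb([nan, 2.0, nan, nan, 5.0])   # -> [2., 2., 2., 2., 5.]
```

In the paper's setting this would be applied to each state's series twice, once over the pre-period columns and once over the post-period columns, so that information never leaks across $\text{T}_0$.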
The propensity scores are estimated via logistic regression with unit-specific, pre-period covariates, including state-level average farm sizes measured in the 1860 census and average farm values measured in the 1850 and 1860 censuses \citep{haines2010}, to control for homesteaders migrating to more productive land. To control for selection bias arising from differences in access to frontier lands, I create a measure of total miles of operational track per square mile aggregated to the state level using digitized railroad maps provided by \citet{atack2013use}. Fig. SM-\ref{educ-dense} shows that the training and test set input distributions weighted by the propensity scores are visually similar. Aggregating to the state level approximately 1.46 million individual land patent records authorized under the HSA, I determine that the earliest homestead entries occurred in 1869 in about half of the frontier states, about seven years following the enactment of the 1862 Homestead Act. Land patent records provide information on the initial transfer of land titles from the federal government and are made accessible online by the U.S. General Land Office (\url{https://glorecords.blm.gov}). Using this information, I set $\text{T}_0 = 87$, which leaves $\text{T} - \text{T}_0 = 69$ time periods when half of the states are exposed to treatment. While the approach assumes that treatment adoption is simultaneous across states, the date of initial treatment exposure varied as new frontier land opened between 1869 and 1902. Also note that while the no-interference assumption cannot be tested directly, it is likely that state land states were indirectly affected by the out-migration of homesteaders from frontier states. \subsection{Estimates} Prior to analyzing the data, I conduct placebo tests on the education spending data similar to those described in Section \ref{synth-placebo}.
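The reweighting step can be sketched as follows. The text specifies only that the scores come from a logistic regression on pre-period covariates, so the plain gradient-ascent fit and the odds-based loss weights below are illustrative assumptions rather than the paper's exact routine:

```python
import numpy as np

def propensity_weights(Z, treated, lr=0.1, steps=2000):
    """Illustrative propensity-score weights.

    Z       : (N, k) matrix of pre-period covariates
              (e.g., farm size, farm value, track miles per sq. mile)
    treated : (N,) 0/1 indicator (1 = public land state)
    Returns the scores e(Z) and odds weights e/(1-e); weighting the
    control units' training loss by the odds makes their covariate
    distribution resemble that of the treated units.
    """
    A = np.column_stack([np.ones(len(Z)), Z])   # add an intercept
    beta = np.zeros(A.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(A @ beta)))
        # average log-likelihood gradient for logistic regression
        beta += lr * A.T @ (treated - p) / len(Z)
    e = 1.0 / (1.0 + np.exp(-(A @ beta)))
    return e, e / (1.0 - e)
```

In practice one would fit the regression with a standard library routine; the hand-rolled update above just keeps the sketch self-contained.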
Figure SM-\ref{educ-sim} presents the average RMSE calculated on the control unit outcomes, with standard errors from 10 runs. In line with the previous placebo tests, the RNN-based estimators yield error rates comparable to the alternative estimators only when there are sufficient pre-period observations to train on; in this case, when $\text{T}_0/\text{T} \geq 0.5$. We can be reasonably confident that the RNN-based estimators will be at least as accurate as the other estimators since $\text{T}_0/\text{T} = 0.55$ in this application. Next, I train an encoder-decoder network on the training set of state land states and use the learned weights to predict the counterfactual outcomes of public land states. The top panel of Figure \ref{educ-ed} compares the average outcomes of treated units and control units along with the average predicted outcomes of treated units. The dashed vertical line represents the first year of treatment exposure in 1869. We are primarily interested in the difference between the observed and predicted treated unit outcomes, which is the quantity $\boldsymbol{\bar{\upphi}^{(t)}}$. These per-period average causal impacts are plotted in the bottom panel and are bounded by 95\% randomization confidence intervals, which are estimated following the procedure described in Section SM-\ref{eval}. Counterfactual predictions of state government education spending in the absence of the HSA generally track the observed control time-series until the turn of the 20$^\text{th}$ century, at which point the counterfactual flattens and diverges from the increasing observed control time-series. This delay can potentially be explained by the fact that homestead entries did not substantially accumulate until after Congress prohibited the sale of public land in 1889 in all states except Missouri \citep{gates1941land,gates1979federal}.
Taking the mean of post-period impacts, I estimate that the impact of the HSA on the state government spending of states exposed to homesteads is 0.69 [-0.19, 2.01]. The confidence intervals surrounding this estimate contain zero, which implies that the estimated impact is not significantly more extreme than the exact distribution of average placebo effects under the null hypothesis. Examining the time-specific causal estimates reveals that fifty years after the first homestead entry, the estimated impact of the HSA on state government education spending in 1919 is 0.68 log points [0.13, 1.24]. The confidence intervals surrounding this time-specific estimate do not contain zero, which implies that the estimated impact is significantly more extreme than the average placebo effects. To put the magnitude of the point estimate in perspective, it represents about 3\% of the total school expenditures per-capita in 1929 \citep{snyder2010digest}. \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{plots/educ-ed.png} \caption{Encoder-decoder estimates of the impact of the HSA on state government education spending, 1809 to 1982: {\color{Darjeeling15}{\sampleline{}}}, observed treated; {\color{Darjeeling11}{\sampleline{dashed}}}, observed control; {\color{Darjeeling15}{\sampleline{dotted}}}, counterfactual treated; {\color{Darjeeling15}{\sampleline{dash pattern=on .7em off .2em on .05em off .2em}}}, $\boldsymbol{\bar{\upphi}^{(t)}}$.\label{educ-ed}} \end{figure} \section{Conclusion} \label{conclusion} This paper makes a methodological contribution in proposing a novel alternative to the SCM for estimating the effect of a policy intervention on an outcome over time in settings where appropriate control units are unavailable. The SCM is growing in popularity in the social sciences despite its limitations --- the most obvious being that the choice of specification can lead to different results, and thus facilitate $p$-hacking.
By inputting only control unit outcomes and not relying on pre-period covariates, the proposed method offers a more principled approach than the SCM. The RNN-based approach joins a new generation of data-driven machine learning techniques for generating counterfactual predictions. Machine learning techniques in general have an advantage over the SCM in that they automatically choose appropriate predictors without relying on pretreatment covariates; this capability limits ``researcher degrees of freedom'' that arise from choices on how to specify the model. RNNs do not assume a specific functional distribution, can learn nonconvex combinations of control units, and are specifically structured to exploit temporal dependencies in the data. RNNs are also capable of handling multiple treated units, which is useful because the model can share parameters across treated units, and thus generate more precise predictions in settings in which treated units share similar data-generating processes. In placebo tests, RNN-based estimators perform comparatively worse than the alternatives on small dimensional datasets such as those featured in the original synthetic control papers. Both RNN-based estimators require sufficient pre-period observations in order to learn an informative representation of the control units. The RVAE in particular requires a large amount of training data since it is a self-supervised method that learns without output labels. In higher dimensional datasets such as the stock market data, the RNN-based methods generally outperform the alternatives when $N \ll T$. The estimators underperform when $N \gg T$, which again reflects the need for sufficient pre-period observations. The matrix completion method performs well in either case, despite its disadvantage of treating the data as static and thus ignoring the temporal component of the data.
A built-in advantage of the matrix completion approach is that it does not assume a specific structure to the treatment assignment mechanism and thus can accommodate settings in which the time of initial treatment exposure varies across treated units. One potential avenue for future research is to integrate RNNs into the matrix completion approach by training multidirectional RNNs \citep[e.g.,][]{yoon2018estimating} to both impute missing values across the unit dimension and interpolate missing values within the time dimension. A second area of future research would explore ways to relax the assumption of equivalence between the distributions of pre-period outcomes between control and treated units. An alternative approach to the one currently proposed is to treat the problem of counterfactual prediction like an NMT problem by training the networks on the pre-period outcomes of control units to predict those of treated units. The learned model weights would then be fit on the post-period outcomes of control units at test time. This setup would instead assume equivalence between the distributions of pre- and post-period outcomes of control units, which is more likely to be satisfied in the absence of interference between treated and control units. \newpage \bibliographystyle{rss} \begin{singlespace} \begin{footnotesize} \begin{multicols}{2} \section{Implementation details} \label{imp} The networks are implemented with the \texttt{Keras} neural network library \citep{chollet2015keras} in Python on top of a TensorFlow backend. When implementing encoder-decoder networks, the encoder takes the form of a two-layer Long Short-Term Memory (LSTM) network \citep{schmidhuber1997long}, each layer with 128 hidden units, and the decoder is a single-layer Gated Recurrent Unit (GRU) \citep{chung2014} also with 128 hidden units. Each recurrent layer uses a linear activation function ($f_1$) with weights initialized using Xavier initialization \citep{glorot2010}.
The loss function internally computes the predicted outputs as a linear function ($f_2$) of the log probabilities. RNN weights are learned with mini-batch gradient descent on the WMSE using \texttt{Adam} stochastic optimization with the learning rate set to $5\,\cdot\,10^{-4}$ \citep{kingma2014adam}. As a regularization strategy, I apply dropout to the inputs and L2 regularization losses to the network weights. The networks are trained for 1,000 epochs, which takes 10 minutes to run on a laptop CPU. The model is validated on the last 20\% of the training set input-output pairs. The RVAE is implemented similarly, but with the following differences: the encoder takes the form of a single-layer LSTM with 32 hidden units and the decoder is a two-layer LSTM with the number of hidden units equal to 32 and the number of predictors, respectively. The latent space $\boldsymbol{z}$ is implemented as a densely-connected layer with a dimension of 200 units and $f_3(\cdot)$ takes the form of a log-normal distribution. The RVAE is trained with stochastic gradient descent for 5,000 epochs, which takes seven minutes to run on the same CPU. \clearpage \section{Hypothesis testing} \label{eval} \citet{abadie2010synthetic} propose a randomization inference approach for calculating the exact distribution of placebo effects under the sharp null hypothesis of no effect. \citet{cavallo2013catastrophic} extend the placebo-based testing approach to the case of multiple (placebo) treated units by constructing a distribution of \emph{average} placebo effects under the null hypothesis. \citet{firpo2018synthetic} derive the conditions under which the randomization inference approach is valid from a finite sample perspective and \citet{hahn2017synthetic} analyze the approach from a repeated sampling perspective. Randomization $p$-values are obtained following these steps: \begin{enumerate} \item Estimate the observed test statistic $\boldsymbol{\hat{\upphi}}$ from (\ref{eq:pointwise}).
Averaging over the unit dimension results in a $\text{T}_\star$-length array of observed average treatment effects. \item Calculate every possible average placebo effect $\upmu$ by randomly sampling without replacement which control units are assumed to be treated. There are $\mathcal{Q} = \sum\limits_{\text{g}=1}^{\text{J}-1} {\text{J} \choose \text{g}}$ possible average placebo effects. Since calculating $\mathcal{Q}$ can be computationally burdensome for relatively high values of $\text{J}$, I artificially set $\mathcal{Q} = 10,000$ in cases when $\text{J} > 16$. The result is a matrix of dimension $\mathcal{Q} \times \text{T}_\star$. \item For each time period, count the number of $\upmu$ that are greater than or equal to $\boldsymbol{\hat{\upphi}}$. \label{counts} \end{enumerate} Each element of the vector obtained from Step \ref{counts} is divided by $\mathcal{Q}$ to estimate a $\text{T}_\star$-length vector of exact two-sided $p$-values, $\hat{p}$. \subsection{Randomization confidence intervals} Under the assumption that treatment has a constant additive effect $\Delta$, I construct an interval estimate for $\Delta$ by inverting the randomization test. Let $\updelta_\Delta$ be the test statistic calculated by subtracting $\Delta$ from all possible $\upmu$. I derive a two-sided randomization confidence interval by collecting all values of $\Delta$ that yield $\hat{p}$ values greater than or equal to the significance level $\upalpha=0.05$. I find the endpoints of the confidence interval by randomly sampling 500 values of $\Delta$.
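The enumeration and counting steps above can be sketched as follows. Names are illustrative; for brevity the sketch enumerates all subsets before sampling, whereas with many controls ($\text{J} > 16$) one would sample subsets directly, as the text describes:

```python
import numpy as np
from itertools import combinations

def randomization_pvalues(phi_hat, placebo, cap=10_000, seed=0):
    """Placebo-based randomization p-values (illustrative sketch).

    phi_hat : (T*,) vector of observed average treatment effects
    placebo : (J, T*) matrix of placebo effects, one row per control
              unit treated as if it were exposed
    Returns a (T*,) vector of two-sided p-values.
    """
    J, _ = placebo.shape
    # all nonempty proper subsets of the J controls, sizes 1 .. J-1
    subsets = [s for g in range(1, J)
               for s in combinations(range(J), g)]
    if len(subsets) > cap:                       # Q too large: sample
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(subsets), size=cap, replace=False)
        subsets = [subsets[i] for i in idx]
    # Q x T* matrix of average placebo effects
    mu = np.array([placebo[list(s)].mean(axis=0) for s in subsets])
    # per period, the share of placebo averages at least as extreme as
    # the observed effect (two-sided via absolute values)
    return (np.abs(mu) >= np.abs(phi_hat)).mean(axis=0)
```

The same machinery supports the confidence-interval inversion: subtract a candidate $\Delta$ from each placebo average and keep the values of $\Delta$ whose $p$-values stay above $\upalpha$.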
\clearpage \section{Supporting Figures} \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{plots/basque-sim.png} \caption{Placebo tests on Basque Country terrorism data: {\protect\tikz \protect\draw[color={rgb:red,4;green,0;yellow,1}] (0,0) -- plot[mark=o, mark options={scale=2}] (0.25,0) -- (0.5,0);}, DID; {\protect\tikz \protect\draw[color={rgb:red,244;green,226;blue,66}] (0,0) -- plot[mark=triangle*, mark options={scale=2,fill=white}] (0.25,0) -- (0.5,0);}, ED; {\protect\tikz \protect\draw[color={rgb:red,0;green,5;blue,1}] (0,0) -- plot[mark=+, mark options={scale=2}] (0.25,0) -- (0.5,0);}, MC-NNM; {\protect\tikz \protect\draw[color={rgb:red,66;green,200;blue,244}] (0,0) -- plot[mark=x, mark options={scale=2}] (0.25,0) -- (0.5,0);}, RVAE; {\protect\tikz \protect\draw[color={rgb:red,66;green,107;blue,244}] (0,0) -- plot[mark=diamond, mark options={scale=2}] (0.25,0) -- (0.5,0);}, SCM; {\protect\tikz \protect\draw[color={rgb:red,244;pink,66;blue,223}] (0,0) -- plot[mark=triangle, mark options={scale=2, rotate=180}] (0.25,0) -- (0.5,0);}, VT-EN.\label{basque-sim}} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{plots/germany-sim.png} \caption{Placebo tests on West German reunification data: {\protect\tikz \protect\draw[color={rgb:red,4;green,0;yellow,1}] (0,0) -- plot[mark=o, mark options={scale=2}] (0.25,0) -- (0.5,0);}, DID; {\protect\tikz \protect\draw[color={rgb:red,244;green,226;blue,66}] (0,0) -- plot[mark=triangle*, mark options={scale=2,fill=white}] (0.25,0) -- (0.5,0);}, ED; {\protect\tikz \protect\draw[color={rgb:red,0;green,5;blue,1}] (0,0) -- plot[mark=+, mark options={scale=2}] (0.25,0) -- (0.5,0);}, MC-NNM; {\protect\tikz \protect\draw[color={rgb:red,66;green,200;blue,244}] (0,0) -- plot[mark=x, mark options={scale=2}] (0.25,0) -- (0.5,0);}, RVAE; {\protect\tikz \protect\draw[color={rgb:red,66;green,107;blue,244}] (0,0) -- plot[mark=diamond, mark options={scale=2}] (0.25,0) -- (0.5,0);}, SCM; 
{\protect\tikz \protect\draw[color={rgb:red,244;pink,66;blue,223}] (0,0) -- plot[mark=triangle, mark options={scale=2, rotate=180}] (0.25,0) -- (0.5,0);}, VT-EN.\label{germany-sim}} \end{figure} \begin{figure*}[htbp] \centering \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{plots/educ-dens.png} \caption{Unweighted} \end{subfigure} ~ \begin{subfigure}[t]{0.48\textwidth} \centering \includegraphics[width=\textwidth]{plots/educ-dens-w.png} \caption{Weighted by propensity score} \end{subfigure} \caption{Pre-period densities of log per-capita state government education spending by treatment status:{\protect\tikz \protect\draw[color=black] (0,0) -- plot[mark=square, mark options={scale=2, fill=white}] (0.25,0) -- (0.5,0);}, Control; {\protect\tikz \protect\draw[color={rgb:red,104;green,122;blue,255}] (0,0) -- plot[mark=square*, mark options={scale=2,fill={rgb:red,104;green,122;blue,255}}] (0.25,0) -- (0.5,0);}, Treated \label{educ-dense}} \end{figure*} \begin{figure}[htbp] \centering \includegraphics[width=0.9\textwidth]{plots/educ-sim.png} \caption{Placebo tests on education spending data: {\protect\tikz \protect\draw[color={rgb:red,4;green,0;yellow,1}] (0,0) -- plot[mark=o, mark options={scale=2}] (0.25,0) -- (0.5,0);}, DID; {\protect\tikz \protect\draw[color={rgb:red,244;green,226;blue,66}] (0,0) -- plot[mark=triangle*, mark options={scale=2,fill=white}] (0.25,0) -- (0.5,0);}, ED; {\protect\tikz \protect\draw[color={rgb:red,0;green,5;blue,1}] (0,0) -- plot[mark=+, mark options={scale=2}] (0.25,0) -- (0.5,0);}, MC-NNM; {\protect\tikz \protect\draw[color={rgb:red,66;green,200;blue,244}] (0,0) -- plot[mark=x, mark options={scale=2}] (0.25,0) -- (0.5,0);}, RVAE; {\protect\tikz \protect\draw[color={rgb:red,66;green,107;blue,244}] (0,0) -- plot[mark=diamond, mark options={scale=2}] (0.25,0) -- (0.5,0);}, SCM; {\protect\tikz \protect\draw[color={rgb:red,244;pink,66;blue,223}] (0,0) -- plot[mark=triangle, mark options={scale=2, rotate=180}] 
(0.25,0) -- (0.5,0);}, VT-EN. \label{educ-sim}} \end{figure}
\section{General Overview} The \aipcls{} is a \LaTeXe{} document class for conference proceedings of the American Institute of Physics and other documents with similar layout requirements. Your file will be used to reproduce your paper as is, the only modifications done by the publisher are adding appropriate page numbers and the copyright line. It is therefore essential that you embed all fonts when saving your file. This version of the guide explains all features of the class some of which are only applicable to certain proceeding layouts. The class provides essentially the same markup as implemented by \LaTeX's standard \texttt{article} class. In addition to this it implements the following: \begin{itemize} \item extended set of front matter commands, \item automatic placement of floats into column or page areas including turning of table floats by 90\textdegree{} if necessary, \item allows mixing column and page-wide floats without getting the numbering out of sync, \item footnotes will appear below bottom floats, \item extended set of citation commands if the \texttt{natbib} system is installed, \item support for table notes, \item support for textual page references like ``on the next page''. \end{itemize} Due to the extended functionality an article written for \LaTeX{}'s standard article class might need adjustments in the following places before it can be used with the \aipcls{} (a more detailed description is given in later sections): \begin{itemize} \item In the preamble, since the \aipcls{} requires a |\layoutstyle| declaration. \item In the front matter, since the \aipcls{} uses an extended set of title/author declarations. \item In the body of floats, since the \aipcls{} only allows a single |\caption| command and processes the body in horizontal mode. 
\end{itemize} \section{Checking your \LaTeX{} distribution} To ensure that your installation of \LaTeX{} contains everything necessary to successfully use the \aipcls{}, run the file \texttt{aipcheck.tex} through \LaTeX, e.g.,
\begin{verbatim}
  latex aipcheck
\end{verbatim}
It will try to determine if everything necessary is available and, if not, will make recommendations on what can be done about it. In certain cases you might be able to use the class if you follow the suggestions; in other cases the only solution is to upgrade your \LaTeX{} installation. Unfortunately it is impossible to check for all potential problems. If \texttt{aipcheck.tex} claims everything is fine, but you nevertheless have difficulties, consult the ``Frequently Asked Questions'' (\texttt{FAQ.txt}) and the readme file in the distribution. \section{Class details} \subsection{Selecting the target layout} The class supports different layouts. These are selected by placing a |\layoutstyle| declaration in the preamble of the document. \BDefC{layoutstyle}[m]{layout name} This command is required. With version 1.3 of the \aipcls{} the following \Larg{layout name}s can be specified. \begin{description} \item[6x9] Layout for the AIP Conference Proceedings with 6 x 9 inches single column format (short name |6s|). \item[8x11single] Layout for the AIP Conference Proceedings with 8.5 x 11 inches single column format (short name |8s|). \item[8x11double] Layout for the AIP Conference Proceedings with 8.5 x 11 inches double column format (short name |8d|). \item[arlo] Layout for the ``Acoustics Research Letters Online'' --- ARLO. \end{description} For example, the current guide was produced using the declaration |\layoutstyle{|\texttt{\selectedlayoutstyle}|}|.
\subsection{Supported options}\label{suppopt} As the class is based on the article class of standard \LaTeX{} all reasonable\footnote{Reasonable means not conflicting with fixed requirements for the AIP class, e.g., as this class requires 10pt body size the options \texttt{11pt} and \texttt{12pt} are ignored and produce a warning.} options of this class are supported automatically. In addition there are a number of options unique to the \aipcls. \subsubsection{Paper selection} Two options control the placement of the text on the physical page. Choose the one that corresponds to your printer paper. \begin{description} \item[letterpaper] Directs the class to assume that the output is printed on US letter sized paper (default). \emph{Please note that the paper format is typically also specified in the program that turns the \LaTeX{} output into PostScript. For example, some \texttt{dvips} installations have A4 as their default paper (typically those in Europe). In that case you have to call the \texttt{dvips} program with the option \texttt{-t letter} to ensure that the resulting PostScript file has the correct margins!} \item[a4paper] Directs the class to assume that the output is printed on A4 sized paper. \end{description} \subsubsection{Font selection} Six options control the selection of fonts in the document; use at most one of them. \begin{description} \item[mathptmx] Directs the class to use PostScript Times and Symbol fonts (a few missing glyphs are taken from Computer Modern) for math by loading the \texttt{mathptmx} package. This option is the default. This option does not support the |\boldmath| command since there exists no PostScript Symbol font in bold. It is possible, however, to use |\mathbf| which allows you to get at least a bold Latin alphabet. \item[mathptm] Directs the class to use PostScript Times and Symbol fonts but uses the older package \texttt{mathptm} which has upright greek lowercase letters.
This option does not support the |\boldmath| command since there exists no PostScript Symbol font in bold. It is possible, however, to use |\mathbf| which allows you to get at least a bold Latin alphabet. \item[mathtime] Directs the class to use MathTime fonts for math by loading the \texttt{mathtime} package. These fonts are commercial so that this option will not work if you don't own them. If this option is chosen one can also use the options for this package as global options to the class. \item[mtpro] Directs the class to use MathTime Professional fonts for math by loading the \texttt{mtpro} package. These fonts are commercial (the successors to the MathTime fonts from the previous option) so that this option will not work if you don't own them. If this option is chosen one can also use the options for this package as global options to the class. \item[nomathfonts] Directs the class not to set up math fonts (which means using the installation default, which is usually Computer Modern). This option is intended for the case where a special math font setup is loaded in the document preamble. \item[cmfonts] Directs the class to use standard Computer Modern fonts for math and text. This does not conform to the specification for this class and is intended for draft preparation in environments where the required fonts are unavailable. \end{description} \subsubsection{Textual references} The next options enable textual references; if this is desired select one of them: \begin{description} \item[varioref] Loads the \texttt{varioref} package (see \cite[p.68ff]{A-W:MG04}), allowing you to produce textual page references. See the section on Cross-references~\vpageref{xref} for details. \item[nonvarioref] Disables the |\reftextvario| command so that the strings produced by \texttt{varioref} commands will not depend on the number of references seen so far. Implies the varioref option.
\end{description} \subsubsection{Table note markers} Notes to tables can be influenced as follows: \begin{description} \item[tnotealph] Produce raised lower case alphabetic marks to indicate table notes. \item[tnotesymbol] Use footnote symbols to indicate table notes (default). \end{description} \subsubsection{Citation mode} The citation mode can be influenced with the following two options: \begin{description} \item[numcites] Citations are typeset using numbers. Depending on the proceeding style these might appear raised or in brackets, etc.~(default). \item[bibliocites] Citations are typeset using an author/year scheme. This requires the installation of the \texttt{natbib} system. \end{description} In some layout styles these options might be without effect. \subsubsection{Heading numbers} Heading numbers can be turned on or off with the following two options: \begin{description} \item[numberedheadings] Headings are numbered. \item[unnumberedheadings] Headings are unnumbered (default). \end{description} In some layout styles these options might be without effect. \subsubsection{Drafts} Finally there is one standard \texttt{article} class option which has its functionality extended: \begin{description} \item[draft] Allows |\tableofcontents| and similar commands to work without error message (during development of article). It marks overfull boxes and also provides page numbers in the printout. \textbf{Remove this option when producing the final paper.} \end{description} \subsection{Front matter} The class supports an extended set of front matter commands. These commands differ from those used by standard \LaTeX's \texttt{article} class. Thus, if an article already written is adapted to be used with the \aipcls{}, the front matter has to be modified somewhat. Some of the commands below are required only for certain proceedings. Declarations that are not required will be silently ignored. 
\BDefC{title}[om]{short title}{title text} In standard \LaTeX{} this command has no optional argument. In the \aipcls{} one can specify an abbreviated title text which is used, for example, in the running footer in draft mode. \BDefC{author}[mm]{author name}{author information} In standard \LaTeX{} this command had only one argument containing both author name and address information. In this class it has two arguments and the second argument contains data structured using key/value pairs separated by commas. For example, the authors of this paper have been specified as:
\begin{verbatim}
\author{F. Mittelbach}{
  address={Zedernweg 62, Mainz},
  email={[email protected]}}
\author{D. P. Carlisle}{
  address={Willow House, Souldern},
  email={[email protected]}}
\end{verbatim}
Supported keywords are \texttt{address}, \texttt{email}, \texttt{altaddress}, \texttt{homepage}, and \texttt{thanks}. (With release 1.3 of \aipcls{} only \texttt{address}, \texttt{altaddress} and \texttt{email} should be used; support for the other keywords will be added later.) Depending on the layout of the target proceedings some of the keys may get ignored! \BDefC{classification}[m]{data} Some proceedings require classification data, e.g., PACS numbers. If not, this declaration is ignored. \BDefC{keywords}[m]{data} Some layouts require keyword data. If not, this declaration is ignored. \BDefC{copyrightholder}[m]{name} Some layouts require copyright information. Normally a default is provided by the class. With this declaration the copyright holder can be overwritten. \BDefC{copyrightyear}[m]{year} Some layouts require copyright data. With this declaration the copyright year can be specified. (If such data is required the current year is provided as default.) \BDefE{abstract} In contrast to standard \LaTeX{} the abstract environment has to appear before the |\maketitle| command. \BDefC{maketitle} This command inserts the actual front matter data. It has to follow the above declarations.
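Putting the declarations above together, a minimal front matter might look as follows; the title, name, address, email, and classification data are placeholders, and the class file name \texttt{aipproc} is assumed:
\begin{verbatim}
\documentclass{aipproc}
\layoutstyle{8x11double}

\begin{document}

\title[Short Title]{A Longer,
  More Descriptive Paper Title}

\author{A. N. Author}{
  address={Some Institute, Some City},
  email={author@example.org}}

\classification{01.30.Cc}
\keywords{proceedings, sample}
\copyrightyear{2005}

\begin{abstract}
  Note that the abstract environment
  must appear before the maketitle
  command.
\end{abstract}

\maketitle

\section{First section}
...
\end{document}
\end{verbatim}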
\subsubsection{Multiple authors} Multiple authors are entered by specifying one |\author| command per author. Care needs to be taken when specifying shared addresses: they have to be absolutely identical. Depending on the chosen layout the class will merge such addresses but will recognize them only as identical, if the input including spaces is the same! The |\and| command as defined in the \texttt{article} class to separate multiple authors is not supported. \subsubsection{Dates} \BDefC{received}[m]{date} \BDefC{revised}[m]{date} \BDefC{accepted}[m]{date} Some layouts require specification of date of arrival, revision, and/or acceptance. The above declarations provide a way to specify such dates if necessary. \BDefC{date}[m]{date} The article class provides the |\date| command which is not used by \aipcls. If supplied it will be ignored unless the \texttt{draft} option is specified in which case it will show up in a footer line together with the title and the page number to ease document development. \subsubsection{Other front matter commands} The |\tableofcontents|, |\listoffigures|, and |\listoftables| commands are provided but produce (beside output) an error message unless the \texttt{draft} option was selected. This is done since the \aipcls{} does not support page numbering and thus the above commands essentially produce incorrect data. \subsection{Headings} The \aipcls{} officially supports three heading levels, i.e., |\section|, |\subsection|, and |\subsubsection|. It also supports the commands |\paragraph| and |\subparagraph| although the latter heading levels are not part of the \aipcls{} specification and are therefore discouraged. In some layouts |\section| headings are changed to UPPERCASE. Special care is taken not to uppercase math material, but this support is only available if the package |textcase| is part of the \LaTeX{} distribution. 
\subsection{Cross-references}\label{xref} Cross-references to page numbers are not possible with the \aipcls{} as the page numbers are determined after production. For this reason the |\pageref| command of \LaTeX{} is disabled by default. Since headings in most layouts do not carry numbers they can't be referenced either. References to tables, figures, and equations are possible using the \LaTeX{} commands |\label| and |\ref|. However if the class option \texttt{varioref} or \texttt{nonvarioref} is used, references to page numbers are possible again as they will generate textual references of the form ``on the following page'' or ``on an earlier page'' etc. The produced strings are customizable as described in detail in the \texttt{varioref} package documentation or in \cite[p.68ff]{A-W:MG04}. The class defaults are as follows and can be changed with |\renewcommand| in the document preamble. The \texttt{varioref} package normally distinguishes between references to facing pages and references to pages that need turning over, using different strings in these cases. However, since with the \aipcls{} page numbers are not determined at the time of production, no assumption can be made that page $x$ and $x+1$ actually fall onto the same double spread. For this reason the defaults used here do not produce strings containing the word ``facing'' or ``opposite''.
\begin{verbatim}
\renewcommand\reftextfaceafter
  {on the next page}
\renewcommand\reftextfacebefore
  {on the \reftextvario{previous}
   {preceding} page}
\renewcommand\reftextafter
  {on the \reftextvario{next}
   {following} page}
\renewcommand\reftextbefore
  {on the \reftextvario{previous
   page}{page before}}
\renewcommand\reftextcurrent
  {on \reftextvario{this}
   {the current} page}
\end{verbatim}
Normally, text for references which are ``far away'' is produced using |\reftextfaraway| in \texttt{varioref}.
However, to produce textual references without referring to actual page numbers even in this case, this command was hijacked in the \aipcls{} and redefined to determine whether or not this is a reference to some earlier or later page. So instead of changing this command the class provides the following two commands for customization: \begin{verbatim} \renewcommand\reftextearlier {\reftextvario{on an earlier page}{earlier on}} \renewcommand\reftextlater {\reftextvario{later on} {further down}} \end{verbatim} To illustrate the result of this package all references in this document are made using |\vref| or |\vpageref|, e.g., references to Figure~\vref{fig:b} and Figure~\vref{fig:a}. These commands work best if used only for important references. Be careful when using them several times close to each other as the automatically generated texts then may sound strange (as they do in the example in this paragraph). \BDefC{eqref}[m]{label} For reference to equation numbers |\eqref| can be used instead of the standard |\ref| command. The |\eqref| command will automatically add any frills required by the layout style, while |\ref| will only typeset the plain number. For example, in the \texttt{arlo} style it will print ``Eq.~(1)'' while |\ref| would result in ``1''. \subsection{Lists} The \aipcls{} supports all standard list environments like \texttt{itemize}, \texttt{enumerate}, etc. \subsection{Graphics support} Support for including and manipulating graphics is provided as the standard \LaTeX{} \texttt{graphicx} package is automatically loaded by the \aipcls. For detailed descriptions of the commands made available by this package see~\cite{A-W:GMR97} or the package documentation coming with the \LaTeX{} release. A sufficient introduction is also given by~\cite{A-W:LLa94} although there only the \texttt{graphics} package (a subset of the \texttt{graphicx} package) is described. 
A typical application is given in the following example where a picture is resized to span 70\% of one column: \begin{verbatim} \begin{figure}[!b] \resizebox{.7\columnwidth}{!} {\includegraphics{escher}} \source{Guy Shaw} \caption{An illustration taken from~\cite{A-W:MG04}} \label{fig:a} \end{figure} \end{verbatim} resulting in figure \vref{fig:a}. \begin{figure}[!b] \resizebox{.7\columnwidth}{!} {\includegraphics[draft=false]{escher}} \source{Guy Shaw} \caption{An illustration taken from~\cite{A-W:MG04}} \label{fig:a} \end{figure} \subsection{Floats} Floats are objects which do not have to stay in sync with the running text but are allowed to move from their original place to some other position where they fit better for page breaking reasons. Such objects are typically numbered so that they can be referenced from within the running text. \LaTeX{} by default supports two float types: figures and tables. These float types are also supported by the \aipcls{} although their internal implementation is quite different, resulting in a number of important differences in behavior:\footnote{There exist packages that extend the number of float types. (This information is given as a footnote to show that footnotes in this class come out below a bottom float.)} \begin{itemize} \item The position of the float caption is determined automatically, independently of the placement of the |\caption| command within the float body. \item Depending on its width the float automatically spans two columns. In case of a table the whole object (including its caption) might be rotated automatically if it exceeds |\textwidth|. \item The body of a float environment is processed in L-R mode and not in paragraph mode as in standard \LaTeX. This is necessary for measuring its width. Thus if paragraph mode is needed one has to put a \texttt{minipage} environment of the appropriate width (e.g., |\columnwidth|) into the body. \item Only one |\caption| command per float is allowed.
\end{itemize} \subsubsection{Figures} \BDefE{figure}[o]{pos} Like with standard \LaTeX{} the optional \Larg{pos} argument can be used to specify into which float areas this float is allowed to migrate (default is |tbp|). The environment \texttt{figure*} is not supported as figures that need to span both columns are automatically recognized in two column mode. \BDefC{source}[m]{text} Command to specify the origin of the picture shown. The \Larg{text} will be printed in small italics below the illustration. A typical example of a figure float would be \begin{verbatim} \begin{figure} \resizebox{.8\textwidth}{!} {\includegraphics{outline}} \caption{PostScript example taken from~\cite{A-W:MG04}} \label{fig:b} \source{F. Mittelbach} \end{figure} \end{verbatim} The result is shown in Figure~\vref{fig:b}. \begin{figure} \resizebox{.8\textwidth}{!}{\includegraphics[draft=false]{outline}} \caption{PostScript example taken from~\cite{A-W:MG04}} \label{fig:b} \source{F. Mittelbach} \end{figure} \BDefC{spaceforfigure}[mm]{horizontal}{vertical} If the illustration is to be manually pasted into the final document one can leave the right amount of space by using this command as follows: \begin{verbatim} \begin{figure} \spaceforfigure{2in}{1cm} \caption{Caption for a figure to be pasted in later} \label{fig:3} \source{F. Mittelbach} \end{figure} \end{verbatim} All standard \TeX{} units can be used to specify the space needed. The above example makes room for an illustration that is two inches wide and one centimeter high. The result is shown as Figure~\vref{fig:3}. \begin{figure} \spaceforfigure{2in}{1cm} \caption{Caption for a figure to be pasted in later} \label{fig:3} \source{F. Mittelbach} \end{figure} \subsubsection{Tables} \BDefE{table}[o]{pos} Like with standard \LaTeX{} the optional \Larg{pos} argument can be used to specify into which float areas this float is allowed to migrate (default is |tbp|).
The environment \texttt{table*} is not supported as tables that need to span both columns are automatically recognized in two column mode. Typically the body of the environment would consist of a \texttt{tabular} environment responsible for producing the actual table including the table and stub headers. \BDefC{tablehead}[mmmm]{cols}{h-pos}{v-pos}{heading text} To ease the production of tables the command |\tablehead| is provided which is essentially an abbreviation for a |\multicolumn| command that additionally boldens its text argument. I.e., \Larg{cols} specifies the number of columns the \Larg{heading text} should span and \Larg{h-pos} defines the horizontal positioning of the text of the column(s), e.g., |l|, |r|, |c|, or |p{...}|. In contrast to a simple |\multicolumn| command the \Larg{heading text} can be split vertically by using |\\| to denote the line breaks. The \Larg{v-pos} argument should contain either |t|, |c|, or |b| denoting the vertical placement of the text in relation to other cells of that row. It is only relevant if the \Larg{heading text} consists of more than one line. See the example table \vpageref[below]{tab:source} that demonstrates the use of this command. \BDefC{source}[m]{text} Command to specify the origin of the data given in the table. The \Larg{text} will be printed in small italics below the table. \BDefC{tablenote}[m]{text} Command to produce a note to the table. It can only be used within a \texttt{table} environment and should be used only at the right end of a table cell. The command produces a raised footnote symbol at the place used which sticks into the right margin. As far as \LaTeX{} is concerned this symbol does not occupy any space. Thus it will not modify the alignment of table columns. The \Larg{text} will appear below the table. In the current release, notes to |\caption| or |\source| are not possible. \BDefC{tablenote*}[m]{text} Like |\tablenote| but this time the raised footnote symbol will occupy space.
This version is intended to be used in the middle of cells. An example showing the use of all commands described above is shown in Table~\vref{tab:a}. It was produced by the following input:\label{tab:source} \begin{verbatim} \begin{table} \begin{tabular}{lrrrr} \hline &\tablehead{1}{r}{b}{Single\\outlet} &\tablehead{1}{r}{b}{Small\tablenote {2-9 retail outlets}\\multiple} &\tablehead{1}{r}{b}{Large\\multiple} &\tablehead{1}{r}{b}{Total} \\ \hline 1982 & 98 & 129 & 620 & 847\\ 1987 & 138 & 176 & 1000 & 1314\\ 1991 & 173 & 248 & 1230 & 1651\\ 1998\tablenote{predicted} & 200 & 300 & 1500 & 2000\\ \hline \end{tabular} \source{Central Statistical Office, UK} \caption{Average turnover per shop: by type of retail organisation} \label{tab:a} \end{table} \end{verbatim} \begin{table} \begin{tabular}{lrrrr} \hline & \tablehead{1}{r}{b}{Single\\outlet} & \tablehead{1}{r}{b}{Small\tablenote{2-9 retail outlets}\\multiple} & \tablehead{1}{r}{b}{Large\\multiple} & \tablehead{1}{r}{b}{Total} \\ \hline 1982 & 98 & 129 & 620 & 847\\ 1987 & 138 & 176 & 1000 & 1314\\ 1991 & 173 & 248 & 1230 & 1651\\ 1998\tablenote{predicted} & 200 & 300 & 1500 & 2000\\ \hline \end{tabular} \source{Central Statistical Office, UK} \caption{Average turnover per shop: by type of retail organisation} \label{tab:a} \end{table} \BDefC{setlength}[mm]{\texttt{\upshape\string\hlinesep}}{value} Vertical spacing between horizontal lines produced from |\hline| inside a tabular environment is controlled by the length parameter |\hlinesep| in this class. The default value (1pt) gives one point extra space above such lines and three times as much (i.e. 3pt) extra space below. This is done to implement the layout requirements for tables in the AIP proceedings (which are not supposed to have vertical lines in the tables). 
If tables with vertical lines are necessary for some reason, then the value of this parameter should be set to \texttt{0pt} either globally for the whole document or locally within the \texttt{table} environment. Otherwise the vertical lines will have strange gaps whenever a |\hline| command is used to produce a horizontal line. \subsubsection{Counters} The |\alph| and |\fnsymbol| commands to represent counter values have extended ranges. For example |\alph| will now count up to 52 (zz) and the |\fnsymbol| command will produce the following symbols \makeatletter \@fnsymbol{1}, \@fnsymbol{2}, \@fnsymbol{3}, \@fnsymbol{4}, \@fnsymbol{5}, \@fnsymbol{6}, \@fnsymbol{7}, \@fnsymbol{8}, \@fnsymbol{9}, \@fnsymbol{10}, \@fnsymbol{11}, \@fnsymbol{12}, \@fnsymbol{13}, \@fnsymbol{14}, \@fnsymbol{15}, and \@fnsymbol{16}. \makeatother This will allow for up to 16 table notes per table. For documents that need a larger number of table notes select the option \texttt{tnotealph} to switch to lower case alphabetic letters to mark such notes. \subsubsection{Long tables} Tables which are longer than one page cannot be placed into a \texttt{table} environment as floats cannot have a size larger than a page. Such tables are supported by the standard \LaTeX{} package \texttt{longtable} written by David Carlisle. However this package only works in single column mode. With two-column layouts, such as the one for the AIP 8x11 double column proceedings, such tables can only be added at the end of the paper by preceding the |longtable| environments with a |\onecolumn| declaration. The package is supported by the class in the sense that captions within a \texttt{longtable} environment will be formatted using the appropriate style; however in contrast to the \texttt{table} environment it is the responsibility of the user to place the caption at the top of the table. 
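Since caption placement in a \texttt{longtable} is the user's responsibility, a table continuing over several pages in a two-column layout might be set up roughly as follows (a sketch only; the column contents are placeholders):
\begin{verbatim}
\onecolumn
\begin{longtable}{lrr}
\caption{A table continuing over
  several pages}\label{tab:long}\\
\hline
\tablehead{1}{l}{b}{Item}
  & \tablehead{1}{r}{b}{1998}
  & \tablehead{1}{r}{b}{1999} \\
\hline
\endhead
Sample entry & 100 & 200 \\
\hline
\end{longtable}
\end{verbatim}
The rows up to |\endhead| are repeated as the header at the top of every page the table occupies.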
The commands |\source| and |\tablenote| are not supported within this environment, but the |\tablehead| command can be used to produce column heads if desired. Refer to the \texttt{longtable} package documentation or to \cite[p.122ff]{A-W:LLa94} for a detailed description of the syntax of the \texttt{longtable} environment. A possible alternative is the package \texttt{supertabular} written by Johannes Braams; however in this case no attempt has been made to ensure that a table produced with \texttt{supertabular} conforms to the layout specification for the \aipcls{}. Be aware that this package defines its own |\tablehead| command (with a completely different function). Refer to the package documentation for the syntax description. A detailed comparison between \texttt{supertabular} and \texttt{longtable} can be found in Chapter~5 of \cite{A-W:LLa94}. \subsubsection{Building floats manually} The original \LaTeX{} environments \texttt{figure} and \texttt{table} as well as their star forms are still available under the names \texttt{ltxfigure} and \texttt{ltxtable}. They should not be used in normal circumstances but are provided in case the automatism of the \aipcls{} needs overwriting. Please note that if these environments are used the position of the |\caption| command determines the placement of the caption within the float body and that the special commands for figures and tables, e.g., |\tablenote|, etc.\ as provided by this class are not available within these environments. 
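For example, a manually built figure float might look like the following sketch (with these environments the position of |\caption| determines the caption placement, and class-specific commands such as |\source| are unavailable):
\begin{verbatim}
\begin{ltxfigure}[!t]
  \centering
  \includegraphics[width=.7\columnwidth]{escher}
  \caption{A manually built figure float}
  \label{fig:manual}
\end{ltxfigure}
\end{verbatim}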
\begin{table}[!t] \makeatletter \if8\expandafter\@car\selectedlayoutstyle\@nil\relax \else \fontsize{7}{8}\selectfont \fi \makeatother \begin{tabular}{rrrp{.6\textwidth}} \hline \tablehead{1}{r}{b}{File} & \tablehead{1}{c}{b}{Date} & \tablehead{1}{c}{b}{Version} & \tablehead{1}{c}{b}{Description} \\ \hline aipproc.cls & 2000/08/31 & v1.2a & AIP Proceedings (FMi) \\ fixltx2e.sty & 1999/12/01 & v1.0b & fixes to LaTeX \\ calc.sty & 1998/07/07 & v4.1b & Infix arithmetic (KKT,FJ) \\ ifthen.sty & 1999/09/10 & v1.1b & Standard LaTeX ifthen package (DPC) \\ graphicx.sty & 1999/02/16 & v1.0f & Enhanced LaTeX Graphics (DPC,SPQR) \\ keyval.sty & 1999/03/16 & v1.13 & key=value parser (DPC) \\ graphics.sty & 1999/02/16 & v1.0l & Standard LaTeX Graphics (DPC,SPQR) \\ trig.sty & 1999/03/16 & v1.09 & sin cos tan (DPC) \\ graphics.cfg & \\ dvips.def & 1999/02/16 & v3.0i & Driver-dependant file (DPC,SPQR) \\ url.sty & 1999/03/28 & ver 1.5x & Verb mode for urls, etc. \\ article.cls & 2000/05/19 & v1.4b & Standard LaTeX document class \\ size10.clo & 2000/05/19 & v1.4b & Standard LaTeX file (size option) \\ aipxfm.sty & \\ mathptm.sty & 2000/01/12 &PSNFSS-v8.1 &Times + math package (SPQR) \\ times.sty & 2000/01/12 &PSNFSS-v8.1 &Times font as default roman(SPQR) \\ ot1ptm.fd & 2000/01/12 &PSNFSS-v8.1 & font definitions for OT1/ptm. \\ fontenc.sty & \\ t1enc.def & 2000/08/30 & v1.91 &Standard LaTeX file \\ t1ptm.fd & 2000/01/12 &PSNFSS-v8.1 & font definitions for T1/ptm. 
\\ textcomp.sty & 2000/08/30 &v1.91 &Standard LaTeX package \\ ts1enc.def & 1998/06/12 & v3.0d & (jk/car/fm) Standard LaTeX file \\ varioref.sty & 1999/12/02 &v1.2c &package for extended references (FMi) \\ aip-8s.clo & \\ ttct0001.sty & \\ shortvrb.sty & 2000/07/04 &v2.0m & Standard LaTeX documentation package (FMi) \\ hyperref.sty & 2000/05/08 &v6.70f & Hypertext links for LaTeX \\ pd1enc.def & 2000/05/08 &v6.70f & Hyperref: PDFDocEncoding definition (HO) \\ hyperref.cfg & \\ hdvips.def & 2000/05/08 &v6.70f & Hyperref driver for dvips \\ pdfmark.def & 2000/05/08 &v6.70f & Hyperref definitions for pdfmark specials \\ ts1cmr.fd & 1999/05/25 &v2.5h & Standard LaTeX font definitions \\ nameref.sty & 2000/05/08 &v2.18 & Cross-referencing by name of section \\ t1pcr.fd & 2000/01/12 &PSNFSS-v8.1 & font definitions for T1/pcr. \\ ot1ptmcm.fd & 2000/01/03 &Fontinst v1.801 & font definitions for OT1/ptmcm. \\ omlptmcm.fd & 2000/01/03 &Fontinst v1.801 & font definitions for OML/ptmcm. \\ omspzccm.fd & 2000/01/03 &Fontinst v1.801 & font definitions for OMS/pzccm. \\ omxpsycm.fd & 2000/01/03 & Fontinst v1.801 &font definitions for OMX/psycm. \\ ts1ptm.fd & 2000/01/12 & PSNFSS-v8.1 &font definitions for TS1/ptm. \\ escher.eps & && Graphic file (type eps) \\ outline.eps & & & Graphic file (type eps) \\ \hline \end{tabular} \caption{Files used by the \aipcls{}} \label{tab:b} \source{Output of \texttt{\string\listfiles} when processing \texttt{aipguide.tex}} \end{table} \subsection{Urls} \BDefC{url}[m]{data} For documenting URLs and related data the |\url| command is provided. It allows breaking the URL in certain places and typesets it in an adequate font and format. Instead of using curly brackets the argument can be delimited by two identical characters not used in the argument. \subsection{Bibliography} Referring to other articles, books, etc.\ can be done using the |\cite| command of standard \LaTeX{}. 
The list of references itself can either be produced using standard \LaTeX{} methods or using \textsc{Bib}\TeX. If installed, the \aipcls{} class includes the \texttt{natbib} system which offers an extended set of citation commands. These commands have been originally developed to support author/year citation styles but are also useful with numerical citation styles. The \texttt{natbib} system has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. Table~\vref{tab:natbib} shows some examples. \begin{table} \begin{tabular}{@{}l@{\quad$\Rightarrow$\quad}l} \hline \multicolumn{2}{@{}l}{\bfseries Author/year style} \\ \hline |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \\ \hline \multicolumn{2}{@{}l}{\bfseries Numerical style} \\ \hline |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al. [21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32]\\ \hline \end{tabular} \caption{Example of \texttt{natbib} commands and their results} \label{tab:natbib} \end{table} There are many more commands and variants, see \cite{man:Daly99a} or \cite{man:Daly99b} for further details. 
\subsubsection{Bibliography produced manually} \BDefE{thebibliography}[m]{widest-label} Environment to hold the list of references. \BDefC{bibitem}[m]{label} Command to start a bibliographical entry having the label \Larg{label} for use in |\cite| commands. Refer to the publisher's manual, e.g., \cite{man:aipproceed}, for information on how to lay out individual entries. For example: \begin{verbatim} \bibitem{Brown2000} M.~P. Brown and K. Austin, \emph{The New Physique}, Publisher Name, Publisher City, 2000, pp. 212--213. \end{verbatim} If commands from \texttt{natbib} (e.g., from table~\ref{tab:natbib}) should be usable, then additional information has to be passed to the |\bibitem| via an optional argument. \BDefC{bibitem}[om]{display-info}{label} The optional argument \Larg{display-info} should then, and only then, contain the author(s) name(s) followed by the year in parentheses without any spaces, for example: \begin{verbatim} \bibitem[Brown and Austin(2000)] {Brown2000} ... \end{verbatim} The essential feature is that the label (the part in brackets) consists of the author names, as they should appear in the citation, with the year in parentheses following. There must be no space before the opening parenthesis! This will be automatically produced if \BibTeX{} is used. \subsubsection{Bibliography produced using \textsc{Bib}\TeX} The \aipcls{} is accompanied by \BibTeX{} style files which can be used to produce compliant reference lists from \BibTeX{} database files. To use \BibTeX{} one first has to run the source file through \LaTeX{}, then run \BibTeX{}, and then rerun \LaTeX{} twice to get all references resolved. \BibTeX{} is described in more detail in appendix B of \cite{A-W:LLa94} and in chapter~13 of \cite{A-W:MG04}. \BDefC{bibliographystyle}[m]{style-name} This declaration specifies to \BibTeX{} that the style \Larg{style-name} should be used.
It can be placed anywhere within the document but is usually positioned directly in front of the command described below. For a discussion which of the supplied \BibTeX{} styles should be used for which proceedings see the section ``Special requirements\ldots'' below. \BDefC{bibliography}[m]{bib-list} This command denotes the position where the reference list produced by \BibTeX{} will be included in the document. The \Larg{bib-list} is a comma separated list of \BibTeX{} database files. \section{General requirements and restrictions} This class was designed to work with \LaTeXe{} release 1999/06/01 or a later version. Earlier releases may work but have not been tested. With the exception of the packages \texttt{natbib} and \texttt{url} it only requires files which are part of a standard \LaTeX{} distribution, i.e., it should work if your installation contains the following components: \texttt{base}, \texttt{tools}, \texttt{graphics}, and \texttt{psnfss}, see \vref{tab:b} for files used to produce this document. The most recent \LaTeX{} distribution as well as \texttt{natbib} and \texttt{url} can be obtained from CTAN sites (Comprehensive \TeX{} Archive Network). Refer to \url{http://www.tug.org} for more information on CTAN and \TeX{} in general. A ready to run \TeX{} system for various platforms which has everything required is available on CD-ROM, look into \url{http://www.tug.org/texlive.html}. This \TeX{} implementation is also made available as an add-on to several books on \LaTeX, e.g., \cite{A-W:KD04,A-W:MG04}. For loading individual packages from a CTAN archive refer to \url{http://www.ctan.org} and search for the package name. Please omit extensions such as \texttt{.sty} when searching, e.g., search for \texttt{natbib} rather than \texttt{natbib.sty}, as such packages are often distributed in source form only, e.g., as a \texttt{.dtx} file. It is also possible to download a complete \TeX/\LaTeX{} installation from CTAN, e.g., Miktex + Winedit + Ghostview. 
Finally, it is also possible to download a CD-ROM image of the \TeX-live CD from CTAN (roughly 300MB): search for \texttt{texlive} (and make sure you select a suitable mirror near you). \section{Special requirements for individual layouts} \subsection{AIP proceeding layout 6x9} \begin{itemize} \raggedright \item The entire paper will be reduced 15\% in the printing process. Please make sure all figures as well as the text within the figures are large enough in the manuscript to be readable in the finished book. \item The use of the |\source| command is discouraged. \item Compliant \BibTeX{} styles are \texttt{aipproc} (for use with \texttt{natbib}) and \texttt{aipprocl} (if \texttt{natbib} is missing at the site). \item The options \texttt{bibliocites} and \texttt{numberedheadings} have no effect. \end{itemize} \subsection{AIP proceeding layout 8x11 single/double} \begin{itemize} \raggedright \item The use of the |\source| command is discouraged. \item Compliant \BibTeX{} styles are \texttt{aipproc} (for use with \texttt{natbib}) and \texttt{aipprocl} (if \texttt{natbib} is missing at the site). \item The options \texttt{bibliocites} and \texttt{numberedheadings} have no effect. \end{itemize} \subsection{ARLO} Note: the ARLO layout is no longer supported. \begin{itemize} \raggedright \item A copyright year (|\copyrightyear|) needs to be provided. \item Pacs numbers should be provided (|\classification|). \item The \texttt{arlo} layout offers one additional environment to specify multimedia files: \begin{verbatim} \begin{multimedia} \multimediauid{523} \multimediatype{gif} \multimediasize{1.2Mb} \multimediaurl{http://yorktown.% eng.yale.edu/test/msXXX/} \multimediacaption{Fancy video} \label{fv} \end{multimedia} \end{verbatim} References to a multimedia file can be made using |\label| and |\ref|. Instead of the latter command |\multimediaref| can be used to automatically get the appropriate frills, e.g., `Mm.~2' instead of just `2' as produced by |\ref|. 
\item Select the \texttt{draft} option for the initial submission and the copy-editing stage. Replace it by the \texttt{final} option when producing the final paper, so that page numbers and other items are stripped away. \item To conform to the layout specification for citations the \texttt{natbib} system has to be installed. \item For ARLO two compliant \BibTeX{} styles are available: \texttt{arlonum} should be used together with the class option \texttt{numcites}, while \texttt{arlobib} should be used together with the option \texttt{bibliocites}. \item The options \texttt{bibliocites} and \texttt{numberedheadings} can be used to switch to author/year citation scheme and numbered headings, respectively. \end{itemize} \section{Introduction} Double-beta ($\beta\beta$) decay, the rarest process that has been experimentally verified so far, is subject to study in both nuclear and particle physics. Unlike charged particles, electrically neutral neutrinos can be Majorana particles, particles that are their own anti-particles, and hence can introduce lepton number non-conservation. Observation of $\beta\beta$ decay without the emission of neutrinos ($0\nu\beta\beta$) would experimentally demonstrate that lepton number is not conserved, and reveal the Majorana nature of neutrinos. Moreover, if the decay process is mediated by the exchange of a light Majorana neutrino, its decay rate is proportional to the square of the effective Majorana neutrino mass $\left<m_{\beta\beta}\right> \equiv \left| \sum_{i} U_{ei}^{2}m_{\nu_{i}} \right|$, and therefore its determination would provide a measure of the absolute neutrino mass scale. \begin{figure}[b] \includegraphics[width=0.65\columnwidth]{fig1.eps} \caption{Schematic view of the KamLAND-Zen detector.
Phase-1 had 320\,kg of $^{136}$Xe enriched Xe dissolved; phase-2 had 383\,kg.} \label{figure:detector} \end{figure} The KamLAND-Zen $\beta\beta$ decay search experiment started in 2011 and is illustrated schematically in Fig.~\ref{figure:detector}~\cite{Gando2012a}. As the $\beta\beta$ decay source, 320\,kg of $^{136}$Xe enriched xenon gas was dissolved in the liquid scintillator (LS), contained in a transparent nylon balloon (mini-balloon). An initial $0\nu\beta\beta$ decay search with high sensitivity was quickly realized, owing to the extremely low radioactivity in the already existing KamLAND detector, and the minimization of additional radioactivities achieved in the manufacturing of the mini-balloon. Based on an initial 213.4 days of measurement (denoted as ``phase-1''), we set a lower limit on the $0\nu\beta\beta$ decay half-life of $T_{1/2}^{0\nu} > 1.9 \times 10^{25}$\,yr at 90\% C.L.~\cite{Gando2013}. \section{Background} The $0\nu\beta\beta$ decay search sensitivity in phase-1 was limited by an identified background peak from metastable $^{110m}$Ag. In order to remove this isotope, we embarked on a purification campaign aiming at the reduction of $^{110m}$Ag by a significant factor. In June 2012, we first extracted Xe from the detector, and confirmed that $^{110m}$Ag remained in the Xe-depleted LS. During this process, a diaphragm pump dedicated to the Xe-LS extraction leaked, introducing radioactive environmental impurities into the circulating LS. This resulted in an accumulation of radioactive particulate matter at the bottom part of the mini-balloon. In the meantime, the extracted Xe and additional newly prepared Xe were purified by distillation and adsorption by a getter material. The LS was purified through water extraction and distillation.
The purification was expected to be effective; however, we found that $^{110m}$Ag was reduced by only a factor of 3--4, possibly due to $^{110m}$Ag release from the mini-balloon film or partial convection between the original LS and the purified LS in the mini-balloon during filling. We therefore took extra time for the LS purification, performing three volume exchanges in circulation mode. The processed Xe was dissolved again into the newly purified LS in November 2013. In December 2013, we started the phase-2 data-taking, and found a reduction of $^{110m}$Ag by more than a factor of 10. After the phase-1 data-taking, we made several efforts for further improvements: (i) the removal of radioactive impurities by Xe-LS purification as mentioned above; (ii) increasing the Xe concentration from $(2.44 \pm 0.01)$~wt\% to $(2.96 \pm 0.01)$~wt\%, i.e., increasing the $\beta\beta$ target relative to radioactive backgrounds; (iii) developing a rejection method for the muon-spallation background $^{10}$C ($\beta^{+}$, \mbox{$\tau = 27.8$~s}, \mbox{$Q = 3.65$~MeV}); (iv) optimization of the volume selection to minimize the effect of the mini-balloon backgrounds. \begin{figure} \includegraphics[width=0.7\columnwidth]{fig2.eps} \vspace{-2.0cm} \caption{Vertex distribution of candidate events (black points) and expected $^{214}$Bi background events from an MC simulation (color histogram) for $2.3 < E < 2.7\,{\rm MeV}$. The normalization of the MC event histogram is arbitrary. The solid line indicates the shape of the balloon film.} \label{figure:vertex} \end{figure} In the phase-1 data, we observed energy peaks consistent with $^{110m}$Ag background throughout the entire Xe-LS volume and around the mini-balloon, indicating a uniform distribution of $^{110m}$Ag in the Xe-LS, and also on the mini-balloon. The contributions from the Xe-LS and the mini-balloon were almost the same.
By contrast, in the phase-2 data, those peaks disappeared, and at present, the primary backgrounds for the $0\nu\beta\beta$ decay search are $^{214}$Bi (daughter of $^{238}$U) on the mini-balloon, the $^{10}$C muon spallation product, and a small contribution from remaining $^{110m}$Ag. Fig.~\ref{figure:vertex} shows the vertex distribution of candidate events after the $\beta\beta$ selection cuts, and expected $^{214}$Bi background events from a Monte Carlo (MC) simulation for $2.3 < E < 2.7\,{\rm MeV}$. Considering the $z$-asymmetry of the $^{214}$Bi distribution, the volume is divided into equal-volume radial bins, 20 bins each in the upper and lower hemispheres, for signal-to-background optimization. Due to the larger $^{214}$Bi background on the mini-balloon, the volume bins away from the balloon are expected to have a higher sensitivity; therefore, the background estimation around the central region is especially important. For the $^{214}$Bi background, the vertex dispersion model was constructed from a full MC simulation based on \texttt{Geant4}~\cite{Agostinelli2003,Allison2006} including decay particle tracking, scintillation photon processes, and the finite PMT timing resolution. This MC reproduces the observed vertex distance between $^{214}$Bi and $^{214}$Po sequential decay events from the initial radon contamination. The muon spallation backgrounds come mainly from $^{10}$C, as well as other shorter-lived products, e.g., $^{6}$He, $^{12}$B, and $^{8}$Li. In the phase-2 data, additional event selection criteria to reject the spallation backgrounds are newly introduced, based on muon-induced neutron events. Post-muon neutrons are identified via their neutron-capture $\gamma$-rays with the newly introduced dead-time-free electronics (MoGURA), and spherical volume cuts ($\Delta R < 1.6\,{\rm m}$) around the reconstructed neutron vertices are applied for 180\,s after the muon producing the neutrons.
In the energy range of the $^{10}$C background ($2.2 < E < 3.5\,{\rm MeV}$), 6 events are rejected within a radius of 1.0\,m; this rate is consistent with the expectation for the LS from a previous study~\cite{Abe2010}. The livetime reduction by this spallation cut is only 7\%. \section{Results} Preliminary results presented here are based on the phase-2 data, collected between December 11, 2013, and May 1, 2014, after the $^{110m}$Ag background reduction. The total livetime is 114.8 days. The livetime, fiducial Xe-LS mass, Xe concentration, $^{136}$Xe mass, and exposure for the data sets in phase-1~\cite{Gando2013} and phase-2 are summarized in Table~\ref{table:fiducial}. \begin{table}[t] \begin{tabular}{@{}*{6}{lccccc}} \hline \hspace{3.5cm} & \multicolumn{3}{c}{Phase-1~\cite{Gando2013}} & \multicolumn{2}{c}{\hspace{0.5cm} Phase-2} \\ \hspace{3.5cm} & ~~~~~\mbox{DS-1}~~~~~ & ~~~~~\mbox{DS-2}~~~~~ & ~~~~Total~~~~ & \hspace{0.5cm} $R < 1.0\,{\rm m}$ & Full Xe-LS \\ \hline livetime (days) & 112.3 & 101.1 & 213.4 & \hspace{0.5cm} 114.8 & 114.8 \\ fiducial Xe-LS mass (ton) & 8.04 & 5.55 & - & \hspace{0.5cm} 3.27 & 12.88 \\ Xe concentration (wt\%) & 2.44 & 2.48 & - & \hspace{0.5cm} 2.96 & 2.96 \\ $^{136}$Xe mass (kg) & 179 & 125 & - & \hspace{0.5cm} 87.8 & 346 \\ $^{136}$Xe exposure (kg-yr) & 54.9 & 34.6 & 89.5 & \hspace{0.5cm} 27.6 & 108.8 \\ \hline \end{tabular} \caption{\label{table:fiducial}Summary of the phase-1 and phase-2 data used in $^{136}$Xe $\beta\beta$ decay analyses.} \vspace{-0.5cm} \end{table} \subsection{Preliminary $2\nu\beta\beta$ analysis} The analysis for the $2\nu\beta\beta$ decay is limited to the volume within a radius of 1.0\,m in order to avoid a large $^{134}$Cs/$^{137}$Cs background at the mini-balloon. Fig.~\ref{figure:energy_2nu} shows the energy spectrum of $\beta\beta$ candidates, with a spectral fit, including backgrounds.
The measured $2\nu\beta\beta$ decay half-life of $^{136}$Xe is $T_{1/2}^{2\nu} = (2.32 \pm 0.05({\rm stat}) \pm 0.08({\rm syst})) \times 10^{21}$~yr. This is consistent with the previous result based on the phase-1 data, $T_{1/2}^{2\nu} = (2.30 \pm 0.02({\rm stat}) \pm 0.12({\rm syst})) \times 10^{21}$~yr~\cite{Gando2012b}, and with the result obtained by \mbox{EXO-200}, $T_{1/2}^{2\nu} = (2.165 \pm 0.016({\rm stat}) \pm 0.059({\rm syst})) \times 10^{21}$~yr~\cite{Albert2014a}. \begin{figure} \includegraphics[width=0.65\columnwidth]{fig3.eps} \caption{Preliminary energy spectrum of selected $\beta\beta$ candidates within a 1.0\,m fiducial radius, shown together with the best-fit backgrounds and the $2\nu\beta\beta$ decay fit. The residuals from the best fit are shown in the upper panel.} \label{figure:energy_2nu} \end{figure} \subsection{Preliminary $0\nu\beta\beta$ analysis} The $0\nu\beta\beta$ decay rate is estimated by a fit to the 2-dimensional energy-volume spectra in the full 2-m-radius analysis volume, as described above. To simplify the display of the fit results, the energy spectra in only two volumes, the internal volume ($R < 1.0\,{\rm m}$) and the external volume ($1.0 < R < 2.0\,{\rm m}$), are shown in Fig.~\ref{figure:energy_0nu}, together with the best-fit background composition. The potential background contributions of $^{110m}$Ag, $^{88}$Y, $^{208}$Bi, and $^{60}$Co in the $0\nu\beta\beta$ region of interest, as discussed in \mbox{Ref.~\cite{Gando2012a}}, are allowed to vary in the fit. We found no event excess over the background expectation. The 90\% C.L. upper limit on the $^{136}$Xe $0\nu\beta\beta$ decay rate is $<$ 17.0~(kton$\cdot$day)$^{-1}$, in Xe-LS mass units. An MC ensemble of experiments assuming the best-fit background spectrum without a $0\nu\beta\beta$ signal indicates a sensitivity of $<$ 16~(kton$\cdot$day)$^{-1}$, and the probability of obtaining a stronger limit is 52\%. 
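Converting the event-rate limit into a half-life limit is standard radioactive-decay arithmetic, $T_{1/2} > \ln 2 \cdot N / R$, with $N$ the number of $^{136}$Xe atoms per kton of Xe-LS. A sketch using only numbers quoted here (the $^{136}$Xe mass fraction is taken from Table~\ref{table:fiducial}); it reproduces the $1.3 \times 10^{25}$~yr limit quoted in the text:

```python
import math

N_A = 6.022e23            # Avogadro's number, 1/mol
M_XE136 = 135.91          # molar mass of 136Xe, g/mol

# Phase-2 Xe-LS composition (Table 1): 346 kg of 136Xe in 12.88 ton of Xe-LS.
xe136_g_per_kton = 346e3 / 12.88 * 1000.0    # grams of 136Xe per kton of Xe-LS
n_atoms = xe136_g_per_kton / M_XE136 * N_A   # ~1.2e29 atoms per kton

rate_limit_per_kton_day = 17.0               # 90% C.L. upper limit from the fit
rate_per_kton_yr = rate_limit_per_kton_day * 365.25

t_half_limit = math.log(2) * n_atoms / rate_per_kton_yr
print(f"{t_half_limit:.2e} yr")              # ~1.3e25 yr
```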
The $0\nu\beta\beta$ decay contribution corresponding to the 90\% C.L. upper limit for the internal volume is shown in Fig.~\ref{figure:energy_0nu_limit}. The dominant $^{214}$Bi background at the mini-balloon is radially attenuated; therefore, the data in the inner volume are more sensitive to $0\nu\beta\beta$ decays, as indicated in Fig.~\ref{figure:R3_0nu}. Considering the Xe concentration in the Xe-LS, we obtain a limit on the $^{136}$Xe $0\nu\beta\beta$ decay half-life of $T_{1/2}^{0\nu} > 1.3 \times 10^{25}$~yr (90\% C.L.). \begin{figure} \includegraphics[width=0.45\columnwidth]{fig4a.eps} \hspace{1.0cm} \includegraphics[width=0.45\columnwidth]{fig4b.eps} \caption{Preliminary energy spectra of selected $\beta\beta$ candidates within the radius cuts, $R < 1.0\,{\rm m}$ (left) and $1.0 < R < 2.0\,{\rm m}$ (right). The best-fit spectra correspond to the 2-dimensional energy-volume analysis fit results described in the text. The residuals from the best fit are shown in the upper panels.} \label{figure:energy_0nu} \end{figure} \begin{figure} \includegraphics[width=0.7\columnwidth]{fig5.eps} \caption{Preliminary energy spectrum of selected $\beta\beta$ candidates within the radius cut $R < 1.0\,{\rm m}$, shown together with the best-fit backgrounds and the 90\% C.L. upper limit for $0\nu\beta\beta$ decays. This figure shows the data and backgrounds in a narrower energy range and on a linear scale compared to Fig.~\ref{figure:energy_0nu} (left).} \label{figure:energy_0nu_limit} \end{figure} \begin{figure} \includegraphics[width=0.45\columnwidth]{fig6a.eps} \hspace{1.0cm} \includegraphics[width=0.45\columnwidth]{fig6b.eps} \caption{Radius-cube ($R^{3}$) distributions of selected $\beta\beta$ candidates with $2.3 < E < 2.7\,{\rm MeV}$ in the upper hemisphere (left) and the lower hemisphere (right). The radial position was normalized to the mini-balloon radius ($R = 1.54\,{\rm m}$). 
The backgrounds from the mini-balloon (Film BG) are radially attenuated.} \label{figure:R3_0nu} \end{figure} As shown in Fig.~\ref{figure:chi2}, the combined KamLAND-Zen result from the phase-1~\cite{Gando2013} and phase-2 data gives a 90\% C.L. lower limit of $T_{1/2}^{0\nu} > 2.6 \times 10^{25}$\,yr. This can be compared to the recent EXO-200 result, which gives a 90\% C.L. lower limit of $T_{1/2}^{0\nu} > 1.1 \times 10^{25}$\,yr~\cite{Albert2014b} and tends to allow shorter half-life values. Based on nuclear matrix elements (NMEs) from various (R)QRPA models~\cite{Faessler2012}, the combined KamLAND-Zen half-life limit can be converted to a 90\% C.L. upper limit of $\left<m_{\beta\beta}\right> < (140-280)\,{\rm meV}$. \begin{figure} \includegraphics[width=0.6\columnwidth]{fig7.eps} \caption{$\Delta\chi^{2}$-profile from the fit to the half-life of $^{136}$Xe $0\nu\beta\beta$ decays in this work (phase-2), the previous work (phase-1), and the combined result (phase-1 $+$ phase-2). The result from \mbox{EXO-200}~\cite{Albert2014b} is also shown for comparison.} \label{figure:chi2} \end{figure} \clearpage \section{Prospects} The $0\nu\beta\beta$ decay search sensitivity will steadily increase as additional low-background data are accumulated after the $^{110m}$Ag reduction. Assuming the best-fit background rates in phase-2, the $T_{1/2}^{0\nu}$ sensitivity at 90\% C.L. will reach $3 \times 10^{25}$\,yr within 2 years using the phase-2 data alone; see Fig.~\ref{figure:future}. This will test the claimed observation of $0\nu\beta\beta$ decay in $^{76}$Ge~\cite{Klapdor2006} more stringently. We plan to rebuild the mini-balloon to increase the Xe amount to 600\,kg (700-800\,kg if possible) and to reduce the mini-balloon radioactivity by introducing a cleaner material for the balloon film. 
In that case, owing to the increase of the Xe-LS fiducial mass, the sensitivity will be close to $2 \times 10^{26}$\,yr in a 2-year measurement (Fig.~\ref{figure:future}), which corresponds to $\left<m_{\beta\beta}\right> = 50\,{\rm meV}$ for the largest NME in the (R)QRPA models~\cite{Faessler2012}. The next near-future $0\nu\beta\beta$ decay search milestone is to reach a sensitivity of $\left<m_{\beta\beta}\right> \sim 20\,{\rm meV}$, which covers the inverted neutrino mass hierarchy. The neutrino mass spectrum may be further clarified by long-baseline neutrino oscillation experiments, cosmological observations, and single-$\beta$ decay experiments in the future. Under such circumstances, the inverted mass hierarchy search will provide an important outcome even without the observation of a positive $0\nu\beta\beta$ signal. A sensitivity covering the inverted hierarchy is projected to be achieved by ``KamLAND2-Zen'', a detector upgrade proposal with better energy resolution against the $2\nu\beta\beta$ background, achieved by introducing light-collecting mirrors ($1.8 \times$ light yield), a new brighter LS ($1.4 \times$ light yield), and high-quantum-efficiency PMTs ($1.9 \times$ light yield). The energy resolution is expected to improve from 4.0\% to $<$2.5\% at the Q-value of $^{136}$Xe $\beta\beta$ decay. The enriched Xe amount will be increased to 1,000\,kg or more, and the target sensitivity of 20\,meV will be achieved in a 5-year measurement. The access hole at the top of the detector will be enlarged for the installation of the larger mini-balloon, and it will also accommodate various additional devices, such as scintillating crystals containing other $0\nu\beta\beta$ decay nuclei and a NaI crystal for a dark matter search. In addition, other R\&D efforts aiming at the introduction of new technology, such as an imaging device to reject $\gamma$-emitting backgrounds and a scintillating film to reject $^{214}$Bi-$^{214}$Po sequential decay backgrounds, are going forward. 
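The quoted resolution target is consistent with photoelectron statistics: if the resolution scales as $1/\sqrt{\rm light\ yield}$, the combined $1.8 \times 1.4 \times 1.9 \approx 4.8$-fold light-yield gain takes 4.0\% down to about 1.8\%. This back-of-envelope scaling assumes a purely statistics-limited resolution, which is an assumption, not a statement from the proposal:

```python
import math

current_resolution = 4.0  # % at the 136Xe Q-value
gain = 1.8 * 1.4 * 1.9    # mirrors x brighter LS x high-QE PMTs ~ 4.8

# Photoelectron-statistics scaling: sigma/E ~ 1/sqrt(N_pe).
projected = current_resolution / math.sqrt(gain)
print(f"{projected:.1f} %")  # ~1.8 %, comfortably below the <2.5 % target
```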
\begin{figure} \includegraphics[width=0.65\columnwidth]{fig8.eps} \caption{Expected $T_{1/2}^{0\nu}$ sensitivity at 90\% C.L. in the near future for KamLAND-Zen. The red line at less than 2 years corresponds to phase-2 only, and the following red line to the next phase only. The three horizontal lines indicate the lower $T_{1/2}^{0\nu}$ limits reported here (phase-2), the previous result (phase-1), and the combined result (phase-1 $+$ phase-2).} \label{figure:future} \end{figure} \section{Summary} KamLAND-Zen realized the initial $0\nu\beta\beta$ decay search by utilizing an extremely low-background detector, and demonstrated effective background reduction in the xenon-loaded liquid scintillator after the purification. The limits on the half-life of $^{136}$Xe $0\nu\beta\beta$ decay and on the effective Majorana neutrino mass are improved. In the near future, the search sensitivity will be enhanced by accumulating additional low-background data. A phased program with several detector improvements is planned for even better sensitivity. \begin{theacknowledgments} The \mbox{KamLAND-Zen} experiment is supported by the Grant-in-Aid for Specially Promoted Research under grant 21000001 of the Japanese Ministry of Education, Culture, Sports, Science and Technology; the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan; Stichting FOM in the Netherlands; and the US Department of Energy, Office of Science, Office of Nuclear Physics under contract No.\ DE-AC02-05CH11231, as well as other DOE awards to individual institutions. The Kamioka Mining and Smelting Company has provided services for activities in the mine. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Introduction} Cataclysmic Variables are at the low-mass end of the objects known as Interactive Binaries, where the primary, the more massive and accreting component, is a White Dwarf. The secondary is a late-type star that fills its Roche lobe and loses matter through the inner Lagrangian point L$_1$ to its compact companion. The transferred matter forms an accretion disc that is usually also the most luminous component of the binary system. Accretion is the cause of the modulated brightness behavior, with the Dwarf Novae (DN) class showing semi-periodic outbursts. According to the traditional models of Cataclysmic Variables, the accretion disc forms as a result of the exchange of angular momentum between the elements or particles comprising the disc, which would otherwise move in Keplerian orbits in a ring whose radius is uniquely determined by the angular momentum. Thus, the ring (or torus) spreads out into a disc. It is obvious then that between the inner edge of the disc and the surface of the accreting star, which rotates with a different velocity, the excess mechanical energy of a disc element must be dissipated and its excess angular momentum transferred away before that element can be accreted onto the stellar surface. This region is called the boundary layer. The processes occurring in the boundary layer and their observational fingerprints are not very well understood and remain a topic of controversy and debate, in spite of its relatively simple general picture \cite{2001LNP...563..110S}. Since the discovery of the magnetic CVs \cite{1977ApJ...212L.125T}, it became obvious that at least some CVs possess a primary White Dwarf (WD) with a magnetic field strong enough to disrupt the formation of an accretion disc and channel the transferred material along the magnetic field lines directly onto the magnetic poles on the surface of the WD. 
It is also strong enough to overcome the spin-up torque of the accreting matter (see e.g., \cite{1991MNRAS.250..152K}) and synchronize the rotation of the WD with the orbital period. The magnetic field strength of these systems can be measured directly from observations, and they were subsequently named Polars. Later they were joined by the Intermediate Polars (IPs), which are notorious for showing the spin period of an asynchronously rotating primary and often signs of the presence of an accretion disc. The magnetic field in IPs could not be measured directly, but it is unambiguously established that the variety of periods observed there in the X-rays and the optical is the result of the spinning WD, which beams intense X-ray emission modulated with P$_{\rm spin}$. It is also universally accepted that the inner parts of the accretion disc in IPs are truncated within the corresponding Alfv\'en radius and that the matter from there is channeled onto the surface of the WD along the magnetic field lines, very much like in Polars. These two types of Cataclysmic Variables are defined as magnetic CVs and are the topic of another review talk at this conference \cite{2005AIPC}. Here, however, I will show that many observed features in different sub-classes of CVs usually considered as non-magnetic can be generally explained in terms of truncated accretion discs, as in IPs, and that magnetically governed accretion plays a significant role across the entire family of Cataclysmic Variables. \section{Why are White Dwarfs magnetic?} It is natural to compare the properties of isolated White Dwarfs and the primaries of CVs in order to find out how binary evolution and accretion processes influence their physics. A comparison of the magnetic properties of these seemingly similar stars reveals significant differences. Wickramasinghe and Ferrario \cite{2000PASP..112..873W} published a large study of magnetic White Dwarfs in which they show that isolated magnetic WDs are remnants of Ap and Bp stars with fossil magnetic fields of the order of $\sim$0.1-1000\,MG. 
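The Alfv\'en radius invoked above for the disc truncation in IPs can be estimated from the balance of magnetic and ram pressure for spherical accretion, $r_{\rm A} = (\mu^4 / 2 G M \dot{M}^2)^{1/7}$ (the disc truncation radius is often taken as roughly half of this). A minimal numeric sketch; the magnetic moment, WD mass, and accretion rate below are typical illustrative assumptions, not values from the text:

```python
G = 6.674e-8          # gravitational constant, cgs
M_SUN = 1.989e33      # solar mass, g

# Illustrative intermediate-polar parameters (assumptions):
mu = 3e32             # magnetic moment, G cm^3 (~1 MG field on a 0.7 Msun WD)
M_wd = 0.7 * M_SUN    # WD mass, g
mdot = 1e-9 * M_SUN / 3.156e7   # accretion rate, g/s (1e-9 Msun/yr)

# Spherical Alfven radius: magnetic pressure = ram pressure of infalling gas.
r_A = (mu**4 / (2.0 * G * M_wd * mdot**2)) ** (1.0 / 7.0)
print(f"{r_A:.1e} cm")   # ~1e10 cm, well outside a ~7e8 cm WD radius
```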
These isolated magnetic WDs are thought to have significantly higher mean masses than their non-magnetic counterparts, and they constitute about 5\% of the WD population. In contrast, the White Dwarfs in interacting binaries do not reach such high magnetic fields, and their masses are not much different from the overall distribution of masses in CVs. Even counting only the CVs recognized as magnetic (Polars and Intermediate Polars, with significant and often measurable field strengths of $\approx10^5$ to $10^8$ Gauss), their fraction easily reaches 25\% of the total. While the lack of very-high-magnetic-field CVs is a matter of ongoing discussions and speculations, the disparity of the numbers can be attributed to processes taking place in close binary systems rather than to selection effects. More recently, Aznar Cuadrado et al. \cite{2004A&A...423.1081A} discovered that probably up to 25\% of White Dwarfs possess low magnetic fields of a few kG; they were previously considered as non-magnetic. Tout et al. \cite{2004MNRAS.tmp..635T} suggest that their magnetic field also has a fossil origin, inherited from the cloud from which the stars emerged; if so, all WDs born from stars over 2\,M$_{\odot}$ might be low-field magnetic White Dwarfs. Regardless of the origin of the magnetic field in WDs, it is important to stress for further consideration that the number of magnetic compact stars is much higher than previously thought. \section{CV types that are suspected to be magnetic} \subsubsection{SW Sex stars} Similarly, the number of CVs in which magnetically driven accretion plays a significant role seems to be much higher than previously accepted. In most of the cases, it cannot be measured directly, but there is observational evidence indicating the influence of the magnetic field on the process of accretion onto the WDs in a broad range of CV types. The most obvious is the case of the much-debated SW\,Sex stars. They were distinguished \cite{1991AJ....102..272T} for their peculiar emission and absorption line behavior. 
First identified among eclipsing systems, they were considered a rarity, but soon many other systems appeared showing one or another of the characteristics known as the SW\,Sex phenomenon. According to Hellier \cite{2004RMxAC..20..148H}, more than 20 systems show the SW\,Sex phenomenon. Although he himself remains skeptical of the magnetic model as the explanation of the SW\,Sex phenomenon, he admits that the evidence is mounting (see references therein). He argues that the periodic modulation of polarized emission is not enough evidence, because the spin period of the WD does not show up persistently in the photometric observations, over long time spans, as it readily does in Intermediate Polars (IPs). But it can also be argued that our inability to observe spin/beat modulations in SW\,Sex systems indicates that there are many other CVs which do not show the apparent IP photometric characteristics but do accrete under magnetic field influence. LS\,Peg and V795\,Her, both members of the SW\,Sex group, are found to have circular polarization \cite{2001ApJ...548L..49R,2002pcvr.conf..533R}. This is direct evidence of the presence of a magnetic field, and its modulation with short periods most probably ties it to the spin period of the WD. LS\,Peg shows circular polarization modulations with 0.3\% amplitude and a 29.6\,min period. Simultaneously, it shows emission line flaring with a period corresponding to the beat period, if 29.6\,min is taken as the spin period of the WD. Circular polarization of 0.12\% peak-to-peak amplitude was also detected in V795\,Her, with periods close to those of the optical quasi-periodic oscillations (QPOs). Apart from the detection of circular polarization, there are many indirect indications of magnetically driven accretion onto the white dwarf in SW\,Sex systems. The same kind of emission line flaring with short periods as in V795\,Her is detected in a number of other SW\,Sex objects: BT\,Mon and DW\,UMa. Such flares are also typical of many Intermediate Polars (FO\,Aqr, for example). 
A good example to demonstrate the link between these seemingly different classes of CVs is V533\,Her \cite{2002MNRAS.337..209R}. It erupted as a Nova in 1963. In 1979, Patterson \cite{1979ApJ...233L..13P} reported rapid 63.5\,sec variability, classifying it as a DQ\,Her (magnetic) system. But later this period disappeared, and the system emerged as a 3.53\,hour non-eclipsing SW\,Sex object showing, among other SW\,Sex features, flaring of the emission lines. Staying with Nova remnants, RR\,Cha is another one that turned into an IP. Woudt \& Warner \cite{2002MNRAS.335...44W} discovered a stable 1950\,sec period and positive and negative superhumps in the system. Meanwhile, Rodriguez-Gil and Potter \cite{2003MNRAS.342L...1R} observed variable circular polarization and noted some distinct SW\,Sex features in its spectra. Here we touch upon another phenomenon (negative superhumps) that is common to a number of systems. Patterson et al. \cite{2002PASP..114.1364P}, in a study of yet another two SW\,Sex objects, find QPOs with periods around 1000\,sec and negative superhumps. They believe the presence of negative superhumps can best be ascribed to the strong magnetism of the white dwarf. Warping of the disc is the most natural explanation of negative superhumps. It is widely agreed that the cause of the warping is a magnetic field, but there are different approaches as to whether the source of the magnetic field is the primary \cite{1999ApJ...524.1030L} or the secondary \cite{2002MNRAS.335..247M}. All these independent approaches to the SW\,Sex phenomenon show that the number of objects experiencing it is quite large and that the best explanation offered for the variety of features is magnetic accretion onto a white dwarf. 
\subsubsection{VY Scl stars} \begin{figure}[t] \resizebox{7.5cm}{!}{ \begin{picture}(120,120)(0,15) \put (-45,160){\includegraphics[height=.24\textheight,angle=-90]{dwcnc_longterm.ps}} \put(100,70){\includegraphics[height=.15\textheight]{fig_phot5_2.ps}} \put(-50,10){\includegraphics[height=.13\textheight]{fig3b_new.ps}} \caption{DW Cnc exhibiting features proper to a) VY Scl objects ($\approx2$\,mag drop in brightness from quiescence levels); b) IPs (spin period of magnetic WD); c) SW Sex objects (emission line flaring). Adapted from Rodriguez-Gil et al. (2004).} \end{picture}} \end{figure} Among other features shown by SW\,Sex objects are the so-called VY\,Scl characteristics; or rather, SW\,Sex stars are sometimes considered to be part of the larger VY\,Scl class of CVs. VY\,Scl stars are another growing group of CVs that is increasingly associated with magnetic CVs. Here, it is first worth mentioning DW\,Cnc, which was very recently studied by Rodriguez-Gil et al. \cite{2004MNRAS.349..367R}. They show that DW\,Cnc is a short-period CV, P$_{\rm orb}=86$\,min, which also shows 38.51\,min photometric variability identified as the spin period of a magnetic WD. Emission line flaring, another feature common to SW\,Sex objects, is present too. Most interestingly, this short-period CV shows low states, down to 2 magnitudes below its quiescence level, and no outbursts (see Fig.~1a-c, borrowed from \cite{2004MNRAS.349..367R}, exhibiting these features). Cyclical low states (anti-dwarf-nova behavior) are the main characteristic of VY\,Scl objects. They are believed to be concentrated in the 3-6 hour period range, the same as SW\,Sex. A few years ago, we \cite{1999A&A...343..183G} demonstrated that the VY\,Scl star V751\,Cyg shows transient soft X-ray emission. The X-ray emission appears when the system is in the low state. The interpretation was that V751\,Cyg behaves very much like the super-soft X-ray binaries, e.g., RX\,J0513.9-6951. 
Later, however, Hameury \& Lasota \cite{2002A&A...394..231H} suggested that VY\,Scl objects contain magnetic WDs (with magnetic moments of the order of $5\times10^{30} {\rm G\,cm}^3$, corresponding to $0.4$ mG for a 0.7\,M$_{\odot}$ WD), based on their models and the absence of outbursts of VY\,Scl objects in low/intermediate states. This offers an alternative explanation of the soft X-ray emission in the low state. The spectrum of V751\,Cyg in a low state, obtained almost simultaneously with the X-ray observations, is similar to that of a mCV: the continuum is highly variable, the He\,II line becomes intense compared to the high state, the lines are narrow, and the X-ray spectrum is soft. We considered the magnetic/polar scenario, based on the spectral appearance, in the process of preparation of the 1998 paper. The reason why we dismissed it is still fundamental: how does one switch the magnetic field, or magnetically driven accretion, on and off between the low and high luminosity states? Is it possible that with increased $\dot M$\ the magnetic field cannot cope anymore with the amount of incoming matter and the accretion geometry changes? Hameury \& Lasota are primarily concerned with the conditions in the accretion disc and argue that the truncation of the disc is the key to the VY\,Scl phenomenon, but they do not reflect upon this question. Yet it needs to be answered. So far very little has been done to explore the interaction between the magnetic field and the mass transfer rate of the accreted material. However, there are hopeful signs that the problem can be tackled. Cumming \cite{2002MNRAS.333..589C} examined the problem in some depth. According to him, compressional heating by the accreted material can maintain the interior of the WD in a liquid state. This allows the ohmic decay times to decrease to a few $10^9$ years, in contrast to isolated WDs, where the ohmic decay time is always longer than the cooling time. He shows that, as a consequence of accretion, significant changes in the surface magnetic field can occur. 
He also demonstrates that the higher the magnetic field of the system, the lower the mass accretion rate. It is not immediately clear whether the decrease of the mass transfer/accretion rate in a VY\,Scl system provokes an extension of the magnetosphere and further truncation of the disc, or whether the decreased disc luminosity simply allows us to observe mCV features in the spectrum. Nor is it clear what a Polar would look like if the accretion rate were increased by an order of magnitude or two from the usual $10^{-11}$ M$_{\odot}$/year value. Certainly this problem needs additional research. VY\,Scl and SW\,Sex objects comprise a considerable part of the CVs at the upper edge of the Period Gap. But the problem is not confined only to VY\,Scl or SW\,Sex objects. There have been reports of apparently ordinary Dwarf Novae displaying features like those of VY\,Scl. The recently explored DW\,Cnc is just one such case. Another one was reported in \cite{1988AdSpR...8..329T,1986Ap.....24..131E}. SS\,Aur, a classical SS\,Cyg-type system, was caught in a low state for a short period of time. The object was down about a magnitude from its usual quiescence level. There were apparent changes in the spectrum of the object: instead of a power law corresponding to the accretion disc spectrum, two blackbody curves corresponding to the stellar components of this binary system nicely fit the observed flux distribution. The temperatures of the components derived from this fitting were later confirmed by UV observations and parallax determinations with HST \cite{1999ApJ...515L..93H,2000AJ....120.2649H,2004AJ....128.1834S}. Most importantly, it exhibited quasi-periodic photometric variations (Fig.~2) with periods around 20\,min. That is exactly what one would expect if the above-described scenario is right: the diminished disc luminosity, from either decreased mass transfer or truncation of the disc (or most probably both at the same time), reveals periodic light variations best explained in terms of the IP model. 
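The low-state flux decomposition described above, two blackbodies replacing the disc's power law, can be sketched with the Planck function. The temperatures and weights below are illustrative placeholders, not the published values for SS\,Aur:

```python
import math

# Physical constants in cgs units.
H = 6.626e-27   # Planck constant, erg s
C = 2.998e10    # speed of light, cm/s
K = 1.381e-16   # Boltzmann constant, erg/K

def planck_lambda(wavelength_cm, T):
    """Planck spectral radiance B_lambda (erg s^-1 cm^-2 cm^-1 sr^-1)."""
    x = H * C / (wavelength_cm * K * T)
    return 2.0 * H * C**2 / wavelength_cm**5 / math.expm1(x)

def two_component_flux(wavelength_cm, T_hot, T_cool, w_hot, w_cool):
    """Weighted sum of two blackbodies; the weights stand in for the
    (unspecified here) solid angles of the WD and the secondary."""
    return (w_hot * planck_lambda(wavelength_cm, T_hot)
            + w_cool * planck_lambda(wavelength_cm, T_cool))

# Illustrative: a hot WD dominates in the blue, a cool secondary in the red.
blue, red = 4000e-8, 8000e-8   # 4000 A and 8000 A, in cm
f_blue = two_component_flux(blue, 30000.0, 3500.0, 1.0, 10.0)
f_red = two_component_flux(red, 30000.0, 3500.0, 1.0, 10.0)
```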
According to the AAVSO light curves, SS\,Aur was in a low state for a very short time, in contrast to typical VY\,Scl objects, and our observation of quasi-periodic variations at that moment was completely accidental. We may assume that other Dwarf Novae also experience short-duration low-state episodes that mostly go unnoticed. It would be interesting to conduct a systematic search for such events and to examine the light curves for the presence of periodic variations; this is certainly necessary, and would be easier to do, for members of the VY\,Scl class of objects. \begin{figure}[t] \includegraphics[height=.35\textheight]{ssaur_powsp.eps} \includegraphics[height=.34\textheight]{ssaur_lc.eps} \caption{Quasi-periodic variations detected in the light curve of SS\,Aur during low luminosity states. In the left panel the power spectra of four different nights are presented; on the right, the corresponding light curves with $\sin$ fits. Adapted from Tovmassian (1986).} \end{figure} \section{Other Dwarf Novae} The DNe constitute one of the most numerous groups of objects among CVs. The suggestion that many more DNe might experience short-lasting low states (and that their number is not limited to a few known cases) is completely speculative. However, another argument which favors the presence of magnetically driven accretion in DNe comes from observations of outbursts. Outbursts are the main feature that distinguishes them from the rest of the CVs. Another well-known fact is that during outbursts DNe show quasi-periodic oscillations of three distinct types \cite{2004PASP.116.115}. These oscillations have long been associated with the boundary layer, based on observations of eclipsing systems, but their nature has not been clearly understood and described. The study of quasi-periodic oscillations is complicated by the fact that they are not observed in every DN, their amplitudes are small and highly variable, and high-time-resolution, high-precision photometry is required. 
One remarkable system that is best suited for such a study is VW\,Hydri. Warner et al. \cite{2002MNRAS.335...84W,2003MNRAS.344.1193W} and Woudt \& Warner \cite{2002MNRAS.333...411}, in a series of papers, present the results of a long-term study of QPOs, mostly concentrated on this object but not limited to it. They developed a Low Inertia Magnetic Accretor (LIMA) model which explains the origin of the QPOs and the relations existing between the different types (different frequencies). The essence of the model is that the rapidly rotating equatorial belt, formed as a result of the accretion of matter through the disc onto the surface of the WD, enhances the magnetic field of the primary. The magnetic field of the primary, expected to be weaker than in regular Intermediate Polars, nevertheless reaches sufficient strength to channel the accreting matter the way it happens in IPs, but onto the equatorial belt instead of the magnetic poles. The QPOs then arise due to a prograde traveling wave at the inner edge of the disc that reprocesses high-energy radiation from the accreting zones close to the primary. The frequency may be variable, since the belt spins up during the high-accretion phase and decelerates afterwards. The details of this model, and a certain relation existing between QPO frequencies that is common not only to CVs but also to higher-mass X-ray binaries, are discussed in Warner's presentation \cite{2005AIPCW} included in this volume. Interestingly, Huang et al. \cite{1996ApJ...458..355H} detected inverse P\,Cyg profiles during a superoutburst of VW\,Hyi and concluded that a detached disc and structured gas flow are necessary for the best-fitting model to describe their observations. Subsequently, in \cite{APJL1996.471.41}, the same group demonstrated the existence of an equatorial belt around the WD after the outburst. 
On the other hand, X-ray observations of the high-inclination system OY\,Car \cite{2003MNRAS.345.1009W} show that the X-rays come from an area much smaller than the WD, probably the upper polar region of the white dwarf, which testifies that at least some DNe might have a magnetic field strong enough to channel the accretion to the magnetic pole. Some theoretical aspects of how the magnetic field can be induced/enhanced by a shear and influence the processes in the boundary layer were considered by Armitage \cite{2002MNRAS.330..895A}. A completely different approach, taken by Lasota \cite{2004RMxAC..20..124L} on the basis of the Disc Instability Model (DIM), leads again to the idea that the inner parts of the accretion discs in most DNe should be destroyed by the magnetic field, with the final stage of accretion occurring along magnetic field lines. It is an extension of the idea first proposed for VY\,Scl objects, now applied to OY\,Car, a classical Dwarf Nova for which the necessity of a truncated disc was raised earlier but was attributed to disc evaporation \cite{1994A&A...288..175M}. \section{Conclusions} There is growing observational evidence that the number of magnetic WDs is larger than was thought. Observations of a limited sample of isolated WDs show that as many as 25\% of WDs might have magnetic field strengths of the order of a few kG. Regardless of that fact, the number of systems considered as magnetic among Interactive Binaries with a WD as the primary is unusually high compared to the distribution of isolated WDs. In addition to this, there are numerous groups of CVs traditionally not considered as magnetic, which increasingly require the presence and influence of the magnetic field on the accretion process in order to explain their observational characteristics. 
Probably the Intermediate Polar scenario of accretion onto WDs in CVs, in which the inner disc is truncated and matter is channeled to the primary along magnetic field lines, is universal, and accretion processes influence the magnetic field strength in accreting compact objects. Retter and Naylor \cite{2000MNRAS.319..510R} suggested that the properties of CVs, and thus their classification on both sides of the Period Gap, depend on their periods and mass transfer rates. Their scheme, however, does not include a finer subdivision of CVs. If the hypothesis that the magnetic field plays an important role in shaping the properties of the above-mentioned objects is correct, it could be stated that the classification of CVs is a function of their orbital period, mass transfer rate, and magnetic field strength.
\section{Introduction} In Part I (Sekanina \& Kracht 2017; hereafter Paper~1) the orbit of comet C/1995 O1 was determined from astrometric observations covering an arc of 17.6~yr; the conditions were examined that existed at the time of a predicted close encounter with Jupiter on $-$2251 November 7; a correlation was established between a nongravitational acceleration in the comet's orbital motion and the mass loss rates of water and other ices sublimated~from the nucleus; and the production and dynamics of dust ejecta were addressed. Emphasized was the role of non-water compounds --- low-volatility organic molecules in particular --- in the process of outgassing and their contribution to the total outgassed mass, and also noted was a high mass loading of the gas flow from the nucleus by the released dust. Finally, the issue was brought up of a major disparity between the comet's mass determined dynamically vs photometrically, which is critical for a makeup of the nucleus, the objective of this study. \section{Nongravitational Effect on the Nucleus and Conservation-of-Momentum Law} It was briefly remarked in Paper 1 that the~\mbox{magnitude} of the nongravitational acceleration in the motion of C/1995 O1 was equivalent, after integrating over the orbital period, to a momentum change per unit mass of \mbox{2.46$\,\pm\,$0.14~m~s$^{-1}$}. This result was derived by optimizing a modified version of Marsden et al.'s (1973) nongravitational law, whose scaling distance of 15.36~AU was determined, by fitting 1950 astrometric observations from 1993--2010, as part of a preferred orbital solution.~Fully 80\% of the effect was contributed by the component of the nongravitational acceleration that was directed away from the Sun and long recognized as the primary trigger of the outgassing-driven recoil motion of the nucleus, invoked by a conservation-of-momentum law (e.g., Bessel 1836; Whipple 1950; Marsden 1968, 1969). 
For a given position in a comet's orbit, at time $t$,~the conservation-of-momentum law is commonly written as a one-dimensional condition. If only a single species sublimates, the condition reads \begin{equation} {\cal M} \, \zeta_{\rm ng}(t) + \alpha_{\rm gas}(t) \, \dot{m}_{\rm sub}(t) \, v_{\rm sub}(t) = 0, \end{equation} where ${\cal M}$ is the nucleus' mass and, at time $t$,~\mbox{$\zeta_{\rm ng}(t) > 0$} the nongravitational acceleration, \mbox{$\dot{m}_{\rm sub}(t) < 0$} the mass-loss rate by outgassing (i.e., the mass production~rate~of gas), $v_{\rm sub}$ the velocity with which the gas sublimates, and \mbox{$\alpha_{\rm gas}(t) < 1$} a recoil parameter that accounts for a vectorial distribution of the outgassed mass, whereby only a fraction of the momentum generated by the sublimating ice is, in general, transformed into the detected nongravitational acceleration. The recoil parameter depends on the surface distribution of the ice, the nucleus' shape and rotation, the gas-flow collimation, etc. When a number of different species, $n_{\rm gas}$, sublimate at the same time, the second term in {\vspace{-0.05cm}} Equation~(1) is to be replaced with a sum \mbox{$\Sigma_{i=1} ^{n_{\rm gas}} \,\alpha_{{\rm gas},i}\,\dot{m}_{{\rm sub},i}\,v_{{\rm sub},i}$}. Because $v_{{\rm sub},i}$ is a function of the mass loading of the gas flow by dust, {\vspace{-0.03cm}} it could depend not only on $\dot{m}_{{\rm sub},i}$, but on \mbox{$\dot{m}_{{\rm sub},j}$ ($j \!=\!1, \ldots, n_{\rm gas}, j \!\neq\! i)$} as well. The mass of the nucleus is assumed not to vary with time in Equation~(1), because \mbox{$\dot{m}_{\rm sub} \, dt \!\ll\! {\cal M}$}. The conservation-of-momentum condition may also be integrated over the entire orbit, specifically, \begin{equation} {\cal M} \!\! \int_{t_\pi-\frac{1}{2}P}^{t_\pi+\frac{1}{2}P} \!\!\!\! \zeta_{\rm ng}(t) \, dt + \langle \alpha_{\rm gas} v_{\rm sub} \rangle \! \sum_{i=1}^{n_{\rm gas}} \! 
\int_{t_\pi-\frac{1}{2}P}^{t_\pi+\frac{1}{2}P} \!\!\!\! \dot{m}_{{\rm sub},i}(t) \, dt = 0, \end{equation} where $t_\pi$ is the perihelion time, $P$ the orbital period, and $\langle \alpha_{\rm gas} v_{\rm sub} \rangle$ is an orbit-averaged value of the product of the recoil parameter $\alpha_{\rm gas}$ and the sublimation velocity $v_{\rm sub}$. While the integrated expressions were determined in Paper~1, the product $\langle \alpha_{\rm gas} v_{\rm sub} \rangle$ is subject to some uncertainty, but can be constrained as follows. For any particular ice, the initial velocity of sublimation is subsonic in the presence of dust (Probstein 1969). Hence, if $v_{\rm son}$ is the speed of sound in the gas flow, then \begin{equation} v_{\rm sub}(t) = \beta_{\rm gas}(t) \, v_{\rm son}(t) = \beta_{\rm gas}(t) \sqrt{\frac{\Re \, \gamma_{\rm gas} \, T_{\rm sub}(t)}{\mu_{\rm gas}}}, \end{equation} where \mbox{$\beta_{\rm gas} \!<\! 1$} depends on the mass{\vspace{-0.03cm}} loading of the gas flow by dust, $\Re$ is the gas constant (8.31\,J\,K$^{-1}$mol$^{-1}$)~and $T_{\rm sub}$, $\gamma_{\rm gas}$, and $\mu_{\rm gas}$ are, respectively, the~gas~\mbox{temperature} at sublimation, the heat-capacity ratio, and the molar mass of the species. Because of a rapid drop in the mass-loss rate with heliocentric distance, the greatest weight in the expression{\vspace{-0.05cm}} for $\langle \alpha_{\rm gas} v_{\rm sub} \rangle$ is carried by the values near peri\-helion.~For water, \mbox{$\gamma_{_{{\rm H}_2{\rm O}}} \!=\! 1.33$}, \mbox{$\mu_{_{{\rm H}_2{\rm O}}} \!=\! 18.02$ g mol$^{-1}$},~and{\vspace{-0.06cm}} for C/1995~O1 at perihelion --- employing an isothermal model of Paper~1 --- a sonic flow (Mach number of \mbox{${\sf M} = 1$}) has a sublimation temperature of \mbox{$T_{{\rm sub},{\scriptscriptstyle {\rm H}_2{\rm O}}} = 176$\,K} {\vspace{-0.04cm}}and, accordingly, \mbox{$v_{{\rm son},{\scriptscriptstyle {\rm H}_2{\rm O}}} \simeq 330$ m s$^{-1}$}.
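The sonic velocities quoted in this section follow from the square-root expression in Equation (3). A minimal numerical check (the CO parameters, \mbox{$\gamma = 1.40$} and \mbox{$\mu = 28.01$ g mol$^{-1}$}, are standard values assumed here rather than taken from the text):

```python
from math import sqrt

R_GAS = 8.31  # gas constant used in the text, J K^-1 mol^-1

def sonic_speed(gamma, T_K, mu_g_mol):
    """Speed of sound sqrt(gamma*R*T/mu) entering Equation (3), in m/s."""
    return sqrt(gamma * R_GAS * T_K / (mu_g_mol * 1.0e-3))

# Water at the perihelion sublimation temperature of 176 K:
v_h2o = sonic_speed(1.33, 176.0, 18.02)   # close to the quoted ~330 m/s

# CO at Biver et al.'s kinetic temperature of 113 K; gamma = 1.40 and
# mu = 28.01 g/mol are standard CO values assumed here, not from the text:
v_co = sonic_speed(1.40, 113.0, 28.01)    # comes out below 220 m/s

# Complex organics with the footnote's parameters (T ~ 275 K, mu ~ 70, gamma ~ 1.1):
v_org = sonic_speed(1.1, 275.0, 70.0)     # ~190 m/s
```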
For carbon monoxide the problem is more complex because of a possible effect of superheating (Fulle et al.\ 1998). While from the isothermal model one finds that at perihelion \mbox{$v_{{\rm son},{\scriptscriptstyle {\rm CO}}} \simeq 130$ m s$^{-1}$}, Biver et al.'s (2002) monitoring program of the CO kinetic temperature in C/1995~O1 between perihelion and some \mbox{6--8}~AU both before and after perihelion suggests \mbox{$T_{{\rm kin},{\scriptscriptstyle {\rm CO}}} \!=\! 113 \pm 6$\,K} {\vspace{-0.04cm}}at perihelion. Since \mbox{$T_{\rm kin} \!>\! T_{\rm sub}$}, this implies that \mbox{$v_{{\rm son},{\scriptscriptstyle {\rm CO}}} \!< 220$ m s$^{-1}\!$}. For other ices the isothermal~model~offers~for~the~speed of sound numbers that are~\mbox{intermediate}~\mbox{between}~\mbox{carbon} monoxide and water. In particular,~for~\mbox{complex}~\mbox{organic} molecules appropriate estimates\footnote{Statistically, numerous hydrocarbons appear to satisfy a relation between the molar mass (in g~mol$^{-1}$) and the heat-capacity ratio that is approximately expressed as \mbox{$\gamma_{\rm gas} \!=\! 1.56 \!-\! 0.25 \log \mu_{\rm gas}$} for \mbox{$30 \!<\! \mu_{\rm gas} \!<\! 120$ g mol$^{-1}$}.}\,are \mbox{$T_{\rm sub} \simeq 270$--280\,K}, \mbox{$\mu_{\rm gas} \!\simeq\! 70\:$g$\: $mol$^{-1}\!$}, and \mbox{$\gamma_{\rm gas} \!\simeq\! 1.1$}, thus \mbox{$v_{\rm son} \!\simeq\! 190$\,m\,s$^{-1}\!$}. For the speed of sound I now accept a representative value of \mbox{$\langle v_{\rm son} \rangle = 270$ m s$^{-1}$}, a compromise between water and the other parent molecules. With two basic quantities that enter Equation~(2) --- the integrated nongravitational effect, \begin{equation} \Delta V_{\rm ng} = \!\! \int_{t_\pi-\frac{1}{2}P}^{t_\pi+\frac{1}{2}P} \!\!\! \zeta_{\rm ng}(t) \, dt = 2.46 \pm 0.14\;{\rm m}\;{\rm s}^{-1}, \end{equation} and the integrated mass loss of water ice, \begin{equation} \Delta {\cal M}_{\scriptscriptstyle {\rm H}_2{\rm O}} = \!\!
\int_{t_\pi-\frac{1}{2}P}^{t_\pi+\frac{1}{2}P} \!\!\! \dot{m}_{{\rm sub}, {\scriptscriptstyle {\rm H}_2{\rm O}}}(t) \, dt = -3.4\:\!\!\times\:\!\!\!10^{15}\,{\rm g} \end{equation} --- already determined in Paper 1, I next turn to a total orbit-integrated mass loss by outgassing. Examination of the contributions from a large set of non-water species resulted in Paper~1 in a total {\it documented\/}~mass~loss~of~175\% of the loss of water and a {\it predicted\/} range of total~losses well over 200\%, a conclusion based primarily on a recognition of an apparently highly incomplete inventory of complex organic molecules. \mbox{Crovisier}~et~al.~(2004)~similarly argued that there were still many molecular species to be discovered in comets.~However, rather~than the orbit-integrated mass-loss data they used near-perihelion abundances, in which case the degree of incompleteness --- while still detectable --- appears to be less prominent. Based on the results of Paper~1, I adopt for the total orbit-integrated mass loss by outgassing a representative value of 250\% of the mass loss by water ice: \begin{eqnarray} \Delta {\cal M}_{\rm gas} & = & \sum_{i=1}^{n_{\rm gas}} \int_{t_\pi-\frac{1}{2}P}^{t_\pi+\frac{1}{2}P} \!\!\! \dot{m}_{{\rm sub},i}(t) \, dt \nonumber \\[-0.36cm] & & \\[-0.06cm] & = & 2.5 \Delta {\cal M}_{\scriptscriptstyle {\rm H}_2{\rm O}} = -8.5\:\!\!\times\:\!\!\!10^{15}\,{\rm g}. \nonumber \end{eqnarray} Equation (2), in which --- following (3) --- $\langle \alpha_{\rm gas} \, v_{\rm sub} \rangle$ is replaced with $\langle \alpha_{\rm gas} \, \beta_{\rm gas} \, v_{\rm son} \rangle$, can be used to estimate an upper limit on the mass of the nucleus by substituting $\langle v_{\rm son} \rangle$ for this expression: \begin{equation} {\cal M} = \langle \alpha_{\rm gas} \, \beta_{\rm gas} \, v_{\rm son} \rangle \frac{|\Delta {\cal M}_{\rm gas}|}{\Delta V_{\rm ng}} < \langle v_{\rm son} \rangle \frac{|\Delta {\cal M}_{\rm gas}|}{\Delta V_{\rm ng}} .
\end{equation} The inequality follows from $\alpha_{\rm gas}$ and $\beta_{\rm gas}$ being always smaller than unity. This relation indicates that since the sublimation velocity amounted to less than the speed of sound~and the gas flow was imperfectly collimated, the momentum of the outgassed mass per orbit should have been less than \mbox{2.3$\,\times$10$^{20}$\,g cm s$^{-1}$} and the nucleus less than \mbox{1$\,\times$10$^{18}$\,g} in mass. Assuming a bulk density of 0.4~g~cm$^{-3}$, the diameter should under no~\mbox{circumstances} have exceeded 17~km. In reality, the heavy mass loading of the gas flow by dust, which --- based on the results of Paper~1 {\it and\/} including the contributions from as yet undetected molecules, as implied by Equation~(6) --- is likely to have exceeded~4, suggests an initial Mach number of \mbox{${\sf M} < 0.3$} for the gas (Probstein~1969), while the absence of perfect gas-flow collimation may have reduced effects of the momentum by another factor of 1.5 or more. And even though there may have existed phenomena that worked in the opposite direction (such as recondensation; fallback on the surface by boulders; etc.), the momentum imparted to the nucleus should still have been substantially lower than implied by $\langle v_{\rm son}\rangle$. Allowing~the~product of \mbox{$\langle \alpha_{\rm gas} \beta_{\rm gas} \rangle$} to vary from 0.1 to a stretched value of 0.35, the diameter of a model spherical nucleus consistent with Equation~(7) should be in a range from $\sim$8~km to $\sim$12~km. Szab\'o et al.\ (2011, 2012) analyzed several images of C/1995 O1 along the post-perihelion leg of the orbit up to a heliocentric distance of 32~AU. The authors concluded that the comet's activity ceased between late 2007 and early 2009; this estimate can be refined with use of the results by Kramer et al.\ (2014), who still detected minor activity in 2008 August-September. 
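The momentum bound of Equation (7) and the size limits quoted above can be reproduced as follows; all inputs are the numbers from Equations (4) and (6), the adopted \mbox{$\langle v_{\rm son} \rangle = 270$ m s$^{-1}$}, and the bulk density of 0.4 g cm$^{-3}$:

```python
from math import pi

# Inputs from Equations (4) and (6):
dV_ng = 2.46       # orbit-integrated nongravitational effect, m/s
dM_gas = 8.5e15    # |orbit-integrated outgassed mass|, g
v_son = 270.0      # adopted representative speed of sound, m/s
rho = 0.4          # adopted bulk density, g/cm^3

# Equation (7): upper limit on the nucleus' mass, and the outgassed momentum:
M_max = v_son * dM_gas / dV_ng      # g; just under 1e18 g
p_gas = dM_gas * v_son * 100.0      # g cm/s; ~2.3e20 as quoted

def sphere_diameter_km(mass_g, rho_gcc):
    """Diameter of a homogeneous sphere of given mass and density, km."""
    return (6.0 * mass_g / (pi * rho_gcc)) ** (1.0 / 3.0) / 1.0e5

D_max = sphere_diameter_km(M_max, rho)          # ~17 km at most
D_lo = sphere_diameter_km(0.10 * M_max, rho)    # <alpha*beta> = 0.10 -> ~8 km
D_hi = sphere_diameter_km(0.35 * M_max, rho)    # <alpha*beta> = 0.35 -> ~12 km
```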
The inactive nucleus was detected at optical wavelengths of 0.55--0.9~$\mu$m on a few occasions, including with the Hubble Space Telescope (HST) on 2009 September 8 and with the Very Large Telescope (VLT) on 2011 October 5--25. It was also observed with the Herschel Space Observatory at 70~$\mu$m on 2010 June 10. These observations allowed Szab\'o et al.\ (2012) to determine separately a cross-sectional area of the nucleus, yielding a mean diameter of \mbox{74$\,\pm\,$6}~km with an axial ratio of at least \mbox{1.72$\,\pm\,$0.07}; and a post-perihelion geometric albedo of \mbox{0.081$\,\pm\,$0.009}, in contrast to a preperihelion albedo of only \mbox{0.03$\,\pm\,$0.01}. Szab\'o et al.'s (2012) determination of the nucleus' dimensions is in remarkably good agreement with an earlier result by Sekanina (1999a), who derived an average diameter of \mbox{71$\,\pm\,$4 km} (cf.\ Table~5) from six HST images exposed between October 1995 and October 1996 (i.e., long before perihelion). Other researchers arrived at less consistent dimensions, commenting on large discrepancies among the determinations by diverse methods. Employing the same HST images but a different approach, Weaver \& Lamy (1999) did estimate the most probable diameter at 70~km, yet not entirely ruling out \mbox{30--40 km}; they also reviewed the results by another group of an occultation of a star by the comet and arrived at an admittedly model dependent diameter of less than 52~km, while from three independent microwave observations the diameter came out to be near 40~km. Thermal-infrared observations aboard the Infrared Space Observatory resulted in a diameter of \mbox{70--112}~km depending on the applied physical model (Jorda et al.\ 2000). Re-reviewing the constraints from these data and near-perihelion radiometric data, Fern\'andez (2002) estimated the diameter at \mbox{60$\,\pm\,$20 km}. 
In their summary table, Lamy et al.\ (2004) provide two numbers for the nucleus' diameter, 74~km and 60~km, with no errors quoted. In summary, the {\it dimensions of an inactive nucleus\/} of C/1995~O1 derived for a single-body model from a~far-infrared\mbox{\hspace{0.07cm}}observation\mbox{\hspace{0.07cm}}at\mbox{\hspace{0.07cm}}a\mbox{\hspace{0.07cm}}record\,large\,heliocentric\,distance do {\it under no circumstances accommodate the magnitude of the outgassing-driven nongravitational acceleration\/} in the comet's motion. In the conservation-of-momentum equation, this disparity exceeds two orders of magnitude in terms of the mass of the nucleus, so that one confronts a {\it major contradiction\/}. Interestingly, this problem was independently mentioned by Sosa \& Fern\'andez (2011), yet it has never been solved nor its possible implications seriously addressed in the literature. \section{Nucleus of C/1995 O1 as a Compact Cluster of Fragments of the Original Body} I argue that the only feasible resolution of this contradiction requires one to postulate that the nucleus of C/1995~O1 at its recent return to perihelion was made up of~a {\it compact cluster of massive fragments\/} into which the original nucleus broke up by the action of tidal forces exerted by Jupiter in the course of the comet's close encounter with the planet in the 3rd millennium BCE (Paper~1). I further postulate that the detected nongravitational acceleration refers to the principal, most massive fragment, \mbox{8--12}~km in diameter and orbiting the Sun close to the cluster's center of mass, while the nongravitational accelerations~on other outgassing fragments trigger minor perturbations of their motions relative to the principal fragment and remain undetected. For this model to be physically meaningful, the cluster must be bound enough by gravity to survive as a densely-packed assemblage over more than 4000~yr against both the Sun's perturbations and collisional self-destruction.
It is also necessary to account for the observed nucleus' brightness in terms of a cross-sectional area of the (optically thin) cloud of fragments, to establish their size range and distribution to make the model self-consistent, and to describe the fragment properties in the context of both observations and constraints on the model --- including its gravitational stability and collisional history. \section{Stability of Gravitationally Bound Orbits:\ The Analogs} Globular clusters come to mind as a convenient cosmic analog for investigating the conditions of gravity-driven stability of a compact cometary assemblage. Kennedy's (2014) recent paper extensively deals with this issue and predicts the radius of stability, $r_{\rm st}$, employing Mardling's (2008) analysis. The outcome is a simple expression \begin{equation} r_{\rm st} = r_{\rm gal} f_0 \! \left( \!\frac{{\cal M}_{\rm gc}}{{\cal M}_{\rm gal}} \! \right)^{\!\!\frac{1}{3}} \!\!, \end{equation} where ${\cal M}_{\rm gal}$, ${\cal M}_{\rm gc}$, $r_{\rm gal}$, and $f_0$ are, respectively, the mass of the galaxy, the mass of the globular cluster, the distance of its closest approach to the center of the galaxy, and a parameter that is a function of the globular cluster's orbital eccentricity. In a limiting parabolic scenario, the orbits of all stars in the cluster are stable when \mbox{$f_0 < 0.18$}. For comparison, the radius of Hill's sphere~of C/1995~O1 at time $t$ is (e.g., Chebotarev 1964) \begin{equation} r_{\rm Hill}(t) = r_{\rm st}(t,h_0) = r(t) \, h_0 \! \left( \! \frac{{\cal M}_{\rm C}}{{\cal M}_{\rm Sun}} \! \right)^{\!\!\frac{1}{3}} \!\!, \end{equation} where \mbox{$h_0 = 3^{-\frac{1}{3}} = 0.69$} and ${\cal M}_{\rm Sun}$, ${\cal M}_{\rm C}(t)$, and $r(t)$ are, respectively, the mass of the Sun, and the comet's mass and heliocentric distance at time $t$. Kennedy's (2014) results show that the stability-zone's radius is about four or more times smaller than the radius of Hill's sphere. 
Two papers by Hamilton \& Burns (1991, 1992) on orbital stability zones about asteroids are highly relevant. The authors fortunately extended the range of investigated orbital eccentricities to 0.9, thereby covering effec\-tively comets as well. Establishing the stability of gravitationally bound orbits of particles around asteroids as a function of a deviation from orbit circularity, they also considered effects of the Coriolis force, orbital inclination, and solar radiation pressure. They further showed that escape to interplanetary space was not the only loss mechanism for particles in the initially bound trajectories; impact~on the asteroid's surface was another means of removal. Their effort focused on the determination of a ratio equivalent to $D_{\rm st}/D_{\rm C}$, where $D_{\rm st}$ is the stability-zone's diameter (equaling $2r_{\rm st}$) and $D_{\rm C}$ is the comet's or asteroid's diameter. For elongated orbits they independently concluded that the heliocentric distance in Equation~(9) was to be taken at perihelion, \mbox{$r(t_\pi) \!=\! q$}, so that \begin{equation} \frac{D_{\rm st}}{D_{\rm C}} = 191.76 h_0 q \rho^{\frac{1}{3}}, \end{equation} where $\rho$ is the bulk density of the comet's nucleus or the asteroid in g~cm$^{-3}$ and $q$ is in AU. Hamilton \& Burns (1991) demonstrated that the loss~of orbital stability was considerably higher among particles in prograde than in retrograde trajectories. In fact, extrapolation to the parent body's parabolic orbit would leave practically no gravitationally bound particles after 20~yr, the period of time over which their integrations extended. For retrograde orbits, impacts on the asteroid's surface became extremely rare and extrapolation of the stability-zone's radius to parabolic motion was uncertain, in part because of insufficient integration times, but some particles were likely to have survived in bound orbits. 
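The numerical coefficient of Equation (10), as well as the lower bound on $h_0$ derived in the next paragraph, can be verified with a short script; the perihelion distance \mbox{$q = 0.914$ AU} of C/1995 O1 and the standard values of the astronomical unit and the solar mass are assumptions introduced here, not restated in the text:

```python
from math import pi

AU_CM = 1.495979e13   # astronomical unit in cm (assumed standard value)
M_SUN = 1.989e33      # solar mass in g (assumed standard value)

# Hill-sphere coefficient of Equation (9):
h0_hill = 3.0 ** (-1.0 / 3.0)     # = 0.69 as quoted

# Numerical coefficient of Equation (10), from
# D_st/D_C = 2 q h0 ((pi/6) rho / M_Sun)^(1/3) with q expressed in AU:
coeff = 2.0 * AU_CM * ((pi / 6.0) / M_SUN) ** (1.0 / 3.0)   # ~191.76

def stability_ratio(h0, q_au, rho_gcc):
    """D_st/D_C of Equation (10); rho in g/cm^3, q in AU."""
    return coeff * h0 * q_au * rho_gcc ** (1.0 / 3.0)

# Requiring D_st/D_C > 10 with rho = 0.4 g/cm^3 and q ~ 0.914 AU
# (the perihelion distance is an assumption of this sketch):
h0_min = 10.0 / stability_ratio(1.0, 0.914, 0.4)   # ~0.08
```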
Estimating crudely from Hamilton \& Burns' plot that \mbox{$D_{\rm st}/D_{\rm C} > 10$}, one obtains \mbox{$h_0 > 0.08$} from Equation~(10) with \mbox{$\rho = 0.4$ g cm$^{-3}$}. Combining this lower limit with the upper bound implied by Kennedy's (2014) computations, I adopt in the following \mbox{$h_0 = 0.1$} to derive $D_{\rm st}$ for C/1995~O1. Hamilton \& Burns (1992) found that radiation pressure eliminated from the stable orbits all particles smaller than $\sim$1~mm for an asteroid $\sim$200~km in diameter and all particles smaller than $\sim$1~cm for an asteroid $\sim$20~km in diameter. There was little difference between radiation-pressure effects on prograde and retrograde orbits. Based on these results, it can be expected that no fragments smaller than a few centimeters would survive in stable orbits in the presumed nucleus' cluster of C/1995~O1. The results by Hamilton \& Burns (and others referred to in their papers) were a product of solving a \mbox{three-body} problem:\ the Sun, an asteroid, and a single orbiting particle. By contrast, a cluster of fragments involves of course an $n$-body problem, as described below. \section{Formulation of a Compact Cluster Model} To describe the compact cluster of fragments, it is desirable to first constrain its properties by requiring that they satisfy relevant observations. If the cluster is a product of~a collisional process that began with an initial tidal breakup in close proximity of Jupiter and was characterized by very low relative velocities (Section 8), one expects the differential size distribution function of fragments, i.e., their number, $d{\cal N}_{\rm frg}$, with diameters from $D$ to \mbox{$D \!+\! dD$}, to eventually approach steady state. At a constant bulk density, this scenario implies (Dohnanyi 1969; Williams \& Wetherill 1994) \begin{equation} d{\cal N}_{\rm frg}(D) = k_{\rm frg} D^{-\frac{7}{2}} dD, \end{equation} where $k_{\rm frg}$ is a normalization constant.
The cumulative distribution, i.e., the number of fragments whose diameters are greater than, or equal to, $D$, is then \begin{equation} {\cal N}_{\rm frg}(D) = \!\!\int_{D}^{\infty}\!\!\!k_{\rm frg} D^{-\frac{7}{2}}\,dD = \frac{2k_{\rm frg}}{5} D^{-\frac{5}{2}}. \end{equation} For the principal fragment, whose diameter $D_0$ was constrained by the orbital-momentum condition (Sections~2 and 3) to a range of \mbox{8--12}~km, {\vspace{-0.09cm}}this expression requires that \mbox{${\cal N}_{\rm frg}(D_0) \!=\! 1$}, so that \mbox{$k_{\rm frg} \!=\! \frac{5}{2} D_0^{5/2}$} and Equation~(12) simplifies to \begin{equation} {\cal N}_{\rm frg}(D) = \!\left(\!\frac{D_0}{D} \!\right)^{\!\!\frac{5}{2}} \!\!. \end{equation} If, reckoned in the order of decreasing size, an $i$-th fragment (${\cal N}_{\rm frg} \!=\! i$) has a diameter $D_{i-1}$, the diameter $D_i$ of the next smaller, \mbox{$(i\!+\!1)$}st fragment equals \begin{equation} D_i = D_{i-1}\!\left[ 1 \!+\! \frac{1}{{\cal N}_{\rm frg}(D_{i-1})} \right]^{\!-\frac{2}{5}} \!\! = D_{i-1} \! \left( \frac{i}{i\!+\!1} \! \right)^{\!\!\frac{2}{5}}\!\!. \end{equation} For example, the diameter of the second largest~\mbox{fragment} is expected to be $0.76\,D_0$, its mass should be 0.435 the mass of the principal fragment, and, if outgassing, its nongravitational acceleration should amount to 1.32 the acceleration of the principal fragment. An expression similar to Equation (14) can be derived for $D_i$ as a function of $D_{i+1}$ and ${\cal N}_{\rm frg}(D_{\rm i+1})$. The cross-sectional area of the nucleus' model by Szab\'o et al.\ (2012), \mbox{$X_{\rm Sz} = 4300$ km$^2$}, is now to be interpreted as a sum of the cross-sectional areas of all surviving fragments in the cluster that is presumed to be optically~thin, \mbox{$X_{\rm frg} = X_{\rm Sz}$}; it serves as another constraint on the distribution function, \begin{equation} X_{\rm frg} \!=\! 
{\textstyle \frac{5}{2}}D_0^{\frac{5}{2}} \!\!\!\!\: \int_{D_{\scriptstyle \star}}^{D_0} \!\!\! {\textstyle \frac{1}{4}} \pi D^2 \!\cdot\!\!\!\: D^{-\frac{7}{2}} dD \!=\! {\textstyle \frac{5}{4}} \pi D_0^2 \!\left( \!\!\sqrt{\frac{D_0}{D_{\textstyle \star}}} \!-\! 1 \!\!\right) \!, \end{equation} where $D_{\textstyle \star}$ is the diameter of the smallest fragment that contributes to the observed cross-sectional area $X_{\rm frg}$. As it follows from Equation (15) that \begin{equation} D_{\textstyle \star} = D_0 \!\left(\!1\!+\!\frac{4X_{\rm frg}}{5\pi D_0^2}\! \right)^{\!\!-2}\!\!, \end{equation} the total number of fragments in the cluster is equal to \begin{equation} {\cal N}_{\rm frg}^{\textstyle \star} = {\cal N}_{\rm frg}(D_{\textstyle \star}) = \!\left(\! \frac{D_0}{D_{\textstyle \star}}\!\right)^{\!\!\frac{5}{2}} \!\! = \!\left( \! 1 \!+\! \frac{4X_{\rm frg}}{5 \pi D_0^2} \!\right)^{\!\!5} \end{equation} and an average fragment diameter in the distribution is \begin{equation} \langle D \rangle \!=\! \frac{{\displaystyle \int_{D_{\scriptstyle \star}}^{D_0}}\! \!\! D \!\cdot\!\!\!\:D^{-\frac{7}{2}} \, dD}{{\displaystyle \int_{D_{\scriptstyle \star}}^{D_0}} \!\!\! D^{-\frac{7}{2}} \, dD} \!=\! {\textstyle \frac{5}{3}} D_{\textstyle \star} \!\! \left[ 1 \!-\!\!\left( \! \frac{D_{\textstyle \star}}{D_0} \! \right)^{\!\!\frac{3}{2}} \right] \!\! \cdot \!\!\left[ 1 \!-\!\! \left( \! \frac{D_{\textstyle \star}}{D_0} \!\right)^{\!\! \frac{5}{2}} \right]^{\!\!-1} \!\!\! . \end{equation} The mass contained in the cluster of fragments equals \begin{eqnarray} {\cal M}_{\rm frg} & = & {\textstyle \frac{5}{2}} D_0^{\frac{5}{2}} \!\! \int_{D_{\scriptstyle \star}}^{D_0} \!\!\! {\textstyle \frac{1}{6}} \pi \rho D^3 \!\cdot\! D^{-\frac{7}{2}} \:\! dD \nonumber \\[-0.18cm] & & \\[-0.23cm] & = & \frac{5 \pi}{6} \rho D_0^3 \! \left( \!1 \!-\! \sqrt{\frac{D_{\textstyle \star}}{D_0}} \right) = 5 {\cal M}_0 \! \left[ 1 \!-\! 
({\cal N}_{\rm frg}^{\textstyle \star})^{-\frac{1}{5}} \right]\!, \nonumber \end{eqnarray} where $\rho$ is a bulk density of the fragments and ${\cal M}_0$ the mass of the principal fragment, \mbox{${\cal M}_0 \!=\! \frac{1}{6} \pi \rho D_0^3$}, {\vspace{-0.03cm}}which always exceeds 20\% of the cluster's total mass. The minimum and average fragment diameters, the number of fragments, and the cluster's total mass, all derived from the observed cross-sectional area, are for three principal fragment's diameters from the adopted range (Section~3) listed in the top section of Table~1.{\vspace{-0.05cm}} \section{Birth and Early Evolution of\\Compact Cluster} The postulation of a breakup of the original nucleus of C/1995 O1 that was triggered by Jupiter's tidal forces at or very near the time of closest approach to the planet in November of $-$2251 (Section~1) is a plausible hypothesis for the origin of the compact cluster, because if it had come into existence way before the encounter, Jupiter's gravity field would have dissipated the cluster into a long filament of fragments at the time of the event. As described in detail in Paper 1, the comet has not approached Jupiter to within 0.7~AU since the encounter, but it passed through perihelion about 13~months \mbox{after} the near miss, at a heliocentric distance almost identical to that in 1997. During the more than 42 centuries the size distribution of fragments, especially near its lower end should have changed dramatically, as illustrated by Hamilton \& Burns (1992) for asteroids. All fragments smaller than a few centimeters across were removed by radiation pressure within years after the breakup. 
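Equations (16) through (18) can be evaluated directly. The run below assumes a 10 km principal fragment, the middle of the adopted range, together with the observed cross section of 4300 km$^2$; the corresponding entries of Table 1 may differ in detail:

```python
from math import pi

def cluster_statistics(D0_km, X_frg_km2):
    """Equations (16)-(18): smallest contributing diameter D*, total number
    of fragments N*, and mean fragment diameter <D>, all from the observed
    cross-sectional area X_frg and the principal fragment's diameter D0."""
    u = 1.0 + 4.0 * X_frg_km2 / (5.0 * pi * D0_km ** 2)
    D_star = D0_km / u ** 2                 # Eq. (16)
    N_star = u ** 5                         # Eq. (17)
    x = D_star / D0_km
    D_mean = (5.0 / 3.0) * D_star * (1.0 - x ** 1.5) / (1.0 - x ** 2.5)  # Eq. (18)
    return D_star, N_star, D_mean

# Illustrative run for D0 = 10 km and X_Sz = 4300 km^2:
D_star, N_star, D_mean = cluster_statistics(10.0, 4300.0)
# D* comes out near 70 m, N* near 2.4e5 fragments, <D> near 120 m
```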
Of the remaining fragments, including the more sizable ones, all those moving in prograde orbits were soon lost as well.\footnote{Fragments larger than a few centimeters but smaller than $D_{\textstyle \star}$ in diameter (Table~1) were probably removed by the outgassing-driven momentum, whose influence --- at least at heliocentric distances of up to a few AU --- resembles that of solar radiation pressure, varying as an inverse diameter. For example, scaling the observed effect, a fragment 50~m in diameter is expected at 1~AU from the Sun to be subjected to an outgassing-driven acceleration on the order of 0.002 the Sun's gravitational acceleration, equivalent to a radiation-pressure acceleration on a particle of about 1.5~mm in diameter at the same density. As activity ceases at larger heliocentric distances, so does the momentum and, accordingly, the resulting orbit-integrated effect is more heavily dependent on its magnitude near the Sun than is radiation pressure.} For fragments in retrograde orbits the situation is less clear, but, as a rule of thumb, one can assume that at least 50\% of them failed to survive until the 1997 apparition. Because the initial orbital velocities of fragments relative to the cluster's center of mass (dictated by the rotation of the original nucleus at the time of breakup) were very low, the high collisional rate (Section~8) must have soon brought a near equilibrium between the numbers of fragments moving in prograde and retrograde orbits, so for the sake of argument one can assume that about one quarter of all fragments with diameters larger than $D_{\textstyle \star}$ survived until the 1997 apparition. If no fragment with a diameter smaller than $D_{\textstyle \star}$ but one quarter of the fragments with diameters \mbox{$D_{\textstyle \star} \!\leq\! D \!\leq\! 
D_0$} survived to the 1997 apparition, the mass ${\cal M}_{\rm C}$ of the original nucleus of C/1995~O1 at the time of encounter with Jupiter is estimated to have equaled \begin{equation} {\cal M}_{\rm C} = {\cal M}_{\rm C} \sqrt{D_{\textstyle \star}/D_0} + 4 {\cal M}_{\rm frg}, \end{equation} where the mass ${\cal M}_{\rm frg}$ is given by Equation~(19). Solving Equation~(20) for ${\cal M}_{\rm C}$, one finds in terms of the principal fragment's parameters: \begin{equation} {\cal M}_{\rm C} = \frac{10\pi}{3} \rho D_0^3 = 20 {\cal M}_0. \end{equation} This is of course a very crude estimate, but it is needed only for an assessment of the probability of tidal breakup in Section~7. In the range of the three solutions in Table~1, the original nucleus was between 2.1 and 7.3$\,\times$10$^{18}$g in mass and between 21 and 33~km{\vspace{-0.04cm}} in diameter (at the adopted bulk density of 0.4~g~cm$^{-3}$), a considerably less impressive and statistically more probable size than 74~km. Yet, because of the greater mass and the very limited volume of space involved, the collisional rate in the early phase of cluster evolution should have been orders of magnitude higher than at the 1997 apparition. Judging from the numerical experiments by Hamilton \& Burns (1991, 1992), much of the dust generated~in~the course of fragmentation is expected to have been blown away by solar radiation pressure, especially near the perihelion passage 13 months after the Jovian encounter, whereas smaller, active boulder-sized objects were subjected to nongravitational accelerations whose effects ultimately were not unlike those of radiation pressure. I return to the issue of small fragments in Section~8, after introducing a constraint on the collisional rate.
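Equation (21) implies that, at equal bulk density, the diameter of the original nucleus is $20^{1/3}$, or about 2.71, times that of the principal fragment, which is how the 21.7 km and 32.6 km figures quoted in Section 7 arise. A minimal check:

```python
from math import pi

def original_nucleus(D0_km, rho_gcc=0.4):
    """Equation (21): mass M_C = 20 M_0 and, at the same bulk density,
    diameter D_C = 20^(1/3) D_0 of the pre-encounter nucleus."""
    M0 = pi / 6.0 * rho_gcc * (D0_km * 1.0e5) ** 3   # principal fragment mass, g
    return 20.0 * M0, 20.0 ** (1.0 / 3.0) * D0_km

# The two ends of the adopted 8-12 km range of principal diameters:
M_lo, D_C_lo = original_nucleus(8.0)    # D_C ~ 21.7 km
M_hi, D_C_hi = original_nucleus(12.0)   # D_C ~ 32.6 km
```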
\begin{table}[b] \vspace{-3.57cm} \hspace{4.22cm} \centerline{ \scalebox{1}{ \includegraphics{t1_HB2.ps}}} \vspace{-15.17cm} \end{table} \section{Tidal Force of Jupiter vs Dimensions and\\Tensile Strength of Original Nucleus} Consider the original, pre-encounter nucleus of comet C/1995 O1 as a porous rigid rotating spherical aggregate of dust and ices in static equilibrium, moving in a strongly hyperbolic orbit about Jupiter and exposed to its tidal forces. A comprehensive stress theory by Aggarwal \& Oberbeck (1974) suggests that at a Jovicentric distance $\Delta_{\rm J}$ fissures should start propagating from the equatorial regions on the nucleus' surface at which Jupiter is rising above, or setting below, the local horizon, when the tensile strength, $T_{\rm tens}$, satisfies a condition \begin{equation} T_{\rm tens} = {\textstyle \frac{10}{19}} P_{\rm c} \!\left(\! \frac{R_{\rm J}}{\Delta_{\rm J}} \! \right)^{\!\!3} \! \frac{\rho_{\rm J}}{\rho}, \end{equation} where $R_{\rm J}$ and $\rho_{\rm J}$ are Jupiter's radius and mean density, $\rho$ is again the nucleus' bulk density, and \begin{equation} P_{\rm c} = {\textstyle \frac{1}{6}} \pi G \rho^2 D_{\rm C}^2 \end{equation} is the gravitational pressure in the center of the nucleus of diameter $D_{\rm C}$, with $G$ being~the~gravitation\-al constant. Inserting \mbox{$\Delta_{\rm J}/R_{\rm J} = 10.73$} (Paper~1), \mbox{$\rho_{\rm J} = 1.33$ g cm$^{-3}$}, and \mbox{$\rho \!=\! 0.4$ g cm$^{-3}$}, the tensile strength is between~3.7~Pa and 8.4~Pa for the original nucleus 21.7~km to 32.6~km in diameter (Table~1). These tensile strength values are near the lower end of a range reported by Groussin et~al.\ (2015) [3~Pa] and Basilevsky et al.\ (2016) [$>$1.5~Pa]~from their studies of outcropped consolidated material in cliff-like features on the nucleus' surface of comet 67P. 
As~a short-period comet, 67P was exposed to processes~such~as sintering, which have a tendency to increase the strength of material and which C/1995~O1 is not expected~to~have experienced before the encounter. Its tidal breakup in close proximity of Jupiter should accordingly be judged as plausible. Once a fissure began to propagate inside the comet's nucleus, a separation of the early fragments was only a matter of time. The fragmentation may have been assisted by the rotational velocity, which --- if the spin rate was close to that observed during the 1997 apparition (a rotation period of $\sim$11.35~hr; Licandro {\vspace{-0.04cm}}et al.\ 1998) --- amounted to between 1.7~m~s$^{-1}$ and~2.5~m~s$^{-1}$. Since the velocity of escape equaled~5.1~m~s$^{-1}$~to~7.7~m~s$^{-1}$, the early fragments moved along ballistic trajectories, resulting in imminent impacts, further fragmentation, and random walk of secondary fragments superposed on their rotationally-driven motions.~The~momentum~should~have progressively built up to make the developing \mbox{cluster}~of colliding fragments slowly expand around its center~of mass. On the assumption that the cluster was~\mbox{gradually} acquiring spherical symmetry,{\vspace{-0.08cm}} a root-mean-squared cir\-cular velocity, \mbox{$\langle V_{{\rm circ},\ell}^2 \rangle{\mbox{\raisebox{0.2ex}{$^{\! {\frac{1}{2}}\!}$}}}$}, at distances between \mbox{$\ell \!-\! d\ell$} and~$\ell$ should satisfy a condition \begin{equation} \langle V_{{\rm circ},\ell}^2 \rangle \, 4\pi\ell^2 d\ell = \frac{G{\cal M}_{\rm f}(\ell)}{\ell} \, 4\pi \ell^2 d\ell, \end{equation} where ${\cal M}_{\rm f}(\ell)$ is the mass of the fragments located at distances smaller than $\ell$ from the center of mass, to whose gravitational attraction the fragments orbiting between \mbox{$\ell \!-\! d\ell$} and $\ell$ are subjected. All fragments at distances greater than $\ell$ represent minor perturbers.
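The tensile strengths of 3.7 to 8.4 Pa and the escape and rotational velocities quoted above follow from Equations (22) and (23) together with elementary formulas for a homogeneous sphere; a sketch, in which the cgs value of $G$ is an assumed standard constant:

```python
from math import pi, sqrt

G_CGS = 6.674e-8   # gravitational constant in cgs (assumed standard value)

def tensile_strength_pa(D_C_km, rho=0.4, rho_J=1.33, delta_over_RJ=10.73):
    """Equations (22)-(23): tensile strength at which fissures start, in Pa,
    for a nucleus of diameter D_C at 10.73 Jovian radii from Jupiter."""
    P_c = pi / 6.0 * G_CGS * rho ** 2 * (D_C_km * 1.0e5) ** 2  # dyn/cm^2
    return 10.0 / 19.0 * P_c * delta_over_RJ ** -3 * (rho_J / rho) * 0.1

def escape_velocity_ms(D_C_km, rho=0.4):
    """Surface escape velocity sqrt(2 G M / R) of a homogeneous sphere, m/s."""
    return sqrt(2.0 * pi * G_CGS * rho / 3.0) * D_C_km * 1.0e5 / 100.0

def equatorial_velocity_ms(D_C_km, P_hr=11.35):
    """Rotational velocity pi D / P at the equator, m/s."""
    return pi * D_C_km * 1.0e3 / (P_hr * 3600.0)

# For the 21.7 km and 32.6 km pre-encounter nuclei of Table 1:
T_lo, T_hi = tensile_strength_pa(21.7), tensile_strength_pa(32.6)       # ~3.7, ~8.4 Pa
v_esc_lo, v_esc_hi = escape_velocity_ms(21.7), escape_velocity_ms(32.6) # ~5.1, ~7.7 m/s
v_rot_lo, v_rot_hi = equatorial_velocity_ms(21.7), equatorial_velocity_ms(32.6)  # ~1.7, ~2.5 m/s
```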
Assuming the cluster's spatial density to be independent of the distance from the center of mass, ${\cal M}_{\rm f}(\ell)$ varies as the volume confined within $\ell$, and if $D_{\rm frg}$~is the cluster's diameter, the{\vspace{-0.06cm}} root-mean-squared circular velocity averaged over the cluster, $\langle V_{\rm circ}^2 \rangle {\mbox{\raisebox{-0.6ex}{$^{\!^{\frac{1}{2}}\!}$}} }$, is determined by a condition{\vspace{-0.05cm}} \begin{equation} \langle V_{\rm circ}^2 \rangle \!\! \int_{0}^{\frac{1}{2}D_{\rm frg}} \!\! \ell^{\:\!2} d\ell = \!\! \int_{0}^{\frac{1}{2}D_{\rm frg}} \frac{G {\cal M}_{\rm frg}}{\ell} \! \left( \! \frac{2 \ell}{D_{\rm frg}} \! \right)^{\!\!3} \! \ell^{\:\!2} d\ell, \end{equation} from which \begin{equation} \langle V_{\rm circ}^2 \rangle^{\!\frac{1}{2}} = \left( \!\frac{6 \:\! G {\cal M}_{\rm frg}}{5D_{\rm frg}} \! \right)^{\!\!\frac{1}{2}}\! , \end{equation} where ${\cal M}_{\rm frg}$ is given by Equation~(19). \section{Rate of Collisions Among Fragments and\\the Cluster's Size} Consider a spherical fragment of a diameter $D_i$ moving with a velocity $V_{i,j}$ relative to another fragment whose diameter is $D_j$. The cross-sectional area for a collision~between these two fragments equals (e.g., Kessler 1981){\vspace{-0.05cm}} \begin{equation} \sigma_{i,j} = \frac{\pi}{4} (D_i \!+\! D_j)^2 , \end{equation} where the fragments' escape velocity is being neglected.
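The averaging that leads from Equation (23) to Equation (24) can be verified by direct numerical integration; the sketch below uses arbitrary placeholder values for $G$, ${\cal M}_{\rm frg}$, and $D_{\rm frg}$, since only the ratio of the two integrals matters:

```python
# Numerical verification of Equation (24): averaging the circular velocity
# implied by Equation (23) over a uniform-density cluster of diameter D_frg
# gives <V_circ^2>^(1/2) = (6 G M_frg / (5 D_frg))^(1/2). The values of G,
# M_frg, and D_frg below are arbitrary placeholders.
import math

G, M_frg, D_frg = 6.674e-11, 1.0e13, 1.0e5   # SI units, illustrative only

R = D_frg / 2.0
n = 10000
dl = R / n
num = den = 0.0
for i in range(n):
    l = (i + 0.5) * dl                       # midpoint rule
    M_inside = M_frg * (2.0 * l / D_frg)**3  # mass interior to radius l
    num += (G * M_inside / l) * l**2 * dl
    den += l**2 * dl

v_numeric = math.sqrt(num / den)
v_closed = math.sqrt(6.0 * G * M_frg / (5.0 * D_frg))
print(v_numeric, v_closed)
```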
If $\nu$ is a number density of fragments in the cluster, i.e., their number per unit volume, an average number of~collisions {\vspace{-0.07cm}}that the fragment with a diameter $D_i$ experiences per unit time,\,$\dot{N}_{\rm coll}^{(i)}$, then equals{\vspace{-0.05cm}} \begin{equation} \dot{N}_{\rm coll}^{(i)} = \nu \langle \sigma_i \rangle \langle V_{{\rm rel},i}^2 \rangle^{\!\frac{1}{2}}, \end{equation} where $\langle \sigma_i \rangle$ is an average collisional{\vspace{-0.06cm}} cross-sectional area for a fragment of diameter $D_i$ and{\vspace{-0.04cm}} $\langle V_{{\rm rel},i}^2 \rangle^{\!\frac{1}{2}}$ is its root-mean-squared impact velocity averaged over all fragments with which it collides. If the population of fragments has the size distribution introduced in Section~4, the collisional cross-sectional area $\langle \sigma_i \rangle$ equals \begin{eqnarray} \langle \sigma_i \rangle & = & - {\textstyle \frac{5}{2}} D_{\textstyle \star}^{\frac{5}{2}} \!\! \left[1 \!-\! \left(\!\frac{D_{\textstyle \star}}{D_0} \! \right)^{\!\!\frac{5}{2}} \right]^{\!-1} \!\!\!\! \int_{D_{\scriptstyle \star}}^{D_0} \!\!\! \sigma_{i,j} D_j^{-\frac{7}{2}} dD_j \nonumber \\[-0.35cm] & & \\[-0.05cm] & = & \frac{\pi}{4} D_i^2 \!\left[1 \!+\! \frac{5}{\Gamma_5} \frac{D_{\textstyle \star}}{D_i} \!\left(\! \frac{2\Gamma_3}{3} \!+\! \frac{D_{\textstyle \star}}{D_i} \!\right) \! \right] \!, \nonumber \end{eqnarray} where{\vspace{-0.11cm}} \begin{equation} \Gamma_m = \!\sum_{k = 0}^{m-1} \!\left(\! \frac{D_{\textstyle \star}}{D_0} \!\right)^{\!\!\frac{1}{2}k} \!\! = \frac{1 \!-\! \sqrt{(D_{\textstyle \star}/D_0)^{m}}}{1 \!-\! \sqrt{D_{\textstyle \star}/D_0}} . 
\end{equation} Averaging now $D_i$ over all fragment diameters between $D_{\textstyle \star}$ and $D_0$, the mean cross-sectional area $\langle \sigma \rangle$ for collisions between any two such fragments becomes \begin{eqnarray} \langle \sigma \rangle & = & - {\textstyle \frac{5}{2}} D_{\textstyle \star}^{\frac{5}{2}} \!\!\left[1 \!-\! \left(\! \frac{D_{\textstyle \star}}{D_0} \!\right)^{\!\!\frac{5}{2}} \right]^{\!-1} \!\!\!\! \int_{D_{\scriptstyle \star}}^{D_0} \!\!\! \langle \sigma_i \rangle D_i^{-\frac{7}{2}} dD_i \nonumber \\[-0.35cm] & & \\[-0.05cm] & = & \frac{5 \pi}{2 \Gamma_5} D_{\textstyle \star}^2 \!\left( \!1 \!+\! \frac{5 \Gamma_3^2}{9 \Gamma_5} \:\!\!\right) \!, \nonumber \end{eqnarray} and an average number of collisions per unit time, $\dot{N}_{\rm coll}$, experienced by the fragments with diameters between $D_{\textstyle \star}$ and $D_0$ is \begin{equation} \dot{N}_{\rm coll} = \nu \langle \sigma \rangle \langle V_{\rm rel}^2 \rangle^{\!\frac{1}{2}}, \end{equation} where $\langle V_{\rm rel}^2 \rangle^{\!\frac{1}{2}}$ is their root-mean-squared velocity averaged over the cluster. I now assume that this velocity varies in proportion to the root-mean-squared average circular velocity, derived in Section~5, \begin{equation} \langle V_{\rm rel}^2 \rangle^{\!\frac{1}{2}} = \eta \langle V_{\rm circ}^2 \rangle^{\!\frac{1}{2}}. \end{equation} In his elaborate collisional model for the asteroid population, Dohnanyi (1969) adopted an impact velocity equivalent to \mbox{$\eta \simeq 0.29$}. However, the motions of asteroids in the belt are much more organized than are fragments in the proposed cluster, for which $\eta$ should be much greater but not exceeding unity, because there is not enough energy in the system to achieve \mbox{$\langle V_{\rm rel}^2 \rangle \gg \langle V_{\rm circ}^2 \rangle$}. 
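The closed forms in Equations (29)--(31) can be checked against a brute-force average over the size distribution $n(D)\,dD \propto D^{-7/2} dD$; the sketch below uses illustrative values of $D_{\textstyle \star}$ and $D_0$ (not those of Table 1) and quadrature in $\ln D$, where the distribution is smooth:

```python
# Brute-force check of Equation (31): averaging the collision cross section
# pi/4 (D_i + D_j)^2 of Equation (25) over the size distribution
# n(D) dD ~ D^(-7/2) dD on [D_star, D_0] reproduces the closed form
# <sigma> = (5 pi / (2 Gamma_5)) D_star^2 (1 + 5 Gamma_3^2 / (9 Gamma_5)).
import math

D_star, D_0 = 25.0, 8000.0         # m; illustrative placeholders only

x = math.sqrt(D_star / D_0)
def Gamma(m):                      # the sums of Equation (30)
    return (1.0 - x**m) / (1.0 - x)

# quadrature nodes in t = ln D, where p(D) dD = A exp(-5t/2) dt is smooth
A = 2.5 * D_star**2.5 / (1.0 - (D_star / D_0)**2.5)
n = 1000
t0, t1 = math.log(D_star), math.log(D_0)
h = (t1 - t0) / n
nodes = [(math.exp(t0 + (i + 0.5) * h),
          A * math.exp(-2.5 * (t0 + (i + 0.5) * h)) * h) for i in range(n)]

sigma_avg = sum(math.pi / 4.0 * (Di + Dj)**2 * wi * wj
                for Di, wi in nodes for Dj, wj in nodes)
sigma_closed = (5.0 * math.pi / (2.0 * Gamma(5))) * D_star**2 \
               * (1.0 + 5.0 * Gamma(3)**2 / (9.0 * Gamma(5)))
print(sigma_avg, sigma_closed)
```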
An important constraint follows from Schr\"{a}pler et al.'s (2012) microgravity experiments that showed that in order for fluffy dust aggregates to fragment upon impact, their relative velocity should be at least 0.4~m~s$^{-1}$, since at lower velocities they bounce or stick. Similar independent experiments by Gunkelmann et al.\ (2016) suggest that for highly porous submicron-grain~\mbox{agglomerates}, comparable in porosity to cometary nuclei, the minimum impact velocity triggering fragmentation is still lower, at $\sim$0.17~m~s$^{-1}$. Since the impact velocity is a function of the cluster size, which in turn depends on the velocity, the parameter $\eta$ is constrained but not well determined. \begin{figure*}[t] \vspace{-4.4cm} \hspace{-0.67cm} \centerline{ \scalebox{0.85}{ \includegraphics{f1_HB2.ps}}} \vspace{-12.39cm} \caption{Visual light curve of the inner coma of comet C/1995 O1 (normalized to a nucleus-centered field of 24\,660~km on a side when observed from a distance of 1~AU) based on the CCD observations by Liller (1997, 2001) made with a 20-cm f/1.5 Celestron camera between 1995 August 2 and 2000 January 21 (608 days preperihelion to 1025 days post-perihelion). The curve is very smooth before perihelion, but dotted with at least five prominent flare-ups after perihelion, between 1998 January and 1999 April, when the comet was 4 AU to 8.2 AU from the Sun. The onset times of the events are marked \mbox{I--V}. Their amplitudes, \mbox{0.7--1.6 mag}, imply a sudden increase in the cross-sectional area of the dust ejecta in the 34$^{\prime\prime}$ field of up to nearly 3 million km$^2$ at an assumed geometric albedo of 0.04 (Table~2).{\vspace{0.5cm}}} \end{figure*} With the number density of fragments in the cluster being{\vspace{-0.2cm}} \begin{equation} \nu = \frac{6 {\cal N}_{\rm frg}^{\textstyle \star}}{\pi D_{\rm frg}^3} = \frac{6}{\pi D_{\rm frg}^3} \!\left( \! 
\frac{D_0}{D_{\textstyle \star}} \!\right)^{\!\!\frac{5}{2}}\!\!, \end{equation} I insert from Equations (31), (33), and (34) as well as from (26), (19), and (15) into Equation~(32) and obtain for a mean free time{\vspace{-0.04cm}} between two consecutive collisions, \mbox{$\tau_{\rm coll} \!=\! \dot{N}_{\rm coll}^{-1}$}, an expression \begin{equation} \tau_{\rm coll} = \frac{\sqrt{5}}{30\eta}\, \frac{\Gamma_5^2}{\Gamma_5 \!+\! {\textstyle \frac{5}{9}}\Gamma_3^2} \, (G \rho X_{\rm frg})^{-\frac{1}{2}} D_{\textstyle \star}^{\frac{1}{4}} D_0^{-\frac{11}{4}} D_{\rm frg}^{\frac{7}{2}}. \end{equation} The mean free time depends heavily on the dimensions of the cluster and the principal fragment, but only weakly on the size of the smallest fragment, which besides its quartic root enters the expression via the sums $\Gamma_3$~and~$\Gamma_5$. Equation (35) can in principle serve to determine the size of the cluster at the 1997 apparition as a function of $D_0$, once the collisional mean free time $\tau_{\rm coll}$ and the impact velocity parameter $\eta$ are known. Although the exact dimensions of the cluster at the 1997 apparition are unknown, they are rather strongly constrained; the cluster's gravitational stability over more than four millennia requires that its diameter not exceed the stability limit at large heliocentric distances, where the comet spends nearly all of its life. Another upper limit is provided by the HST images taken between October 1995 and October 1996; one pixel, which the cluster's diameter should never exceed by more than a factor of about two, equaled between 90~km and 220~km. On the other hand, assuming that the cluster was optically thin, its diameter should much exceed Szab\'o et al.'s (2012) 74~km. \section{Liller's Detection of Recurring Flare-Ups and Their Proposed Interpretation} A pair of important papers on C/1995 O1 was written by Liller (1997, 2001). 
He monitored the brightness of the inner coma using a 20-cm f/1.5 Celestron camera, a CCD detector, and a filter to obtain magnitudes in the $V$ system. His dataset consists of exposures on 360 nights, {\vspace{-0.02cm}}covering a time period of nearly 4$\frac{1}{2}$~yr, from 1995 August 2 (608 days before perihelion) to 2000 January 21 (1025 days after perihelion). The comet was 7.06~AU and 10.28~AU from the Sun on, respectively, the former and the latter dates. {\vspace{-0.06cm}}For each exposure Liller measured an apparent magnitude $\widehat{H}$ in a square field of 34$^{\prime\prime}$ on a{\vspace{-0.07cm}} side, with the nucleus in its center, and converted it to $\widehat{H}_\Delta$, by removing the effect of a variable field size due to the comet's changing geocentric{\vspace{-0.07cm}} distance $\Delta$, with an expression \mbox{$\widehat{H}_\Delta\!=\!\widehat{H} \!-\! 2.5 \log \Delta$}. This light curve is reproduced in Figure~1 as a function of the time from perihelion. Liller (2001) called attention to a prominent anomaly in the light curve, which is very smooth before perihelion but dotted with at least five flare-ups after perihelion. He noted that the flare-ups were distributed approximately uniformly in time, with gaps of 96~days~to~125~days; that the peak amplitudes ranged from 0.7~mag to 1.6~mag; that their heliocentric distances varied from 4.0~AU to 8.1~AU; and that the expansion velocities{\vspace{-0.01cm}} of the ejected material were confined to a range from 62~m~s$^{-1}$ to 217~m~s$^{-1}$. The peaks of the five flare-ups were observed between 1998 January 11 and 1999 April 14, and there could well have been two additional flare-ups, one in late August 1997 and the other in mid-October 1999. Liller expressed his belief that the flare-ups were caused by the nucleus' recurring activity, not by collisions with asteroids, but he did not propose any specific active process. 
\begin{table*} \vspace{-4.08cm} \hspace{-0.54cm} \centerline{ \scalebox{1}{ \includegraphics{t2_HB2.ps}}} \vspace{-19.24cm} \end{table*} Because the flare-ups were observed at large heliocentric distances, they could not be triggered by a suddenly elevated sublimation rate of water ice. Instead, the driver would have to have been explosions of carbon monoxide (possibly assisted by carbon dioxide). Unfortunately, as seen from Figure~9 of Paper~1, there were no obvious peaks on the carbon-monoxide production curve temporally coinciding with Liller's flare-up times in Figure~1. The single strongly elevated carbon-monoxide production rate, based on an observation made with an instrument onboard the Infrared Space Observatory (ISO) on 1998 April 6 (370~days after perihelion, when the comet was 4.9~AU from the Sun) precedes the flare-up II in Figure~1 by five weeks, and its origin is unclear. Thus, an intrinsic, carbon-monoxide driven event was not a primary trigger of the five (or six) post-perihelion flare-ups detected by Liller (2001). Could each of the observed flare-ups contain~in~fact~the debris of particular fragments in the cluster that collided with one another and broke up? In order to examine the implications of this hypothesis, let the masses of~the colliding fragments be ${\cal M}_i$ and ${\cal M}_j$, respectively, and let their relative velocity $(V_{\rm rel})_{i,j}$ upon impact be high enough so that they both fracture (rather than bounce or stick together). In addition, let the slope of the size distribution of the debris generated by this collision equal the slope of the size distribution of the cluster's fragments, in which case the mass contained in the collisional debris\\[-0.2cm] \begin{equation} ({\cal M}_{\rm deb})_{i,j} = {\cal M}_i + {\cal M}_j, \end{equation} amounts to, in analogy to Equation (19), \begin{eqnarray} ({\cal M}_{\rm deb})_{i,j} & = & \frac{5 \pi}{6} \rho \, (D_{\rm max})_{i,j}^3 \!\left[ 1 \!-\! 
\sqrt{\frac{(D_{\rm min})_{i,j}}{(D_{\rm max})_{i,j}}} \, \right] \nonumber \\[-0.2cm] & & \\[-0.2cm] & = & 5\,({\cal M}_{\rm max})_{i,j} \!\left[ 1 \!-\! \sqrt{\frac{(D_{\rm min})_{i,j}}{(D_{\rm max})_{i,j}}} \,\right] \!, \nonumber \end{eqnarray} where\,$(D_{\rm max})_{i,j}$\,and\,$(D_{\rm min})_{i,j}$\,are, respectively, the diam\-eters of the largest and smallest pieces in the debris{\vspace{-0.02cm}} of colliding {\vspace{-0.05cm}}fragments $i$ and $j$, while \mbox{$({\cal M}_{\rm max})_{i,j} \!=\! \frac{1}{6} \pi \:\!\!\rho \:\!(D_{\rm max})_{i,j}^3$} is the mass of the largest piece. Furthermore, similarly~to Equation~(15), the cross-sectional area of the debris is \begin{equation} (X_{\rm deb})_{i,j} = \frac{5 \pi}{4} (D_{\rm max})_{i,j}^2 \!\left[ \sqrt{\frac{(D_{\rm max})_{i,j}}{(D_{\rm min})_{i,j}}} \!-\! 1 \right] \end{equation} and, following Equation (17), the number of pieces in the debris field becomes \begin{equation} (N_{\rm deb})_{i,j}^{\textstyle \star} = \left[ \frac{(D_{\rm max})_{i,j}}{(D_{\rm min})_{i,j}} \right]^{\!\frac{5}{2}} \! = \left[ 1 \!+\! \frac{4\:\! (X_{\rm deb})_{i,j}}{5 \pi (D_{\rm max})_{i,j}^2} \right]^{\!5} \!\!. \end{equation} The five flare-ups reported by Liller (2001) are summarized in Table~2, the first eight columns of which are self-explanatory. Dropping the subscripts $i$ and $j$, the critical quantity is the cross-sectional area $X_{\rm obs}$, computed from a normalized amplitude of the flare-up's light curve by assuming a geometric albedo of 0.04 and allowing for the phase effect with the use of the Marcus (2007) modification~of~the~Henyey-Greenstein scattering law. Observed in a \mbox{34$^{\prime\prime} \!\! \times \! 34^{\prime\prime}$} aperture, $X_{\rm obs}$ is listed in the penultimate column. In the last column it is scaled to a constant linear aperture of 10$^5$\,km on a side, assuming that the surface brightness of the flare-up varied inversely as the distance from the cluster's center of light. 
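Equations (38) and (39) are mutually consistent: substituting the former into the latter returns $(D_{\rm max}/D_{\rm min})^{5/2}$ identically, as a short numerical check (with arbitrary test diameters) confirms:

```python
# Consistency check of Equations (38) and (39): with the cross-sectional
# area X_deb from Equation (38), the number of debris pieces satisfies
# (D_max/D_min)^(5/2) = [1 + 4 X_deb / (5 pi D_max^2)]^5 identically.
# The diameters below are arbitrary test values.
import math

D_max, D_min = 120.0, 3.0e-7     # m; illustrative only

X_deb = (5.0 * math.pi / 4.0) * D_max**2 * (math.sqrt(D_max / D_min) - 1.0)
N_direct = (D_max / D_min)**2.5
N_from_X = (1.0 + 4.0 * X_deb / (5.0 * math.pi * D_max**2))**5
print(N_direct, N_from_X)
```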
The tabulated cross-sectional data appear to be in a range of \mbox{$\sim$1--3 million km$^2$} in the 34$^{\prime\prime\!}$ by~34$^{\prime\prime}$ field, more than two orders of magnitude greater than the cross-sectional area of the comet's nucleus, based on the work by Szab\'o et al.\ (2012) and identified here with the total cross section of the proposed cluster of fragments. Some of this disparity is explained by the presence of optically effective, very fine dust in the flare-up debris, in contrast to the absence of fragments less than a few tens of meters across in the cluster once a flare-up fades away (Table~1). Accordingly, I adopt \mbox{$D_{\rm min} \simeq 0.3 \; \mu$m} in Equations~(37) to (39), meaning it refers to the submicron-sized grains embedded in porous aggregate particles (e.g., Brownlee 1985)\footnote{Aggregate particles of this kind were recently reported to make up major part of the dust population of comet 67P/Churyumov-Gerasimenko (Della Corte et al.\ 2015; Merouane et al.\ 2016).{\vspace{-0.33cm}}} that are believed to make up much of the refractory mass of the fragments. Another part of the disparity derives from the fact that a substantial increase in the surface area during the collisional fragmentation triggered off an increased production of carbon monoxide from the newly exposed surface, which necessarily entailed an increased production of dust. Evidence of CO-driven dust is implied by Liller's (2001) remark that the expansion{\vspace{-0.04cm}} rate of ejecta during the flare-ups exceeded 60~m~s$^{-1}$, about two orders of magnitude higher than the impact velocity. This means that a flare-up's amplitude and the corresponding cross-sectional area $X_{\rm obs}$ consisted of at least two different components, $X_{\rm deb}$ referring only to the low-velocity mass. 
Since, as already noted, no spike was apparent in the carbon-monoxide production rate at the times of the flare-ups, the amplitude of the high-velocity component did not exceed the overall scatter in the CO production rate, which according to Table~19 of Paper~1 amounted to 10$^{\pm 0.12}$, translating to a peak amplitude of 0.60~mag. Table~2 shows that Liller was able to detect flare-ups with an amplitude of 0.7~mag, so that a conservative estimate for a minimum detectable amplitude of the low-velocity component is $\sim$0.1~mag, equivalent to a lower limit of the cross-sectional area of collisional debris of \mbox{$X_{\rm lim} \simeq 1 \times$10$^5$\,km$^2$}, which can now readily be equated with $X_{\rm deb}$ from Equation~(38). The issue now is how $X_{\rm lim}$ compares with the cross-sectional area of the debris generated by a collision of two of the least massive fragments in the cluster, of diameter $D_{\textstyle \star}$. The mass of this debris is, from Equation~(36), \begin{equation} {\cal M}_{\rm deb}(D_{\textstyle \star},D_{\textstyle \star}) = 2{\cal M}(D_{\textstyle \star}) = \frac{\pi}{3} \rho D_{\textstyle \star}^3 \end{equation} and, on the other hand, since \mbox{$D_{\rm min} \!\ll\! D_{\textstyle \star}$}, in terms of the largest piece of the debris, from Equation (37), \begin{equation} {\cal M}_{\rm deb}(D_{\textstyle \star},D_{\textstyle \star}) = \frac{5\pi}{6} \rho D_{\rm max}^3. \end{equation} Using these equations to eliminate $D_{\rm max}$ and given again that \mbox{$D_{\rm min} \!\ll\! D_{\textstyle \star}$}, one obtains for the cross-sectional area of the debris from Equation~(38)\\[-0.15cm] \begin{equation} X_{\rm deb}(D_{\textstyle \star},D_{\textstyle \star}) = \!\left(\! \frac{5}{128} \!\right)^{\!\!\frac{1}{6}} \!\!\pi D_{\textstyle \star}^{\frac{5}{2}} D_{\rm min}^{-\frac{1}{2}}.
\end{equation} For the three scenarios from Table~1, the cross-sectional areas of the debris generated by a collision of the smallest fragments in the cluster are 10~km$^2$ for \mbox{$D_0 = 8$ km}, 137~km$^2$ for \mbox{$D_0 = 10$ km}, and 1116~km$^2$ for \mbox{$D_0 = 12$ km}. These cross sections are all smaller than $X_{\rm lim}$, which indicates that Liller (2001) missed these collisions because the triggered flare-ups had amplitudes that were too shallow to detect. Accordingly, Equation (35) needs to be corrected for incomplete statistics before the mean collisional rate (or the mean free time between collisions) based on Liller's flare-up observations can be employed to derive the dimensions of the cluster. A correction is to be applied in such a way that the smaller of any pair of colliding fragments should be allowed to have a diameter from the entire range of \mbox{$D_{\textstyle \star} \!\leq\! D \!<\! D_0$}, whereas the larger one only from a range of \mbox{$D_{\rm lim} \!\leq\! D \!\leq D_0$}; the task is to find $D_{\rm lim}$ such that a collision involving this fragment generates {\vspace{-0.06cm}}debris whose cross-sectional area equals~$X_{\rm lim}$ (Liller's detection limit); all collisions with a rate of $\dot{N}_{\rm coll}^{\textstyle \star}$, for which the diameter of the larger fragment is from a range \mbox{$D_{\textstyle \star} \!\leq\! D \!<\! D_{\rm lim}$}, are to be excluded from the count. The procedure is very similar to the one used in Equations (40) to (42), starting now with a condition \begin{eqnarray} \!\!\!\!\! {\cal M}_{\rm deb}(D_{\rm lim},D_{\textstyle \star}) & = & {\cal M}(D_{\rm lim}) + {\cal M}(D_{\textstyle \star}) \nonumber \\[-0.1cm] & & \\[-0.28cm] & = & \frac{\pi}{6} \rho D_{\rm lim}^3 \!\left[ 1 \!+\!\! \left(\! \frac{D_{\textstyle \star}}{D_{\rm lim}} \!\right)^{\!\!3} \right] \!\simeq\!
\frac{\pi}{6} \rho D_{\rm lim}^3 \nonumber \end{eqnarray} and resulting in \begin{equation} D_{\rm lim} = 5^{-\frac{1}{15}} \!\!\left(\! \frac{4X_{\rm lim}}{\pi} \!\right)^{\!\! \frac{2}{5}} \!\! D_{\rm min}^{\frac{1}{5}} = 1.2\;{\rm km}. \end{equation} The collisional rate for Liller's observations becomes \begin{equation} (\dot{N}_{\rm coll})_{\rm obs} = \dot{N}_{\rm coll} \!-\! \dot{N}_{\rm coll}^{\textstyle \star}, \end{equation} where $\dot{N}_{\rm coll}$ is given by Equation~(32) and the respective mean free time between collisions by Equation~(35).{\vspace{-0.07cm}} The collisional rate $\dot{N}_{\rm coll}^{\textstyle \star}$ is similarly expressed as \begin{equation} \dot{N}_{\rm coll}^{\textstyle \star} = \nu^{\textstyle \star} \langle \sigma^{\textstyle \star} \rangle \langle V_{\rm rel}^2 \rangle^{\!\frac{1}{2}}, \end{equation} where \begin{eqnarray} \nu^{\textstyle \star} & = & \frac{6}{\pi D_{\rm frg}^3} \!\left[\!\left(\! \frac{D_0}{D_{\textstyle \star}} \!\right)^{\!\!\frac{5}{2}} \!\!-\!\left(\! \frac{D_0}{D_{\rm lim}} \!\right)^{\!\!\frac{5}{2}} \right] \nonumber \\[-0.05cm] & & \\[-0.35cm] & = & \frac{6 \Psi_5}{\pi D_{\rm frg}^3} \!\left(\! \frac{D_0}{D_{\textstyle \star}} \!\right)^{\!\!\frac{5}{2}} \!\!\!\cdot \!\!\!\:\left(\! 1 \!-\! \sqrt{\frac{D_{\textstyle \star}}{D_{\rm lim}}} \right) \nonumber \end{eqnarray} and, in analogy to Equation (31), \begin{equation} \langle \sigma^{\textstyle \star} \rangle = \frac{5\pi}{2 \Psi_5} D_{\textstyle \star}^2 \!\left(\! 1 \!+\! \frac{5 \Psi_3^2}{9 \Psi_5} \right) \!, \end{equation} with \begin{equation} \Psi_m = \!\sum_{k=0}^{m-1} \!\left(\! \frac{D_{\textstyle \star}}{D_{\rm lim}} \!\right)^{\!\!\frac{1}{2}k} \!\! = \frac{1 \!-\! \sqrt{(D_{\textstyle \star}/D_{\rm lim})^m}}{1 \!-\! \sqrt{D_{\textstyle \star}/D_{\rm lim}}} \, . \end{equation} The mean impact velocity is independent of fragment dimensions.
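Both closed forms can be verified numerically: eliminating $D_{\rm max}$ between Equations (40) and (41) and substituting into Equation (38) reproduces Equation (42), while Equation (44) indeed yields \mbox{$D_{\rm lim} \simeq 1.2$ km} for \mbox{$X_{\rm lim} = 1 \times 10^5$ km$^2$} and \mbox{$D_{\rm min} = 0.3\;\mu$m}. In the sketch below, the value of $D_{\textstyle \star}$ is an illustrative placeholder:

```python
# Numerical check of Equations (42) and (44). Eliminating D_max from
# Equations (40)-(41) and inserting it into Equation (38) reproduces the
# closed form (42); Equation (44) with X_lim = 1e5 km^2 and
# D_min = 0.3 micron gives D_lim near 1.2 km, as quoted in the text.
import math

D_min = 0.3e-6                   # m, smallest debris grains
X_lim = 1.0e11                   # m^2 (= 1e5 km^2), the detection limit

# Equation (42), checked against direct elimination of D_max
D_star = 50.0                                        # m, placeholder value
D_max = (2.0 / 5.0)**(1.0 / 3.0) * D_star            # from Eqs. (40)-(41)
X_direct = (5.0 * math.pi / 4.0) * D_max**2 * (math.sqrt(D_max / D_min) - 1.0)
X_closed = (5.0 / 128.0)**(1.0 / 6.0) * math.pi * D_star**2.5 / math.sqrt(D_min)

# Equation (44)
D_lim = 5.0**(-1.0 / 15.0) * (4.0 * X_lim / math.pi)**0.4 * D_min**0.2
print(X_direct, X_closed, D_lim)   # D_lim comes out near 1.2 km
```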
Inserting from Equations (46), (34), (31),~(33), (26), (47), (48), (19), and (15) into Equation~(45), one finds that the collisional mean free time that is consistent with the observational limitations, \mbox{$(\tau_{\rm coll})_{\rm obs} = (\dot{N}_{\rm coll})_{\rm obs}^{-1}$}, equals{\vspace{-0.1cm}} \begin{equation} (\tau_{\rm coll})_{\rm obs} = \frac{\sqrt{5}}{30\eta \Phi}\:\!(G\rho X_{\rm frg})^{-\frac{1}{2}} D_{\textstyle \star}^{\frac{1}{4}} D_0^{-\frac{11}{4}} \! D_{\rm frg}^{\frac{7}{2}}, \end{equation} where \mbox{$\eta \approx 1$}, \mbox{$G = 6.647 \!\times \!\!\!\: 10^7\!$\,cm$^3$\,g$^{-1}$\,yr$^{-2}$} is the{\nopagebreak} gravitational constant, and \begin{equation} \Phi = \frac{1}{\Gamma_5} \!\left(\!1 \!+\! \frac{5\Gamma_3^2}{9\Gamma_5} \! \right) \!-\! \left(\! 1 \!+\! \frac{5\Psi_3^2}{9\Psi_5} \!\right) \!\!\cdot\! \!\left(\! 1 \!-\! \sqrt{\frac{D_{\textstyle \star}}{D_{\rm lim}}} \right) \!. \end{equation} Equation (50) replaces (35) as an expression for the mean free time between collisions from the temporal distribution of the flare-ups observed by Liller (2001). Solving this equation for the cluster's collisional diameter $D_{\rm frg}$, I list its values and the collisional parameters in Table 3 on the {\vspace{-0.05cm}}assumptions that \mbox{$(\tau_{\rm coll})_{\rm obs} = 0.31$ yr},~\mbox{$\eta = 1$}, and \mbox{$D_{\rm lim} = 1.2$ km}. In addition to $\langle \sigma \rangle$, $\langle V_{\rm rel}^2\rangle{\mbox{\raisebox{-0.4ex}{$^{\!^{1/2} \!}$}}}$, and the true mean free time between collisions (i.e., both detected as the flare-ups and undetected), $\tau_{\rm coll}$, I also list four key parameters of the cluster of fragments from Table~1, as well as its average optical depth, $\Theta$, defined as \begin{equation} \Theta = -\ln \!\left(\!1 \!-\! \frac{4X_{\rm frg}}{\pi D_{\rm frg}^2} \! 
\right) \!, \end{equation} and an average distance between the centers of neighboring fragments, $s_{\rm frg}$, expressed by \begin{equation} s_{\rm frg} = \!\left( \! \frac{\pi \sqrt{2}}{6} \right)^{\!\!\frac{1}{3}}\!\!\! \cdot\!\left(\! \frac{D_{\textstyle \star}}{D_0} \!\right)^{\!\!\frac{5}{6}} \!\! D_{\rm frg} = 0.9047 \!\left(\!\frac{D_{\textstyle \star}}{D_0} \! \right)^{\!\! \frac{5}{6}} \!\! D_{\rm frg}. \end{equation} \begin{table}[b] \vspace{-3.6cm} \hspace{4.22cm} \centerline{ \scalebox{1}{ \includegraphics{t3_HB2.ps}}} \vspace{-13.83cm} \end{table} Table 3 shows that for any of the three potential principal fragment's diameters considered, the cluster's outer regions are exposed to the Sun's significant perturbations at (and near) perihelion, as the collisional diameter then exceeds the stability diameter. It is expected that many fragments, especially at larger distances from the center of mass, entered markedly different trajectories after perihelion. This development should clearly increase the fragments' orbital diversity, thus the parameter $\eta$, and thereby give rise to a higher collisional rate than before perihelion, which is consistent with Liller's prime conclusion --- the absence of major preperihelion flare-ups. Table 3 also suggests that given the validity of the size distribution of fragments and the detection limit, Liller observed, on the average, every fifth impact involving fragments larger than $D_{\textstyle \star}$ in diameter if the principal fragment was 12~km across, but only every seventeenth impact if it was 8~km across. There are two additional more subtle effects mentioned by Liller (1997, 2001) that could likewise be explained by the proposed hypothesis of a cluster-like nucleus. At heliocentric distances $r$ larger than $\sim$2.5~AU Liller fits the quiescent phase of the light curve before and after perihelion by the same power law, $r^{-n}$, where \mbox{$n = 2.55$}. 
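Two incidental numerical values quoted above are easily confirmed: the gravitational constant expressed in cm$^3$\,g$^{-1}$\,yr$^{-2}$, as used in Equation (50), and the coefficient $(\pi \sqrt{2}/6)^{1/3} = 0.9047$ in Equation (53):

```python
# Two small numerical checks: the gravitational constant in the units
# adopted in Equation (50), and the numerical coefficient of the mean
# inter-fragment distance in Equation (53).
import math

G_cgs = 6.674e-8                    # cm^3 g^-1 s^-2
yr = 3.15576e7                      # Julian year in seconds
G_yr = G_cgs * yr**2                # cm^3 g^-1 yr^-2
print(f"G = {G_yr:.4g} cm^3 g^-1 yr^-2")       # ~6.647e7, as in the text

coeff = (math.pi * math.sqrt(2.0) / 6.0)**(1.0 / 3.0)
print(f"(pi*sqrt(2)/6)^(1/3) = {coeff:.4f}")   # 0.9047
```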
However, cursory inspection of the light curve reveals that the post-perihelion data marginally deviate from this slope, suggesting that the inner coma was fading at a slightly, but perceptibly, lower rate than it was brightening before perihelion. Yet, the intrinsic brightness was nearly 0.2~mag higher before perihelion. This behavior is qualitatively consistent with two properties implied by the proposed hypothesis:\ (i)~the comet continued to lose massive fragments from its nucleus' cluster in the long run, hence it was brighter preperihelion; but (ii)~the collisional rate was higher after perihelion (owing to the Sun's major perturbations of fragments' trajectories around perihelion), hence some of the new fragments lingered in the inner coma over longer periods of time after perihelion and the inner coma's brightness was fading somewhat less steeply. Indeed, the post-perihelion normalized brightness was lower near 2.5~AU but caught up with the preperihelion brightness by $\sim$7~AU. The other subtle peculiarity is Liller's (1997) reference to an apparent quasi-periodic variability in the preperihelion light curve, with an average period of \mbox{20$\,\pm\,$4}~days and a very small amplitude. It is noted from Table~3 that the {\it true\/} mean free time between {\it all\/} collisions was as short as $\sim$7~days when one adopts a post-perihelion flare-up triggering collisional rate of 3.2 per year. However, I note that solutions consistent with a $\sim$20~day periodicity require that the diameter $D_0$ of the principal fragment not exceed about 11~km under any circumstances. There are no solutions fitting this periodicity for the larger dimensions. \begin{table}[b] \vspace{-3.6cm} \hspace{4.22cm} \centerline{ \scalebox{1}{ \includegraphics{t4_HB2.ps}}} \vspace{-17.43cm} \end{table} If the collisional rate was lower before perihelion, the 20-day period might fit with \mbox{$D_{\rm lim} \!\sim\! D_{\textstyle \star}$}. 
The dependence of a mean free time between collisions of fragments on a cross-sectional area of collisional debris and a limiting diameter $D_{\rm lim}$ is exhibited in Table~4 for the principal fragment's adopted diameters of 8~km and 10~km. The cross-sectional area of the debris is on the order of hundreds of square kilometers only, too low to detect with Liller's instrumentation, and an amplitude of the~\mbox{20-day} variations in the light curve, triggered by the periodic presence of collisional debris in the inner coma, is estimated to be merely on the order of thousandths of a magnitude. The statistically extracted amplitude of \mbox{0.1--0.2 mag}, apparent from Figure~5 of Liller (1997), might be a product of the ensuing modest variations in the carbon-monoxide production and in the associated ejection of microscopic dust, as explained above. \section{Resulting Constraints on Dimensions of\\Principal Fragment} Up to Table 3, I executed all computations on three~different assumptions regarding the diameter of the principal fragment --- 8~km, 10~km, and 12~km. The tests~carried out in Sections~\mbox{2--9} and further evidence now allow one to narrow down the appropriate size range. In the following I separately discuss the various criteria in terms of the preferences for particular segments of the 4-km wide range of the fragment's diameter. \subsection{Nongravitational Acceleration} The conservation-of-momentum criterion is of fundamental significance, because it is on the strength of this evidence that the hypothesis of the comet's nucleus in the form of a compact cluster of massive fragments has been contemplated as the {\it only\/} credible scenario.
The estimated disparity between the derived orbit-integrated nongravitational acceleration and the one expected on the assumption that the nucleus was a single body of~the same cross-sectional area amounts to more than two orders of magnitude in terms of the momentum, or a little less than a factor of 10 in terms of the nucleus' size. The estimated uncertainty by a factor of \mbox{3--4} in the efficiency of the momentum applied by outgassing is equivalent to an uncertainty by a factor of only $\sim$1.5 in the linear dimension. The principal fragment's diameter of 12~km refers to a momentum transfer so efficient that it is barely at a limit of feasibility; the lower end of the 4-km range for the principal fragment's diameter (Table 3) should accordingly be assigned a much higher probability. \subsection{High-Resolution Imaging} The high resolution imaging, especially with the HST, provides a strong argument in favor of a very tight cluster. Because the nucleus' image peaks up rather sharply, its overall dimension can extend at most over just two pixels of the HST's WFPC-2 sensor, each of which corresponds to \mbox{90--100}~km across on four of the six analyzed preperihelion images between 1995 October 23 and 1996 October 17 (Sekanina 1999a), thus limiting the cluster's diameter to about 200~km at the extreme. It is therefore {\it only\/} the lower end of the principal fragment's range of dimensions that passes this test (Table~3). \subsection{Stellar Occultation} Fern\'andez et al.\ (1999) published the results of their campaign to observe an occultation by the comet's nucleus of the star PPM\,200723 on 1996 October 5. The event's light curve, obtained by apparently the only team that reported a positive detection, was {\it V shaped\/}, without any step-like variations or a trough implied by an occulting monolithic body between the ingress and egress of the star. 
Indeed, the best models offered by Fern\'andez et al.\ included those with an occultation chord of zero length, even though the degree of dimming indicated the star's complete disappearance behind the comet at the time of maximum drop in the count rate. Accordingly, the authors could only conclude that the nucleus was less than 60~km in diameter, an estimate that is too low and inconsistent with Szab\'o et al.'s (2012) result. For the nucleus in the form of a compact cluster of massive fragments one expects a light curve that should be fairly smooth and essentially trough-free. The smaller the principal fragment's size, the more consistent the cluster scenario is with the apparent absence of a trough. In spite of the uncertainties involved, the event appears to have lasted \mbox{40$\,\pm\,$5 s}, suggesting the cluster's collisional diameter of \mbox{204$\,\pm\,$26 km}, implying the principal fragment \mbox{8.4$\,\pm\,$0.8 km} across. \subsection{Sun's Perturbations of Cluster Fragments' Orbits} Because the size of the zone of stability determined in Section~4 is approximate, all that can be stated about the perturbation effects of the Sun near the 1997 perihelion is that they should have made the orbits of fragments near the outer boundary of the cluster essentially chaotic. As seen from Table~3,~the extent of this instability increases with the size of the principal fragment more steeply than linearly. As a corollary,~the collisional rate among the fragments in the cluster should have gotten augmented after perihelion regardless of the principal fragment's size, as implied by Liller's (2001) results. \subsection{Properties of Original Nucleus} Listed in Table 1, the fundamental parameters of the original nucleus that is presumed to have begun to fragment at the time of close encounter with Jupiter in the year $-$2251 (or 2252 BCE) suggest, to a degree, that the principal fragment's diameter of 8~km is more likely than 12~km. 
Statistically, an original nucleus 22~km in diameter has a higher probability of occurrence than a nucleus 33~km in diameter. Similarly, the fragments would have stayed more tightly together (and the cluster would have had better gravitational stability) at a lower rotation velocity (i.e., a smaller nucleus). The lower end of the size range is also more plausible because the central gravitational pressure is then more in line with the compressive strength, estimated for 67P at \mbox{1--3}~kPa by Basilevsky et al.\ (2016) and its upper limit at only 1.5~kPa by Groussin et al.\ (2015). The latter team pointed out that diagenesis may then be initiated in the interior of the comet's nucleus. The probability that this process commenced in C/1995~O1 increases with the square of the size of the original nucleus. On the other hand, the tensile strength needed for C/1995~O1 to begin to fragment at the Jovian encounter just matched the lower end of the range reported for 67P, thus making a larger original nucleus slightly more likely but by no means indispensable. \subsection{Distance Between Fragments in the Cluster} It is noted from Table 3 that $s_{\rm frg}$, an average distance between the centers of fragments, is in each of the three cases shorter than the diameter of the principal fragment. In fact, \mbox{$s_{\rm frg} \!\leq\! D_{\mbox{\tiny \boldmath $\otimes$}} \leq D_0$} and the number of fragments whose diameters are equal to or greater than $D_{\mbox{\tiny \boldmath $\otimes$}}$ is \begin{equation} {\cal N}_{\rm frg}(D_{\mbox{\tiny \boldmath $\otimes$}}) < \!\left( \! \frac{D_0}{s_{\rm frg}} \! \right)^{\!\!\frac{5}{2}}\!\!. \end{equation} It turns out that nearly 80 of the most massive fragments comply with this condition when the principal fragment is 8~km across, 12 fragments when it is 10~km across, but only the principal fragment and the second most massive fragment when 12~km across.
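The inequality above can be inverted to show what ratio $D_0/s_{\rm frg}$ the quoted counts imply; the actual values of $s_{\rm frg}$ sit in Table~3, which is not reproduced here, so the figures below are back-computed bounds rather than tabulated quantities:

```python
# Invert the inequality N_frg < (D0/s_frg)**(5/2): a count N_frg implies
# D0/s_frg > N_frg**(2/5).  Counts are those quoted in the text; the
# resulting s_frg values are implied upper bounds, not Table 3 entries.
counts = {8.0: 80, 10.0: 12, 12.0: 2}     # D0 [km] -> N_frg

implied = {}
for D0, N in counts.items():
    ratio = N ** (2.0 / 5.0)              # lower bound on D0/s_frg
    s_max = D0 / ratio                    # corresponding upper bound on s_frg [km]
    implied[D0] = (round(ratio, 2), round(s_max, 2))
    # Consistent with the text: s_frg comes out shorter than the
    # principal fragment's diameter in every case.
    assert s_max < D0
print(implied)
```

The implied bounds on $s_{\rm frg}$ (about 1.4~km, 3.7~km, and 9.1~km for the three cases) indeed all fall below the respective principal fragment diameters, as the text asserts.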
This exercise provides yet another argument for a high chance, if not inevitability, of frequent collisions among fragments in the cluster. In practice, this means that near the cluster's center of mass, where the principal fragment and perhaps some other major fragments should reside, the number density of fragments is much lower than the average, probably not more than $\sim$10$^{-3}$ per km$^3$, given that the volumes of the 8~km, 10~km, and 12~km fragments are, respectively, 268~km$^3$, 524~km$^3$, and 905~km$^3$. Collisions of the principal fragment with other fairly large fragments are likely to account for a fraction of the post-perihelion collisional rate derived from Liller's (2001) observations. \subsection{Impact Velocity and Collisional Rate} Table 3 shows that an average impact velocity depends only moderately on the principal fragment's dimensions and is close to 0.5~m~s$^{-1}$, high enough to assure continuing fracture (rather than bouncing) of the initial tidally-generated fragments of the original nucleus. This expectation is based on the assumption that, on the average, the impact velocities roughly equal in magnitude the velocities of fragments about the center of mass of the cluster in nearly circular orbits. This is a plausible assumption, if the fragments' motions are essentially random. One deals here with a self-feeding mechanism:\ the more often the collisions occur, the more random the orbits become; the stochastic nature is also aided by the Sun's perturbations, especially near perihelion. Two further post-perihelion flare-ups were reported in gaps of Liller's (2001) observing run, one in late August 1997 (McCarthy et al.\ 2007), four months before Liller's first flare-up; the other in mid-October 1999 (Pearce 1999; Griffin \& Bos 1999), six months after Liller's last flare-up.
If included, they would increase $(\tau_{\rm coll})_{\rm obs}$ in Equation~(50) from 0.31~yr to 0.36~yr, which would change the cluster's collisional diameter by $\sim$4\%, rather an insignificant effect. \subsection{A Verdict} In summary, the lower end of the range of~the principal fragment's size --- a diameter of \mbox{8--9}~km and a mass of \mbox{1.1--1.5$\,\times$10$^{17}$\,g} --- comes out from this discussion as the preferred one by far, because of the arguments presented primarily in Sections 10.1--10.2, but~also~in~10.3,~and,~in part, in 10.5. The cluster of fragments, some \mbox{210$\,\pm\,$20 km} in~diameter,\,and~the~\mbox{pre-encounter} nucleus are described by the data that can be interpolated from Tables~1~and~3.\hspace{0.1cm} A complete summary of the cluster's adopted parameters for the principal fragment's representative diameter of 8.5~km is tabulated in Section~12. I may point out that from their nongravitational model for comet motions and independently determined nongravitational parameters (with the radial and transverse components only), Sosa \& Fern\'andez (2011) derived for the nucleus of{\vspace{-0.05cm}} C/1995~O1 a diameter of 9.6~km and a mass of \mbox{1.9$\,\times$10$^{17}$\,g}, remarkably close to the present results for the principal fragment. For a cluster-like nucleus, many observed properties of C/1995~O1 (such as the rotation vector, albedo, activity variations, complex dust-coma morphology and its evolution, striation pattern in the tail, gravitationally-bound satellite, etc.) will require a profound re-interpretation. While a complete overhaul of the large body of models that address these issues is outside the scope of this paper, I address the problem of a satellite, a topic that turns out to be particularly closely related to the proposed paradigm of a compact cluster.
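Two of the numbers quoted above can be spot-checked in a few lines. The fragment volumes follow directly from the stated diameters; the $\sim$4\% sensitivity of the collisional diameter to $(\tau_{\rm coll})_{\rm obs}$ is reproduced under an assumed weak power-law scaling of the diameter with the collision timescale (Equation~(50) itself is not reproduced in this section, so the exponent below is a plausible reconstruction, not the paper's formula):

```python
from math import pi

# (i) Sphere volumes for the quoted fragment diameters: V = (pi/6) D**3.
volumes = {D: (pi / 6.0) * D ** 3 for D in (8, 10, 12)}   # km^3
for D, V_quoted in [(8, 268), (10, 524), (12, 905)]:
    assert abs(volumes[D] - V_quoted) < 0.5               # matches the text

# (ii) Sensitivity of the collisional diameter to (tau_coll)_obs.
# Assumed scaling (Equation (50) is not reproduced here): collision
# rate ~ n*v ~ D**-3 * D**-0.5, hence tau_coll ~ D**3.5 and
# D ~ tau_coll**(2/7).
tau_old, tau_new = 0.31, 0.36                             # yr, from the text
change = (tau_new / tau_old) ** (2.0 / 7.0) - 1.0         # fractional change in D
print({D: round(V) for D, V in volumes.items()}, round(100 * change, 1))
```

Under this assumed scaling, the 16\% increase in $(\tau_{\rm coll})_{\rm obs}$ translates into a change of about 4\% in the collisional diameter, matching the figure quoted in the text.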
It is the orbital instability of fragments in the outer reaches of the cluster --- which will from now on be referred to as a {\it primary (or main) nucleus' cluster\/} or just a {\it primary\/} --- that lends legitimacy to such a link. This instability virtually warrants that, from time to time, a fragment or a subcluster of fragments escapes from the primary, thus contributing to a population of boulder-sized debris scattered over an expanding volume of hundreds or thousands of kilometers across around the primary. A vast majority of individual fragments are too feeble to detect even with the HST, but subclusters of fragments should over a limited period of time show up as faint companions. This likely scenario invites a suggestion to conduct a computer search for such companions in the HST images. \section{On the Occurrence of Companion Nuclei} My presentation of evidence on a major satellite orbiting the primary nucleus of C/1995~O1 (Sekanina~1999b) has been a subject of controversy ever since, in part~because the reported detection --- in five preperihelion images (in 1996 May--October) taken with the HST~Wide-Field Planetary Camera 2 (WFPC-2; imaging scale~of 0$^{\prime\prime\!\!}$.0455 per pixel) through~an F675W filter ---~was~made digitally; no companion was readily apparent in the images when inspected visually. The applied computer~pro\-cedure, based on an iterative least-squares differential-correction technique, was described in detail elsewhere (Sekanina 1995; an upgraded version in Sekanina 1999a). The identification of this object as a satellite gravitationally bound to the main nucleus was based on the assumption that the primary was at least \mbox{3.4$\,\times$10$^{19}$\,g} in mass (i.e., 55~km across at a bulk density of 0.4~g~cm$^{-3}$) and that therefore the radius of a gravitationally stable zone around the nucleus was at least 370~km at perihelion [Equation~(9) gives $\sim$350~km with \mbox{$h_0 = 0.1$}].
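Equation (9) itself lies outside this section; the bracketed $\sim$350~km figure is reproduced if the stability radius is assumed to take the form $R_{\rm stab} = h_0\, r\, (m/M_\odot)^{1/3}$, which is used below purely as an inferred reconstruction:

```python
# Reconstruction of the stability-zone estimate.  The form of Equation (9)
# is assumed here to be R_stab = h0 * r * (m/M_sun)**(1/3); with the
# quoted inputs it reproduces the ~350 km figure.
AU_KM = 1.496e8                           # km per AU
M_SUN = 1.989e33                          # g

h0 = 0.1
m = 3.4e19                                # g, assumed minimum mass of the primary
q = 0.914 * AU_KM                         # km, perihelion distance of C/1995 O1

R_stab = h0 * q * (m / M_SUN) ** (1.0 / 3.0)
print(round(R_stab))                      # ~350 km, as bracketed in the text
```

The same assumed form, evaluated with the much smaller cluster mass adopted later, also underlies the sub-100~km perihelion stability zone invoked in the next paragraph.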
The satellite, with its average projected distance of $\sim$180~km from the primary, was safely inside the stability zone unless it was in each of the five images located near the line of sight, a statistically unlikely scenario. With a cluster of fragments replacing the solid nucleus, the situation is rather different. The stability zone being at perihelion less than 100~km in radius (Table~1), the companion would have been located outside the zone at heliocentric distances of up to at least $\sim$2~AU, i.e., over a period of$\:\!\!${$\;$\raisebox{0.3ex}{$>$}\hspace{-0.28cm}\raisebox{-0.75ex}{$\sim$}$\;$}$\!\!$200~days around perihelion. (On the other hand, the companion should have remained inside the Hill sphere at all times.) Dynamically, it is possible but unlikely that these conditions would have sufficed for the satellite to escape along an unbound orbit over a period of several months. It does not appear that this happened, if the five satellite images refer to the same object. However, if the companion's existence dated back to the close encounter with Jupiter, the 1997 perihelion was already a second instance of severe solar perturbations.\footnote{This statement does not apply to the published evidence on~the satellite based on the preperihelion observations (Sekanina 1999b); however, as explained in Section~11.4, at least one major companion was likewise detected in each of three post-perihelion HST images.} \subsection{Doubts on the Existence of the Satellite} In the past, dynamical issues were not at the focus~of a controversy on whether the satellite (or companion){\nopagebreak} did in fact exist. The doubts were expressed because of the conditions under which the satellite's signature was extracted from the HST images, in the presence of large amounts of dust and its uneven spatial distribution in the inner coma of C/1995~O1.
Weaver \& Lamy (1999) questioned the detection on the grounds that the excess signal attributed to the satellite is ``due to inadequate modeling of the complex coma morphology and/or temporal variability.'' They also warned that the HST's CCD arrays ``are imperfect detectors whose noise does not always obey the laws of counting statistics,'' a problem that of course is model independent. In a follow-up paper, Weaver et al.\ (1999) reported that they found no companions in the post-perihelion HST images taken in 1997--1998 with a Space Telescope Imaging Spectrograph (STIS), but admitted that their detection threshold was rather limited; I return to this topic in Section 11.4. In his review of the topics related to the size and activity of C/1995~O1, Fern\'andez (2002) responded to the report of the satellite more suavely, pointing out that the detection ``remains controversial because of the difficulty in understanding the inner coma's brightness distribution.''{\vspace{-0.15cm}} \subsection{Evidence Supporting a Companion's Existence} A series of images of the comet was obtained on 1996 September 30 with a newly commissioned adaptive optics system PUEO (also referred to as Bonnette, AOB) on the {\vspace{-0.03cm}}Canada-France-Hawaii 3.6-m f/8 telescope on Mauna Kea (Rigaut et al.\ 1998).\footnote{For the comet's images and their description, see {\tt http://www. cfht.hawaii.edu/$\!\!\:$Instruments/$\!\!\:$Imaging/$\!$AOB/best\_pictures.html.}} When deconvolved, the images showed a ``knot of material'' 0$^{\prime\prime\!\!}$.15 north of the nucleus, at a position close to that of the reported satellite in an HST image taken a week earlier (Sekanina 1999b).
Marchis et al.\ (1999) used another adaptive optics system, ADONIS, on the ESO 3.6-m telescope at La Silla, Chile, to take the comet's images on 1996 November 6 and 1997 January 15; in their deconvolved frames,~the central peak is clearly resolved into two maxima of~uneven brightness, the fainter separated from the brighter by 0$^{\prime\prime\!\!}$.23 at a position angle of 102$^\circ\pm\,$4$^\circ$ in November and by 0$^{\prime\prime\!\!}$.36 at 78$^\circ\pm\,$5$^\circ$ in January. Marchis et al.\ considered three interpretations for the secondary peak, including a companion nucleus, and concluded that {\it based on their observations alone they remained undecided as to whether the feature on either night was a near-nucleus footprint of a jet or a secondary nucleus, but when combining their results with the findings by others, the scenario involving} ``{\it a double nucleus \ldots seems to be the most likely}.''~These authors also noted that in both cases the compact feature projected close enough to the nucleus to qualify as~a gravitationally bound object and that its position angles did not coincide with the directions of the jets observed at 0$^\circ$--30$^\circ$, 75$^\circ$--95$^\circ$, 115$^\circ$--135$^\circ$, and 240$^\circ$--260$^\circ$ in November, and at 0$^\circ$--40$^\circ$ and 90$^\circ$--130$^\circ$ in January.{\vspace{-0.15cm}} \subsection{Arguments and Counter-Arguments Based on Modeling Dust-Coma Morphology} In a study of the dust-coma morphology of C/1995~O1 (Sekanina 1998), I pointed out that --- given the comet's well determined spin vector --- a system of about eight {\it evenly separated\/} halos in the southeastern quadrant of the coma, prominently apparent in the comet's images from 1997 late February and early March, could not (unlike the halos in the southwestern quadrant in the same images) be modeled as dust ejecta from any source on the nucleus (not even on the antisunward side) and that one has to admit that the observed morphology 
is a product of dust ejecta from {\it two independent objects\/}\footnote{More accurately, from {\it at least\/} two independent objects.} of different axial orientation, thus providing further support for the existence of a companion. This same conclusion was independently reached by Vasundhara \& Chakraborty (1999) in their morphological study of C/1995~O1, and the argument was also raised by Marchis et al.\ (1999). On the other hand, Samarasinha (2000) argued that the discussed system of dust halos in the southeastern quadrant could in fact be successfully modeled as a product of an extended emission region (about 40$^\circ$ wide) on the surface of the main nucleus. Unfortunately, in an effort to demonstrate his idea, he applied an approach that ignored effects of solar radiation pressure, an impermissible omission. As it turns out, an integrated contribution from the radiation pressure became comparable in magnitude to the contribution from the ejection velocity --- the variable that Samarasinha did account for in his model --- not later than in the course of the third rotation (of the eight involved), but possibly earlier, depending on the projected ejection velocity and the radiation pressure acceleration of the submicron-sized grains that made up the features' outer boundaries. An even distribution of the consecutive halos in Samarasinha's (2000) model is an artifact of his neglect of radiation pressure; its incorporation into the model would compress the halos into a bright, extended blob in the southeastern quadrant of the nucleus. No such feature is apparent in the comet's pertinent images, suggesting that the proposed active region of enormous extent did not exist. In general, the morphology of a dust-coma feature that is produced by an extended source is modeled by using a collection of densely distributed point sources (Sekanina 1987); this technique would certainly have been applied to C/1995\,O1 had it been of any help.
\subsection{Companions in HST's STIS Images} In Section 11.1 I remarked on three post-perihelion images of C/1995~O1 taken in 1997--1998 with a then~newly installed STIS instrument on board the HST as well as on Weaver et al.'s (1999) report of their non-detection~of any companions. In this paper I present for the first~time the results of my subsequent search for companions in these images, using the technique that was previously applied to the HST's preperihelion WFPC-2 images (Sekanina 1999a, 1999b). The advantages of STIS over \mbox{WFPC-2} are a higher quantum efficiency and a lower readout~noise of its CCD array and much broader imaging passbands, thus reaching objects $\sim$1.5 mag fainter. On the other hand, STIS is less well photometrically defined than WFPC-2, with its point spread function expected to degrade approximately 30\% near the boundary of the field of view (Baum 1996).{\vspace{-0.2cm}} \begin{table*}[t] \vspace{-4.12cm} \hspace{-0.53cm} \centerline{ \scalebox{1}{ \includegraphics{t5_HB2.ps}}} \vspace{-16.52cm} \end{table*} \subsubsection{Method of CCD signature extraction} The observed surface brightness distribution was available as an array of pixel signals measured in CCD analog-to-digital intensity units (ADU px$^{-2}$), with {\vspace{-0.045cm}}each pixel 0$^{\prime\prime\!\!}$.0508 on a side. A background noise of 3~ADU px$^{-2}$ was subtracted. The net pixel signals were assumed to consist of a convolved sum of three contributions:\ (i)~from one or more extended sources (to model the coma's complex morphology); (ii)~from the primary nucleus (dominant point source); and (iii)~from additional point sources, some of which could represent genuine companions (nuclear fragments or their clusters), while others are fictitious spots of light of instrumental or unknown origin. 
The point spread function (PSF), derived from its surface brightness map for STIS images, was approximated by a quasi-Gaussian law with a symmetrical surface brightness distribution $b_{\rm psf}(X,Y)$, which at a point \{$X,Y$\}, whose distance from the PSF's peak at \{$X_{\textstyle \star}, Y_{\textstyle \star}$\} equaled \mbox{$\Delta X = X \!\!-\!\! X_{\textstyle \star}$} and \mbox{$\Delta Y = Y \!\!-\!\! Y_{\textstyle \star}$}, was expressed as \begin{equation} b_{\rm psf}(X,Y) = b_{\textstyle \star} \exp \! \left[ - \! \left( \! \frac{\Delta X^2 \!+ \Delta Y^2}{2 \sigma_{\rm psf}^2} \! \right)^{\!\!\nu_{\rm psf}} \right], \end{equation} where $\sigma_{\rm psf}$ is the PSF's dispersion parameter, $\nu_{\rm psf}$ is a dimensionless constant, and \mbox{$b_{\textstyle \star} = b_{\rm psf}(X_{\textstyle \star}, Y_{\textstyle \star})$} is the peak surface brightness. The integrated brightness $I_{\textstyle \star}$ of the point source is then \begin{equation} I_{\textstyle \star} = 2 \pi b_{\textstyle \star} \sigma_{\rm psf}^2 \nu_{\rm psf}^{-1} \Gamma\:\!\!\!\left(\! \nu_{\rm psf}^{-1} \!\right), \end{equation} where $\Gamma(z)$ denotes the Gamma function of argument~$z$. For the long-pass (LP) filter the parametric values are \mbox{$\sigma_{\rm psf} = 0.1461$\,px} and \mbox{$\nu_{\rm psf} = 0.3034$}, so that $I_{\textstyle \star}$ in ADU is \begin{equation} I_{\textstyle \star} = 1.181 b_{\textstyle \star}, \end{equation} where $b_{\textstyle \star}$ is in ADU px$^{-2}$. Each point source is fully~described by three constants:\ $X_{\textstyle \star}$, $Y_{\textstyle \star}$, and $I_{\textstyle \star}$.
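With the quoted LP-filter parameters, the numerical coefficient 1.181 follows directly from the Gamma-function expression for $I_{\textstyle \star}$; a one-line check:

```python
from math import gamma, pi

# Check of the LP-filter normalization: with sigma_psf and nu_psf as
# quoted, I_star = [2*pi*sigma**2/nu * Gamma(1/nu)] * b_star.
sigma_psf = 0.1461                        # px
nu_psf = 0.3034                           # dimensionless

factor = 2.0 * pi * sigma_psf ** 2 / nu_psf * gamma(1.0 / nu_psf)
assert abs(factor - 1.181) < 0.001        # reproduces the quoted coefficient
print(round(factor, 3))
```

The evaluation confirms that the tabulated coefficient is simply the PSF's analytic normalization and carries no hidden calibration factor.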
A surface brightness distribution in extended sources, $b_{\rm ext}(X,Y)$, was approximated (after its convolution with the PSF) by an ellipsoidal power law (referred to as law A in Sekanina 1999a, 1999b), which allowed for a deviation of the peak's location in the ellipsoid's center, described by \{$X_{\rm ext}, Y_{\rm ext}$\}, from the origin of the coordinate system, as well as for anisotropy and an arbitrary orientation, the latter defined by an angle $\theta_{\rm ext}$ in the direction of the most gentle rate of signal decline from the peak: \begin{equation} b_{\rm ext}(X,Y) = \frac{b_0}{1 \!+\! \left[ \!\left( \! {\displaystyle \frac{\Delta X}{\sigma_x}} \!\right)^{\!\!2} \!\!+\! \left( \! {\displaystyle \frac{\Delta Y}{\sigma_y}} \!\right)^{\!\!2} \right]^{\!\frac{1}{2}\nu_{\rm ext}}} \, , \end{equation} where \begin{equation} \left(\!\! \begin{array}{c} \Delta X \\ \Delta Y \end{array} \!\!\right) \!=\! \left(\!\! \begin{array}{cc} \cos \theta_{\rm ext} & \sin \theta_{\rm ext} \\ -\sin \theta_{\rm ext} & \cos \theta_{\rm ext} \end{array} \!\!\right) \!\cdot\! \left(\!\! \begin{array}{c} X\!\!-\!\!X_{\rm ext} \\ Y\!\!-\!\!Y_{\rm ext} \end{array} \!\!\right) \!, \end{equation} $\sigma_x$ and $\sigma_y$ are the maximum and minimum dispersions of the surface brightness distribution along, respectively, the $X$ and $Y$ axes, and $\nu_{\rm ext}$ is the exponent of the power law. Each extended source is fully described by seven~independent constants:\ $X_{\rm ext}$, $Y_{\rm ext}$, $\sigma_x$, $\sigma_y$, $b_0$, $\nu_{\rm ext}$,~and~$\theta_{\rm ext}$. To summarize, in a search for $N_{\rm pt}$ point sources and $N_{\rm ext}$ extended sources, the deconvolving procedure's optimization least-squares differential-correction technique was required to iteratively solve for \mbox{$3 N_{\rm pt} \!+\! 
7 N_{\rm ext}$} parameters; typically, signals in 157 pixels were fitted.{\vspace{-0cm}} \begin{figure} \vspace{-1.65cm} \hspace{0.26cm} \centerline{ \scalebox{0.525}{ \includegraphics{f2_HB2.ps}}} \vspace{-5.1cm} \caption{Spatial distribution of detected companions B--T relative to the primary nucleus A of comet C/1995~O1 in projection onto the plane of the sky on 1997 August 27. The size of the circles is to scale on the assumption of infinite opacity, i.e., for an equivalent diameter listed in Table~6. The dotted circle shows the dimensions of the compact cluster of fragments that represents the primary nucleus A, as adopted in Section~10.8. The diameters of the companion nuclei should be adjusted proportionately, if they too are judged to consist of compact clusters of fragments. Unlike dust jets, most companions project relative to the primary nucleus in directions that are far from the direction to the Sun.{\vspace{0.4cm}}} \end{figure} \subsubsection{Results} The extracted dimensions of the primary nucleus and some results of a search for companions in the STIS images are presented in Table~5 together with the~partially revised results of a previous analysis of the preperihelion WFPC-2 images\footnote{The image from 1996 October~17 was reanalyzed, because only after the original papers (Sekanina 1999a, 1999b) were accepted~for publication did it become known that the initially announced exposure time was incorrect (see Table~I in Sekanina 1999a). In addition, recent inspection of the computer runs revealed that the mean~residuals in Table~II of Sekanina (1999a) were inadvertently multiplied by a factor of~10, an error that has now been corrected in Table~5.} (Sekanina 1999a, 1999b).
The tabulated numbers and experience with the fitting{\nopagebreak} procedure offer these conclusions: (1)~The fitted dimensions{\nopagebreak} of the primary nucleus are remarkably consistent over the orbital arc of 28~months, with an {\it equivalent diameter\/} (measuring the observed cross-sectional area) averaging \mbox{73.1$\,\pm\,$2.4 km}, within 0.4$\sigma$ of the result derived by Szab\'o et al.\ (2012); (2)~introduction of a second extended source in the solutions consistently failed~to improve the fit to the distribution of dust in the coma, implying --- together with a low rms residual of about $\pm$2--4~ADU --- that the employed distribution~function given by Equation~(58) provided an adequate approximation; (3)~a large number of companion nuclei was detected with both instruments, the larger post-perihelion numbers being due in part to the STIS instrument's higher sensitivity; (4)~as shown in an example~in Figure~2, most companion nuclei~were~not,~unlike dust jets, located in directions close to that of the Sun\footnote{Contained in the 90$^\circ$ sector centered on the projected sunward direction are only five of the 17~companions in the 1997 August 27 image; only five of the 12~companions in the 1997 November 11 image; and --- astonishingly --- none of the 10~companions in the 1998 February 19 image.} and were not concentrated densely along particular lines, thus making their interpretation as phenomena that were closely related to dust jets quite unlikely; (5)~as further documented by \mbox{Tables~6--8}, the brighter companions had an extremely high signal-to-noise ratio close to or exceeding 10, and only for a few of the tabulated objects could their existence be readily questioned, in particular, the companions R, S, and T in Table~6, N$^\prime$ in Table~7, and K$^{\prime\prime}$ and L$^{\prime\prime}$ in Table~8.
\begin{table}[t] \vspace{-4.08cm} \hspace{4.25cm} \centerline{ \scalebox{1.0}{ \includegraphics{t6_HB2.ps}}} \vspace{-16.6cm} \end{table} \begin{table}[b] \vspace{-3.85cm} \hspace{4.25cm} \centerline{ \scalebox{1.0}{ \includegraphics{t7_HB2.ps}}} \vspace{-18.8cm} \end{table} The primary nucleus (i.e., the compact cluster of fragments proposed to make it up) is marked A in all three STIS images; \mbox{B--T} are the companion nuclei detected in the image taken on 1997 August 27 (Table~6); \mbox{B$^\prime$--N$^\prime$} the companions detected in the image of 1997 November 11 (Table~7); and \mbox{B$^{\prime\prime}$--L$^{\prime\prime}$} the companions in the image of 1998 February 19. As pointed out, the existence of the objects R, S, T, N$^\prime$, K$^{\prime\prime}$, and L$^{\prime\prime}$ is questionable; the existence of G$^{\prime\prime}$, H$^{\prime\prime}$, and J$^{\prime\prime}$ is somewhat uncertain. It may be significant that the bright companion in the image of 1996 October~17 (Table~5), taken only 20 days before the ESO observation referred to in Section~11.2, is located at a position angle that differs from that in the ESO image by less than 4$\sigma$ and at a comparable angular distance from the primary nucleus. The positional difference may be due in part to the companion's motion in the course of the 20~days, in part to effects introduced by the heavy processing of the ESO image. \begin{table}[t] \vspace{-4.1cm} \hspace{4.25cm} \centerline{ \scalebox{1}{ \includegraphics{t8_HB2.ps}}} \vspace{-18.85cm} \end{table} A list of all detected companion nuclei, in order of increasing projected distance from the primary nucleus, is presented in Table~9.
Since the radius of the stability zone defined by{\vspace{-0.04cm}} Equation~(9) (with \mbox{$h_0 = 0.1$} and the primary nucleus' mass of 6$\,\times$10$^{17}$\,g; cf.\ Section 12) amounted to 250~km for the image of 1997 August 27, to 340~km for the image of 1997 November 11, and to 450~km for the image of 1998 February 19, only the 3--6 innermost companions were likely to have been, on any of the three dates, located inside the stability zone of the primary. On the other hand, all companions were located deep inside the Hill sphere of the primary nucleus --- whose radii at the three times were between 1700~km and 3100~km --- unless the distances were in each image strongly foreshortened. This result suggests that the primary nucleus was still likely to exert much influence over the motions of many if not all of the detected companions at the times of observation. \begin{figure}[b] \vspace{-1.9cm} \hspace{0.57cm} \centerline{ \scalebox{0.52}{ \includegraphics{f3_HB2.ps}}} \vspace{-6.1cm} \caption{Cumulative distribution of equivalent diameters of the detected companions in the STIS images. The primary nucleus~(A) and the companions of questionable existence (signal-to-noise ratio of $\leq$3 in Tables~6--8), shown by open symbols on the left, deviate from the distribution and were not employed in the fitting. The slope of the distribution drops rapidly with time.{\vspace{-0.1cm}}} \end{figure} \begin{table}[t] \vspace{-4.1cm} \hspace{4.25cm} \centerline{ \scalebox{1}{ \includegraphics{t9_HB2.ps}}} \vspace{-7.50cm} \end{table} Figure 3 displays the cumulative distribution of the detected companions' equivalent diameters, which measure, as in the instance of the primary nucleus, their observed cross-sectional areas. A striking property of the distribution is a very rapid rate of slope flattening with time, from $D^{-2.35 \pm 0.21}$ in August 1997 to $D^{-1.97 \pm 0.06}$ in November 1997, to $D^{-1.37 \pm 0.14}$ in February 1998.
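The stability-zone and Hill-sphere radii quoted earlier in this subsection can be checked roughly. The heliocentric distances below are assumed values chosen as plausible for the three dates (they are not given in the text), and the form of Equation (9) is the inferred reconstruction $R_{\rm stab} = h_0\, r\,(m/M_\odot)^{1/3}$:

```python
# Rough check of the stability-zone and Hill-sphere radii quoted above.
# Heliocentric distances are assumed (plausible values for the three
# dates); R_stab = h0*r*(m/M_sun)**(1/3) is an inferred reconstruction
# of Equation (9), and R_hill = r*(m/(3*M_sun))**(1/3) is the standard
# Hill radius.
AU_KM = 1.496e8                           # km per AU
M_SUN = 1.989e33                          # g
m = 6.0e17                                # g, adopted mass of the primary
h0 = 0.1

radii = {}
for date, r_au in [("1997 Aug 27", 2.45), ("1997 Nov 11", 3.40),
                   ("1998 Feb 19", 4.50)]:
    r = r_au * AU_KM
    R_stab = h0 * r * (m / M_SUN) ** (1.0 / 3.0)
    R_hill = r * (m / (3.0 * M_SUN)) ** (1.0 / 3.0)
    radii[date] = (round(R_stab), round(R_hill))
print(radii)   # stability ~250/340/450 km; Hill within the quoted 1700--3100 km
```

With these assumed distances the reconstruction reproduces both the quoted stability radii and the 1700--3100~km span of the Hill-sphere radii to within a few percent.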
As the expected slope of a steady-state distribution law for fragments of a common parent is $D^{-2.5}$, described by Equation~(12), the trend in the slope in Figure~3 points to a peculiar behavior, to be addressed below. Furthermore, the equivalent diameters of most companions are so large that they cannot be single fragments; instead, they appear to consist of subclusters of fragments. The data points in Figure 3 that significantly deviate from the fitted power laws include the companions~whose existence is questionable; this is understandable, because their signals barely exceeded the background signal of the extended source. Not only was their detection marginal, but there may have existed additional companions of similar equivalent diameters that failed to be detected at all, an argument that is in agreement with the positions of the doubtful companions consistently below the fitted laws in the figure. Alternatively, of course, there may not be any companions comparable in size or smaller than the questionable ones and the population of companions may terminate right there. Also deviating substantially from the fitted laws is the primary nucleus, especially in the first two of the STIS images; fairly large offsets are not unusual at the lower end of the cumulative distributions of statistical sets and are not necessarily worrisome. As for the rapidly dropping slope of the cumulative distribution of the companions in Figure~3, I test an assumption that it is caused by temporal variations in the objects' cross-sectional area (measured by their equivalent diameter).
The issue is important because if the companions are compact subclusters of fragments, their total cross sections may either increase with time as a result of progressive fragmentation because of collisions at very low velocities --- or decrease also as a result of the fragmentation that entails escape of much of the involved mass from the gravity field of the subcluster once the debris acquired velocities that exceeded the escape limit. Which of the two processes dominates is determined by the systematic variations in the distribution's slope, linked to the cross-sectional variations with time. I now examine under what set of circumstances could the steady-state cumulative distribution of companions, given by Equation~(13), dramatically change its slope to fit the distributions in Figure~3. Let at time $t_0$, when the process affecting the distribution of companions was set off, an equivalent diameter of the primary nucleus be \mbox{$D_0^{\textstyle \ast} = D_0^{\textstyle \ast}(t_0)$}, while a companion's equivalent diameter be \mbox{$D^{\textstyle \ast} \!= D^{\textstyle \ast}(t_0) = D_1^{\textstyle \ast}(t_0), D_2^{\textstyle \ast}(t_0)$, \ldots \,[$D^{\textstyle \ast}(t_0) \!<\! D_0^{\textstyle \ast}(t_0)$]}. Calling \mbox{$x = x(t_0) = D^{\textstyle \ast}(t_0)/D_0^{\textstyle \ast}(t_0)$}, the steady-state cumulative distribution, ${\cal N}_{\rm nuc}$, of the companion nuclei at time $t_0$ is, following Equation~(13), \begin{equation} {\cal N}_{\rm nuc}(t_0) = x^{-\kappa_0}, \end{equation} where \mbox{$x \!\leq\! 1$}, \mbox{$\kappa_0 \!=\! \frac{5}{2}$}, \mbox{${\cal N}_{\rm nuc} \!=\! 1$} for \mbox{$x \!=\! 1$} (when \mbox{$D^{\textstyle \ast} \!=\! D_0^{\textstyle \ast}$}), and \mbox{${\cal N}_{\rm nuc} \!>\! 1$} for \mbox{$x \!<\! 1$}. At a time \mbox{$t > t_0$}, a different relationship~applies,~as~is demonstrated by Figure~3. In particular, one~now~has \mbox{$D_0^{\textstyle \ast} \!=\! D_0^{\textstyle \ast}(t)$},\,\mbox{$D^{\textstyle \ast} \!=\!
D_1^{\textstyle \ast}(t),\,D_2^{\textstyle \ast}(t)$},\,\ldots{\vspace{-0.02cm}}[with~\mbox{$ D_0^{\textstyle \ast}(t) \!\neq\! D_0^{\textstyle \ast}(t_0)$}, \mbox{$D_i^{\textstyle \ast}(t) \neq D_i^{\textstyle \ast}(t_0), \,i = 1, 2$, \ldots]}, {\vspace{-0.02cm}}\mbox{$x = x(t) = D^{\textstyle \ast}(t)/D_0^{\textstyle \ast}(t)$}, and, generally, \mbox{$x(t) \neq x(t_0)$} with the exception of \mbox{$x = 1$}. The cumulative distribution at time $t$ is described by \begin{equation} {\cal N}_{\rm nuc}(t) = x^{-\kappa}, \end{equation} where \mbox{$\kappa \neq \kappa_0$} and constraints similar to those in Equation~(60) apply to ${\cal N}_{\rm nuc}$. The issue now is to modify Equation~(60) in a way such that it describes the cumulative distribution of equivalent diameters observed at time $t$ and simultaneously reproduces the distribution in Equation~(61). I search for a solution by adding a function $y(x)$, to be determined, to the variable $x$ from Equation~(61), so that Equation~(60) appears at time $t$ as follows: \begin{equation} {\cal N}_{\rm nuc}(t) = (x \!+\! y)^{-\kappa_0}. \end{equation} The function $y$ is subject to a boundary condition \mbox{$y = 0$} at \mbox{$x = 1$} in order that \mbox{${\cal N}_{\rm nuc} = 1$}. The log-log derivative~of the expression (62) becomes \begin{equation} \frac{d \log {\cal N}_{\rm nuc}}{d \log x} = \frac{x}{{\cal N}_{\rm nuc}} \frac{d {\cal N}_{\rm nuc}}{dx} = -\kappa_0 \, \frac{x}{x \!+\! y} \left( \! 1 \!+\! \frac{dy}{dx} \right) \!, \end{equation} while from Equation~(61) one gets immediately \begin{equation} \frac{d \log {\cal N}_{\rm nuc}}{d \log x} = -\kappa. \end{equation} Comparing the right-hand sides of the expressions (63) and (64), one obtains a linear differential equation of the first order, \begin{equation} \frac{dy}{dx} - \chi \frac{y}{x} = \chi \!-\! 1, \end{equation} where \mbox{$\chi \!=\! \kappa/\kappa_0 \!<\! 1$} because \mbox{$\kappa \!<\! \kappa_0$} from Figure~3. 
The general solution to Equation~(65) is \begin{equation} y(x) = c_0 x^\chi - x, \end{equation} where $c_0$ is a constant; from the boundary condition~for~$y$ in Equation~(62) one finds \mbox{$c_0 = 1$}, so that \begin{equation} y(x) = x \, (x^{\chi-1} \!-\! 1). \end{equation} Inserting the solution (67) for $y$ into Equation~(62), one indeed obtains at once Equation~(61). Since \mbox{$\chi \!<\! 1$}, $y$ is positive for any \mbox{$x \!<\! 1$} and, in conformity with Figure~3, the expression (67) implies that \mbox{$x(t) \!<\! x(t_0)$} for any \mbox{${\cal N}_{\rm nuc} \!>\! 1$}; the process of accelerating escape of fragments from the companion clusters, entailing a progressively~increasing loss of their cross-sectional area, dominates. The observed slopes from Figure~3 can be fitted as a function of heliocentric distance $r$, expressed in units of peri\-helion distance $q$, by an exponential law of the type \begin{equation} \kappa(r) = \kappa_0 \exp \!\left\{ C_1 \!\left[ 1 \!-\! \left( \frac{r}{q} \right)^{\!\!C_2} \right] \! \right\}, \end{equation} where $C_1$ and $C_2$ are constants. A fairly broad range~of the pairs $C_1$, $C_2$ fits the slopes of the three observed distributions from Figure~3 about equally well, with the~re\-sulting residuals much smaller than the errors involved.{\hspace{0.3cm}} \begin{table}[b] \vspace{-3.5cm} \hspace{4.03cm} \centerline{ \scalebox{0.975}{ \includegraphics{t10_HB2.ps}}} \vspace{-20.52cm} \end{table} Table~10 presents the calculated values of the slopes~$\kappa$ and their residuals from one particular fit that employs{\nopagebreak} \mbox{$C_1 = +0.0024$} and \mbox{$C_2 = +3.5$}. Note that for \mbox{$r \!=\! q$}, Equation~(68) always satisfies the condition \mbox{$\kappa = \kappa_0 = 2.5$} regardless of the choice for $C_1$ and $C_2$.
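The algebra of Equations~(60)--(68) lends itself to a quick numerical check. The sketch below (an illustration, not part of the original analysis; the post-perihelion slope $\kappa = 1.5$ is an assumed test value, since only the constraint $\kappa < \kappa_0$ is fixed by Figure~3) verifies that the solution (67) converts Equation~(62) into Equation~(61), that the boundary condition $y(1) = 0$ holds, and that the fit (68) returns $\kappa = \kappa_0$ at $r = q$ for the quoted constants.

```python
import math

kappa0 = 2.5                 # steady-state slope from Equation (13)
kappa = 1.5                  # illustrative slope satisfying kappa < kappa0
chi = kappa / kappa0

def y(x):
    # solution (67) with c0 = 1
    return x * (x**(chi - 1.0) - 1.0)

# boundary condition: y = 0 at x = 1, hence N_nuc(1) = 1
assert y(1.0) == 0.0

# Equation (62) with y from (67) reproduces Equation (61), because
# x + y = x**chi, so (x + y)**(-kappa0) = x**(-kappa)
for x in (0.05, 0.3, 0.7):
    assert math.isclose((x + y(x))**(-kappa0), x**(-kappa), rel_tol=1e-9)

# Equation (68) with the constants used for Table 10
C1, C2 = 0.0024, 3.5
def kappa_of(r_over_q):
    return kappa0 * math.exp(C1 * (1.0 - r_over_q**C2))

# at r = q the steady-state slope is recovered exactly
assert math.isclose(kappa_of(1.0), kappa0)
```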
It thus appears that the gradual loss of the cross-sectional area of the detected companions may have been triggered by the Sun's significant near-perihelion perturbations of the motions of fragments in each companion's cluster, with some of them presumably lost to space at an accelerated rate after perihelion. The time of birth of the companions is unknown, but the results suggest that the distribution of these subclusters of fragments still conformed to steady state at the time of perihelion passage. If correct, this argument implies that the {\it preperihelion\/} distribution of equivalent diameters of companions was essentially in steady state. To test this inference, I examined the distribution for the image of 1996 October 17, in which 11 companions were detected, a greater number than in any other preperihelion image. I found that, once again, with the exception of the primary nucleus and a few companions of questionable existence at the other end of the distribution, the equivalent diameters of the remaining 8 companions fitted a power law with a slope of \mbox{$\kappa = 2.67 \pm 0.22$}, consistent within errors with the steady-state slope of 2.5. A general picture of the distribution of equivalent diameters of the companions that emerges from these considerations is a possible cyclic variation in the slope:\ the assemblage of companions approaches perihelion with a steady-state distribution, but departs from it with a distribution that is increasingly flat. To retain this cycle from one revolution about the Sun to the next, the process of slope flattening must terminate at some time after perihelion and steady state must gradually be restored.
It is probable that far from the Sun it is the collisions of fragments in the cluster of each companion (as well as the primary nucleus) that in the absence of the Sun's perturbations gradually re-establish steady state within the primary's cluster and, by extension, in the distribution of the companions' equivalent diameters as well. If so, the rapidly diminishing tilt of the distribution in an early post-perihelion span of time is merely a short-lived solar-perturbation effect. Alternatively, the rapidly flattening slope of the distribution of the companions' equivalent diameters could represent a lasting effect that wiped out the steady-state distribution once and for all. If this interpretation is correct, it would imply that the birth of the detected companions dates back to a time (or times) {\it after\/} the previous perihelion in the year of $-$2250, as the steady-state distribution should otherwise have been done away with shortly after that time. In either scenario, the practical issue is the degree of contamination caused by the companions in the primary nucleus' signal detected in the images taken at very large heliocentric distances after perihelion that Szab\'o et al.\ (2012) used in their investigation. For example, within a projected distance of 500~km of the primary nucleus, the companions contributed 71\% of the primary nucleus' signal in the 1997 August image (Table~6), 40\% in the 1997 November image (Table~7), and 39\% in the 1998 February image (Table~8). If this trend continued, the contribution probably became close to negligible in an HST image of 2009 September 8, one of the images employed by Szab\'o et al. However, since the degree of contamination depends on the instrumental constants, it may represent one of the possible explanations for a minor gap of 0.19 mag between the 2009 HST observation and the 2011 October 23 VLT observations, which Szab\'o et al.\ attributed to an albedo difference.
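The quoted contamination fractions translate directly into magnitude offsets via $\Delta m = 2.5 \log_{10}(1 + f)$, where $f$ is the companions' signal expressed as a fraction of the primary's. The sketch below (an illustrative check, not a measurement; the 2009 contamination level is hypothetical, since it was not determined) shows that a residual contamination of about 19\% would by itself account for the full 0.19~mag gap.

```python
import math

def mag_offset(f):
    # brightening (in magnitudes) of the measured signal when companions
    # contribute a fraction f of the primary nucleus' signal
    return 2.5 * math.log10(1.0 + f)

# the 71% contamination of the 1997 August image corresponds to ~0.58 mag
assert abs(mag_offset(0.71) - 0.58) < 0.01

# a hypothetical residual contamination of ~19% in 2009 would alone
# reproduce the 0.19 mag gap attributed by Szabo et al. to albedo
assert abs(mag_offset(0.19) - 0.19) < 0.01
```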
In closing, attention is called to a correlation that appears to exist in this context after perihelion between the decreasing slope of the distribution curve of the companions' equivalent diameters and the presence of the striking flare-ups on the comet's inner-coma light curve, observed by Liller (2001) (Section~9). Both phenomena are proposed to be signatures of fragment collisions, in the primary nucleus' cluster and in the subclusters of the companions alike. \subsection{Final Comments on the Problem of\\Companion Nuclei in C/1995~O1} The first comment concerns the terminology. In my earlier paper (Sekanina 1999b) I consistently referred to a {\it satellite\/} or {\it satellites\/}, whereas now I am dealing with a {\it companion\/} or {\it companions\/}. As remarked at the beginning of Section~11, this change of terminology is a corollary of the new model for the comet's nucleus that implies a substantially lower mass of the cluster that makes up the primary nucleus, by a factor of more than 100, relative to the mass estimated for a solid nucleus of the same cross-sectional area. This difference clearly has an effect on the dimensions of the stability zone, with the result that the range of distances for gravitationally bound companions --- the satellites --- is now curtailed significantly. The second comment is to underscore a point that appears to have never been contemplated in the controversy over the detection of a companion (or companions) in close proximity to the primary nucleus:\ the unequal degrees of fitness and resolution offered by the various applied techniques toward achieving a detection. I argue that a two-dimensional modeling of the type that the method employed here is based on is more robust and less prone to missing inconspicuous objects in close proximity to a major object than is, for example, the method of radial cuts used in Lamy's approach (e.g., Lamy et al.\ 1996).
The superior qualities of the applied technique are apparent not only from the high signal-to-noise ratios of the detected companions (as listed in Tables~6--8), but also from the results of experimentation with fitting additional extended sources to account for a complex distribution of the signal over the investigated field of view. Solutions with more than one extended source were not successful in fitting the local signal peaks, unlike the solutions with additional point sources. Thus, the present results cast doubt on the interpretation of the detected bumps in the digital maps as imprints of complex morphological features of the ambient dust coma (such as jets or hoods) and, instead, support the notion that they are signatures of point-like companion nuclei immersed in the coma. \section{Summary and Conclusions} There is a consensus that C/1995~O1 was one of the most spectacular comets of the 20th century with an unusually large nucleus. However, even the best determination --- based on the far-infrared observations with the Herschel Space Observatory when the comet was no longer active --- entailed assumptions by employing a model that converted the observed flux, that is, a measure of the {\it cross-sectional area\/}, of 4300 km$^2$, to what should be called an equivalent diameter of \mbox{74\,$\pm$\,6 km} (Szab\'o et al.\ 2012). Accordingly, it is the cross-sectional area --- a quantity more directly measured than the diameter --- that describes the nucleus more faithfully. It is then a matter of interpretation to decide what kind of nucleus the measured quantity characterizes, whether a single spherical solid body, or a binary object, or a cluster of solid spherical bodies of the same overall cross-sectional area, etc. Subject to additional constraints, they all satisfy the flux condition equally well.
It is noted that Szab\'o et al.'s result is in excellent agreement with a mean equivalent diameter of 73.1\,$\pm$\,2.4~km (Table~5), derived from the HST images taken on nine dates between 1995 October and 1998 February, when the comet was always less than 6.4~AU, and as close as 2.7~AU, from the Sun. This correspondence suggests that the comet's dust coma was at these heliocentric distances optically thin all the way to the surface of the nucleus. A gigantic size of the nucleus is fundamentally at odds with the independent detection of a fairly high outgassing-driven nongravitational acceleration that the comet's orbital motion was subjected to{\vspace{-0.04cm}} (Paper~1). The acceleration,{\vspace{-0.04cm}} (0.707\,$\pm$\,0.039)$\times 10^{-8}$\,AU~day$^{-2}$ at a heliocentric distance of 1~AU and equal to (2.39\,$\pm$\,0.13)$\times 10^{-5}$ of the Sun's gravitational acceleration, follows a modified Marsden-Sekanina-Yeomans (1973) law with a scaling distance of \mbox{$r_0 \!=\! 15.36$}~AU. When integrated over the entire orbit about the Sun, the nongravitational effect is found to be equivalent to a momentum change per unit mass of 2.46\,$\pm$\,0.14~m~s$^{-1}$. Accounting for the momentum exerted by the mass sublimated from the nucleus over the orbit, the conservation-of-momentum law suggests that such a nucleus of a bulk density of 0.4~g~cm$^{-3}$ should not exceed $\sim$10~km in diameter and its mass should be on the order of 10$^{17}$~g, more than {\it two orders of magnitude\/} less than the mass of the nucleus with the dimensions determined by Szab\'o et al.\ (2012). I argue that this major conflict can only be avoided by postulating that the nucleus of C/1995~O1 at its recent return to perihelion was made up of a {\it compact cluster of massive fragments\/} of the original nucleus that broke up by the action of tidal forces exerted by Jupiter during the comet's close encounter with the planet in the 23rd century BCE (Paper~1).
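The quoted ratio of the nongravitational to the solar gravitational acceleration can be checked from first principles. The sketch below uses the Gaussian gravitational constant (so that the Sun's attraction at 1~AU is $k^2$ AU~day$^{-2}$), a standard convention that is assumed here rather than stated in the text.

```python
k = 0.01720209895            # Gaussian gravitational constant
g_sun = k**2                 # Sun's gravitational acceleration at 1 AU, AU/day^2
a_ng = 0.707e-8              # quoted nongravitational acceleration, AU/day^2

ratio = a_ng / g_sun
# should match the quoted (2.39 +/- 0.13) x 10^-5
assert abs(ratio - 2.39e-5) < 0.13e-5
```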
Dominated by collisions, the cluster is assumed to have a size distribution of fragments that reached steady state, with their cumulative number varying inversely as the $\frac{5}{2}$th power of fragment diameter. The nongravitational acceleration detected in the comet's orbital motion is in this scenario interpreted as referring to the principal, most massive fragment, located near the cluster's center of mass and, as required by the conservation-of-momentum law, up to 10~km in diameter. The nongravitational accelerations on other~outgassing fragments remain undetected, triggering perturbations of their motions relative to the principal fragment. Besides having a correct cross-sectional area, the cluster ought to appear as a nearly point-like feature in the high-resolution images taken with the HST instruments. This requirement limits the models to a strongly compacted cluster not exceeding $\sim$200~km in diameter, constraining its image's extent to no more than two pixels across on the HST detectors at geocentric distances of $\sim$3~AU. Independently, the steady-state size distribution of fragments restricts the total mass of the cluster to less than five times the mass of the principal fragment. Further critical properties of C/1995~O1 are the tensile strength of its original, pre-encounter nucleus as well as the degree of gravitational stability and collisional rate of the cluster-like nucleus. The pre-encounter nucleus is estimated at about 20 masses of the principal fragment and more than 20~km in diameter, with most of the mass having been lost by the time the comet was discovered in 1995. Given the minimum encounter distance of less than 11~Jovian radii (Paper~1), a critical tensile strength along fissures could not be higher than several Pa for the nucleus to fracture, while the central gravitational pressure did not exceed $\sim$3000~Pa.
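The factor-of-five mass bound quoted above follows from the steady-state distribution itself: with a cumulative number \mbox{${\cal N}(>\!D) \propto D^{-5/2}$}, equal-density spherical fragments (an assumption made here for the check), and the principal fragment's diameter $D_0$ as the upper cutoff, the cluster-to-principal mass ratio is $5\,(1 - \sqrt{D_{\rm min}/D_0}) < 5$. A numerical sketch confirming the closed form:

```python
# cluster-to-principal mass ratio for N(>D) = (D/D0)^(-5/2):
# differential number n(D) dD = (5/2) D0^(5/2) D^(-7/2) dD, mass ~ D^3
def mass_ratio(d_min, d0=1.0, steps=200_000):
    # midpoint-rule integral of (D/d0)^3 n(D) over [d_min, d0]
    h = (d0 - d_min) / steps
    total = 0.0
    for i in range(steps):
        d = d_min + (i + 0.5) * h
        total += (d / d0)**3 * 2.5 * d0**2.5 * d**-3.5 * h
    return total

# closed form 5 * (1 - sqrt(d_min/d0)); bounded by 5 as d_min -> 0
assert abs(mass_ratio(0.01) - 5.0 * (1.0 - 0.1)) < 0.01
assert mass_ratio(1e-4) < 5.0
```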
Based on existing studies of gravitational stability of globular clusters on the one hand and binary asteroids on the other hand, I adopt for the radius of a stability zone a conservative limit equaling $\sim \! \frac{1}{7}$th the radius of the Hill sphere. For a cluster of fragments of the considered mass, the sphere of gravitational stability at perihelion of C/1995~O1 is slightly smaller than the cluster's dimensions. Accordingly, significant perturbations of the cluster's outer reaches by the Sun are likely near perihelion, resulting presumably in a higher collisional rate after perihelion, at least over limited periods of time. Possible evidence for this corollary of the Sun's near-perihelion perturbations is Liller's (2001) list of recurring flare-ups in the comet's inner coma, five of which were detected between early 1998 and mid-1999. An additional flare-up of similar nature was observed by~other astronomers in late 1999, when Liller's monitoring was incomplete. If the flare-ups are interpreted as due to dust ejecta from colliding kilometer-sized fragments, Liller's result provides their collisional rate, thus making it possible to correlate the cluster's dimensions with the principal fragment's size and to select a narrow range of cluster models centered on the most probable one, presented in Table~11. The high chance of collisions among fragments is illustrated by an average distance between their centers, which is shorter than the diameters of the $\sim$50 most massive fragments. The proposed cluster model for the nucleus of comet C/1995~O1 so dramatically contrasts with the traditional single-body model that the published interpretations of the comet's coma morphology and brightness variations have now become largely invalidated and will have to be reinvestigated essentially from scratch, an effort that is beyond the scope of this study.
However, the flare-ups in the post-perihelion light curve of the inner coma (Liller 2001) are unlikely to be products of sudden local activity on a single rotating nucleus because identical areas of the surface would have been exposed to the Sun before perihelion, yet no flare-ups were detected along the incoming branch of the orbit. \begin{table}[t] \vspace{-4.2cm} \hspace{4.22cm} \centerline{ \scalebox{1}{ \includegraphics{t11_HB2.ps}}} \vspace{-13cm} \end{table} Also beyond this investigation's scope is a highly desirable Monte Carlo-type modeling of the fragments' motions in the nucleus cluster over long periods of time, involving an $n$-body problem. This task should be undertaken to assess the degree of the cluster's gravitational stability along the preperihelion leg of the orbit, the severity of the Sun's perturbations, especially around perihelion, given that the size of the stability zone is then found to be slightly smaller than the cluster's size, and the magnitude of their effects on the motions of individual fragments inside or outside the cluster along the post-perihelion leg of the orbit. A final comment on the compact-cluster model of the nucleus of C/1995~O1 relates to a major imbalance between the masses of the original, pre-encounter nucleus and the cluster structure at the 1997 apparition, by which time the comet is estimated to have lost about three quarters of its initial mass, most of it in direct orbits apparently soon after the encounter. Attrition was also likely to accompany the process of perturbing the motions of fragments near the 1997 perihelion.
Given the submeter-per-second velocities of fragments, this scenario invites the suggestion that a fraction of the perturbed fragments or subclusters of fragments escaped the main cluster's gravity shortly following the perihelion passage and that such boulder-sized debris should be scattered near the comet and might show up in the HST's post-perihelion images from 1997 August 27 through 1998 February 19. Indeed, a fragment moving with a velocity of escape radially away from the primary nucleus' cluster in free flight would be expected to reach a distance of $\sim$1000~km from it in a matter of a few weeks; in reality, the time needed to reach this distance would be longer by a factor of a few. And closer objects may represent stray fragments or subclusters of fragments whose relative velocities were just below the velocity of escape. Inspection of the immediate proximity of the primary nucleus for such stray objects was therefore eminently desirable, and a computer search harvested more than 30 objects in the three post-perihelion HST images and at least 15 in the first of them alone. The detected signals of these companions suggest that they are in fact subclusters of fragments because, if single objects, most of them would be larger than the principal fragment of the primary nucleus' cluster. Their signal-to-noise ratios are generally quite high, and the possibility that they all are factitious products of the search algorithm is so remote that it can safely be dismissed. Also, there is no correlation between their locations and the positions and directions of bright jet-like features in the images. As to the old controversy over the detection of a companion (or satellite) in the HST images, it could very well be that it is the applied computer technique's faculty, aptitude, and sensitivity in extracting the object's signal that makes the difference, a point never raised in the past.
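The few-weeks figure for free radial flight is easy to verify; the value of 0.5~m~s$^{-1}$ in the sketch below is an illustrative submeter-per-second escape velocity assumed for the estimate, not a value quoted in the text.

```python
v_esc = 0.5                # m/s, illustrative submeter-per-second velocity
distance = 1000e3          # m, separation to be reached from the primary
t_days = distance / v_esc / 86400.0

# "a matter of a few weeks" of free flight (~3 weeks at 0.5 m/s)
assert 14 < t_days < 42
```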
While the times of the post-perihelion HST images were judged to be separated by gaps too wide to establish identities of the companions in different images, the data set was considered appropriate for investigating the cumulative distribution of their signals (or equivalent diameters) separately in each frame. The result was a surprisingly rapid systematic drop in the slope of the distribution from a rate lower than, but fairly close to, the steady-state rate in the 1997 August 27 image to a rate about 1.7 times less steep in the 1998 February 19 image. The trend is explained as an effect of a gradual dissipation of the subclusters that make up the companions, whereby the more massive subclusters decay at a slower pace than the less massive ones. Quantitative analysis implies that, when extrapolated back in time, the steady-state distribution would have been reached near perihelion, a coincidence that suggests a possible relationship between the separation of the subclusters from the primary and the Sun's peak perturbations. I do realize that the developed compact-cluster~model, while self-consistent, might be deemed controversial by~some and is certainly in need of further testing.~Similarly, some of the comet's properties might be hard to make readily compatible with this model or they might require additional constraints or introduce further conditions.
Whatever difficulties might lie ahead,\,however,\,the major disparity between the outgassing-driven \mbox{nongravitational} acceleration --- which for C/1995~O1 exceeds, or is comparable to, nongravitational accelerations derived in~the past for fairly bright, but by no means spectacular,~long-period comets\footnote{For example, for comet{\vspace{-0.04cm}} C/1998~T1 the total nongravitational parameter amounted to 0.88\mbox{$\:\!$}$\times 10^{-8}$ AU day$^{-2}$ (Nakano {\vspace{-0.04cm}}2000); for C/2000 WM$_1$ to 0.52\mbox{$\:\!$}$\times 10^{-8}$ AU day$^{-2}$ (Nakano 2002);{\vspace{-0.04cm}} and for the main component B of C/2001~A2 to 0.59\mbox{$\:\!$}$\times 10^{-8}$ AU day$^{-2}$ (Nakano 2001). (The errors are in the second or higher decimal.)} --- and the single-body model, which for {\vspace{-0.04cm}}C/1995~O1 requires a nucleus 74~km in diameter~and nearly 10$^{20}$\,g in mass, strikes one as so utterly compelling that to me this argument alone rules the traditional, single-body model out completely. And I am aware of no other model that would stand.\\[0.17cm] I thank Franck Marchis for providing me with the digital maps of the three post-perihelion HST images made with the STIS instrument in 1997--1998. This research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.\\[-0.2cm] \begin{center} \raisebox{0.03cm}{\footnotesize REFERENCES} \end{center} \vspace*{-0.3cm} \begin{description} {\footnotesize \item[\hspace{-0.3cm}] Aggarwal, H. R., \& Oberbeck, V. R. 1974, ApJ, 191, 577 \\[-0.57cm] \item[\hspace{-0.3cm}] Basilevsky, A. T., Krasil'nikov, S. S., Shirayev, A. A., et al.\ 2016,{\linebreak} {\hspace*{-0.6cm}}Solar Sys. Res., 50, 225 \\[-0.57cm] \item[\hspace{-0.3cm}] Baum, S. 1996, STIS Instrument Handbook, Version 1.0. (Balti-{\linebreak} {\hspace*{-0.6cm}}more:\ Space Telescope Science Institute) \\[-0.57cm] \item[\hspace{-0.3cm}] Bessel, F. W. 
1836, AN, 13, 185 \\[-0.57cm] \item[\hspace{-0.3cm}] Biver, N., Bockel\'ee-Morvan, D., Crovisier, J., et al.\ 2002, Earth{\linebreak} {\hspace*{-0.6cm}}Moon Plan., 90, 5 \\[-0.57cm] \item[\hspace{-0.3cm}] Brownlee, D. E. 1985, Annu.\ Rev.\ Earth Plan.\ Sci., 13, 147 \\[-0.57cm] \item[\hspace{-0.3cm}] Chebotarev, G. A. 1964, Sov.\ Astron., 7, 618 \\[-0.57cm] \item[\hspace{-0.3cm}] Crovisier, J., Bockel\'ee-Morvan, D., Colom, P., et al.\ 2004, A\&A,{\linebreak} {\hspace*{-0.6cm}}418, 1141 \\[-0.57cm] \item[\hspace{-0.3cm}] Della Corte, V., Rotundi, A., Fulle, M., et al. 2015, A\&A, 583, A13 \\[-0.57cm] \item[\hspace{-0.3cm}] Dohnanyi, J. S. 1969, JGR, 74, 2531 \\[-0.57cm] \item[\hspace{-0.3cm}] Fern\'andez, Y. R. 2002, Earth Moon Plan., 89, 3 \\[-0.57cm] \item[\hspace{-0.3cm}] Fern\'andez, Y. R., Wellnitz, D. D., Buie, M. W., et al.\ 1999,~Icarus,{\linebreak} {\hspace*{-0.6cm}}140, 205 \\[-0.57cm] \item[\hspace{-0.3cm}] Fulle, M., Cremonese, G., \& B\"{o}hm, C.\ 1998, AJ, 116, 1470 \\[-0.57cm] \item[\hspace{-0.3cm}] Griffin, I. P., \& Bos, M. 1999, IAUC 7288 \\[-0.57cm] \item[\hspace{-0.3cm}] Groussin, O., Jorda, L., Auger, A.-T., et al.\ 2015, A\&A 583, A32 \\[-0.57cm] \item[\hspace{-0.3cm}] Gunkelmann, N., Ringl, C., \& Urbassek, H. M. 2016, A\&A, 589,{\linebreak} {\hspace*{-0.6cm}}A30 \\[-0.57cm] \item[\hspace{-0.3cm}] Hamilton, D. P., \& Burns, J. A. 1991, Icarus, 92, 118 \\[-0.57cm] \item[\hspace{-0.3cm}] Hamilton, D. P., \& Burns, J. A. 1992, Icarus, 96, 43 \\[-0.57cm] \item[\hspace{-0.3cm}] Jorda, L., Lamy, P., Groussin, O., et al.\ 2000, in ISO Beyond Point{\linebreak} {\hspace*{-0.6cm}}Sources:\ Studies of Extended Infrared Emission, ESA-SP 455, ed.{\linebreak} {\hspace*{-0.6cm}}R.\,J.\,Laureijs,\,K.\,Leech, \&\,M.\,F.\,Kessler\,(Noordwijk,\,Netherlands:{\linebreak} {\hspace*{-0.6cm}}ESTEC), 61 \\[-0.57cm] \item[\hspace{-0.3cm}] Kennedy, G. F. 2014, MNRAS, 444, 3328 \\[-0.57cm] \item[\hspace{-0.3cm}] Kessler, D. J. 
1981, Icarus, 48, 39 \\[-0.57cm] \item[\hspace{-0.3cm}] Kramer, E. A., Fern\'andez, Y. R., Lisse, C. M., et al.\ 2014, Icarus,{\linebreak} {\hspace*{-0.6cm}}236, 136 \\[-0.57cm] \item[\hspace{-0.3cm}] Lamy, P. L., Toth, I., Gr\"{u}n, E., et al.\ 1996, Icarus, 119, 370 \\[-0.57cm] \item[\hspace{-0.3cm}] Lamy, P. L., Toth, I., Fern\'andez, Y. R., \& Weaver, H. A. 2004, in{\linebreak} {\hspace*{-0.6cm}}Comets II,\,ed.\,M.\,C.\,Festou,\,H.\,U.\,Keller,\,\&\,H.\,A.\,Weaver\,(Tucson:{\linebreak} {\hspace*{-0.6cm}}University of Arizona Press), 223 \\[-0.57cm] \item[\hspace{-0.3cm}] Licandro, J., Bellot Rubio, L. R., Boehnhardt, H., et al.\ 1998, ApJ,{\linebreak} {\hspace*{-0.6cm}}501, L221 \\[-0.65cm] \item[\hspace{-0.3cm}] Liller, W. 1997, Plan.\ Space Sci., 45, 1505} \end{description} \pagebreak \vspace*{0.3cm} \begin{description} {\footnotesize \item[\hspace{-0.3cm}] Liller, W. 2001, Int.\ Comet Q., 23, 93 \\[-0.57cm] \item[\hspace{-0.3cm}] Marchis,\,F.,\,Boehnhardt,\,H.,\,Hainaut,\,O.\,R.,\,\&\,Le\,Mignant,\,D.~1999, {\hspace*{-0.6cm}}A\&A, 349, 985 \\[-0.57cm] \item[\hspace{-0.3cm}] Marcus, J. N. 2007, Int.\ Comet Q., 29, 39 \\[-0.57cm] \item[\hspace{-0.3cm}] Mardling, R. A. 2008, in The Cambridge N-Body Lectures, Lecture{\linebreak} {\hspace*{-0.6cm}}Notes in Physics, ed.\ S.\ J.\ Aarseth, C.\ A.\ Tout, \& R.\ A.\ Mardling{\linebreak} {\hspace*{-0.6cm}}(Berlin:\ Springer), 760, 59 \\[-0.57cm] \item[\hspace{-0.3cm}] Marsden, B. G. 1968, AJ, 73, 367 \\[-0.57cm] \item[\hspace{-0.3cm}] Marsden, B. G. 1969, AJ, 74, 720 \\[-0.57cm] \item[\hspace{-0.3cm}] Marsden, B.\,G., Sekanina, Z., \& Yeomans, D.\,K.\ 1973, AJ,\,78,\,211 \\[-0.57cm] \item[\hspace{-0.3cm}] McCarthy, D. W., Stolovy, S., Campins, H., et al.\ 2007, Icarus, 189,{\linebreak} {\hspace*{-0.6cm}}184 \\[-0.57cm] \item[\hspace{-0.3cm}] Merouane, S., Zaprudin, B., Stenzel, O., et al.\ 2016, A\&A,~596,~A87 \\[-0.57cm] \item[\hspace{-0.3cm}] Nakano, S. 
2000, Minor Plan.\ Circ.\ 40668 \\[-0.57cm] \item[\hspace{-0.3cm}] Nakano, S. 2001, Minor Plan.\ Circ.\ 44030 \\[-0.57cm] \item[\hspace{-0.3cm}] Nakano, S. 2002, Minor Plan.\ Circ.\ 46619 \\[-0.57cm] \item[\hspace{-0.3cm}] Pearce, A. R. 1999, IAUC 7288 \\[-0.57cm] \item[\hspace{-0.3cm}] Probstein, R. F. 1969, in Problems of Hydrodynamics~and~Con-{\linebreak} {\hspace*{-0.6cm}}tinuum Mechanics, ed.\ F. Bisshopp \& L. I. Sedov (Philadelphia:{\linebreak} {\hspace*{-0.6cm}}Soc.\ Ind.\ Appl.\ Math.), 568 \\[-0.57cm] \item[\hspace{-0.3cm}] Rigaut, F., Salmon, D., Arsenault, R., et al.\ 1998, PASP, 110, 152 \\[-0.57cm] \item[\hspace{-0.3cm}] Samarasinha, N. H. 2000, ApJ, 529, L107 \\[-0.57cm] \item[\hspace{-0.3cm}] Schr\"{a}pler,\,R., Blum,\,J., Seizinger,\,A., \& Kley,\,W. 2012, ApJ,~758,~35 \\[-0.57cm] \item[\hspace{-0.3cm}] Sekanina, Z. 1987, in Diversity and Similarity of Comets, ESA{\linebreak} {\hspace*{-0.6cm}}SP-278, ed. E. J. Rolfe \& B. Battrick (Noordwijk, Netherlands:{\linebreak} {\hspace*{-0.6cm}}ESTEC), 315 \\[-0.57cm] \item[\hspace{-0.3cm}] Sekanina, Z. 1995, A\&A, 304, 296 \\[-0.57cm] \item[\hspace{-0.3cm}] Sekanina, Z. 1998, ApJ, 509, L133 \\[-0.57cm] \item[\hspace{-0.3cm}] Sekanina, Z. 1999a, Earth Moon Plan., 77, 147 \\[-0.57cm] \item[\hspace{-0.3cm}] Sekanina, Z. 1999b, Earth Moon Plan., 77, 155 \\[-0.57cm] \item[\hspace{-0.3cm}] Sekanina, Z., \& Kracht, R. 2017, eprint arXiv:1703.00928 (Paper 1) \\[-0.57cm] \item[\hspace{-0.3cm}] Sosa, A., \& Fern\'andez, J. A. 2011, MNRAS, 416, 767 \\[-0.57cm] \item[\hspace{-0.3cm}] Szab\'o, Gy. M., S\'arneczky, K., \& Kiss, L. L. 2011, A\&A, 531, A11 \\[-0.57cm] \item[\hspace{-0.3cm}] Szab\'o, Gy. M., Kiss, L. L., P\'al, A., et al. 2012, ApJ, 761, 8 \\[-0.57cm] \item[\hspace{-0.3cm}] Vasundhara, R., \& Chakraborty, P.\ 1999, Icarus, 140, 221 \\[-0.57cm] \item[\hspace{-0.3cm}] Weaver, H. A., \& Lamy, P. L. 1999, Earth Moon Plan., 79, 17 \\[-0.57cm] \item[\hspace{-0.3cm}] Weaver, H. A., Feldman, P. D., A'Hearn, M. 
F., et al.\ 1999, Icarus,{\linebreak} {\hspace*{-0.6cm}}141, 1 \\[-0.57cm] \item[\hspace{-0.3cm}] Whipple, F. L. 1950, ApJ, 111, 375 \\[-0.638cm] \item[\hspace{-0.3cm}] Williams, D. R., \& Wetherill, G. W. 1994, Icarus, 107, 117} \\ \vspace*{0.35cm} \end{description} \end{document}
\section{\label{sec:intro}Introduction} Perovskite transition-metal oxides have challenged electronic structure theory for several decades, due to the variety of collective structural, electronic, and magnetic phenomena that are responsible for the formation of complex orbital- and spin-ordered states~\cite{imada98,salamon01,dagotto}. A prototypical textbook example of this class of materials is the antiferromagnetic insulator LaMnO$_3$. The ground-state electronic structure of LaMnO$_3$ is characterized by the crystal-field-induced breaking of the degeneracy of the Mn$^{3+}$ 3$d^4$ manifold in the high-spin configuration ($t_{2g}$)$^3$($e_g$)$^1$, with the $t_{2g}$ orbitals lying lower in energy than the two-fold degenerate $e_g$ ones. Due to the strong Hund's rule coupling, the spins of the fully occupied majority $t_{2g}$ orbitals are aligned parallel with the spin of the singly occupied majority $e_g$ states on the same site. The orbital degeneracy in the $e_g$ channel is further lifted via cooperative Jahn-Teller (JT) distortions~\cite{rodriguez98,chatterji03,sanchez03,qiu05}, manifested by long and short Mn-O octahedral bonds alternating along the conventional orthorhombic basal plane, which are accompanied by GdFeO$_3$-type (GFO) checkerboard tilting and rotations of the oxygen octahedra~\cite{elemans71,norby95,woodward} (see \fref{fig:0}). As a result, the ideal cubic perovskite structure is strongly distorted into an orthorhombic structure with $Pbnm$ symmetry~\cite{elemans71,norby95}, and a $d$-type orbital-ordered (OO) state emerges~\cite{murakami98}. The corresponding occupied $e_g$ orbital can be written as $|\theta\rangle = \cos\frac{\theta}{2}|3z^2-r^2\rangle + \sin\frac{\theta}{2}|x^2 - y^2\rangle$~\cite{kanamori60,yin06,pavarini10,sikora02}, with the sign of $\theta\sim 108^\circ$ alternating along $x$ and $y$ and repeating along $z$.
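The mixing angle $\theta \approx 108^\circ$ fixes the composition of the occupied orbital. A minimal numerical sketch of the normalized superposition (the numerical weights are our evaluation, not values quoted from the literature):

```python
import math

theta = math.radians(108.0)        # orbital-mixing angle quoted for LaMnO3
c_3z2r2 = math.cos(theta / 2.0)    # amplitude of |3z^2 - r^2>
c_x2y2 = math.sin(theta / 2.0)     # amplitude of |x^2 - y^2>

# the superposition is normalized for any theta
assert abs(c_3z2r2**2 + c_x2y2**2 - 1.0) < 1e-12

# at theta ~ 108 deg the occupied orbital carries more |x^2 - y^2>
# weight (~0.65) than |3z^2 - r^2> weight (~0.35)
assert c_x2y2**2 > c_3z2r2**2
```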
This particular orbital ordering is responsible for the observed A-type antiferromagnetic arrangement below $T_{\mathrm{N}}=140$~K~\cite{wollan55,elemans71}. It was found that long-range order disappears above 750~K, whereas a local JT distortion (without long-range order) remains (dynamically) active above 1150~K~\cite{sanchez03,qiu05,pavarini10}. \begin{figure} \centering \includegraphics[clip,width=0.5\textwidth]{LaMnO} \caption{Representation of the JT/GFO distorted LaMnO$_3$ structure. Small red and light blue spheres indicate oxygen and manganese atoms, respectively, whereas the larger spheres refer to the La atoms. Plot produced with the VESTA visualization program~\cite{vesta}.} \label{fig:0} \end{figure} The question of whether the origin of orbital ordering should be attributed to a superexchange mechanism (O-mediated virtual hopping of electrons between nearest neighbor $S=2$ Mn cations, associated with a local Coulomb electron-electron interaction: $d^4_id^4_j \rightleftharpoons d^3_id^5_j$)~\cite{kugel73} or to an electron-lattice coupling effect (structural-induced splitting of the degenerate $e_g$ levels)~\cite{kanamori60} has been the subject of numerous studies~\cite{feiner99,hotta00,bala00,ahn00,okamoto02,tyer04,zenia05,yin06,lin08,pavarini10}. Considering that there is no clear experimental evidence to support one mechanism over the other, the use of theoretical models and computer simulations has become an essential tool to explain the complicated coupling between structural and electronic degrees of freedom and to interpret the experimental observations. On the basis of model calculations, it has been recognized that the simultaneous inclusion of both superexchange and JT interactions is crucial to provide, to some extent, a satisfactory description of the observed transition temperatures $T_{\mathrm{N}}$, $T_{\mathrm{OO}}$ and $T_{\mathrm{JT}}$~\cite{feiner99,yin06,pavarini10}.
This approach typically relies on a suitable mapping between a realistic band structure calculated e.g. via density functional theory (DFT)~\cite{dft} and an effective many-body Hamiltonian, which is often achieved by downfolding the relevant bands and constructing a localized Wannier basis~\cite{pavarini05,yin06,solovyev08,kovacik10,kovacik11}. The quality and characteristics of the Wannier representation inevitably depend on the underlying Kohn-Sham states. It is well known that the mean-field-type one-particle description of the electronic structure within the standard local density (LDA)~\cite{dft} or generalized gradient (GGA)~\cite{pbe} approximations to DFT is incapable of correctly describing exchange and correlation effects in the so-called \emph{strongly-correlated materials}, resulting, among other failures,\footnote{ We note that the underestimation of the band gap and related failures are of course also partly due to the intrinsic limitation of the Kohn-Sham approach, which is not meant to describe quasi-particle excitations correctly. } in much too small band gaps and magnetic moments~\cite{imada98}. For this reason, the DFT-derived subset of orbitals is typically employed as a reference for the one-electron (i.e. non-interacting) part of the effective Hamiltonian, where all approximated contributions coming from LDA/GGA exchange-correlation effects are subtracted in order to avoid double-counting~\cite{czyzyk94}. For example, in the DFT+DMFT method (combination of DFT and dynamical mean-field theory (DMFT))~\cite{dmft}, the effective Hamiltonian can be written as $\hat{H} = \hat{H}_{\mathrm{DFT}} - \hat{H}_{\mathrm{dc}} + \hat{H}_{U}$, where $\hat{H}_{\mathrm{DFT}}$ is the Kohn-Sham Hamiltonian, $\hat{H}_{\mathrm{dc}}$ accounts for the double-counting correction, and $\hat{H}_{U}$ represents the Hubbard-like term that describes the electronic interactions in the strongly correlated bands.
A critical issue of the DFT+DMFT approach is that a well-defined expression for the double-counting potential is not known and several forms have been suggested~\cite{czyzyk94,petukhov03}. Karolak and coworkers have recently addressed this issue by treating the double-counting term as an adjustable parameter and suggested that the best agreement with experiment is achieved by setting the double-counting potential in the middle of the gap of the impurity spectral function~\cite{karolak10}. Within this context, it is therefore justified to construct effective Hamiltonians starting from band structures obtained using different schemes, such as LDA$+U$~\cite{yin06} or hybrid functionals~\cite{jacob08}, which usually provide much better gaps for semiconducting materials than conventional DFT approximations and could therefore represent a more appropriate ``non-interacting'' reference for model calculations. For practical purposes, the most suitable starting point to study the physics of complex transition-metal oxides is probably the tight-binding (TB) scheme, which relies on a proper representation of the electronic structure using a localized basis set~\cite{imada98,dagotto}. Some of the authors have recently shown that maximally localized Wannier functions (MLWFs) can be used to extract an effective TB description of the $e_g$ subspace in LaMnO$_3$~\cite{kovacik10,kovacik11}. The calculated TB parameters can then be used to construct a simplified TB Hamiltonian in the form that is very often used for the description of manganites, $\hat{H}_{\mathrm{TB}} = \hat{H}_{\mathrm{kin}} + \hat{H}_{\mathrm{Hund}} + \hat{H}_{\mathrm{JT}} + \hat{H}_{\mathrm{e-e}}$, which then provides a very accurate representation of the underlying Kohn-Sham band structure.
Motivated by the reasons outlined above, here we calculate MLWFs for LaMnO$_3$ using several different methods, including both the conventional GGA scheme and the more sophisticated GGA$+U$, hybrid functionals, and GW approaches. Besides providing a detailed description of the electronic and magnetic properties of LaMnO$_3$ at various levels of theory, we investigate how the corresponding differences in the treatment of exchange-correlation effects influence the specific features of the MLWFs and the TB parameters derived from them. \section{\label{sec:comp}Methodology and Computational Details} \subsection{\label{ssec:comp-dft} DFT-based calculations} All our calculations are based on DFT within the Perdew-Burke-Ernzerhof~\cite{pbe} (PBE) approximation to the exchange-correlation energy. The one-particle Kohn-Sham orbitals are computed within a plane-wave basis employing two different codes: (i) the program PWscf in combination with ultrasoft pseudopotentials included in the {\sc Quantum ESPRESSO} package~\cite{espresso}, and (ii) the projector augmented wave~\cite{paw1,paw2} (PAW) based Vienna ab initio simulation package (VASP)~\cite{vasp1,vasp2}. In particular, the PWscf program is used to benchmark the implementation of the VASP2WANNIER90 interface at PBE and PBE$+U$ level.
Due to the well known limitations of standard DFT in describing the electronic structure of ``strongly-correlated'' compounds, three different corrections to the PBE wavefunctions are adopted: (i) PBE$+U$: inclusion of a repulsive on-site Coulomb interaction $U$ following the recipe of Dudarev et al.~\cite{dudarev}; (ii) Hybrid functionals: suitable mixing between density functional and Hartree-Fock theory~\cite{becke93} within the scheme proposed by Heyd, Scuseria, and Ernzerhof (HSE06, HSE hereafter), in which one quarter of the short-ranged exchange-correlation PBE functional is replaced by one quarter of the short-ranged part of Hartree-Fock exchange~\cite{hse,marsman09}; (iii) GW: explicit evaluation of the self-energy $\Sigma = iGW$ within a partially self-consistent GW$_0$ procedure, consisting of a self-consistent update of the eigenvalues in the Green's function $G$, while the screened interaction $W_0$ is kept fixed and evaluated using PBE wavefunctions~\cite{gw,franchini10}. In accordance with previous studies~\cite{gw,franchini10}, five iterations were sufficient to obtain quasiparticle energies converged to about 0.05 eV. These four methodologies (PBE, PBE+U, HSE and GW) differ in a few fundamental issues: (i) PBE relies on an approximate treatment of exchange-correlation effects; (ii) PBE+U contains the same PBE approximate correlation, but takes into account the orbital dependence (applied to the $d$ states of manganese) of the Coulomb and exchange interactions, which is absent in PBE; (iii) HSE includes a portion of non-local, fully orbital-dependent exact exchange together with PBE correlation; (iv) in GW, exchange and correlation contributions are directly computed from the self-energy. An identical technical setup is adopted for VASP and PWscf calculations.
All ground state electronic and magnetic properties are calculated for the experimental low temperature $Pbnm$ structure reported in \cite{elemans71} using a regular $\Gamma$-centered 7$\times$7$\times$5 and 6$\times$6$\times$6 k-point mesh in PWscf and VASP, respectively (reduced to 4$\times$4$\times$4 at the GW$_0$ level), and a plane wave energy cutoff of 35~Ry ($\approx\!{476}$~eV) and 300~eV in PWscf and VASP, respectively. Spin-polarized calculations were performed within a collinear setup without the inclusion of spin-orbit effects. Except where otherwise noted, all PBE and PBE$+U$ results discussed in the present work refer to PWscf calculations whereas HSE and GW$_0$ results are obtained using VASP. In both PWscf and VASP we include the Mn(3$s$), Mn(3$p$), La(5$s$), and La(5$p$) semi-core states in the valence. In PWscf the (unoccupied) La(4$f$) states are excluded from the ultrasoft pseudopotential, whereas they are present in the corresponding VASP PAW potential.\footnote{ In the construction of the MLWFs within VASP we have shifted the La(4$f$) states to higher energies through the application of a large $U=10$~eV in order to avoid the overlap between La(4$f$) and unoccupied Mn($e_g$) states, which would otherwise deteriorate the disentanglement procedure. } To obtain the model TB parameters we perform additional calculations for a simplified crystal structure with the same unit cell volume as the experimental $Pbnm$ structure, but which involves only the staggered ($Q^x$-type) JT distortion and no GFO distortion and no orthorhombic deformation of the lattice vectors ($Q^z=0$). See~\cite{ederer07,kovacik10,kovacik11} for more details and an exact definition of the different distortion modes. The amplitude of $Q^x$ is 0.199 and 0.184~\AA\ in the experimental $Pbnm$ and in the simplified JT($Q^x$) structure, respectively, and the amplitude of $Q^z$ in the experimental $Pbnm$ structure is -0.071~\AA\ . 
\subsection{\label{ssec:comp-wf}Maximally localized Wannier functions} A set of $N$ localized Wannier functions $\vert w_{n\bi{T}} \rangle$ corresponding to a group of $N$ bands that are described by delocalized Bloch states $\vert\psi_{m\bi{k}}\rangle$ is defined by the following transformation: \begin{equation}\label{eq:mlwf} \vert{w_{n\bi{T}}}\rangle = \frac{V}{\left({2\pi}\right)^{3}} \int_{\mathrm{BZ}} \mathrm{d}\bi{k} \left[{\sum_{m=1}^{N} U_{mn}^{\left(\bi{k}\right)} \vert{\psi_{m\bi{k}}}\rangle} \right] \mathrm{e}^{-\mathrm{i}\bi{k}\cdot\bi{T}} \,, \end{equation} where $\bi{T}$ is the lattice vector of the unit cell associated with the Wannier function, $m$ is a band index, $\bi{k}$ is the wave-vector of the Bloch function, and the integration is performed over the first Brillouin zone (BZ) of the lattice. Different choices for the unitary matrices $\uuline{U}^{(\bi{k})}$ lead to different Wannier functions, which are thus not uniquely defined by \eref{eq:mlwf}. A unique set of \emph{maximally localized Wannier functions} (MLWFs) can be generated by minimizing the total quadratic spread of the Wannier orbitals~\cite{1997_marzari}. Once the transformation matrices $\uuline{U}^{(\bi{k})}$ are determined, a TB representation of the Hamiltonian in the basis of MLWFs is obtained: \begin{equation}\label{eq:tbh} \hat{H} = \sum_{\bi{T}, \Delta\bi{T}} \sum_{nm} h_{nm}^{\Delta\bi{T}} \, \hat{c}^\dagger_{n\bi{T}+\Delta\bi{T}} \hat{c}_{m\bi{T}} \ + \mathrm{h.c.} \ , \end{equation} with \begin{equation}\label{eq:hr} h^\bi{T}_{nm} = \frac{V}{(2\pi)^3} \int_\mathrm{BZ} \mathrm{d}\bi{k} \left[ \sum_{l} \left(U^{(\bi{k})}_{ln}\right)^* \epsilon_{l\bi{k}} \, U^{(\bi{k})}_{lm} \right] \mathrm{e}^{-\mathrm{i}\bi{k}\cdot\bi{T}} \,. \end{equation} Here, $\epsilon_{l\bi{k}}$ is the eigenvalue corresponding to Bloch function $\vert\psi_{l\bi{k}}\rangle$.
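In practice, \eref{eq:hr} reduces to a discrete Fourier transform over the k-point mesh once the gauge matrices are known. The following minimal sketch (with hypothetical arrays, not tied to any of the codes used here) evaluates $h^{\bi{T}}_{nm}$ on a uniform mesh and illustrates two properties implied by the definition: $h^{\bi{T}=0}$ is Hermitian, and $h^{-\bi{T}} = (h^{\bi{T}})^\dagger$.

```python
import numpy as np

def wannier_hamiltonian(kpts, eps, U, T):
    """Real-space matrix h^T_{nm} of Eq. (3) as a discrete BZ average.

    kpts: (Nk, 3) k-points in fractional (reciprocal-lattice) coordinates
    eps:  (Nk, Nb) Bloch eigenvalues eps_{lk}
    U:    (Nk, Nb, Nw) gauge matrices U^{(k)}_{ln}
    T:    (3,) lattice translation in lattice-vector units
    """
    h = np.zeros((U.shape[2], U.shape[2]), dtype=complex)
    for k, e, u in zip(kpts, eps, U):
        # U^dagger diag(eps) U rotates into the Wannier gauge;
        # the exponential carries the Bloch phase e^{-i k.T}
        h += (u.conj().T * e) @ u * np.exp(-2j * np.pi * k @ T)
    return h / len(kpts)
```

For $\bi{T}=0$ the result is the on-site block; the hopping matrices between unit cells follow for nonzero $\bi{T}$.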
For cases where the bands of interest do not form an isolated set of bands but are entangled with other bands, a two step procedure for obtaining the unitary transformation matrices (which in this case are typically rectangular) is employed~\cite{2001_souza}. We note that $\bi{T}$ and $\Delta\bi{T}$ in \eref{eq:mlwf}-\eref{eq:hr} indicate lattice translations, whereas for crystal structures with more than one atom per unit cell, $n$ and $m$ generally represent combined orbital, spin, and site indices, specifying the various orbitals at all sites within the primitive unit cell. Based on the projected densities of states (PDOS) calculated within DFT, we determine a suitable energy window for the construction of the MLWFs (more details follow in \sref{ssec:res-mlwf}). MLWFs are constructed by interfacing PWscf and VASP with the wannier90 code using the available PW2WANNIER90 tool~\cite{wannier90,marzari12} and the newly introduced VASP2WANNIER90 interface, respectively. Technical details on the construction of MLWFs within the PAW formalism can be found in Ref.~\cite{Ferretti}. Starting from an initial projection of the Bloch bands onto atomic $e_g$ basis functions \mbox{$\vert{3z^2-r^2}\rangle$} and \mbox{$\vert{x^2-y^2}\rangle$} centered at different Mn sites within the unit cell, we obtain a set of two $e_{g}$-like MLWFs per spin channel for each Mn site. The spread functional (both gauge-invariant and non-gauge-invariant parts) is considered to be converged if the corresponding fractional change between two successive iterations of the spread minimization is smaller than $10^{-10}$. {\em Practical instructions for the use of VASP2WANNIER90:} VASP uses wannier90 in library mode to generate all ingredients which are required to run the wannier90 code as a post-processing tool.
Apart from the main wannier90 input file (wannier90.win), the input files needed by wannier90 are~\cite{wannier90}: (i) the overlaps between the cell periodic parts of the Bloch states (wannier90.mmn), (ii) the projections of the Bloch states onto trial localized orbitals (wannier90.amn), and (iii) the eigenvalues file (wannier90.eig). This set of files is generated by VASP by setting LWANNIER90 = .TRUE. in the main VASP input file (INCAR). If the file wannier90.win already exists, VASP will properly generate the files (i)-(iii) according to the instructions specified in wannier90.win. If wannier90.win does not exist, VASP will generate a default wannier90.win file, which should be suitably modified in accordance with the keyword list described in the wannier90 user guide~\cite{wannier90online} in order to tell VASP what quantities to compute. Then, VASP has to be run again in order to create the additional wannier90 input files. To construct the UNK files (the periodic part of the Bloch states represented on a regular real space grid), which are required to plot the MLWFs, it is necessary to set LWRITE\_UNK = .TRUE. in the INCAR file. In a spin-polarized calculation two sets of input files are generated (VASP2WANNIER90 is employed only once to generate the files wannier90.mmn, wannier90.amn, and wannier90.eig. These files are then used as input files for wannier90, which is run separately for each energy window). Please refer to the online documentation of wannier90 for a detailed description of all relevant instructions~\cite{wannier90online}.
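As a concrete illustration, the workflow described above amounts to the following INCAR additions (a sketch restricted to the two keywords named in the text; all remaining settings are those of the underlying calculation):

```
LWANNIER90 = .TRUE.   ! generate wannier90.mmn, wannier90.amn, wannier90.eig
LWRITE_UNK = .TRUE.   ! also write the UNK files needed to plot the MLWFs
```

With these files in place, wannier90 is then run as a post-processing step, once per spin channel/energy window, as described above.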
\section{\label{sec:res}Results and Discussion} In this section we will first present and compare the electronic and magnetic ground state obtained within the various levels of approximation (PBE, PBE$+U$, HSE and GW$_0$), before describing the downfolding of the resulting band structure by Wannier function decomposition. Finally, TB parameterizations corresponding to effective $e_g$ models, either with or without an explicit electron-electron interaction term, will be derived from these results, and implications of the different underlying band structures will be discussed. \subsection{\label{ssec:res-em} Electronic and magnetic ground state} \begin{figure} \includegraphics[clip,width=1.0\textwidth]{pbnm-bs} \caption{Calculated band structure along certain high-symmetry directions within the BZ. Each panel reports results obtained by a different method, as specified in the panel title. $E=0$ is aligned to the middle of the gap.} \label{fig:1} \end{figure} The calculated band structures are displayed in \fref{fig:1} and the corresponding indirect ($E_\mathrm{i}$) and smallest direct ($E_\mathrm{d}$) band gaps are listed in \tref{tab:1}. The calculated valence and conduction band spectra and the PDOS (corresponding to Mn($e_g$), Mn($t_{2g}$), and O($p$) states) are represented in \fref{fig:2} and \fref{fig:bs-mlwf}, respectively. It can be seen from \fref{fig:1} that the eigenvalue dispersion in LaMnO$_3$ is characterized by an insulating state with an indirect energy gap. By comparing with the PDOS shown in \fref{fig:bs-mlwf}, it becomes clear that within all methods the Mott-Hubbard gap is opened between occupied and empty states with predominant Mn($e_g$) character.
While the width of the band gap differs strongly between the various methods, each one is in good agreement with previous LDA/GGA~\cite{pickett97,sawada96,Ravindran02,hashimoto}, (LDA/GGA)$+U$~\cite{sawada96,hashimoto}, and hybrid functional~\cite{munoz04} results, respectively (see \tref{tab:1}). Our partially self-consistent GW$_0$ cannot be directly compared with the single-shot G$_0$W$_0$ results of Nohara {\em et al.}\cite{nohara06}, since the latter depend much more on the initial LDA wavefunction and consequently yield a smaller band gap. \begin{table} \caption{Collection of calculated (present work and previous studies) and experimental values for the indirect ($E_{\mathrm{i}}$) and direct ($E_{\mathrm{d}}$) band gap of LaMnO$_3$. The measured values refer to optical conductivity~\cite{arima,Jung97,Jung98}, Raman~\cite{Kruger04}, and photoemission~\cite{saitoh} experiments.} \begin{indented} \item[] \begin{tabular}{@{}ccccccccc} \br \multicolumn{7}{c}{This Work} \\ &HSE & GW$_0$@PBE & PBE &\multicolumn{3}{c}{PBE$+U$} \\ && & &$U=2$ &$U=3$ &$U=4$ \\ \mr $E_{\mathrm{i}}$ &2.25 & 1.41 & 0.38 & 0.82 & 0.98 & 1.10 \\ $E_{\mathrm{d}}$ &2.55 & 1.68 & 0.75 & 1.15 & 1.30 & 1.42 \\ \br \multicolumn{7}{c}{Previous studies} \\ &B3LYP\cite{munoz04} & G$_0$W$_0$@LDA\cite{nohara06} & GGA\cite{hashimoto} & GGA$+U$\cite{hashimoto} & \multicolumn{2}{c}{Expt.}\\ && & &$U$=2 & & \\ \mr $E_{\mathrm{i}}$&2.3 & 0.82 & 0.27 & 0.81 & & \\ $E_{\mathrm{d}}$& & 1.00 & 0.70 & 1.18 & \multicolumn{2}{c}{1.1$^a$, 1.9$^b$, 2.0$^{c,d}$, 1.7$^e$} \\ \br \end{tabular} \end{indented} \label{tab:1} \begin{flushleft} $^a$Ref. \cite{arima}, $^b$Ref. \cite{Jung97}, $^c$Ref. \cite{Jung98}, $^d$Ref. \cite{Kruger04}, $^e$Ref.
\cite{saitoh} \end{flushleft} \end{table} Due to the inadequate treatment of exchange-correlation effects, conventional PBE-DFT leads to a significantly underestimated gap, $E_{\mathrm{d}}^{\mathrm{PBE}}=0.75$~eV, compared to the experimental values obtained from optical conductivity measurements (1.1~eV~\cite{arima}, 1.9~eV~\cite{Jung97}, 2.0~eV~\cite{Jung98}), Raman (2.0~eV~\cite{Kruger04}), and photoemission data (1.7~eV~\cite{saitoh}). In addition, the uppermost filled Mn($e_g$) bands (with energies in the region between $-$1.3~eV and 0.0~eV) are well separated from the lower-lying mostly Mn($t_{2g}$)- and O($p$)-like states (below $-1.5$~eV). In contrast, while the lower part of the group of bands immediately above the gap (up to about 2~eV) exhibits predominant local majority spin $e_g$ character, these bands are strongly entangled with local minority spin $t_{2g}$ states at slightly higher energies (between approximately 1 and 2~eV). The inclusion of the on-site interaction term within the PBE$+U$ approach separates these higher-lying local minority spin $t_{2g}$ states from the local majority $e_g$ bands directly above the gap for $U>2$~eV. Furthermore, increasing $U$ also increases the band gap ($E_{\mathrm{d}}^{\mathrm{PBE}+U}=1.42$~eV for $U=4$~eV) and lowers the filled $e_g$ states relative to the bands with dominant Mn($t_{2g}$) and O($p$) character, which leads to an appreciable overlap between these sets of bands around the $\Gamma$ point for $U=4$~eV. Changing to a more elaborate treatment of the exchange-correlation kernel, we observe that HSE provides a value of the band gap ($E_{\mathrm{d}}^{\mathrm{HSE}}=2.55$~eV) that is significantly larger (by $\approx$ 0.5 eV) than the experimental measurements. This is in line with previous hybrid functional estimates based on the B3LYP approach implemented within a Gaussian basis set~\cite{munoz04}.
By comparing the PBE and HSE band gaps one could argue that a smaller portion of exact Hartree-Fock exchange should be included in the hybrid functional framework in order to obtain a better agreement with experiment. Indeed, a reduced mixing parameter $a_{\mathrm{mix}}=0.15$ shrinks the direct gap down to 1.79~eV, almost on par with the photoemission measurements of Saitoh and coworkers~\cite{saitoh}, and with the more recent optical conductivity data of Jung {\em et al.}\cite{Jung97,Jung98} and Kr\"uger {\em et al.}\cite{Kruger04}. LaMnO$_3$ therefore seems to represent another example for which the one-quarter compromise (mixing 1/4 of exact exchange with 3/4 of DFT exchange) is not the ideal choice~\cite{Franchini11}. Finally, the parameter-free GW$_0$ technique leads to a quite satisfactory prediction of the band gap, $E_{\mathrm{d}}^{\mathrm{GW_0}}=1.68$~eV, significantly larger than the value obtained in the only previous single-shot (i.e. perturbative) G$_0$W$_0$ study of Nohara {\em et al.} based on initial LDA wavefunctions~\cite{nohara06}. Similarly to HSE and PBE$+U$ (for $U=3$~eV), GW$_0$ delivers $e_g$ bands around $\rm E_F$ well separated from the O($p$) and Mn($t_{2g}$) bands below and, to a lesser extent, above (there is an appreciable mixing of Mn($e_{g}$) and Mn($t_{2g}$) states along the T-Z-$\Gamma$ path around 2 eV), in clear contrast with the PBE picture, which predicts a certain degree of overlap between the $e_g$ bands and the higher lying $t_{2g}$ bands. \begin{figure} \centering \includegraphics[clip,width=0.5\textwidth]{alldos} \caption{Comparison between experimental~\cite{park96} (blue squares) and calculated valence and conduction band spectra for PBE, PBE$+U$ ($U=3$ and 4~eV), HSE, and GW$_0$.
The calculated and measured spectra have been aligned by overlapping the valence band maxima and conduction band minima.} \label{fig:2} \end{figure} In order to provide a further assessment of the quality of the various methods in describing the electronic structure of LaMnO$_3$, we compare in \fref{fig:2} the simulated valence and conduction band spectra with the corresponding photoemission spectroscopy and X-ray absorption spectroscopy data~\cite{park96}. For negative energies (occupied states) none of the four methods differs dramatically from the experimental spectrum, even though the multi-peak structures in the range of $-$7~eV to $-$4~eV seen within PBE$+U$ and HSE do not have a clear experimental correspondence, whereas the PBE and GW$_0$ profiles better follow the three main experimental peaks/shoulders. The situation is more critical for the unoccupied region, since none of the methods is capable of correctly reproducing the two-peak structure characterizing the onset of the conduction band right above $\rm E_F$. These two peaks can be interpreted as formed by $e_g$ (lower one) and $t_{2g}$ (higher one) contributions and are described differently by the various schemes, following the corresponding band dispersions discussed in Fig. \ref{fig:1}: (i) in PBE both peaks merge into one single strong electronic signal, reflecting the large overlap between the $e_g$ and $t_{2g}$ bands right above $\rm E_F$; (ii) in PBE+U the two peaks are too widely separated, reflecting the wide $e_g$-$t_{2g}$ band splitting; (iii) HSE and GW$_0$ are rather similar: their spectra are characterized by a small bunch of lower-lying $e_g$ states (onset of the conduction band spectra) followed by a more intense $t_{2g}$-like peak, but the GW$_0$ $e_g$/$t_{2g}$ splitting ($\approx$ 1.4 eV) better matches the experimental one ($\approx$ 1.1 eV) as compared to the larger HSE splitting ($\approx$ 1.7 eV).
From these results we can infer that GW$_0$ and HSE convey the most satisfactory picture in terms of peak position and corresponding spectral weight for both occupied and unoccupied states, with GW$_0$ better reproducing the splitting between the two lower conduction peaks. However, it should be noted that the relative weights of the two lower conduction peaks do not match experiment, indicating that it is necessary to go beyond the GW approximation to obtain a refined agreement with experiment, for instance using the Bethe-Salpeter equation (this is beyond the scope of the present study). We underline once more that, unlike PBE+U and HSE (in which the proper adjustment of the parameters $U$ and $a_{\mathrm{mix}}$ can cure the band gap problem and lead to values of the gap close to the experimental ones), the parameter-free GW$_0$ scheme is capable of providing a rather accurate picture without the need of any adjustable parameter. Next, we analyze the magnetic properties in terms of the nearest-neighbor magnetic exchange interactions within the orthorhombic $ab$ plane ($J_{ab}$) and along $c$ ($J_c$)~\cite{solovyev96,munoz04,evarestov}. This will provide further insights into the performance of the various methods with respect to energetic properties of LaMnO$_3$. By mapping the calculated total energies for different magnetic configurations onto a classical Heisenberg Hamiltonian $H=-\frac{1}{2}\sum_{i{\neq}j}J_{ij}\,{{S_i\cdot S_j}}$, the following equations for $J_{ab}$ and $J_c$ can be obtained (see also \cite{munoz04,evarestov}): \begin{equation} E_{\mathrm{FM}} - E_{\mathrm{AAF}} = -32 J_c \end{equation} \begin{equation} E_{\mathrm{CAF}} - E_{\mathrm{FM}} = 64 J_{ab} \,.
\end{equation} Here, $E_{\mathrm{FM}}$ corresponds to the total energy for the ferromagnetic (FM) configuration, whereas $E_{\mathrm{AAF}}$ and $E_{\mathrm{CAF}}$ indicate the total energies associated with antiferromagnetic (AFM) ordering along $z$, and a two-dimensional checker-board like arrangement within the $xy$ plane, respectively~\cite{munoz04}. The values of $J_{ab}$ and $J_c$ obtained using the various methods considered within this work are listed in \tref{tab:2} along with the calculated magnetic moments at the Mn site. We note that, due to the neglect of orbital degrees of freedom, which in LaMnO$_3$ are strongly coupled to spin degrees of freedom, it is not obvious whether a classical Heisenberg model is well suited to give a complete picture of the magnetic properties of LaMnO$_3$. Nevertheless, it can still provide an accurate parameterization of the energy differences between the various magnetic configurations. However, the quantitative comparison with the experimental coupling constants derived from spin-wave spectra, i.e. small fluctuations around the AFM ground state, should be taken with care. In view of this, we can draw the following conclusions about the efficiency of the various DFT and beyond-DFT methods employed in the present study: (i) the magnetic energy differences exhibit appreciable variation between VASP and PWscf, leading to differences of about 1-2 meV in the magnetic coupling constants. This is most likely due to the different pseudopotential techniques employed in the two codes (PAW method vs. ultrasoft pseudopotentials), which lead to qualitative differences especially at the PBE$+U$ level, as discussed below. A more elaborate discussion of the performance of different functionals and methods in predicting the magnetic couplings is given in Refs. \cite{hashimoto, evarestov}, where it is concluded that the PAW values are very similar to the full potential FLAPW ones.
(ii) In both codes PBE gives the correct A-AFM ground state, delivering a negative $J_c$ ($J_c^\mathrm{VASP} = -2.13$~meV, $J_c^\mathrm{PWscf}=-0.81$~meV) and a positive $J_{ab}$ ($J_{ab}^\mathrm{VASP}=3.22$~meV, $J_{ab}^\mathrm{PWscf}=4.56$~meV). (iii) The ``$+U$'' correction to PBE decreases the $E_{\mathrm{FM}} - E_{\mathrm{AAF}}$ energy difference and eventually leads to the prediction of a FM ground state for $U$ larger than a certain value. This critical value is rather different within the two codes used in this study: $J_c$ becomes positive for $U=2$~eV and $U=4$~eV in PWscf and VASP, respectively. We note that this difference is almost entirely due to the difference in the corresponding PBE results. The $U$-induced changes in the magnetic coupling constant $J_c$ relative to the $U=0$ reference are nearly identical within the two codes. (iv) While the values of $J_{c}$ within HSE and PBE+$U$ (VASP) are very similar for $U$ between 2 and 3~eV, the ratio between $J_c$ and $J_{ab}$ is rather different within the two approaches. (v) Within the limitations regarding the applicability of a Heisenberg picture to LaMnO$_3$ stated above, HSE seems to be most consistent with the values of the magnetic coupling constants derived from neutron diffraction measurements of spin-wave spectra~\cite{moussa96} and magnon data~\cite{hirota96}. This further confirms the predictive power of HSE in describing exchange interactions in transition metal oxides, as compared to other available beyond-DFT schemes~\cite{archer}. We can also see that all methods result in values for the local magnetic moments of the Mn cation that are within the range of variation of the experimental data. Generally, increasing $U$ within PBE+$U$ leads to a more localized magnetization density compared to PBE, and thus increases the local magnetic moments.
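The Heisenberg mapping above is easily inverted to extract the coupling constants from three total energies. The short sketch below does so for illustrative energies (in meV; the numbers in the usage note are hypothetical, chosen only to reproduce the PWscf-PBE couplings of \tref{tab:2}, and are not the actual computed total energies):

```python
def heisenberg_couplings(E_FM, E_AAF, E_CAF):
    """Invert E_FM - E_AAF = -32 J_c and E_CAF - E_FM = 64 J_ab.

    Energies in meV per simulation cell; returns (J_ab, J_c) in meV.
    """
    J_c = -(E_FM - E_AAF) / 32.0
    J_ab = (E_CAF - E_FM) / 64.0
    return J_ab, J_c
```

For example, E\_FM = 0, E\_AAF = $-$25.92 and E\_CAF = 291.84 meV give $J_{ab}=4.56$ meV and $J_c=-0.81$ meV, i.e. FM coupling in the $ab$ plane and AFM coupling along $c$, consistent with the A-AFM ground state.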
On the basis of the above analysis of both the electronic and magnetic properties of LaMnO$_3$, we can conclude that HSE and, when applicable, GW$_0$ (the calculation of magnetic energies at the GW level to extract exchange coupling constants is presently not possible, or at least extremely difficult) are the most consistent with the available experimental data in terms of spectral properties, electronic structure and magnetic exchange interactions of LaMnO$_3$. In view of this, we can now proceed to the discussion of the Wannier-based description of the $e_g$ bands and the associated TB parameterization. \begin{table} \caption{PBE, PBE$+U$, HSE and GW$_0$ derived magnetic exchange parameters (meV) and magnetic moment at Mn sites $\mu$ ($\mu_{\mathrm{B}}$). The experimental and previously published computed data are taken from: $^{a}$~Ref.~\cite{hashimoto}, $^{b}$~Ref.~\cite{munoz04}, $^{c}$~Ref.~\cite{moussa96}, $^{d}$~Ref.~\cite{elemans71}, $^{e}$~Ref.~\cite{hauback96}, and $^{f}$~Ref.~\cite{hirota96}.} \begin{indented} \item[] \begin{tabular}{@{}lccc} \br & $J_{ab}$ & $J_c$ & $\mu$ \\ \mr \multicolumn{4}{c}{PWscf} \\ PBE & 4.56 & $-$0.81 & 3.67 \\ $U=2$~eV & 5.02 & 0.37 & 3.82 \\ $U=3$~eV & 5.30 & 0.98 & 3.89 \\ $U=4$~eV & 5.63 & 1.55 & 3.96 \\ \multicolumn{4}{c}{VASP} \\ PBE & 3.22 & $-$2.13 & 3.50 \\ $U=2$~eV & 3.54 & $-$0.84 & 3.68 \\ $U=3$~eV & 3.57 & $-$0.30 & 3.76 \\ $U=4$~eV & 3.61 & 0.17 & 3.83 \\ HSE & 2.56 & $-$0.53 & 3.74 \\ GW$_0$ & & & 3.51 \\ \multicolumn{4}{c}{Previous studies} \\ GGA$+U$ ($U=2$~eV)$^{a}$ & & $-$1.30 & 3.46 \\ B3LYP$^{b}$ & 2.09 & $-$1.01 & 3.80 \\ Expt & 1.66$^{c}$ & $-$1.16$^{c}$ & 3.87$^{c}$, 3.7$\pm$0.1$^{d}$, 3.4$^{e}$ \\ & 1.67$^{f}$ & $-$1.21$^{f}$ & \\ \br \end{tabular} \end{indented} \label{tab:2} \end{table} \subsection{\label{ssec:res-mlwf} Maximally localized Wannier functions} \label{ssec:mlwf} In this section we present the details for the construction of the MLWFs with predominant $e_{g}$ character from the calculated
bands around the gap. In a TB picture, these MLWFs can be seen as ``antibonding'' bands resulting from the $\sigma$-type hybridization between the Mn($d$) and O($p$) atomic orbitals. Note that in this and the next section the discussion of the PBE$+U$ results refers to the representative value of $U=3$~eV, unless explicitly stated otherwise. \Fref{fig:bs-mlwf} shows the PBE and beyond-PBE (PBE$+U$, HSE and GW$_0$) band structures and the corresponding PDOS with Mn($e_{g}$), Mn($t_{2g}$), and O($p$) character. Apart from the obvious hybridization between Mn($d$) and O($p$) states, ``$e_g$-like'' orbitals at a certain site can hybridize with ``$t_{2g}$-like'' orbitals at a neighboring site as a result of the tilt and rotation of the oxygen octahedra. This leads to bands with mixed $e_{g}$/$t_{2g}$ character (note the bands around the gap with strong PDOS components of both $e_{g}$ and $t_{2g}$ character). Due to this strong mixing it is not possible to construct 8 $e_{g}$ character MLWFs within one energy window used in the disentanglement procedure.\footnote{In the antiferromagnetic case each band is of course two-fold degenerate with respect to the global spin projection. Here and in the following we refer to such pairs of spin-degenerate bands as ``one band''.} The corresponding energy window would inevitably also contain the local minority spin ``$t_{2g}$'' bands. Since due to the GFO distortion these bands can hybridize with the minority spin ``$e_g$'' bands, this would lead to MLWFs with strongly mixed $e_g$/$t_{2g}$ character. To circumvent this problem, we therefore construct two separate sets of 4 local majority and 4 local minority spin MLWFs using two different energy windows~\cite{kovacik10}. These energy windows have to be chosen carefully for each individual method. (This problem is not present for the purely JT($Q^x$) distorted structure, from which we derive most of the model parameters, see \sref{ssec:res-tb}. 
In this case we calculate a full set of 8 MLWFs). \begin{figure} \includegraphics[clip,width=1.0\textwidth]{pdos-bs} \caption{Effective $e_g$ MLWF bands (thick red lines) for LaMnO$_3$ superimposed to the {\em ab initio} electronic bands (gray thin solid/dotted lines) and associated normalized PDOS (to the left and right of the band structure plots) corresponding to Mn($e_g$) (red filled areas), Mn($t_{2g}$) (green lines), and O($p$) character (blue dots). In the left/right PDOS graphs, Mn($d$) PDOSs correspond to the local majority/minority Mn sites, while the O($p$) PDOS is calculated as an average over all O sites. The two energy windows used in the Wannier downfolding are indicated by dashed and dot-dashed lines. The Fermi level ($E=0$~eV) is set in the middle of the gap.} \label{fig:bs-mlwf} \end{figure} Finding a suitable energy window is quite straightforward for the local majority spin case. The upper bound of the energy window is determined by the upper bound of the highest (in energy) peak of the local majority spin Mn($e_{g}$) PDOS, while the lower bound of the energy window should be placed above the occupied bands with strong O($p$) and/or local majority spin Mn($t_{2g}$) character. It can be seen from \fref{fig:bs-mlwf} (and perhaps more clearly from \fref{fig:1}) that both the lower and the upper bound fall within small gaps separating the bands within the energy window from other bands at lower and higher energies. Furthermore, for PBE+U and HSE the MLWFs can be constructed from a completely isolated set of bands, whereas in the case of PBE and GW$_0$ additional bands with predominant minority spin Mn($t_{2g}$) character are included in the energy window. However, due to the different local spin projection, these latter bands have no noticeable effect on the final MLWFs. For the local minority spin MLWFs, the upper bound of the energy window can be found in the same way as for the local majority spin bands.
Within PBE the lower bound is also easily determined, since it falls within a small gap separating the local minority spin bands with predominant $e_{g}$ and $t_{2g}$ character. However, no such gap exists within PBE$+U$, HSE, and GW$_0$, and it is thus not possible to fully exclude the $t_{2g}$ character from the resulting MLWFs. Instead, the lower bound of the energy window has to be carefully adjusted by manually checking the $e_g$ character of the calculated MLWFs in real space. The band dispersion of the so-obtained MLWFs is shown in \fref{fig:bs-mlwf} as thick red lines. The 4 (energetically lower) local majority MLWF bands follow very closely the underlying PWscf/VASP bands and the overall dispersion is very similar for all methods. Despite the strong band-entanglement, the dispersion of the 4 (energetically higher) local minority MLWF bands is also very similar within all methods. Only the energetically lowest local minority spin band within PBE$+U$ and HSE exhibits strong deviations from the corresponding PBE and GW$_0$ cases. This is due to the above-mentioned difficulty of excluding the $t_{2g}$ character in a controlled way. Conclusions drawn from such sets of MLWFs should therefore be taken with care. Overall, we note that the similarities in the band structure and PDOS between PBE and GW$_0$, as well as between PBE$+U$ and HSE, regarding the degree of hybridization between Mn($e_g$), Mn($t_{2g}$) and O($p$) orbitals, that have been pointed out in the previous section are also reflected in the MLWF bands. To further demonstrate the similarities between MLWFs calculated at different levels of theory, we show in \fref{fig:rs-mlwf} the real space representation of the four MLWFs localized at a certain Mn site, projected on the $xy$ plane. The dominant $e_{g}$ character at the central Mn site together with the ``hybridization tails'' of mostly $p$ character at the surrounding O sites is clearly visible for all MLWFs and methods.
For the local majority spin MLWFs (1st and 3rd row), there is essentially no visible difference in orbital character between PBE and PBE$+U$, only the O($p$) tails are marginally stronger if the Hubbard $U$ correction is applied. At the HSE level, both local majority MLWFs exhibit significant $x$/$y$ asymmetry, leading to more pronounced O($p$) hybridization tails along the short and long Mn-O bond for the $\vert{3z^2-r^2}\rangle$-like and $\vert{x^2-y^2}\rangle$-like function, respectively. Within GW$_0$, the central $e_{g}$-like part as well as the O($p$) tails are less asymmetric than for HSE, and appear similar to PBE/PBE$+U$ for both local majority MLWFs. In comparison with the local majority MLWFs, the O($p$) hybridization tails of the local minority MLWFs (2nd and 4th row) are generally less pronounced. There is no significant difference between the local minority spin MLWFs calculated using the different methods. Even at the PBE$+U$ and HSE levels, for which the admixture of the $t_{2g}$ character could not be controlled systematically, there is no apparent difference in comparison with PBE and GW$_0$. \begin{figure} \centering \includegraphics[clip,width=0.75\textwidth]{lmo-wf} \caption{Real space representation of the four $e_g$ MLWFs corresponding to a certain Mn site, projected on the $xy$ plane cutting through the Mn site. Black iso-lines correspond to $\pm N/\sqrt{V}$ with integer $N \ge 1$, the white region is defined by values in the interval $[-1/\sqrt{V},+1/\sqrt{V}]$, where $V$ is the volume of the unit cell. 
Bluish/reddish hues denote negative/positive values of the MLWFs; Mn and O atoms are shown as blue and red spheres, respectively.} \label{fig:rs-mlwf} \end{figure} The orbitally ordered states resulting from this MLWF basis set are shown in \fref{fig:OOc-mlwf} in terms of charge density isosurfaces of the highest occupied and lowest unoccupied orbitals associated with the $e_g$ bands below and above $\rm E_F$ in the lower energy window as defined in \fref{fig:bs-mlwf}. This plot clearly shows the staggered ordering at neighbouring Mn sites and the significant $p$-$d$ hybridization at the oxygen sites. \begin{figure} \centering \includegraphics[clip,width=1.00\textwidth]{lmo-wfqx} \caption{Charge density isosurfaces of the orbitally ordered states associated with the highest occupied (a) and lowest unoccupied (b) MLWF orbitals. Color coding and symbols are the same as in \fref{fig:rs-mlwf}.} \label{fig:OOc-mlwf} \end{figure} As a comparison we provide in \fref{fig:oo} the corresponding staggered ordering associated with the highest occupied $e_g$-like bands as obtained from the full {\em ab initio} self-consistent charge density (without downfolding) within the various methods employed in the present study. The similarity between the {\em ab initio} and wannierized orbital ordering is a further demonstration of the quality and reliability of our wannierization procedure.
\begin{figure} \centering \includegraphics[clip,width=0.22\textwidth]{theta_oo_pbe} \hspace{0.1cm} \includegraphics[clip,width=0.22\textwidth]{theta_oo_pbeu} \hspace{0.1cm} \includegraphics[clip,width=0.22\textwidth]{theta_oo_hse} \hspace{0.1cm} \includegraphics[clip,width=0.22\textwidth]{theta_oo_gw} \caption{Charge density isosurfaces of the highest occupied $e_g$ orbitals (from E$_F$ to the lower energy bound as defined in \fref{fig:bs-mlwf}) showing the orbitally ordered state of LaMnO$_3$ obtained using the different methodologies employed in this study.} \label{fig:oo} \end{figure} \subsection{\label{ssec:res-tb} Tight binding model Hamiltonian} As already mentioned in the introduction, the electronic Hamiltonian of the $e_{g}$ manifold in manganites is generally described within the TB formalism as a sum of the kinetic energy $\hat{H}_{\mathrm{kin}}$ and several local interaction terms, the Hund's rule coupling to the $t_{2g}$ core spin $\hat{H}_{\mathrm{Hund}}$, the JT coupling to the oxygen octahedra distortion $\hat{H}_{\mathrm{JT}}$, and, possibly, the electron-electron interaction $\hat{H}_{\mathrm{e-e}}$, which can be written as (see e.g.
Refs.~\cite{dagotto,ahn00,ederer07,lin08,kovacik10,kovacik11}) \numparts \begin{eqnarray} \hat{H}_{\mathrm{kin}}= -\sum_{a,b,\bi{R},\Delta\bi{R},\sigma} \hat{c}_{\sigma,a\left(\bi{R}+\Delta\bi{R}\right)}^{\dagger} t_{\sigma,a\left(\bi{R}+\Delta\bi{R}\right)b\left(\bi{R}\right)} \hat{c}_{\sigma,b\left(\bi{R}\right)}\,, \label{eq:tb-kin}\\ \hat{H}_{\mathrm{Hund}}= -J_{\mathrm{H}}\sum_{\bi{R}} \bi{S_R} \cdot \sum_{a,\sigma,\sigma'} \hat{c}_{\sigma,a\left(\bi{R}\right)}^{\dagger} \btau_{\sigma\sigma'} \hat{c}_{\sigma',a\left(\bi{R}\right)}\,, \label{eq:tb-hund}\\ \hat{H}_{\mathrm{JT}}= -\lambda\sum_{a,b,\bi{R},i,\sigma} \hat{c}_{\sigma,a\left(\bi{R}\right)}^{\dagger} Q^{i}_{\bi{R}}\tau_{ab}^{i} \hat{c}_{\sigma,b\left(\bi{R}\right)}\,, \label{eq:tb-jt}\\ \hat{H}_{\mathrm{e-e}}= \case{1}{2}\sum_{\bi{R},a,b,c,d,\sigma,\sigma'} U_{abcd} \hat{c}_{\sigma,a(\bi{R})}^{\dagger} \hat{c}_{\sigma',b(\bi{R})}^{\dagger} \hat{c}_{\sigma',d(\bi{R})} \hat{c}_{\sigma,c(\bi{R})}\,. \label{eq:tb-ee} \end{eqnarray} \endnumparts Here, $\hat{c}_{\sigma,a\left(\bi{R}\right)}$ and $\hat{c}_{\sigma,a\left(\bi{R}\right)}^{\dagger}$ are the annihilation and creation operators associated with orbital $\vert{a}\rangle$ and spin $\sigma$, centered at site $\bi{R}$. Furthermore, $t_{\sigma,a\left(\bi{R}+\Delta\bi{R}\right)b\left(\bi{R}\right)}$ are the hopping amplitudes between orbitals at sites $\bi{R}$ and $\bi{R}+\Delta\bi{R}$, $\tau_{ab}^{i}$ are the standard Pauli matrices, $J_{\mathrm{H}}$ is the Hund's rule coupling strength, $\bi{S_R}$ is the normalized $t_{2g}$ core spin at site $\bi{R}$, $\lambda$ is the JT coupling constant, and $Q^{i}_{\bi{R}}$ is the amplitude of a particular JT mode $(i=\{x,z\})$.
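For the collinear core-spin configurations considered here, the on-site part of $\hat{H}_{\mathrm{Hund}}+\hat{H}_{\mathrm{JT}}$ reduces, for each local spin projection $s=\pm 1$, to a $2\times2$ matrix in the $e_g$ orbital basis. The following minimal numerical sketch (with hypothetical values of $J_{\mathrm{H}}$, $\lambda$, $Q^x$, and $Q^z$, not the parameters fitted below) illustrates that its eigenvalues are centered at $-J_{\mathrm{H}}s$ and split by $2\lambda\sqrt{(Q^x)^2+(Q^z)^2}$:

```python
import numpy as np

# Pauli matrices acting in the two-dimensional e_g orbital space
tau_x = np.array([[0.0, 1.0], [1.0, 0.0]])
tau_z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Hypothetical parameters (illustration only, not the fitted values)
J_H, lam = 1.5, 3.0      # eV and eV/Angstrom
Qx, Qz = 0.14, -0.05     # JT amplitudes in Angstrom
s = +1                   # local spin projection (+1: local majority)

# On-site Hamiltonian: Hund term (orbital-diagonal) + JT term
H_local = -J_H * s * np.eye(2) - lam * (Qx * tau_x + Qz * tau_z)

eps = np.linalg.eigvalsh(H_local)        # ascending eigenvalues
splitting = eps[1] - eps[0]
expected = 2.0 * lam * np.hypot(Qx, Qz)  # 2*lambda*sqrt(Qx^2 + Qz^2)
print(splitting, expected)               # identical up to machine precision
```

It is exactly this eigenvalue splitting of the on-site MLWF Hamiltonian from which the JT coupling constants quoted below are extracted.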
In our TB analysis we will only consider the electron-electron interaction within a mean-field approximation and use a simplified version of Eq.~(\ref{eq:tb-ee}) corresponding to $U_{aaaa}=U_{abab}=U_{\mathrm{W}}$ and all other interaction matrix elements set to zero, which is consistent with the PBE+$U$ treatment according to Dudarev et al.~\cite{dudarev}. The resulting shift in the one-electron potential due to the electron-electron interaction then becomes \begin{equation}\label{eq:tb-hubpot} V_{\sigma,ab}=U_{\mathrm{W}}\left(\case{1}{2}\delta_{ab}-n_{\sigma,ab}\right)\,, \end{equation} where $U_{\mathrm{W}}$ is the Hubbard parameter in the basis of MLWFs and $n_{\sigma,ab}$ are the corresponding occupation matrix elements.\footnote{Here and in the following we often suppress site or spin indices, or both, whenever the corresponding values are clear from the context. Apart from the hopping amplitudes all quantities are diagonal in site index. In addition, for the collinear configurations of core-spins $\bi{S}_\bi{R}$ considered here, the Hamiltonian and all quantities involved are also diagonal with respect to the global spin projection.} The model parameters ($t_{\sigma,a(\bi{R}+\Delta\bi{R})b(\bi{R})}$, $J_{\mathrm{H}}$, $\lambda$, $U_{\mathrm{W}}$) which determine the TB model Hamiltonian can in principle be obtained from the Hamiltonian matrix elements $h_{nm}^{\Delta\bi{T}}$ in the MLWF basis. We note that $\Delta\bi{T}$ in \eref{eq:tbh} refers to lattice translations whereas $\Delta\bi{R}$ in \eref{eq:tb-kin} refers to the relative position with respect to the lattice of Mn sites. We will therefore use the simplified notation: $h^{\Delta\bi{T}}_{nm} \rightarrow h^{\Delta\bi{T}}_{a\bi{R},b\bi{R}'} \rightarrow h^{\Delta\bi{R}}_{ab}$ where $\Delta\bi{R}=\bi{R}'-\bi{R}+\Delta\bi{T}$. Then $a$ and $b$ correspond to the two effective $e_{g}$ orbitals centered at individual Mn sites separated by $\Delta\bi{R}$.
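As a quick sanity check of \eref{eq:tb-hubpot}: for $U_{\mathrm{W}}>0$ the mean-field shift pushes strongly occupied orbitals down and weakly occupied ones up, thereby reinforcing an existing orbital polarization. A short sketch with a hypothetical occupation matrix (illustrative values only, not computed here):

```python
import numpy as np

U_W = 2.0  # eV, hypothetical Hubbard parameter in the MLWF basis

# Hypothetical local (single-spin) occupation matrix in the e_g basis:
# Hermitian, with eigenvalues between 0 and 1
n = np.array([[0.85, 0.20],
              [0.20, 0.25]])

# Mean-field potential shift: V = U_W * (1/2 - n)
V = U_W * (0.5 * np.eye(2) - n)

occ = np.linalg.eigvalsh(n)    # orbital occupations (ascending)
shift = np.linalg.eigvalsh(V)  # shifts, diagonal in the same eigenbasis
# Each occupation occ_i receives the shift U_W*(1/2 - occ_i):
print(occ, shift[::-1])
```

The more occupied orbital is lowered and the less occupied one is raised, which is how the Hubbard term enhances an orbital splitting already present in the ``noninteracting'' Hamiltonian.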
In order to further simplify the notation for the hopping amplitudes, we choose one Mn site as the origin ($\bi{R}=\mathbf{0}$) and align the $x$ and $y$ axes of our coordinate system with the directions corresponding to the long and short Mn-O bond of the JT($Q^x$) mode, respectively. We then define the vectors $\hat{\bi{x}}$, $\hat{\bi{y}}$, $\hat{\bi{z}}$ according to the nearest-neighbor spacing of the Mn sites along the respective axes. Our TB parameterization is based on the procedure described by some of the authors in~\cite{kovacik10}, with certain modifications, explained in the following. In~\cite{kovacik10} it was shown that, at least at the PBE level, the influence of an individual structural distortion (JT or GFO) on the Hamiltonian matrix elements $h_{ab}^{\Delta\bi{R}}$ expressed in the basis of $e_{g}$-like MLWFs is to a great extent independent from the other distortion, and that furthermore the magnetic configuration has only a weak influence on the resulting model parameters. The TB parameterization was therefore based on various model structures with both FM (which always leads to a metallic system) and A-AFM order, with individual structural distortion modes frozen in. Due to the significantly increased computational cost of the HSE and GW$_0$ methods in comparison with PBE (in particular for the metallic state for which a dense k-points mesh is required to achieve a well converged solution), it is desirable to derive the TB parameters from as few (and if possible insulating) model structures as possible. In the present study, we therefore construct the TB parameterization from only two crystal structures: the purely JT($Q^x$) distorted structure and the experimental $Pbnm$ structure, in both cases with A-AFM order, which then yields an insulating solution. As we will show in \tref{tab:tbparI}, the TB parameters derived in this way at the PBE level deviate only marginally from the parameters found in the previous study~\cite{kovacik10}. 
In the following we describe the modified method we use to construct the parameters of the TB model \eref{eq:tb-kin}-\eref{eq:tb-ee}. Many of the simplifications on which our effective TB description of LaMnO$_3$ is based can be understood from the MLWF matrix elements shown in \fref{fig:me-mlwf} and will be discussed in the remainder of this section. We will first consider an effectively ``noninteracting'' case in which we neglect the term $\hat{H}_{\mathrm{e-e}}$ and examine how the more sophisticated beyond-PBE treatment of the exchange-correlation kernel affects the hopping, JT, and GFO-related parameters. We name this approach Model 1. Then, we discuss an alternative approach which involves an explicit treatment of $\hat{H}_{\mathrm{e-e}}$ in the model Hamiltonian within a mean-field approximation. This allows us to obtain estimates for the corresponding on-site interaction parameters, by keeping the conventional PBE description as a reference. We call this Model 2. Further technical details can be found in the Appendix. \subsubsection{\label{sssec:ni-tb} TB parameterization with implicit el-el interaction: Model 1.} As shown in \cite{kovacik10} for the PBE case, good agreement between an effective $e_g$ TB model and the underlying Kohn-Sham band structure can be achieved by considering hopping only between nearest neighbor Mn sites, next-nearest Mn neighbors, and second-nearest Mn neighbors along the $x$, $y$, and $z$ axes, described by parameters $t^{ss}$, $t^{xy}$, and $t^{2z}$, respectively (see Appendix). In doing so, it is necessary to take into account the spin dependence of the nearest neighbor hopping amplitudes. This can also be seen from \fref{fig:me-mlwf}(a) and (b), where (for PBE) the difference between $(h_{aa}^{x})^{\uparrow}$ and $(h_{aa}^{x})^{\downarrow}$ (from which $t^{\uparrow\uparrow}$ and $t^{\downarrow\downarrow}$ are calculated using \eref{eq:tb-tz}) is indeed significant.
On the other hand, the further neighbor hoppings ($t^{xy}$ and $t^{2z}$) show only negligible spin dependence, and are therefore calculated from the corresponding spin averaged Hamiltonian matrix elements. We note that $s$ in $t^{ss}$ should be read as a \emph{local} spin index (i.e. relative to the orientation of the local core-spin $\bi{S}_\bi{R}$) corresponding to the sites between which the electron hops, which can have the values $\pm 1$ corresponding to $\uparrow$/$\downarrow$. The parameters $t^{\uparrow\uparrow}$ and $t^{\downarrow\downarrow}$ thus represent hopping amplitudes within FM ordered planes. As a result of the GFO distortion, $t^{\uparrow\uparrow}$ and $t^{\downarrow\downarrow}$ are reduced by a factor $(1-\eta_{t}^{s})$, where $\eta_{t}^{s}$ is determined from the ratio of the $t^{ss}$ calculated for the $Pbnm$ and JT($Q^{x}$) structures (see \eref{eq:tb-etat} in Appendix). The hopping amplitude $t^{\uparrow\downarrow}$ between A-AFM ordered planes is then calculated as the average of $t^{\uparrow\uparrow}$ and $t^{\downarrow\downarrow}$. As also shown in \cite{kovacik10}, the JT distortion induces a strong splitting between the nondiagonal elements of the nearest-neighbor hopping matrix within the $xy$ plane (see the differences between $h_{12}^{x}$ and $h_{21}^{x}$ in \fref{fig:me-mlwf}(a,b)), which is parameterized via a non-local JT coupling strength $\widetilde{\lambda}$ (see \eref{eq:tb-lambdatilde} in Appendix). Within Model 1 only two contributions to the on-site part of the TB Hamiltonian are considered: the Hund's rule coupling $\hat{H}_{\mathrm{Hund}}$ and the Jahn-Teller coupling $\hat{H}_{\mathrm{JT}}$. The strength of the Hund's rule coupling $J_{\mathrm{H}}$ is determined from the spin splitting of the on-site diagonal matrix elements $h^0_{aa}$ for the $Pbnm$ structure, averaged over both orbitals (see Eq.~\eref{eq:tb-J}).
The JT coupling strength $\lambda^s$ for local spin projection $s$ is determined according to Eq.~\eref{eq:tb-jt} from the splitting of the eigenvalues of the on-site Hamiltonian matrix $\uuline{{h}}^{0}$ and the JT amplitude $Q^x$ for the purely JT($Q^{x}$) distorted structure. As can be seen from \fref{fig:me-mlwf}(c,d), the corresponding matrix elements are strongly spin-dependent, leading to large differences in the corresponding JT coupling constants. Similar to the hopping amplitudes, $\lambda^{s}$ is reduced by a factor $\left({1-\eta_{\lambda}^s}\right)$ due to the GFO distortion, where $\eta_{\lambda}^s$ is determined from the ratio between $\lambda^s$ calculated for the $Pbnm$ and JT($Q^{x}$) structures. \begin{figure} \includegraphics[clip,width=1.0\textwidth]{mlwfh} \caption{Hamiltonian matrix elements in the basis of MLWFs for the experimental $Pbnm$ structure: nearest-neighbor terms corresponding to local majority (a) and minority (b) spin projection, diagonal (c) and off-diagonal (d) on-site terms. Local majority and minority spin projections are indicated by up and down triangles, respectively. Left/right parts of the horizontal axis correspond to PWscf/VASP results.} \label{fig:me-mlwf} \end{figure} \Tref{tab:tbparI} lists the obtained TB parameters corresponding to Model 1 calculated within the various levels of approximation. Both hopping amplitudes and JT coupling strength correspond to the case without GFO distortion. It can be seen from the first two rows of \tref{tab:tbparI} that the parameterization we use in the present study yields only marginal differences for the PBE hopping parameters and Hund's rule coupling in comparison with~\cite{kovacik10}. This corroborates the quality of our TB parameterization based on only two structures (JT($Q^x$) and $Pbnm$ with A-AFM order).
Note that here we use a crystal structure derived from low-temperature measurements~\cite{elemans71}, whereas in \cite{kovacik10} the room-temperature measurements of Ref.~\cite{norby95} have been used. The JT coupling parameters differ slightly more from \cite{kovacik10}, due to the revised definition of $\lambda^{s}$ used in the present study. Another important change arises from the use of three separate GFO reduction factors $\eta_t^{\uparrow}$, $\eta_t^{\downarrow}$, and $\eta_{\lambda}$, instead of one averaged value as was done in \cite{kovacik10}, which provides a more accurate TB description of the MLWF bands. It can also be seen from \tref{tab:tbparI} that at the PBE level, there is essentially no difference between the hopping amplitudes calculated using PWscf and VASP. There is a 12~\% difference in $J_{\mathrm{H}}$ between PBE(VASP) and PBE(PWscf), which could be related to the noticeable differences in the energetics of the various magnetic configurations discussed earlier. \begin{table} \caption{The TB model parameters as derived from PBE and beyond-PBE band structures (Model 1; for PBE$+U$ we used $U=3$~eV). Since the PBE$+U$ values of $\eta_t^{\downarrow}$ and $\lambda^\downarrow$ are unreliable (see text), we use the corresponding PBE values (in brackets) to compute the TB bands displayed in \fref{fig:bs-tb}.
Units: $t^{\uparrow\uparrow}$, $t^{\downarrow\downarrow}$, $t^{xy}$, $t^{2z}$ in meV; $\widetilde{\lambda}$ in meV/\AA; $J_{\mathrm{H}}$ in eV; $\lambda^\uparrow$, $\lambda^\downarrow$ in eV/\AA; $\eta_t^{\uparrow}$, $\eta_t^{\downarrow}$, $\eta_{\lambda}$ are unit-less.} \lineup \begin{indented} \item[] \centering \begin{tabular}{@{}lccccccccccc} \br & \centre{7}{Hopping parameters} & \centre{4}{On-site parameters}\\\ns\ns & \crule{7} & \crule{4}\\ &$t^{\uparrow\uparrow}$% &$t^{\downarrow\downarrow}$% &$\widetilde{\lambda}$% &$t^{xy}$% &$t^{2z}$% &$\eta_t^{\uparrow}$% &$\eta_t^{\downarrow}$% &$J_{\mathrm{H}}$% &$\lambda^\uparrow$% &$\lambda^\downarrow$% &$\eta_{\lambda}$\\ \mr\bs \multicolumn{12}{c}{PWscf}\\\bs PBE \cite{kovacik10} &648&512&530&18&30&0.26&0.26&1.50&\03.19&1.33&0.26\\ PBE &632&512&523&12&51&0.28&0.39&1.56&\03.35&1.07&0.22\\ PBE+$U$ &748&482&516&12&51&0.41&(0.39)&2.16&\05.22&(1.07)&0.21\\\bs \multicolumn{12}{c}{VASP}\\\bs PBE &630&503&516&13&50&0.35&0.42&1.33&\03.21&1.02&0.23\\ HSE &750&497&707&13&50&0.40&0.20&2.42& 10.25&0.96&0.28\\ GW$_0$ &746&469&490&13&50&0.24&0.41&1.90&\04.43&0.88&0.04\\ \br \end{tabular} \end{indented} \label{tab:tbparI} \end{table} Comparing the parameters obtained from the beyond-PBE methods with the pure PBE case, we observe that the hopping parameter $t^{\uparrow\uparrow}$ is generally increased in all beyond-PBE methods. As was shown in~\cite{kovacik11}, this can be understood within an extended nearest neighbor TB model including both Mn($d$) and O($p$) states, from which an effective $e_g$-only model can be derived in the limit of large energy separation $\varepsilon_{dp}$ between the $d$ and $p$ orbitals. The effective hopping $t_{dd}^{\mathrm{eff}}$ in the $e_g$ model is then given in terms of the nearest neighbor hopping amplitude $t_{dp}$ of the extended $d$-$p$ model as $t_{dd}^{\mathrm{eff}}=t_{dp}^{2}/\varepsilon_{dp}$. 
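This downfolding relation can be illustrated on a minimal three-site $d$-$p$-$d$ model (with hypothetical $t_{dp}$ and $\varepsilon_{dp}$, unrelated to any values fitted in this work): the splitting between the two $d$-dominated levels approaches $2t_{dp}^{2}/\varepsilon_{dp}$, i.e.\ twice the effective hopping, and grows when $\varepsilon_{dp}$ is reduced:

```python
import numpy as np

def d_splitting(t_dp, eps_dp):
    """Splitting of the two d-dominated levels of a d-p-d trimer."""
    # Basis ordering (d1, p, d2); d levels at 0, p level at -eps_dp
    H = np.array([[0.0,   t_dp,    0.0],
                  [t_dp, -eps_dp,  t_dp],
                  [0.0,   t_dp,    0.0]])
    e = np.linalg.eigvalsh(H)   # ascending eigenvalues
    return e[2] - e[1]          # splitting of the d-dominated levels

t_dp, eps_dp = 1.0, 8.0                # eV, hypothetical values
exact = d_splitting(t_dp, eps_dp)
approx = 2.0 * t_dp**2 / eps_dp        # = 2 * t_dd^eff from downfolding
print(exact, approx)                   # close for t_dp << eps_dp
# A smaller d-p separation enhances the effective e_g hopping:
print(d_splitting(t_dp, 4.0) > exact)  # True
```

Since the beyond-PBE methods reduce the separation between the $e_g$ and the lower-lying O($p$) bands, this toy model reproduces the trend of an enhanced effective hopping.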
The increase of $t^{\uparrow\uparrow}$ is therefore consistent with the observation that all beyond-PBE methods lower the $e_g$ bands relative to the lower-lying oxygen $p$ bands. The small decrease of $t^{\downarrow\downarrow}$ within PBE$+U$ (for small values of $U \lesssim 2$~eV) can be explained in the same way, since here the corresponding energy separation between O($p$) and Mn($e_g$) increases. The JT parameter $\widetilde{\lambda}$ is generally very similar for PBE, PBE$+U$, and GW$_0$, while a strong enhancement of $\widetilde{\lambda}$ can be seen for HSE, which is consistent with the strong $x/y$ asymmetry of the corresponding MLWFs seen in \fref{fig:rs-mlwf}(c). Since the changes of the already rather small further-neighbor hoppings within the beyond-PBE methods are very small, we use the corresponding PBE values for simplicity. The GFO reduction factors for the hopping amplitudes, $\eta_{t}^{\uparrow}$ and $\eta_{t}^{\downarrow}$, are slightly decreased within GW$_0$, whereas $\eta_{t}^{\uparrow}$ is increased for PBE$+U$ and HSE, and $\eta_{t}^{\downarrow}$ is strongly decreased in HSE. Due to the strong mixing between minority spin $e_{g}$ and $t_{2g}$ bands within PBE+$U$, which was already discussed in section~\ref{ssec:mlwf} (see also \fref{fig:bs-tb}(b)), the determination of $\eta_t^\downarrow$ is rather unreliable in this case, and we therefore use the corresponding PBE value. We note that the same effect also leads to the strong changes in the local minority hopping matrix elements within the $xy$ plane calculated within PBE$+U$ for $U \gtrsim 3$~eV (see \fref{fig:me-mlwf}(b)). Using the HSE and GW$_0$ methods we do not encounter this problem. For all beyond-PBE methods, a significant increase of $J_{\mathrm{H}}$ and $\lambda^{\uparrow}$ can be observed, which in the TB model gives rise to an increase of the spin splitting and the band gap, respectively.
The change of $\lambda^{\downarrow}$ compared to PBE is very small for both HSE and GW$_0$. Due to the inaccurate treatment of the minority spin bands, PBE$+U$ gives an unrealistically small value of $\lambda^{\downarrow}=0.30$~eV/\AA, which we therefore substitute with the corresponding PBE value. While $\eta_{\lambda}$ does not change significantly for small values of the Hubbard $U$, a small increase (significant decrease) is observed for HSE (GW$_0$). To assess the quality of our parameterization we now use the TB parameters tabulated in \tref{tab:tbparI} to compute the resulting $e_g$ band structure. In \fref{fig:bs-tb}(a) and (c), we compare the band dispersions of the TB model (blue filled circles) and the MLWFs (thick red lines) for the experimental $Pbnm$ structure within the PBE approximation. Despite the many simplifications made in the construction of the model parameters, the TB model can reproduce the MLWF bands to a remarkable accuracy (for both PWscf and VASP). The reliability of the beyond-PBE TB representation can be appreciated by the overall excellent match between the TB and MLWFs bands shown in \fref{fig:bs-tb}(b), (d) and (e), which exhibit the same quality as observed at the PBE level. This is particularly true for the band gap, whose method-dependent changes (see \tref{tab:1}) are perfectly reflected in the TB description.\footnote{The MLWF and TB bands were aligned by minimizing a mean deviation which was calculated as an average of the corresponding eigenvalue differences over all bands and k-points. 
The maximum and mean deviations are very similar for all methods and do not exceed 0.37 and 0.12~eV, respectively.} \begin{figure} \centering \includegraphics[clip,width=\textwidth]{tb_bs} \caption{Comparison of the band dispersion corresponding to MLWFs (red lines), the TB Model 1 using parameters given in \tref{tab:tbparI} (blue circles), and the TB Model 2 with interaction parameters given in \tref{tab:tbparII} (green crosses).} \label{fig:bs-tb} \end{figure} \subsubsection{\label{sssec:i-tb} TB parameterization with explicit el-el interaction: Model 2.} Now, we turn our attention to the alternative TB parameterization in which we attempt to treat the modifications induced by the beyond-PBE methods as a perturbation to the ``noninteracting'' PBE description by explicitly considering the el-el interaction \eref{eq:tb-ee} and using the simplified mean-field approximation \eref{eq:tb-hubpot} in the TB Hamiltonian. It is clear from the discussion in the preceding section that it is not straightforward to parameterize the hopping amplitudes in terms of $U_{\mathrm{W}}$. We will therefore limit ourselves to analyzing the effect of \eref{eq:tb-hubpot} on the local Hamiltonian, which can be represented as a 2$\times$2 matrix in terms of the two local $e_g$ states in the following form: \begin{equation} \label{eq:tb-local} \uuline{{H}}{}_{\,\mathrm{local}}^s = \uuline{\tilde{{H}}}{}_{\,0}^s - U_{\mathrm{W}} \uuline{{n}}^s \end{equation} with \begin{equation} \uuline{\tilde{{H}}}{}_{\,0}^s = \uuline{{1}} \left( \case{1}{2} U_{\mathrm{W}} - J_{\mathrm{H}} \cdot s \right) - \lambda^s Q^x \uuline{\tau}{}^x - \lambda^s Q^z \uuline{\tau}{}^z \,.
\end{equation} By identifying \eref{eq:tb-local} with the corresponding MLWF matrix, we obtain the local spin splitting as a combination of Hund's rule coupling and el-el interaction: \begin{equation} \left(h_{aa}^{0}\right)^{\downarrow}-\left(h_{aa}^{0}\right)^{\uparrow}= U_{\mathrm{W}}^{(J)}\big(n_{aa}^{\uparrow}-n_{aa}^{\downarrow}\big) +2J_{\mathrm{H}}^{(\mathrm{PBE})}\,, \label{eq:tb-UWJder} \end{equation} from which we can calculate $U_{\mathrm{W}}^{(J)}$ by averaging over the two orbital characters and using the previously determined PBE value for the Hund's rule coupling. Thereby, the occupation matrix elements in the basis of MLWFs are calculated as \begin{equation}\label{eq:n-mlwf} n_{nm} = \int_{-\infty}^{E_{\mathrm{F}}} \mathrm{d}\epsilon \int_\mathrm{BZ} \mathrm{d}\bi{k} \sum_{l} \left(U^{(\bi{k})}_{lm}\right)^* \delta(\epsilon-\epsilon_{l\bi{k}}) \, U^{(\bi{k})}_{ln} \,, \end{equation} where $E_{\mathrm{F}}$ is the Fermi energy. In a similar way we can obtain another estimate for the Hubbard parameter, $U_{\mathrm{W}}^{(\lambda)}$, from the total JT induced splitting within the majority spin $e_g$ orbital manifold, expressed through the difference in eigenvalues of the local Hamiltonian: \begin{equation} \Delta{\varepsilon}^{\uparrow} = 2 \lambda^{\uparrow(\mathrm{PBE})} \sqrt{(Q^x)^2+(Q^z)^2} + U_{\mathrm{W}}^{(\lambda)}\Delta{n}^{\uparrow} = \Delta{\varepsilon}^{\uparrow(\mathrm{PBE})} + U_{\mathrm{W}}^{(\lambda)}\Delta{n}^{\uparrow} \,. \label{eq:tb-UWlambdader} \end{equation} Here, $\Delta{n}^{\uparrow}$ is the difference in majority spin eigenvalues of the MLWF occupation matrix and we have used the observation that, to a very good approximation, both $\uuline{\tilde{{H}}}{}_{\,0}$ and $\uuline{{n}}$ can be diagonalized by the same unitary transformation. The difference between the corresponding transformation angles is less than $0.6^{\circ}$ for the $Pbnm$ structure. 
Since the difference is somewhat larger for the JT($Q^x$) structure (up to $\approx 6^{\circ}$) we derive the interaction parameter $U_{\mathrm{W}}^{(\lambda)}$ from the MLWF Hamiltonian of the $Pbnm$ structure. The resulting values of $U_{\mathrm{W}}^{(J)}$ and $U_{\mathrm{W}}^{(\lambda)}$ are given in \tref{tab:tbparII}. \begin{table} \caption{\label{tab:tbparII}The interaction parameters determined in Model 2. Note, that in Model 2 the on-site parameters are set to the PBE values while the hopping parameters are set to the values given in \tref{tab:tbparI}. Units: all quantities are in eV except $\Delta{n}^{\uparrow}$ which is unit-less.} \begin{indented} \item[] \begin{tabular}{lcccccc} \br &&&& \centre{3}{Interaction parameters}\\\ns\ns &&&& \crule{3}\\ &$J_{\mathrm{H}}$% &$\Delta{\varepsilon}^{\uparrow}$% &$\Delta{n}^{\uparrow}$% &$U_{\mathrm{W}}^{(J)}$% &$U_{\mathrm{W}}^{(\lambda)}$% &$\Delta{J_{\mathrm{W}}^{(\lambda)}}$\\ \mr\bs \multicolumn{7}{c}{PWscf}\\\bs PBE &1.56&1.09&0.71& - & - & - \\ PBE$+U$ &2.16&1.66&0.80&2.40&0.70&0.42\\\bs \multicolumn{7}{c}{VASP}\\\bs PBE &1.33&1.04&0.70& - & - & - \\ HSE &2.42&3.10&0.89&4.37&2.31&0.51\\ GW$_0$ &1.90&1.80&0.70&2.30&1.09&0.30\\ \br \end{tabular} \end{indented} \end{table} It can be seen that within PBE$+U$, the parameter $U_{\mathrm{W}}^{(J)}$ is almost as large as the value of $U=3$~eV used for the Hubbard parameter within the PBE$+U$ calculation, whereas the parameter $U_{\mathrm{W}}^{(\lambda)}$ is significantly smaller than that. We note that, as discussed in \cite{kovacik11}, the Hubbard correction within PBE+$U$ is applied to rather localized atomic-like orbitals, whereas the parameter $U_{\mathrm{W}}$ corresponds to more extended $e_g$-like Wannier orbitals. The JT splitting is strongly affected by hybridization with the surrounding oxygen ligands and is thus quite different for atomic-like and extended Wannier states \cite{kovacik11}. 
As a result, $U_{\mathrm{W}}^{(\lambda)}$ is quite different from the $U$ value used within PBE+$U$, and the smaller value of $U_{\mathrm{W}}^{(\lambda)}$ can thus be related to the fact that the electron-electron interaction is more screened in the more extended effective $e_g$ Wannier orbitals. On the other hand, the similarity between $U_{\mathrm{W}}^{(J)}$ and the $U$ value used within PBE+$U$ indicates that the local spin splitting is more or less the same for both sets of orbitals, which is consistent with the view that this splitting is essentially an atomic property. A similar difference between $U_{\mathrm{W}}^{(J)}$ and $U_{\mathrm{W}}^{(\lambda)}$ is also observed for HSE and GW$_0$. The large values of $U_{\mathrm{W}}$ delivered by HSE reflect the larger spin splitting and band gap in the corresponding band structure compared to PBE+$U$ and GW$_0$. The large difference between the two parameters $U_{\mathrm{W}}^{(J)}$ and $U_{\mathrm{W}}^{(\lambda)}$ also indicates that it is not possible to map the electron-electron interaction effects manifested in the on-site matrix corresponding to effective $e_g$ orbitals to only one interaction parameter while using PBE as a ``noninteracting'' reference. Similar conclusions have already been reached in \cite{kovacik11} for the PBE+$U$ case. From the current study we can conclude that the modification of the local spin splitting (described by $U_{\mathrm{W}}^{(J)}$) and the enhancement of the JT induced orbital splitting (described by $U_{\mathrm{W}}^{(\lambda)}$) that arise in the Kohn-Sham or GW$_0$ quasiparticle band structures due to the beyond-PBE treatment of exchange and correlation, are not compatible with a simple mean-field Hubbard-like correction to an otherwise ``non-interacting'' TB Hamiltonian with two effective $e_g$ orbitals per Mn site and only one parameter describing the electron-electron interaction.
This leads to an important conclusion of the present study with regard to methods such as LDA+$U$ or LDA+DMFT, which supplement a ``non-interacting'' Kohn-Sham Hamiltonian with a Hubbard interaction between a strongly interacting subset of orbitals: using different methods for obtaining the noninteracting reference can lead to significant differences, and it is by no means clear whether PBE (GGA) or even LDA always provides the best starting point for a more sophisticated treatment of correlation effects. Our results also emphasize the importance of finding improved ways to account for the double counting correction when using different electronic structures as noninteracting reference. In order to see how, within the limitations discussed in the preceding paragraph, a TB Hamiltonian of the form \eref{eq:tb-kin}-\eref{eq:tb-ee} can reproduce the MLWF band dispersion, we consider a modified parameterization using $U_{\mathrm{W}}^{(\lambda)}$ to model the el-el interactions. Since in that way the correlation-induced increase of the spin splitting is only partially covered by the el-el term \eref{eq:tb-hubpot}, we correct this by introducing an ``empirical'' correction to the Hund's rule coupling: \begin{equation}\label{eq:tb-DeltaJ} \Delta{J_{\mathrm{W}}^{(\lambda)}}= J_{\mathrm{H}}-J_{\mathrm{H}}^{(\mathrm{PBE})}-\case{1}{4}{U_{\mathrm{W}}^{(\lambda)}}\,. \end{equation} Note that, analogously, we could choose $U_{\mathrm{W}}^{(J)}$ as the el-el interaction parameter and define an appropriate correction to $\lambda^{\uparrow}$. However, since the fundamental band gap in LaMnO$_3$ is largely controlled by the JT induced splitting between occupied and unoccupied $e_g$ bands, and since in a TB model for LaMnO$_{3}$ it seems most desirable to describe the band gap correctly, we choose $U_{\mathrm{W}}^{(\lambda)}$ to model the el-el interactions.
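The internal consistency of \eref{eq:tb-UWlambdader} and \eref{eq:tb-DeltaJ} with the values listed in \tref{tab:tbparII} can be checked directly; for example, the GW$_0$ row, with PBE(VASP) as the reference, reproduces the tabulated $U_{\mathrm{W}}^{(\lambda)}$ and $\Delta{J_{\mathrm{W}}^{(\lambda)}}$:

```python
# Input values taken from the VASP part of Table 2:
# PBE reference and GW0 results (all in eV)
J_H_pbe, d_eps_pbe = 1.33, 1.04
J_H_gw0, d_eps_gw0, d_n_gw0 = 1.90, 1.80, 0.70

# Delta_eps(GW0) = Delta_eps(PBE) + U_W_lam * Delta_n, solved for U_W_lam
U_W_lam = (d_eps_gw0 - d_eps_pbe) / d_n_gw0

# Empirical correction to the Hund's rule coupling:
# dJ = J_H - J_H(PBE) - U_W_lam / 4
dJ_W_lam = J_H_gw0 - J_H_pbe - U_W_lam / 4.0

print(round(U_W_lam, 2), round(dJ_W_lam, 2))  # 1.09 0.3
```

The HSE row follows analogously; for PBE$+U$ the result agrees with the table up to rounding of the tabulated inputs.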
If the correction $\Delta{J_{\mathrm{W}}^{(\lambda)}}$ is neglected, the local majority-spin bands around the band gap are still described quite well, even though the splitting with respect to the local minority-spin bands will be underestimated, which might be acceptable for certain applications. \Fref{fig:bs-tb} also shows the dispersion calculated from such a modified TB model with explicit el-el interaction, where the correlation-induced change of the spin splitting and band gap is described by two interaction parameters, $U_{\mathrm{W}}^{(\lambda)}$ and $\Delta{J_{\mathrm{W}}^{(\lambda)}}$, while $J_{\mathrm{H}}$, $\lambda^{\uparrow}$, $\lambda^{\downarrow}$, and $\eta_{\lambda}$ are fixed at their respective PBE values. In addition, the hopping amplitudes are set to the values given in \tref{tab:tbparI}. The band dispersions using these sets of parameters (shown as green crosses in \fref{fig:bs-tb}) again almost perfectly follow the MLWF bands. The agreement between the bands calculated within the two parameterizations (Models 1 and 2) also reflects the transferability of the on-site parameters between the structures with and without the GFO distortion. \section{\label{sec:sum} Summary} In this paper we have presented a general scheme to parameterize, within a TB picture, the band structure of the prototypical JT-distorted $e_g$ perovskite LaMnO$_3$ by means of a suitable downfolding of the {\em ab initio} electron dispersion relations onto a small set of MLWFs. The tabulated TB parameters should provide interpretative guidance for more sophisticated many-body model Hamiltonian investigations of similar systems~\cite{pavarini10,kotliar06,kumar06,yamasaki06}. By comparing the PBE and beyond-PBE findings we can draw the following conclusions: \begin{enumerate}[label=(\roman{*}), ref=(\roman{*})] \item {\em Ab initio} electronic structure results. All methods consistently predict a Mott-Hubbard insulating state.
GW$_0$ provides the best agreement with experiment in terms of the band-gap value, and both GW$_0$ and HSE convey a satisfactory description of the valence and conduction band spectra. While in the PBE+$U$ and HSE cases a suitable adjustment of the parameters $U$ and $a_{\mathrm{mix}}$ can selectively improve the performance with respect to either the band gap or the magnetic exchange interactions, a universal value that delivers all quantities with good accuracy cannot be found. Even though the standard value $a_{\mathrm{mix}}=0.25$ in HSE seems to provide rather accurate magnetic coupling constants, clearly a smaller $a_{\mathrm{mix}}$ is necessary to obtain a better Mott-Hubbard gap. While the two different codes used in the present study lead only to marginal differences in the Kohn-Sham band structure and the corresponding TB parameterization, the relative energies of different magnetic configurations depend on subtle details of the methods used, which hampers a concise comparison between the different energy functionals (it should be noted, however, that the PAW approach is usually considered superior to pure pseudopotential schemes). Within VASP a value for the Hubbard $U$ between 2 and 3~eV leads to similar magnetic coupling along $c$ as HSE, but somewhat stronger FM coupling within the $ab$ planes. Despite all its well-known limitations when applied to strongly correlated materials, PBE does not seem to perform too badly (of course the fact that we have used the experimental structure helps in that respect, since PBE is known to fail in properly reproducing the JT distortion in LaMnO$_3$~\cite{hashimoto}). \item MLWFs. Despite the difficulty of fully disentangling the effective $e_g$ bands from other bands with similar energies, which is most pronounced within PBE+$U$ and HSE, the resulting MLWFs and the associated ordering (\fref{fig:oo}) look rather similar and are in good agreement with the earlier plots of Yin~\cite{yin06}.
This represents a further proof of the quality and reliability of the Wannier construction of the $e_g$ $|3z^2-r^2\rangle$ and $|x^2-y^2\rangle$ orbitals. Despite these similarities, the differences in the underlying band structures lead to distinct differences in the Hamiltonian matrix elements in reciprocal space, and allow for an accurate quantitative analysis of the differences between the various approximations for the exchange-correlation kernel. \item TB parameterization. We have demonstrated that the method-dependent changes in the TB parameters due to the different treatment of the el-el exchange-correlation kernel in conventional and beyond-PBE approaches can be accounted for using two different routes: (a) Model 1 ($\hat{H}_{\mathrm{TB}}=\hat{H}_{\mathrm{kin}} + \hat{H}_{\mathrm{Hund}} + \hat{H}_{\mathrm{JT}}$). In this model the TB Hamiltonian does not explicitly incorporate an el-el interaction term. All changes in the beyond-PBE band structure with respect to the ``noninteracting'' PBE bands are absorbed into the hopping, JT, and Hund parameters (in particular $t^{\uparrow\uparrow}$, $\lambda^\uparrow$, and $J_{\mathrm{H}}$). (b) Model 2 ($\hat{H}_{\mathrm{TB}}=\hat{H}_{\mathrm{kin}} + \hat{H}_{\mathrm{Hund}} + \hat{H}_{\mathrm{JT}} + \hat{H}_{\mathrm{e-e}}$). In this second type of parameterization we have explicitly built an el-el term into the TB Hamiltonian. The el-el interaction effects are treated by splitting the on-site Hund and JT parameters into a noninteracting (PBE) part and an interacting part (dependent on $U_{\mathrm{W}}^{(\lambda)}$ and $U_{\mathrm{W}}^{(J)}$). Since we found that $U_{\mathrm{W}}^{(\lambda)}$ $\neq$ $U_{\mathrm{W}}^{(J)}$, in order to achieve a correct parameterization it is necessary to fix one $U_{\mathrm{W}}$ channel ($U_{\mathrm{W}}^{(\lambda)}$) and evaluate the change in the remaining one ($\Delta{J_{\mathrm{W}}^{(\lambda)}}$).
Both Model 1 and Model 2 yield excellent TB bands, essentially overlapping with the underlying MLWF ones. We note that the different levels of approximation for the non-interacting band structure can lead to significant changes in the hopping amplitudes, which cannot easily be accounted for by a local double-counting correction. In addition, we have also shown that the influence of the beyond-PBE treatment on the model parameters of the local Hamiltonian cannot be captured by a simple mean-field Hubbard term with only one interaction parameter. For an accurate many-body or effective model treatment of LaMnO$_3$ and similar materials it thus seems most desirable to start from the most realistic single-particle band structure (i.e. not necessarily LDA or GGA) and use an appropriate double-counting correction. The exact form of such a correction term, however, is still unclear at this point. A possible alternative solution to correctly treat correlation effects without being contaminated by the double-counting problem is the GW+DMFT scheme, which has recently attracted the attention of several research groups and will most likely become available in the near future\cite{Biermann03,Karlsson05}. \end{enumerate} In summary, we have shown that MLWFs can be constructed efficiently not only at the conventional DFT level (PBE), but also from hybrid functional (HSE) and quasiparticle (GW$_0$) wavefunctions, through the creation of an appropriate interface between the electronic structure code VASP~\cite{vasp1,vasp2} and the publicly available Wannier90 code~\cite{wannier90}. As a benchmark, we have used the well-established PW2WANNIER90 interface at the PBE and PBE+$U$ levels~\cite{espresso}. Given the booming application of hybrid DFT and GW$_0$ calculations for a wide variety of materials for which the possibility to describe the relevant physics using a minimal basis set is important (these include, e.g., Fe-based superconductors~\cite{wojdel}, cuprates~\cite{rivero10} and multiferroics~\cite{stroppa10,stroppa11}), the VASP2WANNIER90 interface, which allows one to construct MLWFs directly from the widely used VASP code, will provide a valuable tool for future research. From the practical point of view, we have demonstrated that MLWFs can be efficiently used to accurately interpolate the HSE and GW$_0$ band structures from a coarse uniform k-point mesh to the desired (and dense) symmetry lines, thereby remedying a fundamental practical limitation of the HSE and GW$_0$ schemes in computing energy eigenvalues for selected k-points\cite{2001_souza,hamann09}. We expect that our study will serve as a reference for future studies involving MLWF-based downfolding procedures. \ack The authors would like to thank Silvia Picozzi and Alessandro Stroppa (CNR-SPIN, L'Aquila) for initiating the implementation of the VASP2WANNIER90 interface and for hosting Martijn Marsman in L'Aquila, where large parts of this interface were written. This work has been supported by the 7$^{th}$ Framework Programme of the European Community, within the project ATHENA, by Science Foundation Ireland under Ref.~SFI-07/YI2/I1051, and by the Austrian FWF within the SFB ViCoM (F41). PWscf and VASP calculations have been performed at the Trinity Centre for High-Performance Computing (TCHPC) and the Vienna Scientific Cluster (VSC), respectively. \newpage
\section{Introduction} \label{introduction} In recent years, model-free deep reinforcement learning (RL) algorithms have demonstrated the capacity to learn sophisticated behavior in complex environments. Starting with Deep Q-Networks (DQN) achieving human-level performance on Atari games \cite{DBLP:journals/corr/MnihKSGAWR13}, deep RL has led to impressive results in several classes of challenging tasks. While many deep RL methods were initially limited to discrete action spaces, there has since been substantial interest in applying deep RL to continuous action domains. In particular, deep RL has increasingly been studied for use in continuous control problems, both in simulated environments and on robotic systems in the real world. A number of challenges exist for practical control tasks such as robotics. For tasks involving a physical robot where on-robot training is desired, the physical constraints of robotic data collection render data acquisition costly and time-consuming. Thus, the use of off-policy methods like Q-learning is a practical necessity, as data collected during development or by human demonstrators can be used to train the final system, and data can be re-used during training. However, even when using off-policy Q-learning methods for continuous control, several other challenges remain. In particular, training stability across random seeds, hyperparameter sensitivity, and runtime are all challenges that are both relatively understudied and are critically important for practical use. Inconsistency across runs, e.g. due to different random initializations, is a major issue in many domains of deep RL, as it makes it difficult to debug and evaluate an RL system. Deep Deterministic Policy Gradients (DDPG), a popular off-policy Q-learning method \cite{Lillicrap2015}, has been repeatedly characterized as unstable \cite{pmlr-v48-duan16,Islam2017}. 
While some recent work has improved stability in off-policy Q-learning \cite{pmlr-v70-haarnoja17a, Haarnoja2018a, pmlr-v80-fujimoto18a}, there remains significant room for improvement. Sensitivity to hyperparameters (e.g., batch size, network architecture, learning rate) is a particularly critical issue when system evaluation is expensive, since debugging and task-specific tuning are difficult and time-consuming to perform. Finally, many real robotics tasks have strict runtime and hardware constraints (e.g., when interacting with a dynamic system), and any RL control method applied to these tasks must be fast enough to compute in real time. Mitigating these challenges is thus an important step in making deep RL practical for continuous control. In this paper, we introduce Cross-Entropy Guided Policy (CGP) learning, a general Q-function and policy training method that can be combined with most deep Q-learning methods and demonstrates improved stability of training across runs, hyperparameter combinations, and tasks, while avoiding the computational expense of a sample-based policy at inference time. CGP is a multi-stage algorithm that learns a Q-function using a heuristic Cross-Entropy Method (CEM) sampling policy to sample actions, while training a deterministic neural network policy in parallel to imitate the CEM policy. This learned policy is then used at inference time for fast and precise evaluation without expensive sample iteration. We show that this method achieves performance comparable to state-of-the-art methods on standard continuous-control benchmark tasks, while being more robust to hyperparameter tuning and displaying lower variance across training runs. Further, we show that its inference-time cost is 3-6 times lower than when using the CEM policy for inference, while slightly outperforming the CEM policy.
This combination of attributes (reliable training and cheap inference) makes CGP well suited for real-world robotics tasks and other time/compute sensitive applications. \section{Related Work} \label{related_work} The challenge of reinforcement learning in continuous action spaces has been long studied \cite{Silver2014,Hafner2011}, with recent work building upon \textit{on-policy} policy gradient methods \cite{Sutton1999} as well as the \textit{off-policy} deterministic policy gradients algorithm \cite{Silver2014}. In addition to classic policy gradient algorithms such as REINFORCE \cite{Sutton1999} or Advantage Actor Critic \cite{DelaCruz2018}, a number of recent on-policy methods such as TRPO \cite{Schulman2015} and PPO \cite{Schulman2017} have been applied successfully in continuous-action domains, but their poor sample complexity makes them unsuitable for many real world applications, such as robotic control, where data collection is expensive and complex. While several recent works \cite{pmlr-v87-matas18a, Zhu-RSS-18, Andrychowicz2017} have successfully used simulation-to-real transfer to train in simulations where data collection is cheap, this process remains highly application-specific, and is difficult to use for more complex tasks. Off-policy Q-learning methods have been proposed as a more data efficient alternative, typified by Deep Deterministic Policy Gradients (DDPG) \cite{Lillicrap2015}. DDPG trains a Q-function similar to \cite{Mnih2016}, while in parallel training a deterministic policy function to sample good actions from the Q-function. Exploration is then achieved by sampling actions in a noisy way during policy rollouts, followed by off-policy training of both Q-function and policy from a replay buffer. 
While DDPG has been used to learn non-trivial policies on many tasks and benchmarks \cite{Lillicrap2015}, the algorithm is known to be sensitive to hyperparameter tuning and to have relatively high variance between different random seeds for a given configuration \cite{pmlr-v48-duan16, AAAI1816669}. Recently multiple extensions to DDPG have been proposed to improve performance, most notably Twin Delayed Deep Deterministic Policy Gradients (TD3) \cite{pmlr-v80-fujimoto18a} and Soft Q-Learning (SQL)/Soft Actor-Critic (SAC) \cite{pmlr-v70-haarnoja17a, pmlr-v80-haarnoja18b}. TD3 proposes several additions to the DDPG algorithm to reduce function approximation error: it adds a second Q-function to prevent over-estimation bias from being propagated through the target Q-values and injects noise into the target actions used for Q-function bootstrapping to improve Q-function smoothness. The resulting algorithm achieves significantly improved performance relative to DDPG, and we use their improvements to the Q-function training algorithm as a baseline for CGP. In parallel with TD3, \cite{pmlr-v80-haarnoja18b} proposed Soft Actor Critic as a way of improving on DDPG's robustness and performance by using an entropy term to regularize the Q-function and the reparametrization trick to stochastically sample the Q-function, as opposed to DDPG and TD3's deterministic policy. SAC and the closely related Soft Q-Learning (SQL) \cite{pmlr-v70-haarnoja17a} have been applied successfully for real-world robotics tasks \cite{Haarnoja2018a, DBLP:conf/icra/HaarnojaPZDAL18}. Several other recent works propose methods that use CEM and stochastic sampling in RL. Evolutionary algorithms take a purely sample-based approach to fitting a policy, including fitting the weights of neural networks, such as in \cite{Salimans2017}, and can be very stable to train, but suffer from very high computational cost to train. 
Evolutionary Reinforcement Learning (ERL) \cite{khadka2018evolutionary} combines evolutionary and RL algorithms to stabilize RL training. CEM-RL \cite{pourchot2018cemrl} uses CEM to sample populations of policies which seek to optimize actions for a Q-function trained via RL, while we optimize the Q-function actions directly via CEM sampling, similar to Qt-Opt \cite{pmlr-v87-kalashnikov18a}. There exists other recent work that aims to treat learning a policy as supervised learning \cite{Abdolmaleki2018, Abdolmaleki2018a, Wirth2016}. \nolink{\citeauthor{Abdolmaleki2018a}} propose a formulation of policy iteration that samples actions from a stochastic learned policy, then defines a locally optimized action probability distribution based on Q-function evaluations, which is used as a target for the policy to learn \cite{Abdolmaleki2018, Abdolmaleki2018a}. The baseline for our method is modeled after the CEM method used in the Qt-Opt system, a method described by \cite{pmlr-v87-kalashnikov18a} for vision-based dynamic manipulation trained mostly off-policy on real robot data. Qt-Opt eschews the use of a policy network as in most other continuous-action Q-learning methods, and instead uses CEM to directly sample actions that are optimal with respect to the Q-function for both inference rollouts and training. They describe the method as being stable to train, particularly on off-policy data, and demonstrate its usefulness on a challenging robotics task, but do not report its performance on standard benchmark tasks or against other RL methods for continuous control. We base our CEM sampling of optimal actions on their work, generalized to MuJoCo benchmark tasks, and extend it by learning a deterministic policy for use at inference time to improve performance and computational complexity. This avoids the major drawback of the method: the need to perform expensive CEM sampling for every action at inference time (which must be performed in real time on robotic hardware).
\section{Notation and Background} \label{notation} We describe here the notation of our RL task, based on the notation defined by Sutton and Barto \cite{DBLP:journals/tnn/SuttonB98}. Reinforcement learning is a class of algorithms for solving Markov Decision Problems (MDPs), typically phrased in the finite-time-horizon case as an agent characterized by a policy $\pi$ taking actions $a_t$ in an environment, with the objective of maximizing the expected total reward $\mathbb{E}\sum_{t=1}^{T}\gamma^t r(s_t,a_t)$ that the agent receives over timesteps $t \in \{1 \dots T\}$ with discount factor $\gamma$. To achieve this, we thus seek to find an optimal policy $\pi^{*}$ that maximizes the following function: $$J(\pi) = \mathbb{E}_{s, a \sim \pi}[\sum_{t = 1}^{T}{\gamma^{t}r(s_t, a_t)}]$$ A popular class of algorithms for solving this is Q-learning, which attempts to find an optimal policy by finding a function $$Q^{*}(s_t, a_t) = r(s_t, a_t) + \gamma \text{max}_{a_{t+1}}(Q^{*}(s_{t+1}, a_{t+1}))$$ which satisfies the Bellman equation \cite{DBLP:journals/tnn/SuttonB98}: $$ Q(s, a) = r(s, a) + \gamma \mathbb{E}[Q(s', a')],\quad a' \sim \pi^{*}(s') $$ Once $Q^*$ is known, $\pi^*$ can easily be defined as $\pi^*(s) = \text{argmax}_a(Q^*(s,a))$. Q-learning attempts to learn a function $Q_{\theta}$ that converges to $Q^*$, where $\theta$ denotes the parameters of a neural network. $Q_{\theta}$ is often learned through bootstrapping, wherein we seek to minimize the function $$J(\theta) = \mathbb{E}_{s,a}[(Q_{\theta}(s, a) - [r(s, a) + \gamma \text{max}_{a'}(\hat{Q}(s', a'))])^2]$$ where $\hat{Q}$ is a target Q-function, here assumed to be a time-delayed version of the current Q-function, $\hat{Q}_{\hat{\theta}}$~\cite{Mnih2016}. To use the above equation, it is necessary to define a function $\pi(s)$ which computes $\text{argmax}_{a}(Q(s, a))$.
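The bootstrapped target above can be made concrete with a toy discrete example (all numbers hypothetical), where the $\text{max}_{a'}$ is an exhaustive scan over the available actions:

```python
# Toy illustration of the bootstrap target
#   q* = r(s, a) + gamma * max_a' Q_hat(s', a'),
# with a hand-written table standing in for the target network Q_hat.
gamma = 0.99
q_hat = {  # Q_hat(s', a') for the two actions available in state "s1"
    ("s1", "left"): 1.0,
    ("s1", "right"): 2.0,
}

def td_target(r, s_next, gamma, q_hat):
    # In a discrete action space the max is an exhaustive scan.
    best = max(v for (s, _a), v in q_hat.items() if s == s_next)
    return r + gamma * best

target = td_target(0.5, "s1", gamma, q_hat)  # 0.5 + 0.99 * 2.0 = 2.48
```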
In discrete action spaces, $\pi(s)$ is trivial, since $\text{argmax}_{a}$ can be computed exactly by evaluating each possible $a$ with $Q$. In continuous-valued action spaces, such a computation is intractable. Further, as most neural network Q-functions are highly non-convex, an analytical solution is unlikely to exist. Various approaches to solving this optimization problem have been proposed, which have been shown to work well empirically. \cite{Lillicrap2015} show that a neural network function for sampling actions that approximately maximize the Q-function can be learned using gradients from the Q-function. This approach forms the basis of much recent work on continuous action space Q-learning. \section{From Sampling-based Q-learning to Cross-Entropy Guided Policies (CGP)} \begin{algorithm}[tb] \caption{Cross Entropy Method Policy ($\pi_{\text{CEM}}$) for Q-Learning } \label{alg:cem} \begin{algorithmic} \STATE {\bfseries Input:} state $s$, Q-function $Q$, iterations $N$, samples $n$, winners $k$, action dimension $d$ \STATE $\bm{\mu} \leftarrow \bm{0}^d$ \STATE $\bm{\sigma}^2 \leftarrow \bm{1}^d$ \FOR{$t=1$ {\bfseries to} $N$} \STATE $A \leftarrow \{\bm{a_i}:\bm{a_i}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(\bm{\mu}, \bm{\sigma}^2)\}$ \STATE $\tilde{A} \leftarrow \{\bm{\tilde{a}_i}:\bm{\tilde{a}_i} = \tanh(\bm{a_i})\}$ \STATE $\mathcal{Q} \leftarrow \{q_i:q_i=Q(s, \bm{\tilde{a}_i})\}$ \STATE $I \leftarrow$ indices of the $k$ largest elements of $\mathcal{Q}$ \STATE $\bm{\mu} \leftarrow \frac{1}{k}\sum_{i \in I}\bm{a_i}$ \STATE $\bm{\sigma^2} \leftarrow \text{Var}_{i \in I}(\bm{a_i})$ \ENDFOR \RETURN $\bm{\tilde{a}^*}\in\tilde{A}$ such that $Q(s, \bm{\tilde{a}^*})=\max_{i \in I}Q(s, \bm{\tilde{a}_i})$ \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{CGP: Cross-Entropy Guided Policies} \label{alg:cgp} \begin{algorithmic} \STATE \textbf{TRAINING} \STATE Initialize Q-functions $Q_{\theta_1},Q_{\theta_2}$
and policy $\pi_\phi$ with random parameters $\theta_1,\theta_2,\phi$, respectively \STATE Initialize target networks $\theta_1^\prime\leftarrow\theta_1,\theta_2^\prime\leftarrow\theta_2,\phi^\prime\leftarrow\phi$ \STATE Initialize CEM policies $\pi_{\text{CEM}}^{Q_{\theta_1}},\pi_{\text{CEM}}^{Q_{\theta_1^\prime}}$ \STATE Initialize replay buffer $\mathcal{B}$ \STATE Define batch size $b$ \FOR{$e=1$ {\bfseries to} $E$} \FOR{$t=1$ {\bfseries to} $T$} \STATE \textbf{Step in environment:} \STATE Observe state $s_t$ \STATE Select action $a_t\sim\pi_{\text{CEM}}^{Q_{\theta_1}}(s_t)$ \STATE Observe reward $r_t$, new state $s_{t+1}$ \STATE Save step $(s_t,a_t,r_t,s_{t+1})$ in $\mathcal{B}$ \STATE \textbf{Train on replay buffer ($j\in{1,2}$):} \STATE Sample minibatch $(s_i,a_i,r_i,s_{i+1})$ of size $b$ from $\mathcal{B}$ \STATE Sample actions $\tilde{a}_{i+1}\sim\pi_{\text{CEM}}^{\theta_1^\prime}$ \STATE Compute $q^*=r_i+\gamma \min_{j\in{1,2}}Q_{\theta_j^\prime}(s_{i+1},\tilde{a}_{i+1})$ \STATE Compute losses $\ell_{Q_j}=\left(Q_{\theta_j}(s_i,a_i)-q^*\right)^2$ \STATE \textbf{CGP loss:} $\ell_\pi^\text{CGP}=(\pi_\phi(s_i) - \pi_{\text{CEM}}^{\theta_1}(s_i))^2$ \STATE \textbf{QGP loss:} $\ell_\pi^\text{QGP}=-Q_{\theta_1}(s_i, \pi_\phi(s_i))$ \STATE Update $\theta_j\leftarrow\theta_j-\eta_Q\nabla_{\theta_j}\ell_{Q_j}$ \STATE Update $\phi\leftarrow\phi-\eta_\pi\nabla_\phi\ell_\pi$ \STATE \textbf{Update target networks:} \STATE $\theta_j^\prime\leftarrow\tau\theta_j + (1-\tau)\theta_j^\prime,\quad j\in{1,2}$ \STATE $\phi^\prime\leftarrow\tau\phi + (1-\tau)\phi^\prime$ \ENDFOR \ENDFOR \STATE \textbf{INFERENCE} \FOR{$t=1$ {\bfseries to} $T$} \STATE Observe state $s_t$ \STATE Select action $a_t\sim\pi_{\phi}(s_t)$ \STATE Observe reward $r_t$, new state $s_{t+1}$ \ENDFOR \end{algorithmic} \end{algorithm} \begin{figure*}[!ht] \centering \includegraphics[width=0.8\textwidth]{images/cgp_arch_final.pdf} \caption{Both CGP and QGP utilize the same training method to train their 
respective Q-functions. However, in CGP (left) we train $\pi_{CGP}$ by minimizing the L2 distance between the current $\pi_{CGP}$ and the CEM-based policy $\pi_{CEM}$. In QGP (right), we train $\pi_{QGP}$ to maximize $Q$ given $s_t$ by directly performing gradient ascent on $Q$. } \label{fig:cgp_qgp} \end{figure*} In this section, we first describe an established method for using a sampling-based optimizer to optimize inputs to a Q-function, which can be used as a policy to train the Q-function via standard Q-learning. We then present two novel methods for training deterministic policies separately from the Q-function. \subsection{Q-Learning with Sampling-Based Policies} The basis for our method is the use of a sampling-based optimizer to compute approximately optimal actions with respect to a given $Q$ function and a given state $s$. Formally, we define the policy $\pi_{S_{Q}}(s) = S_{Q}(s)$, where $S_{Q}$ is a sampling-based optimizer that approximates $\text{argmax}_{a} Q(s, a)$ for action $a$ and state observation $s$. We can then train a Q-function $Q_{\theta}$ parameterized by the weights of a neural network using standard Q-learning as described in Section \ref{notation} to minimize: $$J(\theta) = \mathbb{E}_{s,a}[(Q_{\theta}(s, a) - [r(s, a) + \gamma \hat{Q}(s', \pi_{S_{\hat{Q}_{\theta}}}(s'))])^2]$$ The choice of sampling-based optimizer $S_{Q}$ can have a significant impact on the quality of the policy it induces, and therefore on the quality of $Q_{\theta}$ after training. While we leave the exploration of optimal sampling methods to future work, we used a simple instantiation of the Cross-Entropy method (CEM), which was empirically demonstrated by \nolink{\citeauthor{pmlr-v87-kalashnikov18a}} to work well for certain continuous-control tasks \cite{pmlr-v87-kalashnikov18a}. In this formulation, each action vector is represented as a collection of independent Gaussian distributions, initially with mean $\mu=0$ and standard deviation $\sigma=1$.
These variables are sampled $n$ times to produce action vectors ${a_0, a_1, ..., a_{n-1}}$, which are then scored by $Q$. The top $k$ scoring action vectors are then used to reparameterize the Gaussian distributions, and this process is repeated $N$ times. For brevity, we refer to this policy as $\pi_{\text{CEM}}$. The full algorithm can be found in Algorithm \ref{alg:cem}. \subsection{Imitating $\pi_{\text{CEM}}$ with a Deterministic Policy} \label{subsec:cgp} While $\pi_{\text{CEM}}$ is competitive with learned policies for sampling the Q-function (described in Section \ref{sec:experiments}), it suffers from poor runtime efficiency, as evaluating many sampled actions is computationally expensive, especially for large neural networks. Further, there is no guarantee that sampled actions will lie in a local minimum of the Q-value energy landscape due to stochastic noise. Our main methodological contribution in this work, formalized in Algorithm \ref{alg:cgp}, is the extension of $\pi_{\text{CEM}}$ by training a deterministic neural network policy $\pi_{\phi}(s)$ to predict an approximately optimal action at inference time, while using $\pi_{\text{CEM}}$ to sample training data from the environment and to select bootstrap actions for training the Q-function. A single evaluation of $\pi_{\phi}$ is much less expensive to compute than the multiple iterations of Q-function evaluations required by $\pi_{\text{CEM}}$. Even when evaluating CEM samples with unbounded parallel compute capacity, the nature of iterative sampling imposes a serial bottleneck that means the theoretical best-case runtime performance of $\pi_{\text{CEM}}$ will be $N$ times slower than $\pi_{\phi}$. 
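For concreteness, here is a minimal pure-Python sketch of $\pi_{\text{CEM}}$ following Algorithm \ref{alg:cem}, with a toy concave function standing in for the learned Q-network (the actual method evaluates batched neural-network forward passes):

```python
import math
import random

def cem_policy(q, d, iters=10, samples=64, winners=8):
    """Sketch of Algorithm 1: refit a diagonal Gaussian to the top-k
    pre-squash samples, scoring the tanh-squashed actions with q."""
    mu = [0.0] * d
    sigma = [1.0] * d
    for _ in range(iters):
        raw = [[random.gauss(mu[j], sigma[j]) for j in range(d)]
               for _ in range(samples)]
        squashed = [[math.tanh(x) for x in a] for a in raw]
        scored = sorted(zip(raw, squashed),
                        key=lambda p: q(p[1]), reverse=True)
        elite = [p[0] for p in scored[:winners]]
        # Refit the Gaussian to the elite (pre-squash) samples.
        for j in range(d):
            col = [a[j] for a in elite]
            mu[j] = sum(col) / winners
            var = sum((x - mu[j]) ** 2 for x in col) / winners
            sigma[j] = math.sqrt(var) + 1e-6
    # Return the best squashed action of the final iteration.
    return scored[0][1]

# Toy concave Q with a unique maximum at a = (0.5, -0.3):
q = lambda a: -((a[0] - 0.5) ** 2 + (a[1] + 0.3) ** 2)
random.seed(0)
a_star = cem_policy(q, d=2)  # converges close to (0.5, -0.3)
```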
Additionally, as $\pi_{\text{CEM}}$ is inherently noisy, by training $\pi_{\phi}$ on many approximately optimal actions from $\pi_{\text{CEM}}(s)$ evaluated on states from the replay buffer, we expect that, for a given state $s$ and $Q_{\theta}$, $\pi_{\phi}$ will converge to the mean of the samples from $\pi_{\text{CEM}}$, reducing policy noise at inference time. While the idea of training an inference-time policy to predict optimal actions with respect to $Q_{\theta}$ is simple, there are several plausible methods for training $\pi_{\phi}$. We explore four related methods for training $\pi_{\phi}$, the performance of which are discussed in Section \ref{sec:experiments}. The high-level differences between these methods can be found in Figure \ref{fig:cgp_qgp}. \subsubsection{Q-gradient-Guided Policy} A straightforward approach to learning $\pi_{\phi}$ is to use the same objective as DDPG \cite{Lillicrap2015}: $$J(\phi) = \mathbb{E}_{s \sim \rho^{\pi_{\text{CEM}}}}[Q_{\theta}(s, \pi_{\phi}(s))]$$ maximized by gradient ascent on the weights $\phi$ off-policy, using $Q_{\theta}$ and the replay data collected by $\pi_{\text{CEM}}$. The policy is thus updated along the gradient of the Q-value, and for an optimal Q this procedure should converge to an optimal policy. Since the learned policy is not used during the training of the Q-function, but uses gradients from Q to learn an optimal policy, we refer to this configuration as Q-gradient Guided Policies (QGP), and refer to policies trained in this fashion as $\pi_{\text{QGP}}$. We tested two versions of this method: an ``offline'' version where $\pi_{\phi}$ is trained to convergence on a fixed Q-function and replay buffer, and an ``online'' version where $\pi_{\phi}$ is trained in parallel with the Q-function, analogous to DDPG except that $\pi_{\phi}$ is not used to sample the environment or to select actions for Q-function bootstrap targets. We refer to these variants as QGP-Offline and QGP-Online respectively.
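In one dimension, with the policy reduced to a single scalar action parameter, the QGP update is plain gradient ascent on $Q$; this toy sketch (not the neural-network implementation) shows convergence to the maximizer of a hand-picked concave $Q$:

```python
# One-dimensional QGP sketch: the "policy" is a single parameter phi
# (a constant action), updated by ascending dQ/da at a = phi.
a_opt = 0.7                             # maximizer of the toy Q below
q_grad = lambda a: -2.0 * (a - a_opt)   # d/da of Q(a) = -(a - a_opt)^2

phi, lr = 0.0, 0.1
for _ in range(200):
    phi += lr * q_grad(phi)  # gradient ASCENT on Q
# phi converges to a_opt = 0.7
```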
\subsubsection{Cross-Entropy-Guided Policy} However, as shown in Figure \ref{cheetah-stability}, while both variants that train $\pi_{\phi}$ using the gradient of $Q_{\theta}$ can achieve good performance, their performance varies significantly depending on hyperparameters, and convergence to an optimal (or even good) policy does not always occur. We hypothesize that the non-convex nature of $Q_{\theta}$ makes off-policy learning somewhat brittle, particularly in the offline case, where gradient ascent on a static Q-function is prone to overfitting to local maxima. We therefore introduce a second variant, the Cross-Entropy Guided Policy (CGP), which trains $\pi_{\phi}$ using the L2 regression objective $$J(\phi) = \mathbb{E}_{s_t \sim \rho^{\pi_{\text{CEM}}}}[\,||\pi_{\phi}(s_t) - \pi_{\text{CEM}}(s_t)||^2\,]$$ minimized by gradient descent on $\phi$. This objective trains $\pi_{\phi}$ to imitate the output of $\pi_{\text{CEM}}$ without relying on CEM for sampling or the availability of $Q_{\theta}$ at inference time. If we assume $\pi_{\text{CEM}}$ is an approximately optimal policy for a given $Q_{\theta}$ (an assumption supported by our empirical results in Section \ref{sec:experiments}), this objective should converge to the global maximum of $Q_{\theta}$ and avoids the local-maxima issue seen in QGP. As $\pi_{\text{CEM}}$ can only be an approximately optimal policy, CGP may in theory perform worse than QGP, since QGP optimizes $Q_{\theta}$ directly, but we show that this theoretical gap does not result in diminished performance. Moreover, we demonstrate that CGP is significantly more robust than QGP, especially in the offline case. We explore both online and offline versions of this method, similar to those described for QGP.
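The noise-averaging behavior of the L2 imitation objective is easy to check in a one-dimensional sketch: regressing a constant ``policy'' onto noisy CEM-style target actions recovers their mean (the actual method fits a neural network over states):

```python
import random

# One-dimensional CGP sketch: fit a constant policy phi to noisy
# target actions (standing in for pi_CEM outputs around an
# approximately optimal action of 0.4) via the L2 imitation loss.
random.seed(1)
targets = [0.4 + random.gauss(0.0, 0.05) for _ in range(2000)]

phi, lr = 0.0, 0.25
for _ in range(100):
    # Full-batch gradient of the mean squared error (phi - a_cem)^2.
    grad = sum(2.0 * (phi - t) for t in targets) / len(targets)
    phi -= lr * grad
# phi converges to the sample mean of the targets, i.e. close to 0.4:
# the CEM sampling noise is averaged out.
```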
While QGP and CGP are compatible with any Q-learning algorithm, to improve performance and training stability further we combine them with the TD3 Q-learning objective described in \cite{pmlr-v80-fujimoto18a}, which adds a second Q-function for target Q-value computation to minimize function approximation error, among other enhancements. Our method of using $\pi_{\text{CEM}}$ to sample actions for Q-function training and training $\pi_{\phi}$ for use at inference time is agnostic to the form of the Q-function and how it is trained, and could be combined with future Q-learning methods. Pseudocode for the full CGP method can be found in Algorithm \ref{alg:cgp}. \section{Experiments} \label{sec:experiments} To characterize our method, we conduct a number of experiments in various simulated environments. \subsection{Experiment Setup} Our experiments are intended to highlight differences between the performance of CGP and current state-of-the-art methods on standard RL benchmarks. We compare against DDPG, TD3, Soft Actor-Critic (SAC), and an ablation of our method which does not train a deterministic policy but instead simply uses $\pi_{\text{CEM}}$ to sample at test time similar to the method of \cite{pmlr-v87-kalashnikov18a}. To obtain consistency across methods and with prior work we used the author's publicly available implementations for TD3 and SAC, but within our own training framework to ensure consistency. We attempt to characterize the behavior of these methods across multiple dimensions, including maximum final reward achieved given well-tuned hyperparameters, the robustness of the final reward across diverse hyperparameters, the stability of runs within a given hyperparameter set, and the inference time computational complexity of the method. 
We assess our method on an array of continuous control tasks in the MuJoCo simulator through the OpenAI Gym interface: HalfCheetah-v2, Humanoid-v2, Ant-v2, Hopper-v2, and Pusher-v2 \cite{DBLP:journals/corr/BrockmanCPSSTZ16}. These tasks provide a range of complexity, the hardest of which require significant computation to achieve good results. The dimensionality of the action space ranges from 2 to 17, and that of the state space from 8 to 376. Because of the large amount of computation required to train on these difficult tasks, robustness to hyperparameters is extremely valuable, as the cost of exploring the hyperparameter space is high. For similar reasons, stability and determinism reduce the number of repeated experiments required to estimate the performance of an algorithm with a given degree of certainty. To test robustness to hyperparameters, we choose one environment (HalfCheetah-v2) and compare CGP with other methods under a sweep across common hyperparameters. To test stability, we perform 4 runs with unique random seeds for each hyperparameter combination. Each task is run for 1 million time steps, with evaluations every $10^4$ time steps. After tuning hyperparameters on HalfCheetah-v2, we selected a single common ``default'' configuration that worked well for each method; the results for HalfCheetah-v2 are shown in Figure \ref{benchmarks}. We then ran this configuration on each of the other benchmark tasks, as a form of holdout testing of how well a generic set of hyperparameters performs on unseen tasks. We also include several variants of our method, as described in Section \ref{subsec:cgp}. We compare robustness and peak performance for both online and offline versions of CGP and QGP.
\subsection{Comparisons} \begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/cheetah_bench.pdf} \caption{\label{fig:figa} HalfCheetah-v2} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/humanoid_bench.pdf} \caption{\label{fig:fige} Humanoid-v2} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/hopper_bench.pdf} \caption{\label{fig:figb} Hopper-v2} \end{subfigure} \medskip \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/pusher_bench.pdf} \caption{\label{fig:figd} Pusher-v2} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/ant_bench.pdf} \caption{\label{fig:figc} Ant-v2} \end{subfigure} \caption{Performance of various methods (CGP, SAC, DDPG, and TD3) on OpenAI Gym benchmark tasks, simulated in a MuJoCo environment. In all cases CGP is either the best or the second-best performing algorithm, while both TD3 and SAC perform poorly on one or more tasks, and DDPG fails to train stably on most tasks. The thick lines represent the mean performance of each method at step $t$ across 4 runs; the upper and lower lines represent the max and min across those runs. Parameters for each method were optimized on the HalfCheetah-v2 benchmark and then applied across all other benchmark tasks. Due to high variance, we applied smoothing to all trend lines for all methods on Hopper-v2.} \label{benchmarks} \end{figure*} \paragraph{Performance on standard benchmarks} When run on 5 standard benchmark tasks for continuous-valued action spaces (HalfCheetah-v2, Humanoid-v2, Hopper-v2, Pusher-v2, and Ant-v2), CGP achieves maximum reward competitive with the best method on each task (with the exception of Ant-v2, where TD3 is a clear winner).
Importantly, CGP performed consistently well across all tasks, even though its hyperparameters were tuned on only one task: on every task it is either the best or the second-best method. Other methods perform well on one task with the given set of hyperparameters but poorly on others: TD3 excels on Ant-v2 but performs poorly on Humanoid-v2, while SAC excels on Humanoid-v2 but performs poorly on Ant-v2. We note that each method can achieve better performance with hyperparameters tuned per task (for example, \nolink{\citeauthor{Haarnoja2018b}} report much better performance on Humanoid-v2 using task-specific hyperparameters \cite{Haarnoja2018b}), but as we are interested in inter-task hyperparameter robustness we do not perform such tuning. Additionally, even though CGP is based on the Q-function used in TD3, it greatly outperforms TD3 on complex tasks such as Humanoid-v2, suggesting that the CEM policy is more robust across tasks. See Figure \ref{benchmarks} for details. \paragraph{Stability across runs} Across a wide range of hyperparameters (excluding very large learning rates $\geq 0.01$), CGP yields a tight clustering of final evaluation rewards. Other methods produced higher-variance results, where runs with slightly different hyperparameters would return significantly different reward distributions. To arrive at this conclusion, we ran a large battery of hyperparameter sweeps across methods, the detailed results of which can be found in Appendix A of the supplement. We consider CGP's relative invariance to sub-optimal hyperparameters one of its most valuable attributes; we hope that it can be applied to new problems with little or no hyperparameter tuning. \begin{figure}[ht] \begin{center} \centerline{\includegraphics[width=\columnwidth]{images/stability_all.png}} \caption{Stability of various methods on the HalfCheetah-v2 benchmark task.
This figure shows the percentage of runs, across all hyperparameter configurations, that reached at least the indicated reward level. Each hyperparameter set was run for 1000 episodes, with 4 replicates per run. The same hyperparameter sets were used across methods. While CGP is outperformed under optimized parameters, its performance decays much more slowly for sub-optimal configurations. } \label{cheetah-stability} \end{center} \vskip -0.4in \end{figure} \paragraph{Robustness across hyperparameters} We evaluated the robustness of each method over hyperparameter space, using a common set of hyperparameter configurations (with small method-specific differences). For most hyperparameters, we varied one parameter while holding all others fixed. We varied learning rate (LR) and batch size jointly, pairing smaller learning rates with smaller batch sizes, and vice versa. We varied LR over the set \{0.01, 0.001, 0.0001\} and batch size over \{256, 128, 64, 32\}. We also independently varied the network size over \{512, 256, 128, 32\}. For methods using random sampling for some number of initial time steps (CGP and TD3), we varied that number over \{0, 1000, 10000\}, and for those that inject noise (all except SAC) we varied the exploration and (for CGP and TD3) next-action noise over \{0.05, 0.1, 0.2, 0.3\}. We evaluated CGP entirely without exploration noise, which other methods with deterministic policies (TD3, DDPG) cannot do while still learning a non-trivial policy. The overall results of these sweeps can be seen in Figure \ref{cheetah-stability}, and detailed per-hyperparameter results are in the supplement. Overall, while CGP does not perform as well in the top quartile of parameter sweeps, it displays a high degree of stability over most hyperparameter combinations, and is more robust than SAC in the lower half of the range and than DDPG everywhere.
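The one-axis-at-a-time sweep protocol above can be sketched as a simple configuration generator. This is a sketch only: the dictionary key names and the exact LR/batch-size pairings are illustrative assumptions, not the paper's harness.

```python
# Grids taken from the text; the joint LR/batch-size pairings are
# illustrative (the text only states smaller LRs pair with smaller batches).
LR_BATCH = [(0.01, 256), (0.001, 128), (0.001, 64), (0.0001, 32)]
NET_SIZES = [512, 256, 128, 32]
INIT_RANDOM_STEPS = [0, 1000, 10000]
NOISE_LEVELS = [0.05, 0.1, 0.2, 0.3]

def sweep_configs(base):
    """Yield one config per setting, varying a single axis at a time
    around a fixed base configuration (LR and batch size move jointly)."""
    for lr, bs in LR_BATCH:
        yield {**base, "lr": lr, "batch_size": bs}
    for n in NET_SIZES:
        yield {**base, "net_size": n}
    for k in INIT_RANDOM_STEPS:
        yield {**base, "init_random_steps": k}
    for sigma in NOISE_LEVELS:
        yield {**base, "noise": sigma}

BASE = {"lr": 0.001, "batch_size": 128, "net_size": 256,
        "init_random_steps": 10000, "noise": 0.2}
configs = list(sweep_configs(BASE))  # each config is then run with 4 seeds
```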
CGP also performs slightly better than the CEM policy it learns to imitate for almost all states. The failure cases in the tail were too high a learning rate (LR of 0.01, which is a failure case for CGP but not for CEM) and too little initial random sampling (both 0 and 1000 initial random steps produced poor policies for some seeds). \paragraph{Inference speed and training efficiency} We benchmarked the training and inference runtime of each method, computing the mean over 10 complete training and inference episodes with the same parameters. The results can be found in Table \ref{runtime-table}. CEM-2, CGP-2, CEM-4, and CGP-4 refer to the number of CEM iterations used (2 or 4). $\pi_{\text{CGP}}$ greatly outperforms $\pi_{\text{CEM}}$ at inference time and matches it at training time. Other methods are faster at training time but run at the same speed at inference time. Importantly, the speed of $\pi_{\text{CGP}}$ at inference time is independent of the number of iterations of CEM sampling used during training. \begin{table}[t] \caption{Runtime in average seconds per episode of HalfCheetah-v2 (without rendering) on an otherwise-idle machine with an NVIDIA GTX 1080 Ti GPU.
CGP achieves a constant inference runtime independent of the number of CEM iterations used, matching the inference speed of the other methods.} \label{runtime-table} \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcc} \toprule Method & Mean Train (s) & Mean Inference (s) \\ \midrule Random & - & 0.48 \\ DDPG & 5.75 & 2.32 \\ TD3 & 5.67 & 2.35 \\ SAC & 11.00 & 2.35 \\ CEM-2 & 7.1 & 6.3 \\ CEM-4 & 9.3 & 10.1 \\ CGP-2 & 11.03 & 2.35 \\ CGP-4 & 14.46 & 2.35 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \end{table} \subsection{CGP Variants} \begin{figure}[ht] \begin{center} \centerline{\includegraphics[width=\columnwidth]{images/stability_variants.pdf}} \caption{Stability of variants of CGP, as measured by the percentage of runs, across all hyperparameter configurations tested, reaching at least the total reward given on the y-axis. Three of the four methods (CGP-Online, CGP-Offline, and QGP-Online) performed roughly equivalently in the upper quartiles, while CGP-Online performed better in the bottom quartile. } \label{cgp-stability} \end{center} \vskip -0.4in \end{figure} We consider several variants of our method, as detailed in Section \ref{subsec:cgp}. We ran each variant on a suite of learning rate and batch size combinations to evaluate their robustness, testing LR values in \{0.001, 0.0001\} and batch sizes in \{32, 128, 256\}. See Figure \ref{cgp-stability} for a comparison of all runs performed. \paragraph{CGP versus QGP} The source of the supervision signal is a critical determinant of the behavior of the policy. It is therefore important to compare the performance of the policy when trained to directly optimize the learned Q-function and when trained to imitate CEM inference on that same Q-function. We find that directly optimizing the learned Q-function suffers from more instability and greater sensitivity to the chosen hyperparameters, particularly when learning offline. In comparison, both CGP variants train well in most cases.
This suggests that the CEM approximation to the policy gradient is not only a reasonable approximation but is also easier to optimize than the original gradient. \paragraph{Online versus Offline} Another dimension of customization of CGP is the policy training mode: training can either be online (train the policy alongside the Q-function, with one policy gradient step per Q-function gradient step) or offline (train the policy only at the end of Q-function training). An advantage of CGP is that it performs similarly in both paradigms; it is thus suitable for completely offline training when desired and for online learning when the Q-function is available during training. We find that the online training runs of both CGP and QGP are mildly better than their offline counterparts. This result is intuitive if one considers the implicit regularization provided by learning to optimize a non-stationary Q-function, rather than a static one, as in the offline case. Ultimately, CGP is effective in either regime. \section{Discussion} In this work, we presented Cross-Entropy Guided Policies (CGP) for continuous-action Q-learning. We show that CGP is robust and stable across hyperparameters and random seeds, competitive in performance with state-of-the-art methods, and both faster and more accurate at inference time than the underlying CEM policy. We demonstrate that CEM is not only an effective and robust general-purpose optimization method in the context of Q-learning, but also an effective supervision signal for a policy, which can be trained either online or offline. Our findings support the conventional wisdom that CEM is a particularly flexible method for reasonably low-dimensional problems \cite{RUBINSTEIN199789}, and suggest that CEM remains effective even for problems with potentially high-dimensional latent states, such as Q-learning.
In future work, we would like to draw on more of the rich existing analysis of CEM's properties, and to explore other sample-based algorithms for optimizing the actions of Q-functions, such as covariance matrix adaptation \cite{Hansen2001}. Another direction to explore is entropy-based regularization of the Q-function, similar to SAC \cite{Haarnoja2018b}, which may further improve stability and make the Q-function easier to optimize, as an entropy objective encourages Q-value smoothness. We believe that there is potential for further gains in stable and robust continuous-action Q-learning through sampling methods. While such developments may come at a computational cost, our success in training inference-time policies shows that runtime performance comparable to non-sampling methods can be achieved, independent of the compute required for sampling. We therefore believe that sample-based Q-function optimization is a promising direction for continuous-action Q-learning research that offers unique advantages and can combine well with other Q-learning methods. \section*{Appendix A: Detailed Method Stability Analysis} \label{appendix:a} As part of our exploration of method stability, we ran a battery of hyperparameter sweeps on the HalfCheetah-v2 benchmark task. See our results in Figures 1, 2, 3, 4 and 5. \begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/cemp_noise.pdf} \caption{\label{suppfig:sense_a} CGP} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/ddpg_noise.pdf} \caption{\label{suppfig:sense_b} DDPG} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{images/td3_noise.pdf} \caption{\label{suppfig:sense_c} TD3} \end{subfigure} \caption{Sensitivity of various methods (CGP, DDPG, and TD3) to variations in noise parameters.
These methods all use noise as part of their specification. We vary all sources of noise over the set $\{0.05, 0.1, 0.2, 0.3\}$. All other parameters are held fixed in their default configuration. Soft Actor-Critic does not use noise as a tunable parameter. Both CGP and TD3 tolerate variations in noise well, but TD3 performance falls off when noise is reduced, as it requires sufficient noise to sample diverse training data.} \end{figure*} \begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/cemp_net_size.pdf} \caption{\label{suppfig:net_a} CGP} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/ddpg_net_size.pdf} \caption{\label{suppfig:net_b} DDPG} \end{subfigure} \medskip \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/td3_net_size.pdf} \caption{\label{suppfig:net_c} TD3} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/sac_net_size.pdf} \caption{\label{suppfig:net_d} SAC} \end{subfigure} \caption{Sensitivity of various methods (CGP, DDPG, TD3, and SAC) to differences in the number of units in their fully-connected layers, for both the Q-function and the policy network. All sub-networks in each algorithm are instantiated with the same network structure unless otherwise specified in the method. We vary all layer sizes over the set $\{32, 128, 256, 512\}$. All other parameters are held fixed in their default configuration.
We observe that all methods other than DDPG degrade in performance with a network size of 32, but outside that extreme CGP is much less affected by network size.} \end{figure*} \begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/cemp_lrbs.pdf} \caption{\label{suppfig:lr_a} CGP} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/ddpg_lrbs.pdf} \caption{\label{suppfig:lr_b} DDPG} \end{subfigure} \medskip \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/td3_lrbs.png} \caption{\label{suppfig:lr_c} TD3} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/sac_lrbs.pdf} \caption{\label{suppfig:lr_d} SAC} \end{subfigure} \caption{Sensitivity of various methods (CGP, DDPG, TD3, and SAC) to different combinations of learning rate and batch size. All sub-networks in each algorithm are instantiated with the same network structure unless otherwise specified in the method. We vary the learning rate over the set $\{0.0001, 0.001, 0.01\}$ and the batch size over the set $\{32, 64, 128, 256\}$. Only a subset of these combinations is used, given the well-known poor performance of large batch sizes with small learning rates, and vice versa. All other parameters are held fixed in their default configuration.
We see that the performance spread across learning rates and batch sizes is much narrower for CGP than for other methods, with the exception of a learning rate of 0.01, which was too high for CGP to train stably.} \end{figure*} \begin{figure*}[!ht] \centering \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/cemp_init.pdf} \caption{\label{suppfig:init_a} CGP} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/td3_init.pdf} \caption{\label{suppfig:init_b} TD3} \end{subfigure} \caption{Sensitivity of two methods (CGP and TD3) to the number of random samples of the action space seeding the replay buffer. All sub-networks in each algorithm are instantiated with the same network structure unless otherwise specified in the method. We vary the number of random steps over the set $\{0, 1000, 10000\}$. All other parameters are held fixed in their default configuration. As described by the TD3 authors, seeding the buffer is crucial to performance. We observed that for runs with lower random sample counts, the agent gets stuck in a local reward maximum of around 2000 with non-trivial probability.
The ordering of cgp\_initrand0 and cgp\_initrand1k is due to sampling noise around this local maximum.} \end{figure*} \begin{figure*}[!ht] \centering \label{suppfig:on_offline} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/cemp_on_offline.pdf} \caption{\label{suppfig:on_a} CGP} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/ddpg_on_offline.pdf} \caption{\label{suppfig:on_b} DDPG} \end{subfigure} \medskip \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/td3_on_offline.pdf} \caption{\label{suppfig:on_c} TD3} \end{subfigure} \begin{subfigure}[b]{0.4\textwidth} \centering \includegraphics[width=\textwidth]{images/sac_on_offline.pdf} \caption{\label{suppfig:on_d} SAC} \end{subfigure} \caption{Sensitivity of various methods (CGP, DDPG, TD3, and SAC) to training online vs. offline. Online training is defined as training the Q-function for at least one step (and potentially updating the policy) after each step through the environment. Offline training is defined as rolling out the policy uninterrupted for a full episode, and then training the Q-function for a fixed number of steps and/or updating the policy. All other parameters are held fixed in their default configuration.
Only DDPG experiences significantly worse performance in one mode or the other, though online training usually performs slightly better for CGP and SAC.} \end{figure*} \clearpage \section*{Appendix B: Hyperparameters} \begin{table}[t] \caption{Hyperparameters used for CGP benchmarking runs.} \label{hyper-table} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lc} \toprule Hyperparameters \\ \midrule CEM \\ \hspace{5mm} iterations & 2 \\ \hspace{5mm} sample size & 64 \\ \hspace{5mm} top k & 6 \\ Networks\\ \hspace{5mm} num units & 256\\ Training \\ \hspace{5mm} policy lr & 0.001 \\ \hspace{5mm} q lr & 0.001 \\ \hspace{5mm} batch size & 128 \\ \hspace{5mm} discount $(\gamma)$ & 0.99 \\ \hspace{5mm} weight decay & 0 \\ \hspace{5mm} soft target update $(\tau)$ & 0.005 \\ \hspace{5mm} target update freq & 2 \\ Noise \\ \hspace{5mm} policy noise & 0.2 \\ \hspace{5mm} noise clip & 0.5 \\ \hspace{5mm} exploration noise & 0.0 \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} To facilitate reproducibility, we present our hyperparameters in Table \ref{hyper-table}. \section*{Appendix C: Implementation Details} One critical implementation detail that we found to dramatically affect performance is the handling of the \textit{done} signal at the end of a training episode. The OpenAI Gym environment emits a boolean value \textit{done} indicating whether an episode is complete. This variable can signal one of two things: the episode cannot continue for physical reasons (e.g., a pendulum has fallen past an unrecoverable angle), or the episode has exceeded its maximum length specified in a configuration. Since the first case is Markovian (it depends exclusively on the state when \textit{done} is emitted, with the current time step not considered part of the state for these tasks), the \textit{done} signal can safely be used to ignore the value of the next state at training time.
However, in the second case the process is non-Markovian (meaning it is independent of the state, assuming time remaining is not injected into the observation state), and if the \textit{done} value is used to ignore the next state in the case where the episode has ended for timing reasons, the Q-function learned will reflect this seemingly stochastic drop in reward for arbitrary states, and policies sampling this Q function (either learned or induced) will empirically perform ~20-30\% worse in terms of final reward achieved. To resolve this, for tasks which are non-Markovian in nature (in this paper, HalfCheetah-v2 and Pusher-v2), we do not use a \textit{done} signal, which means that from the perspective of the Q-function the task has an infinite time horizon, where future states outside the time limit are considered as part of the Q-function but never experienced. For TD3\footnote{https://github.com/sfujim/TD3} and soft actor critic\footnote{https://github.com/vitchyr/rlkit}, we used the author's published implementations in our experiments, combined with our outer loop training code to ensure the training process is consistent across all methods. Experiments are run with Python 3.6.7. Critical Python packages include \texttt{torch==1.0.0}, \texttt{numpy==1.16.0}, \texttt{mujoco-py==1.50.1.68}, and \texttt{gym==0.10.9}. Our simulator is MuJoCo Pro version 1.50. Performance benchmarks were measured on workstations with a single 24-core Intel 7920X processors and four GTX 1080Ti GPUs. \section{Electronic Submission} \label{submission} Submission to ICML 2019 will be entirely electronic, via a web site (not email). Information about the submission process and \LaTeX\ templates are available on the conference web site at: \begin{center} \textbf{\texttt{http://icml.cc/}} \end{center} The guidelines below will be enforced for initial submissions and camera-ready copies. Here is a brief summary: \begin{itemize} \item Submissions must be in PDF\@. 
\item Submitted papers can be up to eight pages long, not including references, and up to twelve pages when references and acknowledgments are included. Any paper exceeding this length will automatically be rejected. \item \textbf{Do not include author information or acknowledgements} in your initial submission. \item Your paper should be in \textbf{10 point Times font}. \item Make sure your PDF file only uses Type-1 fonts. \item Place figure captions \emph{under} the figure (and omit titles from inside the graphic file itself). Place table captions \emph{over} the table. \item References must include page numbers whenever possible and be as complete as possible. Place multiple citations in chronological order. \item Do not alter the style template; in particular, do not compress the paper format by reducing the vertical spaces. \item Keep your abstract brief and self-contained, one paragraph and roughly 4--6 sentences. Gross violations will require correction at the camera-ready phase. The title should have content words capitalized. \end{itemize} \subsection{Submitting Papers} \textbf{Paper Deadline:} The deadline for paper submission that is advertised on the conference website is strict. If your full, anonymized, submission does not reach us on time, it will not be considered for publication. \textbf{Anonymous Submission:} ICML uses double-blind review: no identifying author information may appear on the title page or in the paper itself. Section~\ref{author info} gives further details. \textbf{Simultaneous Submission:} ICML will not accept any paper which, at the time of submission, is under review for another conference or has already been published. This policy also applies to papers that overlap substantially in technical content with conference papers under review or previously published. ICML submissions must not be submitted to other conferences during ICML's review period. 
Authors may submit to ICML substantially different versions of journal papers that are currently under review by the journal, but not yet accepted at the time of submission. Informal publications, such as technical reports or papers in workshop proceedings which do not appear in print, do not fall under these restrictions. \medskip Authors must provide their manuscripts in \textbf{PDF} format. Furthermore, please make sure that files contain only embedded Type-1 fonts (e.g.,~using the program \texttt{pdffonts} in linux or using File/DocumentProperties/Fonts in Acrobat). Other fonts (like Type-3) might come from graphics files imported into the document. Authors using \textbf{Word} must convert their document to PDF\@. Most of the latest versions of Word have the facility to do this automatically. Submissions will not be accepted in Word format or any format other than PDF\@. Really. We're not joking. Don't send Word. Those who use \textbf{\LaTeX} should avoid including Type-3 fonts. Those using \texttt{latex} and \texttt{dvips} may need the following two commands: {\footnotesize \begin{verbatim} dvips -Ppdf -tletter -G0 -o paper.ps paper.dvi ps2pdf paper.ps \end{verbatim}} It is a zero following the ``-G'', which tells dvips to use the config.pdf file. Newer \TeX\ distributions don't always need this option. Using \texttt{pdflatex} rather than \texttt{latex}, often gives better results. This program avoids the Type-3 font problem, and supports more advanced features in the \texttt{microtype} package. \textbf{Graphics files} should be a reasonable size, and included from an appropriate format. Use vector formats (.eps/.pdf) for plots, lossless bitmap formats (.png) for raster graphics with sharp lines, and jpeg for photo-like images. The style file uses the \texttt{hyperref} package to make clickable links in documents. If this causes problems for you, add \texttt{nohyperref} as one of the options to the \texttt{icml2019} usepackage statement. 
\subsection{Submitting Final Camera-Ready Copy} The final versions of papers accepted for publication should follow the same format and naming convention as initial submissions, except that author information (names and affiliations) should be given. See Section~\ref{final author} for formatting instructions. The footnote, ``Preliminary work. Under review by the International Conference on Machine Learning (ICML). Do not distribute.'' must be modified to ``\textit{Proceedings of the $\mathit{36}^{th}$ International Conference on Machine Learning}, Long Beach, USA, 2019. Copyright 2019 by the author(s).'' For those using the \textbf{\LaTeX} style file, this change (and others) is handled automatically by simply changing $\mathtt{\backslash usepackage\{icml2019\}}$ to $$\mathtt{\backslash usepackage[accepted]\{icml2019\}}$$ Authors using \textbf{Word} must edit the footnote on the first page of the document themselves. Camera-ready copies should have the title of the paper as running head on each page except the first one. The running title consists of a single line centered above a horizontal rule which is $1$~point thick. The running head should be centered, bold and in $9$~point type. The rule should be $10$~points above the main text. For those using the \textbf{\LaTeX} style file, the original title is automatically set as running head using the \texttt{fancyhdr} package which is included in the ICML 2019 style file package. In case that the original title exceeds the size restrictions, a shorter form can be supplied by using \verb|\icmltitlerunning{...}| just before $\mathtt{\backslash begin\{document\}}$. Authors using \textbf{Word} must edit the header of the document themselves. \section{Format of the Paper} All submissions must follow the specified format. \subsection{Length and Dimensions} Submitted papers can be up to eight pages long, not including references, and up to twelve pages when references and acknowledgments are included. 
Acknowledgements should be limited to grants and people who contributed to the paper. Any submission that exceeds this page limit, or that diverges significantly from the specified format, will be rejected without review. The text of the paper should be formatted in two columns, with an overall width of 6.75~inches, height of 9.0~inches, and 0.25~inches between the columns. The left margin should be 0.75~inches and the top margin 1.0~inch (2.54~cm). The right and bottom margins will depend on whether you print on US letter or A4 paper, but all final versions must be produced for US letter size. The paper body should be set in 10~point type with a vertical spacing of 11~points. Please use Times typeface throughout the text. \subsection{Title} The paper title should be set in 14~point bold type and centered between two horizontal rules that are 1~point thick, with 1.0~inch between the top rule and the top edge of the page. Capitalize the first letter of content words and put the rest of the title in lower case. \subsection{Author Information for Submission} \label{author info} ICML uses double-blind review, so author information must not appear. If you are using \LaTeX\/ and the \texttt{icml2019.sty} file, use \verb+\icmlauthor{...}+ to specify authors and \verb+\icmlaffiliation{...}+ to specify affiliations. (Read the TeX code used to produce this document for an example usage.) The author information will not be printed unless \texttt{accepted} is passed as an argument to the style file. Submissions that include the author information will not be reviewed. \subsubsection{Self-Citations} If you are citing published papers for which you are an author, refer to yourself in the third person. In particular, do not use phrases that reveal your identity (e.g., ``in previous work \cite{langley00}, we have shown \ldots''). Do not anonymize citations in the reference section. The only exception are manuscripts that are not yet published (e.g., under submission). 
If you choose to refer to such unpublished manuscripts \cite{anonymous}, anonymized copies have to be submitted as Supplementary Material via CMT\@. However, keep in mind that an ICML paper should be self-contained and should contain sufficient detail for the reviewers to evaluate the work. In particular, reviewers are not required to look at the Supplementary Material when writing their review. \subsubsection{Camera-Ready Author Information} \label{final author} If a paper is accepted, a final camera-ready copy must be prepared. For camera-ready papers, author information should start 0.3~inches below the bottom rule surrounding the title. The authors' names should appear in 10~point bold type, in a row, separated by white space, and centered. Author names should not be broken across lines. Unbolded superscripted numbers, starting at 1, should be used to refer to affiliations. Affiliations should be numbered in the order of appearance. A single footnote block of text should be used to list all the affiliations. (Academic affiliations should list Department, University, City, State/Region, Country. Similarly for industrial affiliations.) Each distinct affiliation should be listed once. If an author has multiple affiliations, multiple superscripts should be placed after the name, separated by thin spaces. If the authors would like to highlight equal contribution by multiple first authors, those authors should have an asterisk placed after their name in superscript, and the term ``\textsuperscript{*}Equal contribution'' should be placed in the footnote block ahead of the list of affiliations. A list of corresponding authors and their emails (in the format Full Name \textless{}[email protected]\textgreater{}) can follow the list of affiliations. Ideally only one or two names should be listed. A sample file with author names is included in the ICML2019 style file package. Turn on the \texttt{[accepted]} option to the stylefile to see the names rendered.
All of the guidelines above are implemented by the \LaTeX\ style file. \subsection{Abstract} The paper abstract should begin in the left column, 0.4~inches below the final address. The heading `Abstract' should be centered, bold, and in 11~point type. The abstract body should use 10~point type, with a vertical spacing of 11~points, and should be indented 0.25~inches more than normal on left-hand and right-hand margins. Insert 0.4~inches of blank space after the body. Keep your abstract brief and self-contained, limiting it to one paragraph and roughly 4--6 sentences. Gross violations will require correction at the camera-ready phase. \subsection{Partitioning the Text} You should organize your paper into sections and paragraphs to help readers place a structure on the material and understand its contributions. \subsubsection{Sections and Subsections} Section headings should be numbered, flush left, and set in 11~pt bold type with the content words capitalized. Leave 0.25~inches of space before the heading and 0.15~inches after the heading. Similarly, subsection headings should be numbered, flush left, and set in 10~pt bold type with the content words capitalized. Leave 0.2~inches of space before the heading and 0.13~inches afterward. Finally, subsubsection headings should be numbered, flush left, and set in 10~pt small caps with the content words capitalized. Leave 0.18~inches of space before the heading and 0.1~inches after the heading. Please use no more than three levels of headings. \subsubsection{Paragraphs and Footnotes} Within each section or subsection, you should further partition the paper into paragraphs. Do not indent the first line of a given paragraph, but insert a blank line between succeeding ones. You can use footnotes\footnote{Footnotes should be complete sentences.} to provide readers with additional information about a topic without interrupting the flow of the paper. Indicate footnotes with a number in the text where the point is most relevant. 
Place the footnote in 9~point type at the bottom of the column in which it appears. Precede the first footnote in a column with a horizontal rule of 0.8~inches.\footnote{Multiple footnotes can appear in each column, in the same order as they appear in the text, but spread them across columns and pages if possible.} \begin{figure}[ht] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=\columnwidth]{icml_numpapers}} \caption{Historical locations and number of accepted papers for International Machine Learning Conferences (ICML 1993 -- ICML 2008) and International Workshops on Machine Learning (ML 1988 -- ML 1992). At the time this figure was produced, the number of accepted papers for ICML 2008 was unknown and instead estimated.} \label{icml-historical} \end{center} \vskip -0.2in \end{figure} \subsection{Figures} You may want to include figures in the paper to illustrate your approach and results. Such artwork should be centered, legible, and separated from the text. Lines should be dark and at least 0.5~points thick for purposes of reproduction, and text should not appear on a gray background. Label all distinct components of each figure. If the figure takes the form of a graph, then give a name for each axis and include a legend that briefly describes each curve. Do not include a title inside the figure; instead, the caption should serve this function. Number figures sequentially, placing the figure number and caption \emph{after} the graphics, with at least 0.1~inches of space before the caption and 0.1~inches after it, as in Figure~\ref{icml-historical}. The figure caption should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. You may float figures to the top or bottom of a column, and you may set wide figures across both columns (use the environment \texttt{figure*} in \LaTeX). Always place two-column figures at the top or bottom of the page. 
\subsection{Algorithms} If you are using \LaTeX, please use the ``algorithm'' and ``algorithmic'' environments to format pseudocode. These require the corresponding stylefiles, algorithm.sty and algorithmic.sty, which are supplied with this package. Algorithm~\ref{alg:example} shows an example. \begin{algorithm}[tb] \caption{Bubble Sort} \label{alg:example} \begin{algorithmic} \STATE {\bfseries Input:} data $x_i$, size $m$ \REPEAT \STATE Initialize $noChange = true$. \FOR{$i=1$ {\bfseries to} $m-1$} \IF{$x_i > x_{i+1}$} \STATE Swap $x_i$ and $x_{i+1}$ \STATE $noChange = false$ \ENDIF \ENDFOR \UNTIL{$noChange$ is $true$} \end{algorithmic} \end{algorithm} \subsection{Tables} You may also want to include tables that summarize material. Like figures, these should be centered, legible, and numbered consecutively. However, place the title \emph{above} the table with at least 0.1~inches of space before the title and the same after it, as in Table~\ref{sample-table}. The table title should be set in 9~point type and centered unless it runs two or more lines, in which case it should be flush left. \begin{table}[t] \caption{Classification accuracies for naive Bayes and flexible Bayes on various data sets.} \label{sample-table} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccr} \toprule Data set & Naive & Flexible & Better? \\ \midrule Breast & 95.9$\pm$ 0.2& 96.7$\pm$ 0.2& $\surd$ \\ Cleveland & 83.3$\pm$ 0.6& 80.0$\pm$ 0.6& $\times$\\ Glass2 & 61.9$\pm$ 1.4& 83.8$\pm$ 0.7& $\surd$ \\ Credit & 74.8$\pm$ 0.5& 78.3$\pm$ 0.6& \\ Horse & 73.3$\pm$ 0.9& 69.7$\pm$ 1.0& $\times$\\ Meta & 67.1$\pm$ 0.6& 76.5$\pm$ 0.5& $\surd$ \\ Pima & 75.1$\pm$ 0.6& 73.9$\pm$ 0.5& \\ Vehicle & 44.9$\pm$ 0.6& 61.5$\pm$ 0.4& $\surd$ \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table} Tables contain textual material, whereas figures contain graphical material. Specify the contents of each row and column in the table's topmost row. 
Again, you may float tables to a column's top or bottom, and set wide tables across both columns. Place two-column tables at the top or bottom of the page. \subsection{Citations and References} Please use APA reference format regardless of your formatter or word processor. If you rely on the \LaTeX\/ bibliographic facility, use \texttt{natbib.sty} and \texttt{icml2019.bst} included in the style-file package to obtain this format. Citations within the text should include the authors' last names and year. If the authors' names are included in the sentence, place only the year in parentheses, for example when referencing Arthur Samuel's pioneering work \yrcite{Samuel59}. Otherwise place the entire reference in parentheses with the authors and year separated by a comma \cite{Samuel59}. List multiple references separated by semicolons \cite{kearns89,Samuel59,mitchell80}. Use the `et~al.' construct only for citations with three or more authors or after listing all authors to a publication in an earlier reference \cite{MachineLearningI}. Authors should cite their own work in the third person in the initial version of their paper submitted for blind review. Please refer to Section~\ref{author info} for detailed instructions on how to cite your own papers. Use an unnumbered first-level section heading for the references, and use a hanging indent style, with the first line of the reference flush against the left margin and subsequent lines indented by 10 points. The references at the end of this document give examples for journal articles \cite{Samuel59}, conference publications \cite{langley00}, book chapters \cite{Newell81}, books \cite{DudaHart2nd}, edited volumes \cite{MachineLearningI}, technical reports \cite{mitchell80}, and dissertations \cite{kearns89}. Alphabetize references by the surnames of the first authors, with single author entries preceding multiple author entries. Order references for the same authors by year of publication, with the earliest first. 
Make sure that each reference includes all relevant information (e.g., page numbers). Please put some effort into making references complete, presentable, and consistent. If using bibtex, please protect capital letters of names and abbreviations in titles, for example, use \{B\}ayesian or \{L\}ipschitz in your .bib file. \subsection{Software and Data} We strongly encourage the publication of software and data with the camera-ready version of the paper whenever appropriate. This can be done by including a URL in the camera-ready copy. However, do not include URLs that reveal your institution or identity in your submission for review. Instead, provide an anonymous URL or upload the material as ``Supplementary Material'' into the CMT reviewing system. Note that reviewers are not required to look at this material when writing their review. \section*{Acknowledgements} \textbf{Do not} include acknowledgements in the initial version of the paper submitted for blind review. If a paper is accepted, the final camera-ready version can (and probably should) include acknowledgements. In this case, please place such acknowledgements in an unnumbered section at the end of the paper. Typically, this will include thanks to reviewers who gave useful comments, to colleagues who contributed to the ideas, and to funding agencies and corporate sponsors that provided financial support. \nocite{langley00}
\section{Introduction} In non-parametric classification, we are given a \emph{training set} \ensuremath{P}\xspace consisting of $n$ points in a metric space $(\metricSet,\metricFunc)$\xspace, with domain $\mathcal{X}$ and distance function $\textup{\textsf{d}}: \mathcal{X}^2 \rightarrow \mathbb{R}^{+}$. Additionally, \ensuremath{P}\xspace is partitioned into a finite set of \emph{classes} by associating each point $p \in \ensuremath{P}\xspace$ with a \emph{label} $l(p)$, indicating the class to which it belongs. Given an \emph{unlabeled} query point $q \in \mathcal{X}$, the goal of a \emph{classifier} is to predict $q$'s label using the training set \ensuremath{P}\xspace. The \emph{nearest-neighbor rule} is among the best-known classification techniques~\cite{fix_51_discriminatory}. It assigns a query point the label of its closest point in $\ensuremath{P}\xspace$, according to the metric~$\textup{\textsf{d}}$. The nearest-neighbor rule exhibits good classification accuracy both experimentally and theoretically \cite{stone1977consistent,Cover:2006:NNP:2263261.2267456,devroye1981inequality}, but it is often criticized due to its high space and time complexities. Clearly, the training set \ensuremath{P}\xspace must be stored to answer nearest-neighbor queries, and the time required for such queries depends to a large degree on the size and dimensionality of the data. These drawbacks inspire the question of whether it is possible to replace \ensuremath{P}\xspace with a substantially smaller~subset, without significantly reducing the classification accuracy under the nearest-neighbor rule. This problem~is called \emph{nearest-neighbor~condensation}~\cite{Hart:2006:CNN:2263267.2267647,ritter1975algorithm,gottlieb2014near,DBLP:conf/jcdcg/Toussaint02}. There are obvious parallels between condensation and the concept of \emph{coresets} in geometric approximation~\cite{agarwal2005geometric,phillips2016coresets,feldman2020core,har2004coresets}.
Intuitively, a \emph{coreset} is a small subset of the original data that well approximates some statistical properties of the original set. Coresets have also been applied to many problems in machine learning, such as clustering and neural network compression~\cite{baykal2018data,braverman2016new,feldman2011unified,liebenwein2019provable}. This includes recent results on coresets for the SVM classifier~\cite{tukan2020coresets}. This paper presents the first approach to compute coresets for the nearest-neighbor rule, leveraging its resemblance to the problem of nearest-neighbor condensation. We also present one of the first results on practical condensation algorithms with theoretical guarantees. \subparagraph*{Preliminaries.} Given any point $q \in \mathcal{X}$ in the metric space, its nearest-neighbor, denoted $\nn{q}$, is the closest point of \ensuremath{P}\xspace according to the distance function $\textup{\textsf{d}}$. The distance from $q$ to its nearest-neighbor is denoted by $\dnn{q,\ensuremath{P}\xspace}$, or simply $\dnn{q}$ when \ensuremath{P}\xspace is clear. Given a point $p \in \ensuremath{P}\xspace$ from the training set, its nearest-neighbor in \ensuremath{P}\xspace is the point $p$ itself. Additionally, any point of $\ensuremath{P}\xspace$ whose label differs from $p$'s is called an \emph{enemy} of $p$. The closest such point is called $p$'s \emph{nearest-enemy}, and the distance to this point is called $p$'s \emph{nearest-enemy distance}. These are denoted by $\nenemy{p}$ and $\dne{p,\ensuremath{P}\xspace}$ (or simply $\dne{p}$), respectively. Clearly, the size of a coreset for nearest-neighbor classification depends on the spatial characteristics of the classes in the training set. For example, it is much easier to find a small coreset for two spatially well separated clusters than for two classes that have a high degree of overlap.
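To make these definitions concrete, here is a minimal Python sketch of the nearest-neighbor and nearest-enemy computations (Euclidean metric, points as tuples; the function names are ours for illustration, not notation from the cited implementations):

```python
from math import dist  # Euclidean distance between two points (Python >= 3.8)


def nearest_neighbor(q, P):
    """nn(q): the point of P closest to the query q."""
    return min(P, key=lambda p: dist(q, p))


def nearest_enemy_distance(p, P, label):
    """d_ne(p): distance from p to the closest point of P whose
    label differs from p's (its nearest enemy)."""
    return min(dist(p, x) for x in P if label[x] != label[p])
```

For instance, with two "red" points near the origin and one "blue" point farther away, a query near the origin is assigned to "red", and the nearest-enemy distance of each red point is its distance to the blue point.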
To model the intrinsic complexity of nearest-neighbor classification, we define $\kappa$ to be the number of nearest-enemy points of \ensuremath{P}\xspace, \emph{i.e.},\xspace the cardinality of the set $\{\nenemy{p} \mid p \in \ensuremath{P}\xspace\}$. Through a suitable uniform scaling, we may assume that the \emph{diameter} of \ensuremath{P}\xspace (that is, the maximum distance between any two points in the training set) is 1. The \emph{spread} of \ensuremath{P}\xspace, denoted as $\Delta$, is the ratio between the largest and smallest distances in \ensuremath{P}\xspace. Define the \emph{margin} of \ensuremath{P}\xspace, denoted $\gamma$, to be the smallest nearest-enemy distance in \ensuremath{P}\xspace. Clearly, $1/\gamma \leq \Delta$. A metric space $(\metricSet,\metricFunc)$\xspace is said to be \emph{doubling}~\cite{heinonen2012lectures} if there exists some bounded value $\lambda$ such~that any metric ball of radius $r$ can be covered with at most $\lambda$ metric balls of radius $r/2$. Its \emph{doubling dimension} is the base-2 logarithm of $\lambda$, denoted as $\textup{ddim}(\metricSet) = \log{\lambda}$. Throughout, we assume that $\textup{ddim}(\metricSet)$ is a constant, which means that multiplicative factors depending on $\textup{ddim}(\metricSet)$ may be hidden in our asymptotic notation. Many natural metric spaces of interest are doubling, including $d$-dimensional Euclidean space, whose doubling dimension is $\Theta(d)$. It is well known that for any subset $R \subseteq \mathcal{X}$ with some spread $\Delta_R$, the size of $R$ is bounded by $|R| \leq \lceil\Delta_R\rceil^{\textup{ddim}(\metricSet)+1}$. \subparagraph*{Related Work.} A subset $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$ is said to be \emph{consistent}~\cite{Hart:2006:CNN:2263267.2267647} if and only if for every $p \in \ensuremath{P}\xspace$~its nearest-neighbor in \ensuremath{R}\xspace is of the same class as $p$.
Intuitively, \ensuremath{R}\xspace is consistent if and only if all points of \ensuremath{P}\xspace are correctly classified using the nearest-neighbor rule over \ensuremath{R}\xspace. Formally, the problem of \emph{nearest-neighbor condensation} consists of finding a consistent subset of \ensuremath{P}\xspace. Another criterion used for condensation is known as \emph{selectiveness}~\cite{ritter1975algorithm}. A subset $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$~is said to be \emph{selective} if and only if for all $p \in \ensuremath{P}\xspace$ its nearest-neighbor in \ensuremath{R}\xspace is closer to~$p$~than its nearest-enemy in \ensuremath{P}\xspace. Clearly, any selective subset is also consistent. Observe that these condensation criteria ensure that every point in the training set will be correctly classified after condensation, but they do not imply the same for arbitrary points in the metric space. \begin{figure*}[t] \centering \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{sel/Dataset.png} \caption{Training set ($10^4$\,pts)}\label{fig:algexample:dataset} \end{subfigure}% \newcommand{\printalgexample}[3]{% \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{sel/#1.pdf} \caption{#3}\label{fig:algexample:#2} \end{subfigure}% }% \printalgexample{FCNN}{fcnn}{\textup{FCNN}\xspace (222 pts)}% \printalgexample{VSS}{vss}{\textup{VSS}\xspace (233 pts)}% \printalgexample{RSS}{rss}{\textup{RSS}\xspace (233 pts)}% \bigskip \printalgexample{01-RSS}{0.1-rss}{\paramRSS{0.1} (300 pts)}% \printalgexample{05-RSS}{0.5-rss}{\paramRSS{0.5} (540 pts)}% \printalgexample{1-RSS}{1-rss}{\paramRSS{1} (846 pts)}% \printalgexample{14142-RSS}{1.4142-rss}{\paramRSS{\sqrt{2}} (1066 pts)}% \caption{An illustrative example of the subsets selected by different condensation algorithms from an initial training set \ensuremath{P}\xspace in $\mathbb{R}^2$ of $10^4$ points. 
\textup{FCNN}\xspace, \textup{VSS}\xspace, and \textup{RSS}\xspace, are known algorithms for this problem, while \mbox{\textup{$\alpha$-RSS}}\xspace is proposed in this paper, along with new condensation criteria. The subsets selected by \mbox{\textup{$\alpha$-RSS}}\xspace depend on the parameter $\alpha \geq 0$, here assigned to the values $\alpha = \{0.1,0.5,1,\sqrt{2}\}$.}\label{fig:algexample} \vspace*{-10pt} \end{figure*} It is known that the problems of computing consistent and selective subsets of minimum cardinality are both NP-hard~\cite{Wilfong:1991:NNP:109648.109673,Zukhba:2010:NPP:1921730.1921735,khodamoradi2018consistent}. An approximation algorithm called \textup{NET}\xspace~\cite{gottlieb2014near} was proposed for the problem of finding minimum cardinality consistent subsets, along with almost matching hardness lower-bounds. The algorithm simply computes a $\gamma$-net of \ensuremath{P}\xspace, where $\gamma$ is the minimum nearest-enemy distance in \ensuremath{P}\xspace, which clearly results in a consistent subset of \ensuremath{P}\xspace (also selective). In practice, $\gamma$ tends to be small, which results in subsets of much higher cardinality than needed. To overcome this issue, the authors proposed a post-processing pruning technique to further reduce the selected subset. Even with the extra pruning, \textup{NET}\xspace is often outperformed on typical data sets by more practical heuristics with respect to runtime and selection size. More recently, a subexponential-time algorithm was proposed~\cite{biniaz2019minimum} for finding minimum cardinality consistent subsets of point sets $\ensuremath{P}\xspace \subset \mathbb{R}^2$ in the plane, along with other case-specific algorithms for special instances of the problem in $\mathbb{R}^2$. 
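The net-construction step at the core of \textup{NET}\xspace admits a standard greedy sketch (a generic illustration under the Euclidean metric, not the implementation of~\cite{gottlieb2014near}): scan the points and keep only those at distance at least $\gamma$ from every point kept so far. Every discarded point is then within $\gamma$ of some kept point, so the kept points form a $\gamma$-net.

```python
from math import dist


def gamma_net(P, gamma):
    """Greedy gamma-net: kept points are pairwise >= gamma apart, and
    every point of P lies within gamma of some kept point."""
    net = []
    for p in P:
        if all(dist(p, c) >= gamma for c in net):
            net.append(p)
    return net
```

Since each point's covering net point lies strictly closer than $\gamma$, and $\gamma$ is the minimum nearest-enemy distance, that covering point cannot be an enemy, which is why the resulting subset is selective (and hence consistent).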
On the other hand, less is known about computing minimum cardinality selective subsets: there is only a worst-case exponential time algorithm called \textup{SNN}\xspace~\cite{ritter1975algorithm} for computing such optimal subsets. Most research has focused on proposing practical heuristics to find either consistent or selective subsets of \ensuremath{P}\xspace (for comprehensive surveys see \cite{DBLP:conf/jcdcg/Toussaint02,jankowski2004comparison}). \textup{CNN}\xspace~(\emph{Condensed Nearest-Neighbor})~\cite{Hart:2006:CNN:2263267.2267647} was the first algorithm proposed to compute consistent subsets. Even though it has been widely used in the literature, \textup{CNN}\xspace suffers from several drawbacks: its running time is cubic in the worst-case, and the resulting subset is \emph{order-dependent}, meaning that the result is determined by the order in which points are considered by the algorithm.
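For illustration, one common formulation of the \textup{CNN}\xspace heuristic can be sketched as follows (Euclidean metric, points as tuples; details of the original procedure vary, so this is a hedged sketch rather than the exact algorithm of~\cite{Hart:2006:CNN:2263267.2267647}):

```python
from math import dist


def cnn(P, label):
    """CNN-style condensation: repeatedly scan P and add any point that is
    misclassified by 1-NN over the current subset R, until a full pass adds
    nothing. The output depends on the scan order (order-dependence)."""
    R = [P[0]]  # seed the subset with an arbitrary point
    changed = True
    while changed:
        changed = False
        for p in P:
            nn = min(R, key=lambda r: dist(p, r))
            if label[nn] != label[p]:
                R.append(p)
                changed = True
    return R
```

The repeated scans are the source of the cubic worst-case running time noted above: each pass costs $\mathcal{O}(n\,|R|)$, and up to $n$ passes may be needed.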
Alternatives include \textup{FCNN}\xspace (\emph{Fast} \textup{CNN}\xspace)~\cite{angiulli2007fast} and \textup{MSS}\xspace (\emph{Modified Selective Subset})~\cite{barandela2005decision}, which compute consistent and selective subsets respectively. Both algorithms run in $\mathcal{O}(n^2)$ worst-case time, and are order-independent. While such heuristics have been extensively studied experimentally~\cite{Garcia:2012:PSN:2122272.2122582}, theoretical results are scarce. Recently, we have shown~\cite{DBLP:conf/cccg/Flores-VelazcoM19, esa20afloresv} that~the size of the subsets selected by \textup{MSS}\xspace and \textup{FCNN}\xspace cannot be bounded. Alternatively, these papers propose three new quadratic-time algorithms that are both efficient in practice, and have provable upper-bounds on their selection size. These algorithms are called \textup{RSS}\xspace~(\emph{Relaxed Selective Subset}) and \textup{VSS}\xspace~(\emph{Voronoi Selective Subset}) for finding selective subsets, and \textup{SFCNN}\xspace (\emph{Single} \textup{FCNN}\xspace) for finding consistent subsets. \subparagraph*{Contributions.} As mentioned in the previous section, consistency and selectivity guarantee the correct classification of points of the training set, but not of arbitrary points of the metric space (a striking gap, given that classifying arbitrary query points is the fundamental purpose of classification). In this paper, we introduce the concept of a coreset for classification with the nearest-neighbor rule, which provides approximate guarantees on correct classification for all query points. We demonstrate their existence, analyze their size, and discuss their efficient computation.
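The consistency and selectiveness criteria discussed above can be verified directly by brute force; the following sketch makes the two conditions explicit (Euclidean metric, helper names ours):

```python
from math import dist


def is_consistent(R, P, label):
    """Every training point's nearest neighbor in R shares its label."""
    return all(label[min(R, key=lambda r: dist(p, r))] == label[p] for p in P)


def is_selective(R, P, label):
    """Every training point is strictly closer to R than to its nearest
    enemy in P. Selectiveness implies consistency: a point of R closer
    than any enemy cannot itself be an enemy."""
    def d_ne(p):
        return min(dist(p, x) for x in P if label[x] != label[p])
    return all(min(dist(p, r) for r in R) < d_ne(p) for p in P)
```

Both checks run in $\mathcal{O}(n\,|R|)$ and $\mathcal{O}(n^2)$ time respectively, which suffices for validating the outputs of condensation heuristics on moderate training sets.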
We say that a subset $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$ is an \emph{$\varepsilon$-coreset for the nearest-neighbor rule} on \ensuremath{P}\xspace, if and only if for every query point $q \in \mathcal{X}$, the class of its exact nearest-neighbor in \ensuremath{R}\xspace is the same as the class of some $\varepsilon$-approximate nearest-neighbor of $q$ in \ensuremath{P}\xspace (see Section~\ref{sec:approx-sensitive-nnc} for definitions). Recalling the concepts of $\kappa$ and $\gamma$ introduced in the preliminaries, here is our main result: \begin{theorem} \label{thm:coreset:main} Given a training set \ensuremath{P}\xspace in a doubling metric space $(\metricSet,\metricFunc)$\xspace, there exists an $\varepsilon$-coreset for the nearest-neighbor rule of size $\mathcal{O}(\kappa\,\log{\frac{1}{\gamma}}\,(1/\varepsilon)^{\textup{ddim}(\metricSet)+1})$, and this coreset can be computed in subquadratic worst-case time. \end{theorem} \noindent Here is a summary of our principal results: \begin{itemize} \item We extend the criteria used for nearest-neighbor condensation, and identify sufficient conditions to guarantee the correct classification of any query point after condensation. \item We prove that finding minimum-cardinality subsets under these new criteria is NP-hard. \item We provide quadratic-time approximation algorithms with provable upper-bounds on the sizes of their selected subsets, and we show that the running time of one such algorithm can be improved to be subquadratic. \end{itemize} Our subquadratic-time algorithm is the first with such worst-case runtime for the problem of nearest-neighbor condensation. \section{Experimental Evaluation} \label{sec:experiments} To gauge the practical relevance of these results, we performed experimental trials on several training sets, both synthetically generated and widely used benchmarks.
First, we consider 21 training sets from the UCI \emph{Machine Learning Repository}\footnote{\url{https://archive.ics.uci.edu/ml/index.php}} which are commonly used in the literature to evaluate condensation algorithms~\cite{Garcia:2012:PSN:2122272.2122582}. These consist of a number of points ranging from 150 to $58000$, in $d$-dimensional Euclidean space with $d$ between 2 and 64, and 2 to 26 classes. We also generated some synthetic training sets, containing $10^5$ uniformly distributed points, in 2 to 3 dimensions, and 3 classes. All training sets used in these experimental trials are summarized in Table~\ref{table:data}. The implementation of the algorithms, training sets used, and raw results, are publicly available\footnote{\url{https://github.com/afloresv/nnc/}}. These experimental trials compare the performance of different condensation algorithms when applied to vastly different training sets. We use two measures of comparison on these algorithms: their runtime on the different training sets, and the size of the subset selected. Clearly, these values can differ greatly across training sets of very different sizes, so the raw results are normalized before comparison. The runtime of an algorithm for a given training set is normalized by dividing it by $n$, the size of the training set. The size of the selected subset is normalized by dividing it by $\kappa$, the number of nearest-enemy points in the training set, which characterizes the complexity of the boundaries between classes. \subparagraph*{Algorithm Comparison.} The first experiment evaluates the performance of the five algorithms discussed in this paper: \mbox{\textup{$\alpha$-RSS}}\xspace, \mbox{\textup{$\alpha$-FCNN}}\xspace, \mbox{\textup{$\alpha$-SFCNN}}\xspace, \mbox{\textup{$\alpha$-HSS}}\xspace, and \mbox{\textup{$\alpha$-NET}}\xspace.
The evaluation is carried out by varying the value of the $\alpha$ parameter from 0 to 1, to understand the impact of increasing this parameter. The implementation of \mbox{\textup{$\alpha$-HSS}}\xspace uses the well-known greedy algorithm for set cover~\cite{10.2307/3689577}, and solves the problem using the reduction described in Section~\ref{sec:hardness}. On the other hand, recall that the original \textup{NET}\xspace algorithm (for $\alpha=0$) implements an extra pruning technique to further reduce the training set after computing the $\gamma$-net~\cite{gottlieb2014near}. For a fair comparison, we implemented the \mbox{\textup{$\alpha$-NET}}\xspace algorithm with a modified version of this pruning technique that guarantees that the selected subset is still $\alpha$-selective. The results show that \mbox{\textup{$\alpha$-RSS}}\xspace outperforms the other algorithms in terms of running time by a large margin, irrespective of the value of $\alpha$ (see Figure~\ref{fig:exp:alpha:time}). Additionally, the number of points selected by \mbox{\textup{$\alpha$-RSS}}\xspace, \mbox{\textup{$\alpha$-FCNN}}\xspace, and \mbox{\textup{$\alpha$-SFCNN}}\xspace is comparable to that of \mbox{\textup{$\alpha$-HSS}}\xspace, which guarantees the best possible approximation factor in general metrics, while \mbox{\textup{$\alpha$-NET}}\xspace is significantly outperformed. \begin{figure*}[h!]
\centering \begin{subfigure}[b]{.48\linewidth} \centering\includegraphics[width=\textwidth]{exp/plot_alphas_time} \vspace*{-10pt} \caption{Running time.}\label{fig:exp:alpha:time} \end{subfigure}% \hfill \begin{subfigure}[b]{.48\linewidth} \centering\includegraphics[width=\textwidth]{exp/plot_alphas_size} \vspace*{-10pt} \caption{Size of the selected subsets.}\label{fig:exp:alpha:size} \end{subfigure}% \vspace*{-5pt} \caption{Comparison of \mbox{\textup{$\alpha$-RSS}}\xspace, \mbox{\textup{$\alpha$-FCNN}}\xspace, \mbox{\textup{$\alpha$-SFCNN}}\xspace, \mbox{\textup{$\alpha$-NET}}\xspace, and \mbox{\textup{$\alpha$-HSS}}\xspace, for different values of $\alpha$.}\label{fig:exp:alpha} \end{figure*} \subparagraph*{Subquadratic Approach.} Using the same experimental framework, we evaluate the performance of the subquadratic implementation \paramRSS{(\alpha,\xi)} described in Section~\ref{sec:subquadratic}. In this case, we vary the value of the parameter $\xi$ to assess its effect on the running time and selection size of the algorithm, for two different values of $\alpha$ (see Figure~\ref{fig:exp:eps}). The results show an expected increase in the number of selected points, along with a significant improvement in running time. \begin{figure*}[h!]
\centering \begin{subfigure}[b]{.48\linewidth} \centering\includegraphics[width=\textwidth]{exp/plot_aerss_time} \vspace*{-10pt} \caption{Running time.}\label{fig:exp:eps:time} \end{subfigure}% \hfill \begin{subfigure}[b]{.48\linewidth} \centering\includegraphics[width=\textwidth]{exp/plot_aerss_size} \vspace*{-10pt} \caption{Size of the selected subsets.}\label{fig:exp:eps:size} \end{subfigure}% \vspace*{-5pt} \caption{Evaluating the effect of increasing the parameter $\xi$ on \paramRSS{(\alpha,\xi)} for $\alpha=\{0, 0.2\}$.}\label{fig:exp:eps} \end{figure*} \input{table/res-data.tex} \begin{comment} \input{table/res-data.tex} For our first experiment, we computed the percentage of points that were selected by each of the various condensation algorithms. The results are presented in Table~\ref{table:size}. The results show that most of the state-of-the-art algorithms achieve similar sizes, and the various \textup{RSS}\xspace implementations produced slightly larger sets. \input{table/res-size.tex} \begin{figure}[h!] \centering \includegraphics[width=\textwidth]{plot-sizekappa.pdf} \caption{Boxplot of the proportion of points selected by the different \textup{NN}\xspace condensation algorithms with respect to $\kappa$, the number of \textup{NE}\xspace points in each training set.} \label{fig:exp:sizekappa} \end{figure} Finally, we computed the median chromatic density of the of the various points. The results are presented in Table~\ref{table:cd}. As expected, the various \textup{RSS}\xspace variants resulted in the highest chromatic densities. \input{table/res-cd.tex} \begin{figure}[h!] 
\centering \includegraphics[width=\textwidth]{plot-cd.pdf} \caption{Boxplot of the median chromatic density.} \label{fig:exp:cd} \end{figure} \end{comment} \section{Coreset Computation} \label{sec:computation} \subsection{Hardness Results} \label{sec:hardness} Define \textsc{Min-$\alpha$-CS} to be the problem of computing an $\alpha$-consistent subset of minimum cardinality for a given training set \ensuremath{P}\xspace. Similarly, let \textsc{Min-$\alpha$-SS} be the corresponding optimization problem for $\alpha$-selective subsets. Following known results from standard condensation~\cite{Wilfong:1991:NNP:109648.109673,Zukhba:2010:NPP:1921730.1921735,khodamoradi2018consistent}, when $\alpha$ is set to zero, the \textsc{Min-0-CS} and \textsc{Min-0-SS} problems are both known to be NP-hard. Since these are special cases of the general problems just defined, both \textsc{Min-$\alpha$-CS} and \textsc{Min-$\alpha$-SS} are NP-hard as well. In this section, we present results related to the hardness of approximation of both problems, along with simple algorithmic approaches with tight approximation factors. \begin{theorem} \label{thm:hardness:min-alpha-cs} The \textsc{Min-$\alpha$-CS} problem is \textup{NP}-hard to approximate in polynomial time within a factor of $2^{({\textup{ddim}(\metricSet) \log{((1+\alpha)/\gamma)})}^{1-o(1)}}$. \end{theorem} The full proof is omitted, as it follows from a modification of the hardness bounds proof for the \textsc{Min-0-CS} problem described in~\cite{gottlieb2014near}, which is based on a reduction from the \emph{Label Cover} problem. Proving Theorem~\ref{thm:hardness:min-alpha-cs} involves a careful adjustment of the distances in this reduction, so that all the points in the construction have chromatic density at least $\alpha$. Consequently, the minimum nearest-enemy distance is reduced by a factor of $1/(1+\alpha)$, explaining the resulting bound for \textsc{Min-$\alpha$-CS}.
The \textup{NET}\xspace algorithm~\cite{gottlieb2014near} can also be generalized to compute $\alpha$-consistent subsets of \ensuremath{P}\xspace as follows. We define \mbox{\textup{$\alpha$-NET}}\xspace as the algorithm that computes a $\gamma/(1+\alpha)$-net of \ensuremath{P}\xspace, where $\gamma$ is the smallest nearest-enemy distance in \ensuremath{P}\xspace. The covering property of nets~\cite{har2006fast} implies that the resulting subset is $\alpha$-consistent, while the packing property suggests that its cardinality is $\mathcal{O}\left( ((1+\alpha)/\gamma)^{\textup{ddim}(\metricSet)+1} \right)$, implying a tight approximation to the \textsc{Min-$\alpha$-CS} problem. \begin{theorem} \label{thm:hardness:min-alpha-ss} The \textsc{Min-$\alpha$-SS} problem is \textup{NP}-hard to approximate in polynomial time within a factor of $(1-o(1))\ln{n}$ unless $\textup{NP} \subseteq \textup{DTIME}(n^{\log\log{n}})$. \end{theorem} \begin{proof} The result follows from the hardness of another related covering problem: the mi\-nimum \emph{dominating set}~\cite{feige1998threshold,paz1981non,lund1994hardness}. We describe a simple L-reduction from any instance of this problem to an instance of \textsc{Min-$\alpha$-SS}, which preserves the approximation ratio. \begin{enumerate} \item Consider any instance of minimum dominating set, consisting of the graph $G=(V,E)$. \item Generate a new edge-weighted graph $G'$ as follows:\\ Create two copies of $G$, namely $G_\textsf{r}=(V_\textsf{r},E_\textsf{r})$ and $G_\textsf{b}=(V_\textsf{b},E_\textsf{b})$, of \emph{red} and \emph{blue} nodes respectively. Set all edge-weights of $G_\textsf{r}$ and $G_\textsf{b}$ to be 1. Finally, connect each red node $v_\textsf{r}$ to its corresponding blue node $v_\textsf{b}$ by an edge $\{v_\textsf{r},v_\textsf{b}\}$ of weight $1+\alpha+\xi$ for a sufficiently small constant $\xi>0$.
Formally, $G'$ is defined as the edge-weighted graph $G' =(V',E')$ where the set of nodes is $V' = V_\textsf{r} \cup V_\textsf{b}$, the set of edges is $E' = E_\textsf{r} \cup E_\textsf{b} \cup \{ \{v_\textsf{r},v_\textsf{b}\} \mid v \in V \}$, and an edge-weight function $w : E' \rightarrow \mathbb{R}^+$ where $w(e) = 1$ iff $e \in E_\textsf{r} \cup E_\textsf{b}$, and $w(e) = 1+\alpha+\xi$ otherwise. \item Define a labeling function $l$ where $l(v) = \emph{red}$ iff $v \in V_\textsf{r}$, and $l(v) = \emph{blue}$ iff $v \in V_\textsf{b}$. \item Compute the shortest-path metric of $G'$, denoted as $\textup{\textsf{d}}_{G'}$. \item Solve the \textsc{Min-$\alpha$-SS} problem for the set $V'$, on metric $\textup{\textsf{d}}_{G'}$, and the labels defined by $l$. \end{enumerate} A dominating set of $G$ consists of a subset of nodes $D \subseteq V$, such that every node $v \in V \setminus D$ is adjacent to a node in $D$. Given any dominating set $D \subseteq V$ of $G$, it is easy to see that the subset $R = \{ v_\textsf{r}, v_\textsf{b} \mid v \in D\}$ is an $\alpha$-selective subset of $V'$, where $|R| = 2|D|$. Similarly, given an $\alpha$-selective subset $R \subseteq V'$, there is a corresponding dominating set $D$ of $G$, where $|D| \leq |R|/2$, as $D$ can be taken to be the smaller of $R \cap V_\textsf{r}$ and $R \cap V_\textsf{b}$. Therefore, \textsc{Min-$\alpha$-SS} is as hard to approximate as the minimum dominating set problem. \end{proof} There is a clear connection between the \textsc{Min-$\alpha$-SS} problem and covering problems, in particular that of finding an optimal hitting set. Given a set of elements $U$ and a family $C$ of subsets~of~$U$, a \emph{hitting set} of $(U,C)$ is a subset $H \subseteq U$ such that every set in $C$ contains at least one element of $H$.
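To make this covering viewpoint concrete, the classic greedy heuristic for hitting set can be sketched as follows. This is a minimal Python sketch; the list-of-sets representation and the function name are illustrative, not taken from an actual implementation.

```python
def greedy_hitting_set(universe, family):
    """Greedy hitting set: repeatedly pick the element of the universe that
    hits (intersects) the largest number of sets not yet hit. This is the
    classic logarithmic-factor approximation, phrased for hitting set rather
    than the equivalent set-cover formulation."""
    unhit = [set(s) for s in family if s]  # sets still missing an element of H
    hitting = []
    while unhit:
        # Element contained in the largest number of not-yet-hit sets.
        best = max(universe, key=lambda e: sum(e in s for s in unhit))
        hitting.append(best)
        unhit = [s for s in unhit if best not in s]
    return hitting
```

Instantiating the universe with \ensuremath{P}\xspace and the family with a suitable neighborhood of each point yields an $\alpha$-selective subset, as described next.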
Letting $N_{p,\alpha}$ be the set of points of \ensuremath{P}\xspace whose distance to $p$ is less~than $\dne{p}/(1+\alpha)$, any hitting set of $(\ensuremath{P}\xspace, \{N_{p,\alpha} \mid p \in \ensuremath{P}\xspace\})$ is also an $\alpha$-selective subset of \ensuremath{P}\xspace, and vice versa. This simple reduction implies an $\mathcal{O}(n^3)$ worst-case time $\mathcal{O}(\log{n})$-approximation algorithm for \textsc{Min-$\alpha$-SS}, based on the classic greedy algorithm for set cover~\cite{10.2307/3689577,Slavik:1996:TAG:237814.237991}. We call this approach \mbox{\textup{$\alpha$-HSS}}\xspace, for \emph{$\alpha$-Hitting Selective Subset}. It follows from Theorem~\ref{thm:hardness:min-alpha-ss} that for training sets in general metric spaces, this is the best approximation possible under standard complexity assumptions. While both \mbox{\textup{$\alpha$-NET}}\xspace and \mbox{\textup{$\alpha$-HSS}}\xspace compute tight approximations of their corresponding problems, their performance in practice does not match that of heuristic approaches for standard condensation (see Section~\ref{sec:experiments} for experimental results). Therefore, in the following sections, we consider two practical algorithms for this problem, namely \textup{FCNN}\xspace and \textup{RSS}\xspace, and extend them to compute subsets with the newly defined criteria. \subsection{An Algorithm for $\alpha$-Selective Subsets} \label{sec:algorithm:selective} For standard condensation, the \textup{RSS}\xspace algorithm was recently proposed~\cite{DBLP:conf/cccg/Flores-VelazcoM19} to compute selective subsets. It runs in quadratic worst-case time and exhibits good performance in practice. The selection process of this algorithm is heuristic in nature and can be described as follows: beginning with an empty set, the points $p \in \ensuremath{P}\xspace$ are examined in increasing order with respect to their nearest-enemy distance $\dne{p}$.
The point $p$ is added to the subset \ensuremath{R}\xspace if $\dnn{p,\ensuremath{R}\xspace} \geq \dne{p}$. It is easy to see that the resulting subset is selective. We define a generalization, called \mbox{\textup{$\alpha$-RSS}}\xspace, to compute $\alpha$-selective subsets of \ensuremath{P}\xspace. The condition to add a point $p \in \ensuremath{P}\xspace$ to the selected subset checks if any previously selected point is closer to $p$ than $\dne{p}/(1+\alpha)$, instead of just $\dne{p}$. See Algorithm~\ref{alg:alpha-rss} for a formal description, and Figure~\ref{fig:alpha:rss:process} for an illustration. It is easy to see that this algorithm computes an $\alpha$-selective subset, while keeping the quadratic time complexity of the original \textup{RSS}\xspace algorithm. \begin{algorithm} \DontPrintSemicolon \vspace*{0.1cm} \KwIn{Initial training set \ensuremath{P}\xspace and parameter $\alpha \geq 0$} \KwOut{$\alpha$-selective subset $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$} $\ensuremath{R}\xspace \gets \emptyset$\; Let $\left\lbrace p_i \right\rbrace^n_{i=1}$ be the points of \ensuremath{P}\xspace sorted increasingly \emph{w.r.t.}\xspace $\dne{p_i}$\; \ForEach{$p_i \in \ensuremath{P}\xspace$, where $i = 1\dots n$}{ \If{$\dnn{p_i, \ensuremath{R}\xspace} \geq \dne{p_i}/(1+\alpha)$}{ $\ensuremath{R}\xspace \gets \ensuremath{R}\xspace \cup \left\lbrace p_i \right\rbrace$\; } } \KwRet{\ensuremath{R}\xspace} \vspace*{0.1cm} \caption{\mbox{\textup{$\alpha$-RSS}}\xspace} \label{alg:alpha-rss} \end{algorithm} Naturally, we want to analyze the number of points this algorithm selects. The remainder of this section establishes upper-bounds and approximation guarantees of the \mbox{\textup{$\alpha$-RSS}}\xspace algorithm for any doubling metric space, with improved results in Euclidean space.
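For concreteness, the selection rule of Algorithm~\ref{alg:alpha-rss} can be sketched in Python as follows. The brute-force distance computations and the names \texttt{points}, \texttt{labels}, and \texttt{dist} are illustrative stand-ins; they match the quadratic worst-case behavior of the algorithm rather than any optimized implementation.

```python
from math import inf

def alpha_rss(points, labels, dist, alpha=0.0):
    """Sketch of the alpha-RSS selection rule: scan points in increasing
    order of nearest-enemy distance, and keep a point only if no previously
    kept point is closer to it than d_ne(p)/(1+alpha)."""
    n = len(points)
    # d_ne(p): distance from p to the closest point of a different class.
    dne = [min(dist(points[i], points[j])
               for j in range(n) if labels[j] != labels[i])
           for i in range(n)]
    selected = []
    for i in sorted(range(n), key=lambda i: dne[i]):
        # Distance from p_i to its nearest neighbor in the current subset R.
        dnn = min((dist(points[i], points[j]) for j in selected), default=inf)
        if dnn >= dne[i] / (1 + alpha):  # no selected point is "too close"
            selected.append(i)
    return [points[i] for i in selected]
```

With $\alpha = 0$, this reduces to the selection rule of the original \textup{RSS}\xspace algorithm.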
This resolves the open problem posed in~\cite{DBLP:conf/cccg/Flores-VelazcoM19} of whether \textup{RSS}\xspace computes an approximation of the \textsc{Min-0-CS} and \textsc{Min-0-SS}~problems. \subparagraph*{Size in Doubling spaces.} First, we consider the case where the underlying metric space $(\metricSet,\metricFunc)$\xspace of \ensuremath{P}\xspace is doubling. The following results depend on the doubling dimension $\textup{ddim}(\metricSet)$ of the metric space (which is assumed to be constant), the margin $\gamma$ (the smallest nearest-enemy distance of any point in \ensuremath{P}\xspace), and $\kappa$ (the number of nearest-enemy points in \ensuremath{P}\xspace). \begin{theorem} \label{thm:rss-approx-factor-cs} \mbox{\textup{$\alpha$-RSS}}\xspace computes a tight approximation for the \textsc{Min-$\alpha$-CS} problem. \end{theorem} \begin{proof} This follows from a direct comparison to the resulting subset of the \mbox{\textup{$\alpha$-NET}}\xspace algorithm from the previous section. For any point $p$ selected by \mbox{\textup{$\alpha$-NET}}\xspace, let $B_{p,\alpha}$ be the set of points of \ensuremath{P}\xspace ``covered'' by $p$, that is, whose distance to $p$ is at most $\gamma/(1+\alpha)$. By the covering property of $\varepsilon$-nets, this defines a partition of \ensuremath{P}\xspace when considering every point $p$ selected by \mbox{\textup{$\alpha$-NET}}\xspace. Let \ensuremath{R}\xspace be the set of points selected by \mbox{\textup{$\alpha$-RSS}}\xspace; we analyze the size of $B_{p,\alpha} \cap \ensuremath{R}\xspace$, that is, how many points of any given $B_{p,\alpha}$ could have been selected by the \mbox{\textup{$\alpha$-RSS}}\xspace algorithm. Let $a, b \in B_{p,\alpha} \cap \ensuremath{R}\xspace$ be any two such points, where without loss of generality, $\dne{a} \leq \dne{b}$. By the selection process of the algorithm, we know that $\dist{a}{b} \geq \dne{b}/(1+\alpha) \geq \gamma/(1+\alpha)$.
A simple packing argument in doubling metrics implies that $|B_{p,\alpha} \cap \ensuremath{R}\xspace| \leq 2^{\textup{ddim}(\metricSet)+1}$. Altogether, we have that the size of the subset selected by \mbox{\textup{$\alpha$-RSS}}\xspace is $\mathcal{O}\left( (2(1+\alpha)/\gamma)^{\textup{ddim}(\metricSet)+1} \right)$. \end{proof} \begin{figure}[t!] \centering \includegraphics[width=.75\textwidth]{proof/rss-alpha-sel} \caption{Selection of \mbox{\textup{$\alpha$-RSS}}\xspace for $\alpha\texttt{=}0.5$. Faded points are not selected, while selected points~are drawn along with a ball of radius $\dne{p}$ (dotted outline) and a ball of radius $\dne{p}/(1+\alpha)$ (solid outline). A point $p$ is selected if no previously selected point is closer to $p$ than $\dne{p}/(1+\alpha)$.} \label{fig:alpha:rss:process} \end{figure} \begin{theorem} \label{thm:rss-approx-factor} \mbox{\textup{$\alpha$-RSS}}\xspace computes an $\mathcal{O}\left( \log{(\min{(1+2/\alpha,1/\gamma)})} \right)$-factor approximation for the \textsc{Min-$\alpha$-SS} problem. For $\alpha = \Omega(1)$, this is a constant-factor approximation. \end{theorem} \begin{proof} Let \textup{OPT}$_\alpha$ be the optimum solution to the \textsc{Min-$\alpha$-SS} problem, \emph{i.e.},\xspace the minimum cardinality $\alpha$-selective subset of \ensuremath{P}\xspace. For every point $p \in \textup{OPT}_\alpha$, define $S_{p,\alpha}$ to be the set of points in \ensuremath{P}\xspace ``covered'' by $p$, or simply $S_{p,\alpha} = \{ r \in \ensuremath{P}\xspace \mid \dist{r}{p} < \dne{r}/(1+\alpha) \}$. Additionally, letting \ensuremath{R}\xspace be the set of points selected by \mbox{\textup{$\alpha$-RSS}}\xspace, define $\ensuremath{R}\xspace_{p,\sigma}$ to be the points selected by \mbox{\textup{$\alpha$-RSS}}\xspace which also belong to $S_{p,\alpha}$ and whose nearest-enemy distance is between $\sigma$ and~$2\sigma$, for $\sigma \in [\gamma,1]$.
That is, $\ensuremath{R}\xspace_{p,\sigma} = \{ r \in \ensuremath{R}\xspace \cap S_{p,\alpha} \mid \dne{r} \in [\sigma,2\sigma) \}$. Clearly, these subsets define a partitioning of \ensuremath{R}\xspace for all $p \in \textup{OPT}_\alpha$ and values of $\sigma = \gamma\,2^i$ for $i=\{0,1,2,\dots,\lceil\log{\frac{1}{\gamma}}\rceil\}$. However, depending on $\alpha$, some values of $\sigma$ would yield empty $\ensuremath{R}\xspace_{p,\sigma}$ sets. Consider some point $q \in S_{p,\alpha}$; we can bound its nearest-enemy distance in terms of the nearest-enemy distance of $p$. In particular, by leveraging simple triangle-inequality arguments, it is possible to prove that $\frac{1+\alpha}{2+\alpha} \,\dne{p} \leq \dne{q} \leq \frac{1+\alpha}{\alpha} \,\dne{p}$. Therefore, the values of $\sigma$ for which the $\ensuremath{R}\xspace_{p,\sigma}$ sets are not empty are $\sigma = 2^j \,\frac{1+\alpha}{2+\alpha}\,\dne{p}$ for $j = \{0,\dots,\lceil\log{(1+2/\alpha)}\rceil\}$. \begin{comment} \alejandro{Derivation of the inequalities above.} \begin{align*} \dne{q} &\geq (1+\alpha)\, \dist{p}{q}\\ &\geq (1+\alpha) (\dist{q}{\nenemy{p}} - \dist{\nenemy{p}}{p})\\ &\geq (1+\alpha) (\dne{q} - \dne{p})\\ -\alpha \dne{q} &\geq - (1+\alpha)\, \dne{p}\\ \dne{q} &\leq \frac{1+\alpha}{\alpha} \dne{p} \end{align*} \begin{align*} \dne{p} &\leq \dist{p}{\nenemy{q}}\\ &\leq \dist{p}{q} + \dist{q}{\nenemy{q}}\\ &= \dist{p}{q} + \dne{q}\\ &\leq \dne{q}/(1+\alpha) + \dne{q}\\ &= \dne{q}\, \left( 1 + \frac{1}{1+\alpha} \right) = \dne{q} \frac{2+\alpha}{1+\alpha}\\ \dne{q} &\geq \dne{p} \frac{1+\alpha}{2+\alpha} \end{align*} \end{comment} The proof now follows by bounding the size of $\ensuremath{R}\xspace_{p,\sigma}$, which can be achieved by bounding its spread. Thus, let us consider the smallest and largest pairwise distances among points in $\ensuremath{R}\xspace_{p,\sigma}$.
Take any two points $a,b \in \ensuremath{R}\xspace_{p,\sigma}$ where without loss of generality, $\dne{a} \leq \dne{b}$. Note that points selected by \mbox{\textup{$\alpha$-RSS}}\xspace cannot be ``too close'' to each other; that is, as $a$ and $b$ were selected by the algorithm, we know that $(1+\alpha)\,\dist{a}{b} \geq \dne{b} \geq \sigma$. Therefore, the smallest pairwise distance in $\ensuremath{R}\xspace_{p,\sigma}$ is at least $\sigma/(1+\alpha)$. Additionally, by the triangle inequality, we can bound the maximum pairwise distance using their distance to $p$ as $\dist{a}{b} \leq \dist{a}{p} + \dist{p}{b} \leq 4\sigma / (1+\alpha)$. Then, by the packing properties of doubling spaces, the size of $\ensuremath{R}\xspace_{p,\sigma}$ is at most $4^{\textup{ddim}(\metricSet)+1}$. Altogether, for every $p \in \textup{OPT}_\alpha$ there are up to $\lceil\log{(\min{(1+2/\alpha,1/\gamma)})}\rceil$ non-empty $\ensuremath{R}\xspace_{p,\sigma}$ subsets, each containing at most $4^{\textup{ddim}(\metricSet)+1}$ points. In doubling spaces with constant doubling dimension, the size of these subsets is also constant. \end{proof} While these results are meaningful from a theoretical perspective, it is also useful to establish bounds in terms of the geometry of the learning space, which is characterized by the boundaries between points of different classes. Thus, using similar packing arguments as above, we bound the selection size of the algorithm with respect to $\kappa$. \begin{theorem} \label{thm:rss-size} \mbox{\textup{$\alpha$-RSS}}\xspace selects $\mathcal{O}\left( \kappa \log{\frac{1}{\gamma}}\ (1+\alpha)^{\textup{ddim}(\metricSet)+1}\right)$ points. \end{theorem} \begin{proof} This follows from similar arguments to the ones used to prove Theorem~\ref{thm:rss-approx-factor}, using an alternative charging scheme for each nearest-enemy point in the training set.
Consider one such point $p \in \{ \nenemy{r} \mid r \in \ensuremath{P}\xspace\}$ and a value $\sigma \in [\gamma,1]$; we define $\ensuremath{R}\xspace'_{p,\sigma}$ to be the subset of points selected by \mbox{\textup{$\alpha$-RSS}}\xspace whose nearest-enemy is $p$, and whose nearest-enemy distance is between $\sigma$ and $2\sigma$. That is, $\ensuremath{R}\xspace'_{p,\sigma} = \{ r \in \ensuremath{R}\xspace \mid \nenemy{r}=p \wedge \dne{r} \in [\sigma,2\sigma) \}$. These subsets partition \ensuremath{R}\xspace for all nearest-enemy points of \ensuremath{P}\xspace, and values of $\sigma = \gamma\, 2^i$ for $i=\{0,1,2,\dots,\lceil\log{\frac{1}{\gamma}}\rceil\}$. For any two points $a, b \in \ensuremath{R}\xspace'_{p,\sigma}$, the selection criterion of \mbox{\textup{$\alpha$-RSS}}\xspace implies some separation between selected points, which can be used to prove that $\dist{a}{b} \geq \sigma/(1+\alpha)$. Additionally, we know that $\dist{a}{b} \leq \dist{a}{p} + \dist{p}{b} = \dne{a} + \dne{b} \leq 4\sigma$. Using a simple packing argument, we have that $|\ensuremath{R}\xspace'_{p,\sigma}| \leq \lceil 4(1+\alpha) \rceil^{\textup{ddim}(\metricSet)+1}$. \begin{comment} \[ |\ensuremath{R}\xspace| = \sum_{p} \sum_{i=0}^{\lceil\log\frac{1}{\gamma}\rceil} |\ensuremath{R}\xspace'_{p,2^i}| \leq \kappa \left\lceil\log \frac{1}{\gamma} \right\rceil \left\lceil 4(1+\alpha) \right\rceil^{\textup{ddim}(\metricSet)+1} \] \end{comment} Altogether, by counting all sets $\ensuremath{R}\xspace'_{p,\sigma}$ for each nearest-enemy in the training set and values of $\sigma$, the size of \ensuremath{R}\xspace is upper-bounded by $|\ensuremath{R}\xspace| \leq \kappa \left\lceil\log{1/\gamma} \right\rceil \left\lceil 4(1+\alpha) \right\rceil^{\textup{ddim}(\metricSet)+1}$. Based on the assumption that $\textup{ddim}(\metricSet)$ is constant, this completes the proof.
\end{proof} As a corollary, this result implies that when $\alpha = 2/\varepsilon$, the $\alpha$-selective subset computed by \mbox{\textup{$\alpha$-RSS}}\xspace contains $\mathcal{O}\left( \kappa \log{1/\gamma}\ (1/\varepsilon)^{\textup{ddim}(\metricSet)+1} \right)$ points. This establishes the size bound on the $\varepsilon$-coreset given in Theorem~\ref{thm:coreset:main}, which can be computed using the \mbox{\textup{$\alpha$-RSS}}\xspace algorithm. \subparagraph*{Size in Euclidean space.} In the case where $\ensuremath{P}\xspace \subset \mathbb{R}^d$ lies in $d$-dimensional Euclidean~space, the analysis of \mbox{\textup{$\alpha$-RSS}}\xspace can be further improved, leading to a constant-factor approximation of \textsc{Min-$\alpha$-SS} for any value of $\alpha \geq 0$, and reduced dependency on the dimensionality of \ensuremath{P}\xspace. \begin{theorem} \label{thm:rss-approx-factor-euclidean} \mbox{\textup{$\alpha$-RSS}}\xspace computes an $\mathcal{O}(1)$-approximation for the \textsc{Min-$\alpha$-SS} problem in $\mathbb{R}^d$. \end{theorem} \begin{proof} Similar to the proof of Theorem~\ref{thm:rss-approx-factor}, define $\ensuremath{R}\xspace_p = S_{p,\alpha} \cap \ensuremath{R}\xspace$ as the points selected by \mbox{\textup{$\alpha$-RSS}}\xspace that are ``covered'' by $p$ in the optimum solution \textup{OPT}$_\alpha$. Consider two such points $a, b \in \ensuremath{R}\xspace_p$ where without loss of generality, $\dne{a} \leq \dne{b}$. By the definition~of $S_{p,\alpha}$ we know that $\dist{a}{p} < \dne{a}/(1+\alpha)$, and similarly with $b$. Additionally, from the selection of the algorithm we know that $\dist{a}{b} \geq \dne{b}/(1+\alpha)$. Overall, these inequalities imply that the angle $\angle apb \geq \pi/3$. By a simple packing argument, the size of $\ensuremath{R}\xspace_p$ is bounded by the \emph{kissing number} in $d$-dimensional Euclidean space, or simply $\mathcal{O}((3/\pi)^{d-1})$. 
\mbox{Therefore, we have} that $|\ensuremath{R}\xspace| \leq \sum_p |\ensuremath{R}\xspace_p| = |\textup{OPT}_\alpha|\ \mathcal{O}((3/\pi)^{d-1})$. Assuming $d$ is constant, this completes the proof. \end{proof} \begin{comment} \alejandro{The following result can be moved to the appendix and bring the algorithms formal description} \begin{wrapfigure}{r}{.46\textwidth} \vspace*{-.6cm} \begin{center} \includegraphics[width=.26\textwidth]{proof/tight-euclidean} \end{center} \caption{Instance where the analysis of the approximation factor \mbox{of \mbox{\textup{$\alpha$-RSS}}\xspace in $\mathbb{R}^d$ is tight.}}\label{fig:tightexample:euclidean} \end{wrapfigure} This analysis is tight up to constant factors. In Figure~\ref{fig:tightexample:euclidean}, we illustrate a training set \ensuremath{P}\xspace consisting of \emph{red} and \emph{blue} points in $\mathbb{R}^d$, where \mbox{\textup{$\alpha$-RSS}}\xspace selects $\Theta(c^{d-2}\ |\textup{OPT}_\alpha|)$ points. Consider two helper points (which do not belong to \ensuremath{P}\xspace) $c_r = 0\vec{u}_d$ and $c_b = (1+\alpha)\vec{u}_d$, where $\vec{u}_d$ is the unit vector parallel to the $d$-th coordinate. Add red points $r_i$ on the surface of the $d-1$ unit ball centered at $c_r$ and perpendicular to $\vec{u}_d$. Similarly with blue points $b_i$ around $c_b$. Finally, add two points $r_* = -\xi \vec{u}_d$ and $b_* = (1+\alpha+\xi) \vec{u}_d$, for a suitable value $\xi$ such that $\lVert r_* r_i \rVert < 1$. Clearly, the nearest-enemy distance of all $r_i$ and $b_i$ points is $1+\alpha$, while the one of $r_*$ and $b_*$ is strictly greater than $1+\alpha$. Thus, $\textup{OPT}_\alpha = \{r_*, b_*\}$ but \mbox{\textup{$\alpha$-RSS}}\xspace selects $\Theta(c^{d-2})$ points $r_i$ and $b_i$ at distance greater than 1 from each other. \end{comment} Furthermore, a similar constant-factor approximation can be achieved for any training set \ensuremath{P}\xspace in $\ell_p$ space for $p\geq 3$. 
This follows analogously to the proof of Theorem~\ref{thm:rss-approx-factor-euclidean}, exploiting the bounds between $\ell_p$ and $\ell_2$ metrics, where $1/\sqrt{d}\ \lVert v \rVert_p \leq \lVert v \rVert_2 \leq \sqrt{d}\ \lVert v \rVert_p$. This would imply that the angle between any two points in $\mbox{\textup{$\alpha$-RSS}}\xspace_{p}$ is $\Omega(1/d)$. Therefore, it shows that \mbox{\textup{$\alpha$-RSS}}\xspace achieves an approximation factor of $\mathcal{O}(d^{d-1})$, or simply $\mathcal{O}(1)$ for constant dimension. Similarly to the case of doubling spaces, we also establish upper-bounds in terms of $\kappa$ for the selection size of the algorithm in Euclidean space. The following result improves the exponential dependence on the dimensionality of \ensuremath{P}\xspace (from $\textsf{ddim}(\mathbb{R}^d) = \Theta(d)$ to $d-1$), while keeping the dependency on the margin $\gamma$, which contrasts with the approximation~factor~results. \begin{theorem} \label{thm:rss-size-euclidean} In Euclidean space $\mathbb{R}^d$, \mbox{\textup{$\alpha$-RSS}}\xspace selects $\mathcal{O}\left( \kappa \log{\frac{1}{\gamma}}\ (1+\alpha)^{d-1} \right)$ points. \end{theorem} \begin{proof} Let $p$ be any nearest-enemy point of \ensuremath{P}\xspace and $\sigma \in [\gamma,1]$; similarly, define $\ensuremath{R}\xspace'_{p,\sigma}$ to be the set of points selected by \mbox{\textup{$\alpha$-RSS}}\xspace whose nearest-enemy is $p$ and whose nearest-enemy distance is between $\sigma$ and $b\sigma$, for $b = \frac{(1+\alpha)^2}{\alpha(2+\alpha)}$. Equivalently, these subsets define a partitioning of \ensuremath{R}\xspace for all nearest-enemy points $p$ and values of $\sigma = \gamma\,b^k$ for $k=\{0,1,2,\dots,\lceil\log_b{\frac{1}{\gamma}}\rceil\}$.~Thus, the proof follows from bounding the minimum angle between points in these subsets. For any two such points $p_i, p_j \in \ensuremath{R}\xspace'_{p,\sigma}$, we lower bound the angle $\angle p_i p p_j$.
Assume without loss of generality that $\dne{p_i} \leq \dne{p_j}$. By definition of the partitioning, we also know that $\dne{p_j} \leq b\sigma \leq b \, \dne{p_i}$. Therefore, altogether we have that $\dne{p_i} \leq \dne{p_j} \leq b \, \dne{p_i}$. First, consider the set of points whose distance to $p_i$ is $(1+\alpha)$ times their distance~to~$p$, which defines a multiplicative weighted bisector~\cite{AURENHAMMER1984251} between points $p$ and $p_i$, with weights equal to $1$ and $1/(1+\alpha)$ respectively. This is characterized as a $d$-dimensional ball (see Figure~\ref{fig:proof:euclidean:bisector}) with center $c_i = (p_i-p)\, b + p$ and radius $\dne{p_i} \, b/(1+\alpha)$. Thus $p$, $p_i$ and $c_i$ are collinear, and the distance between $p$ and $c_i$ is $\dist{p}{c_i} = b \, \dne{p_i}$. In particular, let us consider the relation between $p_j$ and this bisector. As $p_j$ was selected by the algorithm after $p_i$, we know that $(1+\alpha)\,\dist{p_j}{p_i} \geq \dne{p_j}$ where $\dne{p_j} = \dist{p_j}{p}$. Therefore, clearly $p_j$ lies either outside or on the surface of the weighted bisector between $p$ and $p_i$ (see Figure~\ref{fig:proof:euclidean:general}). \begin{figure*}[h!]
\centering \begin{subfigure}[b]{.237\linewidth} \centering\includegraphics[width=\textwidth]{proof/euclidean-bisector} \caption{Multiplicatively weighted bisectors for different weights.} \label{fig:proof:euclidean:bisector} \end{subfigure}\hfill% \begin{subfigure}[b]{.356\linewidth} \centering\includegraphics[width=\textwidth]{proof/euclidean-general} \caption{Position of point $p_j$\\ \emph{w.r.t.}\xspace the weighted bisector\\ between points $p$ and $p_i$.} \label{fig:proof:euclidean:general} \end{subfigure}\hfill% \begin{subfigure}[b]{.356\linewidth} \centering\includegraphics[width=\textwidth]{proof/euclidean-detail} \caption{The intersection points $x$ and $y$ between the weighted bisector and the limit balls of $\ensuremath{R}\xspace'_{p,\sigma}$.} \label{fig:proof:euclidean:detail} \end{subfigure}% \caption{Construction for the analysis of the minimum angle between two points in $\ensuremath{R}\xspace'_{p,\sigma}$ \emph{w.r.t.}\xspace some nearest-enemy point $p \in \ensuremath{P}\xspace$. Let points $p_i, p_j \in \ensuremath{R}\xspace'_{p,\sigma}$, we analyze the angle $\angle p_i p p_j$.}\label{fig:proof:euclidean} \end{figure*} For angle $\angle p_i p p_j$, we can frame the analysis in the plane defined by $p$, $p_i$ and $p_j$. Let $x$ and $y$ be two points in this plane, such that they are the intersection points between the weighted bisector and the balls centered at $p$ of radii $\dne{p_i}$ and $b\, \dne{p_i}$ respectively (see Figure~\ref{fig:proof:euclidean:detail}). By the convexity of the weighted bisector between $p$ and $p_i$, we can say that $\angle p_i p p_j \geq \min(\angle x p p_i , \angle y p c_i)$. Now, consider the triangles $\triangle p x p_i$ and $\triangle p y c_i$. By the careful selection of $b$, these triangles are both isosceles and similar. In particular, for $\triangle p x p_i$ the two sides incident to $p$ have length equal to $\dne{p_i}$, and the side opposite to $p$ has length equal to $\dne{p_i}/(1+\alpha)$.
For $\triangle p y c_i$, the side lengths are $b\,\dne{p_i}$ and $\dne{p_i}\, b/(1+\alpha)$. Therefore, the angle $\angle p_i p p_j \geq \angle x p p_i \geq 1/(1+\alpha)$. By a simple packing argument based on this minimum angle, we have that the size of $\ensuremath{R}\xspace'_{p,\sigma}$ is $\mathcal{O}((1+\alpha)^{d-1})$. Altogether, following the defined partitioning, we have that: \[ |\ensuremath{R}\xspace| = \sum_{p} \sum_{k=0}^{\lceil\log_b\frac{1}{\gamma}\rceil} |\ensuremath{R}\xspace'_{p,b^k}| \leq \kappa \left\lceil\log_b \frac{1}{\gamma} \right\rceil \mathcal{O}\left((1+\alpha)^{d-1} \right) \] For constant $\alpha$ and $d$, the size of \mbox{\textup{$\alpha$-RSS}}\xspace is $\mathcal{O}(\kappa \log{\frac{1}{\gamma}})$. Moreover, when $\alpha$ is zero, \mbox{\textup{$\alpha$-RSS}}\xspace selects $\mathcal{O}(\kappa\ c^{d-1})$ points, matching the previously known bound for \textup{RSS}\xspace in Euclidean space. \end{proof} \subsection{Subquadratic Algorithm} \label{sec:subquadratic} In this section, we present a subquadratic implementation of the \mbox{\textup{$\alpha$-RSS}}\xspace algorithm, which completes the proof of our main result, Theorem~\ref{thm:coreset:main}. Prior to this result, among algorithms for nearest-neighbor condensation, \textup{FCNN}\xspace achieved the best worst-case time complexity, running in $\mathcal{O}(nm)$ time, where $m = |\ensuremath{R}\xspace|$ is the size of the selected subset. The \mbox{\textup{$\alpha$-RSS}}\xspace algorithm consists of two main stages: computing the nearest-enemy distances of all points in \ensuremath{P}\xspace (and sorting the points based on these), and the selection process itself. The first stage requires a total of $n$ nearest-enemy queries, plus additional $\mathcal{O}(n\log{n})$ time for sorting. The second stage performs $n$ nearest-neighbor queries on the current selected subset \ensuremath{R}\xspace, which needs to be updated $m$ times.
In both cases, using exact nearest-neighbor search would degenerate into linear search due to the \emph{curse of dimensionality}. Thus, the first and second stages of the algorithm would need $\mathcal{O}(n^2)$ and $\mathcal{O}(nm)$ worst-case time, respectively. These bottlenecks can be overcome by leveraging approximate nearest-neighbor techniques. Clearly, the first stage of the algorithm can be improved by computing nearest-enemy distances approximately, using as many \textup{ANN}\xspace structures as there are classes in \ensuremath{P}\xspace, whose number is considered to be a small constant. Therefore, by also applying a simple brute-force search for nearest-neighbors in the second stage, result (i) of the next theorem follows immediately. Moreover, by combining this with standard techniques for static-to-dynamic conversions~\cite{bentley1980decomposable}, we have result (ii) below. Denote this variant of \mbox{\textup{$\alpha$-RSS}}\xspace as \paramRSS{(\alpha,\xi)}, for a parameter $\xi \geq 0$. \begin{theorem} \label{thm:subquadratic:general} Given a data structure for $\xi$-\textup{ANN}\xspace searching with construction time $t_c$ and query time $t_q$ (which potentially depend on $n$ and $\xi$), the \paramRSS{(\alpha,\xi)} variant can be implemented with the following worst-case time complexities, where $m$ is the size of the~selected~subset. \begin{romanenumerate} \item $\mathcal{O} \left( t_c + n\,(t_q + m + \log{n}) \right)$ \item $\mathcal{O} \left( (t_c + n\,t_q) \log{n} \right)$ \end{romanenumerate} \end{theorem} More generally, if we are given an additional data structure for dynamic $\xi$-\textup{ANN}\xspace searching with construction time $t'_c$, query time $t'_q$, and insertion time $t'_i$, the overall running time will be $\mathcal{O} \left( t_c + t'_c + n\,(t_q + t'_q + \log{n}) + m\,t'_i \right)$.
Indeed, this can be used to obtain (ii) from the static-to-dynamic conversions~\cite{bentley1980decomposable}, which propose an approach to convert static search structures into dynamic ones. These results directly imply implementations of \paramRSS{(\alpha,\xi)} with subquadratic worst-case time complexities, based on \textup{ANN}\xspace techniques~\cite{arya2009space,arya2018approximate} for low-dimensional Euclidean space, and using techniques like LSH~\cite{andoni2018approximate} that are suitable for \textup{ANN}\xspace in high-dimensional Hamming and Euclidean spaces. More generally, subquadratic runtimes can be achieved by leveraging techniques~\cite{cole2006searching} for dynamic \textup{ANN}\xspace search in doubling spaces. \begin{comment} \begin{lemma} There exist a data structure for dynamic $\xi$-\textup{ANN}\xspace queries in sets \ensuremath{P}\xspace in $d$-dimensional Euclidean space, that can be constructed in $t'_c = \mathcal{O}(n \log{n})$ time, queried in $t'_q = \mathcal{O}(\log{n} + 1/\xi^{d-1})$ time, and where points of \ensuremath{P}\xspace can be inserted in $t'_i = \mathcal{O}(\log{n})$ time. \end{lemma} \careful{Together with the dynamic-structure scheme described above, this lemma implies that there is a variant of \mbox{\textup{$\alpha$-RSS}}\xspace for Euclidean space that runs in $\mathcal{O}(n\log{n} + n/\xi^{d-1})$ time. Such data structure can be build from a standard BBD tree~\cite{arya1998optimal,chanminimalist} as follows. First, construct the tree from the entire set \ensuremath{P}\xspace, thus taking $t'_c = \mathcal{O}(n\log{n})$ time. However, each node of the tree has some additional data: a boolean flag indicating if the subtree rooted at such node contains a point of the ``active'' subset \ensuremath{R}\xspace. Initially, all flags are set to \emph{false}, making the initial active subset being empty. 
To add a point $p \in \ensuremath{P}\xspace$ to the active subset \ensuremath{R}\xspace, all the flags from the root of the tree to the leaf node containing $p$ must be set to \emph{true}, thus making the insertion time $t'_i = \mathcal{O}(\log{n})$. Finally, a $\xi$-\textup{ANN}\xspace query on such a tree would perform as usual, only avoiding nodes whose flag is set to \emph{false}, yielding a query time of $t'_q = \mathcal{O}(\log{n} + 1/\xi^{d-1})$.} \end{comment} \subparagraph*{Dealing with uncertainty.} Such implementation schemes for \mbox{\textup{$\alpha$-RSS}}\xspace would incur an approximation error (of up to a factor of $1+\xi$) on the computed distances: either only during the first stage if (i) is implemented, or during both stages if (ii) or the dynamic-structure scheme are implemented. The uncertainty introduced by these approximate queries implies that, in order to guarantee finding $\alpha$-selective subsets, we must modify the condition for adding a point during the second stage of the algorithm. Let $\dne{p,\xi}$ denote the $\xi$-approximate nearest-enemy distance of $p$ computed in the first stage, and let $\dnn{p,\ensuremath{R}\xspace,\xi}$ denote the $\xi$-approximate nearest-neighbor distance of $p$ over points of the current subset (computed in the second stage). Then, \paramRSS{(\alpha,\xi)} adds a point $p$ into the subset if $(1+\xi)(1+\alpha)\,\dnn{p,\ensuremath{R}\xspace,\xi} \geq \dne{p,\xi}$. By similar arguments to the ones described in Section~\ref{sec:algorithm:selective}, size guarantees can be extended to \paramRSS{(\alpha,\xi)}. First, the size of the subset selected by \paramRSS{(\alpha,\xi)}, in terms of the number of nearest-enemy points in the set, would be bounded by the size of the subset selected by \paramRSS{\hat{\alpha}} with $\hat{\alpha} = (1+\alpha)(1+\xi)^2-1$.
Additionally, the approximation factor of \paramRSS{(\alpha,\xi)} in both doubling and Euclidean metric spaces would increase by a factor of $\mathcal{O}((1+\xi)^{2(\textup{ddim}(\metricSet)+1)})$. \vspace*{5pt}\noindent This completes the proof of Theorem~\ref{thm:coreset:main}. \subsection{An Algorithm for $\alpha$-Consistent Subsets} \label{sec:algorithm:consistent} Even though the main result of this paper relies on the computation of $\alpha$-selective subsets, Theorem~\ref{thm:coreset:weak} shows that even $\alpha$-consistency is enough to guarantee the correct classification of certain query points. In practice, \textup{FCNN}\xspace~\cite{angiulli2007fast} is the most efficient algorithm for computing consistent subsets. Therefore, in this section, we discuss a simple extension of this algorithm in order to compute $\alpha$-consistent subsets. Recent efforts~\cite{cccg20afloresv} present the first theoretical analysis of the selection size of \textup{FCNN}\xspace. The results are twofold: while the size of the subset selected by \textup{FCNN}\xspace cannot be upper-bounded, a simple modification of the algorithm is sufficient to obtain provable upper-bounds. This modified algorithm is called \textup{SFCNN}\xspace. Both algorithms, \textup{FCNN}\xspace and \textup{SFCNN}\xspace, select points iteratively as follows (see Algorithm~\ref{alg:alpha-sfcnn}). First, the subset \ensuremath{R}\xspace is initialized with one point per class (\emph{e.g.},\xspace the centroid of each class). During every iteration, the algorithm identifies all the points in \ensuremath{P}\xspace that are incorrectly classified with the current \ensuremath{R}\xspace, or simply, those whose nearest-neighbor in \ensuremath{R}\xspace is of a different class.
This is formalized as the \emph{voren} function, defined for every point $p \in \ensuremath{R}\xspace$ as follows: \[ \textup{voren}(p,\ensuremath{R}\xspace,\ensuremath{P}\xspace) = \{ q \in \ensuremath{P}\xspace \mid \nn{q,\ensuremath{R}\xspace} = p \wedge l(q) \neq l(p) \} \] This function identifies all the enemies of $p$ whose nearest-neighbor in \ensuremath{R}\xspace is $p$ itself. The only difference between the original \textup{FCNN}\xspace algorithm and the modified \textup{SFCNN}\xspace appears next. While \textup{FCNN}\xspace adds one point for each $p \in \ensuremath{R}\xspace$ in a batch% \footnote{For \textup{FCNN}\xspace, line 4 of Algorithm~\ref{alg:alpha-sfcnn} updates \ensuremath{R}\xspace by adding all the points in $S$, instead of only one point of $S$.}% , potentially doubling the size of \ensuremath{R}\xspace, \textup{SFCNN}\xspace adds only \emph{one} point per iteration. Then, both algorithms terminate when no other points can be added (\emph{i.e.},\xspace all $\textup{voren}(p,\ensuremath{R}\xspace,\ensuremath{P}\xspace)$ are empty), implying that \ensuremath{R}\xspace is consistent. We can now extend both algorithms to compute $\alpha$-consistent subsets, namely \mbox{\textup{$\alpha$-FCNN}}\xspace and \mbox{\textup{$\alpha$-SFCNN}}\xspace, by redefining the \emph{voren} function. The idea is to identify those points whose nearest-neighbor in \ensuremath{R}\xspace is $p$, and that are either enemies of $p$ or have chromatic density with respect to \ensuremath{R}\xspace less than $\alpha$. This is formally defined as follows: \[ \textup{voren}_\alpha(p,\ensuremath{R}\xspace,\ensuremath{P}\xspace) = \{ q \in \ensuremath{P}\xspace \mid \nn{q,\ensuremath{R}\xspace} = p \wedge (l(q) = l(p) \Rightarrow \chromdens{q,\ensuremath{R}\xspace} < \alpha ) \} \] By plugging this function into the algorithms (see Algorithm~\ref{alg:alpha-sfcnn}), it is easy to show that the resulting subsets are $\alpha$-consistent.
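As a concrete illustration, the $\textup{voren}_\alpha$ rule translates directly into code (a brute-force Python sketch; function and variable names are hypothetical, and \ensuremath{R}\xspace is represented by a list of indices into the training set):

```python
import math

def voren_alpha(p_idx, R, points, labels, alpha):
    """Points of P (outside R) whose nearest neighbor in R is points[p_idx],
    and that are either enemies of it, or same-class points whose chromatic
    density w.r.t. R is below alpha."""
    out = []
    for q in range(len(points)):
        if q in R:
            continue
        # nearest neighbor of q among the currently selected indices R
        nn = min(R, key=lambda r: math.dist(points[q], points[r]))
        if nn != p_idx:
            continue
        if labels[q] != labels[p_idx]:
            out.append(q)  # enemy of p: always reported
        else:
            d_nn = min(math.dist(points[q], points[r]) for r in R)
            d_ne = min(math.dist(points[q], points[r])
                       for r in R if labels[r] != labels[q])
            if d_ne / d_nn - 1 < alpha:  # chromatic density below alpha
                out.append(q)
    return out
```

Note that the same-class branch assumes \ensuremath{R}\xspace already contains one point per class (as guaranteed by the initialization of the algorithm), so an enemy of $q$ always exists in \ensuremath{R}\xspace.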
Moreover, this can be easily implemented to run in $\mathcal{O}(nm)$ worst-case time, where $m$ is the final size of \ensuremath{R}\xspace, extending the implementation scheme described in the paper where \textup{FCNN}\xspace was initially proposed~\cite{angiulli2007fast}. \begin{algorithm} \DontPrintSemicolon \vspace*{0.1cm} \KwIn{Initial training set \ensuremath{P}\xspace and parameter $\alpha \geq 0$} \KwOut{$\alpha$-consistent subset $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$} $\ensuremath{R}\xspace \gets \emptyset$\; $S \gets \textup{centroids}(\ensuremath{P}\xspace)$\; \While{$S \neq \emptyset$}{ $\ensuremath{R}\xspace \gets \ensuremath{R}\xspace \cup \{ \text{Choose one point from } S\}$\; $S \gets \emptyset$\; \ForEach{$p \in \ensuremath{R}\xspace$}{ $S \gets S \cup \{ \text{Choose one point from } \textup{voren}_\alpha(p,\ensuremath{R}\xspace,\ensuremath{P}\xspace) \}$\; } } \KwRet{\ensuremath{R}\xspace} \vspace*{0.1cm} \caption{\mbox{\textup{$\alpha$-SFCNN}}\xspace} \label{alg:alpha-sfcnn} \end{algorithm} Finally, leveraging the analysis described in~\cite{cccg20afloresv}, together with the proofs of Theorems~\ref{thm:rss-approx-factor-cs} and~\ref{thm:rss-size}, we upper-bound the selection size of the \mbox{\textup{$\alpha$-SFCNN}}\xspace algorithm. The proofs of the next results depend on the following observation: if $a,b \in \ensuremath{R}\xspace$ are two points selected by \mbox{\textup{$\alpha$-SFCNN}}\xspace with $\dne{a}, \dne{b} \geq \beta$ for some $\beta \geq 0$, then $\dist{a}{b} \geq \beta/(1+\alpha)$. This follows from a fairly simple argument: to the contrary, suppose that $\dist{a}{b} < \beta/(1+\alpha)$, which would imply that $a$ and $b$ belong to the same class. Without loss of generality, assume point $a$ was added to \ensuremath{R}\xspace before point $b$.
Note that after adding point $a$ to \ensuremath{R}\xspace, the chromatic density of $b$ \emph{w.r.t.}\xspace \ensuremath{R}\xspace is $\chromdens{b,\ensuremath{R}\xspace} > \alpha$, which contradicts the assumption that $b$ was later added to \ensuremath{R}\xspace. \begin{theorem} \label{thm:sfcnn-approx-factor} \mbox{\textup{$\alpha$-SFCNN}}\xspace computes a tight approximation for the \textsc{Min-$\alpha$-CS} problem. \end{theorem} This result follows by similar arguments as the proof of Theorem~\ref{thm:rss-approx-factor-cs}. By considering any two points $a,b \in B_{p,\alpha} \cap \ensuremath{R}\xspace$, we know that $\dist{a}{b} \geq \gamma/(1+\alpha)$, as $\gamma$ is the smallest nearest-enemy distance in \ensuremath{P}\xspace. This implies that \mbox{\textup{$\alpha$-SFCNN}}\xspace can select up to $2^{\textup{ddim}(\metricSet)+1}$ times more points than the \mbox{\textup{$\alpha$-NET}}\xspace algorithm, which yields the proof. \begin{theorem} \label{thm:sfcnn-size} \mbox{\textup{$\alpha$-SFCNN}}\xspace selects $\mathcal{O}\left( \kappa \log{\frac{1}{\gamma}}\ (1+\alpha)^{\textup{ddim}(\metricSet)+1}\right)$ points. \end{theorem} Similarly, this result can be proven using the same arguments outlined to prove Theorem~\ref{thm:rss-size}. After partitioning the selection of \mbox{\textup{$\alpha$-SFCNN}}\xspace into $\mathcal{O}(\kappa \log{1/\gamma})$ subsets, consider any two points $a,b$ in one of these subsets, where $\dne{a}, \dne{b} \in [\sigma, 2\sigma)$, for some $\sigma \in [\gamma,1]$. Therefore, we can show that $\dist{a}{b} \geq \sigma/(1+\alpha)$, which implies that each subset in the partitioning contains at most $\lceil 4(1+\alpha) \rceil^{\textup{ddim}(\metricSet)+1}$ points. This yields the proof. \section{Coreset Characterization} \label{sec:approx-sensitive-nnc} In practice, nearest-neighbors are usually not computed exactly, but rather approximately.
Given an approximation parameter $\varepsilon \geq 0$, an $\varepsilon$-\emph{approximate} nearest-neighbor or $\varepsilon$-\textup{ANN}\xspace query returns any point whose distance from the query point is within a factor of $(1+\varepsilon)$ times the true nearest-neighbor distance. Intuitively, a query point should be easier to classify if its nearest-neighbor is significantly closer than its nearest-enemy. This intuition can be formalized through the concept of the \emph{chromatic density}~\cite{MOUNT200097} of a query point $q \in \mathcal{X}$ with respect to a set $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$, defined as: \begin{equation} \chromdens{q,\ensuremath{R}\xspace} = \frac{\dne{q,\ensuremath{R}\xspace}}{\dnn{q,\ensuremath{R}\xspace}} -1. \end{equation} Clearly, if $\chromdens{q,\ensuremath{R}\xspace} > \varepsilon$ then $q$ will be correctly classified% \footnote{By \emph{correct classification}, we mean that the classification is the same as the classification that results from applying the nearest-neighbor rule exactly on the entire training set \ensuremath{P}\xspace.} by an $\varepsilon$-\textup{ANN}\xspace query \mbox{over \ensuremath{R}\xspace, as all} possible candidates for the approximate nearest-neighbor belong to the same class as $q$'s true nearest-neighbor. However, as evidenced in Figures~\ref{fig:cdheatmap:fcnn} and~\ref{fig:cdheatmap:rss}, one side effect of existing condensation algorithms is a significant reduction in the chromatic density of query points. Consequently, we propose new criteria and algorithms that maintain high chromatic densities after condensation, which are then leveraged to build coresets for the nearest-neighbor rule. 
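The chromatic density is straightforward to compute directly from this definition. The following Python sketch (brute force; names are hypothetical) also makes the classification claim concrete: when $\chromdens{q,\ensuremath{R}\xspace} > \varepsilon$, every candidate that an $\varepsilon$-\textup{ANN}\xspace query may return lies within $(1+\varepsilon)\,\dnn{q,\ensuremath{R}\xspace} < \dne{q,\ensuremath{R}\xspace}$ of $q$, and hence shares the class of $q$'s true nearest-neighbor.

```python
import math

def chrom_density(q, points, labels):
    """rho(q, R) = dne(q, R) / dnn(q, R) - 1, where the enemies of q are the
    points whose class differs from that of q's exact nearest neighbor."""
    d = [math.dist(q, p) for p in points]
    i_nn = min(range(len(points)), key=d.__getitem__)
    d_ne = min(d[j] for j in range(len(points)) if labels[j] != labels[i_nn])
    return d_ne / d[i_nn] - 1

def eps_ann_candidates(q, points, eps):
    """Indices that a (1+eps)-approximate nearest-neighbor query may return:
    all points within (1+eps) times the true nearest-neighbor distance."""
    d = [math.dist(q, p) for p in points]
    bound = (1 + eps) * min(d)
    return [j for j in range(len(points)) if d[j] <= bound]
```

For any query with `chrom_density(q, R, labels) > eps`, every index in `eps_ann_candidates(q, R, eps)` carries the same label as the exact nearest neighbor.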
\subsection{Approximation-Sensitive Condensation} The decision boundaries of the nearest-neighbor rule (that is, points $q$ such that $\dne{q,\ensuremath{P}\xspace} = \dnn{q,\ensuremath{P}\xspace}$) are naturally characterized by points that separate clusters of points of different classes. As illustrated in Figures~\ref{fig:algexample:fcnn}-\ref{fig:algexample:rss}, condensation algorithms tend to select such points. However, this behavior implies a significant reduction of the chromatic density of query points that are far from such boundaries (see Figures~\ref{fig:cdheatmap:fcnn}-\ref{fig:cdheatmap:rss}). \begin{figure*}[t!] \centering \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{heatmap/FCNN.png} \caption{\textup{FCNN}\xspace}\label{fig:cdheatmap:fcnn} \end{subfigure}% \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{heatmap/RSS.png} \caption{\textup{RSS}\xspace}\label{fig:cdheatmap:rss} \end{subfigure}% \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{heatmap/01-RSS.png} \caption{\paramRSS{0.1}}\label{fig:cdheatmap:arss-0.1} \end{subfigure}% \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{heatmap/05-RSS.png} \caption{\paramRSS{0.5}}\label{fig:cdheatmap:arss-0.5} \end{subfigure}% \caption{Heatmap of \emph{chromatic density} values of points in $\mathbb{R}^2$ \emph{w.r.t.}\xspace the subsets computed by different condensation algorithms: \textup{FCNN}\xspace, \textup{RSS}\xspace, and \mbox{\textup{$\alpha$-RSS}}\xspace (see Figure~\ref{fig:algexample}). \emph{Yellow}~{\color{yellowcd}$\bullet$} corresponds to chromatic density values $\geq 0.5$, while \emph{blue}~{\color{bluecd}$\bullet$} corresponds to $0$. 
Evidently, \mbox{\textup{$\alpha$-RSS}}\xspace helps maintain high chromatic density values when compared to standard condensation algorithms.}\label{fig:cdheatmap} \end{figure*} A natural way to define an approximate notion of consistency is to ensure that all points in \ensuremath{P}\xspace are correctly classified by \textup{ANN}\xspace queries over the condensed subset \ensuremath{R}\xspace. Given a condensation parameter $\alpha \geq 0$, we define a subset $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$ to be: \vspace*{5pt} \begin{description} \setlength{\itemsep}{5pt} \item[$\alpha$-consistent] if $\forall\ p \in \ensuremath{P}\xspace,\ \dnn{p,\ensuremath{R}\xspace} < \dne{p,\ensuremath{R}\xspace}/(1+\alpha)$. \item[$\alpha$-selective]\ \ if $\forall\ p \in \ensuremath{P}\xspace,\ \dnn{p,\ensuremath{R}\xspace} < \dne{p,\ensuremath{P}\xspace}/(1+\alpha)$. \end{description} It is easy to see that the standard forms arise as special cases when $\alpha = 0$. These new condensation criteria imply that $\chromdens{p,\ensuremath{R}\xspace} > \alpha$ for every $p \in \ensuremath{P}\xspace$, meaning that $p$ is correctly classified using an $\alpha$-\textup{ANN}\xspace query on \ensuremath{R}\xspace. Note that any $\alpha$-selective subset is also $\alpha$-consistent. Such subsets always exist for any $\alpha \geq 0$ by taking $\ensuremath{R}\xspace = \ensuremath{P}\xspace$. Current condensation algorithms cannot guarantee either $\alpha$-consistency or $\alpha$-selectiveness unless $\alpha$ is equal to zero. Therefore, the central algorithmic challenge is how to efficiently compute such subsets whose sizes are significantly smaller than that of \ensuremath{P}\xspace. We propose new algorithms to compute such subsets, which showcase how to maintain high chromatic density values after condensation, as evidenced in Figures~\ref{fig:cdheatmap:arss-0.1} and \ref{fig:cdheatmap:arss-0.5}.
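Both criteria can be checked directly from their definitions; the following Python sketch (brute force; function names are hypothetical, and the subset \ensuremath{R}\xspace is given as a list of indices) tests whether a given subset is $\alpha$-consistent or $\alpha$-selective:

```python
import math

def is_alpha_consistent(R, points, labels, alpha):
    """For every p in P:  dnn(p, R) < dne(p, R) / (1 + alpha)."""
    for i in range(len(points)):
        d = {r: math.dist(points[i], points[r]) for r in R}
        d_nn = min(d.values())  # zero when i itself belongs to R
        d_ne = min(d[r] for r in R if labels[r] != labels[i])
        if not d_nn < d_ne / (1 + alpha):
            return False
    return True

def is_alpha_selective(R, points, labels, alpha):
    """For every p in P:  dnn(p, R) < dne(p, P) / (1 + alpha)."""
    n = len(points)
    for i in range(n):
        d_nn = min(math.dist(points[i], points[r]) for r in R)
        d_ne = min(math.dist(points[i], points[j])
                   for j in range(n) if labels[j] != labels[i])
        if not d_nn < d_ne / (1 + alpha):
            return False
    return True
```

Since $\dne{p,\ensuremath{P}\xspace} \leq \dne{p,\ensuremath{R}\xspace}$ for any $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$, any subset passing the second check also passes the first, matching the observation that $\alpha$-selective implies $\alpha$-consistent.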
This empirical evidence is matched with theoretical guarantees for $\alpha$-consistent and $\alpha$-selective subsets, described in the following section. \subsection{Guarantees on Classification Accuracy} These newly defined criteria for nearest-neighbor condensation enforce lower bounds on the chromatic density of any point of \ensuremath{P}\xspace after condensation. However, this does not immediately imply similar lower bounds for unlabeled query points of $\mathcal{X}$. In this section, we prove useful bounds on the chromatic density of query points, and characterize sufficient conditions to correctly classify some of these query points after condensation. Intuitively, the chromatic density determines how easy it is to correctly classify a query point $q \in \mathcal{X}$. We show that the ``ease'' of classification of $q$ after condensation (\emph{i.e.},\xspace~$\chromdens{q,\ensuremath{R}\xspace}$) depends on both the condensation parameter $\alpha$, and the chromatic density of $q$ before condensation (\emph{i.e.},\xspace~$\chromdens{q,\ensuremath{P}\xspace}$). This result is formalized in the following lemma: \begin{lemma} \label{lemma:bound-chromdens} Let $q \in \mathcal{X}$ be a query point, and \ensuremath{R}\xspace an $\alpha$-consistent subset of \ensuremath{P}\xspace, for $\alpha \geq 0$. Then, $q$'s chromatic density with respect to \ensuremath{R}\xspace satisfies: \begin{equation*} \chromdens{q,\ensuremath{R}\xspace} > \frac{\alpha \, \chromdens{q,\ensuremath{P}\xspace} - 2}{\chromdens{q,\ensuremath{P}\xspace} + \alpha + 3}. \end{equation*} \end{lemma} \begin{proof} The proof follows by analyzing $q$'s nearest-enemy distance in \ensuremath{R}\xspace. To this end, consider the point $p \in \ensuremath{P}\xspace$ that is $q$'s nearest-neighbor in \ensuremath{P}\xspace. There are two possible cases: \begin{description} \item[Case 1:] If $p \in \ensuremath{R}\xspace$, clearly $\dnn{q,\ensuremath{R}\xspace} = \dnn{q,\ensuremath{P}\xspace}$.
Additionally, it is easy to show that after condensation, $q$'s nearest-enemy distance can only increase: \emph{i.e.},\xspace~$\dne{q,\ensuremath{P}\xspace} \leq \dne{q,\ensuremath{R}\xspace}$. This implies that $\chromdens{q,\ensuremath{R}\xspace} \geq \chromdens{q,\ensuremath{P}\xspace}$. \item[Case 2:] If $p \not\in \ensuremath{R}\xspace$, we can upper-bound $q$'s nearest-neighbor distance in \ensuremath{R}\xspace as follows: \end{description} Since \ensuremath{R}\xspace is an $\alpha$-consistent subset of \ensuremath{P}\xspace, we know that there exists a point $r \in \ensuremath{R}\xspace$~such~that $\dist{p}{r} < \dne{p,\ensuremath{R}\xspace}/(1+\alpha)$. By the triangle inequality and the definition of nearest-enemy, $\dne{p,\ensuremath{R}\xspace} \leq \dist{p}{\nenemy{q,\ensuremath{R}\xspace}} \leq \dist{q}{p} + \dne{q,\ensuremath{R}\xspace}$. Additionally, applying the definition of chromatic density on $q$ and knowing that $\dne{q,\ensuremath{P}\xspace} \leq \dne{q,\ensuremath{R}\xspace}$, we have $\dist{q}{p} = \dnn{q,\ensuremath{P}\xspace} = \dne{q,\ensuremath{P}\xspace}/(1+\chromdens{q,\ensuremath{P}\xspace}) \leq \dne{q,\ensuremath{R}\xspace}/(1+\chromdens{q,\ensuremath{P}\xspace})$. Therefore: \begin{align*} \dnn{q,\ensuremath{R}\xspace} \leq \dist{q}{r} &\leq \dist{q}{p} + \dist{p}{r}\\ &< \dist{q}{p} + \frac{\dist{q}{p} + \dne{q,\ensuremath{R}\xspace}}{1+\alpha} \leq \left( \frac{\chromdens{q,\ensuremath{P}\xspace} + \alpha + 3}{(1+\alpha)(1+\chromdens{q,\ensuremath{P}\xspace})} \right) \dne{q,\ensuremath{R}\xspace}. \end{align*} \vspace*{10pt} Finally, from the definition of $\chromdens{q,\ensuremath{R}\xspace}$, we have:\\ \indent\( \displaystyle \chromdens{q,\ensuremath{R}\xspace} = \frac{\dne{q,\ensuremath{R}\xspace}}{\dnn{q,\ensuremath{R}\xspace}} - 1 > \frac{(1+\alpha)(1+\chromdens{q,\ensuremath{P}\xspace})}{\chromdens{q,\ensuremath{P}\xspace}+\alpha+3} - 1 = \frac{\alpha\,\chromdens{q,\ensuremath{P}\xspace} - 2}{\chromdens{q,\ensuremath{P}\xspace} + \alpha + 3}.
\) \end{proof} The above result can be leveraged to define a coreset, in the sense that an exact result on the coreset corresponds to an approximate result on the original set. As previously defined, we say that a set $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$ is an \emph{$\varepsilon$-coreset for the nearest-neighbor rule} on \ensuremath{P}\xspace, if and only if for every query point $q \in \mathcal{X}$, the class of $q$'s exact nearest-neighbor in \ensuremath{R}\xspace is the same as the class of any of its $\varepsilon$-approximate nearest-neighbors in \ensuremath{P}\xspace. \begin{lemma} \label{thm:coreset:isconsistent} Any $\varepsilon$-coreset for the nearest-neighbor rule is an $\alpha$-consistent subset, for some $\alpha \geq 0$. \end{lemma} \begin{proof} Consider any $\varepsilon$-coreset $C \subseteq \ensuremath{P}\xspace$ for the nearest-neighbor rule on \ensuremath{P}\xspace. Since the approximation guarantee holds for any point in $\mathcal{X}$, it holds for any $p \in \ensuremath{P}\xspace \setminus C$. We know $p$'s nearest-neighbor in the original set \ensuremath{P}\xspace is $p$ itself, thus making $\dnn{p,\ensuremath{P}\xspace}$ zero. This implies that $p$ must be correctly classified by a nearest-neighbor query on $C$, that is, $\dnn{p,C} < \dne{p,C}$, which is the definition of $\alpha$-consistency with $\alpha = 0$; since \ensuremath{P}\xspace is finite, the strict inequality also yields $\alpha$-consistency for some $\alpha > 0$. \end{proof} \begin{theorem} \label{thm:coreset:nn} Any $2/\varepsilon$-selective subset is an $\varepsilon$-coreset for the nearest-neighbor rule. \end{theorem} \begin{proof} Let \ensuremath{R}\xspace be an $\alpha$-selective subset of \ensuremath{P}\xspace, where $\alpha = 2/\varepsilon$. Consider any query point $q \in \mathcal{X}$. It suffices to show that its nearest-neighbor in \ensuremath{R}\xspace is of the same class as any $\varepsilon$-approximate nearest-neighbor in \ensuremath{P}\xspace.
To this end, consider $q$'s chromatic density with respect to both \ensuremath{P}\xspace and \ensuremath{R}\xspace, denoted as $\chromdens{q,\ensuremath{P}\xspace}$ and $\chromdens{q,\ensuremath{R}\xspace}$, respectively. We identify two cases: \begin{description} \setlength{\itemsep}{5pt} \item[Case 1 (Correct-Classification guarantee):] If $\chromdens{q,\ensuremath{P}\xspace} \geq \varepsilon$.\\ Consider the bound derived in Lemma~\ref{lemma:bound-chromdens}. By our assumption that $\chromdens{q,\ensuremath{P}\xspace} \geq \varepsilon > 0$, setting $\alpha=2/\varepsilon$ makes the numerator $\alpha\,\chromdens{q,\ensuremath{P}\xspace} - 2$ nonnegative, and thus $\chromdens{q,\ensuremath{R}\xspace} > 0$. This means that the nearest-neighbor of $q$ in \ensuremath{R}\xspace belongs to the same class as the nearest-neighbor of $q$ in \ensuremath{P}\xspace. Intuitively, this guarantees that $q$ is correctly classified by the nearest-neighbor rule in \ensuremath{R}\xspace. \item[Case 2 ($\varepsilon$-Approximation guarantee):] If $\chromdens{q,\ensuremath{P}\xspace} < \varepsilon$.\\ Let $p \in \ensuremath{P}\xspace$ be $q$'s nearest-neighbor in \ensuremath{P}\xspace, thus $\dist{q}{p} = \dnn{q,\ensuremath{P}\xspace}$. Since \ensuremath{R}\xspace is $\alpha$-selective, there exists a point $r \in \ensuremath{R}\xspace$ such that $\dist{p}{r} = \dnn{p,\ensuremath{R}\xspace} < \dne{p,\ensuremath{P}\xspace}/(1+\alpha)$. Additionally, by the triangle inequality and the definition of nearest-enemies, we have \[ \dne{p,\ensuremath{P}\xspace} \leq \dist{p}{\nenemy{q,\ensuremath{P}\xspace}} \leq \dist{p}{q} + \dist{q}{\nenemy{q,\ensuremath{P}\xspace}} = \dnn{q,\ensuremath{P}\xspace}+\dne{q,\ensuremath{P}\xspace}. \] From the definition of chromatic density, $\dne{q,\ensuremath{P}\xspace} = (1+\chromdens{q,\ensuremath{P}\xspace})\,\dnn{q,\ensuremath{P}\xspace}$. Together, these inequalities imply that $(1+\alpha)\,\dist{p}{r} \leq (2+\chromdens{q,\ensuremath{P}\xspace})\,\dnn{q,\ensuremath{P}\xspace}$.
Therefore: \begin{equation*} \dnn{q,\ensuremath{R}\xspace} \leq \dist{q}{r} \leq \dist{q}{p} + \dist{p}{r} \leq \left( 1 + \frac{2+\chromdens{q,\ensuremath{P}\xspace}}{1+\alpha} \right)\dnn{q,\ensuremath{P}\xspace}. \end{equation*} Now, the assumption $\chromdens{q,\ensuremath{P}\xspace} < \varepsilon$ together with the setting $\alpha = 2/\varepsilon$ implies that $\dnn{q,\ensuremath{R}\xspace} < (1+\varepsilon)\,\dnn{q,\ensuremath{P}\xspace}$. Therefore, the nearest-neighbor of $q$ in \ensuremath{R}\xspace is an $\varepsilon$-approximate nearest-neighbor of $q$ in \ensuremath{P}\xspace. \end{description} \vspace*{3pt} Cases~1 and 2 imply that setting $\alpha = 2/\varepsilon$ is sufficient to ensure that the nearest-neighbor rule classifies any query point $q \in \mathcal{X}$ with the class of one of its $\varepsilon$-approximate nearest-neighbors in \ensuremath{P}\xspace. Therefore, \ensuremath{R}\xspace is an $\varepsilon$-coreset for the nearest-neighbor rule on \ensuremath{P}\xspace. \end{proof} So far, we have assumed that nearest-neighbor queries over \ensuremath{R}\xspace are computed exactly, as this is the standard notion of coresets. However, it is reasonable to compute nearest-neighbors approximately even over \ensuremath{R}\xspace. How should the two approximations be combined to achieve a desired final degree of accuracy? Consider another approximation parameter $\xi$, where $0\leq\xi<\varepsilon$. We say that a set $\ensuremath{R}\xspace \subseteq \ensuremath{P}\xspace$ is a \emph{$(\xi,\varepsilon)$-coreset} for the approximate nearest-neighbor rule on \ensuremath{P}\xspace, if and only if for every query point $q \in \mathcal{X}$, the class of any of $q$'s $\xi$-approximate nearest-neighbors in \ensuremath{R}\xspace is the same as the class of any of its $\varepsilon$-approximate nearest-neighbors in \ensuremath{P}\xspace. The following result generalizes Theorem~\ref{thm:coreset:nn} to accommodate $\xi$-\textup{ANN}\xspace queries after condensation.
\begin{theorem} \label{thm:coreset:ann} Any $\alpha$-selective subset is a $(\xi,\varepsilon)$-coreset for the approximate nearest-neighbor rule when $\alpha = \Omega\kern-1pt\left( 1/(\varepsilon - \xi) \right)$. \end{theorem} \begin{proof} This follows from similar arguments to the ones described in the proof of Theorem~\ref{thm:coreset:nn}. Instead, here we set $\alpha = (\varepsilon\kern1pt\xi +3\xi + 2)/(\varepsilon - \xi)$. Let \ensuremath{R}\xspace be an $\alpha$-selective subset of \ensuremath{P}\xspace, and let $q \in \mathcal{X}$ be any query point; consider the same two cases: \vspace*{5pt} \begin{description} \setlength{\itemsep}{5pt} \item[Case 1 (Correct-Classification guarantee):] If $\chromdens{q,\ensuremath{P}\xspace} \geq \varepsilon$.\\Consider the bound derived in Lemma~\ref{lemma:bound-chromdens}. By our assumption that $\chromdens{q,\ensuremath{P}\xspace} \geq \varepsilon > 0$, and since $\alpha \geq 0$, the following inequality holds: \[ \chromdens{q,\ensuremath{R}\xspace} > \frac{\alpha\,\chromdens{q,\ensuremath{P}\xspace} - 2}{\chromdens{q,\ensuremath{P}\xspace} + \alpha + 3} \geq \frac{\alpha\varepsilon - 2}{\varepsilon + \alpha + 3} \] Based on this, it is easy to see that the assignment of $\alpha = (\varepsilon\kern1pt\xi +3\xi + 2)/(\varepsilon - \xi)$ implies that $\chromdens{q,\ensuremath{R}\xspace} > \xi$, meaning that any of $q$'s $\xi$-approximate nearest-neighbors in \ensuremath{R}\xspace belongs to the same class as $q$'s nearest-neighbor in \ensuremath{P}\xspace. Intuitively, this guarantees that $q$ is correctly classified by the $\xi$-\textup{ANN}\xspace rule in \ensuremath{R}\xspace. \item[Case 2 ($\varepsilon$-Approximation guarantee):] If $\chromdens{q,\ensuremath{P}\xspace} < \varepsilon$.\\ The assignment of $\alpha$ implies that $\dnn{q,\ensuremath{R}\xspace} < \frac{1+\varepsilon}{1+\xi}\,\dnn{q,\ensuremath{P}\xspace}$.
This means that a $\xi$-\textup{ANN}\xspace query for $q$ in \ensuremath{R}\xspace will return one of $q$'s $\varepsilon$-approximate nearest-neighbors in \ensuremath{P}\xspace. \end{description} \vspace*{5pt} Altogether, this implies that \ensuremath{R}\xspace is a $(\xi,\varepsilon)$-coreset for the nearest-neighbor rule on \ensuremath{P}\xspace. \end{proof} In contrast with standard condensation criteria, these new results provide guarantees, of either approximation or correct classification, for any query point in the metric space. This is true even for query points that were ``hard'' to classify with the entire training set, formally defined as query points with low chromatic density. Consequently, Theorems~\ref{thm:coreset:nn}~and~\ref{thm:coreset:ann} show that $\alpha$ must be set to large values if we hope to provide any sort of guarantees for these query points. However, better results can be achieved by restricting the set of points that are guaranteed to be correctly classified. This relates to the notion of \emph{weak} coresets, which provide approximation guarantees only for a subset of the possible queries. Given $\beta \geq 0$, we define $\mathcal{Q}_\beta$ as the set of query points in $\mathcal{X}$ whose chromatic density with respect to \ensuremath{P}\xspace is at least $\beta$ (\emph{i.e.},\xspace $\mathcal{Q}_\beta = \{ q \in \mathcal{X} \mid \chromdens{q,\ensuremath{P}\xspace} \geq \beta \}$). The following result describes the trade-off between $\alpha$ and $\beta$ to guarantee the correct classification of query points in $\mathcal{Q}_\beta$ after condensation. \begin{theorem} \label{thm:coreset:weak} Any $\alpha$-consistent subset is a weak $\varepsilon$-coreset for the nearest-neighbor rule for queries in $\mathcal{Q}_\beta$, for $\beta = 2/\alpha$. Moreover, all query points in $\mathcal{Q}_\beta$ are correctly classified.
\end{theorem} The proof of this theorem is rather simple, and follows the same arguments outlined in Case 1 of the proof of Theorem~\ref{thm:coreset:nn}. Basically, we use Lemma~\ref{lemma:bound-chromdens} to show that for any query point $q \in \mathcal{Q}_\beta$, $q$'s chromatic density after condensation is greater than zero if $\alpha \beta \geq 2$. Note that $\varepsilon$ plays no role in this result, as the guarantee on query points of $\mathcal{Q}_\beta$ is of correct classification (\emph{i.e.},\xspace the class of its \emph{exact} nearest-neighbor in \ensuremath{P}\xspace), rather than an approximation. The trade-off between $\alpha$ and $\beta$ is illustrated in Figure~\ref{fig:weakcoreset}. From an initial training set $\ensuremath{P}\xspace \subset \mathbb{R}^2$ (Figure~\ref{fig:weakcoreset:dataset}), we show the regions of $\mathbb{R}^2$ that comprise the sets $\mathcal{Q}_\beta$ for $\beta = 2/\alpha$, using $\alpha = \{ 0.1, 0.2, \sqrt{2} \}$ (Figures~\ref{fig:weakcoreset:20}-\ref{fig:weakcoreset:sqrt2}). While evidently, increasing $\alpha$ guarantees that more query points will be correctly classified after condensation, this example demonstrates a phenomenon commonly observed experimentally: most query points lie far from enemy points, and thus have high chromatic density with respect to \ensuremath{P}\xspace. Therefore, while Theorem~\ref{thm:coreset:nn} states that $\alpha$ must be set to $2/\varepsilon$ to provide approximation guarantees on all query points, Theorem~\ref{thm:coreset:weak} shows that much smaller values of $\alpha$ are sufficient to provide guarantees on some query points, as evidenced in the example in Figure~\ref{fig:weakcoreset}. \begin{figure*}[h!] 
\vspace*{.2cm} \centering \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{coreset/dataset.png} \caption{Training set (200\,pts)}\label{fig:weakcoreset:dataset} \end{subfigure}% \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{coreset/20.png} \caption{$\mathcal{Q}_{2/\alpha}$ for $\alpha = 0.1$}\label{fig:weakcoreset:20} \end{subfigure}% \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{coreset/10.png} \caption{$\mathcal{Q}_{2/\alpha}$ for $\alpha = 0.2$}\label{fig:weakcoreset:10} \end{subfigure}% \begin{subfigure}[b]{.25\linewidth} \centering\includegraphics[width=.9\textwidth]{coreset/sqrt2.png} \caption{$\mathcal{Q}_{2/\alpha}$ for $\alpha = \sqrt{2}$}\label{fig:weakcoreset:sqrt2} \end{subfigure}% \caption{Depiction of the $\mathcal{Q}_\beta$ sets for which any $\alpha$-consistent subset is weak coreset ($\beta = 2/\alpha$). Query points in the \emph{yellow}~{\color{yellowcd}$\bullet$} areas are inside $\mathcal{Q}_\beta$, and thus correctly classified after condensation. Query points in the \emph{blue}~{\color{bluecd}$\bullet$} areas are not in $\mathcal{Q}_\beta$, and have no guarantee of correct classification.}\label{fig:weakcoreset} \end{figure*} \begin{comment} \begin{theorem} \label{thm:coreset:lowalpha} Every query point in $\mathcal{Q}_\alpha$ is correctly classified by a $\xi$-\textup{ANN}\xspace query on any $\alpha$-consistent subset when $\alpha = 2+2\xi$. These subsets are weak $(\xi,\varepsilon)$-coresets for the nearest-neighbor rule for queries in $\mathcal{Q}_\alpha$, for any $\xi,\varepsilon > 0$. \end{theorem} \begin{proof} First, recall the definition of the set $\mathcal{Q}_\alpha = \bigcup_{p \in \ensuremath{P}\xspace} \mathcal{B}(p,\dne{p}/(1+\alpha))$ containing all possible query points from $\mathcal{X}$ whose distance to its nearest-neighbor $p$ in \ensuremath{P}\xspace is within $\dne{p}/(1+\alpha)$. 
Let \ensuremath{R}\xspace be an $\alpha$-consistent subset of \ensuremath{P}\xspace, and consider any query point $q \in \mathcal{Q}_\alpha$. Without loss of generality, let point $p \in \ensuremath{P}\xspace$ be $q$'s nearest-neighbor in \ensuremath{P}\xspace; then, by the definition of $\mathcal{Q}_\alpha$, we know that $\dist{q}{p} \leq \dne{p}/(1+\alpha) \leq \dne{p,\ensuremath{R}\xspace}/(1+\alpha)$. Similarly, by $\alpha$-consistency, there exists a point $r \in \ensuremath{R}\xspace$ such that $\dist{p}{r} = \dnn{p,\ensuremath{R}\xspace} < \dne{p,\ensuremath{R}\xspace}/(1+\alpha)$. By simple applications of the triangle inequality, it is easy to show the following bounds: \begin{align*} \dne{q,\ensuremath{R}\xspace} &\geq \dist{p}{\nenemy{q,\ensuremath{R}\xspace}} - \dist{q}{p}\\ &\geq \dne{p,\ensuremath{R}\xspace} - \dne{p,\ensuremath{R}\xspace}/(1+\alpha)\\ &= \frac{\alpha}{1+\alpha}\,\dne{p,\ensuremath{R}\xspace}.\\ \dnn{q,\ensuremath{R}\xspace} &\leq \dist{q}{p} + \dist{p}{r}\\ &< \frac{2}{1+\alpha}\,\dne{p,\ensuremath{R}\xspace}. \end{align*} Therefore, we can bound $q$'s chromatic density after condensation as $\chromdens{q,\ensuremath{R}\xspace} > \alpha/2 - 1$. Setting $\alpha = 2+2\xi$ is enough to ensure that $\chromdens{q,\ensuremath{R}\xspace} > \xi$, which implies that any $\xi$-approximate nearest-neighbor of $q$ in \ensuremath{R}\xspace belongs to the same class as $q$'s nearest-neighbor in \ensuremath{P}\xspace. Hence, any query point $q \in \mathcal{Q}_\alpha$ is correctly classified by the $\xi$-\textup{ANN}\xspace rule in \ensuremath{R}\xspace. \end{proof} \end{comment} These results establish a clear connection between the problem of condensation and that of finding coresets for the nearest-neighbor rule, and provide a roadmap to prove Theorem~\ref{thm:coreset:main}. 
This is the first characterization of sufficient conditions to correctly classify any query point in $\mathcal{X}$ after condensation, and not just the points in \ensuremath{P}\xspace (as the original consistency criterion implies). In the following section, these existential results are matched with algorithms to compute $\alpha$-selective subsets of \ensuremath{P}\xspace of bounded cardinality.
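To make the membership test behind these guarantees concrete, the following is a minimal numerical sketch. It assumes the definition $\mathcal{Q}_\beta = \bigcup_{p \in \ensuremath{P}\xspace} \mathcal{B}(p, \dne{p}/(1+\beta))$ used in the (omitted) proof above, where $\dne{p}$ is the distance from $p$ to its nearest enemy (closest point of a different class); the function names and the brute-force distance computation are illustrative, not part of the paper.

```python
import numpy as np

def nearest_enemy_dist(P, labels):
    # d_ne(p): distance from each point p to its closest point of a different class
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    enemy = labels[:, None] != labels[None, :]
    return np.where(enemy, D, np.inf).min(axis=1)

def in_Q_beta(q, P, labels, beta):
    # q lies in Q_beta iff q is inside some ball B(p, d_ne(p) / (1 + beta))
    dne = nearest_enemy_dist(P, labels)
    dq = np.linalg.norm(P - q, axis=1)
    return bool(np.any(dq <= dne / (1.0 + beta)))
```

Larger $\beta$ (equivalently, smaller $\alpha = 2/\beta$) shrinks the balls, so fewer query points enjoy the correctness guarantee, matching the shrinking yellow regions in Figure~\ref{fig:weakcoreset}.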
\section{Introduction} \label{sec:Intro} The application of neural networks (NNs) as approximation architecture in numerical solution methods of partial differential equations (PDEs), possibly on high-dimensional parameter- and state-spaces, has attracted significant and increasing attention in recent years. We mention only \cite{SirSpil2018,EDeepRitz,RaisKarnHidPhys,RaisKarnPINN,sheng2020pfnn} and the references therein. In these works, the solution of elliptic and parabolic boundary value problems is approximated by NNs which are found by minimization of a residual of the NN in the PDE. A necessary condition for the performance of the mentioned NN-based numerical approximation methods is a high rate of approximation which is to hold uniformly over the solution set associated with the PDE under consideration. For elliptic boundary and eigenvalue problems, the function classes that weak solutions of the problems belong to are well known. Moreover, in many cases, representation systems such as splines or polynomials that achieve optimal linear or nonlinear approximation rates for functions belonging to these function classes have been identified. For functions belonging to a Sobolev or Besov type smoothness space of finite differentiation order such as continuously differentiable or Sobolev-regular functions on a bounded domain, upper bounds for the approximation rate by NNs were established for example in \cite{YAROTSKY2017103,guhring2019error,yarotsky2018optimal,lu2020deep,suzuki2018adaptivity}. Here, we only mentioned results that use the ReLU activation function. Besides, for PDEs, the solutions of which have a particular structure, approximation rates of the solution that go beyond classical smoothness-based results can be achieved, such as in \cite{EGJS2018,schwab2017deep,laakmann2020efficient,berner2018analysis,jentzen2018proof}. 
Again, we confine the list to publications with approximation rates for NNs with the ReLU activation function (referred to as ReLU NNs below). In the present paper, we analyze approximation rates provided by ReLU NNs for solution classes of linear and nonlinear elliptic source and eigenvalue problems on polygonal and polyhedral domains. Mathematical results on weighted analytic regularity \cite{GuoBab1,GuoBab2,GuoBab3,GuoBabCurv,BDC85,GuoScStokes,Mazya2010,CoDaNi12,MadMarc2019,MS19_2743} imply that these classes consist of functions that are \emph{analytic with possible corner, edge, and corner-edge singularities}. Our analysis provides, for the aforementioned functions, approximation errors in Sobolev norms that decay exponentially in terms of the number of parameters $M$ of the ReLU NNs. \subsection{Contribution} \label{S:Contrib} The principal contribution of this work is threefold: \begin{enumerate} \item We prove, in Theorem \ref{th:ReLUapprox}, a general result on the approximation by ReLU NNs of weighted analytic function classes on $Q \coloneqq (0,1)^d$, where $d = 2,3$. The analytic regularity of solutions is quantified via countably normed, analytic classes, based on weighted Sobolev spaces of Kondrat'ev type in $Q$, which admit corner and, in space dimension $d=3$, also edge singularities. Such classes were introduced, e.g., in \cite{BDC85,GuoBabCurv,GuoBab3,GuoBab1,GuoBab2,CoDaNi12} and in the references there. We prove exponential expression rates by ReLU NNs in the sense that for a number $M$ of free parameters of the NNs, the approximation error is bounded, in the $H^1$-norm, by $C\exp(-bM^{1/(2d+1)})$ for constants $b,C > 0$. \item Based on the ReLU NN approximation rate bound of Theorem \ref{th:ReLUapprox}, we establish, in Section \ref{sec:applications}, approximation results for solutions of different types of PDEs by NNs with ReLU activation. 
Concretely, in Section \ref{sec:NonlSchrEq}, we study the reapproximation of solutions of nonlinear Schr\"odinger equations with singular potentials in space dimension $d=2,3$. We prove that for solutions which are contained in weighted, analytic classes in $\Omega$, ReLU NNs (whose realizations are continuous, piecewise affine) with at most $M$ free parameters yield an approximation with accuracy of the order $\exp(-bM^{1/(2d+1)})$ for some $b>0$. Importantly, this convergence is in the $H^1(\Omega)$-norm. In Section \ref{sec:HF}, we establish the same exponential approximation rates for the eigenstates of the Hartree-Fock model with singular potential in $\mathbb{R}^3$. This result constitutes the first, to our knowledge, mathematical underpinning of the recently reported high efficiency of various NN-based approaches in variational electron structure computations, e.g., \cite{pfau2019abinitio,hermann2019deep,ESchroed2019}. In Section \ref{sec:polygonal}, we demonstrate the same approximation rates also for elliptic boundary value problems with analytic coefficients and analytic right-hand sides, in plane, polygonal domains $\Omega$. The approximation error of the NNs is, again, bounded in the $H^1(\Omega)$-norm. We also infer an exponential NN expression rate bound for corresponding traces in $H^{1/2}(\partial\Omega)$, and for viscous, incompressible flow. Finally, in Section \ref{sec:EllPDEFichera}, we obtain the same asymptotic exponential rates for the approximation of solutions to elliptic boundary value problems, with analytic data, on so-called Fichera-type domains $\Omega\subset {\mathbb R}^3$ (being, roughly speaking, finite unions of tensorized hexahedra). These solutions exhibit corner, edge and corner-edge singularities. \item The exponential approximation rates of the ReLU NNs established here are based on emulating corresponding variable grid and degree (``$hp$'') piecewise polynomial approximations. 
In particular, our construction comprises tensor product $hp$-approximations on Cartesian products of geometric partitions of intervals. In Theorem \ref{prop:internal}, we establish novel \emph{tensor product $hp$-approximation results} for weighted analytic functions on $Q$ of the form $\| u - u_{\mathsf{hp}} \|_{H^1(Q)} \leq C \exp(-b\sqrt[2d]{N})$ for $d=1,2,3$, where $N$ is the number of degrees of freedom in the representation of $u_{\mathsf{hp}}$ and $C,b>0$ are independent of $N$ (but depend on $u$). The geometric partitions employed in $Q$ and the architectures of the constructed ReLU NNs are of tensor product structure, and generalize to space dimension $d>3$. We note that $hp$-approximations based on non-tensor-product, geometric partitions of $Q$ into hexahedra have been studied before e.g. in \cite{SSS15_2016,SchSch2018} in space dimension $d=3$. There, the rate of $\| u - u_{\mathsf{hp}} \|_{H^1(Q)} \lesssim \exp(-b\sqrt[5]{N})$ was found. Being based on tensorization, the present construction of exponentially convergent, tensorized $hp$-approximations in Appendix \ref{sec:hp-analysis} does not invoke the rather involved polynomial trace liftings in \cite{SSS15_2016,SchSch2018}, and is interesting in its own right: the geometric and mathematical simplification comes at the expense of a slightly smaller (still exponential) rate of approximation. Moreover, we expect that this construction of $u_{\mathsf{hp}}$ will allow a rather direct derivation of rank bounds for tensor structured function approximation of $u$ in $Q$, generalizing results in \cite{KORS17_2264,KS18_2116} and extending \cite{MRS19_872} from point to edge and corner-edge singularities. \end{enumerate} \subsection{Neural network approximation of weighted analytic function classes} \label{sec:ReapprNNs} The proof strategy that yields the main result, Theorem \ref{th:ReLUapprox}, is as follows. 
We first establish exponential approximation rates in the Sobolev space $H^1$ for tensor-product, so-called $hp$-finite elements for weighted analytic functions. Then, we re-approximate the corresponding quasi-interpolants by ReLU NNs. The emulation of $hp$-finite element approximation by ReLU NNs is fundamentally based on the approximate multiplication network formalized in \cite{YAROTSKY2017103}. Based on the approximate multiplication operation and an extension thereof to errors measured with respect to $W^{1,q}$-norms, for $q \in [1,\infty]$, we already established in \cite{OPS19_811} a reapproximation theorem for univariate splines of order $p\in \mathbb{N}$ on arbitrary meshes with $N\in \mathbb{N}$ cells. There, we observed that there exists a NN that reapproximates a variable-order, free-knot spline $u$ in the $H^1$-norm up to an error of $\epsilon>0$ with a number of free parameters that scales logarithmically in $\epsilon$ and $|u|_{H^1}$, linearly in $N$ and quadratically in $p$. We recall this result in Proposition \ref{prop:relupwpolynom} below. From this, it is apparent by the triangle inequality that, in univariate approximation problems where $hp$-finite elements yield exponential approximation rates, also ReLU NNs achieve exponential approximation rates (albeit with a possibly smaller exponent, because of the quadratic dependence on $p$; see \cite[Theorem 5.12]{OPS19_811}). The extension of this result to higher dimensions for high-order finite elements that are built from univariate finite elements by tensorization is based on the underlying compositionality of NNs. Because of that, it is possible to compose a NN implementing a multiplication of $d$ inputs with $d$ approximations of univariate finite elements. We introduce a formal framework describing these operations in Section \ref{sec:ReLUCalc}. 
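The sawtooth construction underlying the approximate multiplication network of \cite{YAROTSKY2017103} can be checked numerically. The following NumPy sketch (purely illustrative, not the papers' implementation) evaluates $f_m(x) = x - \sum_{s=1}^m g_s(x)/2^{2s}$, where $g_s$ is the $s$-fold composition of a ReLU-expressible hat function; $f_m$ is realizable by a ReLU NN of depth $O(m)$ and approximates $x^2$ on $[0,1]$ with error $2^{-2m-2}$.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # Sawtooth "hat": g(x) = 2x on [0, 1/2], 2(1 - x) on [1/2, 1], via three ReLUs
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

def approx_square(x, m):
    # f_m(x) = x - sum_{s=1}^m g_s(x) / 4^s; |f_m(x) - x^2| <= 2^{-2m-2} on [0, 1]
    g = x
    out = np.asarray(x, dtype=float).copy()
    for s in range(1, m + 1):
        g = hat(g)
        out = out - g / 4.0**s
    return out
```

Since each additional composition of the hat function only adds a constant number of weights per layer, the error decays exponentially in the network size, which is the mechanism exploited throughout the constructions below.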
We remark that for high-dimensional functions with a radial structure, of which the univariate radial profile allows an exponentially convergent $hp$-approximation, exponential convergence was obtained in \cite[Section 6]{OPS19_811} by composing ReLU approximations of univariate splines with an exponentially convergent approximation of the Euclidean norm, obtaining exponential convergence without the curse of dimensionality. \subsection{Outline} \label{sec:outline} The manuscript is structured as follows: in Section~\ref{sec:setting}, in particular Section~\ref{sec:WgtSpcNonHomNrm}, we review the weighted function spaces which will be used to describe the weighted analytic function classes in polytopes $\Omega$ that underlie our approximation results. In Section~\ref{sec:hpTP}, we present an approximation result by tensor-product $hp$-finite elements for functions from the weighted analytic class. A proof of this result is provided in Appendix~\ref{sec:hp-analysis}. In Section~\ref{sec:ReLUCalc} we review definitions of NNs and a ``ReLU calculus'' from \cite{EGJS2018,PETERSEN2018296} whose operations will be required in the ensuing NN approximation results. In Section~\ref{sec:hpReapproxReLU}, we state and prove the key results of the present paper. In Section \ref{sec:applications}, we illustrate our results by deducing novel NN expression rate bounds for solution classes of several concrete examples of elliptic boundary-value and eigenvalue problems where solutions belong to the weighted analytic function classes introduced in Section~\ref{sec:setting}. Some of the more technical proofs of Section \ref{sec:applications} are deferred to Appendix \ref{sec:proofs-appendix}. In Section~\ref{sec:ConclExt}, we briefly recapitulate the principal mathematical results of this paper and indicate possible consequences and further directions. 
\section{Setting and functional spaces} \label{sec:setting} We start by recalling some general notation that will be used throughout the paper. We also introduce some tools that are required to describe two and three dimensional domains as well as the associated weighted Sobolev spaces. \subsection{Notation} \label{sec:Notat} For $\alpha\in \mathbb{N}^d_0$, define ${|\alpha|} \coloneqq \alpha_1+\dots+\alpha_d$ and ${|\alpha|_\infty} \coloneqq \max\{\alpha_1, \dots, \alpha_d\}$. When we indicate a relation on ${|\alpha|}$ or ${|\alpha|_\infty}$ in the subscript of a sum, we mean the sum over all multi-indices that fulfill that relation: e.g., for a $k\in \mathbb{N}_0$ \begin{equation*} \sum_{{|\alpha|} \leq k} = \sum_{\alpha\in \mathbb{N}^d_0:{|\alpha|}\leq k}. \end{equation*} For a domain $\Omega\subset\mathbb{R}^d$, $k\in\mathbb{N}_0$ and for $1\leq p\leq \infty$, we indicate by $W^{k,p}(\Omega)$ the classical $L^p(\Omega)$-based Sobolev space of order $k$. We write $H^k(\Omega) = W^{k,2}(\Omega)$. We introduce the norms $\| \cdot \|_{W_{\mathrm{mix}}^{1,p}(\Omega)}$ as \begin{equation*} \| v\|_{W_{\mathrm{mix}}^{1,p}(\Omega)}^{p} \coloneqq \sum_{{|\alpha|_\infty}\leq 1} \| \partial^\alpha v\|^p_{L^p(\Omega)}, \end{equation*} with associated spaces \begin{equation*} W_{\mathrm{mix}}^{1,p}(\Omega) \coloneqq \left\{ v\in L^p(\Omega): \|v\|_{W_{\mathrm{mix}}^{1,p}(\Omega)} < \infty \right\}. \end{equation*} We denote $H_{\mathrm{mix}}^1(\Omega) = W_{\mathrm{mix}}^{1,2}(\Omega)$. For $\Omega = I_1\times\dots\times I_d$, with bounded intervals $I_j\subset\mathbb{R}$, $j=1, \dots, d$, $H_{\mathrm{mix}}^{1}(\Omega) = H^1(I_1)\otimes\dots\otimes H^1(I_d)$ with Hilbertian tensor products. Throughout, $C$ will denote a generic positive constant whose value may change at each appearance, even within an equation. The $\ell^p$ norm, $1\leq p\leq \infty$, on $\mathbb{R}^n$ is denoted by $\normc[p]{x}$. 
The number of nonzero entries of a vector or matrix $x$ is denoted by $\|x\|_0$. \paragraph{Three dimensional domain.} Let $\Omega \subset \mathbb{R}^3$ be a bounded, polygonal/polyhedral domain. Let $\mathcal{C}$ denote a set of isolated points, situated either at the corners of $\Omega$ or in the interior of $\Omega$ (that we refer to as the singular corners in either case, for simplicity), and let $\mathcal{E}$ be a subset of the edges of $\Omega$ (the singular edges). Furthermore, denote by $\mathcal{E}_c \subset \mathcal{E}$ the set of singular edges abutting at a corner $c$. For each $c\in \mathcal{C}$ and each $e\in \mathcal{E}$, we introduce the following weights: \begin{equation*} r_c(x) \coloneqq |x-c| = \dist(x, c),\qquad r_e(x) \coloneqq \dist(x, e),\qquad \rho_{ce}(x) \coloneqq \frac{r_e(x)}{r_c(x)}\quad \text{ for }x \in \Omega. \end{equation*} For $\varepsilon>0$, we define edge-, corner-, and corner-edge neighborhoods: \begin{align*} \Omega_{e}^\varepsilon &\coloneqq \bigg\{ x\in \Omega: r_e(x)< \varepsilon \text{ and }r_c(x)>\varepsilon, \forall c\in\mathcal{C}\bigg\},\\ \Omega_c^\varepsilon &\coloneqq \bigg\{ x\in \Omega: r_c(x)< \varepsilon \text{ and }\rho_{ce}(x)>\varepsilon, \forall e\in \mathcal{E}\bigg\},\quad \Omega_{ce}^\varepsilon \coloneqq \bigg\{ x\in \Omega: r_c(x)< \varepsilon \text{ and }\rho_{ce}(x)<\varepsilon\bigg\}. \end{align*} We fix a value ${\widehat{\varepsilon}}>0$ small enough so that $\Omega_c^{{\widehat{\varepsilon}}}\cap \Omega^{{\widehat{\varepsilon}}}_{c'} = \emptyset$ for all $c\neq c'\in \mathcal{C}$ and $\Omega_{ce}^{\widehat{\varepsilon}} \cap \Omega^{\widehat{\varepsilon}}_{ce'} = \Omega^{\widehat{\varepsilon}}_e \cap \Omega^{\widehat{\varepsilon}}_{e'}=\emptyset$ for all singular edges $e\neq e'$. In the sequel, we omit the dependence of the subdomains on ${\widehat{\varepsilon}}$. 
Let also \begin{equation*} \Omega_{\mathcal{C}}\coloneqq \bigcup_{c\in\mathcal{C}}\Omega_c,\qquad \Omega_{\mathcal{E}}\coloneqq\bigcup_{e\in\mathcal{E}}\Omega_e,\qquad \Omega_{\mathcal{C}\mathcal{E}} \coloneqq \bigcup_{c\in\mathcal{C}}\bigcup_{e\in\mathcal{E}_c}\Omega_{ce}, \end{equation*} and \begin{equation*} \Omega_0 \coloneqq \Omega\setminus\overline{(\Omega_{\mathcal{C}}\cup \Omega_{\mathcal{E}}\cup \Omega_{\mathcal{C}\mathcal{E}})}. \end{equation*} In each subdomain $\Omega_{ce}$ and $\Omega_e$, for any multi-index $\alpha\in \mathbb{N}_0^3$, we denote by ${\alpha_\parallel}$ the multi-index whose component in the coordinate direction parallel to $e$ is equal to the component of $\alpha$ in the same direction, and which is zero in every other component. Moreover, we set ${\alpha_\bot} \coloneqq \alpha -{\alpha_\parallel}$. \paragraph{Two dimensional domain.} Let $\Omega \subset \mathbb{R}^2$ be a polygon. We adopt the convention that $\mathcal{E} \coloneqq \emptyset$. For $c\in\mathcal{C}$, we define \begin{equation*} \Omega_c^\varepsilon \coloneqq \bigg\{ x\in \Omega: r_c(x)< \varepsilon \bigg\} \;. \end{equation*} As in the three dimensional case, we fix a sufficiently small ${\widehat{\varepsilon}}>0$ so that $\Omega^{{\widehat{\varepsilon}}}_{c}\cap \Omega^{{\widehat{\varepsilon}}}_{c'}=\emptyset$ for $c\neq c'\in \mathcal{C}$ and write $\Omega_c = \Omega_c^{\widehat{\varepsilon}}$. Furthermore, $\Omega_{\mathcal{C}}$ is defined as for $d=3$, and $\Omega_0 \coloneqq \Omega\setminus \overline{\Omega_{\mathcal{C}}}$. \subsection{Weighted spaces with nonhomogeneous norms} \label{sec:WgtSpcNonHomNrm} We introduce classes of weighted, analytic functions in space dimension $d = 3$, as arise in analytic regularity theory for linear, elliptic boundary value problems in polyhedra, in the particular form introduced in \cite{CoDaNi12}. 
There, the structure of the weights is in terms of Cartesian coordinates which is particularly suited for the presently adopted, tensorized approximation architectures. The definition of the corresponding classes when $d=2$ is analogous. For a \emph{weight exponent vector} ${\underline{\gamma}} = \{\gamma_c, \gamma_e, \, c\in \mathcal{C}, e\in \mathcal{E}\}$, we introduce the \emph{nonhomogeneous, weighted Sobolev norms} \begin{align*} \|v\|_{\mathcal{J}^{k}_{\underline{\gamma}}(\Omega)} \coloneqq \sum_{{|\alpha|}\leq k}\|\partial^\alpha v\|_{L^2(\Omega_0)} & + \sum_{c\in\mathcal{C}}\sum_{{|\alpha|}\leq k}\|r_c^{({|\alpha|} -\gamma_c)_+}\partial^\alpha v \|_{L^2(\Omega_c)}\\ & + \sum_{e\in \mathcal{E}}\sum_{{|\alpha|}\leq k}\|r_e^{({|\alpha_\bot|} -\gamma_e)_+}\partial^\alpha v \|_{L^2(\Omega_e)}\\ &+ \sum_{c\in\mathcal{C}}\sum_{e\in\mathcal{E}_c} \sum_{{|\alpha|}\leq k}\|r_c^{({|\alpha|} -\gamma_c)_+}\rho_{ce}^{({|\alpha_\bot|}-\gamma_e)_+}\partial^\alpha v\|_{L^2(\Omega_{ce})} \end{align*} where $(x)_+ = \max\{0, x\}$. Moreover, we define the associated function space by \begin{equation*} \mathcal{J}^k_{\underline{\gamma}} (\Omega; \mathcal{C}, \mathcal{E}) \coloneqq \bigg\{ v\in L^2(\Omega): \| v\|_{\mathcal{J}^k_{\underline{\gamma}}(\Omega)}< \infty\bigg\}. \end{equation*} Furthermore, \begin{equation*} \mathcal{J}^\infty_{\underline{\gamma}} (\Omega;\mathcal{C}, \mathcal{E}) = \bigcap_{k\in \mathbb{N}_0} \mathcal{J}^k_{\underline{\gamma}}(\Omega;\mathcal{C}, \mathcal{E}). 
\end{equation*} For $A, C>0$, we define the space of weighted analytic functions with nonhomogeneous norm as \begin{equation} \label{eq:analytic} \begin{aligned} \mathcal{J}^{\varpi}_{\underline{\gamma}}(\Omega;\mathcal{C}, \mathcal{E};C, A) \coloneqq \bigg\{v\in \mathcal{J}^\infty_{\underline{\gamma}}(\Omega;\mathcal{C}, \mathcal{E}): {}&\sum_{{|\alpha|}=k}\|\partial^\alpha v\|_{L^2(\Omega_0)}\leq CA^kk!,\\ &\sum_{{|\alpha|}=k}\|r_c^{({|\alpha|} -\gamma_c)_+}\partial^\alpha v \|_{L^2(\Omega_c)}\leq CA^kk!\quad \forall c\in \mathcal{C},\\ &\sum_{{|\alpha|}=k}\|r_e^{({|\alpha_\bot|} -\gamma_e)_+}\partial^\alpha v \|_{L^2(\Omega_e)}\leq CA^kk!\quad \forall e\in \mathcal{E},\\ &\sum_{{|\alpha|}=k}\|r_c^{({|\alpha|} -\gamma_c)_+}\rho_{ce}^{({|\alpha_\bot|}-\gamma_e)_+}\partial^\alpha v\|_{L^2(\Omega_{ce})}\leq CA^kk!\quad \\ &\forall c\in \mathcal{C}\text{ and }\forall e\in \mathcal{E}_c, \forall k\in \mathbb{N}_0 \bigg\}. \end{aligned} \end{equation} Finally, we denote \begin{equation*} \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega;\mathcal{C}, \mathcal{E}) \coloneqq \bigcup_{C, A>0} \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega;\mathcal{C}, \mathcal{E}; C, A). \end{equation*} \subsection{Approximation of weighted analytic functions on tensor product geometric meshes} \label{sec:hpTP} The approximation result of weighted analytic functions via NNs that we present below is based on emulating an approximation strategy of tensor product $hp$-finite elements. In this section, we present this $hp$-finite element approximation. Let $I \subset \mathbb{R}$ be an interval. A \emph{partition of $I$ into $N \in \mathbb{N}$ intervals} is a set $\mathcal{G}$ such that $|\mathcal{G}|= N$, all elements of $\mathcal{G}$ are disjoint, connected, and open subsets of $I$, and $$ \bigcup_{U \in \mathcal{G}} \overline{U} = \overline{I}. $$ We denote, for all $p\in \mathbb{N}_0$, by $\mathbb{Q}_p(\mathcal{G})$ the piecewise polynomials of degree $p$ on $\mathcal{G}$. 
One specific partition of $I= [0,1]$ is given by the \emph{one dimensional geometrically graded grid}, which for $\sigma\in (0, 1/2]$ and $\ell\in \mathbb{N}$, is given by \begin{equation} \label{eq:1dmesh} \mathcal{G}^\ell_{1} \coloneqq \left\{J^\ell_k, \, k=0, \dots, \ell\right\},\quad \text{where } J_0^\ell \coloneqq (0, \sigma^\ell)\quad\text{and} \quad J_k^{\ell} \coloneqq (\sigma^{\ell-k+1}, \sigma^{\ell-k}), \, k=1, \dots, \ell. \end{equation} \begin{theorem}\label{thm:Interface} Let $d \in \{2,3\}$ and $Q \coloneqq (0,1)^d$. Let $\mathcal{C} =\{c\}$ where $c$ is one of the corners of $Q$ and let $\mathcal{E} = \mathcal{E}_c$ contain the edges adjacent to $c$ when $d=3$, $\mathcal{E}=\emptyset$ when $d=2$. Further assume given constants $C_f, A_f>0$, and \begin{alignat*}{3} &{\underline{\gamma}} = \{\gamma_c: c\in \mathcal{C}\}, &&\text{with } \gamma_c>1,\; \text{for all } c\in\mathcal{C} &&\text{ if } d = 2,\\ % &{\underline{\gamma}} = \{\gamma_c, \gamma_e: c\in \mathcal{C}, e\in \mathcal{E}\}, \quad&&\text{with } \gamma_c>3/2\text{ and } \gamma_e>1,\; \text{for all }c\in\mathcal{C}\text{ and }e\in \mathcal{E}\quad &&\text{ if } d = 3. 
% \end{alignat*} Then, there exist $C_p>0$, $C_L>0$ such that, for every $0< \epsilon<1$, there exist $p, L \in \mathbb{N}$ with \begin{equation*} p \leq C_p(1+\left|\text{log } (\epsilon)\right|),\quad L \leq C_L (1+\left|\text{log } (\epsilon)\right|), \end{equation*} so that there exist piecewise polynomials \begin{equation*} v_{i}\in \mathbb{Q}_p(\mathcal{G}^L_1)\cap H^1(I),\qquad i=1, \dots, N_{\mathrm{1d}}, \end{equation*} with $N_{\mathrm{1d}} = (L+1)p + 1$, and, for all $f\in \mathcal{J}^{\varpi}_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E};C_f,A_f)$ there exists a $d$-dimensional array of coefficients \[ c = \left\{c_{{i_1\dots i_d}}: ({i_1,\dots, i_d}) \in \{1, \dots, N_{\mathrm{1d}}\}^d\right\} \] such that \begin{enumerate} \item \label{item:vli}For every $i = 1, \dots N_{\mathrm{1d}}$, $\supp(v_{i})$ intersects either a single interval or two neighboring subintervals of $\mathcal{G}^L_1$. Furthermore, there exist constants $C_v$, $b_v$ depending only on $C_f$, $A_f$, $\sigma$ such that \begin{equation} \label{eq:vli} \|v_{ i}\|_\infty \leq 1, \qquad \|v_{i}\|_{H^1(I)} \leq C_v \epsilon^{-b_v}, \qquad \forall i=1, \dots, N_{\mathrm{1d}}. \end{equation} \item\label{item:appx-eps} There holds \begin{equation} \label{eq:appx-eps} \|f - \sum_{{i_1,\dots, i_d} = 1}^{N_{\mathrm{1d}}} c_{{i_1\dots i_d}} \phi_{{i_1\dots i_d}}\|_{H^1(Q)} \leq \epsilon \qquad \text{with}\qquad\phi_{{i_1\dots i_d}} = \bigotimes_{j=1}^d v_{ i_j} ,\,\forall {i_1,\dots, i_d}=1, \dots, N_{\mathrm{1d}}. \end{equation} \item\label{item:c} $\|c\|_\infty \leq C_2 (1+\left| \text{log } (\epsilon) \right|)^d$ and $\|c\|_1 \leq C_c (1+\left| \text{log } (\epsilon) \right|)^{2d}$, for $C_2, C_c>0$ independent of $p$, $L$, $\epsilon$. \end{enumerate} \end{theorem} We present the proof in Subsection \ref{subsec:ProofOfInterface} after developing an appropriate framework of $hp$-approximation in Section \ref{sec:hp-analysis}. 
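The geometrically graded grid $\mathcal{G}^\ell_1$ of \eqref{eq:1dmesh} that underlies the theorem is easy to generate explicitly. The following Python helper is purely illustrative (not part of the paper):

```python
def geometric_grid_nodes(sigma, ell):
    # Nodes 0 < sigma^ell < sigma^{ell-1} < ... < sigma < 1 of the graded grid
    # G^ell_1 on [0, 1]: J_0 = (0, sigma^ell), J_k = (sigma^{ell-k+1}, sigma^{ell-k}).
    assert 0.0 < sigma <= 0.5 and ell >= 1
    return [0.0] + [sigma**(ell - k) for k in range(ell + 1)]
```

The element sizes decay geometrically toward the singular point $0$; with $p, L = \mathcal{O}(1+|\log\epsilon|)$ as in the theorem, the resulting number $N_{\mathrm{1d}} = (L+1)p+1$ of univariate basis functions grows only polylogarithmically in $1/\epsilon$.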
\section{Basic ReLU neural network calculus} \label{sec:ReLUCalc} In the sequel, we distinguish between a neural network, as a collection of weights, and the associated \emph{realization of the NN}. This is a function that is determined through the weights and an activation function. In this paper, we only consider the so-called ReLU activation: \begin{equation* \varrho: \mathbb{R} \to \mathbb{R}: x \mapsto \max\{0, x\}. \end{equation*} \begin{definition}[{\cite[Definition 2.1]{PETERSEN2018296}}] \label{def:NeuralNetworks} Let $d, L\in \mathbb{N}$. A \emph{neural network $\Phi$ with input dimension $d$ and $L$ layers} is a sequence of matrix-vector tuples \[ \Phi = \big((A_1,b_1), (A_2,b_2), \dots, (A_L, b_L)\big), \] where $N_0 \coloneqq d$ and $N_1, \dots, N_{L} \in \mathbb{N}$, and where $A_\ell \in \mathbb{R}^{N_\ell\times N_{\ell-1}}$ and $b_\ell \in \mathbb{R}^{N_\ell}$ for $\ell =1,...,L$. For a NN $\Phi$, we define the associated \emph{realization of the NN $\Phi$} as \[ \mathrm{R}(\Phi): \mathbb{R}^d \to \mathbb{R}^{N_L} : x\mapsto x_L \rev{\eqqcolon} \mathrm{R}(\Phi)(x), \] where the output $x_L \in \mathbb{R}^{N_L}$ results from \begin{equation} \label{eq:NetworkScheme} \begin{split} x_0 &\coloneqq x, \\ x_{\ell} &\coloneqq \varrho(A_{\ell} \, x_{\ell-1} + b_\ell) \quad \text{ for } \ell = 1, \dots, L-1,\\ x_L &\coloneqq A_{L} \, x_{L-1} + b_{L}. \end{split} \end{equation} Here $\varrho$ is understood to act component-wise on vector-valued inputs, i.e., for $y = (y^1, \dots, y^m) \in \mathbb{R}^m$, $\varrho(y) := (\varrho(y^1), \dots, \varrho(y^m))$. We call $N(\Phi) \coloneqq d + \sum_{j = 1}^L N_j$ the \emph{number of neurons of the NN} $\Phi$, $L(\Phi)\coloneqq L$ the \emph{number of layers} or \emph{depth}, $M_j(\Phi)\coloneqq \| A_j\|_{0} + \| b_j \|_{0}$ the \emph{number of nonzero weights in the $j$-th layer}, and $M(\Phi) \coloneqq \sum_{j=1}^L M_j(\Phi)$ the \emph{number of nonzero weights of $\Phi$}, also referred to as its \emph{size}. 
We refer to $N_L$ as the \emph{dimension of the output layer of $\Phi$}. \end{definition} \subsection{Concatenation, parallelization, emulation of identity} \label{S:ConcParEm} An essential component in the ensuing proofs is to construct NNs out of simpler building blocks. For instance, given two NNs, we would like to identify another NN so that the realization of it equals the sum or the composition of the first two NNs. To describe these operations precisely, we introduce a formalism of operations on NNs below. The first of these operations is the concatenation. \begin{proposition}[NN concatenation, {{\cite[Remark 2.6]{PETERSEN2018296}}}] \label{prop:conc} Let $L_1, L_2 \in \mathbb{N}$, and let $\Phi^1, \Phi^2$ be two NNs of respective depths $L_1$ and $L_2$ such that $N^1_0 = N^2_{L_2}\eqqcolon d$, i.e., the input layer of $\Phi^1$ has the same dimension as the output layer of $\Phi^2$. Then, there exists a NN $\Phi^1 \odot \Phi^2$, called the \emph{sparse concatenation of $\Phi^1$ and $\Phi^2$}, such that $\Phi^1 \odot \Phi^2$ has $L_1+L_2$ layers, $\mathrm{R}(\Phi^1 \odot \Phi^2) = \mathrm{R}(\Phi^1) \circ \mathrm{R}(\Phi^2)$ and $M\left(\Phi^1 \odot \Phi^2\right) \leq 2M\left(\Phi^1\right) + 2M\left(\Phi^2\right)$. \end{proposition} The second fundamental operation on NNs is parallelization, achieved with the following construction. \begin{proposition}[NN parallelization, {\cite[Definition 2.7]{PETERSEN2018296}}]\label{prop:parall} Let $L, d \in \mathbb{N}$ and let $\Phi^1, \Phi^2$ be two NNs with $L$ layers and with $d$-dimensional input each. 
Then there exists a NN $\mathrm{P}(\Phi^1, \Phi^2)$ with $d$-dimensional input and $L$ layers, which we call the \emph{parallelization of $\Phi^1$ and $\Phi^2$}, such that \begin{equation*} \mathrm{R}\left(\mathrm{P}\left(\Phi^1,\Phi^2\right)\right) (x) = \left(\mathrm{R}\left(\Phi^1\right)(x), \mathrm{R}\left(\Phi^2\right)(x)\right), \text{ for all } x \in \mathbb{R}^d \end{equation*} and $M(\mathrm{P}(\Phi^1, \Phi^2)) = M(\Phi^1) + M(\Phi^2)$. \end{proposition} Proposition \ref{prop:parall} requires two NNs to have the same depth. If two NNs have different depth, then we can artificially enlarge one of them by concatenating with a NN that implements the identity. One possible construction of such a NN is presented next. \begin{proposition}[NN emulation of $\mathrm{Id}$, {{\cite[Remark 2.4]{PETERSEN2018296}}}]\label{prop:Id} For every $d,L\in \mathbb{N}$ there exists a NN $\Phi^{\mathrm{Id}}_{d,L}$ with $L(\Phi^{\mathrm{Id}}_{d,L}) = L$ and $M(\Phi^{\mathrm{Id}}_{d,L}) \leq 2 d L$, such that $\mathrm{R} (\Phi^{\mathrm{Id}}_{d,L}) = \mathrm{Id}_{\mathbb{R}^d}$. \end{proposition} Finally, we sometimes require a parallelization of NNs that do not share inputs. \begin{proposition}[Full parallelization of NNs with distinct inputs, {\cite[Setting 5.2]{EGJS2018}}] \label{prop:parallSep} Let $L \in \mathbb{N}$ and let $$ \Phi^1 = \left(\left(A_1^1,b_1^1\right), \dots, \left(A_{L}^1,b_{L}^1\right)\right), \quad \Phi^2 = \left(\left(A_1^2,b_1^2\right), \dots, \left(A_{L}^2,b_{L}^2\right)\right) $$ be two NNs with $L$ layers each and with input dimensions $N^1_0=d_1$ and $N^2_0=d_2$, respectively. 
Then there exists a NN, denoted by $\mathrm{FP}(\Phi^1, \Phi^2)$, with $d$-dimensional input where $d = (d_1+d_2)$ and $L$ layers, which we call the \emph{full parallelization of $\Phi^1$ and $\Phi^2$}, such that for all $x = (x_1,x_2) \in \mathbb{R}^d$ with $x_i \in \mathbb{R}^{d_i}, i = 1,2$ \begin{equation*} \mathrm{R}\left(\mathrm{FP}\left(\Phi^1,\Phi^2\right)\right) (x_1,x_2) = \left(\mathrm{R}\left(\Phi^1\right)(x_1), \mathrm{R}\left(\Phi^2\right)(x_2)\right) \end{equation*} and $M(\mathrm{FP}(\Phi^1, \Phi^2)) = M(\Phi^1) + M(\Phi^2)$. \end{proposition} \begin{proof} Set $\mathrm{FP}\left(\Phi^1,\Phi^2\right) \coloneqq \left(\left(A_1^3, b_1^3\right), \dots, \left(A_L^3, b_L^3\right)\right)$ where, for $j = 1, \dots, L$, we define \begin{align*} A_{j}^3 \coloneqq \left(\begin{array}{cc} A_j^1 & 0 \\ 0 & A_j^2 \end{array}\right) \text{ and } b_{j}^3 \coloneqq \left(\begin{array}{c} b_j^1 \\ b_j^2 \end{array}\right). \end{align*} All properties of $\mathrm{FP}\left(\Phi^1,\Phi^2\right)$ claimed in the statement of the proposition follow immediately from the construction. \end{proof} \subsection{Emulation of multiplication and piecewise polynomials} \label{S:EmMult} In addition to the basic operations above, we use two types of functions that we can approximate especially efficiently with NNs. These are high dimensional multiplication functions and univariate piecewise polynomials. We first give the result of an emulation of a multiplication in arbitrary dimension. 
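As a concrete sanity check of Definition \ref{def:NeuralNetworks} and of the block-diagonal construction in the proof above, consider the following NumPy sketch (illustrative only; no claim about the papers' code):

```python
import numpy as np

def realize(Phi, x):
    # Realization R(Phi): ReLU after every layer except the last, which is affine
    *hidden, (A_last, b_last) = Phi
    for A, b in hidden:
        x = np.maximum(A @ x + b, 0.0)
    return A_last @ x + b_last

def full_parallelization(Phi1, Phi2):
    # FP(Phi1, Phi2): stack layer j of both networks into one block-diagonal layer
    out = []
    for (A1, b1), (A2, b2) in zip(Phi1, Phi2):
        A = np.zeros((A1.shape[0] + A2.shape[0], A1.shape[1] + A2.shape[1]))
        A[:A1.shape[0], :A1.shape[1]] = A1
        A[A1.shape[0]:, A1.shape[1]:] = A2
        out.append((A, np.concatenate([b1, b2])))
    return out
```

Because the blocks never mix, the ReLU acts on each sub-network independently, so $\mathrm{R}(\mathrm{FP}(\Phi^1,\Phi^2))(x_1,x_2) = (\mathrm{R}(\Phi^1)(x_1), \mathrm{R}(\Phi^2)(x_2))$ and no nonzero weights are added.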
\begin{proposition}[{\cite[Lemma C.5]{guhring2019error}, \cite[Proposition 2.6]{OSZ19_839}}] \label{prop:Multiplication} There exists a constant $C>0$ such that, for every $0<\epsilon< 1$, $d \in \mathbb{N}$ and $M \geq 1$ there is a NN $\Pi_{\epsilon, M}^{d}$ with $d$-dimensional input and one-dimensional output, so that \begin{align*} &\left|\prod_{\ell = 1}^d x_\ell - \Realiz(\Pi_{\epsilon, M}^{d})(x)\right| \leq \epsilon, \text{ for all } x=(x_1, \dots, x_d) \in [-M,M]^d, \\ &\left|\frac{\partial}{\partial x_j} \prod_{\ell = 1}^d x_\ell - \frac{\partial}{\partial x_j} \Realiz(\Pi_{\epsilon, M}^{d})(x)\right| \leq \epsilon, {\begin{aligned}\text{ for almost every }x =(x_1, \dots, x_d) \in [-M,M]^{d} \\ \text{ and all } j = 1, \dots, d,\end{aligned}} \end{align*} and $\Realiz(\Pi_{\epsilon, M}^{d})(x) = 0$ if $\prod_{\ell=1}^dx_\ell = 0$, for all $x = (x_1, \dots, x_d)\in \mathbb{R}^d$. Additionally, $\Pi_{\epsilon, M}^{d}$ satisfies \begin{align*} \max\left\{ L\left(\Pi_{\epsilon, M}^{d}\right), M\left(\Pi_{\epsilon, M}^{d}\right)\right\} \leq C \left( 1+ d \text{log } (d M^d/\epsilon)\right). \end{align*} \end{proposition} In addition to the high-dimensional multiplication, we can efficiently approximate univariate continuous, piecewise polynomial functions by realizations of NNs with the ReLU activation function.
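The networks of Proposition \ref{prop:Multiplication} are not constructed here, but the mechanism behind them can be illustrated. The following Python sketch (a simplified stand-in, not the network from the cited lemma) implements the classical sawtooth construction of Yarotsky for approximate squaring, where $s$ compositions of a ReLU-expressible tent map yield error at most $2^{-2s-2}$ on $[0,1]$, combined with the polarization identity $xy = \tfrac12\left((x+y)^2 - x^2 - y^2\right)$ for multiplication; the absolute value $|t| = \mathrm{relu}(t) + \mathrm{relu}(-t)$ is itself ReLU-expressible:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # ReLU form of the tent map on [0,1]: hat(0)=hat(1)=0, hat(1/2)=1
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def sq(x, s):
    """Yarotsky's depth-s ReLU approximation of x**2 on [0,1]:
    the piecewise linear interpolant of x**2 on a uniform grid with
    2**s cells, hence with error at most 2**(-2*s-2)."""
    g, out = x, x
    for k in range(1, s + 1):
        g = hat(g)
        out = out - g / 4.0**k
    return out

def mul(x, y, s, M=1.0):
    """Approximate x*y on [-M,M]^2 via xy = ((x+y)^2 - x^2 - y^2)/2,
    with |t| = relu(t) + relu(-t); error at most 6*M**2*2**(-2*s-2)."""
    absv = lambda t: relu(t) + relu(-t)
    S = 2.0 * M
    return (S**2 / 2.0) * (sq(absv(x + y) / S, s)
                           - sq(absv(x) / S, s) - sq(absv(y) / S, s))

s = 8
xs = np.linspace(0.0, 1.0, 1001)
assert np.max(np.abs(sq(xs, s) - xs**2)) <= 2.0**(-2 * s - 2)
assert abs(mul(0.7, -0.4, s) - 0.7 * (-0.4)) <= 6.0 * 2.0**(-2 * s - 2)
```

The logarithmic dependence of depth and size on the accuracy in Proposition \ref{prop:Multiplication} reflects this trade-off: halving the error bound twice costs one additional composition of the tent map.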
\begin{proposition}[{\cite[Proposition 5.1]{OPS19_811}}] \label{prop:relupwpolynom} There exists a constant $C>0$ such that, for all $\mathbf{p} = (p_i)_{i\in\{1,\ldots,{N_{\mathrm{int}}}\}} \subset \mathbb{N}$, for all partitions $\mathcal{T}$ of $I=(0,1)$ into ${N_{\mathrm{int}}}$ open, disjoint, connected subintervals $I_i$, $i=1,\ldots,N_{\mathrm{int}}$, for all $v\in {S_{\mathbf{p}} (I,\mathcal{T})} \coloneqq \{v\in H^1(I): v|_{I_i} \in \mathbb{P}_{p_i}(I_i), i=1,\ldots,N_{\mathrm{int}}\}$, and for every $0<\varepsilon< 1$, there exist NNs $\{\Phi^{v,\mathcal{T},\mathbf{p}}_{\varepsilon}\}_{\varepsilon\in(0,1)}$ such that for all $1\leq q'\leq \infty$ it holds that \begin{align*} \normc[W^{1,q'}(I)]{v-\mathrm{R}\left(\Phi^{v,\mathcal{T},\mathbf{p}}_{\varepsilon}\right)} \leq &\, \varepsilon \snormc[W^{1,q'}(I)]{v}, \\ L\left(\Phi^{v,\mathcal{T},\mathbf{p}}_{\varepsilon}\right) \leq &\, C (1+\text{log } (p_{\max})) \left( p_{\max} + \left|\text{log } \varepsilon\right| \right), \\ M\left(\Phi^{v,\mathcal{T},\mathbf{p}}_{\varepsilon}\right) \leq &\, C{N_{\mathrm{int}}} (1+\text{log } (p_{\max})) \left( p_{\max} + \left|\text{log } \varepsilon\right| \right) + C \sum_{i=1}^{N_{\mathrm{int}}} p_i\left(p_i + |\text{log } \varepsilon| \right) , \end{align*} where $p_{\max} \coloneqq \max \{p_i \colon i = 1, \dots, N_{\mathrm{int}}\}$. In addition, $\mathrm{R}\left( \Phi^{v,\mathcal{T},\mathbf{p}}_{\varepsilon} \right)(x_j)=v(x_j)$ for all $j\in\{0,\ldots,{N_{\mathrm{int}}}\}$, where $\{x_j\}_{j=0}^{N_{\mathrm{int}}}$ are the nodes of $\mathcal{T}$. \end{proposition} \begin{remark}\label{rem:RemarkGeneralIntervals} It is not hard to see that the result holds also for $I = (a,b)$, where $a,b\in \mathbb{R}$ with $a<b$, with $C>0$ depending on $(b-a)$. Indeed, for any $v \in H^1((a,b))$ the concatenation of $v$ with the invertible, affine map $T \colon x \mapsto a + (b-a)x$ is in $H^1((0,1))$.
Applying Proposition \ref{prop:relupwpolynom} yields NNs $\{\Phi^{v,\mathcal{T},\mathbf{p}}_{\varepsilon}\}_{\varepsilon\in(0,1)}$ approximating $v \circ T$ to an appropriate accuracy. Concatenating these networks with the $1$-layer NN $(A_1,b_1)$, where $A_1x + b_1 = T^{-1}x$, yields the result. The explicit dependence of $C>0$ on $(b-a)$ can be deduced from the error bounds in $(0,1)$ by affine transformation. \end{remark} \section{Exponential approximation rates by realizations of NNs} \label{sec:hpReapproxReLU} We now establish several technical results on the \emph{exponentially consistent} approximation by realizations of NNs with ReLU activation of univariate and multivariate tensorized polynomials. These results will be used to establish Theorem \ref{th:ReLUapprox}, which yields exponential approximation rates of NNs for functions in the weighted, analytic classes introduced in Section \ref{sec:WgtSpcNonHomNrm}. They are of independent interest, as they imply that spectral and pseudospectral methods can, in principle, be emulated by realizations of NNs with ReLU activation. \subsection{NN-based approximation of univariate, piecewise polynomial functions} \label{S:1dBasFct} We start with the following corollary to Proposition \ref{prop:relupwpolynom}. It quantifies stability and consistency of realizations of NNs with ReLU activation for the emulation of the univariate, piecewise polynomial basis functions in Theorem \ref{thm:Interface}. \begin{corollary} \label{cor:basis-NN} Let $I=(a,b)\subset \mathbb{R}$ be a bounded interval. Fix $C_p>0$, $C_v>0$, and $b_v>0$.
Let $0<\epsilon_{\mathsf{hp}} < 1$ and $p, N_{\mathrm{1d}}, N_{\mathrm{int}}\in \mathbb{N}$ be such that $p \leq C_p(1+\left| \text{log } \epsilon_{\mathsf{hp}} \right|)$ and let $\mathcal{G}_{\mathrm{1d}}$ be a partition of $I$ into $N_{\mathrm{int}}$ open, disjoint, connected subintervals and, for $i\in\{1, \dots, N_{\mathrm{1d}}\}$, let $v_i\in \mathbb{Q}_p(\mathcal{G}_{{\mathrm{1d}}}) \cap H^1(I)$ be such that $\supp(v_i)$ intersects either a single interval or two adjacent intervals in $\mathcal{G}_{\mathrm{1d}}$ and $ \|v_i\|_{H^1(I)}\leq C_v \epsilon_{\mathsf{hp}}^{-b_v}$, for all $i\in \{1, \dots, N_{\mathrm{1d}}\}$. Then, for every $0 < \epsilon_1 \leq \epsilon_{\mathsf{hp}}$, and for every $i\in\{1, \dots, N_{\mathrm{1d}}\}$, there exists a NN $\Phi^{v_{i}}_{\epsilon_1}$ such that \begin{align} \normc[H^1(I)]{v_{i}-\Realiz\left(\Phi^{v_{i}}_{\epsilon_1}\right)} \leq{} & \epsilon_1 |v_i|_{H^1(I)} , \label{eq:Corboundvjk1} \\ L\left(\Phi^{v_{i}}_{\epsilon_1}\right) \leq{} & C_4 (1 + \left|\text{log } (\epsilon_1)\right|)(1 + \text{log } (1+\left|\text{log } (\epsilon_1)\right|)) ,\label{eq:Corboundvjk2} \\ M\left(\Phi^{v_{i}}_{\epsilon_1}\right) \leq{} & C_5 (1 + \left|\text{log } (\epsilon_1)\right|^2) , \label{eq:Corboundvjk3} \end{align} for constants $C_4, C_5>0$ depending on $C_p>0$, $C_v>0$, $b_v>0$ and $(b-a)$ only. In addition, $\Realiz\left( \Phi^{v_i}_{\varepsilon_1} \right)(x_j)=v_i(x_j)$ for all $i\in\{1,\ldots,N_{\mathrm{1d}}\}$ and $j\in\{0,\ldots,{N_{\mathrm{int}}}\}$, where $\{x_j\}_{j=0}^{N_{\mathrm{int}}}$ are the nodes of $\mathcal{G}_{{\mathrm{1d}}}$. \end{corollary} \begin{proof} Let $i=1, \dots, N_{\mathrm{1d}}$. For $v_{i}$ as in the assumption of the corollary, we have that either $\supp(v_{i}) = \overline{J}$ for a unique $J\in \mathcal{G}_{\mathrm{1d}}$ or $\supp(v_{i}) =\overline{J \cup J'}$ for two neighboring intervals $J, J'\in \mathcal{G}_{\mathrm{1d}}$. 
Hence, there exists a partition $\mathcal{T}_{i}$ of $I$ into at most four subintervals so that $v_{i} \in S_{\mathbf{p}} (I,\mathcal{T}_{i})$, where $\mathbf{p} = (p)_{j\in\{1,\ldots,4\}}$. Because of this, an application of Proposition \ref{prop:relupwpolynom} with $q' = 2$ and Remark \ref{rem:RemarkGeneralIntervals} yields that for every $0<\epsilon_1 \leq \epsilon_{\mathsf{hp}}< 1$ there exists a NN $\Phi^{v_{i}}_{\epsilon_1} := \Phi^{v_{i},\mathcal{T}_{i},\mathbf{p}}_{\epsilon_1}$ such that \eqref{eq:Corboundvjk1} holds. In addition, by invoking $p \lesssim 1+\left|\text{log } (\epsilon_\mathsf{hp}) \right|\leq 1+\left|\text{log } (\epsilon_1) \right|$, we observe that \begin{align*} L\left(\Phi^{v_{i}}_{\epsilon_1}\right) &\leq \, C (1+\text{log } (p)) \left( p + \left|\text{log } \left({\epsilon_1}\right)\right| \right) \lesssim 1 + \left|\text{log } (\epsilon_1)\right|(1 + \text{log } (1+\left|\text{log } (\epsilon_1)\right|)). \end{align*} Therefore, there exists $C_4 >0$ such that \eqref{eq:Corboundvjk2} holds. Furthermore, \begin{align*} M\left(\Phi^{v_{i}}_{\epsilon_1}\right) \leq &\, 4C(1+\text{log } (p)) \left( p + \left|\text{log } \left(\epsilon_1\right)\right| \right) +C \sum_{j=1}^{4} p(p+\left|\text{log } \left({\epsilon_1}\right)\right| )\\ \lesssim &\, p^2 + \left|\text{log } \left({\epsilon_1}\right)\right| p + (1+\text{log } (p)) \left( p + \left|\text{log } \left({\epsilon_1}\right)\right|\right). \end{align*} We use $p \lesssim 1+\left|\text{log } (\epsilon_1)\right|$ and obtain that there exists $C_5 >0$ such that \eqref{eq:Corboundvjk3} holds. \end{proof} \subsection{Emulation of functions with singularities in cubic domains by NNs} \label{S:ApprCub} Below we state a result describing the efficiency of re-approximating continuous, piecewise tensor product polynomial functions in a cubic domain, as introduced in Theorem \ref{thm:Interface}, by realizations of NNs with the ReLU activation function.
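Before stating the theorem, it is instructive to see why the lowest-order case $p_i = 1$ of Proposition \ref{prop:relupwpolynom} incurs no approximation error: any continuous, piecewise linear function on a partition is represented \emph{exactly} by a one-hidden-layer ReLU NN whose outer weights are the jumps of the slope. The following Python sketch (illustrative; not code from the cited references) demonstrates this, including the exactness at the nodes:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def pwlinear_to_relu(nodes, vals):
    """Exact one-hidden-layer ReLU representation, on [nodes[0], nodes[-1]],
    of the continuous piecewise linear interpolant of (nodes, vals):
    v(x) = vals[0] + sum_j c_j * relu(x - nodes[j]),
    where c_j is the jump of the slope at node j (c_0 is the first slope)."""
    slopes = np.diff(vals) / np.diff(nodes)
    c = np.diff(slopes, prepend=0.0)   # slope jumps; one ReLU unit per breakpoint
    def v(x):
        return vals[0] + np.sum(c[:, None] * relu(x[None, :] - nodes[:-1, None]),
                                axis=0)
    return v

nodes = np.array([0.0, 0.3, 0.55, 1.0])
vals = np.array([1.0, -0.5, 2.0, 0.25])
v = pwlinear_to_relu(nodes, vals)
x = np.linspace(0.0, 1.0, 501)
assert np.max(np.abs(v(x) - np.interp(x, nodes, vals))) < 1e-12
assert np.allclose(v(nodes), vals)     # node exactness, as in the proposition
```

For higher polynomial degrees, the cited construction additionally composes such piecewise linear networks with approximate squaring and multiplication networks, which is where the $\varepsilon$-dependent depth enters.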
\begin{theorem} \label{th:ReLU-hp} Let $d\in \{2,3\}$, let $I = (a,b)\subset \mathbb{R}$ be a bounded interval, and let $Q=I^d$. Suppose that there exist constants $C_p>0$, $C_{N_{\mathrm{1d}}}>0$, $C_v>0$, $C_c>0$, $b_v>0$, and, for $0< \epsilon \leq 1$, assume there exist $p, N_{\mathrm{1d}}, N_{\mathrm{int}}\in \mathbb{N}$, and $c\in \mathbb{R}^{N_{\mathrm{1d}}\times\dots\times N_{\mathrm{1d}}}$, such that \begin{equation*} N_{\mathrm{1d}} \leq C_{N_{\mathrm{1d}}}(1+\left| \text{log } \epsilon \right|^2),\quad \|c\|_{1} \leq C_{c}(1+\left| \text{log } \epsilon \right|^{2d}),\quad p \leq C_p(1+\left| \text{log } \epsilon \right|). \end{equation*} Further, let $\mathcal{G}_{\mathrm{1d}}$ be a partition of $I$ into $N_{\mathrm{int}}$ open, disjoint, connected subintervals and let, for all $i\in\{1, \dots, N_{\mathrm{1d}}\}$, $v_i\in\mathbb{Q}_p(\mathcal{G}_{{\mathrm{1d}}}) \cap H^1(I)$ be such that $\supp(v_i)$ intersects either a single interval or two neighboring subintervals of $\mathcal{G}_{\mathrm{1d}}$ and \begin{equation*} \|v_i\|_{H^1(I)}\leq C_v \epsilon^{-b_v}, \qquad \|v_i\|_{L^\infty(I)}\leq 1,\qquad \forall i\in \{1, \dots, N_{\mathrm{1d}}\}. \end{equation*} Then, there exists a NN $\Phi_{\epsilon, c}$ such that \begin{align}\label{eq:ReLuhp-approx} \left \| \sum_{{i_1,\dots, i_d}=1}^{N_{\mathrm{1d}}} c_{{i_1\dots i_d}}\bigotimes_{j=1}^dv_{i_j} - \Realiz\left(\Phi_{\epsilon, c}\right) \right\|_{H^1(Q)} \leq \epsilon. \end{align} Furthermore, there holds $ \left\|\Realiz\left(\Phi_{\epsilon, c}\right) \right\|_{L^\infty(Q)} \leq (2^d+1)C_c (1 + \left| \text{log } \epsilon \right|^{2d}), $ \begin{equation*} M(\Phi_{\epsilon, c}) \leq C (1+\left|\text{log } \epsilon\right|^{2d+1}), \; L(\Phi_{\epsilon, c}) \leq C (1+ \left|\text{log } \epsilon\right|\text{log } (\left|\text{log } \epsilon\right|)), \end{equation*} where $C>0$ depends on $C_p$, $C_{N_{\mathrm{1d}}}$, $C_v$, $C_c$, $b_v$, $d$, and $(b-a)$ only.
\end{theorem} \begin{proof} Assume $I \neq \emptyset$ as otherwise there is nothing to show. Let $C_I\geq1$ be such that $C_I^{-1}\leq (b-a) \leq C_I$. Let $c_{v, \mathrm{max}} \coloneqq \max\{\|v_i\|_{H^1(I)}\colon i \in \{1, \dots, N_{\mathrm{1d}}\}\} \leq C_v \epsilon^{-b_v}$, let $\epsilon_1 \coloneqq \min \{\epsilon/ (2 \cdot d \cdot (c_{v, \mathrm{max}}+1)^{d} \cdot \|c\|_1), 1/2, C_I^{-1/2}C_v^{-1}\epsilon^{b_v}\}$, and let $\epsilon_2 \coloneqq \min\{\epsilon /(2 \cdot (\sqrt{d}+1) \cdot (c_{v, \mathrm{max}}+1) \cdot \|c\|_1), 1/2 \}$. \paragraph{Construction of the neural network.} Invoking Corollary \ref{cor:basis-NN} we choose, for $i=1, \dots, N_{\mathrm{1d}}$, NNs $\Phi_{\epsilon_1}^{v_{i}}$ so that \[ \left\|\Realiz(\Phi_{\epsilon_1}^{v_{i}}) - v_{ i} \right\|_{H^1(I)} \leq C_v \epsilon_1 \epsilon^{-b_v} \leq 1. \] It follows that for all $i \in \{1, \dots, N_{\mathrm{1d}}\}$ \begin{align} \left\|\Realiz\left(\Phi_{\epsilon_1}^{v_{i}}\right) \right\|_{H^1(I)} \leq & \left\|\Realiz\left(\Phi_{\epsilon_1}^{v_{i}}\right) - v_{i}\right\|_{H^1(I)} + \left\|v_{i}\right\|_{H^1(I)} \leq 1+c_{v,\max} \label{eq:vjiH1err} \end{align} and that, by Sobolev imbedding, \begin{equation} \begin{aligned} \left\|\Realiz\left(\Phi_{\epsilon_1}^{v_{i}}\right) \right\|_{\infty} &\leq \left\|\Realiz\left(\Phi_{\epsilon_1}^{v_{i}}\right) - v_{i}\right\|_{\infty} + \left\|v_{i}\right\|_\infty \leq C_I^{1/2} \left\|\Realiz\left(\Phi_{\epsilon_1}^{v_{i}}\right) - v_{i}\right\|_{H^1(I)} + 1 \\ & \leq C_I^{1/2}C_v \epsilon_1 \epsilon^{-b_v} + 1 \leq 2. 
\end{aligned} \label{eq:vjipointwise} \end{equation} Then, let $\Phi_{\mathrm{basis}}$ be the NN defined as \begin{equation} \label{eq:Phibasis} \Phi_{\mathrm{basis}} \coloneqq \FPar\left( \Par(\Phi^{v_{ 1}}_{\epsilon_1}, \dots, \Phi^{v_{ N_{\mathrm{1d}}}}_{\epsilon_1}), \dots, \Par(\Phi^{v_{ 1}}_{\epsilon_1}, \dots, \Phi^{v_{N_{\mathrm{1d}}}}_{\epsilon_1}) \right), \end{equation} where the full parallelization is of $d$ copies of $\Par(\Phi^{v_{ 1}}_{\epsilon_1}, \dots, \Phi^{v_{ N_{\mathrm{1d}}}}_{\epsilon_1})$. Note that $\Phi_{\mathrm{basis}}$ is a NN with $d$-dimensional input and $dN_{\mathrm{1d}}$-dimensional output. Subsequently, we introduce the $N_{\mathrm{1d}}^d$ matrices $E^{(i_1, \dots, i_d)} \in \{0,1\}^{d\times dN_{\mathrm{1d}} }$ such that, for all $(i_1, \dots, i_d)\in \{1, \dots,N_{\mathrm{1d}}\}^d$, \begin{equation*} E^{(i_1, \dots, i_d)} a = \{a_{(j-1)N_{\mathrm{1d}}+i_j} : j=1, \dots,d\} \qquad\text{for all }a = (a_1, \dots, a_{dN_{\mathrm{1d}}})\in \mathbb{R}^{dN_{\mathrm{1d}}}. \end{equation*} Note that, for all $(i_1, \dots, i_d)\in \{1, \dots, N_{\mathrm{1d}}\}^d$, \begin{equation*} \Realiz((E^{(i_1, \dots, i_d)}, 0)\odot \Phi_{\mathrm{basis}}) : (x_1, \dots, x_d)\mapsto \left\{ \Realiz(\Phi^{v_{i_j}}_{\epsilon_1} )(x_j): j=1, \dots, d \right\}. \end{equation*} Then, we set \begin{align}\label{eq:ConstructionOfPhiEpsilon} \Phi_{\epsilon} \coloneqq \Par\left(\Pi_{\epsilon_2, 2}^d \odot(E^{(i_1, \dots, i_d)}, 0) :({i_1,\dots, i_d})\in \{1, \dots, N_{\mathrm{1d}}\}^d \right)\odot \Phi_{\mathrm{basis}}, \end{align} where $\Pi_{\epsilon_2, 2}^d$ is according to Proposition \ref{prop:Multiplication}. Note that, by \eqref{eq:vjipointwise}, the inputs of $\Pi_{\epsilon_2, 2}^d$ are bounded in absolute value by $2$. 
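The action of the selection matrices $E^{(i_1, \dots, i_d)}$ can be made concrete with a few lines of Python (an illustrative sketch with $1$-based multi-indices, as in the text; shown for $d = 2$ and $N_{\mathrm{1d}} = 3$). Each such matrix has exactly $d$ nonzero entries, i.e., $\|E^{(i_1,\dots,i_d)}\|_0 = d$:

```python
import numpy as np

# Sketch of the selection matrices E^{(i_1,...,i_d)}: from the stacked
# outputs a of Phi_basis, pick the d entries (a_{(j-1)*N1d + i_j})_{j=1}^d.
def selection_matrix(multi_index, N1d):
    d = len(multi_index)
    E = np.zeros((d, d * N1d))
    for j, i_j in enumerate(multi_index):   # i_j is 1-based, j is 0-based
        E[j, j * N1d + (i_j - 1)] = 1.0
    return E

N1d = 3
a = np.arange(1.0, 2 * N1d + 1.0)           # a = (1, 2, ..., 6)
E = selection_matrix((2, 3), N1d)
assert np.count_nonzero(E) == 2             # ||E||_0 = d
assert np.array_equal(E @ a, np.array([2.0, 6.0]))   # (a_{i_1}, a_{N1d + i_2})
```
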
Finally, we define \[ \Phi_{\epsilon, c} \coloneqq (( \vvec( c )^\top, 0 )) \odot \Phi_\epsilon, \] where $\vvec(c) \in \mathbb{R}^{N_{\mathrm{1d}}^d}$ is the reshaping such that, for all $({i_1,\dots, i_d})\in \{1, \dots, N_{\mathrm{1d}}\}^d$ \begin{equation}\label{def:vec} ( \vvec(c))_{i} = c_{{i_1\dots i_d}}, \qquad \text{with } i = 1+\sum_{j=1}^d (i_j-1) N_{\mathrm{1d}}^{j-1}. \end{equation} See Figure \ref{fig:NN-appx} for a schematic representation of the NN $\Phi_{\epsilon, c}$. \begin{figure} \centering \includegraphics[width=.8\textwidth]{NN.pdf} \caption{Schematic representation of the neural network $\Phi_{\epsilon, c}$, for the case $d=2$ constructed in the proof of Theorem \ref{th:ReLU-hp}. The circles represent subnetworks (i.e., the neural networks $\Phi^{v_{ i}}_{\epsilon_1}$, $\Pi^d_{\epsilon_2, 2}$, and $((\vvec(c)^\top,0))$). Along some branches, we indicate $\Phi_{i, k}(x_1, x_2) = \Realiz\left(\Pi^2_{\epsilon_2,2}\odot ((E^{(i,k)}, 0)) \odot \Phi_{\mathrm{basis}}\right)(x_1, x_2)$.} \label{fig:NN-appx} \end{figure} \paragraph{Approximation accuracy.} Let us now analyze whether $\Phi_{\epsilon, c}$ has the asserted approximation accuracy. Define, for all $({i_1,\dots, i_d})\in \{1, \dots, N_{\mathrm{1d}}\}^d$ \[ \phi_{i_1\dots i_d} = \bigotimes_{j = 1}^d v_{ i_j} . \] Furthermore, for each $({i_1,\dots, i_d})\in \{1, \dots, N_{\mathrm{1d}}\}^d$, let $\Phi_{{i_1\dots i_d}}$ denote the NN \begin{equation*} \Phi_{{i_1\dots i_d}} = \Pi_{\epsilon_2, 2}^d \odot((E^{(i_1, \dots, i_d)}, 0)) \odot \Phi_{\mathrm{basis}}.
\end{equation*} We estimate by the triangle inequality that \begin{equation} \begin{aligned} \left\|\sum_{i_1, \dots, i_d=1}^{\Noned} c_{i_1\dots i_d} \phi_{i_1\dots i_d} - \Realiz(\Phi_{\epsilon, c}) \right\|_{H^1(Q)} &= \left\|\sum_{i_1, \dots, i_d=1}^{\Noned} c_{i_1\dots i_d} \phi_{i_1\dots i_d} - \sum_{i_1, \dots, i_d=1}^{\Noned} c_{i_1\dots i_d} \Realiz(\Phi_{{i_1\dots i_d}}) \right\|_{H^1(Q)} \\ & \leq \sum_{i_1, \dots, i_d=1}^{\Noned} |c_{i_1\dots i_d}| \left\| \phi_{i_1\dots i_d} - \Realiz(\Phi_{{i_1\dots i_d}}) \right\|_{H^1(Q)}.\label{eq:SecondTriangleInequality} \end{aligned} \end{equation} We have that \[ \left\| \phi_{i_1\dots i_d} - \Realiz(\Phi_{{i_1\dots i_d}}) \right\|_{H^1(Q)} = \left\|\bigotimes_{j = 1}^d v_{ i_j} - \Realiz\left(\Pi_{\epsilon_2, 2}^d\right) \circ \left[\Realiz\left(\Phi_{\epsilon_1}^{v_{i_1}}\right), \dots, \Realiz\left(\Phi_{\epsilon_1}^{v_{i_d}}\right)\right] \right\|_{H^1(Q)} \] and, by another application of the triangle inequality, we have that \begin{align} \left\| \phi_{i_1\dots i_d} - \Realiz\left(\Phi_{{i_1\dots i_d}}\right) \right\|_{H^1(Q)} \leq & \left\|\bigotimes_{j = 1}^d v_{ i_j} - \bigotimes_{j = 1}^d \Realiz\left(\Phi_{\epsilon_1}^{v_{i_j}}\right)\right\|_{H^1(Q)} \nonumber\\ & \qquad + \left\|\bigotimes_{j = 1}^d \Realiz\left(\Phi_{\epsilon_1}^{v_{i_j}}\right) -\Realiz\left(\Pi_{\epsilon_2, 2}^d\right) \circ \left[\Realiz\left(\Phi_{\epsilon_1}^{v_{i_1}}\right), \dots, \Realiz\left(\Phi_{\epsilon_1}^{v_{i_d}}\right)\right] \right\|_{H^1(Q)} \nonumber\\ \leq & \left\|\bigotimes_{j = 1}^d v_{i_j} - \bigotimes_{j = 1}^d \Realiz\left(\Phi_{\epsilon_1}^{v_{i_j}}\right)\right\|_{H^1(Q)} + (\sqrt{d}+1)\epsilon_2(c_{v,\max}+1), \label{eq:ThirdTriangleInequality} \end{align} where the last estimate follows from Proposition \ref{prop:Multiplication} and the chain rule: \begin{equation*} \left\|\bigotimes_{j = 1}^d \Realiz\left(\Phi_{\epsilon_1}^{v_{i_j}}\right) -\Realiz\left(\Pi_{\epsilon_2, 2}^d\right) \circ 
\left[\Realiz\left(\Phi_{\epsilon_1}^{v_{i_1}}\right), \dots, \Realiz\left(\Phi_{\epsilon_1}^{v_{i_d}}\right)\right] \right\|_{L^2(Q)} \leq \epsilon_2 \end{equation*} and \begin{align*} & \left|\bigotimes_{j = 1}^d \Realiz\left(\Phi_{\epsilon_1}^{v_{i_j}}\right) -\Realiz\left(\Pi_{\epsilon_2, 2}^d\right) \circ \left[\Realiz\left(\Phi_{\epsilon_1}^{v_{i_1}}\right), \dots, \Realiz\left(\Phi_{\epsilon_1}^{v_{i_d}}\right)\right] \right|_{H^1(Q)}^2 \\ &\qquad = \sum_{k=1}^d \left\| \frac{\partial}{\partial x_k} \bigotimes_{j = 1}^d \Realiz\left(\Phi_{\epsilon_1}^{v_{i_j}}\right) - \frac{\partial}{\partial x_k} \Realiz\left(\Pi_{\epsilon_2, 2}^d\right) \circ \left[\Realiz\left(\Phi_{\epsilon_1}^{v_{i_1}}\right), \dots, \Realiz\left(\Phi_{\epsilon_1}^{v_{i_d}}\right)\right] \right\|_{L^2(Q)}^2 \\ & \qquad = \sum_{k=1}^d \left\| \left( \bigotimes_{\substack{j = 1 \\ j\neq k}}^d \Realiz\left(\Phi_{\epsilon_1}^{v_{i_j}}\right) - \left( \frac{\partial}{\partial x_k} \Realiz\left(\Pi_{\epsilon_2, 2}^d\right) \right) \circ \left[\Realiz\left(\Phi_{\epsilon_1}^{v_{i_1}}\right), \dots, \Realiz\left(\Phi_{\epsilon_1}^{v_{i_d}}\right)\right] \right) \left( \frac{\partial}{\partial x} \Realiz\left(\Phi_{\epsilon_1}^{v_{i_k}} \right) \right) \right\|_{L^2(Q)}^2 \\ & \qquad \leq \sum_{k=1}^d \epsilon_2^2 \left\| \frac{\partial}{\partial x} \Realiz\left(\Phi_{\epsilon_1}^{v_{i_k}} \right) \right\|_{L^2(I)}^2 \leq d \epsilon_2^2 (c_{v,\max}+1)^2, \end{align*}% where we used \eqref{eq:vjiH1err}. 
We now use \eqref{eq:vjipointwise} to bound the first term in \eqref{eq:ThirdTriangleInequality}: for $d = 3$, we have that, for all $({i_1,\dots, i_d}) \in \{1, \dots, N_{\mathrm{1d}}\}^d$, \begin{align*} \left\|\bigotimes_{j = 1}^d v_{ i_j} - \bigotimes_{j = 1}^d \Realiz\left(\Phi_{\epsilon_1}^{v_{i_j}}\right)\right\|_{H^1(Q)} &\leq \left\|(v_{ i_1} - \Realiz(\Phi_{\epsilon_1}^{v_{i_1}}))\otimes \bigotimes_{j = 2}^d v_{i_j}\right\|_{H^1(Q)}\\ &\qquad + \left\|\Realiz\left(\Phi_{\epsilon_1}^{v_{i_1}}\right) \otimes \left(v_{ i_2} - \Realiz\left(\Phi_{\epsilon_1}^{v_{i_2}}\right)\right) \otimes v_{ i_d}\right\|_{H^1(Q)}\\ &\qquad \qquad + \left\|\bigotimes_{j = 1}^{d-1}\Realiz(\Phi_{\epsilon_1}^{v_{i_j}}) \otimes (v_{ i_d} - \Realiz(\Phi_{\epsilon_1}^{v_{i_d}}))\right\|_{H^1(Q)} \eqqcolon \mathrm{(I)}. \end{align*} For $d = 2$, we end up with a similar estimate with only two terms. By the tensor product structure, it is clear that $\mathrm{(I)} \leq d \epsilon_1 (c_{v, \mathrm{max}}+1)^{d}$. We have from \eqref{eq:SecondTriangleInequality} and the considerations above that \begin{align*} \left\|\sum_{i_1, \dots, i_d=1}^{\Noned} c_{i_1\dots i_d} \phi_{i_1\dots i_d} - \Realiz\left(\Phi_{\epsilon, c}\right) \right\|_{H^1(Q)} \leq \|c\|_1 \left( d \epsilon_1 (c_{v, \mathrm{max}}+1)^{d} + (\sqrt{d}+1) \epsilon_2 (c_{v,\max}+1) \right) \leq \epsilon. \end{align*} This yields \eqref{eq:ReLuhp-approx}. \paragraph{Bound on the $L^\infty$ norm of the neural network.} As we have already shown, $\left\|\Realiz\left(\Phi_{\epsilon_1}^{v_{i}}\right) \right\|_{\infty} \leq 2$. Therefore, by Proposition \ref{prop:Multiplication}, $ \left\|\Realiz\left(\Phi_\epsilon\right) \right\|_{\infty} \leq 2^d + \epsilon_2$. It follows that \begin{equation*} \left\|\Realiz\left(\Phi_{\epsilon, c}\right) \right\|_{\infty} \leq \|c\|_1\left(2^d + \epsilon_2 \right) \leq (2^d+1)C_c (1 + \left| \text{log } \epsilon \right|^{2d}).
\end{equation*} \paragraph{Size of the neural network.} Bounds on the size and depth of $\Phi_{\epsilon, c}$ follow from Proposition \ref{prop:Multiplication} and Corollary \ref{cor:basis-NN}. Specifically, we start by remarking that there exists a constant $C_1 > 0$ depending on $C_v$, $b_v$, $C_I$ and $d$ only, such that $\left|\text{log } (\epsilon_1)\right|\leq C_1 (1+\left| \text{log } \epsilon \right|)$. Then, by Corollary \ref{cor:basis-NN}, there exist constants $C_{4}$, $C_{5}>0$ depending on $C_p, C_v, b_v, (b-a),$ and $d$ only such that for all $i=1, \dots, N_{\mathrm{1d}}$, \begin{equation*} L\left(\Phi^{v_{i}}_{\epsilon_1}\right) \leq{} C_{4} (1 + \left|\text{log } \epsilon\right|)(1 + \text{log } (1+\left|\text{log } \epsilon\right|)) \text{ and } M\left(\Phi^{v_{i}}_{\epsilon_1}\right) \leq{} C_{5} (1 + \left|\text{log } \epsilon\right|^2). \end{equation*} Hence, by Propositions \ref{prop:parallSep} and \ref{prop:parall}, there exist $C_{6}$, $C_{7}>0$ depending on $C_p, C_v, b_v, (b-a),$ and $d$ only such that \begin{equation*} L(\Phi_{\mathrm{basis}})\leq C_{6} (1 + \left|\text{log } \epsilon\right|)(1 + \text{log } (1+\left|\text{log } \epsilon\right|)) \text{ and } M(\Phi_{\mathrm{basis}})\leq C_{7} d N_{\mathrm{1d}} (1 + \left|\text{log } \epsilon\right|^2). \end{equation*} Then, remarking that $\|E^{({i_1,\dots, i_d})}\|_0 =d$ for all $({i_1,\dots, i_d})\in \{1, \dots, N_{\mathrm{1d}}\}^d$, we obtain by Propositions \ref{prop:conc}, \ref{prop:Multiplication}, and \ref{prop:parall} that \begin{equation*} L(\Phi_{\epsilon})\leq C_{8} (1 + \left|\text{log } \epsilon\right|)(1 + \text{log } (1+\left|\text{log } \epsilon\right|)) , \;\; M(\Phi_{\epsilon})\leq C_{9}\left( N_{\mathrm{1d}}^d(1+\left| \text{log } \epsilon \right|) + M(\Phi_{\mathrm{basis}})\right), \end{equation*} where $C_{8}, C_{9}>0$ depend on $C_p, C_v, b_v, (b-a), d$, and $C_c$ only.
Finally, we conclude that there exists a constant $C_{10}>0$ depending on $C_p, C_v, b_v, (b-a), d$ and $C_c$ only such that \begin{equation*} L( \Phi_{\epsilon, c} ) \leq C_{10}(1 + \left|\text{log } \epsilon\right|)(1 + \text{log } (1+\left|\text{log } \epsilon\right|)). \end{equation*} Using also the fact that $N_{\mathrm{1d}} \leq C (1+\left| \text{log } \epsilon \right|^2)$ for $C>0$ independent of $\epsilon$ and since $d\geq 2$, \begin{equation*} M(\Phi_{\epsilon, c})\leq C_{11} (1+\left|\text{log } \epsilon\right|)^{2d+1}, \end{equation*} for a constant $C_{11}>0$ depending on $C_p, C_v, b_v, (b-a), d$ and $C_c$ only. \end{proof} Next, we state our main approximation result, which describes the approximation of singular functions in $(0,1)^d$ by realizations of NNs. \begin{theorem} \label{th:ReLUapprox} Let $d \in \{2,3\}$ and $Q \coloneqq (0,1)^d$. Let $\mathcal{C} =\{c\}$ where $c$ is one of the corners of $Q$ and let $\mathcal{E} = \mathcal{E}_c$ contain the edges adjacent to $c$ when $d=3$, $\mathcal{E}=\emptyset$ when $d=2$. Assume furthermore that $C_f, A_f>0$, and \begin{alignat*}{3} &{\underline{\gamma}} = \{\gamma_c: c\in \mathcal{C}\}, &&\text{with } \gamma_c>1,\; \text{for all } c\in\mathcal{C} &&\text{ if } d = 2,\\ % &{\underline{\gamma}} = \{\gamma_c, \gamma_e: c\in \mathcal{C}, e\in \mathcal{E}\}, \quad&&\text{with } \gamma_c>3/2\text{ and } \gamma_e>1,\; \text{for all }c\in\mathcal{C}\text{ and }e\in \mathcal{E}\quad &&\text{ if } d = 3. % \end{alignat*} Then, for every $f\in \mathcal{J}^{\varpi}_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E};C_f,A_f)$ and every $0< \epsilon <1$, there exists a NN $\Phi_{\epsilon, f}$ so that \begin{equation}\label{eq:ApproximationPartOfReLUStatement} \left \| f - \Realiz\left(\Phi_{\epsilon, f}\right) \right\|_{H^1(Q)} \leq \epsilon. \end{equation} In addition, $\|\Realiz\left(\Phi_{\epsilon, f}\right)\|_{L^\infty(Q)} = \mathcal{O}(\left| \text{log } \epsilon \right|^{2d})$ for $\epsilon \to 0$. 
Also, $M(\Phi_{\epsilon, f}) = \mathcal{O}(\left|\text{log } \epsilon\right|^{2d+1})$ and $L(\Phi_{\epsilon, f}) = \mathcal{O}(\left|\text{log } \epsilon\right|\text{log } (\left|\text{log } \epsilon\right|))$, for $\epsilon \to 0$. \end{theorem} \begin{proof} Denote $I\coloneqq (0,1)$ and let $f\in \mathcal{J}^{\varpi}_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E}; C_f,A_f)$ and $0<\epsilon<1$. Then, by Theorem \ref{thm:Interface} (applied with $\epsilon/2$ instead of $\epsilon$) there exists $N_{\mathrm{1d}}\in \mathbb{N}$ so that $N_{\mathrm{1d}} = \mathcal{O}((1+\left|\text{log } \epsilon\right|)^{2})$, $c \in \mathbb{R}^{N_{\mathrm{1d}}\times\dots\times N_{\mathrm{1d}}}$ with $\|c\|_1 \leq C_c (1+\left|\text{log } \epsilon\right|^{2d})$, and, for all $(i_1, \dots, i_d)\in\{1, \dots, N_{\mathrm{1d}}\}^d$, \[ \phi_{i_1\dots i_d} = \bigotimes_{j = 1}^d v_{ i_j} , \] such that the hypotheses of Theorem \ref{th:ReLU-hp} are met, and \[ \left\|f - \sum_{i_1, \dots, i_d=1}^{N_{\mathrm{1d}}} c_{i_1\dots i_d} \phi_{i_1\dots i_d} \right\|_{H^1(Q)} \leq \frac\epsilon2. \] We have, by Theorem \ref{thm:Interface} and the triangle inequality, that for $\Phi_{\epsilon, f} := \Phi_{\epsilon/2, c}$ \begin{align*} \left\|f - \Realiz(\Phi_{\epsilon, f}) \right\|_{H^1(Q)} \leq \frac\epsilon2 + \left\|\sum_{i_1, \dots, i_d=1}^{\Noned} c_{i_1\dots i_d} \phi_{i_1\dots i_d} - \Realiz(\Phi_{\epsilon/2, c}) \right\|_{H^1(Q)}. \end{align*} Then, the application of Theorem \ref{th:ReLU-hp} (with $\epsilon/2$ instead of $\epsilon$) concludes the proof of \eqref{eq:ApproximationPartOfReLUStatement}. Finally, the bounds on $L(\Phi_{\epsilon, f}) = L(\Phi_{\epsilon/2, c})$, $M(\Phi_{\epsilon, f}) = M(\Phi_{\epsilon/2, c})$, and on $\|\Realiz(\Phi_{\epsilon, f})\|_{L^\infty(Q)} = \|\Realiz(\Phi_{\epsilon/2, c})\|_{L^\infty(Q)} $ follow from the corresponding estimates of Theorem \ref{th:ReLU-hp}.
\end{proof} Theorem \ref{th:ReLUapprox} admits a straightforward generalization to vector-valued functions, each component of which is a weighted analytic function with the same regularity. For a NN $\Phi$ with $N$-dimensional output, $N\in \mathbb{N}$, we denote by $\Realiz(\Phi)_n$ the $n$-th component of the output (where $n\in \{1, \dots, N\}$). \begin{corollary} \label{cor:ReLUapprox-vector} Let $d \in \{2,3\}$ and $Q \coloneqq (0,1)^d$. Let $\mathcal{C} =\{c\}$ where $c$ is one of the corners of $Q$ and let $\mathcal{E} = \mathcal{E}_c$ contain the edges adjacent to $c$ when $d=3$; $\mathcal{E}=\emptyset$ when $d=2$. Let $N_f\in \mathbb{N}$. Further assume that $C_f, A_f>0$, and \begin{alignat*}{3} &{\underline{\gamma}} = \{\gamma_c: c\in \mathcal{C}\}, &&\text{with } \gamma_c>1,\; \text{for all } c\in\mathcal{C} &&\text{ if } d = 2,\\ % &{\underline{\gamma}} = \{\gamma_c, \gamma_e: c\in \mathcal{C}, e\in \mathcal{E}\}, \quad&&\text{with } \gamma_c>3/2\text{ and } \gamma_e>1,\; \text{for all }c\in\mathcal{C}\text{ and }e\in \mathcal{E}\quad &&\text{ if } d = 3. % \end{alignat*} Then, for all $\bm{f} = (f_1, \dots, f_{N_f}) \in \left[\mathcal{J}^{\varpi}_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E};C_f,A_f) \right]^{N_f}$ and every $0< \epsilon <1$, there exists a NN $\Phi_{\epsilon, \bm{f}}$ with $d$-dimensional input and $N_f$-dimensional output such that, for all $ n=1, \dots, N_f$, \begin{equation}\label{eq:ReLUapprox-vector} \left \| f_n - \Realiz\left(\Phi_{\epsilon, \bm{f}}\right)_n \right\|_{H^1(Q)} \leq \epsilon.
\end{equation} In addition, $\|\Realiz(\Phi_{\epsilon, \bm{f}})_n\|_{L^\infty(Q)} = \mathcal{O}(\left| \text{log } \epsilon \right|^{2d})$ for every $n \in \{1, \dots, N_f\}$, $M(\Phi_{\epsilon, \bm{f}}) = \mathcal{O}(\left|\text{log } \epsilon\right|^{2d+1} + N_f\left|\text{log } \epsilon\right|^{2d})$ and $L(\Phi_{\epsilon, \bm{f}}) = \mathcal{O}(\left|\text{log } \epsilon\right|\text{log } (\left|\text{log } \epsilon\right|))$, for $\epsilon \to 0$. \end{corollary} \begin{proof} Let $\Phi_\epsilon$ be as in \eqref{eq:ConstructionOfPhiEpsilon} and let $c^{(n)} \in\mathbb{R}^{N_{\mathrm{1d}}\times\cdots\times N_{\mathrm{1d}}}$, $n=1, \dots, N_f$ be the matrices of coefficients such that, in the notation of the proof of Theorems \ref{th:ReLU-hp} and \ref{th:ReLUapprox}, for all $n\in \{1, \dots, N_f\}$, \[ \left\|f_n - \sum_{i_1, \dots i_d=1}^{N_{\mathrm{1d}}} c^{(n)}_{i_1\dots i_d} \phi_{i_1\dots i_d} \right\|_{H^1(Q)} \leq \frac\epsilon2. \] We define, for $\vvec$ as defined in \eqref{def:vec}, the NN $\Phi_{\epsilon, \bm{f}}$ as \begin{equation*} \Phi_{\epsilon, \bm{f}} \coloneqq \Par\left(((\vvec(c^{(1)} )^\top, 0)), \dots, ((\vvec(c^{(N_f)} )^\top,0))\right) \odot \Phi_\epsilon. \end{equation*}% The estimate \eqref{eq:ReLUapprox-vector} and the $L^\infty$-bound then follow from Theorem \ref{th:ReLU-hp}. The bound on $L(\Phi_{\epsilon, \bm{f}})$ follows directly from Theorem \ref{th:ReLU-hp} and Proposition \ref{prop:conc}. Finally, the bound on $M(\Phi_{\epsilon, \bm{f}})$ follows by Theorem \ref{th:ReLU-hp} and Proposition \ref{prop:conc}, as well as from the observation that \begin{equation*} M\left( \Par\left(((\vvec(c^{(1)} )^\top,0)), \dots, ((\vvec(c^{(N_f)} )^\top, 0 ))\right) \right)\leq N_f N_{\mathrm{1d}}^{d}\leq C N_f (1+\left| \text{log } \epsilon \right|^{2d}), \end{equation*} for a constant $C>0$ independent of $N_f$ and $\epsilon$.
\end{proof} \section{Exponential expression rates for solution classes of PDEs} \label{sec:applications} In this section, we develop Theorem \ref{th:ReLUapprox} into several exponentially decreasing upper bounds for the rates of approximation, by realizations of NNs with ReLU activation, for solution classes to elliptic PDEs with singular data (such as singular coefficients or domains with nonsmooth boundary). In particular, we consider elliptic PDEs in two-dimensional \emph{general} polygonal domains, in three-dimensional domains that are a union of cubes, and elliptic eigenvalue problems with isolated point singularities in the potential which arise in models of electron structure in quantum mechanics. In each class of examples, the solution sets belong to the class of weighted analytic functions introduced in Subsection \ref{sec:WgtSpcNonHomNrm}. However, the approximation rates established in Section \ref{sec:hpReapproxReLU} only hold on tensor product domains with singularities on the boundary. Therefore, we will first extend the exponential NN approximation rates to functions which exhibit singularities on a set of isolated points internal to the domain, arising from singular potentials of nonlinear Schrödinger operators. In Section \ref{sec:polygonal}, we demonstrate, using an argument based on a partition of unity, that the approximation problem on general polygonal domains can be reduced to that on tensor product domains and Fichera-type domains, and establish exponential NN expression rates for linear elliptic source and eigenvalue problems. In Section \ref{sec:EllPDEFichera}, we show exponential NN expression rates for classes of weighted analytic functions on two- and three-dimensional Fichera-type domains. 
\subsection{Nonlinear eigenvalue problems with isolated point singularities} \label{sec:EVPPtSing} Point singularities emerge in the solutions of elliptic eigenvalue problems, as they arise, for example, from electrostatic interactions between charged particles that are modelled mathematically as point sources in $\mathbb{R}^3$. Other problems that exhibit point singularities appear in general relativity, and for electron structure models in quantum mechanics. We concentrate here on the expression rate of ``ab initio'' NN approximation of the electron density near isolated singularities of the nuclear potential. Via a ReLU-based partition of unity argument, an exponential approximation rate bound for a single, isolated point singularity in Theorem \ref{prop:Schrodinger} is extended in Corollary \ref{coro:multnucl} to electron densities corresponding to potentials with multiple point singularities at a priori known locations, modeling (static) molecules. The numerical approximation in ab initio electron structure computations with NNs has recently been reported to be competitive with other established methodologies (e.g. \cite{pfau2019abinitio,hermann2019deep} and the references therein). The exponential ReLU expression rate bounds obtained here can, in part, underpin the competitive performance of NNs in (static) electron structure computations. We recall that all NNs are realized with the ReLU activation function, see \eqref{eq:NetworkScheme}. \subsubsection{Nonlinear Schr\"{o}dinger equations} \label{sec:NonlSchrEq} Let $\Omega = \mathbb{R}^d/(2\mathbb{Z})^d$, where $d \in \{2,3\}$, be a flat torus and let $V:\Omega\to \mathbb{R}$ be a potential such that $V(x)\geq V_0>0$ for all $x\in \Omega$ and such that there exist $\delta>0$ and $A_V>0$ with \begin{equation} \label{eq:V} \|r^{2+{|\alpha|}-\delta} \partial^\alpha V\|_{L^\infty(\Omega)} \leq A_V^{{|\alpha|}+1}{|\alpha|}!\qquad \forall \alpha\in \mathbb{N}_0^d, \end{equation} where $r(x) = \dist(x, (0, \dots, 0))$.
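To make the growth condition \eqref{eq:V} concrete, consider the univariate model $V(r) = V_0 + r^{\delta-2}$: for $n\geq 1$ one has $|V^{(n)}(r)| = \big(\prod_{j=0}^{n-1}|\delta-2-j|\big)\, r^{\delta-2-n}$, so the weighted derivative $r^{2+n-\delta}|V^{(n)}(r)|$ equals the product of the factors $|\delta-2-j|$, which grows at most like $A_V^{n+1}\, n!$. The short check below is a one-dimensional illustration only (the choices $\delta=1/2$ and $A_V=2$ are ours).

```python
import math

# Illustration: the weighted derivative of the model potential r^(delta-2)
# is prod_{j=0}^{n-1} |delta - 2 - j|, which is bounded by A_V^(n+1) * n!.
def weighted_deriv_bound(n, delta):
    prod = 1.0
    for j in range(n):
        prod *= abs(delta - 2 - j)
    return prod

delta, A_V = 0.5, 2.0   # example parameters, sufficient for this model
for n in range(30):
    assert weighted_deriv_bound(n, delta) <= A_V ** (n + 1) * math.factorial(n)
```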
For $k \in \{0, 1, 2\}$, we introduce the Schr\"{o}dinger eigenproblem that consists in finding the smallest eigenvalue $\lambda \in \mathbb{R}$ and an associated eigenfunction $u \in H^1(\Omega)$ such that \begin{equation} \label{eq:Schrodinger} \begin{aligned} (-\Delta +V +|u|^k) u = \lambda u \quad \text{in }\Omega, \quad \|u\|_{L^2(\Omega)} = 1. \end{aligned} \end{equation} There holds the following approximation result. \begin{theorem} \label{prop:Schrodinger} Let $k \in \{0,1,2\}$ and $(\lambda, u)\in \mathbb{R}\times H^1(\Omega)\backslash \{ 0 \}$ be a solution of the eigenvalue problem \eqref{eq:Schrodinger} with minimal $\lambda$, where $V$ satisfies \eqref{eq:V}. Then, for every $0< \epsilon \leq 1$, there exists a NN $\Phi_{\epsilon, u}$ such that \begin{equation}\label{eq:SchrodingerNN} \left \| u - \Realiz\left(\Phi_{\epsilon, u}\right) \right\|_{H^1(\Omega)} \leq \epsilon. \end{equation} In addition, as $\epsilon \to 0$, $$ M(\Phi_{\epsilon, u}) = \mathcal{O}(|\text{log } (\epsilon)|^{2d+1}), \; L(\Phi_{\epsilon, u}) = \mathcal{O}(|\text{log } (\epsilon)|\text{log } (|\text{log } (\epsilon)|)) \;. $$ \end{theorem} \begin{proof} Let $\mathcal{C} = \{(0, \dots, 0)\}$ and $\mathcal{E}=\emptyset$. The regularity of $u$ is a consequence of \cite[Theorem 2]{Maday2019b} (see also \cite[Corollary 3.2]{MadMarc2019} for the linear case $k=0$): there exist $\gamma_c> d/2$ and $C_u, A_u>0$ such that $u\in \mathcal{J}^\varpi_{\gamma_c}(\Omega; \mathcal{C}, \mathcal{E}; C_u, A_u)$. Here, $\gamma_c$ and the constants $C_u$ and $A_u$ depend only on $V_0$, $A_V$ and $\delta$ in \eqref{eq:V}, and on $k$ in \eqref{eq:Schrodinger}. Then, for all $0 < \epsilon \leq 1$, by Theorem \ref{th:ReLU-hp} and Proposition \ref{prop:internal}, there exists a NN $\Phi_{\epsilon, u}$ such that \eqref{eq:SchrodingerNN} holds.
Furthermore, there exist constants $C_1$, $C_2 > 0$ depending only on $V_0$, $A_V$, $\delta$, and $k$, such that \begin{equation*} M(\Phi_{\epsilon, u}) \leq C_1(1 + |\text{log } (\epsilon)|^{2d+1}) \text{ and } L(\Phi_{\epsilon, u}) \leq C_2 \big(1+|\text{log } (\epsilon)|\big)\big(1+\text{log } (1+|\text{log } (\epsilon)|)\big). \end{equation*} \end{proof} \subsubsection{Hartree-Fock model} \label{sec:HF} The Hartree-Fock model is an approximation of the full many-body representation of a quantum system under the Born-Oppenheimer approximation, where the many-body wave function is replaced by a single Slater determinant. Under this hypothesis, for $M, N \in \mathbb{N}$, the Hartree-Fock energy of a system with $N$ electrons and $M$ nuclei with positive charges $Z_i$ at isolated locations $R_i\in \mathbb{R}^3$ reads \begin{multline} \label{eq:EHF} E_{\mathrm{HF}} = \inf\bigg\{ \sum_{i=1}^N\int_{\mathbb{R}^3}\left(|\nabla\varphi_i|^2 + V|\varphi_i|^2 \right) +\frac{1}{2} \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{\rho(x)\rho(y)}{|x-y|}dxdy -\frac{1}{2} \int_{\mathbb{R}^3}\int_{\mathbb{R}^3} \frac{\tau(x,y)^2}{|x-y|}dxdy :\\ (\varphi_1, \dots, \varphi_N)\in H^1(\mathbb{R}^3)^N\text{ such that }\int_{\mathbb{R}^3} \varphi_i\varphi_j = \delta_{ij} \bigg\}, \end{multline} where $\delta_{ij}$ is the Kronecker delta, $V(x) = -\sum_{i=1}^{M} Z_i/|x-R_i|$, $\tau(x, y) = \sum_{i=1}^N\varphi_i(x)\varphi_i(y)$, and $\rho(x) = \tau(x,x)$, see, e.g., \cite{Lieb1977,Lions1987}. The Euler-Lagrange equations of \eqref{eq:EHF} read \begin{equation} \label{eq:HF} (-\Delta+V(x))\varphi_i(x) + \int_{\mathbb{R}^3}\frac{\rho(y)}{|x-y|}dy\varphi_i(x) - \int_{\mathbb{R}^3} \frac{\tau(x, y)}{|x-y|}\varphi_i(y) dy= \lambda_i \varphi_i(x), \qquad i=1, \dots, N, \, \text{and } x\in\mathbb{R}^3 \end{equation} with $\int_{\mathbb{R}^3}\varphi_i\varphi_j=\delta_{ij}$.
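For $N=1$, the Hartree and exchange terms in \eqref{eq:HF} cancel, since $\tau(x,y)=\varphi_1(x)\varphi_1(y)$ and $\rho=\varphi_1^2$, and the equation reduces to a linear Schr\"{o}dinger equation. The sketch below checks this cancellation on a one-dimensional grid with a mollified kernel (the smoothing parameter and the 1D setting are our simplifications, for illustration only).

```python
import numpy as np

# Illustration: for a single orbital, Hartree term == exchange term.
rng = np.random.default_rng(0)
n, h, s = 200, 0.05, 0.1                          # grid size, mesh width, smoothing
x = h * np.arange(n)
phi = rng.standard_normal(n)
phi /= np.sqrt(h * np.sum(phi**2))                # normalize: ||phi||_{L2} = 1

K = 1.0 / (np.abs(x[:, None] - x[None, :]) + s)   # mollified Coulomb kernel
rho = phi**2
hartree = (h * (K @ rho)) * phi                   # (int rho(y) K(x,y) dy) phi(x)
tau = np.outer(phi, phi)                          # tau(x, y) = phi(x) phi(y)
exchange = h * ((K * tau) @ phi)                  # int tau(x,y) K(x,y) phi(y) dy
assert np.allclose(hartree, exchange)
```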
\begin{remark} \label{rmk:ExGrndStat} It has been shown in \cite{Lieb1977} that, if $\sum_{k=1}^MZ_k>N-1$, there exists a ground state $\varphi_1,\dots, \varphi_N$ of \eqref{eq:EHF}, solution to \eqref{eq:HF}. \end{remark} The following statement gives exponential expression rate bounds of the NN-based approximation of electronic wave functions in the vicinity of one singularity (corresponding to the location of a nucleus) of the potential. \begin{theorem} \label{prop:HF} Assume that \eqref{eq:HF} has $N$ real eigenvalues $\lambda_1, \dots, \lambda_N$ with associated eigenfunctions $\varphi_1, \dots, \varphi_N$, such that $\int_{\mathbb{R}^3}\varphi_i\varphi_j = \delta_{ij}$. Fix $k\in\{1, \dots, M\}$, let $R_k$ be one of the singularities of $V$, and let $a>0$ be such that $|R_j-R_k|>2a$ for all $j\in \{1, \dots, M\}\setminus \{k\}$. Let $\Omega_k$ be the cube $\Omega_k = \left\{ x\in \mathbb{R}^3:\|x - R_k\|_{\infty}\leq a \right\}$. Then, for every $0 < \epsilon < 1$, there exists a NN $\Phi_{\epsilon, \varphi}$ such that $\Realiz(\Phi_{\epsilon, \varphi}) : \mathbb{R}^3\to \mathbb{R}^N$ satisfies \begin{equation}\label{eq:ReLUapprox-HF} \left \| \varphi_i - \Realiz(\Phi_{\epsilon, \varphi})_i \right\|_{H^1(\Omega_k)} \leq \epsilon,\qquad\forall i\in\range{N}. \end{equation} In addition, as $\epsilon \to 0$, $\|\Realiz(\Phi_{\epsilon, \varphi})_i\|_{L^\infty(\Omega_k)} = \mathcal{O}(\left| \text{log } \epsilon \right|^{6})$ for every $i \in \{1, \dots, N\}$, $$ M(\Phi_{\epsilon, \varphi}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|^{7} + N\left|\text{log } (\epsilon)\right|^{6}), \;\; L(\Phi_{\epsilon, \varphi}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|\text{log } (\left|\text{log } (\epsilon)\right|)). $$ \end{theorem} \begin{proof} Let $\mathcal{C} = \{(0, 0, 0)\}$ and $\mathcal{E} = \emptyset$, and fix $k\in\{1, \dots, M\}$.
From the regularity result in \cite[Corollary 1]{MadayMarcati2020}, see also \cite{Flad2008,Fournais2009}, there exist $C_\varphi$, $A_\varphi$, and $\gamma_c>3/2$ such that $(\varphi_1, \dots, \varphi_N) \in \left[\mathcal{J}^\varpi_{\gamma_c}(\Omega_k; \mathcal{C}, \mathcal{E}; C_\varphi, A_\varphi)\right]^N$. Then, \eqref{eq:ReLUapprox-HF}, the $L^\infty$ bound and the depth and size bounds on the NN $\Phi_{\epsilon, \varphi}$ follow from the $hp$ approximation result in Proposition \ref{prop:internal} (centered in $R_k$ by translation) and from Theorem \ref{th:ReLU-hp}, as in Corollary \ref{cor:ReLUapprox-vector}. \end{proof} The arguments in the preceding subsections applied to wave functions for a single, isolated nucleus modelled by the singular potential $V$ as in \eqref{eq:V} can then be extended to give upper bounds on the approximation rates achieved by realizations of NNs of the wave functions in a bounded, sufficiently large domain containing all singularities of the nuclear potential in \eqref{eq:EHF}. \begin{corollary} \label{coro:multnucl} Assume that \eqref{eq:HF} has $N$ real eigenvalues $\lambda_1, \dots, \lambda_N$ with associated eigenfunctions $\varphi_1, \dots, \varphi_N$, such that $\int_{\mathbb{R}^3}\varphi_i\varphi_j = \delta_{ij}$. Let $a_i, b_i\in\mathbb{R}$, $i=1,2,3$, and $\Omega = \bigtimes_{i=1}^3(a_i, b_i)$ such that $\{R_j\}_{j=1}^M\subset \Omega$. Then, for every $0 < \epsilon<1$, there exists a NN $\Phi_{\epsilon, \varphi}$ such that $\Realiz(\Phi_{\epsilon, \varphi}) : \mathbb{R}^3\to \mathbb{R}^N$ and \begin{equation}\label{eq:ReLUapprox-HF-multising} \left \| \varphi_i - \Realiz(\Phi_{\epsilon, \varphi})_i \right\|_{H^1(\Omega)} \leq \epsilon,\qquad\forall i=1, \dots, N.
\end{equation} Furthermore, as $\epsilon \to 0$, $M(\Phi_{\epsilon, \varphi}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|^{7} + N\left|\text{log } (\epsilon)\right|^{6})$ and $L(\Phi_{\epsilon, \varphi}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|\text{log } (\left|\text{log } (\epsilon)\right|))$. \end{corollary} \begin{proof} The proof is based on a partition of unity argument. We only sketch it at this point, but will develop it in detail in the proof of Theorem \ref{th:polygon}. Let $\mathcal{T}$ be a tetrahedral, regular triangulation of $\Omega$, and let $\{\kappa_k \}_{k=1}^{N_\kappa}$ be the hat-basis functions associated to it. We suppose that the triangulation is sufficiently refined to ensure that, for all $k\in\range{N_\kappa}$, there exists a cube $\widetilde{\Omega}_{k}\subset \Omega$ such that $\supp(\kappa_k) \subset \widetilde{\Omega}_{k}$ and that there exists at most one $j\in\range{M}$ such that $\overline{\widetilde{\Omega}}_k\cap \{R_j\} \neq \emptyset$. For all $k\in \range{N_\kappa}$, by \cite[Theorem 5.2]{he2020}, which is based on \cite{TM1999}, there exists a NN $\Phi^{\kappa_k}$ such that \begin{equation*} \Realiz (\Phi^{\kappa_k}) (x) = \kappa_k(x), \qquad \forall x\in \Omega. \end{equation*} For all $0<\epsilon<1$, let \begin{equation*} \epsilon_1 \coloneqq \frac{\epsilon}{ 2 N_\kappa\left(\max_{k\in\{1, \dots, N_\kappa\}}\|\kappa_k \|_{W^{1,\infty}(\Omega)} \right) }. \end{equation*} For all $k\in\range{N_\kappa}$ and $i\in\{1,\ldots,N\}$, there holds $\varphi_i|_{\widetilde{\Omega}_k} \in \mathcal{J}^{\varpi}_\gamma(\widetilde{\Omega}_k; \{R_1, \dots, R_M\}\cap \overline{\widetilde{\Omega}}_k, \emptyset)$. Then there exists a NN $\Phi_{\epsilon_1, \varphi}^{k}$, as defined in Theorem \ref{prop:HF}, such that \begin{equation} \label{eq:HF-multising-element} \| \varphi_i - \Realiz(\Phi_{\epsilon_1, \varphi}^k)_i \|_{H^1(\widetilde{\Omega}_k)} \leq \epsilon_1,\qquad\forall i\in\range{N}.
\end{equation} Let \begin{equation*} C_\infty\coloneqq \max_{k \in \range{N_\kappa}} \sup_{\hat{\epsilon} \in (0,1)} \frac{\|\Realiz(\Phi^{k}_{\hat{\epsilon}, \varphi}) \|_{L^\infty(\widetilde{\Omega}_k)}}{1+\left| \text{log } \hat{\epsilon} \right|^6} < \infty, \end{equation*} where the finiteness is due to Theorem \ref{prop:HF}. Then, we denote \begin{equation*} \epsilon_{\times}\coloneqq \frac{\epsilon}{2N_\kappa({|\Omega|^{1/2}+} 1+\max_{i=1, \dots, N}|\varphi_i|_{H^1(\Omega)} + \max_{k=1, \dots, N_\kappa}\|\kappa_k\|_{W^{1, \infty}(\Omega)}|\Omega|^{1/2})} \end{equation*} and $M_{\times}(\epsilon_1) \coloneqq C_\infty (1+\left| \text{log } \epsilon_1 \right|^6)$. As detailed in the proof of Theorem \ref{th:polygon} below, after concatenating with identity NNs and possibly after increasing the constants, we assume that $L(\Phi_{\epsilon_1, \varphi}^k)$ is independent of $k$ and that the bound on $M(\Phi_{\epsilon_1, \varphi}^k)$ is independent of $k$, and that the same holds for $\Phi^{\kappa_k}$, $k=1,\ldots,N_\kappa$. Let now, for $i\in\range{N}$, $E_i:\mathbb{R}^{N+1}\to\mathbb{R}^2$ be the matrices such that, for all $x = (x_1, \dots, x_{N+1})$, $E_i x = (x_i, x_{N+1})$. Let also $A \in \mathbb{R}^{N\times N_\kappa}$ be the matrix with all entries equal to one. Then, we introduce the NN \begin{equation} \label{eq:NN-HF-multising-def} \Phi_{\epsilon, \varphi} = (A, 0)\odot \Par\left(\left\{\Par\left( \left\{ \Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)} \odot (E_i, 0) \right\}_{i=1}^N\right)\odot \Par(\Phi^{k}_{\epsilon_1, \varphi}, \Phi^{\Id}_{1,L} \odot \Phi^{\kappa_k})\right\}_{k=1}^{N_\kappa} \right), \end{equation} where $L\in\mathbb{N}$ is such that $L(\Phi^{\Id}_{1,L} \odot \Phi^{\kappa_k}) = L(\Phi^{k}_{\epsilon_1, \varphi})$, from which it follows that $M(\Phi^{\Id}_{1,L}) \leq CL(\Phi^{k}_{\epsilon_1, \varphi})$.
There holds, for all $i\in \range{N}$, \begin{equation*} \Realiz(\Phi_{\epsilon, \varphi})(x)_i = \sum_{k=1}^{N_\kappa} \Realiz(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1) }) (\Realiz(\Phi^k_{\epsilon_1, \varphi})(x)_i, \kappa_k(x)), \qquad \forall x\in \Omega. \end{equation*} By the triangle inequality, \cite[Theorem 2.1]{Melenk1996}, \eqref{eq:HF-multising-element}, and Proposition \ref{prop:Multiplication}, for all $i\in \range{N}$, \begin{equation*} \begin{aligned} &\| \varphi_i - \Realiz(\Phi_{\epsilon, \varphi})_i \|_{H^1(\Omega)} \\ &\quad \leq \| \varphi_i - \sum_{k=1}^{N_\kappa}\kappa_k \Realiz(\Phi^{k}_{\epsilon_1, \varphi})_i \|_{H^1(\Omega)} + \sum_{k=1}^{N_\kappa} \| \Realiz(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)}) \left(\Realiz(\Phi^{k}_{\epsilon_1, \varphi})_i, \kappa_k \right) - \kappa_k\Realiz(\Phi^{k}_{\epsilon_1, \varphi})_i \|_{H^1(\widetilde{\Omega}_k)} \\ & \quad \leq N_\kappa \left(\max_{k\in \{1, \dots, N_\kappa\}}\|\kappa_k\|_{W^{1, \infty}(\Omega)} \right)\epsilon_1\\ &\quad \qquad + N_\kappa (|\Omega|^{1/2}+ 1+\max_{i=1, \dots, N}|\varphi_i|_{H^1(\Omega)} + \max_{k=1, \dots, N_\kappa}\| \kappa_k \|_{W^{1, \infty}(\Omega)}|\Omega|^{1/2}) \epsilon_{\times} \\ &\quad \leq \epsilon. \end{aligned} \end{equation*} The asymptotic bounds on the size and depth of $\Phi_{\epsilon, \varphi}$ can then be derived from \eqref{eq:NN-HF-multising-def}, using Theorem \ref{prop:HF}, as developed in more detail in the proof of Theorem \ref{th:polygon} below. \end{proof} \subsection{Elliptic PDEs in polygonal domains} \label{sec:polygonal} We establish exponential expressivity for realizations of NNs with ReLU activation of solution classes to elliptic PDEs in polygonal domains $\Omega$, the boundaries $\partial \Omega$ of which are Lipschitz and consist of a finite number of straight line segments. Notably, $\Omega \subset \mathbb{R}^2$ need not be a finite union of axiparallel rectangles.
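The networks $\Pi^2_{\epsilon_\times, M_\times}$ used in the preceding proof (and again below) realize approximate multiplication of two bounded inputs. A standard construction of this type, in the spirit of Yarotsky's squaring trick (we do not claim it coincides with the exact $\Pi^2$ of Proposition \ref{prop:Multiplication}), composes $m$ sawtooth functions to approximate $t^2$ on $[0,1]$ with error at most $4^{-(m+1)}$ and recovers products via $ab = \tfrac14\big((a+b)^2-(a-b)^2\big)$. Note that, as exploited in the proofs, the approximate product vanishes exactly when one factor is zero.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def sawtooth(t):
    # Hat function g(t) = 2 relu(t) - 4 relu(t - 1/2) + 2 relu(t - 1) on [0, 1].
    return 2 * relu(t) - 4 * relu(t - 0.5) + 2 * relu(t - 1.0)

def relu_square(t, m):
    # Yarotsky-type approximation of t^2 on [0, 1]: subtract scaled
    # m-fold compositions of the sawtooth; error is at most 4^{-(m+1)}.
    g, s = t, t
    for k in range(1, m + 1):
        g = sawtooth(g)
        s = s - g / 4.0**k
    return s

def relu_mult(a, b, m, M=1.0):
    # Approximate a*b on [-M, M]^2 via ab = ((a+b)^2 - (a-b)^2)/4;
    # abs(t) = relu(t) + relu(-t) is itself ReLU-realizable.
    return M**2 * (relu_square(abs(a + b) / (2 * M), m)
                   - relu_square(abs(a - b) / (2 * M), m))

t = np.linspace(0.0, 1.0, 1001)
errs = [np.max(np.abs(relu_square(t, m) - t**2)) for m in (2, 4, 6, 8)]
assert all(e2 < e1 for e1, e2 in zip(errs, errs[1:]))  # error decays with depth
assert abs(relu_mult(0.37, -0.81, 10) - 0.37 * (-0.81)) < 1e-4
assert relu_mult(0.37, 0.0, 10) == 0.0   # exact zero when one factor vanishes
```

The depth of this construction grows like $\text{log } (1/\epsilon)$, consistent with the bound $L(\Pi^2_{\epsilon_\times, M_\times}) = \mathcal{O}(1+\text{log } (M_\times^2/\epsilon_\times))$ used below.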
In the following lemma, we construct a partition of unity in $\Omega$ subordinate to an open covering, of which each element is the affine image of one out of three \emph{canonical patches}. Remark that we admit corners with associated angle of aperture $\pi$; this will be instrumental, in Corollaries \ref{cor:polygon-BVP} and \ref{cor:Eigen}, for the imposition of different boundary conditions on $\partial \Omega$. The three canonical patches that we consider are listed in Lemma \ref{lemma:exist-PU}, item [P2]. Affine images of $(0,1)^2$ are used away from corners of $\partial\Omega$ and when the internal angle of a corner is smaller than $\pi$. Affine images of $(-1,1)\times(0,1)$ are used near corners with internal angle $\pi$. PDE solutions may exhibit point singularities near such corners, e.g., if the two neighboring edges have different types of boundary conditions. Affine images of $(-1,1)^2\setminus (-1, 0]^2$ are used near corners with internal angle larger than $\pi$. In the proof of Theorem \ref{th:polygon}, we use on each patch Theorem \ref{th:ReLUapprox} or a result from Subsection \ref{sec:EllPDEFichera} below. A triangulation $\mathcal{T}$ of $\Omega$ is defined as a finite partition of $\Omega$ into open triangles $K$ such that $\bigcup_{K\in\mathcal{T}} \overline{K} = \overline{\Omega}$. A \emph{regular triangulation} of $\Omega$ is, additionally, a triangulation $\mathcal{T}$ of $\Omega$ such that, for any two neighboring elements $K_1, K_2\in \mathcal{T}$, $\overline{K}_1\cap \overline{K}_2$ is either a corner of both $K_1$ and $K_2$ or an entire edge of both $K_1$ and $K_2$. For a regular triangulation $\mathcal{T}$ of $\Omega$, we denote by $S_1(\Omega, \mathcal{T})$ the space of functions $v\in C(\Omega)$ such that for every $K \in \mathcal{T}$, $v|_{K} \in \mathbb{P}_1$. We postpone the proof of Lemma \ref{lemma:exist-PU} to Appendix \ref{sec:triangulationpolygon}.
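The elements of $S_1(\Omega, \mathcal{T})$, in particular the hat basis functions of the partition of unity, are continuous and piecewise linear, and such functions admit exact ReLU representations (\cite{TM1999,he2020}). In one dimension this is elementary: a hat with nodes $a<b<c$ is a combination of three ReLU neurons, as the following sketch verifies (the two-dimensional case additionally requires emulating maxima and minima of affine functions).

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def hat(t, a, b, c):
    # Piecewise linear hat: 0 outside [a, c], 1 at b, affine in between.
    return np.where(t <= b, np.clip((t - a) / (b - a), 0.0, 1.0),
                    np.clip((c - t) / (c - b), 0.0, 1.0))

def relu_hat(t, a, b, c):
    # Exact one-hidden-layer ReLU representation with three neurons.
    return (relu(t - a) / (b - a)
            - (1.0 / (b - a) + 1.0 / (c - b)) * relu(t - b)
            + relu(t - c) / (c - b))

t = np.linspace(-2.0, 3.0, 2001)
assert np.allclose(relu_hat(t, 0.0, 0.5, 1.0), hat(t, 0.0, 0.5, 1.0))
assert np.allclose(relu_hat(t, -1.0, 0.25, 2.0), hat(t, -1.0, 0.25, 2.0))
```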
\begin{lemma} \label{lemma:exist-PU} Let $\Omega\subset\mathbb{R}^2$ be a polygon with Lipschitz boundary, consisting of straight sides, and with a finite set $\mathcal{C}$ of corners. Then, there exist $N_p\in \mathbb{N}$, a regular triangulation $\mathcal{T}$ of $\mathbb{R}^2$ such that for all $K\in\mathcal{T}$ either $K\subset\Omega$ or $K\subset\Omega^c$, and an open cover $\{\Omega_i\}_{i=1}^{N_p}$ of $\Omega$. Moreover, there exists a partition of unity $\{\phi_i\}_{i=1}^{N_p}\subset S_1(\Omega,\mathcal{T})$ such that \begin{itemize} \item[\rm{[P1]}] $\supp(\phi_i)\cap\Omega\subset\Omega_i$ for all $i=1,\ldots,N_p$, \item[\rm{[P2]}] for each $i\in\range{N_p}$, there exists an affine map $\psi_i \colon \mathbb{R}^2 \to \mathbb{R}^2$ such that $\psi_i^{-1}(\Omega_i) = \widehat{\Omega}_i$ for $$ \widehat{\Omega}_i \in \{(0,1)^2, \Omega_{DN}, \Omega_{C}\}, \quad \text{ with } \quad \Omega_{DN} := (-1,1)\times(0,1), \quad \Omega_{C} := (-1,1)^2\setminus (-1, 0]^2; $$ \item[\rm{[P3]}] $\mathcal{C} \cap\overline{\Omega}_i \subset \psi_i(\{(0,0)\})$ for all $i\in \{1, \dots, N_p\}$. \end{itemize} \end{lemma} The following statement, then, provides expression rates for the NN approximation of functions in weighted analytic classes in polygonal domains. We recall that all NNs are realized with the ReLU activation function, see \eqref{eq:NetworkScheme}. \begin{theorem} \label{th:polygon} Let $\Omega\subset\mathbb{R}^2$ be a polygon with Lipschitz boundary consisting of straight sides and with a finite set $\mathcal{C}$ of corners. Let ${\underline{\gamma}} = \{\gamma_c: c\in \mathcal{C}\}$ such that $\min{\underline{\gamma}} >1$. Then, for all $u\in \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega; \mathcal{C}, \emptyset)$ and for every $0 < \epsilon < 1$, there exists a NN $\Phi_{\epsilon, u}$ such that \begin{equation} \label{eq:polygon} \| u - \Realiz(\Phi_{\epsilon, u}) \|_{H^1(\Omega)}\leq \epsilon.
\end{equation} In addition, as $\epsilon \to 0$, $$ M(\Phi_{\epsilon, u}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|^{5}), \;\; L(\Phi_{\epsilon,u}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|\text{log } (\left|\text{log } (\epsilon)\right|)). $$ \end{theorem} \begin{proof} We introduce, using Lemma \ref{lemma:exist-PU}, a regular triangulation $\mathcal{T}$ of $\mathbb{R}^2$, an open cover $\{\Omega_i\}_{i=1}^{N_p}$ of $\Omega$, and a partition of unity $\{\phi_i\}_{i=1}^{N_p} \in \left[ S_1(\Omega,\mathcal{T}) \right]^{N_p}$ such that the properties \textrm{[P1]} -- \textrm{[P3]} of Lemma \ref{lemma:exist-PU} hold. We define $\hat{u}_i \coloneqq u_{|_{\Omega_i}}\circ \psi_i : \widehat{\Omega}_i\to \mathbb{R}$. Since $u\in \mathcal{J}^\varpi_{{\underline{\gamma}}}(\Omega; \mathcal{C}, \emptyset)$ with $\min{\underline{\gamma}}>1$ and since the maps $\psi_i$ are affine, we observe that for every $i\in\{1, \dots, N_p\}$, there exists $\hat{{\underline{\gamma}}}_i$ such that $\min\hat{{\underline{\gamma}}}_i>1$ and $\hat{u}_i\in \mathcal{J}^\varpi_{\hat{{\underline{\gamma}}}_i}(\widehat{\Omega}_i; \{(0,0)\}, \emptyset)$, because of \textrm{[P2]} and \textrm{[P3]}. Let \begin{equation*} \epsilon_1 \coloneqq \frac{\epsilon}{2 N_p\max_{i\in\{1, \dots, N_p\}} \|\phi_i\|_{W^{1,\infty}(\Omega)} \left(\| \det J_{\psi_i}\|_{L^\infty(\widehat{\Omega}_i)} \left( 1 + \|\| J_{\psi^{-1}_i}\|_2\|_{L^\infty(\Omega_i)}^{2} \right) \right)^{1/2} }.
\end{equation*} By Theorem \ref{th:ReLUapprox} and by Lemma \ref{lem:straightcorner} and Theorem \ref{prop:Fichera-appx} in the forthcoming Subsection \ref{sec:EllPDEFichera}, there exist $N_p$ NNs $\Phi^{\hat{u}_i}_{\epsilon_1}$, $i\in \{1,\dots, N_p\}$, such that \begin{equation} \label{eq:Phi-hu} \|\hat{u}_i - \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}) \|_{H^1(\widehat{\Omega}_i)}\leq \epsilon_1, \qquad \forall i\in \{1, \dots, N_p\}, \end{equation} and there exists $C_\infty>0$ independent of $\epsilon_1$ such that, for all $i \in \{1, \dots, N_p\}$ and all $\hat{\epsilon} \in (0,1)$, \begin{equation*} \|\Realiz(\Phi^{\hat{u}_i}_{\hat{\epsilon}}) \|_{L^\infty(\widehat{\Omega}_i)} \leq C_\infty (1+\left| \text{log } \hat{\epsilon} \right|^4). \end{equation*} The NNs given by Theorem \ref{th:ReLUapprox}, Lemma \ref{lem:straightcorner} and Theorem \ref{prop:Fichera-appx}, which we here denote by $\widetilde{\Phi}^{\hat{u}_i}_{\epsilon_1}$ for $i = 1,\ldots,N_p$, may not have equal depth. Therefore, for all $i=1,\ldots,N_p$ and suitable $L_i\in\mathbb{N}$ we define $\Phi^{\hat{u}_i}_{\epsilon_1} := \Phi^{\Id}_{1,L_i} \odot \widetilde{\Phi}^{\hat{u}_i}_{\epsilon_1}$, so that the depth is the same for all $i=1,\ldots,N_p$. To estimate the size of the enlarged NNs, we use the fact that the size of a NN is not smaller than the depth unless the associated realization is constant. In the latter case, we could replace the NN by a NN with one non-zero weight without changing the realization. By this argument, we obtain for all $i=1,\ldots,N_p$ that $M(\Phi^{\hat{u}_i}_{\epsilon_1}) \leq 2M(\Phi^{\Id}_{1,L_i}) + 2M(\widetilde{\Phi}^{\hat{u}_i}_{\epsilon_1}) \leq C \max_{j=1,\ldots,N_p} L(\widetilde{\Phi}^{\hat{u}_j}_{\epsilon_1}) + C M(\widetilde{\Phi}^{\hat{u}_i}_{\epsilon_1}) \leq C \max_{j=1,\ldots,N_p} M(\widetilde{\Phi}^{\hat{u}_j}_{\epsilon_1})$.
Furthermore, as shown in \cite{he2020}, there exist NNs $\Phi^{\phi_i}$, $i\in\range{N_p}$, such that \begin{equation*} \Realiz(\Phi^{\phi_i})(x) = \phi_i(x), \qquad \forall x\in \Omega,\; \forall i\in\{1, \dots, N_p\}. \end{equation*} Here we use that $\mathcal{T}$ is a partition of $\mathbb{R}^2$, so that $\phi_i$ is defined on all of $\mathbb{R}^2$ and \cite[Theorem 5.2]{he2020} applies, which itself is based on \cite{TM1999}. Similarly to the previously handled case of $\Phi^{\hat{u}_i}_{\epsilon_1}$, we can assume that $\Phi^{\phi_i}$ for $i=1,\ldots,N_p$ all have equal depth and that the size of $\Phi^{\phi_i}$ is bounded independently of $i$. Since by \textrm{[P2]} the mappings $\psi_i$ are affine and invertible, it follows that $\psi_i^{-1}$ is affine for every $i \in \{1, \dots, N_p\}$. Thus, there exist NNs $\Phi^{\psi^{-1}_i}$, $i\in\range{N_p}$, of depth $1$, such that \begin{equation} \label{eq:Phi-psi} \Realiz(\Phi^{\psi_i^{-1}})(x) = \psi_i^{-1}(x), \qquad \forall x\in \Omega_i,\; \forall i\in\{1, \dots, N_p\}. \end{equation} Next, we define \begin{equation*} \epsilon_{\times}\coloneqq \frac{\epsilon}{2N_p(|\Omega|^{1/2}+ 1+|u|_{H^1(\Omega)} + \max_{i=1, \dots, N_p}\|\phi_i\|_{W^{1, \infty}(\Omega)}|\Omega|^{1/2})} \end{equation*} and $M_{\times}(\epsilon_1)\coloneqq C_\infty (1+\left| \text{log } \epsilon_1 \right|^4)$. Finally, we set \begin{equation} \label{eq:NN-polygon-def} \Phi_{\epsilon, u} \coloneqq ((\underbrace{1, \dots, 1}_{N_p\text{ times}}), 0) \odot \Par\left(\left\{\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)} \odot \Par(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i}, \Phi^{\Id}_{1,L} \odot \Phi^{\phi_i})\right\}_{i=1}^{N_p} \right), \end{equation} where $L\in\mathbb{N}$ is such that $L(\Phi^{\hat{u}_1}_{\epsilon_1}\odot \Phi^{\psi^{-1}_1}) = L(\Phi^{\Id}_{1,L} \odot \Phi^{\phi_1})$, which yields that $M(\Phi^{\Id}_{1,L}) \leq C L(\Phi^{\hat{u}_1}_{\epsilon_1}\odot \Phi^{\psi^{-1}_1})$.
\paragraph{Approximation accuracy.} By \eqref{eq:NN-polygon-def}, we have for all $x\in \Omega$, \begin{equation*} \Realiz(\Phi_{\epsilon, u})(x) = \sum_{i=1}^{N_p} \Realiz(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)}) \left( \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot\Phi^{\psi^{-1}_i})(x), \Realiz(\Phi^{\phi_i})(x)\right). \end{equation*} Therefore, \begin{equation} \label{eq:polygon-appx-triangle} \begin{aligned} \| u - \Realiz(\Phi_{\epsilon,u}) \|_{H^1(\Omega)} &\leq \| u - \sum_{i=1}^{N_p}\phi_i\Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i}) \|_{H^1(\Omega)} \\ &\qquad + \sum_{i=1}^{N_p} \| \Realiz(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)}) \left(\Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i}), \phi_i \right) - \phi_i\Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi_i^{-1}}) \|_{H^1(\Omega)} \\ & = (I) + (II). \end{aligned} \end{equation} We start by considering term $(I)$. For each $i\in \{1, \dots, N_p\}$, thanks to \eqref{eq:Phi-hu}, there holds, with $\| J_{\psi^{-1}_i} \|_2^2$ denoting the square of the matrix $2$-norm of the Jacobian of $\psi_i^{-1}$, \begin{equation} \label{eq:appx-omegai} \begin{aligned} & \| u - \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i}) \|_{H^1(\Omega_i)} \\ &\qquad = \| \hat{u}_i\circ \psi^{-1}_i - \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1})\circ \psi^{-1}_i \|_{H^1(\Omega_i)} \\ & \qquad = \left( \int_{\widehat{\Omega}_i} \left(\snormc{\hat{u}_i - \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1})}^2 + \normc[2]{J_{\psi_i^{-1}} \nabla \left(\hat{u}_i - \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}) \right)}^2 \right) \det J_{\psi_i} dx \right)^{1/2} \\ &\qquad \leq \epsilon_1 \left( \| \det J_{\psi_i}\|_{L^\infty(\widehat{\Omega}_i)} + \| \det J_{\psi_i}\|_{L^\infty(\widehat{\Omega}_i)} \| \| J_{\psi^{-1}_i} \|_2^2 \|_{L^\infty(\Omega_i)} \right)^{1/2} \\ & \qquad \leq \epsilon_2:= \epsilon_1 \max_i \left( \| \det J_{\psi_i}\|_{L^\infty(\widehat{\Omega}_i)} + \| \det J_{\psi_i}\|_{L^\infty(\widehat{\Omega}_i)} \| \|
J_{\psi^{-1}_i} \|_2^2 \|_{L^\infty(\Omega_i)} \right)^{1/2}. \end{aligned} \end{equation} By \cite[Theorem 2.1]{Melenk1996}, \begin{equation} \label{eq:polygon-appx-I} (I) \leq N_p \epsilon_2\max_{i\in \{1, \dots, N_p\}}\|\phi_i\|_{W^{1, \infty}(\Omega)} \leq \frac\epsilon2. \end{equation} We now consider term $(II)$ in \eqref{eq:polygon-appx-triangle}. There holds, by Theorem \ref{th:ReLUapprox} and \eqref{eq:Phi-psi}, \begin{equation*} \|\Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i})\|_{L^\infty(\Omega_i)} = \|\Realiz(\Phi^{\hat{u}_i}_{\epsilon_1})\|_{L^\infty(\widehat{\Omega}_i)} \leq C_{\infty} (1 + \left| \text{log } \epsilon_1 \right|^4) \end{equation*} for all $i\in \{1, \dots, N_p\}$. Furthermore, by [P1], $\phi_i(x) = 0$ for all $x\in \Omega\setminus\Omega_i$ and, by Proposition \ref{prop:Multiplication}, \begin{equation*} \Realiz(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)}) \left(\Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i})(x), \phi_i (x)\right) = 0, \qquad \forall x\in \Omega\setminus\Omega_i. \end{equation*} From \eqref{eq:appx-omegai}, we also have \begin{equation*} | \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i}) |_{H^1(\Omega_i)} \leq | u |_{H^1(\Omega_i)} + \| u - \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i}) \|_{H^1(\Omega_i)} \leq 1+ | u |_{H^1(\Omega_i)}. 
\end{equation*} Hence, \begin{equation} \label{eq:polygon-appx-II} \begin{aligned} (II) & = \sum_{i=1}^{N_p} \| \Realiz(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)}) \left(\Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i}), \phi_i \right) - \phi_i\Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi_i^{-1}}) \|_{H^1(\Omega_i)} \\ &\leq \sum_{i=1}^{N_p}\bigg( \| \Realiz(\Pi^{2}_{\epsilon_{\times}, M_{\times}(\epsilon_1)}) (a,b) - ab\|_{W^{1, \infty}([-M_{\times}(\epsilon_1), M_{\times}(\epsilon_1)]^2)} \\ &\qquad \qquad\cdot \left(|\Omega|^{1/2}+ | \Realiz(\Phi^{\hat{u}_i}_{\epsilon_1}\odot \Phi^{\psi^{-1}_i}) |_{H^1(\Omega_i)} +| \phi_i |_{H^1(\Omega_i)} \right) \bigg) \\ &\leq N_p \epsilon_{\times} \left(|\Omega|^{1/2}+ 1 + | u |_{H^1(\Omega)} + |\Omega|^{1/2}\max_{i=1,\dots, N_p} \|\phi_i\|_{W^{1, \infty}(\Omega)}\right) \\ & \leq \frac{\epsilon}{2}. \end{aligned} \end{equation} The asserted approximation accuracy follows by combining \eqref{eq:polygon-appx-triangle}, \eqref{eq:polygon-appx-I}, and \eqref{eq:polygon-appx-II}. \paragraph{Size of the neural network.} To bound the size of the NN, we remark that $N_p$ and the sizes of $\Phi^{\psi_i^{-1}}$ and of $\Phi^{\phi_i}$ only depend on the domain $\Omega$. Furthermore, there exist constants $C_{\Omega, i}$, $i=1,2,3$, that depend only on $\Omega$ and $u$ such that \begin{equation} \label{eq:epsilons-polygon} \begin{gathered} \left| \text{log } \epsilon_1 \right| \leq C_{\Omega, 1} (1+\left| \text{log } \epsilon \right|),\qquad \qquad \left| \text{log } \epsilon_\times \right| \leq C_{\Omega, 2}(1+ \left| \text{log } \epsilon \right|), \\ \left| \text{log } M_\times(\epsilon_1) \right| \leq C_{\Omega, 3}(1+ \text{log } (1+\left|\text{log } \epsilon \right|)).
\end{gathered} \end{equation} From Theorem \ref{th:ReLUapprox} and Proposition \ref{prop:Multiplication}, in addition, there exist constants $C^L_{\hat{u}}, C^M_{\hat{u}}, C_{\times}>0$ such that, for all $0< \epsilon_1 , \epsilon_\times\leq 1$, \begin{equation} \label{eq:sizes-polygon} \begin{gathered} L(\Phi^{\hat{u}_i}_{\epsilon_1}) \leq C^L_{\hat{u}} (1+\left| \text{log } \epsilon_1 \right|)(1 + \text{log } (1+\left| \text{log } \epsilon_1 \right|)), \qquad \qquad M(\Phi^{\hat{u}_i}_{\epsilon_1}) \leq C^M_{\hat{u}} (1+\left| \text{log } \epsilon_1 \right|^{5}),\\ \max(M(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)}), L(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)})) \leq C_\times (1+\text{log } (M_{\times}(\epsilon_1)^2/\epsilon_{\times})). \end{gathered} \end{equation} Then, by \eqref{eq:NN-polygon-def}, we have \begin{equation} \label{eq:depth-size-polygon} \begin{aligned} & L(\Phi_{\epsilon, u}) = 1 + L(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)}) +\max_{i=1, \dots, N_p} \left(L(\Phi^{\hat{u}_i}_{\epsilon_1}) + L(\Phi^{\psi_i^{-1}}) \right), \\ & M(\Phi_{\epsilon, u}) \leq {C}\left( N_p + M(\Pi^2_{\epsilon_{\times}, M_{\times}(\epsilon_1)}) +\sum_{i=1}^{N_p} \left(M(\Phi^{\hat{u}_i}_{\epsilon_1}) + M(\Phi^{\psi_i^{-1}}) + M(\Phi^{\Id}_{1,L}) + M(\Phi^{\phi_i}) \right) \right). \end{aligned} \end{equation} The desired depth and size bounds follow from \eqref{eq:epsilons-polygon}, \eqref{eq:sizes-polygon}, and \eqref{eq:depth-size-polygon}. This concludes the proof. \end{proof} The exponential expression rate for the class of weighted, analytic functions in $\Omega$ by realizations of NNs with ReLU activation in the $H^1(\Omega)$-norm established in Theorem \ref{th:polygon} implies an exponential expression rate bound on $\partial\Omega$, via the trace map and the fact that $\partial\Omega$ can be \emph{exactly parametrized by the realization of a shallow NN with ReLU activation}. 
This is relevant for NN-based solution of boundary integral equations. \begin{corollary}[NN expression of Dirichlet traces]\label{cor:polygontrace} Let $\Omega\subset \mathbb{R}^2$ be a polygon with Lipschitz boundary and a finite set $\mathcal{C}$ of corners. Let ${\underline{\gamma}} = \{\gamma_c: c\in \mathcal{C}\}$ such that $\min{\underline{\gamma}} >1$. For any connected component $\Gamma$ of $\partial\Omega$, let $\ell_\Gamma>0$ be the length of $\Gamma$, such that there exists a continuous, piecewise affine parametrization $\theta:[0,\ell_\Gamma]\to\mathbb{R}^2:t\mapsto\theta(t)$ of $\Gamma$ with finitely many affine linear pieces and $\normc[2]{\tfrac{d}{dt}\theta} = 1$ for almost all $t\in[0,\ell_\Gamma]$. Then, for all $u\in \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega; \mathcal{C}, \emptyset)$ and for all $0 < \epsilon < 1$, there exists a NN $\Phi_{\epsilon, u, \theta}$ approximating the trace $T u := u_{|_{\Gamma}}$ such that \begin{equation} \label{eq:polygontrace} \| T u- \Realiz(\Phi_{\epsilon, u, \theta})\circ\theta^{-1} \|_{H^{1/2}(\Gamma)}\leq \epsilon. \end{equation} In addition, as $\epsilon \to 0$, $$ M(\Phi_{\epsilon, u, \theta}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|^{5}), \;\; L(\Phi_{\epsilon, u, \theta}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|\text{log } (\left|\text{log } (\epsilon)\right|)). $$ \end{corollary} \begin{proof} We note that both components of $\theta$ are continuous, piecewise affine functions on $[0,\ell_\Gamma]$; thus they can be represented exactly as the realization of a NN of depth two, with the ReLU activation function. Moreover, the number of weights of these NNs is of the order of the number of affine linear pieces of $\theta$. We denote the parallelization of the NNs emulating exactly the two components of $\theta$ by $\Phi^{\theta}$. By continuity of the trace operator $T: H^1(\Omega) \to H^{1/2}(\partial\Omega)$ (e.g.
\cite{Gagliardo1957,Brenner2008}), there exists a constant $C_{\mathrm{\Gamma}}>0$ such that for all $v\in H^{1}(\Omega)$ it holds $ \normc[H^{1/2}(\Gamma)]{T v } \leq C_{\mathrm{\Gamma}} \normc[H^{1}(\Omega)]{ v }, $ and without loss of generality we may assume $C_{\mathrm{\Gamma}}\geq1$. Next, for any $\epsilon\in(0,1)$, let $\Phi_{\epsilon/C_{\mathrm{\Gamma}}, u}$ be as given by Theorem \ref{th:polygon}. Define $\Phi_{\epsilon, u, \theta} := \Phi_{\epsilon/C_{\mathrm{\Gamma}}, u} \odot \Phi^{\theta}$. It follows that \begin{align*} \normc[H^{1/2}(\Gamma)]{T u - \Realiz(\Phi_{\epsilon, u, \theta})\circ\theta^{-1} } = \normc[H^{1/2}(\Gamma)]{T \left(u - \Realiz(\Phi_{\epsilon/C_{\mathrm{\Gamma}}, u}) \right) } \leq C_{\mathrm{\Gamma}} \normc[H^{1}(\Omega)]{ u - \Realiz(\Phi_{\epsilon/C_{\mathrm{\Gamma}}, u}) } \leq \epsilon. \end{align*} The bounds on the depth and size of $\Phi_{\epsilon, u, \theta}$ follow directly from Proposition \ref{prop:conc}, Theorem \ref{th:polygon}, and the fact that the depth and size of $\Phi^{\theta}$ are independent of $\epsilon$. This finishes the proof. \end{proof} \begin{remark} The exponent $5$ in the bound on the NN size $M(\Phi_{\epsilon, u, \theta})$ in Corollary \ref{cor:polygontrace} is likely not optimal, as it is inherited from the NN expression rate bound in $\Omega$. \end{remark} The proof of Theorem \ref{th:polygon} established exponential expressivity of realizations of NNs with ReLU activation for the analytic class $\mathcal{J}^\varpi_{{\underline{\gamma}}}(\Omega; \mathcal{C}, \emptyset)$ in $\Omega$. This implies that realizations of NNs can approximate, with exponential expressivity, solution classes of elliptic PDEs in polygonal domains $\Omega$. We illustrate this by formulating concrete results for three problem classes: second order, linear, elliptic source and eigenvalue problems in $\Omega$, and viscous, incompressible flow. To formulate the results, we specify the assumptions on $\Omega$.
\begin{definition}[Linear, second order, elliptic divergence-form differential operator with analytic coefficients] \label{def:Dop} Let $d\in\{2, 3\}$ and let $\Omega\subset\mathbb{R}^d$ be a bounded domain. Let the coefficient functions $a_{ij},b_i, c:\overline{\Omega}\to \mathbb{R}$ be real analytic in $\overline{\Omega}$, and such that the matrix function $A = (a_{ij})_{1\leq i,j\leq d}:\Omega \to \mathbb{R}^{d \times d}$ is symmetric and uniformly positive definite in $\Omega$. With these functions, we define the linear, second order, elliptic divergence-form differential operator $\Dop$ acting on $w\in C^\infty_0(\Omega)$ via (summation over repeated indices $i,j\in \{1,\dots, d\}$) $$ (\Dop w)(x) := -\partial_i(a_{ij}(x)\partial_j w(x)) + b_j(x)\partial_j w(x) + c(x)w(x) \;, \quad x\in \Omega\;. $$ \end{definition} \begin{setting} \label{setting:polygon} We assume that $\Omega\subset \mathbb{R}^2$ is an open, bounded polygon with boundary $\partial\Omega$ that is Lipschitz and connected. In addition, $\partial\Omega$ is the closure of a finite number $J \geq 3$ of straight, open sides $\Gamma_j$, i.e., $\Gamma_{i} \cap \Gamma_j =\emptyset$ for $i\ne j$ and $\partial\Omega = \bigcup_{1\leq j \leq J} \overline{\Gamma_j}$. We assume the sides are enumerated cyclically, according to arc length, i.e. $\Gamma_{J+1} = \Gamma_1$. By $n_j$, we denote the exterior unit normal vector to $\Omega$ on $\Gamma_j$ and by $\bbc_j := \overline{\Gamma_{j-1}} \cap \overline{\Gamma_j}$ the corner $j$ of $\Omega$. With $\Dop$ as in Definition \ref{def:Dop}, we associate with each boundary segment $\Gamma_j$ a boundary operator $\Bop_j \in \{\gamma^j_0,\gamma^j_1\}$, i.e. either the Dirichlet trace $\gamma^j_0$ or the distributional (co-)normal derivative operator $\gamma^j_1$, acting on $w\in C^1(\overline{\Omega})$ via \begin{equation} \label{eq:boundarycond} \gamma^j_0 w := w|_{\Gamma_j} \, ,\qquad \gamma^j_1 w := (A\nabla w) \cdot n_j|_{\Gamma_j}, \quad j=1,...,J\;.
\end{equation} We collect the boundary operators $\Bop_j$ in $\Bop := \{ \Bop_j \}_{j=1}^J$. \end{setting} The first corollary addresses exponential ReLU expressivity of solutions of the source problem corresponding to $(\Dop,\Bop)$. \begin{corollary} \label{cor:polygon-BVP} Let $\Omega$, $\Dop$, and $\Bop$ be as in Setting \ref{setting:polygon} with $d=2$. For $f$ analytic in $\overline{\Omega}$, let $u$ denote a solution to the boundary value problem \begin{equation}\label{eq:HomDiri} \Dop u = f \;\text{ in }\;\Omega, \qquad \Bop u = 0\;\text{ on }\;\partial \Omega \;. \end{equation} Then, for every $0 < \epsilon < 1$, there exists a NN $\Phi_{\epsilon, u}$ such that \begin{equation} \label{eq:polygon-BVP} \| u - \Realiz(\Phi_{\epsilon, u}) \|_{H^1(\Omega)}\leq \epsilon. \end{equation} In addition, $M(\Phi_{\epsilon, u}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|^{5})$ and $L(\Phi_{\epsilon,u}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|\text{log } (\left|\text{log } (\epsilon)\right|))$, as $\epsilon \to 0$. \end{corollary} \begin{proof} The proof is obtained by verifying weighted, analytic regularity of solutions. By \cite[Theorem 3.1]{GuoBab4} there exists ${\underline{\gamma}}$ such that $\min{\underline{\gamma}}>1$ and $u\in \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega; \mathcal{C}, \emptyset)$. Then, the application of Theorem \ref{th:polygon} concludes the proof. \end{proof} Next, we address NN expression rates for eigenfunctions of $(\Dop,\Bop)$. \begin{corollary}\label{cor:Eigen} Let $\Omega$, $\Dop$, $\Bop$ be as in Setting \ref{setting:polygon} with $d=2$ and $b_i = 0$ in Definition \ref{def:Dop}, and let $0 \ne w\in H^1(\Omega)$ be an eigenfunction of the elliptic eigenvalue problem \begin{equation}\label{eq:EllEVP} \Dop w = \lambda w \text{ in }\Omega, \qquad \Bop w = 0 \text{ on }\partial \Omega.
\end{equation} Then, for every $0 < \epsilon < 1$, there exists a NN $\Phi_{\epsilon, w}$ such that \begin{equation} \label{eq:polygon-EVP} \| w - \Realiz(\Phi_{\epsilon, w}) \|_{H^1(\Omega)}\leq \epsilon. \end{equation} In addition, $M(\Phi_{\epsilon, w}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|^{5})$ and $L(\Phi_{\epsilon,w}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|\text{log } (\left|\text{log } (\epsilon)\right|))$, as $\epsilon \to 0$. \end{corollary} \begin{proof} The statement follows from the regularity result \cite[Theorem 3.1]{Babuska1989}, and Theorem \ref{th:polygon} as in Corollary \ref{cor:polygon-BVP}. \end{proof} The weighted, analytic regularity of solutions $u$ used in the proof of Theorem \ref{th:polygon} also holds for certain nonlinear, elliptic PDEs. We illustrate this for the velocity field of viscous, incompressible flow in $\Omega$. \begin{corollary}\label{cor:NSE} Let $\Omega\subset \mathbb{R}^2$ be as in Setting \ref{setting:polygon}. Let $\nu>0$ and let $\boldsymbol{u}\in H^1_0(\Omega)^2$ be the velocity field of a Leray solution of the viscous, incompressible Navier-Stokes equations in $\Omega$, with homogeneous Dirichlet (``no slip'') boundary conditions \begin{equation} \label{eq:NSE} -\nu\Delta \boldsymbol{u} + (\boldsymbol{u}\cdot\nabla) \boldsymbol{u} + \nabla p = \boldsymbol{f} \text{ in }\Omega,\qquad \nabla\cdot\boldsymbol{u} = 0\text{ in }\Omega,\qquad \boldsymbol{u} =\boldsymbol{0}\text{ on }\partial\Omega, \end{equation} where the components of $\boldsymbol{f}$ are analytic in $\overline{\Omega}$ and such that $\|\boldsymbol{f}\|_{H^{-1}(\Omega)}/ \nu^2$ is small enough so that $\boldsymbol{u}$ is unique. Then, for every $0 < \epsilon < 1$, there exists a NN $\Phi_{\epsilon, \boldsymbol{u}}$ with two-dimensional output such that \begin{equation} \label{eq:polygon-NSE} \| \boldsymbol{u} - \Realiz(\Phi_{\epsilon, \boldsymbol{u}}) \|_{H^1(\Omega)}\leq \epsilon.
\end{equation} In addition, $M(\Phi_{\epsilon, \boldsymbol{u}})= \mathcal{O}(\left|\text{log } (\epsilon)\right|^{5})$ and $L(\Phi_{\epsilon,\boldsymbol{u}}) = \mathcal{O}(\left|\text{log } (\epsilon)\right|\text{log } (\left|\text{log } (\epsilon)\right|))$, as $\epsilon \to 0$. \end{corollary} \begin{proof} The velocity fields of Leray solutions of the Navier-Stokes equations in $\Omega$ satisfy the weighted, analytic regularity $\boldsymbol{u} \in \big[\mathcal{J}^\varpi_{{\underline{\gamma}}}(\Omega; \mathcal{C},\emptyset)\big]^2$, with $\min{\underline{\gamma}}>1$, see \cite{MS19_2743}. Then, the application of Theorem \ref{th:polygon} concludes the proof. \end{proof} \subsection{Elliptic PDEs in Fichera-type polyhedral domains} \label{sec:EllPDEFichera} Fichera-type polyhedral domains $\Omega\subset \mathbb{R}^3$ are, loosely speaking, closures of finite, disjoint unions of (possibly affinely mapped) axiparallel hexahedra such that $\partial\Omega$ is Lipschitz. In Fichera-type domains, analytic regularity of solutions of linear, elliptic boundary value problems from acoustics and linear elasticity in displacement formulation has been established in \cite{CoDaNi12}. As an example of a boundary value problem covered by \cite{CoDaNi12} and our theory, consider $\Omega \coloneqq (-1, 1)^d \setminus (-1, 0]^d$ for $d=2,3$, displayed for $d=3$ in Figure \ref{fig:Fichera}. \begin{figure} \centering \includegraphics[width=0.3\textwidth]{Fichera.pdf} \caption{Example of Fichera-type corner domain.} \label{fig:Fichera} \end{figure} We recall that all NNs are realized with the ReLU activation function, see \eqref{eq:NetworkScheme}. We introduce the \emph{setting for elliptic problems with analytic coefficients in $\Omega$}. Note that the boundary of $\Omega$ is composed of $6$ edges when $d=2$ and of $9$ faces when $d=3$. \begin{setting} \label{setting:Fichera} We assume that $\Dop$ is an elliptic operator as in Definition \ref{def:Dop}.
When $d=3$, we assume furthermore that the diffusion coefficient $A\in \mathbb{R}^{3\times 3}$ is a constant, symmetric, positive definite matrix and $b_i = c = 0$. On each edge (if $d=2$) or face (if $d=3$) $\Gamma_j\subset \partial \Omega$, $j\in\range{3d}$, we introduce the boundary operator $\Bop_j \in \{\gamma_0,\gamma_1\}$, where $\gamma_0$ and $\gamma_1$ are defined as in \eqref{eq:boundarycond}. We collect the boundary operators $\Bop_j$ in $\Bop := \{ \Bop_j \}_{j=1}^{3d}$. \end{setting} For a right-hand side $f$, the elliptic boundary value problem we consider in this section is then \begin{equation} \label{eq:Fichera} \begin{aligned} \Dop u = f \text{ in }\Omega, \quad \Bop u = 0 \text{ on }\partial\Omega. \end{aligned} \end{equation} The following extension lemma will be useful for the approximation of the solution to \eqref{eq:Fichera} by NNs. We postpone its proof to Appendix \ref{sec:W11ext}. \begin{lemma} \label{lemma:W11ext} Let $d\in\{2,3\}$ and $u\in W_{\mathrm{mix}}^{1,1}(\Omega)$. Then, there exists a function $v\in W_{\mathrm{mix}}^{1,1}((-1,1)^d)$ such that $v|_{\Omega} = u$. The extension is stable with respect to the $W_{\mathrm{mix}}^{1,1}$ norm. \end{lemma} We denote the set containing all corners (including the re-entrant one) of $\Omega$ as \begin{equation*} \mathcal{C} = \{-1,0,1\}^d\setminus \{(-1, \dots, -1)\}. \end{equation*} When $d=3$, for all $c\in\mathcal{C}$ we denote by $\mathcal{E}_c$ the set of edges abutting at $c$, and we denote $\mathcal{E} \coloneqq \bigcup_{c\in\mathcal{C}}\mathcal{E}_c$.
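The corner set $\mathcal{C}$ just defined is straightforward to enumerate. The following sketch is an illustration of ours (the function name is hypothetical); it lists $\mathcal{C}$ for $d\in\{2,3\}$ and confirms the cardinality $3^d-1$.

```python
import itertools

def fichera_corners(d):
    """Corner set C = {-1,0,1}^d minus {(-1,...,-1)} of the Fichera-type
    domain (-1,1)^d \\ (-1,0]^d; corners are encoded as coordinate tuples."""
    excluded = tuple([-1] * d)
    return [c for c in itertools.product((-1, 0, 1), repeat=d) if c != excluded]

C2 = fichera_corners(2)   # d = 2
C3 = fichera_corners(3)   # d = 3; contains the re-entrant corner (0, 0, 0)
```
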
\begin{theorem} \label{prop:Fichera-appx} Let $u \in \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega; \mathcal{C}, \mathcal{E})$ with \begin{alignat*}{3} &{\underline{\gamma}} = \{\gamma_c: c\in \mathcal{C}\}, &&\text{with } \gamma_c>1,\; \text{for all } c\in\mathcal{C} &&\text{ if } d = 2,\\ % &{\underline{\gamma}} = \{\gamma_c, \gamma_e: c\in \mathcal{C}, e\in \mathcal{E}\}, \quad&&\text{with } \gamma_c>3/2\text{ and } \gamma_e>1,\; \text{for all }c\in\mathcal{C}\text{ and }e\in \mathcal{E}\quad &&\text{ if } d = 3. % \end{alignat*} Then, for any $0< \epsilon <1$ there exists a NN $\Phi_{\epsilon, u}$ so that \begin{equation}\label{eq:FicheraNN-appx} \left \| u - \Realiz\left(\Phi_{\epsilon, u}\right) \right\|_{H^1(\Omega)} \leq \epsilon. \end{equation} In addition, $\|\Realiz\left(\Phi_{\epsilon, u}\right)\|_{L^\infty(\Omega)} = \mathcal{O}(1+\left| \text{log } \epsilon \right|^{2d})$, as $\epsilon \to 0$. Also, $M(\Phi_{\epsilon, u}) = \mathcal{O}(|\text{log } (\epsilon)|^{2d+1})$ and $L(\Phi_{\epsilon, u}) = \mathcal{O}(|\text{log } (\epsilon)|\text{log } (|\text{log } (\epsilon)|))$, as $\epsilon \to 0$. \end{theorem} \begin{proof} By Lemma \ref{lemma:W11ext}, we extend the function $u$ to a function ${\tilde{u}}$ such that \begin{equation*} {\tilde{u}}\in W_{\mathrm{mix}}^{1,1}((-1, 1)^d) \quad \text{and}\quad{\tilde{u}}|_{\Omega} = u. \end{equation*} Note that, by the stability of the extension, there exists a constant $C_{\mathrm{ext}}>0$ independent of $u$ such that \begin{equation} \label{eq:ext-fichera-stable} \|{\tilde{u}}\|_{W_{\mathrm{mix}}^{1,1}((-1,1)^d)}\leq C_{\mathrm{ext}} \| u\|_{W_{\mathrm{mix}}^{1,1}(\Omega)}. 
\end{equation} Since $u \in \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega; \mathcal{C}, \mathcal{E})$, there holds $u\in \mathcal{J}^\varpi_{\underline{\gamma}}(S; \mathcal{C}_S, \mathcal{E}_S)$ for all \begin{align} \label{eq:insidefichera} S \in \left\{ \bigtimes_{j=1}^d(a_j, a_j+1/2): (a_1, \dots, a_d)\in\{-1,-1/2,0,1/2\}^d \right\} \text{ such that }S\cap \Omega\neq\emptyset \end{align} with $\mathcal{C}_S = \overline{S}\cap \mathcal{C}$ and $\mathcal{E}_{S} = \{e\in\mathcal{E} : e\subset\overline{S}\}$. Since $S \subset \Omega$ and ${\tilde{u}}|_{\Omega} = u|_{\Omega}$, we also have $$ {\tilde{u}}\in \mathcal{J}^\varpi_{\underline{\gamma}}(S; \mathcal{C}_S, \mathcal{E}_S) \;\mbox{for all }S \mbox{ satisfying \eqref{eq:insidefichera}}. $$ By Theorem \ref{prop:internal}, there exist $C_p>0$, $C_{\widetilde{N}_{\mathrm{1d}}}>0$, $C_{\widetilde{N}_{\mathrm{int}}}>0$, $C_{{\tilde{v}}}>0$, $C_{\widetilde{c}}>0$, and $b_{{\tilde{v}}}>0$ such that, for all $0<\epsilon\leq 1$, there exist $p\in\mathbb{N}$, a partition $\mathcal{G}_{\mathrm{1d}}$ of $(-1, 1)$ into $\widetilde{N}_{\mathrm{int}}$ open, disjoint, connected subintervals, a $d$-dimensional array $\widetilde{c}\in \mathbb{R}^{\widetilde{N}_{\mathrm{1d}}\times\dots\times\widetilde{N}_{\mathrm{1d}}}$, and piecewise polynomials ${\tilde{v}}_i \in \mathbb{Q}_p(\mathcal{G}_{\mathrm{1d}})\cap H^1((-1,1))$, $i=1, \dots, \widetilde{N}_{\mathrm{1d}}$, such that \begin{equation*} \widetilde{N}_{\mathrm{1d}} \leq C_{\widetilde{N}_{\mathrm{1d}}}(1+\left| \text{log } \epsilon \right|^2),\quad \widetilde{N}_{\mathrm{int}} \leq C_{\widetilde{N}_{\mathrm{int}}}(1+\left| \text{log } \epsilon \right|),\quad \|\widetilde{c}\|_{1} \leq C_{\widetilde{c}}(1+\left| \text{log } \epsilon \right|^{2d}),\quad p \leq C_p(1+\left| \text{log } \epsilon \right|) \end{equation*} and \begin{equation*} \|{\tilde{v}}_i\|_{H^1(I)}\leq C_{{\tilde{v}}} \epsilon^{-b_{{\tilde{v}}}}, \qquad \|{{\tilde{v}}}_i\|_{L^\infty(I)}\leq 1,\qquad \forall i\in \{1, \dots,
\widetilde{N}_{\mathrm{1d}}\}. \end{equation*} Furthermore, \begin{equation*} \| u - v_{\mathsf{hp}} \|_{H^1(\Omega)} = \| {\tilde{u}} - v_{\mathsf{hp}} \|_{H^1(\Omega)}\leq \frac\epsilon2, \qquad v_{\mathsf{hp}} = \sum_{{i_1,\dots, i_d}=1}^{\widetilde{N}_{\mathrm{1d}}} \widetilde{c}_{{i_1\dots i_d}} \bigotimes_{j=1}^d{\tilde{v}}_{i_j}. \end{equation*} Due to the stability \eqref{eq:ext-fichera-stable} and to Lemmas \ref{lemma:cbound} and \ref{lemma:W11J3}, there holds \begin{equation*} \|\widetilde{c}\|_1 \leq C \widetilde{N}_{\mathrm{int}}^{2d} \| u\|_{\mathcal{J}^d_{\underline{\gamma}}(\Omega)}, \end{equation*} i.e., the bound on the coefficients $\widetilde{c}$ is independent of the extension ${\tilde{u}}$ of $u$. By Theorem \ref{th:ReLU-hp}, there exists a NN ${\Phi}_{\epsilon, u}$ with the stated approximation properties and asymptotic size bounds. The bound on the $L^\infty(\Omega)$ norm of the realization of $\Phi_{\epsilon, u}$ follows as in the proof of Theorem \ref{th:ReLUapprox}. \end{proof} \begin{remark} Arguing as in Corollary \ref{cor:polygontrace}, a NN with ReLU activation and two-dimensional input can be constructed so that its realization approximates the Dirichlet trace of solutions to \eqref{eq:Fichera} in $H^{1/2}(\partial\Omega)$ at an exponential rate in terms of the NN size $M$. \end{remark} The following statement now gives expression rate bounds for the approximation of solutions to the Fichera problem \eqref{eq:Fichera} by realizations of NNs with the ReLU activation function. \begin{corollary} \label{prop:Fichera} Let $f$ be an analytic function on $\overline{\Omega}$ and let $u$ be a solution to \eqref{eq:Fichera} with operators $\Dop$ and $\Bop$ as in Setting \ref{setting:Fichera} and with source term $f$. Then, for any $0< \epsilon <1$ there exists a NN $\Phi_{\epsilon, u}$ so that \begin{equation}\label{eq:FicheraNN} \left \| u - \Realiz\left(\Phi_{\epsilon, u}\right) \right\|_{H^1(\Omega)} \leq \epsilon.
\end{equation} In addition, $M(\Phi_{\epsilon, u}) = \mathcal{O}(|\text{log } (\epsilon)|^{2d+1})$ and $L(\Phi_{\epsilon, u}) = \mathcal{O}(|\text{log } (\epsilon)|\text{log } (|\text{log } (\epsilon)|))$, for $\epsilon \to 0$. \end{corollary} \begin{proof} By \cite[Corollary 7.1, Theorems 7.3 and 7.4]{CoDaNi12} if $d=3$ and \cite[Theorem 3.1]{GuoBab4} if $d=2$, there exists ${\underline{\gamma}}$ with $\gamma_c-d/2>0$ for all $c\in \mathcal{C}$ and $\gamma_e>1$ for all $e\in \mathcal{E}$ such that $u \in \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega; \mathcal{C}, \mathcal{E})$. An application of Theorem \ref{prop:Fichera-appx} concludes the proof. \end{proof} \begin{remark}\label{rem:weightedanalyticf} By \cite[Corollary 7.1 and Theorem 7.4]{CoDaNi12}, Corollary \ref{prop:Fichera} holds verbatim also under the hypothesis that the right-hand side $f$ is weighted analytic, with singularities at the corners/edges of the domain; specifically, \eqref{eq:FicheraNN} and the size bounds on the NN $\Phi_{\epsilon, u}$ hold under the assumption that there exists ${\underline{\gamma}}$ with $\gamma_c-d/2>0$ for all $c\in \mathcal{C}$ and $\gamma_e>1$ for all $e\in \mathcal{E}$ such that \begin{equation*} f \in \mathcal{J}^\varpi_{{\underline{\gamma}}-2}(\Omega; \mathcal{C}, \mathcal{E}). \end{equation*} \end{remark} \begin{remark}\label{rmk:otheractivat} The numerical approximation of solutions for \eqref{eq:Fichera} with a NN in two dimensions has been investigated e.g. in \cite{LuMengKarniadakis2019} using the so-called `PINNs' methodology. There, the loss function was based on minimization of the residual of the NN approximation in the strong form of the PDE. Evidently, a different (smoother) activation than the ReLU activations considered here had to be used.
Starting from the approximation of products by NNs with smoother activation functions introduced in \cite[Sec.3.3]{schwab2017deep} and following the same line of reasoning as in the present paper, the results we obtain for ReLU-based realizations of NNs can be extended to large classes of NNs with smoother activations and similar architecture. Furthermore, in \cite[Section 3.1]{EDeepRitz}, a slightly different elliptic boundary value problem is numerically approximated by realizations of NNs. Its solutions exhibit the same weighted, analytic regularity as considered in this paper. The presently obtained approximation rates by NN realizations extend also to the approximation of solutions for the problem considered in \cite{EDeepRitz}. \end{remark} In the proof of Theorem \ref{th:polygon}, we require in particular the approximation of weighted analytic functions on $(-1,1)\times(0,1)$ with a corner singularity at the origin. For convenient reference, we detail the argument in this case. \begin{lemma}\label{lem:straightcorner} Let $d=2$ and $\Omega_{DN} := (-1,1)\times(0,1)$. Denote $\mathcal{C}_{DN} = \{-1,0,1\} \times\{0,1\}$. Let $u \in \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega_{DN}; \mathcal{C}_{DN}, \emptyset)$ with ${\underline{\gamma}} = \{\gamma_c: c\in \mathcal{C}_{DN}\}$, with $\gamma_c>1$ for all $c\in\mathcal{C}_{DN}$. Then, for any $0< \epsilon <1$ there exists a NN $\Phi_{\epsilon, u}$ so that \begin{equation}\label{eq:straightcorner} \left \| u - \Realiz\left(\Phi_{\epsilon, u}\right) \right\|_{H^1(\Omega_{DN})} \leq \epsilon. \end{equation} In addition, $\|\Realiz\left(\Phi_{\epsilon, u}\right)\|_{L^\infty(\Omega_{DN})} = \mathcal{O}(1+\left| \text{log } \epsilon \right|^{4})$ , for $\epsilon \to 0$. Also, $M(\Phi_{\epsilon, u}) = \mathcal{O}(|\text{log } (\epsilon)|^{5})$ and $L(\Phi_{\epsilon,u}) = \mathcal{O}(|\text{log } (\epsilon)|\text{log } (|\text{log } (\epsilon)|))$, for $\epsilon \to 0$. 
\end{lemma} \begin{proof} Let ${\tilde{u}}\in W_{\mathrm{mix}}^{1,1}((-1, 1)^2)$ be defined by $$ \begin{cases} {\tilde{u}}(x_1,x_2) = u(x_1,x_2) & \text{ for all } (x_1,x_2)\in(-1,1)\times[0,1),\\ {\tilde{u}}(x_1,x_2) = u(x_1,0) & \text{ for all } (x_1,x_2)\in(-1,1)\times(-1,0), \end{cases} $$ such that ${\tilde{u}}|_{\Omega_{DN}} = u$. Here we used that there exist continuous imbeddings $\mathcal{J}^\varpi_{\underline{\gamma}}(\Omega_{DN}; \mathcal{C}_{DN}, \emptyset) \hookrightarrow W_{\mathrm{mix}}^{1,1}(\Omega_{DN}) \hookrightarrow C^0(\overline{\Omega_{DN}})$ (see Lemma \ref{lemma:W11J3} for the first imbedding), i.e. $u$ can be extended to a continuous function on $\overline{\Omega_{DN}}$. As in the proof of Lemma \ref{lemma:W11ext}, this extension is stable, i.e. there exists a constant $C_{\mathrm{ext}}>0$ independent of $u$ such that \begin{equation} \label{eq:straightcorner-stable} \|{\tilde{u}}\|_{W_{\mathrm{mix}}^{1,1}((-1,1)^d)}\leq C_{\mathrm{ext}} \| u\|_{W_{\mathrm{mix}}^{1,1}(\Omega_{DN})}. \end{equation} Because $u \in \mathcal{J}^\varpi_{\underline{\gamma}}(\Omega_{DN}; \mathcal{C}_{DN}, \emptyset)$, it holds with $\mathcal{C}_S = \overline{S}\cap \mathcal{C}_{DN}$ that $u\in \mathcal{J}^\varpi_{\underline{\gamma}}(S; \mathcal{C}_S, \emptyset)$ for all \[ S\in \left\{ \bigtimes_{j=1,2}(a_j, a_j+1/2): (a_1, a_2)\in\{-1,-1/2,0,1/2\}\times\{0,1/2\} \right\} \;. \] The remaining steps are the same as those in the proof of Theorem \ref{prop:Fichera-appx}. \end{proof} \section{Conclusions and extensions} \label{sec:ConclExt} We review the main findings of the present paper and outline extensions of the present results, and perspectives for further research. 
\subsection{Principal mathematical results} \label{sec:MainRes} We established exponential expressivity of realizations of NNs with the ReLU activation function in the Sobolev norm $H^1$ for functions which belong to certain countably normed, weighted analytic function spaces in cubes $Q=(0,1)^d$ of dimension $d=2,3$. The admissible function classes comprise functions which are real analytic at points $x\in Q$, and which admit analytic extensions to the open sides $F\subset \partial Q$, but may have singularities at corners and (in space dimension $d=3$) edges of $Q$. We have also extended this result to cover exponential expressivity of realizations of NNs with ReLU activation for solution classes of linear, second order elliptic PDEs in divergence form in plane, polygonal domains and of elliptic, nonlinear eigenvalue problems with singular potentials in three space dimensions. Being essentially an approximation result, the DNN expression rate bound in Theorem \ref{th:polygon} will apply to any elliptic boundary value problem in polygonal domains where weighted, analytic regularity is available. Apart from the source and eigenvalue problems, such regularity is in space dimension $d=2$ also available for linearized elastostatics, Stokes flow and general elliptic systems \cite{GuoBab3,GuoScStokes,CoDaNi12}. The established approximation rates of realizations of NNs with ReLU activation are fundamentally based on a novel exponential upper bound on approximation of weighted analytic functions via tensorized $hp$ approximations on multi-patch configurations in finite unions of axiparallel rectangles/hexahedra. The $hp$ approximation result is presented in Theorem \ref{prop:internal} and of independent interest in the numerical analysis of spectral elements. The proofs of exponential expressivity of NN realizations are, in principle, constructive. 
They are based on explicit bounds on the coefficients of $hp$ projections and on corresponding emulation rate bounds for the (re)approximation of modal $hp$ bases. \subsection{Extensions and future work} \label{sec:ExtFrtWrk} The tensor structure of the $hp$ approximation considered here limited the geometries of domains that are admissible for our results. \emph{Curvilinear, mapped domains} with analytic domain maps will allow corresponding approximation rates, with the NN approximations obtained by composing the present constructions with NN emulations of the domain maps, using the fact that compositions of NNs are again NNs. The only activation function considered in this work is the ReLU. Following the same proof strategy, exponential expression rate bounds can be obtained for realizations of NNs with smoother, nonlinear activation functions. We refer to Remark \ref{rmk:otheractivat} and to the discussion in \cite[Sec. 3.3]{schwab2017deep}. The principal results in Section \ref{sec:EVPPtSing} yield exponential expressivity of realizations of NNs with ReLU activation for singular eigenvalue problems with multiple, isolated point singularities as arise in electron-structure computations \emph{for static molecules with known loci of the nuclei}. Inspection of our proofs reveals that the expression rate bounds are robust with respect to perturbations of the nuclei sites; only interatomic distances enter the constants in the expression rate bounds of Section \ref{sec:HF}. Given the \emph{closedness of NNs under composition}, obtaining similar expression rates also for solutions of the \emph{vibrational Schr\"odinger equation} appears in principle possible. The presently proved deep ReLU NN expression rate bounds can, in connection with recently proposed, residual-based DNN training methodologies (e.g., \cite{shin2020error}), imply exponential convergence rates of numerical NN approximations of PDE solutions based on machine learning approaches.
\section{Tensor product $hp$ approximation} \label{sec:hp-analysis} In this section, we construct the $hp$ tensor product approximation which will then be emulated to obtain the NN expression rate estimates. We denote $Q=(0,1)^d$, $d\in \{2,3\}$ and introduce the set of corners $\mathcal{C}$, \begin{equation} \label{eq:Cset} \mathcal{C} = \begin{cases} \big\{(0,0)\big\} & \text{if }d=2,\\ \big\{(0,0,0)\big\} & \text{if }d=3, \end{cases} \end{equation} and the set of edges $\mathcal{E}$, \begin{equation} \label{eq:Eset} \mathcal{E} = \begin{cases} \emptyset & \text{if }d=2,\\ \big \{ \{0\}\times\{0\}\times(0,1), \{0\}\times(0,1)\times\{0\}, (0,1)\times \{0\}\times\{0\} \big \} & \text{if }d=3. \end{cases} \end{equation} The results in this section extend, by rotation or reflection, to the case where $\mathcal{C}$ contains any of the corners of $Q$ and $\mathcal{E}$ is the set of the adjacent edges when $d=3$. Most of the section addresses the construction of exponentially consistent $hp$-quasiinterpolants in the reference cube $(0,1)^d$; in Section~\ref{sec:patches} the analysis will be extended to domains which are specific finite unions of such patches. \subsection{Product geometric mesh and tensor product $hp$ space} \label{sec:hp-prdmshspc} We fix a geometric mesh grading factor $\sigma \in (0,1/2]$. Furthermore, let \begin{equation*} J_0^\ell = (0, \sigma^\ell)\qquad\text{and} \qquad J_k^\ell = (\sigma^{\ell-k+1}, \sigma^{\ell-k}), \, k=1, \dots, \ell. \end{equation*} In $(0,1)$, the geometric mesh with $\ell$ layers is $\mathcal{G}^\ell_{1} = \left\{J^\ell_k: k=0, \dots, \ell\right\}$. Moreover, we denote the nodes of $\mathcal{G}^\ell_{1}$ by $x_0^\ell = 0$ and $x_k^\ell = \sigma^{\ell-k+1}$ for $k=1,\ldots,\ell+1$. In $(0,1)^d$, the $d$-dimensional tensor product geometric mesh is\footnote{We assume \emph{isotropic tensorization}, i.e. 
the same $\sigma$ and the same number of geometric mesh layers in each coordinate direction; all approximation results remain valid (with possibly better numerical values for the constants in the error bounds) for anisotropic, coordinate-dependent choices of $\ell$ and of $\sigma$.} \begin{equation*} \mathcal{G}^\ell_d = \left\{ \bigtimes_{i=1}^d K_i,\, \text{for all } K_1,\dots, K_d \in \mathcal{G}^\ell_{1} \right\}. \end{equation*} For an element $K = \bigtimes_{i=1}^d J_{k_i}^{\ell}$, $k_i\in\{0,\ldots,\ell\}$, we denote by $d^K_c$ the distance from the singular corner, and by $d^K_e$ the distance from the closest singular edge. We observe that \begin{equation} \label{eq:dist1} d^K_c = \left( \sum_{i=1}^{d}\sigma^{2(\ell-k_i+1)} \right)^{1/2} \end{equation} and, for $d=3$, \begin{equation} \label{eq:dist2} d^K_e = \min_{1\leq i_1 < i_2 \leq 3} \left( \sum_{i\in \{i_1,i_2\}}\sigma^{2(\ell-k_i+1)} \right)^{1/2}. \end{equation} The \emph{$hp$ tensor product space} is defined as \begin{equation*} X_{\mathsf{hp}, d}^{\ell, p} \coloneqq \{ v\in H^1(Q): v_{|_{K}} \in \mathbb{Q}_{p}(K), \text{ for all } K\in \mathcal{G}^\ell_d\}, \end{equation*} where $\mathbb{Q}_p(K) \coloneqq \spn\left\{\prod_{i=1}^d (x_i)^{k_i} \colon k_i\leq p, i=1, \dots, d\right\}$. Note that, by construction, $X_{\mathsf{hp}, d}^{\ell, p} = \bigotimes_{i=1}^d X_{\mathsf{hp}, 1}^{\ell, p}$. For positive integers $p$ and $s$ such that $1\leq s \leq p$, we will write \begin{equation} \label{eq:Psi} \Psi_{p,s} \coloneqq \frac{(p-s)!}{(p+s)!}. \end{equation} Additionally, we will denote, for all $\sigma\in (0, 1/2]$, \begin{equation} \label{eq:sigmaratio} \tau_\sigma \coloneqq \frac{1-\sigma}{\sigma} \in [1,\infty). \end{equation} \subsection{Local projector} \label{sec:loc-proj} We denote the reference interval by $I=(-1, 1)$ and the reference cube by ${\widehat{K}} = (-1, 1)^d$. We also write $H_{\mathrm{mix}}^1({\widehat{K}}) = \bigotimes_{i=1}^d H^1(I)\supset H^d({\widehat{K}})$.
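To illustrate the mesh definitions above, the one-dimensional geometric mesh and the corner distance \eqref{eq:dist1} can be generated with a few lines of code. The following sketch and its function names are ours and serve only as a cross-check of the formulas.

```python
import math

def geometric_mesh_1d(sigma, l):
    """Elements J_k^l and nodes x_k^l of the geometric mesh G^l_1 on (0,1):
    J_0^l = (0, sigma^l), J_k^l = (sigma^(l-k+1), sigma^(l-k)), k = 1..l."""
    elems = [(0.0, sigma ** l)]
    elems += [(sigma ** (l - k + 1), sigma ** (l - k)) for k in range(1, l + 1)]
    nodes = [0.0] + [sigma ** (l - k + 1) for k in range(1, l + 2)]
    return elems, nodes

def corner_distance(ks, sigma, l):
    """d_c^K of eq. (eq:dist1) for K = prod_i J_{k_i}^l, for k_i >= 1."""
    return math.sqrt(sum(sigma ** (2 * (l - k + 1)) for k in ks))

sigma, l = 0.5, 4
elems, nodes = geometric_mesh_1d(sigma, l)
```

For $k_i \geq 1$, the formula coincides with the Euclidean distance of $K$ from the singular corner, since the vertex of $K$ closest to the origin is $(\sigma^{\ell-k_1+1}, \dots, \sigma^{\ell-k_d+1})$.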
Let $p\geq 1$. We introduce the univariate projector ${\widehat{\pi}}_p : H^1(I) \to \mathbb{P}_p(I)$ as \begin{equation} \label{eq:ondim-proj-explicit} \begin{aligned} \left({\widehat{\pi}}_p \hat{v}\right)(x) & = \hat{v}(-1) + \sum_{n=0}^{p-1} \left(\hat{v}', \frac{2n+1}{2}L_n\right)\int_{-1}^x L_n(\xi)d\xi\\ &= \hat{v}(-1)\left( \frac{1-x}{2} \right) + \hat{v}(1)\left( \frac{1+x}{2} \right) + \sum_{n=1}^{p-1} \left(\hat{v}', \frac{2n+1}{2}L_n\right)\int_{-1}^x L_n(\xi)d\xi, \end{aligned} \end{equation} where $L_n$ is the $n$th Legendre polynomial, normalized so that $\|L_n\|_{L^\infty(I)} = L_n(1) = 1$, and $(\cdot, \cdot)$ is the inner product of $L^2((-1,1))$. Note that \begin{equation} \label{eq:onedim-prop} \left( {\widehat{\pi}}_p \hat{v} \right) (\pm 1) = \hat{v} (\pm 1), \qquad \forall \hat{v} \in H^1(I). \end{equation} For $p\in \mathbb{N}$, we introduce the projection on the reference element ${\widehat{K}}$ as $ {\widehat{\Pi}}_{p} = \bigotimes_{i=1}^d{\widehat{\pi}}_{p} $. For all $K\in \mathcal{G}^\ell_d$, we introduce an affine transformation from $K$ to the reference element \begin{equation} \label{eq:Phi} \Phi_K : K\to {\widehat{K}}\quad \text{such that}\quad\Phi_K(K) = {\widehat{K}}. \end{equation} Note that, since the elements are axiparallel, the affine transformation can be written as a $d$-fold product of one-dimensional affine transformations $\phi_k : J_k^\ell \to I$, i.e., supposing that $K= \bigtimes_{i=1}^d J_{k_i}^{\ell}$, there holds \begin{equation*} \Phi_K = \bigotimes_{i=1}^d \phi_{k_i}. \end{equation*} Let $K\in \mathcal{G}^\ell_d$ and let $k_i$, $i=1, \dots, d$, be the indices such that $K=\bigtimes_{i=1}^d J_{k_i}^{\ell}$. Define, for $w\in H^1(J_{k_i}^{\ell})$, \begin{equation*} \pi^{k_i}_p w = \left( {\widehat{\pi}}_p (w\circ \phi_{k_i}^{-1}) \right)\circ \phi_{k_i}.
\end{equation*} For $v$ defined on $K$ such that $v\circ \Phi_K^{-1} \in H_{\mathrm{mix}}^1({\widehat{K}})$ and for $(p_1, \dots, p_d)\in \mathbb{N}^d$, we introduce the local projection operator \begin{equation} \label{eq:PiK_tens} \Pi_{{p_1\dots p_d}}^K = \bigotimes_{i=1}^d \pi^{k_i}_{p_i}. \end{equation} We also write \begin{equation} \label{eq:PiK} \Pi_{p}^K v =\Pi_{p\dots p}^K v= \left({\widehat{\Pi}}_{p} (v\circ \Phi_K^{-1}) \right)\circ \Phi_K. \end{equation} For later reference, we note the following property of $\Pi_{p}^K$: \begin{lemma} \label{lemma:reg-cont} Let $K_1, K_2$ be two axiparallel cubes that share one regular face $F$ (i.e., $F$ is an entire face of both $K_1$ and $K_2$). Then, for $v\in H_{\mathrm{mix}}^1(\interior(\overline{K}_1\cup \overline{K}_2))$, the piecewise polynomial \begin{equation*} \Pi^{K_1\cup K_2}_{p} v = \begin{cases} \Pi_p^{K_1} v &\text{in }K_1,\\ \Pi_p^{K_2} v &\text{in }K_2 \end{cases} \end{equation*} is continuous across $F$. \end{lemma} \begin{proof} This follows directly from \eqref{eq:onedim-prop}. \end{proof} \subsection{Global projectors} \label{sec:glob-proj} We introduce, for $\ell, p \in \mathbb{N}$, the univariate projector $\pi_{\mathsf{hp}}^{\ell, p}: H^1((0,1)) \to X_{\mathsf{hp}, 1}^{\ell, p}$ as \begin{equation} \label{eq:pihpell1d} \left(\pi_{\mathsf{hp}}^{\ell, p} u \right) (x)= \begin{cases} \left(\pi^0_1 u \right)(x) &\text{if } x\in J_0^\ell,\\ \left( \pi_p^{k} u\right) (x) &\text{if }x\in J_{k}^\ell,\, k\in \{1, \dots, \ell\}. \end{cases} \end{equation} Note that, for all $\ell\in \mathbb{N}$ and $x\in J_0^\ell$, \begin{equation*} \left(\pi^0_1 u \right)(x) = u(0) + \sigma^{-\ell}\left( u(\sigma^\ell) - u(0) \right)x. \end{equation*} The $d$-variate $hp$ quasi-interpolant is then obtained by tensorization, i.e. \begin{equation} \label{eq:Pihpell} \Pi_{\mathsf{hp}, d}^{\ell, p} \coloneqq \bigotimes_{i=1}^d \pi_{\mathsf{hp}}^{\ell, p}.
\end{equation} \begin{remark} \label{rem:global-continuity} By the nodal exactness of the projectors, the operator $\Pi_{\mathsf{hp}, d}^{\ell, p}$ is continuous across interelement interfaces (see Lemma \ref{lemma:reg-cont}), hence its image is contained in $H^1((0,1)^d)$. The continuity can also be observed from the expansion in terms of continuous, globally defined basis functions given in Proposition \ref{prop:compactbasis}. \end{remark} \begin{remark} \label{rem:bigger-space-hp} The projector $\Pi_{\mathsf{hp}, d}^{\ell, p}$ is defined on a larger space than $H_{\mathrm{mix}}^1(Q)$ as specified below (e.g. Remark \ref{rem:hpw11mix}). \end{remark} \subsection{Preliminary estimates} \label{sec:prelim-estimates} The projector on ${\widehat{K}}$ given by \begin{equation} \label{eq:hPi} {\widehat{\Pi}}_{{p_1\dots p_d}} \coloneqq \bigotimes_{i=1}^d{\widehat{\pi}}_{p_i} \end{equation} has the following property. \begin{lemma}[{{\cite[Propositions 5.2 and 5.3]{SSWII}}}] \label{lemma:ref-proj} Let $d=3$, $(p_1, p_2, p_3)\in \mathbb{N}^3$, and $(s_1, s_2, s_3)\in \mathbb{N}^3$ with $1\leq s_i\leq p_i$. Then the projector ${\widehat{\Pi}}_{p_1p_2p_3}:H_{\mathrm{mix}}^1({\widehat{K}}) \to \mathbb{Q}_{p_1, p_2, p_3}({\widehat{K}})$ satisfies that \begin{equation} \label{eq:disc-approx} \begin{aligned} \| v - {\widehat{\Pi}}_{p_1 p_2 p_3} v\|_{H^1({\widehat{K}})}^2 \leq C_{\mathrm{appx}1} \bigg( & \Psi_{p_1,s_1} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(s_1+1, \alpha_1, \alpha_2)}v \|^2_{L^2({\widehat{K}})} \\ &\qquad + \Psi_{p_2,s_2} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(\alpha_1, s_2+1, \alpha_2)}v \|^2_{L^2({\widehat{K}})} \\ &\qquad + \Psi_{p_3,s_3} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(\alpha_1, \alpha_2, s_3+1)}v \|^2_{L^2({\widehat{K}})} \bigg), \end{aligned} \end{equation} for all $v\in H^{s_1+1}(I)\otimes H^{s_2+1}(I)\otimes H^{s_3+1}(I)$. Here, $C_{\mathrm{appx}1}$ is independent of $(p_1, p_2, p_3)$, $(s_1, s_2, s_3)$ and $v$. 
\end{lemma} \begin{remark} In space dimension $d=2$, a result analogous to Lemma \ref{lemma:ref-proj} holds, see \cite{SSWII}. \end{remark} \begin{lemma} \label{lemma:ref-proj2} Let $d=3$, $(p_1,p_2,p_3)\in\mathbb{N}^3$, and $(s_1, s_2, s_3)\in \mathbb{N}^3$ with $1\leq s_i\leq p_i$. Further, let $\{i, j, k\}$ be a permutation of $\{1,2,3\}$. Then, the projector ${\widehat{\Pi}}_{p_1p_2p_3}:H_{\mathrm{mix}}^1({\widehat{K}}) \to \mathbb{Q}_{p_1, p_2, p_3}({\widehat{K}})$ satisfies \begin{equation} \label{eq:disc-approx2} \begin{aligned} \| \partial_{x_i}\left( v - {\widehat{\Pi}}_{p_1 p_2 p_3} v \right)\|_{L^2({\widehat{K}})}^2 \leq C_{\mathrm{appx}2} \bigg( &\Psi_{p_i,s_i} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial_{x_i}^{s_i+1}\partial_{x_j}^{ \alpha_1}\partial_{x_k}^{\alpha_2}v \|^2_{L^2({\widehat{K}})} \\ &\qquad + \Psi_{p_j,s_j} \sum_{\alpha_1\leq 1}\| \partial_{x_i}\partial_{x_j}^{s_j+1}\partial_{x_k}^{\alpha_1}v \|^2_{L^2({\widehat{K}})} \\ &\qquad + \Psi_{p_k,s_k} \sum_{\alpha_1\leq 1}\| \partial_{x_i}\partial_{x_j}^{\alpha_1}\partial_{x_k}^{s_k+1}v \|^2_{L^2({\widehat{K}})} \bigg), \end{aligned} \end{equation} for all $v\in H^{s_1+1}(I)\otimes H^{s_2+1}(I)\otimes H^{s_3+1}(I)$. Here, $C_{\mathrm{appx}2}>0$ is independent of $(p_1, p_2, p_3)$, $(s_1, s_2, s_3)$, and $v$. \end{lemma} \begin{proof} Let $(p_1,p_2,p_3)\in\mathbb{N}^3$, and $(s_1, s_2, s_3)\in \mathbb{N}^3$, be as in the statement of the lemma. Also, let $i\in \{1,2,3\}$ and $\{j, k\} = \{1,2 ,3\} \setminus \{i\}$. 
By Lemma \ref{lemma:ref-proj}, there holds \begin{equation} \label{eq:ref-proj2-proof} \begin{aligned} \| \partial_{x_i}( v - {\widehat{\Pi}}_p v )\|_{L^2({\widehat{K}})}^2 \leq C_{\mathrm{appx}1} \bigg( & \Psi_{p_1,s_1} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(s_1+1, \alpha_1, \alpha_2)}v \|^2_{L^2({\widehat{K}})} \\ &\qquad + \Psi_{p_2,s_2} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(\alpha_1, s_2+1, \alpha_2)}v \|^2_{L^2({\widehat{K}})} \\ &\qquad + \Psi_{p_3,s_3} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(\alpha_1, \alpha_2, s_3+1)}v \|^2_{L^2({\widehat{K}})} \bigg). \end{aligned} \end{equation} Here, $C_{\mathrm{appx}1}>0$ is independent of $(p_1, p_2, p_3)$, $(s_1, s_2, s_3)$, and $v$. Now let $\overline{v}_i:I^2 \to \mathbb{R}$ be defined by \begin{equation*} \overline{v}_i(x_j,x_k) = \int_{I} v (x_1, x_2, x_3)dx_i . \end{equation*} We denote ${\tilde{v}} \coloneqq v - \overline{v}_i$ and, remarking that $\partial_{x_i}\overline{v}_i = \partial_{x_i}{\widehat{\Pi}}_p\overline{v}_i = 0$, we apply \eqref{eq:ref-proj2-proof} to ${\tilde{v}}$, so that \begin{equation} \label{eq:ref-proj2-proof2} \begin{aligned} \| \partial_{x_i}( v - {\widehat{\Pi}}_p v )\|_{L^2({\widehat{K}})}^2 \leq C \bigg( & \Psi_{p_1,s_1} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(s_1+1, \alpha_1, \alpha_2)}{\tilde{v}} \|^2_{L^2({\widehat{K}})} \\ &\qquad + \Psi_{p_2,s_2} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(\alpha_1, s_2+1, \alpha_2)}{\tilde{v}} \|^2_{L^2({\widehat{K}})} \\ &\qquad + \Psi_{p_3,s_3} \sum_{\alpha_1, \alpha_2 \leq 1} \| \partial^{(\alpha_1, \alpha_2, s_3+1)}{\tilde{v}} \|^2_{L^2({\widehat{K}})} \bigg).
\end{aligned} \end{equation} By the Poincaré inequality, it holds for all $\alpha_1\in \{0,1\}$ that \begin{equation*} \| \partial_{x_j}^{s_j+1}\partial_{x_k}^{\alpha_1}{\tilde{v}} \|^2_{L^2({\widehat{K}})} \leq C \| \partial_{x_i}\partial_{x_j}^{s_j+1}\partial_{x_k}^{\alpha_1}v \|^2_{L^2({\widehat{K}})} \quad\text{ and }\quad \| \partial_{x_j}^{\alpha_1}\partial_{x_k}^{s_k+1}{\tilde{v}} \|^2_{L^2({\widehat{K}})} \leq C \| \partial_{x_i}\partial_{x_j}^{\alpha_1}\partial_{x_k}^{s_k+1}v \|^2_{L^2({\widehat{K}})}. \end{equation*} Using the fact that $\partial_{x_i}{\tilde{v}} = \partial_{x_i} v$ in the remaining terms of \eqref{eq:ref-proj2-proof2} concludes the proof. \end{proof} \subsubsection{One-dimensional estimate} \label{sec:oned} The following result is a consequence of, e.g., \cite[Lemma 8.1]{SchSch2018} and scaling. \begin{lemma} \label{lemma:oned} There exists $C>0$ such that for all $\ell\in\mathbb{N}$, all integers $0<k\leq \ell$, all integers $p, s$ with $1\leq s \leq p$, all $\gamma>0$, and all $v\in H^{s+1}(J^\ell_k)$ \begin{equation} \label{eq:oned-appx} h^{-2}\| v - \pi^k_p v\|_{L^2(J^\ell_k)}^2 + \| \nabla (v -\pi^k_p v)\|_{L^2(J^\ell_k)}^2 \leq C \tau_\sigma^{2(s+1)}\Psi_{p, s} h^{2(\min\{\gamma-1,s\})} \| |x|^{(s+1-\gamma)_{+}} v^{(s+1)} \|_{L^2(J^\ell_k)}^2 \end{equation} where $h= |J^\ell_k| \simeq \sigma^{\ell-k}$. \end{lemma} \begin{proof} From \cite[Lemma 8.1]{SchSch2018}, there exists $C>0$ independent of $p$, $k$, $s$, and $v$ such that \begin{equation*} h^{-2}\| v - \pi^k_p v\|_{L^2(J^\ell_k)}^2 + \| \nabla (v -\pi^k_p v)\|_{L^2(J^\ell_k)}^2 \leq C \Psi_{p,s} h^{2s} \| v^{(s+1)} \|_{L^2(J^\ell_k)}^2. \end{equation*} In addition, for all $k=1, \dots, \ell$, there holds $x|_{J^\ell_k}\geq \frac{\sigma}{1-\sigma}h$. Hence, for all $\gamma < s+1$, \begin{equation*} h^{2s} \| v^{(s+1)} \|_{L^2(J^\ell_k)}^2 \leq \tau_\sigma^{2(s+1-\gamma)}h^{2\gamma-2}\| x^{s+1-\gamma} v^{(s+1)}\|_{L^2(J^\ell_k)}^{2}. \end{equation*} Since for $\gamma\geq s+1$ the first inequality already coincides with \eqref{eq:oned-appx}, this concludes the proof.
\end{proof} \subsubsection{Estimate at a corner in dimension $d=2$} \label{sec:twod} We consider now a setting with a two-dimensional corner singularity. Let $\beta\in\mathbb{R}$, ${\mathfrak{K}} =J_0^\ell\times J_0^\ell$, $r(x) = |x-x_0|$ with $x_0 = (0,0)$, and define the corner-weighted norm $\| v \|_{\mathcal{J}^{2}_\beta({\mathfrak{K}})}$ by \begin{equation*} \| v \|_{\mathcal{J}^{2}_\beta({\mathfrak{K}})}^2 \coloneqq \sum_{{|\alpha|}\leq 2} \|r^{({|\alpha|} - \beta)_+}\partial^\alpha v\|^2_{L^2({\mathfrak{K}})}. \end{equation*} \begin{lemma} \label{lemma:2d-square} Let $d = 2$, $\beta\in (1,2)$. There exist $C_1, C_2>0$ such that for all $v\in \mathcal{J}^2_\beta({\mathfrak{K}})$ \begin{equation} \label{eq:2d-square-stab} \sum_{\alpha \in \mathbb{N}^2_0:{|\alpha|}\leq 1} \|\partial^\alpha(\pi^0_1 \otimes \pi^0_1) v\|_{L^2({\mathfrak{K}})} \leq C_1 \left( \|v\|_{H^1({\mathfrak{K}})} + \sum_{\alpha \in \mathbb{N}^2_0 : {|\alpha|} =2}\sigma^{(\beta-1)\ell}\| r^{2 - \beta} \partial^\alpha v\|_{L^2({\mathfrak{K}})} \right) \end{equation} and \begin{equation} \label{eq:2d-square-approx} \sum_{\alpha \in \mathbb{N}^2_0: {|\alpha|} \leq 1} \sigma^{-\ell(1-{|\alpha|})}\|\partial^\alpha (v - (\pi^0_1 \otimes \pi^0_1) v)\|_{L^2({\mathfrak{K}})} \leq C_2 \sigma^{\ell (\beta-1)}\sum_{\alpha \in \mathbb{N}^2_0: {|\alpha|} =2}\|r^{2-\beta}\partial^\alpha v\|_{L^2({\mathfrak{K}})}. \end{equation} \end{lemma} \begin{proof} Denote by $c_i$, $i=1, \dots, 4$, the corners of ${\mathfrak{K}}$ and by $\psi_i$, $i=1, \dots, 4$, the bilinear functions such that $\psi_i(c_j) = \delta_{ij}$. Then, \begin{equation*} (\pi^0_1\otimes \pi_1^0) v = \sum_{i=1}^4 v(c_i)\psi_i.
\end{equation*} Therefore, writing $h=\sigma^\ell$, we have \begin{equation} \label{eq:stab-proof-1} \| (\pi^0_1\otimes \pi_1^0) v \|_{L^2({\mathfrak{K}})} \leq \sum_{i=1, \dots, 4}|v(c_i)| \|\psi_i \|_{L^2({\mathfrak{K}})} \leq 4 \|v\|_{L^\infty({\mathfrak{K}})} |{\mathfrak{K}}|^{1/2}\leq 4 h \|v\|_{L^\infty({\mathfrak{K}})}. \end{equation} With the imbedding $\mathcal{J}^2_\beta ((0,1)^2)\hookrightarrow L^\infty((0,1)^2)$, valid for $\beta>1$ (as follows, e.g., from Lemma \ref{lemma:W11J3} and $W_{\mathrm{mix}}^{1,1}((0,1)^2)\hookrightarrow L^\infty((0,1)^2)$), a scaling argument gives \begin{equation*} h^2 \|v \|_{L^\infty({\mathfrak{K}})}^2 \leq C h^2 \left( h^{-2} \|v\|_{L^2({\mathfrak{K}})}^2 + |v|_{H^1({\mathfrak{K}})}^2 + \sum_{{|\alpha|} =2}h^{2\beta-2}\| r^{2 - \beta} \partial^\alpha v\|_{L^2({\mathfrak{K}})}^2 \right), \end{equation*} so that we obtain \begin{equation} \label{eq:2d-square-stab-L2} \| (\pi^0_1\otimes \pi_1^0) v \|_{L^2({\mathfrak{K}})}^2 \leq C \left( \|v\|_{L^2({\mathfrak{K}})}^2 + h^2 |v|_{H^1({\mathfrak{K}})}^2 + \sum_{{|\alpha|} =2}h^{2\beta}\| r^{2 - \beta} \partial^\alpha v\|_{L^2({\mathfrak{K}})}^2 \right). \end{equation} For any ${|\alpha|} = 1$, denoting $v_0 = v(0,0)$ and using the fact that $(\pi^0_1\otimes \pi_1^0) v_0 = v_0$, and hence $\partial^\alpha (\pi^0_1\otimes \pi_1^0) v_0 = 0$, \begin{equation} \label{eq:stab-proof-2} \| \partial^\alpha (\pi^0_1\otimes \pi_1^0) v \|_{L^2({\mathfrak{K}})} = \| \partial^\alpha (\pi^0_1\otimes \pi_1^0) (v - v_0) \|_{L^2({\mathfrak{K}})} \leq \sum_{i=1, \dots, 4}|(v-v_0)(c_i)| \| \partial^\alpha \psi_i \|_{L^2({\mathfrak{K}})} \leq C \|v - v_0\|_{L^\infty({\mathfrak{K}})}.
\end{equation} With the imbedding $\mathcal{J}^2_\beta ((0,1)^2)\hookrightarrow L^\infty((0,1)^2)$, Poincaré's inequality, and rescaling we obtain \begin{align*} \| \partial^\alpha (\pi^0_1\otimes \pi_1^0) v \|_{L^2({\mathfrak{K}})}^2 \leq C \left( |v|_{H^1({\mathfrak{K}})}^2 + \sum_{{|\alpha|} =2}h^{2\beta - 2}\| r^{2 - \beta} \partial^\alpha v\|_{L^2({\mathfrak{K}})}^2 \right), \end{align*} which finishes the proof of \eqref{eq:2d-square-stab}. To prove \eqref{eq:2d-square-approx}, note that by the Sobolev imbedding of $W^{2,1}({\mathfrak{K}})$ into $H^1({\mathfrak{K}})$ and by scaling, we have \begin{equation*} \sum_{{|\alpha|} \leq 1} h^{{|\alpha|}-1}\|\partial^\alpha (v - (\pi^0_1 \otimes \pi^0_1) v)\|_{L^2({\mathfrak{K}})} \leq C \sum_{{|\alpha|} \leq 2} h^{{|\alpha|}-2}\|\partial^\alpha (v - (\pi^0_1 \otimes \pi^0_1) v)\|_{L^1({\mathfrak{K}})} . \end{equation*} By classical interpolation estimates \cite[Theorem 4.4.4]{Brenner2008}, we additionally conclude that \begin{equation*} \sum_{{|\alpha|} \leq 2} h^{{|\alpha|}-2}\|\partial^\alpha (v - (\pi^0_1 \otimes \pi^0_1) v)\|_{L^1({\mathfrak{K}})} \leq C | v |_{W^{2,1}({\mathfrak{K}})}. \end{equation*} Using the Cauchy-Schwarz inequality, \begin{align*} \sum_{{|\alpha|} \leq 1} h^{{|\alpha|}-1}\|\partial^\alpha (v - (\pi^0_1 \otimes \pi^0_1) v)\|_{L^2({\mathfrak{K}})} &\leq C \sum_{{|\alpha|} =2}\| \partial^\alpha v \|_{L^1({\mathfrak{K}})} \\ &\leq C \sum_{{|\alpha|} =2}\|r^{-2+\beta}\|_{L^2({\mathfrak{K}})}\| r^{2-\beta}\partial^\alpha v \|_{L^2({\mathfrak{K}})} \\ &\leq C \sum_{{|\alpha|} =2}h^{\beta-1}\| r^{2-\beta}\partial^\alpha v \|_{L^2({\mathfrak{K}})} \end{align*} where, in the last step, we also used that $r(x)\leq \sqrt{2}h$ for all $x\in {\mathfrak{K}}$ and that $\beta >1$. \end{proof} \subsection{Interior estimates} \label{sec:internal} The following lemmas give estimates of the approximation error on elements not belonging to edge or corner layers.
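The precise grading of the mesh intervals $J^\ell_k$ is fixed earlier in the paper; the following short Python sketch assumes the standard geometric grading $J^\ell_0 = (0, \sigma^\ell)$ and $J^\ell_k = (\sigma^{\ell-k+1}, \sigma^{\ell-k})$ for $1\leq k\leq \ell$, which is consistent with $|J^\ell_k| \simeq \sigma^{\ell-k}$ and with the lower bound $x|_{J^\ell_k}\geq \frac{\sigma}{1-\sigma}h$ used in the proof of Lemma \ref{lemma:oned}. Under this assumption it checks those two facts numerically, together with the identity $d_c^2 = \left(\frac{\sigma}{1-\sigma}\right)^2(h_\parallel^2+h_{\bot,1}^2+h_{\bot,2}^2)$ appearing in the proofs below; the helper \texttt{geometric\_nodes} is ours and not part of the text.

```python
import numpy as np

def geometric_nodes(sigma, ell):
    """Nodes of the assumed geometric grading on (0, 1):
    J_0 = (0, sigma^ell), J_k = (sigma^(ell-k+1), sigma^(ell-k))."""
    return np.array([0.0] + [sigma ** (ell - k) for k in range(ell + 1)])

sigma, ell = 0.4, 6
nodes = geometric_nodes(sigma, ell)
h = np.diff(nodes)  # h[k] = |J_k^ell|

for k in range(1, ell + 1):
    # |J_k^ell| = (1 - sigma) * sigma^(ell - k), i.e. h ~ sigma^(ell - k)
    assert np.isclose(h[k], (1 - sigma) * sigma ** (ell - k))
    # left endpoint of J_k^ell equals sigma/(1-sigma) * h, so
    # x restricted to J_k^ell is >= sigma/(1-sigma) * h
    assert np.isclose(nodes[k], sigma / (1 - sigma) * h[k])

# distance from an interior element K = J_k1 x J_k2 x J_k3 (all k_i > 0)
# to the corner c = (0,0,0): the nearest point of K to c is the vertex
# (nodes[k1], nodes[k2], nodes[k3]), which gives
# d_c^2 = (sigma/(1-sigma))^2 * (h_1^2 + h_2^2 + h_3^2)
k1, k2, k3 = 2, 4, 5
hK = np.array([h[k1], h[k2], h[k3]])
d_c = np.linalg.norm([nodes[k1], nodes[k2], nodes[k3]])
assert np.isclose(d_c ** 2, (sigma / (1 - sigma)) ** 2 * np.sum(hK ** 2))
```

The same script, run with other values of $\sigma\in(0,1)$ and $\ell$, confirms that these relations are exact (not merely asymptotic) for the assumed grading.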
For $d=3$, all $\ell\in \mathbb{N}$, all $k_1,k_2,k_3\in\{0,\ldots,\ell\}$ and all $K = J^\ell_{k_1}\times J^\ell_{k_2}\times J^{\ell}_{k_3}$, we denote by $h_\parallel$ the length of $K$ in the direction parallel to the closest singular edge, and by $h_{\bot,1}$ and $h_{\bot,2}$ the lengths of $K$ in the other two directions. If an element has multiple closest singular edges, we fix one of them and treat it as the ``closest edge'' for all points in that element. When considering functions from $\mathcal{J}^d_{\underline{\gamma}}(Q)$, $\gamma_e$ will refer to the weight of this closest edge. Similarly, we denote by $\partial_{\parallel}$ (resp. $\partial_{\bot,1}$ and $\partial_{\bot,2}$) the derivatives in the direction parallel (resp. perpendicular) to the closest singular edge. \begin{lemma} \label{lemma:internal-appx-1} Let $d=3$, $\ell\in \mathbb{N}$ and $K = J^\ell_{k_1}\times J^\ell_{k_2}\times J^{\ell}_{k_3}$ for $0< k_1, k_2, k_3\leq \ell$. Let also $v\in \mathcal{J}^\varpi_{\underline{\gamma}}(Q; \mathcal{C},\mathcal{E}; C_v, A_v)$ with $\gamma_c \in (3/2, 5/2)$, $\gamma_e \in (1, 2)$. Then, there exists $C>0$ dependent only on $\sigma$, $C_{\mathrm{appx}2}$, $C_v$ and $A>0$ dependent only on $\sigma$, $A_v$ such that for all $p\in\mathbb{N}$ and all $1\leq s\leq p$ \begin{equation} \label{eq:internal-appx-1} \|\partial_{\parallel} ( v - \Pi^K_p v )\|^2_{L^2(K)} \leq C \Psi_{p,s} A^{2s+6}\left( (d_c^K)^{2} + (d_c^K)^{2(\gamma_c-1)} \right)((s+3)!)^2, \end{equation} where $\partial_{\parallel}$ is the derivative in the direction parallel to the closest singular edge. \end{lemma} \begin{proof} We write $d_a = d_a^K$, $a \in \{c, e\}$. There holds \begin{equation*} d_c^2 = \left( \frac{\sigma}{1-\sigma} \right)^2(h_\parallel^2+h_{\bot,1}^2+h_{\bot,2}^2), \qquad d_e^2 = \left( \frac{\sigma}{1-\sigma} \right)^2(h_{\bot,1}^2+h_{\bot,2}^2).
\end{equation*} Denoting $\hat{v} = v \circ \Phi_K^{-1}$ and ${\widehat{\Pi}}_p \hat{v} = \Pi^K_p v \circ \Phi_K^{-1} = {\widehat{\Pi}}_p (v \circ \Phi_K)$, using the result of Lemma \ref{lemma:ref-proj2} and rescaling, we have \begin{equation} \label{eq:Kappx1} \begin{aligned} \| \widehat{\partial}_{\parallel}( \hat{v} - {\widehat{\Pi}}_p \hat{v} )\|_{L^2({\widehat{K}})}^2 &\leq C_{\mathrm{appx}2} \Psi_{p,s} \frac{h_\parallel}{h_{\bot,1}h_{\bot,2}}\left(\sum_{\alpha_1, \alpha_2 \leq 1} h_\parallel^{2s}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K)} \right. \\ &\qquad + \sum_{\alpha_1\leq 1}h_{\bot,1}^{2s+2}h_{\bot,2}^{2\alpha_1}\| \partial_{\parallel}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K)} \\ &\qquad + \left. \sum_{\alpha_1\leq 1}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2s+2}\| \partial_{\parallel}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{s+1}v \|^2_{L^2(K)} \right) \\ & = C_{\mathrm{appx}2} \Psi_{p,s} \frac{h_\parallel}{h_{\bot,1}h_{\bot,2}}\bigg((I) + (II) + (III) \bigg). \end{aligned} \end{equation} Denote $K_c = K\cap Q_c$, $K_e=K\cap Q_e$, $K_{c e} = K\cap Q_{c e}$, and $K_0 = K\cap Q_0$. Furthermore, we indicate \begin{equation*} (I)_{c} = \sum_{\alpha_1, \alpha_2\leq 1}h_\parallel^{2s}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_c)}, \end{equation*} and do similarly for the other terms of the sum $(II)$ and $(III)$ and the other subscripts $e$, $c e$, $0$. Remark also that $r_{i|_K}\geq d_i$, $i\in\{c, e\}$, and that for $a, b\in \mathbb{R}$ holds $r_c^ar_e^b = r_c^{a+b} \rho_{c e}^{b}$. We will also write ${\widetilde{\gamma}} = \gamma_c-\gamma_e$. We start by considering the term $(I)_{c e}$. 
Let $\alpha_1= \alpha_2 = 1$; then, \begin{align*} h_\parallel^{2s}h_{\bot,1}^{2}h_{\bot,2}^{2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2} v \|^2_{L^2(K_{c e})} & \leq \tau_\sigma^{2s+4} d_c^{2s}d_e^{4}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2} v \|^2_{L^2(K_{c e})} \\ & \leq \tau_\sigma^{2s+4} d_c^{2{\widetilde{\gamma}}-2}d_e^{2\gamma_e} \|r_c^{s+3-\gamma_c}\rho_{c e}^{2-\gamma_e} \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2} v \|^2_{L^2(K_{c e})}\;, \end{align*} where $\tau_\sigma$ is as in \eqref{eq:sigmaratio}. Furthermore, if $\alpha_1 + \alpha_2\leq 1$ and $s+1+\alpha_1+\alpha_2-\gamma_c\geq0$, \begin{align*} h_\parallel^{2s}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{\alpha_2} v \|^2_{L^2(K_{c e})} & \leq \tau_\sigma^{2s+2(\alpha_1+\alpha_2)} d_c^{2s}d_e^{2(\alpha_1+\alpha_2)}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{\alpha_2} v \|^2_{L^2(K_{c e})} \\ & \leq \tau_\sigma^{2s+2(\alpha_1+\alpha_2)} d_c^{2\gamma_c-2}\|r_c^{s+1+\alpha_1+\alpha_2-\gamma_c} \partial_{\parallel}^{s+1}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{\alpha_2} v \|^2_{L^2(K_{c e})}, \end{align*} where we have also used $d_e\leq d_c$. Therefore, \begin{equation*} (I)_{c e} \leq \tau_\sigma^{2s+4} d_c^{2\gamma_c-2} \sum_{\alpha_1, \alpha_2\leq 1}\|r_c^{s+1+\alpha_1+\alpha_2-\gamma_c}\rho_{c e}^{(\alpha_1+\alpha_2-\gamma_e)_+} \partial_{\parallel}^{s+1}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{\alpha_2} v \|^2_{L^2(K_{c e})}. \end{equation*} If $s+1+\alpha_1+\alpha_2-\gamma_c<0$, then $s=1$ and $\alpha_1=\alpha_2=0$, thus $$(I)_{c e} \leq \tau_\sigma^{2s+4} d_c^{2} \| r_c^{(s+1+\alpha_1+\alpha_2-\gamma_c)_+}\rho_{c e}^{(\alpha_1+\alpha_2-\gamma_e)_+}\partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})}. 
$$ Then, if $s+1+\alpha_1+\alpha_2-\gamma_c\geq0$ \begin{align*} (I)_{c} & = \sum_{\alpha_1, \alpha_2 \leq 1} h_\parallel^{2s}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_c)} \\ & \leq \tau_\sigma^{2s+4}\sum_{\alpha_1, \alpha_2 \leq 1} d_c^{2s}d_e^{2(\alpha_1+\alpha_2)}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_c)} \\ & \leq \tau_\sigma^{2s+4} d_c^{2\gamma_c-2}\sum_{\alpha_1, \alpha_2 \leq 1}\| r_c^{(s+1+\alpha_1+\alpha_2-\gamma_c)_+}\partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_c)} \end{align*} where the last inequality follows also from $d_e\leq d_c$. If $s+1+\alpha_1+\alpha_2-\gamma_c<0$, then the same bound holds with $d_c^{2\gamma_c-2}$ replaced by $d_c^2$. Similarly, \begin{align*} (I)_{e} & = \sum_{\alpha_1, \alpha_2 \leq 1} h_\parallel^{2s}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_e)} \\ &\leq\tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2 \leq 1} d_c^{2s}d_e^{2\alpha_1+2\alpha_2 - 2(\alpha_1+\alpha_2-\gamma_e)_+}\| r_e^{(\alpha_1+\alpha_2-\gamma_e)_+}\partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_e)} \\ &\leq\tau_\sigma^{2s+4} d_c^{2s}\sum_{\alpha_1, \alpha_2 \leq 1}\| r_e^{(\alpha_1+\alpha_2-\gamma_e)_+}\partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_e)}, \end{align*} where we used that $d_e\leq 1$. 
The bound on $(I)_0$ follows directly from the definition: \begin{align*} (I)_0 & = \sum_{\alpha_1, \alpha_2 \leq 1} h_\parallel^{2s}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_0)} \leq\tau_\sigma^{2s+4} d_c^{2s}\sum_{\alpha_1, \alpha_2 \leq 1}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_0)}. \end{align*} Using \eqref{eq:analytic}, there exists $C>0$ dependent only on $C_v$ and $\sigma$ and $A>0$ dependent only on $A_v$ and $\sigma$ such that \begin{equation} \label{eq:Ibound} (I) \leq C A^{2s+6}((s+3)!)^2\left( d_c^2 + d_c^{2\gamma_c-2} \right). \end{equation} We then apply the same argument to the terms $(II)$ and $(III)$. Indeed, \begin{align*} (II)_{c e} &= \sum_{\alpha_1\leq 1}h_{\bot,1}^{2s+2}h_{\bot,2}^{2\alpha_1}\| \partial_{\parallel}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K_{c e})} \\ &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_e^{2s+2+2\alpha_1}\| \partial_{\parallel}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K_{c e})} \\ &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_c^{2{\widetilde{\gamma}}-2}d_e^{2\gamma_e}\| r_c^{s+2+\alpha_1-\gamma_c}\rho_{c e}^{s+1+\alpha_1-\gamma_e}\partial_{\parallel}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K_{c e})} \end{align*} and the estimate for $(III)_{c e}$ follows by exchanging $h_{\bot,1}$ and $\partial_{\bot,1}$ with $h_{\bot,2}$ and $\partial_{\bot,2}$ in the inequality above. 
The estimates for $(II)_{c,e,0}$ and $(III)_{c, e, 0}$ can be obtained as for $(I)_{c, e, 0}$: \begin{align*} (II)_{c} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_c^{2\gamma_c-2}\| r_c^{s+2+\alpha_1-\gamma_c}\partial_{\parallel}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K_{c})}, \\ (II)_{e} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_e^{2\gamma_e}\| r_e^{s+1+\alpha_1-\gamma_e}\partial_{\parallel}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K_{e})}, \\ (II)_{0} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_e^{2s+2}\| \partial_{\parallel}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K_{0})}. \end{align*} Therefore, we have \begin{equation} \label{eq:IIbound} (II), (III) \leq C A^{2s+6}(d_c^2+d_c^{2\gamma_c-2})((s+3)!)^2. \end{equation} We obtain from \eqref{eq:Kappx1}, \eqref{eq:Ibound}, and \eqref{eq:IIbound} that there exist $C>0$ (dependent only on $\sigma$, $C_{\mathrm{appx}2}$, $C_v$) and $A>0$ (dependent only on $\sigma$, $A_v$) such that \begin{equation*} \| \widehat{\partial}_\parallel(\hat{v} - {\widehat{\Pi}}_p \hat{v})\|_{L^2({\widehat{K}})}^2 \leq C \frac{h_\parallel}{h_{\bot,1}h_{\bot,2}}\Psi_{p,s} A^{2s+6}(d_c^2+d_c^{2\gamma_c-2})((s+3)!)^2. \end{equation*} Noting that \begin{equation*} \|\partial_{\parallel} (v-\Pi_p^K v)\|^2_{L^2(K)}\leq \frac{h_{\bot,1}h_{\bot,2}}{h_\parallel} \|\widehat{\partial}_\parallel (\hat{v} - {\widehat{\Pi}}_p\hat{v})\|^2_{L^2({\widehat{K}})} \end{equation*} completes the proof. \end{proof} \begin{lemma} \label{lemma:internal-appx-2} Let $d=3$, $\ell\in \mathbb{N}$ and $K = J^\ell_{k_1}\times J^\ell_{k_2}\times J^{\ell}_{k_3}$ for $0< k_1, k_2, k_3\leq \ell$. Let also $v\in \mathcal{J}^\varpi_{\underline{\gamma}}(Q; \mathcal{C}, \mathcal{E}; C_v, A_v)$ with $\gamma_c \in (3/2, 5/2)$, $\gamma_e \in (1, 2)$.
Then, there exists $C>0$ dependent only on $\sigma$, $C_{\mathrm{appx}2}$, $C_v$ and $A>0$ dependent only on $\sigma$, $A_v$ such that for all $p\in\mathbb{N}$ and all $1\leq s \leq p$ \begin{multline} \label{eq:internal-appx-2} \|\partial_{\bot,1} ( v - \Pi^K_p v )\|^2_{L^2(K)}+ \|\partial_{\bot,2} ( v - \Pi^K_p v )\|^2_{L^2(K)} \\ \leq C \Psi_{p,s} A^{2s+6}\left((d_c^K)^{2(\gamma_c-1)} + (d_c^K)^{2(\gamma_e-1)}\right)((s+3)!)^2, \end{multline} where $\partial_{\bot,1}$, $\partial_{\bot,2}$ are the derivatives in the directions perpendicular to the closest singular edge. \end{lemma} \begin{proof} The proof follows closely that of Lemma \ref{lemma:internal-appx-1} and we use the same notation. From Lemma \ref{lemma:ref-proj2} and rescaling, we have \begin{equation} \label{eq:Kappx1-2} \begin{aligned} \| \widehat{\partial}_{\bot,1}( \hat{v} - {\widehat{\Pi}}_p \hat{v} )\|_{L^2({\widehat{K}})}^2 &\leq C_{\mathrm{appx}2} \Psi_{p,s} \frac{h_{\bot,1}}{h_\parallel h_{\bot,2}}\left(\sum_{\alpha_1 \leq 1} h_\parallel^{2s+2}h_{\bot,2}^{2\alpha_1}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K)} \right. \\ &\qquad + \sum_{\alpha_1, \alpha_2\leq 1}h_\parallel^{2\alpha_1}h_{\bot,1}^{2s}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K)} \\ &\qquad + \left. \sum_{\alpha_1\leq 1}h_\parallel^{2\alpha_1}h_{\bot,2}^{2s+2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2}^{s+1}v \|^2_{L^2(K)} \right) \\ & = C_{\mathrm{appx}2} \Psi_{p,s} \frac{h_{\bot,1}}{h_\parallel h_{\bot,2}}\bigg((I) + (II) + (III) \bigg). \end{aligned} \end{equation} As before, we will write ${\widetilde{\gamma}} = \gamma_c-\gamma_e$. We start by considering the term $(I)_{c e}$.
When $\alpha_1 = 1$, \begin{align*} h_\parallel^{2s+2}h_{\bot,2}^{2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2} v \|^2_{L^2(K_{c e})} & \leq \tau_\sigma^{2s+4} d_c^{2s+2}d_e^{2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2} v \|^2_{L^2(K_{c e})} \\ & \leq \tau_\sigma^{2s+4} d_c^{2{\widetilde{\gamma}}}d_e^{2\gamma_e-2}\|r_c^{s+3-\gamma_c}\rho_{c e}^{2-\gamma_e} \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2} v \|^2_{L^2(K_{c e})}, \end{align*} where we note that $d_c^{2{\widetilde{\gamma}}}d_e^{2\gamma_e-2} \leq d_c^{2\gamma_c-2}$, since $d_e\leq d_c$ and $\gamma_e>1$. Furthermore, if $\alpha_1 =0$, \begin{align*} h_\parallel^{2s+2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1} v \|^2_{L^2(K_{c e})} & \leq \tau_\sigma^{2s+2} d_c^{2s+2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1} v \|^2_{L^2(K_{c e})} \\ & \leq \tau_\sigma^{2s+2} d_c^{2\gamma_c-2}\|r_c^{s+2-\gamma_c} \partial_{\parallel}^{s+1}\partial_{\bot,1} v \|^2_{L^2(K_{c e})}. \end{align*} Therefore, \begin{equation*} (I)_{c e} \leq \tau_\sigma^{2s+4} d_c^{2\gamma_c-2}\sum_{\alpha_1\leq 1}\|r_c^{s+2+\alpha_1-\gamma_c}\rho_{c e}^{(1+\alpha_1-\gamma_e)_+} \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2}^{\alpha_1} v \|^2_{L^2(K_{c e})}. \end{equation*} The estimates for $(I)_{c, e, 0}$ follow from the same technique: \begin{align*} (I)_{e} & \leq \sum_{\alpha_1 \leq 1} \tau_\sigma^{2s+4} d_c^{2s+2}\|r_e^{(1+\alpha_1-\gamma_e)_+} \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2}^{\alpha_1} v \|^2_{L^2(K_{e})}, \\ (I)_{c} & \leq \sum_{\alpha_1 \leq 1} \tau_\sigma^{2s+4} d_c^{2\gamma_c-2}\|r_c^{s+2+\alpha_1-\gamma_c} \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2}^{\alpha_1} v \|^2_{L^2(K_{c})}, \\ (I)_{0} & \leq \sum_{\alpha_1 \leq 1} \tau_\sigma^{2s+4} d_c^{2s+2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2}^{\alpha_1} v \|^2_{L^2(K_{0})}.
\end{align*} Hence, from \eqref{eq:analytic}, there exists $C>0$ dependent only on $C_v$ and $\sigma$ and $A>0$ dependent only on $A_v$ and $\sigma$ such that \begin{equation} \label{eq:Ibound-2} (I) \leq C A^{2s+6}((s+3)!)^2 d_c^{2\gamma_c-2} . \end{equation} We then apply the same argument to the terms $(II)$ and $(III)$. Indeed, if $s+1+\alpha_1+\alpha_2-\gamma_c\geq0$, \begin{align*} (II)_{c e} &= \sum_{\alpha_1, \alpha_2\leq 1}h_\parallel^{2\alpha_1}h_{\bot,1}^{2s}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})} \\ &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2\alpha_1}d_e^{2s+2\alpha_2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})} \\ &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2{\widetilde{\gamma}}}d_e^{2\gamma_e-2}\| r_c^{s+1+\alpha_1+\alpha_2-\gamma_c}\rho_{c e}^{s+1+\alpha_2-\gamma_e}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})} \\ &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2\gamma_c-2}\| r_c^{s+1+\alpha_1+\alpha_2-\gamma_c}\rho_{c e}^{s+1+\alpha_2-\gamma_e}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})}, \end{align*} where in the last step we have used that $\gamma_e>1$ and $d_e\leq d_c$.
If $s+1+\alpha_1+\alpha_2-\gamma_c<0$, then \begin{align*} (II)_{c e} &= \sum_{\alpha_1, \alpha_2\leq 1}h_\parallel^{2\alpha_1}h_{\bot,1}^{2s}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})} \\ &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2\alpha_1}d_e^{2s+2\alpha_2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})} \\ &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2\alpha_1}d_e^{2s+2\alpha_2}(d_e/d_c)^{-2s-2-2\alpha_2+2\gamma_e}\| \rho_{c e}^{s+1+\alpha_2-\gamma_e}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})} \\ &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2s+2-2\gamma_e}d_e^{2\gamma_e-2}\| \rho_{c e}^{s+1+\alpha_2-\gamma_e}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})}. \end{align*} Thus, using $d_e\leq d_c$, \begin{align*} (II)_{c e} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}(d_c^{2s}+d_c^{2\gamma_c-2})\| r_c^{(s+1+\alpha_1+\alpha_2-\gamma_c)_+}\rho_{c e}^{s+1+\alpha_2-\gamma_e}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c e})}.
\end{align*} The estimates for $(II)_{c,e,0}$ and $(III)_{c e, c, e, 0}$ can be obtained as above: \begin{align*} (II)_{e} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_e^{2\gamma_e-2}\| r_e^{s+1+\alpha_2-\gamma_e}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{e})}, \end{align*} if $s+1+\alpha_1+\alpha_2-\gamma_c\geq0$, then \begin{align*} (II)_{c} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2\gamma_c-2}\| r_c^{s+1+\alpha_1+\alpha_2-\gamma_c}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c})}, \end{align*} if $s+1+\alpha_1+\alpha_2-\gamma_c<0$, then \begin{align*} (II)_{c} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2s}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c})}, \end{align*} so that \begin{align*} (II)_{c} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}(d_c^{2s}+d_c^{2\gamma_c-2})\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{c})}, \\ (II)_{0} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1, \alpha_2\leq 1}d_c^{2s}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K_{0})}, \\ (III)_{c e} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_c^{2\gamma_c-2}\| r_c^{s+2+\alpha_1-\gamma_c}\rho_{c e}^{s+2-\gamma_e} \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2}^{s+1} v \|^2_{L^2(K_{c e})}, \\ (III)_{e} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_e^{2\gamma_e-2}\| r_e^{s+2-\gamma_e} \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2}^{s+1} v \|^2_{L^2(K_{e})}, \\ (III)_{c} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_c^{2\gamma_c-2}\| r_c^{s+2+\alpha_1-\gamma_c} \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2}^{s+1} v \|^2_{L^2(K_{c})}, \\ (III)_{0} &\leq \tau_\sigma^{2s+4} \sum_{\alpha_1\leq 1}d_e^{2s+2}\|
\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2}^{s+1} v \|^2_{L^2(K_{0})}. \end{align*} Therefore, we have \begin{equation} \label{eq:IIbound-2} (II) + (III) \leq C A^{2s+6}(d_c^{2\gamma_c-2} + d_c^{2\gamma_e-2})((s+3)!)^2. \end{equation} We obtain from \eqref{eq:Kappx1-2}, \eqref{eq:Ibound-2}, and \eqref{eq:IIbound-2} that there exist $C>0$ dependent only on $\sigma$, $C_{\mathrm{appx}2}$, $C_v$ and $A>0$ dependent only on $\sigma$, $A_v$ such that \begin{equation*} \| \widehat{\partial}_{\bot, 1}(\hat{v} - {\widehat{\Pi}}_p \hat{v})\|_{L^2({\widehat{K}})}^2 \leq C \frac{h_{\bot,1}}{h_\parallel h_{\bot,2}}\Psi_{p,s} A^{2s+6}\left(d_c^{2(\gamma_c-1)} + d_c^{2(\gamma_e-1)}\right)((s+3)!)^2. \end{equation*} Noting that \begin{equation*} \|\partial_{\bot,1} (v-\Pi_p^K v)\|^2_{L^2(K)}\leq \frac{h_\parallel h_{\bot,2}}{h_{\bot,1}} \|\widehat{\partial}_{\bot,1} (\hat{v} - {\widehat{\Pi}}_p\hat{v})\|^2_{L^2({\widehat{K}})} \end{equation*} and that the estimate for the other term on the left-hand side of \eqref{eq:internal-appx-2} is obtained by exchanging $\{h, \partial\}_{\bot,1}$ with $\{h, \partial\}_{\bot,2}$ completes the proof. \end{proof} \begin{lemma} \label{lemma:internal-appx-3} Let $d=3$, $\ell\in \mathbb{N}$ and $K = J^\ell_{k_1}\times J^\ell_{k_2}\times J^{\ell}_{k_3}$ for $0< k_1, k_2, k_3\leq \ell$. Let also $v\in \mathcal{J}^\varpi_{\underline{\gamma}}(Q; \mathcal{C}, \mathcal{E}; C_v, A_v)$ with $\gamma_c \in (3/2, 5/2)$, $\gamma_e \in (1, 2)$. Then, there exists $C>0$ dependent only on $\sigma$, $C_{\mathrm{appx}1}$, $C_v$ and $A>0$ dependent only on $\sigma$, $A_v$ such that for all $p\in\mathbb{N}$ and all $1\leq s \leq p$ \begin{equation} \label{eq:internal-appx-3} \|v - \Pi^K_p v \|^2_{L^2(K)} \leq C \Psi_{p,s} A^{2s+6}\left(d_c^{2(\gamma_c-1)} + d_c^{2(\gamma_e-1)}\right)((s+3)!)^2.
\end{equation} \end{lemma} \begin{proof} The proof follows closely that of Lemmas \ref{lemma:internal-appx-1} and \ref{lemma:internal-appx-2}; we use the same notation. From Lemma \ref{lemma:ref-proj} and rescaling, we have \begin{equation} \label{eq:Kappx1-3} \begin{aligned} \| \hat{v} - {\widehat{\Pi}}_p \hat{v} \|_{L^2({\widehat{K}})}^2 &\leq C_{\mathrm{appx}1} \Psi_{p,s} \frac{1}{h_\parallel h_{\bot,1}h_{\bot,2}}\left(\sum_{\alpha_1, \alpha_2 \leq 1} h_\parallel^{2s+2}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K)} \right. \\ &\qquad + \sum_{\alpha_1, \alpha_2\leq 1}h_\parallel^{2\alpha_1}h_{\bot,1}^{2s+2}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K)} \\ &\qquad + \left. \sum_{\alpha_1, \alpha_2\leq 1}h_\parallel^{2\alpha_1}h_{\bot,1}^{2\alpha_2}h_{\bot,2}^{2s+2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{\alpha_2}\partial_{\bot,2}^{s+1}v \|^2_{L^2(K)} \right) . \end{aligned} \end{equation} Most terms at the right-hand side above have already been considered in the proofs of Lemmas \ref{lemma:internal-appx-1} and \ref{lemma:internal-appx-2}, and the terms with $\alpha_1 = \alpha_2 = 0$ can be estimated similarly; the observation that \begin{equation*} \| v - \Pi_p v\|^2_{L^2(K)}\leq h_\parallel h_{\bot,1}h_{\bot,2} \| \hat{v} - {\widehat{\Pi}}_p \hat{v} \|_{L^2({\widehat{K}})}^2 \end{equation*} concludes the proof. \end{proof} We summarize Lemmas \ref{lemma:internal-appx-1} to \ref{lemma:internal-appx-3} in the following result. \begin{lemma} Let $d=3$, $\ell\in \mathbb{N}$ and $K=J^\ell_{k_1}\times J^\ell_{k_2}\times J^\ell_{k_3}$ such that $0<k_1,k_2,k_3\leq \ell$. Let also $v\in \mathcal{J}^\varpi_{\underline{\gamma}}(Q; \mathcal{C}, \mathcal{E}; C_v, A_v)$ with $\gamma_c \in (3/2, 5/2)$, $\gamma_e \in (1, 2)$.
Then, there exists $C>0$ dependent only on $\sigma$, $C_{\mathrm{appx}1}$, $C_{\mathrm{appx}2}$, $C_v$ and $A>0$ dependent only on $\sigma$, $A_v$ such that for all $p\in\mathbb{N}$ and all $1\leq s \leq p$ \begin{equation} \label{eq:internal-appx} \|v - \Pi^K_p v\|^2_{H^1(K)} \leq C \Psi_{p,s} A^{2s+6}\left(d_c^{2(\gamma_c-1)} + d_c^{2(\gamma_e-1)}\right)((s+3)!)^2. \end{equation} \end{lemma} We then consider elements on the faces (but not abutting edges) of $Q$. \begin{lemma} \label{lemma:internal-appx-face} Let $d=3$, $\ell\in \mathbb{N}$ and $K = J^\ell_{k_1}\times J^\ell_{k_2}\times J^{\ell}_{k_3}$ such that $k_j =0$ for one $j\in\{1,2,3\}$ and $0<k_i\leq \ell$ for $i\neq j$. For all $p\in\mathbb{N}$ and all $1\leq s \leq p$, let $p_j= 1$ and $p_i = p$ for $i\neq j$. Let also $v\in \mathcal{J}^\varpi_{\underline{\gamma}}(Q; \mathcal{C}, \mathcal{E}; C_v, A_v)$ with $\gamma_c \in (3/2, 5/2)$, $\gamma_e \in (1, 2)$. Then, there exists $C>0$ dependent only on $\sigma$, $C_{\mathrm{appx}1}$, $C_{\mathrm{appx}2}$, $C_v$ and $A>0$ dependent only on $\sigma$, $A_v$ such that \begin{equation} \label{eq:internal-appx-face} \|v - \Pi^K_{p_1 p_2 p_3} v\|^2_{H^1(K)} \leq C \left(\Psi_{p,s} A^{2s+6}(d^K_c)^{2(\min(\gamma_c, \gamma_e)-1)}((s+3)!)^2 + (d_e^K)^{2(\min(\gamma_c, \gamma_e)-2)}\sigma^{2\ell}A^8 \right). \end{equation} \end{lemma} \begin{proof} We write $d_a = d_a^K$, $a \in \{c, e\}$. Suppose, for ease of notation, that $j=3$, i.e., $k_3=0$. The projector is then given by $\Pi^K_{p p 1} = \pi^{k_1}_p\otimes \pi^{k_2}_p\otimes \pi^0_1$. Also, we denote $h_{\bot,2} = \sigma^\ell$ and $\partial_{\bot,2} = \partial_{x_3}$.
By \eqref{eq:disc-approx2}, \begin{align*} \| \partial_{\parallel}( v - \Pi^K_{pp1} v )\|_{L^2(K)}^2 &\leq C_{\mathrm{appx}2}\left( \Psi_{p,s} \bigg(\sum_{\alpha_1, \alpha_2 \leq 1} h_\parallel^{2s}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{ \alpha_1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K)} \right. \\ &\qquad + \sum_{\alpha_1\leq 1}h_{\bot,1}^{2s+2}h_{\bot,2}^{2\alpha_1}\| \partial_{\parallel}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K)} \bigg) \\ &\qquad + \left. \sum_{\alpha_1\leq 1}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{4}\| \partial_{\parallel}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{2}v \|^2_{L^2(K)} \right) \\ & = C_{\mathrm{appx}2} \bigg((I) + (II) + (III) \bigg). \end{align*} The bounds on the terms $(I)$ and $(II)$ can be derived as in Lemma \ref{lemma:internal-appx-1}, and give \begin{equation*} (I) + (II) \leq C \Psi_{p,s} A^{2s+6}\left( (d_c^K)^{2} + (d_c^K)^{2(\gamma_c-1)}\right)((s+3)!)^2. \end{equation*} We consider then term $(III)$: with the usual notation, writing ${\widetilde{\gamma}} = \gamma_c-\gamma_e$, \begin{equation} \label{eq:IIIface-a} \begin{aligned} (III)_{c e} &= \sum_{\alpha_1\leq 1}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{4}\| \partial_{\parallel}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{2}v \|^2_{L^2(K_{c e})} \\ & \leq\sum_{\alpha_1\leq 1} \tau_\sigma^{4+2\alpha_1} d_c^{2{\widetilde{\gamma}}-2}d_e^{2\gamma_e-4}\sigma^{4\ell} \| r_c^{3+\alpha_1-\gamma_c}\rho_{c e}^{2+\alpha_1-\gamma_e}\partial_{\parallel}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2}^{2}v \|^2_{L^2(K_{c e})} \\ &\leq C \tau_\sigma^{6}d_c^{2{\widetilde{\gamma}}-2}d_e^{2\gamma_e-4}\sigma^{4\ell}A^{8}. 
\end{aligned} \end{equation} Note that $d_c\geq d_e$ and \begin{equation} \label{eq:tgamma-gammae} d_c^{{\widetilde{\gamma}}}d_e^{\gamma_e}\leq \begin{cases} 1^{{\widetilde{\gamma}}}d_e^{\gamma_e} & \text{if }{\widetilde{\gamma}}\geq 0\\ d_e^{{\widetilde{\gamma}}}d_e^{\gamma_e} & \text{if }{\widetilde{\gamma}}< 0 \end{cases} \leq d_e^{\min(\gamma_c, \gamma_e)}, \end{equation} where we have also used that $d_c\leq 1$. Hence, \begin{equation} \label{eq:IIIface} (III)_{c e}\leq C \tau_\sigma^{6}d_e^{2\min(\gamma_e, \gamma_c)-6}\sigma^{4\ell}A^{8} \leq C \tau_\sigma^{6}d_e^{2\min(\gamma_e, \gamma_c)-4}\sigma^{2\ell}A^{8}, \end{equation} where the second inequality uses $\sigma^{2\ell}\leq d_e^2$, which holds since $d_e\geq \sigma^\ell$. The bounds on the terms $(III)_{c, e, 0}$ follow by the same argument: \begin{align*} (III)_{e} &\leq C \tau_\sigma^{6}d_e^{2\gamma_e-4}\sigma^{4\ell} A^{8}, \\ (III)_{c} &\leq C \tau_\sigma^{6}d_c^{2\gamma_c-6}\sigma^{4\ell} A^{8} \leq C\tau_\sigma^{6} d_e^{2\gamma_c -4}\sigma^{2\ell}A^8, \\ (III)_{0} &\leq C \tau_\sigma^{6}\sigma^{4\ell} A^{8}. \end{align*} Then, \begin{align*} \| \partial_{\bot,1}( v - \Pi^K_{pp1} v )\|_{L^2(K)}^2 &\leq C_{\mathrm{appx}2}\left(\frac{(p-s)!}{(p+s)!} \bigg( \sum_{\alpha_1\leq 1} h_\parallel^{2s+2}h_{\bot,2}^{2\alpha_1}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}\partial_{\bot,2}^{\alpha_1}v \|^2_{L^2(K)} \right. \\ &\qquad + \sum_{\alpha_1, \alpha_2 \leq 1} h_\parallel^{2\alpha_1}h_{\bot,1}^{2s}h_{\bot,2}^{2\alpha_2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2}^{\alpha_2}v \|^2_{L^2(K)} \bigg) \\ &\qquad + \left. \sum_{\alpha_1\leq 1}h_{\bot,1}^{2\alpha_1}h_{\bot,2}^{4}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2}^{2}v \|^2_{L^2(K)} \right) \\ & \leq C_{\mathrm{appx}2} \Big( (I) + (II) + (III) \Big).
\end{align*} The bounds on the first two terms at the right-hand side above can be obtained as in Lemma \ref{lemma:internal-appx-2}: \begin{align*} (I) + (II) & \leq C \Psi_{p,s} A^{2s+6}\left((d_c^K)^{2(\gamma_c-1)} + (d_c^K)^{2(\gamma_e-1)}\right)((s+3)!)^2 , \end{align*} while the last term can be bounded as in \eqref{eq:IIIface}, \begin{align*} (III)_{c e} & \leq \tau_\sigma^{6} d_c^{2{\widetilde{\gamma}}}d_e^{2\gamma_e-6}\sigma^{4\ell} A^8 \leq C\tau_\sigma^{6} d_e^{2\min(\gamma_c, \gamma_e) -4}\sigma^{2\ell}A^8, \\ (III)_{e} & \leq \tau_\sigma^{6} d_e^{2\gamma_e-6}\sigma^{4\ell} A^8 \leq C\tau_\sigma^{6} d_e^{2\gamma_e -4}\sigma^{2\ell}A^8, \\ (III)_{c} & \leq \tau_\sigma^{6} d_c^{2\gamma_c - 6}\sigma^{4\ell} A^8 \leq C\tau_\sigma^{6} d_e^{2\gamma_c -4}\sigma^{2\ell}A^8, \\ (III)_{0} & \leq \tau_\sigma^{6} \sigma^{4\ell} A^8, \end{align*} so that \begin{equation*} \sum_{\alpha_1\leq 1}h_\parallel^{2\alpha_1}h_{\bot,2}^{4}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2}^{2}v \|^2_{L^2(K)} \leq C d_e^{2\min(\gamma_c, \gamma_e) -4}\sigma^{2\ell}A^{8}. \end{equation*} The same holds true for the last term of the gradient of the approximation error, given by \begin{align*} \| \partial_{\bot,2}( v - \Pi^K_{pp1} v )\|_{L^2(K)}^2 &\leq C_{\mathrm{appx}2}\left( \Psi_{p,s} \bigg( \sum_{\alpha_1\leq 1} h_\parallel^{2s+2}h_{\bot,1}^{2\alpha_1}\| \partial_{\parallel}^{s+1}\partial_{\bot,1}^{\alpha_1}\partial_{\bot,2} v \|^2_{L^2(K)} \right. \\ &\qquad + \sum_{\alpha_1 \leq 1} h_\parallel^{2\alpha_1}h_{\bot,1}^{2s+2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{s+1}\partial_{\bot,2} v \|^2_{L^2(K)} \bigg) \\ &\qquad + \left. \sum_{\alpha_1, \alpha_2\leq 1}h_\parallel^{2\alpha_1}h_{\bot,1}^{2\alpha_2}h_{\bot,2}^{2}\| \partial_{\parallel}^{\alpha_1}\partial_{\bot,1}^{\alpha_2}\partial_{\bot,2}^{2}v \|^2_{L^2(K)} \right) \\ & \leq C_{\mathrm{appx}2} \Big( (I) + (II) + (III) \Big). 
\end{align*} From Lemma \ref{lemma:internal-appx-2}, we obtain \begin{align*} (I)+(II) & \leq C \Psi_{p,s} A^{2s+6}\left((d_c^K)^{2(\gamma_c-1)} + (d_c^K)^{2(\gamma_e-1)}\right)((s+3)!)^2, \end{align*} whereas for the third term, it holds that if $\alpha_1+\alpha_2+2-\gamma_c \geq0$, then \begin{align*} (III)_{c e} & \leq \tau_\sigma^{6} d_c^{2{\widetilde{\gamma}}}d_e^{2\gamma_e-4}\sigma^{2\ell} A^8 \leq C\tau_\sigma^{6} d_e^{2\min(\gamma_c, \gamma_e) -4}\sigma^{2\ell}A^8, \quad (III)_{c} \leq \tau_\sigma^{6} d_c^{2\gamma_c-4}\sigma^{2\ell} A^8, \end{align*} and if $\alpha_1+\alpha_2+2-\gamma_c < 0$, then \begin{align*} (III)_{c e} & \leq \tau_\sigma^{6} d_e^{2\gamma_e-4}\sigma^{2\ell} A^8, \quad (III)_{c} \leq \tau_\sigma^{6} \sigma^{2\ell} A^8, \end{align*} and, regardless of the sign of $\alpha_1+\alpha_2+2-\gamma_c$, the terms $(III)_{e}$ and $(III)_{0}$ satisfy the bounds that $(III)_{c e}$ and $(III)_{c}$ satisfy in the case $\alpha_1+\alpha_2+2-\gamma_c < 0$, so that \begin{align*} \| \partial_{\bot,2}( v - \Pi^K_{pp1} v)\|_{L^2(K)}^2 &\leq C \left( \Psi_{p,s} A^{2s+6}((s+3)!)^2d_c^{2(\min(\gamma_c, \gamma_e)-1)} + A^{8} d_e^{2(\min(\gamma_c, \gamma_e)-2)}\sigma^{2\ell}\right) \;. \end{align*} Finally, the bound on the $L^2(K)$ norm of the approximation error can be obtained by a combination of the estimates above. \end{proof} The exponential convergence of the approximation in internal elements (i.e., elements not abutting a singular edge or corner) follows from Lemmas \ref{lemma:internal-appx-1} to \ref{lemma:internal-appx-face}. \begin{lemma} \label{lemma:exp-int} Let $d=3$ and $v\in \mathcal{J}^\varpi_{\underline{\gamma}}(Q; \mathcal{C}, \mathcal{E})$ with $\gamma_c>3/2$, $\gamma_e>1$. There exists a constant $C_0>0$ such that if $p\geq C_0 \ell$, there exist constants $C, b>0$ such that, for every $\ell \in \mathbb{N}$, \begin{equation*} \sum_{K: d_e^K>0} \|v - \Pi_{\mathsf{hp}, d}^{\ell, p} v\|^2_{H^1(K)} \leq C e^{-b\ell}.
\end{equation*} \end{lemma} \begin{proof} We suppose, without loss of generality, that $\gamma_c\in (3/2, 5/2)$ and $\gamma_e\in (1,2)$. The general case follows from the inclusion $\mathcal{J}^\varpi_{{\underline{\gamma}}_1}(Q;\mathcal{C}, \mathcal{E}) \subset \mathcal{J}^\varpi_{{\underline{\gamma}}_2}(Q;\mathcal{C}, \mathcal{E})$, valid for ${\underline{\gamma}}_1 \geq {\underline{\gamma}}_2$ componentwise. Fix any $C_0>0$ and choose $p\geq C_0 \ell$. For all $A>0$ there exist $C_1, b_1> 0$ such that (see, e.g., \cite[Lemma 5.9]{SSWII}) \begin{equation*} \forall p\in\mathbb{N}: \quad \min_{1\leq s\leq p} \Psi_{p,s} A^{2s} (s!)^2 \leq C_1 e^{-b_1 p}. \end{equation*} From \eqref{eq:internal-appx} and \eqref{eq:internal-appx-face}, there holds \begin{align*} &\sum_{K:d_e^K>0} \|v - \Pi_{\mathsf{hp}, d}^{\ell, p} v\|^2_{H^1(K)} \\ & \qquad \leq C_2 \left( \sum_{K:d_e^K>0}e^{-b_1\ell}(d_c^K)^{2(\min(\gamma_c, \gamma_e)-1)} + \sum_{K:d_e^K>0, d_f^K=0}(d_e^K)^{2(\min(\gamma_e, \gamma_c)-2)}\sigma^{2\ell}\right) \\ & \qquad = C_2\big((I)+ (II)\big), \end{align*} where $d_f^K$ indicates the distance of the element $K$ from the faces of $Q$. There holds directly $(I)\leq C\ell^2 e^{-b_1\ell}$. Furthermore, because $(\min(\gamma_c, \gamma_e)-2)<0$, \begin{align*} (II) &\leq 6 \sigma^{2\ell}\sum_{k_1=1}^\ell\sum_{k_2=1}^{k_1} \sigma^{2(\ell-k_2)(\min(\gamma_e, \gamma_c)-2)} \\ & \leq C \sigma^{2\ell}\sum_{k_1=1}^\ell\sigma^{2\ell(\min(\gamma_c, \gamma_e)-2)}\\ & \leq C \ell \sigma^{2(\min(\gamma_c, \gamma_e)-1)\ell}. \end{align*} Adjusting the constants in the exponent to absorb the terms in $\ell$ and $\ell^2$, we obtain the desired estimate. \end{proof} A similar statement holds when $d=2$, and the proof follows along the same lines. \begin{lemma} \label{lemma:exp-int-2d} Let $d=2$ and $v\in \mathcal{J}^\varpi_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})$ with $\gamma_c>1$.
There exists a constant $C_0>0$ such that if $p\geq C_0 \ell$, there exist constants $C, b>0$ such that \begin{equation*} \sum_{K: d_c^K>0} \|v - \Pi_{\mathsf{hp}, d}^{\ell, p} v\|^2_{H^1(K)} \leq C e^{-b\ell}, \qquad \forall \ell \in \mathbb{N}. \end{equation*} \end{lemma} % \subsection{Estimates on elements along an edge in three dimensions} \label{sec:edge-estimates} In the following lemma, we consider the elements $K$ along one edge, but separated from the singular corner. \begin{lemma} \label{lemma:edge-elem} Let $d=3$, $e\in\mathcal{E}$ and let $K\in \mathcal{G}^\ell_3$ be such that $d_c^K >0$ for all $c\in \mathcal{C}$ and $d_e^K=0$. Let $C_v, A_v>0$. Then, if $v\in \mathcal{J}^\varpi_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E}; C_v, A_v)$ with $\gamma_c\in(3/2,5/2)$, $\gamma_e\in(1,2)$, there exist $C, A>0$ such that for all $p\in\mathbb{N}$ and all $1\leq s\leq p$, with $(p_1,p_2,p_3)\in\mathbb{N}^3$ such that $p_\parallel = p$, $p_{\perp,1} = 1 = p_{\perp,2}$, \begin{equation} \label{eq:edge-elem} \| v - \Pi^K_{p_1p_2p_3}v\|_{H^1(K)}^2 \leq C\left(\sigma^{2\min\{\gamma_c-1,s\}(\ell-k)} \Psi_{p,s} A^{2s}((s+3)!)^2 + \sigma^{ 2(\min(\gamma_e, \gamma_c)-1)\ell}\right), \end{equation} where $k\in \{1, \dots, \ell\}$ is such that $d_c^K = \sigma^{\ell-k+1}$. \end{lemma} \begin{proof} We suppose that $K=J^\ell_k\times J^\ell_0\times J^\ell_0$ for some $k\in \{1, \dots, \ell\}$; the estimates for the elements along the other edges follow by symmetry. This implies that the singular edge is parallel to the first coordinate direction. Furthermore, we denote \begin{equation*} \Pi^K_{p11} = \pi^k_p\otimes (\pi^0_1 \otimes \pi^0_1) = \pi_\parallel\otimes\pi_\bot. \end{equation*} For $\alpha = (\alpha_1, \alpha_2, \alpha_3) \in \mathbb{N}_0^3$, we write ${\alpha_\parallel} = (\alpha_1, 0, 0)$ and ${\alpha_\bot} = (0, \alpha_2, \alpha_3)$. Also, \begin{equation*} h_\parallel = |J^\ell_k| = \sigma^{\ell-k}(1-\sigma),\qquad h_\bot = \sigma^\ell.
\end{equation*} We have \begin{equation} \label{eq:edge-err-decomp} v - \Pi^K_{p11}v = v - \pi_\bot v + \pi_\bot \left( v - \pi_\parallel v \right). \end{equation} We start by considering the first term at the right-hand side of the above equation. We compute the norms over $K_{c e} = K\cap Q_{c e}$; the estimates on the norms over $K_c = K\cap Q_c$ and $K_e = K\cap Q_e$ follow by similar or simpler arguments. By \eqref{eq:2d-square-approx} from Lemma \ref{lemma:2d-square}, we have that if $\gamma_c < 2$ \begin{subequations}\label{eqs:edge-err-perp-1} \begin{equation} \begin{aligned} \sum_{{|\alpha_\bot|}\leq 1}h_\bot^{-2(1-{|\alpha_\bot|})}\|\partial^{\alphaperp}( v - \pi_\bot v)\|_{L^2(K_{c e})}^2 & \lesssim h_\bot^{2(\gamma_e-1)}\sum_{{|\alpha_\bot|} = 2} \| r_e^{2 -\gamma_e} \partial^{\alphaperp} v\|_{L^2(K_{c e})}^2 \\ & \lesssim h_\parallel^{2(\gamma_c-\gamma_e)}h_\bot^{2(\gamma_e-1)}\sum_{{|\alpha_\bot|} = 2} \| r_c^{(2-\gamma_c)_+}\rho_{c e}^{2 -\gamma_e} \partial^{\alphaperp} v\|_{L^2(K_{c e})}^2 \\ & \lesssim \sigma^{2k(\gamma_e-1)}\sigma^{2(\ell-k)(\gamma_c-1)} A^{4} \lesssim \sigma^{2\ell(\min\{\gamma_c,\gamma_e\}-1)} A^{4}, \end{aligned} \end{equation} whereas for $\gamma_c\geq 2$ \begin{align} \nonumber \sum_{{|\alpha_\bot|}\leq 1}h_\bot^{-2(1-{|\alpha_\bot|})}\|\partial^{\alphaperp}( v - \pi_\bot v)\|_{L^2(K_{c e})}^2 & \lesssim h_\bot^{2(\gamma_e-1)}\sum_{{|\alpha_\bot|} = 2} \| r_e^{2 -\gamma_e} \partial^{\alphaperp} v\|_{L^2(K_{c e})}^2 \\ & \lesssim \sigma^{2\ell(\gamma_e-1)} A^{4}. \end{align} \end{subequations} On $K_e$, the same bound holds as on $K_{c e}$ for $\gamma_c\geq2$, and on $K_c$ the same bounds hold as on $K_{c e}$ for $\gamma_c<2$.
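For completeness, the exponent bookkeeping behind the last step in the first bound of \eqref{eqs:edge-err-perp-1} is the following: since $h_\parallel \simeq \sigma^{\ell-k}$ and $h_\bot = \sigma^\ell$, \begin{equation*} h_\parallel^{2(\gamma_c-\gamma_e)}h_\bot^{2(\gamma_e-1)} \simeq \sigma^{2(\ell-k)(\gamma_c-\gamma_e)}\sigma^{2\ell(\gamma_e-1)} = \sigma^{2(\ell-k)(\gamma_c-1)}\sigma^{2k(\gamma_e-1)}, \end{equation*} where we have used $\ell(\gamma_e-1) = (\ell-k)(\gamma_e-1) + k(\gamma_e-1)$; since $\gamma_c-1$ and $\gamma_e-1$ are both positive, each of the two factors is bounded by the corresponding power of $\sigma^{\min\{\gamma_c,\gamma_e\}-1}$, which yields the bound $\sigma^{2\ell(\min\{\gamma_c,\gamma_e\}-1)}$.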
By the same argument, for ${|\alpha_\parallel|}=1$, \begin{subequations}\label{eqs:edge-err-perp-2} \begin{equation} \begin{aligned} \|\partial^{\alphapar} ( v - \pi_\bot v)\|_{L^2(K_{c e})}^2 & = \| (\partial^{\alphapar} v )- \pi_\bot (\partial^{\alphapar} v)\|_{L^2(K_{c e})}^2 \\ & \lesssim h_\bot^{2\gamma_e}\sum_{{|\alpha_\bot|} = 2} \| r_e^{2 -\gamma_e} \partial^{\alphaperp} \partial^{\alphapar} v\|_{L^2(K_{c e})}^2 \\ & \lesssim h_\parallel^{2{\widetilde{\gamma}}-2}h_\bot^{2\gamma_e}\sum_{{|\alpha_\bot|} = 2} \| r_c^{3-\gamma_c}\rho_{c e}^{2 -\gamma_e} \partial^\alpha v\|_{L^2(K_{c e})}^2 \\ & \lesssim \sigma^{2(\ell-k)(\gamma_c-1)}\sigma^{2k(\gamma_e-1)} A^{6} \lesssim \sigma^{2\ell(\min\{\gamma_c,\gamma_e\}-1)} A^{6}, \end{aligned} \end{equation} and \begin{align} \| (\partial^{\alphapar} v )- \pi_\bot (\partial^{\alphapar} v)\|_{L^2(K_{e})}^2 & \lesssim \sigma^{2\ell\gamma_e} A^{6}, \\ \| (\partial^{\alphapar} v )- \pi_\bot (\partial^{\alphapar} v)\|_{L^2(K_{c})}^2 & \lesssim \sigma^{2(\ell-k)(\gamma_c-1)}\sigma^{2k(\gamma_e-1)} A^{6} \lesssim \sigma^{2\ell(\min\{\gamma_c,\gamma_e\}-1)} A^{6}. \end{align} \end{subequations} We now turn to the second part of the right-hand side of \eqref{eq:edge-err-decomp}. We use \eqref{eq:2d-square-stab} from Lemma \ref{lemma:2d-square} so that \begin{equation} \label{eq:prelemmaoned} \begin{aligned} &\sum_{{|\alpha_\bot|} \leq 1}\|\partial^{\alphaperp}\pi_\bot ( v - \pi_\parallel v)\|_{L^2(K)}^2\\ &\qquad \lesssim \sum_{{|\alpha_\bot|} \leq 1}\| \partial^{\alphaperp} (v- \pi_\parallel v )\|_{L^2(K)}^2 + \sum_{{|\alpha_\bot|} = 2} h_\bot^{2(\gamma_e-1)}\| r_e^{2- \gamma_e} \partial^{\alphaperp} (v- \pi_\parallel v)\|_{L^2(K)}^2. 
\end{aligned} \end{equation} By Lemma \ref{lemma:oned} we have, recalling that ${|\alpha_\parallel|}=s+1$ and $1\leq s \leq p$, for all ${|\alpha_\bot|}\leq1$, \begin{align*} \| \partial^{\alphaperp} (v- \pi_\parallel v )\|_{L^2(K)}^2 &= \| (\partial^{\alphaperp} v)- \pi_\parallel(\partial^{\alphaperp} v )\|_{L^2(K)}^2 \\ &\lesssim\tau_\sigma^{2s+2} h_\parallel^{2\min\{\gamma_c,s+1\}}\Psi_{p, s}\||x_1|^{(s+1-\gamma_c)_+}\partial^{\alphapar}\partial^{\alphaperp} v\|^2_{L^2(K)}, \end{align*} and, for all ${|\alpha_\bot|} =2$, using that $\pi_\parallel$ and multiplication by $r_e$ commute, because $r_e$ does not depend on $x_1$, \begin{align*} \| r_e^{2-\gamma_e}\partial^{\alphaperp} (v- \pi_\parallel v )\|_{L^2(K)}^2 &= \| (r_e^{2-\gamma_e}\partial^{\alphaperp} v)- \pi_\parallel(r_e^{2-\gamma_e}\partial^{\alphaperp} v )\|_{L^2(K)}^2\\ &\lesssim\tau_\sigma^{2s+2} h_\parallel^{2\min\{\gamma_c,s+1\}}\Psi_{p, s}\||x_1|^{(s+1-\gamma_c)_+}r_e^{2-\gamma_e}\partial^{\alphapar}\partial^{\alphaperp} v\|^2_{L^2(K)}. \end{align*} Then, remarking that $|x_1| \lesssim r_c \lesssim |x_1|$ on $K$, and combining \eqref{eq:prelemmaoned} with the two inequalities above, we obtain \begin{equation*} \begin{aligned} &\sum_{{|\alpha_\bot|} \leq 1}\|\partial^{\alphaperp}\pi_\bot ( v - \pi_\parallel v)\|_{L^2(K)}^2 \\ & \qquad\begin{multlined}[][.95\linewidth] \lesssim \tau_\sigma^{2s+2}\Psi_{p, s}h_\parallel^{2\min\{\gamma_c-1,s\}}h_\parallel^2 \left(\sum_{{|\alpha_\bot|} \leq 1}\| r_c^{(s+1-\gamma_c)_+}\partial^\alpha v\|_{L^2(K)}^2 \right. \\ \left. +\sum_{{|\alpha_\bot|} =2}h_\bot^{2(\gamma_e-1)}\| r_c^{(s+1-\gamma_c)_+} r_e^{2 - \gamma_e} \partial^\alpha v\|_{L^2(K)}^2\right).
\end{multlined} \end{aligned} \end{equation*} Adjusting the exponents of the weights and replacing $h_\parallel$ and $h_\bot$ by their definitions, we find that there exists $A>0$ depending only on $\sigma$ and $A_v$ such that \begin{subequations} \label{eqs:edge-err-par-1} \begin{equation} \begin{aligned} &\sum_{{|\alpha_\bot|} \leq 1}\|\partial^{\alphaperp}\pi_\bot ( v - \pi_\parallel v)\|_{L^2(K_{c e})}^2 \\ & \qquad \begin{multlined}[][.95\linewidth] \lesssim\tau_\sigma^{2s+2} \Psi_{p, s}h_\parallel^{2\min\{\gamma_c-1,s\}}h_\parallel^2\left(\sum_{{|\alpha_\bot|} \leq 1}h_\parallel^{-2{|\alpha_\bot|}}\| r_c^{(s+1+{|\alpha_\bot|}-\gamma_c)_+}\partial^\alpha v\|_{L^2(K_{c e})}^2 \right. \\ \left. +\sum_{{|\alpha_\bot|} =2}h_\bot^{2(\gamma_e-1)}h_\parallel^{-2\gamma_e}\| r_c^{s+3-\gamma_c} \rho_{c e}^{2 - \gamma_e} \partial^\alpha v\|_{L^2(K_{c e})}^2\right) \end{multlined} \\ & \qquad\lesssim \sigma^{2(\ell-k)\min\{\gamma_c-1,s\}}\Psi_{p, s} A^{2s+4} ((s+3)!)^2, \end{aligned} \end{equation} and similarly \begin{align} & \sum_{{|\alpha_\bot|} \leq 1}\|\partial^{\alphaperp}\pi_\bot ( v - \pi_\parallel v)\|_{L^2(K_{e})}^2 \lesssim \sigma^{2(\ell-k)\min\{\gamma_c,s+1\}}\Psi_{p, s} A^{2s+4} ((s+3)!)^2, \end{align} and the estimate on $K_c$ is the same as that on $K_{c e}$.
\end{subequations} Similarly to \eqref{eq:prelemmaoned}, using first \eqref{eq:2d-square-stab-L2} from the proof of Lemma \ref{lemma:2d-square}, and then Lemma \ref{lemma:oned}, we obtain \begin{align*} &\sum_{{|\alpha_\parallel|} \leq 1}\|\partial^{\alphapar}\pi_\bot ( v - \pi_\parallel v)\|_{L^2(K)}^2 \\ &\qquad \lesssim \sum_{{|\alpha_\parallel|}\leq 1}\left( \sum_{{|\alpha_\bot|} \leq 1}h_\bot^{2{|\alpha_\bot|}} \| \partial^{\alphaperp} \partial^{\alphapar} (v- \pi_\parallel v )\|_{L^2(K)}^2 + \sum_{{|\alpha_\bot|} = 2} h_\bot^{2\gamma_e}\| r_e^{2 - \gamma_e} \partial^{\alphaperp} \partial^{\alphapar} (v- \pi_\parallel v)\|_{L^2(K)}^2 \right) \\ & \qquad\begin{multlined}[][.95\linewidth] \lesssim \tau_\sigma^{2s+2}\Psi_{p, s}h_\parallel^{2\min\{\gamma_c-1,s\}} \left(\sum_{{|\alpha_\parallel|}=s+1}\sum_{{|\alpha_\bot|} \leq 1}h_\bot^{2{|\alpha_\bot|}}\| r_c^{(s+1-\gamma_c)_+}\partial^{\alphapar} \partial^{\alphaperp} v\|_{L^2(K)}^2 \right. \\ \left. +\sum_{{|\alpha_\parallel|}=s+1}\sum_{{|\alpha_\bot|} =2}h_\bot^{2\gamma_e}\| r_e^{2 - \gamma_e} r_c^{(s+1-\gamma_c)_+} \partial^{\alphapar} \partial^{\alphaperp} v\|_{L^2(K)}^2\right). \end{multlined} \end{align*} As before, there exists $A>0$ depending only on $\sigma$ and $A_v$ such that \begin{subequations}\label{eqs:edge-err-par-2} \begin{equation} \begin{aligned} &\sum_{{|\alpha_\parallel|} \leq 1}\|\partial^{\alphapar}\pi_\bot ( v - \pi_\parallel v)\|_{L^2(K_{c e})}^2 \\ & \qquad\begin{multlined}[][.95\linewidth] \lesssim \tau_\sigma^{2s+2}\Psi_{p, s}h_\parallel^{2\min\{\gamma_c-1,s\}} \left(\sum_{{|\alpha_\parallel|}=s+1}\sum_{{|\alpha_\bot|} \leq 1}h_\bot^{2{|\alpha_\bot|} }h_\parallel^{-2{|\alpha_\bot|}}\| r_c^{(s+1+{|\alpha_\bot|}-\gamma_c)_+}\partial^\alpha v\|_{L^2(K_{c e})}^2 \right. \\ \left.
+\sum_{{|\alpha_\parallel|}=s+1}\sum_{{|\alpha_\bot|} =2}h_\bot^{2\gamma_e}h_\parallel^{-2\gamma_e}\| r_c^{s+3-\gamma_c} \rho_{c e}^{2 - \gamma_e} \partial^\alpha v\|_{L^2(K_{c e})}^2\right) \end{multlined} \\ &\qquad \lesssim \sigma^{2(\ell-k)\min\{\gamma_c-1,s\}}\Psi_{p, s} A^{2s+4} ((s+3)!)^2, \end{aligned} \end{equation} and \begin{equation} \begin{aligned} &\sum_{{|\alpha_\parallel|} \leq 1}\|\partial^{\alphapar}\pi_\bot ( v - \pi_\parallel v)\|_{L^2(K_{e})}^2 \\ & \qquad\begin{multlined}[][.95\linewidth] \lesssim \tau_\sigma^{2s+2}\Psi_{p, s}h_\parallel^{2\min\{\gamma_c-1,s\}} \left(\sum_{{|\alpha_\parallel|}=s+1}\sum_{{|\alpha_\bot|} \leq 1}h_\bot^{2{|\alpha_\bot|} }\| r_c^{(s+1-\gamma_c)_+}\partial^\alpha v\|_{L^2(K_{e})}^2 \right. \\ \left. +\sum_{{|\alpha_\parallel|}=s+1}\sum_{{|\alpha_\bot|} =2}h_\bot^{2\gamma_e}\| r_c^{(s+1-\gamma_c)_+} r_e^{2 - \gamma_e} \partial^\alpha v\|_{L^2(K_{e})}^2\right) \end{multlined} \\ &\qquad \lesssim \sigma^{2(\ell-k)\min\{\gamma_c-1,s\}}\Psi_{p, s} A^{2s+4} ((s+3)!)^2, \end{aligned} \end{equation} and the estimate on $K_c$ is the same as that on $K_{c e}$. \end{subequations} The assertion now follows from \eqref{eqs:edge-err-perp-1}, \eqref{eqs:edge-err-perp-2}, \eqref{eqs:edge-err-par-1}, and \eqref{eqs:edge-err-par-2}, upon possibly adjusting the value of the constant $A$. \end{proof} \begin{lemma} \label{lemma:exp-edge} Let $d=3$ and $v\in\mathcal{J}^\varpi_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})$ with $\gamma_c>3/2$, $\gamma_e>1$. There exists a constant $C_0>0$ such that if $p\geq C_0 \ell$, there exist constants $C, b>0$ such that \begin{equation*} \sum_{K:d_c^K>0,\atop d_e^K=0}\| v - \Pi_{\mathsf{hp}, d}^{\ell, p} v\|^2_{H^1(K)} \leq Ce^{-b\ell}, \qquad \forall \ell\in \mathbb{N}. \end{equation*} \end{lemma} \begin{proof} As in the proof of Lemma \ref{lemma:exp-int}, we may assume that $\gamma_c\in(3/2,5/2)$ and $\gamma_e\in(1,2)$.
The proof of this statement follows by summing the right-hand side of \eqref{eq:edge-elem} over the elements along the edge, i.e., \begin{align*} \sum_{K:d_c^K>0,\atop d_e^K=0}\| v - \Pi_{\mathsf{hp}, d}^{\ell, p} v\|_{H^1(K)}^2 & \leq C\left(\sum_{k=1}^\ell \sigma^{2\min\{\gamma_c-1,s\}(\ell-k)} \Psi_{p, s} A^{2s}((s+3)!)^2 + \sigma^{ 2(\min(\gamma_c,\gamma_e)-1)\ell}\right) \\ & =C ((I) + (II)). \end{align*} We have $(II) \lesssim \ell \sigma^{2(\min(\gamma_c, \gamma_e)-1)\ell}$; furthermore, for all $A>0$ there exist $C_1, b_1>0$ such that \begin{equation*} \min_{1\leq s\leq p}\Psi_{p, s} ((s+3)!)^2 A^{2s} \leq C_1 e^{-b_1 p} \end{equation*} (see, e.g., \cite[Lemma 5.9]{SSWII}), which bounds $(I)$. Combining this with $p\geq C_0\ell$ concludes the proof. \end{proof} \subsection{Estimates at the corner} \label{sec:corner-estimates} The lemma below follows from classic low-order finite element approximation results and from the embedding $\mathcal{J}^2_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})\subset H^{1+\theta}(Q)$, valid for some $\theta>0$ if $ \gamma_c-d/2>0$ for all $c\in\mathcal{C}$, and, when $d=3$, $\gamma_e >1$ for all $e\in\mathcal{E}$ (see, e.g., \cite[Remark 2.3]{SchSch2018}). \begin{lemma} \label{lemma:exp-corner} Let $d \in \{2,3\}$, $K = \bigtimes_{i=1}^dJ_0^{\ell}$. Then, if $v\in\mathcal{J}^\varpi_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})$ with \begin{alignat*}{2} &\gamma_c>1,\;\text{for all }c\in\mathcal{C} , &&\text{ if } d = 2,\\ & \gamma_c>3/2\text{ and } \gamma_e>1,\; \text{for all }c\in\mathcal{C}\text{ and }e\in\mathcal{E},\quad &&\text{ if } d = 3, \end{alignat*} there exists a constant $C_0>0$ independent of $\ell$ such that if $p\geq C_0\ell$, there exist constants $C,b>0$ such that \begin{equation*} \| v - \Pi_{\mathsf{hp}, d}^{\ell, p} v\|_{H^1(K)} \leq Ce^{-b\ell}.
\end{equation*} \end{lemma} \subsection{Exponential convergence} \label{sec:exp-conv} The exponential convergence of the approximation in the full domain $Q$ then follows from Lemmas \ref{lemma:exp-int}, \ref{lemma:exp-int-2d}, \ref{lemma:exp-edge}, and \ref{lemma:exp-corner}. \begin{proposition} \label{lemma:exp-conv} Let $d \in \{2,3\}$, $v\in\mathcal{J}^\varpi_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})$ with \begin{alignat*}{2} &\gamma_c>1,\;\text{for all }c\in\mathcal{C} , &&\text{ if } d = 2,\\ & \gamma_c>3/2\text{ and } \gamma_e>1,\; \text{for all }c\in\mathcal{C}\text{ and }e\in\mathcal{E},\quad &&\text{ if } d = 3. \end{alignat*} Then, there exist constants $c_p>0$ and $C, b>0$ such that, for all $\ell\in \mathbb{N}$, \begin{equation*} \| v - \Pi_{\mathsf{hp}, d}^{\ell, c_p\ell} v\|_{H^1(Q)}\leq Ce^{-b\ell}. \end{equation*} With respect to the dimension of the discrete space $N_{\mathrm{dof}} = \dim(X_{\mathsf{hp}, d}^{\ell, c_p\ell})$, the above bound reads \begin{equation*} \| v - \Pi_{\mathsf{hp}, d}^{\ell, c_p\ell} v\|_{H^1(Q)}\leq C\exp(-b N_{\mathrm{dof}}^{1/(2d)}). \end{equation*} \end{proposition} \subsection{Explicit representation of the approximant in terms of continuous basis functions} \label{sec:basis} Let $p\in\mathbb{N}$. Let $\hat{\zeta}_1(x) = (1+x)/2$ and $\hat{\zeta}_2(x) = (1-x)/2$. Let also $\hat{\zeta}_n(x) = \frac{1}{2}\int_{-1}^xL_{n-2}(\xi)d\xi$, for $n=3, \dots, p+1$, where $L_{n-2}$ denotes the $L^\infty((-1,1))$-normalized Legendre polynomial of degree $n-2$ introduced in Section \ref{sec:loc-proj}. Then, fix $\ell\in \mathbb{N}$ and write $\zeta^k_n = \hat{\zeta}_n \circ \phi_k$, $n=1,\dots, p+1$ and $k=0, \dots, \ell$, with the affine map $\phi_k:J_{k}^\ell \to (-1,1)$ introduced in Section \ref{sec:loc-proj}.
We construct these functions explicitly: denoting $J^\ell_k = (x_k, x_{k+1})$ and $h_k = |x_{k+1}-x_k|$, there holds, for $x\in J_k^\ell$, \begin{equation} \label{eq:zeta-12} \zeta^k_1 (x)= \frac{1}{h_k}(x-x_k),\qquad \qquad \zeta^k_2(x)= \frac{1}{h_k}(x_{k+1} -x), \end{equation} and \begin{equation} \label{eq:zeta-n} \zeta_n^k(x)= \frac{1}{h_k}\int_{x_k}^x L_{n-2}(\phi_k(\eta))d\eta\qquad n=3, \dots, p+1. \end{equation} Then, for any element $K\in \mathcal{G}^\ell_3$, with $K = J^\ell_{k_1}\times J^\ell_{k_2}\times J^\ell_{k_3}$, there exist coefficients $c^K_{{i_1\dots i_d}}$ such that \begin{equation} \label{eq:local-proj-c} \Pi_{\mathsf{hp}, d}^{\ell, p} u_{|_K} (x_1, x_2, x_3)= \sum_{i_1, i_2, i_3=1}^{p+1} c^K_{{i_1\dots i_d}} \zeta^{k_1}_{i_1}(x_1)\zeta^{k_2}_{i_2}(x_2)\zeta^{k_3}_{i_3}(x_3), \quad \forall (x_1, x_2, x_3)\in K \end{equation} by construction. We remark that, whenever $i_j \geq 3$ for all $j=1,2,3$, the basis functions vanish on the boundary of the element: \begin{equation*} \left(\zeta^{k_1}_{i_1}\zeta^{k_2}_{i_2}\zeta^{k_3}_{i_3} \right)_{|_{\partial K}} = 0 \qquad\text{if }i_j \geq 3, \, j=1,2,3. \end{equation*} Furthermore, write \begin{equation*} \psi_{{i_1\dots i_d}}^K (x_1, x_2, x_3)= \zeta^{k_1}_{i_1}(x_1)\zeta^{k_2}_{i_2}(x_2)\zeta^{k_3}_{i_3} (x_3) \end{equation*} and consider $t_{{i_1\dots i_d}} = \# \{i_j\leq 2,\, j=1,2,3\}$. We have \begin{itemize} \item if $t_{{i_1\dots i_d}} = 1$, then $\psi_{{i_1\dots i_d}}^K$ is nonzero only on one face of the boundary of $K$, \item if $t_{{i_1\dots i_d}} = 2$, then $\psi_{{i_1\dots i_d}}^K$ is nonzero only on one edge and the neighboring faces of the boundary of $K$, \item if $t_{{i_1\dots i_d}} = 3$, then $\psi_{{i_1\dots i_d}}^K$ is nonzero only on one corner and the neighboring edges and faces of the boundary of $K$. \end{itemize} Similar arguments hold when $d=2$.
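That the functions $\zeta^k_n$ with $n\geq 3$ vanish at both endpoints of $J^\ell_k$ can be verified directly: $\zeta^k_n(x_k) = 0$ by \eqref{eq:zeta-n}, and, by the affine change of variables $\xi = \phi_k(\eta)$ and the orthogonality of $L_{n-2}$ to $L_0\equiv 1$ in $L^2((-1,1))$, \begin{equation*} \zeta^k_n(x_{k+1}) = \frac{1}{h_k}\int_{x_k}^{x_{k+1}} L_{n-2}(\phi_k(\eta))\, d\eta = \frac{1}{2}\int_{-1}^{1} L_{n-2}(\xi)\, d\xi = 0, \qquad n = 3, \dots, p+1. \end{equation*} This is the property underlying the classification of the functions $\psi^K_{{i_1\dots i_d}}$ above.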
\subsubsection{Explicit bounds on the coefficients} \label{sec:c-bounds} We derive here a bound on the coefficients of the local projectors with respect to the norms of the projected function. We will use that \begin{equation} \label{eq:L2-L} \|L_{i}\circ \phi_{k}\|_{L^2(J^\ell_{k})} = \left(\frac{h_k}{2}\right)^{1/2} \|L_{i} \|_{L^2((-1,1))} = \left( \frac{h_{k}}{2i +1} \right)^{1/2} \qquad \forall i\in \mathbb{N}_0,\, \forall k\in\{0, \dots, \ell\}. \end{equation} \begin{remark} \label{rem:hpw11mix} As mentioned in Remark \ref{rem:bigger-space-hp}, the $hp$-projector $\Pi_{\mathsf{hp}, d}^{\ell, p}$ can be defined for more general functions than $u\in H_{\mathrm{mix}}^1(Q)$. As follows from Equations \eqref{eq:c-int-def}, \eqref{eq:c-face-def}, \eqref{eq:c-edge-def} and \eqref{eq:c-node-def} below, the projector is also defined for $u\in W_{\mathrm{mix}}^{1,1}(Q)$. \end{remark} \begin{lemma} \label{lemma:cbound} There exists a constant $C>0$ such that, for all $u\in W_{\mathrm{mix}}^{1,1}(Q)$, all $\ell \in \mathbb{N}$, all $p\in \mathbb{N}$ \begin{equation} \label{eq:c-1} |c^K_{{i_1\dots i_d}} |\leq C \left(\prod_{j=1}^di_j \right) \| u \|_{W_{\mathrm{mix}}^{1,1}(Q)}\qquad \forall K\in\mathcal{G}^\ell_d, \, \forall ({i_1,\dots, i_d})\in \{1, \dots, p+1\}^d \end{equation} and for all $({i_1,\dots, i_d})\in \{1, \dots, p+1\}^d$ \begin{align} \sum_{K\in \mathcal{G}^\ell_3}|c^K_{{i_1\dots i_d}} | \leq C \| u \|_{W_{\mathrm{mix}}^{1,1}(Q)} \begin{cases} \left( \prod_{j=1}^d i_j \right) & \text{ if } t_{{i_1\dots i_d}} = 0, \\ (\ell+1)\left(\sum_{j_1=1}^d\sum_{j_2=j_1+1}^d i_{j_1}i_{j_2} \right) & \text{ if } t_{{i_1\dots i_d}} = 1, \\ (\ell+1)^2\left(\sum_{j=1}^di_j \right) & \text{ if } t_{{i_1\dots i_d}} = 2, \\ (\ell+1)^d & \text{ if } t_{{i_1\dots i_d}} = 3. \end{cases} \label{eq:c-2} \end{align} \end{lemma} \begin{proof} Let $d=3$ and $K = J_{k_1}^\ell \times J_{k_2}^\ell \times J_{k_3}^\ell \in \mathcal{G}^\ell_3$.
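As a preliminary observation, note that by \eqref{eq:zeta-n} the derivatives of the higher-order basis functions satisfy \begin{equation*} (\zeta^k_n)'(x) = \frac{1}{h_k} L_{n-2}(\phi_k(x)), \qquad n = 3, \dots, p+1,\quad x\in J^\ell_k, \end{equation*} so that, by the orthogonality of the Legendre polynomials and \eqref{eq:L2-L}, testing derivatives of the expansion \eqref{eq:local-proj-c} against products of the scaled Legendre polynomials $L_{i_j-2}\circ\phi_{k_j}$ isolates one coefficient at a time; this is the mechanism behind the representations of the coefficients derived below.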
\\\textbf{Internal modes.} We start by considering the case of the coefficients of internal modes, i.e., $c^K_{i_1, i_2, i_3}$ as defined in \eqref{eq:local-proj-c} for $i_n\geq 3$, $n=1,2, 3$. Let then $i_1, i_2, i_3\in \{3, \dots, p+1\}$ and write $L_n^k = L_n \circ \phi_k$: there holds \begin{align} c^K_{i_1,i_2, i_3} =&\, (2i_1-3)(2i_2-3)(2i_3-3)\nonumber\\ & \int_{K} \left({\partial_{x_1}\partial_{ x_2}\partial_{x_3}} u(x_1, x_2, x_3) \right) L_{i_1-2}^{k_1}(x_1)L_{i_2-2}^{k_2}(x_2)L_{i_3-2}^{k_3}(x_3) dx_1dx_2dx_3. \label{eq:c-int-def} \end{align} If $u\in W_{\mathrm{mix}}^{1,1}(K)$, since $\|L_n\|_{L^\infty(-1,1)} = 1$ for all $n$, we have \begin{equation} \label{eq:c-int-1} | c^K_{{i_1\dots i_d}} | \leq (2i_1-3)(2i_2-3)(2i_3-3) \| \partial_{x_1}\partial_{ x_2}\partial_{x_3}u \|_{L^1(K)} \qquad i_n\geq 3,\,n=1,2,3, \end{equation} hence, \begin{equation} \label{eq:c-int-2} \sum_{K\in \mathcal{G}^\ell_3}| c^K_{{i_1\dots i_d}} | \leq (2i_1-3)(2i_2-3)(2i_3-3) \| \partial_{x_1}\partial_{ x_2}\partial_{x_3}u \|_{L^1(Q)} \qquad i_n\geq 3,\,n=1,2,3. \end{equation} \textbf{Face modes.} We continue with face modes and fix, for ease of notation, $i_1 =1$. We also denote $F = J^\ell_{k_2}\times J^\ell_{k_3}$. The estimates will then also hold for $i_1=2$ and for any permutation of the indices by symmetry. We introduce the trace inequality constant $C^{T,1}$, independent of $K$, such that, for all $v\in W^{1,1}(Q)$ and $\hat{x}\in (0,1)$, \begin{equation} \label{eq:trace} \| v(\hat{x}, \cdot, \cdot) \|_{L^1(F)} \leq \| v(\hat{x}, \cdot, \cdot)\|_{L^1((0,1)^2)} \leq C^{T,1} \left( \| v\|_{L^1(Q)} + \|\partial_{x_1} v \|_{L^1(Q)} \right). 
\end{equation} This follows from the trace estimate in \cite[Lemma 4.2]{Schotzau2013a} and from the fact that \begin{multline*} \| v(\hat{x}, \cdot, \cdot)\|_{L^1((0,1)^2)} \leq C\min \bigg\{ \frac{1}{|1-\hat{x}|}\| v\|_{L^1((\hat{x},1)\times (0,1)^2)} + \|\partial_{x_1} v \|_{L^1((\hat{x}, 1)\times (0,1)^2)},\\ \frac{1}{|\hat{x}|}\| v\|_{L^1((0,\hat{x})\times (0,1)^2)} + \|\partial_{x_1} v \|_{L^1((0,\hat{x})\times (0,1)^2)} \bigg\}. \end{multline*} There holds, for $i_2, i_3\in \{3, \dots, p+1\}$, \begin{equation} \label{eq:c-face-def} c^K_{1,i_2, i_3} = (2i_2-3)(2i_3-3)\int_{F} \left({\partial_{ x_2}\partial_{x_3}} u(x_{k_1}^{\ell}, x_2, x_3) \right) L_{i_2-2}^{k_2}(x_2)L_{i_3-2}^{k_3}(x_3) dx_2dx_3. \end{equation} Since the Legendre polynomials are $L^\infty$ normalized and using the trace inequality \eqref{eq:trace}, \begin{equation} \label{eq:c-face-1} |c_{1, i_2, i_3}^K| \leq (2i_2-3)(2i_3-3) \| ({\partial_{ x_2}\partial_{x_3}} u )(x_{k_1}^{\ell}, \cdot, \cdot)\|_{L^1(F)} \leq C^{T,1}(2i_2-3)(2i_3-3) \| u \|_{W_{\mathrm{mix}}^{1,1}(Q)}. \end{equation} Summing over all internal faces, furthermore, \begin{equation} \label{eq:c-face-2} \begin{aligned} \sum_{K\in \mathcal{G}^\ell_3}|c_{1, i_2, i_3}^K| &\leq (2i_2-3)(2i_3-3) \sum_{k_1=0}^\ell\| ({\partial_{ x_2}\partial_{x_3}} u )(x_{k_1}^{\ell}, \cdot, \cdot)\|_{L^1((0,1)^2)}\\ &\leq C^{T,1}(\ell+1)(2i_2-3)(2i_3-3) \| u \|_{W_{\mathrm{mix}}^{1,1}(Q)}. \end{aligned} \end{equation} \textbf{Edge modes.} We now consider edge modes. Fix for ease of notation $i_1 = i_2 = 1$; as before, the estimates will hold for $(i_1, i_2)\in \{1,2\}^2$ and for any permutation of the indices. 
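The prefactors $(2i_j-3)$ in \eqref{eq:c-int-def} and \eqref{eq:c-face-def} originate from the normalization \eqref{eq:L2-L}: the derivative of an internal mode is $(\zeta^k_n)' = L_{n-2}\circ\phi_k/h_k$, so that $\int_{J_k}(\zeta^k_m)'(\zeta^k_n)'\,dx = \delta_{mn}/(h_k(2n-3))$ for $m,n\geq 3$. A small numerical sketch (our own helper code; the element is arbitrary) confirms both identities:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

xk, xk1 = 0.25, 0.5                    # arbitrary element J_k
h = xk1 - xk
t, w = leggauss(30)                    # Gauss-Legendre nodes/weights on (-1, 1)
x = xk + (t + 1.0) * h / 2.0           # quadrature nodes mapped to J_k

# identity (L2-L): ||L_i o phi_k||^2_{L^2(J_k)} = h / (2i + 1)
for i in range(6):
    norm2 = (h / 2) * np.sum(w * Legendre.basis(i, domain=[xk, xk1])(x) ** 2)
    assert abs(norm2 - h / (2 * i + 1)) < 1e-12

# (zeta_n^k)' = L_{n-2}(phi_k(.)) / h_k, hence for the internal modes
# int_{J_k} (zeta_m^k)' (zeta_n^k)' dx = delta_{mn} / (h_k (2n - 3))
for m in range(3, 7):
    for n in range(3, 7):
        dzm = Legendre.basis(m - 2, domain=[xk, xk1])(x) / h
        dzn = Legendre.basis(n - 2, domain=[xk, xk1])(x) / h
        val = (h / 2) * np.sum(w * dzm * dzn)
        exact = 1.0 / (h * (2 * n - 3)) if m == n else 0.0
        assert abs(val - exact) < 1e-12
print("Legendre normalization and H^1-orthogonality of internal modes verified")
```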
By the same arguments as for \eqref{eq:trace}, there exists a trace constant $C^{T,2}$ such that, denoting $e = J^\ell_{k_3}$, for all $v\in W^{1,1}((0,1)^2)$ and for all $\hat{x}\in (0,1)$, \begin{equation} \label{eq:trace-2} \| v (\hat{x}, \cdot)\|_{L^1(e)} \leq \| v (\hat{x}, \cdot)\|_{L^1((0,1))} \leq C^{T,2} \left( \| v\|_{L^1((0,1)^2)} + \|\partial_{x_2} v \|_{L^1((0,1)^2)} \right). \end{equation} By definition, \begin{equation} \label{eq:c-edge-def} c^K_{1,1, i_3} = (2i_3-3)\int_{e} \left({\partial_{x_3}} u(x_{k_1}^{\ell}, x_{k_2}^{\ell}, x_3) \right) L_{i_3-2}^{k_3}(x_3) dx_3. \end{equation} Using \eqref{eq:trace} and \eqref{eq:trace-2}, \begin{equation} \label{eq:c-edge-1} |c^K_{1,1, i_3} | \leq (2i_3-3) \| (\partial_{x_3} u)(x_{k_1}^{\ell}, x_{k_2}^{\ell}, \cdot) \|_{L^1(e)} \leq C^{T,1} C^{T,2}(2i_3-3) \| u\|_{W_{\mathrm{mix}}^{1,1}(Q)}. \end{equation} Summing over edges, in addition, \begin{equation} \label{eq:c-edge-2} \begin{aligned} \sum_{K\in \mathcal{G}^\ell_3}|c^K_{1,1, i_3} | &\leq (2i_3-3) \sum_{k_1=0}^\ell\sum_{k_2=0}^\ell\| (\partial_{x_3} u)(x_{k_1}^{\ell}, x_{k_2}^{\ell}, \cdot) \|_{L^1((0,1))}\\ &\leq C^{T,1} C^{T,2}(\ell+1)^2(2i_3-3) \| u\|_{W_{\mathrm{mix}}^{1,1}(Q)}. \end{aligned} \end{equation} \textbf{Node modes.} Finally, we consider the coefficients of nodal modes, i.e., $c^K_{i_1, i_2, i_3}$ for $i_1, i_2, i_3\in \{1,2\}$, which by construction equal function values of $u$, e.g. \begin{equation} \label{eq:c-node-def} c^K_{111} = u(x_{k_1}^{\ell},x_{k_2}^{\ell},x_{k_3}^{\ell}). \end{equation} The Sobolev imbedding $W_{\mathrm{mix}}^{1,1}(Q)\hookrightarrow L^{\infty}(Q)$ and scaling imply the existence of a uniform constant $C_{\mathrm{imb}}$ such that, for any $v\in W_{\mathrm{mix}}^{1,1}(Q)$, \begin{equation*} \| v \|_{L^\infty(K)} \leq\| v \|_{L^\infty(Q)} \leq C_{\mathrm{imb}} \| v \|_{W_{\mathrm{mix}}^{1,1}(Q)}.
\end{equation*} Then, by construction, \begin{equation} \label{eq:c-node-1} |c^K_{i_1, i_2, i_3}| \leq \| u \|_{L^\infty(K)} \leq C_{\mathrm{imb}} \| u \|_{W_{\mathrm{mix}}^{1,1}(Q)} \qquad \forall i_1, i_2, i_3\in \{1,2\}. \end{equation} Summing over nodes, it follows directly that \begin{equation} \label{eq:c-node-2} \sum_{{K\in \mathcal{G}^\ell_3}}|c^K_{i_1, i_2, i_3}| \leq \sum_{K\in \mathcal{G}^\ell_3}\| u \|_{L^\infty(K)} \leq C_{\mathrm{imb}} (\ell+1)^3\| u \|_{W_{\mathrm{mix}}^{1,1}(Q)} \qquad \forall i_1, i_2, i_3\in \{1,2\}. \end{equation} We obtain \eqref{eq:c-1} from \eqref{eq:c-int-1}, \eqref{eq:c-face-1}, \eqref{eq:c-edge-1}, and \eqref{eq:c-node-1}. Furthermore, \eqref{eq:c-2} follows from \eqref{eq:c-int-2}, \eqref{eq:c-face-2}, \eqref{eq:c-edge-2}, and \eqref{eq:c-node-2}. The estimates for the case $d=2$ follow from the same argument. \end{proof} The following lemma shows the continuous imbedding of $\mathcal{J}^{d}_{{\underline{\gamma}}}(Q;\mathcal{C}, \mathcal{E})$ into $W_{\mathrm{mix}}^{1,1}(Q)$, given sufficiently large weights ${\underline{\gamma}}$. \begin{lemma} \label{lemma:W11J3} Let $d\in \{2,3\}$. Let ${\underline{\gamma}}$ be such that $ \gamma_c>d/2$, for all $c\in\mathcal{C}$ and (if $d=3$) $\gamma_e>1$ for all $e\in\mathcal{E}$. There exists a constant $C>0$ such that, for all $u \in \mathcal{J}^d_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})$, \begin{equation*} \| u \|_{W_{\mathrm{mix}}^{1,1}(Q)} \leq C \| u \|_{\mathcal{J}^d_{{\underline{\gamma}}}(Q)}. \end{equation*} \end{lemma} \begin{proof} We recall the decomposition of $Q$ as \begin{equation*} \overline{Q}= \overline{Q_0}\cup \overline{Q_{\mathcal{C}}} \cup \overline{Q_{\mathcal{E}}} \cup \overline{Q_{\mathcal{C}\mathcal{E}}}, \end{equation*} where $Q_{\mathcal{E}}= Q_{\mathcal{C}\mathcal{E}} = \emptyset$ if $d=2$. 
There holds \begin{equation} \label{eq:W11J3-1} \| u \|_{W_{\mathrm{mix}}^{1,1}(Q_0)} \leq C |Q_0|^{1/2}\| u \|_{H^d(Q_0)}\leq C |Q_0|^{1/2} \| u \|_{\mathcal{J}^d_{{\underline{\gamma}}}(Q)}. \end{equation} We now consider the subdomain $Q_{c}$, for any $c\in \mathcal{C}$. There holds, with a constant $C$ that depends only on $\gamma_c$ and on $|Q_c|$, \begin{equation} \label{eq:W11J3-2} \begin{aligned} \| u \|_{W_{\mathrm{mix}}^{1,1}(Q_c)} &= \| u \|_{W^{1,1}(Q_c)} + \sum_{\substack{2\leq {|\alpha|} \leq d\\ {|\alpha|_\infty}\leq 1}} \|\partial^\alpha u \|_{L^1(Q_c)}\\ &\leq C |Q_c|^{1/2} \| u \|_{H^{1}(Q_c)} + C \sum_{\substack{2\leq {|\alpha|} \leq d\\ {|\alpha|_\infty} \leq 1}} \|r_c^{-({|\alpha|}-\gamma_c)_+}\|_{L^2(Q_c)}\|r_c^{({|\alpha|}-\gamma_c)_+}\partial^\alpha u \|_{L^2(Q_c)}\\ &\leq C \| u \|_{\mathcal{J}^d_{{\underline{\gamma}}}(Q)}, \end{aligned} \end{equation} where the last inequality follows from the fact that $\gamma_c > d/2$, hence the norm $\|r_c^{-({|\alpha|}-\gamma_c)_+}\|_{L^2(Q_c)}$ is bounded for all ${|\alpha|}\leq d$. Consider then $d=3$ and any $e\in \mathcal{E}$. Suppose also, without loss of generality, that $\gamma_c-\gamma_e >1/2$ and $\gamma_e<2$ (otherwise, it is sufficient to replace $\gamma_e$ by a smaller ${\widetilde{\gamma}}_e$ such that $1<{\widetilde{\gamma}}_e< \gamma_c-1/2$ and ${\widetilde{\gamma}}_e<2$, and to remark that $\mathcal{J}^d_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})\subset \mathcal{J}_{\underline{\tgamma}}^d(Q;\mathcal{C}, \mathcal{E})$ if ${\widetilde{\gamma}}_e < \gamma_e$). Since $\gamma_e > 1$, the norm $\|r_e^{-{|\alpha_\bot|}+\gamma_e}\|_{L^2(Q_e)}$ is bounded by a constant depending only on $\gamma_e$ and $|Q_e|$ as long as $\alpha$ is such that ${|\alpha_\bot|}\leq 2$.
Hence, denoting by $\partial_{\parallel}$ the derivative in the direction parallel to $e$, \begin{equation} \label{eq:W11J3-3} \begin{aligned} \| u \|_{W_{\mathrm{mix}}^{1,1}(Q_e)} &= \| u \|_{W^{1,1}(Q_e)} + \sum_{{|\alpha_\bot|} = 1}\|\partial_{\parallel}\partial^{\alphaperp} u \|_{L^1(Q_e)} +\sum_{\alpha_1 =0,1} \|\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2} u \|_{L^1(Q_e)} \\ &\begin{multlined}[][.7\linewidth] \leq C |Q_e|^{1/2} \left(\| u \|_{H^{1}(Q_e)} +\sum_{{|\alpha_\bot|} = 1} \|\partial_{\parallel}\partial^{\alphaperp} u\|_{L^2(Q_e)}\right) \\ +C \sum_{\alpha_1=0,1} \|r_e^{-2+\gamma_e}\|_{L^2(Q_e)}\|r_e^{2-\gamma_e}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2} u \|_{L^2(Q_e)} \end{multlined} \\ &\leq C \| u \|_{\mathcal{J}^3_{{\underline{\gamma}}}(Q)}. \end{aligned} \end{equation} Since $x_\parallel\leq r_c(x)\leq {\widehat{\epsilon}}$ for all $x\in Q_{c e}$, and because $Q_{c e}\subset \left\{x_\parallel\in(0,{\widehat{\epsilon}}), (x_{\bot,1}, x_{\bot,2})\in(0, {\widehat{\epsilon}}^2)^2\right\}$, there holds \begin{equation*} \|r_c^{-(\gamma_e+1-\gamma_c)_+} r_e^{-2+\gamma_e} \|_{L^2(Q_{c e})} \leq \|x_\parallel^{-(\gamma_e+1-\gamma_c)_+} \|_{L^2((0, {\widehat{\epsilon}}))}\|r_e^{-2+\gamma_e} \|_{L^2((0,{\widehat{\epsilon}}^2)^2)}\leq C, \end{equation*} for a constant $C$ that depends only on ${\widehat{\epsilon}}$, $\gamma_c$, and $\gamma_e$.
Hence, \begin{equation} \label{eq:W11J3-4} \begin{aligned} \| u \|_{W_{\mathrm{mix}}^{1,1}(Q_{c e})} &= \| u \|_{W^{1,1}(Q_{c e})} + \sum_{{|\alpha_\bot|} = 1}\|\partial_{\parallel}\partial^{\alphaperp} u \|_{L^1(Q_{c e})}+ \sum_{\alpha_1=0,1}\|\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2} u \|_{L^1(Q_{c e})}\\ &\begin{multlined}[][.7\textwidth] \leq C |Q_{c e}|^{1/2} \| u \|_{H^{1}(Q_{c e})} + C \sum_{{|\alpha_\bot|} = 1} \|r_c^{-(2-\gamma_c)_+}\|_{L^2(Q_{c e})} \|r_c^{(2-\gamma_c)_+}\partial_{\parallel}\partial^{\alphaperp} u \|_{L^2(Q_{c e})}\\ + C \sum_{\alpha_1 = 0,1} \|r_c^{-(\alpha_1+\gamma_e-\gamma_c)_+}r_e^{-2+\gamma_e}\|_{L^2(Q_{c e})}\|r_c^{(\alpha_1+2-\gamma_c)_+}\rho_{c e}^{2-\gamma_e}\partial_{\parallel}^{\alpha_1}\partial_{\bot,1}\partial_{\bot,2} u \|_{L^2(Q_{c e})} \end{multlined}\\ &\leq C \| u \|_{\mathcal{J}^3_{{\underline{\gamma}}}(Q)}, \end{aligned} \end{equation} with $C$ independent of $u$. Combining inequalities \eqref{eq:W11J3-1} to \eqref{eq:W11J3-4} concludes the proof. \end{proof} The following statement is a direct consequence of the two lemmas above and the fact that \linebreak $\normc[L^\infty(K)]{\psi_{{i_1\dots i_d}}^K} \leq 1$ for all $K\in \mathcal{G}^\ell_d$ and all $({i_1,\dots, i_d})\in\{1,\ldots,p+1\}^d$. \begin{corollary} Let ${\underline{\gamma}}$ be such that $ \gamma_c-d/2>0$, for all $c\in\mathcal{C}$ and, if $d=3$, $ \gamma_e>1$ for all $e\in\mathcal{E}$. There exists a constant $C>0$ such that for all $\ell,p\in \mathbb{N}$ and for all $u \in \mathcal{J}^d_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})$, \begin{equation*} \|\Pi_{\mathsf{hp}, d}^{\ell, p} u \|_{L^\infty(Q)}\leq Cp^{2d}\| u \|_{\mathcal{J}^d_{\underline{\gamma}}(Q)}. \end{equation*} \end{corollary} \subsubsection{Basis of continuous functions with compact support} \label{sec:compactbasis} It is possible to construct a basis for $\Pi_{\mathsf{hp}, d}^{\ell, p}$ in $Q$ such that all basis functions are continuous and have compact support.
For all $\ell\in \mathbb{N}$ and all $p\in \mathbb{N}$, extend the functions $\zeta^k_n$ defined in \eqref{eq:zeta-12} and \eqref{eq:zeta-n}, for $k=0, \dots, \ell$ and $n=1, \dots, p+1$, by zero outside of their domain of definition. We introduce the univariate functions with compact support $v_j : (0,1)\to \mathbb{R}$, for $j=1, \dots, (\ell+1)p+1$, so that $v_1 = \zeta^0_{2}$, $v_{\ell+2} = \zeta^\ell_{1}$, \begin{equation}\label{eq:DefinitionOfvk} v_{k} = \zeta^{k-2}_{1} + \zeta^{k-1}_{2}, \qquad \text{for all }k =2, \dots, \ell+1, \end{equation} and \begin{equation*} v_{\ell+2 + k(p-1)+n} = \zeta^{k}_{n+2}, \qquad \text{for all }k =0, \dots, \ell\text{ and }n = 1, \dots, p-1. \end{equation*} \begin{proposition} \label{prop:compactbasis} Let $\ell\in \mathbb{N}$ and $p\in \mathbb{N}$. Furthermore, let $u\in \mathcal{J}^d_{\underline{\gamma}}(Q;\mathcal{C}, \mathcal{E})$ with ${\underline{\gamma}}$ such that $\gamma_c-d/2>0$ and, if $d=3$, $\gamma_e>1$. Let $N_{\mathrm{1d}} = (\ell+1)p+1$. There exists an array of coefficients \[ c = \left\{c_{{i_1\dots i_d}}: ({i_1,\dots, i_d}) \in \{1, \dots, N_{\mathrm{1d}}\}^d\right\} \] such that \begin{equation} \label{eq:compactbasistensor} \left(\Pi_{\mathsf{hp}, d}^{\ell, p} u\right)(x_1,\dots,x_d) = \sum_{{i_1,\dots, i_d} = 1}^{N_{\mathrm{1d}}} c_{{i_1\dots i_d}} \prod_{j=1}^dv_{i_j}(x_j)\qquad \forall (x_1, \dots, x_d)\in Q. \end{equation} Furthermore, there exist constants $C_1, C_2>0$ independent of $\ell$, $p$, and $u$, such that \begin{equation*} |c_{{i_1\dots i_d}} |\leq C_1 (p+1)^d \| u \|_{\mathcal{J}^d_{\underline{\gamma}}(Q)}\qquad \forall ({i_1,\dots, i_d}) \in\{ 1,\dots, N_{\mathrm{1d}}\}^d \end{equation*} and \begin{equation*} \sum_{{i_1,\dots, i_d}=1}^{N_{\mathrm{1d}}} |c_{{i_1\dots i_d}}| \leq C_2 \left(\sum_{t=0}^d (\ell+1)^{t}(p+1)^{2(d-t)} \right)\| u \|_{\mathcal{J}^d_{\underline{\gamma}}(Q)}.
\end{equation*} \end{proposition} \begin{proof} The statement follows directly from the construction of the projector, see \eqref{eq:local-proj-c}, and from the bounds in Lemmas \ref{lemma:cbound} and \ref{lemma:W11J3}. In particular, \eqref{eq:compactbasistensor} holds because the element-wise coefficients related to $\zeta_2^{k-1}$ and to $\zeta_1^{k-2}$ are equal: it follows from Equations \eqref{eq:c-face-def}, \eqref{eq:c-edge-def} and \eqref{eq:c-node-def} that $c^{K}_{1i_2\ldots i_d} = c^{K'}_{2i_2\ldots i_d}$ for all $i_2,\ldots,i_d\in\{1,\ldots,p+1\}$, all $K = J_{k_1}^\ell \times J_{k_2}^\ell \times J_{k_3}^\ell \in \mathcal{G}^\ell_3$ satisfying $k_1<\ell$ and $K' = J_{k_1+1}^\ell \times J_{k_2}^\ell \times J_{k_3}^\ell \in \mathcal{G}^\ell_3$. The same holds for permutations of ${i_1,\dots, i_d}$. Because $(v_k)_{k=1}^{(\ell+1)p+1}$ are continuous, this again shows continuity of $\Pi_{\mathsf{hp}, d}^{\ell, p} u$ (Remark \ref{rem:global-continuity}). The last estimate is obtained with \eqref{eq:c-2}: \begin{equation*} \sum_{{i_1,\dots, i_d}=1}^{N_{\mathrm{1d}}} |c_{{i_1\dots i_d}}| \leq \sum_{t=0}^d \sum_{\substack{{i_1,\dots, i_d}=1\\ t_{{i_1\dots i_d}}=t}}^{p+1} \left(\sum_{K\in\mathcal{G}^\ell_d} |c_{{i_1\dots i_d}}| \right) \leq C_2 \left(\sum_{t=0}^d (\ell+1)^{t}(p+1)^{2(d-t)} \right)\| u \|_{\mathcal{J}^d_{\underline{\gamma}}(Q)}. \end{equation*} \end{proof} \subsubsection{Proof of Theorem \ref{thm:Interface}}\label{subsec:ProofOfInterface} \begin{proof}[Proof of Theorem \ref{thm:Interface}] Fix $A_f$, $C_f$, and ${\underline{\gamma}}$ as in the hypotheses.
Then, by Proposition \ref{lemma:exp-conv}, there exist $c_p$, $C_{\mathsf{hp}}$, $b_{\mathsf{hp}}>0$ such that for every $\ell \in \mathbb{N}$ and for all $v\in \mathcal{J}^\varpi_{{\underline{\gamma}}}(Q;\mathcal{C}, \mathcal{E}; C_f, A_f)$, there exists $v_{\mathsf{hp}}^\ell\in X_{\mathsf{hp}, d}^{\ell, c_p\ell}$ such that (see Section \ref{sec:hp-prdmshspc} for the definition of the space $X_{\mathsf{hp}, d}^{\ell, c_p\ell}$) \begin{equation*} \| v - v_{\mathsf{hp}}^\ell\|_{H^1(Q)}\leq C_{\mathsf{hp}}e^{-b_{\mathsf{hp}} \ell}. \end{equation*} For $\epsilon > 0$, we choose \begin{equation} \label{eq:L-eps} L \coloneqq \left\lceil \frac{1}{b_{\mathsf{hp}}}\left|\log (\epsilon/C_{\mathsf{hp}}) \right|\right\rceil, \end{equation} so that \begin{equation*} \| v - v_{\mathsf{hp}}^L\|_{H^1(Q)}\leq \epsilon. \end{equation*} Furthermore, $v_{\mathsf{hp}}^L = \sum_{{i_1,\dots, i_d}=1}^{N_{\mathrm{1d}}} c_{{i_1\dots i_d}} \phi_{{i_1\dots i_d}}$ and, for all $({i_1,\dots, i_d})\in\{1, \dots, N_{\mathrm{1d}}\}^d$, there exist $v_{i_j}$, $j=1, \dots, d$, such that $\phi_{{i_1\dots i_d}} = \bigotimes_{j=1}^dv_{i_j}$, see Section \ref{sec:compactbasis} and Proposition \ref{prop:compactbasis}. By construction of $v_{i}$ in \eqref{eq:DefinitionOfvk}, and by using \eqref{eq:zeta-12} and \eqref{eq:zeta-n}, we observe that $\|v_{i}\|_{L^\infty(I)}\leq 1$ for all $i=1, \dots, N_{\mathrm{1d}}$. In addition, \eqref{eq:L2-L} shows that \begin{equation*} \|v_{i}\|_{H^1(I)} \leq \frac{2}{\left|\supp(v_{i})\right|^{1/2}\deg(v_{i})^{1/2}} \leq 2 \sigma^{-L/2}\qquad \forall i\in\{1, \dots, N_{\mathrm{1d}}\}. \end{equation*} Then, by \eqref{eq:L-eps}, \begin{equation*} \sigma^{-L} \leq \sigma^{-\frac{1}{b_{\mathsf{hp}}}\log (C_{\mathsf{hp}})} \epsilon^{-\frac{1}{b_{\mathsf{hp}}}\log (1/\sigma)}. \end{equation*} This concludes the proof of Items \ref{item:vli} and \ref{item:appx-eps}.
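As an aside, the rounding in \eqref{eq:L-eps} does guarantee the target accuracy: since $\epsilon < C_{\mathsf{hp}}$, one has $b_{\mathsf{hp}} L \geq \log(C_{\mathsf{hp}}/\epsilon)$. A one-line check, with illustrative (assumed) values of the constants:

```python
import math

# with L = ceil(|log(eps / C_hp)| / b_hp) and eps < C_hp, it holds that
# b_hp * L >= log(C_hp / eps), hence C_hp * exp(-b_hp * L) <= eps
C_hp, b_hp = 3.0, 0.7                  # illustrative constants, not from the text
for eps in (1e-2, 1e-4, 1e-8):
    L = math.ceil(abs(math.log(eps / C_hp)) / b_hp)
    assert C_hp * math.exp(-b_hp * L) <= eps
print("the choice of L in (L-eps) achieves accuracy eps")
```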
Finally, Item \ref{item:c} follows from Proposition \ref{prop:compactbasis} and the fact that $p\leq C_p\left( 1+\left| \log \epsilon \right| \right)$ for a constant $C_p>0$ independent of $\epsilon$. \end{proof} \subsection{Combination of multiple patches} \label{sec:patches} The approximation results in the domain $Q=(0,1)^d$ can be generalized to include the combination of multiple patches. We give here an example, relevant for the PDEs considered in Section \ref{sec:applications}. For the sake of conciseness, we show a single construction that takes into account all singularities of the problems in Section \ref{sec:applications}. We will then use this construction to prove expression rate bounds for realizations of NNs. Let $a>0$ and $\Omega = (-a,a)^d$. Denote the set of corners \begin{equation} \label{eq:Cset-int} \mathcal{C}_\Omega = \bigtimes_{j=1}^d\{-a, 0, a\}, \end{equation} and the set of edges $\mathcal{E}_\Omega = \emptyset$ if $d=2$, and, if $d=3$, \begin{equation} \label{eq:Eset-int} \mathcal{E}_\Omega = \bigcup_{j=1}^d \bigtimes_{k=1}^{j-1}\{-a, 0, a\} \times \{(-a,-a/2),(-a/2,0),(0,a/2),(a/2,a)\} \times \bigtimes_{k=j+1}^d\{-a, 0, a\}. \end{equation} We introduce the affine transformations $\psi_{1, +}:(0,1)\to(0, a/2)$, $\psi_{2,+}:(0,1)\to(a/2,a)$, $\psi_{1, -} :(0,1)\to(-a/2, 0)$, $\psi_{2, -}:(0,1)\to(-a,-a/2)$ such that \begin{equation*} \psi_{1, \pm}(x) = \pm\frac{a}{2}x, \qquad \psi_{2,\pm}(x) = \pm\left(a-\frac{a}{2}x\right). \end{equation*} For all $\ell\in \mathbb{N}$, define then \begin{equation*} \widetilde{\mathcal{G}}^\ell_1 = \bigcup_{i\in \{1,2\}, \star\in \{+,-\}}\psi_{i, \star}(\mathcal{G}^\ell_1). \end{equation*} Consequently, for $d=2,3$, denote $\widetilde{\mathcal{G}}^\ell_d = \{\bigtimes_{i=1}^dK_i: K_1, \dots, K_d\in \widetilde{\mathcal{G}}^\ell_1\}$, see Figure \ref{fig:multipatch}.
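For illustration, the mesh $\widetilde{\mathcal{G}}^\ell_1$ can be assembled numerically from the four affine images of $\mathcal{G}^\ell_1$; the sketch below assumes the standard geometric nodes $x_0=0$, $x_k=\sigma^{\ell+1-k}$ for $\mathcal{G}^\ell_1$ (an assumption on our part) and checks that the resulting mesh spans $(-a,a)$ and is geometrically refined toward the singular set $\{-a,0,a\}$:

```python
import numpy as np

# reference geometric mesh G^l_1 on (0,1): ASSUMED nodes x_0 = 0,
# x_k = sigma**(l+1-k), k = 1, ..., l+1; sigma, l, a are illustrative
sigma, l, a = 0.5, 3, 2.0
ref = np.concatenate(([0.0], sigma ** np.arange(l, -1.0, -1.0)))

# the four affine maps psi_{1,+-}, psi_{2,+-} of the text
psi = [lambda x: a / 2 * x, lambda x: -a / 2 * x,
       lambda x: a - a / 2 * x, lambda x: -(a - a / 2 * x)]
nodes = np.unique(np.concatenate([f(ref) for f in psi]))

assert nodes[0] == -a and nodes[-1] == a
assert len(nodes) == 4 * (l + 1) + 1            # 4(l+1) elements in tilde-G^l_1
gaps = np.diff(nodes)
i0 = int(np.argmin(np.abs(nodes)))              # index of the node at 0
# geometric refinement toward the singular set {-a, 0, a}
assert np.isclose(gaps[0], a / 2 * sigma**l) and np.isclose(gaps[-1], a / 2 * sigma**l)
assert np.isclose(gaps[i0 - 1], a / 2 * sigma**l) and np.isclose(gaps[i0], a / 2 * sigma**l)
print("tilde-G^l_1 spans (-a, a) and refines toward -a, 0, a")
```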
\begin{figure} \centering \begin{subfigure}[b]{.45\textwidth} \centering \includegraphics[width=.7\textwidth]{twodmultipatch.pdf} \end{subfigure}% \begin{subfigure}[b]{.55\textwidth} \centering \includegraphics[width=.7\textwidth]{threedmultipatch.pdf} \end{subfigure} \caption{Multipatch geometric tensor product meshes $\widetilde{\mathcal{G}}^\ell_d$, for $d=2$ (left) and $d=3$ (right).} \label{fig:multipatch} \end{figure} The $hp$ space in $\Omega = (-a,a)^d$ is then given by \begin{equation*} \widetilde{X}_{\mathsf{hp}, d}^{\ell, p} = \{ v\in H^1(\Omega): v_{|_{K}} \in \mathbb{Q}_{p}(K), \text{ for all } K\in \widetilde{\mathcal{G}}^\ell_d\}. \end{equation*} Finally, recall the definition of $\pi_{\mathsf{hp}}^{\ell, p}$ from \eqref{eq:pihpell1d} and construct \begin{equation*} \widetilde{\pi}_{\mathsf{hp}}^{\ell, p} : W^{1,1}((-a,a))\to \widetilde{X}_{\mathsf{hp}, 1}^{\ell, p} \end{equation*} such that, for all $v\in W^{1,1}((-a,a))$, \begin{equation} \label{eq:tpihpell} \begin{aligned} &\left(\widetilde{\pi}_{\mathsf{hp}}^{\ell, p} v \right)|_{(0,\frac{a}{2})} = \left(\pi_{\mathsf{hp}}^{\ell, p} (v|_{(0,\frac{a}{2})}\circ\psi_{1,+})\right)\circ\psi_{1,+} ^{-1},\quad \left(\widetilde{\pi}_{\mathsf{hp}}^{\ell, p} v \right)|_{(\frac{a}{2},a)} = \left(\pi_{\mathsf{hp}}^{\ell, p} (v|_{(\frac{a}{2},a)}\circ\psi_{2,+})\right)\circ\psi_{2,+} ^{-1},\\ &\left(\widetilde{\pi}_{\mathsf{hp}}^{\ell, p} v \right)|_{(-\frac{a}{2},0)} = \left(\pi_{\mathsf{hp}}^{\ell, p} (v|_{(-\frac{a}{2},0)}\circ\psi_{1,-})\right)\circ\psi_{1,-} ^{-1},\quad \left(\widetilde{\pi}_{\mathsf{hp}}^{\ell, p} v \right)|_{(-a, -\frac{a}{2})} = \left(\pi_{\mathsf{hp}}^{\ell, p} (v|_{(-a, -\frac{a}{2})}\circ\psi_{2,-})\right)\circ\psi_{2,-} ^{-1}.
\end{aligned} \end{equation} Then, the global $hp$ projection operator $\widetilde{\Pi}_{\mathsf{hp}, d}^{\ell, p} : W_{\mathrm{mix}}^{1,1} (\Omega)\to \widetilde{X}_{\mathsf{hp}, d}^{\ell, p}$ is defined as \begin{equation*} \widetilde{\Pi}_{\mathsf{hp}, d}^{\ell, p} = \bigotimes_{i=1}^d \widetilde{\pi}_{\mathsf{hp}}^{\ell, p}. \end{equation*} \begin{theorem} \label{prop:internal} For $a>0$, let $\Omega = (-a,a)^d$, $d=2,3$. Denote by $\Omega^k$, $k=1, \dots, 4^d$, the patches composing $\Omega$, i.e., the sets $\Omega^k = \bigtimes_{j=1}^d(a^k_j, a^k_j+a/2)$ with $a^k_j\in \{-a,-a/2,0,a/2\}$. Denote also $\mathcal{C}^k =\mathcal{C}_\Omega \cap \overline{\Omega}^k$ and $\mathcal{E}^k = \{ e\in \mathcal{E}_\Omega: e\subset \overline{\Omega}^k\}$, which contain one singular corner and, if $d=3$, three singular edges abutting that corner, as in \eqref{eq:Cset} and \eqref{eq:Eset}. Let $\mathcal{I}\subset \{1, \dots, 4^d\}$ and let $v\in W_{\mathrm{mix}}^{1,1}(\Omega)$ be such that, for all $k\in \mathcal{I}$, there holds $v_{|\Omega^k}\in \mathcal{J}^\varpi_{{\underline{\gamma}}^k}(\Omega^k; \mathcal{C}^k, \mathcal{E}^k)$ with \begin{alignat*}{2} &\gamma^k_c>1,\;\text{for all }c\in\mathcal{C}^k , &&\text{ if } d = 2,\\ & \gamma^k_c>3/2\text{ and } \gamma^k_e>1,\; \text{for all }c\in\mathcal{C}^k\text{ and }e\in\mathcal{E}^k,\quad &&\text{ if } d = 3. \end{alignat*} Then, there exist constants $c_p>0$ and $C, b>0$ such that, for all $\ell\in \mathbb{N}$, with $p = c_p \ell$, and for all $k\in \mathcal{I}$, \begin{equation} \label{eq:hp-Omega} \| v - \widetilde{\Pi}_{\mathsf{hp}, d}^{\ell, p} v\|_{H^1(\Omega^k)}\leq Ce^{-b\ell} \leq C\exp(-b\sqrt[2d]{N_{\mathrm{dof}}}). \end{equation} Here, $N_{\mathrm{dof}} = \mathcal{O}(\ell^{2d})$ denotes the overall number of degrees of freedom in the piecewise polynomial approximation.
Furthermore, writing $\widetilde{N}_{\mathrm{1d}} = 4(\ell+1)p+1$, there exists an array of coefficients \[ \widetilde{c} = \left\{\widetilde{c}_{{i_1\dots i_d}}: ({i_1,\dots, i_d}) \in \{1, \dots, \widetilde{N}_{\mathrm{1d}}\}^d\right\} \] such that \begin{equation*} \left(\widetilde{\Pi}_{\mathsf{hp}, d}^{\ell, p} v\right)(x_1,\dots,x_d) = \sum_{{i_1,\dots, i_d} = 1}^{\widetilde{N}_{\mathrm{1d}}} \widetilde{c}_{{i_1\dots i_d}} \prod_{j=1}^d{\tilde{v}}_{ i_j}(x_j)\qquad \forall (x_1, \dots, x_d)\in \Omega, \end{equation*} where for all $j=1, \dots, d$ and $i_j=1, \dots, \widetilde{N}_{\mathrm{1d}}$, ${\tilde{v}}_{ i_j}\in \widetilde{X}_{\mathsf{hp}, 1}^{\ell, p}$ with support in at most two neighboring elements of $\widetilde{\mathcal{G}}^\ell_1$. Finally, there exist constants $C_1, C_2>0$ independent of $\ell$ such that \begin{equation} \label{eq:v-Omega} \|{\tilde{v}}_{i}\|_{H^1((-a,a))} \leq C_1 \sigma^{-\ell/2}\qquad\forall i=1, \dots, \widetilde{N}_{\mathrm{1d}}, \end{equation} and \begin{equation} \label{eq:c-Omega} \sum_{{i_1,\dots, i_d}=1}^{\widetilde{N}_{\mathrm{1d}}}|\widetilde{c}_{{i_1\dots i_d}} |\leq C_2 \sum_{j=0}^d (\ell+1)^{j}(p+1)^{2(d-j)}\|v\|_{W_{\mathrm{mix}}^{1,1}(\Omega)}. \end{equation} \end{theorem} \begin{proof} The statement is a direct consequence of Propositions \ref{lemma:exp-conv} and \ref{prop:compactbasis}. We start the proof by showing that for any function $v\in W_{\mathrm{mix}}^{1,1}(\Omega)$, the approximation $\widetilde{\Pi}_{\mathsf{hp}, d}^{\ell, p} v$ is continuous; the rest of the theorem will then follow from the results in each sub-patch. Let now $w\in W^{1,1}((-a,a))$. Then, it holds that $ \left(\widetilde{\pi}_{\mathsf{hp}}^{\ell, p} w \right)|_{I} \in C(I)$, for all $I\in \{(0, a/2), (a/2,a), (-a/2, 0), (-a, -a/2)\}$, by definition \eqref{eq:tpihpell}.
Furthermore, by the nodal exactness of the local projectors, for $\tilde{x} \in \{-a/2, 0, a/2\}$, there holds \begin{equation*} \lim_{x \to \tilde{x}^-} (\widetilde{\pi}_{\mathsf{hp}}^{\ell, p} w)(x) = w(\tilde{x}) = \lim_{x \to \tilde{x}^+} (\widetilde{\pi}_{\mathsf{hp}}^{\ell, p} w)(x), \end{equation*} implying that $\widetilde{\pi}_{\mathsf{hp}}^{\ell, p} w$ is continuous. Since $\widetilde{\Pi}_{\mathsf{hp}, d}^{\ell, p} = \bigotimes_{j=1}^d\widetilde{\pi}_{\mathsf{hp}}^{\ell, p}$, this implies that $\widetilde{\Pi}_{\mathsf{hp}, d}^{\ell, p} v$ is continuous for all $v\in W_{\mathrm{mix}}^{1,1}(\Omega)$. Fix now $k\in \mathcal{I}$, so that $v_{|\Omega^k}\in \mathcal{J}^\varpi_{{\underline{\gamma}}^k}(\Omega^k; \mathcal{C}^k,\mathcal{E}^k)$. There exist then, by Proposition \ref{lemma:exp-conv}, constants $C, b, c_p>0$ such that for all $\ell\in\mathbb{N}$ \begin{equation*} \| v - \widetilde{\Pi}_{\mathsf{hp}, d}^{\ell, c_p\ell} v\|_{H^1(\Omega^k)}\leq Ce^{-b\ell}. \end{equation*} Equation \eqref{eq:hp-Omega} follows. The bounds \eqref{eq:v-Omega} and \eqref{eq:c-Omega} follow from the construction of the basis functions \eqref{eq:zeta-12}--\eqref{eq:zeta-n} and from the application of Lemma \ref{lemma:cbound} in each patch, respectively. \end{proof} \section{Proofs of Section \ref{sec:applications}} \label{sec:proofs-appendix} \subsection{Proof of Lemma \ref{lemma:exist-PU}} \label{sec:triangulationpolygon} \begin{proof}[Proof of Lemma \ref{lemma:exist-PU}] For any two sets $X,Y\subset\Omega$, we denote by $\dist_{\Omega}(X,Y)$ the infimum of Euclidean lengths of paths in $\Omega$ connecting an element of $X$ with one of $Y$. We introduce several domain-dependent quantities to be used in the construction of the triangulation $\mathcal{T}$ with the properties stated in the lemma. Let $\mathcal{E}$ denote the set of edges of the polygon $\Omega$.
For each corner $c\in \mathcal{C}$ at which the interior angle of $\Omega$ is smaller than $\pi$ (below called \emph{convex corner}), we fix a parallelogram $G_c \subset \Omega$ and a bijective, affine transformation $F_c : (0,1)^2\to G_c$ such that \begin{itemize} \item $F_c((0,0)) = c$, \item two edges of $G_c$ coincide partially with the edges of $\Omega$ abutting at the corner $c$. \end{itemize} If at $c$ the interior angle of $\Omega$ is greater than or equal to $\pi$ (both are referred to by slight abuse of terminology as \emph{nonconvex corner}), the same properties hold, with $F_c : (-1,1)\times(0,1)\to G_c$ if the interior angle equals $\pi$, and $F_c : (-1,1)^2\setminus (-1, 0]^{2}\to G_c$ else, and with $G_c$ having the corresponding shape. Let now \begin{equation*} d_{c,1} \coloneqq \sup \{r>0: B_r(c) \cap\Omega \subset G_c\}, \qquad d_{\mathcal{C},1} \coloneqq \min_{c\in\mathcal{C}}d_{c,1}. \end{equation*} Then, for each $c\in \mathcal{C}$, let $e_1$ and $e_2$ be the edges abutting $c$, and define \begin{equation*} d_{c, 2} \coloneqq \dist_{\Omega} \left(e_1\cap \left(B_{\frac{\sqrt{2}}{\sqrt{2}+1}d_{\mathcal{C}, 1}}(c)\right)^c , e_2\cap \left(B_{\frac{\sqrt{2}}{\sqrt{2}+1}d_{\mathcal{C}, 1}}(c)\right)^c\right), \qquad d_{\mathcal{C}, 2} \coloneqq \min_{c\in \mathcal{C}} d_{c, 2}. \end{equation*} Furthermore, for each $e\in\mathcal{E}$, denote $d_e := \infty$ if $\Omega$ is a triangle, otherwise \begin{equation*} d_e \coloneqq \min\left\{ \dist_{\Omega}(e, e_1): e_1\in\mathcal{E} \text{ and }\overline{e}\cap\overline{e_1} = \emptyset \right\}, \qquad d_{\mathcal{E}} = \min_{e\in\mathcal{E}}d_e. \end{equation*} Finally, for all $x\in\Omega$, let \begin{equation*} n_e(x) \coloneqq \#\{e_1, e_2, \ldots\in \mathcal{E}: \dist_{\Omega}(x,\partial\Omega) = \dist_{\Omega}(x, e_1) = \dist_{\Omega}(x, e_2) = \dots\}. 
\end{equation*} Then, in case $\Omega$ is a triangle, let $d_0$ be half of the radius of the inscribed circle, else let $d_0 := \tfrac13 d_{\mathcal{E}} < \tfrac12 d_{\mathcal{E}}$. It holds that \begin{equation*} \dist_{\Omega}(\{x\in \Omega: n_e(x)\geq 3\}, \partial\Omega) \geq d_0 >0. \end{equation*} For any shape regular triangulation $\mathcal{T}$ of $\mathbb{R}^2$, such that for all $K\in \mathcal{T}$, $K\cap \partial \Omega = \emptyset$, denote ${\cT_\Omega} = \{K\in \mathcal{T}: K\subset\Omega\}$ and $h({\cT_\Omega}) = \max_{K\in{\cT_\Omega}} h(K)$, where $h(K)$ denotes the diameter of $K$. Denote by ${\cN_\Omega}$ the set of nodes of $\mathcal{T}$ that are in $\overline{\Omega}$. For any $n\in {\cN_\Omega}$, define \begin{equation*} \ptch(n) \coloneqq \interior\left(\bigcup_{K\in \mathcal{T}: n\in\overline{K}} \overline{K} \right). \end{equation*} Let $\mathcal{T}$ be a triangulation of $\mathbb{R}^2$ such that \begin{equation} \label{eq:meshsize} h({\cT_\Omega}) \leq \min\left( \frac{d_0}{\sqrt{2}}, \frac{d_{\mathcal{C},1}}{\sqrt{2}+1} , \frac{d_{\mathcal{C}, 2}}{2\sqrt{2}},\frac{d_{\mathcal{E}}}{2\sqrt{2}}\right), \end{equation} and such that for all $K\in\mathcal{T}$ it holds $K\cap \partial\Omega = \emptyset$. The hat-function basis $\{\phi_n\}_{n\in{\cN_\Omega}}$ is a basis for $\mathbb{P}_1({\cT_\Omega})$ such that $\supp(\phi_n) \subset \overline{\ptch(n)}$ for all $n\in {\cN_\Omega}$, and it is a partition of unity. We will show that, for each $n\in {\cN_\Omega}$, there exists a subdomain $\Omega_n$ with the desired properties, such that $\ptch(n)\cap\Omega\subset \Omega_n$. We point to Figure \ref{fig:patch-corner} for an illustration of the patches $\Omega_n$ that will be introduced in the proof, for different sets of nodes. 
\begin{figure} \centering \begin{subfigure}[b]{.25\textwidth} \centering \includegraphics[width=\textwidth]{patch_convex.pdf} \caption{}\label{fig:patch-convex} \end{subfigure}% \begin{subfigure}[b]{.25\textwidth} \centering \includegraphics[width=\textwidth]{patch_nonconvex.pdf} \caption{}\label{fig:patch-nonconvex} \end{subfigure}% \begin{subfigure}[b]{.25\textwidth} \centering \includegraphics[width=.8\textwidth]{patch_interior.pdf} \caption{}\label{fig:patch-interior} \end{subfigure}% \begin{subfigure}[b]{.25\textwidth} \centering \includegraphics[width=.85\textwidth]{patch_edge.pdf} \caption{}\label{fig:patch-edge} \end{subfigure}% \caption{Patches $\Omega_n$ for nodes near a convex (\subref{fig:patch-convex}), nonconvex corner (\subref{fig:patch-nonconvex}), for nodes in the interior of $\Omega$ (\subref{fig:patch-interior}), and near an edge (\subref{fig:patch-edge}).} \label{fig:patch-corner} \end{figure} For each $c\in\mathcal{C}$, let $\widehat{\mathcal{N}}_c = \{n\in {\cN_\Omega}: \ptch(n)\cap\Omega\subset G_c\}$. There holds \begin{equation*} \mathcal{N}_c\coloneqq\{n\in {\cN_\Omega}: \dist_{\Omega}(n, c)\leq d_{\mathcal{C},1} - h({\cT_\Omega})\} \subset \widehat{\mathcal{N}}_c. \end{equation*} Therefore, all the nodes $n\in \mathcal{N}_c$ are such that $\ptch(n)\cap\Omega \subset G_c =: \Omega_n$. Denote then \begin{equation*} \mathcal{N}_{\mathcal{C}} =\bigcup_{c\in\mathcal{C}}\mathcal{N}_c. \end{equation*} Note that, due to \eqref{eq:meshsize}, there holds $\sqrt{2}h({\cT_\Omega}) \leq \frac{\sqrt{2}}{\sqrt{2}+1}d_{\mathcal{C}, 1} \leq d_{\mathcal{C}, 1} - h({\cT_\Omega})$. We consider the nodes in ${\cN_\Omega}\setminus \mathcal{N}_{\mathcal{C}}$. First, consider the nodes in \begin{equation*} \mathcal{N}_0\coloneqq \{n\in {\cN_\Omega}\setminus\mathcal{N}_{\mathcal{C}} : \dist_{\Omega}(n, \partial \Omega) \geq \sqrt{2}h({\cT_\Omega})\}.
\end{equation*} For all $n\in \mathcal{N}_0$, there exists a square $Q_n$ such that \begin{equation*} \ptch(n) \subset B_{h({\cT_\Omega})}(n) \subset Q_n\subset B_{\sqrt{2}h({\cT_\Omega})}(n)\subset \Omega, \end{equation*} see Figure \ref{fig:patch-interior}. Hence, for all $n\in \mathcal{N}_0$, we take $\Omega_n := Q_n$. % Define \begin{equation*} \mathcal{N}_{\mathcal{E}}\coloneqq {\cN_\Omega} \setminus(\mathcal{N}_0\cup\mathcal{N}_{\mathcal{C}}) = \left\{n\in {\cN_\Omega}: \dist_{\Omega}(n, c) > d_{\mathcal{C},1}-h({\cT_\Omega}), \forall c\in \mathcal{C},\text{ and }\dist_{\Omega}(n, \partial\Omega) < \sqrt{2}h({\cT_\Omega})\right\}. \end{equation*} For all $n\in \mathcal{N}_{\mathcal{E}}$, from \eqref{eq:meshsize} it follows that $\dist_{\Omega}(n, \partial \Omega) <\sqrt{2}h({\cT_\Omega}) \leq d_0$, hence $n_e(n) \leq 2$. Furthermore, suppose there exists $n\in \mathcal{N}_{\mathcal{E}}$ such that $n_e(n) =2$. Let the two closest edges to $n$ be denoted by $e_1$ and $e_2$, so that $\dist_{\Omega}(n, e_1) = \dist_{\Omega}(n, e_2) = \dist_{\Omega}(n, \partial\Omega) <\sqrt{2}h({\cT_\Omega})$. If $\overline{e_1}\cap\overline{e_2} = \emptyset$, there must hold $\dist_{\Omega}(n, e_1) + \dist_{\Omega}(n, e_2)\geq d_\mathcal{E}$, contradicting $\dist_{\Omega}(n, \partial\Omega) < \sqrt{2}h({\cT_\Omega})\leq d_{\mathcal{E}}/2$. If instead there exists $c\in\mathcal{C}$ such that $\overline{e_1}\cap \overline{e_2} = \{c\}$, then $n$ is on the bisector of the angle between $e_1$ and $e_2$. Using that $2\sqrt{2} h({\cT_\Omega})\leq d_{\mathcal{C}, 2}$, we now show that all such nodes belong either to $\mathcal{N}_{\mathcal{C}}$ or to $\mathcal{N}_0$, contradicting $n\in\mathcal{N}_{\mathcal{E}}$. Let $x_0\in\Omega$ be the intersection of $B_{\frac{\sqrt{2}}{\sqrt{2}+1}d_{\mathcal{C}, 1}}(c)$ and the bisector.
To show that $n\in\mathcal{N}_{\mathcal{C}}\cup\mathcal{N}_0$, it suffices to show that $\dist(x_0,e_i) \geq \sqrt{2}h({\cT_\Omega})$ for $i=1,2$. Because $\frac{\sqrt{2}}{\sqrt{2}+1}d_{\mathcal{C}, 1} \leq d_{\mathcal{C}, 1} - h({\cT_\Omega})$, it a fortiori holds for all points $y$ in $\Omega$ on the bisector intersected with $\left(B_{d_{\mathcal{C}, 1}-h({\cT_\Omega})}(c)\right)^c$, that $\dist(y,e_i)\geq \sqrt{2}h({\cT_\Omega})$, which shows that if $\dist_{\Omega}(n,c)\geq d_{\mathcal{C},1}-h({\cT_\Omega})$, then $n\in\mathcal{N}_0$. If $c$ is a nonconvex corner, then $\dist(x_0,e_i) \geq \sqrt{2}h({\cT_\Omega})$ for $i=1,2$ follows immediately from $\dist(x_0,e_i) = \dist(x_0,c) = \frac{\sqrt{2}}{\sqrt{2}+1}d_{\mathcal{C}, 1}$ and \eqref{eq:meshsize}. To show that $\dist(x_0,e_i) \geq \sqrt{2}h({\cT_\Omega})$, $i=1,2$ in case $c$ is a convex corner, we make the following definitions (see Figure \ref{fig:e1ce2}): \begin{figure} \centering \includegraphics[width=.4\textwidth]{e1ce2_geometry_resized.pdf} \caption{Situation near a convex corner $c$.} \label{fig:e1ce2} \end{figure} \begin{itemize} \item For $i=1,2$, let $x_i$ be the intersection of $e_i$ and $B_{\frac{\sqrt{2}}{\sqrt{2}+1}d_{\mathcal{C}, 1}}(c)$, \item let $x_3$ be the intersection of $\overline{x_1x_2}$ with the bisector, \item and for $i=1,2$, let $x_{i+3}$ be the orthogonal projection of $x_0$ onto $e_i$, which is an element of $e_i$ because $c$ is a convex corner. \end{itemize} Then $d_{c,2} = |\overline{x_1x_2}| = |\overline{x_1x_3}| + |\overline{x_3x_2}| = 2 |\overline{x_ix_3}|$. Because the triangle $cx_0x_{i+3}$ is congruent to $cx_ix_3$, it follows that $\dist(x_0,e_i) = |\overline{x_0x_{i+3}}| = |\overline{x_ix_3}| = \tfrac12 d_{c,2} \geq \sqrt{2}h({\cT_\Omega})$. We can conclude with \eqref{eq:meshsize} that $n_e(n) = 1$ for all $n\in \mathcal{N}_{\mathcal{E}}$ and denote the edge closest to $n$ by $e_n$.
Let then $S_n$ be the square with two edges parallel to $e_n$ such that \begin{equation*} \ptch(n) \subset B_{h({\cT_\Omega})}(n)\subset S_n\subset B_{\sqrt{2}h({\cT_\Omega})}(n), \end{equation*} see Figure \ref{fig:patch-edge}, i.e., $S_n$ has center $n$ and sides of length $2h({\cT_\Omega})$. For each $n\in \mathcal{N}_{\mathcal{E}}$, the connected component of $S_n\cap \Omega$ containing $n$ is a rectangle: \begin{itemize} \item[(i)] Note that for all edges $e$ such that $\overline{e}\cap\overline{e_n} = \emptyset$, it holds that $S_n\cap e \subset B_{\sqrt{2}h({\cT_\Omega})}(n) \cap e = \emptyset$. The latter holds because $\sqrt{2}h({\cT_\Omega}) \leq \tfrac12 d_{\mathcal{E}}$ and $\dist_{\Omega}(n,e_n)<\sqrt{2}h({\cT_\Omega})$ imply $\dist_{\Omega}(n,e)\geq \sqrt{2}h({\cT_\Omega})$. \item[(ii)] From the previously given geometric argument considering a convex corner $c$ and the two neighboring edges $e_1$ and $e_2$, showing that $\dist(x_0,e_i) \geq \sqrt{2}h({\cT_\Omega})$ for $i=1,2$, we can additionally conclude that there is no $x\in\Omega\setminus B_{\frac{\sqrt{2}}{\sqrt{2}+1}d_{\mathcal{C}, 1}}(c)$ for which $\dist(x,e_n)<\sqrt2 h({\cT_\Omega})$ and such that there exists another edge $e$ so that $\overline{e_n}\cap\overline{e}\neq\emptyset$ and $\dist(x,e) < \sqrt2 h({\cT_\Omega})$. This implies that $S_n\cap\partial\Omega \subset e_n$ or $S_n\cap\partial\Omega = \emptyset$. \end{itemize} Thus, the connected component of $S_n\cap\Omega$ containing $n$ is a rectangle, which we define to be $\Omega_n$. Setting $N_p := \#{\cN_\Omega}$ and $\{\Omega_i\}_{i=1,\ldots,N_p} = \{\Omega_n\}_{n\in{\cN_\Omega}}$ concludes the proof. \end{proof} \subsection{Proof of Lemma \ref{lemma:W11ext}} \label{sec:W11ext} \begin{proof}[Proof of Lemma \ref{lemma:W11ext}] Let $d=3$ and denote $R = (-1, 0)^3$. Denote by $O$ the origin, and let $E = \{e_1, e_2, e_3\}$ denote the set of edges of $R$ abutting the origin.
Let also $F=\{f_1, f_2, f_3\}$ denote the set of faces of $R$ abutting the origin, i.e., the faces of $R$ such that $f_i \subset \overline{R}\cap\overline{\Omega}$, $i=1,2,3$. Let, finally, for each $f\in F$, $E_f = \{e\in E: e\subset\overline{f}\}$ denote the subset of $E$ containing the edges neighboring $f$. For each $e\in E$, define $u_e$ to be the lifting of $u|_e$ into $R$, i.e., the function such that $u_e|_e = u|_e$ and $u_e$ is constant in the two coordinate directions perpendicular to $e$. Similarly, let, for each $f\in F$, $u_f$ be such that $u_f|_f = u|_f$ and $u_f$ is constant in the direction perpendicular to $f$. We define $w:R\to \mathbb{R}$ as \begin{equation} \label{eq:vR} w = u_0 + \sum_{e\in E}\big( u_{e} - u_0\big) + \sum_{f\in F}\big( u_{f} - u_0 - \sum_{e\in E_f}(u_{e}-u_0)\big) = u_0 -\sum_{e\in E}u_e + \sum_{f\in F}u_f, \end{equation} where $u_0 = u(O)$. Since $u|_{e}\in W^{1,1}(e)$ and $u|_{f}\in W_{\mathrm{mix}}^{1,1}(f)$ for all $e\in E$ and $f\in F$, there holds $u_e\in W_{\mathrm{mix}}^{1,1}(R)$ and $u_f\in W_{\mathrm{mix}}^{1,1}(R)$ for all $e\in E$ and $f\in F$ (cf. Equations \eqref{eq:trace} and \eqref{eq:trace-2}), hence $w\in W_{\mathrm{mix}}^{1,1}(R)$. Furthermore, note that \begin{equation*} \big( u_{e} - u_0\big)|_{\widetilde{e}} = 0, \quad \text{for all }E\ni\widetilde{e}\neq e \end{equation*} and that \begin{equation*} \big(u_{f} - u_0 - \sum_{e\in E_f}(u_{e}-u_0)\big)|_{\widetilde{f}} = 0, \quad \text{for all }F\ni\widetilde{f}\neq f. \end{equation*} From the first equality in \eqref{eq:vR}, then, it follows that, for all $f\in F$, \begin{equation*} w|_f = u_0 + \sum_{e\in E_f}\big(u_e|_f - u_0\big) + u_f|_f - u_0 - \sum_{e\in E_f}\big(u_e|_f - u_0\big)= u|_f. \end{equation*} Let the function $v$ be defined as \begin{equation} \label{eq:vR2} v|_R = w, \qquad v|_{\Omega} =u. \end{equation} Then, $v$ is continuous in $(-1,1)^3$ and $v\in W_{\mathrm{mix}}^{1,1}((-1,1)^3)$.
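The inclusion-exclusion structure of \eqref{eq:vR} can be sanity-checked numerically. The sketch below (Python; the smooth datum $u$ and the sample-based check are hypothetical illustrations, not part of the proof) builds the edge and face liftings on $R=(-1,0)^3$ and verifies the trace identity $w|_f = u|_f$ on the three faces abutting the origin.

```python
import random

# Hypothetical boundary datum u, smooth on a neighbourhood of R = (-1,0)^3.
def u(x, y, z):
    return x * y + z ** 2 + 0.5 * x * z + y

u0 = u(0.0, 0.0, 0.0)

# Liftings along the three edges abutting the origin O = (0,0,0):
# u_e is constant in the two coordinate directions perpendicular to e.
edge_lifts = [lambda x, y, z: u(x, 0, 0),
              lambda x, y, z: u(0, y, 0),
              lambda x, y, z: u(0, 0, z)]

# Liftings from the three faces abutting O: constant perpendicular to f.
face_lifts = [lambda x, y, z: u(x, y, 0),
              lambda x, y, z: u(x, 0, z),
              lambda x, y, z: u(0, y, z)]

def w(x, y, z):
    # Second equality in (eq:vR): w = u_0 - sum_e u_e + sum_f u_f.
    return (u0 - sum(le(x, y, z) for le in edge_lifts)
               + sum(lf(x, y, z) for lf in face_lifts))

# Check w = u on the faces {z=0}, {y=0}, {x=0} at random sample points.
random.seed(0)
for _ in range(100):
    s, t = random.uniform(-1, 0), random.uniform(-1, 0)
    for p in [(s, t, 0.0), (s, 0.0, t), (0.0, s, t)]:
        assert abs(w(*p) - u(*p)) < 1e-12
```

On the face $\{z=0\}$, for instance, the edge terms $u(x,0,0)$ and $u(0,y,0)$ cancel against the contributions of the faces $\{y=0\}$ and $\{x=0\}$, leaving exactly $u(x,y,0)$, in line with the cancellation argument in the proof.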
Now, for all $\alpha\in \mathbb{N}^3_0$ such that $|\alpha|_\infty\leq 1$, \begin{equation*} \|\partial^\alpha u_e\|_{L^1(R)} = \|\partial^{\alpha^e_\parallel} u_e\|_{L^1(R)} = \|\partial^{\alpha^e_\parallel} u\|_{L^1(e)},\qquad \forall e\in E, \end{equation*} where $\alpha^e_\parallel$ denotes the index in the coordinate direction parallel to $e$, and \begin{equation*} \|\partial^\alpha u_f\|_{L^1(R)} = \|\partial^{\alpha^f_{\parallel,1}} \partial^{\alpha^f_{\parallel,2}} u_f\|_{L^1(R)} = \|\partial^{\alpha^f_{\parallel,1}} \partial^{\alpha^f_{\parallel,2}} u\|_{L^1(f)} ,\qquad \forall f\in F, \end{equation*} where $\alpha^f_{\parallel, j}$, $j=1,2$ denote the indices in the coordinate directions parallel to $f$. Then, by a trace inequality (see \cite[Lemma 4.2]{Schotzau2013a}), there exists a constant $C>0$ independent of $u$ such that \begin{equation*} \| u_e \|_{W_{\mathrm{mix}}^{1,1}(R)} \leq C \| u \|_{W_{\mathrm{mix}}^{1,1}(\Omega)},\qquad \| u_f \|_{W_{\mathrm{mix}}^{1,1}(R)} \leq C \| u \|_{W_{\mathrm{mix}}^{1,1}(\Omega)}, \end{equation*} for all $e\in E$, $f\in F$. Then, by \eqref{eq:vR} and \eqref{eq:vR2}, \begin{equation*} \| v\|_{W_{\mathrm{mix}}^{1,1}((-1,1)^d)}\leq C \| u \|_{W_{\mathrm{mix}}^{1,1}(\Omega)}, \end{equation*} for an updated constant $C$ independent of $u$. This concludes the proof when $d=3$. The case $d=2$ can be treated by the same argument. \end{proof} \section*{Acknowledgements}
\section{Introduction} In this paper we study admissible extensions of several theories $T$ of reverse mathematics. The idea is that in such an extension the structure $\mathfrak{M} =(\mathbb{N},\mathbb{S},\in)$ of the natural numbers and collection of sets of natural numbers $\mathbb{S}$ has to obey the axioms of $T$ while simultaneously one also has a set-theoretic world with transfinite levels erected on top of $\mathfrak{M}$ governed by the axioms of Kripke-Platek set theory, $\mathsf{KP}$. In some respects, the admissible extension of $T$ can be viewed as a proof-theoretic analog of Barwise's admissible cover of an arbitrary model of set theory (in his book ``Admissible Sets and Structures''). However, by contrast, the admissible extension of $T$ is usually not a conservative extension of $T$. Owing to the interplay of $T$ and $\mathsf{KP}$, either theory's axioms may force new sets of naturals to exist, which in turn may engender yet further sets of naturals on account of the axioms of the other. This approach will be studied in detail and paradigmatically by combining $\Pi^1_1$ comprehension on the natural numbers with Kripke-Platek set theory, the respective theory being called $\mathsf{KP} +(\pooca^*)$. In the next two sections we present the syntactic machinery of this system and then turn to its Tait-style reformulation, which is convenient for the later proof-theoretic analysis. For this proof-theoretic analysis we make use of a more or less standard ordinal notation system, following Buchholz \cite{buchholz86a}, that includes the Bachmann-Howard ordinal $\mathcal{BH}$ and the collapsing functions needed later. Section~\ref{s:rs} introduces the semi-formal system $\frs^*$ in which $\mathsf{KP} +(\pooca^*)$ will be embedded.
For the subsequent analysis of $\frs^*$ we use a novel type of ordinal analysis, which expands that for $\mathsf{KP}$ to the higher set-theoretic universe while at the same time treating the world of subsets of $\mathbb{N}$ as an unanalyzed class-sized urelement structure. Based on these results we turn to the reduction of $\mathsf{KP} +(\pooca^*)$ to $\pooca + \mathit{TI}({<}\BH)$ in Section~\ref{s:reduction}. Main ingredients here are the notion of $\alpha$-suitable trees, which extends Simpson's suitable trees in \cite{simpson09}, and a specific truth definition based on them. The final section shows how the previous results can be extended to other systems of reverse mathematics such as $\mathsf{ATR}_0$ and $\mathsf{BI}$. \section{$\mathsf{KP}$ and $(\pooca^*)$} Let $\mathcal{L}_\in$ be the usual language of set theory with $\in$ as its only non-logical relation symbol plus set constants $\underline{\varnothing}$ (for the empty set) and $\underline{\omega}$ (for the first infinite ordinal). $\Lcal^{set}_2$ is the extension of $\mathcal{L}_\in$ by countably many unary relation variables. The \emph{set terms} of $\Lcal^{set}_2$ are the set constants and the set variables. The \emph{atomic formulas} of $\Lcal^{set}_2$ are the expressions of the form $(a \in b)$, $(a \notin b)$, $U(a)$, and $\neg U(a)$, where $a,b$ are set terms and $U$ is a relation variable. The \emph{formulas} of $\Lcal^{set}_2$ are built up from these atomic formulas by means of the propositional connectives $\lor, \land$, the bounded set quantifiers $(\exists x \in a)$, $(\forall x \in a)$, the unbounded set quantifiers $\exists x$, $\forall x$, and the relation quantifiers $\exists X$, $\forall X$. We use as metavariables (possibly with subscripts): \begin{itemize} \item $u,v,w,x,y,z$ for set variables, \item $a,b,c$ for set terms, \item $U,V,W,X,Y,Z$ for relation variables, \item $A,B,C,D$ for formulas. \end{itemize} Observe that all $\Lcal^{set}_2$ formulas are in negation normal form.
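Because formulas are kept in negation normal form, negation is an operation on formulas rather than a primitive connective. The following sketch (Python; a toy propositional fragment with hypothetical constructors, not the full $\Lcal^{set}_2$ syntax) shows how such a negation operation can be implemented: it flips the polarity of literals and pushes through $\lor$ and $\land$ by de Morgan, so the result is again in negation normal form.

```python
# Toy fragment: literals ('pos', p) / ('neg', p), plus 'or' and 'and'.
def neg(A):
    tag = A[0]
    if tag == 'pos':
        return ('neg', A[1])
    if tag == 'neg':
        return ('pos', A[1])          # law of double negation
    if tag == 'or':
        return ('and', neg(A[1]), neg(A[2]))   # de Morgan
    if tag == 'and':
        return ('or', neg(A[1]), neg(A[2]))    # de Morgan
    raise ValueError(tag)

def implies(A, B):
    # (A -> B) is an abbreviation for (neg(A) or B).
    return ('or', neg(A), B)

A = ('or', ('pos', 'U(a)'), ('and', ('neg', 'a in b'), ('pos', 'a in c')))
assert neg(neg(A)) == A               # negation is an involution on NNF
assert neg(A) == ('and', ('neg', 'U(a)'),
                  ('or', ('pos', 'a in b'), ('neg', 'a in c')))
```

The involution property `neg(neg(A)) == A` mirrors the fact that, with de Morgan and double negation built into the definition, no explicit negation connective is ever needed.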
We assume classical logic throughout this article. Therefore, the negation $\neg A$ of an $\Lcal^{set}_2$ formula $A$ is defined via de Morgan's laws and the law of double negation. Furthermore, $(A \to B)$ is defined as $(\neg A \lor B)$ and $(A \leftrightarrow B)$ as $((A \to B) \land (B \to A))$. To simplify the notation we often omit parentheses if there is no danger of confusion. Equality is not a basic symbol but defined for sets and relations: \begin{align*} (a=b) &\;:=\; (\forall x \in a)(x \in b) \,\land\, (\forall x \in b)(x \in a), \\[1.5ex] (U=V) &\;:=\; (\forall x \in \underline{\omega})(U(x) \leftrightarrow V(x)).% \end{align*} Since relations are supposed to range over subcollections of $\underline{\omega}$ -- see axiom (Sub $\underline{\omega})$ below -- this definition of the equality of relations makes sense. The vector notation $\vec{a}$ will be used to denote finite sequences of $\Lcal^{set}_2$ terms. $A^{(a)}$ results from $A$ by restricting all unbounded set quantifiers to $a$; relation quantifiers $QX$ are not affected by this restriction. The $\Delta_0$, $\Sigma$, $\Pi$, $\Sigma_n$, and $\Pi_n$ formulas of $\Lcal^{set}_2$ are defined as usual where relation variables are permitted as parameters. Moreover, we shall employ the common set-theoretic terminology and the standard notational conventions, for example: \begin{itemize} \item $\mathit{Tran}[a] \;:=\; (\forall x \in a)(\forall y \in x)(y \in a)$, \item $\mathit{Ord}[a] \;:=\; \mathit{Tran}[a] \,\land\, (\forall x \in a)\mathit{Tran}[x]$, \item $\mathit{Succ}[a] \;:=\; \mathit{Ord}[a] \,\land\, (\exists x \in a)(a = x \cup \{x\})$, \item $\mathit{FinOrd}[a] \;:=\; \left\{ \begin{array}{l} \mathit{Ord}[a] \,\land\, (a = \underline{\varnothing} \,\lor\, \mathit{Succ}[a]) \,\land\, \phantom{a} \\[1ex] (\forall x \in a)(x = \underline{\varnothing} \,\lor\, \mathit{Succ}[x]). \end{array} \right. 
$ \end{itemize} Clearly, $\mathit{Succ}[a]$ and $\mathit{FinOrd}[a]$ express that $a$ is a successor ordinal and a finite ordinal, respectively. Now we formulate \emph{Kripke-Platek set theory} $\mathsf{KP}$ in $\Lcal^{set}_2$, based on classical logic.% \footnote{Observe that $(a=a)$ and $(U=U)$ are provable by logic.} Its non-logical axioms are: \[ \begin{array}{lll} \mbox{(Equality)} & a=b \land D[a] \;\to\; D[b]. \phantom{aaaaaaaaaaaaaaaaaaaaaaaaaaaaaa} \\[1.5ex] \mbox{(Sub $\underline{\omega})$ } & U(a) \,\to\, a \in \underline{\omega}. \\[1.5ex] \mbox{(Pair)} & \exists z(a \in z \,\land\, b \in z). \\[1.5ex] \mbox{(Union)} & \exists z(\forall y \in a)(\forall x \in y)(x \in z). \\[1.5ex] \mbox{(Empty set)} & (\forall x \in\underline{\varnothing})(x \neq x). \\[1.5ex] \mbox{(Infinity)} & a \in \underline{\omega} \;\leftrightarrow\; \mathit{FinOrd}[a]. \\[1.5ex] \mbox{($\Delta_0$-Sep)} & \exists z(z = \{x \in a : D[x]\}). \\[1.5ex] \mbox{($\Delta_0$-Col)} & (\forall x \in a)\exists y D[x,y] \,\to\, \exists z(\forall x \in a)(\exists y \in z)D[x,y]. \\[1.5ex] \mbox{($\in$-Ind)} & \forall x((\forall y \in x)A[y] \,\to\, A[x]) \,\to\, \forall x A[x]. \end{array} \] The formulas $D$ in the schemas (Equality), ($\Delta_0$-Sep), and ($\Delta_0$-Col) are $\Delta_0$ whereas the formula $A$ in ($\in$-Ind) ranges over arbitrary formulas of $\Lcal^{set}_2$. \medskip It is easy to see that the theory $\mathsf{KP}$ is a conservative extension of the usual first order formalization of Kripke-Platek set theory with infinity. In this article we are interested in the theory that is obtained from $\mathsf{KP}$ by adding the following form of $\Pi^1_1$ comprehension, where $D[x,Y]$ is a $\Delta_0$ formula of $\Lcal^{set}_2$, \begin{gather*} \exists Z (\forall x \in \underline{\omega})(Z(x) \;\leftrightarrow\; \forall Y D[x,Y]).
\tag*{$(\pooca^*)$} \end{gather*} \medskip From $(\Delta_0$-Sep) -- and here it is crucial that relations are permitted as parameters -- we immediately obtain that the intersection of any relation with $\underline{\omega}$ is extensionally equal to a subset of $\underline{\omega}$. On the other hand, in view of $(\pooca^*)$ we also know that for every subset of $\underline{\omega}$ there exists a relation with the same elements. It is also clear that our comprehension schema $(\pooca^*)$ implies the usual form of $\Pi^1_1$ comprehension of second order arithmetic modulo its natural translation into $\Lcal^{set}_2$; for this translation see Simpson \cite{simpson09}. \begin{remark}\rm Let $A'[x,Y]$ be the natural translation of the arithmetical formula $A[x,Y]$. Then $\mathsf{KP} +(\pooca^*)$ proves that \[ (\exists z \subseteq \underline{\omega})(\forall x \in \underline{\omega})(x \in z \;\leftrightarrow\; (\exists y \subseteq \underline{\omega})A'[x,y]). \] \end{remark} \section{A Tait-style reformulation of $\mathsf{KP} +(\pooca^*)$} The basic idea of a Tait-style system is that it derives finite sets of $\Lcal^{set}_2$ formulas rather than individual $\Lcal^{set}_2$ formulas. The intended meaning of such a set is the disjunction of its elements. This reformulation is for technical reasons only and simplifies the proof of Theorem~\ref{t:embedding}. The Greek capital letters $\Gamma,\Theta,\Lambda$ (possibly with subscripts) will act as metavariables for finite sets of $\Lcal^{set}_2$ formulas. Also, we write (for example) $\Gamma, A_1, \ldots, A_n$ for $\Gamma \cup \{A_1,\ldots,A_n\}$; similarly for expressions such as $\Gamma,\Theta,A$. \bigskip\medskip \noindent\textbf{The Tait-style axioms of $\mathsf{KP} + (\pooca^*)$} \[ \begin{array}{ll} \mbox{(TnD)} & \Gamma,\, \neg D,\, D. \phantom{aaaaaaaaaaaaaaaaaa} \\[1.5ex] \mbox{(Equality)} & \Gamma,\, a \neq b,\, \neg D[a],\, D[b]. \\[1.5ex] \mbox{(Sub $\underline{\omega})$ } & \Gamma,\, \neg U(a),\, a \in \underline{\omega}.
\\[1.5ex] \mbox{(Pair)} & \Gamma,\, \exists z(a \in z \,\land\, b\in z). \\[1.5ex] \mbox{(Union)} & \Gamma,\, \exists z(\forall y \in a)(\forall x \in y)(x \in z). \\[1.5ex] \mbox{(Empty set)} & \Gamma,\, (\forall x \in \underline{\varnothing})(x \neq x). \\[1.5ex] \mbox{(Infinity)} & \Gamma,\, a \in \underline{\omega} \,\lra\, \mathit{FinOrd}[a]. \\[1.5ex] \mbox{($\Delta_0$-Sep)} & \Gamma,\, \exists z(z = \{x \in a : D[x]\}). \\[1.5ex] \mbox{($\Delta_0$-Col)} & \Gamma,\, (\forall x \in a)\exists y D[x,y] \,\to\, \exists z(\forall x \in a)(\exists y \in z)D[x,y]. \\[1.5ex] \mbox{($\in$-Ind)} & \Gamma,\, \forall x((\forall y \in x)A[y] \,\to\, A[x]) \,\to\, \forall x A[x]. \\[1.5ex] (\pooca^*) & \Gamma,\, \exists Z (\forall x \in \underline{\omega})(Z(x) \;\leftrightarrow\; \forall Y D[x,Y]). \end{array} \] In these axioms the formulas $D$ in (TnD), (Equality), ($\Delta_0$-Sep), ($\Delta_0$-Col), and $(\pooca^*)$ are supposed to be $\Delta_0$. The formula $A$ in ($\in$-Ind) may be an arbitrary $\Lcal^{set}_2$ formula. (TnD) stands for ``Tertium non datur''. 
\bigskip\medskip \noindent\textbf{The Tait-style inference rules of $\mathsf{KP} +(\pooca^*)$} \medskip \[ \begin{array}{ccccc} \ddfrac{\Gamma,\, A,\, B}{\Gamma,\, A \lor B} & (\lor) & & \ddfrac{\Gamma,\, A \qquad \Gamma,\, B}{\Gamma,\, A \land B} &(\land) \\[3.5ex] \ddfrac{\Gamma,\, A[b]}{\Gamma,\, \exists x A[x]}\ &(\exists) & & \ddfrac{\Gamma,\, A[u]}{\Gamma,\, \forall x A[x]} & (\forall) \\[3.5ex] \ddfrac{\Gamma,\, b \in a \,\land\, A[b]}{\Gamma,\, (\exists x \in a) A[x]} & (b\exists) & & \ddfrac{\Gamma,\, u \in a \,\to\, A[u]}{\Gamma,\, (\forall x \in a)A[x]} & (b\forall) \\[3.5ex] \ddfrac{\Gamma,\, A[V]}{\Gamma,\, \exists X A[X]} & (\exists_2) & & \ddfrac{\Gamma,\, A[U]}{\Gamma,\, \forall X A[X]} & (\forall_2) \\[3.5ex] \ddfrac{\Gamma,\, A \qquad \Gamma,\, \neg A}{\Gamma} & (\mathsf{Cut}) & & \phantom{a} & \phantom{a} \end{array} \] \smallskip \noindent Of course, it is demanded that in $(\forall)$ and $(b\forall)$ the eigenvariable $u$ must not occur in the conclusion; the same is the case for the variable $U$ in $(\forall_2)$. We say that $\Gamma$ is Tait-style derivable from $\mathsf{KP} + (\pooca^*)$ iff there exists a finite sequence of finite sets of $\Lcal^{set}_2$ formulas \[ \Theta_0,\ldots,\Theta_k \] such that $\Theta_k$ is the set $\Gamma$ and for every $i = 0,\ldots,k$ one of the following two conditions is satisfied: \begin{itemize} \item $\Theta_i$ is a Tait-style axiom of $\mathsf{KP} + (\pooca^*)$. \item $\Theta_i$ is the conclusion of an inference of a Tait-style inference rule of $\mathsf{KP} + (\pooca^*)$ with premise(s) from $\Theta_0,\ldots,\Theta_{i-1}$. \end{itemize} In this case we write $\mathsf{KP} + (\pooca^*) \vdash^k \Gamma$ and say that $\Gamma$ has a proof of length $k$. It is an easy exercise to show that a formula $A$ is provable in one of the usual Hilbert-style formalizations of $\mathsf{KP} +(\pooca^*)$ iff $\mathsf{KP} + (\pooca^*) \vdash^k A$ for some natural number $k$. Details are left to the reader.
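To make the derivability notion concrete, here is a toy proof checker (a hypothetical sketch covering only a propositional fragment: axiom (TnD) and the rules $(\lor)$, $(\land)$, $(\mathsf{Cut})$; not the full calculus) that validates a sequence $\Theta_0,\ldots,\Theta_k$ of finite sets of formulas in exactly the sense defined above.

```python
# Sequents are frozensets of formulas; formulas are nested tuples:
# ('pos', p), ('neg', p), ('or', A, B), ('and', A, B).
def neg(A):
    if A[0] == 'pos': return ('neg', A[1])
    if A[0] == 'neg': return ('pos', A[1])
    if A[0] == 'or':  return ('and', neg(A[1]), neg(A[2]))
    return ('or', neg(A[1]), neg(A[2]))

def is_tnd_axiom(seq):
    # (TnD): the set contains some D together with neg(D).
    return any(neg(D) in seq for D in seq)

def follows(seq, earlier):
    # Conclusion of (or), (and), or (Cut) with premises among 'earlier'.
    for F in seq:
        rest = seq - {F}
        if F[0] == 'or' and (rest | {F[1], F[2]}) in earlier:
            return True
        if F[0] == 'and' and (rest | {F[1]}) in earlier \
                         and (rest | {F[2]}) in earlier:
            return True
    # (Cut): some A with both Gamma,A and Gamma,neg(A) derived earlier.
    candidates = set().union(*earlier) if earlier else set()
    return any((seq | {A}) in earlier and (seq | {neg(A)}) in earlier
               for A in candidates)

def derivable(thetas):
    # Each Theta_i must be an axiom or follow from earlier sets.
    seen = set()
    for seq in thetas:
        if not (is_tnd_axiom(seq) or follows(seq, seen)):
            return False
        seen.add(seq)
    return True

# Example: derive the singleton set { neg(p) or p } from the (TnD) axiom
# {p, neg(p)} by one application of the (or) rule.
p = ('pos', 'p')
theta0 = frozenset({p, neg(p)})
theta1 = frozenset({('or', neg(p), p)})
assert derivable([theta0, theta1])
assert not derivable([theta1])   # the (or) premise is missing
```

The checker searches cut formulas only among formulas occurring in earlier sets, which suffices for finite proof sequences; it illustrates why a set $\Gamma$ is read disjunctively and why $\Theta_0$ must itself be an axiom.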
\section{Ordinal notations} In the next sections we establish the upper proof-theoretic bound of the theory $\mathsf{KP} +(\pooca^*)$. Our method of choice is the ordinal analysis of $\mathsf{KP} +(\pooca^*)$ via a system $\frs^*$ of ramified set theory. And in order to build up this system and to control the derivations in $\frs^*$ we work with specific ordinal notations. The following outline is based on ideas going back to Buchholz \cite{buchholz86a}. \begin{definition}\rm The set of ordinals $C(\alpha,\beta)$ and the ordinals $\psi(\alpha)$ are defined for all ordinals $\alpha$ and $\beta$ by induction on $\alpha$. \begin{enumerate}[(i)] \item $\{0,\Omega\} \cup \beta \subseteq C(\alpha,\beta)$. \item If $\eta,\xi \in C(\alpha,\beta)$, then $\eta +\xi \in C(\alpha,\beta)$ and $\omega^\xi \in C(\alpha,\beta)$. \item If $\xi <\alpha$ and $\xi \in C(\alpha,\beta)$, then $\psi(\xi) \in C(\alpha,\beta)$. \item $\psi(\alpha) \,:=\, \min(\{ \eta \in \mathit{On} : C(\alpha,\eta) \cap \Omega = \eta \})$. \end{enumerate} \end{definition} The following lemma summarizes some key properties of the sets $C(\alpha,\beta)$ and the function $\psi$. For its proof see Buchholz \cite{buchholz86a}. \begin{lemma} \label{l:o0} We have for all ordinals $\alpha,\alpha_1,\alpha_2,\beta$: \begin{enumerate}[(1)] \item $\psi(\alpha) < \Omega$. \item $C(\alpha,\psi(\alpha)) \cap \Omega = \psi(\alpha)$. \item $\psi(\alpha)$ is an $\varepsilon$-number. \item If $\alpha_1 <\alpha_2$ and $\alpha_1 \in C(\alpha_2,\psi(\alpha_2))$, then $\psi(\alpha_1) < \psi(\alpha_2)$. \item If $\alpha_1 \leq \alpha_2$, then $\psi(\alpha_1) \leq \psi(\alpha_2)$ and $C(\alpha_1,\psi(\alpha_1)) \subseteq C(\alpha_2,\psi(\alpha_2))$. \item $C(\alpha,0) = C(\alpha,\psi(\alpha))$. \end{enumerate} \end{lemma} We write $\varepsilon_{\Omega+1}$ for the least ordinal $\alpha > \Omega$ such that $\omega^\alpha = \alpha$. 
Its collapse $\mathcal{BH} := \psi(\varepsilon_{\Omega+1})$ is called the \emph{Bachmann-Howard ordinal}. This number gained importance in proof theory since it is the proof-theoretic ordinal of the theory $\mathsf{ID}_1$ of one positive inductive definition and of Kripke-Platek set theory $\mathsf{KP}$; see, for example, Buchholz and Pohlers \cite{buchholz-pohlers78a}, J\"ager \cite{j82a}, and Pohlers \cite{pohlers81a}. \section{The semi-formal ramified system $\frs^*$} \label{s:rs} In this section we introduce the semi-formal proof system $\frs^*$ of ramified set theory. We begin by extending our language $\Lcal^{set}_2$ to the language $\Lcal^\star$ and then present the axioms and rules of inference of $\frs^*$. Afterwards we turn to operator controlled derivations and some basic properties of $\frs^*$. We show that $\mathsf{KP} + (\pooca^*)$ can be embedded into $\frs^*$ and prove cut elimination and collapsing for $\frs^*$. Henceforth, all ordinals used on the metalevel in this section range over the set $C(\varepsilon_{\Omega+1},0)$ if not stated otherwise. \subsection{The language $\Lcal^\star$} The basic idea is to extend the language $\Lcal^{set}_2$ to the language $\Lcal^\star$ by adding unary relation symbols $M_\alpha$ and new quantifiers $\exists x^\alpha\, /\, \forall x^\alpha$ for all $\alpha < \Omega$. The quantifiers $Q x^\alpha$ are supposed to range over $M_\alpha$, and later, see Subsection~\ref{s:suitable}, an atomic formula $M_\alpha(a)$ will be interpreted as stating that $a$ can be coded by a so-called $\beta$-tree for some $\beta <\alpha$. \begin{definition}\rm \label{d:rs-formulas} The \emph{formulas} $F$ of $\frs^*$, their \emph{ranks} $\mathit{rk}(F)$ and \emph{parameter sets} $|F|$ are inductively defined as follows.
\begin{enumerate} \item If $a$ and $b$ are set terms of $\Lcal^{set}_2$, then $(a\in b)$ and $(a \notin b)$ are formulas with \[ \mathit{rk}(a \in b) := \mathit{rk}(a \notin b) := 0\; \mbox{and}\; |(a \in b)| := |(a \notin b)| := \{0\}. \] \item If $a$ is a set term and $U$ a relation variable, then $U(a)$ and $\neg U(a)$ are formulas with \[ \mathit{rk}(U(a)) := \mathit{rk}(\neg U(a)) := 0\; \mbox{and}\; |U(a)| := |\neg U(a)| := \{0\}. \] \item If $a$ is a set term and $M_\alpha$ one of these new relation symbols, then $M_\alpha(a)$ and $\neg M_\alpha(a)$ are formulas with \[ \mathit{rk}(M_\alpha(a)) := \mathit{rk}(\neg M_\alpha(a)) := \omega\alpha\; \mbox{and}\; |M_\alpha(a)| := |\neg M_\alpha(a)| := \{\alpha\}. \] \item If $F$ and $G$ are formulas, then $(F \lor G)$ and $(F \land G)$ are formulas with \begin{gather*} \mathit{rk}(F \lor G) := \mathit{rk}(F \land G) := \max(\mathit{rk}(F),\mathit{rk}(G))+1, \\[1ex] |F \lor G| := |F \land G| := |F| \cup |G|. \end{gather*} \item If $F[u]$ is a formula, then $(\exists x \in a)F[x]$ and $(\forall x \in a)F[x]$ are formulas with \begin{gather*} \mathit{rk}((\exists x \in a)F[x]) := \mathit{rk}((\forall x \in a)F[x]) := \mathit{rk}(u \in a \,\land\, F[u]) +1, \\[1ex] |(\exists x \in a)F[x]| := |(\forall x \in a)F[x]| := \{|a|\} \cup |F[u]|. \end{gather*} \item If $F[u]$ is a formula, then $\exists x^\alpha F[x]$ and $\forall x^\alpha F[x]$ are formulas with \begin{gather*} \mathit{rk}(\exists x^\alpha F[x]) := \mathit{rk}(\forall x^\alpha F[x]) := \mathit{rk}(M_\alpha(u) \,\land\, F[u]) +1, \\[1ex] |\exists x^\alpha F[x]| := |\forall x^\alpha F[x]| := \{\alpha\} \cup |F[u]|. \end{gather*} \item If $F[u]$ is a $\Delta_0$ formula of $\Lcal^{set}_2$, then $\exists x F[x]$ and $\forall x F[x]$ are formulas with \begin{gather*} \mathit{rk}(\exists x F[x]) := \mathit{rk}(\forall x F[x]) := \Omega, \\[1ex] |\exists x F[x]| := |\forall x F[x]| := |F[u]|.
\end{gather*} \item If $F[u]$ is not a $\Delta_0$ formula of $\Lcal^{set}_2$, then $\exists x F[x]$ and $\forall x F[x]$ are also formulas but with \begin{gather*} \mathit{rk}(\exists x F[x]) := \mathit{rk}(\forall x F[x]) := \max(\Omega+1,\mathit{rk}(F[u])+3), \\[1ex] |\exists x F[x]| := |\forall x F[x]| := |F[u]|. \end{gather*} \item If $F[U]$ is a formula, then $\exists X F[X]$ and $\forall X F[X]$ are formulas with \begin{gather*} \mathit{rk}(\exists X F[X]) := \mathit{rk}(\forall X F[X]) := \mathit{rk}(F[U])+1, \\[1ex] |\exists X F[X]| := |\forall X F[X]| := |F[U]|. \end{gather*} \end{enumerate} Finally, we define the \emph{level} of a formula as \[ \mathit{lev}(F) \;:=\; \left\{ \begin{array}{ll} \max(|F|) & \mbox{if $\mathit{rk}(F) < \Omega$}, \\[1ex] \Omega & \mbox{if $\Omega \leq \mathit{rk}(F)$}. \end{array} \right. \] \end{definition} To be precise: If $F$ is a formula of $\frs^*$, then $|F|$ collects the levels of the relation symbols $M_\alpha$ and of the quantifiers $Q x^\alpha$ occurring in $F$ plus possibly the number $0$. There are several collections of $\Lcal^\star$ formulas that will play an important role later. \begin{enumerate} \item $\mathcal{D}$ is the collection of all $\Lcal^\star$ formulas in which unbounded set quantifiers $Q x$ and relation quantifiers $Q X$ do not occur. \item $\mathcal{S}$ is the closure of $\mathcal{D}$ under the propositional connectives, quantifiers $Q x^\alpha$, bounded quantifiers $(Qx \in r)$, and unbounded existential set quantifiers. \item $\mathcal{S}_0$ is the subclass of $\mathcal{S}$ that contains all $\Sigma$ formulas of $\Lcal^{set}_2$ that are not $\Delta_0$. \item $\mathcal{B}$ consists of all $\Lcal^\star$ formulas that do not contain unbounded set quantifiers $Q x$. \end{enumerate} Some important properties of the ranks of $\frs^*$ formulas are summarized in the following lemma. Its proof is straightforward and can be omitted.
\begin{lemma}\rm\label{l:rank} \quad \begin{enumerate}[(1)] \item $\mathit{rk}(F) \,=\, \mathit{rk}(\neg F) \,<\, \omega \cdot \mathit{lev}(F) + \omega \,\leq\, \Omega+\omega$. \item $\mathit{rk}(F) < \Omega$ iff $F$ belongs to $\mathcal{B}$. \item If $\mathit{rk}(\exists x F[x]) = \Omega$, then $F[u]$ is a $\Delta_0$ formula of $\Lcal^{set}_2$. \item $\mathit{rk}(M_\alpha(a) \,\land\, F[a]) \,<\, \mathit{rk}(\exists x^\alpha F[x]) \,<\, \mathit{rk}(\exists x F[x])$. \item $\mathit{rk}(s \in r \,\land\, F[s]) \,<\, \mathit{rk}((\exists x \in r)F[x]) \,<\, \mathit{rk}(\exists x F[x])$. \item $\mathit{rk}(F[P]) \,=\, \mathit{rk}(F[U]) \,<\, \mathit{rk}(\exists X F[X])$. \end{enumerate} \end{lemma} \subsection{Axioms and rules of inference of $\frs^*$} For technical reasons we shall use a Tait-style sequent calculus as proof system for $\frs^*$. This means that we derive finite sets of $\Lcal^\star$ formulas rather than individual $\Lcal^\star$ formulas. In the following the Greek capital letters $\Gamma,\Theta,\Lambda$ (possibly with subscripts) act as metavariables for finite sets of $\Lcal^\star$ formulas. Also, we write (for example) $\Gamma, F_1, \ldots, F_n$ for $\Gamma \cup \{F_1,\ldots,F_n\}$; similarly for expressions such as $\Gamma,\Theta,F$. If $\Gamma$ is the set $\{F_1,\ldots,F_k\}$ we define \[ |\Gamma| \;:=\; |F_1| \cup \ldots \cup |F_k| \quad\mbox{and}\quad \Gamma^\lor \;:=\; F_1 \lor \ldots \lor F_k \] such that $|\Gamma|$ collects the parameter sets of the formulas in $\Gamma$ and $\Gamma^\lor$ is the disjunction of its elements. Clearly, $\Gamma^\lor$ reflects the intended meaning of $\Gamma$. \bigskip \noindent\textbf{Axioms of $\frs^*$} \begin{enumerate}[(1)] \vspace{0.5ex}\item\label{a:1} $\Gamma,\, \neg F,\, F$\quad for $F \in \mathcal{B}$. \vspace{0.5ex}\item $\Gamma,\, a \neq b,\, \neg F[a],\, F[b]$\quad for $F[u] \in \mathcal{B}$. \vspace{0.5ex}\item $\Gamma,\, \neg U(a),\, a \in \underline{\omega}$.
\vspace{0.5ex}\item\label{a:5} $\Gamma,\, \neg M_0(a)$. \vspace{0.5ex}\item \label{a:6} $\Gamma,\, M_1(\underline{\varnothing})$. \vspace{0.5ex}\item $\Gamma,\, a \notin \underline{\varnothing}$. \vspace{1ex}\item \label{a:8} $\Gamma,\, M_{\omega{+}1}(\underline{\omega})$. \vspace{1ex}\item $\Gamma,\, (a \in \underline{\omega} \;\leftrightarrow\; \mathit{FinOrd}[a])$. \vspace{1ex}\item $\Gamma,\, \neg M_\alpha(a),\, M_\beta(a)$\quad for $\alpha \leq \beta$. \vspace{1ex}\item\label{a:12} $\Gamma,\, \neg M_{\alpha{+}1}(a),\, b \notin a,\, M_\alpha(b)$. \vspace{1ex}\item\label{a:13} $\Gamma,\, \neg M_\alpha(a),\, \neg M_\beta(b),\, \exists z^{\beta{+}1}(a,b \in z)$\quad for $\alpha \leq \beta$. \vspace{1ex}\item $\Gamma,\, \neg M_\alpha(a),\, \exists z^\alpha(\forall y \in a)(\forall x \in y)(x \in z)$. \vspace{1ex}\item\label{a:15} $\Gamma,\, \neg M_\alpha(a),\, \exists z^{\alpha{+}1}(z = \{x \in a: A[x]\})$ \quad for $A[u]$ from $\Delta_0$ of $\Lcal^{set}_2$. \vspace{1ex}\item\label{a:16} $\Gamma,\, \exists Z(\forall x \in \underline{\omega})(Z(x) \,\lra\, \forall Y A[x,Y])$ \quad for $A[u,V]$ from $\Delta_0$ of $\Lcal^{set}_2$. \end{enumerate} Please observe that the main formulas of all axioms belong to $\mathcal{B}$. This will be important in the later subsections when it comes to the ordinal analysis of the derivations in $\frs^*$.
\bigskip \noindent\textbf{Inference rules of $\frs^*$} \begin{enumerate}[({RRRRR}1)] \vspace{1ex}\item[$(\lor)$]\phantom{a} $\proofrule{\Gamma,\,F,\, G} {\Gamma,\, F \lor G}$ \vspace{1ex}\item[$(\land)$]\phantom{a} $\proofrule{\Gamma,\, F \qquad \Gamma,\, G} {\Gamma,\, F \land G}$ \vspace{1ex}\item[$(\neg M)$]\phantom{a} $\proofrule{\Gamma,\, \neg M_\beta(a) \,\,\, \mbox{for all $\beta < \lambda$}} {\Gamma,\, \neg M_\lambda(a)}$\quad for $\lambda$ limit \vspace{1ex}\item[$(\exists)$]\phantom{a} $\proofrule{\Gamma,\, M_\beta(a) \,\land\, F[a]} {\Gamma,\, \exists x F[x]}$ \vspace{1ex}\item[$(\forall)$]\phantom{a} $\proofrule{\Gamma,\, \neg M_\beta(a) \,\lor\, F[a] \,\,\, \mbox{for all $\beta < \Omega$ and all $a$}} {\Gamma,\, \forall x F[x]}$ \vspace{1ex}\item[$(\exists^\alpha)$]\phantom{a} $\proofrule{\Gamma,\, M_\beta(a) \,\land\, F[a]}{\Gamma,\, \exists x^\alpha F[x]}$\quad \mbox{for $\beta \leq \alpha$} \vspace{1ex}\item[$(\forall^\alpha)$]\phantom{a} $\proofrule{\Gamma,\, \neg M_\beta(a) \,\lor\, F[a]\,\,\, \mbox{for all $\beta \leq \alpha$ and all $a$}} {\Gamma,\, \forall x^\alpha F[x]}$ \vspace{1ex}\item[$(b\exists)$]\phantom{a} $\proofrule{\Gamma,\, b \in a \,\land\, F[b]} {\Gamma,\, (\exists x \in a)F[x]}$ \vspace{1ex}\item[$(b\forall)$]\phantom{a} $\proofrule{\Gamma,\, b \notin a \,\lor\, F[b]\,\,\, \mbox{for all $b$}} {\Gamma,\, (\forall x \in a)F[x]}$ \vspace{1ex}\item[$(\exists_2)$]\phantom{a} $\proofrule{\Gamma,\, F[U]} {\Gamma,\, \exists X F[X]}$ \vspace{1ex}\item[$(\forall_2)$]\phantom{a} $\proofrule{\Gamma,\, F[U] \,\,\, \mbox{for all $U$}} {\Gamma,\, \forall X F[X]}$ \vspace{1ex}\item[$(\mathsf{Cut})$]\phantom{a} $\proofrule{\Gamma,\, F \qquad \Gamma,\, \neg F} {\Gamma}$ \vspace{1ex}\item[$(\Scal_0\mbox{-}\mathsf{Ref})$]\phantom{a} $\proofrule{\Gamma,\, F} {\Gamma,\, \exists z F^{(z)}}$\quad for $F \in \mathcal{S}_0$ \vspace{1ex}\item[$\mathsf{(BC)}$]\phantom{a} $\proofrule{\Gamma,\, F^\beta}{\Gamma,\, \exists z^{\beta{+}\omega} F^{(z)}}$\quad for $F \in
\mathcal{S}_0$ \end{enumerate} If $F$ is from $\mathcal{S}$, we write $F^\beta$ for the result of replacing each unbounded existential quantifier $\exists x$ by $\exists x^\beta$. This must not be confused with $F^{(a)}$ where each unbounded set quantifier $Qx$ in $F$ is restricted to $(Q x \in a)$. \smallskip The meaning of the rules $(\lor)$ -- $(\Scal_0\mbox{-}\mathsf{Ref})$ should be self-explanatory. Rule $\mathsf{(BC)}$ will be needed in connection with the boundedness and collapsing results in subsection \ref{ss:collapsing}. \subsection{Derivation operators} \label{s:derivation-operators} The general theory of derivation operators and operator controlled derivations has been introduced in Buchholz \cite{buchholz92a}. In the following we adapt his general approach to the more specific (and simpler) situation with which we have to deal here. \begin{definition}\rm Let $\mathit{Pow}(\mathit{On})$ denote the collection of all sets of ordinals. A class function \[ \mathcal{H} : \mathit{Pow}(\mathit{On}) \to \mathit{Pow}(\mathit{On}) \] is called a \emph{derivation operator (d-operator for short)} iff it satisfies the following conditions for all $X,Y \in \mathit{Pow}(\mathit{On})$: \begin{enumerate}[(i)] \item $X \subseteq \mathcal{H}(X)$. \item $Y \subseteq \mathcal{H}(X) \quad\Longrightarrow\quad \mathcal{H}(Y) \subseteq \mathcal{H}(X)$. \item $\{0,\Omega\} \subseteq \mathcal{H}(X)$. \item If $\alpha$ has Cantor normal form $\omega^{\alpha_1} + \ldots + \omega^{\alpha_k}$, then \[ \alpha \in \mathcal{H}(X) \quad\Longleftrightarrow\quad \alpha_1,\ldots,\alpha_k \in \mathcal{H}(X). \] \end{enumerate} \end{definition} These requirements ensure that every d-operator $\mathcal{H}$ is monotone, inclusive, and idempotent. Every $\mathcal{H}(X)$ is closed under $+$ as well as $\xi \mapsto \omega^\xi$, and under the decomposition of its members into their Cantor normal form components. Now let $\mathcal{H}$ be a d-operator.
Then we define for all finite sets of ordinals $\mathfrak{m}$ the operators \[ \mathcal{H}[\mathfrak{m}] \;:\; \mathit{Pow}(\mathit{On}) \to \mathit{Pow}(\mathit{On}) \] by setting for all $X \in \mathit{Pow}(\mathit{On})$: \[ \mathcal{H}[\mathfrak{m}](X) \;:=\; \mathcal{H}(\mathfrak{m} \cup X). \] Also, we simply write $\mathcal{H}[\sigma]$ for $\mathcal{H}[\{\sigma\}]$. If $\mathcal{H}$ and $\mathcal{K}$ are d-operators, then we write $\mathcal{H} \subseteq \mathcal{K}$ to state that \[ \mathcal{H}(X) \subseteq \mathcal{K}(X) \;\; \mbox{for all $X \subseteq \mathit{On}$}. \] In this case $\mathcal{K}$ is called an \emph{extension} of $\mathcal{H}$. The following observation is immediate from these definitions. \begin{lemma} \label{l:o1} If $\mathcal{H}$ is a d-operator, then we have for all finite sets of ordinals $\mathfrak{m},\mathfrak{n}$: \begin{enumerate}[(1)] \item $\mathcal{H}[\mathfrak{m}]$ is a d-operator and an extension of $\mathcal{H}$. \item If $\mathfrak{m} \subseteq \mathcal{H}(\varnothing)$, then $\mathcal{H}[\mathfrak{m}] = \mathcal{H}$. \item If $\mathfrak{n} \subseteq \mathcal{H}[\mathfrak{m}](\varnothing)$, then $\mathcal{H}[\mathfrak{n}] \subseteq \mathcal{H}[\mathfrak{m}]$. \end{enumerate} \end{lemma} Now we turn to specific operators $\mathcal{H}_\sigma$. They will play a central role in the collapsing process of $\frs^*$ derivations; see Theorem~\ref{t:collapsing}. \begin{definition}\rm \label{d:deriv-op} We define, for all d-operators $\mathcal{H}$, the operators \[ \mathcal{H}_\sigma \;:\; \mathit{Pow}(\mathit{On}) \to \mathit{Pow}(\mathit{On}) \] by setting for all $X \subseteq \mathit{On}$: \[ \mathcal{H}_\sigma(X) \;:=\; \bigcap\, \{C(\alpha,\beta) : X \subseteq C(\alpha,\beta) \;\,\&\;\, \sigma < \alpha \}. \] \end{definition} In the following lemmas we collect those properties of these operators that will be needed later. For their proofs we refer to Buchholz \cite{buchholz92a}, in particular Lemma~4.6 and Lemma~4.7.
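For later reference we also note that the parameter extensions of a d-operator compose in a simple way. Directly from the definition of $\mathcal{H}[\mathfrak{m}]$ one computes, for all finite sets of ordinals $\mathfrak{m},\mathfrak{n}$ and all $X \in \mathit{Pow}(\mathit{On})$,
\[
\mathcal{H}[\mathfrak{m}][\mathfrak{n}](X) \;=\; \mathcal{H}[\mathfrak{m}](\mathfrak{n} \cup X) \;=\; \mathcal{H}(\mathfrak{m} \cup \mathfrak{n} \cup X) \;=\; \mathcal{H}[\mathfrak{n}][\mathfrak{m}](X),
\]
so iterated parameter extensions may be taken in any order.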
\begin{lemma} \label{l:o2} We have for all ordinals $\sigma,\tau$ and all $X \subseteq \mathit{On}$: \begin{enumerate}[(1)] \item $\mathcal{H}_\sigma$ is a d-operator. \item $\mathcal{H}_\sigma(\varnothing) = C(\sigma+1,0)$. \item $\tau \leq \sigma \;\;\mbox{and}\;\; \tau \in \mathcal{H}_\sigma(X) \quad\Longrightarrow\quad \psi(\tau) \in \mathcal{H}_\sigma(X)$. \item $\sigma < \tau \quad\Longrightarrow\quad \mathcal{H}_\sigma \subseteq \mathcal{H}_\tau$. \item $\mathcal{H}_\sigma(X) \cap \Omega$ is an ordinal. \end{enumerate} \end{lemma} \smallskip \begin{lemma} \label{l:o3} Let $\mathfrak{m}$ be a finite set of ordinals and $\sigma$ an ordinal such that the following conditions are satisfied: \[ \mathfrak{m} \subseteq C(\sigma+1,\psi(\sigma+1)) \cap \Omega \quad\mbox{and}\quad \sigma \in \mathcal{H}_\sigma[\mathfrak{m}](\varnothing). \] Then we have for ${\widehat{\alpha}} := \sigma+ \omega^{\Omega+\alpha}$ and ${\widehat{\beta}} := \sigma + \omega^{\Omega + \beta}$: \begin{enumerate}[(1)] \item $\alpha \in \mathcal{H}_\sigma[\mathfrak{m}](\varnothing) \quad\Longrightarrow\quad {\widehat{\alpha}} \in \mathcal{H}_\sigma[\mathfrak{m}](\varnothing) \;\;\mbox{and}\;\; \psi({\widehat{\alpha}}) \in \mathcal{H}_{\widehat{\alpha}}[\mathfrak{m}](\varnothing)$. \item $\alpha \in \mathcal{H}_\sigma[\mathfrak{m}](\varnothing) \;\;\mbox{and}\;\; \alpha < \beta \quad\Longrightarrow\quad \psi({\widehat{\alpha}}) < \psi({\widehat{\beta}})$. \item $\mathcal{H}_\sigma[\mathfrak{m}](\varnothing) \cap \Omega \,\subseteq\, \psi(\sigma+1)$. \end{enumerate} \end{lemma} \medskip \noindent\textbf{Convention}. From now on the letter $\mathcal{H}$ will be used as a metavariable that ranges over d-operators. \subsection{Operator controlled derivations in $\frs^*$} For formulas $F$, finite formula sets $\Gamma$, and d-operators $\mathcal{H}$ we define \[ \mathcal{H}[F] := \mathcal{H}[|F|] \quad\mbox{and}\quad \mathcal{H}[\Gamma] := \mathcal{H}[|\Gamma|]. 
\] Likewise, if each $\mathfrak{f}_1, \mathfrak{f}_2, \ldots, \mathfrak{f}_k$ is an ordinal, a formula or a finite set of formulas we define \[ \mathcal{H}[\mathfrak{f}_1,\mathfrak{f}_2] \;:=\; \mathcal{H}[\mathfrak{f}_1][\mathfrak{f}_2], \quad \mathcal{H}[\mathfrak{f}_1,\mathfrak{f}_2,\mathfrak{f}_3] \;:=\; \mathcal{H}[\mathfrak{f}_1,\mathfrak{f}_2][\mathfrak{f}_3], \; \ldots \] and one checks immediately that the order of the expressions $\mathfrak{f}_1, \mathfrak{f}_2, \ldots, \mathfrak{f}_k$ does not matter. \begin{definition}\rm \label{d:derivation} Given a derivation operator $\mathcal{H}$ and a finite set $\Theta$ of $\frs^*$ formulas, $\mathcal{H} \prov{\alpha}{\rho}\, \Theta$ is defined by recursion on $\alpha$: \begin{enumerate} \item If $\Theta$ is an axiom of $\frs^*$ and $\{\alpha\} \cup |\Theta| \subseteq \mathcal{H}(\varnothing)$, then $\mathcal{H} \prov{\alpha}{\rho}\, \Theta$. \item For the inference rules of $\frs^*$ we proceed according to the ordinal assignments listed below; in addition, we always demand that $\{\alpha\} \cup |\Theta| \subseteq \mathcal{H}(\varnothing)$ for the conclusions $\Theta$ of all rules.
\end{enumerate} \end{definition} \bigskip \noindent\textbf{Inference rules of $\frs^*$ with ordinal assignments} \allowdisplaybreaks \begin{eqnarray*} (\lor) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\,F,\, G} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F \lor G} & \begin{array}{r} \alpha_0 < \alpha \end{array} \\[4.5ex] (\land) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\,F \qquad \mathcal{H} \prov{\alpha_1}{\rho}\, \Gamma,\, G} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F \land G} & \begin{array}{r} \alpha_0, \alpha_1 < \alpha \end{array} \\[4.5ex] (\neg M) & \proofrule{\mathcal{H}[\beta] \prov{\alpha_\beta}{\rho}\,\Gamma,\, \neg M_\beta(a) \,\,\, \mbox{for all $\beta < \lambda$}} {\mathcal{H} \prov{\alpha}{\rho} \Gamma,\, \neg M_\lambda(a)} & \begin{array}{r} \alpha_\beta < \alpha \\[0.5ex] \mbox{$\lambda$ limit} \end{array} \\[4.5ex] (\exists) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\,M_\beta(a) \land F[a]} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \exists x F[x]} & \begin{array}{r} \alpha_0 < \alpha \\[0.5ex] \beta < \alpha \end{array} \\[4.5ex] (\forall) & \proofrule{\mathcal{H}[\beta] \prov{\alpha_\beta}{\rho}\, \Gamma,\, \neg M_\beta(a) \lor F[a] \,\,\,\mbox{for all $\beta < \Omega$ and all $a$}} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \forall x F[x]} & \begin{array}{r} \beta < \alpha_\beta < \alpha \end{array} \\[4.5ex] (\exists^\alpha) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\, M_\beta(a) \land F[a]} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \exists x^\alpha F[x]} & \begin{array}{r} \alpha_0 < \alpha \\[0.5ex] \beta \leq \alpha \end{array} \\[4.5ex] (\forall^\alpha) & \proofrule{\mathcal{H}[\beta] \prov{\alpha_\beta}{\rho}\, \Gamma,\, \neg M_\beta(a) \lor F[a]\,\,\, \mbox{for all $\beta \leq \alpha$ and all $a$}} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \forall x^\alpha F[x]} & \begin{array}{r} \alpha_\beta < \alpha \end{array} \\[4.5ex] (b\exists) & \proofrule{\mathcal{H}
\prov{\alpha_0}{\rho}\, \Gamma,\, b \in a \land F[b]} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, (\exists x \in a)F[x]} & \begin{array}{r} \alpha_0 < \alpha \end{array} \\[4.5ex] (b\forall) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\, b \notin a \lor F[b]\,\,\, \mbox{for all $b$}} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, (\forall x \in a)F[x]} & \begin{array}{r} \alpha_0 < \alpha \end{array} \\[4.5ex] (\exists_2) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\, F[U]} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \exists X F[X]} & \begin{array}{r} \alpha_0 < \alpha \end{array} \\[4.5ex] (\forall_2) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\, F[U]\,\,\, \mbox{for all $U$}} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \forall X F[X]} & \begin{array}{r} \alpha_0 < \alpha \end{array} \\[4.5ex] (\mathsf{Cut}) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\,F \qquad \mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\, \neg F}{\mathcal{H} \prov{\alpha}{\rho}\, \Gamma} & \begin{array}{r} \alpha_0 < \alpha \\[0.5ex] \mathit{rk}(F) < \rho \end{array} \\[4.5ex] (\Scal_0\mbox{-}\mathsf{Ref}) & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\, F} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \exists z F^{(z)}} & \begin{array}{r} \alpha_0, \Omega < \alpha \\[0.5ex] F \in \mathcal{S}_0 \end{array} \\[4.5ex] \mathsf{(BC)} & \proofrule{\mathcal{H} \prov{\alpha_0}{\rho}\, \Gamma,\, F^\beta} {\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \exists z^{\beta{+}\omega}F^{(z)}} & \begin{array}{r} \alpha_0 < \alpha \\[0.5ex] F \in \mathcal{S}_0 \end{array} \end{eqnarray*} \noindent This concludes the list of the rules of inference of $\frs^*$ with their respective ordinal assignments. The following lemmas collect a few basic properties of the system $\frs^*$. They are proved by straightforward induction on $\alpha$.
\begin{lemma}[Weakening] \[ \mathcal{H} \prov{\alpha}{\rho}\, \Gamma \;\;\&\;\; \alpha \leq \beta \in \mathcal{H}(\varnothing) \;\;\&\;\; \rho \leq \sigma \;\;\&\;\; |\Theta| \subseteq \mathcal{H}(\varnothing) \quad\Longrightarrow\quad \mathcal{H} \prov{\beta}{\sigma}\, \Gamma, \Theta. \] \end{lemma} \smallskip \begin{lemma}[Inversion] \label{l:inversion}\quad \begin{enumerate} \item $\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F_1 \lor F_2 \;\;\&\;\; \Omega \leq \mathit{rk}(F_1 \lor F_2) \quad\Longrightarrow\quad \mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F_1,\, F_2$. \item $\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F_1 \land F_2 \;\;\&\;\; \Omega \leq \mathit{rk}(F_1 \land F_2) \;\;\&\;\; i \in \{1,2\} \quad\Longrightarrow\quad \mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F_i$. \item $\mathcal{H} \prov{\alpha}{\rho}\, \Gamma, \forall x F[x] \;\;\&\;\; \beta \in \mathcal{H}(\varnothing) \cap \Omega \quad\Longrightarrow\quad \mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \neg M_\beta(a) \,\lor\, F[a]$. \item $\mathcal{H} \prov{\alpha}{\rho}\, \Gamma, \forall x F[x] \;\;\&\;\; \beta \in \mathcal{H}(\varnothing) \cap \Omega \quad\Longrightarrow\quad \mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \forall x^\beta F[x]$. \item $\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, (\forall x \in a)F[x] \;\;\&\;\; \Omega \leq \mathit{rk}(F[u]) \quad\Longrightarrow\quad \mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, b \notin a \,\lor\, F[b]$. \item $\mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, \forall X F[X] \;\;\&\;\; \Omega \leq \mathit{rk}(F[U]) \quad\Longrightarrow\quad \mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F[U]$. \end{enumerate} \end{lemma} \medskip \begin{lemma}[Boundedness] \label{l:boundedness} Assume $F \in \mathcal{S}$, $\alpha \leq \beta < \Omega$, and $\beta \in \mathcal{H}(\varnothing)$. Then we have for all $\Gamma$ that \[ \mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F \quad\Longrightarrow\quad \mathcal{H} \prov{\alpha}{\rho}\, \Gamma,\, F^\beta.
\] \end{lemma} This boundedness lemma is one of the main ingredients of impredicative cut elimination and will play a central role in the proof of the collapsing theorem in Subsection~\ref{ss:collapsing}. However, before dealing with cut elimination we turn to some embedding results. \subsection{Embedding} In the next step we show that $\mathsf{KP} + (\pooca^*)$ can be embedded into $\frs^*$; as it will turn out, finite derivations in $\mathsf{KP} + (\pooca^*)$ translate into uniform infinitary derivations in $\frs^*$. We begin by showing (TnD). \begin{lemma}\label{l:ausvoll} For any $\mathcal{H}$ and any $F$ we have that \[ \mathcal{H}[F] \prov{\mathit{rk}(F) \mathrel{\#} \mathit{rk}(F)}{0}\, \neg F,\, F \] \end{lemma} \begin{proof} We proceed by induction on $\mathit{rk}(F)$. If $F$ is from $\mathcal{B}$, then $\neg F,\, F$ is an axiom, and we are done. Now suppose that $F$ is of the form $\exists x G[x]$ and observe that, for any $\beta < \Omega$, the rank of the formula $M_\beta(a) \land G[a]$ is independent of $a$. Therefore, we can set \[ \rho_\beta \;:=\; \mathit{rk}(M_\beta(a) \land G[a]) \quad\mbox{and}\quad \rho \;:=\; \mathit{rk}(\exists x G[x]) \] and immediately obtain $\beta \,<\, \rho_\beta \mathrel{\#} \rho \,<\, \rho \mathrel{\#} \rho$. Moreover, \[ \mathcal{H}[M_\beta(a) \land G[a]] \,=\, \mathcal{H}[\exists x G[x],\beta]. \] Thus the induction hypothesis yields \[ \mathcal{H}[\exists x G[x],\beta] \prov{\rho_\beta \mathrel{\#} \rho_\beta}{0}\, \neg M_\beta(a) \lor \neg G[a],\, M_\beta(a) \land G[a] \] and an application of $(\exists)$ gives us \[ \mathcal{H}[\exists x G[x],\beta] \prov{\rho_\beta \mathrel{\#} \rho}{0}\, \neg M_\beta(a) \lor \neg G[a],\, \exists x G[x]. \] This is so for all $\beta < \Omega$ and all $a$. Therefore, by means of $(\forall)$ we obtain \[ \mathcal{H}[\exists x G[x]] \prov{\rho \mathrel{\#} \rho}{0}\, \forall x \neg G[x],\, \exists x G[x], \] as desired. The other cases are similar.
\end{proof} \smallskip \begin{lemma}[Lifting] \label{l:lifting} For $\rho := \mathit{rk}(\exists x^\alpha F[x])$ and all $\mathcal{H}$ we have that \[ \mathcal{H}[F[u]] \prov{\rho \mathrel{\#} \rho}{0}\, \neg \exists x^\alpha F[x],\, \exists x F[x]. \] \end{lemma} \begin{proof} For $\rho_\beta := \mathit{rk}(M_\beta(a) \land F[a])$ we obtain from the previous lemma that \[ \mathcal{H}[F[u],\beta] \prov{\rho_\beta \mathrel{\#} \rho_\beta}{0}\, \neg M_\beta(a) \,\lor\, \neg F[a],\, M_\beta(a) \,\land\, F[a] \] for all $\beta \leq \alpha$. By applying $(\exists)$ and $(\forall^\alpha)$ our assertion follows. \end{proof} \smallskip \begin{lemma}[$\in$-induction] For every formula $F[u]$ we have \[ \mathcal{H}[F[u]] \prov{\sigma+\Omega+1}{\Omega}\, \forall x((\forall y \in x)F[y] \to F[x]) \,\to\, \forall x F[x], \] where $\sigma := \omega^{\mathit{rk}(\forall x((\forall y \in x)F[y] \to F[x]))}$. \end{lemma} \begin{proof} We set \[ G \;:=\; \forall x((\forall y \in x)F[y] \,\to\, F[x]), \quad \sigma \;:=\; \omega^{\mathit{rk}(G)}, \quad \gamma_\alpha \;:=\; \sigma \mathrel{\#} \omega^\alpha \] and prove in a first step \[ \mathcal{H}[F,\alpha] \prov{\gamma_\alpha}{\Omega}\, \neg G,\, \neg M_\alpha(a),\, F[a] \tag{*} \] by induction on $\alpha$. If $\alpha = 0$, then simply apply Axiom~(\ref{a:5}). If $\alpha$ is a limit, then the assertion is immediate from the induction hypothesis by means of inference rule $(\neg M)$. Now suppose $\alpha = \beta + 1$. Then $\mathcal{H}[F,\beta] = \mathcal{H}[F,\alpha]$ and the induction hypothesis gives us \[ \mathcal{H}[F,\alpha] \prov{\gamma_\beta}{\Omega}\, \neg G,\, \neg M_\beta(b),\, F[b]. \] In view of Axiom~(\ref{a:12}) we also have \[ \mathcal{H}[F,\alpha] \prov{0}{0}\, \neg M_\alpha(a),\, b \notin a,\, M_\beta(b). \] Therefore, a cut yields \[ \mathcal{H}[F,\alpha] \prov{\gamma_\beta +1}{\Omega}\, \neg G,\, \neg M_\alpha(a),\, b \notin a,\, F[b].
\] Using $(\lor)$ and $(b\forall)$ we can continue with \[ \mathcal{H}[F,\alpha] \prov{\gamma_\beta +3}{\Omega}\, \neg G,\, \neg M_\alpha(a),\, (\forall x \in a)F[x]. \] On the other hand, the previous lemma also implies that \[ \mathcal{H}[F,\alpha] \prov{\sigma}{\Omega}\, \neg F[a],\, F[a]. \] From the previous two assertions we obtain \[ \mathcal{H}[F,\alpha] \prov{\gamma_\beta +4}{\Omega}\, \neg G,\, \neg M_\alpha(a),\, (\forall x \in a)F[x] \,\land\, \neg F[a],\, F[a] \] via $(\land)$ and from the latter \[ \mathcal{H}[F,\alpha] \prov{\gamma_\beta +5}{\Omega}\, \neg G,\, \neg M_\alpha(a),\, M_\alpha(a) \,\land\, ((\forall x \in a)F[x] \,\land\, \neg F[a]),\, F[a] \] via Axiom~(\ref{a:1}) and $(\land)$. Thus $(\exists)$ and the definition of $G$ lead to \[ \mathcal{H}[F,\alpha] \prov{\gamma_\alpha}{\Omega}\, \neg G,\, \neg M_\alpha(a),\, F[a]. \] Therefore, (*) is proved. The rest is simple. The rule $(\lor)$ applied to (*) yields \[ \mathcal{H}[F,\alpha] \prov{\gamma_\alpha}{\Omega}\, \neg G,\, \neg M_\alpha(a) \,\lor\, F[a] \] for all $\alpha < \Omega$ and all $a$. Hence we are in the position to apply $(\forall)$ and deduce \[ \mathcal{H}[F] \prov{\sigma+\Omega}{\Omega}\, \neg G,\, \forall x F[x]. \] From this our assertion follows by applying $(\lor)$. \end{proof} \smallskip \begin{lemma} We have for all $\mathcal{H}$, all $a,b$, and all $\alpha,\beta$: \begin{enumerate}[(1)] \item $\mathcal{H} \prov{3}{0}\, M_1(\underline{\varnothing}) \,\land\, (\forall x \in \underline{\varnothing})(x \neq x)$. \item $\mathcal{H} \prov{0}{0} M_{\omega+1}(\underline{\omega})$. \item $\mathcal{H} \prov{0}{0}\, \neg M_\alpha(a),\, (a \in \underline{\omega} \,\lra\, \mathit{FinOrd}[a])$. \item $\mathcal{H}[\alpha,\beta] \prov{\omega^{\beta+2}}{\omega\beta+\omega+\omega}\, \neg M_\alpha(a),\, \neg M_\beta(b),\, \exists z(a \in z \,\land\, b \in z)$\quad for $\alpha \leq \beta$.
\item $\mathcal{H}[\alpha] \prov{\omega^{\alpha+2}}{\omega\alpha + \omega}\, \neg M_\alpha(a),\, \exists z(\forall y \in a)(\forall x \in y)(x \in z)$. \end{enumerate} \end{lemma} \begin{proof} These five assertions are immediate consequences of the respective axioms and Lemma~\ref{l:lifting}. \end{proof} If $\vec{\alpha} = \alpha_1,\ldots,\alpha_k$ and $\vec{a} = a_1,\ldots,a_k$, then we write $\neg M_{\vec{\alpha}}(\vecc{a})$ for the set \[ \{\neg M_{\alpha_1}(a_1),\ldots,\neg M_{\alpha_k}(a_k) \}. \] Now we turn to $\Delta_0$ separation and our form of $\Pi^1_1$ class comprehension. Again, they are basically given by the axioms. However, for ($\Delta_0$-Sep) a cut is needed. \smallskip \begin{lemma}[($\Delta_0$-Sep) and $(\pooca^*)$] \label{l:sepucomp} Let $A[u,\vecc{v}]$ and $B[u,\vec{v},W]$ be $\Delta_0$ formulas of $\Lcal^{set}_2$ whose free set variables are from the list $u,\vec{v}$. Then we have for all $\mathcal{H}$, all $\alpha,\vec{\beta}$, and all $a, \vec{b}$: \begin{enumerate}[(1)] \item $\mathcal{H}[\alpha,\vecc{\beta}] \prov{\rho}{\Omega}\, \neg M_\alpha(a),\, \neg M_{\vec{\beta}}(\vecc{b}),\, \exists z( z = \{x \in a : A[x,\vecc{b}]\})$, \; where $\rho := \omega^{\max(\alpha,\vecc{\beta})+2}$. \item $\mathcal{H}[\vecc{\beta}] \prov{0}{0}\, \neg M_{\vec{\beta}}(\vecc{b}),\, \exists Z(\forall x \in \underline{\omega})(Z(x) \,\lra\, \forall Y B[x,\vec{b},Y])$. \end{enumerate} \end{lemma} \begin{proof} The first assertion is obtained from Axiom~(\ref{a:15}) and Lemma~\ref{l:lifting} by a cut. The second is a direct consequence of Axiom~(\ref{a:16}). \end{proof} \begin{lemma}[$\mathcal{S}_0$ reflection] \label{l:siref} Let $A[\vecc{u}]$ be a formula from $\mathcal{S}_0$ whose free set variables are from the list $\vec{u}$.
Then we have for all $\mathcal{H}$, all $\vec{\alpha}$, and all $\vec{a}$ that \[ \mathcal{H}[\vec{\alpha}] \prov{\sigma}{0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, A[\vecc{a}] \,\to\, \exists z A^{(z)}[\vecc{a}], \] where $\sigma \,:=\, \omega^{\mathit{rk}(A[\vecc{u}]) +1}$. \end{lemma} \begin{proof} In view of Lemma~\ref{l:ausvoll} we have \[ \mathcal{H}[\vec{\alpha}] \prov{\rho}{0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \neg A[\vec{a}],\, A[\vec{a}] \] for $\rho := \mathit{rk}(A[\vec{a}]) \mathrel{\#} \mathit{rk}(A[\vec{a}])$. Since $A[\vec{u}]$ is supposed to be proper, we know that $\Omega \leq \rho$. Now we apply $(\Scal_0\mbox{-}\mathsf{Ref})$ and obtain \[ \mathcal{H}[\vec{\alpha}] \prov{\rho+1}{0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \neg A[\vec{a}],\, \exists z A^{(z)}[\vec{a}]. \] Thus an application of $(\lor)$ yields our assertion. \end{proof} \smallskip \begin{theorem}[Embedding] \label{t:embedding} Let $\Gamma[u_1,\ldots,u_k]$ be a finite set of $\Lcal^\star$ formulas whose free set variables are exactly those indicated and suppose that \[ \mathsf{KP} + (\pooca^*) \,\vdash\, \Gamma[u_1,\ldots,u_k]. \] Then there exist $m,n < \omega$ such that \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m}}{\Omega+n}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma[\vecc{a}] \] for all $\mathcal{H}$, all $\vec{\alpha} = \alpha_1,\ldots,\alpha_k$, and all $\vecc{a} = a_1,\ldots,a_k$. \end{theorem} \begin{proof} We proceed by induction on the length of the derivation of $\Gamma[\vecc{a}]$ in the Tait-style formalization of $\mathsf{KP} + (\pooca^*)$. If $\Gamma[\vecc{a}]$ is an axiom, then the assertion follows from Lemma~\ref{l:ausvoll} -- Lemma~\ref{l:siref}. Observe that $\Delta_0$ collection is a special case of $\mathcal{S}_0$ reflection. \smallskip As a first example of an inference rule we treat $(\forall)$.
Then $\Gamma[\vecc{u}]$ contains a formula of the form $\forall x A[x,\vecc{u}]$ and in $\mathsf{KP} + (\pooca^*)$ there is a shorter derivation of \[ \Gamma'[\vecc{u}],\, A[v,\vecc{u}], \] where $v$ is a fresh set variable not occurring in $\Gamma[\vecc{u}]$ and $\Gamma'[\vecc{u}]$ is either $\Gamma[\vecc{u}]$ or $\Gamma[\vecc{u}] \setminus \{\forall x A[x,\vecc{u}]\}$. Thus the induction hypothesis provides us with $m_0,n_0 < \omega$ such that \[ \mathcal{H}[\vec{\alpha},\beta] \prov{\omega^{\Omega+m_0}}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \neg M_\beta(b),\, \Gamma'[\vecc{a}],\, A[b,\vecc{a}] \] for all $\vec{\alpha},\beta$ and all $\vec{a},b$. Now we apply $(\lor)$ and then $(\forall)$ and obtain \[ \mathcal{H}[\vec{\alpha}] \prov{\omega^{\Omega+m}}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma'[\vecc{a}],\, \forall xA[x,\vecc{a}] \] for $m := m_0+1$ and arbitrary $\vec{\alpha}$ and $\vec{a}$. Since $\forall x A[x,\vecc{a}]$ belongs to $\Gamma[\vecc{a}]$, this is our assertion. \smallskip Our second example is the rule $(\exists)$. Then $\Gamma[\vec{u}]$ contains a formula of the form $\exists x A[x,\vecc{u}]$ and in $\mathsf{KP} + (\pooca^*)$ there is a shorter derivation of \[ \Gamma'[\vecc{u}],\, A[b,\vecc{u}], \] for some set term $b$ and $\Gamma'[\vecc{u}]$ is either $\Gamma[\vecc{u}]$ or $\Gamma[\vecc{u}] \setminus \{\exists x A[x,\vecc{u}]\}$. Now we have to distinguish three cases: \smallskip \noindent (i) $b$ is the term $\underline{\varnothing}$ or the term $\underline{\omega}$. Then the induction hypothesis supplies us with $m_0,n_0 < \omega$ such that \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m_0}}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma[\vec{a}],\, A[b,\vecc{a}] \] for all $\vec{\alpha}$ and all $\vec{a}$.
Together with Axiom~(\ref{a:6}) or Axiom~(\ref{a:8}) and $(\land)$ we thus obtain \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m_0}+1}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma[\vec{a}],\, M_\beta(b) \land A[b,\vecc{a}] \] (where $\beta$ is $1$ or $\omega+1$) and thus, by $(\exists)$, \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m}}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma[\vec{a}],\, \exists x A[x,\vecc{a}] \] for $m := m_0+1$ and arbitrary $\vec{\alpha}$ and $\vec{a}$. \smallskip \noindent(ii) $b$ is the variable $u_i$, $1 \leq i \leq k$. Then the induction hypothesis yields \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m_0}}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma[\vec{a}],\, A[a_i,\vecc{a}] \] for suitable $m_0,n_0 < \omega$, all $\vec{\alpha}$ and all $\vec{a}$. Now Axiom~(\ref{a:1}) and $(\land)$ give us \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m_0}+1}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma[\vec{a}],\, M_{\alpha_i}(a_i) \land A[a_i,\vecc{a}] \] and from there we proceed as in (i). \smallskip \noindent(iii) $b$ is a set variable not occurring in $\Gamma[\vecc{u}]$. Then the induction hypothesis supplies us with $m_0,n_0 < \omega$ such that \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m_0}}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\,\neg M_1(\underline{\varnothing}),\, \Gamma[\vec{a}],\, A[\underline{\varnothing},\vecc{a}] \] for all $\vec{\alpha}$ and all $\vec{a}$. Now we first use Axiom~(\ref{a:6}) and a cut, and then Axiom~(\ref{a:6}) and $(\land)$, to derive \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m_0}+2}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma[\vec{a}],\, M_1(\underline{\varnothing}) \land A[\underline{\varnothing},\vecc{a}] \] from where we can proceed again as in (i).
\smallskip In all three cases we obtain \[ \mathcal{H}[\vecc{\alpha}] \prov{\omega^{\Omega+m}}{\Omega+n_0}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, \Gamma[\vec{a}],\, \exists x A[x,\vecc{a}] \] for $m := m_0+1$ and arbitrary $\vec{\alpha}$ and $\vec{a}$. Since $\exists x A[x,\vecc{a}]$ belongs to $\Gamma[\vecc{a}]$, this finishes $(\exists)$. \smallskip All other cases are straightforward from the induction hypothesis or can be treated in the same vein. \end{proof} \subsection{Predicative cut elimination} The rules of inference of $\frs^*$ can be divided into two classes: (i) In all rules except $(\Scal_0\mbox{-}\mathsf{Ref})$ the principal formula is more complex than the respective minor formulas. We may, therefore, consider these rules as \emph{predicative} rules. (ii) The rule $(\Scal_0\mbox{-}\mathsf{Ref})$, on the other hand, transforms (in the general case) a formula of rank greater than $\Omega$ into one of rank $\Omega$. This is a sort of impredicativity and we consider $(\Scal_0\mbox{-}\mathsf{Ref})$ as an \emph{impredicative} inference. Also remember that the principal formulas of the axioms of $\frs^*$ belong to $\mathcal{B}$ and, therefore, have rank less than $\Omega$. These formulas are an obstacle for cut elimination below $\Omega$. However, above $\Omega$ we can follow the pattern of the usual (predicative) cut elimination. In this section we sketch how all cuts with cut formulas of ranks greater than $\Omega$ can be eliminated by standard methods as presented, for example, in Sch\"utte \cite{schuette77} or Buchholz \cite{buchholz92a}. \begin{lemma} Let $F$ be a formula of the form $G_1 \lor G_2$, $(\exists x \in a)G[x]$, $\exists x^\alpha G[x]$, $\exists x G[x]$ or $\exists X G[X]$ and assume that the rank $\rho$ of $F$ is greater than $\Omega$.
Then we have: \[ \mathcal{H} \prov{\alpha}{\rho}\, \Gamma, \neg F \;\;\mbox{and}\;\; \mathcal{H} \prov{\beta}{\rho}\, \Theta, F \quad\Longrightarrow\quad \mathcal{H} \prov{\alpha + \beta}{\rho}\, \Gamma,\, \Theta. \] \end{lemma} \begin{proof} By straightforward induction on $\beta$. \end{proof} \bigskip \begin{theorem}[Predicative cut elimination] \label{t:pred-cut-el} We have the following two elimination results for all $\mathcal{H}$, $\Gamma$, $\alpha$, and all $n < \omega$: \begin{enumerate}[(1)] \item $\mathcal{H} \prov{\alpha}{\Omega + n +2}\, \Gamma \quad\Longrightarrow\quad \mathcal{H} \prov{\omega^\alpha}{\Omega + n + 1}\, \Gamma$. \item $\mathcal{H} \prov{\alpha}{\Omega + n +1}\, \Gamma \quad\Longrightarrow\quad \mathcal{H} \prov{\omega_n(\alpha)}{\Omega + 1}\, \Gamma$. \end{enumerate} Recall that $\omega_0(\alpha) := \alpha$ and $\omega_{n+1}(\alpha) := \omega^{\omega_n(\alpha)}$. \end{theorem} \begin{proof} The proof of the first assertion is standard. For details see, for example, Buchholz \cite{buchholz92a}. The second assertion is an immediate consequence of the first. \end{proof} \subsection{Collapsing} \label{ss:collapsing} We begin this subsection by showing that specific operator controlled derivations of finite sets of formulas from $\mathcal{S} \cup \mathcal{B}$ in which all cut formulas have ranks $\leq \Omega$ can be collapsed into derivations of depth and cut rank less than $\Omega$. This technique -- called collapsing technique -- is a cornerstone of \emph{impredicative proof theory}. See Buchholz \cite{buchholz92a} for more on general collapsings. Together with the results of the previous subsection this collapsing theorem will then lead to our main theorem about the $\mathcal{S} \cup \mathcal{B}$ fragment of $\frs^*$ derivations and the $\Sigma$ formulas provable in $\mathsf{KP} + (\pooca^*)$.
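Schematically, and anticipating the collapsing theorem of this subsection, the two reduction steps combine as follows for a finite set $\Gamma$ of formulas from $\mathcal{S} \cup \mathcal{B}$ that satisfies the hypotheses of the collapsing theorem:
\[
\mathcal{H}_\sigma[\Gamma] \provv{\alpha}{\Omega+n+1}\, \Gamma
\;\Longrightarrow\;
\mathcal{H}_\sigma[\Gamma] \provv{\beta}{\Omega+1}\, \Gamma
\;\Longrightarrow\;
\mathcal{H}_{\widehat{\beta}}[\Gamma] \provv{\psi({\widehat{\beta}})}{\psi({\widehat{\beta}})}\, \Gamma,
\]
where $\beta := \omega_n(\alpha)$ and ${\widehat{\beta}} := \sigma + \omega^{\Omega+\beta}$; the first implication is predicative cut elimination, the second is collapsing.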
For the collapsing theorem we work with the derivation operators $\mathcal{H}_\sigma$ introduced in Definition~\ref{d:deriv-op}. \bigskip \begin{theorem}[Collapsing] \label{t:collapsing} Let $\Gamma$ be a finite set of formulas from $\mathcal{S} \cup \mathcal{B}$ and suppose that \[ |\Gamma| \subseteq C(\sigma+1,\psi(\sigma+1)) \quad\mbox{and}\quad \sigma \in \mathcal{H}_\sigma[\Gamma](\varnothing). \] Then we have, for all $\alpha$, \[ \mathcal{H}_\sigma[\Gamma] \provv{\alpha}{\Omega+1}\, \Gamma \quad\Longrightarrow\quad \mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha}})}{\psi({\widehat{\alpha}})}\, \Gamma, \] where ${\widehat{\alpha}} := \sigma + \omega^{\Omega+\alpha}$. \end{theorem} \begin{proof}\setcounter{equation}{0} By induction on $\alpha$. Assume $\mathcal{H}_\sigma[\Gamma] \provv{\alpha}{\Omega+1}\, \Gamma$. Then we certainly have (see Lemma~\ref{l:o2} and Lemma~\ref{l:o3}): \begin{gather} |\Gamma | \cup \{\alpha\} \;\subseteq\; \mathcal{H}_\sigma[\Gamma](\varnothing) \;\subseteq\; \mathcal{H}_{\widehat{\alpha}}[\Gamma](\varnothing), \\[1.5ex] {\widehat{\alpha}} \in \mathcal{H}_\sigma[\Gamma](\varnothing) \quad\mbox{and}\quad \psi({\widehat{\alpha}}) \in \mathcal{H}_{\widehat{\alpha}}[\Gamma](\varnothing). \end{gather} Now we distinguish cases according to the last inference of $\mathcal{H}_\sigma[\Gamma] \provv{\alpha}{\Omega+1}\, \Gamma$ and note that this cannot be $(\forall)$. In the following we confine our attention to the interesting cases; all others can be dealt with in a similar manner. \medskip \noindent (i) $\Gamma$ is an axiom. Then $\mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha}})}{\psi({\widehat{\alpha}})}\, \Gamma$ follows from (1) and (2). \medskip \noindent (ii) The last inference was $(\forall^\tau)$.
Then $\Gamma$ contains a formula of the form $\forall x^\tau F[x]$ and we have \begin{gather} \mathcal{H}_\sigma[\Gamma,\xi] \provv{\alpha_\xi}{\Omega+1}\, \Gamma,\, \neg M_\xi(a) \,\lor\, F[a] \end{gather} with $\alpha_\xi < \alpha$ for all $\xi \leq \tau$ and all $a$. We know $\tau \in \mathcal{H}_\sigma[\Gamma](\varnothing)$. Lemma~\ref{l:o1}, Lemma~\ref{l:o2}(5), and (1) yield \begin{gather} \xi \in \mathcal{H}_\sigma[\Gamma](\varnothing) \subseteq \mathcal{H}_{\widehat{\alpha}}[\Gamma](\varnothing), \quad \mathcal{H}_\sigma[\Gamma,\xi] = \mathcal{H}_\sigma[\Gamma], \quad \mathcal{H}_{\widehat{\alpha}}[\Gamma,\xi] = \mathcal{H}_{\widehat{\alpha}}[\Gamma]. \end{gather} Hence (3) gives us \[ \mathcal{H}_\sigma[\Gamma] \provv{\alpha_\xi}{\Omega+1}\, \Gamma,\, \neg M_\xi(a) \,\lor\, F[a] \] and by the induction hypothesis we obtain that \[ \mathcal{H}_{\widehat{\alpha_\xi}}[\Gamma] \provv{\psi({\widehat{\alpha_\xi}})}{\psi({\widehat{\alpha_\xi}})}\, \Gamma,\, \neg M_\xi(a) \,\lor\, F[a] \] and, therefore, \begin{gather} \mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha_\xi}})}{\psi({\widehat{\alpha_\xi}})}\, \Gamma,\, \neg M_\xi(a) \,\lor\, F[a], \end{gather} always for all $\xi \leq \tau$ and all $a$. Then $\mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha}})}{\psi({\widehat{\alpha}})}\, \Gamma$ follows immediately by an application of $(\forall^\tau)$. \medskip \noindent (iii) The last inference was $(\exists)$. Then $\Gamma$ contains a formula of the form $\exists x F[x]$ and we have \begin{gather} \mathcal{H}_\sigma[\Gamma] \provv{\alpha_0}{\Omega+1}\, \Gamma,\, M_\beta(a) \,\land\, F[a] \end{gather} for some $\alpha_0 < \alpha$ and some $\beta < \alpha$. In view of (6) we have $\alpha_0,\beta \in \mathcal{H}_\sigma[\Gamma](\varnothing)$. 
The induction hypothesis yields \[ \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\psi({\widehat{\alpha_0}})}{\psi({\widehat{\alpha_0}})}\, \Gamma,\, M_\beta(a) \,\land\, F[a], \] thus also \[ \mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha_0}})}{\psi({\widehat{\alpha_0}})}\, \Gamma,\, M_\beta(a) \,\land\, F[a]. \] Now recall from Lemma~\ref{l:o3}(3) that \[ \mathcal{H}_\sigma[\Gamma](\varnothing) \cap \Omega \,\subseteq\, \psi(\sigma+1) \,\leq\, \psi({\widehat{\alpha}}) \] and conclude that $\beta < \psi({\widehat{\alpha}})$. In addition, $\psi({\widehat{\alpha_0}}) < \psi({\widehat{\alpha}})$ follows from Lemma~\ref{l:o3}(2). Therefore, an application of $(\exists)$ yields $\mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha}})}{\psi({\widehat{\alpha}})}\, \Gamma$. \medskip \noindent (iv) The last inference was $(\Scal_0\mbox{-}\mathsf{Ref})$. Then there exist an $F \in \mathcal{S}_0$ and an $\alpha_0 < \alpha$ such that $\exists z F^{(z)}$ belongs to $\Gamma$ and \begin{gather} \mathcal{H}_\sigma[\Gamma] \provv{\alpha_0}{\Omega+1}\, \Gamma,\, F. \end{gather} The induction hypothesis immediately yields \[ \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\psi({\widehat{\alpha_0}})}{\psi({\widehat{\alpha_0}})}\, \Gamma,\, F. \] Since $F \in \mathcal{S}_0$, we are in a position to make use of Lemma~\ref{l:boundedness} and obtain \begin{gather*} \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\psi({\widehat{\alpha_0}})}{\psi({\widehat{\alpha_0}})}\, \Gamma,\, F^{\psi({\widehat{\alpha_0}})}. \end{gather*} Now we apply $\mathsf{(BC)}$ to derive \begin{gather} \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\psi({\widehat{\alpha_0}})}{\psi({\widehat{\alpha_0}})}\, \Gamma,\, \exists z^\beta F^{(z)} \end{gather} for $\beta := \psi({\widehat{\alpha_0}}) + \omega$.
On the other hand, Lemma~\ref{l:lifting} yields \begin{gather} \mathcal{H}_\sigma[\Gamma] \provv{\rho \mathrel{\#} \rho}{0}\, \neg \exists z^\beta F^{(z)},\, \exists z F^{(z)} \end{gather} for $\rho := \mathit{rk}(\exists z^\beta F^{(z)})$. Furthermore, since $F$ is from $\mathcal{S}_0$ we can easily see that the rank of $\exists z^\beta F^{(z)}$ is smaller than $\omega\beta+\omega$. Hence some trivial ordinal computations yield that $\psi({\widehat{\alpha_0}}), \rho \mathrel{\#} \rho < \psi({\widehat{\alpha}})$. Therefore, a cut applied to (8) and (9) implies $\mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha}})}{\psi({\widehat{\alpha}})}\, \Gamma$. \medskip \noindent (v) The last inference was $(\mathsf{Cut})$. Then there exist a formula $F$ and an ordinal $\alpha_0 < \alpha$ such that $\mathit{rk}(F) < \Omega+1$ and \begin{gather} \mathcal{H}_\sigma[\Gamma] \provv{\alpha_0}{\Omega+1}\, \Gamma,\, F \quad\mbox{and}\quad \mathcal{H}_\sigma[\Gamma] \provv{\alpha_0}{\Omega+1}\, \Gamma,\, \neg F. \end{gather} Therefore, $\alpha_0 \in \mathcal{H}_\sigma[\Gamma](\varnothing)$, $|F| \subseteq \mathcal{H}_\sigma[\Gamma](\varnothing) \cap \Omega$ and, by Lemma~\ref{l:o3}(3), \[ |F| \,\subseteq\, \mathcal{H}_\sigma[\Gamma](\varnothing) \cap \Omega \,\subseteq\, \psi(\sigma+1) \,\leq\, \psi({\widehat{\alpha}}). \] We distinguish two cases. \medskip \noindent (v.1) $\mathit{rk}(F) < \Omega$. Then $\mathit{rk}(F) < \psi({\widehat{\alpha}})$ according to the line above. Furthermore, $\Gamma \cup \{F,\neg F\} \subseteq \mathcal{S} \cup \mathcal{B}$ and thus the induction hypothesis yields \begin{gather*} \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\psi({\widehat{\alpha_0}})}{\psi({\widehat{\alpha_0}})}\, \Gamma,\, F \quad\mbox{and}\quad \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\psi({\widehat{\alpha_0}})}{\psi({\widehat{\alpha_0}})}\, \Gamma,\, \neg F. \end{gather*} As above, we can easily convince ourselves that $\psi({\widehat{\alpha_0}}) < \psi({\widehat{\alpha}})$.
Hence, $(\mathsf{Cut})$ yields $\mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha}})}{\psi({\widehat{\alpha}})}\, \Gamma$. \medskip \noindent (v.2) $\mathit{rk}(F) = \Omega$. Then $F$ is of the form $\exists xG[x]$ or $\forall x G[x]$ and $G[u]$ is a $\Delta_0$ formula of $\Lcal^{set}_2$. We assume that $F$ is $\exists x G[x]$. Clearly, $\exists x G[x]$ belongs to $\mathcal{S}_0$. Hence, the induction hypothesis applied to the left hand side of (10) yields \begin{gather*} \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\beta}{\beta}\, \Gamma,\, \exists x G[x] \end{gather*} for $\beta := \psi({\widehat{\alpha_0}})$. Since $\beta \in \mathcal{H}_{\widehat{\alpha_0}}[\Gamma](\varnothing)$ (see Lemma~\ref{l:o2}), we can apply Lemma~\ref{l:boundedness} and obtain \begin{gather} \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\beta}{\beta}\, \Gamma,\, \exists x^\beta G[x]. \end{gather} Furthermore, applying Lemma~\ref{l:inversion} to the right hand side of (10) yields \begin{gather} \mathcal{H}_{\widehat{\alpha_0}}[\Gamma] \provv{\alpha_0}{\Omega+1}\, \Gamma,\, \forall x^\beta \neg G[x]. \end{gather} Now we exploit the fact that $\forall x^\beta \neg G[x]$ is from $\mathcal{B}$ and convince ourselves that the induction hypothesis can be applied to (12). So we obtain \begin{gather*} \mathcal{H}_\gamma[\Gamma] \provv{\psi(\gamma)}{\psi(\gamma)}\, \Gamma,\, \forall x^\beta \neg G[x] \end{gather*} with $\gamma = {\widehat{\alpha_0}} + \omega^{\Omega+\alpha_0} = \sigma + \omega^{\Omega+\alpha_0} + \omega^{\Omega+\alpha_0} < \sigma + \omega^{\Omega + \alpha} = {\widehat{\alpha}}$. Moreover, it is easy to check that $\gamma \in \mathcal{H}_\sigma[\Gamma](\varnothing)$, hence $\psi(\gamma) < \psi({\widehat{\alpha}})$ according to Lemma~\ref{l:o3}. An easy computation, similar to above, also shows that \[\mathit{rk}(\exists x^\beta G[x]) < \psi({\widehat{\alpha}}). 
\] Therefore, $(\mathsf{Cut})$ applied to (11) and the collapsed version of (12) displayed above establishes $\mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha}})}{\psi({\widehat{\alpha}})}\, \Gamma$. \medskip \noindent (vi) All other cases are trivial or can be dealt with similarly. This completes the proof of the collapsing theorem. \end{proof} \subsection{Finite bounds for the lengths of formulas in infinitary derivations} Clearly, every proof $\mathcal{P}$ in the theory $\mathsf{KP} + (\pooca^*)$ is finite and, therefore, there exists a natural number $p$ such that every formula in $\mathcal{P}$ has length less than $p$. We now sketch how this bound $p$ carries over to the ``proof-theoretically treated'' derivation stemming from $\mathcal{P}$. First we introduce a suitable definition of the length of an $\Lcal^\star$ formula. \begin{definition}\rm The \emph{length} $\ell(F)$ of an $\Lcal^\star$ formula $F$ is inductively defined as follows: \begin{enumerate} \item If $F$ is of the form $(a \in b)$, $(a \notin b)$, $U(a)$, $\neg U(a)$, $M_\alpha(a)$ or $\neg M_\alpha(a)$, then $\ell(F) := 0$. \item If $F$ is of the form $(G \lor H)$ or $(G \land H)$, then $\ell(F) := \max(\ell(G),\ell(H)) +1$. \item If $F$ is of the form $\exists x G[x]$, $\forall x G[x]$, $\exists x^\alpha G[x]$, $\forall x^\alpha G[x]$, $(\exists x \in a)G[x]$ or $(\forall x \in a)G[x]$, then $\ell(F) := \ell(G[u]) +1$. \item If $F$ is of the form $\exists X G[X]$ or $\forall X G[X]$, then $\ell(F) := \ell(G[U]) +1$. \end{enumerate} \end{definition} Based on this definition we now introduce a refined notion of derivability in $\frs^*$.
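To illustrate the length function before putting it to use: the clauses above yield, for instance,
\[ \ell(\exists x\,((a \in x) \land U(a))) \;=\; \max(\ell(a \in x),\ell(U(a))) + 2 \;=\; 2. \]
Note that, by the third clause, an ordinal-bounded quantifier $\exists x^\alpha$ contributes the same amount to the length as its unbounded counterpart $\exists x$.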
Given a natural number $p$ we let \[ \mathcal{H} \provv{\alpha}{\rho,\,p}\, \Gamma \] be defined as $\mathcal{H} \provv{\alpha}{\rho}\, \Gamma$ in Definition~\ref{d:derivation} with the additional requirements that \begin{enumerate}[(1)] \item $\ell(F) < p$ for any element $F$ of $\Gamma$, \item $\ell(G) < p$ if the last inference has been a cut with the two cut formulas $G$ and $\neg G$.\end{enumerate} The proofs of the following three theorems do not raise any questions of principle. We simply follow the original proofs and check by a tedious case-by-case analysis that the additional requirements are satisfied. \begin{theorem}[Refined embedding] \label{t:refined-embedding} Let $A[u_1,\ldots,u_k]$ be an $\Lcal^\star$ formula whose free set variables are exactly those indicated and suppose that there is a proof of $A[u_1,\ldots,u_k]$ in $\mathsf{KP} + (\pooca^*)$ such that $\ell(F) < p$ for all formulas $F$ occurring in this proof. Then there exist $m,n < \omega$ such that \[ \mathcal{H}[\vec{\alpha}] \prov{\omega^{\Omega+m}}{\Omega+n,\,p}\, \neg M_{\vec{\alpha}}(\vecc{a}),\, A[\vecc{a}] \] for all $\mathcal{H}$, all $\vec{\alpha} = \alpha_1,\ldots,\alpha_k$, and all $\vecc{a} = a_1,\ldots,a_k$. \end{theorem} \begin{theorem}[Refined predicative cut elimination] \label{t:refined-pred-cut-el} For all $\mathcal{H}$, $\Gamma$, $\alpha$, and all $n,p < \omega$, \[ \mathcal{H} \prov{\alpha}{\Omega + n +1,\,p}\, \Gamma \quad\Longrightarrow\quad \mathcal{H} \prov{\omega_n(\alpha)}{\Omega + 1,\,p}\, \Gamma. \] \end{theorem} \begin{theorem}[Refined collapsing] \label{t:refined-collapsing} Let $\Gamma$ be a finite set of formulas from $\mathcal{S} \cup \mathcal{B}$ and suppose that \[ |\Gamma| \subseteq C(\sigma+1,\psi(\sigma+1)) \quad\mbox{and}\quad \sigma \in \mathcal{H}_\sigma[\Gamma](\varnothing).
\] Then we have, for all $\alpha$ and all $p < \omega$, \[ \mathcal{H}_\sigma[\Gamma] \provv{\alpha}{\Omega+1,\,p}\, \Gamma \quad\Longrightarrow\quad \mathcal{H}_{\widehat{\alpha}}[\Gamma] \provv{\psi({\widehat{\alpha}})}{\psi({\widehat{\alpha}}),\,p}\, \Gamma, \] where ${\widehat{\alpha}} := \sigma + \omega^{\Omega+\alpha}$. \end{theorem} \section{Reduction to $\pooca + \mathit{TI}({<}\BH)$} \label{s:reduction} The goal of this section is to interpret certain $\frs^*$-derivations of length and cut rank $< \Omega$ in the theory $\pooca + \mathit{TI}({<}\BH)$. Here $\Pi^1_1\mbox{-}\mathsf{CA}$ stands for the usual system of second order arithmetic with $\Pi^1_1$ comprehension and full induction on the natural numbers; it is formulated in the language $\mathrm{L}_2$. This is a familiar theory in proof theory and, therefore, we refrain from saying more about it. If you are interested in all details you may consult, for example, Simpson \cite{simpson09}. By $\mathit{TI}({<}\mathcal{BH})$ we mean the scheme of transfinite induction over all initial segments (indexed externally) below the Bachmann-Howard ordinal $\mathcal{BH}$. More precisely, let $\omega_0[\xi]=\xi$ and $\omega_{k+1}[\xi]=\omega^{\omega_k[\xi]}$. Then $\mathit{TI}({<}\mathcal{BH})$ is the collection of all formulas \[ \forall \alpha((\forall \beta<\alpha)\mathcal{F}[\beta] \to \mathcal{F}[\alpha]) \,\to\, (\forall \alpha<\psi(\omega_n[\Omega+1]))\mathcal{F}[\alpha] \] where $\mathcal{F}[u]$ is any formula of $\mathrm{L}_2$ and $n$ is any natural number (or rather the $n^{th}$ numeral). To erase any lingering doubts, $<$ is the ordering of $C(\varepsilon_{\Omega+1},0)$ and the quantifiers $\forall \alpha,\forall \beta$ range over the latter set. \subsection{$\alpha$-suitable trees} \label{s:suitable} To interpret set theory in $\pooca + \mathit{TI}({<}\BH)$ we use well-founded trees, also called {\em suitable} trees.
We will mostly follow the terminology and presentation in Simpson \cite[VII.3]{simpson09}. \begin{definition}\label{MR1} \rm Let variables $\sigma,\tau,\rho,\sigma_0,\sigma_1,\ldots$ range over finite sequences of naturals, i.e., elements of $\mathbb{N}^{<\mathbb{N}}$. \begin{enumerate} \item $\langle\rangle$ will denote the empty sequence of $\mathbb{N}^{<\mathbb{N}}$ and $\langle n_0,\ldots, n_r\rangle$ the sequence of numbers $n_0,\ldots, n_r$ coded as a single number (see \cite[II.2.6]{simpson09}). \item The \emph{length} of a sequence is defined by $\mathrm{lh}(\langle\rangle)=0$ and $\mathrm{lh}(\langle n_0,\ldots, n_r\rangle)=r+1$. \item The \emph{concatenation} operation on $\mathbb{N}^{<\mathbb N}$ will be denoted by $\sigma\star\tau$. This means that $\sigma\star\langle\rangle=\langle\rangle\star\sigma=\sigma$ and $\langle n_0,\ldots,n_r\rangle \star\langle m_0,\ldots,m_s\rangle=\langle n_0,\ldots,n_r,m_0,\ldots,m_s\rangle$. \item We write $\sigma \subset \tau$ to mean that $\sigma$ is a proper \emph{initial segment} of $\tau$, i.e., $\tau=\sigma\star \rho$ for some $\rho\ne \langle\rangle$. \item For a function $f\colon \mathbb{N}\to \mathbb{N}$ let $f[0]=\langle \rangle$ and $f[n+1]:=\langle f(0),\ldots,f(n)\rangle$. \end{enumerate} \end{definition} \begin{definition}\label{MR2}\rm $T \subseteq \mathbb{N}^{<\mathbb{N}}$ is said to be a \emph{tree} if $T$ is nonempty and closed under initial segments, i.e., $(\forall \sigma \in T)(\forall \tau\subset \sigma)(\tau \in T)$. $T$ is a \emph{suitable tree} if $T$ is also well-founded, i.e., \[ (\forall f : \mathbb{N} \to \mathbb{N})\exists n(f[n]\notin T). \] \end{definition} \begin{remark}\rm If $T$ is a tree and $\sigma\in T$, we put \[ T^{\sigma}=\{\tau\mid \sigma\star\tau \in T\} \] and note that if $T$ is a suitable tree then so is $T^{\sigma}$. \end{remark} Suitable trees furnish a way of talking about sets in the language of second order arithmetic.
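By way of a toy example: the finite tree
\[ T \;=\; \{\langle\rangle,\, \langle 0\rangle,\, \langle 0,1\rangle,\, \langle 2\rangle\} \]
is nonempty, closed under initial segments and, being finite, trivially well-founded, hence a suitable tree; moreover, $T^{\langle 0\rangle} = \{\langle\rangle,\langle 1\rangle\}$ and $T^{\langle 2\rangle} = \{\langle\rangle\}$.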
Two such trees are considered to be coding the same set if there is a bisimulation between them. \begin{definition}[$=^*$ and $\in^*$]\label{MR3}\rm \quad \begin{enumerate} \item Let $T$ be a tree. We write $\mathit{Iso}(X,T)$ to mean that $X \subseteq T \times T$ and, for all $\sigma,\tau \in T$, $(\sigma,\tau)\in X$ if and only if \[ \forall n[\sigma\star \langle n \rangle \in T \,\to\, \exists m((\sigma\star \langle n\rangle,\tau\star\langle m\rangle)\in X)] \] \vspace{-3ex}and\vspace{-4ex} \[ \forall m[\tau\star \langle m \rangle \in T \,\to\, \exists n((\sigma\star \langle n\rangle,\tau\star\langle m\rangle)\in X)]. \] \item For trees $S$ and $T$, we define \[ S\oplus T \;:=\; \{\langle \rangle\} \,\cup\, \{\langle 0\rangle \star \sigma\mid \sigma\in S\} \,\cup\, \{\langle 1\rangle \star \tau\mid \tau\in T\} \] \vspace{-3ex}and set\vspace{-5ex} \begin{eqnarray*} S=^*T &\mbox{ iff }& \exists X(\mathit{Iso}(X,S\oplus T) \,\land\, (\langle 0\rangle,\langle 1\rangle)\in X), \\[1ex] S\in^*T &\mbox{ iff }& \exists X(\mathit{Iso}(X,S\oplus T) \,\land\, \exists n\,(\langle 0\rangle,\langle 1,n\rangle)\in X). \end{eqnarray*} \end{enumerate} \end{definition} \begin{lemma}[$\mathsf{ATR}_0$]\label{MR4} Let $T$ be a suitable tree. Then there exists a unique set $X$ such that $\mathit{Iso}(X,T)$. Moreover, for all $\sigma,\tau\in T$, \begin{eqnarray*} T^{\sigma}=^* T^{\tau} &\mbox{ iff }& (\sigma,\tau)\in X, \\[1ex] T^{\sigma}\in ^* T^{\tau} &\mbox{ iff }& \exists n\,(\sigma,\tau\star\langle n\rangle)\in X.\end{eqnarray*} In particular, $X$ is an equivalence relation on $T$. \end{lemma} \begin{proof} See \cite[Lemma VII.3.17]{simpson09}. \end{proof} \begin{corollary}[$\mathsf{ATR}_0$]\label{MR5} Given suitable trees $S$ and $T$, one has \begin{eqnarray*} S\in^* T &\mbox{ iff }& \exists n\;S=^*T^{\langle n\rangle}.
\end{eqnarray*} \end{corollary} \begin{definition}\label{MR6}\rm In order to model the set-theoretic naturals and $\omega$ via suitable trees we introduce trees $n^*$ for $n\in\mathbb{N}$. Let \begin{eqnarray*} 0^*&=&\{\langle\rangle\} \\[1ex] (n+1)^* &=& n^*\,\cup\,\{\langle n\rangle \star \sigma\mid \sigma\in n^*\} \\[1ex] \omega^* &=&\{\langle\rangle\}\,\cup\,\{\langle n\rangle\star\sigma\mid \sigma\in n^*,\; n\in \mathbb{N}\}. \end{eqnarray*} \end{definition} Note that $n^*\in^*\omega^*$ and $m^*\in^*n^*$ whenever $m<n$. Since the sequences in $n^*$, and thus in $\omega^*$, are strictly descending sequences of naturals it is clear that $n^*$ and $\omega^*$ are suitable trees. Moreover, there is a map $f\colon \omega^*\to \omega{+}1$ such that \[ \tau\subset\sigma \,\land\, \sigma\in\omega^* \quad\Longrightarrow\quad f(\sigma)\prec f(\tau).\] To see this, let $f(\langle\rangle)=\omega$ and $f(\langle n_1,\ldots,n_r\rangle)=n_r$. This means that $\omega^*$ is an $\omega{+}1$-tree in the sense of Definition~\ref{MR7} below. The natural translation of the set-theoretic language into that of second order arithmetic, $\mathrm{L}_2$, proceeds by letting quantifiers range over suitable trees and interpreting $\in$ and $=$ as $\in^*$ and $=^*$, respectively. In \cite[Theorem VII.3.22]{simpson09} it is shown that this yields an interpretation of the axioms of $\mathsf{ATR}^{set}_0$ in $\mathsf{ATR}_0$. In what follows it is our goal to interpret the richer language of $\frs^*$ in $\mathrm{L}_2$. For the predicates $M_\alpha$ and the quantifiers $\forall x^{\alpha}$ and $\exists x^{\alpha}$ we use the notion of an $\alpha$-tree. \begin{definition}\label{MR7}\rm For an ordinal notation $\alpha$, we say that a tree $T$ is an \emph{$\alpha$-tree} if there exists a function $f : T\to \{\beta\in \mathcal{BH} : \beta\prec \alpha\}$ such that \[ (\forall \sigma \in T)\forall \tau(\tau\subset \sigma \to f(\sigma)\prec f(\tau)).
\] \end{definition} The interpretation of the predicate $M_{\alpha}$ for trees $T$ will be that $M_{\alpha}(T)$ means that $T$ is an $\alpha$-tree. Quantifiers $\forall x^{\alpha}(\ldots x\ldots)$ and $\exists x^{\alpha}(\ldots x\ldots)$ will then be interpreted as $\forall T(T\mbox{ $\alpha$-tree} \,\to\, \ldots T\ldots)$ and $\exists T(T\mbox{ $\alpha$-tree} \,\land\, \ldots T\ldots)$, respectively. Relation variables $U,V,X,Y,\ldots$ will be interpreted to range over $\omega{+}1$-trees $T$ such that $T\subseteq^*\omega^*$, i.e., $(\forall \langle n\rangle\in T)(T^{\langle n\rangle}\in^*\omega^*)$. It is perhaps worth mentioning that any suitable tree satisfying $T\subseteq^*\omega^*$ can be seen to be an $\omega{+}1$-tree. Note that if we have an ordinal notation $\alpha\prec\Omega$ given externally, then $\pooca + \mathit{TI}({<}\BH)$ proves transfinite induction along the ordinals $\prec \alpha$, and consequently the notion of an $\alpha$-tree becomes $\Delta^1_1$ in $\pooca + \mathit{TI}({<}\BH)$. \begin{lemma}\label{MRalpha} The notion of an $\alpha$-tree is $\Delta^1_1$ in the theory $\mathsf{ATR}_0+\mathit{WO}(\alpha)$, where $\mathit{WO}(\alpha)$ expresses that $\prec$ restricted to $\{\beta : \beta\prec \alpha\}$ is a wellordering. \end{lemma} \begin{proof} For a tree $T$, define a function $g$ with domain $\{\beta : \beta\prec \alpha\}$ by arithmetical transfinite recursion as follows: \[ g(\beta) \;=\; \{\sigma\in T : (\forall \tau\in T)(\sigma\subset \tau \,\to\, \tau\in \bigcup_{\xi\prec \beta}g(\xi))\}. \] Note that $g$ is uniquely determined by $T$. One can then show that: \begin{eqnarray*}\label{MR1.8.8} \mbox{$T$ is an $\alpha$-tree } &\Leftrightarrow & \mbox{ $T$ is the image of $g$}. \end{eqnarray*} If $T$ is an $\alpha$-tree witnessed by a function $f : T\to \{\beta : \beta\prec \alpha\}$, then one shows, using $\prec$-induction on $f(\sigma)$, that $\sigma\in g(f(\sigma))$ for all $\sigma\in T$, hence $T$ is the image of $g$.
Conversely, if $T$ is the image of $g$, then a function $f : T\to \{\beta : \beta\prec \alpha\}$ witnessing the $\alpha$-treeness of $T$ is obtained by letting $f(\sigma)$ be the least $\beta \prec \alpha$ such that $\sigma\in g(\beta)$. \end{proof} In order to show that the collapsed derivations of Theorem~\ref{t:refined-collapsing} prove true statements when subjected to the interpretation of Definition~\ref{MR7}, we need to show that the rule $\mathsf{(BC)}$ preserves truth. This will mainly be a consequence of $\Sigma^1_1$-$\mathsf{AC}$. \begin{lemma}\label{MR9} Let $F[x_0,\ldots ,x_q,U_0,\ldots,U_r]$ be a $\Sigma$-formula of $\mathcal{L}_2^{set}$ with all free variables indicated, and let $\ell_F$ be the length of the latter formula as a string of symbols. Let \[ F^{\alpha}[x_0,\ldots ,x_q,U_0,\ldots,U_r] \quad\mbox{and}\quad F^{(a)}[x_0,\ldots ,x_q,U_0,\ldots,U_r] \] be the results of replacing the unbounded existential set quantifiers $\exists x$ in the formula by $\exists x^{\alpha}$ and $\exists x\in a$, respectively. Arguing in the theory $\pooca + \mathit{TI}(\alpha{+}\omega)$, we have that whenever $T_0,\ldots,T_q$ are $\alpha$-trees and $Q_0,\ldots ,Q_r$ are trees such that $Q_i\subseteq^* \omega^*$ (so actually $\omega{+}1$-trees), and the $*$-translation of \[ F^{\alpha}[x_0,\ldots ,x_q,U_0,\ldots,U_r] \] with $x_i$ and $U_j$ replaced by $T_i$ and $Q_j$, respectively, is true, then the $*$-translation of \[ \exists y^{\alpha+\ell_F}F^{(y)}[x_0,\ldots ,x_q,U_0,\ldots,U_r],\] again with $x_i$ replaced by $T_i$ and $U_j$ replaced by $Q_j$, holds true as well. \end{lemma} \begin{proof}\setcounter{equation}{0} We proceed by (meta) induction on $\ell_F$. The most interesting case arises when this formula starts with a bounded universal quantifier, i.e., if it is of the form \[ \forall v\in x_0\,F^{\alpha}_0[x_0,\ldots ,x_q,v,U_0,\ldots,U_r].
\] So assume that $T_0,\ldots,T_q$ are $\alpha$-trees and $Q_0,\ldots ,Q_r$ are trees such that $Q_i\subseteq^* \omega^*$ and \[ (\forall \langle n\rangle\in T_0) G^{\alpha}_0[T_0,\ldots ,T_q,T_0^{\langle n\rangle},Q_0,\ldots,Q_r] \] holds, where $G^{\alpha}_0$ is the $*$-translation of $F^{\alpha}_0$. Inductively we then have \begin{gather}\label{MRc} (\forall \langle n\rangle\in T_0) \exists S(S \mbox{ an $\alpha{+}\ell_{F_0}$-tree} \;\land \; G^{(S)}_0[T_0,\ldots ,T_q,T_0^{\langle n\rangle},Q_0,\ldots,Q_r]), \end{gather} where $G^{(S)}_0[T_0,\ldots ,T_q,T_0^{\langle n\rangle},Q_0,\ldots,Q_r]$ results from $G^{\alpha}_0[T_0,\ldots ,T_q,T_0^{\langle n\rangle},Q_0,\ldots,Q_r]$ by replacing any part of the form $\exists Q[Q\mbox{ an $\alpha$-tree} \,\land\, \ldots Q\ldots]$ by $(\exists \langle n\rangle \in S)(\ldots S^{\langle n\rangle}\ldots)$. Now it follows from Lemma~\ref{MRalpha} that the part $(\ldots)$ in (\ref{MRc}) is $\Delta^1_1$. As a result, we may apply $\Sigma^1_1$-$\mathsf{AC}$ (which is provable in $\Pi^1_1\mbox{-}\mathsf{CA}_0$) to conclude that there exists a set $Z$ such that for all $\langle n\rangle \in T_0$, \begin{gather}\label{MRa} Z_{(n)}\mbox{ an $\alpha{+}\ell_{F_0}$-tree} \;\land\; G^{(Z_{(n)})}_0[T_0,\ldots ,T_q,T_0^{\langle n\rangle},Q_0,\ldots,Q_r] \end{gather} where $Z_{(n)}\,=\,\{k : (n,k)\in Z\}$. Using $\Sigma^1_1$-$\mathsf{AC}$ again, we can also single out a sequence of functions \[ f_{2^n3^k} : Z_{(n)}^{\langle k\rangle} \,\to\, \alpha{+}\ell_{F_0} \] such that $f_{2^n3^k}$ witnesses that $Z_{(n)}^{\langle k\rangle}$ is an $\alpha{+}\ell_{F_0}$-tree whenever $\langle n\rangle \in T_0$ and $\langle k\rangle \in Z_{(n)}$. Now define a tree $\mathcal S$ by \[ \mathcal{S} \;:=\; \{ \langle\rangle \} \;\cup\; \{\langle 2^n3^k \rangle \star \sigma : \langle n\rangle \in T_0 \,\land\, \langle k\rangle\in Z_{(n)} \,\land\, \sigma \in Z_{(n)}^{\langle k\rangle}\}.
\] By design, $\mathcal{S}$ is a tree and, moreover, $\mathcal{S}$ is an $\alpha{+}\ell_F$-tree as witnessed by the function $f\colon \mathcal{S}\to \alpha{+}\ell_F$ defined by \[ f(\langle\rangle) \;:=\; \alpha{+}\ell_{F_0} \quad\mbox{and}\quad f(\langle 2^n3^k \rangle \star \sigma) \;:=\; f_{2^n3^k}(\sigma). \] As a consequence of (\ref{MRa}) and the fact that $F[x_0,\ldots ,x_q,U_0,\ldots,U_r]$ is a $\Sigma$-formula, we infer that \[ (\forall \langle n\rangle \in T_0) G^{(\mathcal S)}_0[T_0,\ldots ,T_q,T_0^{\langle n\rangle},Q_0,\ldots,Q_r] \] and thus, as desired, the truth of the $*$-translation of $\exists y^{\alpha{+}\ell_F}F^{(y)}[x_0,\ldots ,x_q,U_0,\ldots,U_r]$ with the substitutions $x_i\mapsto T_i$ and $U_j\mapsto Q_j$ follows. \end{proof} \subsection{Conservativity} It remains to ascertain that $\mathsf{KP} + (\pooca^*)$ is conservative over $\pooca + \mathit{TI}({<}\BH)$ for formulas in the language of second order arithmetic. More precisely, if $\mathcal{F}$ is a formula of second order arithmetic (as for instance defined in \cite[I.2]{simpson09}) and $\mathcal{F}_0$ denotes its natural translation into the language $\mathcal{L}_2^{set}$, then we aim to show that \[ \mathsf{KP} + (\pooca^*) \,\vdash\, \mathcal{F}_0 \quad\Longrightarrow\quad \pooca + \mathit{TI}({<}\BH) \,\vdash\, \mathcal{F}. \] Recall that $\omega_0[\xi]=\xi$ and $\omega_{k+1}[\xi]=\omega^{\omega_k[\xi]}$. Now assume $\mathsf{KP} + (\pooca^*) \vdash \mathcal{F}_0$. The latter being a finite deduction, it follows from what we have established in the previous sections that we can determine fixed naturals $n,p$ such that $\mathcal{H} \prov{\beta}{\eta,p} \mathcal{F}_0$ for some derivation operator $\mathcal{H}$ and $\beta,\eta\in C(\omega_n[\Omega+1],0)$. This follows from Theorems~\ref{t:refined-embedding}, \ref{t:refined-pred-cut-el}, and \ref{t:refined-collapsing}.
To dilate on this, note that because of the finiteness of the deduction in $\mathsf{KP} + (\pooca^*)$ we can find this a priori bound $\omega_n[\Omega+1]$ so that the entire ordinal analysis, commencing with the embedding of this finite deduction into the infinitary proof system, solely uses ordinals from $C(\omega_n[\Omega+1],0)$. Moreover, we can carry this ordinal analysis out in the background theory $\pooca + \mathit{TI}({<}\BH)$ as there exists an order preserving map from $C(\omega_n[\Omega+1],0)$ into the segment of ordinals $<\psi(\omega_{n+1}[\Omega+1])$ for which transfinite induction is available in this theory. Such a map exists, for if $\delta,\eta\in C(\omega_n[\Omega+1],0)$ with $\delta<\eta$ one has \[ \psi(\omega_n[\Omega+1]+\omega^{\Omega+\delta})< \psi(\omega_n[\Omega+1]+\omega^{\Omega+\eta})< \psi(\omega_{n+1}[\Omega+1]) \] by Lemma~\ref{l:o3}(2). In light of the previous, the next step will be to show that $\mathcal{H} \prov{\beta}{\eta,p} \mathcal{F}_0$ entails that $\mathcal{F}$ is true, all the while working in our background theory $\pooca + \mathit{TI}({<}\BH)$. This is where the hitherto neglected parameter $p$ comes into its own: the idea is to employ a formal truth predicate for formulas of length $<p$. However, it is not possible to focus just on formulas $\mathcal{F}_0$, where $\mathcal{F}$ resides in the language of second order arithmetic, since in showing that $\mathcal{H} \prov{\beta}{\eta,p} \mathcal{F}_0$ implies the truth of $\mathcal{F}$ we shall induct on $\beta$ and the pertaining derivation is usually not cut-free, so we have to take formulas from $\mathcal{B}$ of length $<p$ into account. Thus the first task which presents itself is to translate such formulas into the language of second order arithmetic, $\mathrm{L}_2$.
\begin{definition}\rm For convenience assume that we have injections $x\mapsto T_x$ and $X\mapsto T_X$ from set variables $x$ and relation variables $X$ of $\mathcal{L}^*$ to second order variables of $\mathrm{L}_2$ in such a way that $T_x$ is always different from any $T_X$. The latter provides a translation of the terms of $\mathcal{L}^*$ except for the constants $\underline{\emptyset}$ and $\underline{\omega}$. These can just be translated as $\{\langle\rangle\}$ and $\omega^*$, respectively (see Definition~\ref{MR6}). Since the relation $\in^*$ of Definition~\ref{MR3} has only been defined for trees, let us agree that henceforth a formula $S\in^* T$ will be considered a shorthand for $S,T\mbox{ trees } \,\land\, S\in^* T$. The translation $^*$ from $\mathcal{L}^*$ to $\mathrm{L}_2$ is then effected as follows. \begin{enumerate} \item $(a\in \underline{\omega})^*$ is $T_a\in^*\omega^*$. $(\neg a\in \underline{\omega})^*$ is the negation of $(a\in \underline{\omega})^*$. $(a\in \underline{\emptyset})^*$ is $0=1$ while $(\neg a\in \underline{\emptyset})^*$ is $0=0$. \item $(a\in x)^*$ is $T_a\in^* T_x$. $(a \notin x)^*$ is the negation of $(a \in x)^*$. \item $(U(a))^*$ is $T_a\in^* T_U\,\wedge\, T_U\subseteq^*\omega^*$. $(\neg U(a))^*$ is the negation of $(U(a))^*$. \item $(M_{\alpha}(a))^*$ is {\em $T_a$ is an $\alpha$-tree}. $(\neg M_{\alpha}(a))^*$ is {\em $T_a$ is not an $\alpha$-tree}. \item $(F\lor G)^*$ and $(F \land G)^*$ are $F^* \lor G^*$ and $F^* \land G^*$, respectively. \item $((\exists x \in a)F[x])^*$ and $((\forall x\in a)F[x])^*$ are \[ \exists T_x(T_x\in^* T_a \,\land\, F^*[T_x]) \quad\mbox{and}\quad \forall T_x(T_x\in^* T_a \,\to\, F^*[T_x]) \] respectively, where $F^*[T_x]$ stands for $(F[x])^*$; note that $^*$ replaces $x$ by $T_x$.
\item $(\exists x^{\alpha}F[x])^*$ and $(\forall x^{\alpha}F[x])^*$ are \[ \exists T_x\,(T_x\mbox{ $\alpha$-tree} \,\land\, F^*[T_x]) \quad\mbox{and}\quad \forall T_x\,(T_x\mbox{ $\alpha$-tree} \,\to\, F^*[T_x]), \] respectively. \item $(\exists X F[X])^*$ and $(\forall X F[X])^*$ are \begin{gather*} \exists T_X(T_X\mbox{ $\omega{+}1$-tree} \,\land\, T_X\subseteq^*\omega^* \,\land\, F^*[T_X]) \end{gather*} and \begin{gather*} \forall T_X(T_X\mbox{ $\omega{+}1$-tree} \,\land\, T_X\subseteq^*\omega^*\,\to\, F^*[T_X]), \end{gather*} respectively. Here $F^*[T_X]$ stands for $(F[X])^*$, noting that $^*$ replaces $X$ by $T_X$. \end{enumerate} \end{definition} It is perhaps noteworthy that if $\mathcal{H} \prov{\beta}{\eta,p} \Gamma$ with $\Gamma$ a set of formulas in $\mathcal{B}$ and $\beta,\eta<\Omega$, then all formulas occurring in the derivation must be in $\mathcal{B}$, too, as there can be no cuts in it involving formulas with unbounded set quantifiers since their ranks would be $\geq \Omega$ (see Definition~\ref{d:rs-formulas} items (7) and (8)). \begin{definition}\rm \label{W-Definition} The purpose of this definition is to fix an arithmetized truth definition for $\mathcal{B}$-formulas of length $< p$ in $\pooca + \mathit{TI}({<}\BH)$. Let $Z$ be a set of naturals such that $(Z)_k$ is a suitable tree for all $k$.
We would like to engineer a formula $\mathcal{T}_p(x,X)$ of $\mathrm{L}_2$ such that for all $\mathcal{B}$-formulas $F[x_1,\ldots,x_r,U_1,\dots,U_s]$ of length $<p$ with all free variables exhibited, \begin{gather} \mathcal{T}_p(\goed{F[x_1,\ldots,x_r,U_1,\dots,U_s]},Z) \;\leftrightarrow\; \tilde{F}[(Z)_1,\ldots,(Z)_r,(Z)_{r+1},\ldots, (Z)_{r+s}] \tag{*} \end{gather} where $\goed{F[x_1,\ldots,x_r,U_1,\dots,U_s]}$ stands for the G\"odel number of $F[x_1,\ldots,x_r,U_1,\dots,U_s]$ whilst \[ \tilde{F}[(Z)_1,\ldots,(Z)_r,(Z)_{r+1}, \ldots, (Z)_{r+s}] \] denotes the formula obtained from $(F[x_1,\ldots,x_r,U_1,\dots,U_s])^*$ by replacing $T_{x_i}$ by $(Z)_i$ and $T_{U_j}$ by $(Z)_{r+j}$. Thus the sections $(Z)_k$ of $Z$ furnish an assignment of suitable trees to the free variables of $(F[x_1,\ldots,x_r,U_1,\dots,U_s])^*$. \end{definition} The formal definition of such a truth predicate is a standard but cumbersome procedure. A place in the literature where one finds this carried out in detail is Takeuti \cite[CH.~3,19]{takeuti87}; another is Troelstra \cite[1.5.4]{troelstra73}. \begin{theorem}\label{Wahrheit} Fix $n$ and $p$. Let $\mathit{TV}(Z)$ be short for ``$\forall k\,(Z)_k\mbox{ is a suitable tree}$''. Then $\pooca + \mathit{TI}({<}\BH)$ proves, for all sequents $\Gamma$ consisting of $\mathcal{B}$-formulas of length $<p$ and all $\beta,\eta\in C(\omega_n[\Omega+1],0)\cap\Omega$, that $\mathcal{H} \prov{\beta}{\eta,p} \Gamma$ implies \[ \forall Z(\mathit{TV}(Z)\to \mathcal{T}_p(\goed{\bigvee \Gamma},Z)) \] where $\bigvee \Gamma$ stands for the disjunction of all formulas of $\Gamma$, but if $\Gamma$ is empty let $\bigvee\Gamma$ be $\underline{\emptyset}\in \underline{\emptyset}$. \end{theorem} \begin{proof} We reason in $\pooca + \mathit{TI}({<}\BH)$ by induction on $\beta$. The axiom cases are obvious.
If $\mathcal{H} \prov{\beta}{\eta,p} \Gamma$ is the result of an inference of a form other than $\mathsf{(BC)}$, then this follows immediately from the induction hypothesis applied to the immediate subderivations. Note also that in the case of a cut, the cut formulas belong to $\mathcal{B}$ and have lengths $<p$. Observe also that the derivation cannot contain $(\Scal_0\mbox{-}\mathsf{Ref})$ inferences since $\beta<\Omega$. So it remains to deal with the case where the last inference is an instance of $\mathsf{(BC)}$. Fortunately, this is what Lemma~\ref{MR9} is really about. The latter shows that if the premise $\Theta$ of an instance of $\mathsf{(BC)}$ is true under an assignment $Z$, then so is the conclusion. \end{proof} \begin{theorem}\label{Trompete} $\mathsf{KP} + (\pooca^*)$ is conservative over $\pooca + \mathit{TI}({<}\BH)$ for formulas of second order arithmetic. More precisely, if $\mathcal{F}$ is a sentence of second order arithmetic (i.e. $\mathrm{L}_2$) and $\mathcal{F}_0$ denotes its natural translation into the language $\Lcal^{set}_2$, then \[ \mathsf{KP} + (\pooca^*) \vdash \mathcal{F}_0 \quad\Longrightarrow\quad \pooca + \mathit{TI}({<}\BH)\vdash \mathcal{F}. \] \end{theorem} \begin{proof} Assume $\mathsf{KP} + (\pooca^*) \vdash \mathcal{F}_0$. As elaborated on before, with the help of the results in subsection 4.8 and Theorem~\ref{Wahrheit}, it follows that \[ \pooca + \mathit{TI}({<}\BH) \vdash \forall Z(\mathit{TV}(Z)\to \mathcal{T}_p(\goed{ \mathcal{F}_0},Z)). \] Thus, in light of (*) and noting that $\mathcal{F}_0$ has no free variables, \[ \pooca + \mathit{TI}({<}\BH)\vdash \mathcal{F}_0^* \] where $\mathcal{F}_0^*$ is the translation of $\mathcal{F}_0$ according to Definition~\ref{W-Definition}. It remains to establish the relationship between $\mathcal{F}_0^*$ and $\mathcal{F}$. 
$\mathcal{F}_0^*$ arises from $\mathcal{F}$ by translating numerical quantifiers $Q n$ as ranging over the immediate subtrees of $\omega^*$ and second order quantifiers as ranging over $\omega{+}1$-trees $T$ such that $T\subseteq^*\omega^*$. Now, as the naturals with their ordering are isomorphic to the immediate subtrees of $\omega^*$ ordered via $\in^*$ and also the collection of sets of natural numbers is naturally isomorphic to the collection of $\omega{+}1$-trees $T$ such that $T\subseteq^*\omega^*$, it follows that $\mathcal{F}_0^*$ implies $\mathcal{F}$, completing the proof. Anyone insisting on a more formal proof is invited to proceed by induction on the buildup of $\mathcal{F}$. \end{proof} \section{Extensions} Thus far we have investigated what happens if one adds Kripke-Platek set theory to the subtheory of second order arithmetic based on $\Pi^1_1$-comprehension. A certain analogy emerges with what Barwise \cite{barwise75} called the \emph{Admissible Cover}, $\mathbb{C}\mathrm{ov}_{\mathfrak M}$, of a basic structure $\mathfrak M$. In his case, the basic structures $\mathfrak M$ were models of set theory. $\mathbb{C}\mathrm{ov}_{\mathfrak M}$ is the intersection of all admissible sets which cover $\mathfrak M$ (see \cite{barwise75}, Appendix 2.1).\footnote{The proper proof-theoretic counterpart of the admissible cover was developed in J\"ager \cite{j86a}.} In our context, the admissible cover amounts to grafting the theory $\mathsf{KP}$ onto a subsystem of second order arithmetic, $T$. This could be called the {\em proof-theoretic admissible cover} of $T$. There is, however, a crucial difference between Barwise's model-theoretic construction and the proof-theoretic one employed in this paper. 
In the former the basic structure one starts from remains unchanged in a strong sense when building its admissible cover in that no new subsets of $\mathfrak M$ become available in $\mathbb{C}\mathrm{ov}_{\mathfrak M}$ (see \cite{barwise75} Appendix Corollary 2.4), whereas in the proof-theoretic case the axioms of the basic theory $T$ will interact with the axioms of the set theory $\mathsf{KP}$, witnessed by the fact that, in general, more theorems of second order arithmetic become deducible in $T+ \mathsf{KP}$ than in $T$ alone. It is also interesting to investigate how the proof-theoretic admissible cover plays out in the case of other well-known subsystems of second order arithmetic. It turns out that a certain pattern emerges. Moreover, the techniques developed in this paper, when combined with insights from the literature, suffice to get these additional results. \begin{definition}\rm We will focus on two well-studied theories. \begin{enumerate} \item $\mathsf{ATR}_0$ with its signature axiom of \emph{arithmetical transfinite recursion} is the fourth system of the ``big'' five of reverse mathematics (see \cite[V]{simpson09}). \item The second system we will consider is traditionally called the theory of \emph{bar induction}, $\mathsf{BI}$, by proof theorists. In \cite[VII.2.14]{simpson09} it is denoted by $\Pi^1_{\infty}\mbox{-}\mathsf{TI}_0$. The axioms of $\mathsf{BI}$ are those of $\mathsf{ACA}_0$ plus the scheme of transfinite induction \[ \forall X(\mathit{WO}(X) \to \mathit{TI}(X,A)) \] for every $\mathrm{L}_2$ formula $A[x]$, where $\mathit{TI}(X,A)$ stands for the formula \[ \forall u((\forall v <_X u)A[v] \to A[u]) \,\to\, \forall u A[u] \] with $\mathit{WO}(X)$ expressing that the ordering $<_X$ defined by $v<_X u:\Leftrightarrow 2^v\cdot3^u\in X$ is a well ordering. 
\end{enumerate} \end{definition} $(\mathsf{ATR}_0^*)$ and $(\mathsf{BI}^*)$ are the axiom schemas -- formulated in the language $\Lcal^{set}_2$ -- that comprise the natural translations of all instances of arithmetical transfinite recursion and transfinite induction, respectively. If $\mathcal{C}$ is a collection of sentences of $\mathrm{L}_2$ and $T_1$, $T_2$ are theories of the language $\mathrm{L}_2$ or $\Lcal^{set}_2$, we write $T_1\equiv_\mathcal{C} T_2$ to convey that $T_1$ and $T_2$ possess the same $\mathcal{C}$-theorems (perhaps modulo the translation of $\mathcal{C}$ into $\Lcal^{set}_2$). \begin{theorem}\label{THm1} $\mathsf{KP} + (\mathsf{ATR}_0^*) \;\equiv_{\Pi^1_{\infty}}\; \mathsf{ATR}_0 + \mathit{TI}({<}\mathcal{BH})$. \end{theorem} \begin{proof} The proof of Theorem \ref{Trompete} essentially carries over with $(\mathsf{ATR}_0^*)$ replacing $(\pooca^*)$ since the crucial Lemma~\ref{MR9} is also provable in $\mathsf{ATR}_0$ as the latter theory proves $\Sigma^1_1$-$\mathsf{AC}$, too (this is an old result, see \cite[Theorem V.8.3]{simpson09}). \end{proof} \begin{theorem}\label{THm2} \quad \begin{enumerate}[(i)] \item $\mathsf{KP} + (\mathsf{BI}^*) \;\equiv_{\Pi^1_{\infty}}\; \mathsf{BI}$. \item $\mathsf{KP} + (\mathsf{BI}^*) \;\equiv_{\Pi^1_{1}}\; \mathsf{KP}$. \item $\mathsf{KP} + (\mathsf{ATR}_0^*) \;\equiv_{\Pi^1_{1}}\; \mathsf{KP}$. \end{enumerate} \end{theorem} \begin{proof} (i) The proof of Theorem \ref{Trompete} also works with $\mathsf{BI}$ in lieu of $\Pi^1_1\mbox{-}\mathsf{CA}$ since $\mathsf{ATR}_0$ is a consequence of $\mathsf{BI}$ (see \cite[Corollary VII.2.19]{simpson09}). So we infer that $\mathsf{BI}^*+\mathsf{KP} \equiv_{\Pi^1_{\infty}} \mathsf{BI}+\mathit{TI}({<}\mathcal{BH})$. However, it is well known from the proof-theoretic literature that $\mathsf{BI}$ already proves $\mathit{TI}({<}\mathcal{BH})$, yielding (i) (for more details see \cite{f70a,bfps81,buchholz-schuette88}). 
(ii) By J\"ager \cite{j82a} and the papers cited above, it is known that $\mathsf{KP}$ and $\mathsf{BI}$ share the same proof-theoretic ordinal. Moreover, from the ordinal analyses of these two theories it can be inferred that they prove the same $\Pi^1_1$-theorems. The degree of conservativity, though, cannot be improved much beyond this level as $\mathsf{ATR}_0$ has a $\Pi^1_2$-axiomatization and $\mathsf{KP}$ does not prove $\mathsf{ATR}_0$. To see this, first note that $\mathsf{KP}$ has a model $\mathbb{H}\mathrm{YP}_{\mathfrak N}$ which is the intersection of all admissible sets that contain the standard structure $\mathfrak N$ of the naturals as a set (see \cite[Theorem 5.9]{barwise75}). The subsets of $\mathbb{N}$ in $\mathbb{H}\mathrm{YP}_{\mathfrak N}$ are the hyperarithmetical sets. Then we can proceed, for example, in one of the following two ways: \begin{enumerate} \item[-] The collection of the hyperarithmetical sets, as apparently proved by Kreisel, constitutes the smallest $\omega$-model of $\Sigma^1_1$-$\mathsf{AC}$, but it cannot be a model of $\mathsf{ATR}_0$ as it would have to have a proper $\omega$-submodel that is again an $\omega$-model of $\mathsf{ATR}_0$, and thus of $\Sigma^1_1$-$\mathsf{AC}$. The result about $\omega$-models of $\mathsf{ATR}_0$ is due to Quinsey \cite[pages 93--96]{quinsey80} (for more details see \cite{simpson09} Theorem VIII.6.12 and the Notes for $\S$VIII.6). \item[-] Alternatively, observe that the theory $\mathsf{ATR}_0$ is equivalent to the theory $\mathsf{FP}_0$ which extends $\mathsf{ACA}_0$ by axioms stating that any positive arithmetic operator has a fixed point; see Avigad \cite{avigad96a}. But the hyperarithmetical sets do not constitute a model of $\mathsf{FP}_0$ according to Probst \cite[II.2.4]{probst05a} and Gregoriades \cite{gregoriades19a}. 
\end{enumerate} (iii) By Theorem \ref{THm1} we have $\mathsf{ATR}_0^* + \mathsf{KP} \equiv_{\Pi^1_{\infty}} \mathsf{ATR}_0 +\mathit{TI}({<}\mathcal{BH})$. Since $\mathsf{ATR}_0 + \mathit{TI}({<}\mathcal{BH})$ is a subtheory of $\mathsf{BI}$ and $\mathsf{BI}^* + \mathsf{KP} \equiv_{\Pi^1_{1}} \mathsf{KP}$, it follows that $\mathsf{ATR}_0^* +\mathsf{KP} \equiv_{\Pi^1_{1}} \mathsf{KP}$. \end{proof} In connection with the previous results we would also like to mention Sato \cite{sato18a}. He states there, among other things, a special case of Theorem~\ref{THm2}(ii), namely that the addition of a $\Pi^1_2$ theorem of $\mathsf{BI}$ to $\mathsf{KP}\omega$ does not increase the consistency strength of the augmented theory. Further interesting work about the relationship between Kripke-Platek set theory and $\Pi^1_1$ comprehension on the natural numbers is presented in Simpson \cite{simpson18a}. There the interplay between $\mathsf{KP}$ and $\Pi^1_1\mbox{-}\mathsf{CA}$ is studied from a recursion- and model-theoretic perspective. However, it does not provide the exact proof-theoretic strength of $\mathsf{KP} + (\pooca^*)$. \bigskip
\section{Introduction} Conformal field theories (CFTs) play an important role in many aspects of theoretical physics, from the concrete study of physical systems at criticality to abstract problems in mathematical physics. Although some CFTs can be given a weakly coupled description, the most interesting and commonly occurring CFTs are strongly coupled, and in three or more dimensions few analytic tools are available to study them. In supersymmetric or lower-dimensional examples, conformally invariant defects have played a useful role in probing the structure of CFTs. A conformal defect is a non-local observable, a modification of the theory which is localized on a lower-dimensional manifold and preserves an appropriate subgroup of the conformal group. It is natural to attempt to define and study defects in non-supersymmetric, commonly occurring CFTs. The 3d Ising model at criticality is a natural candidate: it is in a sense the simplest non-trivial 3d CFT and has been the subject of an intensive and rather successful analysis by a variety of theoretical and numerical tools \cite{Pelissetto2002a} such as the $\epsilon$-expansion and Monte Carlo simulations. More recently, interesting constraints on the Ising model \cite{ElShowk:2012ht} and other CFTs have been derived using the methods of the conformal bootstrap \cite{Beem:2013qxa,Alday:2013opa,Caracciolo:2009bx,ElShowk:2012hu,Fitzpatrick:2012yx,Gliozzi:2013ysa,Komargodski:2012ek,Kos2013,El-Showk:2013nia,Liendo2012,Maldacena:2011jn,Poland:2010wg,Poland:2011ey,Rattazzi:2010gj,Rattazzi:2010yc,Rattazzi:2008pe,Rychkov:2009ij,Vichi-thesis,Vichi:2011ux}. The simplest possible conformal defect in a CFT is a boundary condition. Boundary conditions in the 3d Ising model have been the subject of some theoretical \cite{Diehl1981, McAvity1995} and numerical study \cite{Binder199017}. 
More recently, there have been attempts to ``bootstrap'' such boundary conditions \cite{Liendo2012}, by looking at two-point functions of bulk operators in the presence of the boundary. Another interesting example is a monodromy, or twist defect. The global $Z_2$ flavor symmetry of the Ising model allows for a natural definition of codimension two twist defects: under a rotation around the defect, local operators pick up a phase factor according to their $Z_2$ quantum numbers. Due to their topological nature, such defects are essentially guaranteed to flow to scale-invariant defects in the IR and possibly to conformal defects. Recently the authors of reference \cite{Billo:2013jda} have used Monte Carlo simulations to provide numerical evidence for the existence of a twist defect in the 3d Ising model. In this note, we aim to present further evidence for this from different points of view. We shall take a two-pronged approach: direct analytic calculations using $\epsilon$-expansion techniques; and the numerical methods of the conformal bootstrap. In both cases not only do we find excellent agreement with existing data, but we are also able to make new predictions that may be verified in the near future. As such, our work is a nice example of the interplay between theory, Monte Carlo simulations and the numerical bootstrap. The $\epsilon$-expansion, introduced by Wilson and Fisher \cite{Wilson1972a}, provides a framework to study the critical $O(N)$ models in a perturbative setting. A drawback of this method is its disregard of conformal symmetry, and another is that high accuracy requires computations to high loop orders and Borel resummation due to the asymptotic nature of the perturbative expansion \cite{LeGuillou1985}. Nevertheless, the $\epsilon$-expansion has been used to determine basic critical exponents in 3d rather precisely. 
The numerical bootstrap recently provided compelling evidence for the consistency of this method by identifying a family of solutions to crossing symmetry, interpolating between the 2d and 3d Ising model, and the 4d free scalar \cite{El-Showk:2013nia}. It is thus natural to use the $\epsilon$-expansion as a source of data on the twist defect in the 3d Ising model. Concretely, we will start with the twist defect in the free theory, add a $\phi^4$ coupling in the bulk and study correlation functions in the IR. The theory is expected to flow to the twist defect of the 3d Ising model. Performing one-loop computations, and setting $\epsilon=1$, we find good agreement with the Monte Carlo data. The one-loop deviation from the 3d free theory is always in the right direction, and often surprisingly close to the measured value. Note that defect scaling dimensions have been studied for Wilson lines in $3d$ $U(1)$ gauge theory with matter in \cite{kolezhuk2006theory}. As was mentioned before, boundary conditions have been previously considered in the context of the conformal bootstrap. The main obstacle in such a program is the lack of guaranteed positivity/unitarity constraints in the intermediate channel where the two bulk operators are fused together. Here we shall take a different approach, by considering directly correlators of defect operators. This guarantees positivity, but the price to pay is that it uses very little information about the bulk CFT itself, as the bulk operators do not appear in any fusion channel. The only properties of the bulk theory which affect directly the four-point functions on the defect are its symmetries. The 3d Ising model should be a reasonable candidate for such an analysis, because it is strongly constrained by its symmetries: in a sense, it is the simplest 3d CFT with a $Z_2$ flavor symmetry. It would be interesting to investigate if such a strategy may be successful in the study of boundary conditions (codimension one defects). 
In this paper, we focus on the codimension two twist line defects, and thus consider the conformal bootstrap in the one-dimensional world volume of the defect. The spectrum of operators on the defect contains operators of various $U(1)$ `spin' (corresponding to rotations around the defect), which can be integer or half-integer according to the $Z_2$ charge of the operator. Further, the spectrum should contain a protected ``displacement operator'' $D$, of spin $1$ and dimension $2$. This is the operator one would add to the defect Lagrangian to deform the defect away from a straight line. We shall consider four-point functions of the simplest local operator $\psi$ on the defect, the leading spin-$1/2$ operator, which occurs in the defect OPE of the $Z_2$-odd bulk field $\sigma$ (the Ising model spin field). However, in one dimension one must take care because a four-point function can be decomposed only into two crossing symmetry channels. There are therefore two crossing equations: in the four-point function $\langle \psi\bar \psi \psi \bar \psi \rangle$ both fusion channels have spin $0$; but the correlator $\langle \psi \psi \bar \psi \bar \psi\rangle$ has both spin $0$ and spin $1$ fusion channels. We shall explore the constraints following from the crossing equations, deriving universal bounds on one-dimensional unitary CFTs. By forcing the spectrum to contain the displacement operator, we can derive a bound on the dimension of the leading parity-even spin-0 operator. In the extremal case where the bound is saturated, we can reconstruct a unique solution to crossing symmetry \cite{ElShowk:2012hu}, and we find that for a certain value of the OPE coefficient of $D$ the spectrum seems to match that of the defect, found both numerically and via $\epsilon$-expansion. We also obtain a number of other operator dimensions and OPE coefficients which can be thought of as specific predictions for future numerical tests. Here is a brief outline of this note. 
In section \ref{sec:defect}, we review the twist defect introduced in \cite{Billo:2013jda}. We work in the continuum limit, describing the expected symmetries, low-lying operators and the form of the operator product expansion. Section \ref{sec:epsexpansion} is concerned with $\epsilon$-expansion calculations. In section \ref{sec:bootstrap} we turn to the methods of the modern conformal bootstrap, and we conclude in section \ref{sec:conclusions} with suggestions for further research. \section{The $Z_2$ Twist Defect} \label{sec:defect} Let us recall \cite{Billo:2013jda} that the twist line defect in the 3d Ising model can be constructed on the lattice by flipping the Ising coupling on a semi-infinite half-plane ending on a line of the dual lattice. Such a semi-infinite surface is a topological defect, since physics is invariant under its arbitrary deformations fixing the boundary line, provided we also flip the spins in between the original and deformed surface. The boundary of such a topological surface defect is precisely a twist line defect. In the continuum limit, correlation functions become discontinuous (antiperiodic) across the surface. Presumably, the same twist line defect lies at the IR end of the renormalization group flow, generated by the $\phi^4$ coupling in the bulk, which starts from the free theory with a $Z_2$ twist defect. The global spacetime symmetry group of a $D$-dimensional Euclidean parity-invariant CFT is $O^+(1,D+1)$, where parity or sphere inversion switches between the two connected components. A conformal $ Z_2$ twist line defect thus breaks the bulk symmetry $O^{+}(1,4)\times Z_2$ down to $O^{+}(1,2)\times O'(2)$, where $O'(2)$ is a double cover of the group of rotations and reflections fixing the defect, such that the rotation by $2\pi$ is identified with the nonidentity element of $ Z_2$. The dihedral symmetry $D_{8}$ of motions of the cubic lattice fixing the defect, discussed in \cite{Billo:2013jda}, is a subgroup of $O'(2)$. 
$O^+(1,2)$ is the spacetime symmetry group of the defect. At the level of Lie algebras, we have $so(1,2)=sl(2,\mathbb{R})$, and the connected components of $O^+(1,2)$ are switched by the reflection in a plane orthogonal to the defect or the sphere inversion centered on the defect. Following \cite{Billo:2013jda}, we call the former the $S$-parity. In this note, we will be concerned with local operators living on the twist defect. In the Ising model, these correspond to local modifications of the lattice model in close proximity of the defect line. Applying radial quantization centered at a point on the defect, the defect local operators are seen to correspond to the states of the CFT quantized on a two-punctured sphere, with each puncture inducing the $ Z_2$ action on the bulk fields. The local operators fall into representations of the group $O^{+}(1,2)\times O'(2)$. The 1D conformal algebra $sl(2,\mathbb{R})$ is generated by operators $P,D,K$ (respectively translations, dilations and special conformal transformations) satisfying the commutation relations \begin{equation} [D,P]=i P\,,\quad [D,K]=-i K\,,\quad [K,P]=-2i D\,. \end{equation} Physically relevant irreps are the highest-weight representations labelled by the scale dimension $\Delta\geq 0$ of the primary $\mathcal{O}(x)$, i.e. $[K,\mathcal{O}(0)] = 0$, $[D,\mathcal{O}(0)] = i\Delta\mathcal{O}(0)$. $\Delta<0$ would lead to correlation functions growing with distance and also to a violation of the unitarity bound by the first descendant. The counterpart of unitarity in the Euclidean signature has been called `reflection-positivity'. In our setting, this property means that any correlation function of a configuration of real operators which is invariant under the $S$-parity is positive. Real operators in the Ising model are those appearing in the real operator algebra generated by the spin field. 
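The role of the first descendant in the last remark can be made explicit by a short norm computation in radial quantization (a standard argument, sketched here under the usual assumption that $K=P^{\dagger}$ on the defect Hilbert space):

```latex
% Norm of the first descendant of a primary state |\Delta\rangle,
% using K|\Delta\rangle = 0, D|\Delta\rangle = i\Delta|\Delta\rangle
% and the commutator [K,P] = -2iD:
\| P|\Delta\rangle \|^{2}
  = \langle\Delta| K P |\Delta\rangle
  = \langle\Delta| [K,P] |\Delta\rangle
  = -2i\,\langle\Delta| D |\Delta\rangle
  = 2\Delta\,\langle\Delta|\Delta\rangle .
```

Positivity of this norm requires $\Delta\geq 0$, with $\Delta=0$ forcing $P|\Delta\rangle=0$, i.e. the identity operator.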
Reflection-positivity of the 3d Ising model is not spoiled by the defect line since the lattice transfer matrix in a plane perpendicular to the defect is unchanged with respect to the bulk theory. This leads us to define the (Euclidean) conjugate $C(\mathcal{O}(x))\equiv\bar{\mathcal{O}}(x)$ as complex conjugate composed with $S$-parity, so that $\langle\bar{\mathcal{O}}(x)\mathcal{O}(y)\rangle\geq 0$. $C$ is an antilinear map on the algebra of local operators which reverses the $O(2)$ spin and commutes with the other quantum numbers. The commutativity properties of the symmetry algebra enable us to find a basis of defect primaries with well-defined $S$-parity, and $O(2)$ spin $s$, which is (half)integer for primaries even (odd) under the global $Z_2$. Each $s=0$ representation moreover carries $O(2)$-parity, denoted $B$. We are free to choose the phase of the $s=0$ primaries so that $C$ acts on them as the identity. The basis of $|s|>0$ primaries can be chosen so that $B(\mathcal{O}) = b_{\mathcal{O}}\bar{\mathcal{O}}$. From $BC = CB$ and $B^2=1$, we get $b_{\mathcal{O}} = e^{i\theta}$. Redefining $\mathcal{O}\rightarrow e^{-i\theta/2}\mathcal{O}$, we cancel the phase and get $B\mathcal{O} = \bar{\mathcal{O}}$, so that the $|s|>0$ primaries do not carry any $O(2)$-parity. Exactly as in higher dimensions, conformal invariance fixes the form of two and three point functions. The difference in 1d is that the three point function coefficient $c_{\mathcal{O}_1\mathcal{O}_2\mathcal{O}_3}$ may depend on the cyclic order of the operators (signature of the permutation), since this order is invariant under the connected component of the identity in the conformal group. In particular, note that for $x<y<z$ \begin{equation} \langle\mathcal{O}_1(x)\mathcal{O}_2(y)\mathcal{O}_3(z)\rangle = (-1)^{S_1+S_2+S_3}\langle\mathcal{O}_3(-z)\mathcal{O}_2(-y)\mathcal{O}_1(-x)\rangle\,, \end{equation} where $(-1)^{S_i}$ is the S-parity of $\mathcal{O}_i$. 
Hence \begin{equation} c_{\mathcal{O}_1\mathcal{O}_2\mathcal{O}_3} = (-1)^{S_1+S_2+S_3}c_{\mathcal{O}_2\mathcal{O}_1\mathcal{O}_3}. \label{eq:opecoefs} \end{equation} Arbitrary cyclic permutations are generated by $P+K$. The sign in \eqref{eq:opecoefs} will play an important role in one of our bootstrap equations. Primary operators on the defect satisfy the usual operator product expansion \begin{equation} \mathcal{O}_1(x)\mathcal{O}_2(y) = \sum_{\mathcal{O}_3}\frac{c_{\mathcal{O}_1\mathcal{O}_2\bar{\mathcal{O}}_3}}{|x-y|^{\Delta_1+\Delta_2-\Delta_3}}\mathcal{D}_{\Delta_i}(x-y,\partial)\mathcal{O}_3(y), \end{equation} where the sum runs over defect primaries and \begin{equation} \mathcal{D}_{\Delta_i}(x-y,\partial) = \sum_{n=0}^{\infty} \frac{(\Delta_1+\Delta_3-\Delta_2)_n}{n!(2\Delta_3)_n}(x-y)^n\partial^n \end{equation} is fixed by conformal symmetry. Moreover, bulk operators can be expanded in terms of the defect operators in the so-called bulk-defect OPE \cite{Cardy1990,McAvity1995}, which for a scalar primary in the bulk takes the form \begin{equation} \phi(x,z,\bar{z})=\sum_{\mathcal{O}}C^{\phi}_{\mathcal{O}}\frac{\bar{z}^{s_{\mathcal{O}}}}{|z|^{\Delta_{\phi}-\Delta_{\mathcal{O}}+s_{\mathcal{O}}}}\mathcal{B}_{\Delta_{\mathcal{O}}}(|z|,\partial)\mathcal{O}(x)\,,\label{eq:bulkdefectope} \end{equation} where we use complex coordinates $z,\bar z$ for the transverse directions, the sum is over defect primaries, and $s_{\mathcal{O}}$ denotes the $O(2)$ spin of $\mathcal{O}$. Conformal symmetry in the presence of the defect fixes $\langle\phi(x,z,\bar{z})\bar{\mathcal{O}}(y)\rangle$ up to an overall constant $C^{\phi}_{\mathcal{O}}$, and consequently determines \begin{equation} \mathcal{B}_{\Delta}(|z|,\partial) = \sum_{n=0}^{\infty}\frac{(-1)^n(\Delta)_n}{n!(2\Delta)_{2n}}|z|^{2n}\partial^{2n}\,. 
\end{equation} Notice that in particular, the bulk-defect OPE coefficient $C^{\phi}_{\mathds 1}$ gives the expectation value of $\phi$, \begin{eqnarray} \langle \phi(x,z,\bar z)\rangle= \frac{C^{\phi}_{\mathds 1}}{|z|^{\Delta_\phi}}\,.\label{eq:vev} \end{eqnarray} Applying a $2\pi$ rotation to \eqref{eq:bulkdefectope}, we see that the defect expansion of a bulk operator $\phi$ even (odd) under the global $Z_2$ contains only defect primaries with integer (half-integer) spins. Typically, the bulk-defect OPE will contain an infinite tower of defect primaries at each allowed spin. An exception is the bulk free field, studied below, which only features one defect primary at each spin. The defect spectrum always contains the displacement operator $D(x)$ which, when added to the Lagrangian, generates deformations of the defect. Its dimension and quantum numbers are fixed by the Ward identity expressing the breaking of transverse translational symmetry by the defect \begin{equation} \partial_a T^{ai}(x,z,\bar z) = D^{i}(x)\delta^2(z,\bar z), \end{equation} where $i$ labels the transverse coordinates. Hence $\Delta_D = 2$, $s_D = 1$, and $D$ is even under $S$-parity. Let us illustrate the above in the simplest setting -- the theory of the free massless real scalar $\phi$ in three dimensions, with a twist defect for the global $Z_2$. Applying the bulk equations of motion to the bulk-defect OPE of $\phi$, we find that the scale dimension of the defect primary of (half-integer) spin $s$ appearing in the OPE is $\Delta_s = |s| + 1/2$. We will denote this tower of operators by $\psi_s$. The field $\phi$ (and consequently each $\psi_s$) is even under $S$-parity. Reality of $\phi$ implies $\psi_{-s} = \bar{\psi}_s$. The lowest-lying non-identity defect primary is $\psi\equiv\psi_{1/2}$ with scale dimension $\Delta_{\psi} = 1$. 
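The dimensions $\Delta_s=|s|+1/2$ can be obtained by letting the transverse Laplacian act on the leading term of the bulk-defect OPE \eqref{eq:bulkdefectope}; the following one-line computation (keeping only the leading power of the transverse distance $r=|z|$) sketches the argument:

```latex
% Transverse part of the Laplacian acting on the leading defect-OPE term
% r^{a} e^{is\theta}, with a = \Delta_s - \Delta_\phi:
\Big(\partial_r^{2} + \tfrac{1}{r}\,\partial_r
     + \tfrac{1}{r^{2}}\,\partial_\theta^{2}\Big)\,
 r^{a} e^{is\theta}
 = \big(a^{2} - s^{2}\big)\, r^{a-2}\, e^{is\theta}.
```

The free equation of motion $\nabla^2\phi=0$ then forces $a^2=s^2$, and regularity at the defect selects $a=|s|$, i.e. $\Delta_s=\Delta_\phi+|s|=|s|+1/2$ for the 3d free scalar with $\Delta_\phi=1/2$. (Derivatives along the defect only produce descendants and do not affect the leading power.)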
Since the scale dimension of the bulk spin field in the 3d Ising model is close to the free-field value, we expect the lowest-lying operator in the Ising defect spectrum to have dimension close to 1 and to share the other quantum numbers with the free-theory $\psi$. Going back to the free theory, the $\bar\psi\psi$ OPE contains primary operators of schematic form $\mathcal{O}_n = \bar\psi\partial^n\psi$, $n\geq 0$. We have $\Delta_{\mathcal{O}_n} = n + 2$, $s_{\mathcal{O}_n} = 0$, and the $S$-parity, as well as the $O(2)$-parity, of $\mathcal{O}_n$ is $(-1)^n$. The $\psi\psi$ OPE features primaries with schematic form $\mathcal{S}_n = \psi\partial^{2n}\psi$ for $n\geq 0$. This time, we obtain $\Delta_{\mathcal{S}_n} = 2n+2$, $s_{\mathcal{S}_n} = 1$, and the operators are even under $S$-parity. $\mathcal{S}_0$ is the only candidate for the displacement operator, since forming further OPEs will only create operators with dimensions greater than 2. In the next section, we will compute the first-order corrections to the scale dimensions of some of these operators, as well as their three point function constants at the Wilson-Fisher fixed point in $4-\epsilon$ dimensions. In particular, we will check that $D \equiv \mathcal{S}_0$ is indeed protected at this order. \section{Epsilon Expansion} \label{sec:epsexpansion} In order to study the properties of the twist defect at the Wilson-Fisher fixed point in $4-\epsilon$ dimensions, we start with the $D=2-\epsilon$ dimensional twist defect in the free theory and add a bulk $\phi^4$ interaction at the critical coupling. Since renormalization is a local property, the bulk flow is unaffected by the presence of the defect, and so the critical coupling is the usual $g=(4\pi)^{2}\epsilon/3 + O(\epsilon^2)$. 
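The quoted value of the critical coupling is the standard one-loop Wilson-Fisher fixed point. Assuming the interaction is normalized as $g\,\phi^4/4!$ (a convention we adopt here; the text above does not fix it), the one-loop beta function gives

```latex
% One-loop beta function of the g\,\phi^{4}/4! interaction in
% 4-\epsilon dimensions, and its nontrivial zero:
\beta(g) = -\epsilon\, g + \frac{3\,g^{2}}{16\pi^{2}} + O(g^{3})
\qquad\Longrightarrow\qquad
g_{*} = \frac{16\pi^{2}}{3}\,\epsilon + O(\epsilon^{2})
      = \frac{(4\pi)^{2}}{3}\,\epsilon + O(\epsilon^{2}).
```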
Correlation functions of local bulk operators interpolate between two regimes -- when the typical distances between the insertions are much smaller than the distance from the defect, the correlation functions become those of the Wilson-Fisher fixed point with no defect. In the opposite case, the correlation functions are controlled by the CFT data of the defect. In the latter regime, the distance from the defect acts as a UV cutoff. In this section, we use bulk perturbation theory to study bulk correlation functions in the defect regime and thus determine the data associated to some important defect operators to the first order in $\epsilon$. The reader uninterested in the details may skip directly to the results which are displayed in table \ref{tab:MCWF}. \subsection{The two-point function in the free theory} First, we will need the two-point function in the free theory, i.e.\ the propagator. It is anti-periodic around the defect and satisfies \begin{equation} -\nabla^{2}G_{0}(x_1,x_2) =\frac{4\pi^{D/2 + 1}}{\Gamma\left(\frac{D}{2}\right)} \delta^{D+2}(x_1-x_2), \label{eq:propdef} \end{equation} where we chose the normalization standard in the CFT literature, resulting in the asymptotics \begin{equation} G_0(x_1,x_2) \stackrel{x_1\rightarrow x_2}{\sim}\frac{1}{|x_1-x_2|^{D}}\,. \label{eq:G0bulk} \end{equation} Let $x$ denote coordinates in the whole space and $y$ those along the defect. The propagator can be easily found in momentum space \begin{equation} G_{0}(x_1,x_2) = \frac{2\pi^{D/2}}{\Gamma\left( \frac{D}{2} \right)}\sum_{s\in\mathbb{Z}+\frac{1}{2}}\int\frac{d^{D}k}{(2\pi)^{D}} e^{is(\theta_1-\theta_2)}e^{ik\cdot (y_{1}-y_{2})}I_{|s|}\left(kr_{-}\right)K_{|s|}\left(kr_{+}\right),\label{eq:propagator} \end{equation} where the Fourier transform is over the coordinates along the defect, $\theta$ is the angle around the defect, $r_{-}=\min(r_1,r_2)$, $r_{+}=\max(r_1,r_2)$ and $I_s, K_s$ are the modified Bessel functions. 
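As a check, the normalization in \eqref{eq:propdef} is precisely the one that produces a unit coefficient in \eqref{eq:G0bulk}: comparing with the standard Green's function of the Laplacian in $n=D+2$ flat dimensions,

```latex
% Green's function of -\nabla^{2} in n flat dimensions:
-\nabla^{2}\,\frac{\Gamma\!\left(\frac{n}{2}-1\right)}{4\pi^{n/2}}\,
 \frac{1}{|x|^{\,n-2}} = \delta^{n}(x)
\qquad\stackrel{n=D+2}{\Longrightarrow}\qquad
-\nabla^{2}\,\frac{1}{|x_1-x_2|^{\,D}}
 = \frac{4\pi^{\frac{D}{2}+1}}{\Gamma\!\left(\frac{D}{2}\right)}\,
   \delta^{D+2}(x_1-x_2),
```

so a solution of \eqref{eq:propdef} indeed behaves as $|x_1-x_2|^{-D}$ at short distances, where the defect is invisible.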
The contribution from spin $s$ can be integrated to give \begin{align} G_0(x_1,x_2,s) &= \frac{1}{4^{\Delta}}\frac{\Gamma\left(\Delta\right)}{\Gamma\left( \frac{D}{2} \right)\Gamma\left(\Delta -\frac{D}{2} + 1\right)}\frac{e^{is(\theta_1-\theta_2)}}{(r_1r_2)^{\frac{D}{2}}}\xi^{-\Delta}\times\nonumber\\ &\times\phantom{}_2F_1\left(\Delta,\Delta - \frac{D}{2}+\frac{1}{2};2\Delta - D +1;-\frac{1}{\xi}\right), \label{eq:gs} \end{align} where $\Delta = |s| + D/2$ is the scaling dimension of the primary field $\psi_s$ of spin $s$ induced on the defect by $\phi$ in the bulk, and \begin{equation} \xi = \frac{(y_1-y_2)^{2} + (r_1 - r_2)^{2}}{4 r_1 r_2} \end{equation} is one of the two conformal cross-ratios, the other being the relative angle. The computation can be simplified by using conformal invariance -- it is enough to evaluate the spin-$s$ propagator at $r_1=r_2$ since conformal invariance fixes the dependence on $\xi$. $\xi\ll1$, $\Delta\theta\ll1$ is the regime controlled by the bulk CFT and $\xi\gg1$ the regime controlled by the defect data. Defect channel scalar conformal blocks for equal external dimensions can be read off from \eqref{eq:gs}, since these depend only on the internal dimension $\Delta$ and the space-time dimension. To compute the properties of $\psi_s$, we will need the spin-$s$ two-point function in four dimensions, where \eqref{eq:gs} reduces to \begin{equation} G_0(x_1,x_2,s)\stackrel{D=2}{=} \frac{e^{is(\theta_1-\theta_2)}}{4r_1r_2}\frac{\xi^{-\frac{1}{2}}}{\sqrt{1+\xi}\left( \sqrt{\xi} + \sqrt{1+\xi} \right)^{2|s|}}. \label{eq:gs2} \end{equation} We can check that the infinite sum over spins produces the correct short distance singularity. 
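As a quick consistency check of \eqref{eq:gs2}, its large-$\xi$ limit reproduces the defect-channel scaling dictated by the bulk-defect OPE. Writing $\lambda\equiv 1/\sqrt{4\xi}$,

```latex
% \xi \to \infty limit of the D = 2 spin-s propagator \eqref{eq:gs2},
% using \sqrt{1+\xi} \to \sqrt{\xi} and
% (\sqrt{\xi}+\sqrt{1+\xi})^{2|s|} \to (2\sqrt{\xi})^{2|s|}:
G_0(x_1,x_2,s) \;\longrightarrow\;
\frac{e^{is(\theta_1-\theta_2)}}{r_1 r_2}\,\frac{1}{(4\xi)^{\,|s|+1}}
= \frac{e^{is(\theta_1-\theta_2)}}{r_1 r_2}\;\lambda^{2\Delta},
\qquad \Delta = |s|+1,
```

in agreement with the bulk dimension $\Delta_\phi=1$ at $D=2$ and the defect operator dimension $\Delta=|s|+D/2$.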
Indeed, the full free two-point function can be resummed for $\theta_1=\theta_2$ \begin{equation} G_0(x_1,x_2) \stackrel{\theta_1=\theta_2}{=} \frac{1}{|x_1-x_2|^{D}}\frac{2\Gamma\left( \frac{D+1}{2} \right)}{\sqrt{\pi}\Gamma\left( \frac{D}{2} \right)}\xi^{-\frac{1}{2}} \phantom{}_2F_1\left(\frac{1}{2},\frac{D+1}{2};\frac{3}{2};-\frac{1}{\xi}\right). \label{eq:g2} \end{equation} When $\xi\ll1$, this reduces to the expected \eqref{eq:G0bulk}. For completeness, let us note that the full two-point function can be found explicitly in $D=2$ by summing \eqref{eq:gs2} \begin{equation} G_0(x_1,x_2) \stackrel{D=2}{=} \frac{1}{|x_1-x_2|^{2}}\frac{\cos\left(\frac{\theta_1-\theta_2}{2}\right)}{\sqrt{1 + \xi}}. \label{eq:d2full} \end{equation} \subsection{The two-point function at one loop} \subsubsection{Leading defect operators of half-integer spin} In this subsection, we will compute the scaling dimensions of the operators $\psi_s$ of spin $s=n+1/2$, $n\in\mathbb{Z}_{\geq0}$, induced by $\sigma$ on the defect, as well as the bulk-defect OPE coefficient $C^{\sigma}_{\psi_s}$ to the first order in $\epsilon$. If nothing too dramatic happens along the RG flow from the free massless scalar, these should be the leading operators of half-integer spin. We will consider the spin-$s$ component of the bulk two-point function when the two insertions are taken close to the defect. Let us place both points at radius $r$ and distance $y$ along the defect, relative angle $\theta$ and denote $\lambda=r/y=1/\sqrt{4\xi}$. From the bulk-defect OPE, we expect the spin-$s$ component of the two-point function to have the following leading behaviour as $\lambda\rightarrow0$ \begin{equation} G(x_1,x_2,s) = |C^\sigma_{\psi_s}|^{2}\frac{e^{is\theta}}{r^{2\Delta_\sigma}}\lambda^{2\Delta_{\psi_s}}(1+O(\lambda^2)).
\end{equation} The dependence of $\Delta_{\psi_s}$ and $C^{\sigma}_{\psi_s}$ on $\epsilon$ at one loop comes from two sources -- the change of the free theory result with space-time dimension and the one-loop self-energy diagram (see figure \ref{fig:2ptfunction}). \begin{figure}[b] \centering \includegraphics[width=.5\textwidth]{2ptfunction.pdf} \caption{The one-loop contribution to $\langle\phi(x_1)\phi(x_2)\rangle$} \label{fig:2ptfunction} \end{figure} Using \eqref{eq:gs}, one finds the free theory result \begin{equation} G_0(x_1,x_2,s) = \frac{\Gamma\left( s+ \frac{D}{2} \right)}{\Gamma\left(\frac{D}{2} \right)\Gamma\left( s+1 \right)}\frac{e^{is\theta}}{r^D}\lambda^{2s + D}(1+O(\lambda^2)). \end{equation} Expanding the Gamma functions, we obtain the free theory CFT data of $\psi_s$ to the first order in $\epsilon$ \begin{align} \Delta_{\psi_s} &\stackrel{free}{=} s + 1 - \frac{\epsilon}{2}\\ |C^{\sigma}_{\psi_s}| &\stackrel{free}{=} 1 + \frac{\psi(1)-\psi(s+1)}{4}\epsilon + O(\epsilon^2), \end{align} where $\psi(z)=(\log\Gamma(z))'$. The one-loop self-energy diagram should be evaluated in $D=2$ since the coupling constant is itself proportional to $\epsilon$. Taking care of the normalization and symmetry factor, the diagram's contribution is equal to \begin{equation} G_1(x_1,x_2,s) = -\frac{g}{32\pi^4}\int\limits_{\mathbb{R}^4} d^4 x_0\, G_0(x_1,x_0,s)G_0(x_0,x_0)G_0(x_0,x_2,s). \end{equation} We need a regularized expression for the full free two-point function between coincident points $G_0(x_0,x_0)$ in $D=2$. Starting either from \eqref{eq:propagator} and evaluating the sum over spins for $D<0$ (so in dimensional regularization), or taking the finite piece of \eqref{eq:g2}, we find \begin{equation} G_0(x_0,x_0) = - \frac{\Gamma\left( \frac{D+1}{2} \right)}{2^{D-1}D\sqrt{\pi}\Gamma\left( \frac{D}{2} \right)}\frac{1}{r_0^D} \stackrel{D=2}{=}-\frac{1}{8}\frac{1}{r_0^2}. 
\label{eq:gxx} \end{equation} Using $g = (4\pi)^2\epsilon/3$ and the free $D=2$ spin-$s$ propagator \eqref{eq:gs2}, and performing the trivial integration over the angle, the one-loop diagram becomes \begin{equation} G_1(x_1,x_2,s) = \frac{\epsilon}{24\pi}e^{is\theta}\int\limits_{\mathbb{R}^2}dy_0dz_0\int\limits_0^{\infty}\frac{dr_0}{r_0} \frac{(4r_0 r)^{2s}}{d_+ d_- e_+ e_- (d_+ + d_-)^{2s}(e_+ + e_-)^{2s}}, \label{eq:gs1loop} \end{equation} where \begin{align*} d_{\pm} &= \sqrt{\left( y_0 - \frac{y}{2} \right)^2 + z_0^{2} + \left( r_0\pm r \right)^2}\\ e_{\pm} &= \sqrt{\left( y_0 + \frac{y}{2} \right)^2 + z_0^{2} + \left( r_0\pm r \right)^2}. \end{align*} When $\lambda\rightarrow 0$, the integral is proportional to $\lambda^{2(s+1)}\log\lambda$, which gives precisely the anomalous dimension of $\psi_s$. The asymptotic expansion (see Appendix \ref{app:sintegral}) reveals that \begin{equation} G_1(x_1,x_2,s) = - \frac{\epsilon}{12 s}\frac{e^{is\theta}}{r^2}\lambda^{2(s+1)}\left(\log\lambda + o(1) \right) \end{equation} as $\lambda\rightarrow 0$. It follows that the one-loop contribution to $\Delta_{\psi_s}$ is $-\epsilon/(24s)$ and that to $|C^\sigma_{\psi_s}|$ vanishes. The CFT data at the Wilson-Fisher fixed point to the first order in $\epsilon$ are therefore \begin{align} \Delta_{\psi_s} &= s + 1 - \left( \frac{1}{2} + \frac{1}{24 s} \right)\epsilon + O(\epsilon^2)\label{eq:deltapsi} \\ |C^{\sigma}_{\psi_s}| &= 1 + \frac{\psi(1)-\psi(s+1)}{4}\epsilon + O(\epsilon^2). \end{align} The inverse power-law dependence of the anomalous dimension on spin is in agreement with the results of \cite{Fitzpatrick:2012yx, Komargodski:2012ek}. The comparison to Monte Carlo data on $\psi=\psi_{1/2}$ and $\psi_{3/2}$ presented in \cite{Billo:2013jda} is reassuring, see table \ref{tab:MCWF}.
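Setting $\epsilon=1$ in the one-loop formulas gives the naive extrapolation to the three-dimensional defect quoted in the Wilson-Fisher column of table \ref{tab:MCWF}, while the free-theory column follows from the $D=1$ mode coefficients. The short script below (our own check; the digamma function is evaluated by a truncated series) reproduces the entries for $\psi_s$:

```python
# Reproducing the psi_s entries of the table at eps = 1 (our own sketch).
from math import gamma, sqrt

def digamma(x, n_terms=100000):
    """psi(x) from the series psi(x) = -euler_gamma + sum_k (1/(k+1) - 1/(k+x))."""
    euler_gamma = 0.5772156649015329
    return -euler_gamma + sum(1.0 / (k + 1) - 1.0 / (k + x) for k in range(n_terms))

def delta_psi(s, eps=1.0):
    # one-loop scaling dimension of psi_s
    return s + 1 - (0.5 + 1.0 / (24 * s)) * eps

def C_sigma_psi(s, eps=1.0):
    # one-loop bulk-defect OPE coefficient
    return 1 + (digamma(1) - digamma(s + 1)) / 4 * eps

def C_free_3d(s):
    # free theory at D = 1: |C|^2 = Gamma(s + D/2)/(Gamma(D/2) Gamma(s + 1))
    return sqrt(gamma(s + 0.5) / (gamma(0.5) * gamma(s + 1)))

assert abs(delta_psi(0.5) - 0.917) < 5e-4    # table: Delta_psi
assert abs(delta_psi(1.5) - 1.972) < 5e-4    # table: Delta_{psi_{3/2}}
assert abs(C_sigma_psi(0.5) - 0.847) < 5e-4  # table: |C^sigma_psi| at WF
assert abs(C_sigma_psi(1.5) - 0.680) < 5e-4  # table: |C^sigma_{psi_{3/2}}| at WF
assert abs(C_free_3d(0.5) - 0.798) < 5e-4    # table: sqrt(2/pi) in free theory
assert abs(C_free_3d(1.5) - 0.651) < 5e-4    # table: free theory
```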
\begin{center} \begin{table}\centering \begin{tabular}{c | c | c | l} \hline quantity & 3D free theory & Wilson-Fisher & Monte Carlo\\\hline $\Delta_{\psi}$ & 1 & 0.917 & 0.9187(6)\\ $\Delta_{\psi_{3/2}}$ & 2 & 1.972 & 1.99(5)\\ $\Delta_D$ & 2 & 2 & 2\\ $\Delta_s$ & 2 & 2.167 & 2.27(1)\\ $\Delta_{p^0}$ & 3 & 2.833 & 2.9(2)\\ $\Delta_{t_+}$ & 3 & 3.111 & 3.1(5)\\ $|C^\sigma_\psi|$ & 0.798 & 0.847 & 0.968(2)\\ $|C^\sigma_{\psi_{3/2}}|$ & 0.651 & 0.680 & 0.61(9)\\ $C^{\epsilon}_{\mathbf{1}}$ & -0.225 & -0.141 & -0.167(4) \end{tabular} \caption{A comparison of lattice data and the Wilson-Fisher fixed point at one loop} \label{tab:MCWF} \end{table} \end{center} \subsection{Energy operator} In this subsection, we will consider the two-point function in the bulk limit $\xi\ll1$ in order to find the one-point function of the energy operator $\epsilon$ in the presence of the defect at one loop. We put the two insertions at the same angle $\theta$, the same radius $r$, and at distance $y$ along the defect, so that $\lambda = r/y \gg 1$. The bulk OPE and conformal invariance of the one-point function dictate that \begin{equation} G(x_1,x_2) = \frac{1}{y^{2\Delta_\sigma}}\left[1 + c_{\sigma\sigma\epsilon}C^{\epsilon}_{\mathbf{1}}\lambda^{-\Delta_\epsilon}(1 + o(1))\right], \end{equation} where the $o$-notation now refers to the limit $\lambda\rightarrow\infty$. Expanding the free-theory result \eqref{eq:g2} around $\xi = \infty$ yields \begin{equation} G_0(x_1,x_2) = \frac{1}{y^{D}}\left[1 -\frac{2^{-D}\Gamma\left( \frac{D+1}{2} \right)}{\sqrt{\pi}\Gamma\left( \frac{D+2}{2} \right)} \lambda^{-D}(1 + O(\lambda^{-2}))\right], \end{equation} which gives the following free-theory predictions for the CFT data associated to $\epsilon$ \begin{align} \Delta_{\epsilon} &\stackrel{free}{=} 2 - \epsilon\\ c_{\sigma\sigma\epsilon}C^{\epsilon}_{\mathbf{1}} &\stackrel{free}{=} -\frac{1}{8}\left[1 + \frac{2\log2 - \psi(3/2) + \psi(2)}{2}\epsilon\right] + O(\epsilon^2).
\end{align} The one-loop self-energy can be evaluated using the full 4D propagator \eqref{eq:d2full}. Rather than starting directly from \eqref{eq:d2full}, it is more convenient to sum \eqref{eq:gs1loop} over the spins, setting $\theta=0$ \begin{equation} G_1(x_1,x_2) = \frac{\epsilon}{3\pi}\int\limits_{\mathbb{R}^2}dy_0dz_0\int\limits_0^{\infty}dr_0 \frac{r}{d_+ d_- e_+ e_-}\frac{(d_++d_-)(e_++e_-)}{(d_++d_-)^{2}(e_++e_-)^{2}-(4rr_0)^{2}}. \label{eq:energy} \end{equation} Asymptotic expansion of this integral as $\lambda\rightarrow\infty$ shows (see Appendix \ref{app:eintegral}) \begin{equation} G_1(x_1,x_2) = \frac{\epsilon}{y^{2}}\lambda^{-2}\left[\frac{1}{24}\log\lambda + \frac{\log 2}{12} + o(1)\right]. \end{equation} We checked this result agrees with the computation which uses the full propagator \eqref{eq:d2full}. Combining the tree-level and one-loop result, we find the following properties of $\epsilon$ at one loop \begin{align} \Delta_{\epsilon} &= 2 -\frac{2}{3}\epsilon + O(\epsilon^2)\\ c_{\sigma\sigma\epsilon}C^{\epsilon}_{\mathbf{1}} &= -\frac{1}{8}\left[1 + \frac{2\log2 + 3\psi(2)-3\psi(3/2)}{6}\epsilon\right] + O(\epsilon^2). \end{align} The formula for $\Delta_\epsilon$ is in agreement with the standard result obtained using perturbation theory without the defect. We reproduce the computation in Appendix \ref{app:nodefect} in order to find the OPE coefficient $c_{\sigma\sigma\epsilon} = \sqrt{2}(1-\epsilon/6) + O(\epsilon^2)$. It follows that the one-point function coefficient of energy is \begin{equation} C^{\epsilon}_{\mathbf{1}} = -\frac{1}{8\sqrt{2}}\left[1 + \frac{1 + 2\log2 + 3\psi(2)-3\psi(3/2)}{6}\epsilon\right] + O(\epsilon^2). \end{equation} As shown in table \ref{tab:MCWF}, the first order result is again in a good agreement with Monte Carlo data. \subsection{The four-point function} \subsubsection{Leading defect operators of positive integer spin} Operators on the defect of integer spin can be found in the $\psi_{s_1}\psi_{s_2}$ OPEs. 
The most important of these is the displacement operator of spin one and protected dimension $D+1=3-\epsilon$. In the free theory, the normal ordered product $\psi_{s_1}\psi_{s_2}$ has scaling dimension $|s_1|+|s_2|+2 - \epsilon$. Consequently, the space of lowest-lying operators of positive integer spin $s$ is generated by all $\psi_{s_1}\psi_{s_2}$ with $s_1,s_2>0$ and $s_1+s_2 = s$. After flowing to the Wilson-Fisher fixed point, this degeneracy is lifted. Let us denote $\mathcal{O}_{s,m} \equiv \psi_{m-\frac{1}{2}}\psi_{s-m+\frac{1}{2}}$ for $m=1,\ldots,\lfloor\frac{s+1}{2}\rfloor$, with the exception $\mathcal{O}_{2k-1,k}\equiv\psi_{k-\frac{1}{2}}\psi_{k-\frac{1}{2}}/\sqrt{2}$, so that $\mathcal{O}_{s,m}$ is normalized in the free theory. At the Wilson-Fisher fixed point, the matrix of two-point functions of the $\mathcal{O}_{s,m}$ is, to the first order in $\epsilon$, \begin{equation} \langle\mathcal{O}_{s,m}(y_1)\bar{\mathcal{O}}_{s,n}(y_2)\rangle = \frac{1}{y_{12}^{2s+4 - 2\epsilon}}\left[ \delta_{mn}- 2\epsilon(\log y_{12})\Delta^s_{mn} \right], \end{equation} where we ignored the possible corrections sub-leading in $y_{12}$. Denoting by $\delta_s$ the minimal eigenvalue of $\Delta^s_{mn}$, the lowest dimension at spin $s\in\mathbb{Z}_{>0}$ is, to the first order in $\epsilon$ \begin{equation} \Delta_{s} = s + 2 + \epsilon (\delta_s - 1). \end{equation} In particular, if the displacement $D=\mathcal{O}_{1,1}$ is protected, we should have $\delta_1=\Delta^1_{11} = 0$. In the following, we will find the matrix $\Delta^s_{mn}$ by studying the various spin components of the four-point function of $\phi$ when all four insertions are at the same radius $r$ with $|y_{12}|=|y_{34}| = r/\lambda$ and $|y_{13}| = r/(\lambda\mu)$ such that $\lambda \ll 1$, $\mu\ll1$.
Using first the bulk-defect OPE, and then OPE on the defect, we find the leading piece of the four-point function for $s_1,s_2>0$, $s_3,s_4<0$ and $s_1+s_2 = -s_3 - s_4 = s$ \begin{align} G\left(\{x_j,s_j\}_{j=1}^4\right) &= \frac{\prod_{j=1}^4\left(C^\sigma_{\psi_{s_j}} e^{is_j\theta_j}\lambda^{\Delta_{\psi_{s_j}}}\right)}{r^{4\Delta_{\sigma}}}\times\nonumber\\ &\times c_{\psi_{s_1}\psi_{s_2}\bar{\mathcal{O}}_{s,m}}c_{\psi_{s_3}\psi_{s_4}\mathcal{O}_{s,n}}\mu^{2s+4-2\epsilon}\left[\delta_{mn} + 2\epsilon(\log\mu)\Delta^s_{mn} \right], \label{eq:4ptfn} \end{align} where $\mathcal{O}_{s,m}$ is the normalized product $\psi_{s_1}\psi_{s_2}$ and $\bar{\mathcal{O}}_{s,n}$ is the normalized product $\psi_{s_3}\psi_{s_4}$. Recall that to $O(\epsilon^0)$, we have $C^\sigma_{\psi_{s_j}} = 1$ and from Wick's theorem \begin{equation} c_{\psi_{s_1}\psi_{s_2}\bar{\mathcal{O}}_{s,m}} = \begin{cases} 1\quad&\textrm{if }s_1\neq s_2\\ \sqrt{2}\quad&\textrm{if }s_1= s_2 \end{cases}. \end{equation} In bulk perturbation theory, the contributions to the four-point function at the first order come from the diagrams with two disconnected loop-corrected propagators, and the contact four point interaction (see figure \ref{fig:4ptfunction}). \begin{figure}[t] \centering \includegraphics[width=.5\textwidth]{4ptfunction.pdf} \caption{The diagrams contributing to the properties of $\psi_{s_1}\psi_{s_2}$ up to one loop. 
The double line denotes the one-loop-corrected propagator.} \label{fig:4ptfunction} \end{figure} The former give the leading contribution \begin{equation} G_{\mathrm{disc.}}\left(\{x_j,s_j\}_{j=1}^4\right) = \frac{\prod_{j=1}^4\left[ e^{is_j\theta_j}(\lambda\mu)^{\Delta_{\psi_{s_j}}}\right]}{r^{4\Delta_{\sigma}}}\left( \delta_{s_1,-s_3}\delta_{s_2,-s_4}+\delta_{s_1,-s_4}\delta_{s_2,-s_3} \right), \end{equation} while the contact interaction leads to the integral (following from \eqref{eq:gs2}) \begin{equation} G_{\mathrm{con.}}\left(\{x_j,s_j\}_{j=1}^4\right) = - \frac{\epsilon}{2^{7}3\pi}\int\limits_{\mathbb{R}^2}dy_0dz_0\int\limits_0^{\infty}\!\!\frac{dr_0}{r^4r_0^3}\prod_{j=1}^{4}\frac{e^{is_j\theta_j}}{\sqrt{\xi_j}\sqrt{1+\xi_j}\left( \sqrt{\xi_j} + \sqrt{1+\xi_j} \right)^{2|s_j|}}, \label{eq:integerspindim} \end{equation} where \begin{equation} \xi_j = \frac{(y_j - y_0)^2 + z_0 ^ 2 + (r-r_0)^2}{4rr_0}. \end{equation} Asymptotic expansion gives the following leading piece (see Appendix \ref{app:Sintegral}) \begin{equation} G_{\mathrm{con.}}\left(\{x_j,s_j\}_{j=1}^4\right) = \frac{4\epsilon}{3(s + 1)}\left(\log\mu+O(1)\right)\frac{1}{r^4}\prod_{j=1}^4\left[e^{is_j\theta_j}(\lambda\mu)^{|s_j| + 1}\right], \label{eq:g4con} \end{equation} which is consistent with \eqref{eq:4ptfn}. Putting the disconnected and contact interaction diagrams together, we find the following values of the matrix of scaling dimensions $\Delta^s_{mn}$ \begin{equation} \Delta^s_{mn} = \begin{cases} \frac{2}{3(s+1)}\quad&\textrm{if }m\neq n\\ \frac{2}{3(s+1)}-\frac{1}{12}\left( \frac{1}{2m-1} + \frac{1}{2s-2m+1} \right)\quad&\textrm{if }m=n, 2m\neq s+ 1\\ \frac{1}{3(s+1)} - \frac{1}{6s}\quad&\textrm{if }m=n, 2m= s+ 1 \end{cases} \end{equation} The first term comes from the contact interaction and the second from the disconnected diagrams (if present), where we need to use the one-loop-corrected $\Delta_{\psi_{s_j}}$ from \eqref{eq:deltapsi}. 
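The matrix $\Delta^s_{mn}$ above is easily diagonalized numerically. The sketch below is our own (it uses power iteration with a Gershgorin shift rather than any particular library) and recovers the lowest eigenvalue $\delta_s$:

```python
# Diagonalizing the one-loop matrix Delta^s_{mn} (our own sketch).

def dim_matrix(s):
    """The matrix Delta^s_{mn}, m,n = 1..floor((s+1)/2), for integer spin s."""
    size = (s + 1) // 2
    c = 2.0 / (3 * (s + 1))
    A = [[c] * size for _ in range(size)]
    for m in range(1, size + 1):
        if 2 * m != s + 1:
            A[m - 1][m - 1] = c - (1.0 / (2 * m - 1) + 1.0 / (2 * s - 2 * m + 1)) / 12
        else:
            A[m - 1][m - 1] = 1.0 / (3 * (s + 1)) - 1.0 / (6 * s)
    return A

def min_eigenvalue(A, n_iter=5000):
    """Smallest eigenvalue of a symmetric matrix via power iteration on
    sigma*I - A, where sigma is a Gershgorin upper bound on the spectrum."""
    n = len(A)
    sigma = max(sum(abs(x) for x in row) for row in A) + 1.0
    v = [1.0] * n
    lam = sigma
    for _ in range(n_iter):
        w = [sigma * v[i] - sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return sigma - lam

assert abs(min_eigenvalue(dim_matrix(1))) < 1e-9            # displacement protected
assert abs(min_eigenvalue(dim_matrix(2)) - 1.0 / 9) < 1e-9  # t_+ anomalous dim eps/9
assert min_eigenvalue(dim_matrix(3)) < 0                    # odd spins dip below zero
```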
The first case occurs when $\{s_1,s_2\}\neq\{-s_3,-s_4\}$, in which case only the contact interaction contributes. The second case occurs when $\{s_1,s_2\}=\{-s_3,-s_4\}$ but $s_1\neq s_2$. Finally, the third case occurs when $s_1=s_2=-s_3=-s_4$. The first thing to notice is that $\Delta^1_{11} = 0$, so the displacement operator is indeed protected at the first order in $\epsilon$. The next simplest case is $s=2$, with a single operator $t_{+} = \psi\psi_{\frac{3}{2}}$ of free-theory dimension $4-\epsilon$ and anomalous dimension $\epsilon/9$. Numerical results for the lowest eigenvalue of $\Delta^s_{mn}$ are shown in figure \ref{fig:anomdims}. The leading anomalous dimension converges to $-1/12$ as $s\rightarrow\infty$, which can be understood by noting that in this limit, $(e^{j})_n=\delta_{nj}$ becomes an eigenvector of $\Delta_{mn}^s$ with eigenvalue \begin{equation} \lambda_j = -\frac{1}{12(2j-1)}\,. \end{equation} It would be interesting to understand the asymptotic properties of the spectrum along the lines of \cite{Fitzpatrick:2012yx, Komargodski:2012ek}. Unfortunately, the Monte Carlo data on higher-spin operators are not yet precise enough to provide a test of our results. \begin{figure}[htb] \centering \includegraphics[width=.8\textwidth]{anomdims.pdf} \caption{Anomalous dimensions of the leading operators of spin $s$ at one loop. Dashed blue lines interpolate between the even and odd spins. They both asymptote to the dashed red line $\delta_s=-1/12$.} \label{fig:anomdims} \end{figure} Computation of the next-to-leading order in $\mu$ of the contact interaction diagram \eqref{eq:g4con} provides the first order correction to the OPE coefficients $c_{\psi_{s_1}\psi_{s_2}\mathcal{O}_{s,m}}$. The disconnected diagrams contribute only to $C^\sigma_{\psi_{s_j}}$.
The computation is included in Appendix \ref{app:Dintegral}, the result being \begin{align} G_{\mathrm{con.}}\left(x_1,\frac{1}{2};x_2,\frac{1}{2};x_3,-\frac{1}{2};x_4,-\frac{1}{2}\right) &= \epsilon\left( \frac{2}{3}\log\mu - \frac{8\log2 - 5}{6} + o(1)\right)\times\nonumber\\ &\times\frac{(\lambda\mu)^6}{r^4}e^{i(\theta_1+\theta_2-\theta_3-\theta_4)/2}, \label{eq:4ptfnresult} \end{align} from which it follows that \begin{equation} c_{\psi\psi\bar{D}} = \sqrt{2}\left( 1 - \frac{8\log2 - 5}{24}\epsilon + O(\epsilon^2) \right). \end{equation} \subsection{The leading defect scalar and pseudoscalar} The above discussion was concerned only with operators of positive integer spin, but it is a simple matter to use the same method to find the dimension of the leading defect (non-identity) scalar. In the free theory, it is the operator $s=\bar{\psi}\psi$ of dimension $3-\epsilon$. Now we can repeat the steps above with $s_1 = - s_2 = -s_3 = s_4=1/2$ and find that the computation is almost identical to that for the displacement operator, the only difference being in the free-theory OPE coefficients ($c_{\psi\psi\bar{D}}=\sqrt{2}$, $c_{\psi\bar{\psi}s} = 1$). In both cases, the contribution from the disconnected diagrams is $-\epsilon/6$ (twice the anomalous dimension of $\psi$). The contact interaction diagram contributes $\epsilon/6$ to the displacement, but $\epsilon/3$ to the scalar since in the former case, it is reduced by a factor of $|c_{\psi\psi\bar{D}}|^2=2$. Hence the dimension of $s$ is \begin{equation} \Delta_{s} = 3 - \frac{5}{6}\epsilon + O(\epsilon^2). \end{equation} Table \ref{tab:MCWF} indicates that already the first order provides a considerable improvement towards the Monte Carlo results with respect to the free theory. We can also use the constant piece of \eqref{eq:4ptfnresult} to conclude that \begin{equation} c_{\bar\psi\psi s} = 1 - \frac{8\log2 - 5}{12}\epsilon + O(\epsilon^2).
\end{equation} The leading free-theory defect operator with spin zero and negative $S$-parity is $p^0 = \bar\psi \overleftrightarrow\partial \psi/2 = [(\partial\bar\psi)\psi - \bar\psi(\partial\psi) ]/2$. We wish to study it using the $\langle \phi(x_1)\overleftrightarrow\partial\phi(x_2)\phi(x_3)\overleftrightarrow\partial\phi(x_4)\rangle$ bulk correlator, where the derivatives act along the defect. We put all four points at the same distance from the defect and focus on the correct spin component of the four-point function. The contact interaction diagram for $\langle\phi(x_1)\phi(x_2)\phi(x_3)\phi(x_4)\rangle$ is completely symmetric under any permutation of the four points. The antisymmetric derivative acting on $x_3$, $x_4$ thus makes the diagram vanish in the limit $x_3\rightarrow x_4$. Hence the properties of $p^0$ to the first order are determined solely by the renormalization of $\psi$. We find \begin{equation} \Delta_{p^0} = 2\Delta_{\psi} + 1 + O(\epsilon^2) = 4 - \frac{7}{6}\epsilon + O(\epsilon^{2})\,. \end{equation} The generalized free theory gives the three-point function constant \begin{equation} c_{\bar\psi\psi p^0} = \sqrt{\Delta_{\psi}} + O(\epsilon^2) = \sqrt{\frac{3}{2}}\left( 1 - \frac{7}{36}\epsilon + O(\epsilon^2) \right). \end{equation} We will be able to compare these predictions with data from the conformal bootstrap in the following section. \section{Bootstrapping the twist defect}\label{sec:bootstrap} In this section we will apply the methods of the numerical conformal bootstrap to the one-dimensional defect directly. As outlined in the introduction, there are two distinct but related crossing equations which are relevant for our problem. Analysis of the first leads to an operator dimension bound in one dimension, similar to those derived between 2 and 4 dimensions in references \cite{El-Showk:2013nia,ElShowk:2012ht,Rattazzi:2008pe,Rychkov:2009ij}. The bound appears to be saturated by the generalized free fermion.
Adding an extra equation and demanding the existence of a displacement operator leads to more interesting bounds, and we are able to reconstruct the twist defect spectrum. \subsection{The bootstrap equations} The bootstrap equations that we use result from expanding four-point functions of $\psi$, $\bar\psi$ in different OPE channels. Four points on a line have only one invariant under the $SL(2,\mathbb{R})$ action. We take it to be \begin{equation} z = \frac{x_{12}x_{34}}{x_{13}x_{24}}\,. \end{equation} We fix the order of the insertions to $x_1<x_2<x_3<x_4$, which results in the constraint $0<z<1$. The four-point function of defect primaries $\mathcal{O}_i$ of equal scale dimension $d$ can be written as \begin{equation} \langle\mathcal{O}_1(x_1)\mathcal{O}_2(x_2)\mathcal{O}_3(x_3)\mathcal{O}_4(x_4)\rangle = \frac{1}{|x_{12}|^{2d}|x_{34}|^{2d}}g(z)\,, \end{equation} where $g(z)$ is an analytic function for $z\in(0,1)$. Colliding $x_1$ and $x_2$ leads to the series expansion in conformal blocks \begin{equation} g(z) = \sum_{\mathcal{O}}c_{12\mathcal{O}}c_{34\bar{\mathcal{O}}} G_{\Delta_{\mathcal{O}}}(z), \end{equation} where the sum runs over defect primaries, and $G_{\Delta}(z)$ is the 1d conformal block for equal external dimensions and internal dimension $\Delta$. The conformal blocks are given by \cite{Dolan2011} \begin{equation} G_{\Delta}(z)= z^\Delta\, _2 F_1(\Delta,\Delta;2\Delta;z)\,. \end{equation} Colliding instead $x_2$ and $x_3$ and equating the two different representations of the four-point function leads to the crossing equation \begin{equation} \sum_{\mathcal{O}}c_{12\mathcal{O}}c_{34\bar{\mathcal{O}}} z^{-2d}G_{\Delta_{\mathcal{O}}}(z) = \sum_{\mathcal{O}}c_{23\mathcal{O}}c_{41\bar{\mathcal{O}}} (1-z)^{-2d}G_{\Delta_{\mathcal{O}}}(1-z)\, \end{equation} valid for $z\in(0,1)$. $U(1)$ symmetry requires that a nonzero four-point function of $\psi$ and $\bar\psi$ must contain two of each. 
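As a quick numerical check of the blocks (our own sketch), one can verify the standard fact that $G_\Delta(z)$ is an eigenfunction of the quadratic $SL(2,\mathbb{R})$ Casimir, $z^2(1-z)G''_\Delta-z^2G'_\Delta=\Delta(\Delta-1)G_\Delta$, by differentiating the defining series term by term:

```python
# Casimir check of the 1d blocks G_Delta(z) = z^Delta 2F1(Delta,Delta;2Delta;z)
# (our own numerical sketch; series differentiated term by term).
def block_and_derivs(delta, z, n_terms=200):
    """Return G_Delta(z) and its first two derivatives from the power series."""
    g = g1 = g2 = 0.0
    coeff = 1.0  # (Delta)_n^2 / ((2 Delta)_n n!) at n = 0
    for n in range(n_terms):
        p = delta + n
        zp = z ** (p - 2)
        g += coeff * zp * z * z        # coeff * z^p
        g1 += coeff * p * zp * z       # coeff * p * z^(p-1)
        g2 += coeff * p * (p - 1) * zp  # coeff * p (p-1) * z^(p-2)
        coeff *= (delta + n) ** 2 / ((2 * delta + n) * (n + 1))
    return g, g1, g2

for delta in (0.5, 1.0, 2.0, 3.7):
    for z in (0.2, 0.5):
        g, g1, g2 = block_and_derivs(delta, z)
        # SL(2,R) Casimir eigenvalue equation
        assert abs(z * z * (1 - z) * g2 - z * z * g1 - delta * (delta - 1) * g) < 1e-9
```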
There are two nonequivalent orders to consider: $\langle\bar\psi\psi\bar\psi\psi\rangle$ and $\langle\bar\psi\psi\psi\bar\psi\rangle$. Focusing on the first case, the exchanged operators come from the $\bar\psi\psi$ OPE, so they have $U(1)$ spin zero. Moreover, their S-parity equals the $O(2)$ parity since the two symmetries require in turn \begin{equation} \langle \bar\psi\psi\mathcal{O}\rangle = (-1)^{S(\mathcal{O})}\langle\psi\bar\psi\mathcal{O}\rangle =(-1)^{B(\mathcal{O})}\langle\psi\bar\psi\mathcal{O}\rangle \,. \end{equation} We have seen this correlation between the parities in the $\bar\psi\psi$ OPE in the free theory example of section \ref{sec:defect}. Of course, the $\bar\psi\psi$ OPE starts with the identity. The coefficients of the conformal block expansion in the (12)(34) channel are $c_{\bar\psi\psi\mathcal{O}}c_{\bar{\mathcal{O}}\bar\psi\psi}$. Using the Hilbert space formalism, this equals \begin{equation} \langle\psi|\psi|\mathcal{O}\rangle\langle\mathcal{O}|\bar\psi|\psi\rangle = |\langle\psi|\psi|\mathcal{O}\rangle|^2 = |c_{\bar\psi\psi\mathcal{O}}|^2. \end{equation} The (23)(41) channel contains the same set of spin-0 operators and the corresponding coefficients are $|c_{\psi\bar\psi\mathcal{O}}|^2$. But $c_{\psi\bar\psi\mathcal{O}} = \pm c_{\bar\psi\psi\mathcal{O}}$ thanks to the parity symmetries, so that the first bootstrap equation can be written as \begin{equation} \sum_{\mathcal{O}}|c_{\bar\psi\psi\mathcal{O}}|^2\left[z^{-2d}G_{\Delta_{\mathcal{O}}}(z)-(1-z)^{-2d}G_{\Delta_{\mathcal{O}}}(1-z)\right] = 0\,. \label{eq:beq1} \end{equation} We have thus obtained a conventional crossing equation with positive and equal coefficients on both sides, directly analogous to those used in higher dimensions \cite{ElShowk:2012ht,El-Showk:2013nia}. The equation resulting from the crossing symmetry of the $\langle\bar\psi\psi\psi\bar\psi\rangle$ correlation function is less standard.
The (12)(34) channel still consists of primaries from the $\bar\psi\psi$ OPE, but this time, the coefficient is \begin{equation} c_{\bar\psi\psi\mathcal{O}}c_{\psi\bar\psi\bar{\mathcal{O}}} = (-1)^{S(\mathcal{O})}c_{\bar\psi\psi\mathcal{O}}c_{\bar\psi\psi\bar{\mathcal{O}}} = (-1)^{S(\mathcal{O})}|c_{\bar\psi\psi\mathcal{O}}|^2, \end{equation} so that the conformal block expansion can distinguish between scalars and pseudoscalars at the cost of lost positivity. The (23)(41) channel comes from the $\psi\psi$ OPE, and so contains only spin-1 operators even under S-parity ($\langle\psi\psi\mathcal{S}\rangle =(-1)^{S(\mathcal{S})}\langle\psi\psi\mathcal{S}\rangle $). The coefficients are manifestly positive since \begin{equation} c_{\bar{\psi}\bar\psi\mathcal{S}}c_{\psi\psi\bar{\mathcal{S}}} = \langle\psi|\bar{\psi}|\mathcal{S}\rangle\langle\mathcal{S}|\psi|\psi\rangle = |c_{\psi\psi\bar{\mathcal{S}}}|^2\,. \end{equation} The resulting bootstrap equation thus takes the form \begin{align} \sum_{\mathcal{O}^+}|c_{\bar\psi\psi\mathcal{O}^+}|^2z^{-2d}G_{\Delta_{\mathcal{O}^+}}(z) &- \sum_{\mathcal{O}^-}|c_{\bar\psi\psi\mathcal{O}^-}|^2z^{-2d}G_{\Delta_{\mathcal{O}^-}}(z) =\nonumber\\ &= \sum_{\mathcal{S}}|c_{\psi\psi\bar{\mathcal{S}}}|^2(1-z)^{-2d}G_{\Delta_{\mathcal{S}}}(1-z)\,, \label{eq:beq2} \end{align} where the first and second sums on the LHS run over parity-even and parity-odd scalars, respectively, and the sum on the RHS runs over spin-1 primaries. We expect the lowest operator in the $\psi\psi$ OPE to be the displacement. Note that the difference in sign between the two bootstrap equations goes hand in hand with the fact that the crossed channel in \eqref{eq:beq1} starts with the identity, while in \eqref{eq:beq2}, it starts at $\Delta>0$.
In the former case, the scalars and pseudoscalars together produce the strong singularity of the identity in the crossed channel, but in the latter, their effect must cancel to leave a weaker singularity corresponding to the first spin-1 primary. Since the singularity in the crossed channel is produced by the tail of the set of primaries, it follows that there are infinitely many scalars as well as infinitely many pseudoscalars. There is a family of simple solutions of the two bootstrap equations corresponding to a generalized free complex scalar in 1d. In this case, Wick's theorem implies ($x_1<x_2<x_3<x_4$) \begin{align} \langle\bar\psi(x_1)\psi(x_2)\bar\psi(x_3)\psi(x_4)\rangle &= \frac{1}{|x_{12}|^{2d}|x_{34}|^{2d}}\left[1 + \left( \frac{z}{1-z} \right)^{2d}\right]\label{eq:gf1}\\ \langle\bar\psi(x_1)\psi(x_2)\psi(x_3)\bar\psi(x_4)\rangle &= \frac{1}{|x_{12}|^{2d}|x_{34}|^{2d}}\left(1 + z^{2d}\right)\label{eq:gf2}\,. \end{align} The first term in each bracket is the contribution of the identity, and the rest can be expanded in 1d conformal blocks as \begin{align} \left( \frac{z}{1-z} \right)^{2d} &= \sum_{n=0}^\infty\frac{(2d)^2_n}{n!(4d+n-1)_n}G_{2d+n}(z)\label{eq:exp1}\\ z^{2d} &= \sum_{n=0}^\infty\frac{(-1)^n(2d)^2_n}{n!(4d+n-1)_n}G_{2d+n}(z)\label{eq:exp2}\,, \end{align} so that the $\bar\psi\psi$ OPE contains scalars of dimensions $2d+2n$, $n\geq0$, and pseudoscalars of dimensions $2d+2n+1$, $n\geq 0$. \eqref{eq:gf2} in the crossed channel becomes \begin{equation} \langle\psi(x_1)\psi(x_2)\bar\psi(x_3)\bar\psi(x_4)\rangle = \frac{1}{|x_{12}|^{2d}|x_{34}|^{2d}}\left[z^{2d}+\left( \frac{z}{1-z} \right)^{2d}\right] \end{equation} with conformal block expansion \begin{equation} z^{2d}+\left( \frac{z}{1-z} \right)^{2d} = \sum_{m=0}^\infty\frac{2(2d)^2_{2m}}{(2m)!(4d+2m-1)_{2m}}G_{2d+2m}(z)\,, \end{equation} so that the spin-1 sector consists of dimensions $2d+2m$, $m\geq0$.
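The expansions \eqref{eq:exp1} and \eqref{eq:exp2} can be verified numerically; the sketch below (our own) sums the blocks with the quoted coefficients and compares against $(z/(1-z))^{2d}$ and $z^{2d}$:

```python
# Numerical check of the generalized-free-field block expansions (our own sketch).
def block(delta, z, n_terms=300):
    """1d conformal block z^delta 2F1(delta,delta;2delta;z) by power series."""
    g, coeff = 0.0, 1.0
    for n in range(n_terms):
        g += coeff * z ** (delta + n)
        coeff *= (delta + n) ** 2 / ((2 * delta + n) * (n + 1))
    return g

def pochhammer(x, n):
    out = 1.0
    for k in range(n):
        out *= x + k
    return out

def gff_sum(d, z, sign, n_ops=60):
    """sum_n sign^n (2d)_n^2 / (n! (4d+n-1)_n) G_{2d+n}(z)."""
    total, fact = 0.0, 1.0
    for n in range(n_ops):
        if n > 0:
            fact *= n
        coeff = pochhammer(2 * d, n) ** 2 / (fact * pochhammer(4 * d + n - 1, n))
        total += (sign ** n) * coeff * block(2 * d + n, z)
    return total

d, z = 0.75, 0.3
assert abs(gff_sum(d, z, +1) - (z / (1 - z)) ** (2 * d)) < 1e-8
assert abs(gff_sum(d, z, -1) - z ** (2 * d)) < 1e-8
```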
Unless we put constraints on the spin-1 spectrum, any solution of \eqref{eq:beq1} can be extended to a solution of both \eqref{eq:beq1} and \eqref{eq:beq2}. Indeed, let \begin{equation} \sum_{i}|\lambda_i|^2\left[z^{-2d}G_{\Delta_{i}}(z)-(1-z)^{-2d}G_{\Delta_{i}}(1-z)\right] = 0 \end{equation} be a solution of the first equation and take the $\Delta>0$ spectrum in the even and odd scalar sectors identical, with $|c_{\bar\psi\psi\mathcal{O}^+_i}|^2 = |c_{\bar\psi\psi\mathcal{O}^-_i}|^2 = |\lambda_i|^2/2$. \eqref{eq:beq1} is automatically satisfied and in \eqref{eq:beq2}, the nonidentity scalars and pseudoscalars cancel out. Moreover, \eqref{eq:exp1} guarantees that we can use a tower of spin-1 operators of dimensions $2d + n$, $n\geq0$ to cancel the contribution of the identity. Let us comment on the domain of applicability of our bootstrap equations. \eqref{eq:beq1} by itself does not know in any way about the bulk theory and merely expresses the constraints of crossing and unitarity for a 1d CFT. It is \eqref{eq:beq2} together with the assumption that the $\psi\psi$ OPE starts with the displacement that identifies the line as a codimension two object. Indeed, the structure of the OPE suggests a displacement operator which carries charge 1 under a transverse $SO(2)$ rotation symmetry, and a bosonic operator $\psi$ of half-integral rotation quantum number\footnote{Of course, the bounds derived from the bootstrap equations may apply to other situations which include operators with similar quantum numbers. For example, a codimension 3 defect has an $SO(3)$ rotation symmetry, and may have an operator of spin $1/2$ under that $SO(3)$. One could focus on a single component $\psi$ of that doublet and on the $SO(2)$ Cartan subgroup of the full rotation group, using our analysis for a sub-optimal bound.}. \subsection{Constraints from the first crossing equation} As a warm-up, let us consider first the constraints that follow from the first bootstrap equation \reef{eq:beq1}. 
This kind of equation has been previously analyzed in the literature, though not in one dimension. The major difference is that here there are no spin-$L$ representations other than $L=0$. Operators are labeled only by their conformal dimensions, along with discrete quantum numbers. The method for deriving constraints from equation \reef{eq:beq1} has been explained in detail elsewhere, so here we will content ourselves with a brief summary. We first expand it in derivatives around $z=1/2$ up to some finite order. By setting each individual Taylor coefficient to zero, we are left with a system of linear equations with constraints, namely that the OPE coefficients should be positive and that at least one of them (that of the identity operator) is strictly non-zero. This is a linear programming problem, which can be solved with standard algorithms, such as the simplex method. Alternatively, we can try to disprove that such an equation can hold, by finding a linear functional which is non-negative on all possible vectors (namely, for any $\Delta$). We will follow the former route, using our own numerical implementation of the simplex algorithm. This has the advantage that the output is automatically a solution to the crossing symmetry constraints -- a spectrum, made up of operator dimensions and OPE coefficients, which solve the crossing equations -- as opposed to the linear functional method, where a spectrum has to be extracted by examining the zeros of the functional \cite{ElShowk:2012hu}. Our approach is to fix $d$, the dimension of $\psi, \bar \psi$ and ask for the maximum allowed dimension of the first scalar appearing in the $\psi \bar \psi$ OPE. We do this by excluding from the sum rule \reef{eq:beq1} all vectors with dimension below some value $\Delta_s$ (apart from the identity). We then increase this gap until no solution can be found. The result is shown in figure \ref{fig:1dBound1Eqn}. 
\begin{figure}[htb] \begin{centering} \begin{tabular}{cc} \includegraphics[width=.47\textwidth]{1d_bound_dimension.pdf} & \includegraphics[width=.47\textwidth]{1d_bound_OPE_quad.pdf} \end{tabular} \caption{One-dimensional bounds derived from \eqref{eq:beq1}. In red the curves corresponding to the generalized free fermion solution. Left: bound on scalar dimension. Right: OPE coefficient of the leading scalar, in the solution to crossing corresponding to the dots on the top plot.} \label{fig:1dBound1Eqn} \end{centering} \end{figure} The result is a relatively boring straight line, which seems to very nearly coincide with the curve corresponding to the 1d generalized free fermion. This amounts to the four-point function \begin{equation} \langle\psi(x_1)\psi(x_2)\psi(x_3)\psi(x_4)\rangle = \frac{1}{|x_{12}|^{2d}|x_{34}|^{2d}} \left[1+\left(\frac z{1-z}\right)^{2d}-z^{2d}\right] \end{equation} with conformal block expansion \begin{equation} 1+\left(\frac z{1-z}\right)^{2d}-z^{2d} = 1 + \sum_{j=0}^{\infty}\frac{2(2d)^{2}_{2j+1}}{(2j+1)!(4d+2j)_{2j+1}}G_{2d+2j+1}(z)\,, \end{equation} so that the minimal exchanged primary above the identity has $\Delta_s = 2d + 1$. We can find solutions to crossing at any point below our bound curve. In the extremal case where we sit directly on the bound itself, the solution is generically unique \cite{ElShowk:2012hu}. In this case we expect this solution to closely match the generalized free fermion. On the same figure on the right-hand side we compare the OPE coefficient of the leading scalar obtained with the bootstrap with that of the generalized free fermion -- namely $|c_{\psi \bar \psi \mathcal O}|^2=2d$. Overall the agreement is quite good for small $d$ and gradually gets worse as $d$ increases. As we increase the accuracy in our numerical procedure, by augmenting the total number of derivatives (here we have used 50), the agreement gets better and better for larger and larger values of $d$. 
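Both the crossing symmetry of this correlator and its block expansion, including the value $|c_{\psi \bar \psi \mathcal O}|^2=2d$ for the leading scalar, are easy to confirm numerically (our own sketch):

```python
# Checks on the generalized free fermion four-point function (our own sketch).
from math import factorial

def block(delta, z, n_terms=300):
    g, coeff = 0.0, 1.0
    for n in range(n_terms):
        g += coeff * z ** (delta + n)
        coeff *= (delta + n) ** 2 / ((2 * delta + n) * (n + 1))
    return g

def pochhammer(x, n):
    out = 1.0
    for k in range(n):
        out *= x + k
    return out

def g_fermion(d, z):
    return 1 + (z / (1 - z)) ** (2 * d) - z ** (2 * d)

d, z = 0.6, 0.35
# crossing: z^{-2d} g(z) = (1-z)^{-2d} g(1-z)
assert abs(z ** (-2 * d) * g_fermion(d, z)
           - (1 - z) ** (-2 * d) * g_fermion(d, 1 - z)) < 1e-12

# block expansion with only odd-level operators, Delta = 2d + 2j + 1
total = sum(2 * pochhammer(2 * d, n) ** 2
            / (factorial(n) * pochhammer(4 * d + n - 1, n)) * block(2 * d + n, z)
            for n in range(1, 80, 2))
assert abs(total - (g_fermion(d, z) - 1)) < 1e-8

# leading OPE coefficient squared equals 2d
assert abs(2 * pochhammer(2 * d, 1) ** 2 / pochhammer(4 * d, 1) - 2 * d) < 1e-12
```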
As for the twist defect CFT, it lies well inside the bound, so we cannot reach it through bounds alone, at least not with a single equation. This is unlike the situation described in \cite{El-Showk:2013nia}, where the Ising model lies on an interesting point (a kink) in the dimension bound. Here we are not as lucky and must work a bit harder to obtain an interesting result. \subsection{Constraints from both crossing equations} We now turn to deriving constraints by using both crossing equations. We use the conformal dimension to label operators, and define \begin{eqnarray} F_\Delta(z)&=&G_{\Delta}(z)- \left( \frac{z}{1-z} \right)^{2d}G_{\Delta}(1-z),\\ S_\Delta(z)&=&G_{\Delta}(z),\\ T_\Delta(z)&=& -\left(\frac {z}{1-z}\right)^{2d}G_{\Delta}(1-z). \end{eqnarray} With this notation, we can write \eqref{eq:beq1} and \eqref{eq:beq2} in vector form: \begin{eqnarray} \sum_{\mathcal {O}^+}a^+_{\Delta} \left(\begin{tabular}{c} $F_\Delta(z)$ \\ $S_\Delta(z)$ \end{tabular}\right) + \sum_{\mathcal {O}^-} a^-_{\Delta} \left(\begin{tabular}{c} $F_\Delta(z)$ \\ $-S_\Delta(z)$ \end{tabular}\right) + \sum_{\mathcal {S}} b_{\Delta} \left(\begin{tabular}{c} $0$ \\ $T_\Delta(z)$ \end{tabular}\right)=0, \label{2eqnsumrule} \end{eqnarray} where all coefficients appearing in the above are explicitly positive. The procedure now is the same as in the single equation case. We evaluate the sum rule and its derivatives at $z=1/2$ (up to order 40) and attempt to find a solution imposing various constraints. Since the spectrum is now split into three different sectors, we have more freedom in setting up the problem. Because we are looking for the twist defect, we are interested in solutions to crossing where the first spin-1 operator is the displacement, which has dimension 2. Therefore we shall impose a gap by disallowing any spin-1 operators with dimension below 2 in the sum rule above.
Figure \ref{fig:2eqnTemp} shows the bound derived by scanning over the dimension $d$ of $\psi$ while imposing the same gap on the dimensions of the parity odd and parity even scalars. \begin{figure}[htb] \centering \includegraphics[width=.8\textwidth]{2eqnBound.pdf} \caption{Single equation bound in red and two equation bound in black. In the latter, the leading scalar is parity odd, up to about $d=1$, where the parity even and odd scalars have identical spectra.} \label{fig:2eqnTemp} \end{figure} The bound is clearly more restrictive up to some value of $d$, beyond which it returns to the original single equation result. This can be understood by recalling that a solution of the first equation can be extended to a solution of both as long as the gap imposed in the spin-1 sector does not exceed $2d$. We can see this directly by examining the spectra of the solutions to crossing living at the boundary of the bound. In figure \ref{fig:2EqnSpectra} we show the odd and even scalar spectra corresponding to these solutions. It is clear that for high enough $d$ the spectra become identical in these two channels, as we expect. A detailed examination of the OPE coefficients shows that this occurs precisely at $d=1$. \begin{figure}[htb] \begin{centering} \begin{tabular}{cc} \includegraphics[width=.47\textwidth]{2EqnSpectraDimensions.pdf} & \includegraphics[width=.47\textwidth]{2EqnSpectraOPE.pdf} \end{tabular} \caption{Spectra corresponding to the extremal solutions in figure \ref{fig:2eqnTemp}. In black (red) the parity even (odd) scalars. On the left the operator dimensions, and their OPE coefficients on the right. The correspondence between the two is reversed: larger OPE coefficients correspond to lower-dimension operators. } \label{fig:2EqnSpectra} \end{centering} \end{figure} Clearly, this approach is unfortunately still not sufficient to find the twist defect.
From table \ref{tab:MCWF} we expect there to be a parity even scalar of dimension about 2.27 when $d\simeq 0.9187$, which is allowed by, but does not saturate, our bound. Hence we consider a different strategy. Since we know that the defect must contain a spin-1 operator of dimension 2 in its spectrum, we shall impose this directly on the sum rule. More concretely, we fix the OPE coefficient of the $D$ operator in the sum rule to some value, and we determine the maximum gap in the parity even sector consistent with crossing symmetry. We can do this for various values of $d$, but we will be interested in the experimentally relevant $d=0.9187$. \begin{figure}[htb] \centering \includegraphics[width=.7\textwidth]{2eqn_ope_bound_d09187.pdf} \caption{One-dimensional bound, using two equations.} \label{fig:2eqnOPEBound} \end{figure} Figure \ref{fig:2eqnOPEBound} shows the resulting bound. We see that the bound is saturated by a solution to crossing including a parity even scalar of dimension $2.27$ for a squared OPE coefficient of about $1.8$. Notice that this is consistent with the results of the $\epsilon$-expansion, which indicate that the squared OPE coefficient should be $\simeq 1.9$. We can determine the spectrum of this solution, which is shown in figure \ref{fig:spectra}. Remarkably, we find a parity odd scalar of dimension $\simeq 2.9$ in the solution, signaling that this is indeed the twist defect. \begin{figure}[htb] \begin{centering} \begin{tabular}{cc} \includegraphics[width=.45\textwidth]{2eqn_EvenScalarSpec.pdf} & \includegraphics[width=.45\textwidth]{2eqn_OddScalarSpec.pdf} \end{tabular} \includegraphics[width=.45\textwidth]{2eqn_Spin1Spec.pdf} \caption{Spectra corresponding to the extremal solutions to crossing symmetry, i.e. the unique solutions at the boundary of our bounds.} \label{fig:spectra} \end{centering} \end{figure} We summarize our spectrum results in table \ref{tab:Bootstrap}.
Besides the spectrum data presented in the table, the bootstrap also predicts other operators and their OPE coefficients. The accuracy of these depends on the number of derivatives. We can estimate the error by repeating the calculations at different numbers of derivatives and seeing how the results change. Doing this we further predict the existence of the operators shown in table \ref{tab:Bootstrap2}, with an estimated error of $5\%$ or less. \begin{center} \begin{table}\centering \begin{tabular}{c | c | c | c} \hline quantity & Bootstrap & $\epsilon$-expansion & Monte Carlo\\\hline $\Delta_{\psi}$ & {\em 0.9187} & 0.917 & 0.9187(6)\\ $\Delta_D$ & {\em 2} & 2 & 2\\ $\Delta_s$ & {\em 2.27} & 2.167 & 2.27(1)\\ $\Delta_{p^o}$ & 2.92 & 2.833 & 2.9(2)\\ $c_{\psi\psi s}$ & 0.95 & 0.955 & ???\\ $c_{\psi\psi\bar{D}}$ & 1.345 & 1.382 & ???\\ $c_{\psi \bar{\psi} p^o}$ & 0.988 & 0.987 & ??? \end{tabular} \caption{A comparison of lattice data, the Wilson-Fisher fixed point at one loop, and bootstrap calculations. We have italicized numbers which are used as inputs to the bootstrap method.} \label{tab:Bootstrap} \end{table} \end{center} To summarize, we have used as input the dimension of $\psi$; the dimension of the first even scalar $s$; and the existence of a spin-1 operator $D$ of dimension 2. Using this information, and assuming the defect spectrum lies on the bound of figure \ref{fig:2eqnOPEBound}, we have been able to determine the OPE coefficient of $D$ in the $\psi \psi$ operator product. Further, we have checked the existence of an odd scalar of dimension $\simeq 2.9$ and its OPE coefficient, and predict a further six operator dimensions and OPE coefficients. We could have gone further by doing more intensive calculations, but we are limited by the relatively large error in the dimension of $s$ determined from the lattice.
As it stands, our confidence that we are finding the correct solution to crossing hinges on obtaining the correct operator dimension for $p^o$ and an OPE coefficient for the displacement operator consistent with the $\epsilon$-expansion. It would be very interesting to further test this by extending the $\epsilon$-expansion calculations or doing further lattice simulations. \begin{center} \begin{table}\centering \begin{tabular}{c | c| c} \hline Type & Dimension & OPE$^2$\\\hline $0^+$ & 4.12 & 0.66 \\ $0^+$ & 6.29 & 0.26\\ $0^-$ & 5.11 & 0.45\\ $0^-$ & 7.42 & 0.15\\ 1 & 3.98 & 0.99\\ 1 & 6.20 & 0.38 \end{tabular} \caption{Spectrum predictions from the bootstrap method.} \label{tab:Bootstrap2} \end{table} \end{center} \section{Conclusions}\label{sec:conclusions} We have offered new points of view on the twist line defect in the 3d Ising model -- the $\epsilon$-expansion and the conformal bootstrap of the defect four-point functions. While the $\epsilon$-expansion at one loop leads to a surprisingly good agreement with the Monte Carlo results, the identification of the defect spectrum from conformal bootstrap is not as straightforward as in the case of the bulk theory \cite{ElShowk:2012ht}. In spite of this, we believe we have successfully found the 1d defect theory by forcing the inclusion of the displacement operator in the spectrum, at the cost of using more data, namely the dimensions of the leading parity even scalar $s$ and of $\psi$ as determined from the lattice. The pay-off is that we determine a number of other quantities, namely operator dimensions and their OPE coefficients, which match well with results of the $\epsilon$-expansion. It is quite interesting that the inclusion of the second equation in the bootstrap set-up results in significant improvement of the bound, despite the lack of positivity in the spin-0 channel. Several extensions of our work offer themselves. 
The $O(N)$ models allow twist line defects for arbitrary $R\in O(N)$, and it should be straightforward to generalize the $\epsilon$-expansion calculation at least in the case when $R=-I$. Although our bootstrap bounds apply to this defect for any $N$ by taking $\psi$ to be a fixed component of a spin-1/2 $O(N)$ vector, it may be worth repeating the analysis for $\langle\bar\psi_i\psi_j\bar\psi_k\psi_l\rangle$, $\langle\bar\psi_i\psi_j\psi_k\bar\psi_l\rangle$ while separating the exchanged primaries according to their $O(N)$ representations, as in \cite{Kos2013}. Large-$N$ computations for the defect should also be possible. Note that $O(N)$ can also be interpreted as the spacetime symmetry of the transverse directions, so that conformal bootstrap on the line can be used to constrain higher-dimensional CFTs. It may also be interesting to see how the bootstrap bound evolves for the $2-\epsilon$ dimensional defect in the Wilson-Fisher CFT. 1d CFTs can also serve as simple test cases for analytical understanding of the conformal bootstrap. In particular, the coincidence of the single equation bound with the generalized free fermion begs for an analytical explanation. Note that the techniques of \cite{Fitzpatrick:2012yx} and \cite{Komargodski:2012ek} are not directly applicable since they require the presence of two cross-ratios. Also for this reason, the study of crossing of the bulk two-point function in the presence of a defect may be a fruitful direction of research. \acknowledgments We are grateful for useful discussions with C. Beem, D. Simmons-Duffin, M. Meineri, R. Pellegrini, D. Poland, S. Rychkov, S. El-Showk and A. Vichi. The research of D.G. was supported by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Economic Development and Innovation. M.P. is supported by DOE grant DESC0010010-Task A. M.P. 
thanks CERN for hospitality while this work was being completed.
\section{Introduction} \label{intro} The cosmological constant, $\Lambda$, was introduced into the theory of general relativity (GR) \cite{Einstein1917As} guided by the idea that the Universe should be static \cite{Tawfik:2011mw,Tawfik:2008cd}. This model was subsequently refuted, and accordingly the $\Lambda$-term was dropped from the Einstein field equation (EFE), especially after the confirmation of the celebrated Hubble observations in 1929 \cite{Hubble:1929ig}, which also verified the consequences of the Friedmann solutions of the EFE with vanishing $\Lambda$ \cite{Friedman:1922kd}. Nearly immediately after the publication of GR, a matter-free solution of the EFE with a finite $\Lambda$-term was obtained by de Sitter \cite{deSitter:1917zz}. Later on, when the Einstein {\it static} Universe was found to be unstable against small perturbations \cite{Mulryne:2005ef, Wu:2009ah, delCampo:2011mq}, it was argued that the inclusion of the $\Lambda$-term remarkably contributes to the stability and simultaneously supports the expansion of the Universe; in particular, the initial singularity of the Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) models could be improved as well \cite{Weinberg1972AA,Misner1984B}. Furthermore, the observations of type-Ia high-redshift supernovae in the late 1990s \cite{Riess:1998cb, Perlmutter:1998np} indicated that the expanding Universe is also accelerating, pointing to a small positive $\Lambda$, which contributes a negative cosmic pressure \cite{Garriga:1999bf,Martel:1997vi}. In this regard, we recall that the cosmological constant can be related to the vacuum energy density, $\rho$, as $\Lambda=8\pi G \rho/c^2$, where $c$ is the speed of light in vacuum and $G$ is the gravitational constant.
In 2018, the PLANCK observations provided a precise estimate of $\Lambda$, namely $\Lambda_{\mbox{Planck}} \simeq 10^{-47}$GeV$^4/\hbar^3 c^3$ \cite{Aghanim:2018eyx}. When comparing this tiny value with the theoretical estimate based on quantum field theory in a weakly- or non-gravitating vacuum, $\Lambda_{\mbox{QFT}} \simeq 10^{74}$GeV$^4/\hbar^3 c^3$, there is a difference of at least $121$ orders of magnitude to be explained \cite{Adler:1995vd,Weinberg:1988cp,Zeldovich:1968ehl}. The disagreement between the two values is one of the greatest mysteries in physics, known as the cosmological constant problem or the {\it catastrophe of non-gravitating vacuum}. Here, we present an attempt to solve this problem. To this end, we utilize the generalized uncertainty principle (GUP), an extended version of the Heisenberg uncertainty principle (HUP) in which a correction term encompassing gravitational impacts is added, so that an alternative quantum gravity approach emerges \cite{Tawfik:2014zca,Tawfik:2015rva}. In short, the present attempt is motivated by the similarity between GUP (incorporating gravitating corrections to the non-gravitating HUP) and the disagreement between the theoretical and observed estimates of $\Lambda$ (manifesting gravitational influences on the vacuum energy density), and by the remarkable impacts of $\Lambda$ on the early and late evolution of the Universe \cite{Tawfik:2019jsa,Tawfik:2011mw,Tawfik:2008cd}. Various quantum gravity approaches present quantum descriptions of different physical phenomena in the presence of gravitational fields; see \cite{Tawfik:2014zca, Tawfik:2015rva}. The GUP offers a quantum mechanical framework for a potential minimal length uncertainty in terms of the Planck scale \cite{Tawfik:2017syy,Tawfik:2016uhs,Dahab:2014tda,Ali:2013ma}.
The minimal length uncertainty, as proposed by GUP, exhibits some features of the UV/IR correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}, which has been formulated from the viewpoint of local quantum field theory. Thus, it is argued that the UV/IR correspondence is relevant to revealing several aspects of short-distance physics, such as the cosmological constant problem \cite{Weinberg:1988cp,Banks:2000fe,Cohen:1998zx,ArkaniHamed:2000eg}. Therefore, a precise estimate of the minimal length uncertainty strongly depends on the proposed upper bound on the GUP parameter, $\beta_0$ \cite{Dahab:2014tda,Tawfik:2013uza}. Various upper bounds on $\beta_0$ have been proposed, for example, by comparing quantum gravity corrections to various quantum phenomena with electroweak \cite{Das:2008kaa, Das:2009hs} and astronomical \cite{Scardigli:2014qka, Feng:2016tyt} observations. Accordingly, $\beta_0$ ranges from $10^{33}$ to $10^{78}$ \cite{Scardigli:2014qka,Feng:2016tyt,Walker:2018muw}. As a preamble to the present study, we present a novel estimate of $\beta_0$ from the binary neutron-star merger, the gravitational wave event GW170817, reported by the Laser Interferometer Gravitational-Wave Observatory (LIGO) and Advanced Virgo collaborations \cite{TheLIGOScientific:2017qsa}. In this regard, there are different efforts to interpret the $\Lambda$ problem based on features of the UV/IR correspondence \cite{Chang:2001bm, Chang:2011jj,Miao:2013wua,Shababi:2017zrt,Vagenas:2019wzd} and on the Liouville theorem in the classical limit \cite{Fityo:2008zz, Chang:2001bm, Wang:2010ct}. With a novel estimate of $\beta_0$ at hand, a solution of the $\Lambda$ problem, the {\it catastrophe of non-gravitating vacuum}, can then be proposed. The present paper is organized as follows. Section \ref{MDRGUP} reviews the basic concepts of the GUP approach with quadratic momentum.
The associated modifications of the energy-momentum dispersion relations in GR and rainbow gravity are also outlined in this section. In section \ref{GUPparameter}, we show that the dimensionless GUP parameter, $\beta_0$, can be constrained by the gravitational wave event GW170817. Section \ref{LamdaProblem} is devoted to calculating the vacuum energy density of states and shows how this contributes to understanding the cosmological constant problem within a quantum gravity approach, the GUP. The final conclusions are outlined in section \ref{conclusion}. \section{Generalized Uncertainty Principle and Modified Dispersion Relations \label{MDRGUP}} Several approaches to quantum gravity, such as GUP, predict a minimal length uncertainty related to the Planck scale \cite{Tawfik:2015rva,Tawfik:2014zca}. Various laboratory experiments have been conducted to examine the GUP effects \cite{Bawaj:2014cda, Marin:2013pga, Pikovski:2011zk, Khodadi:2018kqp}. In this section, we focus the discussion on GUP with a quadratic momentum uncertainty \cite{Tawfik:2015rva,Tawfik:2014zca}. This version of GUP was obtained from black hole physics \cite{Gross:1987kza} and supported by {\it gedanken} experiments \cite{Maggiore:1993zu}; it was formulated by Kempf, Mangano, and Mann (KMM) as \cite{Kempf:1994su} \begin{eqnarray} \Delta x\, \Delta p\geq \frac{\hbar}{2} \left[ 1+ \beta (\Delta p)^2 \right], \label{GUPuncertainty} \end{eqnarray} where $\Delta x$ and $\Delta p$ are the uncertainties in position and momentum, respectively. The GUP parameter can be expressed as $\beta = \beta_0 (\ell_p/\hbar)^2 = \beta_0/ (M_p c)^2$, where $\beta_0$ is a dimensionless parameter, $\ell_p=1.977 \times 10^{-16}~$GeV$^{-1}$ is the Planck length, and $M_p= 1.22 \times 10^{19}~$GeV$/c^2$ is the Planck mass.
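The inequality above already implies a smallest attainable position uncertainty: rewriting Eq. (\ref{GUPuncertainty}) as $\Delta x \geq (\hbar/2)\left(1/\Delta p + \beta\,\Delta p\right)$, the right-hand side is minimized at $\Delta p = 1/\sqrt{\beta}$. A minimal numerical sketch (ours, in illustrative units $\hbar=\beta=1$):

```python
hbar, beta = 1.0, 1.0   # illustrative Planck-like units

def dx_bound(dp):
    # lower bound on Delta x from Delta x * Delta p >= (hbar/2)(1 + beta (Delta p)^2)
    return 0.5 * hbar * (1.0 / dp + beta * dp)

grid = [1e-3 * 1.02**k for k in range(800)]   # log-spaced Delta p values
dx_min = min(dx_bound(dp) for dp in grid)
print(dx_min)   # approaches hbar*sqrt(beta) = 1, attained at Delta p = 1/sqrt(beta)
```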
Equation (\ref{GUPuncertainty}) implies the existence of a minimum length uncertainty related to the Planck scale, $\Delta x_{\mbox{min}} \approx \hbar \sqrt{\beta} =\ell_p \sqrt{\beta_0}$. It should be noticed that the minimum length uncertainty exhibits features of the UV/IR correspondence \cite{Maldacena:1997re,Gubser:1998bc,Witten:1998qj}: at large momenta, $\Delta x$ grows with $\Delta p$, so that large $\Delta p$ (UV) corresponds to large $\Delta x$ (IR). Equation (\ref{GUPuncertainty}) follows from the noncommutative relation $[\hat{x}_i,\; \hat{p}_j] = \delta_{ij} i \hbar [1+\beta p^2]$, where the position and momentum operators can be defined as \begin{eqnarray} \hat{x}_i = \hat{x}_{0i}, \quad \quad \hat{p}_j= \hat{p}_{0j} (1+\beta p^2), \end{eqnarray} with $\hat{x}_{0i}$ and $\hat{p}_{0j}$ the corresponding operators satisfying the canonical commutation relations $[\hat{x}_{0i},\; \hat{p}_{0j}]=\delta_{ij} i \hbar$, and $p^2= g_{ij} p^{0i} \; p^{0j}$. We can now construct the modified dispersion relation (MDR) due to quadratic GUP. We start with the background metric in a GR gravitational spacetime \begin{eqnarray} ds^2 =g_{\mu \nu} dx^\mu \, dx^\nu = g_{00} c^2 dt^2 + g_{ij} dx^i\, dx^j, \end{eqnarray} where $g_{\mu \nu}$ is the Minkowski spacetime metric tensor with signature $(-,+,+,+)$. Accordingly, the modified four-momentum squared is given, up to $\mathcal{O}(\beta)$, by \begin{eqnarray} p_\mu p^\mu = g_{\mu \mu} p^\mu p^\mu &=& g_{00} (p^0)^2 + g_{ij} p^{0i} p^{0j} (1+\beta p^2)^2 \nonumber \\ &=& -(p^0)^2 + p^2 + 2 \beta\; p^2 \,\cdot\, p^2. \label{modifyMomentum} \end{eqnarray} Comparing this with the conventional dispersion relation, $p_\mu p^\mu = - m^2c^2$, the time component of the momentum can then be written as \begin{eqnarray} (p^0)^2 &=& m^2c^2 + p^2 (1+2\beta p^2). \end{eqnarray} The energy of the particle, $\omega$, can be defined as $\omega/c = - \zeta_\mu p^\mu = - g_{\mu \nu} \zeta^\mu p^\nu$, where the Killing vector is $\zeta^\mu = (1,0,0,0)$.
Therefore, the energy of the particle can be expressed as $\omega=-g_{00} c (p^0)=c (p^0)$, and the modified dispersion relation in GR gravity reads \begin{eqnarray} \omega^2 = m^2 c^4 + p^2 c^2 (1+ 2\beta p^2). \qquad\qquad\qquad\mbox{GR Gravity} \label{MDRrel} \end{eqnarray} For $\beta \rightarrow 0$, the standard dispersion relation is recovered. Rainbow gravity generalizes the MDR of doubly special relativity to curved spacetime \cite{magueijo2004gravity}, where the geometry of spacetime is probed by a test particle with energy $\omega$ \cite{Magueijo:2001cr, Magueijo:2002am}, \begin{eqnarray} \omega^2 \; f_1 \left(\frac{\omega}{\omega_p}\right)^2 - (pc)^2 f_2 \left(\frac{\omega}{\omega_p}\right)^2 = \left(mc^2\right)^2, \end{eqnarray} where $\omega_p$ is the Planck energy and $f_1 (\omega/\omega_p)$ and $f_2 (\omega/\omega_p)$ are the model-dependent rainbow functions. The rainbow functions can be chosen as \cite{AmelinoCamelia:1996pj, AmelinoCamelia:1997gz} \begin{eqnarray} f_1 (\omega/\omega_p) &=& 1, \quad \quad f_2 (\omega/\omega_p) = \sqrt{1-\eta (\omega/\omega_p)^n}, \label{Rainbowfuncs} \end{eqnarray} where $\eta$ and $n$ are free positive parameters. It was argued from the logarithmic corrections of the black hole entropy \cite{Tawfik:2015kga} that the integer $n$ is limited to $n=1,2$ \cite{Gangopadhyay:2016rpl}. Therefore, it is reasonable to set $n=2$. Thus, the MDR for rainbow gravity with GUP can be written as \begin{eqnarray} \omega^2 = \frac{(mc^2)^2 + p^2 c^2 (1+2\beta p^2)}{1+ \eta\; \Big[\frac{pc}{\omega_p}\Big]^2 (1+2\beta p^2)}.\qquad\qquad\qquad\mbox{Rainbow Gravity} \label{MDRrain} \end{eqnarray} Again, as $\beta \rightarrow 0$, Eq. (\ref{MDRrain}) reduces to the standard dispersion relation. We have thus constructed two different MDRs for quadratic GUP, namely Eqs. (\ref{MDRrel}) and (\ref{MDRrain}) in GR and rainbow gravity, respectively.
Bounds on the GUP parameter from GW170817 are outlined in the section that follows. \section{Bounds on GUP parameter from GW170817} \label{GUPparameter} Rather than analyzing a possible violation of Lorentz invariance \cite{Tawfik:2012hz}, we investigate the speed of the graviton inferred from the GW170817 event. To this end, we use the MDRs obtained from the quadratic GUP approach, section \ref{MDRGUP}. Thus, deriving an upper bound on the dimensionless GUP parameter $\beta_0$ for the given bounds on the mass and energy of the graviton, $m_g\lesssim 4.4 \times 10^{-22}~$eV$/c^2$ and $\omega = 8.5 \times 10^{-13}~$eV, respectively, plays an essential role. Assuming that the gravitational waves propagate as free waves, we can determine the speed of the mediator, the graviton, from the group velocity of the accompanying wavefront, i.e. $v_g = \partial \omega/\partial p$, where $\omega$ and $p$ are the energy and momentum of the graviton, respectively \cite{Mirshekari:2011yq}. The idea is that the group velocity of the graviton can be deduced from the MDRs, Eqs. (\ref{MDRrel}) for GR gravity and (\ref{MDRrain}) for rainbow gravity, in presence and then in absence of the GUP impacts discussed in section \ref{MDRGUP}. Accordingly, Eq. (\ref{MDRrel}) implies that the group velocity reads \begin{eqnarray} v_g = \frac{\partial \omega}{\partial p} = \frac{pc^2}{\omega} \left(1+ 4 \beta p^2\right). \label{vgMDR1} \end{eqnarray} The unmodified momentum $p$ can be expressed in terms of the modified parameters up to $\mathcal{O} (\beta)$ as $p=a+b \beta$, where $a$ and $b$ are arbitrary parameters. By substituting this expression into Eq. (\ref{MDRrel}), we find, at leading order, $p^2= (\omega_g/c)^2 - m^2 c^2 $. Thus, Eq.
(\ref{vgMDR1}) can be rewritten as \begin{eqnarray} v_g = c \;\Big\{ \Big[1-\Big( \frac{mc^2}{\omega_g}\Big)^2 \Big]^{1/2} +4\beta \frac{\omega_g^2}{c^2} \; \Big[1-\Big( \frac{mc^2}{\omega_g}\Big)^2 \Big]^{3/2} \Big\}, \end{eqnarray} where $\omega_g$ is the energy of the graviton. It is obvious that for $\beta \rightarrow 0 $, i.e. in absence of GUP impacts, the group velocity reads \begin{eqnarray} v_g = c \Big[1- \frac{1}{2}\Big( \frac{mc^2}{\omega_g}\Big)^2 \Big]. \end{eqnarray} Then, the difference between the speed of the photon (light) and that of the graviton without GUP impacts is given as \begin{eqnarray} \Big| \delta v\Big| = \Big| c-v_g\Big| = c \Big| \frac{1}{2}\Big( \frac{mc^2}{\omega_g}\Big)^2 \Big| \lesssim 1.34 \times 10^{-19}\;c. \label{vDr} \end{eqnarray} Although the difference is small, in the era of gravitational-wave astronomy we are technically able to measure even such a tiny difference. In light of this, we can use the results associated with the GW170817 event, such as the graviton velocity, to set an upper bound on the GUP parameter, $\beta_0$. For a massless graviton, the difference between the speed of photons (light) and that of gravitons in presence of the GUP impacts reads \begin{eqnarray} \Big|\delta v_{\mbox{GUP}}\Big| &=&\Big| 4\beta \frac{\omega^2}{c}\Big| = 4\beta_0 \Big(\frac{\omega}{ M_p c^2}\Big)^2 c \lesssim 1.95 \times 10 ^{-80} \beta_0 \;c. \label{vMDR} \end{eqnarray} Thus, the upper bound on the dimensionless parameter, $\beta_0$, of the quadratic GUP can be deduced from Eqs. (\ref{vDr}) and (\ref{vMDR}), \begin{eqnarray} \beta_0 \lesssim 8.89 \times 10^{60}. \label{MDRbeta1} \end{eqnarray} The group velocity of the graviton due to the MDR of rainbow gravity with the quadratic GUP approach, Eq.
(\ref{MDRrain}), can be expressed as \begin{eqnarray} v_g = \frac{\partial \omega}{\partial p} = \Big(\frac{pc^2}{\omega_g}\Big) \; \frac{\Big( 1- \frac{\eta}{\omega_p^2} (mc^2)^2 \Big) \Big(1+4\beta p^2 \Big)}{\left[1+ \eta \Big(\frac{cp}{\omega_p}\Big)^2 (1+ 2 \beta p^2) \right]^2}. \end{eqnarray} Similarly, the unmodified momentum can be expressed in terms of the GUP parameter up to $\mathcal{O}(\beta)$, $p=a_0 + a_1 \beta$, with $a_0$ and $a_1$ arbitrary parameters. For a massless graviton, we get \begin{eqnarray} c p = \omega_g \Big[ \Big(1 - \eta \Big(\frac{\omega_g}{\omega_p}\Big)^2 \Big)^{-1/2} - \beta \frac{\omega_g^2}{c^2} \Big(1 -\eta \Big(\frac{\omega_g}{\omega_p}\Big)^2 \Big)^{-3/2} \Big]. \label{pcRainbow} \end{eqnarray} The investigation of the speed of the graviton from the GW150914 observations \cite{Abbott:2016blz} constrains the rainbow gravity parameter as $ \eta (\omega_g/\omega_p)^2\leq 3.3\times 10^{-21}$ \cite{Gwak:2016wmg}. Accordingly, Eq. (\ref{pcRainbow}) reduces to $c p=\omega_g (1- \beta \omega_g^2/c^2)$, and the group velocity of the massless graviton becomes \begin{eqnarray} v_g = c \Big[ 1 - 5\frac{\beta \omega^2}{c^2} + \mathcal{O}(\beta^2) \Big]. \end{eqnarray} Then, the difference between the speed of photons and that of gravitons reads \begin{eqnarray} \Big|\delta v_{\mbox{GUP}}\Big| &=&\Big| 5 \beta \frac{\omega^2}{c} \Big| \lesssim 2.43 \times 10^{-80} \beta_0 \, c. \label{vgRainbow} \end{eqnarray} By comparing Eqs. (\ref{vgRainbow}) and (\ref{vDr}), the upper bound on the GUP parameter $\beta_0$ can be estimated as \begin{eqnarray} \beta_0 \lesssim 5.5 \times 10^{60}. \label{Rainbowbeta1} \end{eqnarray} It is obvious that both results, Eqs. (\ref{MDRbeta1}) and (\ref{Rainbowbeta1}), are very close to each other and of the same order, $\beta_0 \lesssim \mathcal{O}(10^{60})$.
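The arithmetic behind these bounds can be reproduced in a few lines of Python (our sketch; inputs are the graviton mass and energy bounds quoted above, and the resulting ratio lands at the same order of magnitude as the quoted bounds):

```python
# GW170817 inputs quoted in the text (all energies in eV)
m_g   = 4.4e-22    # graviton mass bound, as m_g c^2 in eV
omega = 8.5e-13    # graviton energy, eV
M_p   = 1.22e28    # Planck mass, as M_p c^2 in eV

# massive-graviton deficit without GUP, Eq. (vDr): |dv|/c = (1/2)(m_g c^2 / omega)^2
dv_mass = 0.5 * (m_g / omega)**2

# GUP-induced deficit per unit beta_0, Eq. (vMDR): |dv_GUP|/c = 4 beta_0 (omega / M_p c^2)^2
dv_gup_coeff = 4.0 * (omega / M_p)**2

# equating the two deficits gives the order of magnitude of the bound on beta_0
beta0_max = dv_mass / dv_gup_coeff
print(dv_mass, dv_gup_coeff, beta0_max)   # ~1.34e-19, ~1.94e-80, ~7e60
```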
The improved upper bound on $\beta_0$ is very similar to the ones reported in refs. \cite{Scardigli:2014qka, Feng:2016tyt}, which also rely on astronomical observations. The present results are based on the merger of spinning neutron stars. Thus, it is believed that the more accurate the observations, the more precise the bound on $\beta_0$ shall be. Having set an upper bound on the GUP parameter and counting on the aforementioned similarities between GUP and the catastrophe of non-gravitating vacuum, we can now propose a possible solution of the cosmological constant problem. \section{A Possible Solution of the Cosmological Constant Problem} \label{LamdaProblem} The cosmological constant can be given as $\Lambda = 3 H_0^2 \Omega_\Lambda$, where $H_0$ and $\Omega_\Lambda$ are the Hubble parameter and the dark energy density, respectively \cite{Carroll:2000fy}. The origin of the catastrophe of non-gravitating vacuum can be understood from the huge mismatch between the theoretically calculated value of $\Lambda$ and the one inferred from cosmological observations \cite{Sahni:2002kh}. From the most updated PLANCK observations, $\Omega_\Lambda = 0.6889 \pm 0.0056$ and $H_0 = 67.66 \pm 0.42~$km $\cdot$ s$^{-1}$ $\cdot$ Mpc$^{-1}$ \cite{Aghanim:2018eyx}. Then, the vacuum energy density reads \begin{eqnarray} \frac{c^2}{8 \pi G} \Lambda &=& \left(\frac{3 H_0^2 c^2}{8\pi G}\right) \Omega_\Lambda = \frac{3\hbar c}{8\pi \ell_p^2 \ell_0^2} \Omega_\Lambda, \label{VacuEnergy} \end{eqnarray} where the Hubble length $\ell_0= c/H_0 \simeq 1.368 \times 10^{23}~$km \cite{Aghanim:2018eyx}. Therefore, one can use Eq. (\ref{VacuEnergy}) to estimate the vacuum energy density to be of order $10^{-47}~$GeV$^4/(\hbar^3c^3)$. In quantum field theory, the cosmological constant is calculated by summing the vacuum fluctuation energies over all momentum states \cite{Carroll:2000fy}.
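As a numerical check of Eq. (\ref{VacuEnergy}), the short sketch below (ours; SI inputs as quoted above, with standard conversion factors) reproduces both the Hubble length and the observed vacuum energy density of order $10^{-47}~$GeV$^4/(\hbar^3c^3)$:

```python
import math

# PLANCK 2018 inputs quoted in the text
H0  = 67.66e3 / 3.0857e22   # Hubble constant in s^-1 (from km/s/Mpc)
OmL = 0.6889                # dark energy density parameter
G   = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
c   = 2.998e8               # speed of light, m/s
hbarc = 3.1615e-26          # hbar*c in J*m
GeV = 1.602e-10             # 1 GeV in J

ell0_km  = c / H0 / 1000.0                               # Hubble length, km
rho_SI   = 3 * H0**2 * c**2 * OmL / (8 * math.pi * G)    # vacuum energy density, J/m^3
rho_GeV4 = rho_SI * hbarc**3 / GeV**4                    # same, in GeV^4/(hbar^3 c^3)
print(ell0_km, rho_GeV4)   # ~1.37e23 km, a few times 1e-47
```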
For a massless particle, we obtain \begin{eqnarray} \frac{1}{(2\pi \hbar)^3} \int d^3 \; \vec{p}\; (\hbar \omega_p /2) \simeq 9.60\times10^{74} \; {\mbox{GeV}}^4/ (\hbar^3 c^3). \label{QFTlamda} \end{eqnarray} This integral is clearly divergent; it is usually cut off at the Planck scale, $\mu_p = \hbar/\ell_p$. Here, $\hbar \omega_p = [p^2c^2+m_g^2c^4]^{1/2}$ is the vacuum energy of the quantum harmonic state. To propose a possible solution of the cosmological constant problem, we first need to determine the number of states in the phase space volume, taking GUP, Eq. (\ref{GUPuncertainty}), into account. An analogy can be found in the Liouville theorem in the classical limit. We need to make sure that the size of each quantum mechanical state in the phase space volume depends on the modified momentum $p$ when GUP, Eq. (\ref{GUPuncertainty}), is taken into consideration. In other words, the number of quantum states in the phase space volume is assumed to be time-independent. In the classical limit, the quantum commutation relations are related to the Poisson brackets as $[\hat{A}, \hat{B}] = i\hbar \{A, B\}$. Details on the Poisson brackets in $D$ dimensions are outlined in appendix \ref{LiouvilleTheorem}. Consequently, the modified density of states has various implications for quantum field theory, for instance for the cosmological constant problem. In $D$-dimensional spherical coordinates, the density of states in momentum space is given as \cite{Fityo:2008zz, Chang:2001bm, Wang:2010ct} \begin{eqnarray} \frac{V\, d^D \vec{p}}{(1+\beta p^2)^{D} }, \end{eqnarray} where $V$ is the volume of space. It should be noticed that, in quantum mechanics, each quantum state occupies a phase-space cell of size $(2\pi\hbar)^D$, so that the density of states carries a factor $V/(2\pi \hbar)^D$.
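It is this $(1+\beta p^2)^{-D}$ suppression that renders the vacuum-energy integral finite. A short numerical sketch (ours; a dimensionless check of the massless momentum integral evaluated below, together with the resulting $\Lambda_{\rm GUP}$ using the GW170817 bound $\beta_0 \simeq 8.89\times 10^{60}$ of Eq. (\ref{MDRbeta1})):

```python
import math

# dimensionless check: int_0^inf p^3 dp / (1 + beta p^2)^3 = 1/(4 beta^2)
beta = 1.0
n, pmax = 200_000, 400.0
h = pmax / n
I = sum(h * ((i + 0.5) * h)**3 / (1 + beta * ((i + 0.5) * h)**2)**3
        for i in range(n))   # midpoint rule

# Lambda_GUP = (M_p c^2)^4 / (16 pi^2 beta_0^2), in units GeV^4/(hbar^3 c^3),
# with the GW170817 bound on beta_0 quoted in the text
Mp_GeV = 1.22e19
beta0 = 8.89e60
Lam = Mp_GeV**4 / (16 * math.pi**2 * beta0**2)
print(I, Lam)   # I ~ 0.25, Lam ~ 1.8e-48
```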
Therefore, for the Liouville theorem, the weight factor in three dimensions reads \cite{Fityo:2008zz, Chang:2001bm, Wang:2010ct} (see appendix \ref{LiouvilleTheorem}) \begin{eqnarray} \frac{1}{(2\pi \hbar)^3} \frac{d^3 \vec{p}}{(1+\beta p^2)^3}. \label{densityStates} \end{eqnarray} In quantum field theory, this modification of the number of quantum states in the phase-space volume should have consequences for various quantum phenomena, such as the cosmological constant problem and black-body radiation. With the finite GUP weight factor, the sum over all momentum states per unit volume of the phase space modifies the vacuum energy density. The cosmological constant is then determined by summing the vacuum fluctuation energies over all momentum states, \begin{eqnarray} \Lambda_{\mbox{GUP}} (m) &=& \frac{1}{(2\pi \hbar)^3} \int d^3 \vec{p}\; \rho(p^2) (\hbar \omega_p /2) = \frac{1}{2(2\pi \hbar)^3} \int \frac{d^3 \vec{p}}{(1+\beta p^2)^3} \sqrt{p^2c^2+m_g^2c^4}. \end{eqnarray} For a massless particle, the vacuum energy density, which is directly related to $\Lambda$, reads \begin{eqnarray} \Lambda_{\mbox{GUP}}(m=0) &=& \ \frac{c}{4\pi^2 \hbar^3} \int \frac{p^3}{(1+\beta p^2)^3}\; dp = \frac{c (M_p^2 c^2)^2}{16 \pi^2 \hbar^3 \beta_0^2} = 1.78 \times 10^{-48}~\mbox{GeV}^4/(\hbar^3 c^3). \label{GUPLamda} \end{eqnarray} The agreement between the observed value of the cosmological constant, $\Lambda \simeq 10^{-47}~$GeV$^4/(\hbar^3 c^3)$, and our calculation based on the quantum gravity approach, Eq. (\ref{GUPLamda}), is very convincing. We conclude that the connection between the upper bound on $\beta_0$, Eqs. (\ref{vgRainbow}) and (\ref{vDr}), estimated from the GW170817 event \cite{TheLIGOScientific:2017qsa}, the most updated observations of the PLANCK collaboration \cite{Aghanim:2018eyx} for the cosmological constant $\Lambda$, Eq. (\ref{QFTlamda}), and our estimated value of $\Lambda(m=0)$, Eq.
(\ref{GUPLamda}), gives an interpretation for the cosmological constant problem in the presence of the minimal length uncertainty. \section{Conclusions \label{conclusion} } In the present study, we have proposed a generalized uncertainty principle (GUP) with an additional term quadratic in momentum, from which we have derived the modified dispersion relations for GR and rainbow gravity, Eq. (\ref{MDRrel}) and Eq. (\ref{MDRrain}), respectively. Counting on the similarities between GUP (manifesting gravitational impacts on HUP) and the likely origin of the great discrepancy between the theoretical and observed values of the cosmological constant, namely the gravitational impacts on the vacuum energy density, the present study suggests a possible solution for the long-standing cosmological constant problem ({\it catastrophe of non-gravitating vacuum}), for which $\Lambda \simeq 10^{-47}~$GeV$^4/\hbar^3 c^3$. We have assumed that the gravitational waves propagate as free waves. Therefore, we could derive the group velocity in terms of the GUP parameter $\beta_0$ for GR and rainbow gravity, Eq. (\ref{MDRbeta1}) and Eq. (\ref{Rainbowbeta1}), respectively. Moreover, we have used recent results on gravitational waves from the binary neutron star merger, the GW170817 event, in order to determine the speed of the gravitons. Then, we have calculated the difference between the speed of gravitons and that of (photons) light, at finite and vanishing GUP parameter. We have shown that the upper bound on the dimensionless GUP parameter, $\beta \sim 10^{60}$, is merely constrained by such a speed difference. We have concluded that the speed of the graviton is directly related to the GUP approach utilized. The cosmological constant problem, which stems from the large discrepancy between the QFT-based calculations and the cosmological observations, is quantified as $\Lambda_{QFT}/\Lambda_{exp} \sim 10^{121}$.
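This chain of estimates can be reproduced numerically. In the sketch below, the dimensionless GUP integral $\int_0^\infty q^3(1+q^2)^{-3}\,dq = 1/4$ behind Eq. (\ref{GUPLamda}) is checked by quadrature, and $\beta_0 = 8.9\times 10^{60}$ is our assumed value, chosen within the bound discussed above so as to reproduce the quoted $1.78\times 10^{-48}$ GeV$^4$:

```python
# Rescaling p -> q/sqrt(beta) in Eq. (GUPLamda) gives
#   int_0^inf p^3/(1+beta p^2)^3 dp = (1/beta^2) int_0^inf q^3/(1+q^2)^3 dq
# with the dimensionless integral equal to 1/4, hence
#   Lambda_GUP = E_p^4/(16 pi^2 beta_0^2)  in units hbar = c = 1.
import math

# midpoint-rule check of the dimensionless integral (tail beyond q=1000 is ~5e-7)
N, Q = 200_000, 1000.0
h = Q / N
I = sum(((k + 0.5) * h) ** 3 / (1 + ((k + 0.5) * h) ** 2) ** 3 for k in range(N)) * h
print(f"dimensionless integral = {I:.4f}")            # 0.2500 = 1/4

E_planck = 1.22e19        # GeV, Planck energy (standard value, assumed here)
beta0 = 8.9e60            # assumed GUP parameter, within the quoted bound
lam_gup = E_planck**4 / (16 * math.pi**2 * beta0**2)  # GeV^4
lam_obs = 2.6e-47                                     # GeV^4, observed order
print(f"Lambda_GUP ~ {lam_gup:.2e} GeV^4")            # ~1.8e-48, cf. Eq. (GUPLamda)
print(f"QFT/observed ratio ~ 10^{math.log10(E_planck**4 / (16 * math.pi**2) / lam_obs):.0f}")
```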
This quite large ratio can be interpreted through features of the UV/IR correspondence and the impacts of gravity. For the former, a large $\Delta x$ (IR) corresponds to a large $\Delta p$ (UV) at the scale of the Planck momentum. For the latter, the GUP approach, for instance Eq. (\ref{GUPuncertainty}), plays an essential role. We have shown that by calculating the density of states with the GUP approach taken into account, Eq. (\ref{densityStates}), a possible solution of the cosmological constant problem can be proposed. At the Planck scale, the resulting density of states seems to impact the vacuum energy density of each quantum state, Eq. (\ref{GUPLamda}). We have obtained a refined value of the cosmological constant for a novel upper bound on $\beta_0$, which in turn was determined from the GW170817 observations. Finally, the possible matching between the upper bound on the GUP parameter deduced from the gravitational waves of the GW170817 event and the one estimated from the PLANCK 2018 observations seems to support the conclusion that constructing a theory of quantum gravity is of great importance. This would likely help in explaining various still-mysterious phenomena in physics.
\section{Compactness of the operators $T_1$ and $T_2$}\label{T_1-compact} Here we use notations from the algorithm of Subsection \ref{Alg_statement} to prove the compactness of the operator $T_j$ given by equation (\ref{T_j}) for $j=1, 2$. The proof requires several steps. Let us first prove that the operator $T_1$ is compact. Recall that since $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn),$ by Hartman's theorem, the operator $T_0 = H_G$ is compact and hence there exist $x_0 \in H^2(\Dd,\Cc^n)$ and $y_0 \in H^2(\Dd,\Cc^m)^\perp$ such that $(x_0,y_0)$ is a Schmidt pair for $H_G$ corresponding to the singular value $\|H_G\|=t_0.$ By Lemma \ref{2.2}, $x_0, \bar{z} \bar{y}_0$ admit the inner-outer factorizations \begin{equation}\label{x00y00} x_0 = \xi_0 h_0, \quad \bar{z} \bar{y}_0=\eta_0 h_0,\end{equation} where $\xi_0 \in H^\infty(\Dd,\Cc^n)$, $\eta_0 \in H^\infty(\Dd,\Cc^m)$ are vector-valued inner functions and $h_0\in H^2(\Dd,\Cc)$ is a scalar outer function. Moreover there exist unitary-valued functions of types $n\times n, m\times m$ respectively, of the form \begin{equation}\label{v00w00} V_0= \begin{pmatrix} \xi_0 & \bar{\alpha}_0 \end{pmatrix},\quad W_0= \begin{pmatrix}\eta_0 & \bar{\beta}_0 \end{pmatrix}^T, \end{equation} where $\alpha_0,\beta_0$ are inner, co-outer, quasi-continuous functions of types $n\times (n-1)$, $m\times (m-1)$ respectively and all minors on the first columns of $V_0,W_0^T$ are in $H^\infty$. 
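The norm computations later in this section repeatedly reduce inner products of wedge products to $2\times 2$ Gram determinants (Proposition \ref{we}): $\langle a \we b, c \we d\rangle = \det \begin{pmatrix} \langle a, c\rangle & \langle a, d\rangle \\ \langle b, c\rangle & \langle b, d\rangle \end{pmatrix}$. As a purely illustrative numerical sanity check, not part of the formal argument, the following script verifies this pointwise identity for random complex vectors, with the component convention $(a\we b)_{ij} = a_ib_j - a_jb_i$ for $i<j$:

```python
# Numerical check of the Gram-determinant identity for wedge products:
#   <a ^ b, c ^ d> = <a,c><b,d> - <a,d><b,c>,
# with (a ^ b)_{ij} = a_i b_j - a_j b_i (i < j) and <x,y> = sum_k x_k conj(y_k).
import random

def inner(x, y):
    return sum(xk * yk.conjugate() for xk, yk in zip(x, y))

def wedge(a, b):
    n = len(a)
    return [a[i] * b[j] - a[j] * b[i] for i in range(n) for j in range(i + 1, n)]

def rand_vec(n):
    return [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

random.seed(0)
a, b, c, d = (rand_vec(5) for _ in range(4))
lhs = inner(wedge(a, b), wedge(c, d))
rhs = inner(a, c) * inner(b, d) - inner(a, d) * inner(b, c)
print("identity holds:", abs(lhs - rhs) < 1e-9)
```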
Furthermore every $Q_1 \in H^\infty(\Dd,\CCmn)$ which is at minimal distance from $G$ satisfies $$W_0 (G-Q_1) V_0 = \begin{pmatrix} t_0 u_0 & 0\\ 0 & F_1 \end{pmatrix} $$ for some $$F_1\in H^\infty(\Dd,\Cc^{(m-1)\times(n-1)})+C(\Tt,\Cc^{(m-1)\times(n-1)}) $$ and some quasi-continuous function $u_0$ with $|u_0(z)|=1$ almost everywhere on $\Tt.$ \noindent Recall that $$X_1 = \xi_0 \telwe H^2(\Dd,\Cc^n),\quad Y_1 = \bar{\eta}_0\telwe H^2(\Dd,\Cc^m)^\perp $$ and $T_1 \colon X_1 \to Y_1 $ is given by $$ T_1(\xi_0 \telwe x) = P_{Y_1}[ \bar{\eta}_0 \telwe(G-Q_1)x] \quad \text{for all}\; x \in H^2(\Dd,\Cc^n). $$ Our first endeavour in this section is to prove the following theorem. \begin{theorem}\label{T0compact} Let \begin{equation}\label{calk1l1} \mathcal{K}_1 \stackrel{\emph{def}}{=} V_0 \begin{pmatrix}0 \\ H^2(\Dd,\Cc^{n-1}) \end{pmatrix}, \quad \mathcal{L}_1 \stackrel{\emph{def}}{=} W_0^* \begin{pmatrix}0 \\ H^2(\Dd,\Cc^{m-1})^\perp \end{pmatrix},\end{equation}and let the maps $$U_1\colon H^2(\Dd,\Cc^{n-1}) \to \mathcal{K}_1,$$ $$U_2\colon H^2(\Dd,\Cc^{m-1})^\perp \to \mathcal{L}_1$$ be given by $$ U_1 x = V_0 \begin{pmatrix}0 \\ x \end{pmatrix}, \quad U_2 y = W_0^* \begin{pmatrix}0 \\ y \end{pmatrix} $$for all $x\in H^2(\Dd,\Cc^{n-1}),\; y \in H^2(\Dd,\Cc^{m-1})^\perp.$ Consider the operator $\Gamma_1 = P_{\mathcal{L}_1} M_{G-Q_1} |_{\mathcal{K}_1}.$ \noindent Then \begin{enumerate} \item[\emph{(i)}] The maps $U_1,U_2$ are unitaries. \item[\emph{(ii)}] The maps $(\xi_0\telwe \cdot )\colon\mathcal{K}_1\to H^2(\Dd,\we^2\Cc^n)$ and $(\bar{ \eta}_0 \telwe \cdot)\colon\mathcal{L}_1 \to H^2(\Dd,\Cc^m)^\perp$ are unitaries. 
\item[\emph{(iii)}] The following diagram is commutative: \begin{equation}\label{commdiagr} \begin{array}{clllll} H^2(\Dd,\Cc^{n-1}) &\xrightarrow{U_1} & \mathcal{K}_1 &\xrightarrow{\xi_0 \telwe \cdot}& \xi_0 \telwe H^2 (\Dd, \Cc^n)=X_1\\ \Big\downarrow\rlap{$\scriptstyle H_{F_1} $} & ~ &\Big\downarrow\rlap{$\scriptstyle \Gamma_1$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_1$} \\ H^2(\Dd,\Cc^{m-1})^\perp &\xrightarrow{U_2}& \mathcal{L}_1 &\xrightarrow{\bar{\eta}_0 \telwe \cdot } & \bar{\eta}_0 \telwe H^2 (\Dd, \Cc^m)^\perp =Y_1 \end{array}.\end{equation} \item[\emph{(iv)}] $T_1$ is a compact operator. \item[\emph{(v)}]$ \|T_1\| = \|\Gamma_1\| = t_1$. \end{enumerate} \end{theorem} \begin{proof} Statement {\rm (i)} follows from Lemma \ref{3.2constr}. Statement {\rm (ii)} follows from Propositions \ref{xi0wek0subset} and \ref{eta0telweh2} below, which are consequences of the following lemmas. \begin{lemma}\label{maxvect} In the notation of Theorem \ref{T0compact}, the Hankel operator $H_G$ has a maximizing vector $x_0$ of unit norm such that $\xi_0,$ which is defined by $\xi_0 = \frac{x_0}{h_0},$ is a co-outer function. \end{lemma} \begin{proof} Choose any maximizing vector $x_0.$ By Lemma \ref{2.2}, $x_0$ has the inner-outer factorization $x_0 = \xi_0 h_0,$ where $h_0$ is a scalar outer factor. Then, the closure of $ \xi_0^T H^2(\Dd,\Cc^n),$ denoted by $\clos(\xi_0^T H^2(\Dd,\Cc^n)),$ is a closed shift-invariant subspace of $H^2(\Dd,\Cc),$ so, by Beurling's theorem, $$ \clos(\xi_0^T H^2(\Dd,\Cc^n)) = \phi H^2(\Dd,\Cc)$$ for some scalar inner function $\phi.$ Hence $$\bar{\phi} \xi_0^T H^2(\Dd,\Cc^n) \subset H^2(\Dd,\Cc). 
$$ Thus, if $\xi_0^T = (\xi_{01}, \cdots,\xi_{0n} ),$ we have $\bar{\phi}\xi_{0j} \in H^\infty(\Dd,\Cc)$ for $j=1,\cdots, n,$ and so, $$ \overline{ \phi } \xi_0 \in H^\infty(\Dd,\Cc^n).$$ Hence $$ \overline{ \phi}x_0 = \overline{ \phi} \xi_0 h_0 \in H^2(\Dd,\Cc^n).$$ \noindent Let $Q$ be a best $H^\infty$ approximation to $G.$ Since $x_0$ is a maximizing vector for $H_G$, by Theorem \ref{1.7}, $$ (G-Q)x_0 \in H^2(\Dd,\Cc^m)^\perp $$ and $$\|(G-Q)(z)x_0(z)\|_{\Cc^m} = \|H_G\| \|x_0(z)\|_{\Cc^n}$$ for almost all $z \in \Tt.$ Thus $$ (G-Q)\overline{\phi} x_0 \in H^2(\Dd,\Cc^m)^\perp $$ and $$\|(G-Q)\overline{\phi}x_0(z) \|_{\Cc^m} = \|H_G\| \| \overline{ \phi}x_0(z)\|_{\Cc^n} $$ for almost all $z \in \Tt.$ \noindent Hence $\overline{ \phi}x_0 \in H^2(\Dd,\Cc^n)$ is a maximizing vector for $H_G,$ and $\overline{\phi} x_0$ is co-outer. Then $\frac{\bar{ \phi}x_0}{\|x_0\|}$ is a co-outer maximizing vector of unit norm for $H_G.$ \end{proof} \begin{lemma}\label{utxi=1} Let $x_0$ be a co-outer maximizing vector of unit norm for $H_G,$ and let \newline $x_0=\xi_0 h_0$ be the inner-outer factorisation of $x_0.$ Then {\em (i)} $\xi_0$ is a quasi-continuous function and {\em (ii)} there exists a function $A\in H^\infty(\Dd,\Cc^n)$ such that $$A^T \xi_0 =1. $$ \end{lemma} \begin{proof} Let us first show that $$\xi_0 \in (H^\infty(\Dd,\Cc^n) + C(\Tt, \Cc^n)) \cap \overline{H^\infty(\Dd,\Cc^n) + C(\Tt, \Cc^n)}.$$ \noindent Let $Q$ be a best $H^\infty$ approximation to $G.$ Then, by Theorem \ref{1.7}, the function $Q$ satisfies the equation $$(G-Q)^* y_0 = t_0 x_0. $$ Taking complex conjugates in equations \eqref{x00y00}, we have $$(G-Q)^T \bar{y}_0 = t_0 \overline{x_0}. 
$$ Hence, for $z \in \Tt,$ $$(G-Q)^T z h_0 \eta_{0} = t_0 \overline{h_0} \overline{ \xi_0} ,$$ and therefore $$\displaystyle \frac{(G-Q)^T z h_0 \eta_0 }{t_0 \overline{h_0}} = \overline{\xi_0}.$$ \noindent Recall, by equation \eqref{eq223} (with $\phi=1$), $u_0 =\frac{\bar{z}\bar{h}_0}{h_0}.$ By Lemma \ref{2.2}, $u_0 \in QC ,$ hence $\overline{u_0} \in H^\infty+C.$ Note $ \overline{u_0} = \frac{zh_0}{\overline{h_0}}, $ and hence $$\overline{\xi_0} = \displaystyle \frac{(G-Q)^T \overline{u_0} \eta_0 }{t_0 }. $$ \noindent Since $H^\infty +C$ is an algebra and $(G-Q)^T, \; \eta_0 \in H^\infty +C,$ it follows that $\overline{\xi_0} \in H^\infty+C,$ thus $$ \xi_0 \in (H^\infty(\Dd,\Cc^n) + C(\Tt, \Cc^n)) \cap \overline{H^\infty(\Dd,\Cc^n) + C(\Tt, \Cc^n)}.$$ \noindent The conclusion that there exists a function $A\in H^\infty(\Dd,\Cc^n)$ such that $A^T \xi_0 =1 $ now follows directly from Lemma \ref{L6.2}. \end{proof} \begin{lemma}\label{a0h2} In the notation of Theorem \ref{T0compact}, let $\xi_0 \in H^\infty(\Dd,\Cc^n)$ be a vector-valued inner, co-outer, quasi-continuous function and let $$V_0 = \begin{pmatrix} \xi_0 & \bar{\alpha}_0 \end{pmatrix}$$ be a thematic completion of $\xi_0$ as described in Lemma \ref{2.2}, where $\alpha_0$ is an inner, co-outer, quasi-continuous function of order $n\times (n-1) $ and all minors on the first column of $V_0$ are analytic. Then, $$ \alpha_0^T H^2(\Dd,\Cc^n) = H^2(\Dd,\Cc^{n-1}). 
$$ \end{lemma} \begin{proof} By Lemma \ref{L6.2}, for the given $\alpha_0,$ there exists $A_0\in H^\infty(\Dd, \Cc^{(n-1)\times n })$ such that $A_0\alpha_0 = I_{n-1}.$ Equivalently, $\alpha_0^T A_0^T = I_{n-1}.$ \medskip \noindent Let $g\in H^2(\Dd,\Cc^{n-1}).$ Then $g = (\alpha_0^T A_0^T) g \in \alpha_0^T A_0^T H^2(\Dd,\Cc^{n-1}),$ which implies that $ g \in \alpha_0^T H^2(\Dd,\Cc^n).$ Hence $H^2(\Dd,\Cc^{n-1}) \subseteq \alpha_0^T H^2(\Dd,\Cc^n).$\medskip \noindent For the reverse inclusion, note that since $\alpha_0$ is in $H^\infty(\Dd, \Cc^{n\times (n-1)})$, we have $\alpha_0^T H^2(\Dd,\Cc^n) \subseteq H^2(\Dd,\Cc^{n-1}).$ Thus $$ \alpha_0^T H^2(\Dd,\Cc^n) = H^2(\Dd,\Cc^{n-1}). $$ \end{proof} \begin{proposition}\label{v*poc} Let $\xi_0, \alpha_0$ and $V_0$ be as in Lemma \ref{a0h2}. Then $$V_0^* \Poc(\{\xi_0\}, L^2(\Tt,\Cc^n)) = \begin{pmatrix} 0 \\ L^2(\Tt,\Cc^{n-1}) \end{pmatrix}. $$ \end{proposition} \begin{proof} Let $g \in V_0^* \Poc(\{\xi_0\}, L^2(\Tt,\Cc^n)).$ Equivalently, $g$ can be written as $g= V_0^* f$ for some $f\in L^2(\Tt,\Cc^n)$ such that $f(z) \perp \xi_0(z)$ for almost all $z\in \Tt.$ This in turn is equivalent to the assertion that $g=V_0^*f$ for some $f\in L^2(\Tt,\Cc^n)$ such that $(V_0^*f)(z) \perp (V_0^*\xi_0)(z)$ for almost all $z \in \Tt,$ since $V_0(z)$ is unitary for almost all $z \in \Tt.$ \noindent Note that, by the fact that $V_0$ is unitary-valued almost everywhere on $\Tt$, we have \begin{align}I_n &= V_0^*(z)V_0(z)\nonumber \vspace{2ex}\\ &= \begin{pmatrix} \xi_0^*(z) \\ \alpha_0^T(z) \end{pmatrix} \begin{pmatrix} \xi_0(z) & \bar{ \alpha}_0(z) \end{pmatrix}\nonumber \vspace{2ex}\\ &= \begin{pmatrix} \xi_0^*(z) \xi_0(z) & \xi_0^*(z) \bar{ \alpha}_0(z) \\ \alpha_0^T(z) \xi_0(z) & \alpha_0^T(z) \bar{\alpha}_0(z) \end{pmatrix}\quad\text{almost everywhere on}\;\Tt\label{vounitary},\end{align} and so $$V_0^*\xi_0 = \begin{pmatrix}\xi_0^* \\ \alpha_0^T \end{pmatrix} \xi_0 = \begin{pmatrix} 1 \\ 0_{(n-1)\times 1} \end{pmatrix},$$ 
where $0_{(n-1)\times 1}$ denotes the zero vector in $\Cc^{n-1}.$ \noindent Hence $g=V_0^*f$ with $(V_0^*f)(z)$ orthogonal to $(V_0^*\xi_0)(z)$ for almost every $z\in \Tt,$ is equivalent to the statement $g \in L^2(\Tt,\Cc^n)$ and $$g(z) \perp \begin{pmatrix} 1 \\ 0_{(n-1)\times 1} \end{pmatrix}$$ for almost all $z\in \Tt,$ or equivalently, $g\in \begin{pmatrix} 0\\ L^2(\Tt,\Cc^{n-1})\end{pmatrix}.$ \end{proof} \index{ $0_{(n-1)\times 1}$} \begin{proposition}\label{xi0wek0subset} Under the assumptions of Theorem \ref{T0compact}, where $x_0$ is a co-outer maximizing vector of unit norm for $H_G,$ $\xi_0\in H^\infty(\Dd,\Cc^n)$ is a vector-valued inner function given by $\xi_0= \frac{x_0}{h_0} ,$ $V_0=\begin{pmatrix} \xi_0 & \bar{\alpha}_0 \end{pmatrix}$ is a thematic completion of $\xi_0$ and $\mathcal{K}_1$ is defined by $$\mathcal{K}_1 = V_0 \begin{pmatrix} 0 \\ H^2(\Dd,\Cc^{n-1}) \end{pmatrix}\subseteq L^2(\Tt,\Cc^n),$$ we have $$\xi_0 \telwe \mathcal{K}_1 = \xi_0 \telwe H^2(\Dd,\Cc^n) $$ and the operator $$(\xi_0 \telwe \cdot) \colon \mathcal{K}_1 \to \xi_0 \telwe H^2(\Dd,\Cc^n)$$ is unitary. \end{proposition} \begin{proof} \noindent Let us first prove $\xi_0 \telwe H^2(\Dd,\Cc^n) \subset \xi_0 \telwe \mathcal{K}_1.$ Let $\phi\in H^2(\Dd,\Cc^n).$ Since $V_0$ is unitary-valued, $$ \xi_0\xi_0^* + \bar{\alpha}_0\alpha_0^T = I_n. 
$$ Thus $$ \begin{array}{cllll} \xi_0 \telwe \phi &= \xi_0 \telwe (\xi_0\xi_0^*\phi +\bar{\alpha}_0\alpha_0^T\phi) \vspace{2ex} \\ &= \xi_0 \telwe \xi_0 (\xi_0^*\phi) + \xi_0 \telwe \bar{\alpha}_0 (\alpha_0^T \phi) \vspace{2ex} \\ &= 0 + \xi_0 \telwe \bar{\alpha}_0 (\alpha_0^T \phi) \end{array} $$ on account of the pointwise linear dependence of $\xi_0$ and $\xi_0\xi_0^*\phi$ on $\Dd.$ Recall that, by Lemma \ref{a0h2}, $\alpha_0^T \phi \in H^2(\Dd,\Cc^{n-1})$ and, by the definition of $\mathcal{K}_1,$ $$\mathcal{K}_1 = \bar{\alpha}_0 H^2(\Dd,\Cc^{n-1}).$$ Hence, for $\phi\in H^2(\Dd,\Cc^n),$ $$\xi_0 \telwe \phi = \xi_0 \telwe \bar{\alpha}_0\alpha_0^T \phi \in \xi_0 \telwe \bar{ \alpha}_0 H^2(\Dd,\Cc^{n-1}),$$ and thus \begin{equation}\label{xi0telwek0}\xi_0 \telwe H^2(\Dd,\Cc^n) \subseteq \xi_0 \telwe \mathcal{K}_1. \end{equation} Let us now show that $\xi_0 \telwe \mathcal{K}_1 \subseteq \xi_0 \telwe H^2(\Dd,\Cc^n).$ Since $\mathcal{K}_1 = \bar{\alpha}_0H^2(\Dd,\Cc^{n-1}),$ an arbitrary element $u \in \xi_0 \telwe \mathcal{K}_1$ is of the form $$ u = \xi_0 \telwe \bar{\alpha}_0g,$$ for some $g\in H^2(\Dd,\Cc^{n-1}).$ Note that, by Lemma \ref{a0h2}, there exists a function $f \in H^2(\Dd,\Cc^n)$ such that $g = \alpha_{0}^T f.$ Hence $u=\xi_0 \telwe \bar{\alpha}_0 \alpha_0^T f.$ By equation \eqref{vounitary}, $\xi_0 \xi_0^* + \bar{\alpha}_0 \alpha_0^T = I_n.$ Thus $$u= \xi_0 \telwe (I_{n}-\xi_0 \xi_0^*)f = \xi_0 \telwe f - \xi_0 \telwe \xi_0 \xi_0^*f = \xi_0 \telwe f \in \xi_0 \telwe H^2(\Dd,\Cc^n), $$ and so, $ \xi_0 \telwe \mathcal{K}_1 \subseteq \xi_0 \telwe H^2(\Dd,\Cc^n).$ Combining the latter inclusion with the relation \eqref{xi0telwek0}, we have $$ \xi_0 \telwe \mathcal{K}_1 = \xi_0 \telwe H^2(\Dd,\Cc^n) . $$ Now, let us show that the operator $(\xi_0 \telwe \cdot) \colon \mathcal{K}_1 \to \xi_0 \telwe H^2(\Dd,\Cc^n)$ is unitary. As we have shown above, the operator is surjective. We will show it is also an isometry. 
Let $f \in \mathcal{K}_1.$ Then, $$\begin{array}{cllllllll} \| \xi_0 \telwe f\|_{L^2(\Tt,\we^2\Cc^n)}^2 &= \langle \xi_0 \telwe f, \xi_0 \telwe f \rangle_{L^2(\Tt,\we^2\Cc^n)} \vspace{2ex} \\ &= \displaystyle \frac{1}{2\pi} \int_0^{2\pi} \langle \xi_0(\eiu) \telwe f(\eiu), \xi_0(\eiu) \telwe f(\eiu) \rangle_{\we^2\Cc^n} d\theta\end{array}.$$ By Proposition \ref{we}, the latter integral is equal to $$\begin{array}{llll} &= \displaystyle \frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \langle \xi_0 (\eiu), \xi_0 (\eiu) \rangle_{\Cc^n} & \langle \xi_0(\eiu) , f(\eiu) \rangle_{\Cc^n} \\ \langle f(\eiu) , \xi_0 (\eiu) \rangle_{\Cc^n} & \langle f(\eiu) , f(\eiu) \rangle_{\Cc^n} \end{pmatrix}\; d\theta \vspace{2ex} \\ &= \displaystyle \frac{1}{2\pi} \int_0^{2\pi} \|\xi_0 (\eiu)\|_{\Cc^n}^2\langle f(\eiu) , f(\eiu) \rangle_{\Cc^n} - |\langle \xi_0(\eiu) , f(\eiu) \rangle_{\Cc^n} |^2 \; d\theta. \end{array} $$ \noindent Note that, by Proposition \ref{onxi}, $ \|\xi_0 (\eiu)\|_{\Cc^n} =1 $ for almost all $\eiu$ on $\Tt.$ Moreover, since $$\mathcal{K}_1= \bar{\alpha}_0H^2(\Dd,\Cc^{n-1}) ,$$ $f=\bar{\alpha}_0 g$ for some $g \in H^2(\Dd,\Cc^{n-1}).$ Hence $$\langle \xi_0 (\eiu), f(\eiu) \rangle_{\Cc^n} = \langle \xi_0(\eiu), \bar{\alpha}_0(\eiu) g(\eiu)\rangle_{\Cc^n} =\langle \alpha_{0}^T(\eiu)\xi_0(\eiu), g(\eiu)\rangle_{\Cc^{n-1}}= 0 $$ almost everywhere on $\Tt,$ since $ V_0 = \begin{pmatrix} \xi_0 & \bar{\alpha}_0 \end{pmatrix}$ is unitary-valued. Thus $$ \| \xi_0 \telwe f\|_{L^2(\Tt,\we^2\Cc^n)}^2 = \|f\|_{L^2(\Tt,\Cc^n)}^2,$$that is, the operator $(\xi_0 \telwe \cdot) \colon \mathcal{K}_1 \to \xi_0 \telwe H^2(\Dd,\Cc^n)$ is an isometry. Therefore, the surjective operator $(\xi_0 \telwe \cdot)$ is unitary. \end{proof} \begin{lemma}\label{inner0iff} Let $u\in L^2(\Tt,\Cc^m)$ and let $\eta_0 \in H^\infty(\Dd,\Cc^m)$ be a vector-valued inner function. 
Then \begin{equation}\label{cond9.7} \langle \bar{\eta}_0 \telwe u, \bar{\eta}_0 \telwe \bar{z} \bar{f} \rangle_{L^2(\Tt,\we^2\Cc^m)} = 0 \quad \text{for all}\quad f\in H^2(\Dd,\Cc^m) \end{equation} if and only if the function $$ z \mapsto u(z) - \langle u(z), \bar{ \eta}_{0}(z)\rangle_{\Cc^m} \bar{ \eta}_{0}(z)$$ belongs to $H^2(\Dd,\Cc^m).$ \end{lemma} \begin{proof} \noindent The statement that $\bar{\eta}_0 \telwe u$ is orthogonal to $\bar{\eta}_0 \telwe \bar{z} \bar{f}$ in $L^2(\Tt,\we^2\Cc^m)$ is equivalent to the equation $I =0$, where $$I=\frac{1}{2\pi}\int_{0}^{2\pi} \langle \bar{\eta}_0(\eiu) \telwe u(\eiu) , \bar{\eta}_0(\eiu) \telwe e^{-i\theta} \bar{f}(\eiu)\rangle_{\we^2\Cc^m} \; d\theta.$$ By Proposition \ref{we}, $$I=\displaystyle \frac{1}{2\pi}\int_{0}^{2\pi} \det \begin{pmatrix} \langle \bar{\eta}_0(\eiu), \bar{\eta}_0(\eiu)\rangle_{\Cc^m} & \langle \bar{\eta}_0(\eiu) , e^{-i\theta} \bar{f}(\eiu)\rangle_{\Cc^m} \\ \langle u(\eiu) , \overline{ \eta_{0}}(\eiu) \rangle_{\Cc^m} & \langle u(\eiu), e^{-i\theta} \bar{f}(\eiu)\rangle_{\Cc^m} \end{pmatrix}d\theta. 
$$ Notice that, since $\eta_0$ is an inner function, $\|\bar{\eta}_0(\eiu)\|_{\Cc^m}=1$ almost everywhere on $\Tt,$ and hence $$\begin{array}{cllllll} I&=\displaystyle \frac{1}{2\pi}\int_{0}^{2\pi} \det \begin{pmatrix} 1 & \langle \bar{\eta}_0(\eiu) , e^{-i\theta} \bar{f}(\eiu)\rangle_{\Cc^m} \\ \langle u(\eiu) , \overline{ \eta_{0}}(\eiu) \rangle_{\Cc^m} & \langle u(\eiu), e^{-i\theta} \bar{f}(\eiu)\rangle_{\Cc^m} \end{pmatrix}d\theta \vspace{2ex}\\ &=\displaystyle \frac{1}{2\pi} \displaystyle \int_{0}^{2\pi} \langle u(\eiu), e^{-i\theta}\bar{f}(\eiu)\rangle_{\Cc^m} \\ &\hspace{15ex}- \langle \bar{\eta}_0(\eiu) , e^{-i\theta} \bar{f}(\eiu)\rangle_{\Cc^m} \langle u(\eiu) , \bar{\eta}_0(\eiu)\rangle_{\Cc^m} d\theta \vspace{2ex} \\ &=\displaystyle \frac{1}{2\pi}\displaystyle \int_{0}^{2\pi} \langle u(\eiu), e^{-i\theta}\bar{f}(\eiu)\rangle_{\Cc^m} \\ &\hspace{15ex}- \left\langle \langle u(\eiu), \bar{\eta}_0(\eiu) \rangle_{\Cc^m} \bar{\eta}_0(\eiu) , e^{-i\theta} \bar{f}(\eiu) \right\rangle_{\Cc^m} d\theta \vspace{2ex} \\ &=\displaystyle \frac{1}{2\pi} \displaystyle \int_{0}^{2\pi} \left\langle u(\eiu) -\langle u(\eiu), \bar{\eta}_0(\eiu)\rangle_{\Cc^m}\bar{\eta}_0(\eiu) , e^{-i\theta} \bar{f}(\eiu)\right\rangle_{\Cc^m} d\theta . 
\end{array}$$ \noindent Thus, the condition \eqref{cond9.7} holds if and only if $$\displaystyle\frac{1}{2\pi}\int_{0}^{2\pi} \langle \bar{\eta}_0(\eiu) \telwe u(\eiu) , \bar{\eta}_0(\eiu) \telwe e^{-i\theta} \bar{f}(\eiu)\rangle_{\we^2\Cc^m} \; d\theta = 0 \quad \text{for all}\quad f\in H^2(\Dd,\Cc^m) $$ if and only if $$ \displaystyle\frac{1}{2\pi} \int_{0}^{2\pi} \left\langle u(\eiu) - \langle u(\eiu), \bar{\eta}_0(\eiu)\rangle_{\Cc^m}\bar{\eta}_0(\eiu) , e^{-i\theta} \bar{f}(\eiu)\right\rangle_{\Cc^m} d\theta =0 $$for all $f\in H^2(\Dd,\Cc^m),$ and the latter equation holds if and only if $$u(\eiu) - \langle u(\eiu), \bar{\eta}_0(\eiu)\rangle_{\Cc^m} \bar{\eta}_0(\eiu) $$ belongs to $H^2(\Dd,\Cc^m).$ \end{proof} \begin{lemma}\label{l0perp} In the notation of Theorem \ref{T0compact}, $$ \mathcal{L}_1^\perp = \{f \in L^2(\Tt,\Cc^m) \; : \beta_0^*f \in H^2(\Dd,\Cc^{m-1}) \} .$$ \end{lemma} \begin{proof} It is easy to see that $\mathcal{L}_1 = \beta_0 H^2(\Dd,\Cc^{m-1})^\perp.$ The general element of $\beta_0 H^2(\Dd,\Cc^{m-1})^\perp$ is $\beta_0 \bar{z} \bar{g}$ with $g \in H^2(\Dd,\Cc^{m-1}).$ For $f \in L^2(\Tt,\Cc^m),$ $f \in \mathcal{L}_1^\perp$ if and only if $$ \langle f,\beta_0 \bar{z} \bar{g} \rangle_{L^2(\Tt,\Cc^m)} = 0 \quad \text{for all} \quad g \in H^2(\Dd,\Cc^{m-1}).$$ Equivalently, $f \in \mathcal{L}_1^\perp$ if and only if $$ \displaystyle \frac{1}{2\pi} \int_{0}^{2\pi} \langle f(\eiu), \beta_0(\eiu) e^{-i\theta} \overline{g}(\eiu) \rangle_{\Cc^m} d\theta =0 \quad \text{for all} \quad g \in H^2(\Dd,\Cc^{m-1}) $$ if and only if $$\displaystyle \frac{1}{2\pi} \int_{0}^{2\pi} \langle \beta_0(\eiu)^* f(\eiu), e^{-i\theta} \overline{g}(\eiu) \rangle_{\Cc^{m-1}} d\theta =0 \quad \text{for all} \quad g \in H^2(\Dd,\Cc^{m-1}) .$$ The latter statement is equivalent to the assertion that $ \beta_0^* f$ is orthogonal to $ H^2(\Dd,\Cc^{m-1})^\perp$ in $L^2(\Tt,\Cc^{m-1}),$ which holds if and only if $\beta_0^* f $ belongs to $H^2(\Dd,\Cc^{m-1}).$ \noindent 
Hence $$ \mathcal{L}_1^\perp = \{f \in L^2(\Tt,\Cc^m) \; : \beta_0^*f \in H^2(\Dd,\Cc^{m-1}) \} $$ as required. \end{proof} \begin{proposition}\label{beta0*h2} Under the assumptions of Theorem \ref{T0compact}, let $\eta_0$ be defined by equation \eqref{x00y00} and let $W_0^T = \begin{pmatrix} \eta_0 & \bar{\beta}_0 \end{pmatrix}$ be a thematic completion of $\eta_0,$ where $\beta_0$ is an inner, co-outer, quasi-continuous function of type $m \times (m-1).$ Then, $$ \beta_0^* H^2(\Dd,\Cc^{m})^\perp = H^2(\Dd,\Cc^{m-1})^\perp.$$ \end{proposition} \begin{proof} By virtue of the fact that complex conjugation is a unitary operator on $L^2(\Tt,\Cc^m),$ an equivalent statement is that $\beta_0^T z H^2(\Dd,\Cc^{m}) = z H^2(\Dd,\Cc^{m-1}).$ By Lemma \ref{L6.2}, since $\beta_0$ is an inner, co-outer and quasi-continuous function, there exists a matrix-valued function $B_0 \in H^\infty( \Dd,\Cc^{(m-1)\times m})$ such that $$B_0 \beta_0 = I_{m-1}$$ or, equivalently, $$ \beta_0^T B_0^T = I_{m-1}.$$ \noindent Let $g \in z H^2(\Dd,\Cc^{m-1}).$ Then, $$ g = (\beta_0^T B_0^T) g \in \beta_0^T B_0^T z H^2(\Dd,\Cc^{m-1}) \subseteq \beta_0^T z H^2(\Dd,\Cc^{m}).$$ Hence $$ z H^2(\Dd,\Cc^{m-1}) \subseteq \beta_0^T z H^2(\Dd,\Cc^{m}). $$ Note that, $\beta_0 \in H^\infty(\Dd,\Cc^{m \times (m-1)}),$ and so, $$ zH^2(\Dd,\Cc^{m-1}) \subseteq \beta_0^T z H^2(\Dd,\Cc^{m}) \subseteq z H^2(\Dd,\Cc^{m-1}).$$ Thus $$ \beta_0^T z H^2(\Dd,\Cc^{m}) = z H^2(\Dd,\Cc^{m-1}). $$ \end{proof} \begin{proposition}\label{eta0telweh2} In the notation of Theorem \ref{T0compact}, let $\eta_0 \in H^\infty(\Dd,\Cc^m)$ be a vector-valued inner function given by equation \eqref{x00y00}, let $W_0^T = \begin{pmatrix} \eta_0 & \bar{ \beta}_0 \end{pmatrix}$ be a thematic completion of $\eta_0$ given by equation \eqref{v00w00}, and let $$ \mathcal{L}_1 = W_0^* \begin{pmatrix} 0 \\ H^2(\Dd,\Cc^{m-1})^\perp\end{pmatrix}. 
$$ Then $$ \bar{\eta}_0 \telwe \mathcal{L}_1 = \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp $$ and the operator $$ (\bar{\eta}_0 \telwe \cdot)\colon \mathcal{L}_1 \to \bar{\eta}_0\telwe H^2(\Dd,\Cc^m)^\perp $$ is unitary. \end{proposition} \begin{proof} Let us first prove that $\bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp \subseteq \bar{\eta}_0 \telwe \mathcal{L}_1.$ Consider an element $f \in H^2(\Dd,\Cc^m)^\perp.$ Note that, since $W_0^T$ is unitary valued, we have \begin{equation}\label{wounit} \bar{\eta}_0\eta_0^T +\beta_0\beta_0^*=I_m. \end{equation} Thus $$ \begin{array}{cllll} \bar{\eta}_0 \telwe f &= \bar{\eta}_0 \telwe (\bar{\eta}_0\eta_0^T +\beta_0\beta_0^*)f \vspace{2ex} \\ &= \bar{\eta}_0 \telwe \bar{\eta}_0\eta_0^T f + \bar{\eta}_0 \telwe \beta_0\beta_0^*f \vspace{2ex}\\ &= 0+\bar{\eta}_0 \telwe \beta_0\beta_0^*f, \end{array} $$ the last equality following by the pointwise linear dependence of $\bar{ \eta}_0$ and $\bar{ \eta}_0 (\eta_0^T f)$ on $\Dd.$ By Proposition \ref{beta0*h2}, $$ \beta_0^* H^2(\Dd,\Cc^{m})^\perp = H^2(\Dd,\Cc^{m-1})^\perp, $$ and, by the definition of $\mathcal{L}_1$, we have $$ \mathcal{L}_1 = \beta_0 H^2(\Dd,\Cc^{m-1})^\perp. $$ Hence, for $f \in H^2(\Dd,\Cc^m)^\perp,$ $$ \bar{\eta}_0 \telwe f = \bar{\eta}_0 \telwe \beta_0\beta_0^*f \in \bar{\eta}_0 \telwe \beta_0 H^2(\Dd,\Cc^{m-1})^\perp, $$ and thus $$ \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp \subseteq \bar{\eta}_0 \telwe \mathcal{L}_1. $$ \noindent Let us show $$ \bar{\eta}_0 \telwe \mathcal{L}_1 \subseteq \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp. $$ A typical element of $\bar{\eta}_0 \telwe \mathcal{L}_1$ is of the form $\bar{\eta}_0 \telwe \beta_0 g ,$ for some $g \in H^2(\Dd,\Cc^{m-1})^\perp.$ By Proposition \ref{beta0*h2}, there exists a $\phi \in H^2(\Dd,\Cc^m)^\perp$ such that $\beta_0^*\phi= g.$ Then $$ \bar{\eta}_0 \telwe \beta_0 g = \bar{\eta}_0 \telwe \beta_0 \beta_0^*\phi. 
$$ By equation \eqref{wounit}, we have $$ \bar{\eta}_0 \telwe \beta_0 g = \bar{\eta}_0 \telwe (I_{m}- \bar{\eta}_0\eta_0^T)\phi = \bar{\eta}_0 \telwe \phi, $$ the last equality following by pointwise linear dependence of $\bar{ \eta}_0$ and $\bar{ \eta}_0(\eta_0^T\phi) $ on $\Dd$. Thus $$ \bar{\eta}_0 \telwe \beta_0 g \in \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp ,$$ and so $ \bar{\eta}_0 \telwe \mathcal{L}_1 \subseteq \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp.$ Consequently $$ \bar{\eta}_0 \telwe \mathcal{L}_1 = \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp.$$ To prove that the operator $$ (\bar{\eta}_0 \telwe \cdot)\colon \mathcal{L}_1 \to \bar{\eta}_0\telwe H^2(\Dd,\Cc^m)^\perp $$ is unitary, it suffices to show that it is an isometry, since the preceding discussion asserts that it is surjective. To this end, let $s \in \mathcal{L}_1.$ Then, $$\begin{array}{lll} \|\bar{\eta}_0 \telwe s\|_{L^2(\Tt,\we^2\Cc^m)}^2 &= \langle \bar{\eta}_0 \telwe s , \bar{\eta}_0 \telwe s \rangle_{L^2(\Tt,\we^2\Cc^m)} \vspace{2ex} \\ &= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \langle \bar{\eta}_0 (\eiu) \telwe s(\eiu) , \bar{\eta}_0(\eiu) \telwe s(\eiu) \rangle_{\we^2\Cc^m}\;d\theta \vspace{2ex} \\ &= \displaystyle \frac{1}{2\pi}\int_{0}^{2\pi} \det \begin{pmatrix} \langle \bar{\eta}_0 (\eiu) , \bar{\eta}_0 (\eiu) \rangle_{\Cc^m} & \langle \bar{\eta}_0 (\eiu) , s(\eiu)\rangle_{\Cc^m} \\ \langle s(\eiu) , \bar{\eta}_0 (\eiu) \rangle_{\Cc^m} & \langle s(\eiu), s(\eiu)\rangle_{\Cc^m} \end{pmatrix}d\theta. 
\end{array} $$ By Proposition \ref{onxi}, $\|\bar{\eta}_0(z)\|_{\Cc^m} =1$ almost everywhere on $\Tt.$ Moreover, since $s \in \mathcal{L}_1,$ there exists a function $\psi \in H^2(\Dd,\Cc^{m-1})^\perp$ such that $s=\beta_0 \psi.$ Then $$\langle \bar{\eta}_0(\eiu), s(\eiu)\rangle_{\Cc^m} = \langle \bar{\eta}_0(\eiu), \beta_0(\eiu) \psi(\eiu)\rangle_{\Cc^m}=\langle \beta_0^*(\eiu)\bar{\eta}_0(\eiu), \psi(\eiu)\rangle_{\Cc^{m-1}}=0$$ almost everywhere on $\Tt,$ which follows from the fact that $W_0$ is unitary-valued, and so $$(W_0W_0^*)(z)=\begin{pmatrix}\eta_0^T(z) \\ \beta_0^*(z) \end{pmatrix}\begin{pmatrix} \bar{ \eta}_0(z) & \beta_0(z) \end{pmatrix}=\begin{pmatrix} \eta_0^T(z)\bar{ \eta}_0(z) & \eta_0^T(z) \beta_0(z)\\ \beta_0^*(z) \bar{ \eta}_0(z) & \beta_0^*(z) \beta_0(z) \end{pmatrix} =I_m$$ almost everywhere on $\Tt.$ \noindent Thus, for all $s \in \mathcal{L}_1$, $$\|\bar{\eta}_0 \telwe s\|_{L^2(\Tt,\we^2\Cc^m)}^2 = \|s\|_{L^2(\Tt,\Cc^m)}^2 , $$ which shows that the operator $$ (\bar{\eta}_0 \telwe \cdot)\colon \mathcal{L}_1 \to \bar{\eta}_0\telwe H^2(\Dd,\Cc^m)^\perp $$ is an isometry. We have proved it is also surjective, hence the operator $(\bar{\eta}_0\telwe \cdot)$ is unitary.\end{proof} {\it Continuation of the proof of Theorem \ref{T0compact}.}\\ {\rm (iii)} We have to prove that diagram \eqref{commdiagr} commutes. Recall that, by Lemma \ref{3.2constr}, the left-hand square commutes, so it suffices to show that the right-hand square, namely \begin{equation}\label{commdiagrr} \begin{array}{clllll} \mathcal{K}_1 &\xrightarrow{\xi_0 \telwe \cdot}& \xi_0 \telwe H^2 (\Dd, \Cc^n)=X_1\\ \Big\downarrow\rlap{$\scriptstyle \Gamma_1$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_1$} \\ \mathcal{L}_1 &\xrightarrow{\bar{\eta}_0 \telwe \cdot } & \bar{\eta}_0 \telwe H^2 (\Dd, \Cc^m)^\perp =Y_1 \end{array},\end{equation} also commutes.
That is, we wish to prove that, for all $x \in \mathcal{K}_1,$ $$ T_1(\xi_0 \telwe x)= \bar{ \eta}_0 \telwe \Gamma_1(x), $$where $\Gamma_1(x) = P_{\mathcal{L}_1}((G-Q_1)x)$ for any function $Q_1 \in H^\infty(\Dd,\CCmn)$ that satisfies the following equations $$(G-Q_1)x_0 = t_0 y_0, \quad y_0^*(G-Q_1) = t_0 x_0^*. $$ By Proposition \ref{xi0wek0subset}, $$\xi_0 \telwe \mathcal{K}_1 = \xi_0 \telwe H^2(\Dd,\Cc^n), $$ and so, for every $x \in \mathcal{K}_1, $ there exists $\tilde{x} \in H^2(\Dd,\Cc^n)$ such that $$\xi_0 \telwe x = \xi_0 \telwe \tilde{x}. $$ Thus, for $ x \in \mathcal{K}_1,$ $$T_1(\xi_0 \telwe x)= T_1(\xi_0\telwe \tilde{x}) = P_{Y_1}(\bar{ \eta}_0 \telwe (G-Q_1)\tilde{x}), $$ and $$ \bar{ \eta}_0 \telwe \Gamma_1(x) = \bar{ \eta}_0 \telwe P_{\mathcal{L}_1}(G-Q_1)x.$$ Hence to prove the commutativity of diagram (\ref{commdiagrr}), it suffices to show that, for all $x\in \mathcal{K}_1,$ $$P_{Y_1}[ \bar{\eta}_0 \telwe(G-Q_1)\tilde{x})] = \bar{\eta}_0 \telwe P_{\mathcal{L}_1}(G-Q_1)x$$ in $Y_1,$ where $\xi_0\telwe (x-\tilde{x})=0.$ By Proposition \ref{eta0telweh2}, $$\bar{\eta}_0 \telwe \mathcal{L}_1 = \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp=Y_1,$$ and so, for all $x \in \mathcal{K}_1,$ $\bar{ \eta}_0 \telwe P_{\mathcal{L}_1}(G-Q_1)x\in Y_1.$ \noindent Let us show that, for $x \in \mathcal{K}_1,$ $$\bar{\eta}_0 \telwe (G-Q_1)\tilde{x} - \bar{\eta}_0\telwe P_{\mathcal{L}_1} (G-Q_1)x $$ is orthogonal to $Y_1$ in $L^2(\Tt,\we^2 \Cc^m),$ or equivalently, that for every $f \in H^2(\Dd,\Cc^m),$ \begin{equation}\label{7.1} \left\langle \bar{\eta}_0 \telwe [ (G-Q_1)\tilde{x} - P_{\mathcal{L}_1}(G-Q_1)x ] , \bar{\eta}_0 \telwe \bar{z}\bar{f} \right\rangle_{L^2(\Tt, \we^2\Cc^m)} =0 \end{equation} for $x \in \mathcal{K}_1$ and for any $\tilde{x} \in H^2(\Dd,\Cc^n)$ such that $\xi_0 \telwe \tilde{x} = \xi_0 \telwe x.$ By Lemma \ref{0to0}, $$\bar{ \eta}_0 \telwe (G-Q_1)x=\bar{ \eta}_0 \telwe (G-Q_1)\tilde{x}.$$ Then equation (\ref{7.1}) is equivalent to the equation 
\begin{equation}\label{7.11} \langle \bar{\eta}_0 \telwe P_{\mathcal{L}_1^\perp} (G-Q_1)x, \bar{\eta}_0 \telwe \bar{z} \bar{f} \rangle_{L^2(\Tt, \we^2\Cc^m)} =0 \end{equation} for any $x \in \mathcal{K}_1.$ By Lemma \ref{inner0iff}, equation (\ref{7.11}) holds if and only if the function \begin{equation}\label{zmapsto1} z \mapsto [P_{\mathcal{L}_1^\perp}(G-Q_1)x](z) - \langle [P_{\mathcal{L}_1^\perp}(G-Q_1)x](z) ,\bar{\eta}_0(z)\rangle_{\Cc^m} \bar{\eta}_0(z)\end{equation}belongs to $H^2(\Dd,\Cc^m).$ By Lemma \ref{l0perp}, there exists a function $\psi \in L^2(\Tt,\Cc^m)$ such that \begin{equation}\label{PLperp} P_{\mathcal{L}_1^\perp} (G-Q_1)x=\psi \end{equation} and $$\beta_0^*\psi \in H^2(\Dd,\Cc^{m-1}).$$ Equation \eqref{PLperp} implies $$(G-Q_1)x -\psi \in \mathcal{L}_1 = \beta_0 H^2(\Dd,\Cc^{m-1})^\perp.$$ \noindent Hence, to prove that the function defined by equation (\ref{zmapsto1}) belongs to $H^2(\Dd,\Cc^m)$, we have to show that $$\psi - (\eta_0^T \psi) \bar{\eta}_0 \in H^2(\Dd,\Cc^m).$$ Since $W_0 =\begin{pmatrix} \eta_0 & \overline{ \beta_{0}} \end{pmatrix}^T $ is a unitary-valued function, $$\bar{ \eta}_0(z) \eta_0^T(z)+ \beta_0(z)\beta_0^*(z)=I_m$$ almost everywhere on $\Tt.$ \noindent Since $\eta_0^T \psi$ is a scalar-valued function, $$ \psi - (\eta_0^T \psi) \overline{ \eta}_0 = (I_m - \overline{ \eta}_0 \eta_0^T )\psi = \beta_0 \beta_0^*\psi \in H^2(\Dd,\Cc^m) .$$ Thus diagram (\ref{commdiagrr}) commutes. {\rm (iv)} By Lemma \ref{2.2}, $$F_1 \in H^\infty(\Dd,\Cc^{(m-1)\times (n-1)})+C(\Tt,\Cc^{(m-1)\times (n-1)}).$$ Then, by Hartman's Theorem \ref{2.04}, the Hankel operator $H_{F_1}$ is compact, and by (iii), $$(\bar{\eta}_0 \telwe \cdot) \circ (U_2 H_{F_1} U_1^*) \circ(\xi_0\telwe \cdot )^*= T_1 .$$ By (i) and (ii), the operators $U_1, U_2,$ $(\xi_0 \telwe \cdot)$ and $(\bar{ \eta}_0 \telwe \cdot)$ are unitary. Hence $T_1$ is a compact operator.
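\noindent The unitary equivalence just established can be recorded explicitly. Writing, as shorthand, $U = (\bar{\eta}_0 \telwe \cdot) \circ U_2$ and $V = (\xi_0 \telwe \cdot) \circ U_1$, both of which are unitary by (i) and (ii), we have
$$T_1 = U H_{F_1} V^*,$$
and, since multiplication by unitary operators leaves singular values unchanged, the singular values of $T_1$ and $H_{F_1}$ coincide (here $s_k(\cdot)$ denotes the $k$-th singular value):
$$s_k(T_1) = s_k(H_{F_1}) \quad \text{for all } k \geq 0.$$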
{\rm (v)} Since diagram \eqref{commdiagr} is commutative and $U_1,U_2,(\xi_0\telwe\cdot)$ and $(\bar{\eta}_0\telwe \cdot)$ are unitaries, $$\|T_1\| = \|\Gamma_1\| = \|H_{F_1}\|. $$ \end{proof} \noindent In what follows, we will prove an analogous statement to Theorem \ref{T0compact} for $T_2.$ To this end, we need the following results. \begin{lemma}\label{connofschpairs1} In the notation of Theorem \ref{T0compact}, suppose that $v_1 \in H^2(\Dd,\Cc^n)$ and $w_1 \in H^2(\Dd,\Cc^m)^\perp$ are such that $(\xi_0 \telwe v_1, \bar{\eta}_0 \telwe w_1)$ is a Schmidt pair for the operator $T_1$ corresponding to $\|T_1\|.$ Then {\em (i)} there exist $x_1 \in \mathcal{K}_1$ and $y_1\in \mathcal{L}_1$ such that $(x_1,y_1) $ is a Schmidt pair for the operator $\Gamma_1$ corresponding to $\|\Gamma_1\|$; {\em (ii)} for any $x_1 \in \mathcal{K}_1$ and $y_1\in \mathcal{L}_1$ such that $$ \xi_0 \telwe x_1 = \xi_0 \telwe v_1,\quad \bar{ \eta}_0 \telwe y_1 = \bar{ \eta}_0 \telwe w_1,$$ the pair $(x_1,y_1)$ is a Schmidt pair for $\Gamma_1$ corresponding to $\|\Gamma_1\|.$ \end{lemma} \begin{proof} {\rm (i)} By Theorem \ref{T0compact}, the diagram (\ref{commdiagr}) commutes, $(\xi_0 \telwe \cdot)$ is unitary from $\mathcal{K}_1$ to $X_1,$ and $(\bar{ \eta}_0\telwe \cdot)$ is unitary from $\mathcal{L}_1$ to $Y_1.$ Thus $\|\Gamma_1\| =\|T_1\|=t_1.$ Moreover, by Lemma \ref{3.2constr}, the operator $\Gamma_1\colon \mathcal{K}_1\to \mathcal{L}_1$ is compact, hence there exist $x_1 \in \mathcal{K}_1,$ $y_1 \in \mathcal{L}_1$ such that $(x_1,y_1)$ is a Schmidt pair for $\Gamma_1$ corresponding to $\|\Gamma_1\|=t_1.$ {\rm (ii)} Suppose that $x_1\in \mathcal{K}_1,y_1\in \mathcal{L}_1$ satisfy \begin{equation}\label{xitel1} \xi_0 \telwe x_1 = \xi_0 \telwe v_1,\end{equation} \begin{equation}\label{eta0tel1} \bar{ \eta}_0 \telwe y_1 = \bar{ \eta}_0 \telwe w_1. \end{equation} Let us show that $(x_1,y_1)$ is a Schmidt pair for $\Gamma_1$ corresponding to $t_1$, that is, $$\Gamma_1 x_1 = t_1y_1,\quad \Gamma_1^*y_1=t_1x_1.
$$ Since diagram (\ref{commdiagrr}) commutes, \begin{equation}\label{commt1gamma1}T_1 \circ (\xi_0\telwe \cdot )=(\bar{ \eta}_0 \telwe \cdot)\circ\Gamma_1 , \quad (\xi_0\telwe \cdot )^*\circ T_1^* = \Gamma_1^* \circ (\bar{\eta}_0 \telwe \cdot)^*. \end{equation} By hypothesis, \begin{equation}\label{hypt1} T_1 (\xi_0 \telwe v_1)= t_1 (\bar{ \eta}_0 \telwe w_1), \quad T_1^*(\bar{ \eta}_0 \telwe w_1)= t_1 (\xi_0 \telwe v_1). \end{equation} Thus, by equations \eqref{xitel1}, \eqref{eta0tel1}, \eqref{commt1gamma1} and \eqref{hypt1}, $$\begin{array}{clllll} \Gamma_1 x_1&= (\bar{\eta}_0 \telwe \cdot)^* T_1 (\xi_0\telwe v_1 ) \vspace{2ex} \\ &= (\bar{\eta}_0 \telwe \cdot)^* t_1 (\bar{ \eta}_0 \telwe w_1) \vspace{2ex} \\ &= t_1 (\bar{\eta}_0 \telwe \cdot)^* (\bar{ \eta}_0 \telwe y_1).\end{array}$$ Hence $$\Gamma_1 x_1= t_1 (\bar{\eta}_0 \telwe \cdot)^* (\bar{ \eta}_0 \telwe \cdot)y_1= t_1 y_1.$$ \noindent By equation (\ref{xitel1}), $$x_1 = (\xi_0\telwe \cdot )^* (\xi_0\telwe v_1 ),$$ and, by equation (\ref{eta0tel1}), $$(\bar{\eta}_0 \telwe \cdot)^*(\bar{ \eta}_0 \telwe w_1)=y_1. $$Thus $$\begin{array}{clll}\Gamma_1^* y_1 &= \Gamma_1^*(\bar{ \eta}_0 \telwe \cdot)^*(\bar{\eta}_0 \telwe w_1)\vspace{2ex} \\ &= (\xi_0 \telwe \cdot )^* T_1^* (\bar{ \eta}_0 \telwe w_1),\end{array}$$ the last equality following by the second equation of (\ref{commt1gamma1}). By equations \eqref{xitel1} and (\ref{hypt1}), we have $$ T_1^* (\bar{ \eta}_0 \telwe w_1) = t_1 (\xi_0\telwe v_1)= t_1(\xi_0\telwe x_1),$$ and so, $$ \Gamma_1^* y_1 = t_1 x_1.$$ Therefore $(x_1,y_1)$ is a Schmidt pair for $\Gamma_1$ corresponding to $\|\Gamma_1\|=\|T_1\|=t_1.$ \end{proof} \begin{lemma}\label{schfohf1} Suppose $(\xi_0 \telwe v_1, \bita_0 \telwe w_1)$ is a Schmidt pair for $T_1$ corresponding to $t_1.$ Let $$x_1 = (I_{n} - \xi_0 \xi_0^*)v_1,\quad y_1= (I_{m} - \bita_0\eta_0^T)w_1,$$ and let $$\hx_1 = \alpha_0^T x_1,\quad \hy_1=\beta_0^*y_1.
$$ Then {\em (i)} \begin{equation}\label{x1=alpha0alphaTx1} x_1= \bar{\alpha}_0 \alpha_0^Tx_1,\quad y_1=\beta_0\beta_0^*y_1; \end{equation} {\em (ii)} the pair $(\hx_1,\hy_1)$ is a Schmidt pair for $H_{F_1}$ corresponding to $\|H_{F_1}\|=t_1.$ \end{lemma} \begin{proof} {\rm (i)} Since $V_0=\big(\begin{matrix}\xi_0 & \bar{\alpha}_0 \end{matrix}\big)$ is unitary-valued, $I_{n} - \xi_0 \xi_0^* = \balpha_0 \alpha_0^T,$ and so \begin{align} \label{x1=alpha0alphaTx1-pr} \bar{\alpha}_0 \alpha_0^Tx_1& =(I_{n} - \xi_0 \xi_0^*) (I_{n} - \xi_0 \xi_0^*)v_1 \nn\\ & = (I_{n} - 2\xi_0 \xi_0^* + \xi_0 \xi_0^*\xi_0 \xi_0^*)v_1 \nn\\ & = (I_{n} - \xi_0 \xi_0^*)v_1 = x_1. \end{align} Similarly, since $W_0^T=\big( \begin{matrix}\eta_0 & \bar{\beta}_0 \end{matrix}\big)$ is unitary valued, $I_{m} - \bita_0 \eta_0^T = \beta_0 \beta_0^*,$ and so \begin{align} \label{y1=beta0beta*y1-pr} \beta_0\beta_0^*y_1 &= (I_{m} - \bita_0\eta_0^T)(I_{m} - \bita_0\eta_0^T)w_1 \nn\\ & = (I_{m} - 2\bita_0\eta_0^T +\bita_0\eta_0^T\bita_0\eta_0^T)w_1 \nn\\ & = (I_{m} - \bita_0\eta_0^T)w_1= y_1. \end{align} {\rm (ii)} Recall that, by Lemma \ref{3.2constr}, the maps $$U_1 \colon H^2(\Dd,\Cc^{n-1}) \to \mathcal{K}_1,\quad U_2 \colon H^2(\Dd,\Cc^{m-1})^\perp \to \mathcal{L}_1,$$ defined by $$U_1 \chi = V_0 \begin{pmatrix} 0 \\ \chi \end{pmatrix}=\bar{\alpha}_0\chi, \quad U_2 \psi = W_0^* \begin{pmatrix} 0 \\ \psi \end{pmatrix}=\beta_0\psi \quad $$ for all $\chi \in H^2(\Dd,\Cc^{n-1})$ and all $\psi \in H^2(\Dd,\Cc^{m-1})^\perp,$ are unitaries. 
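\noindent Since the symbols $\bar{\alpha}_0$ and $\beta_0$ are pointwise isometric, the adjoints of $U_1$ and $U_2$ also act by multiplication; a short computation worth recording, since it is used implicitly below: for $x = \bar{\alpha}_0 \chi \in \mathcal{K}_1$ and $y = \beta_0 \psi \in \mathcal{L}_1$, the identities $\alpha_0^T \bar{\alpha}_0 = I_{n-1}$ and $\beta_0^* \beta_0 = I_{m-1}$ almost everywhere on $\Tt$ give
$$U_1^* x = \chi = \alpha_0^T x, \qquad U_2^* y = \psi = \beta_0^* y.$$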
By the commutativity of the diagram \eqref{commdiagr}, \begin{equation}\label{hf1g1} H_{F_1} = U_2^* \Gamma_1 U_1.\end{equation} By Part (i), $x_1 \in \mathcal{K}_1$ and $y_1 \in \mathcal{L}_1$ and by Proposition \ref{onxi}, $$ \xi_0 \telwe x_1 = \xi_0 \telwe v_1,\quad \bar{ \eta}_0 \telwe y_1 = \bar{ \eta}_0 \telwe w_1.$$ Thus, by Lemma \ref{connofschpairs1}, $(x_1,y_1)$ is a Schmidt pair for the operator $\Gamma_1$ corresponding to $t_1=\|\Gamma_1\|,$ that is, \begin{equation}\label{schmg1} \Gamma_1 x_1 =t_1y_1,\quad \Gamma_1^*y_1=t_1x_1. \end{equation} To prove that the pair $(\hx_1,\hy_1)$ is a Schmidt pair for $H_{F_1}$ corresponding to $\|H_{F_1}\|=t_1$, we need to show that $$H_{F_1}\hx_1 = t_1 \hy_1\; \text{and} \; H_{F_1}^*\hy_1 = t_1\hx_1. $$ By equations \eqref{hf1g1} and \eqref{x1=alpha0alphaTx1}, we have \begin{align}\label{u2star} H_{F_1}\hat{x}_1&= H_{F_1}\alpha_0^T x_1\nonumber\vspace{2ex}\\&= U_2^*\Gamma_1 U_1 \alpha_0^T x_1 = U_2^* \Gamma_1 \bar{\alpha}_0\alpha_0^T x_1 \nonumber\vspace{2ex}\\ &= U_2^*\Gamma_1 x_1 = t_1 U_2^* y_1 = t_1 \beta_0^* y_1 = t_1 \hy_1. \end{align} Let us show that $H_{F_1}^*\hy_1 = t_1\hx_1.$ By equations \eqref{hf1g1} and \eqref{x1=alpha0alphaTx1}, we have \begin{align}\label{g1*} H_{F_1}^*\hy_1 &=H_{F_1}^* \beta_0^* y_1\nonumber\vspace{2ex}\\ &= U_1^* \Gamma_1^* U_2 \beta_0^* y_1 = U_1^*\Gamma_1^*\beta_0\beta_0^* y_1\nonumber \vspace{2ex}\\ &= U_1^*\Gamma_1^* y_1= t_1 U_1^* x_1 = t_1 \alpha_0^T x_1 = t_1 \hx_1.
\end{align} Therefore $(\hx_1,\hy_1)$ is a Schmidt pair for $H_{F_1}$ corresponding to $\|H_{F_1}\|=t_1.$ \end{proof} \begin{proposition}\label{x0wev1eta1wew1} Let $(\xi_0\telwe v_1,\bita_0\telwe w_1)$ be a Schmidt pair for $T_1$ corresponding to $t_1$ for some $v_1\in H^2(\Dd,\Cc^n),w_1\in H^2(\Dd,\Cc^m)^\perp,$ let $h_1 \in H^2(\Dd,\Cc)$ be the scalar outer factor of $\xi_0 \telwe v_1,$ let $$x_1 = (I_{n}- \xi_0 \xi_0^*)v_1,\quad y_1=(I_{m} - \bar{\eta}_0 \eta_0^T)w_1,$$ and let $$\hx_1 = \alpha_0^T x_1,\quad \hy_1=\beta_0^*y_1. $$ Then $$\|\hx_1 (z) \|_{\Cc^{n-1}} = \|\hy_1(z)\|_{\Cc^{m-1}} = |h_1(z)|,$$ $$\| x_1(z) \|_{\Cc^n} = \|y_1(z)\|_{\Cc^m} = |h_1(z)|$$ and $$\| \xi_0 (z) \we v_1(z) \|_{\we^2\Cc^n} = \| \bar{\eta}_0(z) \we w_1(z)\|_{\we^2\Cc^m} = |h_1(z)|$$ almost everywhere on $\Tt.$ \end{proposition} \begin{proof} By Lemma \ref{schfohf1}, $(\hx_1,\hy_1)$ is a Schmidt pair for $H_{F_1}$ corresponding to $\|H_{F_1}\|=t_1$. Hence $$H_{F_1}\hx_1 = t_1\hy_1 \quad \text{and}\quad H_{F_1}^* \hy_1 = t_1 \hx_1. $$ By Theorem \ref{1.7}, for the Hankel operator $H_{F_1}$ and the Schmidt pair $(\hx_1,\hy_1)$, we have \begin{equation}\label{hatseq} \|\hy_1(z)\|_{\Cc^{m-1}}= \|\hx_1(z)\|_{\Cc^{n-1}} \end{equation} almost everywhere on $\Tt.$ By equations \eqref{x1=alpha0alphaTx1}, $$x_1= \bar{\alpha}_0 \alpha_0^Tx_1 =\bar{\alpha}_0 \hx_1,\quad y_1= \beta_0\beta_0^*y_1=\beta_0 \hy_1.$$ Since $\bar{\alpha}_0(z)$ and $\beta_0(z)$ are isometric for almost every $z\in \Tt$, $$\|x_1(z)\|_{\Cc^n}=\|\hx_1(z)\|_{\Cc^{n-1}} \;\; \text{and} \;\; \|y_1(z)\|_{\Cc^m}=\|\hy_1(z)\|_{\Cc^{m-1}} $$ almost everywhere on $\Tt$.
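\noindent The passage between norms of vectors and norms of their wedge products, used in the next few lines, rests on the elementary Gram identity (cf. Proposition \ref{we}): for $u, v \in \Cc^n$ with $\|u\|_{\Cc^n} = 1$,
$$\|u \we v\|_{\we^2 \Cc^n}^2 = \det \begin{pmatrix} \langle u, u \rangle_{\Cc^n} & \langle u, v \rangle_{\Cc^n} \\ \langle v, u \rangle_{\Cc^n} & \langle v, v \rangle_{\Cc^n} \end{pmatrix} = \|v\|_{\Cc^n}^2 - |\langle v, u \rangle_{\Cc^n}|^2,$$
so that $\|u \we v\|_{\we^2\Cc^n} = \|v\|_{\Cc^n}$ whenever $v \perp u$. This applies pointwise with $u = \xi_0(z)$, $v = x_1(z)$ and with $u = \bar{\eta}_0(z)$, $v = y_1(z)$, since $x_1(z) \perp \xi_0(z)$ and $y_1(z) \perp \bar{\eta}_0(z)$ almost everywhere on $\Tt$.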
By equations \eqref{hatseq}, we deduce \begin{equation}\label{x1isy1} \|x_1(z)\|_{\Cc^n}=\|y_1(z)\|_{\Cc^m} \end{equation} almost everywhere on $\Tt.$ By Theorem \ref{T0compact}, $(\xi_0 \telwe \cdot)$ is an isometry from $\mathcal{K}_1$ to $X_1,$ and $(\bar{ \eta}_0\telwe \cdot)$ is an isometry from $\mathcal{L}_1$ to $Y_1.$ By Proposition \ref{onxi}, $$ \xi_0 \telwe x_1 = \xi_0 \telwe v_1,\quad \bar{ \eta}_0 \telwe y_1 = \bar{ \eta}_0 \telwe w_1.$$ Hence $$\|\xi_0(z)\we v_1(z)\|_{\we^2\Cc^n}= \|\xi_0(z)\we x_1(z)\|_{\we^2\Cc^n}=\|x_1(z)\|_{\Cc^n}$$ almost everywhere on $\Tt.$ Also $$\| \bita_0(z) \we w_1(z)\|_{\we^2\Cc^m}=\| \bita_0(z) \we y_1(z) \|_{\we^2\Cc^m}=\|y_1(z)\|_{\Cc^m}$$ almost everywhere on $\Tt.$ Thus, by equation \eqref{x1isy1}, $$\| \xi_0 (z) \we v_1(z) \|_{\we^2\Cc^n} = \| \bar{\eta}_0(z) \we w_1(z)\|_{\we^2\Cc^m}$$almost everywhere on $\Tt.$ Recall that $h_1$ is the scalar outer factor of $\xi_0 \telwe v_1$.
Hence $$\| \xi_0 (z) \we v_1(z) \|_{\we^2\Cc^n} = \| \bar{\eta}_0(z) \we w_1(z)\|_{\we^2\Cc^m} = |h_1(z)|,$$ $$\| x_1(z) \|_{\Cc^n} = \|y_1(z)\|_{\Cc^m} = |h_1(z)|$$ and $$\|\hx_1 (z) \|_{\Cc^{n-1}} = \|\hy_1(z)\|_{\Cc^{m-1}} = |h_1(z)|$$ almost everywhere on $\Tt.$ \end{proof} \index{level $j-$superoptimal error function} \index{$\mathcal{E}_j$} \begin{definition}\label{epsilonj} Given $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn)$ and $0\leq j \leq\min(m,n),$ define $\Omega_j$ to be \emph{the set of level $j$ superoptimal analytic approximants to $G$}, that is, the set of $Q\in H^\infty(\Dd,\CCmn)$ which minimize the tuple $$\big(s_0^\infty(G-Q),s_1^\infty(G-Q), \dots, s_j^\infty(G-Q)\big) $$with respect to the lexicographic ordering over $Q \in H^\infty(\Dd,\CCmn).$ For $Q\in \Omega_j$ we call $G-Q$ a \emph{level $j$ superoptimal error function}, and we denote by $\mathcal{E}_j$ the \emph{set of all level $j$ superoptimal error functions}, that is $$\mathcal{E}_j =\{G-Q \; : \; Q\in \Omega_j\}. $$ \end{definition} \begin{proposition}\label{tildew1v1} Let $m,n$ be positive integers such that $\min(m,n)\geq2.$ Let \linebreak $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn).$ In line with the algorithm from Subsection \ref{Alg_statement}, let $Q_1 \in H^\infty(\Dd,\CCmn)$ satisfy $$(G-Q_1)x_0 = t_0 y_0,\quad (G-Q_1)^*y_0=t_0x_0. 
$$ Let the spaces $X_1 , Y_1$ be given by $$ X_1 = \xi_0 \telwe H^2(\Dd,\Cc^n) \subset H^2(\Dd,\we^2\Cc^n), \quad Y_1 = \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp \subset H^2(\Dd,\we^2\Cc^m)^\perp,$$ and consider the compact operator $T_1\colon X_1 \to Y_1$ given by $$T_1(\xi_0 \telwe x) = P_{Y_1} (\bar{\eta}_0 \telwe (G-Q_1)x)$$ for all $x \in H^2(\Dd,\Cc^n).$ Let $(\xi_0 \telwe v_1,\bar{\eta}_0\telwe w_1)$ be a Schmidt pair for the operator $T_1$ corresponding to $t_1 = \|T_1\|,$ let $h_1 \in H^2(\Dd,\Cc)$ be the scalar outer factor of $\xi_0 \telwe v_1,$ let $$x_1 = (I_{n} - \xi_0 \xi_0^*)v_1, \quad y_1=(I_{m}-\bar{\eta}_0\eta_0^T)w_1 $$ and let $$\xi_1 =\frac{{x}_1}{h_1}, \quad \eta_1 =\frac{\bar{z}\bar{y}_1}{h_1}. $$ Then, there exist unitary-valued functions $\tilde{V}_1, \tilde{W}_1$ of types $(n-1)\times(n-1),$\linebreak $(m-1)\times (m-1)$ respectively of the form \begin{equation}\label{V1} \tilde{V}_1 \stackrel{\emph{def}}{=} \begin{pmatrix}\alpha_0^T \xi_1 & \overline{\alpha}_1 \end{pmatrix} \end{equation}and \begin{equation}\label{W1} \tilde{W}_1^T \stackrel{\emph{def}}{=} \begin{pmatrix} \beta_0^T \eta_1 & \overline{\beta}_1 \end{pmatrix},\end{equation}where $\alpha_1,\beta_1$ are inner, co-outer, quasi-continuous functions of types $(n-1)\times (n-2),$ \linebreak $(m-1)\times (m-2)$ respectively, and all minors on the first columns of $\tilde{V}_1,\tilde{W}_1^T$ are in $H^\infty.$ Furthermore, the set of all level $1$ superoptimal error functions $\mathcal{E}_1$ satisfies \begin{equation}\label{g-qv0v1w0w1} \mathcal{E}_1 = W_0^* \begin{pmatrix} 1 & 0 \\ 0& \tilde{W}_1^*\end{pmatrix} \begin{pmatrix} t_0 u_0 & 0&0 \\ 0& t_1 u_1 &0\\ 0&0 & \left(F_2+H^\infty(\Dd,\Cc^{(m-2)\times(n-2)})\right)\cap B(t_1) \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & \tilde{V}_1^* \end{pmatrix} V_0^* , \end{equation} where $F_2 \in H^\infty(\Dd,\Cc^{(m-2)\times (n-2)})+C(\Tt,\Cc^{(m-2)\times (n-2)}),$ $u_1 = \frac{\bar{z} \bar{h}_1}{h_1}$ is a quasi-continuous unimodular function and
$V_0,W_0^T$ are as in Theorem \ref{T0compact}, and $B(t_1)$ is the closed ball of radius $t_1$ in $L^\infty(\Tt,\Cc^{(m-2) \times (n-2)})$. \end{proposition} \begin{proof} By Theorem \ref{T0compact}, the following diagram commutes \begin{equation}\label{commdiagrhf1} \begin{array}{clllll} H^2(\Dd,\Cc^{n-1}) &\xrightarrow{U_1} & \mathcal{K}_1 &\xrightarrow{\xi_0 \telwe \cdot}& \xi_0 \telwe H^2 (\Dd, \Cc^n)=X_1\\ \Big\downarrow\rlap{$\scriptstyle H_{F_1} $} & ~ &\Big\downarrow\rlap{$\scriptstyle \Gamma_1$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_1$} \\ H^2(\Dd,\Cc^{m-1})^\perp &\xrightarrow{U_2}& \mathcal{L}_1 &\xrightarrow{\bar{\eta}_0 \telwe \cdot } & \bar{\eta}_0 \telwe H^2 (\Dd, \Cc^m)^\perp =Y_1. \end{array}\end{equation} Let $\hx_1=\alpha_0^Tx_1,\; \hy_1 = \beta_0^* y_1.$ By Lemma \ref{schfohf1}, $(\hx_1,\hy_1)$ is a Schmidt pair for $H_{F_1}$ corresponding to $t_1$. By equations \eqref{x1=alpha0alphaTx1}, $$x_1= \bar{\alpha}_0 \alpha_0^Tx_1 =\bar{\alpha}_0 \hx_1\; \text{and} \; y_1= \beta_0\beta_0^*y_1=\beta_0 \hy_1.$$ We want to apply Lemma \ref{2.2} to $H_{F_1}$ and the Schmidt pair $(\hx_1,\hy_1)$ to find unitary-valued functions $\tilde{V}_1,\tilde{W}_1$ such that, for any function $\tilde{Q}_1\in H^\infty(\Dd,\Cc^{(m-1)\times (n-1)})$ which is at minimal distance from $F_1,$ the following equation holds $$F_1-\tilde{Q}_1 = \tilde{W}_1^* \begin{pmatrix} t_1 u_1 & 0 \\ 0 & F_2 \end{pmatrix}\tilde{V}_1^*, $$ for some $F_2 \in H^\infty(\Dd,\Cc^{(m-2)\times (n-2)})+C(\Tt,\Cc^{(m-2)\times (n-2)}).$ For this purpose we find the inner-outer factorisations of $\hat{x}_1$ and $\bar{z}\bar{\hy}_1$.
By Proposition \ref{x0wev1eta1wew1}, \begin{equation}\label{h1common} \begin{aligned}&\|\hx_1(z)\|_{\Cc^{n-1}}=\|x_1(z)\|_{\Cc^n}=\| \xi_0(z)\telwe v_1(z)\|_{\we^2{\Cc^n}} = |h_1(z)|\;\\ \text{and}\\ &\| \hy_1(z)\|_{\Cc^{m-1}}= \|y_1(z)\|_{\Cc^m}=\| \bar{\eta}_0(z) \telwe w_1(z)\|_{\we^2\Cc^m} =|h_1(z)|\end{aligned}\end{equation} almost everywhere on $\Tt.$ Equations \eqref{h1common} imply that $h_1\in H^2(\Dd,\Cc)$ is the scalar outer factor of both $\hat{x}_1$ and $\bar{z}\bar{\hat{y}}_1.$ By Lemma \ref{2.2}, $\hat{x}_1,\bar{z}\bar{\hat{y}}_1$ admit the inner-outer factorisations $$\hat{x}_1 = \hat{\xi}_1 h_1, \quad \bar{z}\bar{\hat{y}}_1=\hat{\eta}_1 h_1 ,$$ for some inner vector-valued $\hat{\xi}_1\in H^\infty(\Dd,\Cc^{n-1})$ and $\hat{\eta}_1 \in H^\infty(\Dd,\Cc^{m-1}). $ Recall that $$\hat{x}_1 = \alpha_0^T x_1 = \alpha_0^T \xi_1 h_1,\quad \bar{z}\bar{\hat{y}}_1=\bar{z}\beta_0^T \bar{y}_1 = \beta_0^T \eta_1 h_1, $$ which imply $$ \hat{\xi}_1 = \alpha_0^T\xi_1 \quad \text{and}\quad \hat{\eta}_1 = \beta_0^T \eta_1 .$$ Let us show that $\alpha_0^T \xi_1,\;\beta_0^T \eta_1 $ are inner in order to apply Lemma \ref{2.2}. \noindent Recall that, since $V_0,W_0^T$ are unitary-valued, we have $$I_n -\xi_0 \xi_0^* =\bar{\alpha}_0 \alpha_0^T, \quad I_m - \bar{\eta}_0 \eta_0^T = \beta_0 \beta_0^* .$$ Therefore $$x_1 = (I_{n} - \xi_0 \xi_0^*)v_1=\bar{\alpha}_0 \alpha_0^Tv_1,\quad y_1= (I_{m} - \bita_0\eta_0^T)w_1=\beta_0\beta_0^*w_1 .$$ Then, \begin{equation}\label{aox1aov1}\alpha_0^Tx_1 = \alpha_0^T v_1, \quad \beta_0^T \bar{y}_1 =\beta_0^T\bar{w}_1 ,\end{equation} and since $$\xi_1 =\frac{x_1}{h_1},\quad \eta_1 = \frac{\bar{z}\bar{y}_1}{h_1}, $$ the functions $$\alpha_0^T\xi_1=\frac{\alpha_0^Tv_1}{h_1},\quad \beta_0^T\eta_1 = \frac{\beta_0^T\bar{z}\bar{w}_1}{h_1}$$ are analytic. Furthermore, by Proposition \ref{x0wev1eta1wew1}, $$\|x_1(z)\|_{\Cc^n}= \|y_1(z)\|_{\Cc^m}=|h_1(z)|= \|\hx_1(z)\|_{\Cc^{n-1}}=\| \hy_1(z)\|_{\Cc^{m-1}}$$ almost everywhere on $\Tt$. 
Thus $$\|\alpha_0^T(z)x_1(z)\|_{\Cc^{n-1}}=\|\alpha_0^T(z)v_1(z)\|_{\Cc^{n-1}}=|h_1(z)|$$ and $$ \|\beta_0^T(z)\bar{z} \bar{y}_1(z)\|_{\Cc^{m-1}}=\|\beta_0^T (z)\bar{z}\bar{w}_1(z)\|_{\Cc^{m-1}}=|h_1(z)| $$ almost everywhere on $\Tt.$ Hence $$\|\alpha_0^T(z)\xi_1(z)\|_{\Cc^{n-1}}=1,\quad \|\beta_0^T(z)\eta_1(z)\|_{\Cc^{m-1}}=1 $$ almost everywhere on $\Tt.$ Therefore $\alpha_0^T\xi_1,\; \beta_0^T\eta_1$ are inner functions. By Lemma \ref{2.2}, there exist inner, co-outer, quasi-continuous functions $\alpha_1,\beta_1$ of types $(n-1)\times (n-2)$ and $(m-1)\times (m-2)$ respectively such that $$\tilde{V}_1 =\begin{pmatrix}\alpha_0^T \xi_1 & \overline{\alpha}_1 \end{pmatrix} ,\quad \tilde{W}_1^T = \begin{pmatrix} \beta_0^T \eta_1 & \overline{\beta}_1 \end{pmatrix}$$ are unitary-valued and all minors on the first columns are in $H^\infty.$ Furthermore, by Lemma \ref{2.2}, every $\hat{Q}_1\in H^\infty(\Dd,\Cc^{(m-1)\times(n-1)})$ which is at minimal distance from $F_1$ satisfies $$F_1-\hat{Q}_1 = \tilde{W}_1^* \begin{pmatrix} t_1 u_1 & 0 \\ 0 & F_2 \end{pmatrix}\tilde{V}_1^*, $$where $F_2 \in H^\infty(\Dd,\Cc^{(m-2)\times (n-2)})+C(\Tt,\Cc^{(m-2)\times (n-2)})$ and $u_1$ is a quasi-continuous unimodular function given by $u_1 = \frac{\bar{z} \bar{h}_1}{h_1}.$ By Lemma \ref{f+hinfty}, the set $$\tilde{\mathcal{E}}_{0} =\{F_{1} - \hat{Q} : \hat{Q} \in H^\infty(\Dd,\Cc^{(m-1)\times (n-1)}), \| F_{1} - \hat{Q}\|_{L^\infty}=t_{1} \}$$ satisfies $$\tilde{\mathcal{E}}_{0} = \tilde{W}_{1}^* \begin{pmatrix} t_{1}u_{1} & 0 \\ 0 & \left(F_2+H^\infty(\Dd,\Cc^{(m-2)\times(n-2)})\right) \cap B(t_1) \end{pmatrix}\tilde{V}_{1}^*, $$ for some $F_2 \in H^\infty(\Dd,\Cc^{(m-2)\times(n-2)}) + C(\Tt, \Cc^{(m-2)\times(n-2)})$ and for the closed ball $B(t_1)$ of radius $t_1$ in $L^\infty(\Tt,\Cc^{(m-2)\times (n-2)}).$ Thus, by Lemma \ref{f+hinfty}, $\mathcal{E}_1$ admits the factorisation \eqref{g-qv0v1w0w1} as claimed.
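For completeness, note that $u_1$ is indeed unimodular: $h_1$ is outer, so $h_1(z) \neq 0$ almost everywhere on $\Tt$, and $|z| = 1$ there, whence
$$|u_1(z)| = \frac{|\bar{z}|\,|\bar{h}_1(z)|}{|h_1(z)|} = 1 \quad \text{almost everywhere on } \Tt.$$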
\end{proof} \begin{proposition}\label{g-q1y1t1x1} Suppose the function $Q_2\in H^\infty(\Dd,\CCmn)$ minimises $$(s_0^\infty(G-Q),s_1^\infty(G-Q)).$$ Then $Q_2$ satisfies $$(G-Q_2)x_0 = t_0y_0,\quad (G-Q_2)^*y_0 = t_0 x_0 $$and $$(G-Q_2)x_1 = t_1y_1,\quad (G-Q_2)^*y_1=t_1x_1, $$where $x_0,x_1,y_0,y_1,t_0,t_1$ are as in Theorem \ref{T0compact}. \end{proposition} \begin{proof} Let $(x_0,y_0)$ be a Schmidt pair for the Hankel operator $H_G$ corresponding to \linebreak$\|H_G\|=t_0.$ Then, by Theorem \ref{1.7}, every $Q_2 \in H^\infty(\Dd,\CCmn)$ which is at minimal distance from $G$ satisfies $$(G-Q_2)x_0 = t_0y_0,\quad (G-Q_2)^*y_0 = t_0 x_0 , $$ and, by Lemma \ref{2.2}, $$W_0 (G-Q_2)V_0 = \begin{pmatrix} t_0u_0 & 0 \\ 0 & F_1 \end{pmatrix} ,$$where $F_1 \in H^\infty(\Dd,\Cc^{(m-1)\times (n-1)})+C(\Tt,\Cc^{(m-1)\times (n-1)}).$ Moreover, by Lemma \ref{f+hinfty}, the set $\mathcal{E}_0 = \{ G-Q : Q \in \Omega_0\}$ of all level $0$ superoptimal error functions satisfies \begin{equation}\label{wevv}W_0 \mathcal{E}_0 V_0 = \begin{pmatrix} t_0 u_0 & 0\\ 0 & F_1 +H^\infty(\Dd, \Cc^{(m-1)\times (n-1)}) \end{pmatrix}\cap B(t_0).\end{equation} Suppose $Q_2\in \Omega_0$.
Then $$ W_0 (G-Q_2) V_0 =\begin{pmatrix} \eta_0^T \\\beta_0^* \end{pmatrix}(G-Q_2) \begin{pmatrix} \xi_0 & \balpha_0 \end{pmatrix}= \begin{pmatrix} \eta_0^T (G-Q_2)\xi_0 & \eta_0^T (G-Q_2)\balpha_0\\ \beta_0^* (G-Q_2)\xi_0 & \beta_0^*(G-Q_2)\balpha_0 \end{pmatrix}.$$ By equation \eqref{wevv}, for $\tilde{Q}_1\in H^\infty(\Dd,\Cc^{(m-1)\times(n-1)})$ at minimal distance from $F_1,$ \begin{equation}\label{wog-qo}\begin{pmatrix} \eta_0^T (G-Q_2)\xi_0 & \eta_0^T (G-Q_2)\balpha_0\\ \beta_0^* (G-Q_2)\xi_0 & \beta_0^*(G-Q_2)\balpha_0 \end{pmatrix}=\begin{pmatrix} t_0 u_0 &0 \\ 0 & F_1 -\tilde{Q}_1 \end{pmatrix}. \end{equation} \noindent Note that, by Theorem \ref{nehtmatr}, $$\| F_1 - \tilde{Q}_1\|_\infty=\|H_{F_1}\|, $$and, by Theorem \ref{T0compact} (part (v)), $\|H_{F_1}\|=t_1.$ Consideration of the $(2,2)$ entries of equation \eqref{wog-qo} yields \begin{equation}\label{fq-q1} F_1-\tilde{Q}_1 =\beta_0^* (G-Q_2) \bar{\alpha}_0 .\end{equation} Note that, if $(\hat{x}_1,\hat{y}_1)$ is a Schmidt pair for $H_{F_1}$ corresponding to $t_1=\|H_{F_1}\|,$ then, by Theorem \ref{1.7}, $$(F_1-\tilde{Q}_1)\hat{x}_1 = t_1 \hat{y}_1,\quad (F_1-\tilde{Q}_1)^*\hat{y}_1=t_1\hat{x}_1 .$$In view of equation \eqref{fq-q1}, the latter equations imply \begin{equation}\label{bg-q1a}\beta_0^* (G-Q_2) \bar{\alpha}_0 \hat{x}_1 = t_1 \hat{y}_1,\end{equation}and \begin{equation}\label{bg-q1ab} \alpha_0^T (G-Q_2)^*\beta_0\hy_1 = t_1\hat{x}_1.
\end{equation} \noindent By Lemma \ref{schfohf1}, we may choose the Schmidt pair for $H_{F_1}$ corresponding to $\|H_{F_1}\|$ to be \begin{equation}\label{schmhf1} \hat{x}_1 =\alpha_0^T x_1,\quad \hat{y}_1 = \beta_0^* y_1 .\end{equation} Recall that, by equations \eqref{x1=alpha0alphaTx1}, \begin{equation}\label{ex1} x_1 =\bar{\alpha}_0 \alpha_0^Tx_1\end{equation} and \begin{equation}\label{yai1} y_1 =\beta_0 \beta_0^* y_1 .\end{equation} In view of equations \eqref{bg-q1a} and \eqref{schmhf1}, we obtain $$ \beta_0^* (G-Q_2) \bar{\alpha}_0 \alpha_0^Tx_1= t_1 \beta_0^*y_1.$$ Multiplying both sides of the latter equation by $\beta_0,$ we have $$ \beta_0 \beta_0^*(G-Q_2)\balpha_0 \alpha_0^Tx_1 = t_1\beta_0 \beta_0^* y_1,$$ which, by equation \eqref{ex1}, implies $$ \beta_0 \beta_0^*(G-Q_2)x_1 = t_1\beta_0\beta_0^*y_1 ,$$ or equivalently, $$\beta_0\beta_0^* \bigg( (G-Q_2)x_1-t_1y_1\bigg)=0. $$ Since, by Theorem \ref{T0compact}, $U_2^* =M_{\beta_0\beta_0^*}$ is unitary, the latter equation yields $$ (G-Q_2)x_1 = t_1y_1.$$ Moreover, by equations \eqref{bg-q1ab} and \eqref{schmhf1}, we obtain $$\alpha_0^T(G-Q_2)^*\beta_0\beta_0^*y_1= t_1 \alpha_0^Tx_1.$$ Multiplying both sides of the latter equation by $\balpha_0,$ we have $$ \balpha_0\alpha_0^T(G-Q_2)^*\beta_0\beta_0^*y_1= t_1 \balpha_0\alpha_0^Tx_1.$$ In view of equation \eqref{yai1}, the latter expression is equivalent to the equation $$ \balpha_0\alpha_0^T(G-Q_2)^*y_1= t_1 \balpha_0\alpha_0^Tx_1, $$ or equivalently, $$\balpha_0\alpha_0^T \bigg( (G-Q_2)^*y_1-t_1x_1 \bigg) =0.$$ Since, by Theorem \ref{T0compact}, $U_1^* = M_{\balpha_0\alpha_0^T}$ is unitary, the latter equation yields $$ (G-Q_2)^*y_1=t_1x_1.$$ Therefore $Q_2$ satisfies the required equations. \end{proof} The next few propositions are in preparation for Theorem \ref{T2compactt} on the compactness of $T_2$. 
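\noindent A pointwise observation, recorded here as it underlies Lemma \ref{xi=A1A1*} and several computations that follow: with $A_1 = \alpha_0 \alpha_1$, the unitarity of $V_0$ and $\tilde{V}_1$ gives $\alpha_0^T \bar{\alpha}_0 = I_{n-1}$ and $\alpha_1^T \bar{\alpha}_1 = I_{n-2}$ almost everywhere on $\Tt$, whence
$$A_1^T \bar{A}_1 = \alpha_1^T \alpha_0^T \bar{\alpha}_0 \bar{\alpha}_1 = \alpha_1^T \bar{\alpha}_1 = I_{n-2},$$
so $\bar{A}_1(z)$ is an isometry and $(\bar{A}_1 A_1^T)(z)$ is an orthogonal projection on $\Cc^n$ for almost every $z \in \Tt$.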
\begin{proposition}\label{beta1h2} For a thematic completion of the inner matrix-valued function $\beta_0^T\eta_1$ of the form $\tilde{W}_1^T =\big( \begin{matrix} \beta_0^T \eta_1 & \bar{ \beta}_{1}\end{matrix}\big)$, where $\beta_1$ is an inner, co-outer, quasi-continuous function of type $(m-1) \times (m-2),$ the following equation holds $$\beta_1^* H^2(\Dd, \Cc^{m-1})^\perp= H^2(\Dd, \Cc^{m-2})^\perp.$$ \end{proposition} \begin{proof} By virtue of the fact that complex conjugation is a unitary operator on $L^2(\Tt,\Cc^m),$ an equivalent statement is that $\beta_1^T z H^2(\Dd,\Cc^{m-1}) = z H^2(\Dd,\Cc^{m-2}).$ By Lemma \ref{L6.2}, there exists a matrix-valued function $B_1 \in H^\infty( \Dd,\Cc^{(m-2)\times (m-1)})$ such that $$B_1 \beta_1 = I_{m-2}$$ or, equivalently, $$ \beta_1^T B_1^T = I_{m-2}.$$ \noindent Let $f \in z H^2(\Dd,\Cc^{m-2}).$ Then, $$ f = (\beta_1^T B_1^T) f \in \beta_1^T B_1^T z H^2(\Dd,\Cc^{m-2}) \subseteq \beta_1^T z H^2(\Dd,\Cc^{m-1}).$$ Hence $$ z H^2(\Dd,\Cc^{m-2}) \subseteq \beta_1^T z H^2(\Dd,\Cc^{m-1}). $$ \noindent Note that, since $\beta_1 \in H^\infty(\Dd, \Cc^{(m-1) \times (m-2)})$, we have $$ \beta_1^T z H^2(\Dd,\Cc^{m-1}) \subseteq z H^2(\Dd,\Cc^{m-2}).$$ Thus $$\beta_1^T z H^2(\Dd,\Cc^{m-1}) = z H^2(\Dd,\Cc^{m-2}).$$ \end{proof} \begin{lemma}\label{a1h2} For a thematic completion of the inner matrix-valued function $\alpha_0^T\xi_1$ of the form $ \tilde{V}_1 =\big(\begin{matrix} \alpha_0^T \xi_1 & \bar{ \alpha}_{1} \end{matrix}\big), $ where $\alpha_1$ is an inner, co-outer, quasi-continuous function of type $(n-1) \times (n-2),$ the following equation holds $$ \alpha_1^T H^2(\Dd,\Cc^{n-1}) = H^2(\Dd,\Cc^{n-2}). 
$$ \end{lemma} \begin{proof} \noindent By Lemma \ref{L6.2}, for the given $\alpha_1,$ there exists $A_1\in H^\infty(\Dd, \Cc^{(n-2)\times (n-1) })$ such that $A_1\alpha_1 = I_{n-2}.$ Equivalently, $\alpha_1^T A_1^T = I_{n-2}.$ \medskip \noindent Let $g\in H^2(\Dd,\Cc^{n-2}).$ Then $g = (\alpha_1^T A_1^T) g \in \alpha_1^T A_1^T H^2(\Dd,\Cc^{n-2}),$ which implies that \linebreak$ g \in \alpha_1^T H^2(\Dd,\Cc^{n-1}).$ Hence $H^2(\Dd,\Cc^{n-2}) \subseteq \alpha_1^T H^2(\Dd,\Cc^{n-1}).$\medskip \noindent For the reverse inclusion, note that, since $\alpha_1 \in H^\infty(\Dd, \Cc^{(n-1) \times (n-2)})$, we have $$\alpha_1^T H^2(\Dd,\Cc^{n-1}) \subseteq H^2(\Dd,\Cc^{n-2}).$$ Thus $$ \alpha_1^T H^2(\Dd,\Cc^{n-1}) = H^2(\Dd,\Cc^{n-2}). $$ \end{proof} \begin{remark}\label{V1V1*unit} Let $V_0$ and $\tilde{V}_1$ be given by equations \eqref{V0W0} and \eqref{V1} respectively and let $V_1 = \begin{pmatrix} 1 & 0 \\ 0 & \tilde{V}_1 \end{pmatrix}.$ Since $V_0,$ $\tilde{V}_1$ and $V_1$ are unitary-valued, we have \begin{equation}\label{V0V0*} I_n = V_0 V_0^*= \xi_0 \xi_0^* + \bar{\alpha}_0 \alpha_0^T, \end{equation} \begin{equation}\label{V1V1*} I_{n-1} = \tilde{V}_1 \tilde{V}_1^* = \alpha_0^T \xi_1\xi_1^* \bar{\alpha}_0 + \bar{\alpha}_1 \alpha_1^T. \end{equation} \end{remark} \begin{lemma}\label{xi=A1A1*} Let $V_0$ and $\tilde{V}_1$ be given by equations \eqref{V0W0} and \eqref{V1} respectively. Let $A_1= \alpha_0\alpha_1$. Then \begin{equation} \label{1-xi=A1A1*} I_n - \xi_0 \xi_0^* -\xi_1 \xi_1^*= \bar{\alpha}_0 \bar{\alpha}_1 \alpha_1^T\alpha_0^T = \bar{A}_1A_1^T \end{equation} almost everywhere on $\Tt$.
\end{lemma} \begin{proof} By equation \eqref{V1V1*} $$ \bar{\alpha}_1 \alpha_1^T= I_{n-1} - \alpha_0^T \xi_1\xi_1^* \bar{\alpha}_0,$$ thus $$ \bar{\alpha}_0 \bar{\alpha}_1\alpha_1^T\alpha_0^T =\bar{\alpha}_0( I_{n-1} - \alpha_0^T \xi_1\xi_1^* \bar{\alpha}_0)\alpha_0^T .$$ By equation \eqref{V0V0*}, $$ \bar{\alpha}_0 \alpha_0^T = I_n - \xi_0 \xi_0^* .$$ Hence $$ \bar{\alpha}_0 \bar{\alpha}_1 \alpha_1^T \alpha_0^T =(I_n - \xi_0 \xi_0^* ) - (I_n - \xi_0 \xi_0^* )\xi_1\xi_1^* (I_n - \xi_0 \xi_0^* ).$$ Since, by Proposition \ref{onxi}, the set $\{\xi_0(z), \xi_1(z) \}$ is orthonormal in $\Cc^n$ for almost every $z \in \Tt$, $$ \bar{\alpha}_0 \bar{\alpha}_1 \alpha_1^T\alpha_0^T = I_n - \xi_0 \xi_0^* -\xi_1 \xi_1^* $$ almost everywhere on $\Tt$. \end{proof} Let us state certain identities that are useful for the next statements. \begin{remark}\label{w1w1*unit} Let $W_0$ and $\tilde{W}_1$ be given by equations \eqref{V0W0} and \eqref{W1} respectively and let $W_1 = \begin{pmatrix} 1 & 0 \\ 0 & \tilde{W}_1 \end{pmatrix}.$ Then \begin{equation}\label{w0w0*} I_m = W_0^* W_0= \bar{\eta}_0 \eta_0^T + \beta_0 \beta_0^*, \end{equation} \begin{equation}\label{w1w1*} I_{m-1} = \tilde{W}_1^* \tilde{W}_1 = \beta_0^* \bar{\eta}_1 \eta_1^T \beta_0 + \beta_1 \beta_1^*. \end{equation} \begin{equation} \label{W1W0*} \begin{array}{cllllll} W_0^* \begin{pmatrix} 1 & 0 \\ 0 & \tilde{W}_1^* \end{pmatrix}= \begin{pmatrix} \bar{\eta}_0 & \beta_0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \begin{pmatrix} \beta_0^* \bar{\eta}_1 & \beta_1 \end{pmatrix} \end{pmatrix}= \begin{pmatrix} \bar{\eta}_0 & \beta_0 \beta_0^* \bar{\eta}_1 & \beta_0 \beta_1 \end{pmatrix}.
\end{array} \end{equation} $$\begin{array}{cllllll} W_0^* \begin{pmatrix} 1 & 0 \\ 0 & \tilde{W}_1^* \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \tilde{W}_1 \end{pmatrix} W_0 \vspace{2ex} &= \begin{pmatrix} \bar{\eta}_0 & \beta_0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & \begin{pmatrix} \beta_0^* \bar{\eta}_1 & \beta_1 \end{pmatrix} \end{pmatrix} \begin{pmatrix} 1 & 0 \\0 & \begin{pmatrix} \eta_1^T \beta_0 \\ \beta_1^* \end{pmatrix} \end{pmatrix} \begin{pmatrix} \eta_{0}^T \\ \beta_0^* \end{pmatrix}\vspace{2ex} \\\end{array}$$ $$\begin{array}{llll}&= \begin{pmatrix} \bar{\eta}_0 & \beta_0 \beta_0^* \bar{\eta}_1 & \beta_0 \beta_1 \end{pmatrix} \begin{pmatrix} \eta_0^T \\ \eta_1^T \beta_0 \beta_0^* \\ \beta_1^* \beta_0^* \end{pmatrix} \vspace{2ex} \\&= \bar{\eta}_0 \eta_0^T + \beta_0 \beta_0^* \bar{\eta}_1 \eta_1^T \beta_0 \beta_0^* + \beta_0 \beta_1 \beta_1^* \beta_{0}^* .\end{array}$$ Furthermore, \begin{equation}\label{b0b1conn}\bar{\eta}_0 \eta_0^T + \beta_0 \beta_0^* \bar{\eta}_1 \eta_1^T \beta_0 \beta_0^* + \beta_0 \beta_1 \beta_1^* \beta_{0}^* =\bar{\eta}_0 \eta_0^T + \beta_0 (I_{m-1} - \beta_1 \beta_1^* + \beta_1 \beta_1^* ) \beta_0^* = I_m .\end{equation} \end{remark} Equations \eqref{w0w0*} and \eqref{w1w1*} follow from the facts that $W_0, \tilde{W}_1$ and $W_1$ are unitary-valued on $\Tt$. Equation \eqref{b0b1conn} follows from equations (\ref{w0w0*}) and (\ref{w1w1*}). \begin{lemma}\label{eta=B1B1*} Let $W_0$ and $\tilde{W}_1$ be given by equations \eqref{V0W0} and \eqref{W1} respectively. Let $B_1= \beta_0 \beta_1$. Then \begin{equation} \label{1-eta=B1B1*} I_m - \bar{\eta}_0 \eta_0^T -\bar{\eta}_1 \eta_1^T= \beta_0 \beta_1 \beta_1^*\beta_0^*= B_1 B_1^* \end{equation} almost everywhere on $\Tt$.
\end{lemma} \begin{proof} By equation \eqref{w1w1*} $$\beta_1 \beta_1^*= I_{m-1} -\beta_0^* \bar{\eta}_1 \eta_1^T \beta_0 ,$$ thus $$ \beta_0 \beta_1 \beta_1^*\beta_0^*= \beta_0(I_{m-1} -\beta_0^* \bar{\eta}_1 \eta_1^T \beta_0 )\beta_0^*.$$ By equation \eqref{w0w0*}, $$ \beta_0 \beta_0^*= I_m - \bar{\eta}_0 \eta_0^T.$$ Hence $$ \beta_0 \beta_1 \beta_1^*\beta_0^*= (I_m - \bar{\eta}_0 \eta_0^T) - (I_m - \bar{\eta}_0 \eta_0^T)\bar{\eta}_1 \eta_1^T (I_m - \bar{\eta}_0 \eta_0^T).$$ Since, by Proposition \ref{onxi}, the set $\{\bar{\eta}_0(z), \bar{\eta}_1(z) \}$ is orthonormal in $\Cc^m$ for almost every $z \in \Tt$, $$ \beta_0 \beta_1 \beta_1^*\beta_0^* = I_m - \bar{\eta}_0 \eta_0^T -\bar{\eta}_1 \eta_1^T $$ almost everywhere on $\Tt$. \end{proof} \begin{proposition}\label{xi12telweunit} With the notation of Proposition \ref{tildew1v1}, let unitary completions of $\xi_0$ and $\alpha_0^T\xi_1$ be given by $$V_0 = \begin{pmatrix} \xi_0 & \bar{\alpha_{0}} \end{pmatrix}, \quad \tilde{V}_1 = \begin{pmatrix}\alpha_0^T\xi_1 & \bar{\alpha_{1}} \end{pmatrix} ,$$ where $\alpha_0, \alpha_1$ are inner, co-outer, quasi-continuous matrix-valued functions of types $n\times (n-1)$ and $(n-1)\times (n-2)$ respectively. Let $$V_1= \begin{pmatrix} 1 & 0 \\ 0 & \tilde{V}_1\end{pmatrix}$$ and let \begin{equation}\label{K2} \mathcal{K}_2 = V_0 V_1 \begin{pmatrix} 0_{2\times 1} \\ H^2(\Dd,\Cc^{n-2}) \end{pmatrix}. \end{equation} Then $$\xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n) = \xi_0 \telwe \xi_1 \telwe \mathcal{K}_2$$ and the operator $(\xi_0 \telwe \xi_1 \telwe\cdot)\colon \mathcal{K}_2 \to \xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n)$ is unitary. \end{proposition} \begin{proof} First let us prove that $$\xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n) = \xi_0 \telwe \xi_1 \telwe \mathcal{K}_2. $$ Recall that $A_1 = \alpha_0\alpha_1$. Observe that, by definition, \begin{equation} \label{K2A} \mathcal{K}_2 = \balpha_0 \balpha_1 H^2(\Dd,\Cc^{n-2})= \bar{A}_1 H^2(\Dd,\Cc^{n-2}). 
\end{equation} By Lemmas \ref{a0h2} and \ref{a1h2}, \begin{equation}\label{alphaH2-K2} \begin{array}{lll} H^2(\Dd,\Cc^{n-2})&= \alpha_1^T H^2(\Dd,\Cc^{n-1})\\ &= \alpha_1^T \alpha_0^T H^2(\Dd,\Cc^{n})\\ &= A_1^T H^2(\Dd,\Cc^{n}). \end{array} \end{equation} By equations \eqref{K2A} and \eqref{alphaH2-K2}, \begin{equation}\label{K2-2} \mathcal{K}_{2} = \balpha_0 \balpha_1 H^2(\Dd,\Cc^{n-2})= \bar{A}_{1} A_1^T H^2(\Dd,\Cc^{n}). \end{equation} By Lemma \ref{xi=A1A1*}, \begin{equation}\label{AA1-K} \bar{A}_1 A_1^T = I_n - \sum\limits_{k=0}^1 \xi_k \xi_k^*. \end{equation} By Proposition \ref{onxi}, $\{\xi_i(z)\}_{i=0}^1$ is an orthonormal set in $\Cc^n$ for almost every $z \in \Tt.$ Therefore, by equations \eqref{K2-2} and \eqref{AA1-K}, \begin{equation}\label{K2H2} \begin{array}{lll} \xi_0 \telwe \xi_1 \telwe \mathcal{K}_{2} &= \xi_0 \telwe \xi_1 \telwe \bar{A}_{1} A_1^T H^2(\Dd,\Cc^{n})\\ &= \xi_0 \telwe \xi_1 \telwe (I_n - \sum\limits_{k=0}^1 \xi_k \xi_k^*)H^2(\Dd,\Cc^n)\\ &= \xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n). \end{array} \end{equation} Let us show that the operator $(\xi_0 \telwe \xi_1 \telwe\cdot)\colon \mathcal{K}_2 \to \xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n)$ is unitary. The foregoing paragraph asserts that the operator is surjective. It remains to be shown that it is an isometry. 
To this end, let $f \in \mathcal{K}_2.$ Then $$\begin{array}{cllllllll} \| \xi_0 \telwe \xi_1\telwe f\|_{L^2(\Tt,\we^3\Cc^n)}^2 &= \langle \xi_0 \telwe \xi_1\telwe f, \xi_0 \telwe \xi_1\telwe f \rangle_{L^2(\Tt,\we^3\Cc^n)} \vspace{2ex} \\ &= \displaystyle \frac{1}{2\pi} \int_0^{2\pi} \langle \xi_0(\eiu) \telwe \xi_1(\eiu) \telwe f(\eiu), \xi_0(\eiu) \telwe \xi_1(\eiu) \telwe f(\eiu) \rangle_{\we^3\Cc^n}\; d\theta.\end{array}$$ By Proposition \ref{we}, the latter integral is equal to $$ \displaystyle \frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \langle \xi_0 (\eiu), \xi_0 (\eiu) \rangle_{\Cc^n} & \langle \xi_0(\eiu) , \xi_1 (\eiu) \rangle_{\Cc^n} &\langle \xi_0(\eiu) , f(\eiu) \rangle_{\Cc^n} \\ \langle \xi_1(\eiu) , \xi_0 (\eiu) \rangle_{\Cc^n} & \langle \xi_1(\eiu) , \xi_1 (\eiu) \rangle_{\Cc^n} & \langle \xi_1(\eiu) , f (\eiu) \rangle_{\Cc^n} \\ \langle f(\eiu) , \xi_0 (\eiu) \rangle_{\Cc^n} & \langle f(\eiu) , \xi_1(\eiu) \rangle_{\Cc^n} & \langle f(\eiu) , f(\eiu) \rangle_{\Cc^n} \end{pmatrix}\; d\theta. $$ \noindent Note that, by Proposition \ref{onxi}, $ \{\xi_0(\eiu), \xi_1(\eiu)\} $ is an orthonormal set for almost all $\eiu$ on $\Tt.$ Moreover, since $\mathcal{K}_2= \bar{ \alpha}_{0} \bar{\alpha}_1 H^2(\Dd,\Cc^{n-2}),$ we have $f=\bar{\alpha}_0 \bar{\alpha}_1 \varphi$ for some $\varphi \in H^2(\Dd,\Cc^{n-2}).$ Hence $$\begin{array}{lll}\langle \xi_0 (\eiu), f(\eiu) \rangle_{\Cc^n} &= \langle \xi_0(\eiu), \bar{ \alpha}_{0}(\eiu)\bar{\alpha}_1(\eiu) \varphi(\eiu)\rangle_{\Cc^n}\vspace{2ex}\\ &= \langle \alpha_0^T(\eiu) \xi_0(\eiu) , \bar{\alpha}_1(\eiu)\varphi(\eiu) \rangle_{\Cc^{n-1}}\vspace{2ex}\\&=0\end{array} $$almost everywhere on $\Tt,$ since $ V_0$ is unitary-valued.
Similarly, since $\tilde{V}_1$ is unitary-valued, we deduce that $$\langle \xi_1(\eiu), f(\eiu) \rangle_{\Cc^n}= \langle \alpha_1^T(\eiu)\alpha_0^T(\eiu) \xi_1(\eiu), \varphi(\eiu)\rangle_{\Cc^{n-2}}= 0 $$almost everywhere on $\Tt.$ Therefore $$ \| \xi_0 \telwe \xi_1 \telwe f\|_{L^2(\Tt,\we^3\Cc^n)}^2 =\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} 1 & 0 &0 \\ 0 & 1 & 0\\ 0 & 0 & \|f(\eiu)\|_{\Cc^n}^2 \end{pmatrix}\; d\theta= \|f\|_{L^2(\Tt,\Cc^n)}^2,$$that is, $(\xi_0 \telwe \xi_1 \telwe\cdot)\colon \mathcal{K}_2 \to \xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n)$ is an isometry. Being a surjective isometry, the operator $(\xi_0 \telwe \xi_1 \telwe \cdot)\colon \mathcal{K}_2 \to \xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n)$ is unitary. \end{proof} \begin{proposition}\label{eta0eta1b0b1} Let $\eta_0, \eta_1$ be defined by equations \eqref{xi0eta0} and \eqref{311} respectively, and let $\beta_0, \beta_1$ be inner, co-outer, quasi-continuous functions of types $m \times (m-1)$ and $(m-1) \times (m-2)$ respectively, such that the functions $$W_0^T= \begin{pmatrix} \eta_0 & \bar{\beta}_0 \end{pmatrix}, \quad \tilde{W}_1^T = \begin{pmatrix} \beta_0^T \eta_1 & \bar{\beta}_1 \end{pmatrix} $$ are unitary-valued. Let $$W_1^T = \begin{pmatrix} 1 & 0\\0& \tilde{W}_1^T \end{pmatrix} $$ and let \begin{equation}\label{L2} \mathcal{L}_2 = W_0^* W_1^* \begin{pmatrix} 0_{2\times 1}\\H^2(\Dd,\Cc^{m-2})^\perp \end{pmatrix}. \end{equation} Then \begin{equation} \label{L2=H2} \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe \mathcal{L}_2 = \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe H^2(\Dd,\Cc^{m})^\perp, \end{equation} and the operator $(\bar{\eta}_0 \telwe \bar{\eta}_1 \telwe \cdot)\colon \mathcal{L}_2 \to \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe H^2(\Dd,\Cc^m)^\perp$ is unitary. \end{proposition} \begin{proof} First let us prove that $$\bar{ \eta}_0 \telwe \bar{ \eta}_1\telwe \mathcal{L}_{2} = \bar{ \eta}_0 \telwe \bar{ \eta}_1 \telwe H^2(\Dd,\Cc^m)^\perp.$$ Let $B_1 = \beta_0 \beta_1$.
By equations \eqref{L2} and \eqref{W1W0*}, \begin{equation}\label{L2B} \mathcal{L}_{2} = B_1 H^2(\Dd,\Cc^{m-2})^\perp. \end{equation} By Lemmas \ref{beta0*h2} and \ref{beta1h2}, \begin{equation}\label{betaH2-L2} \begin{array}{lll} H^2(\Dd,\Cc^{m-2})^\perp &= \beta_1^*H^2(\Dd,\Cc^{m-1})^\perp\\ & = \beta_1^*\beta_0^*H^2(\Dd,\Cc^{m})^\perp\\ & = B_1^*H^2(\Dd,\Cc^m)^\perp. \end{array} \end{equation} By equations \eqref{L2B} and \eqref{betaH2-L2}, \begin{equation}\label{L2-2} \mathcal{L}_{2} = B_1 H^2(\Dd,\Cc^{m-2})^\perp = B_1 B_1^*H^2(\Dd,\Cc^m)^\perp. \end{equation} By Lemma \ref{eta=B1B1*}, \begin{equation}\label{BB_1*-L} B_1 B_1^* = I_m - \sum\limits_{i=0}^1 \bita_i \eta_i^T . \end{equation} Thus \begin{equation}\label{L2-eta} \mathcal{L}_{2} = (I_m - \sum\limits_{i=0}^1 \bita_i \eta_i^T)H^2(\Dd,\Cc^m)^\perp. \end{equation} By Proposition \ref{onxi}, $\{\bar{\eta}_i(z)\}_{i=0}^1$ is an orthonormal set in $\Cc^m$ for almost every $z \in \Tt.$ Therefore, by equations \eqref{L2-2} and \eqref{L2-eta}, \begin{equation}\label{L2H2} \begin{array}{lll} \bar{ \eta}_0 \telwe \bar{ \eta}_1 \telwe \mathcal{L}_{2} &= \bar{ \eta}_0 \telwe \bar{ \eta}_1 \telwe (I_m - \sum\limits_{i=0}^1 \bita_i \eta_i^T)H^2(\Dd,\Cc^m)^\perp\\ &= \bar{ \eta}_0 \telwe \bar{ \eta}_1 \telwe H^2(\Dd,\Cc^m)^\perp. \end{array} \end{equation} To complete the proof, let us show that the operator $$(\bita_0 \telwe \bita_1 \telwe \cdot)\colon \mathcal{L}_2 \to \bita_0 \telwe \bita_1 \telwe H^2(\Dd,\Cc^{m})^\perp$$ is unitary. Observe that the foregoing paragraph asserts the operator is surjective. Hence it suffices to prove that it is an isometry.
To this end, let $\upsilon \in \mathcal{L}_2.$ Then $$\| \bita_0 \telwe \bita_1 \telwe \upsilon \|_{L^2(\Tt,\we^3\Cc^m)}^2 = \langle \bita_0 \telwe \bita_1 \telwe \upsilon, \bita_0 \telwe \bita_1 \telwe \upsilon \rangle_{L^2(\Tt,\we^3\Cc^m)} ,$$ and, by Proposition \ref{we}, $$ \begin{array}{ll}&\langle \bita_0 \telwe \bita_1 \telwe \upsilon, \bita_0 \telwe \bita_1 \telwe \upsilon \rangle_{L^2(\Tt,\we^3\Cc^m)}\vspace{2ex}\\ &= \displaystyle \frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \langle \bita_0(\eiu) , \bita_0(\eiu)\rangle_{\Cc^m} &\langle\bita_0(\eiu),\bita_1 (\eiu)\rangle_{\Cc^m}&\langle \bita_0(\eiu), \upsilon(\eiu) \rangle_{\Cc^m}\\ \langle\bita_1(\eiu), \bita_0(\eiu)\rangle_{\Cc^m} &\langle \bita_1(\eiu) , \bita_1(\eiu)\rangle_{\Cc^m} &\langle\bita_1(\eiu) , \upsilon(\eiu)\rangle_{\Cc^m}\\ \langle\upsilon(\eiu) , \bita_0(\eiu)\rangle_{\Cc^m} & \langle\upsilon(\eiu), \bita_1(\eiu)\rangle_{\Cc^m} & \langle \upsilon (\eiu) , \upsilon(\eiu)\rangle_{\Cc^m} \end{pmatrix}\;d\theta.\end{array}$$ Notice that, by Proposition \ref{onxi}, $\{\bita_0(\eiu),\bita_1(\eiu) \}$ is an orthonormal set almost everywhere on $\Tt.$ Further, since $\mathcal{L}_2 = \beta_0 \beta_1 H^2(\Dd,\Cc^{m-2})^\perp,$ we have $\upsilon = \beta_0 \beta_1 \varphi$ for some $\varphi\in H^2(\Dd,\Cc^{m-2})^\perp.$ Hence $$\begin{array}{lll}\langle \bita_0 (\eiu),\upsilon(\eiu)\rangle_{\Cc^m} &= \langle \bita_0(\eiu), \beta_0 (\eiu) \beta_1(\eiu)\varphi(\eiu)\rangle_{\Cc^m}\vspace{2ex}\\ &= \langle \beta_0^* (\eiu)\bita_0(\eiu) , \beta_1(\eiu)\varphi(\eiu)\rangle_{\Cc^{m-1}} = 0, \end{array}$$since $W_0^T$ is unitary-valued almost everywhere on $\Tt.$ Similarly, since, by Proposition \ref{tildew1v1}, $\tilde{W}_1^T$ is unitary-valued almost everywhere on $\Tt,$ we obtain $$ \langle \bita_1(\eiu), \upsilon(\eiu)\rangle_{\Cc^m} = \langle \beta_1^*(\eiu) \beta_0^*(\eiu) \bita_1(\eiu), \varphi(\eiu)\rangle_{\Cc^{m-2}}=0.$$ Therefore $$ \| \bita_0 \telwe \bita_1 \telwe \upsilon \|_{L^2(\Tt,\we^3\Cc^m)}^2 =
\displaystyle \frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} 1 &0&0\\ 0 &1 &0\\ 0& 0 & \| \upsilon (\eiu) \|_{\Cc^m}^2 \end{pmatrix}\;d\theta = \|\upsilon\|^2_{L^2(\Tt,\Cc^m)},$$ that is, the operator $(\bita_0 \telwe \bita_1 \telwe \cdot)\colon \mathcal{L}_2 \to \bita_0 \telwe \bita_1 \telwe H^2(\Dd,\Cc^{m})^\perp$ is an isometry. Thus the operator is unitary.\end{proof} \begin{proposition}\label{l2perp} Let $\eta_0, \eta_1$ be defined by equations \eqref{xi0eta0} and \eqref{311} respectively and let $\beta_0, \beta_1$ be inner, co-outer, quasi-continuous functions of types $m \times (m-1)$ and \\ $(m-1) \times (m-2)$ respectively, such that the functions $$W_0^T= \begin{pmatrix} \eta_0 & \bar{\beta}_0 \end{pmatrix}, \quad \tilde{W}_1^T = \begin{pmatrix} \beta_0^T \eta_1 & \bar{\beta}_1 \end{pmatrix} $$ are unitary-valued. Let $$\mathcal{L}_2 = W_0^* \begin{pmatrix} 1 & 0 \\0& \tilde{W}_1^* \end{pmatrix} \begin{pmatrix} 0_{2\times 1} \\ H^2(\Dd,\Cc^{m-2})^\perp \end{pmatrix}.$$ Then $$ \mathcal{L}_2^\perp = \{ f \in L^2(\Tt,\Cc^m) : \beta_1^* \beta_0^* f \in H^2(\Dd,\Cc^{m-2})\}.$$ \end{proposition} \begin{proof} Clearly $\mathcal{L}_2 = \beta_0 \beta_1 H^2(\Dd,\Cc^{m-2})^\perp.$ The general element of $\beta_0 \beta_1 H^2(\Dd,\Cc^{m-2})^\perp$ is $\beta_0 \beta_1 \bar{z} \bar{g}$ with $ g \in H^2 (\Dd,\Cc^{m-2})$. 
A function $f \in L^2(\Tt,\Cc^m)$ belongs to $\mathcal{L}_2^\perp$ if and only if $$\langle f, \beta_0 \beta_1 \bar{z} \bar{g} \rangle_{L^2(\Tt,\Cc^m)} =0 \quad \text{for all} \quad g\in H^2(\Dd,\Cc^{m-2}) $$if and only if $$\displaystyle \frac{1}{2\pi} \int_0^{2\pi} \langle f(\eiu), \beta_0(\eiu) \beta_1(\eiu) e^{-i\theta} \bar{g}(\eiu) \rangle_{\Cc^m}d\theta =0 \quad \text{for all} \quad g\in H^2(\Dd,\Cc^{m-2}) $$ if and only if $$ \displaystyle \frac{1}{2\pi} \int_0^{2\pi} \langle \beta_1^*(\eiu)\beta_0^*(\eiu)f(\eiu), e^{-i\theta} \bar{g}(\eiu) \rangle_{\Cc^{m-2}}d\theta =0 \quad \text{for all} \quad g\in H^2(\Dd,\Cc^{m-2}),$$which in turn is equivalent to the assertion that $\beta_1^* \beta_0^* f $ is orthogonal to $H^2(\Dd,\Cc^{m-2})^\perp$ in $L^2(\Tt,\Cc^{m-2}),$ which holds if and only if $\beta_1^* \beta_0^* f $ belongs to $H^2(\Dd,\Cc^{m-2}).$ Thus $$ \mathcal{L}_2^\perp = \{ f \in L^2(\Tt,\Cc^m) : \beta_1^* \beta_0^* f \in H^2(\Dd,\Cc^{m-2})\}$$as required. \end{proof} \index{$\tilde{V}_2$} \index{$\tilde{W}_2$} \index{$\mathcal{E}_2$} \begin{theorem}\label{T2compactt} Let $m,n$ be positive integers such that $\min(m,n)\geq2.$ Let $G$ be in $H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn).$ Let $(\xi_0 \telwe v_1,\bar{\eta}_0\telwe w_1)$ be a Schmidt pair for the operator $T_1,$ as given in equation \eqref{T_0}, corresponding to $t_1 = \|T_1\|\neq 0,$ let $h_1 \in H^2(\Dd,\Cc)$ be the scalar outer factor of $\xi_0 \telwe v_1,$ let $$x_1 = (I_{n} - \xi_0 \xi_0^*)v_1, \quad y_1 = (I_m - \bar{\eta}_0 \eta_0^T)w_1, $$ and let $$\xi_1 = \frac{x_1}{h_1} , \quad \bar{\eta}_1 = \frac{zy_1}{\bar{h}_1}. 
$$ \noindent Let $$V_0=\big(\begin{matrix}\xi_0 & \balpha_0\end{matrix}\big),\quad W_0^T=\big(\begin{matrix} \eta_0 & \bar{\beta}_0 \end{matrix} \big)$$ be given by equations \eqref{V0W0}, and let $$\tilde{V}_1=\big(\begin{matrix} \alpha_{0}^T\xi_1 & \balpha_1 \end{matrix}\big), \quad \tilde{W}_1^T=\big( \begin{matrix} \beta_0^T\eta_1 & \bar{\beta}_1 \end{matrix}\big)$$ be given by equations \eqref{V1} and \eqref{W1} respectively. Let $$X_2 = \xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n), \quad Y_2 = \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe H^2(\Dd,\Cc^m)^\perp ,$$ let \begin{equation}\label{k2l2} \mathcal{K}_2 = V_0 \begin{pmatrix} 1 & 0 \\ 0& \tilde{V}_1\end{pmatrix} \begin{pmatrix} 0_{2\times 1} \\ H^2(\Dd,\Cc^{n-2})\end{pmatrix}, \quad \mathcal{L}_2 = W_0^* \begin{pmatrix} 1 & 0 \\ 0& \tilde{W}_1^*\end{pmatrix} \begin{pmatrix} 0_{2\times 1} \\ H^2(\Dd,\Cc^{m-2})^\perp \end{pmatrix}. \end{equation} Consider the operator $T_2 \colon X_2 \to Y_2$ given by \begin{equation}\label{TT2} T_2 (\xi_0 \telwe \xi_1 \telwe x ) = P_{Y_2} (\bar{\eta}_0 \telwe \bar{\eta}_1 \telwe (G-Q_2) x ),\end{equation} where $Q_2 \in H^\infty(\Dd,\CCmn)$ satisfies \begin{equation} \label{G_Q2xy} (G-Q_2) x_i = t_i y_i,\quad (G-Q_2)^*y_i = t_ix_i \quad \text{for}\; i=0,1. \end{equation} Let the operator $\Gamma_2 \colon \mathcal{K}_2 \to \mathcal{L}_2$ be given by $\Gamma_2 = P_{\mathcal{L}_2} M_{G-Q_2}|_{\mathcal{K}_2}.$ Then \begin{itemize} \item[{\rm (i)}] The maps $$ M_{\bar{\alpha}_0\bar{\alpha}_1} \colon H^2(\Dd,\Cc^{n-2}) \to \mathcal{K}_2 \colon x \mapsto \bar{\alpha_0}\bar{\alpha}_1 x, $$ and $$M_{\beta_0\beta_1}: H^2(\Dd,\Cc^{m-2})^\perp \to \mathcal{L}_2: y \mapsto \beta_0 \beta_1 y $$ are unitaries. \item[{\rm (ii)}] The maps $(\xi_0\telwe \xi_1\telwe \cdot)\colon\mathcal{K}_2\to X_2,$ $(\bita_0\telwe \bita_1\telwe\cdot)\colon\mathcal{L}_2\to Y_2 $ are unitaries. 
\item[{\rm (iii)}] The following diagram commutes \begin{equation}\label{commdiagrt2} \begin{array}{clllll} H^2(\Dd,\Cc^{n-2}) &\xrightarrow{M_{\bar{\alpha_0}\bar{\alpha}_1}} & \mathcal{K}_2 &\xrightarrow{\xi_0 \telwe \xi_1 \telwe \cdot}& \xi_0 \telwe \xi_1 \telwe H^2 (\Dd, \Cc^n)=X_2\\ \Big\downarrow\rlap{$\scriptstyle H_{F_2} $} & ~ &\Big\downarrow\rlap{$\scriptstyle \Gamma_2$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_2$} \\ H^2(\Dd,\Cc^{m-2})^\perp &\xrightarrow{M_{\beta_0 \beta_1} }& \mathcal{L}_2 &\xrightarrow{\bar{\eta}_0 \telwe \bar{\eta}_1\telwe \cdot } & \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe H^2 (\Dd, \Cc^m)^\perp =Y_2, \end{array}\end{equation} where $F_2 \in H^\infty(\Dd,\Cc^{(m-2)\times(n-2)})+ C(\Tt,\Cc^{(m-2)\times(n-2)})$ is the function defined in Proposition \ref{tildew1v1}. \item [{\rm (iv)}] $T_2$ is a compact operator. \item[{\rm (v)}] $\|T_2\|=\|\Gamma_2\|=\|H_{F_2}\|=t_2.$ \end{itemize} \end{theorem} \begin{proof} {\rm (i)} follows from Lemma \ref{3.1constr}. {\rm (ii)} follows from Propositions \ref{xi12telweunit} and \ref{eta0eta1b0b1}. {\rm (iii)} By Proposition \ref{Twell}, $T_2$ is well-defined and is independent of the choice of $Q_2 \in H^\infty(\Dd,\CCmn)$ satisfying equations \eqref{G_Q2xy}. We can choose $Q_2$ which minimises \[ (s_0^\infty(G-Q),s_1^\infty(G-Q)), \] and therefore satisfies equations \eqref{G_Q2xy}. By Lemma \ref{3.2constr} and Theorem \ref{1.7}, the left hand side of diagram \eqref{commdiagrt2} commutes. Let us show the right hand side also commutes. A typical element of $\mathcal{K}_2$ is of the form $\bar{\alpha}_0 \bar{\alpha}_1x$ where $x \in H^2(\Dd,\Cc^{n-2}).$ Then, by equation (\ref{TT2}), $$\begin{array}{cllll} T_2 (\xi_0 \telwe \xi_1 \telwe \bar{\alpha}_0 \bar{\alpha}_1x)= P_{Y_2} \left( \bar{ \eta}_{0} \telwe \bar{ \eta}_{1} \telwe (G-Q_2) \bar{\alpha}_0 \bar{\alpha}_1 x \right) . 
\end{array}$$ By Proposition \ref{tildew1v1}, every $Q_2\in H^\infty(\Dd,\CCmn)$ which minimises $(s_0^\infty(G-Q),s_1^\infty(G-Q))$ satisfies the following equation (see equation \eqref{g-qv0v1w0w1}), \begin{equation}\label{g-Q2}(G-Q_2) V_0 \begin{pmatrix} 1 & 0 \\ 0 & \tilde{V}_1 \end{pmatrix} \begin{pmatrix} 0 \\ 0 \\ x \end{pmatrix} = W_0^* \begin{pmatrix} 1 & 0 \\ 0 & \tilde{W}_1^* \end{pmatrix}\begin{pmatrix} 0 \\ 0 \\ F_2 x \end{pmatrix} , \end{equation} for some $F_2 \in H^\infty(\Dd,\Cc^{(m-2)\times(n-2)})+ C(\Tt,\Cc^{(m-2)\times(n-2)})$. This implies that \begin{equation}\label{comm2-0} (G-Q_2)\bar{\alpha_0}\bar{\alpha}_1 x = \beta_0 \beta_1 F_2 x, \end{equation} for $x \in H^2(\Dd,\Cc^{n-2}).$ Hence \begin{equation}\label{comm2-1} T_2 (\xi_0 \telwe \xi_1 \telwe \bar{\alpha}_0 \bar{\alpha}_1 x) = P_{Y_2} ( \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe \beta_0 \beta_1 F_2 x ). \end{equation} Furthermore, $$\begin{array}{clll} ( \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe \cdot ) \Gamma_2 (\bar{\alpha}_0 \bar{\alpha}_1 x) &= \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe P_{\mathcal{L}_2} [ (G-Q_2) \bar{\alpha}_0 \bar{\alpha}_1 x]. \end{array} $$ Hence, by equation \eqref{comm2-0}, \begin{equation}\label{comm2-2} ( \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe \cdot ) \Gamma_2 \left(\bar{\alpha}_0 \bar{\alpha}_1 x \right) = \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe P_{\mathcal{L}_2}( \beta_0 \beta_1 F_2 x ). \end{equation} To show commutativity of the right hand square in the diagram (\ref{commdiagrt2}), we need to prove that, for every $x \in H^2(\Dd,\Cc^{n-2})$, \begin{equation}\label{comm2-3} T_2 (\xi_0 \telwe \xi_1 \telwe \bar{\alpha}_0 \bar{\alpha}_1 x) = ( \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe \cdot ) \Gamma_2 (\bar{\alpha}_0 \bar{\alpha}_1 x).
\end{equation} By equations \eqref{comm2-1} and \eqref{comm2-2}, it is equivalent to show that \begin{equation}\label{comm2-4} P_{Y_2} ( \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe \beta_0 \beta_1 F_2 x ) = \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe P_{\mathcal{L}_2}( \beta_0 \beta_1 F_2 x ). \end{equation} Therefore, we need to show that $$ \bar{ \eta}_{0} \telwe \bar{ \eta}_{1} \telwe P_{\mathcal{L}_2}(\beta_0 \beta_1 F_2 x)\in Y_2 $$ and that $$ \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe \beta_0 \beta_1 F_2 x - \bar{ \eta}_{0} \telwe \bar{ \eta}_{1} \telwe P_{\mathcal{L}_2}(\beta_0 \beta_1 F_2 x) $$ is orthogonal to $Y_2.$ By Proposition \ref{eta0eta1b0b1}, $\bar{ \eta}_{0} \telwe \bar{ \eta}_{1} \telwe P_{\mathcal{L}_2}(\beta_0 \beta_1 F_2 x)$ is indeed an element of $Y_2.$ Furthermore, $$ \begin{array}{lll}\bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe \beta_0 \beta_1 F_2 x - \bar{ \eta}_{0} \telwe \bar{ \eta}_{1} \telwe P_{\mathcal{L}_2}(\beta_0 \beta_1 F_2 x) &=\bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe [\beta_0 \beta_1 F_2 x - P_{\mathcal{L}_2}(\beta_0 \beta_1 F_2 x)] \vspace{2ex} \\ &= \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe P_{\mathcal{L}_2^\perp}(\beta_0 \beta_1 F_2 x) \end{array}.$$ Let us show that $ \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe P_{\mathcal{L}_2^\perp}(\beta_0 \beta_1 F_2 x)$ is orthogonal to $Y_2$. It is so if and only if \begin{equation}\label{orthtoy2}\left\langle \bar{\eta}_0 \telwe \bar{ \eta}_1 \telwe P_{\mathcal{L}_2^\perp}(\beta_0 \beta_1 F_2 x) , \bar{ \eta}_0 \telwe \bar{ \eta}_{1} \telwe g \right\rangle_{L^2(\Tt,\we^3\Cc^m)} = 0\quad \text{for every}\; g \in H^2(\Dd,\Cc^m)^\perp.\end{equation} Let $\Phi = P_{\mathcal{L}_2^\perp}(\beta_0 \beta_1 F_2 x) \in L^2(\Tt,\Cc^m)$. By Proposition \ref{l2perp}, \begin{equation}\label{phil2} \beta_1^* \beta_0^* \Phi \in H^2(\Dd,\Cc^{m-2}). 
\end{equation} Then, by Proposition \ref{we}, assertion (\ref{orthtoy2}) is equivalent to the following assertion $$\displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \langle \bar{ \eta}_{0} (\eiu), \bar{ \eta}_{0} (\eiu) \rangle_{\Cc^m} & \langle \bar{ \eta}_{0} (\eiu), \bar{ \eta}_{1} (\eiu)\rangle_{\Cc^m} & \langle \bar{ \eta}_{0} (\eiu) , g(\eiu) \rangle_{\Cc^m}\\ \langle \bar{ \eta}_{1} (\eiu) , \bar{ \eta}_{0} (\eiu) \rangle_{\Cc^m} & \langle \bar{ \eta}_{1} (\eiu), \bar{ \eta}_{1} (\eiu)\rangle_{\Cc^m} & \langle \bar{ \eta}_{1} (\eiu), g(\eiu) \rangle_{\Cc^m}\\ \langle \Phi(\eiu) , \bar{ \eta}_{0} (\eiu) \rangle_{\Cc^m} &\langle \Phi(\eiu), \bar{ \eta}_{1} (\eiu) \rangle_{\Cc^m}& \langle \Phi(\eiu), g (\eiu) \rangle_{\Cc^m} \end{pmatrix} \; d\theta = 0 $$ for every $g \in H^2(\Dd,\Cc^m)^\perp,$ which in turn, by Proposition \ref{onxi}, is equivalent to the assertion $$ \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} 1 & 0 & \langle \bar{ \eta}_{0} (\eiu) , g(\eiu) \rangle_{\Cc^m}\\ 0 & 1 & \langle \bar{ \eta}_{1} (\eiu), g(\eiu) \rangle_{\Cc^m}\\ \langle \Phi(\eiu) , \bar{ \eta}_{0} (\eiu) \rangle_{\Cc^m}&\langle \Phi(\eiu), \bar{ \eta}_{1} (\eiu) \rangle_{\Cc^m}& \langle \Phi(\eiu), g (\eiu) \rangle_{\Cc^m} \end{pmatrix} \; d\theta = 0$$ for every $g \in H^2(\Dd,\Cc^m)^\perp.$ The latter statement is equivalent to the assertion $$\begin{array}{clll}\displaystyle\frac{1}{2\pi} \int_0^{2\pi} \langle \Phi(\eiu), g (\eiu) \rangle_{\Cc^m} &- \langle \Phi(\eiu), \bar{ \eta}_{1} (\eiu) \rangle_{\Cc^m} \langle \bar{ \eta}_{1} (\eiu), g(\eiu) \rangle_{\Cc^m} \\&- \langle \Phi(\eiu) , \bar{ \eta}_{0} (\eiu) \rangle_{\Cc^m} \langle \bar{ \eta}_{0} (\eiu) , g(\eiu) \rangle_{\Cc^m} \; d\theta =0 \end{array}$$ for every $g \in H^2(\Dd,\Cc^m)^\perp,$ which in turn is equivalent to the statement that $$\begin{array}{lll}\displaystyle\frac{1}{2\pi} \int_0^{2\pi} g^*(\eiu)\Phi(\eiu) &- g^*(\eiu) \bar{ \eta}_{0}(\eiu) \eta_0^T(\eiu) \Phi(\eiu)\\
&-g^*(\eiu) \bar{ \eta}_1(\eiu) \eta_1^T(\eiu) \Phi(\eiu) \; d\theta =0 \end{array}$$ for every $g \in H^2(\Dd,\Cc^m)^\perp.$ Equivalently $$\displaystyle\frac{1}{2\pi} \int_0^{2\pi} g^*(\eiu) \bigg(I_{m} - \bar{ \eta}_0 (\eiu)\eta_0^T(\eiu) -\bar{\eta}_1 (\eiu)\eta_1^T(\eiu) \bigg)\Phi(\eiu)\;d\theta =0 $$ for every $g \in H^2(\Dd,\Cc^m)^\perp$ if and only if $$ \bigg(I_{m} - \bar{ \eta}_0 (\eiu)\eta_0^T(\eiu) -\bar{\eta}_1 (\eiu)\eta_1^T(\eiu) \bigg)\Phi(\eiu)$$ is orthogonal to $H^2(\Dd,\Cc^m)^\perp,$ which occurs if and only if $$\left(I_{m} - \bar{ \eta}_0 \eta_0^T - \bar{ \eta}_1 \eta_1^T\right)\Phi \in H^2(\Dd,\Cc^m).$$ By Lemma \ref{eta=B1B1*}, $$\left(I_{m} - \bar{ \eta}_0 \eta_0^T - \bar{ \eta}_1 \eta_1^T\right)\Phi = \beta_0 \beta_1 \beta_1^* \beta_0^*\Phi. $$ Recall that, by assertion (\ref{phil2}), $\beta_1^* \beta_0^*\Phi \in H^2(\Dd,\Cc^{m-2}),$ and so $$\beta_0 \beta_1 \beta_1^* \beta_0^*\Phi \in H^2(\Dd,\Cc^m).$$ Thus the right hand square in the diagram (\ref{commdiagrt2}) commutes, and so the diagram (\ref{commdiagrt2}) commutes. {\rm(iv)} By Proposition \ref{tildew1v1}, $$F_2 \in H^\infty(\Dd,\Cc^{(m-2)\times(n-2)})+ C(\Tt, \Cc^{(m-2)\times(n-2)}). $$ Thus, by Hartman's Theorem, the Hankel operator $H_{F_2}$ is compact. By (iii), $$(\bita_0 \telwe \bita_1 \telwe \cdot)\circ (M_{\beta_0 \beta_1} H_{F_2} M_{\alpha_1^T\alpha_0^T} ) \circ (\xi_0\telwe \xi_1 \telwe \cdot)^*=T_2 .$$ By (i) and (ii), the operators $ M_{\balpha_0\balpha_1},\;M_{\beta_0 \beta_1},$ $(\xi_0\telwe \xi_1 \telwe \cdot)$ and $(\bita_0 \telwe \bita_1 \telwe \cdot)$ are unitaries. Hence $T_2$ is a compact operator. {\rm (v)} Since diagram \eqref{commdiagrt2} commutes and the operators $ M_{\balpha_0\balpha_1},\;M_{\beta_0 \beta_1},$ $(\xi_0\telwe \xi_1 \telwe \cdot)$ and\linebreak $(\bita_0 \telwe \bita_1 \telwe \cdot)$ are unitaries, $$\|T_2\|=\|\Gamma_2\|=\|H_{F_2}\|=t_2.
$$ \end{proof} \begin{lemma}\label{coofschpairs2} In the notation of Theorem \ref{T2compactt}, let $v_2 \in H^2(\Dd,\Cc^n)$ and $w_2 \in H^2(\Dd,\Cc^m)^\perp$ be such that $(\xi_0 \telwe\xi_1\telwe v_2, \bar{\eta}_0 \telwe\bita_1\telwe w_2)$ is a Schmidt pair for the operator $T_2$ corresponding to $\|T_2\|.$ Then {\em (i)} there exist $x_2 \in \mathcal{K}_2$ and $y_2\in \mathcal{L}_2$ such that $(x_2,y_2) $ is a Schmidt pair for the operator $\Gamma_2$; {\em (ii)} for any $x_2 \in \mathcal{K}_2$ and $y_2\in \mathcal{L}_2$ such that $$ \xi_0 \telwe \xi_1 \telwe x_2= \xi_0 \telwe \xi_1 \telwe v_2,\quad \bar{ \eta}_0 \telwe \bita_1 \telwe y_2 = \bar{ \eta}_0 \telwe \bita_1\telwe w_2,$$ the pair $(x_2,y_2)$ is a Schmidt pair for $\Gamma_2$ corresponding to $\|\Gamma_2\|.$ \end{lemma} \begin{proof} {\rm (i)} By Theorem \ref{T2compactt}, the diagram (\ref{commdiagrt2}) commutes, $(\xi_0 \telwe \xi_1\telwe \cdot)$ is unitary from $\mathcal{K}_2$ to $X_2,$ and $(\bar{ \eta}_0\telwe \bita_1 \telwe \cdot)$ is unitary from $\mathcal{L}_2$ to $Y_2$ and $\|\Gamma_2\| =\|T_2\|=t_2.$ Moreover, by the commutativity of diagram (\ref{commdiagrt2}), the operator $\Gamma_2\colon \mathcal{K}_2\to \mathcal{L}_2$ is compact, hence there exist $x_2 \in \mathcal{K}_2,$ $y_2 \in \mathcal{L}_2$ such that $(x_2,y_2)$ is a Schmidt pair for $\Gamma_2$ corresponding to $\|\Gamma_2\|=t_2.$ {\rm (ii)} Suppose that $x_2\in \mathcal{K}_2,y_2\in \mathcal{L}_2$ satisfy \begin{equation}\label{xitel2} \xi_0 \telwe \xi_1 \telwe x_2= \xi_0 \telwe \xi_1 \telwe v_2\quad \text{and}\end{equation} \begin{equation}\label{eta0tel2} \bar{ \eta}_0 \telwe \bita_1 \telwe y_2 = \bar{ \eta}_0\telwe \bita_1 \telwe w_2. \end{equation} Let us show that $(x_2,y_2)$ is a Schmidt pair for $\Gamma_2,$ that is, $$\Gamma_2 x_2 = t_2y_2,\quad \Gamma_2^*y_2=t_2x_2. 
$$ Since diagram (\ref{commdiagrt2}) commutes, \begin{equation}\label{commt2gamma2} T_2 \circ (\xi_0\telwe \xi_1 \telwe \cdot )=(\bar{ \eta}_0 \telwe \bita_1 \telwe \cdot)\circ\Gamma_2 , \quad (\xi_0\telwe\xi_1\telwe \cdot )^*\circ T_2^* = \Gamma_2^* \circ (\bar{\eta}_0 \telwe\bita_1\telwe \cdot)^*. \end{equation} By hypothesis, \begin{equation}\label{hypt2} T_2 (\xi_0 \telwe \xi_1 \telwe v_2)= t_2 (\bar{ \eta}_0 \telwe \bita_1 \telwe w_2), \quad T_2^*(\bar{ \eta}_0 \telwe \bita_1\telwe w_2)= t_2 (\xi_0 \telwe\xi_1\telwe v_2). \end{equation} Thus, by equations \eqref{xitel2}, \eqref{eta0tel2} and \eqref{hypt2}, $$\begin{array}{clllll} \Gamma_2 x_2&= (\bar{\eta}_0 \telwe \bita_1 \telwe \cdot)^* T_2 (\xi_0\telwe \xi_1\telwe v_2) \vspace{2ex} \\ &= (\bar{\eta}_0 \telwe \bita_1 \telwe \cdot)^* t_2 (\bar{ \eta}_0 \telwe \bita_1 \telwe w_2) \vspace{2ex} \\ &= t_2 (\bar{\eta}_0 \telwe \bita_1 \telwe \cdot)^* (\bar{ \eta}_0 \telwe \bita_1 \telwe y_2).\end{array}$$ Hence $$\Gamma_2 x_2= t_2 (\bar{\eta}_0 \telwe \bita_1 \telwe \cdot)^* (\bar{ \eta}_0 \telwe \bita_1 \telwe \cdot)y_2= t_2 y_2.$$ \noindent By equation (\ref{xitel2}), $$x_2 = (\xi_0\telwe \xi_1 \telwe \cdot )^* (\xi_0\telwe \xi_1 \telwe v_2 ),$$ and, by equation (\ref{eta0tel2}), $$(\bar{\eta}_0 \telwe \bita_1 \telwe \cdot)^*(\bar{ \eta}_0 \telwe \bita_1 \telwe w_2)=y_2. $$ Thus $$\begin{array}{clll}\Gamma_2^* y_2 &= \Gamma_2^*(\bar{ \eta}_0 \telwe\bita_1 \telwe \cdot)^*(\bar{\eta}_0 \telwe \bita_1 \telwe w_2)\vspace{2ex} \\ &= (\xi_0 \telwe \xi_1 \telwe \cdot )^* T_2^* (\bar{ \eta}_0 \telwe \bita_1 \telwe w_2),\end{array}$$ the last equality following by the second equation of (\ref{commt2gamma2}). By equations \eqref{xitel2} and (\ref{hypt2}), we have $$ T_2^* (\bar{ \eta}_0 \telwe \bita_1 \telwe w_2) = t_2 (\xi_0\telwe \xi_1 \telwe v_2)= t_2(\xi_0\telwe \xi_1\telwe x_2),$$ and so, $$ \Gamma_2^* y_2 = t_2 x_2.$$Therefore $(x_2,y_2)$ is a Schmidt pair for $\Gamma_2$ corresponding to $t_2$. 
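In finite dimensions, the Schmidt-pair relations $\Gamma_2 x_2 = t_2y_2$ and $\Gamma_2^* y_2 = t_2 x_2$ are exactly the relations satisfied by a pair of singular vectors of a matrix. The following numerical sketch (purely illustrative; the matrix $A$ and all variable names are assumptions for the example, not objects from this argument) verifies them for the leading singular pair of a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(7)

# A random complex 4x3 matrix stands in for a compact operator.
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))

# SVD: A = U diag(s) Vh with s[0] >= s[1] >= ...
U, s, Vh = np.linalg.svd(A, full_matrices=False)

t = s[0]          # largest singular value, i.e. the operator norm of A
x = Vh[0].conj()  # right singular vector (first column of V)
y = U[:, 0]       # left singular vector

# The Schmidt-pair relations: A x = t y and A* y = t x.
assert np.allclose(A @ x, t * y)
assert np.allclose(A.conj().T @ y, t * x)

# The Schmidt vectors are normalised.
assert np.isclose(np.linalg.norm(x), 1.0)
assert np.isclose(np.linalg.norm(y), 1.0)
```

Nothing in the sketch depends on the particular matrix chosen; any singular pair of any matrix satisfies the same pair of relations, which is the finite-dimensional shadow of the argument above.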
\end{proof} \begin{lemma}\label{schfohf2} Suppose $(\xi_0 \telwe \xi_1\telwe v_2, \bita_0 \telwe \bita_1 \telwe w_2)$ is a Schmidt pair for $T_2$ corresponding to $t_2.$ Let $$x_2 = (I_{n} - \xi_0 \xi_0^*-\xi_1\xi_1^*)v_2,\quad y_2= (I_{m} - \bita_0\eta_0^T-\bita_1\eta_1^T)w_2,$$ and let $$\hx_2 = \alpha_1^T\alpha_0^T x_2,\quad \hy_2=\beta_1^*\beta_0^*y_2. $$Then the pair $(\hx_2,\hy_2)$ is a Schmidt pair for $H_{F_2}$ corresponding to $\|H_{F_2}\|=t_2.$ \end{lemma} \begin{proof} Let us first show that $\hat{x}_2\in H^2(\Dd,\Cc^{n-2})$ and $x_2\in \mathcal{K}_2.$ Recall that $V_0=\big( \begin{matrix} \xi_0 & \balpha_0 \end{matrix}\big)$ and $\tilde{V}_1=\big(\begin{matrix} \alpha_0^T\xi_1& \balpha_1 \end{matrix} \big)$ are unitary-valued, that is, $\alpha_0^T\xi_0=0,$ $\alpha_1^T\alpha_0^T\xi_1=0,$ \begin{equation}\label{vounit}I_{n}-\xi_0\xi_0^* =\balpha_0 \alpha_0^T, \end{equation}and \begin{equation}\label{tildv1unit}I_{n-1}-\alpha_0^T\xi_1\xi_1^* \balpha_0 = \balpha_1\alpha_1^T .\end{equation} Then \begin{align}\hx_2&=\alpha_1^T\alpha_0^T x_2\nonumber\vspace{2ex}\\&= \alpha_1^T\alpha_0^T (I_{n}-\xi_0\xi_0^*-\xi_1\xi_1^*)v_2\nonumber\vspace{2ex}\\ &=\alpha_1^T\alpha_0^T v_2 - \alpha_1^T\alpha_0^T\xi_0\xi_0^*v_2 - \alpha_1^T\alpha_0^T\xi_1\xi_1^*v_2 \nonumber\vspace{2ex}\\ &= \alpha_1^T\alpha_0^T v_2,\label{hx2} \end{align} which, by Lemmas \ref{a0h2} and \ref{a1h2}, implies that $\hx_2 \in H^2(\Dd,\Cc^{n-2}).$ Moreover, by Lemma \ref{xi=A1A1*}, we obtain $$\begin{array}{ll}\balpha_0\balpha_1\hx_2&=\balpha_0\balpha_1\alpha_1^T\alpha_0^T v_2\vspace{2ex}\\ &=(I_{n}-\xi_0\xi_0^*-\xi_1\xi_1^* )v_2 =x_2. 
\end{array}$$ Hence \begin{equation}\label{x2}x_2 =\balpha_0\balpha_1\alpha_1^T\alpha_0^Tv_2=\balpha_0\balpha_1\hx_2,\end{equation}and thus $x_2\in \mathcal{K}_2.$ Next, we shall show that $\hy_2\in H^2(\Dd,\Cc^{m-2})^\perp$ and $y_2 \in \mathcal{L}_2.$ Notice that since $\tilde{W}_1^T=\big(\begin{matrix} \beta_0^T \eta_1 & \bar{\beta}_1\end{matrix} \big)$ and $W_0^T=\big( \begin{matrix} \eta_0 & \bar{\beta}_0 \end{matrix}\big)$ are unitary-valued, $\beta_0^*\bita_0 =0, $ $\beta_1^* \beta_0^*\bita_1=0,$ \begin{equation}\label{tw1} \big(I_{m-1} - \beta_0^* \bita_1 \eta_1^T \beta_0 \big)= \beta_1 \beta_1^* \end{equation}and \begin{equation}\label{wo} \big(I_m-\bita_0 \eta_0^T\big)=\beta_0 \beta_0^*. \end{equation} We have \begin{align}\hy_2 &= \beta_1^*\beta_0^*y_2 \nonumber\vspace{2ex}\\ &=\beta_1^*\beta_0^*(I_{m} - \bita_0\eta_0^T-\bita_1\eta_1^T)w_2\nonumber\vspace{2ex}\\ &=\beta_1^*\beta_0^* w_2 - \beta_1^*\beta_0^*\bita_0\eta_0^Tw_2- \beta_1^*\beta_0^*\bita_1\eta_1^Tw_2\nonumber\vspace{2ex}\\ &= \beta_1^*\beta_0^* w_2\label{betaw2}, \end{align} which, by Propositions \ref{beta0*h2} and \ref{beta1h2}, implies that $\hy_2 \in H^2(\Dd,\Cc^{m-2})^\perp.$ By Lemma \ref{eta=B1B1*}, we have $$\begin{array}{lllllll}\beta_0 \beta_1 \hy_2&= \beta_0 \beta_1 \beta_1^*\beta_0^* w_2 \vspace{2ex}\\ &= (I_m -\bita_0 \eta_0^T- \bita_1 \eta_1^T)w_2 = y_2. \end{array}$$ Hence \begin{equation}\label{y2}y_2 = \beta_0 \beta_1 \beta_1^*\beta_0^* w_2 = \beta_0 \beta_1 \hy_2,\end{equation}and therefore $y_2\in \mathcal{L}_2.$ By Theorem \ref{T2compactt}, the maps $$M_{\bar{\alpha}_0\balpha_1} \colon H^2(\Dd,\Cc^{n-2}) \to \mathcal{K}_2,\quad M_{\beta_0\beta_1} \colon H^2(\Dd,\Cc^{m-2})^\perp \to \mathcal{L}_2,$$ are unitaries and \begin{equation}\label{hf2g2} H_{F_2} = M^*_{\beta_0\beta_1} \Gamma_2 M_{ \balpha_0\balpha_1}.\end{equation} We need to show that $$H_{F_2}\hx_2 = t_2 \hy_2,\quad H_{F_2}^*\hy_2 = t_2\hx_2.
$$By equations \eqref{hx2} and \eqref{x2}, \begin{equation}\label{x_2alphax_2} x_2= \balpha_0 \balpha_1 \alpha_1^T\alpha_0^Tx_2.\end{equation} Hence equation \eqref{hf2g2} yields \begin{align} H_{F_2}\hat{x}_2&= \beta_1^*\beta_0^* \Gamma_2 \balpha_0 \balpha_1 \hx_2=\beta_1^*\beta_0^* \Gamma_2 x_2.\label{h_f2.1}\end{align} By Proposition \ref{onxi}(ii), $$ \xi_0 \telwe \xi_1 \telwe x_2= \xi_0 \telwe \xi_1 \telwe v_2,\quad \bar{ \eta}_0 \telwe \bita_1 \telwe y_2 = \bar{ \eta}_0 \telwe \bita_1\telwe w_2.$$ Thus, by Lemma \ref{coofschpairs2}, $(x_2,y_2)$ is a Schmidt pair for the operator $\Gamma_2$ corresponding to $t_2= \|\Gamma_2\|$, that is, \begin{equation}\label{schmg2} \Gamma_2 x_2 =t_2y_2,\quad \Gamma_2^*y_2=t_2x_2. \end{equation} Thus equation \eqref{h_f2.1} yields $$H_{F_2}\hx_2= \beta_1^*\beta_0^* \Gamma_2 x_2 = \beta_1^*\beta_0^* t_2 y_2 = t_2\hy_2 $$ as required. Let us show that $H_{F_2}^*\hy_2 = t_2\hx_2.$ By equations \eqref{betaw2} and \eqref{y2}, \begin{equation}\label{y2betay2}y_2=\beta_0 \beta_1 \beta_1^* \beta_0^* y_2.\end{equation} By equation \eqref{hf2g2}, $$H_{F_2}^* = M_{ \balpha_0\balpha_1}^*\circ \Gamma_2^* \circ M_{\beta_0\beta_1} .$$ Hence \begin{align}\label{h_f2.4} H_{F_2}^*\hy_2= \alpha_1^T \alpha_0^T \Gamma_2^* \beta_0\beta_1 \hy_2 =\alpha_1^T \alpha_0^T \Gamma_2^*y_2,\end{align} and, by equations \eqref{schmg2} and \eqref{h_f2.4}, $$H_{F_2}^*\hy_2 = \alpha_1^T \alpha_0^T \Gamma_2^*y_2 = \alpha_1^T \alpha_0^T t_2 x_2 =t_2\hx_2 .$$ Therefore $(\hx_2,\hy_2)$ is a Schmidt pair for $H_{F_2}$ corresponding to $\|H_{F_2}\|=t_2.$ \end{proof} \begin{proposition}\label{x0wev2eta2wew2} Let $(\xi_0\telwe \xi_1\telwe v_2,\bita_0\telwe\bita_1 \telwe w_2)$ be a Schmidt pair for $T_2$ corresponding to $t_2$ for some $v_2\in H^2(\Dd,\Cc^n), w_2\in H^2(\Dd,\Cc^m)^\perp,$ let $h_2 \in H^2(\Dd,\Cc)$ be the scalar outer factor of $\xi_0\telwe \xi_1\telwe v_2$, let $$x_2 = (I_{n}- \xi_0 \xi_0^*-\xi_1\xi_1^*)v_2,\quad y_2=(I_{m} - \bar{\eta}_0
\eta_0^T-\bita_1 \eta_1^T)w_2,$$ and let \begin{equation}\label{hatx2}\hx_2=\alpha_1^T \alpha_0^T x_2\quad \text{and}\quad \hy_2=\beta_1^*\beta_0^*y_2.\end{equation} Then $$\|\hx_2 (z) \|_{\Cc^{n-2}} = \|\hy_2(z)\|_{\Cc^{m-2}} = |h_2(z)|,$$ $$\| x_2(z) \|_{\Cc^n} = \|y_2(z)\|_{\Cc^m} =|h_2(z)| $$ and $$\| \xi_0 (z) \we \xi_1(z) \we v_2(z) \|_{\we^3\Cc^n} = \| \bar{\eta}_0(z) \we \bita_1(z) \we w_2(z)\|_{\we^3\Cc^m} =|h_2(z)| $$ almost everywhere on $\Tt.$ \end{proposition} \begin{proof} By Lemma \ref{schfohf2}, $(\hx_2,\hy_2)$ is a Schmidt pair for $H_{F_2}$ corresponding to $\|H_{F_2}\|=t_2$ (see Theorem \ref{T2compactt} (v)). Hence $$H_{F_2}\hx_2 = t_2\hy_2 \quad \text{and}\quad H_{F_2}^* \hy_2 = t_2 \hx_2. $$ Then, by Theorem \ref{1.7}, $$\|\hy_2(z)\|_{\Cc^{m-2}}=\|\hx_2(z)\|_{\Cc^{n-2}} $$ almost everywhere on $\Tt$. Notice that, by equations \eqref{hatx2}, $$x_2 =\bar{\alpha}_0\balpha_1\hx_2, $$ and since $\bar{\alpha}_0(z),\balpha_1(z)$ are isometric for almost every $z\in \Tt,$ we obtain $$\|x_2(z)\|_{\Cc^n}=\|\hx_2(z)\|_{\Cc^{n-2}}. $$ \noindent Furthermore, by equations \eqref{hatx2}, $$y_2 =\beta_0 \beta_1 \hy_2, $$ and since $\beta_0(z),\beta_1(z)$ are isometries almost everywhere on $\Tt,$ we have $$\|y_2(z)\|_{\Cc^m}=\|\hy_2(z)\|_{\Cc^{m-2}} $$ almost everywhere on $\Tt.$ Combining the last three displayed equalities, we deduce \begin{equation}\label{x2isy2} \|x_2(z)\|_{\Cc^n}=\|\hx_2(z)\|_{\Cc^{n-2}}=\|\hy_2(z)\|_{\Cc^{m-2}}=\|y_2(z)\|_{\Cc^m} \end{equation} almost everywhere on $\Tt.$ By Proposition \ref{onxi}, $$\xi_0\telwe \xi_1 \telwe v_2 = \xi_0\telwe \xi_1 \telwe x_2, \quad \bita_0 \telwe \bita_1 \telwe w_2= \bita_0 \telwe \bita_1 \telwe y_2.$$ Hence, by Proposition \ref{weon}, $$\begin{array}{lll} &\|\xi_0(z)\we \xi_1(z) \we v_2(z)\|_{\we^3\Cc^n}=\|\xi_0(z)\we \xi_1(z)\we x_2(z)\|_{\we^3\Cc^n}\\ & = \| x_2(z) - \displaystyle\sum\limits_{i=0}^1 \langle x_2(z), \xi_i(z)\rangle \xi_i(z)\|_{\Cc^n}=\|x_2(z)\|_{\Cc^n} \end{array}$$ almost everywhere on $\Tt$. 
Furthermore $$\begin{array}{llll} &\| \bita_0(z) \we \bita_1(z) \we w_2(z)\|_{\we^3\Cc^m}=\| \bita_0(z) \we \bita_1(z)\we y_2(z) \|_{\we^3\Cc^m}\\ &=\| y_2(z) - \displaystyle\sum\limits_{i=0}^1 \langle y_2(z), \bita_i(z)\rangle \bita_i(z)\|_{\Cc^m} =\|y_2(z)\|_{\Cc^m} \end{array}$$ almost everywhere on $\Tt$. Thus, by equation \eqref{x2isy2}, $$\| \xi_0 (z) \we \xi_1(z)\we v_2(z) \|_{\we^3\Cc^n} = \| \bar{\eta}_0(z) \we\bita_1(z)\we w_2(z)\|_{\we^3\Cc^m}$$ almost everywhere on $\Tt.$ Recall that $h_2$ is the scalar outer factor of $\xi_0\telwe \xi_1\telwe v_2$. Hence $$\|\hx_2 (z) \|_{\Cc^{n-2}} = \|\hy_2(z)\|_{\Cc^{m-2}} = |h_2(z)|,$$ $$\| x_2(z) \|_{\Cc^n} = \|y_2(z)\|_{\Cc^m} =|h_2(z)| $$ and $$\| \xi_0 (z) \we \xi_1(z) \we v_2(z) \|_{\we^3\Cc^n} = \| \bar{\eta}_0(z) \we \bita_1(z) \we w_2(z)\|_{\we^3\Cc^m} =|h_2(z)| $$ almost everywhere on $\Tt.$ \end{proof} \begin{proposition}\label{epsilon2} Let $m,n$ be positive integers such that $\min(m,n)\geq2.$ Let \linebreak $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn).$ In line with the algorithm from Subsection \ref{Alg_statement}, let $Q_2 \in H^\infty(\Dd,\CCmn)$ satisfy \begin{equation}\label{Q2cond} \begin{array}{llll} (G-Q_2)x_0 &= t_0 y_0,\quad (G-Q_2)^*y_0&=t_0x_0, \\ (G-Q_2)x_1 &= t_1 y_1,\quad (G-Q_2)^*y_1&=t_1x_1. 
\end{array} \end{equation} Let the spaces $X_2 , Y_2$ be given by $$X_2 = \xi_0 \telwe \xi_1 \telwe H^2(\Dd,\Cc^n), \quad Y_2 = \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe H^2(\Dd,\Cc^m)^\perp ,$$ and consider the compact operator $T_2\colon X_2 \to Y_2$ given by $$T_2(\xi_0 \telwe \xi_1 \telwe x) = P_{Y_2} (\bar{\eta}_0 \telwe \bar{\eta}_1 \telwe (G-Q_2)x)$$ for all $x \in H^2(\Dd,\Cc^n).$ Let $(\xi_0 \telwe \xi_1\telwe v_2,\bar{\eta}_0\telwe \bar{\eta}_1 \telwe w_2)$ be a Schmidt pair for the operator $T_2$ corresponding to $t_2 = \|T_2\|,$ let $h_2 \in H^2(\Dd,\Cc)$ be the scalar outer factor of $ \xi_0 \telwe \xi_1\telwe v_2$, let $$x_2 = (I_{n} - \xi_0 \xi_0^*- \xi_1 \xi_1^*)v_2, \quad y_2=(I_{m}-\bar{\eta}_0\eta_0^T-\bar{\eta}_1\eta_1^T)w_2 $$ and let $$\xi_2 =\frac{{x}_2}{h_2}, \quad \eta_2 =\frac{\bar{z}\bar{y}_2}{h_2}. $$ Then there exist unitary-valued functions $\tilde{V}_2, \tilde{W}_2$ of types $(n-2)\times (n-2)$ and $(m-2) \times (m-2)$ respectively of the form $$\tilde{V}_2 =\begin{pmatrix} \alpha_1^T \alpha_0^T \xi_2 & \bar{\alpha}_2 \end{pmatrix}, \quad \tilde{W}_2^T = \begin{pmatrix} {\beta}_1^T \beta_0^T \eta_2 & \bar{\beta}_2 \end{pmatrix} , $$ where $\alpha_2, \beta_2$ are inner, co-outer, quasi-continuous and all minors on the first columns of $\tilde{V}_2, \tilde{W}_2^T$ are in $H^\infty.$ Furthermore, the set $\mathcal{E}_2$ of all level $2$ superoptimal error functions for $G$ satisfies $$\mathcal{E}_2 = W_0^* W_1^* \begin{pmatrix} I_2 & 0 \\ 0& \tilde{W}_2^* \end{pmatrix} \begin{pmatrix} t_0 u_0 & 0 & 0&0 \\ 0 & t_1 u_1 &0 &0 \\ 0& 0 & t_2 u_2&0 \\ 0&0 & 0 & (F_3 +H^\infty)\cap B(t_2) \end{pmatrix}\begin{pmatrix} I_2 & 0 \\ 0 & \tilde{V}_2^* \end{pmatrix} V_1^* V_0^* , $$ for some $F_3 \in H^\infty(\Dd,\Cc^{(m-3)\times (n-3)}) +C(\Tt,\Cc^{(m-3)\times (n-3)}),$ where $u_2=\frac{\bar{z}\bar{h}_2}{h_2}$ is a quasi-continuous unimodular function and $B(t_2)$ is the closed ball of radius $t_2$ in $L^\infty(\Tt,\Cc^{(m-3)\times (n-3)}).$ 
\end{proposition} \begin{proof} By Theorem \ref{T2compactt}, the following diagram commutes \begin{equation}\label{commdiagrt2-2} \begin{array}{clllll} H^2(\Dd,\Cc^{n-2}) &\xrightarrow{M_{\bar{\alpha}_0\bar{\alpha}_1}} & \mathcal{K}_2 &\xrightarrow{\xi_0 \telwe \xi_1 \telwe \cdot}& \xi_0 \telwe \xi_1 \telwe H^2 (\Dd, \Cc^n)=X_2\\ \Big\downarrow\rlap{$\scriptstyle H_{F_2} $} & ~ &\Big\downarrow\rlap{$\scriptstyle \Gamma_2$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_2$} \\ H^2(\Dd,\Cc^{m-2})^\perp &\xrightarrow{M_{\beta_0 \beta_1} }& \mathcal{L}_2 &\xrightarrow{\bar{\eta}_0 \telwe \bar{\eta}_1\telwe \cdot } & \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe H^2 (\Dd, \Cc^m)^\perp =Y_2. \end{array}\end{equation} Recall that the operators $ M_{\balpha_0\balpha_1},\;M_{\beta_0 \beta_1},$ $(\xi_0\telwe \xi_1 \telwe \cdot)$ and $(\bita_0 \telwe \bita_1 \telwe \cdot)$ are unitaries. By Proposition \ref{Twell}, $T_2$ is well defined and is independent of the choice of $Q_2 \in H^\infty(\Dd,\Cc^{m\times n}) $ satisfying conditions \eqref{Q2cond}. Hence we may choose $Q_2$ to minimise $(s_0^\infty(G-Q), s_1^\infty(G-Q))$, and then, by Proposition \ref{g-q1y1t1x1}, the conditions \eqref{Q2cond} hold. By Lemma \ref{coofschpairs2}, $(x_2,y_2)$ defined above is a Schmidt pair for $\Gamma_2$ corresponding to $t_2$. By Lemma \ref{schfohf2}, $(\hx_2,\hy_2)$ is a Schmidt pair for $H_{F_2}$ corresponding to $t_2,$ where $$\hx_2 = \alpha_1^T\alpha_0^T x_2,\quad \hy_2=\beta_1^*\beta_0^*y_2. 
$$ We intend to apply Lemma \ref{2.2} to $H_{F_2}$ and the Schmidt pair $(\hx_2,\hy_2)$ to find unitary-valued functions $\tilde{V}_2,\tilde{W}_2$ such that, for every $\tilde{Q}_2\in H^\infty(\Dd,\Cc^{(m-2)\times(n-2)})$ which is at minimal distance from $F_2,$ a factorisation of the form $$F_2-\tilde{Q}_2 = \tilde{W}_2^* \begin{pmatrix}t_2 u_2 &0\\ 0 & F_3 \end{pmatrix}\tilde{V}_2^* $$is obtained, for some $F_3 \in H^\infty(\Dd,\Cc^{(m-3)\times(n-3)})+C(\Tt,\Cc^{(m-3)\times(n-3)}).$ For this purpose we find the inner-outer factorisations of $\hx_2$ and $\bar{z}\bar{\hy}_2.$ By Proposition \ref{x0wev2eta2wew2}, \begin{equation}\label{h2common} \|\hx_2(z)\|_{\Cc^{n-2}} = |h_2(z)|\;\text{and}\; \|\hy_2(z)\|_{\Cc^{m-2}} =|h_2(z)|\end{equation} almost everywhere on $\Tt.$ Equations \eqref{h2common} imply that $h_2\in H^2(\Dd,\Cc)$ is the scalar outer factor of both $\hat{x}_2$ and $\bar{z}\bar{\hat{y}}_2.$ By Lemma \ref{2.2}, $\hat{x}_2, \bar{z}\bar{\hat{y}}_2$ admit the inner-outer factorisations $$\hat{x}_2 = \hat{\xi}_2 h_2, \quad \bar{z} \bar{\hat{y}}_2 = \hat{\eta}_2 h_2,$$for some inner $\hat{\xi}_2 \in H^\infty(\Dd,\Cc^{n-2}), \hat{\eta}_2\in H^\infty(\Dd,\Cc^{m-2}).$ Then $$ \hat{x}_2 = \hat{\xi}_2 h_2 =\alpha_1^T\alpha_0^T x_2,\quad \bar{z}\bar{\hat{y}}_2 =\hat{\eta}_2 h_2=\bar{z}\beta_1^T \beta_0^T \bar{y}_2,$$from which we obtain $$ \hat{\xi}_2 = \alpha_1^T\alpha_0^T \xi_2,\quad \hat{\eta}_2 = \beta_1^T \beta_0^T \eta_2. 
$$ We show that $\alpha_1^T\alpha_0^T \xi_2,\;\beta_1^T\beta_0^T \eta_2 $ are inner in order to apply Lemma \ref{2.2} and obtain $\tilde{V}_2$ and $\tilde{W}_2.$ Recall that, by Lemma \ref{schfohf2}, $$x_2 = (I_{n} - \xi_0 \xi_0^*-\xi_1\xi_1^*)v_2=\bar{\alpha}_0\balpha_1\alpha_1^T \alpha_0^Tv_2,\quad y_2= (I_{m} - \bita_0\eta_0^T-\bita_1\eta_1^T)w_2=\beta_0\beta_1\beta_1^*\beta_0^*w_2 .$$ Thus $$\alpha_1^T\alpha_0^Tx_2 = \alpha_1^T\alpha_0^T v_2, \quad \beta_1^T\beta_0^T \bar{y}_2 =\beta_1^T\beta_0^T\bar{w}_2 ,$$ and since $$\xi_2 =\frac{x_2}{h_2},\quad \eta_2 = \frac{\bar{z}\bar{y}_2}{h_2}, $$ we deduce that the functions $$\alpha_1^T\alpha_0^T\xi_2=\frac{\alpha_1^T\alpha_0^Tv_2}{h_2},\quad \beta_1^T\beta_0^T\eta_2 = \frac{\beta_1^T\beta_0^T\bar{z}\bar{w}_2}{h_2}$$ are analytic. Furthermore, $\|\xi_2(z)\|_{\Cc^n}=1$ and $\|\eta_2(z)\|_{\Cc^m}=1$ almost everywhere on $\Tt,$ and, by equations \eqref{h2common}, $$\|\alpha_1^T(z)\alpha_0^T(z)x_2(z)\|_{\Cc^{n-2}}=\|\alpha_1^T(z)\alpha_0^T(z)v_2(z)\|_{\Cc^{n-2}}=|h_2(z)|$$and$$ \|\beta_1^T(z)\beta_0^T(z)\bar{y}_2(z)\|_{\Cc^{m-2}}=\|\beta_1^T(z)\beta_0^T (z)\bar{w}_2(z)\|_{\Cc^{m-2}}=|h_2(z)| $$almost everywhere on $\Tt.$ Hence $$\|\alpha_1^T(z)\alpha_0^T(z)\xi_2(z)\|_{\Cc^{n-2}}=1,\quad \|\beta_1^T(z)\beta_0^T(z)\eta_2(z)\|_{\Cc^{m-2}}=1 $$almost everywhere on $\Tt.$ Thus $\alpha_1^T\alpha_0^T\xi_2,\; \beta_1^T\beta_0^T\eta_2$ are inner functions. 
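The norm computations above all rest on one pointwise fact: the values of $\bar{\alpha}_0, \bar{\alpha}_1$ (and likewise $\beta_0, \beta_1$) are isometries almost everywhere, so multiplication by them preserves Euclidean norms. As a hedged finite-dimensional sanity check (random matrices stand in for the pointwise values; the sizes are arbitrary, not those of the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_isometry(rows, cols):
    # QR of a random complex matrix yields a matrix with orthonormal columns.
    m = rng.standard_normal((rows, cols)) + 1j * rng.standard_normal((rows, cols))
    q, _ = np.linalg.qr(m)
    return q

# Stand-ins for the pointwise values bar{alpha}_0(z): C^3 -> C^4 and bar{alpha}_1(z): C^2 -> C^3.
a0_bar = random_isometry(4, 3)
a1_bar = random_isometry(3, 2)

xhat = rng.standard_normal(2) + 1j * rng.standard_normal(2)
x = a0_bar @ a1_bar @ xhat   # analogue of x_2 = bar{alpha}_0 bar{alpha}_1 hat{x}_2

# A composition of isometries is an isometry, so ||x|| = ||hat{x}||:
# the pointwise identity behind ||x_2(z)|| = ||hat{x}_2(z)|| = |h_2(z)|.
assert np.isclose(np.linalg.norm(x), np.linalg.norm(xhat))
```

The same preservation of norms is what converts the unit-norm condition on $\xi_2, \eta_2$ into the unit-norm condition on $\alpha_1^T\alpha_0^T\xi_2, \beta_1^T\beta_0^T\eta_2$.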
By Lemma \ref{2.2}, there exist inner, co-outer, quasi-continuous functions $\alpha_2, \beta_2$ of types $(n-2)\times (n-3), (m-2)\times (m-3)$ respectively such that the functions $$\tilde{V}_2 =\begin{pmatrix} \alpha_1^T \alpha_0^T \xi_2 & \bar{\alpha}_2 \end{pmatrix}, \quad \tilde{W}_2^T = \begin{pmatrix} {\beta}_1^T \beta_0^T \eta_2 & \bar{\beta}_2 \end{pmatrix} $$ are unitary-valued with all minors on the first columns in $H^\infty.$ \noindent Furthermore, by Lemma \ref{2.2}, every $\hat{Q}_2\in H^\infty(\Dd,\Cc^{(m-2)\times(n-2)})$ which is at minimal distance from $F_2$ satisfies $$F_2-\hat{Q}_2 = \tilde{W}_2^* \begin{pmatrix} t_2 u_2 & 0 \\ 0 & F_3 \end{pmatrix}\tilde{V}_2^*, $$ for some $F_3 \in H^\infty(\Dd,\Cc^{(m-3)\times (n-3)})+C(\Tt,\Cc^{(m-3)\times (n-3)})$ and for the quasi-continuous unimodular function given by $u_2 = \frac{\bar{z} \bar{h}_2}{h_2}.$ By Lemma \ref{f+hinfty}, the set $$\tilde{\mathcal{E}}_{2} =\{F_{2} - \hat{Q} : \hat{Q} \in H^\infty(\Dd,\Cc^{(m-2)\times (n-2)}), \| F_{2} - \hat{Q}\|_{L^\infty}=t_{2} \}$$ satisfies $$\tilde{\mathcal{E}}_{2} = \tilde{W}_{2}^* \begin{pmatrix} t_{2}u_{2} & 0 \\ 0 & (F_3+H^\infty)\cap B(t_2) \end{pmatrix}\tilde{V}_{2}^*, $$ where $B(t_2)$ is the closed ball of radius $t_2$ in $L^\infty(\Tt,\Cc^{(m-3)\times (n-3)}).$ Thus, by Proposition \ref{tildew1v1}, $\mathcal{E}_2$ admits the factorisation claimed. \end{proof} \begin{proposition}\label{g-q2} Every $Q_3 \in H^\infty(\Dd,\CCmn)$ which minimises $$(s_0^\infty(G-Q),s_1^\infty(G-Q),s_2^\infty(G-Q)) $$satisfies $$(G-Q_3)x_i = t_i y_i,\quad (G-Q_3)^*y_i=t_ix_i\quad \text{for}\quad i=0,1,2. $$ \end{proposition} \begin{proof} By Proposition \ref{g-q1y1t1x1}, every $Q_3 \in H^\infty(\Dd,\CCmn)$ that minimises $$(s_0^\infty(G-Q),s_1^\infty(G-Q))$$ satisfies $$(G-Q_3)x_i = t_i y_i,\quad (G-Q_3)^*y_i=t_ix_i\quad \text{for}\quad i=0,1. $$Hence it suffices to show that $Q_3$ satisfies $$(G-Q_3)x_2 = t_2y_2, \quad (G-Q_3)^*y_2=t_2x_2. 
$$ By Theorem \ref{T2compactt}, the following diagram commutes $$ \begin{array}{clllll} H^2(\Dd,\Cc^{n-2}) &\xrightarrow{M_{\bar{\alpha}_0\bar{\alpha}_1}} & \mathcal{K}_2 &\xrightarrow{\xi_0 \telwe \xi_1 \telwe \cdot}& \xi_0 \telwe \xi_1 \telwe H^2 (\Dd, \Cc^n)=X_2\\ \Big\downarrow\rlap{$\scriptstyle H_{F_2} $} & ~ &\Big\downarrow\rlap{$\scriptstyle \Gamma_2$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_2$} \\ H^2(\Dd,\Cc^{m-2})^\perp &\xrightarrow{M_{\beta_0 \beta_1} }& \mathcal{L}_2 &\xrightarrow{\bar{\eta}_0 \telwe \bar{\eta}_1\telwe \cdot } & \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe H^2 (\Dd, \Cc^m)^\perp =Y_2, \end{array}$$ where the operator $\Gamma_2 \colon \mathcal{K}_2 \to \mathcal{L}_2$ is given by $\Gamma_2 = P_{\mathcal{L}_2} M_{G-Q_2}|_{\mathcal{K}_2}$ and \linebreak$F_2\in H^\infty(\Dd,\Cc^{(m-2)\times(n-2)})+C(\Tt,\Cc^{(m-2)\times(n-2)})$ is constructed as follows. \noindent By Lemma \ref{2.2} and Proposition \ref{tildew1v1}, there exist unitary-valued functions $$\tilde{V}_1=\big( \begin{matrix} \alpha_0^T \xi_1 & \balpha_1 \end{matrix}\big),\quad \tilde{W}_1^T =\big( \begin{matrix} \beta_0^T \eta_1 & \bar{\beta}_1 \end{matrix}\big), $$where $\alpha_1, \beta_1$ are inner, co-outer, quasi-continuous functions of types $(n-1)\times(n-2)$ and $(m-1)\times (m-2)$ respectively, and all minors on the first columns of $\tilde{V}_1, \tilde{W}_1^T$ are in $H^\infty.$ Furthermore, the set of all level $1$ superoptimal functions $\mathcal{E}_1=\{G-Q:Q\in \Omega_1\}$ satisfies \begin{equation}\label{g-qv0v1w0w111} \mathcal{E}_1 = W_0^* \begin{pmatrix} 1 & 0 \\ 0& \tilde{W}_1^*\end{pmatrix} \begin{pmatrix} t_0 u_0 & 0&0 \\ 0& t_1 u_1 &0\\ 0&0 & (F_2+H^\infty(\Dd,\Cc^{(m-2)\times(n-2)}))\cap B(t_1) \end{pmatrix}\begin{pmatrix} 1 & 0 \\ 0 & \tilde{V}_1^* \end{pmatrix} V_0^*, \end{equation} for some $F_2 \in H^\infty(\Dd,\Cc^{(m-2)\times (n-2)})+C(\Tt,\Cc^{(m-2)\times (n-2)})$, for a quasi-continuous unimodular function $u_1 = \frac{\bar{z} \bar{h}_1}{h_1}$, 
and for the closed ball $B(t_1)$ of radius $t_1$ in $L^\infty(\Tt, \Cc^{(m-2)\times (n-2)})$. Consider some $Q_3\in \Omega_1,$ so that, according to equation \eqref{g-qv0v1w0w111}, $$ \begin{pmatrix} 1 & 0 \\ 0& \tilde{W}_1\end{pmatrix}W_0 (G-Q_3)V_0 \begin{pmatrix} 1 & 0 \\ 0 & \tilde{V}_1 \end{pmatrix} =\begin{pmatrix} t_0 u_0 & 0&0 \\ 0& t_1 u_1 &0\\ 0&0 & F_2-\tilde{Q}_2 \end{pmatrix},$$ for some $\tilde{Q}_2\in H^\infty(\Dd,\Cc^{(m-2)\times(n-2)}),$ that is, \begin{equation}\label{g-q3} \begin{pmatrix} 1 & 0 \\ 0 & \begin{pmatrix}\eta_1^T\beta_0 \\\beta_1^*\end{pmatrix} \end{pmatrix} \begin{pmatrix}\eta_0^T \\\beta_0^*\end{pmatrix}(G-Q_3) \big(\begin{matrix} \xi_0&\balpha_0 \end{matrix}\big)\begin{pmatrix} 1 & 0 &0 \\ 0 & \alpha_0^T \xi_1 & \balpha_1 \end{pmatrix}=\begin{pmatrix} t_0 u_0 & 0&0 \\ 0& t_1 u_1 &0\\ 0&0 & F_2-\tilde{Q}_2 \end{pmatrix}.\end{equation}Observe $$ \begin{pmatrix}\eta_0^T \\\beta_0^*\end{pmatrix}(G-Q_3) \big(\begin{matrix} \xi_0&\balpha_0 \end{matrix}\big)= \begin{pmatrix} t_0 u_0 & 0 \\ 0& \beta_0^* (G-Q_3)\balpha_0 \end{pmatrix},$$hence $$\begin{pmatrix} 1 & 0 \\ 0 & \begin{pmatrix}\eta_1^T\beta_0 \\\beta_1^*\end{pmatrix} \end{pmatrix} \begin{pmatrix} t_0 u_0 & 0 \\ 0& \beta_0^* (G-Q_3)\balpha_0 \end{pmatrix} \begin{pmatrix} 1 & 0 &0 \\ 0 & \alpha_0^T \xi_1 & \balpha_1 \end{pmatrix}$$ is equal to $$\begin{pmatrix} t_0 u_0 & 0 & 0 \\ 0& \eta_1^T \beta_0 \beta_0^* (G-Q_3)\balpha_0 \alpha_0^T\xi_1 & \eta_1^T \beta_0 \beta_0^* (G-Q_3)\balpha_0 \balpha_1 \\ 0& \beta_1^* \beta_0^* (G-Q_3)\balpha_0 \alpha_0^T \xi_1 & \beta_1^* \beta_0^* (G-Q_3)\balpha_0 \balpha_1 \end{pmatrix}, $$ and so equation \eqref{g-q3} yields $$\begin{pmatrix} t_0 u_0 & 0 & 0 \\ 0& \eta_1^T \beta_0 \beta_0^* (G-Q_3)\balpha_0 \alpha_0^T\xi_1 & \eta_1^T \beta_0 \beta_0^* (G-Q_3)\balpha_0 \balpha_1 \\ 0& \beta_1^* \beta_0^* (G-Q_3)\balpha_0 \alpha_0^T \xi_1 & \beta_1^* \beta_0^* (G-Q_3)\balpha_0 \balpha_1 \end{pmatrix} =\begin{pmatrix} t_0 u_0 & 0&0 \\ 0& t_1 u_1 &0\\ 
0&0 & F_2-\tilde{Q}_2 \end{pmatrix},$$which is equivalent to the following equations $$\eta_1^T \beta_0 \beta_0^* (G-Q_3)\balpha_0 \alpha_0^T\xi_1= t_1 u_1, $$ $$\eta_1^T \beta_0 \beta_0^* (G-Q_3)\balpha_0 \balpha_1 =0, $$ $$\beta_1^* \beta_0^* (G-Q_3)\balpha_0 \alpha_0^T \xi_1= 0, $$ and \begin{equation}\label{g-q_3} \beta_1^* \beta_0^* (G-Q_3)\balpha_0 \balpha_1 = F_2 -\tilde{Q}_2. \end{equation} By Theorem \ref{1.7} applied to $H_{F_2},$ if $(\hx_2,\hy_2)$ is a Schmidt pair for $H_{F_2}$ corresponding to $t_2=\|H_{F_2}\|,$ then, for any $\tilde{Q}_2$ which is at minimal distance from $F_2,$ we have \begin{equation}\label{f2-q2}(F_2-\tilde{Q}_2)\hx_2 = t_2\hy_2,\quad (F_2-\tilde{Q}_2)^*\hy_2 =t_2\hx_2. \end{equation} By equations \eqref{g-q_3} and \eqref{f2-q2}, \begin{equation}\label{g-q31}\beta_1^* \beta_0^* (G-Q_3)\balpha_0 \balpha_1\hx_2= t_2\hy_2\end{equation}and \begin{equation}\label{g-q32}\alpha_1^T \alpha_0^T (G-Q_3)^*\beta_0 \beta_1 \hy_2=t_2\hx_2. \end{equation}Recall that, by equations \eqref{x2} and \eqref{y2}, \begin{equation}\label{a01x1} \balpha_0\balpha_1\hx_2 = x_2\quad \text{and}\quad \hy_2 = \beta_1^*\beta_0^*y_2.\end{equation} Hence, by equation \eqref{g-q31}, we obtain $$\beta_1^* \beta_0^* (G-Q_3)x_2=t_2\beta_1^* \beta_0^* y_2, $$or equivalently, $$ \beta_1^* \beta_0^* \bigg( (G-Q_3)x_2-t_2y_2\bigg)=0.$$ Since, by Theorem \ref{T2compactt}, $M_{\beta_0\beta_1}$ is unitary, the latter equation yields $$ (G-Q_3)x_2=t_2y_2.$$ Moreover, in view of equations \eqref{g-q_3}, \eqref{f2-q2} and \eqref{a01x1}, equation \eqref{g-q32} implies $$ \alpha_1^T \alpha_0^T (G-Q_3)^*y_2=t_2\alpha_1^T\alpha_0^Tx_2,$$which in turn is equivalent to the equation $$ \alpha_1^T \alpha_0^T\bigg( (G-Q_3)^*y_2-t_2x_2\bigg)=0.$$By Theorem \ref{T2compactt}, $M_{\balpha_0 \balpha_1}$ is unitary, hence the latter equation yields $$(G-Q_3)^*y_2=t_2x_2 $$and therefore the assertion has been proved. 
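The Schmidt-pair relations $(G-Q_3)x_i = t_i y_i$ and $(G-Q_3)^*y_i = t_i x_i$ exploited throughout this argument are the operator analogue of the singular-value relations for a matrix. A hedged numerical sketch (the matrix $A$ below is an arbitrary stand-in for a compact operator such as $\Gamma_2$ or $H_{F_2}$, not a Hankel matrix built from the text's data):

```python
import numpy as np

rng = np.random.default_rng(1)
# Arbitrary complex matrix standing in for a compact operator.
A = rng.standard_normal((5, 4)) + 1j * rng.standard_normal((5, 4))

U, s, Vh = np.linalg.svd(A)
t = s[0]                 # t = ||A||, the largest singular value
x = Vh.conj().T[:, 0]    # right Schmidt vector
y = U[:, 0]              # left Schmidt vector

# (x, y) is a Schmidt pair for A corresponding to t:  A x = t y  and  A* y = t x.
assert np.allclose(A @ x, t * y)
assert np.allclose(A.conj().T @ y, t * x)
```

In the text the same two relations are transported between $T_2$, $\Gamma_2$ and $H_{F_2}$ along the unitaries of the commutative diagram, which is what the computation above has just carried out for $G-Q_3$.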
\end{proof} \section{Compactness of the operator $T_{j+1}$}\label{T_j-compact} At this point, the reader has seen the proof of the compactness of the operators $T_1$ and $T_2$. Suppose we have applied steps $0,\dots,j$ of the superoptimal analytic approximation algorithm from Subsection \ref{Alg_statement} to $G$, we have constructed $$\begin{array}{lll} &t_0 \geq t_1 \geq \cdots \geq t_j > 0\\ &x_0, x_1, \cdots, x_j \in L^2 (\Tt, \Cc^n)\\ &y_0 , y_1 , \cdots , y_j \in L^2(\Tt, \Cc^m) \\ &h_0, h_1, \cdots, h_j \in H^2(\Dd,\Cc) \; \text{outer}\\ & \xi_0,\xi_1, \cdots, \xi_j \in L^\infty(\Tt,\Cc^n)\; \text{pointwise orthonormal on}\;\Tt\\ & \eta_0, \eta_1, \cdots , \eta_j \in L^\infty(\Tt,\Cc^m)\; \text{pointwise orthonormal on}\;\Tt\\ &X_0 = H^2(\Dd,\Cc^n),X_1, \cdots, X_j \\ &Y_0 = H^2(\Dd,\Cc^m)^\perp, Y_1, \cdots, Y_j\\ &T_0, T_1, \cdots, T_j \; \text{compact operators},\end{array} $$ and all the claimed properties hold. We shall apply a similar method to show that the operator $T_{j+1}$ as given in equation \eqref{T_j} is compact. \begin{proposition}\label{tildvjwj} \index{$\mathcal{E}_j$} Let $m,n$ be positive integers such that $\min(m,n)\geq2.$ Let \linebreak $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn).$ In line with the algorithm from Subsection \ref{Alg_statement}, let $Q_j \in H^\infty(\Dd,\CCmn)$ satisfy \begin{equation}\label{g-qj} (G-Q_{j})x_i = t_i y_i,\quad (G-Q_{j})^*y_i=t_ix_i\quad \text{for}\quad i=0,1,\dots,j-1. 
\end{equation} Let the spaces $X_j , Y_j$ be given by $$X_j = \xi_0 \telwe \xi_1 \telwe \dots \telwe \xi_{j-1} \telwe H^2(\Dd,\Cc^n), \quad Y_j = \bar{\eta}_0 \telwe \bar{\eta}_1 \telwe \dots \telwe \bar{\eta}_{j-1} \telwe H^2(\Dd,\Cc^m)^\perp ,$$ and consider the compact operator $T_j\colon X_j \to Y_j$ given by $$T_j(\xi_0 \telwe \xi_1 \telwe \dots \telwe \xi_{j-1} \telwe x) = P_{Y_j} (\bar{\eta}_0 \telwe \bar{\eta}_1 \telwe \dots \telwe\bar{\eta}_{j-1} \telwe (G-Q_j)x)$$ for all $x \in H^2(\Dd,\Cc^n).$ Let $(\xi_0 \telwe \xi_1 \telwe \dots \telwe \xi_{j-1} \telwe v_j, \bar{\eta}_0\telwe \bar{\eta}_1 \telwe \dots \telwe \bar{\eta}_{j-1} \telwe w_j)$ be a Schmidt pair for the operator $T_j$ corresponding to $t_j = \|T_j\|,$ let $h_j \in H^2(\Dd,\Cc)$ be the scalar outer factor of $\xi_0 \telwe \xi_1 \telwe \dots \telwe \xi_{j-1} \telwe v_j$, let $$x_{j} =(I_{n}-\xi_0 \xi_0^* -\dots- \xi_{j-1}\xi_{j-1}^*)v_{j},\quad y_{j} = (I_{m} - \bar{\eta}_0 \eta_0^T - \dots - \bar{\eta}_{j-1}\eta_{j-1}^T)w_{j} $$ and let \begin{equation} \label{etajxij}\xi_{j} = \frac{x_{j}}{h_{j}},\quad \eta_{j} = \frac{\bar{z} \bar{y}_{j}}{h_{j}}. 
\end{equation} Let, for $i=0,1,\dots, j-1$, \begin{equation} \label{tildevjwjj} \tilde{V}_{i} = \begin{pmatrix} \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} & \bar{ \alpha}_{i} \end{pmatrix}, \quad \tilde{W}_{i}^T = \begin{pmatrix} \beta_{i-1}^T \cdots \beta_0^T \eta_{i} & \bar{\beta}_{i} \end{pmatrix} \end{equation} be unitary-valued functions, as described in Lemma \ref{2.2} (see also Proposition \ref{epsilon2} for $\tilde{V}_{2}$ and $\tilde{W}_{2}^T$), let $u_i = \frac{\bar{z} \bar{h}_i}{h_i}$ be quasi-continuous unimodular functions, and let $$V_{i} = \begin{pmatrix} I_{i} & 0 \\ 0 & \tilde{V}_{i} \end{pmatrix}, \quad W_{i} = \begin{pmatrix} I_{i} & 0 \\ 0 & \tilde{W}_{i} \end{pmatrix} .$$ There exist unitary-valued functions $\tilde{V}_{j},\tilde{W}_{j}$ of the form \begin{equation}\label{tildevjwj} \tilde{V}_{j} = \begin{pmatrix} \alpha_{j-1}^T \cdots \alpha_0^T \xi_{j} & \bar{ \alpha}_{j} \end{pmatrix}, \quad \tilde{W}_{j}^T = \begin{pmatrix} \beta_{j-1}^T \cdots \beta_0^T \eta_{j} & \bar{\beta}_{j} \end{pmatrix} ,\end{equation} where $\alpha_0, \dots, \alpha_{j}$ and $\beta_0,\dots,\beta_{j}$ are of types $n\times (n-1),\dots,(n-j)\times (n-j-1)$ and $m\times (m-1),\dots,(m-j)\times (m-j-1)$ respectively, and are inner, co-outer and quasi-continuous. 
Furthermore, the set of all level $j$ superoptimal error functions $\mathcal{E}_j$ satisfies \index{$\mathcal{E}_j$} \begin{equation}\label{epsilonnj} \mathcal{E}_j = W_0^* W_1^* \cdots W_{j}^* \begin{pmatrix} t_0 u_0 & 0 & \cdots & 0 &0_{1\times (n-j-1)}\\ 0 &t_1u_1 & \dots & 0 &0_{1\times (n-j-1)} \\ \vdots & \vdots & \ddots & \vdots &\vdots \\ 0 & 0 & \cdots &t_{j}u_{j} & 0\\ 0_{(m-j-1)\times 1} & 0_{(m-j-1)\times 1} & \dots & \dots & (F_{j+1}+H^\infty)\cap B(t_{j}) \end{pmatrix}V_{j}^* \cdots V_0^* , \end{equation} for some $F_{j+1} \in H^\infty(\Dd,\Cc^{(m-j-1)\times (n-j-1)})+C(\Tt,\Cc^{(m-j-1)\times (n-j-1)}),$ for the quasi-continuous unimodular functions $u_i = \frac{\bar{z} \bar{h}_i}{h_i}$, for all $i=0,\dots, j ,$ for the closed ball $B(t_{j})$ of radius $t_j$ in $L^\infty(\Tt,\Cc^{(m-j-1)\times (n-j-1)})$, and for the unitary valued functions $$V_{j} = \begin{pmatrix} I_{j} & 0 \\ 0 & \tilde{V}_{j} \end{pmatrix}, \quad W_{j} = \begin{pmatrix} I_{j} & 0 \\ 0 & \tilde{W}_{j} \end{pmatrix}. 
$$ \end{proposition} \index{$\tilde{V}_j$} \index{$\tilde{W}_j$} \begin{proof} Suppose we have applied steps $0,\dots,j$ of the algorithm from Subsection \ref{Alg_statement} and that the following diagram commutes \begin{equation}\label{tj-1comm} \begin{array}{clllll} H^2(\Dd,\Cc^{n-j}) &\xrightarrow{M_{\bar{\alpha}_0\cdots \bar{ \alpha}_{j-1}}} & \mathcal{K}_{j} &\xrightarrow{\xi_{(j-1)} \telwe \cdot}& \xi_{(j-1)} \telwe H^2 (\Dd, \Cc^n)=X_{j}\\ \Big\downarrow\rlap{$\scriptstyle H_{F_{j}} $} & ~ &\Big\downarrow\rlap{$\scriptstyle \Gamma_{j}$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_{j}$} \\ H^2(\Dd,\Cc^{m-j})^\perp &\xrightarrow{M_{\beta_0 \cdots \beta_{j-1}}}&\mathcal{L}_{j} &\xrightarrow{\bar{\eta}_{(j-1)} \telwe \cdot } & \bar{\eta}_{(j-1)} \telwe H^2 (\Dd, \Cc^m)^\perp =Y_{j}, \end{array}\end{equation} where the maps $$M_{\bar{\alpha}_0\cdots \bar{ \alpha}_{j-1}}\colon H^2(\Dd,\Cc^{n-j}) \to \mathcal{K}_{j}\colon x \mapsto \bar{\alpha}_0\cdots \bar{ \alpha}_{j-1}x,$$ $$ M_{\beta_0 \cdots \beta_{j-1}}\colon H^2(\Dd,\Cc^{m-j})^\perp \to \mathcal{L}_{j}\colon y \mapsto \beta_0 \cdots \beta_{j-1}y,$$ $$(\xi_{(j-1)} \telwe \cdot)\colon \mathcal{K}_{j}\to X_{j} \; \text{ and} \; (\bar{\eta}_{(j-1)} \telwe \cdot)\colon\mathcal{L}_{j}\to Y_{j} $$ are unitaries. Let $(\xi_{(j-1)} \telwe v_{j}, \bar{\eta}_{(j-1)}\telwe w_{j})$ be a Schmidt pair for the compact operator $T_{j}.$ Then \linebreak$x_{j} \in \mathcal{K}_{j},$ $y_{j} \in \mathcal{L}_{j}$ are such that $(x_{j}, y_{j})$ is a Schmidt pair for $\Gamma_{j}$ corresponding to $t_{j}=\|\Gamma_{j}\|,$ and $(\hat{x}_{j},\hat{y}_{j})$ is a Schmidt pair for $H_{F_{j}}$ corresponding to $t_{j}=\| H_{F_{j}} \|,$ where \begin{equation}\label{hatxj-1}\hat{x}_{j}= \alpha_{j-1}^T\cdots \alpha_0^T x_{j}, \quad \hat{y}_{j}=\beta_{j-1}^* \cdots \beta_0^* y_{j}. 
\end{equation} We intend to apply Lemma \ref{2.2} to $H_{F_j}$ and the Schmidt pair $(\hx_j,\hy_j)$ to find unitary-valued functions $\tilde{V}_j,\tilde{W}_j$ such that, for every $\tilde{Q}_j\in H^\infty(\Dd,\Cc^{(m-j)\times(n-j)})$ which is at minimal distance from $F_j,$ a factorisation of the form $$F_j-\tilde{Q}_j = \tilde{W}_j^* \begin{pmatrix}t_j u_j &0\\ 0 & F_{j+1} \end{pmatrix}\tilde{V}_j^* $$ is obtained, for some $F_{j+1} \in H^\infty(\Dd,\Cc^{(m-j-1)\times (n-j-1)})+C(\Tt,\Cc^{(m-j-1)\times (n-j-1)}).$ For this purpose we find the inner-outer factorisations of $\hx_j$ and $\bar{z}\bar{\hy}_j.$ By the inductive hypothesis (see Proposition \ref{x0wev2eta2wew2} for $j=2$), we have \begin{equation}\label{hjcommon} \begin{array}{lll} |h_j(z)|&= \| \xi_0 (z) \we \dots \we \xi_{j-1}(z) \we v_j(z) \|_{\we^{j+1}\Cc^n} = \| \bar{\eta}_0(z) \we \dots \we\bita_{j-1}(z) \we w_j(z)\|_{\we^{j+1}\Cc^m}, \\ \|\hx_j (z) \|_{\Cc^{n-j}} &= \|\hy_j(z)\|_{\Cc^{m-j}} = |h_j(z)|,\;\text{and}\\ \| x_j(z) \|_{\Cc^n} &= \|y_j(z)\|_{\Cc^m} =|h_j(z)|, \; \end{array} \end{equation} almost everywhere on $\Tt.$ Equations \eqref{hjcommon} imply that $h_j\in H^2(\Dd,\Cc)$ is the scalar outer factor of both $\hat{x}_j$ and $\bar{z}\bar{\hat{y}}_j.$ By Lemma \ref{2.2}, $\hat{x}_{j}, \bar{z}\bar{\hat{y}}_{j}$ admit the inner-outer factorisations \begin{equation}\label{facthatj-1}\hat{x}_{j} = \hat{\xi}_{j} h_{j},\quad \bar{z}\bar{\hat{y}}_{j} = \hat{\eta}_{j} h_{j}, \end{equation}where $\hat{\xi}_{j} \in H^\infty(\Dd,\Cc^{n-j})$ and $\hat{\eta}_{j} \in H^\infty(\Dd,\Cc^{m-j})$ are vector-valued inner functions. By equations \eqref{hatxj-1} and \eqref{facthatj-1}, we deduce that $$\hat{\xi}_{j} = \alpha_{j-1}^T\cdots \alpha_0^T \xi_{j},\quad \hat{\eta}_{j} = \beta_{j-1}^T \cdots \beta_0^T \eta_{j}.$$ We shall show that $\alpha_{j-1}^T\cdots\alpha_0^T \xi_j,\;\beta_{j-1}^T\cdots\beta_0^T \eta_j $ are inner in order to apply Lemma \ref{2.2} and obtain $\tilde{V}_j$ and $\tilde{W}_j$ as required. 
We have $$\begin{array}{lll}\hx_j&=\alpha_{j-1}^T\cdots \alpha_0^Tx_j \vspace{2ex}\\&= \alpha_{j-1}^T\cdots \alpha_0^T(I_{n} - \xi_0 \xi_0^*-\dots-\xi_{j-1}\xi_{j-1}^*)v_j\vspace{2ex}\\ &= \alpha_{j-1}^T\cdots \alpha_0^Tv_j- \alpha_{j-1}^T\cdots \alpha_0^T\xi_0\xi_0^*v_j-\cdots-\alpha_{j-1}^T\cdots \alpha_0^T\xi_{j-1}\xi_{j-1}^*v_j. \end{array}$$ Recall that, by the inductive hypothesis, for $i=0,\dots,j-1,$ each $$\tilde{V}_{i}= \begin{pmatrix} \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} & \bar{ \alpha}_{i} \end{pmatrix}$$ is unitary-valued, and so $\alpha_{i}^T \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} =0$. Hence, if $0 \le i \le j-1$, we have $$\alpha_{j-1}^T\cdots\alpha_{i+1}^T \alpha_{i}^T\cdots\alpha_0^T\xi_i=0.$$ Thus $$\hx_j=\alpha_{j-1}^T\cdots \alpha_0^Tx_j =\alpha_{j-1}^T\cdots \alpha_0^Tv_j,$$ that is, $\hx_j \in H^2(\Dd,\Cc^{n-j})$ and $$ \alpha_{j-1}^T\cdots \alpha_0^T\xi_j= \frac{1}{h_j} \alpha_{j-1}^T\cdots \alpha_0^Tx_j=\frac{1}{h_j} \alpha_{j-1}^T\cdots \alpha_0^Tv_j$$is analytic. Moreover, by equations \eqref{hjcommon}, $$\|\alpha_{j-1}^T(z)\cdots\alpha_0^T(z)x_j(z)\|_{\Cc^{n-j}}= \|\alpha_{j-1}^T(z)\cdots\alpha_0^T(z)v_j(z)\|_{\Cc^{n-j}}=|h_j(z)|$$ almost everywhere on $\Tt,$ and hence $$\|\alpha_{j-1}^T(z)\cdots \alpha_0^T(z)\xi_j(z)\|_{\Cc^{n-j}} =1$$ almost everywhere on $\Tt.$ Therefore $\alpha_{j-1}^T\cdots \alpha_0^T\xi_j$ is inner. Furthermore $$ \begin{array}{llll}\hy_j &=\beta_{j-1}^*\cdots \beta_0^*y_j\vspace{2ex}\\ &= \beta_{j-1}^*\cdots \beta_0^* (I_{m} - \bar{\eta}_0 \eta_0^T - \dots - \bar{\eta}_{j-1}\eta_{j-1}^T)w_{j}\vspace{2ex}\\ &= \beta_{j-1}^*\cdots \beta_0^* w_j - \beta_{j-1}^*\cdots \beta_0^* \bar{\eta}_0 \eta_0^T w_j-\cdots-\beta_{j-1}^*\cdots \beta_0^* \bar{\eta}_{j-1} \eta_{j-1}^Tw_j. 
\end{array}$$ Notice that, by the inductive hypothesis, for $i=0,\dots,j-1,$ each $$\tilde{W}_{i}^T = \begin{pmatrix} \beta_{i-1}^T \cdots \beta_0^T \eta_{i} & \bar{\beta}_{i} \end{pmatrix} $$ is unitary-valued, and so $\beta_{i}^*\cdots \beta_0^* \bar{\eta}_{i} =0.$ Hence, if $0 \le i \le j-1$, we have $$\beta_{j-1}^*\cdots \beta_{i+1}^*\beta_{i}^*\cdots \beta_0^* \bar{\eta}_{i} =0.$$ Thus $$\hy_j =\beta_{j-1}^*\cdots \beta_0^*y_j = \beta_{j-1}^*\cdots \beta_0^* w_j, $$ that is, $\hy_j \in H^2(\Dd,\Cc^{m-j})^\perp$ and $$\beta_{j-1}^T\cdots \beta_0^T \eta_j= \frac{1}{h_j} \beta_{j-1}^T\cdots \beta_0^T \bar{z}\bar{y}_j =\frac{1}{h_j} \beta_{j-1}^T\cdots \beta_0^T \bar{z}\bar{w}_j$$ is analytic. Further, by equations \eqref{hjcommon}, $$\| \beta_{j-1}^T(z)\cdots \beta_0^T (z) \bar{z}\bar{y}_j(z) \|_{\Cc^{m-j}}= \| \beta_{j-1}^T(z)\cdots \beta_0^T (z) \bar{z}\bar{w}_j(z) \|_{\Cc^{m-j}}=|h_j(z)| $$ almost everywhere on $\Tt,$ and therefore $$ \|\beta_{j-1}^T(z)\cdots \beta_0^T (z)\eta_j(z)\|_{\Cc^m}=1 $$almost everywhere on $\Tt,$ that is, $\beta_{j-1}^T\cdots \beta_0^T \eta_j$ is inner. 
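Both computations above rely on two elementary facts about a column partition of a unitary-valued function: the first column is pointwise orthogonal to the remaining block (which gives $\beta_{i}^*\cdots\beta_0^*\bar{\eta}_{i}=0$), and the blocks resolve the identity (as in $I_m-\bar{\eta}_0\eta_0^T=\beta_0\beta_0^*$). A hedged pointwise sketch, with a random unitary matrix standing in for a value such as $W_0^T(z)=\begin{pmatrix}\eta_0 & \bar{\beta}_0\end{pmatrix}$:

```python
import numpy as np

rng = np.random.default_rng(2)
m = 4
# Random m x m unitary standing in for the pointwise value W_0^T(z) = (eta_0  bar{beta}_0).
M = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
W0T, _ = np.linalg.qr(M)

eta0 = W0T[:, :1]        # first column, stand-in for eta_0(z)
beta0_bar = W0T[:, 1:]   # remaining block, stand-in for bar{beta}_0(z)

# Orthogonality of the blocks: bar{beta}_0^* eta_0 = 0.
assert np.allclose(beta0_bar.conj().T @ eta0, 0)
# Resolution of the identity: eta_0 eta_0^* + bar{beta}_0 bar{beta}_0^* = I_m,
# the pointwise form of (I_m - bar{eta}_0 eta_0^T) = beta_0 beta_0^*.
assert np.allclose(eta0 @ eta0.conj().T + beta0_bar @ beta0_bar.conj().T, np.eye(m))
```

Applied at each stage $i=0,\dots,j-1$, these identities are exactly what collapses $\hx_j$ and $\hy_j$ to $\alpha_{j-1}^T\cdots\alpha_0^Tv_j$ and $\beta_{j-1}^*\cdots\beta_0^*w_j$.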
We apply Lemma \ref{2.2} to the Hankel operator $H_{F_j}$ and the Schmidt pair $(\hx_j, \hy_j)$ to deduce that there exist inner, co-outer, quasi-continuous functions $\alpha_{j}, \beta_{j}$ of types $(n-j)\times (n-j-1),$ $ (m-j)\times (m-j-1)$ respectively such that $$\tilde{V}_{j} = \begin{pmatrix} \alpha_{j-1}^T\cdots \alpha_0^T \xi_{j} & \bar{ \alpha}_{j} \end{pmatrix}, \quad \tilde{W}_{j}^T = \begin{pmatrix} \beta_{j-1}^T \cdots \beta_0^T \eta_{j} & \bar{ \beta}_{j} \end{pmatrix} $$ are unitary-valued and all minors on the first columns of $\tilde{V}_{j},\tilde{W}_{j}^T$ are in $H^\infty.$ Moreover, every function $\hat{Q}_{j}\in H^\infty(\Dd,\Cc^{(m-j)\times (n-j)}),$ which is at minimal distance from $F_{j},$ satisfies $$F_{j} - \hat{Q}_{j} = \tilde{W}_{j}^* \begin{pmatrix} t_{j}u_{j} & 0 \\ 0 & F_{j+1} \end{pmatrix}\tilde{V}_{j}^* ,$$ for some $F_{j+1} \in H^\infty(\Dd,\Cc^{(m-j-1)\times (n-j-1)})+C(\Tt,\Cc^{(m-j-1)\times (n-j-1)})$ and for the quasi-continuous unimodular function $u_j = \frac{\bar{z} \bar{h}_j}{h_j}.$ By Lemma \ref{f+hinfty}, the set $$\tilde{\mathcal{E}}_{j} =\{F_{j} - \hat{Q} : \hat{Q} \in H^\infty(\Dd,\Cc^{(m-j)\times (n-j)}), \| F_{j} - \hat{Q}\|_{L^\infty}=t_{j} \}$$ satisfies $$\tilde{\mathcal{E}}_{j} = \tilde{W}_{j}^* \begin{pmatrix} t_{j}u_{j} & 0 \\ 0 & (F_{j+1}+H^\infty)\cap B(t_{j}) \end{pmatrix}\tilde{V}_{j}^*, $$ where $B(t_{j})$ is the closed ball of radius $t_{j}$ in $L^\infty(\Tt,\Cc^{(m-j-1)\times (n-j-1)}).$ By the inductive hypothesis, the set of all level $j-1$ superoptimal error functions $\mathcal{E}_{j-1}$ satisfies \index{$\mathcal{E}_{j-1}$} \begin{equation}\label{epsilonnj-1} \mathcal{E}_{j-1} = W_0^* W_1^* \cdots W_{j-1}^* \begin{pmatrix} t_0 u_0 & 0 & \cdots & 0 &0_{1\times (n-j)}\\ 0 &t_1u_1 & \dots & 0 &0_{1\times (n-j)} \\ \vdots & \vdots & \ddots & \vdots &\vdots \\ 0 & 0 & \cdots &t_{j-1}u_{j-1} & 0\\ 0_{(m-j)\times 1} & 0_{(m-j)\times 1} & \dots & \dots & (F_{j}+H^\infty)\cap B(t_{j-1}) \end{pmatrix}V_{j-1}^* \cdots 
V_0^* , \end{equation} for some $F_{j} \in H^\infty(\Dd,\Cc^{(m-j)\times (n-j)})+C(\Tt,\Cc^{(m-j)\times (n-j)}),$ $u_i = \frac{\bar{z} \bar{h}_i}{h_i}$ are quasi-continuous unimodular functions for all $i=0,\dots, j-1$, and for the closed ball $B(t_{j-1})$ of radius $t_{j-1}$ in $L^\infty(\Tt,\Cc^{(m-j)\times (n-j)}).$ Thus, by equation \eqref{epsilonnj-1}, $\mathcal{E}_j$ admits the factorisation \eqref{epsilonnj} as claimed. \end{proof} \begin{remark}\label{V0VjandW0Wj} Let, for $i=0,1,\dots, j$, \begin{equation} \label{tildevjwjj-1} \tilde{V}_{i} = \begin{pmatrix} \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} & \bar{ \alpha}_{i} \end{pmatrix}, \quad \tilde{W}_{i}^T = \begin{pmatrix} \beta_{i-1}^T \cdots \beta_0^T \eta_{i} & \bar{\beta}_{i} \end{pmatrix} \end{equation} be unitary-valued functions, as described in Lemma \ref{2.2}. Let $$V_{j} = \begin{pmatrix} I_{j} & 0 \\ 0 & \tilde{V}_{j} \end{pmatrix}, \quad W_{j} = \begin{pmatrix} I_{j} & 0 \\ 0 & \tilde{W}_{j} \end{pmatrix}. $$ Let $A_j= \alpha_0\alpha_1 \dots \alpha_j$, $A_{-1} = I_n$, $B_j= \beta_0 \beta_1\dots\beta_j$ and $B_{-1} = I_m$. Note $$W_1W_0 = \begin{pmatrix} 1 & 0\\ 0& \eta_1^T\beta_0\\ 0& \beta_1^* \end{pmatrix}\begin{pmatrix} \eta_0^T\\ \beta_0^* \end{pmatrix}= \begin{pmatrix} \eta_0^T \\ \eta_1^T B_0B_0^*\\B_1^* \end{pmatrix}$$ and $$W_2W_1W_0 = \begin{pmatrix} I_2 & 0\\ 0& \eta_2^TB_1\\ 0& \beta_2^* \end{pmatrix}\begin{pmatrix} \eta_0^T \\ \eta_1^T B_0 B_0^*\\B_1^* \end{pmatrix} = \begin{pmatrix} \eta_0^T \\ \eta_1^TB_0B_0^* \\ \eta_2^T B_1B_1^*\\ B_2^* \end{pmatrix}.$$ Similarly one obtains \begin{equation}\label{WjW0} W_jW_{j-1}\cdots W_0 = \begin{pmatrix} \eta_0^T \\ \eta_1^TB_0B_0^* \\\vdots\\ \eta_j^TB_{j-1}B_{j-1}^* \\ B_j^* \end{pmatrix}. \end{equation} Therefore \begin{equation}\label{WjW0*} W_0^* W_1^*\cdots W_j^*= \begin{pmatrix} \bar{\eta}_{0} & B_0B_0^*\bar{\eta}_{1} & \dots & B_{j-1} B_{j-1}^* \bar{\eta}_{j} & B_{j} \end{pmatrix}. 
\end{equation} Thus \begin{equation} \label{W0*Wj*WjW0} I_m= W_0^* W_1^* \cdots W_j^* W_j \cdots W_{1} W_0 = \sum\limits_{i=0}^{j} B_{i-1}B_{i-1}^* \bita_i \eta_i^T B_{i-1}B_{i-1}^* + B_j B_j^*. \end{equation} Furthermore $$V_0 V_1 = \begin{pmatrix} \xi_0 & \balpha_0 \end{pmatrix} \begin{pmatrix} 1 & 0 &0\\ 0& \alpha_0^T\xi_1 & \balpha_1 \end{pmatrix}= \begin{pmatrix} \xi_0 & \bar{A}_0 A_0^T \xi_1 & \bar{A}_1 \end{pmatrix}$$ and $$V_0 V_1 V_2 = \begin{pmatrix} \xi_0 & \bar{A}_0 A_0^T \xi_1 & \bar{A}_1 \end{pmatrix} \begin{pmatrix} I_2 & 0 & 0 \\ 0 &A_1^T\xi_2 &\bar{\alpha}_2 \\ \end{pmatrix}=\begin{pmatrix} \xi_0 & \bar{A}_0 A_0^T \xi_1 & \bar{A}_1 A_1^T \xi_2 & \bar{A}_2 \end{pmatrix}. $$ One can easily show by induction that \begin{equation}\label{V0Vj} V_0\cdots V_j = \begin{pmatrix} \xi_0 & \bar{A}_0 A_0^T \xi_1 & \bar{A}_1A_1^T\xi_2& \dots & \bar{A}_{j-1}A_{j-1}^T \xi_{j}& \bar{A}_{j} \end{pmatrix}. \end{equation} Therefore, \begin{equation} \label{V0VjVj*V0*} I_n= V_0\cdots V_j V_j^*\cdots V_0^* = \xi_0 \xi_0^* + \bar{A}_0 A_0^T \xi_1 \xi_1^* \bar{A}_0 A_0^T + \dots + \bar{A}_{j-1} A_{j-1}^T \xi_j \xi_j^* \bar{A}_{j-1} A_{j-1}^T +\bar{A}_j A_j^T. \end{equation} \end{remark} \begin{lemma}\label{xi=AkAkT} Let \begin{equation}\label{tildevjwjj-2} \tilde{V}_{i} = \begin{pmatrix} \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} & \bar{\alpha}_{i} \end{pmatrix} \end{equation} be unitary-valued functions, for $i=0,1,\dots, j$, as described in Lemma \ref{2.2}. For $i = 0,1,\dots,j$, let $A_i= \alpha_0\alpha_1 \dots \alpha_i$ and $A_{-1} = I_n$.
Then, for $i=0,1,\dots, j$, \begin{equation}\label{AAk} \bar{A}_i A_i^T = I_n - \sum\limits_{k=0}^i \xi_k \xi_k^* \end{equation} almost everywhere on $\Tt$. \end{lemma} \begin{proof} By equation \eqref{V0VjVj*V0*}, for $k= 0, \dots,j$, \begin{equation}\label{AkAkT} \bar{A}_k A_k^T = I_n - \sum\limits_{i=0}^{k} \bar{A}_{i-1} A_{i-1}^T \xi_i \xi_i^* \bar{A}_{i-1} A_{i-1}^T. \end{equation} Thus to prove condition \eqref{AAk} it suffices to show that, for $k=0,\dots,j,$ $$\bar{A}_{k-1} A_{k-1}^T \xi_k \xi_k^* \bar{A}_{k-1} A_{k-1}^T =\xi_k \xi_k^*.$$ For $k=0,$ $$\bar{A}_{-1} A_{-1}^T \xi_0 \xi_0^* \bar{A}_{-1} A_{-1}^T =\xi_0 \xi_0^*,$$ and so, equation \eqref{AkAkT} yields $$\bar{A}_0 A_0^T = I_n -\xi_0 \xi_0^*.$$ For $k=1,$ $$\bar{A}_{0} A_{0}^T \xi_1 \xi_1^* \bar{A}_{0} A_{0}^T= (I_n -\xi_0 \xi_0^*)\xi_1 \xi_1^*(I_n -\xi_0 \xi_0^*) $$ By Proposition \ref{onxi}, $\xi_1$ and $\xi_0$ are pointwise orthogonal almost everywhere on $\Tt,$ hence $$\bar{A}_{0} A_{0}^T \xi_1 \xi_1^* \bar{A}_{0} A_{0}^T=\xi_1 \xi_1^*,$$ and in view of equation \eqref{AkAkT}, we get $$\bar{A}_1 A_1^T = I_n - \xi_0 \xi_0^* -\xi_1 \xi_1^*.$$ Suppose \begin{equation}\label{AAkT} \bar{A}_{\ell-1} A_{\ell-1}^T \xi_\ell \xi_\ell^* \bar{A}_{\ell-1} A_{\ell-1}^T =\xi_\ell \xi_\ell^* \end{equation} holds for every $\ell \le k$, where $0\le k \le j$, almost everywhere on $\Tt$. By equations \eqref{AkAkT} and \eqref{AAkT}, this implies $$ \bar{A}_{k} A_{k}^T = I_n - \sum\limits_{i=0}^{k} \xi_i \xi_i^* .$$ Let us show that $$\bar{A}_{k} A_{k}^T \xi_{k+1} \xi_{k+1}^* \bar{A}_{k} A_{k}^T =\xi_{k+1} \xi_{k+1}^*.$$ Note that $$\bar{A}_{k} A_{k}^T \xi_{k+1} \xi_{k+1}^* \bar{A}_{k} A_{k}^T = (I_n - \sum\limits_{i=0}^{k} \xi_i \xi_i^*)\xi_{k+1} \xi_{k+1}^* (I_n - \sum\limits_{i=0}^{k} \xi_i \xi_i^*). 
$$ By Proposition \ref{onxi}, the set $\{ \xi_i(z)\}_{i=0}^{k+1}$ is pointwise orthogonal almost everywhere on $\Tt,$ and therefore $$\bar{A}_{k} A_{k}^T \xi_{k+1} \xi_{k+1}^* \bar{A}_{k} A_{k}^T = \xi_{k+1} \xi_{k+1}^*.$$ Thus, by equation \eqref{AkAkT}, $$\bar{A}_{k+1} A_{k+1}^T = I_n - \sum\limits_{i=0}^{k+1} \xi_i \xi_i^*$$ almost everywhere on $\Tt$, and the assertion has been proved. \end{proof} \begin{lemma}\label{eta=BkBk*} Let \begin{equation} \tilde{W}_{i}^T = \begin{pmatrix} \beta_{i-1}^T \cdots \beta_0^T \eta_{i} & \bar{\beta}_{i} \end{pmatrix} \end{equation} be unitary-valued functions, for $i=0,1, \dots, j$, as described in Lemma \ref{2.2}. For $i=0,1,\dots, j$, let $B_i= \beta_0 \beta_1\dots\beta_i$ and $B_{-1} = I_m$. Then, for $k=0,1,\dots, j$, \begin{equation}\label{BB_k*} B_k B_k^* = I_m - \sum\limits_{i=0}^k \bita_i \eta_i^T \end{equation} almost everywhere on $\Tt$. \end{lemma} \begin{proof} By equation \eqref{W0*Wj*WjW0}, for $k= 0, \dots,j$, \begin{equation}\label{BB_k**} B_k B_k^* = I_m - \sum\limits_{i=0}^{k} B_{i-1}B_{i-1}^*\bita_i\eta_i^T B_{i-1}B_{i-1}^*.\end{equation} Thus to prove condition \eqref{BB_k*} it suffices to show that, for $k=0,\dots,j,$ $$B_{k-1}B_{k-1}^*\bita_k\eta_k^T B_{k-1}B_{k-1}^* =\bita_k\eta_k^T.$$ For $k=0,$ $$B_{-1} B_{-1}^* \bita_0 \eta_0^T B_{-1} B_{-1}^*=I_m \bita_0 \eta_0^T I_m =\bita_0 \eta_0^T ,$$ and so, equation \eqref{BB_k**} yields $$B_0 B_0^* = I_m - \bita_0 \eta_0^T. $$ For $k=1,$ $$B_{0}B_{0}^*\bita_1\eta_1^T B_{0}B_{0}^* = (I_m - \bita_0 \eta_0^T) \bita_1 \eta_1^T (I_m - \bita_0 \eta_0^T).$$ By Proposition \ref{onxi}, $\eta_1$ and $\eta_0$ are pointwise orthogonal almost everywhere on $\Tt,$ hence $$B_{0}B_{0}^*\bita_1\eta_1^T B_{0}B_{0}^*= \bita_1 \eta_1^T,$$ and in view of equation \eqref{BB_k**}, we get $$B_1B_1^* = I_m - \bita_0 \eta_0^T - \bita_1\eta_1^T. 
$$ Suppose \begin{equation}\label{BBk*} B_{\ell-1}B_{\ell-1}^*\bita_\ell\eta_\ell^T B_{\ell-1}B_{\ell-1}^* =\bita_\ell \eta_\ell^T \end{equation} holds for every $\ell \le k$, where $0\le k \le j$, almost everywhere on $\Tt$. By equations \eqref{BB_k**} and \eqref{BBk*}, this implies $$ B_k B_k^* = I_m - \sum\limits_{i=0}^k \bita_i \eta_i^T .$$ Let us show that $$B_{k}B_{k}^*\bita_{k+1}\eta_{k+1}^T B_{k}B_{k}^* =\bita_{k+1}\eta_{k+1}^T .$$ Note that $$B_{k}B_{k}^*\bita_{k+1}\eta_{k+1}^T B_{k}B_{k}^* = (I_m - \sum\limits_{i=0}^k \bita_i \eta_i^T) \bita_{k+1}\eta_{k+1}^T (I_m - \sum\limits_{i=0}^k \bita_i \eta_i^T).$$ By Proposition \ref{onxi}, the set $\{ \bita_i(z)\}_{i=0}^{k+1}$ is pointwise orthogonal almost everywhere on $\Tt,$ and therefore $$B_{k}B_{k}^*\bita_{k+1}\eta_{k+1}^T B_{k}B_{k}^* = \bita_{k+1}\eta_{k+1}^T .$$ Thus, by equation \eqref{BB_k**}, $$B_{k+1} B_{k+1}^* = I_m - \sum\limits_{i=0}^{k+1} \bita_i \eta_i^T$$ almost everywhere on $\Tt$, and the assertion has been proved. \end{proof} The following statement asserts that any function $Q_{j+1} \in \Omega_{j}$ necessarily satisfies equations \eqref{g-qi}. \begin{proposition}\label{g-qjj} Every $Q_{j+1} \in H^\infty(\Dd,\CCmn)$ which minimises $$(s_0^\infty(G-Q),s_1^\infty(G-Q),\dots,s_j^\infty(G-Q)) $$ satisfies $$(G-Q_{j+1})x_i = t_i y_i, (G-Q_{j+1})^*y_i=t_ix_i,\quad \text{for}\; i=0,1,\dots,j. $$ \end{proposition} \begin{proof} By the recursive step of the algorithm from Subsection \ref{Alg_statement}, every $Q_{j+1} \in H^\infty(\Dd,\CCmn)$ that minimises $$(s_0^\infty(G-Q),\dots, s_{j-1}^\infty(G-Q))$$ satisfies $$(G-Q_{j+1})x_i = t_i y_i,\quad (G-Q_{j+1})^*y_i=t_ix_i\quad \text{for}\quad i=0,1,\dots,j-1. 
$$ Hence it suffices to show that $Q_{j+1}$ satisfies $$ (G-Q_{j+1})x_j = t_j y_j, (G-Q_{j+1})^*y_j=t_jx_j .$$ Notice that, by the inductive step, the following diagram commutes \begin{equation}\label{tj-1commj} \begin{array}{clllll} H^2(\Dd,\Cc^{n-j}) &\xrightarrow{M_{\bar{\alpha}_0\cdots \bar{ \alpha}_{j-1}}} & \mathcal{K}_{j} &\xrightarrow{\xi_{(j-1)} \telwe \cdot}& \xi_{(j-1)} \telwe H^2 (\Dd, \Cc^n)=X_{j}\\ \Big\downarrow\rlap{$\scriptstyle H_{F_{j}} $} & ~ &\Big\downarrow\rlap{$\scriptstyle \Gamma_{j}$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_{j}$} \\ H^2(\Dd,\Cc^{m-j})^\perp &\xrightarrow{M_{\beta_0 \cdots \beta_{j-1}}}&\mathcal{L}_{j} &\xrightarrow{\bar{\eta}_{(j-1)} \telwe \cdot } & \bar{\eta}_{(j-1)} \telwe H^2 (\Dd, \Cc^m)^\perp =Y_{j}, \end{array}\end{equation} where the maps $M_{\bar{\alpha}_0\cdots \bar{ \alpha}_{j-1}},$ $ M_{\beta_0 \cdots \beta_{j-1}},$ $(\xi_{(j-1)} \telwe \cdot)\colon \mathcal{K}_{j}\to X_{j} $ and $(\bar{\eta}_{(j-1)} \telwe \cdot)\colon\mathcal{L}_{j}\to Y_{j} $ are unitaries, and $F_{j} \in H^\infty(\Dd,\Cc^{(m-j)\times(n-j)})+C(\Tt,\Cc^{(m-j)\times(n-j)}).$ By equation \eqref{epsilonnj-1}, the set of all level $j-1$ superoptimal error functions $$\mathcal{E}_{j-1} =\{G-Q:Q\in \Omega_{j-1} \}$$ satisfies \begin{equation}\label{epsilonnj-11} \mathcal{E}_{j-1} = W_0^* W_1^* \cdots W_{j-1}^* \begin{pmatrix} t_0 u_0 & 0 & \cdots & 0 &0_{1\times (n-j)}\\ 0 &t_1u_1 & \dots & 0 &0_{1\times (n-j)} \\ \vdots & \vdots & \ddots & \vdots &\vdots \\ 0 & 0 & \cdots &t_{j-1}u_{j-1} & 0\\ 0_{(m-j)\times 1} & 0_{(m-j)\times 1} & \dots &\dots & (F_{j}+H^\infty)\cap B(t_{j-1}) \end{pmatrix}V_{j-1}^* \cdots V_0^* , \end{equation} for some $F_{j} \in H^\infty(\Dd,\Cc^{(m-j)\times (n-j)})+C(\Tt,\Cc^{(m-j)\times (n-j)}),$ for quasi-continuous unimodular functions $u_i = \frac{\bar{z} \bar{h}_i}{h_i}$, $i=0,\dots, j-1,$ and the closed ball $B(t_{j-1})$ of radius $t_{j-1}$ in $L^\infty(\Tt,\Cc^{(m-j)\times (n-j)}).$ Consider some $Q_{j+1}\in 
\Omega_{j-1},$ so that, according to equation \eqref{epsilonnj-11}, \begin{equation}\label{wow1vovj} \footnotesize{\begin{pmatrix} I_{j-1} & 0 \\ 0& \tilde{W}_{j-1} \end{pmatrix}\cdots W_0 (G-Q_{j+1}) V_0\cdots \begin{pmatrix} I_{j-1} & 0 \\ 0& \tilde{V}_{j-1} \end{pmatrix} = \scriptsize{\begin{pmatrix} t_0 u_0 & 0 &\dots &0 \\ 0&t_1u_1 &\dots &0 \\ \vdots & \hspace{10ex}\ddots & &\vdots \\ 0 &\cdots &t_{j-1}u_{j-1} & 0\\ 0 & \dots & \dots & F_{j}-\tilde{Q}_{j} \end{pmatrix}}}, \end{equation} where $\tilde{Q}_{j} \in H^\infty(\Dd,\Cc^{(m-j)\times(n-j)})$ is at minimal distance from $F_j.$ Let $B_j = \beta_0 \cdots \beta_j $ and let $A_j = \alpha_0 \cdots \alpha_j$. \index{$B_j$} \index{$A_j$} By equations \eqref{tildevjwjj}, we have $$\begin{array}{lll} &\begin{pmatrix} I_{j-1} & 0 \\ 0& \tilde{W}_{j-1} \end{pmatrix}\cdots W_0 (G-Q_{j+1}) V_0\cdots \begin{pmatrix} I_{j-1} & 0 \\ 0& \tilde{V}_{j-1} \end{pmatrix}\vspace{2ex}\\ &=\footnotesize{\begin{pmatrix} t_0 u_0 & 0 &\dots &0 \\ \vdots &\ddots & \dots &\vdots \\ 0 & \dots &\eta_{j-1}^T B_{j-2} B_{j-2}^* (G-Q_{j+1})\bar{A}_{j-2}A_{j-2}^T\xi_{j-1} & \eta_{j-1}^T B_{j-2} B_{j-2}^* (G-Q_{j+1}) \bar{A}_{j-1} \\ 0& \dots & B_{j-1}^*(G-Q_{j+1})\bar{A}_{j-2}A_{j-2}^T\xi_{j-1} & B_{j-1}^* (G-Q_{j+1})\bar{A}_{j-1} \end{pmatrix}},\end{array}$$ which, combined with equation \eqref{wow1vovj}, yields \begin{equation}\label{f-qj} B_{j-1}^* (G-Q_{j+1})\bar{A}_{j-1}= F_j -\tilde{Q}_j. \end{equation} Since $\tilde{Q}_{j}$ is at minimal distance from $F_{j},$ $$\|F_{j}-\tilde{Q}_{j}\|_\infty=\|H_{F_{j}}\|=t_{j}.$$ Note that, if $(\hat{x}_{j},\hat{y}_{j})$ is a Schmidt pair for $H_{F_{j}}$ corresponding to $t_{j},$ then, by Theorem \ref{1.7}, $$(F_{j}-\tilde{Q}_{j})\hat{x}_{j} = t_{j} \hat{y}_{j}, \quad (F_{j}-\tilde{Q}_{j})^*\hat{y}_{j} = t_{j} \hat{x}_{j}.
$$In view of equation \eqref{f-qj}, the latter equations imply $$ B_{j-1}^* (G-Q_{j+1}) \bar{A}_{j-1} \hat{x}_{j} = t_{j} \hat{y}_{j},\quad A_{j-1}^T (G-Q_{j+1})^* B_{j-1}\hat{y}_{j} = t_{j}\hat{x}_{j}.$$ By equation \eqref{hatxj-1}, $$\hat{x}_{j} = A_{j-1}^T x_{j}, \quad \hat{y}_{j} =B_{j-1}^* y_{j}.$$ Thus $$B_{j-1}^* (G-Q_{j+1}) \bar{A}_{j-1} \hat{x}_{j} = B_{j-1}^* (G-Q_{j+1}) x_{j} = t_{j} B_{j-1}^*y_j, $$ or equivalently, $$B_{j-1}^* \big( (G-Q_{j+1})x_j - t_jy_j\big)=0, $$ and since, by the inductive hypothesis, $M_{B_{j-1}}$ is a unitary map, we have $$ (G-Q_{j+1})x_j =t_jy_j.$$ Furthermore $$A_{j-1}^T (G-Q_{j+1})^* B_{j-1}\hat{y}_{j} = A_{j-1}^T (G-Q_{j+1})^*y_{j} =t_j A_{j-1}^Tx_j, $$ or equivalently, $$A_{j-1}^T \big((G-Q_{j+1})^*y_{j} - t_jx_j \big)=0. $$ By the inductive hypothesis, $M_{\bar{A}_{j-1}}$ is a unitary map, hence $$(G-Q_{j+1})^*y_{j} = t_jx_j,$$ and therefore $Q_{j+1}$ satisfies the required equations. \end{proof} \begin{lemma}\label{corona} Let \begin{equation} \label{tildevjwjj-c} \tilde{V}_{i} = \begin{pmatrix} \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} & \bar{ \alpha}_{i} \end{pmatrix}, \quad \tilde{W}_{i}^T = \begin{pmatrix} \beta_{i-1}^T \cdots \beta_0^T \eta_{i} & \bar{\beta}_{i} \end{pmatrix}, \;\; i=0, 1, \dots, j, \end{equation} be unitary-valued functions, as described in Lemma \ref{2.2}. Then $$\alpha_l^T H^2(\Dd,\Cc^{n-l}) = H^2(\Dd,\Cc^{n-l-1}) $$ and $$ \beta_l^* H^2(\Dd,\Cc^{m-l})^\perp = H^2(\Dd,\Cc^{m-l-1})^\perp, $$ for all $l=0,\dots,j.$ \end{lemma} \begin{proof} Recall that, by Lemma \ref{L6.2}, for all $l=0,\dots,j,$ the inner, co-outer, quasi-continuous functions $\alpha_l, \beta_l$ of types $(n-l)\times (n-l-1)$ and $(m-l)\times (m-l-1)$ respectively, are left invertible. The rest of the proof is similar to the proofs of Lemmas \ref{a0h2} and \ref{beta0*h2}. \end{proof} As preparation for the proof of the main inductive step, we prove several propositions.
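The propositions below repeatedly use a pointwise linear-algebra fact that also underlies Lemmas \ref{xi=AkAkT} and \ref{eta=BkBk*}: if $\xi_0(z),\dots,\xi_{k+1}(z)$ are orthonormal in $\Cc^n$, then $P_k = I_n - \sum_{i=0}^{k}\xi_i\xi_i^*$ is the orthogonal projection onto $\{\xi_0,\dots,\xi_k\}^\perp$, so it fixes $\xi_{k+1}$ and $P_k\,\xi_{k+1}\xi_{k+1}^*\,P_k = \xi_{k+1}\xi_{k+1}^*$. The following is a minimal numerical sketch of this pointwise identity only (not of the operator-theoretic statements); the random orthonormal columns and the dimensions $n$, $k$ are illustrative stand-ins for the values $\xi_i(z)$ at a fixed $z\in\Tt$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3  # illustrative ambient dimension and number of "earlier" vectors

# Random orthonormal columns in C^n, standing in for xi_0(z), ..., xi_{k+1}(z).
M = rng.standard_normal((n, k + 2)) + 1j * rng.standard_normal((n, k + 2))
Q, _ = np.linalg.qr(M)                      # columns of Q are orthonormal
xis, xi_next = Q[:, : k + 1], Q[:, k + 1]

# P_k = I_n - sum_{i<=k} xi_i xi_i^*  (the pointwise value of bar{A}_k A_k^T).
P = np.eye(n) - xis @ xis.conj().T

# P_k is a self-adjoint idempotent annihilating each xi_i with i <= k ...
assert np.allclose(P @ P, P) and np.allclose(P.conj().T, P)
assert np.allclose(P @ xis, 0)

# ... and it fixes xi_{k+1}, giving the inductive step of Lemma xi=AkAkT:
rank1 = np.outer(xi_next, xi_next.conj())
assert np.allclose(P @ rank1 @ P, rank1)
print("pointwise projection identity verified")
```

The same computation with $\bar\eta_i$ in place of $\xi_i$ illustrates the inductive step of Lemma \ref{eta=BkBk*}.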
\begin{proposition}\label{xitelakxik0} Let \begin{equation}\label{tildevjwjj-Kj} \tilde{V}_{i} = \begin{pmatrix} \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} & \bar{\alpha}_{i} \end{pmatrix} \end{equation} be unitary-valued functions, for $i=0,1,\dots, j$, as described in Lemma \ref{2.2}. Let $A_i= \alpha_0\alpha_1 \dots \alpha_i$, for $i = 0,1,\dots,j$, and $A_{-1} = I_n$. Let $$V_i = \begin{pmatrix} I_i & 0 \\ 0 & \tilde{V}_i \end{pmatrix}, \; \text{for} \; i = 0,1,\dots,j, $$ and let \begin{equation}\label{calkj+1}\mathcal{K}_{j+1}= V_0 \cdots V_{j} \begin{pmatrix} 0_{(j+1)\times 1} \\ H^2(\Dd,\Cc^{n-j-1}) \end{pmatrix}.\end{equation} Let $\xi_{(j)} = \xi_0 \telwe \dots \telwe \xi_j.$ Then, $$\xi_{(j)} \telwe \mathcal{K}_{j+1} = \xi_{(j)} \telwe H^2(\Dd,\Cc^n) $$ and the operator $(\xi_{(j)} \telwe \cdot) \colon \mathcal{K}_{j+1} \to \xi_{(j)} \telwe H^2(\Dd,\Cc^n)$ is unitary. \index{$\xi_{(i)}$} \end{proposition} \begin{proof} First let us prove that $$\xi_0 \telwe \dots \telwe \xi_j \telwe \mathcal{K}_{j+1} = \xi_0 \telwe \dots \telwe \xi_j \telwe H^2(\Dd,\Cc^n). $$ By equations \eqref{calkj+1} and \eqref{V0Vj}, \begin{equation}\label{K(j+1)} \mathcal{K}_{j+1} = \bar{A}_{j}H^2(\Dd,\Cc^{n-j-1}). \end{equation} By Lemma \ref{corona}, $$\alpha_l^T H^2(\Dd,\Cc^{n-l}) = H^2(\Dd,\Cc^{n-l-1}) $$ for all $l=0,\dots,j.$ Thus \begin{equation}\label{alphaH2} \begin{array}{lll} H^2(\Dd,\Cc^{n-j-1})&= \alpha_j^T H^2(\Dd,\Cc^{n-j})\\ &= \alpha_j^T \alpha_{j-1}^T H^2(\Dd,\Cc^{n-j+1})\\ &= \alpha_j^T \alpha_{j-1}^T \alpha_{j-2}^T H^2(\Dd,\Cc^{n-j+2})\\ &= \dots\\ &= \alpha_j^T \alpha_{j-1}^T \dots \alpha_1^T H^2(\Dd,\Cc^{n-1})\\ &= \alpha_j^T \alpha_{j-1}^T \dots \alpha_1^T \alpha_0^T H^2(\Dd,\Cc^{n})\\ &= A_j^T H^2(\Dd,\Cc^{n}). \end{array} \end{equation} By equations \eqref{K(j+1)} and \eqref{alphaH2}, \begin{equation}\label{K(j+1)-2} \mathcal{K}_{j+1} = \bar{A}_{j}H^2(\Dd,\Cc^{n-j-1})= \bar{A}_{j} A_j^T H^2(\Dd,\Cc^{n}).
\end{equation} By Lemma \ref{xi=AkAkT}, \begin{equation}\label{AAk-K} \bar{A}_j A_j^T = I_n - \sum\limits_{k=0}^j \xi_k \xi_k^*. \end{equation} By Proposition \ref{onxi}, $\{\xi_i(z)\}_{i=0}^j$ is an orthonormal set in $\Cc^n$ for almost every $z \in \Tt.$ Therefore, by equations \eqref{K(j+1)-2} and \eqref{AAk-K}, \begin{equation}\label{Kj+1H2} \begin{array}{lll} \xi_0 \telwe \dots \telwe \xi_j \telwe \mathcal{K}_{j+1} &= \xi_0 \telwe \dots \telwe \xi_j \telwe \bar{A}_{j} A_j^T H^2(\Dd,\Cc^{n})\\ &= \xi_0 \telwe \dots \telwe \xi_j \telwe (I_n - \sum\limits_{k=0}^j \xi_k \xi_k^*)H^2(\Dd,\Cc^n)\\ &= \xi_0 \telwe \dots \telwe \xi_j \telwe H^2(\Dd,\Cc^n). \end{array} \end{equation} To show that the operator $(\xi_{(j)} \telwe \cdot)\colon\mathcal{K}_{j+1} \to \xi_{(j)} \telwe H^2(\Dd,\Cc^n)$ is unitary, it suffices to prove that, for every $\vartheta \in \mathcal{K}_{j+1},$ $$\| \xi_{(j)} \telwe \vartheta\|_{L^2(\Tt,\we^{j+2}\Cc^n)} = \| \vartheta \|_{L^2(\Tt,\Cc^n)} .$$ \noindent Let $\vartheta \in \mathcal{K}_{j+1}.$ Then, by Proposition \ref{we}, we have $$\begin{array}{llll} \| \xi_{(j)} \telwe \vartheta\|_{L^2(\Tt,\we^{j+2}\Cc^n)}^2 &= \langle \xi_{(j)} \telwe \vartheta , \xi_{(j)} \telwe \vartheta \rangle_{L^2(\Tt,\we^{j+2}\Cc^n)} \vspace{2ex} \\ &= \displaystyle\frac{1}{2\pi} \int_0^{2\pi}\langle \xi_{(j)}(\eiu) \telwe \vartheta(\eiu) , \xi_{(j)}(\eiu) \telwe \vartheta(\eiu) \rangle_{\we^{j+2}\Cc^n} d\theta \vspace{2ex} \\ &= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \langle \xi_0(\eiu), \xi_0 (\eiu)\rangle_{\Cc^n} &\dots & \langle\xi_0(\eiu),\vartheta (\eiu)\rangle_{\Cc^n}\\ \langle\xi_1(\eiu) , \xi_0(\eiu) \rangle_{\Cc^n} & \dots &\langle \xi_1(\eiu) , \vartheta(\eiu) \rangle_{\Cc^n}\\ \vdots & \ddots & \vdots \\ \langle\vartheta(\eiu) , \xi_0(\eiu) \rangle_{\Cc^n}& \dots &\langle \vartheta(\eiu) , \vartheta(\eiu) \rangle_{\Cc^n} \end{pmatrix}d\theta .\end{array}$$ By Proposition \ref{onxi}, $\{\xi_i(z)\}_{i=0}^j$ is an orthonormal set in 
$\Cc^n$ for almost every $z \in \Tt.$ Thus the latter integral is equal to $$ \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} 1 & 0& \dots & \langle\xi_0(\eiu),\vartheta (\eiu)\rangle_{\Cc^n}\\ 0 &1& \dots &\langle \xi_1(\eiu) , \vartheta(\eiu) \rangle_{\Cc^n}\\ \vdots & ~&\ddots & \vdots \\ \langle\vartheta(\eiu) , \xi_0(\eiu) \rangle_{\Cc^n}& ~&\dots &\langle \vartheta(\eiu) , \vartheta(\eiu) \rangle_{\Cc^n} \end{pmatrix}d\theta. $$ Note that since $\vartheta \in \mathcal{K}_{j+1},$ $$ \vartheta = \bar{A}_j A_j^T \psi= (I_n - \sum\limits_{i=0}^j \xi_i \xi_i^*)\psi$$ for some $\psi \in H^2(\Dd,\Cc^{n}).$ Then, for almost every $\eiu \in \Tt,$ $$\begin{array}{lll} \langle \xi_k (\eiu), \vartheta(\eiu) \rangle_{\Cc^n} &= \langle \xi_k (\eiu) , (I_n - \sum\limits_{i=0}^j \xi_i \xi_i^*)(\eiu) \psi(\eiu) \rangle_{ \Cc^n}\\ & = \langle \xi_k (\eiu) ,\psi(\eiu) \rangle_{ \Cc^n} - \langle \xi_k(\eiu), \xi_k(\eiu) \rangle_{\Cc^n} \langle \xi_k(\eiu), \psi(\eiu) \rangle_{ \Cc^n} =0. \end{array}$$ Hence $$ \| \xi_{(j)} \telwe \vartheta\|_{L^2(\Tt,\we^{j+2}\Cc^n)}^2= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} 1 & ~&0 &\dots & 0\\ 0 & ~& 1& \dots &0\\ \vdots & ~& ~& \ddots & \vdots \\ 0&~ &~ & \dots &\langle \vartheta(\eiu) , \vartheta(\eiu) \rangle_{\Cc^n} \end{pmatrix}d\theta ,$$ which yields $$\| \xi_{(j)} \telwe \vartheta\|_{L^2(\Tt,\we^{j+2}\Cc^n)}^2 = \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \| \vartheta(\eiu)\|_{\Cc^n}^2 \,d\theta = \| \vartheta \|_{L^2(\Tt,\Cc^n)}^2. $$ Therefore, the operator $ (\xi_{(j)} \telwe \cdot)\colon \mathcal{K}_{j+1} \to \xi_{(j)} \telwe H^2(\Dd,\Cc^n)$ is unitary. \end{proof} \begin{proposition}\label{etatelwelj} Let \begin{equation} \tilde{W}_{i}^T = \begin{pmatrix} \beta_{i-1}^T \cdots \beta_0^T \eta_{i} & \bar{\beta}_{i} \end{pmatrix} \end{equation} be unitary-valued functions, for $i=0,1, \dots, j$, as described in Lemma \ref{2.2}. Let $B_i= \beta_0 \beta_1\dots\beta_i$, for $i=0,1,\dots, j$, and $B_{-1} = I_m$.
Let $W_{i}^T = \begin{pmatrix} I_i & 0 \\ 0 & \tilde{W}_i^T \end{pmatrix},\; \text{for} \; i=0,1,\dots, j, $ and let \begin{equation}\label{callj+1} \mathcal{L}_{j+1} = W_0^* \cdots W_j^* \begin{pmatrix} 0_{(j+1)\times 1} \\ H^2(\Dd,\Cc^{m-j-1})^\perp \end{pmatrix}.\end{equation} Let $\bar{ \eta}_{(j)} = \bar{ \eta}_0 \telwe \dots \telwe \bar{ \eta}_j.$ Then, $$\bar{ \eta}_{(j)} \telwe \mathcal{L}_{j+1} = \bar{ \eta}_{(j)} \telwe H^2(\Dd,\Cc^m)^\perp$$ and the operator $(\bar{\eta}_{(j)} \telwe \cdot)\colon \mathcal{L}_{j+1} \to \bar{\eta}_{(j)} \telwe H^2(\Dd,\Cc^m)^\perp$ is unitary. \index{$\bar{\eta}_{(j)}$} \end{proposition} \begin{proof} First let us prove that $$\bar{ \eta}_0 \telwe \dots \telwe \bar{ \eta}_j\telwe \mathcal{L}_{j+1} = \bar{ \eta}_0 \telwe \dots \telwe \bar{ \eta}_j \telwe H^2(\Dd,\Cc^m)^\perp.$$ By equations \eqref{callj+1} and \eqref{WjW0*}, \begin{equation}\label{L(j+1)} \mathcal{L}_{j+1} = B_j H^2(\Dd,\Cc^{m-j-1})^\perp. \end{equation} By Lemma \ref{corona}, $$ \beta_\ell^* H^2(\Dd,\Cc^{m-\ell})^\perp = H^2(\Dd,\Cc^{m-\ell-1})^\perp, $$ for all $\ell=0,\dots,j.$ Thus \begin{equation}\label{betaH2} \begin{array}{lll} H^2(\Dd,\Cc^{m-j-1})^\perp &= \beta_j^* H^2(\Dd,\Cc^{m-j})^\perp\\ & = \beta_j^*\beta_{j-1}^* H^2(\Dd,\Cc^{m-j+1})^\perp\\ & = \beta_j^*\beta_{j-1}^*\beta_{j-2}^* H^2(\Dd,\Cc^{m-j+2})^\perp\\ &= \dots\\ & = \beta_j^*\beta_{j-1}^* \dots \beta_1^* H^2(\Dd,\Cc^{m-1})^\perp\\ & = \beta_j^*\beta_{j-1}^* \dots \beta_1^*\beta_0^* H^2(\Dd,\Cc^{m})^\perp\\ & = B_j^*H^2(\Dd,\Cc^m)^\perp. \end{array} \end{equation} By equations \eqref{L(j+1)} and \eqref{betaH2}, \begin{equation}\label{L(j+1)-2} \mathcal{L}_{j+1} = B_j H^2(\Dd,\Cc^{m-j-1})^\perp = B_j B_j^*H^2(\Dd,\Cc^m)^\perp. \end{equation} By Lemma \ref{eta=BkBk*}, \begin{equation}\label{BB_k*-L} B_j B_j^* = I_m - \sum\limits_{i=0}^j \bita_i \eta_i^T .
\end{equation} Thus \begin{equation}\label{L(j+1)-eta} \mathcal{L}_{j+1} = (I_m - \sum\limits_{i=0}^j \bita_i \eta_i^T)H^2(\Dd,\Cc^m)^\perp. \end{equation} By Proposition \ref{onxi}, $\{\bar{\eta}_i(z)\}_{i=0}^j$ is an orthonormal set in $\Cc^m$ for almost every $z \in \Tt.$ Therefore, by equations \eqref{L(j+1)-2} and \eqref{L(j+1)-eta}, \begin{equation}\label{Lj+1=H2} \begin{array}{lll} \bar{ \eta}_0 \telwe \dots \telwe \bar{ \eta}_j\telwe \mathcal{L}_{j+1} &= \bar{ \eta}_0 \telwe \dots \telwe \bar{ \eta}_j\telwe (I_m - \sum\limits_{i=0}^j \bita_i \eta_i^T)H^2(\Dd,\Cc^m)^\perp\\ &= \bar{ \eta}_0 \telwe \dots \telwe \bar{ \eta}_j \telwe H^2(\Dd,\Cc^m)^\perp. \end{array} \end{equation} \noindent To show that the operator $(\bar{\eta}_{(j)} \telwe \cdot)\colon \mathcal{L}_{j+1} \to \bar{\eta}_{(j)} \telwe H^2(\Dd,\Cc^m)^\perp$ is unitary, it suffices to prove that, for every $\varphi \in \mathcal{L}_{j+1},$ $$\|\bar{\eta}_{(j)} \telwe \varphi \|_{L^2(\Tt,\we^{j+2}\Cc^m)} = \| \varphi \|_{L^2(\Tt,\Cc^m)}. 
$$ \noindent By Proposition \ref{we}, we have $$ \begin{array}{clllll}\|\bar{\eta}_{(j)} \telwe \varphi \|_{L^2(\Tt,\we^{j+2}\Cc^m)}^2 &= \langle \bar{\eta}_{(j)} \telwe \varphi , \bar{\eta}_{(j)} \telwe \varphi \rangle_{ L^2(\Tt,\we^{j+2}\Cc^m)} \vspace{2ex} \\ &= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \langle \bar{\eta}_0(\eiu) , \bar{\eta}_0(\eiu) \rangle_{\Cc^m} & \dots & \langle\bar{\eta}_0(\eiu) , \varphi (\eiu) \rangle_{\Cc^m} \\ \langle\bar{\eta}_1 (\eiu), \bar{\eta}_0(\eiu) \rangle_{\Cc^m}& \dots & \langle\bar{\eta}_1 (\eiu),\varphi(\eiu) \rangle_{\Cc^m} \\ \vdots & \ddots & \vdots \\ \langle \varphi(\eiu), \bar{\eta}_0(\eiu) \rangle_{\Cc^m}& \dots &\langle \varphi(\eiu), \varphi(\eiu) \rangle_{\Cc^m} \end{pmatrix} d\theta .\end{array} $$ By Proposition \ref{onxi}, the set $\{ \bar{\eta}_i(z)\}_{i=0}^j$ is orthonormal in $\Cc^m$ for almost every $z \in \Tt.$ Then the latter integral is equal to $$\displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix}1 & 0& \dots & \langle\bar{\eta}_0(\eiu) , \varphi (\eiu) \rangle_{\Cc^m} \\ 0& 1&\dots & \langle\bar{\eta}_1 (\eiu),\varphi(\eiu) \rangle_{\Cc^m} \\ \vdots & ~ &\ddots & \vdots \\ \langle \varphi(\eiu), \bar{\eta}_0(\eiu) \rangle_{\Cc^m}& \dots& \dots &\langle \varphi(\eiu), \varphi(\eiu) \rangle_{\Cc^m} \end{pmatrix} d\theta .$$ Note that since $\varphi \in \mathcal{L}_{j+1},$ $$ \varphi = (I_m - \sum\limits_{i=0}^j \bita_i \eta_i^T) \psi,$$ for some $\psi \in H^2(\Dd,\Cc^m)^\perp. $ Then, for almost every $\eiu \in \Tt,$ $$\begin{array}{lll} \langle\bar{\eta}_k (\eiu),\varphi(\eiu) \rangle_{\Cc^m} &= \langle\bar{\eta}_k (\eiu) , (I_m - \sum\limits_{i=0}^j \bita_i \eta_i^T)(\eiu) \psi(\eiu) \rangle_{\Cc^m} \vspace{2ex} \\ &= \langle\bar{\eta}_k (\eiu) ,\psi(\eiu) \rangle_{\Cc^m} - \langle\bar{\eta}_k (\eiu) ,\bar{\eta}_k (\eiu)\rangle_{\Cc^m} \langle\bar{\eta}_k (\eiu) ,\psi(\eiu) \rangle_{\Cc^m} =0.
\vspace{2ex} \end{array} $$ Thus $$ \begin{array}{clll} \|\bar{\eta}_{(j)} \telwe \varphi \|_{L^2(\Tt,\we^{j+2}\Cc^m)}^2 = &\displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix}1 & 0& \dots & 0\\ 0& 1&\dots &0\\ \vdots & ~ &\ddots & \vdots \\ 0& \dots& \dots &\langle \varphi(\eiu), \varphi(\eiu) \rangle_{\Cc^m} \end{pmatrix} d\theta \vspace{2ex} \\ &= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \langle \varphi(\eiu), \varphi(\eiu) \rangle_{\Cc^m} d\theta = \| \varphi \|_{L^2(\Tt,\Cc^m)}^2. \end{array}$$ Hence the operator $(\bar{\eta}_{(j)} \telwe \cdot)\colon \mathcal{L}_{j+1} \to \bar{\eta}_{(j)} \telwe H^2(\Dd,\Cc^m)^\perp$ is unitary.\end{proof} \begin{proposition}\label{callj+1perp} With the notation of Proposition \ref{etatelwelj} $$\mathcal{L}_{j+1}^\perp = \{ f \in L^2(\Tt,\Cc^m): \beta_{j}^* \cdots \beta_{0}^*f \in H^2(\Dd,\Cc^{m-j-1})\}. $$ \end{proposition} \begin{proof} Clearly $\mathcal{L}_{j+1} = \beta_0 \cdots \beta_j H^2(\Dd,\Cc^{m-j-1})^\perp.$ The general element of $\beta_0 \cdots \beta_j H^2(\Dd,\Cc^{m-j-1})^\perp$ is $\beta_0 \cdots \beta_j \bar{z} \bar{g}$ with $ g \in H^2 (\Dd,\Cc^{m-j-1})$. 
A function $f\in L^2(\Tt,\Cc^m)$ belongs to $\mathcal{L}_{j+1}^\perp$ if and only if $$\langle f, \beta_0 \cdots \beta_j \bar{z} \bar{g}\rangle_{L^2(\Tt,\Cc^m)}=0 \quad \text{for all} \quad g \in H^2(\Dd,\Cc^{m-j-1}) $$ if and only if $$\displaystyle \frac{1}{2\pi} \int_0^{2\pi} \langle f(\eiu), \beta_0(\eiu)\cdots \beta_j(\eiu) e^{-i\theta} \bar{g}(\eiu) \rangle_{\Cc^m}d\theta =0 \quad \text{for all} \quad g \in H^2(\Dd,\Cc^{m-j-1}) $$ if and only if $$ \displaystyle \frac{1}{2\pi} \int_0^{2\pi} \langle \beta_j^*(\eiu)\cdots\beta_0^*(\eiu)f(\eiu), e^{-i\theta} \bar{g}(\eiu) \rangle_{\Cc^{m-j-1}}d\theta =0 \quad \text{for all} \quad g \in H^2(\Dd,\Cc^{m-j-1}).$$ \noindent The latter statement is equivalent to the assertion that $\beta_j^*\cdots \beta_0^* f $ is orthogonal to \linebreak$H^2(\Dd,\Cc^{m-j-1})^\perp$ in $L^2(\Tt,\Cc^{m-j-1}),$ which holds if and only if $$\beta_j^*\cdots \beta_0^* f \in H^2(\Dd,\Cc^{m-j-1}).$$ Thus $$\mathcal{L}_{j+1}^\perp = \{ f \in L^2(\Tt,\Cc^m): \beta_{j}^* \cdots \beta_{0}^*f \in H^2(\Dd,\Cc^{m-j-1})\} $$ as required. \end{proof} Let us proceed to the main theorem of this section. \begin{theorem}\label{Tkismultipleofhankel} Let $m,n$ be positive integers such that $\min(m,n)\geq2.$ Let $G$ be in $H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn)$.
In the notation of the algorithm \ref{Alg_statement}, let $$(\xi_0 \telwe \dots \telwe \xi_{j-1}\telwe v_{j}, \bar{\eta}_0\telwe \dots \telwe \bar{\eta}_{j-1} \telwe w_{j} ) $$ be a Schmidt pair for $T_{j}$ corresponding to $t_{j}= \|T_{j}\| \neq 0.$ Let $h_{j} \in H^2(\Dd,\Cc)$ be the scalar outer factor of $$\xi_0 \telwe \dots \telwe \xi_{j-1}\telwe v_{j}.$$ Let $$x_{j} = (I_{n} - \xi_0 \xi_0^* - \dots - \xi_{j-1}\xi_{j-1}^* ) v_{j} ,$$ $$y_{j} = (I_{m} - \bar{\eta}_0 \eta_0^T - \dots - \bar{\eta}_{j-1} \eta_{j-1}^T )w_{j}$$ and $$\xi_{j} = \frac{x_{j}}{h_{j}} , \quad \bar{\eta}_{j} = \frac{z y_{j}}{\bar{h}_{j}} .$$ For $i=0,1,\dots, j$, let \begin{equation} \label{tildevjwjj+1} \tilde{V}_{i} = \begin{pmatrix} \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} & \bar{ \alpha}_{i} \end{pmatrix}, \quad \tilde{W}_{i}^T = \begin{pmatrix} \beta_{i-1}^T \cdots \beta_0^T \eta_{i} & \bar{\beta}_{i} \end{pmatrix} \end{equation} be unitary-valued functions, as described in Lemma \ref{2.2}. Let $$V_{j} = \begin{pmatrix} I_{j} & 0 \\ 0 & \tilde{V}_{j} \end{pmatrix}, \quad W_{j} = \begin{pmatrix} I_{j} & 0 \\ 0 & \tilde{W}_{j} \end{pmatrix}. $$ Let $A_j= \alpha_0\alpha_1 \dots \alpha_j$, $A_{-1} = I_n$, $B_j= \beta_0 \beta_1\dots\beta_j$ and $B_{-1} = I_m$. Let $$X_{j+1} = \xi_0 \telwe \dots \telwe \xi_{j} \telwe H^2(\Dd,\Cc^n) \subset H^2(\Dd,\we^{j+2}{\Cc^n}), $$ and let $$Y_{j+1} = \bar{ \eta}_0 \telwe \dots \telwe \bar{ \eta}_{j} \telwe H^2(\Dd,\Cc^m)^\perp \subset H^2(\Dd,\we^{j+2}\Cc^m)^\perp. $$ Let $$T_{j+1} (\xi_0 \telwe \dots \telwe \xi_j \telwe x) = P_{Y_{j+1}} (\bar{ \eta}_0 \telwe \dots \telwe \bar{ \eta}_{j} \telwe (G-Q_{j+1})x )$$ for all $x \in H^2(\Dd,\Cc^n)$, where $Q_{j+1}$ satisfies \begin{equation} \label{G_Q(j+1)xy} (G-Q_{j+1})x_i = t_i y_i, \; \text{and}\; \;(G-Q_{j+1})^*y_i=t_ix_i,\quad \text{for}\; i=0,1,\dots,j. 
\end{equation} Let \begin{equation}\label{k(j+1)l{j+1}} \mathcal{K}_{j+1} = V_0 \cdots V_{j} \begin{pmatrix} 0_{(j+1)\times 1} \\ H^2(\Dd,\Cc^{n-j-1}) \end{pmatrix}, \quad \mathcal{L}_{j+1} = W_0^* \cdots W_j^* \begin{pmatrix} 0_{(j+1)\times 1} \\ H^2(\Dd,\Cc^{m-j-1})^\perp \end{pmatrix}. \end{equation} Let the operator $\Gamma_{j+1} \colon \mathcal{K}_{j+1} \to \mathcal{L}_{j+1}$ be given by $$\Gamma_{j+1} = P_{\mathcal{L}_{j+1}} M_{G-Q_{j+1}}|_{\mathcal{K}_{j+1}}.$$ Then \begin{itemize} \item[{\rm (i)}] The maps $$ M_{\bar{A}_j} \colon H^2(\Dd,\Cc^{n-j-1}) \to \mathcal{K}_{j+1} \colon x \mapsto \bar{A}_j x,\;\; \text{and} \;\; M_{B_j}\colon H^2(\Dd,\Cc^{m-j-1})^\perp \to \mathcal{L}_{j+1} \colon y \mapsto {B_j} y $$ are unitaries. \item[{\rm (ii)}] The maps $(\xi_0\telwe \dots \telwe \xi_j \telwe \cdot)\colon\mathcal{K}_{j+1}\to X_{j+1},$ $(\bita_0\telwe \dots \telwe \bita_j \telwe\cdot)\colon\mathcal{L}_{j+1}\to Y_{j+1} $ are unitaries. \item[{\rm (iii)}] the following diagram commutes \begin{equation}\label{t(j+1)comm} \begin{array}{clllll} H^2(\Dd,\Cc^{n-j}) &\xrightarrow{M_{\bar{\alpha}_0\cdots \bar{ \alpha}_{j}}} & \mathcal{K}_{j+1} &\xrightarrow{\xi_{(j)} \telwe \cdot}& \xi_{(j)} \telwe H^2 (\Dd, \Cc^n)=X_{j+1}\\ \Big\downarrow\rlap{$\scriptstyle H_{F_{j+1}} $} & ~ &\Big\downarrow\rlap{$\scriptstyle \Gamma_{j+1}$} &~&\hspace{3ex}\Big\downarrow\rlap{$\scriptstyle T_{j+1}$} \\ H^2(\Dd,\Cc^{m-j})^\perp &\xrightarrow{M_{\beta_0 \cdots \beta_{j}}}&\mathcal{L}_{j+1} &\xrightarrow{\bar{\eta}_{(j)} \telwe \cdot } & \bar{\eta}_{(j)} \telwe H^2 (\Dd, \Cc^m)^\perp =Y_{j+1}, \end{array}\end{equation} where $F_{j+1} \in H^\infty(\Dd,\Cc^{(m-j-1)\times(n-j-1)})+ C(\Tt,\Cc^{(m-j-1)\times(n-j-1)})$ is the function defined in Proposition \ref{tildvjwj}; \item [{\rm (iv)}] ${\Gamma}_{j+1}$ and $T_{j+1}$ are compact operators; \item[{\rm (v)}] $\|T_{j+1}\|=\|\Gamma_{j+1}\|=\|H_{F_{j+1}}\|=t_{j+1}.$ \end{itemize} \end{theorem} \begin{proof} {\rm (i)} It follows from Lemma 
\ref{3.1constr}. {\rm (ii)} follows from Propositions \ref{xitelakxik0} and \ref{etatelwelj}. {\rm (iii)} By Theorem \ref{1.8}, there exists a function $Q_{j+1} \in H^\infty(\Dd, \CCmn)$ such that the sequence $$\left(s_0^\infty(G-Q_{j+1}),s_1^\infty(G-Q_{j+1}), \dots , s_{j+1}^\infty(G-Q_{j+1}) \right)$$ is lexicographically minimized. By Proposition \ref{g-qjj}, any such $Q_{j+1}$ satisfies \begin{equation} \label{G_Qjxy} (G-Q_{j+1})x_i = t_i y_i, (G-Q_{j+1})^*y_i=t_ix_i,\quad \text{for}\; i=0,1,\dots,j. \end{equation} By Proposition \ref{Twell}, $T_{j+1}$ is well-defined and is independent of the choice of $Q_{j+1}\in H^\infty(\Dd,\CCmn)$ satisfying equations \eqref{G_Qjxy}. We can choose $Q_{j+1}$ which minimises $$\left(s_0^\infty(G-Q_{j+1}),s_1^\infty(G-Q_{j+1}), \dots , s_{j+1}^\infty(G-Q_{j+1}) \right),$$ and therefore satisfies equations \eqref{G_Qjxy}. Consider the following diagram. \begin{equation}\label{diagr1122} \begin{array}{clllll} &\mathcal{K}_{j+1} &\xrightarrow{\xi_0 \telwe\cdots \telwe \xi_{j} \telwe \cdot}& \xi_0 \telwe \cdots \telwe \xi_{j} \telwe H^2 (\Dd, \Cc^n)=X_{j+1}\\ &\Big\downarrow\rlap{$\scriptstyle \Gamma_{j+1}$} &~ &\hspace{13ex}\Big\downarrow\rlap{$\scriptstyle T_{j+1}$} \\ & \mathcal{L}_{j+1} &\xrightarrow{\bar{\eta}_0 \telwe \cdots \telwe \bar{{\eta}}_{j} \telwe \cdot } & \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe H^2 (\Dd, \Cc^m)^\perp =Y_{j+1}. \end{array}\end{equation} Let us prove first that diagram \eqref{diagr1122} commutes. By Proposition \ref{tildvjwj}, every $Q_{j+1}\in H^\infty(\Dd,\CCmn)$, which minimises $$\left(s_0^\infty(G-Q_{j+1}),s_1^\infty(G-Q_{j+1}), \dots , s_{j+1}^\infty(G-Q_{j+1}) \right),$$ satisfies the following equation (see equation \eqref{epsilonnj}). 
\begin{equation}\label{G-Q(j+1)} G - Q_{j+1} = W_0^* W_1^* \cdots W_{j}^* \begin{pmatrix} t_0 u_0 & 0 & \cdots & 0 &0_{1\times (n-j-1)}\\ 0 &t_1u_1 & \dots & 0 &0_{1\times (n-j-1)} \\ \vdots & \vdots & \ddots & \vdots &\vdots \\ 0 & 0 & \cdots &t_{j}u_{j} & 0\\ 0_{(m-j-1)\times 1} & 0_{(m-j-1)\times 1} & \dots & \dots & (F_{j+1}+H^\infty)\cap B(t_{j}) \end{pmatrix}V_{j}^* \cdots V_0^* . \end{equation} Thus \begin{equation}\label{G-Q(j+1)-2} (G - Q_{j+1}) V_0 \cdots V_{j} \begin{pmatrix} 0_{(j+1)\times 1}\\H^2(\Dd,\Cc^{n-j-1}) \end{pmatrix} = \hspace{6cm} \end{equation} $$ W_0^* W_1^* \cdots W_{j}^* \begin{pmatrix} t_0 u_0 & 0 & \cdots & 0 &0_{1\times (n-j-1)}\\ 0 &t_1u_1 & \dots & 0 &0_{1\times (n-j-1)} \\ \vdots & \vdots & \ddots & \vdots &\vdots \\ 0 & 0 & \cdots &t_{j}u_{j} & 0\\ 0_{(m-j-1)\times 1} & 0_{(m-j-1)\times 1} & \dots & \dots & (F_{j+1}+H^\infty)\cap B(t_{j}) \end{pmatrix} \begin{pmatrix} 0_{(j+1)\times 1}\\H^2(\Dd,\Cc^{n-j-1}) \end{pmatrix}, $$ for some $F_{j+1} \in H^\infty(\Dd,\Cc^{(m-j-1)\times (n-j-1)})+C(\Tt,\Cc^{(m-j-1)\times (n-j-1)}),$ where $u_i = \frac{\bar{z} \bar{h}_i}{h_i}$, $i=0,\dots, j,$ are quasi-continuous unimodular functions and $B(t_{j})$ is the closed ball of radius $t_j$ in $L^\infty(\Tt,\Cc^{(m-j-1)\times (n-j-1)})$. By equation \eqref{WjW0*}, $$W_0^* W_1^* \cdots W_{j}^*= \begin{pmatrix} \bar{\eta}_{0} & B_0B_0^*\bar{\eta}_{1} & \dots & B_{j-1} B_{j-1}^* \bar{\eta}_{j} & B_{j} \end{pmatrix}. $$ By equation \eqref{V0Vj}, \begin{equation}\label{V0Vj-2} V_0\cdots V_j = \begin{pmatrix} \xi_0 & \bar{A}_0 A_0^T \xi_1 & \bar{A}_1A_1^T\xi_2& \dots & \bar{A}_{j-1}A_{j-1}^T \xi_{j}& \bar{A}_{j} \end{pmatrix}. \end{equation} Therefore, by equation \eqref{G-Q(j+1)-2}, for every $\chi \in H^2(\Dd,\Cc^{n-j-1})$, \begin{equation}\label{(G-Q)Ax=BFX} (G-Q_{j+1}) \bar{A}_j \chi = B_j F_{j+1} \chi.
\end{equation} \noindent A typical element $x \in \mathcal{K}_{j+1}$ is of the form $ x = \bar{A}_j \chi,$ for some $\chi \in H^2(\Dd,\Cc^{n-j-1}).$ Then, by Proposition \ref{xitelakxik0}, $$ (\xi_0 \telwe\dots \telwe \xi_{j} \telwe \cdot)\bar{A}_j \chi = \xi_0 \telwe\dots \telwe \xi_{j} \telwe \bar{A}_j \chi \in X_{j+1}.$$ Therefore, by the definition of $T_{j+1}$ and by equation \eqref{(G-Q)Ax=BFX}, $$\begin{array}{cllll} T_{j+1} ( \xi_0 \telwe\dots \telwe \xi_{j} \telwe \bar{A}_j \chi) &= P_{Y_{j+1}} (\bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe (G-Q_{j+1}) \bar{A}_j \chi)\vspace{2ex} \\ &=P_{Y_{j+1}} (\bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe B_j F_{j+1} \chi).\end{array}$$ Furthermore, by the definition of $\Gamma_{j+1}$ and by equation \eqref{(G-Q)Ax=BFX}, $$\begin{array}{cllll} (\bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe \cdot)\Gamma_{j+1}(\bar{A}_j\chi) &= \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe P_{\mathcal{L}_{j+1}} (G-Q_{j+1})(\bar{A}_j\chi)\\ &= \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe P_{\mathcal{L}_{j+1}} B_j F_{j+1} \chi. 
\end{array} $$ In order to prove the commutativity of diagram \eqref{diagr1122}, we need to show that, for every $\chi \in H^2(\Dd,\Cc^{n-j-1})$, $$T_{j+1} ( \xi_0 \telwe\dots \telwe \xi_{j} \telwe \bar{A}_j \chi)= (\bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe \cdot)\Gamma_{j+1}(\bar{A}_j\chi) .$$ Hence we must prove that, for every $\chi \in H^2(\Dd,\Cc^{n-j-1})$, $$ \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe P_{\mathcal{L}_{j+1}} B_j F_{j+1} \chi \in Y_{j+1}$$ and that $$ \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe \left(B_j F_{j+1} \chi- P_{\mathcal{L}_{j+1}} B_j F_{j+1} \chi \right), \; \text{which is equal to } \; \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe P_{\mathcal{L}_{j+1}^\perp} B_j F_{j+1} \chi, $$ is orthogonal to $Y_{j+1}.$ Observe that, by Proposition \ref{etatelwelj}, for any $ \chi \in H^2(\Dd,\Cc^{n-j-1})$, $ \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe P_{\mathcal{L}_{j+1}} B_j F_{j+1} \chi$ is indeed an element of $Y_{j+1}.$ To prove that $$ \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j} \telwe P_{\mathcal{L}_{j+1}^\perp} B_j F_{j+1} \chi $$ is orthogonal to $Y_{j+1},$ it suffices to prove that $$\langle \bar{\eta}_{(j)} \telwe \Phi, \bar{\eta}_{(j)} \telwe \psi \rangle_{L^2(\Tt,\we^{j+2}\Cc^m)}=0$$ for $\Phi = P_{\mathcal{L}_{j+1}^\perp} B_j F_{j+1} \chi$, for all $ \chi \in H^2(\Dd,\Cc^{n-j-1})$ and for all $\psi \in H^2(\Dd,\Cc^{m})^\perp.$ By Proposition \ref{we}, $$\begin{array}{llll} &\langle \bar{\eta}_{(j)} \telwe \Phi, \bar{\eta}_{(j)} \telwe \psi \rangle_{L^2(\Tt,\we^{j+2}\Cc^m)} \vspace{3ex}\\ &\vspace{1ex}= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \langle \bar{\eta}_0(\eiu) , \bar{\eta}_0(\eiu) \rangle_{\Cc^m} & \dots & \langle\bar{\eta}_0(\eiu) , \psi (\eiu) \rangle_{\Cc^m} \\ \langle\bar{\eta}_1 (\eiu), \bar{\eta}_0(\eiu) \rangle_{\Cc^m}& \dots & \langle\bar{\eta}_1 (\eiu),\psi(\eiu) \rangle_{\Cc^m} \\ \vdots & \ddots & \vdots \\ \langle \Phi(\eiu), 
\bar{\eta}_0(\eiu) \rangle_{\Cc^m}& \dots &\langle \Phi(\eiu), \psi(\eiu) \rangle_{\Cc^m} \end{pmatrix} d\theta \end{array} $$ for all $ \psi \in H^2(\Dd,\Cc^{m})^\perp.$ Recall that, by Proposition \ref{onxi}, the set $\{\eta_i\}_{i=0}^j$ is an orthonormal set in $\Cc^m$ almost everywhere on $\Tt.$ Hence $$\begin{array}{llll} &\langle \bar{\eta}_{(j)} \telwe \Phi, \bar{\eta}_{(j)} \telwe \psi \rangle_{L^2(\Tt,\we^{j+2}\Cc^m)} \vspace{3ex}\\ & \vspace{1ex} = \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} 1 & 0&\dots & \langle\bar{\eta}_0(\eiu) , \psi (\eiu) \rangle_{\Cc^m} \\ 0& 1& \dots & \langle\bar{\eta}_1 (\eiu),\psi(\eiu) \rangle_{\Cc^m} \\ \vdots & ~&\ddots & \vdots \\ \langle \Phi(\eiu), \bar{\eta}_0(\eiu) \rangle_{\Cc^m}& ~&\dots &\langle \Phi(\eiu), \psi(\eiu) \rangle_{\Cc^m} \end{pmatrix} d\theta. \end{array}$$ Multiplying, for each $k=0,\dots,j$, the $k$-th column by $\langle \bar{\eta}_k(\eiu), \psi (\eiu) \rangle_{\Cc^m}$ and subtracting it from the last column of the determinant above, we obtain $$\displaystyle\frac{1}{2\pi} \int_0^{2\pi} \scriptsize{\det \begin{pmatrix} 1 & 0&\dots & 0 \\ 0& 1& \dots & 0 \\ \vdots & ~&\ddots & \vdots \\ \langle \Phi(\eiu), \bar{\eta}_0(\eiu) \rangle_{\Cc^m}& ~&\dots &\langle \Phi(\eiu), \psi(\eiu) \rangle_{\Cc^m} -\displaystyle\sum_{i=0}^{j} \langle \Phi(\eiu), \bar{\eta}_i (\eiu) \rangle_{\Cc^m} \langle \bar{\eta}_i (\eiu), \psi(\eiu)\rangle_{\Cc^m} \end{pmatrix}}d\theta, $$which is equal to $$ \begin{array}{llll} \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \psi^*(\eiu) \Phi(\eiu) &- \sum_{i=0}^{j}\psi^*(\eiu) \bar{\eta}_i(\eiu) \eta_i^T(\eiu)\Phi(\eiu)d\theta \vspace{2ex} \\ &= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \psi^*(\eiu) \bigg(I_m - \sum_{i=0}^{j} \bar{\eta}_i(\eiu) \eta_i^T(\eiu)\bigg)\Phi(\eiu)d\theta.
\end{array}$$ \noindent Then $$\langle \bar{\eta}_{(j)} \telwe \Phi, \bar{\eta}_{(j)} \telwe \psi \rangle_{L^2(\Tt,\we^{j+2}\Cc^m)}=0$$ for all $\psi \in H^2(\Dd,\Cc^{m})^\perp$ if and only if $$\displaystyle\frac{1}{2\pi} \int_0^{2\pi}\left\langle \bigg(I_m - \sum_{i=0}^{j} \bar{\eta}_i(\eiu) \eta_i^T(\eiu)\bigg)\Phi(\eiu), \psi(\eiu) \right\rangle_{\Cc^m} \, d\theta =0 $$ for all $\psi \in H^2(\Dd,\Cc^{m})^\perp,$ which holds if and only if $$\bigg(I_m - \sum_{i=0}^{j} \bar{\eta}_i \eta_i^T\bigg)\Phi \in H^2(\Dd,\Cc^m). $$ By Lemma \ref{eta=BkBk*}, \begin{equation}\label{BB_j*} B_j B_j^* = I_m - \sum\limits_{i=0}^j \bita_i \eta_i^T. \end{equation} Thus $$\langle \bar{\eta}_{(j)} \telwe \Phi, \bar{\eta}_{(j)} \telwe \psi \rangle_{L^2(\Tt,\we^{j+2}\Cc^m)}=0$$ for all $\psi \in H^2(\Dd,\Cc^{m})^\perp$ if and only if $$\displaystyle\frac{1}{2\pi} \int_0^{2\pi} \langle B_{j}(\eiu)B_{j}^*(\eiu)\Phi(\eiu), \psi(\eiu)\rangle_{\Cc^m} \, d\theta =0, $$ which holds if and only if $B_j B_j^* \Phi \in H^2(\Dd,\Cc^m). $ By Proposition \ref{callj+1perp}, $\Phi = P_{\mathcal{L}_{j+1}^\perp} B_j F_{j+1} \chi $ satisfies $B_j^* \Phi \in H^2(\Dd,\Cc^{m-j-1}).$ Since $B_j$ is analytic and bounded, it follows that $B_j B_j^* \Phi \in H^2(\Dd,\Cc^m),$ and hence diagram \eqref{diagr1122} commutes. Recall that, by Lemma \ref{3.2constr}, the following diagram also commutes \begin{equation}\label{hfj+1} \begin{array}{clllll} &H^2(\Dd,\Cc^{n-j-1}) &\xrightarrow{M_{\bar{A}_j}}& \mathcal{K}_{j+1} \\ &\Big\downarrow\rlap{$\scriptstyle H_{F_{j+1}}$} &~ &\Big\downarrow\rlap{$\scriptstyle {\Gamma}_{j+1}$} \\ & H^2(\Dd,\Cc^{m-j-1})^\perp &\xrightarrow{M_{B_j}} & \mathcal{L}_{j+1}. \end{array}\end{equation} {\rm(iv)} Since $F_{j+1} \in H^\infty(\Dd,\Cc^{(m-j-1)\times (n-j-1)}) + C(\Tt, \Cc^{(m-j-1)\times (n-j-1)}),$ by Hartman's Theorem, the Hankel operator $H_{F_{j+1}}$ is compact. Since diagram \eqref{hfj+1} commutes and the operators $ M_{\bar{A}_{j}}$ and $\;M_{B_j}$ are unitaries, ${\Gamma}_{j+1}$ is compact.
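The argument above repeatedly exploits that conjugating an operator by unitaries preserves its singular values, hence its compactness and its norm. This can be illustrated by a minimal finite-dimensional sketch (the matrices below are invented stand-ins for $H_{F_{j+1}}$ and the unitaries $M_{\bar{A}_j}$, $M_{B_j}$, not objects constructed in the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical finite-dimensional stand-ins: a matrix H in place of the
# Hankel operator, and unitaries U_A, U_B in place of the multiplication
# operators intertwining it with Gamma_{j+1}.
H = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
U_A, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
U_B, _ = np.linalg.qr(rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4)))

# T = U_B H U_A^* is unitarily equivalent to H, so it has the same
# singular values; in particular the same operator norm (and, in infinite
# dimensions, compactness is likewise inherited).
T = U_B @ H @ U_A.conj().T
s_H = np.linalg.svd(H, compute_uv=False)
s_T = np.linalg.svd(T, compute_uv=False)
assert np.allclose(s_H, s_T)
assert np.isclose(np.linalg.norm(T, 2), np.linalg.norm(H, 2))
```

This is the finite-dimensional shadow of the norm equalities $\|T_{j+1}\|=\|\Gamma_{j+1}\|=\|H_{F_{j+1}}\|$ established in the proof.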
By (iii), $$ (\bita_0 \telwe \dots \telwe \bita_j\telwe \cdot) \circ (M_{B_j} \circ H_{F_{j+1}} \circ M_{\bar{A}_{j}}^*) \circ (\xi_0\telwe \dots \telwe \xi_j \telwe \cdot)^*=T_{j+1}.$$ By (i) and (ii), the operators $ M_{\bar{A}_{j}},\;M_{B_j},$ $(\xi_0\telwe \dots \telwe \xi_j \telwe \cdot)$ and $(\bita_0 \telwe \dots \telwe \bita_j \telwe \cdot)$ are unitaries. Hence $T_{j+1}$ is a compact operator. {\rm (v)} Since diagram \eqref{t(j+1)comm} commutes and the operators $ M_{\bar{A}_{j}},\;M_{B_j},$ $(\xi_0\telwe \dots \telwe \xi_j \telwe \cdot)$ and $(\bita_0 \telwe \dots \telwe \bita_j \telwe \cdot)$ are unitaries, $$\|T_{j+1}\|=\|\Gamma_{j+1}\|=\|H_{F_{j+1}}\|=t_{j+1}. $$ \end{proof} \begin{lemma}\label{coofschpairsj} Let $v_{j+1} \in H^2(\Dd,\Cc^n)$ and $w_{j+1} \in H^2(\Dd,\Cc^m)^\perp$ be such that $$(\xi_0 \telwe\cdots \telwe \xi_{j}\telwe v_{j+1}, \bar{\eta}_0 \telwe \cdots \telwe \bita_j\telwe w_{j+1})$$ is a Schmidt pair for the operator $T_{j+1}$ corresponding to $\|T_{j+1}\|.$ Then\begin{enumerate} \item[\emph{(i)}] there exist $x_{j+1} \in \mathcal{K}_{j+1}$ and $y_{j+1}\in \mathcal{L}_{j+1}$ such that $(x_{j+1},y_{j+1}) $ is a Schmidt pair for the operator $\Gamma_{j+1}$; \item[\emph{(ii)}] for any $x_{j+1} \in \mathcal{K}_{j+1}$ and $y_{j+1}\in \mathcal{L}_{j+1}$ such that $$ \xi_0 \telwe \cdots \telwe \xi_j\telwe x_{j+1}= \xi_0 \telwe \cdots \telwe \xi_j \telwe v_{j+1},\quad \bar{ \eta}_0 \telwe \cdots \telwe \bita_j \telwe y_{j+1} = \bar{ \eta}_0 \telwe \cdots\telwe \bita_j\telwe w_{j+1},$$ the pair $(x_{j+1},y_{j+1})$ is a Schmidt pair for $\Gamma_{j+1}$ corresponding to $\|\Gamma_{j+1}\|.$ \end{enumerate} \end{lemma} \begin{proof} {\rm (i)} By Theorem \ref{Tkismultipleofhankel}, the operator $\Gamma_{j+1}\colon \mathcal{K}_{j+1}\to \mathcal{L}_{j+1}$ is compact and $$\|\Gamma_{j+1}\| =\|T_{j+1}\|=t_{j+1}.$$ Hence there exist $x_{j+1} \in \mathcal{K}_{j+1},$ $y_{j+1} \in \mathcal{L}_{j+1}$ such that $(x_{j+1},y_{j+1})$ is a Schmidt pair for
$\Gamma_{j+1}$ corresponding to $\|\Gamma_{j+1}\|=t_{j+1}.$ {\rm (ii)} Suppose that $x_{j+1}\in \mathcal{K}_{j+1},y_{j+1}\in \mathcal{L}_{j+1}$ satisfy \begin{equation}\label{xitelj} \xi_0 \telwe \cdots \telwe \xi_j\telwe x_{j+1}= \xi_0 \telwe \cdots \telwe \xi_j \telwe v_{j+1},\end{equation} \begin{equation}\label{eta0telj} \bar{ \eta}_0 \telwe \cdots \telwe \bita_j \telwe y_{j+1} = \bar{ \eta}_0 \telwe \cdots\telwe \bita_j\telwe w_{j+1}. \end{equation} Let us show that $(x_{j+1},y_{j+1})$ is a Schmidt pair for $\Gamma_{j+1},$ that is, $$\Gamma_{j+1} x_{j+1} = t_{j+1}y_{j+1},\quad \Gamma_{j+1}^*y_{j+1}=t_{j+1}x_{j+1}. $$ Since diagram \eqref{diagr1122} commutes, \begin{equation}\begin{aligned}\label{commt2gammaj} &T_{j+1} \circ (\xi_0\telwe \cdots \telwe \xi_j \telwe \cdot )=(\bar{ \eta}_0 \telwe \cdots \telwe \bita_j \telwe \cdot)\circ\Gamma_{j+1} \;\\ \text{and} \\ &(\xi_0\telwe \cdots \telwe \xi_j\telwe \cdot )^*\circ T_{j+1}^* = \Gamma_{j+1}^* \circ (\bar{\eta}_0 \telwe\cdots \telwe \bita_j \telwe \cdot)^*. 
\end{aligned}\end{equation} By hypothesis, \begin{equation}\begin{aligned}\label{hyptj} &T_{j+1} (\xi_0 \telwe \cdots \telwe \xi_{j}\telwe v_{j+1})= t_{j+1} (\bar{ \eta}_0 \telwe \cdots \telwe \bita_j\telwe w_{j+1}) \;\\ \text{and}\\ & T_{j+1}^*(\bar{ \eta}_0 \telwe \cdots \telwe \bita_j\telwe w_{j+1})= t_{j+1} (\xi_0 \telwe\cdots \telwe \xi_{j}\telwe v_{j+1}).\end{aligned} \end{equation} Thus, by equations \eqref{xitelj}, \eqref{eta0telj} and \eqref{hyptj}, $$\begin{array}{clllll} \Gamma_{j+1} x_{j+1}&= (\bar{\eta}_0 \telwe \cdots \telwe \bita_j \telwe \cdot)^* T_{j+1} (\xi_0\telwe \cdots \telwe \xi_j \telwe v_{j+1}) \vspace{2ex} \\ &= (\bar{\eta}_0 \telwe \cdots \telwe \bita_j \telwe \cdot)^* t_{j+1} (\bar{ \eta}_0 \telwe \cdots \telwe \bita_j \telwe w_{j+1}) \vspace{2ex} \\ &= t_{j+1} (\bar{\eta}_0 \telwe \cdots \telwe \bita_j \telwe \cdot)^* (\bar{ \eta}_0 \telwe \cdots \telwe \bita_j \telwe y_{j+1}).\end{array}$$ Hence $$\Gamma_{j+1} x_{j+1}= t_{j+1} (\bar{\eta}_0 \telwe \cdots \telwe \bita_j \telwe \cdot)^* (\bar{ \eta}_0 \telwe\cdots \telwe \bita_j \telwe \cdot)y_{j+1}= t_{j+1} y_{j+1}.$$ \noindent By equation \eqref{xitelj}, $$x_{j+1} = (\xi_0\telwe \cdots \telwe \xi_{j}\telwe \cdot)^* (\xi_0 \telwe \cdots \telwe \xi_{j}\telwe v_{j+1} ),$$ and, by equation \eqref{eta0telj}, $$(\bar{\eta}_0 \telwe\cdots \telwe \bita_j \telwe \cdot)^*(\bar{ \eta}_0 \telwe\cdots \telwe \bita_j \telwe w_{j+1})=y_{j+1}. $$ Thus $$\begin{array}{clll}\Gamma_{j+1}^* y_{j+1} &= \Gamma_{j+1}^*(\bar{ \eta}_0 \telwe\cdots \telwe \bita_j \telwe \cdot)^*(\bar{\eta}_0 \telwe\cdots \telwe \bita_j \telwe w_{j+1})\vspace{2ex} \\ &= (\xi_0 \telwe \cdots \telwe \xi_j \telwe \cdot )^* T_{j+1}^* (\bar{ \eta}_0 \telwe\cdots \telwe \bita_j \telwe w_{j+1}),\end{array}$$ the last equality following by the second equation of \eqref{commt2gammaj}.
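In finite dimensions, a Schmidt pair corresponding to the largest singular value can be read off directly from the singular value decomposition; the sketch below (with a random matrix standing in for $\Gamma_{j+1}$) verifies the defining relations $\Gamma x = t y$ and $\Gamma^* y = t x$ used above:

```python
import numpy as np

rng = np.random.default_rng(1)
# An invented complex matrix standing in for Gamma_{j+1}.
G = rng.standard_normal((5, 4)) + 1j * rng.standard_normal((5, 4))

# The leading singular triple of G yields a Schmidt pair (x, y):
# G x = t y and G^* y = t x, with t = ||G|| the largest singular value.
U, S, Vh = np.linalg.svd(G)
t, x, y = S[0], Vh[0].conj(), U[:, 0]
assert np.allclose(G @ x, t * y)
assert np.allclose(G.conj().T @ y, t * x)
assert np.isclose(t, np.linalg.norm(G, 2))
```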
By equations \eqref{xitelj} and \eqref{hyptj}, we have $$ T_{j+1}^* (\bar{ \eta}_0 \telwe\cdots \telwe \bita_j \telwe w_{j+1}) = t_{j+1} (\xi_0\telwe \cdots \telwe \xi_j\telwe v_{j+1})= t_{j+1}(\xi_0\telwe \cdots \telwe \xi_j\telwe x_{j+1}),$$ and so, $$ \Gamma_{j+1}^* y_{j+1} = t_{j+1} x_{j+1}.$$Therefore $(x_{j+1},y_{j+1})$ is a Schmidt pair for $\Gamma_{j+1}$ corresponding to $\|\Gamma_{j+1}\|=t_{j+1}.$ \end{proof} \begin{lemma}\label{schfohfj} Suppose that $$(\xi_0 \telwe\cdots \telwe \xi_{j}\telwe v_{j+1}, \bar{\eta}_0 \telwe \cdots \telwe \bita_j\telwe w_{j+1})$$ is a Schmidt pair for the operator $T_{j+1}$ corresponding to $\|T_{j+1}\|=t_{j+1}.$ Let $$x_{j+1} = (I_{n} - \xi_0 \xi_0^*-\cdots-\xi_j\xi_j^*)v_{j+1},$$ $$ y_{j+1}= (I_{m} - \bita_0\eta_0^T- \cdots- \bita_j\eta_j^T)w_{j+1},$$ and let $$\hx_{j+1} = A_{j}^T x_{j+1},\quad \hy_{j+1}=B_j^*y_{j+1}. $$ Then {\rm (i)} the pair $(x_{j+1},y_{j+1})$ is a Schmidt pair for the operator $\Gamma_{j+1}$ corresponding to $t_{j+1}$; {\rm (ii)} the pair $(\hx_{j+1},\hy_{j+1})$ is a Schmidt pair for $H_{F_{j+1}}$ corresponding to $\|H_{F_{j+1}}\|=t_{j+1}.$ \end{lemma} \begin{proof} By Lemmas \ref{xi=AkAkT} and \ref{eta=BkBk*}, \begin{equation}\label{xjAA} x_{j+1} = (I_{n} - \xi_0 \xi_0^*-\cdots-\xi_j\xi_j^*)v_{j+1}=\bar{A}_{j}A_{j}^Tv_{j+1} \end{equation} and \begin{equation}\label{yjBB} y_{j+1}= (I_{m} - \bita_0\eta_0^T-\cdots -\bita_j\eta_j^T)w_{j+1}=B_{j}B_{j}^*w_{j+1} . \end{equation} Hence \begin{equation}\label{hxj+1} \hx_{j+1} = A_{j}^T x_{j+1}= A_{j}^T \bar{A}_{j} A_{j}^Tv_{j+1}= A_{j}^Tv_{j+1} \end{equation} and \begin{equation}\label{bjwj} \hy_{j+1}=B_j^*y_{j+1}= B_j^*B_{j}B_{j}^*w_{j+1} = B_{j}^*w_{j+1}. 
\end{equation} These imply that $\hat{x}_{j+1}\in H^2(\Dd,\Cc^{n-j-1})$, $x_{j+1}\in \mathcal{K}_{j+1}$, $\hy_{j+1}\in H^2(\Dd,\Cc^{m-j-1})^\perp$ and $y_{j+1} \in \mathcal{L}_{j+1}.$ By Proposition \ref{onxi}, $$ \xi_0 \telwe \cdots \telwe \xi_j\telwe x_{j+1}= \xi_0 \telwe \cdots \telwe \xi_j \telwe v_{j+1}\quad \text{and} \; \bar{ \eta}_0 \telwe \cdots \telwe \bita_j \telwe y_{j+1} = \bar{ \eta}_0 \telwe \cdots\telwe \bita_j\telwe w_{j+1}.$$ Thus, by Lemma \ref{coofschpairsj}, the pair $(x_{j+1},y_{j+1})$ is a Schmidt pair for $\Gamma_{j+1}$ corresponding to $\|\Gamma_{j+1}\|.$ Therefore, \begin{equation}\label{gammaJ+1} \Gamma_{j+1}x_{j+1}= t_{j+1}y_{j+1},\quad \Gamma_{j+1}^*y_{j+1}=t_{j+1}x_{j+1}. \end{equation} To show that the pair $(\hx_{j+1},\hy_{j+1})$ is a Schmidt pair for $H_{F_{j+1}}$ corresponding to $t_{j+1}$, we need to prove that $$H_{F_{j+1}}\hx_{j+1} = t_{j+1} \hy_{j+1},\quad H_{F_{j+1}}^*\hy_{j+1} = t_{j+1}\hx_{j+1}. $$ By Theorem \ref{Tkismultipleofhankel}, \begin{equation}\label{hf2gj} H_{F_{j+1}} = (M_{B_j})^* \circ \Gamma_{j+1} \circ M_{\bar{A}_j},\end{equation} and \begin{equation}\label{hf2*gj} H_{F_{j+1}}^* = M_{\bar{A}_j}^* \circ \Gamma_{j+1}^* \circ M_{B_j}.\end{equation} By equation \eqref{hf2gj}, we have \begin{align} \label{h_fj.1} H_{F_{j+1}}\hat{x}_{j+1}&= H_{F_{j+1}}A_{j}^T x_{j+1}\nonumber\vspace{2ex}\\ &= B_j^* \Gamma_{j+1} \bar{A}_{j}A_{j}^Tx_{j+1}. \end{align} Notice that, by equations \eqref{xjAA} and \eqref{hxj+1}, \begin{equation}\label{x_j+1} x_{j+1}=\bar{A}_jA_j^Tx_{j+1}.\end{equation} Hence, by equations \eqref{gammaJ+1} and \eqref{h_fj.1}, we obtain $$H_{F_{j+1}}\hx_{j+1} = B_{j}^* \Gamma_{j+1} x_{j+1}=t_{j+1}B_j^*y_{j+1}=t_{j+1}\hy_{j+1}.$$ Let us show that $H_{F_{j+1}}^*\hy_{j+1} = t_{j+1}\hx_{j+1}$.
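Before doing so, note that the transport of Schmidt pairs through the unitaries $M_{\bar{A}_j}$ and $M_{B_j}$ is a purely linear-algebraic fact, which the following finite-dimensional sketch illustrates (random real matrices play the roles of $\Gamma_{j+1}$, $\bar{A}_j$ and $B_j$; nothing here is taken from the construction itself):

```python
import numpy as np

rng = np.random.default_rng(2)
# Orthogonal matrices standing in for the unitaries M_{A_j-bar}, M_{B_j}.
A, _ = np.linalg.qr(rng.standard_normal((4, 4)))
B, _ = np.linalg.qr(rng.standard_normal((5, 5)))
Gamma = rng.standard_normal((5, 4))

# H = B^* Gamma A, a finite analogue of H_{F_{j+1}} = M_{B_j}^* Gamma M_{A_j-bar}.
H = B.T @ Gamma @ A   # real orthogonal case: B^* = B^T

U, S, Vh = np.linalg.svd(Gamma)
t, x, y = S[0], Vh[0], U[:, 0]   # Schmidt pair of Gamma for its norm
xh, yh = A.T @ x, B.T @ y        # transported pair, as in the lemma

# The transported pair is a Schmidt pair of H for the same singular value.
assert np.allclose(H @ xh, t * yh)
assert np.allclose(H.T @ yh, t * xh)
```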
By equations \eqref{hf2*gj} and \eqref{gammaJ+1}, we have \begin{align} H_{F_{j+1}}^*\hy_{j+1}&= H_{F_{j+1}}^* B_{j}^* y_{j+1}\nonumber\vspace{2ex}\\ &= A_{j}^T\Gamma_{j+1}^* B_j B_j^* y_{j+1}. \label{h_fj.4}\end{align} Observe that, in view of equations \eqref{yjBB} and \eqref{bjwj}, \begin{equation}\label{y_j+1} y_{j+1}=B_jB_j^*y_{j+1}.\end{equation} Hence, by equations \eqref{gammaJ+1} and \eqref{h_fj.4}, we obtain $$H_{F_{j+1}}^*\hy_{j+1}= A_{j}^T\Gamma_{j+1}^* y_{j+1} = t_{j+1} A_{j}^Tx_{j+1} = t_{j+1} \hx_{j+1}.$$Therefore $(\hx_{j+1},\hy_{j+1})$ is a Schmidt pair for the Hankel operator $H_{F_{j+1}}$ corresponding to $\| H_{F_{j+1}}\| = t_{j+1}.$ \end{proof} \begin{proposition}\label{xjwevjetajwewj} Let $$(\xi_0\telwe \cdots \telwe \xi_j \telwe v_{j+1},\bita_0\telwe \cdots \telwe \bita_j \telwe w_{j+1})$$ be a Schmidt pair for $T_{j+1}$ corresponding to $t_{j+1}$ for some $v_{j+1}\in H^2(\Dd,\Cc^n),$ $w_{j+1}\in H^2(\Dd,\Cc^m)^\perp.$ Let $$x_{j+1} = (I_{n}- \xi_0 \xi_0^*-\cdots-\xi_j\xi_j^*)v_{j+1},\quad y_{j+1}=(I_{m} - \bar{\eta}_0 \eta_0^T-\cdots-\bita_j \eta_j^T)w_{j+1},$$ and let \begin{equation}\label{hatxj}\hx_{j+1}=A_j^T x_{j+1}\quad \text{and}\quad \hy_{j+1}=B_j^*y_{j+1}.\end{equation} Then \begin{equation}\label{hj+1common} \begin{array}{lll} \| \xi_0 (z) \we \dots \we \xi_{j}(z) \we v_{j+1}(z) \|_{\we^{j+2}\Cc^n} &= \| \bar{\eta}_0(z) \we \dots \we\bita_{j}(z) \we w_{j+1}(z)\|_{\we^{j+2}\Cc^m}= |h_{j+1}(z)|, \\ \|\hx_{j+1} (z) \|_{\Cc^{n-j-1}} &= \|\hy_{j+1}(z)\|_{\Cc^{m-j-1}} = |h_{j+1}(z)|,\;\text{and}\\ \| x_{j+1}(z) \|_{\Cc^n} &= \|y_{j+1}(z)\|_{\Cc^m} =|h_{j+1}(z)|, \; \end{array} \end{equation} almost everywhere on $\Tt.$ \end{proposition} \begin{proof} \begin{comment} Suppose $$(\xi_0\telwe \cdots \telwe \xi_j \telwe v_{j+1},\bita_0\telwe \cdots \telwe \bita_j \telwe w_{j+1})$$ is a Schmidt pair for $T_{j+1}$ corresponding to $t_{j+1}$, so that \begin{equation}\begin{aligned}\label{scmhforTj} &T_{j+1} (\xi_0\telwe \cdots \telwe \xi_j \telwe
v_{j+1}) =t_{j+1} \bita_0\telwe \cdots \telwe \bita_j \telwe w_{j+1}\quad\text{and}\vspace{2ex}\\ &T_{j+1}^* (\bita_0\telwe \cdots \telwe \bita_j \telwe w_{j+1}) = t_{j+1} (\xi_0\telwe \cdots \telwe \xi_j \telwe v_{j+1}).\end{aligned}\end{equation} By Lemmas \ref{xi=AkAkT} and \ref{eta=BkBk*} $$x_{j+1} = \bar{A}_j A_{j}^T v_{j+1},\quad y_{j+1}= B_jB_j^*w_{j+1}$$ which, by equations \eqref{calkj+1} and \eqref{callj+1}, imply $$x_{j+1} \in \mathcal{K}_{j+1},\quad y_{j+1} \in \mathcal{L}_{j+1}. $$ By Lemma \ref{schfohfj}, $(x_{j+1},y_{j+1})$ is a Schmidt pair for the operator $\Gamma_{j+1}$ corresponding to $t_{j+1}=\|\Gamma_{j+1}\|,$ so that \begin{equation}\label{schmforgj} \Gamma_{j+1} x_{j+1} = t_{j+1} y_{j+1}, \quad \Gamma_{j+1}^* y_{j+1} = t_{j+1} x_1.\end{equation} \end{comment} By Lemma \ref{schfohfj}, $(\hx_{j+1},\hy_{j+1})$ is a Schmidt pair for $H_{F_{j+1}}$ corresponding to $\|H_{F_{j+1}}\|=t_{j+1}$. Hence $$H_{F_{j+1}}\hx_{j+1} = t_{j+1}\hy_{j+1} \quad \text{and}\quad H_{F_{j+1}}^* \hy_{j+1} = t_{j+1} \hx_{j+1}. $$ By Theorem \ref{1.7}, $$t_{j+1} \|\hy_{j+1}(z)\|_{\Cc^{m-j-1}}=\|H_{F_{j+1}}\| \|\hx_{j+1}(z)\|_{\Cc^{n-j-1}} $$almost everywhere on $\Tt.$ Thus \begin{equation}\label{hatseqj} \|\hy_{j+1}(z)\|_{\Cc^{m-j-1}}= \|\hx_{j+1}(z)\|_{\Cc^{n-j-1}} \end{equation} almost everywhere on $\Tt.$ Notice that $\bar{A}_j(z)$ is isometric for almost every $z\in \Tt,$ and therefore, by equations \eqref{hatxj}, we obtain $$\|x_{j+1}(z)\|_{\Cc^{n}}=\|\hx_{j+1}(z)\|_{\Cc^{n-j-1}}. 
$$ Moreover, since $B_j(z)$ is an isometry for almost every $z \in \Tt,$ by equations \eqref{hatxj}, we have $$\|y_{j+1}(z)\|_{\Cc^m}=\|\hy_{j+1}(z)\|_{\Cc^{m-j-1}} $$ almost everywhere on $\Tt.$ By equation \eqref{hatseqj}, we deduce \begin{equation}\label{x2isyj}\|x_{j+1}(z)\|_{\Cc^n}=\|y_{j+1}(z)\|_{\Cc^m} \end{equation} almost everywhere on $\Tt.$ By Proposition \ref{onxi}, \begin{equation} \label{xitelvj} \xi_0 \telwe \cdots \telwe \xi_j \telwe x_{j+1}=\xi_0\telwe \cdots \telwe \xi_j \telwe v_{j+1} \end{equation} and \begin{equation} \label{etatelwj} \bar{\eta}_0 \telwe \cdots \telwe \bita_j \telwe y_{j+1} = \bar{\eta}_0 \telwe \cdots \telwe \bita_j \telwe w_{j+1}. \end{equation} Hence, by Proposition \ref{weon}, $$\begin{array}{lll} &\|\xi_0(z)\we \cdots \we \xi_j(z)\we v_{j+1}(z)\|_{\we^{j+2}\Cc^n}\\ &= \|\xi_0(z)\we \cdots \we \xi_j(z)\we x_{j+1}(z)\|_{\we^{j+2}\Cc^n} \\ &= \| x_{j+1}(z) - \displaystyle\sum\limits_{i=0}^j \langle x_{j+1}(z), \xi_i(z)\rangle \xi_i(z)\|_{\Cc^n}=\|x_{j+1}(z)\|_{\Cc^n}, \end{array}$$ almost everywhere on $\Tt$. Furthermore $$\begin{array}{llll} &\| \bita_0(z) \we \cdots \we \bita_j(z) \we w_{j+1}(z)\|_{\we^{j+2}\Cc^m}\\ &=\| \bita_0(z) \we \cdots \we \bita_j(z) \we y_{j+1}(z)\|_{\we^{j+2}\Cc^m}\\ &=\| y_{j+1}(z) - \displaystyle\sum\limits_{i=0}^j \langle y_{j+1}(z), \bita_i(z)\rangle \bita_i(z)\|_{\Cc^m} =\|y_{j+1}(z)\|_{\Cc^m}, \end{array}$$ almost everywhere on $\Tt$. Thus, by equation \eqref{x2isyj}, $$\| \bita_0(z) \we \cdots \we \bita_j(z) \we w_{j+1}(z)\|_{\we^{j+2}\Cc^m} = \|\xi_0(z)\we \cdots \we \xi_j(z)\we v_{j+1}(z)\|_{\we^{j+2}\Cc^n}$$ almost everywhere on $\Tt.$ Recall that $h_{j+1}$ is the scalar outer factor of $\xi_0\we \cdots \we \xi_j\we v_{j+1}$.
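The pointwise norm identities used here can be checked numerically in a toy example: if $x = \xi h$ on $\Tt$ with $\|\xi(z)\| = 1$ almost everywhere, then $\|x(z)\| = |h(z)|$. The sketch below uses an invented unit-norm $\xi$ and scalar function $h$, chosen purely for illustration:

```python
import numpy as np

# Sample points on the unit circle T.
z = np.exp(1j * np.linspace(0, 2 * np.pi, 64, endpoint=False))

# A toy factorisation x = xi * h: xi(z) = (1, z)/sqrt(2) has unit norm at
# every z of modulus 1, and h(z) = 1 + z/2 is a scalar function
# (in fact outer, though only |h| matters for this check).
xi = np.stack([np.ones_like(z), z]) / np.sqrt(2)   # shape (2, 64)
h = 1 + z / 2
x = xi * h

# ||x(z)|| = |h(z)| pointwise on the circle.
assert np.allclose(np.linalg.norm(x, axis=0), np.abs(h))
```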
Hence $$\|\hx_{j+1} (z) \|_{\Cc^{n-j-1}} = \|\hy_{j+1}(z)\|_{\Cc^{m-j-1}} = |h_{j+1}(z)|,$$ $$\| x_{j+1}(z) \|_{\Cc^n} = \|y_{j+1}(z)\|_{\Cc^m} =|h_{j+1}(z)|, $$ and $$\|\xi_0(z)\we \cdots \we \xi_j(z)\we v_{j+1}(z)\|_{\we^{j+2}\Cc^n} = \| \bita_0(z) \we \cdots \we \bita_j(z) \we w_{j+1}(z)\|_{\we^{j+2}\Cc^m} =|h_{j+1}(z)|, $$ almost everywhere on $\Tt.$ \end{proof} \begin{proposition}\label{epsilonjj} In the notation of Theorem \ref{Tkismultipleofhankel}, there exist unitary-valued functions $\tilde{V}_{j+1}, \tilde{W}_{j+1}$ of the form $$\tilde{V}_{j+1} =\begin{pmatrix} A_{j}^T \xi_{j+1} & \bar{\alpha}_{j+1} \end{pmatrix}, \quad \tilde{W}_{j+1}^T = \begin{pmatrix} B_{j}^T \eta_{j+1} & \bar{\beta}_{j+1} \end{pmatrix} ,$$ where $\alpha_{j+1}, \beta_{j+1}$ are inner, co-outer, quasi-continuous functions of types $(n-j-1)\times (n-j-2)$ and $(m-j-1) \times (m-j-2)$ respectively, and all minors on the first columns of $\tilde{V}_{j+1}, \tilde{W}_{j+1}^T$ are in $H^\infty.$ Furthermore, the set $\mathcal{E}_{j+1}$ of all level $j+1$ superoptimal error functions for $G$ is equal to the following set $$\scriptsize{ W_0^*\cdots \begin{pmatrix} I_{j+1} & 0 \\ 0& \tilde{W}_{j+1}^* \end{pmatrix} \begin{pmatrix} t_0 u_0 & 0 & 0&0 \\ 0 & t_1 u_1 &0 &0 \\ \vdots &\hspace{8ex} \ddots&~&\vdots \\ 0& 0 & t_{j+1} u_{j+1}&0 \\ 0&0 & 0 & (F_{j+2} +H^\infty)\cap B(t_{j+1}) \end{pmatrix}\begin{pmatrix} I_{j+1} & 0 \\ 0 & \tilde{V}_{j+1}^* \end{pmatrix}\cdots V_0^* , }$$ where $F_{j+2} \in H^\infty(\Dd,\Cc^{(m-j-2)\times (n-j-2)}) +C(\Tt,\Cc^{(m-j-2)\times (n-j-2)}),$ $u_{j+1}=\frac{\bar{z}\bar{h}_{j+1}}{h_{j+1}}$ is a quasi-continuous unimodular function and $B(t_{j+1})$ is the closed ball of radius $t_{j+1}$ in $L^\infty(\Tt,\Cc^{(m-j-2)\times (n-j-2)}).$ \end{proposition} \begin{proof} Recall that, in diagrams \eqref{diagr1122} and \eqref{hfj+1}, the operators $M_{\bar{A}_j},\;M_{B_j},$ $(\xi_0\telwe \cdots \telwe \xi_j \telwe \cdot)$ and $(\bita_0 \telwe \cdots\telwe \bita_{j} \telwe \cdot)$ are
unitaries. Since both diagrams commute and $(x_{j+1},y_{j+1})$ defined above is a Schmidt pair for $\Gamma_{j+1}$ corresponding to $t_{j+1},$ by Lemma \ref{schfohfj}, $(\hx_{j+1},\hy_{j+1})$ is a Schmidt pair for $H_{F_{j+1}}$ corresponding to $t_{j+1},$ where $$\hx_{j+1} = A_{j}^T x_{j+1},\quad \hy_{j+1}=B_{j}^*y_{j+1}. $$ We intend to apply Lemma \ref{2.2} to $H_{F_{j+1}}$ and the Schmidt pair $(\hx_{j+1},\hy_{j+1})$ to find unitary-valued functions $\tilde{V}_{j+1},\tilde{W}_{j+1}$ such that, for every $\tilde{Q}_{j+1}\in H^\infty(\Dd,\Cc^{(m-j-1)\times(n-j-1)})$ which is at minimal distance from $F_{j+1},$ we obtain a factorisation of the form $$ F_{j+1}-\tilde{Q}_{j+1} = \tilde{W}_{j+1}^* \begin{pmatrix}t_{j+1} u_{j+1} &0\\ 0 & F_{j+2} \end{pmatrix}\tilde{V}_{j+1}^*, $$ for some $F_{j+2} \in H^\infty(\Dd,\Cc^{(m-j-2)\times(n-j-2)})+C(\Tt,\Cc^{(m-j-2)\times(n-j-2)}).$ For this purpose we find the inner-outer factorisations of $\hx_{j+1}$ and $\bar{z}\bar{\hy}_{j+1}.$ By Proposition \ref{xjwevjetajwewj}, \begin{equation}\label{hj+1common1} \|\hx_{j+1}(z)\|_{\Cc^{n-j-1}}= |h_{j+1}(z)| \end{equation} and \begin{equation}\label{hj+1common2} \|\hy_{j+1}(z)\|_{\Cc^{m-j-1}}=|h_{j+1}(z)|, \end{equation} almost everywhere on $\Tt.$ Equations \eqref{hj+1common1} and \eqref{hj+1common2} imply that $h_{j+1}\in H^2(\Dd,\Cc)$ is the scalar outer factor of both $\hat{x}_{j+1}$ and $\bar{z}\bar{\hat{y}}_{j+1}.$ Hence, by Lemma \ref{2.2}, $\hat{x}_{j+1}, \bar{z}\bar{\hat{y}}_{j+1}$ admit the inner-outer factorisations $$\hat{x}_{j+1} = \hat{\xi}_{j+1} h_{j+1}, \quad \bar{z} \bar{\hat{y}}_{j+1} = \hat{\eta}_{j+1} h_{j+1},$$for some inner $\hat{\xi}_{j+1} \in H^\infty(\Dd,\Cc^{n-j-1}), \hat{\eta}_{j+1}\in H^\infty(\Dd,\Cc^{m-j-1}).$ Then $$ \hat{x}_{j+1} = \hat{\xi}_{j+1} h_{j+1} =A_j^T x_{j+1},\quad \bar{z}\bar{\hat{y}}_{j+1} =\hat{\eta}_{j+1} h_{j+1}=\bar{z}B_j^T \bar{y}_{j+1},$$from which we obtain $$ \hat{\xi}_{j+1} = A_j^T \xi_{j+1},\quad \hat{\eta}_{j+1} =B_j^T \eta_{j+1}.
$$ We wish to show that $A_j^T \xi_{j+1},\; B_j^T \eta_{j+1}$ are inner functions in order to apply Lemma \ref{2.2}. Observe that, by equations \eqref{xjAA} and \eqref{yjBB}, $$x_{j+1}=\bar{A}_jA_j^Tv_{j+1},\quad y_{j+1}= B_j B_j^*w_{j+1} .$$ Then, $$A_j^T x_{j+1}= A_j^Tv_{j+1}, \quad B_j^T\bar{y}_{j+1} =B_j^T\bar{w}_{j+1} ,$$ and since $$\xi_{j+1} =\frac{x_{j+1}}{h_{j+1}},\quad \eta_{j+1} = \frac{\bar{z}\bar{y}_{j+1}}{h_{j+1}}, $$ the functions $$A_j^T \xi_{j+1}=\frac{A_j^Tv_{j+1}}{h_{j+1}},\quad B_j^T \eta_{j+1}= \frac{\bar{z}B_j^T\bar{w}_{j+1}}{h_{j+1}}$$ are analytic. Furthermore, by Proposition \ref{onxi}, $\|\xi_{j+1}(z)\|_{\Cc^n}=1$ and $\|\eta_{j+1}(z)\|_{\Cc^m}=1$ almost everywhere on $\Tt,$ and, by equations \eqref{hj+1common}, $$\|A_j^T(z) \xi_{j+1}(z)\|_{\Cc^{n-j-1}}=1,\quad \|B_j^T(z) \eta_{j+1}(z)\|_{\Cc^{m-j-1}}=1 $$almost everywhere on $\Tt.$ Thus $A_j^T \xi_{j+1},\; B_j^T \eta_{j+1}$ are inner functions. By Lemma \ref{2.2}, there exist inner, co-outer, quasi-continuous functions $\alpha_{j+1}, \beta_{j+1}$ of types $(n-j-1)\times (n-j-2), (m-j-1)\times (m-j-2)$ respectively such that the functions $$\tilde{V}_{j+1} =\begin{pmatrix} A_j^T \xi_{j+1} & \bar{\alpha}_{j+1} \end{pmatrix}, \quad \tilde{W}_{j+1}^T = \begin{pmatrix} B_j^T \eta_{j+1} & \bar{\beta}_{j+1} \end{pmatrix} $$are unitary-valued with all minors on the first columns in $H^\infty.$ Furthermore, by Lemma \ref{2.2}, every $\hat{Q}_{j+1}\in H^\infty(\Dd,\Cc^{(m-j-1)\times(n-j-1)})$ which is at minimal distance from $F_{j+1}$ satisfies $$F_{j+1}-\hat{Q}_{j+1} = \tilde{W}_{j+1}^* \begin{pmatrix} t_{j+1} u_{j+1} & 0 \\ 0 & F_{j+2} \end{pmatrix}\tilde{V}_{j+1}^*, $$ for some $F_{j+2} \in H^\infty(\Dd,\Cc^{(m-j-2)\times (n-j-2)})+C(\Tt,\Cc^{(m-j-2)\times (n-j-2)})$, where $u_{j+1}$ is a quasi-continuous unimodular function given by $u_{j+1} = \frac{\bar{z} \bar{h}_{j+1}}{h_{j+1}}.$ By Lemma \ref{f+hinfty}, the set $$\tilde{\mathcal{E}}_{j+1} =\{F_{j+1} - \hat{Q} : \hat{Q} \in H^\infty(\Dd,\Cc^{(m-j-1)\times
(n-j-1)}), \| F_{j+1} - \hat{Q}\|_{L^\infty}=t_{j+1} \}$$ satisfies $$\tilde{\mathcal{E}}_{j+1} = \tilde{W}_{j+1}^* \begin{pmatrix} t_{j+1}u_{j+1} & 0 \\ 0 & (F_{j+2}+H^\infty)\cap B(t_{j+1}) \end{pmatrix}\tilde{V}_{j+1}^*, $$ where $B(t_{j+1})$ is the closed ball of radius $t_{j+1}$ in $L^\infty(\Tt,\Cc^{(m-j-2)\times (n-j-2)}).$ Thus, by Proposition \ref{tildvjwj}, $\mathcal{E}_{j+1}$ admits the factorisation claimed.\end{proof} \begin{theorem}\label{mathcalAG} Let $G\in H^\infty(\Dd, \Cc^{m\times n})+C(\Tt, \Cc^{m\times n}).$ Let $T_i, x_i, y_i, h_i$, for $i\ge 0$, be defined by the algorithm from Subsection \ref{Alg_statement}. Let $r$ be the least index $j \ge 0$ such that $T_j=0$. Then $r\leq \min(m,n)$ and the superoptimal approximant $\mathcal{A}G$ is given by the formula $$ G-\mathcal{A}G= \displaystyle \sum\limits_{i=0}^{r-1} \frac{t_i y_i x_i^*}{|h_i|^2} .$$ \end{theorem} \begin{proof} First observe that, if $T_0=H_G=0,$ then $G\in H^\infty(\Dd,\CCmn),$ and so $$\mathcal{A}G=G.$$ Otherwise, let $t_0=\|H_G\|>0.$ If $T_1=0$, by Theorem \ref{T0compact}, $H_{F_{1}} =0,$ that is, $$F_1 \in H^\infty(\Dd, \Cc^{(m-1)\times (n-1)}).$$ Then, by Lemma \ref{f+hinfty}, we have $$W_0 (G-\mathcal{A}G) V_0 = \begin{pmatrix} t_0 u_0 & 0 \\ 0 & 0 \end{pmatrix}. $$ \noindent Equivalently $$ \begin{array}{cllll} G-\mathcal{A}G &=W_0^* \begin{pmatrix} t_0 u_0 & 0 \\ 0 & 0 \end{pmatrix}V_0^* \vspace{2ex} \\ &= \begin{pmatrix} \bar{\eta}_0& \beta_0 \end{pmatrix} \begin{pmatrix} t_0 u_0 & 0 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} \xi_0^*\\\alpha_0^T \end{pmatrix}\vspace{2ex} \\\end{array}$$ $$\begin{array}{lll} &= \begin{pmatrix} \bar{\eta}_0 t_0 u_0 & 0 \end{pmatrix}\begin{pmatrix} \xi_0^*\\\alpha_0^T \end{pmatrix} \vspace{2ex} = \bar{\eta}_0t_0 u_0 \xi_0^*\vspace{2ex} \\ &= t_0 \displaystyle\frac{zy_0}{\bar{h}_0} \frac{\bar{z} \bar{h}_0}{h_0} \frac{x_0^*}{\bar{h}_0}\vspace{2ex} = \displaystyle\frac{t_0 y_0 x_0^*}{|h_0|^2}.
\end{array}$$ Let $j$ be a non-negative integer such that $T_j=0$ and $T_i \neq 0$ for $0\le i < j$. By the commutativity of the diagrams \eqref{diagr1122} and \eqref{hfj+1}, $H_{F_j} =0$, and therefore $F_j \in H^\infty(\Dd,\Cc^{(m-j)\times (n-j)})$. By Proposition \ref{epsilonjj}, the superoptimal analytic approximant $\mathcal{A}G$ satisfies equation \eqref{epsilonnj}, that is, \begin{equation}\label{AG} G-\mathcal{A}G= W_0^* W_1^* \cdots W_{j-1}^* \begin{pmatrix} t_0 u_0 & 0 &\dots &0 \\ 0&t_1u_1 &\dots &0 \\ \vdots & \hspace{10ex}\ddots & &\vdots \\ 0 &\cdots &t_{j-1}u_{j-1} & 0\\ 0 & \dots & \dots & 0 \end{pmatrix} V_{j-1}^* \cdots V_1^* V_0^*, \end{equation} where, for $i=0,1,\dots, j-1$, $$ \tilde{V}_{i} = \begin{pmatrix} \alpha_{i-1}^T \cdots \alpha_0^T \xi_{i} & \bar{ \alpha}_{i} \end{pmatrix}, \quad \tilde{W}_{i}^T = \begin{pmatrix} \beta_{i-1}^T \cdots \beta_0^T \eta_{i} & \bar{\beta}_{i} \end{pmatrix} $$ are unitary-valued functions, as described in Proposition \ref{tildvjwj}, $u_i = \frac{\bar{z} \bar{h}_i}{h_i}$ are quasi-continuous unimodular functions, and $$V_{i} = \begin{pmatrix} I_{i} & 0 \\ 0 & \tilde{V}_{i} \end{pmatrix}, \quad W_{i} = \begin{pmatrix} I_{i} & 0 \\ 0 & \tilde{W}_{i} \end{pmatrix} .$$ Recall that, by equations \eqref{xij+1etaj+1}, for $i =0, \dots, j-1$, \begin{equation} \xi_{i} = \frac{x_{i}}{h_{i}}, \quad \eta_{i}=\frac{\bar{z}\bar{y}_{i}}{h_{i}}.\end{equation} By Proposition \ref{xjwevjetajwewj}, for $i =0, \dots, j-1$, $$ |h_i(z)| = \|x_i(z)\|_{\Cc^n} = \|y_i(z)\|_{\Cc^m}$$ almost everywhere on $\Tt$. With the aid of the formulae \eqref{WjW0*} and \eqref{V0Vj}, equation \eqref{AG} simplifies to \begin{align}\label{G-AG-main} G-\mathcal{A}G =& \displaystyle\frac{t_0 y_0 x_0^*}{|h_0|^2} + t_1\frac{1}{|h_1|^2}B_0 B_0^*y_1 x_1^* \bar{A}_0 A_0^T+\dots \nonumber\\ & \; + t_{j-1} \frac{1}{|h_{j-1}|^2}B_{j-2} B_{j-2}^*y_{j-1} x_{j-1}^* \bar{A}_{j-2} A_{j-2}^T.
\end{align} By equations \eqref{x_j+1} and \eqref{y_j+1}, for $i =0, \dots, j-1$, $$ x_i^* = x_i^* \bar{A}_{i-1} A_{i-1}^T \; \; \text{and} \;\; y_i = B_{i-1} B_{i-1}^* y_i.$$ Thus $$ G -\mathcal{A}G = \sum\limits_{i=0}^{r-1}\frac{t_i y_i x_i^*} {|h_i|^2} $$ and the assertion has been proved. \end{proof} \section{$T_j$ is a well-defined operator }\label{Tj-well-def} \begin{proposition}\label{Twell} Let $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn)$ and let $0\leq j \leq\min(m,n)-2.$ Let the functions $\xi_{i}, \eta_{i}$ be defined by equations \eqref{xij+1etaj+1}, that is, \begin{equation}\label{xizetaiz}\xi_i = \displaystyle\frac{x_i}{h_i} , \quad \eta_i=\displaystyle\frac{\bar{z} \bar{y}_i}{h_i} \end{equation} for $i=0,\cdots,j$ and let $$X_i = \xi_0 \telwe \xi_1 \telwe \cdots \telwe \xi_{i-1} \dot{\we} H^2(\Dd, \Cc^n) \subset H^2(\Dd, \we^{i+1} \Cc^n),\quad i=0,\cdots, j,$$ $$Y_i = \bar{\eta}_{0} \telwe \bar{\eta}_{1} \telwe \cdots \telwe \bar{\eta}_{i-1} \dot{\we} H^2 (\Dd, \Cc^m )^\perp \subset H^2 (\Dd, \we^{i+1} \Cc^m)^\perp,\quad i=0,\cdots, j.$$ Let $Q_i \in H^\infty(\Dd,\CCmn)$ satisfy \begin{equation}\label{G-qii}(G-Q_i)x_{k} = t_{k} y_{k}, \quad (G-Q_i)^*y_{k} =t_{k} x_{k}\end{equation} for $k=0,\dots,i-1.$ \noindent Then, the operators $T_i\colon X_i \to Y_i,$ $i=0,\cdots,j,$ given by \be\label{defTi} T_i(\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_{i-1} \dot{\we} x)= P_{Y_i} \left( \bar{\eta}_0 \dot{\we} \bar{\eta}_1 \dot{\we} \cdots \dot{\we} \bar{\eta}_{i-1}\dot{\we} (G-Q_i)x \right) \ee \noindent are well-defined and are independent of the choice of $Q_i\in H^\infty(\Dd, \CCmn)$ satisfying equations \eqref{G-qii}.
\end{proposition} \begin{proof} By Corollary \ref{projwellgen}, the projections $P_{Y_i}$ are well-defined for all $i=0, \cdots, j.$ Hence it suffices to show that, for all $i=0,1,\cdots,j,$ the right-hand side of \eqref{defTi} depends only on the element $\xi_0 \dot{\we} \cdots \dot{\we} \xi_{i-1} \dot{\we} x$ of $X_i$ and not on the choice of the representative $x,$ and that $T_i$ does not depend on the choice of $Q_i$ satisfying equations (\ref{G-qii}). For $i=0,$ the operator $T_0$ is the Hankel operator $H_G,$ which is well-defined; moreover, $H_G$ is independent of the choice of any $Q\in H^\infty(\Dd,\CCmn)$ since $H_{G-Q} = H_G.$ Thus, $T_0$ is well-defined. For $i=1,$ let $(x_0,y_0)$ be a Schmidt pair for the compact operator $H_G$ corresponding to $t_0=\|H_G\|,$ where $x_0 \in H^2(\Dd,\Cc^n)$ and $y_0 \in H^2(\Dd,\Cc^m)^\perp.$ By Lemma \ref{2.2}, $x_0, \bar{z}\bar{y}_0$ admit the inner-outer factorisations $x_0= \xi_0 h_0, \quad \bar{z}\bar{y}_0= \eta_0 h_0,$ where $\xi_0 \in H^\infty(\Dd,\Cc^n),$ $\eta_0 \in H^\infty(\Dd,\Cc^m)$ are inner vector-valued functions and $h_0 \in H^2(\Dd,\Cc)$ is scalar outer. The spaces $X_1$ and $Y_1$ are given by the formulas $$X_1 = \xi_0 \telwe H^2(\Dd,\Cc^n),\quad Y_1=\bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp .$$ The operator $T_1\colon X_1\to Y_1$ is given by $$T_1(\xi_0\telwe x) = P_{Y_1} (\bar{\eta}_0 \telwe (G-Q_1)x)$$ for all $x \in H^2(\Dd,\Cc^n),$ where $Q_1 \in H^\infty(\Dd,\CCmn)$ satisfies equations (\ref{G-qii}). \begin{lemma}\label{0to0} Let $\xi_0 \telwe u =\xi_0 \telwe v$ for some $u,v\in H^2(\Dd,\Cc^n)$. Then $$ \bar{\eta}_0 \telwe (G-Q_1)u = \bar{\eta}_0 \telwe (G-Q_1)v. $$ \end{lemma} \begin{proof} Suppose that $\xi_0 \telwe u =\xi_0 \telwe v$ for some $u,v\in H^2(\Dd,\Cc^n)$.
Let $x=u-v$; then $\xi_0 \telwe x =0$, and so $x$ and $\xi_0$ are pointwise linearly dependent in $\Cc^n$ on $\Dd.$ Therefore there exist maps $\beta,\lambda \colon \Dd \to \Cc$, having no common zero in $\Dd$, such that \begin{equation}\label{lindep}\beta(z) \xi_0(z) = \lambda(z) x(z) \;\; \text{in}\; \Cc^n ,\end{equation} for all $z\in \Dd.$ By assumption, $Q_1 \in H^\infty(\Dd, \CCmn)$ satisfies equations (\ref{G-qii}). Thus, for all $z \in \Dd,$ \begin{equation}\label{t0y0} t_0 y_0(z) = (G-Q_1)(z) x_0 (z).\end{equation} \noindent By equations (\ref{xizetaiz}) and (\ref{lindep}), \begin{equation}\label{x0lindep} \beta(z) x_0(z)= \beta(z) h_0(z)\xi_0(z)= h_0(z) \lambda (z) x(z)\end{equation} for all $z \in \Dd.$ By equations (\ref{t0y0}) and (\ref{x0lindep}), for all $z \in \Dd,$ $$\begin{array}{cllll} t_0 y_0(z) &= (G-Q_1)(z) x_0 (z) ,\\ \beta(z) t_0 z\displaystyle\frac{y_0(z)}{\bar{h}_0(z)} &= \displaystyle (G-Q_1)(z)h_0 (z) \lambda (z) x(z)\frac{z}{\bar{h}_0(z)}. \end{array} $$ \noindent Therefore, by equations (\ref{xizetaiz}), for all $z\in \Dd,$ $$t_0 \beta(z) \bar{\eta}_0 (z) = (G-Q_1)(z) x(z) \mu(z)\; \; \text{in} \; \Cc^m,$$ where $$\mu(z)= \frac{zh_0(z) \lambda(z)}{\bar{h}_0(z)}, \; \text{for all }\; z\in \Dd.$$ Hence, by Definition \ref{pointwiseld}, $\bar{\eta}_0$ and $(G-Q_1)x$ are pointwise linearly dependent in $\Cc^m $ on $\Dd,$ and so $$ \bar{\eta}_0 \dot{\we} (G-Q_1)x =0. $$ Consequently, \[ \bar{\eta}_0 \dot{\we}(G-Q_1)u= \bar{\eta}_0 \dot{\we}(G-Q_1)v. \] \end{proof} Therefore the formula \eqref{defTi} (with $i=1$) uniquely defines $T_1 u \in Y_1$ for every $u \in X_1.$ Next, we show that the operator $T_1$ is independent of the choice of $Q_1 \in H^\infty(\Dd,\CCmn)$ satisfying equations (\ref{G-qii}). 
Suppose $Q_1 , Q_2 \in H^\infty(\Dd, \CCmn)$ satisfy \begin{equation}\label{q_1} (G-Q_1 ) x_0 = t_0 y_0 \;, \; y_0^* (G-Q_1 ) = t_0 x_0^* \end{equation} and \begin{equation}\label{q_2} (G-Q_2 ) x_0 = t_0 y_0 \;, \; y_0^* (G-Q_2 ) = t_0 x_0^* .\end{equation} \noindent Then, we claim that, for all $x \in H^2(\Dd, \Cc^n),$ $$P_{Y_1} ( \bar{\eta}_0 \telwe (G-Q_1 ) x ) = P_{Y_1}( \bar{\eta}_0 \telwe (G-Q_2 ) x ) , $$ that is, $$ P_{Y_1} ( \bar{\eta}_0 \telwe (Q_1 - Q_2 ) x )=0. $$ The latter equation is equivalent to the statement that, for all $x \in H^2(\Dd, \Cc^n)$, $\bar{\eta}_0 \telwe (Q_2 - Q_1)x$ is orthogonal to $Y_1$, that is, to $ \bar{\eta}_0 \telwe \varrho$ for all $ \varrho \in H^2(\Dd, \Cc^m)^\perp.$ As a matter of convenience, set $$ A x = (Q_2 - Q_1)x, \quad x \in H^2(\Dd,\Cc^n). $$ We have to prove that $$\langle \bar{\eta}_0 \telwe Ax , \bar{\eta}_0 \telwe \varrho \rangle_{L^2(\Tt, \we^2\Cc^m)}=0$$ for all $x \in H^2 (\Dd, \Cc^n)$ and all $\varrho \in H^2(\Cc^m)^\perp.$ Note that $$\begin{array}{clllllll} \langle \bar{\eta}_0 \telwe Ax , \bar{\eta}_0 \telwe \varrho\rangle_{L^2(\Tt, \we^2\Cc^m)} =\displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi}\langle \bar{\eta}_0 (e^{i\theta}) \telwe A(e^{i\theta}) x(e^{i\theta}) , \bar{\eta}_0(e^{i\theta}) \telwe \varrho(e^{i\theta})\rangle_{\we^2\Cc^m} \; d\theta,\end{array}$$which by Proposition \ref{we} yields $$\begin{array}{llll}&\displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \det \begin{pmatrix} \langle \bar{\eta}_0 (e^{i\theta}) ,\bar{\eta}_0 (e^{i\theta}) \rangle_ {\Cc^m}& \langle \bar{\eta}_0 (e^{i\theta}) , \varrho(e^{i\theta})\rangle_{\Cc^m}\\ \langle A(e^{i\theta})x(e^{i\theta}), \bar{\eta}_0 (e^{i\theta})\rangle_{\Cc^m} & \langle A(e^{i\theta}) x(e^{i\theta}), \varrho(e^{i\theta})\rangle_ {\Cc^m}\\ \end{pmatrix}d\theta\vspace{2ex}\\ &=\displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \|\bar{\eta}_0 (e^{i\theta})\|_{\Cc^m}^2 \langle A(e^{i\theta})x(e^{i\theta}), \varrho (e^{i\theta})\rangle_{\Cc^m}\;d\theta 
\vspace{2ex} \\ &\hspace{5ex}- \displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \langle A(e^{i\theta})x(e^{i\theta}), \bar{\eta}_0 (e^{i\theta})\rangle_{\Cc^m} \langle \bar{\eta}_0 (e^{i\theta}) , \varrho(e^{i\theta})\rangle_{\Cc^m} \; d\theta . \end{array}$$ By Proposition \ref{onxi}, $\|\bar{\eta}_0 (e^{i\theta})\|_{\Cc^m} =1$ for almost every $\eiu \in \Tt.$ Since $Ax \in H^2 (\Dd, \Cc^m)$ and $\varrho \in H^2 (\Dd, \Cc^m)^\perp,$ $$\displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \langle A(e^{i\theta})x(e^{i\theta}), \varrho (e^{i\theta})\rangle_{\Cc^m} d\theta = \langle Ax, \varrho\rangle_{L^2 (\Tt, \Cc^m)}=0.$$ Thus $$ \begin{array}{cllllll} \langle \bar{\eta}_0 \telwe Ax , \bar{\eta}_0 \telwe \varrho\rangle_{L^2(\Tt, \we^2\Cc^m)}&= -\displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \langle A(e^{i\theta})x(e^{i\theta}), \bar{\eta}_0 (e^{i\theta})\rangle_{\Cc^m} \langle \bar{\eta}_0 (e^{i\theta}) , \varrho(e^{i\theta})\rangle_{\Cc^m} d\theta \vspace{2ex} \\ &= -\displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \bar{\eta}_0^* (e^{i\theta})A(e^{i\theta})x(e^{i\theta}) \langle \bar{\eta}_0 (e^{i\theta}) , \varrho(e^{i\theta})\rangle_{\Cc^m} d\theta. 
\end{array}$$ \noindent Recall that by equation (\ref{xi0eta0}), $\bar{\eta}_0(z) =\displaystyle \frac{zy_0 (z)}{\bar{h}_0(z)}, \; z \in \Tt,$ so that $$\bar{\eta}_0^* (e^{i\theta})= \left(\frac{e^{i\theta} y_0(e^{i\theta}) }{\bar{h}_0 (e^{i\theta}) }\right)^* = \frac{e^{-i\theta} y_0^*(e^{i\theta}) }{h_0 (e^{i\theta}) }.$$ \noindent Therefore $$\langle \bar{\eta}_0 \telwe Ax , \bar{\eta}_0 \telwe \varrho\rangle_{L^2(\Tt, \we^2\Cc^m)}=-\displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \frac{e^{-i\theta} y_0^*(e^{i\theta}) }{h_0 (e^{i\theta}) } A(e^{i\theta})x(e^{i\theta}) \langle \bar{\eta}_0 (e^{i\theta}) , \varrho(e^{i\theta})\rangle_{\Cc^m} d\theta .$$ \noindent Recall that our initial assumption was that $Q_1 , Q_2$ satisfy equations (\ref{q_1}) and (\ref{q_2}); consequently, $$ y_0^* (G- Q_i ) =t_0 x_0 ^* , \; \text{for} \; i=1,2. $$ Hence, for $z \in \Tt,$ $$\begin{array}{clll}y_0^* (z) A(z)x(z) &= y_0^* (z) (G-Q_1)(z) x(z)- y_0^* (z) (G-Q_2) (z) x(z) \vspace{2ex} \\ &= (t_0 x_0^* x - t_0 x_0^* x) (z) \vspace{2ex} \\&=0. \end{array}$$ \noindent We deduce that $$ \displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \bar{\eta}_0^* (e^{i\theta})A(e^{i\theta})x(e^{i\theta}) \langle \bar{\eta}_0 (e^{i\theta}) , \varrho(e^{i\theta})\rangle_{\Cc^m} d\theta =0. $$ \noindent To conclude, we have proved that, if $Q_1 , Q_2 \in H^\infty( \Dd, \CCmn)$ satisfy equations (\ref{q_1}) and (\ref{q_2}), then $$P_{Y_1}(\bar{\eta}_0\telwe (G-Q_1)x)= P_{Y_1}(\bar{\eta}_0\telwe (G-Q_2)x),$$ that is, $T_1$ is independent of the choice of $Q_1$ subject to equations \eqref{q_1},\eqref{q_2}. 
Recursive step: suppose that functions $x_{i} \in L^2(\Tt,\Cc^n),$ $y_{i} \in L^2(\Tt,\Cc^m),$ outer functions $h_{i} \in H^2(\Dd,\Cc),$ positive numbers $t_i,$ matrix-valued functions $Q_i \in H^\infty(\Dd,\CCmn),$ spaces $X_i, Y_i$ and compact operators $T_i \colon X_i \to Y_i$ are constructed inductively by the algorithm for $i=0,\dots,j.$ Let us prove that $T_j\colon X_j \to Y_j,$ given by equation (\ref{T_j}), is well-defined for $0\leq j \leq r$. Note that, by Corollary \ref{projwellgen}, the projection $P_{Y_{j}}$ is well-defined. We must show that if an element of $X_j$ has two different expressions as an element of $\xi_0\telwe \dots\telwe \xi_{j-1} \telwe H^2(\Dd,\Cc^n),$ say \begin{equation}\label{tworeps} u= \xi_0\telwe \dots\telwe \xi_{j-1} \telwe x= \xi_0\telwe \dots\telwe \xi_{j-1} \telwe \tilde{x} \end{equation} for some $x,\tilde{x} \in H^2(\Dd,\Cc^n),$ then the two corresponding formulae for $T_j u $ given by the defining equation \eqref{T_j} agree, that is, $$P_{Y_j}(\bita_0 \telwe \bita_1\telwe \dots \telwe \bita_{j-1}\telwe (G-Q_j)x)=P_{Y_j}(\bita_0 \telwe \bita_1\telwe \dots \telwe \bita_{j-1}\telwe (G-Q_j)\tilde{x}),$$ or equivalently, $$ P_{Y_j}\bigg(\bita_0 \telwe \bita_1\telwe \dots \telwe \bita_{j-1}\telwe (G-Q_j)(x-\tilde{x})\bigg)=0,$$ which is to say that we need to show that \begin{equation}\label{weneed} \bita_0 \telwe \bita_1 \telwe \dots \telwe \bita_{j-1}\telwe (G-Q_j)(x-\tilde{x}) \in Y_j^\perp. 
\end{equation} If $x,\tilde{x}$ satisfy equation \eqref{tworeps}, then $$\xi_0\telwe \dots\telwe \xi_{j-1} \telwe (x-\tilde{x})=0, $$ and so, by Corollary \ref{lin-depend}, $\xi_0,\xi_1,\dots, \xi_{j-1},x-\tilde{x}$ are pointwise linearly dependent almost everywhere on $\Tt.$ \noindent It follows immediately that, for almost all $z\in \Tt,$ the vectors $$ (G-Q_j)\xi_0(z),\dots, (G-Q_j)\xi_{j-1}(z), (G-Q_j)(x-\tilde{x})(z)$$ are linearly dependent in $\Cc^m.$ \noindent Since $y_i = \bar{z}\bar{h}_i \bar{\eta}_i$ by equations \eqref{xij+1etaj+11}, equations \eqref{G-qii} imply, for $i=0,\dots,j-1$ and almost all $z\in \Tt,$ $$\displaystyle (G-Q_j)\xi_i(z) = (G-Q_j)\frac{x_i}{h_i}(z)= t_i \frac{y_i}{h_i}(z) =\frac{t_i}{h_i} \bar{z}\bar{h}_i\bar{\eta}_i(z).$$ Thus, for almost all $z\in \Tt,$ the vectors $$\frac{t_0}{h_0} \bar{z}\bar{h}_0\bar{\eta}_0(z),\;\dots,\; \frac{t_{j-1}}{h_{j-1}} \bar{z}\bar{h}_{j-1}\bar{\eta}_{j-1}(z), \;(G-Q_j)(x-\tilde{x})(z) $$ are linearly dependent in $\Cc^m.$ Since $t_0 \geq t_1 \geq \dots \geq t_j >0$, it follows that $$\bita_0(z),\; \dots,\; \bita_{j-1}(z), \;(G-Q_j)(x-\tilde{x})(z) $$ are linearly dependent for almost all $z \in \Tt$ and so, by Corollary \ref{lin-depend}, $$\bita_0 \telwe \dots \telwe \bita_{j-1} \telwe (G-Q_j)(x-\tilde{x}) =0,$$ which certainly implies the desired relation \eqref{weneed}. Thus $T_j\colon X_j \to Y_j$ is well-defined. For the operator $T_j$ to be uniquely defined in the algorithm, it remains to prove that $T_j$ is independent of the choice of $Q_j \in H^\infty ( \Dd, \CCmn)$ subject to equations (\ref{G-qii}). 
Let $Q_1, Q_2 \in H^\infty(\Dd,\CCmn)$ satisfy \begin{equation}\label{qixiyiti}(G-Q_1)x_i = t_i y_i,\quad(G-Q_2)x_i = t_i y_i,\quad y_i^* (G-Q_1) = t_i x_i^*,\quad y_i^* (G-Q_2) = t_i x_i^* \end{equation} for $i=0, \cdots, j-1.$ We shall prove that, for all $x \in H^2(\Dd, \Cc^n),$ $$P_{Y_j} ( \bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (G-Q_1)x ) = P_{Y_j}( \bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (G-Q_2)x ) .$$ The latter equality holds if and only if, for all $x \in H^2(\Dd, \Cc^n),$ $$P_{Y_j} ( \bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (Q_2 -Q_1)x ) =0 $$which is equivalent to the assertion that $\bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (Q_2 -Q_1)x$ is orthogonal to $ \bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe q$ for all $x \in H^2(\Dd, \Cc^n)$ and for all $q \in H^2(\Dd, \Cc^m)^\perp.$ \noindent Equivalently $$\langle \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (Q_2 -Q_1)x ,\bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe q \rangle_{L^2(\Tt, \we^{j+1}\Cc^m)}=0$$ for all $x \in H^2 (\Dd, \Cc^n)$ and for all $q \in H^2(\Cc^m)^\perp.$ Set $Ax= (Q_2-Q_1)x,$ $x\in H^2(\Dd,\Cc^n).$ \noindent By Proposition \ref{we}, $$ \langle \bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (Q_2 -Q_1 )x ,\bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe q\rangle_{L^2(\Tt, \we^{j+1}\Cc^m)}$$ \noindent is equal to $$\frac{1}{2\pi}\int\limits_0^{2\pi} \footnotesize{ \det \begin{pmatrix} \langle \bar{\eta}_0 (e^{i\theta}) , \bar{\eta}_0(e^{i\theta})\rangle_{\Cc^m} & \cdots & \langle \bar{\eta}_0(e^{i\theta}), \bar{\eta}_{j-1}(e^{i\theta})\rangle_{\Cc^m} & \langle \bar{\eta}_0(e^{i\theta}), q(e^{i\theta}) \rangle_{\Cc^m} \\ \vdots & \ddots& \vdots & \vdots\\ \langle \bar{\eta}_{j-1}(e^{i\theta}), \bar{\eta}_0(e^{i\theta})\rangle_{\Cc^m} & \cdots & \langle \bar{\eta}_{j-1}(e^{i\theta}), \bar{\eta}_{j-1}(e^{i\theta})\rangle_{\Cc^m} & \langle \bar{\eta}_{j-1}(e^{i\theta}), q 
(e^{i\theta})\rangle_{\Cc^m} \\ \langle A(e^{i\theta})x(e^{i\theta}) , \bar{\eta}_0(e^{i\theta})\rangle_{\Cc^m} & \cdots & \langle A(e^{i\theta})x(e^{i\theta}) , \bar{\eta}_{j-1}(\eiu)\rangle_{\Cc^m} &\langle A(e^{i\theta})x(e^{i\theta}) , q(e^{i\theta})\rangle_{\Cc^m}\end{pmatrix} d\theta }.$$ Notice that $Ax$ and $q$ are orthogonal in $L^2(\Tt, \Cc^m) $ and, by Proposition \ref{onxi}, $\{\bar{\eta}_i(z)\}_{i=0}^{j-1}$ is an orthonormal sequence in $\Cc^m$ almost everywhere on $\Tt.$ Also, for $i=0, \cdots, j-1,$ by equations (\ref{qixiyiti}), $$ \begin{array}{cllll} \langle A(e^{i\theta})x(e^{i\theta}) , \bar{\eta}_i(e^{i\theta})\rangle_{\Cc^m} &=\displaystyle \eta_i^T(e^{i\theta}) A(e^{i\theta})x(e^{i\theta}) \\&= \displaystyle \frac{e^{-i\theta} y_i^*(e^{i\theta}) }{h_i (e^{i\theta}) } A(e^{i\theta})x(e^{i\theta})\\ &=\displaystyle \frac{e^{-i\theta}}{h_i (e^{i\theta}) }\left( y_i^*(e^{i\theta}) (G-Q_1)(e^{i\theta})x(e^{i\theta})-y_i^*(e^{i\theta})(G-Q_2)(e^{i\theta})x(e^{i\theta})\right)\\ &=\displaystyle \frac{e^{-i\theta}}{h_i (e^{i\theta}) } (t_i x_i^*x-t_ix_i^*x)(e^{i\theta})=0. \end{array}$$ Thus $$\begin{array}{clllllll} &\langle \bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (Q_2 -Q_1 )x ,\bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe q\rangle_{L^2(\Tt, \we^{j+1}\Cc^m)} \vspace{2ex} \\ &= \displaystyle\frac{1}{2\pi}\int\limits_0^{2\pi} \det \begin{pmatrix} 1 & 0 & \cdots & \langle \bar{\eta}_0(e^{i\theta}), q(e^{i\theta}) \rangle_{\Cc^m} \\ 0 & 1 & \cdots& \langle \bar{\eta}_1(e^{i\theta}), q (e^{i\theta})\rangle_{\Cc^m}\\ \vdots & & \ddots & \vdots\\ 0 & 0 &\cdots & \langle A(e^{i\theta})x(e^{i\theta}) , q(e^{i\theta})\rangle_{\Cc^m} \end{pmatrix}\; d\theta \vspace{2ex} \\ &= \langle Ax , q\rangle_{L^2 ( \Tt, \Cc^m)} =0. 
\end{array}$$ \noindent Consequently $$P_{Y_j} ( \bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (G-Q_1)x ) = P_{Y_j}( \bar{\eta}_0\telwe \cdots \telwe \bar{\eta}_{j-1} \telwe (G-Q_2)x ) ,$$ and so $T_j$ is independent of the choice of $Q_j$ subject to equations \eqref{G-qii}. \end{proof} \section{Superoptimal analytic approximation}\label{statet_algorithm} In this section we present our main result, which is an algorithm for the superoptimal analytic approximation of a matrix-valued function on the circle. In Subsection \ref{known} we recall certain known results and Peller and Young's algorithm (Theorem \ref{superopconstruct}). In Subsection \ref{Alg_statement} we present an alternative algorithm for the superoptimal approximant, based on exterior powers of Hilbert spaces. The proof of the validity of the new algorithm relies on the cited work given in Subsection \ref{known}. \subsection{Known results}\label{known} \begin{theorem}[Hartman's Theorem, \cite{Peller}, p. 74]\label{2.04} Let $E,F$ be separable Hilbert spaces and let $\Phi \in L^\infty(\Tt, \mathcal{L}(E,F)).$ The following statements are equivalent: \begin{enumerate} \item[i)] The Hankel operator $H_\Phi$ is compact on $H^2(\Dd,E)$; \item[ii)] $\Phi \in H^\infty(\Dd, \mathcal{L}(E,F))+ C (\Tt,\mathcal{K}(E,F))$; \item[iii)] there exists a function $\Psi \in C (\Tt,\mathcal{K}(E,F))$ such that $\hat{\Phi}(n) = \hat{\Psi}(n)$ for $n<0.$ \end{enumerate} \end{theorem} \index{Hartman's Theorem} \index{compact Hankel operator} \begin{theorem}[\cite{page}]\label{nehtmatr} For any matrix-valued $\phi \in L^{\infty}(\mathbb{T}, \Cc^{m\times n} ),$ $$\inf\limits_{Q\in H^{\infty}(\mathbb{D}, \Cc^{m\times n})} \|\phi-Q\|_\infty=\|H_\phi \|$$and the infimum is attained. \end{theorem}\vspace{4ex} \begin{definition}[\cite{superop}, p. 
306] The class of \emph{quasi-continuous} functions is defined by $$QC=(H^\infty(\Dd, \CCmn) + C(\Tt, \CCmn)) \cap \overline{(H^\infty(\Dd, \CCmn) + C(\Tt, \CCmn))}.$$ In other words this class consists of functions on the circle which belong to $H^\infty + C$ and have the property that their complex conjugates belong to $H^\infty + C$ as well. \index{quasi-continuous function} \end{definition} We shall also need the class of functions of vanishing mean oscillation, as described, for example, in \cite[Appendix 2, Section 5]{Peller}. \begin{definition} For any function $f\in L^1(\T)$ and any arc $I$ in $\T$ let \[ f_I \df \frac{1}{m(I)} \int_I f dm \] where $m$ is Lebesgue measure on $\T$. Thus, $f_I$ is the mean of $f$ over $I$. The function $f$ is said to have {\em vanishing mean oscillation} if \[ \lim_{m(I) \to 0} \frac{1}{m(I)} \int_I |f-f_I| \, dm =0. \] The space of functions of vanishing mean oscillation on $\T$ is denoted by $\vmo$. \end{definition} \index{$\vmo$} $\vmo$ is also related to the compactness of Hankel operators. The following is \cite[Theorem 5.8]{Peller}. \begin{theorem} Let $\ph\in L^2$. Then $H_\ph$ is compact if and only if $P_-\ph\in \vmo$. \end{theorem} It is therefore not surprising that the spaces $QC$ and $\vmo$ are closely related. In fact \begin{theorem}\label{vmoqc}\cite[Page 729]{Peller} \[ QC=\vmo \cap L^\infty. \] \end{theorem} \noindent It follows from another characterization of $\vmo$, to wit \[ \vmo \ =\ \{f+\tilde g: f,g \in C(\T)\}, \] where $\tilde g$ denotes the harmonic conjugate of $g$ \cite[Theorem A2.8]{Peller}. \begin{remark} For $G \in H^\infty(\Dd, \CCmn) + C(\Tt, \CCmn)$ we will say that a function $Q \in H^\infty(\Dd, \CCmn)$ which minimizes the norm $\|G-Q\|_{L^\infty}$ is a function {\em at minimal distance from $G.$} By Nehari's Theorem, all such functions $Q$ satisfy $ \|G-Q\|_{L^\infty}= \|H_G\|.$ \index{function at minimal distance from $G$} \end{remark} \begin{definition}[\cite{NagyFoias}, p. 
190] For a separable Hilbert space $E,$ a function $\xi \in H^\infty(\Dd, E)$ will be called \emph{inner} if for almost every $z \in \Tt,$ $$ \|\xi(z)\|_E=1.$$ \end{definition} \begin{theorem}[\cite{superop}, Theorem 1.1]\label{coouter} Let $\varphi$ be an $n\times 1 $ inner matrix function. There exists a co-outer function $\varphi_c \in H^\infty(\Dd,\Cc^{n\times (n-1)})$ such that $$ \Phi = \begin{pmatrix} \varphi & \bar{\varphi}_c\end{pmatrix}$$ is unitary-valued on $\Tt$ and all minors of $\Phi$ on the first column are in $H^\infty.$ \end{theorem} For a function $G: \T \to \Cc^{m\times n}$ and a space $X$ of scalar functions on $\T$, we write $G \in X$ to mean that each entry of $G$ belongs to $X$. Next we describe some properties that a space $X$ of equivalence classes of scalar functions on the circle may possess \cite[Page 330]{superop}. Define the non-linear operator $\cala = \cala^{(m,n)}$ on the space of $m\times n$ functions $G \in H^\infty(\Dd, \CCmn) + C(\Tt, \CCmn)$ by saying that $\cala^{(m,n)} G$ is the unique superoptimal approximation in $H^\infty(\Dd, \CCmn)$ to $G$. We say that $X$ is {\em hereditary for} $\cala$ if, for every scalar function $g\in X$, the best analytic approximation $\cala g$ of $g$ belongs to $X$. Consider the following conditions on $X$ from \cite{pellkhr}: \begin{enumerate} \item[($\al$1)] $X$ contains all polynomial functions and $X\subset \vmo$; \item[($\al$2)] $X$ is hereditary for $\cala$; \item[($\al$3)] if $f\in X$ then $\bar z \bar f \in X$ and $P_+f \in X$; \item[($\al$4)] if $f,g \in X \cap L^\infty$ then $fg \in X\cap L^\infty$; \item[($\al$5)] if $f\in X \cap H^2$ and $h\in H^\infty$ then $T_{\bar h}f\in X \cap H^2$. \end{enumerate} The relevance of these properties is contained in the following statement, which is \cite[Lemma 5.3]{superop}. Recall that a function $f\in L^\infty$ is said to be {\em badly approximable} if the best analytic approximant to $f$ is the zero function. 
In view of Nehari's Theorem, $f$ is badly approximable if and only if $\|f\|_\infty = \|H_f\|$. \index{badly approximable} \begin{lemma}\label{lem5.3} Let $X$ satisfy $(\al 1)$ to $(\al 5)$ and let $\ph\in X$ be an $n\times 1$ inner function. Let $\ph_c$ be an $n \times (n-1)$ function in $H^\infty$ such that $[\ph \ \bar\ph_c]$ is unitary-valued a.e. on $\T$ and has all its minors on the first column in $H^\infty$. Then each entry of $\ph_c$ belongs to $X$. \end{lemma} Below we shall use a modified version of \cite[Theorem 0.2]{superop}. \begin{theorem}\label{1.7}\emph{\cite[Theorem 0.2]{superop}} Let $\phi \in L^{\infty}(\Tt, \Cc^{m\times n})$ be such that $H_\phi$ has a Schmidt pair $(v,w)$ corresponding to the singular value $t=\|H_\phi\|$. Let $Q$ be a function in $H^{\infty}(\Dd, \Cc^{m \times n})$ at minimal $L^\infty$-distance from $\phi$. Then $$(\phi - Q) v = tw$$ and \[ (\phi-Q)^*w= tv. \] Moreover \begin{equation}\label{normvz} \|w(z)\|_{\Cc^m} = \|v(z)\|_{\Cc^n} ~ \text{almost everywhere on}~ \Tt \end{equation} and $$\| \phi (z) - Q(z)\| = t ~\text{almost everywhere on}~ \Tt.$$ \end{theorem} \begin{proof} By Nehari's Theorem, $\|\phi-Q\|_{L^\infty}= t$ and, by hypothesis, \[ H_\phi v=tw, \qquad H_\phi^* w=tv. \] If $t=0$ then $\phi\in H^\infty(\Dd,\Cc^{m\times n})$, so that $\phi=Q$ and the statement of the theorem is trivially true. We may therefore assume $t>0$. Thus $H_\phi^*H_\phi v = t^2 v$, and so $v$ is a maximising vector for $H_\phi$. We can assume that $v$ is a unit vector in $H^2(\Dd, \Cc^n)$, and then $w$ is a unit vector in $H^2(\Dd, \Cc^m)^\perp$ and is a maximising vector for $H_\phi^*$. We have \[ t = \|H_\phi v \| = \|H_{\phi - Q} v \|= \| P_{-} (\phi - Q) v\| \leq \| (\phi - Q) v\| \leq \|\phi - Q\|_{L^\infty} = t. 
\] The inequalities must hold with equality throughout, and therefore $\| P_{-} (\phi - Q) v\| = \| (\phi - Q) v\|$, which implies that $(\phi - Q) v \perp H^2$ and so $$H_\phi v= P_{-} (\phi -Q)v = (\phi -Q)v.$$ Furthermore $\| (\phi - Q) v\| = \| \phi - Q \|_{L^\infty} \|v\|$, and since $v(z)$ is therefore a maximizing vector for $\phi(z)-Q(z)$ for almost all $z$, we have $\| \phi(z)-Q(z) \| = \|H_\phi \|$ almost everywhere on $\Tt.$ Likewise, \begin{align*} t=\|H_\phi^*\|= \|H_{\phi-Q}^*\|&= \|H_{\phi-Q}^* w\|= \|P_+(\phi-Q)^*w\|_{L^2} \leq \|(\phi-Q)^*w\|_{L^2}\\ & \leq \|(\phi-Q)^*\|_{L^\infty}\|w\|_{L^2} = \|(\phi-Q)^*\|_{L^\infty} = t. \end{align*} Again, the inequalities hold with equality throughout, and in particular \[ \|P_+(\phi-Q)^*w\|_{L^2} = \|(\phi-Q)^*w\|_{L^2}, \] so that $(\phi-Q)^*w \in H^2$ and \[ (\phi-Q)^*w= H_\phi^*w = tv. \] \end{proof} \begin{lemma}\cite[pp. 315--316]{superop} \label{2.2} Let $m,n>1,$ let $G \in H^\infty ( \Dd, \CCmn) +C (\Tt, \CCmn)$ and $t_0 = \| H_G\| \neq 0.$ Suppose that $v$ is a maximizing vector of $H_G$ and let \begin{equation}\label{eqhgv} H_{G} v = t_{0} w. 
\end{equation} \noindent Then $v \in H^2(\Dd, \Cc^n)$ and $\bar{z}\bar{w} \in H^2(\Dd, \Cc^m)$ have the factorizations \begin{equation} \label{eq212} v= v_0 h , \; \; \bar{z}\bar{w} = \phi w_0 h \end{equation} \noindent for some scalar outer function $h,$ some scalar inner $\phi,$ and column-matrix inner functions $v_0 , w_0.$ Moreover there exist unitary-valued functions $V, W$ of types $n \times n , \; m \times m$ respectively, of the form \begin{equation}\label{eq222} V= \begin{pmatrix} v_0 & \bar{\alpha} \end{pmatrix}, \; W^T = \begin{pmatrix} w_0 & \bar{\beta} \end{pmatrix},\end{equation} \noindent where $\alpha, \beta$ are inner, co-outer, quasi-continuous functions of types $n\times (n-1), \; m\times (m-1)$ respectively, and all minors on the first columns of $V, W^T$ are in $H^\infty.$ \noindent Furthermore every $Q \in H^\infty (\Dd, \CCmn)$ which is at minimal distance from $G$ satisfies \begin{equation}\label{eq2223}W(G-Q)V = \begin{pmatrix} t_0 u_0 & 0\\ 0 & F \end{pmatrix}\end{equation} for some $F \in H^\infty (\Dd, \Cc^{(m-1)\times(n-1)}) + C(\Tt, \Cc^{(m-1)\times(n-1)})$ and some quasi-continuous function $u_0$ given by \begin{equation}\label{eq223}u_0 = \frac{\bar{z}\bar{\phi}\bar{h}}{h}\end{equation} with $|u_0 (z)|=1$ almost everywhere on $\Tt.$ \end{lemma} In the statement of the lemma, in saying that an $m \times n$ matrix-valued function $\al$ is {\em co-outer} \index{co-outer} we mean that each column of $\al$ is in $H^{\infty}_m$ and $\al^TH^2_m$ is dense in $H^2_n$. (In \cite[Page 190]{NagyFoias}, such a function $\al$ is said to be {\em *-outer}.) \begin{proof} First we construct $V$ and $W$ with the properties \eqref{eqhgv} to \eqref{eq2223}. By equation \eqref{normvz}, $\|v(z) \| = \|w(z)\|$ almost everywhere, and so the column-vector functions $v, \bar{z}\bar{w}$ in $H^2$ have the same (scalar) outer factor $h.$ This property yields the inner-outer factorizations (\ref{eq212}) for some column inner functions $v_0, w_0$ and some scalar inner function $\phi.$ 
By Theorem \ref{coouter}, there exists an inner co-outer function $\al$ of type $n\times (n-1)$ such that $V\df [v_0 \ \bar\al]$ is unitary-valued almost everywhere on $\T$ and all minors on the first column of $V$ are in $H^\infty$. Likewise there exists an inner co-outer function $\beta$ of type $m \times (m-1)$ such that $W\df [w_0 \ \bar\beta]^T$ is unitary-valued almost everywhere on $\T$ and all minors on the first column of $W^T$ are in $H^\infty$. Next we show that $u_0$ given by equation \eqref{eq223} is quasi-continuous. Let $Q \in H^\infty(\Dd, \CCmn)$ be at minimal distance from $G.$ Then $$\|G-Q\|_\infty = \|H_G\|= t_0.$$ \noindent By Theorem \ref{1.7}, $$(G-Q)v = t_0 w$$ and by the factorizations (\ref{eq212}) we have $$(G-Q) v_0 h = t_0 \bar{z} \bar{\phi} \bar{h} \bar{w}_0$$ and by equations (\ref{eq222}) and (\ref{eq223}) $$(G-Q) V \begin{pmatrix} 1 & 0 & \cdots & 0 \end{pmatrix}^T = W^* \begin{pmatrix} t_0 u_0 & 0 & \cdots & 0 \end{pmatrix}^T.$$ \noindent Thus $$W(G-Q)V = \begin{pmatrix} t_0 u_0 & f\\ 0 & F \end{pmatrix}$$ for some $f \in L^\infty(\Tt, \Cc^{1\times (n-1)}), \; F \in L^\infty(\Tt, \Cc^{(m-1)\times (n-1)}).$ \noindent By equation (\ref{eq223}), $|u_0|=1$ almost everywhere, and from Nehari's Theorem $$\|W(G-Q)V\|_\infty = \|G-Q\|_\infty = \|H_G\| = t_0.$$ Since the $(1,1)$ entry $t_0 u_0$ of $W(G-Q)V$ has modulus $t_0$ almost everywhere, it follows that $f=0.$ So, $W(G-Q)V$ has the form (\ref{eq2223}). Now, $\|H_{u_0}\| \leq \|u_0\|_\infty =1$ and $\|H_{u_0}h\|=\|\bar{z}\bar{\phi} \bar{h}\| = \|h\|.$ Hence $$\|H_{u_0}\|=1 = \|u_0\|_\infty,$$ which implies that $u_0$ is badly approximable. The $(1,1)$ entry of equation (\ref{eq2223}) gives $$w_0^T (G-Q)v_0 = t_0 u_0.$$ Since $v_0 \in H^\infty(\Dd, \Cc^n ) , \; w_0 \in H^\infty(\Dd, \Cc^m)$ and $H^\infty(\Dd,\C) + C(\Tt,\C)$ is an algebra, we have $u_0 \in H^\infty + C.$ By a result in \cite[Section 3.1]{pellkhr}, if $u_0 \in H^\infty + C$ and $u_0$ is badly approximable then $\bar{u}_0 \in H^\infty +C.$ Thus $u_0$ is quasi-continuous. Now we show that $v_0, w_0 \in QC$. 
It follows from Nehari's Theorem that \[ (G-Q)^* w = t_0 v, \] much as in the proof of \cite[Theorem 0.2]{superop}. Indeed, since $H_G^* w = t_0 v$ and $H_G^* = P_+M_{(G-Q)^*} | {H^2}^\perp$, we have (assuming, as we may, that $v$ and $w$ are unit vectors), \begin{align*} t_0 = \|H_G^*w\| &= \|P_+(G-Q)^*w\| \\ &\leq \|(G-Q)^* w\| \leq \|G-Q\|_\infty \|w\| =t_0. \end{align*} It follows that the inequalities hold with equality, and so \[ \|P_+(G-Q)^*w\| = \|(G-Q)^* w\|, \] whence \[ P_+(G-Q)^*w = (G-Q)^* w, \] and so \be\label{usethis} (G-Q)^* w = H_G^*w = t_0 v, \ee as claimed. Taking complex conjugates in the last equation we have \[ (G-Q)^T \bar w = t_0 \bar v. \] Thus, by equation \eqref{eq212}, \[ (G-Q)^Tz\ph w_0 h = t_0 \bar h \bar{v}_0 \] for some outer function $h$ and scalar inner $\ph$. Therefore \[ \bar{v}_0 = \frac{(G-Q)^Tz\ph w_0 h}{t_0 \bar h}. \] Recall that $u_0 = \bar z \bar \ph \bar h/h$, and so \[ \bar{v}_0 = \frac{1}{t_0}(G-Q)^T\bar{u}_0w_0. \] Since $u_0\in QC, \ G-Q\in H^\infty+C$ and $w_0 \in H^\infty$, it follows that $\bar{v}_0 \in H^\infty +C$. Since also $v_0\in H^\infty$, we have $v_0\in QC$. To complete the proof of Lemma \ref{2.2}, all that remains is to show that $\al,\beta $ are quasicontinuous and $F\in H^\infty + C$. This will follow from Lemma \ref{lem5.3} above. The space $\vmo$ satisfies conditions $(\al1)$ to $(\al5)$, as stated on \cite[Page 335]{superop}, and we have $v_0 \in QC \subset \vmo$. Hence we may apply Lemma \ref{lem5.3} with $\ph=v_0$ to deduce that $\al \in \vmo$. Since also $\al\in L^\infty$, it follows from Theorem \ref{vmoqc} that $\al\in QC$. Likewise, $\beta\in QC$. To show that $F\in H^\infty + C$, for $1<i \leq m, \; 1<j\leq n$ consider the $2\times 2$ minor of equation (\ref{eq2223}) with indices $1i , 1j :$ \begin{equation}\label{eqsum} \sum\limits_{r<s,\; k<l} W_{1i , rs} (G-Q)_{rs, kl} V_{kl , 1j} = t_0 u_0 F_{i-1 , j-1}. 
\end{equation} \noindent By the analytic minors property of $W,V,$ $$V_{kl,1j}, W_{1i,rs} \in H^\infty.$$ Since $G-Q \in H^\infty(\Dd, \CCmn) + C(\Tt, \CCmn) ,$ the left hand side of equation (\ref{eqsum}) is in $H^\infty (\Dd, \Cc) + C(\Tt,\Cc)$ and hence $u_0F \in H^\infty(\Dd, \C^{(m-1)\times(n-1)}) + C(\Tt, \C^{(m-1)\times(n-1)}).$ Thus $$ F = \bar{u}_0 (u_0 F) \in H^\infty(\Dd, \C^{(m-1)\times(n-1)}) + C(\Tt, \C^{(m-1)\times(n-1)}). $$ \end{proof} \begin{definition}\label{thematicdef} We say that a unitary-matrix-valued function $V$ is a \emph{thematic completion} of a column-matrix inner function $v_0\in H^\infty(\Dd,\Cc^n)$ if $V = \begin{pmatrix} v_0 & \bar{\alpha} \end{pmatrix}$, for some co-outer function $\alpha \in H^\infty(\Dd,\Cc^{n\times(n-1)})$ such that $V(z)$ is a unitary matrix for almost all $z \in \Tt$ and such that all minors on the first column of $V$ are analytic. \end{definition}\index{thematic completion} \begin{remark} \rm By Theorem $1.1$ of \cite{superop}, every column-matrix inner function has a thematic completion. Thematic completions are not unique, for if $V = \begin{pmatrix} v_0 & \bar{\alpha} \end{pmatrix}$ is a thematic completion of $v_0$, then so is $\begin{pmatrix} v_0 & \bar{\alpha}U\end{pmatrix}$ for any constant $(n-1)$-square unitary matrix $U.$ However, by Corollary $1.6$ of \cite{superop}, the thematic completion of $v_0$ \emph{is} unique up to multiplication on the right by a constant unitary matrix of the form $\mathrm{diag}\{1,U\}$ for some constant $(n-1)$-square unitary matrix $U,$ and so it is permissible to speak of ``{\em the} thematic completion of $v_0$''. Furthermore, by Theorem $1.2$ of \cite{superop}, thematic completions have constant determinants almost everywhere on $\Tt,$ and hence $\alpha, \beta$ are inner matrix functions. Observe that, as we showed above, if the column $v_0$ belongs to $\mathrm{VMO},$ then the thematic completion of $v_0$ is quasi-continuous. 
Similarly, if the column $w_0$ belongs to $\mathrm{VMO}$, then the thematic completion of $w_0$ is quasi-continuous. Thus $\alpha,\beta$ are inner, co-outer, quasi-continuous functions of types $n\times(n-1)$ and $m\times (m-1)$ respectively. \end{remark} \begin{lemma}[\cite{superop}, p. 316]\label{f+hinfty} Let $m,n>1,$ let $G \in H^\infty(\Dd, \CCmn)+C(\Tt, \CCmn),$ let $\|H_G\|=t_0$ and let $Q_1 \in H^\infty(\Dd, \CCmn)$ be at minimal distance from $G,$ so that in the notation of Lemma \ref{2.2}, \begin{equation}\label{g-q1vf} W(G-Q_1)V= \begin{pmatrix} t_0 u_0 & 0 \\ 0 & F \end{pmatrix}\end{equation} for some $F \in H^\infty(\Dd, \Cc^{(m-1)\times(n-1)})+ C(\Tt, \Cc^{(m-1)\times(n-1)}).$ Let $$\mathcal{E} = \{ G-Q : Q \in H^\infty(\Dd, \CCmn), \|G-Q\|_\infty=t_0\}.$$ Then \begin{equation}\label{wev}W \mathcal{E} V = \begin{pmatrix} t_0 u_0 & 0 \\ 0 & F +H^\infty(\Dd, \Cc^{(m-1)\times(n-1)}) \end{pmatrix}\cap B(t_0),\end{equation} where $B(t_0)$ is the closed ball of radius $t_0$ in $L^\infty(\Tt, \CCmn).$ \end{lemma} \begin{proof} Let $E_1 = G-Q_1 \in \mathcal{E},$ let $E \in \mathcal{E}$ and write $$E = E_1 -Q$$ for some $Q \in H^\infty(\Dd,\CCmn).$ By Lemma \ref{2.2}, there exists a function $g \in L^\infty(\Tt,\Cc^{(m-1)\times(n-1)})$ such that $$ WEV = \begin{pmatrix} t_0 u_0 & 0 \\ 0 & g \end{pmatrix}. $$ The latter equation, combined with equation (\ref{g-q1vf}), yields $$\begin{array}{clllll} WQV&= W(G-Q_1)V-WEV\vspace{2ex}\\&= \begin{pmatrix} 0 & 0 \\ 0 & F-g \end{pmatrix}\in \begin{pmatrix} 0 & 0 \\ 0 & L^\infty(\Tt, \Cc^{(m-1)\times(n-1)}) \end{pmatrix}\cap WH^\infty(\Dd,\CCmn)V.\end{array}$$ By (\cite{superop}, Lemma 1.5), $WQV \in H^\infty(\Dd,\CCmn),$ say $F-g=q \in H^\infty(\Dd,\Cc^{(m-1)\times(n-1)}).$ Then $$WEV= \begin{pmatrix} t_0 u_0 & 0 \\ 0 & g \end{pmatrix}= \begin{pmatrix} t_0 u_0 & 0 \\ 0 & F-q \end{pmatrix}, $$which proves the inclusion ($\subseteq$) in equation \eqref{wev}. 
\noindent Conversely, suppose $q \in H^\infty(\Dd,\Cc^{(m-1)\times(n-1)})$ and $$\|F-q\|_\infty \leq t_0.$$ By (\cite{superop}, Lemma 1.5), there exists a function $Q\in H^\infty(\Dd,\CCmn)$ such that $$ WQV = \begin{pmatrix} 0 & 0 \\ 0 & q \end{pmatrix}$$ and thus $$ W(E_1 - Q)V = \begin{pmatrix} t_0 u_0 & 0 \\ 0 & F-q \end{pmatrix}. $$Then $$ E_1 - Q = G-(Q_1+Q) \in \mathcal{E},$$ and so $$\begin{pmatrix} t_0 u_0 & 0 \\ 0 & F-q \end{pmatrix}\in W\mathcal{E}V. $$ Hence equality holds in equation (\ref{wev}). \end{proof} \begin{lemma}[\cite{Constr}, p. 16]\label{3.1constr} Let $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn)$ and let $(x_0, y_0)$ be a Schmidt pair for the Hankel operator $H_G$ corresponding to the singular value $t_0 = \|H_G\|.$ Let $x_0 = \xi_0 h_0$ be the inner-outer factorization of $x_0,$ where $\xi_0\in H^\infty(\Dd,\Cc^n)$ is the inner and $h_0\in H^2(\Dd,\Cc)$ is the scalar outer factor of $x_0 \in H^2(\Dd,\Cc^n),$ and let $$ V_0 = \begin{pmatrix} \xi_0 & \overline{\alpha_0} \end{pmatrix}$$ be a unitary-valued function on $\Tt$, where $\alpha_0 \in H^\infty(\Dd,\Cc^{n \times (n-1)})$ is co-outer. Then $$V_0 \begin{pmatrix} 0 & H^2(\Dd,\Cc^{n-1}) \end{pmatrix}^T$$ is the orthogonal projection of $H^2(\Dd,\Cc^n)$ onto the pointwise orthogonal complement of $x_0$ in $L^2(\Tt,\Cc^n).$ Similarly $$V_0^* \begin{pmatrix} 0 & H^2(\Dd,\Cc^{n-1})^\perp \end{pmatrix}^T$$ is the orthogonal projection of $H^2(\Dd,\Cc^n)^\perp$ onto the pointwise orthogonal complement of $x_0$ in $L^2(\Tt,\Cc^n).$ \end{lemma} \begin{lemma}[\cite{Constr}, p. 16]\label{3.2constr} Let $G,x_0, y_0$ be defined as in Lemma \ref{3.1constr} and let $\mathcal{K}, \mathcal{L}$ be the projections of $H^2(\Dd,\Cc^n),H^2(\Dd,\Cc^m)^\perp$ onto the pointwise orthogonal complements of $x_0, y_0$ in $L^2(\Tt,\Cc^n),L^2(\Tt,\Cc^m)$ respectively. 
Let $Q_0 \in H^\infty(\Dd,\CCmn)$ be at minimal distance from $G,$ let $F$ be the $(2,2)$ block of $W_0(G-Q_0)V_0,$ as in Lemma \ref{2.2}, that is, \begin{equation}\label{V0W0}V_0=\begin{pmatrix} \xi_0 & \overline{\alpha_0} \end{pmatrix},\quad W_0 = \begin{pmatrix} \eta_0 & \overline{\beta_0} \end{pmatrix}^T\end{equation} are unitary-valued functions on $\Tt,$ $\alpha_0, \beta_0$ are co-outer functions of size $n\times (n-1) , m\times (m-1)$ respectively and all minors on the first columns of $V_0, W_0^T$ are in $H^\infty.$ Let $Q \in H^\infty(\Dd,\CCmn)$ satisfy \begin{equation}\label{g-q3.1}(G-Q)x_0 = \|H_G\| y_0 , \quad y_0^* (G-Q) = \|H_G\| x_0^*.\end{equation} Then $H_F$ is a unitary multiple of the operator $$ \Gamma := P_\mathcal{L} M_{G-Q}|\mathcal{K},$$ where $M_{G-Q}:L^2(\Tt,\Cc^n) \to L^2(\Tt,\Cc^m)$ is the operator of multiplication by $G-Q.$ More explicitly, if $U_1 : H^2(\Dd,\Cc^{n-1}) \to \mathcal{K},$ $U_2 : H^2(\Dd,\Cc^{m-1})^\perp \to \mathcal{L}$ are defined by \begin{equation} \label{u1u2} U_1 \chi = V_0 \begin{pmatrix} 0 \\ \chi \end{pmatrix}, \quad U_2 \psi = W_0^* \begin{pmatrix} 0 \\ \psi \end{pmatrix}\quad \text{for all}\quad \chi \in H^2(\Dd,\Cc^{n-1}), \psi \in H^2(\Dd,\Cc^{m-1})^\perp,\end{equation} then $U_1 , U_2$ are unitaries and $$H_F = U_2^* \Gamma U_1.$$ \end{lemma} \begin{lemma}[\cite{superop}, p. 337]\label{L6.2} Let $\alpha\in \mathrm{QC}$ of type $m\times n,$ where $m\geq n,$ be inner and co-outer. There exists $A\in H^\infty(\Dd,\Cc^{n\times m})$ such that $A\alpha=I_n.$ Here $I_n$ denotes the $n\times n$ identity matrix. \end{lemma} Theorem \ref{superopconstruct} gives the algorithm for the superoptimal analytic approximant constructed in \cite{Constr}. \begin{theorem}[\cite{Constr}, p. 17]\label{superopconstruct} Let $G \in H^\infty(\Dd,\CCmn)+ C(\Tt,\CCmn).$ The superoptimal approximant $\mathcal{A}G$ to $G$ is given by the following formula.
\noindent If $H_G=0,$ then $\mathcal{A}G=G.$ Otherwise define spaces $K_j\subset L^2(\Tt,\Cc^n), N_j\subset L^2(\Tt,\Cc^m),$ vectors $\chi_j \in K_j,$ $\psi_j \in N_j,$ $H^\infty$ functions $Q_j,$ operators $\Gamma_j$ and positive numbers $\lambda_j$ as follows. \noindent Let $$K_0 = H^2(\Dd,\Cc^n),\quad N_0= H^2(\Dd,\Cc^m)^\perp,\quad Q_0=0.$$ Let $$ \Gamma_j = P_{N_j} M_{G-Q_j} |K_j : K_j \to N_j,\quad \lambda_j=\|\Gamma_j\|,$$where $P_{N_j}$ is the orthogonal projection onto $N_j$. If $\lambda_j=0$ set $r=j$ and terminate the construction. Otherwise let $\chi_j, \psi_j$ be a Schmidt pair for $\Gamma_j$ corresponding to the singular value $\lambda_j.$ Let $K_{j+1}$ be the range of the orthogonal projection of $K_j$ onto the pointwise orthogonal complement of $\chi_0, \cdots, \chi_j$ in $L^2(\Tt,\Cc^n).$ Let $N_{j+1}$ be the projection of $N_j$ onto the pointwise orthogonal complement of $\psi_0, \cdots, \psi_j$ in $L^2(\Tt,\Cc^m).$ Let $Q_{j+1} \in H^\infty(\Dd,\CCmn)$ be chosen to satisfy, for $0\leq k \leq j,$ \begin{equation}\label{32constr} Q_{j+1} \chi_k = G\chi_k - \lambda_k\psi_k, \quad \psi_k^* Q_{j+1}=\psi_k^* G - \lambda_k \chi_k^* .\end{equation} \noindent Then each $\Gamma_j$ is a compact operator, $Q_j$ with the above properties exists, the construction terminates with $r\leq \min(m,n)$ and \begin{equation}\label{g-ag=sum} G-\mathcal{A}G= \displaystyle \sum\limits_{j=0}^{r-1} \frac{\lambda_j \psi_j \chi_j^*}{|h_j|^2}.\end{equation} \end{theorem} \noindent We shall derive a similar formula for the superoptimal analytic approximant $\mathcal{A}G$ by making use of exterior products of Hilbert spaces.
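We note in passing the sense in which formula \eqref{g-ag=sum} reflects superoptimality. The vectors $\chi_0, \cdots, \chi_{r-1}$ are pointwise orthogonal almost everywhere on $\Tt$ (by the construction of the spaces $K_j$), as are $\psi_0, \cdots, \psi_{r-1},$ and, as in \cite{Constr}, $|h_j(z)| = \|\chi_j(z)\|_{\Cc^n} = \|\psi_j(z)\|_{\Cc^m}$ almost everywhere on $\Tt.$ Hence, at almost every $z \in \Tt,$ $$(G-\mathcal{A}G)(z) = \sum\limits_{j=0}^{r-1} \lambda_j \, \frac{\psi_j(z)}{|h_j(z)|}\, \frac{\chi_j(z)^*}{|h_j(z)|}$$ is a matrix whose non-zero singular values are $\lambda_0 \geq \lambda_1 \geq \cdots \geq \lambda_{r-1},$ so that the error function of the superoptimal approximant has constant singular values almost everywhere on $\Tt.$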
\subsection{Algorithm for superoptimal analytic approximation} \label{Alg_statement} In this section we consider the superoptimal analytic approximation problem for a function $G \in H^\infty ( \Dd, \CCmn) + C(\Tt, \CCmn).$ We first state the algorithm for the solution of Problem \ref{mainproblem}; later we shall prove the claims that are made in this description of the algorithm. We will assume here the result of Peller and Young \cite{superop} that Problem \ref{mainproblem} has a unique solution (see Theorem \ref{1.8}). For convenience, we give citations of the steps in this paper where the corresponding claims are proved. \begin{proof} [\emph{\textbf{Algorithm:}}] Let $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn).$ In this subsection we shall give a fuller and more precise statement of the algorithm for $\mathcal{A}G$ outlined in the Introduction, Section \ref{intro}, in preparation for a subsequent formal proof of Theorem \ref{mathcalAG}, which asserts that if entities $r,t_i,x_i,y_i,h_i$ for $i=0,\dots,r-1,$ are generated by the algorithm, then the superoptimal approximant is given by equation $$\mathcal{A}G = \displaystyle G- \sum\limits_{i=0}^{r-1} \frac{t_i y_i x_i^*}{|h_i|^2}. $$ The proof will be by induction on $r,$ which is the least index $j\geq 0$ such that $T_j =0,$ where $T_0 = H_G, T_1,T_2,\dots$ is a sequence of operators recursively generated by the algorithm. 
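In outline, for $j \geq 1$ each step of the algorithm transforms its data according to the scheme $$T_j \neq 0 \;\longrightarrow\; (t_j, v_j, w_j) \;\longrightarrow\; (x_j, y_j, h_j) \;\longrightarrow\; (\xi_j, \eta_j) \;\longrightarrow\; (X_{j+1}, Y_{j+1}, T_{j+1}),$$ where $t_j = \|T_j\|,$ the pair $(v_j, w_j)$ arises from a Schmidt pair of the compact operator $T_j,$ the functions $x_j, y_j$ are obtained from $v_j, w_j$ by pointwise projections, $h_j$ is a scalar outer factor and $\xi_j, \eta_j$ are functions of pointwise unit norm. The construction terminates at the first index $j$ for which $T_j = 0,$ and this index is $r.$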
\\ \textbf{Step 0.} \noindent Let $ t_0 = \| H_G\| .$ \noindent If $t_0 = 0,$ then $H_G=0,$ which implies $G\in H^\infty ( \Dd, \CCmn).$ In this case, the algorithm terminates, we define $r$ to be zero and the superoptimal approximant $\mathcal{A}G$ is given by $\mathcal{A}G = G$, in agreement with the formula \begin{equation}\label{formAG} G-\mathcal{A}G = \sum\limits_{j=0}^{r-1} \frac{t_j y_j x_j^*}{x_j^*x_j} \end{equation} contained in the statement of Theorem \ref{mathcalAG} (since the sum on the right hand side of equation \eqref{formAG} is empty, and therefore by convention is interpreted as being zero). Otherwise, $t_0 > 0$. By Theorem \ref{2.04} and Lemma \ref{2.2}, $H_G$ is a compact operator and so there exists a Schmidt pair $(x_0 , y_0)$ corresponding to the singular value $t_0$ of $H_G.$ By the definition of the Schmidt pair $(x_0, y_0)$ corresponding to $t_0$ for the Hankel operator $H_G : H^2 (\Dd, \Cc^n ) \to H^2 (\Dd, \Cc^m)^\perp,$ $$x_0 \in H^2(\Dd, \Cc^n ),\quad y_0 \in H^2 (\Dd, \Cc^m)^\perp $$ are non-zero vector-valued functions such that $$H_Gx_0 = t_0 y_0\quad \mbox{ and } \quad H_G^* y_0 =t_0 x_0. $$ By Lemma \ref{2.2}, $x_0 \in H^2(\Dd,\Cc^n)$ and $\bar{z}\bar{y}_0 \in H^2(\Dd, \Cc^m)$ admit inner-outer factorizations \begin{equation}\label{xi0eta0} x_0 = \xi_0 h_0 , \quad \bar{z}\bar{y}_0 = \eta_0 h_0 \end{equation} for some scalar outer factor $h_0 \in H^2(\Dd, \Cc)$ and column matrix inner functions $\xi_0\in H^\infty(\Dd, \Cc^n)$, $ \eta_0\in H^\infty(\Dd, \Cc^m). $ Then \begin{equation}\label{equl} \|x_0 (z)\|_{\Cc^n} = |h_0(z)| = \|y_0 (z)\|_{\Cc^m} \;\; \text{almost everywhere on }\; \Tt.\end{equation} \noindent We write equations (\ref{xi0eta0}) as \begin{equation}\label{31} \xi_0 = \frac{x_0}{h_0} \; , \; \eta_0 = \frac{\bar{z}\bar{y}_0}{h_0}. \end{equation} By equations (\ref{equl}) and (\ref{31}), \begin{equation}\label{xi0eta0=1} \| \xi_0 (z) \|_{\Cc^n} =1= \| \eta_0(z) \|_{\Cc^m}\; \text{almost everywhere on}\; \Tt. 
\end{equation} \noindent Since $t_0\neq0,$ by Lemma \ref{2.2}, every function $Q_1 \in H^\infty(\Dd, \CCmn)$ which is at minimal distance from $G$ satisfies \begin{equation}\label{G-Q0}(G-Q_1)x_0 = t_0 y_0,\quad y_0^* (G-Q_1) = t_0 x_0^*. \end{equation} \textbf{Step 1.} Let \begin{equation}\label{X_0} X_1 \stackrel{\text{def}}{=} \xi_0 \dot{\we} H^2(\Dd,\Cc^n ).\end{equation} By Proposition \ref{xjsubseth2}, $ X_1$ is a closed subspace of $H^2(\Dd, \we^2 \Cc^n )$. \noindent Similarly, $$\eta_0 \telwe zH^2(\Dd,\Cc^m) \subset zH^2(\Dd,\we^2\Cc^m) $$ and therefore $$\bar{\eta}_0 \telwe \overline{z H^2(\Dd,\Cc^m)} \subset \bar{z} \overline{H^2(\Dd,\we^2\Cc^m)}, $$ that is, if \begin{equation}\label{Y_0} Y_1 \stackrel{\text{def}}{=} \bar{\eta}_0 \dot{\we} H^2(\Dd, \Cc^m)^\perp,\end{equation} then $$ Y_1 \subset H^2 (\Dd, \we^2 \Cc^m)^\perp.$$ By Proposition \ref{clwe}, $ Y_1$ is a closed subspace of $H^2 (\Dd, \we^2 \Cc^m)^\perp .$ \noindent Choose any function $Q_{1} \in H^\infty(\Dd, \CCmn)$ which satisfies the equations \eqref{G-Q0}. Consider the operator $T_1 : X_1 \to Y_1$ defined by \begin{equation}\label{T_0} T_1 ( \xi _0 \dot{\we} x ) = P_{Y_1} (\bar{\eta}_0\dot{\we} (G-Q_1)x) \; \text{for all} \; x\in H^2(\Dd, \Cc^n ),\end{equation}where $P_{Y_1}$ is the projection from $L^2(\Tt, \we^2 \Cc^m)$ on $Y_1.$ By Corollary \ref{projwell} and Proposition \ref{Twell}, $T_1$ is well-defined.
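It may be helpful to see the operator $T_1$ concretely in the simplest non-trivial case $n = m = 2.$ Then $\we^2 \Cc^2$ is one-dimensional and, under the natural identification $\we^2 \Cc^2 \cong \Cc,$ the pointwise wedge product of $\xi_0 = \begin{pmatrix} \xi_0^{(1)} & \xi_0^{(2)} \end{pmatrix}^T$ with $x = \begin{pmatrix} x_1 & x_2 \end{pmatrix}^T$ is the scalar function $$\xi_0 \dot{\we} x = \xi_0^{(1)} x_2 - \xi_0^{(2)} x_1,$$ so that $X_1$ and $Y_1$ become spaces of scalar functions and $T_1$ an operator between them. This identification is used in the worked example of Section \ref{applic}.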
If $T_1=0,$ then the algorithm terminates, we define $r$ to be $1$ and, in agreement with Theorem \ref{mathcalAG}, the superoptimal approximant $\mathcal{A}G$ is given by the formula $$G- \mathcal{A}G = \displaystyle\sum\limits_{i=0}^{r-1}\frac{t_i y_i x_i^*}{|h_i|^2}= \frac{t_0 y_0 x_0^*}{|h_0|^2}, $$ and the solution is $$\mathcal{A}G =G - \frac{t_0 y_0 x_0^*}{|h_0|^2} .$$ If $T_1\neq 0,$ let $t_{1} = \|T_1\| >0.$ By Theorem \ref{T0compact}, $T_1$ is a compact operator and so there exist $v_1 \in H^2(\Dd, \Cc^n),\; w_1 \in H^2(\Dd, \Cc^m)^\perp$ such that $(\xi_0 \telwe v_1 , \bar{\eta}_0 \telwe w_1) $ is a Schmidt pair for $T_1$ corresponding to $t_1.$ Let $h_1$ be the scalar outer factor of $ \xi_0 \telwe v_1$ and let \begin{equation} x_{1} = ( I_{\Cc^n} - \xi_0 \xi_0^*)v_1, \;\; y_1= (I_{\Cc^m} - \bar{\eta}_0 \eta_0^T)w_1, \end{equation} where $\mathrm{I}_{\Cc^n}$ and $\mathrm{I}_{\Cc^m}$ are the identity operators in $\Cc^n$ and $\Cc^m$ respectively.\index{identity operator} Then, by Proposition \ref{onxi}, \begin{equation}\label{x1y1h1} \|x_1 (z)\|_{\Cc^n} = |h_1(z)| = \|y_1 (z)\|_{\Cc^m}\;\text{almost everywhere on}\; \Tt.\end{equation} By Theorem \ref{1.8}, there exists a function $Q_2 \in H^\infty(\Dd, \CCmn)$ such that both\newline $s_0^\infty(G-Q_2)$ and $s_1^\infty(G-Q_2)$ are minimized and $$ s_1^\infty(G-Q_2)=t_1. $$ By Proposition \ref{g-q1y1t1x1}, any such $Q_2$ satisfies \begin{equation}\label{3111} \begin{aligned}(G-Q_2) x_0 = t_0 y_0 , \quad y_0^* (G-Q_2) = t_0 x_0^* \\ (G-Q_2)x_1 = t_1 y_1 , \quad y_1^* (G-Q_2)=t_1 x_1^*.\end{aligned} \end{equation} Choose any function $Q_{2} \in H^\infty(\Dd, \CCmn)$ which satisfies the equations \eqref{3111}. Define \begin{equation}\label{311} \xi_1 = \frac{x_1}{h_1} , \quad \eta_1 = \frac{\bar{z}\bar{y}_1}{h_1}.
\end{equation} By equations (\ref{x1y1h1}) and (\ref{311}), $\| \xi_1(z) \|_{\Cc^n} =1= \| \eta_1(z) \|_{\Cc^m}$ almost everywhere on $\Tt.$\\ \textbf{Step 2.} Define $$\begin{array}{cllll} X_2 &\stackrel{\text{def}}{=} \xi_0 \dot{\we} \xi_1 \dot{\we} H^2(\Dd, \Cc^n)\vspace{2ex}\\ Y_2 &\stackrel{\text{def}}{=} \bar{\eta}_0\dot{\we} \bar{\eta}_1 \dot{\we} H^2(\Dd, \Cc^m)^\perp . \end{array}$$ Note that, by Proposition \ref{xjclosed}, $X_2$ is a closed linear subspace of $H^2(\Dd, \we^3 \Cc^n),$ and, by Proposition \ref{clwegen}, $Y_2$ is a closed linear subspace of $H^2 (\Dd, \we^3 \Cc^m)^\perp.$ \noindent Now consider the operator $T_2 : X_2 \to Y_2$ given by \begin{equation}\label{T1} T_2 ( \xi _0 \dot{\we} \xi_1\dot{\we} x ) = P_{Y_2} (\bar{\eta}_0\dot{\we} \bar{\eta}_1\dot{\we} (G-Q_2)x),\end{equation} where $P_{Y_2}$ is the projection from $L^2(\Tt,\we^3\Cc^m)$ on $Y_2.$ \noindent By Corollary \ref{projwellgen} and Proposition \ref{Twell}, $T_2$ is well defined, that is, it does not depend on the choice of $Q_2 \in H^\infty(\Dd,\CCmn)$ satisfying equations \eqref{3111}. If $T_2 =0,$ then the algorithm terminates, we define $r$ to be $2$ and, according to Theorem \ref{mathcalAG}, the superoptimal approximant $\mathcal{A}G$ is given by the formula $$G- \mathcal{A}G = \sum\limits_{i=0}^{r-1} \frac{t_i y_i x_i^*}{|h_i|^2}= \frac{t_0 y_0 x_0^*}{|h_0|^2}+ \frac{t_1 y_1 x_1^*}{|h_1|^2}.$$ If $T_2 \neq 0$, then let $t_2= \|T_2\|.$ By Theorem \ref{T2compactt}, $T_2$ is a compact operator and hence there exist $v_2 \in H^2(\Dd,\Cc^n), \; w_2 \in H^2(\Dd,\Cc^m)^\perp$ such that $$( \xi _0 \dot{\we} \xi_1\dot{\we} v_2 ,\; \bar{\eta}_0\dot{\we} \bar{\eta}_1\dot{\we} w_2)$$is a Schmidt pair for $T_2$ corresponding to $\|T_2\|=t_2.$ \\ Let $h_2$ be the scalar outer factor of $ \xi _0 \dot{\we} \xi_1\dot{\we} v_2$.
Note that, by Proposition \ref{xjclosed}, $ \xi _0 \dot{\we} \xi_1\dot{\we} v_2 \in H^2(\Dd, \we^3 \Cc^n).$ Let \begin{equation} x_{2} = ( I_{\Cc^n} - \xi_0 \xi_0^*-\xi_1 \xi_1^* )v_2, \;\; y_2= (I_{\Cc^m} - \bar{\eta}_0 \eta_0^T - \bar{\eta}_1 \eta_1^T)w_2. \end{equation} Then, by Proposition \ref{onxi}, \begin{equation}\label{x2y2h2} \|x_2 (z)\|_{\Cc^n} = |h_2(z)| = \|y_2 (z)\|_{\Cc^m}\;\text{almost everywhere on}\; \Tt.\end{equation} Define \begin{equation}\label{xisetais}\xi_2 = \frac{x_2}{h_2} , \; \eta_2 = \frac{\bar{z} \bar{y}_2}{h_2}. \end{equation} \noindent Clearly $\|\xi_2(z)\|_{\Cc^n}=1 $ and $\|\eta_2(z)\|_{\Cc^m}=1$ almost everywhere on $\Tt.$\\ \textbf{Recursive step.} Suppose that, for $j \le \min(m,n) -2$, we have constructed \begin{equation}\label{rec_step}\begin{aligned} &t_0 \geq t_1 \geq \cdots \geq t_j > 0\\ &x_0, x_1, \cdots, x_j \in L^2 (\Tt, \Cc^n)\\ &y_0 , y_1 , \cdots , y_j \in L^2(\Tt, \Cc^m) \\ &h_0, h_1, \cdots, h_j \in H^2(\Dd,\Cc) \; \text{outer}\\ & \xi_0,\xi_1, \cdots, \xi_j \in L^\infty(\Tt,\Cc^n)\; \text{pointwise orthonormal on}\;\Tt\\ & \eta_0, \eta_1, \cdots , \eta_j \in L^\infty(\Tt,\Cc^m)\; \text{pointwise orthonormal on}\;\Tt\\ &X_0 = H^2(\Dd,\Cc^n),X_1, \cdots, X_j \\ &Y_0 = H^2(\Dd,\Cc^m)^\perp, Y_1, \cdots, Y_j\\ &T_0, T_1, \cdots, T_j \; \text{compact operators}.\end{aligned}\end{equation} By Theorem \ref{1.8}, there exists a function $Q_{j+1} \in H^\infty(\Dd, \CCmn)$ such that $$\left(s_0^\infty(G-Q_{j+1}), s_1^\infty(G-Q_{j+1}), \cdots , s_{j+1}^\infty(G-Q_{j+1})\right)$$ is lexicographically minimized. By Proposition \ref{g-qjj}, any such function $Q_{j+1}$ satisfies \begin{equation}\label{g-qi}(G-Q_{j+1})x_i = t_i y_i, \quad y_i^* (G-Q_{j+1}) = t_i x_i^*, \quad i=0, 1, \cdots, j. \end{equation} \index{$X_{j}$} Choose any function $Q_{j+1} \in H^\infty(\Dd, \CCmn)$ which satisfies the equations \eqref{g-qi}.
Define \begin{equation}\label{X_j} X_{j+1} \stackrel{\text{def}}{=} \xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} H^2(\Dd,\Cc^n) \end{equation} \index{$Y_{j}$} \begin{equation}\label{Y_j} Y_{j+1} \stackrel{\text{def}}{=} \bar{\eta}_0 \dot{\we} \bar{\eta}_1 \dot{\we} \cdots \dot{\we} \bar{\eta}_j \dot{\we} H^2 (\Dd, \Cc^m)^\perp .\end{equation} Note that, by Proposition \ref{xjsubseth2}, $X_{j+1}$ is a subset of $H^2(\Dd,\we^{j+2}\Cc^n),$ and, by Proposition \ref{etatelwelj}, $Y_{j+1}$ is a closed subspace of $H^2 (\Dd, \we^{j+2} \Cc^m)^\perp.$ \index{$T_{j}$} \noindent Consider the operator $$T_{j+1} : X_{j+1} \to Y_{j+1}$$ given by \begin{equation}\label{T_j} T_{j+1}(\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} x)= P_{Y_{j+1}} \left( \bar{\eta}_0 \dot{\we} \bar{\eta}_1 \dot{\we} \cdots \dot{\we} \bar{\eta}_j\dot{\we} (G-Q_{j+1})x \right).\end{equation} By Corollary \ref{projwellgen} and Proposition \ref{Twell}, $T_{j+1}$ is well-defined and does not depend on the choice of $Q_{j+1}$ subject to equations \eqref{g-qi}. 
If $T_{j+1}=0,$ then the algorithm terminates, we define $r$ to be $j+1,$ and, according to Theorem \ref{mathcalAG}, the superoptimal approximant $\mathcal{A}G$ is given by the formula $$G- \mathcal{A}G = \sum\limits_{i=0}^{r-1} \frac{t_i y_i x_i^*}{|h_i|^2}.$$ Otherwise, we define $t_{j+1} = \|T_{j+1}\| >0.$ By Theorem \ref{T0compact}, $T_{j+1}$ is a compact operator and hence there exist $v_{j+1} \in H^2(\Dd,\Cc^n), \; w_{j+1} \in H^2(\Dd,\Cc^m)^\perp$ such that \begin{equation}\label{schmpairtj+1}(\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} v_{j+1}, \bar{\eta}_0 \dot{\we} \bar{\eta}_1 \dot{\we} \cdots \dot{\we} \bar{\eta}_j \dot{\we} w_{j+1})\end{equation} is a Schmidt pair for $T_{j+1}$ corresponding to the singular value $t_{j+1}.$ \noindent Let $h_{j+1}$ be the scalar outer factor of $\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} v_{j+1},$ and let \begin{equation}\label{xj+1yj+1}x_{j+1} = (I_{\Cc^n} - \xi_0 \xi_0^* - \cdots - \xi_j \xi_j^*)v_{j+1}, \quad y_{j+1} = (I_{\Cc^m} - \bar{\eta}_0 \eta_0^T - \cdots- \bar{\eta}_j\eta_j^T) w_{j+1}.\end{equation} Then, by Proposition \ref{onxi}, \begin{equation}\label{xjyjhj} \index{$x_{j}$} \index{$y_{j}$} \|x_{j+1} (z)\|_{\Cc^n} = |h_{j+1}(z)| = \|y_{j+1} (z)\|_{\Cc^m}\;\text{almost everywhere on}\; \Tt.\end{equation} We define \begin{equation}\label{xij+1etaj+1} \xi_{j+1} = \frac{x_{j+1}}{h_{j+1}}, \quad \eta_{j+1}=\frac{\bar{z}\overline{y_{j+1}}}{h_{j+1}}.\end{equation} \noindent Clearly $\|\xi_{j+1}(z)\|_{\Cc^n}=1 $ and $\|\eta_{j+1}(z)\|_{\Cc^m}=1$ almost everywhere on $\Tt.$\\ This completes the recursive step. 
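The bound $r \leq \min(m,n)$ asserted below can be seen directly from the exterior powers: $X_{j+1} \subseteq H^2(\Dd, \we^{j+2}\Cc^n)$ and $Y_{j+1} \subseteq H^2(\Dd, \we^{j+2}\Cc^m)^\perp,$ while $\we^p \Cc^k = \{0\}$ whenever $p > k.$ Consequently $$X_{j+1} = \{0\} \quad\text{whenever}\quad j+2 > n, \qquad Y_{j+1} = \{0\} \quad\text{whenever}\quad j+2 > m,$$ and in either case $T_{j+1} = 0.$ Thus $T_{j+1} = 0$ as soon as $j+1 \geq \min(m,n).$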
The algorithm terminates after at most $\min(m,n)$ steps, so that $r \leq \min(m,n)$ and, in accordance with Theorem \ref{mathcalAG}, the superoptimal approximant $\mathcal{A}G$ is given by the formula $$G - \mathcal{A}G = \sum\limits_{i=0}^{r-1} \frac{ t_i y_i x_i^*}{|h_i|^2}.$$ \end{proof} \section{Application of the algorithm}\label{applic} Let us now apply the new algorithm to the example Peller and Young solved in \cite{Constr}. \begin{problem}\label{applic-ex} Let $G=B^{-1}A \in L^\infty(\Tt,\mathbb{C}^{2\times 2})$ where $$A(z)= \begin{pmatrix} \sqrt{3}+2z \hspace{2ex} &0\\ 0 \hspace{2ex} &1 \end{pmatrix}, \;\; B(z) = \displaystyle\frac{1}{\sqrt{2}} \begin{pmatrix} z^2 \hspace{2ex} &z\\ z \hspace{2ex} &-1 \end{pmatrix} \qquad \mbox{ for all } z\in\Tt. $$ Find the superoptimal singular values of $G$ and its superoptimal approximant $\mathcal{A}G \in H^{\infty},$ that is, the unique $\mathcal{A}G \in H^\infty(\Dd, \mathbb{C}^{2\times 2})$ such that the sequence $$s^\infty(G-\mathcal{A}G) = (s_0^\infty ( G-\mathcal{A}G) , s_1^\infty(G-\mathcal{A}G), \cdots ) $$ is lexicographically minimized. \end{problem} On $\Tt,$ the function $G$ is given by $$ G(z) = \displaystyle\frac{1}{\sqrt{2}}\begin{pmatrix} \sqrt{3}\bar{z}^2 +2\bar{z} \hspace{2ex} &\bar{z}\\ \sqrt{3}\bar{z}+2 \hspace{2ex} &-1 \end{pmatrix}.
$$ \textbf{Step 0:} The operator $H_G^* H_G$ has, with respect to the orthonormal basis $$ B=\left\{ \begin{pmatrix} 1\\ 0 \end{pmatrix},\begin{pmatrix} z\\ 0 \end{pmatrix}, \begin{pmatrix} 0\\ 1 \end{pmatrix},\begin{pmatrix} 0\\ z \end{pmatrix} \right\} $$of $(z^2H^2(\Dd,\Cc^2))^\perp,$ the matrix representation $$H^*_G H_G \sim \displaystyle\frac{1}{2} \begin{pmatrix} 10 \hspace{2ex} &2\sqrt{3} \hspace{2ex} &2 \hspace{2ex} &0\\ 2\sqrt{3} \hspace{2ex} &3 \hspace{2ex} &\sqrt{3} \hspace{2ex} &0\\ 2 \hspace{2ex} &\sqrt{3} \hspace{2ex} &1 \hspace{2ex} &0\\ 0 \hspace{2ex} &0 \hspace{2ex} &0 \hspace{2ex} &0\\ \end{pmatrix}.$$ One can check that this matrix has eigenvalues $6,$ $1,$ $0$ and $0.$ Then $\|H_G\| = \sqrt{6}$ and a non-zero vector $ x_0 \in H^2(\Dd,\Cc^2)$ such that $$\|H_Gx_0\|_{H^{2}(\Dd,\Cc^2)^\perp} = \|H_G\| \|x_0\|_{H^2(\Dd,\Cc^2)} $$ is $$x_0(z)= \begin{pmatrix} 4+\sqrt{3}z\\ 1 \end{pmatrix}.$$ For $(x_0,y_0)$ to be a Schmidt pair for $H_G$ corresponding to $\|H_G\|,$ the vector $y_0 \in H^2(\Dd,\Cc^2)^\perp$ can be calculated by $$y_0 (z)= \frac{H_G x_0(z)}{\|H_G\|} = \displaystyle 2\bar{z} \begin{pmatrix} \bar{z} + \sqrt{3}\\ 1 \end{pmatrix} \in H^2(\Dd,\Cc^2)^\perp.
$$ Perform the inner-outer factorizations $$x_0=\xi_{0}h_0,\quad \bar{z}\bar{y}_0=\eta_{0}h_0$$ for some inner $\xi_{0}, \eta_{0} \in H^{\infty}(\Dd,\Cc^2)$ and some scalar outer $h_0 \in H^2(\Dd,\Cc).$ In this example $$\xi_0(z) = \frac{x_0}{h_0}= \frac{a}{4\sqrt{3}(1-\gamma z)}\begin{pmatrix} 4 +\sqrt{3}z \\1 \end{pmatrix},$$ $$\bar{\eta}_0(z) =\frac{zy_0}{\bar{h}_0} = \frac{2a}{4\sqrt{3}(1-\gamma \bar{z})} \begin{pmatrix} \bar{z} + \sqrt{3}\\ 1 \end{pmatrix} ,$$ where $$h_0(z) = \frac{4\sqrt{3}}{a}(1-\gamma z) ,$$ $a= \sqrt{10-2\sqrt{13}}$ and $\gamma= -\frac{a^2}{4\sqrt{3}}.$ A function $Q_1 \in H^\infty(\Dd,\Cc^{2\times 2})$ that satisfies $$(G-Q_1)x_0 = t_0 y_0 ,\quad (G-Q_1)^*y_0 = t_0 x_0 $$ is $$ Q_1(z) = \begin{pmatrix} 0 & \sqrt{6} \\ 2\sqrt{2} & -\sqrt{6}(z +\sqrt{3}) \end{pmatrix} .$$ \textbf{Step 1:} Let $X_1= \xi_0 \telwe H^2(\Dd,\Cc^2)$ and $Y_1 = \bar{\eta}_0 \telwe H^2(\Dd,\Cc^2)^\perp.$ Let $T_1:X_1 \to Y_1$ be given by $$T_1 (\xi_0 \telwe x) = P_{Y_1}(\bar{\eta}_0 \telwe (G-Q_1)x)$$ for all $x \in H^2(\Dd,\Cc^2).$ Note that $$\begin{array}{cll}X_1&=\left\{\xi_0 \telwe \begin{pmatrix} f_1 \\f_2 \end{pmatrix}\;:\; f_i \in H^2(\Dd,\Cc) \right\} \vspace{2ex} \\ & = \left\{\displaystyle\frac{a}{4\sqrt{3}} \frac{(4+\sqrt{3}z)f_2-f_1}{1-\gamma z}\; : \; f_i \in H^2(\Dd,\Cc)\right\} .\end{array}$$ If we choose $$ f_1 = -\frac{4\sqrt{3}}{a}(1-\gamma z)g\quad\text{and}\quad f_2 =0$$ for some $g\in H^2(\Dd,\Cc),$ we obtain $X_1 = H^2(\Dd,\Cc).$ Also $$\begin{array}{cll} Y_1 &= \left\{\bar{\eta}_0 \telwe \begin{pmatrix} \bar{z} \bar{\phi}_1\\ \bar{z} \bar{\phi}_2 \end{pmatrix}\; : \; \phi_i \in H^2(\Dd,\Cc)\right\}\vspace{2ex}\\& = \left\{\displaystyle\frac{a\bar{z}}{2\sqrt{3}} \frac{(\bar{z}+\sqrt{3})\bar{\phi}_2 - \bar{\phi}_1}{1-\gamma \bar{z}}\; : \; \phi_i \in H^2(\Dd,\Cc) \right\}.\end{array} $$ If we choose $$\phi_1 =-\frac{2\sqrt{3}}{a}(1-\gamma z)\psi, \quad \text{and}\quad \phi_2=0 $$ for some $\psi \in H^2(\Dd,\Cc),$ we find that $Y_1 = 
H^2(\Dd,\Cc)^\perp.$ We have $$T_1\left(\xi_0 \telwe \begin{pmatrix} f_1 \\ f_2\end{pmatrix} \right)=T_1\left(\xi_0 \telwe \begin{pmatrix} -\frac{4\sqrt{3}}{a} (1-\gamma z)g \\ 0\end{pmatrix} \right) = \frac{u(\gamma)}{z-\gamma}$$ where $$u(\gamma) = \sqrt{2}(1-\gamma^2)(2\sqrt{3}\gamma +1)g(\gamma).$$ Then $t_1= \|T_1\| = \sqrt{2}(4-\sqrt{13}) .$ Since $T_1 $ is a compact operator, there exist $v_1 \in H^2(\Dd,\Cc^2),$ $w_1 \in H^2(\Dd,\Cc^2)^\perp$ such that $$T_1(\xi_0\telwe v_1) = t_1 (\bar{\eta}_0 \telwe w_1) , \quad T_1^* (\bar{\eta}_0 \telwe w_1) = t_1 (\xi_0 \telwe v_1). $$ Here we can choose $$v_1(z) = \frac{4\sqrt{3}}{a}\begin{pmatrix}-1 \\ 0 \end{pmatrix}, \quad w_1(z)= \frac{2\sqrt{3}}{a}\bar{z}\begin{pmatrix} 1 \\ 0 \end{pmatrix}.$$ \noindent Perform the inner-outer factorisation of $\xi_0 \telwe v_1\in H^2(\Dd,\we^2\Cc^2).$ The function $h_1(z)= \frac{1}{1-\gamma z}$ is the scalar outer factor of $\xi_0 \telwe v_1.$ \noindent Let $$x_1 = (I - \xi_0(z)\xi_0^*(z))v_1(z), \quad y_1(z)= (I-\bar{\eta}_0(z) \eta_0^T(z))w_1(z).$$ \noindent Then $$x_1 = \frac{\gamma}{\alpha} \frac{1}{(1-\gamma z)(1-\gamma \bar{z})} \begin{pmatrix} \frac{-4\sqrt{3}}{\gamma} (1-\gamma z)(1-\gamma \bar{z})-19-4\sqrt{3}(z+\bar{z})\\ -4-\sqrt{3}\bar{z} \end{pmatrix} $$ and $$y_1 = \frac{2\gamma \bar{z}}{\alpha} \frac{1}{(1-\gamma z)(1-\gamma \bar{z})} \begin{pmatrix} \frac{\sqrt{3}}{\gamma} (1-\gamma z)(1-\gamma \bar{z})+4+\sqrt{3}(z+\bar{z})\\ z+\sqrt{3} \end{pmatrix} .$$ Calculations yield $$x_1 = \frac{\gamma}{\alpha} \frac{1}{(1-\gamma z)(1-\gamma \bar{z})} \begin{pmatrix} 1\\ -4-\sqrt{3}\bar{z} \end{pmatrix}, \quad y_1 = \frac{2\gamma \bar{z}}{\alpha} \frac{1}{(1-\gamma z)(1-\gamma \bar{z})} \begin{pmatrix} -1\\ z+\sqrt{3} \end{pmatrix}.$$ \noindent The algorithm stops after at most $ \min(m,n)$ steps, hence in this case after $2$ steps. 
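For the reader's convenience, here is a sketch of the simplification of the first component of $x_1$; the remaining components are treated similarly. On $\Tt$ we have $(1-\gamma z)(1-\gamma \bar{z}) = 1 + \gamma^2 - \gamma(z+\bar{z}),$ and, since $\gamma = -\frac{a^2}{4\sqrt{3}}$ and $a^4 - 20a^2 + 48 = 0$ (as $a^2 = 10 - 2\sqrt{13}$), $$-\frac{4\sqrt{3}}{\gamma}(1+\gamma^2) = \frac{48(1+\gamma^2)}{a^2} = \frac{48 + a^4}{a^2} = \frac{20a^2}{a^2} = 20.$$ Hence $$-\frac{4\sqrt{3}}{\gamma}(1-\gamma z)(1-\gamma \bar{z}) - 19 - 4\sqrt{3}(z+\bar{z}) = 20 + 4\sqrt{3}(z+\bar{z}) - 19 - 4\sqrt{3}(z+\bar{z}) = 1,$$ in agreement with the displayed expression for $x_1.$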
Then, by Theorem \ref{Tkismultipleofhankel}, the unique analytic superoptimal approximant $\mathcal{A}G$ is given by the formula $$\mathcal{A}G = G - \frac{t_0 y_0 x_0^*}{|h_0|^2}- \frac{t_1 y_1 x_1^*}{|h_1|^2} .$$ All terms of $\mathcal{A}G$ can now be calculated, to give $$\mathcal{A}G= \frac{\sqrt{2}}{1-\gamma z} \begin{pmatrix} -\gamma & \sqrt{3}+4\gamma \\ 2+\gamma \sqrt{3} -\gamma z & -(\sqrt{3}+4\gamma)(\sqrt{3}+z) \end{pmatrix} ,$$ which is the \textit{unique superoptimal analytic approximant} for the given $G$ in Problem \ref{applic-ex}. \section*{Contents} \label{contents} \ref{intro}. Introduction \hfill Page \pageref{intro} \ref{history}. History and recent work \hfill \pageref{history} \ref{exterior}. Exterior powers of Hilbert spaces \hfill \pageref{exterior} \hspace*{1cm} \ref{ext_powers}. Exterior powers \hfill \pageref{ext_powers} \hspace*{1cm} \ref{point_w_p}. Pointwise wedge product and pointwise creation operators \hfill \pageref{point_w_p} \ref{statet_algorithm}. Superoptimal analytic approximation \hfill \pageref{statet_algorithm} \hspace*{1cm} \ref{known}. Known results \hfill \pageref{known} \hspace*{1cm} \ref{Alg_statement}. Algorithm for superoptimal analytic approximation \hfill \pageref{Alg_statement} \ref{orthonomal}. Pointwise orthonormality of $\{\xi_i\}_{i=1}^j$ and $\{\bar{\eta}_i\}_{i=1}^j$ almost everywhere on $\Tt$ \hfill \pageref{orthonomal} \ref{Xjsubset}. The closed subspace $X_{j+1}$ of $H^2(\Dd,\we^{j+2}\Cc^n)$ \hfill \pageref{Xjsubset} \ref{Y_j-closed}. The closed subspace $Y_{j+1}$ of $H^2(\Dd,\we^{j+2}\Cc^m)^\perp$ \hfill \pageref{Y_j-closed} \ref{Tj-well-def}. $T_j$ is a well-defined operator \hfill \pageref{Tj-well-def} \ref{T_1-compact}. Compactness of the operators $T_1$ and $T_2$ \hfill \pageref{T_1-compact} \ref{T_j-compact}. Compactness of the operator $T_j$ \hfill \pageref{T_j-compact} \ref{applic}.
Application of the algorithm \hfill \pageref{applic} References \hfill\pageref{bibliog} \section{Exterior powers of Hilbert spaces}\label{exterior} In this section we recall the well-established notion of the wedge product of Hilbert spaces. One can find definitions and properties of wedge products in \cite{depillis}, \cite{Greub}, \cite{pavan}, \cite{Wed} and \cite{Sim1,Sim2}. Here we present a concise version of this theory which we need for the new superoptimal algorithm. \subsection{Exterior powers}\label{ext_powers} In this subsection, we first present some results concerning the action of permutation operators on tensors, then we recall the definition of antisymmetric tensors and we define an inner product on the space of all antisymmetric tensors. In the following $E$ denotes a Hilbert space. We shall assume known the notion of the algebraic tensor product of vector spaces, which is precisely explained in \cite{Tensor}. For the Hilbert space tensor product of Hilbert spaces, see \cite{Dixmier}. One can find proofs of many statements given below, in our paper \cite{YCL20-3}. \begin{definition} $\otimes^{p}E$ \index{$\otimes^{p}E$} is the $p$-fold algebraic tensor product of $E$ and is spanned by tensors of the form $x_1 \otimes x_2 \otimes \dots \otimes x_p,$ where $ x_j \in E$ for $j=1, \dots, p.$ \vspace{2ex} \end{definition} \begin{definition}\label{inner} An inner product on $\otimes^{p}E$ is defined on elementary tensors by $$\langle x_1 \otimes x_2 \otimes \dots \otimes x_p, y_1 \otimes y_2 \otimes \dots \otimes y_p \rangle_{\otimes^{p}E}=p!\langle x_1 , y_1\rangle_E \cdots \langle x_p, y_p\rangle_E,$$ \vspace{3ex} for any $x_1, \dots, x_p, y_1, \dots, y_p \in E$, and is extended to $\otimes^p E$ by sesqui-linearity. 
\end{definition} \begin{definition} $\otimes_H^p E$ is the completion of $\otimes^p E$ with respect to the norm $\|u\| = \langle u, u\rangle^{1/2}_{\otimes^{p}E},$ for $u \in \otimes^p E.$ \end{definition} \begin{definition}\label{sigma} Let $\mathfrak{S}_p$ denote the \index{symmetric group} symmetric group on $\{1,\dots,p\},$ with the operation of composition. For $\sigma \in \mathfrak{S}_p$, we define \[ S_\sigma : \otimes^p E \to \otimes^p E \] on elementary tensors by $$\displaystyle S_\sigma(x_1 \otimes x_2 \otimes \dots \otimes x_p)=x_{\sigma(1)} \otimes x_{\sigma(2)} \otimes \dots \otimes x_{\sigma(p)},$$ and we extend $S_\sigma$ to $\otimes^p E$ by linearity, that is, for $u= \sum_{i=1}^n \la_i x_1^i \otimes \cdots \otimes x_p^i,$ we define $$S_\sigma (u) = \sum\limits_{i=1}^{n} \la_i S_\sigma (x_1^i \otimes \cdots \otimes x_p^i )$$ for any $x_j^i \in E$ and $\la_i\in\C$. \vspace{2ex} \end{definition} \begin{remark} Clearly if $\sigma$ is a bijective self-map of $\{1, \dots, p\}$, then so is its inverse map $\sigma^{-1}$, and $(\mathfrak{S}_p, \circ)$ is a group under composition. Moreover, $$\sigma \circ \sigma^{-1} = \id = \sigma^{-1} \circ \sigma,$$ where $\id \in \mathfrak{S}_p$ is the identity map on $\{1,\dots,p\}.$ Then, if $\epsilon_{\sigma}$ denotes the signature of the permutation $\sigma$, $$\epsilon_{\sigma \circ \sigma^{-1}} = \epsilon_\sigma \epsilon_{\sigma^{-1}} =1, $$ hence $\epsilon_\sigma= \epsilon_{\sigma^{-1}}.$ \end{remark} \begin{proposition}\label{a.3} Let $E$ be a Hilbert space, and let $p$ be any positive integer. Then, for any $\sigma\in \mathfrak{S}_p$, $S_\sigma$ is a linear operator on the normed space $(\otimes^p E , \| \cdot \|)$, which extends to an isometry $\mathbf{S}_\sigma$ on $(\otimes^p_H E,\| \cdot \|) $.
Furthermore, $\mathbf{S}_\sigma$ is a unitary operator on $\otimes_H^p E $, $\mathbf{S}_\sigma^*= \mathbf{S}_{\sigma\inv}$, and therefore \[ \mathbf{S}_\sigma^*\mathbf{S}_\sigma = \mathbf{S}_{\sigma\inv} \mathbf{S}_\sigma = I \] is the identity operator on $\otimes^p_H E$. \end{proposition} Henceforth we shall denote the extended operator $\mathbf{S}_\sigma$ by $S_\sigma$. \begin{definition}\label{a.4} A tensor $u \in \otimes_H^{p}E$ is said to be \emph{symmetric} if $S_\sigma(u)=u$ for all $\sigma \in \mathfrak{S}_p.$ \index{symmetric tensor} A tensor $u \in \otimes_H^{p}E$ is said to be \emph{antisymmetric} if $u=\epsilon_{\sigma}S_\sigma u$ for all $\sigma \in \mathfrak{S}_p$ \index{antisymmetric tensor} where $\epsilon_{\sigma}$ is the signature of $\sigma.$ \end{definition} \begin{definition} The space of all antisymmetric tensors in $\otimes_H^{p}E$ will be denoted by $\wedge^p E$. \index{$\wedge^p E$} \end{definition} \begin{theorem}\label{a.10} Let $E$ be a Hilbert space. Then $\bigwedge^{p}E$ is a closed linear subspace of the Hilbert space $\otimes_H^{p}E$ for any $p \geq 2.$ \end{theorem}\vspace{2ex} \begin{proof} For $\sigma \in \mathfrak{S}_p$ define the operator $$ f_\sigma \stackrel{\emph{def}}{=} S_\sigma - \epsilon_\sigma I\;\colon \otimes_H^p E \to \otimes_H^p E, $$ where $I$ denotes the identity operator on $ \otimes_H^p E $. Since $S_\sigma$ is a continuous linear operator on $\otimes_H^pE$, $f_\sigma$ is a continuous linear operator. The kernel of the operator $f_\sigma$ is $$ \begin{array}{cllllllllll} \ker f_\sigma &=\{ u \in \otimes_H^p E \colon (S_\sigma - \epsilon_\sigma I)(u)= 0\} \\ &= \{ u \in \otimes_H^pE \colon \epsilon_\sigma S_\sigma(u)= u\}. 
\end{array} $$ \noindent Since $f_\sigma$ is a continuous linear operator on $\otimes_H^p E,$ $\ker f_\sigma$ is a closed linear subspace of $\otimes_H^p E.$ Thus $\we^p E$ is a closed linear subspace of $\otimes_H^p E,$ since $$ \we^p E = \{u \in \otimes_H^p E \; \colon\epsilon_\sigma S_\sigma(u) = u \mbox{ for all } \sigma \in \mathfrak{S}_p\} = \bigcap\limits_{\sigma \in \mathfrak{S}_p} \ker f_\sigma. $$ \end{proof} Theorem \ref{a.10} implies that the orthogonal projection from $\otimes^p_H E$ onto $\we^p E$ is well defined on $\otimes^p_H E$. \begin{definition}\label{a.5} Let $E$ be a Hilbert space. For $x_1, \dots, x_p \in E,$ define $x_1 \wedge x_2 \wedge \dots \wedge x_p$ to be the orthogonal projection of the \index{elementary tensor}elementary tensor $x_1 \otimes x_2 \otimes \dots \otimes x_p$ onto $\wedge^p E$, that is $$x_1 \wedge x_2 \wedge \dots \wedge x_p= P_{\we^p E}( x_1 \otimes \cdots \otimes x_p).$$ \end{definition} One can find a proof of the following statement in \cite{YCL20-3}. \begin{theorem}\label{a.6} Let $E$ be a Hilbert space. For all $u \in \otimes_H^p E,$ $$P_{\wedge^p E}(u) = \displaystyle\frac{1}{p!} \sum\limits_{\sigma \in \mathfrak{S}_p}\epsilon_{\sigma}S_\sigma (u).$$ \end{theorem} \begin{proposition}\label{we} Let $E$ be a Hilbert space. The inner product in $\wedge^p_H E$ is given by : $$\langle x_1 \wedge \dots \wedge x_p, y_1 \wedge \dots \wedge y_p \rangle_{\wedge^p_H E} = \det \begin{pmatrix} \langle x_1, y_1 \rangle_E &\dots & \langle x_1, y_p \rangle_E\\ \vdots &\ddots & \vdots \\ \langle x_p, y_1 \rangle_E & \dots &\langle x_p, y_p \rangle_E \end{pmatrix},$$ for all $x_1, \dots, x_p, y_1, \dots, y_p \in E$. 
\end{proposition} \begin{proof} By Theorem \ref{a.6}, we have $$ \begin{array}{cllllllllllllllllll} &\langle x_1 \wedge \dots \wedge x_p, y_1 \wedge \dots \wedge y_p \rangle_{\wedge^p E} =\\ [5ex] &=\left\langle \displaystyle\frac{1}{p!}\sum\limits_{\sigma \in \mathfrak{S}_p}\epsilon_{\sigma}S_\sigma (x_1 \otimes x_2 \otimes \dots \otimes x_p) , \displaystyle\frac{1}{p!}\sum\limits_{\tau \in \mathfrak{S}_p}\epsilon_{\tau}S_\tau (y_1 \otimes y_2 \otimes \dots \otimes y_p) \right\rangle_{\otimes_H^p E} \vspace{2ex}\\ &=\;\displaystyle \frac{1}{p!} \sum\limits_{\sigma' \in \mathfrak{S}_p}\epsilon_{\sigma'} \langle x_1 \otimes x_2 \otimes \dots \otimes x_p,S_{\sigma'} (y_1 \otimes y_2 \otimes \dots \otimes y_p ) \rangle_ {\otimes_H^p E}\vspace{2ex} \\ &= \;\displaystyle \sum\limits_{\sigma'\in \mathfrak{S}_p}\epsilon_{\sigma'} \prod\limits_{i=1}^p \langle x_{i}, y_{\sigma'(i)}\rangle_E \vspace{2ex} \\ &= \det \begin{pmatrix} \langle x_1 , y_1 \rangle_E & \cdots & \langle x_1 , y_p\rangle_E\\ \vdots &\ddots & \vdots\\ \langle x_p , y_1 \rangle_E & \cdots & \langle x_p , y_p \rangle_E \end{pmatrix}, \;\; \text{by Leibniz' formula}. \end{array}$$ \end{proof} See \cite[Proposition 2.14]{YCL20-3} for slightly more detail. \begin{corollary}\label{lin-depend} Let $E$ be a Hilbert space and let $x_1,\dots,x_p\in E$. Then $x_1\we \cdots \we x_p =0$ if and only if $x_1,\dots,x_p$ are linearly dependent. 
\end{corollary} \begin{proof} Note that $x_1\we \cdots \we x_p =0$ if and only if $\| x_1\we \cdots \we x_p\|_{\we^{p}E}^2=0, $ which, by Proposition \ref{we}, holds if and only if $$\det [\langle x_i, x_j \rangle]_{i,j=1}^p=0 .$$ Thus $x_1\we \cdots \we x_p =0$ if and only if there exist complex numbers $\lambda_1,\dots,\lambda_p,$ which are not all zero, such that $$\begin{pmatrix} \langle x_1, x_1 \rangle_E &\dots & \langle x_1, x_p \rangle_E\\ \vdots &\ddots & \vdots \\ \langle x_p, x_1 \rangle_E & \dots &\langle x_p, x_p \rangle_E \end{pmatrix} \begin{pmatrix} \bar{\lambda}_1 \\ \vdots \\ \bar{\lambda}_p \end{pmatrix}=0 .$$ This holds if and only if there exist complex numbers $\lambda_1,\dots,\lambda_p,$ which are not all zero, such that $$ \langle x_i, \sum_{j=1}^p \lambda_j x_j \rangle_E=0 \quad \text{for}\; i=1,\dots,p .$$ The latter statement is equivalent to the assertion that there exist complex numbers $\lambda_1,\dots,\lambda_p,$ which are not all zero, such that $$ \langle \sum_{i=1}^p \lambda_i x_i, \sum_{j=1}^p \lambda_j x_j \rangle_E=0 ,$$ which in turn is equivalent to the condition that there exist complex numbers $\lambda_1,\dots,\lambda_p,$ not all zero, such that $$ \sum_{j=1}^p \lambda_j x_j =0.$$ The latter statement is equivalent to the linear dependence of $x_1,\dots,x_p$ as required. \end{proof} \begin{comment} \begin{corollary}\label{a.11} Let $E$ be a Hilbert space.
Suppose $x,y \in E ,$ and $x$,$y$ are orthogonal in $E$, that is, $\langle x, y\rangle_E=0.$ Then $$\|x \we y\|_{\we^2E} = \|x\|_E \|y\|_E.$$ \end{corollary} \begin{proof} By Proposition \ref{we}, $$\|x\we y\|_{\we^2 E}^2 = \langle x \we y, x\we y\rangle_{\we^2E} = \det\begin{pmatrix} \langle x , x \rangle_E & \langle x, y \rangle_E\\ \langle y,x \rangle_E & \langle y,y\rangle_E \end{pmatrix}.$$ \noindent If $x$ is orthogonal to $y$ in $E,$ the off-diagonal entries are zero and thus $$\|x\we y\|_{\we^2 E}^2 = \| x\|_E^2 \|y\|_E^2.$$ \end{proof} \end{comment} \begin{lemma}\label{weon} Suppose $\{u_1, \cdots, u_n\}$ is an orthonormal set in $E.$ Then, for $j=1, \dots,n-1$ and for every $x \in E,$ $$\| u_1 \we \cdots \we u_j \we x\|_{\we^{j+1}E} =\| x - \displaystyle\sum\limits_{i=1}^j \langle x, u_i \rangle u_i\|_{E}. $$ \end{lemma} See \cite[Lemma 2.15]{YCL20-3}. \begin{definition} Let $(E, \| \cdot\|_E) $ be a Hilbert space. The \emph{$p$-fold Cartesian product of $E$} is defined to be the set $$ \underbrace{E\times \dots \times E}_{p-times} =\{ (x_1,\dots, x_p): x_i \in E \}.$$ Moreover, we define a norm on $\underbrace{E\times \dots \times E}_{p-times}$ by $$\|(x_1,\dots, x_p) \|=\{\sum_{i=1}^p \|x_i\|_E^2\}^\half. $$ \end{definition} \begin{definition} Let $E$ be a Hilbert space. We define the multilinear operator $$\Lambda \colon \underbrace{E\times \dots \times E}_{p-times} \to \we^p E$$ by $$\Lambda(x_1,\dots,x_p)= x_1 \we\dots \we x_p \quad \text{for all}\quad x_1,\dots,x_p \in E. $$ \end{definition} \begin{proposition}\label{Hadam}{\rm [Hadamard's inequality, \cite{Sing}, p. 477]} For any matrix $$A=(a_{ij})\in \Cc^{n\times n},$$ $$|\det(A)| \leq \prod\limits_{j=1}^{n}\left( \sum\limits_{i=1}^n |a_{ij} |^2 \right)^{1/2} \quad \text{and} \quad |\det(A)| \leq \prod\limits_{i=1}^{n}\left( \sum\limits_{j=1}^n |a_{ij} |^2 \right)^{1/2}. $$ \end{proposition} \begin{proposition}\label{weopiscontinuous1} Let $E$ be a Hilbert space. 
Then the multilinear mapping $$\Lambda \colon \underbrace{E\times \dots \times E}_{p-times} \to \we^p E$$ is bounded and \begin{equation}\label{Had-C-S-inq} \| \Lambda(x_1,\dots,x_p )\|_{\we^p E}^2 \leq \prod\limits_{j=1}^p \left( \|x_j\|_E \left( \sum\limits_{i=1}^p \|x_i\|_E^2 \right)^{1/2} \right). \end{equation} \end{proposition} See \cite[Proposition 2.19]{YCL20-3} for more detail. \subsection{Pointwise wedge products and pointwise creation operators}\label{point_w_p} For the purposes of this paper we need to consider the wedge product of mappings defined on the unit circle or in the unit disk that take values in Hilbert spaces. To this end we introduce a notion of pointwise wedge product and we study its properties. \begin{definition}\label{a.17} \index{ pointwise wedge product on $\Tt$} Let $E$ be a Hilbert space and let $f,g\colon \Dd \to E$ $\mathrm{(} f,g\colon \Tt \to E \mathrm{)}$ be $E$-valued maps. We define the \emph{pointwise wedge product of $f$ and $g,$} $$f\telwe g \colon \Dd \to \we^2E \quad \mathrm{(} f\telwe g \colon \Tt \to \we^2E\mathrm{)}$$ by $$(f\telwe g) (z) = f(z) \we g(z) \quad \text{for all}\; z \in \Dd \quad \mathrm{(} \text{for almost all}\; z \in \Tt\mathrm{)}.$$ \end{definition}\vspace{3ex} \begin{definition}\label{pointwiseld} Let $E$ be a Hilbert space and let $\chi_1,\dots,\chi_n \colon \mathbb{D} \to E$ \newline $\mathrm{(}\chi_1,\dots,\chi_n \colon \mathbb{T} \to E \mathrm{)}$ be $E$-valued maps. We call $\chi_1,\dots, \chi_n$ \emph{pointwise linearly dependent} on $\Dd$ \emph{(}or on $\Tt$\emph{)} if for all $z\in\Dd$ \rm{(}for almost all $z\in\Tt$ respectively\rm{)} the vectors $\chi_1(z), \dots,\chi_n(z)$ are linearly dependent in $E$.
\index{pointwise linearly dependent on $\Tt$} \end{definition} \begin{remark} If $x_1, \dots, x_n$ are pointwise linearly dependent on $\Tt$, then $$(x_1\telwe \dots \telwe x_n)(z)=0$$ for almost all $z \in \Tt.$ \end{remark} \begin{comment} \begin{proposition}\label{a.18} Let $E$ be a separable Hilbert space and let $\displaystyle\frac{1}{p}+\frac{1}{q}=1,$ where \newline $1 \leq p,q \leq \infty$. Suppose that $x\in L^p(\mathbb{T},E),$ $y \in L^q(\mathbb{T},E).$ Then $$ x \dot{\we} y \in L^1(\mathbb{T},\we^2 E). $$ and \be\label{Hwedge} \|x \dot{\we} y\|_{ L^1(\mathbb{T},\we^2 E)} \leq \|x\|_{L^p(\T,E)} \|y\|_{L^q(\T,E)}. \ee \end{proposition}\vspace{3ex} \begin{proof} By Proposition \ref{we}, for all $z \in \Tt,$ $$\begin{array}{cllllllll} \|(x \dot{\wedge} y)(z)\|_{\we^2 E}^2 &= \langle x(z) \wedge y(z), x(z) \wedge y(z) \rangle_{\we^2 E}\\ &= \langle x(z) ,x(z) \rangle_{E} \cdot \langle y(z),y(z)\rangle_{E} - |\langle x(z) ,y(z) \rangle_{E}|^2 \\ &\leq\|x(z)\|_{E}^2 \|y(z)\|_{E}^2. \end{array}$$ Thus, for all $z \in \Tt,$ \begin{equation}\label{wenorm} \|(x\dot{\wedge} y)(z)\|_{\we^2 E} \leq \|x(z)\|_{E} \|y(z)\|_{E}. \end{equation} By Definition \ref{a.12}, \begin{equation}\label{eqwe} \| x\telwe y \|_{L^1(\Tt, \we^2 E)} = \displaystyle\frac{1}{2\pi} \int\limits_0^{2\pi} \|(x\dot{\wedge} y)(e^{i\theta})\|_{\we^2 E} \;d\theta \leq \frac{1}{2\pi} \int\limits_0^{2\pi}\|x(e^{i\theta})\|_E \|y(e^{i\theta})\|_E \; d\theta.\end{equation} \noindent Now by H\"{o}lder's inequality, \begin{equation}\label{hl} \frac{1}{2\pi}\int\limits_0^{2\pi} \|x(e^{i\theta})\|_E \|y(e^{i\theta})\|_E \;d\theta \leq \left(\frac{1}{2\pi}\int\limits_0^{2\pi} \|x(e^{i\theta})\|_E^p \; d\theta \right)^{1/p} \left(\frac{1}{2\pi}\int\limits_0^{2\pi} \|y(e^{i\theta})\|_E^q \; d\theta \right)^{1/q}. \end{equation} \noindent By inequalities (\ref{eqwe}) and (\ref{hl}), $x\dot{\wedge} y \in L^1(\mathbb{T},\we^2E)$ and the inequality \eqref{Hwedge} holds. 
\end{proof} \end{comment} \begin{proposition}\label{wejanalytic} Let $E$ be a Hilbert space and $x_1, x_2, \dots, x_n \colon \Dd \to E$ be analytic $E$-valued maps on $\Dd.$ Then, $$x_1 \telwe x_2\telwe \dots \telwe x_n\colon \Dd \to \we^n E $$ is also analytic on $\Dd$ and $$(x_1 \telwe x_2 \telwe \dots \telwe x_n)'(z) = x_1'(z)\we x_2(z) \we \dots \we x_n(z) + \dots + x_1(z) \we x_2(z)\we \dots \we x_n'(z)$$ for all $z\in \Dd.$ \end{proposition} The proof is straightforward. It follows from Proposition \ref{we}, the continuity of $\Lambda$ (Proposition \ref{weopiscontinuous1}) and Hadamard's inequality \eqref{Had-C-S-inq}. \begin{definition} Let $E$ be a separable Hilbert space. If $x\in L^2(\Tt,E)$ and $y \in L^{\infty}(\Tt, E)$, then the scalar-valued function $y^*x \in L^2(\Tt,\Cc)$ is given by $(y^*x)(z)=\langle x(z),y(z) \rangle_E$ almost everywhere on $\mathbb{T}$. \end{definition} \begin{proposition}\label{xweyh2} Let $E$ be a separable Hilbert space, let $x\in H^2(\Dd,E)$ and let $y \in H^{\infty}(\Dd, E).$ Then $$x \dot{\wedge} y \in H^2 (\Dd, \wedge^2 E).$$ \end{proposition} See \cite[Proposition 3.8]{YCL20-3}. \begin{definition} Let $E$ be a Hilbert space. We say that a family $\{ f_\lambda \}_{ \lambda \in \Lambda}$ of maps from $\Tt$ to $E$ is {\em pointwise orthonormal} on $\Tt$ if, for all $z$ in a set of full measure in $\Tt$, the family of vectors $\{ f_\lambda(z) \}_{ \lambda \in \Lambda}$ is orthonormal in $E$. \end{definition} \begin{proposition}\label{wel2conv} Let $E$ be a separable Hilbert space, let $\xi_0,\xi_1, \cdots, \xi_j \in L^\infty(\Tt,E)$ be a pointwise orthonormal set on $\Tt$, and let $x\in L^2(\Tt,E)$. Then $$ \xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} x \in L^2 (\Tt, \wedge^{j+2} E),$$ and \[ \| \xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} x \|_{L^2 (\Tt, \wedge^{j+2} E)} \le \| x \|_{L^2(\Tt,E)}. \] \end{proposition} This follows from Lemma \ref{weon}.
See \cite[Proposition 3.11]{YCL20-3} for a proof of this proposition. \begin{definition}\label{pwcre} Let $E$ be a separable Hilbert space. Let $\xi \in H^\infty(\Dd, E)$. We define the \emph{pointwise creation operator} $$C_\xi :H^2(\Dd,E) \to H^2(\Dd, \we^2E)$$ by $$ C_\xi f = \xi \telwe f\; \text{for} \; f \in H^2(\Dd,E).$$ \end{definition}\index{pointwise creation operator} \index{$C_\xi$} \begin{remark}\label{genfatouwe} Let $E$ be a separable Hilbert space. Let $\xi \in H^\infty(\Dd, E)$ and let $f \in H^2(\Dd, E)$. By the generalized Fatou's Theorem \cite[Chapter V]{NagyFoias}, the radial limits $$\lim_{r\to 1}\xi(r\eiu)\underset{\|\cdot\|_E}{=} \tilde{\xi}(\eiu), \quad \lim_{r\to 1} f(r\eiu)\underset{\|\cdot\|_E}{=} \tilde{f}(\eiu) \;\; (0<r<1) $$ exist almost everywhere on $\Tt$ and define functions $\tilde{\xi} \in L^\infty(\Tt,E)$ and $\tilde{f}\in L^2(\Tt,E)$ respectively, which satisfy the relations $$ \lim_{r \to 1} \| \xi(r\eiu) - \tilde{\xi}(\eiu) \|_E=0,\quad \lim_{r \to 1} \| f(r\eiu) - \tilde{f}(\eiu) \|_{E}=0 \;\; (0<r<1) $$ for almost all $ \eiu \in \Tt$. \end{remark}\index{$\tilde{f}$} \begin{lemma}\label{xitelweh21} Let $E$ be a separable Hilbert space. Let $\xi\in H^\infty(\Dd, E)$ and let $f \in H^2(\Dd, E).$ Then the radial limits $\lim_{r \to 1} (\xi(r\eiu) \we f(r\eiu))$ exist for almost all $\eiu \in \Tt$ and define functions in $L^2(\Tt,\we^2 E).$ \end{lemma} One can find a proof of this statement in \cite[Lemma 4.3]{YCL20-3}. \begin{remark}\label{H2subsetL2} Let $E$ be a separable Hilbert space. 
By \cite[Chapter 5, Section 1]{NagyFoias}, for any separable Hilbert space $E$, the map $f \mapsto \tilde{f}$ is an isometric embedding of $H^2(\Dd,E)$ in $L^2(\Tt,E)$, where $ \tilde{f}(\eiu)= \lim_{r \to 1} f(r\eiu).$ Since $H^2(\Dd,E)$ is complete and the embedding is isometric, the image of the embedding is complete, and therefore closed in $L^2(\Tt, E).$ Therefore the space $H^2(\Dd,E)$ is identified isometrically with a closed linear subspace of $L^2(\Tt, E).$ In what follows we shall use the same notation for $f$ and $\tilde{f}.$ \end{remark} \begin{definition}\label{pls} Let $E$ be a separable Hilbert space. Let $F$ be a subspace of $L^2(\Tt, E)$ and let $X$ be a subset of $L^2(\Tt,E).$ We define the \emph{pointwise orthogonal complement} of $X$ in $F$ to be the set $$\Poc(X,F) = \{ f \in F: f(z)\perp \{x(z):x \in X\}\; \text{for almost all}\;z \in \Tt\}.$$ \index{pointwise orthogonal complement} \index{$\Poc(X,F)$} \end{definition} \begin{proposition}\label{vclosed} Let $E$ be a separable Hilbert space. Let $\eta \in L^2(\Tt,E)$. Then \begin{enumerate} \item[(i)] The space $V= \{ f \in H^2 (\Dd,E): \langle f(z) , \eta(z) \rangle_{E} =0 \; \text{for almost all}\; z \in \Tt \}$ is a closed subspace of $H^2(\Dd, E).$ \item[(ii)] The space $V=\{ f \in L^2(\Tt,E): \langle f(z), \eta(z) \rangle_{E} =0 \;\text{for almost all}\; z\in \Tt \} $ is a closed subspace of $L^2(\Tt,E).$ \end{enumerate} \end{proposition} One can find a proof of this statement in \cite[Proposition 4.6]{YCL20-3}. \section{Introduction}\label{intro} In this paper we put forward a new algorithm for the computation of the superoptimal analytic approximation of a continuous matrix-valued function on the circle, a notion that arises naturally in the context of the classical ``Nehari problem", and also in the ``robust stabilization problem" in control engineering.
To explain the term ``superoptimal", let us start from the elementary observation that a measure of the ``size" of a compact operator $T$ between Hilbert spaces is provided by the operator norm $\|T\|$ of $T$. However, a single number can only ever provide a coarse measure of the size of a multi-dimensional object, and there is a well-developed classical theory \cite{GK} of {\em `s-numbers'} or {\em `singular values'} of an operator or matrix, which provides much more refined information about an operator than the operator norm. Consider Hilbert spaces $\h,\k$ and an operator $T:\h\to\k$, and let $j\geq 0$. The quantity $s_j(T)$ is defined to be the distance, with respect to the operator norm, of $T$ from the set of operators of rank at most $j$: \index{$s_j(T)$} \[ s_j(T) \df \inf\{\|T-R\|: R\in\mathcal{L}(\h,\k), \rank R \leq j\}. \] Here, for Hilbert spaces $H,K$, we denote by $\mathcal{L}(H,K)$ the Banach space of bounded linear operators from $H$ to $K$ with the operator norm. We denote by $\mathcal{K}(H,K)$ the Banach space of compact linear operators from $H$ to $K$ with the operator norm. In the setting of matrices $T$ (that is, in the case that $\h$ and $\k$ are finite-dimensional), $s_j(T)$ is often called the {\em $j$th singular value of $T$}. \index{singular values!of an operator} In this setting one can show that the singular values of $T$ are precisely the eigenvalues of $\sqrt{T^*T}$. The largest singular value of $T$ is the spectral radius of $\sqrt{T^*T}$, that is, $\|T\|$, and so clearly the set of all singular values of $T$ contains much more information than the norm $\|T\|$ alone. The use of $s$-numbers immediately gives rise to a measure of the error in an approximation of an operator- or matrix-valued function.
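The distance characterization of $s_j(T)$ can be illustrated numerically in the finite-dimensional case. The following sketch is our own illustration, not part of the construction in this paper; it uses the Schmidt--Mirsky (Eckart--Young) theorem, by which truncating the singular value decomposition after $j$ terms yields a nearest matrix of rank at most $j$, so that $s_j(T)$ is the $(j+1)$-th singular value returned by a standard SVD routine.

```python
import numpy as np

# Illustration only: for a finite matrix T, the distance s_j(T) from T to
# the set of matrices of rank at most j equals the (j+1)-th singular value.
T = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])

U, s, Vh = np.linalg.svd(T)

# s_0(T) coincides with the operator norm ||T||.
assert np.isclose(s[0], np.linalg.norm(T, 2))

# A nearest matrix of rank at most 1: keep only the largest singular value.
R1 = s[0] * np.outer(U[:, 0], Vh[0, :])
# The error ||T - R1|| is the second singular value, that is, s_1(T) = 2.
assert np.isclose(np.linalg.norm(T - R1, 2), s[1])
```

Here the singular values of $T$ are $3,2,1$, so $s_0(T)=3$ and $s_1(T)=2$.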
Consider, for example, an $m \times n$-matrix-valued function $G$ on the unit circle $\T$, and suppose we wish to approximate $G$ by a matrix-valued function $Q$ of a specified form (such as a rational function of a prescribed McMillan degree). It is natural to regard the difference $G-Q$ as the ``error" in the approximation, and to regard the quantities \[ s_j^\infty (G-Q) \df \esssup_{z\in\Tt} s_j(G(z)-Q(z)) \] for $j\geq 0$ as measures of how good an approximation $Q$ is to $G$. We set \index{ $s^\infty(F)$} \[ s^\infty (G-Q) \df (s_0^\infty (G-Q), s_1^\infty(G-Q), \dots, s_j^\infty(G-Q), \dots), \] and say that $\cala G$ is a {\em superoptimal} approximation of $G$ in a given class $\calf$ of functions if, as $Q$ ranges over $\calf$, the sequence $s^\infty(G-Q)$ attains its minimum with respect to the lexicographic ordering on sequences of non-negative real numbers at $Q =\cala G$. The notion of superoptimality pertains to matricial or operator-valued functions, and is therefore particularly relevant to control engineering and electrical networks more generally, since in these fields one must analyse engineering constructs whose mathematical representations are typically matrix-valued functions on the circle or the real line. In particular, a primary application is to the problem of designing automatic controllers for linear time-invariant plants with multiple inputs and outputs. Such design problems are often formulated in the frequency domain, that is, in terms of the Laplace or $z$-transform of signals. By this means the problem becomes to construct an analytic matrix-valued function in a disc or half-plane, subject to various constraints. An important requirement is usually to minimize, or at least to bound, some cost- or penalty-function. In practical engineering problems a wide variety of constraints and cost functions arise, and the engineer must take account of many complications, such as the physical limitations of devices and the imprecision of models.
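The lexicographic comparison of error sequences used in the definition of superoptimality is easy to make concrete. In the sketch below, which is our illustration only, the three error sequences are hypothetical and not computed from any particular $G$; tuples in Python are compared lexicographically, which is exactly the ordering used above.

```python
# Hypothetical error sequences s^inf(G - Q) for three candidates Q.
q_a = (0.9, 0.7, 0.1)
q_b = (0.9, 0.5, 0.4)
q_c = (1.0, 0.0, 0.0)

# Python compares tuples lexicographically: s_0 first, then s_1, and so on.
assert q_b < q_a   # equal s_0, but q_b has the smaller s_1
assert q_b < q_c   # s_0 decides, whatever the later terms are

# The superoptimal candidate among the three is the lexicographic minimum.
best = min([q_a, q_b, q_c])
assert best == (0.9, 0.5, 0.4)
```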
Engineers have developed numerous ways to cope with these complications \cite{francis,DFT}. One of them, developed in the 1980s, is $H^\infty$ control theory \cite{robust}. It is a wide-ranging theory that makes pleasing contact with some problems and results of classical analysis; a seminal role is played by Nehari's theorem on the best approximation of a bounded function on the circle by an analytic function in the disc. Also important in the development of the theory was a series of deep papers by Adamyan, Arov and Krein \cite{aak1},\cite{aak2} which greatly extend Nehari's theorem and which apply to matrix-valued functions. In this context the notion of a superoptimal analytic approximation arose very naturally. Simple diagonal examples of a $2\times 2$-matrix-valued function $G$ on $\mathbb T$ show that the set of best analytic approximants to $G$ in the $L^\infty$ norm typically comprises an entire infinite-dimensional ball of functions, and so one is driven to ask for a stronger optimality criterion, and preferably one which will provide a unique optimum. The very term ``superoptimal" was coined by engineers even before the existence of superoptimal approximants had been proved in general. The paper \cite{superop} proved that the superoptimal approximant does indeed exist, and moreover is unique, as long as the approximand $G$ is the sum of a continuous function and an $H^\infty$ function on the circle. In engineering examples $G$ is usually rational and so continuous on the circle. Let us first provide some preliminary definitions and then formulate the problem. Throughout the paper, $\CCmn$ denotes the space of $m\times n$ complex matrices with the operator norm and $\Dd,\Tt$ denote the unit disc and the unit circle respectively. \index{$\Tt$}\index{$\Dd$} \index{$\CCmn$} \begin{definition}\label{1.5.2} Let $E$ be a Banach space.
\noindent \index{$H^{\infty}(\mathbb{D},\Cc^{m\times n} )$} $H^{\infty}(\mathbb{D}, E)$ denotes the space of bounded analytic $E$-valued functions on the unit disk with supremum norm: $$\|Q\|_{H^{\infty}} \stackrel{\emph{def}}{=} \|Q\|_{\infty} \stackrel{\emph{def}}{=} \sup\limits_{z \in \mathbb{D}}\|Q(z)\|_{E}.$$ \index{$L^{\infty}(\mathbb{T}, \Cc^{m\times n})$}\noindent$L^{\infty}(\mathbb{T}, E)$ is the space of essentially bounded weakly measurable $E$-valued functions on the unit circle with essential supremum norm $$\|f\|_{L^\infty}= \mathrm{ess} \sup\limits_{|z|=1}\|f(z)\|_{E},$$ and with functions equal almost everywhere identified. \noindent Also, $C(\Tt, E)$ denotes the space of continuous functions from $\Tt$ to $E.$ \end{definition} Naturally engineers need to be able to {\em compute} the superoptimal approximant of $G$. \begin{problem}[The superoptimal analytic approximation problem]\label{mainproblem} Given a function $G\in L^{\infty}(\mathbb{T}, \Cc^{m\times n}),$ find a function $Q \in H^{\infty}(\mathbb{D}, \mathbb{C}^{m\times n})$ such that the sequence $s^{\infty}(G-Q)$ is minimized with respect to the lexicographic ordering. \end{problem} In general, the superoptimal analytic approximant may not be unique. However, it has been proved that if the given function $G$ belongs to $H^\infty(\Dd,\Cc^{m\times n})+C(\Tt,\Cc^{m\times n}),$ then Problem \ref{mainproblem} has a unique solution. The following theorem of V.V. Peller and N.J. Young \cite{superop} makes this statement precise. \begin{theorem}[\cite{superop}, p. 303]\label{1.8} Let $G\in H^{\infty}(\Dd , \Cc^{m\times n})+C(\Tt, \Cc^{m\times n} ).$ Then the minimum with respect to the lexicographic ordering of $s^\infty (G-Q)$ over all $Q\in H^\infty (\Dd, \CCmn ) $ is attained at a unique function $\cala G.$ Moreover, the singular values $s_j ( G(z) - \cala G(z))$ are constant almost everywhere on $\Tt$ for $j\geq 0$.
\end{theorem}\vspace{3ex} The topic of this paper is not the existence and uniqueness of the function $\cala G$ described in Theorem \ref{1.8}, but rather the {\em construction} of $\cala G$. In the proof of the validity of our construction, we have no compunction in making free use of results proved in \cite{superop}, such as the existence of some special matrix functions. For example, to justify our algorithm we shall prove, using results of \cite{superop}, that certain operators that we introduce are unitarily equivalent to block Hankel operators, which fact enables us to make use of general properties of Schmidt vectors of Hankel operators, without the need to calculate the symbols of those Hankel operators. The existence proof in \cite{superop} can in principle be turned into an algorithm, but into a very computationally intensive one. The construction is recursive, and at each step of the recursion one must augment a column-matrix function to a unitary matrix-valued function on the circle with some special properties. Computationally this step requires a {\em spectral factorization} of a positive semi-definite matrix-valued function on the circle. There are indeed algorithms for this step, but they involve an iteration which may be slow to converge and badly conditioned, especially if some function values have eigenvalues on or close to the unit circle. It is certainly desirable to avoid the matricial spectral factorization step if it is possible to do so. Our aim in this project was to devise an algorithm in which the iterative procedures are as few and as well-conditioned as possible. Iteration cannot be completely avoided; even in the scalar case, the optimal error is the norm of a certain operator, and the best approximant is given by a simple formula involving the corresponding Schmidt vectors. Thus one has to perform a singular value decomposition.
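For orientation, the scalar case can be sketched concretely. The operator in question is the Hankel operator with symbol $G$ (Definition \ref{defHankel} below), whose matrix in the standard bases is built from the negative Fourier coefficients of the symbol. The following numerical sketch is our own illustration, not a step of the algorithm of this paper: it approximates the optimal error for the symbol $g(z)=1/z$ by the largest singular value of a finite section of the Hankel matrix; by Nehari's theorem the exact distance from $1/z$ to $H^\infty$ is $1$.

```python
import numpy as np

def hankel_section(neg_coeffs, n):
    """n x n section of the Hankel matrix [g_hat(-1-i-j)], where
    neg_coeffs[k] = g_hat(-1-k); coefficients beyond the list are zero."""
    c = list(neg_coeffs) + [0.0] * (2 * n)
    return np.array([[c[i + j] for j in range(n)] for i in range(n)])

# g(z) = 1/z has g_hat(-1) = 1 and all other Fourier coefficients zero.
H = hankel_section([1.0], 8)
t0 = np.linalg.svd(H, compute_uv=False)[0]

# The optimal error ||H_g|| is 1: the distance from 1/z to H^infinity.
assert np.isclose(t0, 1.0)
```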
In the case that the approximand $G$ is of type $m\times n$ one must expect to solve $\min(m,n)$ successive singular value problems. However, from the point of view of numerical linear algebra, singular value decomposition is regarded as a fast, accurate and well-behaved operation. In this paper we describe an algorithm that is, in a sense, parallel to the construction of \cite{Constr} and that, in addition to the spectral factorization of \emph{scalar} functions, requires only rational arithmetic and singular-value decompositions. Several engineers have developed alternative approaches \cite{statesapce},\cite{tsaigu} based on state-space methods. These too are computationally intensive. For practical purposes, before even looking for an algorithm for the construction of $\mathcal{A}G$, we need to know that the problem of superoptimal analytic approximation is well posed, in the sense that arbitrarily small perturbations of $G$ do not result in large fluctuations in $\mathcal{A}G$. This issue arises even for scalar $G$, and in fact it is known \cite{pellkhr} that, for general continuous functions $G$, $\mathcal{A}G$ does {\em not} depend continuously on $G$. However, Peller and Khruschev have shown in \cite{pellkhr} that, for $G$ in suitable subspaces $X$ of the continuous functions on $\T$, the best analytic approximation operator {\em is} continuous for $\|\cdot \|_X$, and so it makes sense to compute it. A similar assertion holds for matrix-valued functions $G$, as was shown by Peller and Young in \cite{PY-cont}. We believe that the present method, which makes use of exterior powers of Hilbert spaces and operators, provides a conceptual approach to the construction of superoptimal approximants which is a promising basis for computation. The theoretical justification of the algorithm we present in this paper is lengthy and elaborate. However, the implementation of the algorithm should be straightforward.
It will be very interesting to see whether it leads to an efficient numerical method in the future. For vector-valued $L^p$ spaces we use the terminology of \cite{NagyFoias}. \begin{definition}\label{a.12} Let $E$ be a separable Hilbert space and let $1\leq p < \infty.$ Define \begin{enumerate} \item[{\rm (i)}] $L^p (\Tt,E)$ to be the normed space of measurable (weakly or strongly, which amounts to the same thing, in view of the separability of $E$) $E$-valued maps $f \colon \Tt \to E$ such that $$\|f\|_p = \left(\displaystyle\frac{1}{2\pi}\int_{0}^{2\pi} \|f(e^{i\theta})\|_E^p d\theta\right)^{1/p} <\infty ;$$ \item[{\rm (ii)}] $H^p (\mathbb{D},E)$ to be the normed space of analytic $E$-valued maps $f\colon\mathbb{D} \to E $ such that $$ \|f\|_p = \sup\limits_{0<r<1} \left(\frac{1}{2\pi}\int_{0}^{2\pi} \|f(re^{i\theta})\|_E^p d\theta\right)^{1/p} < \infty,$$ the left hand side of this inequality defining a norm on $H^p(\D,E)$. \end{enumerate} \end{definition} Our algorithm provides a solution $\mathcal{A}G$ to Problem \ref{mainproblem}. By computing the value of each $t_k$ at every step, we obtain each term $s_k^\infty(G-\mathcal{A}G)$ of the sequence $s^\infty(G-\mathcal{A}G).$ First we need the notion of a {\em Hankel operator} and the definitions of some long-established standard function spaces; for a more detailed account of these spaces see \cite[Chapter V]{NagyFoias}. If $E$ is a separable Hilbert space, then every function $f\in H^2(\D,E)$ has a radial limit at almost every point of $\T$, by a theorem of Fatou \cite[Chapter V]{NagyFoias}, and the map that takes a function $f\in H^2(\D,E)$ to its radial limit function embeds $H^2(\D,E)$ isometrically in $L^2(\T,E)$. In this paper we shall only envisage the case that $E$ is separable, and so we can always regard $H^2(\D,E)$ as a closed subspace of $L^2(\T,E)$. 
The operators $P_+, P_-$ on $L^2(\T,E)$ are the operators of orthogonal projection onto the closed subspaces $H^2(\Dd, E)$ and ${H^2(\Dd, E)}^\perp \df L^2(\T,E) \ominus H^2(\Dd,E)$. \index{$P_{-}$}\index{$P_{+}$}\index{$H^{2}(\Dd, E)^{\perp}$} \begin{definition}\label{defHankel} Let $E$ be a separable Hilbert space, and let $\ph$ be an essentially bounded measurable $\call(E)$-valued function on $\T$; then the {\em Hankel operator} $H_\ph$ is the operator from $H^2(\Dd, E)$ to ${H^2(\Dd, E)}^\perp$ given by \index{Hankel operator} \[ H_\ph x = P_-(\ph x) \quad \mbox{ for } x\in H^2(\Dd, E), \; \text{ where} \; (\ph x)(z)= \ph(z)x(z) \mbox{ for } \; z \in \T. \] \end{definition} \begin{remark}\index{unitary operator} In this paper we call an operator $U\colon H \to K$ between Hilbert spaces $H,K$ a \emph{unitary operator} if $U$ is both isometric and surjective. Some authors restrict the name ``unitary operator" to the case that $H=K.$ Such authors would use terminology such as ``isometric isomorphism" for our ``unitary operator" in the case that $H\neq K.$ \end{remark} For any vector $x$ in a Hilbert space $E$, we denote by $x^*$ the linear functional $ \langle \cdot, x \rangle_E $ on $E$. For an $E$-valued function $x$ on a set $S \subset \C$, we define the $E^*$-valued function $x^*$ on $S$ by $x^*(z)= x(z)^*$ for all $z \in S$. We observe that if $ x \in L^p(\T,E)$, where $1 \le p \le\infty$, then $ x^* \in L^p(\T,E^*)$ and $\|x^*\|_p =\|x\|_p$. If $ x, y\in E$, then $x y^*$ denotes the operator of rank one on $E$ defined by $x y^*(u)= \langle u, y\rangle_E x $ for all $u \in E$. This operator is sometimes denoted by $x \otimes y$ (see, for example, \cite[equation (1.17)]{AMY}). If $ x, y$ are $E$-valued functions on a set $S \subset \C$, then $x y^*$ is the function from $S$ to ${\mathcal L}(E)$ given by $x y^*(z)= x(z) y(z)^*$ for all $z \in S$. \begin{definition}[\cite{Young}, p. 206] Let $H,K$ be Hilbert spaces and let $T\colon H \to K$ be a compact operator.
Suppose that $s$ is a singular value of $T.$ A \emph{Schmidt pair} for $T$ corresponding to $s$ is a pair $(x,y)$ of non-zero vectors, with $x\in H, \; y \in K,$ such that $$Tx=sy , \; T^*y=sx.$$\index{Schmidt pair} \end{definition} The following lemma is elementary. \begin{lemma}\label{schmmax} Let $T\in \mathcal{L}(H,K)$ be a compact operator and let $x\in H$, $y\in K$ be such that $(x,y)$ is a Schmidt pair for $T$ corresponding to $s=\|T\|.$ Then $x$ is a maximizing vector for $T,$ $y$ is a maximizing vector for $T^*,$ and $\|x\|_H = \|y\|_K.$ \end{lemma} \begin{definition}[\cite{NagyFoias}, p. 190] {\rm (i)}. The matrix-valued bounded analytic function $\Theta \in H^\infty(\Dd, \CCmn)$ is called \emph{inner} if $\;\Theta(e^{it})$ is an isometry from $\Cc^n$ to $\Cc^m$ for almost every $e^{it}$ on $\Tt$. {\rm (ii)}. An analytic $(m\times n)$-matrix-valued function $ \Phi$ on $\Dd$ is said to be \emph{outer} if $$\Phi H^2(\Dd, \Cc^n)=\{ \Phi f : f \in H^2(\Dd,\Cc^n) \}$$ is a norm-dense subspace of $H^2(\Dd,\Cc^m)$, and \emph{co-outer} if $$\Phi^TH^2(\Dd,\Cc^m)=\{\Phi^T g: g \in H^2(\Dd,\Cc^m)\}$$ is dense in $H^2(\Dd,\Cc^n)$. \index{matrix-valued function!inner}\index{matrix-valued function! outer}\index{matrix-valued function!co-outer}\index{$\Theta^T$} \end{definition} The following is a brief summary of our algorithm. A full account of all the steps, with definitions and justifications will be given in Section \ref{statet_algorithm}. Our method makes use of exterior powers $\we^pE$ of a finite-dimensional Hilbert space $E$ and of `pointwise wedge products' of $E$-valued functions $f,g$ on $\Dd$ or $\Tt$, defined by \[ (f\telwe g)(z) = f(z) \we g(z) \quad \mbox{ for all } z\in\Dd \mbox{ or for all } z\in\Tt. \] These notions are explained more fully in Subsection \ref{point_w_p}. 
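For a finite matrix the Schmidt pairs just defined can be read off from a singular value decomposition: if $T = U\Sigma V^*$, then each pair of corresponding columns of $V$ and $U$ forms a Schmidt pair. The short sketch below is our own illustration, with a fixed pseudo-random matrix chosen purely for demonstration; it verifies the two defining equations and the maximizing property of Lemma \ref{schmmax}.

```python
import numpy as np

# Illustration: Schmidt pairs of a finite matrix from its SVD.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 3))

U, s, Vh = np.linalg.svd(T, full_matrices=False)
s0 = s[0]
x = Vh[0, :].conj()   # right singular vector for s0
y = U[:, 0]           # left singular vector for s0

# (x, y) is a Schmidt pair for T corresponding to s0 = ||T||:
assert np.allclose(T @ x, s0 * y)            # T x  = s0 y
assert np.allclose(T.conj().T @ y, s0 * x)   # T* y = s0 x

# x is a maximizing vector: ||T x|| = ||T|| ||x||, and ||x|| = ||y||.
assert np.isclose(np.linalg.norm(T @ x), s0 * np.linalg.norm(x))
assert np.isclose(np.linalg.norm(x), np.linalg.norm(y))
```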
\index{Algorithm} \begin{proof}[\emph{\textbf{Algorithm:}}]\let\qed\relax For a given $G \in H^\infty ( \Dd, \CCmn) + C(\Tt, \CCmn),$ the superoptimal analytic approximant $\mathcal{A}G \in H^\infty(\Dd,\CCmn)$ can be constructed as follows. i) \textbf{Step 0.} Let $T_0= H_G$ be the Hankel operator with symbol $G$. Let $ t_0 = \| H_G\| .$ If $t_0 = 0,$ then $H_G=0,$ which implies $G\in H^\infty ( \Dd, \CCmn).$ In this case the algorithm terminates; we define $r$ to be zero, and the superoptimal approximant $\mathcal{A}G$ is given by $\mathcal{A}G = G.$ \noindent Suppose now that $t_0\neq 0.$ The Hankel operator $H_G$ is a compact operator and so there exists a Schmidt pair $(x_0 , y_0)$ corresponding to the singular value $t_0= \|H_G\|$ of $H_G.$ By the definition of a Schmidt pair $(x_0,y_0)$, $$x_0 \in H^2(\Dd, \Cc^n ),\quad y_0 \in H^2 (\Dd, \Cc^m)^\perp $$ are non-zero vector-valued functions such that $$H_Gx_0 = t_0 y_0 , \quad H_G^* y_0 =t_0 x_0. $$ The functions $x_0 \in H^2(\Dd,\Cc^n)$ and $\bar{z}\bar{y}_0 \in H^2(\Dd, \Cc^m)$ admit the inner-outer factorizations \begin{equation}\label{xi0eta01} x_0 = \xi_0 h_0 , \quad \bar{z}\bar{y}_0 = \eta_0 h_0 \end{equation} for some scalar outer factor $h_0 \in H^2(\Dd, \Cc)$ and column matrix inner functions $\xi_0\in H^\infty(\Dd, \Cc^n)$, $ \eta_0\in H^\infty(\Dd, \Cc^m). $ Then, \begin{equation}\label{equl1} \|x_0 (z)\|_{\Cc^n} = |h_0(z)| = \|y_0 (z)\|_{\Cc^m} \;\; \text{almost everywhere on }\; \Tt.\end{equation} \noindent We write equations (\ref{xi0eta01}) as \begin{equation}\label{3112} \xi_0 = \frac{x_0}{h_0}, \quad \eta_0 = \frac{\bar{z}\bar{y}_0}{h_0}. \end{equation} \noindent Then \begin{equation}\label{xi0eta0=11} \| \xi_0 (z) \|_{\Cc^n} =1= \| \eta_0(z) \|_{\Cc^m}\; \text{almost everywhere on}\; \Tt.
\end{equation} There exists a function $Q_1 \in H^\infty(\Dd, \CCmn)$ which is at minimal distance from $G$; any such function satisfies \begin{equation}\label{G-Q01} (G-Q_1)x_0 = t_0 y_0,\quad y_0^* (G-Q_1) = t_0 x_0^*. \end{equation} Choose any function $Q_1 \in H^\infty(\Dd, \CCmn)$ which satisfies the equations \eqref{G-Q01}. ii) \textbf{Step 1.} Let \begin{equation}\label{X_01} X_1 = \xi_0 \dot{\we} H^2(\Dd,\Cc^n ) \subset H^2(\Dd, \we^2 \Cc^n ), \vspace{2ex}\end{equation} and let \begin{equation}\label{Y_01} Y_1 = \bar{\eta}_0 \dot{\we} H^2(\Dd, \Cc^m)^\perp \subset H^2 (\Dd, \we^2 \Cc^m)^\perp.\end{equation} $X_1$ is a closed linear subspace of $H^2(\Dd,\we^{2}\Cc^n)$. $Y_1$ is a closed linear subspace of $H^2 (\Dd, \we^2 \Cc^m)^\perp. $ \noindent Define the operator $$T_1 : X_1 \to Y_1$$ by \begin{equation}\label{T_01} T_1 ( \xi _0 \dot{\we} x ) = P_{Y_1} (\bar{\eta}_0\dot{\we} (G-Q_1)x) \; \text{for all} \; x\in H^2(\Dd, \Cc^n ),\end{equation} where $P_{Y_1}$ is the projection from $L^2(\Tt, \we^2 \Cc^m)$ on $Y_1.$ We show that $T_1$ is well-defined. 
If $T_1=0,$ then the algorithm terminates; we define $r$ to be $1$ and the superoptimal approximant $\mathcal{A}G$ is given by the formula $$G- \mathcal{A}G = \displaystyle\sum\limits_{i=0}^{r-1}\frac{t_i y_i x_i^*}{|h_i|^2}= \frac{t_0 y_0 x_0^*}{|h_0|^2}, $$ and the solution is $$\mathcal{A}G =G - \frac{t_0 y_0 x_0^*}{|h_0|^2} .$$ If $T_1\neq 0,$ let $t_{1} = \|T_1\| >0.$ $T_1$ is a compact operator and so there exist $v_1 \in H^2(\Dd, \Cc^n),\; w_1 \in H^2(\Dd, \Cc^m)^\perp$ such that $(\xi_0 \telwe v_1 , \bar{\eta}_0 \telwe w_1) $ is a Schmidt pair for $T_1$ corresponding to $t_1.$ Let $h_1$ be the scalar outer factor of $ \xi_0 \telwe v_1$ and let \begin{equation}\label{x1y1eq1} x_{1} = ( I_{\Cc^n} - \xi_0 \xi_0^*)v_1, \;\; y_1= (I_{\Cc^m} - \bar{\eta}_0 \eta_0^T)w_1 ,\end{equation} where $I_{\Cc^n}$ and $I_{\Cc^m}$ are the identity operators in $\Cc^n$ and $\Cc^m$ respectively.\index{identity operator} Then \begin{equation}\label{x1y1h11} \|x_1 (z)\|_{\Cc^n} = |h_1(z)| = \|y_1 (z)\|_{\Cc^m}\;\text{almost everywhere on}\; \Tt.\end{equation} There exists a function $Q_2 \in H^\infty(\Dd, \CCmn)$ such that the pair $\left(s_0^\infty(G-Q_2), s_1^\infty(G-Q_2)\right)$ is lexicographically minimized and $$s_1^\infty(G-Q_2)=t_1.$$ Any such $Q_2$ satisfies \begin{equation}\label{31111}\begin{aligned}(G-Q_2) x_0 = t_0 y_0 , \quad y_0^* (G-Q_2) = t_0 x_0^*, \\ (G-Q_2)x_1 = t_1 y_1 , \quad y_1^* (G-Q_2)=t_1 x_1^*.\end{aligned}\end{equation} Choose any function $Q_{2} \in H^\infty(\Dd, \CCmn)$ which satisfies the equations \eqref{31111}. Define \begin{equation}\label{311111} \xi_1 = \frac{x_1}{h_1} , \quad \eta_1 = \frac{\bar{z}\bar{y}_1}{h_1}. \end{equation} Then $\| \xi_1(z) \|_{\Cc^n} =1= \| \eta_1(z) \|_{\Cc^m}$ almost everywhere on $\Tt.$ \end{proof} \index{pointwise!orthonormal on $\Tt$} \begin{definition} Let $E$ be a Hilbert space.
We say that a collection $\{\gamma_j\} $ of elements of $L^2(\Tt,E)$ is \emph{pointwise orthonormal on $\Tt$} if, for almost all $z \in \Tt$ with respect to Lebesgue measure, the collection of vectors $\{ \gamma_j(z)\}$ is orthonormal in $E.$ \end{definition} iii) \textbf{Inductive step}. Suppose we have constructed $$\begin{array}{clllllllll} &t_0 \geq t_1 \geq \cdots \geq t_j > 0\\ &x_0, x_1, \cdots, x_j \in L^2 (\Tt, \Cc^n)\\ &y_0 , y_1 , \cdots , y_j \in L^2(\Tt, \Cc^m) \\ &h_0, h_1, \cdots, h_j \in H^2(\Dd,\Cc) \; \text{outer}\\ & \xi_0,\xi_1, \cdots, \xi_j \in L^\infty(\Tt,\Cc^n)\; \text{pointwise orthonormal on}\; \Tt \\ & \eta_0, \eta_1, \cdots , \eta_j \in L^\infty(\Tt,\Cc^m) \;\text{pointwise orthonormal on}\; \Tt \\ &X_0 = H^2(\Dd,\Cc^n),X_1, \cdots, X_j \\ &Y_0 = H^2(\Dd,\Cc^m)^\perp, Y_1, \cdots, Y_j\\ &T_0, T_1, \cdots, T_j \; \text{compact operators}. \end{array}$$ There exists a function $Q_{j+1} \in H^\infty(\Dd, \CCmn)$ such that $$\left(s_0^\infty(G-Q_{j+1}), s_1^\infty(G-Q_{j+1}), \cdots , s_j^\infty(G-Q_{j+1})\right)$$ is lexicographically minimized. Any such function $Q_{j+1}$ satisfies \begin{equation}\label{g-qi1}(G-Q_{j+1})x_i = t_i y_i, \quad y_i^* (G-Q_{j+1}) = t_i x_i^*, \quad i=0, 1, \cdots, j. \end{equation} Choose any function $Q_{j+1} \in H^\infty(\Dd, \CCmn)$ which satisfies the equations \eqref{g-qi1}. 
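Here, and throughout, we use the standard lexicographic order: a finite sequence $(a_0,a_1,\dots,a_j)$ of nonnegative numbers is lexicographically smaller than $(b_0,b_1,\dots,b_j)$ if $a_i<b_i$ at the first index $i$ at which the two sequences differ. Thus $Q_{j+1}$ minimizes $s_0^\infty(G-Q)$ over all $Q\in H^\infty(\Dd,\CCmn)$; among all such minimizers it minimizes $s_1^\infty(G-Q)$; and so on, up to $s_j^\infty(G-Q)$.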
Define \begin{equation}\label{X_j1} X_{j+1} = \xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} H^2(\Dd,\Cc^n), \end{equation} and let \begin{equation}\label{Y_j1} Y_{j+1}= \bar{\eta}_0 \dot{\we} \bar{\eta}_1 \dot{\we} \cdots \dot{\we} \bar{\eta}_j \dot{\we} H^2 (\Dd, \Cc^m)^\perp .\end{equation} $X_{j+1}$ is a closed subspace of $H^2(\Dd,\we^{j+2}\Cc^n),$ and $Y_{j+1}$ is a closed subspace of $H^2 (\Dd, \we^{j+2} \Cc^m)^\perp.$ \noindent Consider the operator $$T_{j+1} : X_{j+1} \to Y_{j+1}$$ given, for all $x \in H^2(\Dd,\Cc^n)$, by \begin{equation}\label{T_j1} T_{j+1}(\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} x)= P_{Y_{j+1}} \left( \bar{\eta}_0 \dot{\we} \bar{\eta}_1 \dot{\we} \cdots \dot{\we} \bar{\eta}_j\dot{\we} (G-Q_{j+1})x \right). \end{equation} $T_{j+1}$ is well-defined. If $T_{j+1}=0,$ then the algorithm terminates; we define $r$ to be $j+1,$ and the superoptimal approximant $\mathcal{A}G$ is given by the formula $$G- \mathcal{A}G = \sum\limits_{i=0}^{r-1} \frac{t_i y_i x_i^*}{|h_i|^2}= \sum\limits_{i=0}^{j} \frac{t_i y_i x_i^*}{|h_i|^2}.$$ Otherwise, we define $t_{j+1} = \|T_{j+1}\| >0.$ Then $T_{j+1}$ is a compact operator and hence there exist $v_{j+1} \in H^2(\Dd,\Cc^n), \; w_{j+1} \in H^2(\Dd,\Cc^m)^\perp$ such that \begin{equation}\label{schmpairtj+11}(\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} v_{j+1}, \bar{\eta}_0 \dot{\we} \bar{\eta}_1 \dot{\we} \cdots \dot{\we} \bar{\eta}_j \dot{\we} w_{j+1})\end{equation} is a Schmidt pair for $T_{j+1}$ corresponding to the singular value $t_{j+1}.$ \noindent Let $h_{j+1}$ be the scalar outer factor of $\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_j \dot{\we} v_{j+1},$ and let \begin{equation}\label{xj+1yj+11}x_{j+1} = (I_{\Cc^n} - \xi_0 \xi_0^* - \cdots - \xi_j \xi_j^*)v_{j+1}, \quad y_{j+1} = (I_{\Cc^m} - \bar{\eta}_0 \eta_0^T - \cdots- \bar{\eta}_j\eta_j^T) w_{j+1},\end{equation} and define \begin{equation}\label{xij+1etaj+11} \xi_{j+1} =
\frac{x_{j+1}}{h_{j+1}}, \quad \eta_{j+1}=\frac{\bar{z}\bar{y}_{j+1}}{h_{j+1}}.\end{equation} \noindent One can show that $\|\xi_{j+1}(z)\|_{\Cc^n}=1 $ and $\|\eta_{j+1}(z)\|_{\Cc^m}=1$ almost everywhere on $\Tt.$ This completes the recursive step. The algorithm terminates after at most $\min(m,n)$ steps, so that $r \leq \min(m,n)$, and the superoptimal approximant $\mathcal{A}G$ is given by the formula $$G- \mathcal{A}G = \sum\limits_{i=0}^{r-1} \frac{ t_i y_i x_i^*}{|h_i|^2}.$$ \begin{remark} {\em Observe that, in step $j$ of the algorithm, we define an operator $T_j$ in terms of any function $Q_j \in H^\infty(\Dd,\Cc^{m \times n})$ that satisfies the equations \begin{equation}\label{g-q-j} (G-Q_{j})x_i = t_i y_i, \quad y_i^* (G-Q_{j}) = t_i x_i^*, \quad i=0, 1, \cdots, j-1. \end{equation} This constitutes a system of linear equations for $Q_j$ in terms of the computed quantities $x_i, t_i$ and $y_i$ for $i=0, \dots, j-1$, and we know, from Proposition \ref{g-qjj}, that the system has a solution for $Q_j$ in $H^\infty(\Dd,\Cc^{m \times n})$. By Proposition \ref{Twell}, $T_j$ is independent of the choice of $Q_j$ that satisfies equations \eqref{g-q-j}.} \end{remark} \begin{remark}{\em At each step we need to find $\|T_j\|$ and a Schmidt pair \begin{equation}(\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_{j-1} \dot{\we} v_{j}, \bar{\eta}_0 \dot{\we} \bar{\eta}_1 \dot{\we} \cdots \dot{\we} \bar{\eta}_{j-1} \dot{\we} w_{j}) \end{equation} for $T_{j}$ corresponding to the singular value $t_{j}.$ Then we compute the scalar outer factor $h_{j}$ of $\xi_0 \dot{\we} \xi_1 \dot{\we} \cdots \dot{\we} \xi_{j-1} \dot{\we} v_{j}\in H^2(\Dd, \wedge^{j+1}\Cc^n)$. These are the only spectral factorisations needed in the algorithm.
Note that if $f\in H^2(\Dd, \Cc^n)$ has the inner-outer factorisation $f=hg$, with $h\in H^2(\Dd,\Cc)$ a scalar outer function and $g\in H^\infty(\Dd,\Cc^n)$ inner, then $(f^*f)(z)=|h(z)|^2$ almost everywhere on $\Tt$, and so the calculation of $h$ requires us to find a spectral factorisation of the positive {\em scalar-valued} function $f^*f$ on the circle. } \end{remark} \begin{remark}{\em In a numerical implementation of the algorithm one would need to find a way to compute the norms and Schmidt vectors of the compact operators $T_j$. For this purpose it would be natural to choose convenient orthonormal bases of the cokernel $X_j\ominus \ker T_j$ and the range $\ran T_j$. It is safe to assume that in most applications $G$ will be a rational function, in which case the cokernel and range will be finite-dimensional. At step 0, $T_0$ is a Hankel operator, and the calculation of the matrix of $T_0$ with respect to suitable orthonormal bases is a known task \cite{Young83}; we believe that similar methods will work for step $j$. } \end{remark} In Theorem \ref{mathcalAG} we arrive at the following conclusion about the superoptimal approximant $\mathcal{A}G.$ \begin{theorem} Let $G\in H^\infty(\Dd, \Cc^{m\times n})+C(\Tt, \Cc^{m\times n}).$ Let $T_i, x_i, y_i, h_i$, for $i\ge 0$, be defined by the algorithm above. Let $r$ be the least index $j \ge 0$ such that $T_j=0$. Then $r\leq \min(m,n)$ and the superoptimal approximant $\mathcal{A}G$ is given by the formula $$ G-\mathcal{A}G= \displaystyle \sum\limits_{i=0}^{r-1} \frac{t_i y_i x_i^*}{|h_i|^2} .$$ \end{theorem} Wedge products, and in particular pointwise wedge products, along with their properties are studied in detail in Section \ref{exterior}.
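The two numerical tasks singled out in the remarks above, namely computing $t_0=\|H_G\|$ from a matrix of a Hankel operator and extracting a scalar outer factor from boundary data, can be sketched numerically as follows. This is a minimal illustration and not the method of \cite{Young83}; the scalar symbol $g(z)=1/(z-a)$, the truncation size $N$, the grid size, and the test function $h(z)=2-z$ are all assumptions chosen for the example.

```python
import numpy as np

# Symbol g(z) = 1/(z - a) with |a| < 1.  With respect to the bases {z^j} of
# H^2 and {z^{-k-1}} of (H^2)^perp, the Hankel operator H_g has matrix
# entries a^(j+k): a rank-one matrix, so ||H_g|| = 1/(1 - a^2).
a, N = 0.5, 60
j = np.arange(N)
H = a ** (j[:, None] + j[None, :])   # truncated Hankel matrix
t0 = np.linalg.norm(H, 2)            # largest singular value, approximates ||H_g||

# Scalar outer factor from boundary data: for an outer function h in H^2,
# log|h(0)| equals the mean of log|h| over the circle.  Test with
# h(z) = 2 - z, for which |h(e^{i theta})|^2 = 5 - 4 cos(theta), |h(0)| = 2.
theta = 2 * np.pi * np.arange(4096) / 4096
w = 5.0 - 4.0 * np.cos(theta)                    # boundary values of |h|^2
h0_abs = float(np.exp(0.5 * np.mean(np.log(w))))
```

Here the Hankel matrix is rank one, so the truncation error in $t_0$ is of order $a^{2N}$; for the outer factor the identity $|h(0)|=\exp\big(\frac{1}{2\pi}\int_0^{2\pi}\log|h(e^{i\theta})|\,d\theta\big)$ is used, which is exactly the scalar spectral factorisation step described in the remark.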
\section{History and recent work}\label{history} The Nehari problem of approximating an essentially bounded Lebesgue measurable function on the unit circle $\t$ by a bounded analytic function on the unit disk $\d$ has been attracting the interest of both pure mathematicians and engineers since the middle of the 20th century. The problem was first formulated and studied for scalar-valued functions and, in the years that followed, also from the operator-valued perspective, which motivated research into the superoptimal approximation problem. The Nehari problem in the scalar case first appeared in the paper of Nehari \cite{Neh1}. Given an essentially bounded complex-valued function $g$ on $\t,$ one seeks its distance from $H^\infty$ with respect to the essential supremum norm, and wishes to determine for which elements of $H^\infty$ this distance is attained. It is also of interest to know whether the distance is attained at a uniquely determined function. Such problems have been studied in detail by Nehari \cite{Neh1}, Sarason \cite{sarason} and Adamjan, Arov and Krein in \cite{aak1} and \cite{aak2}. These authors proved that the distance of $g$ from $H^\infty$ is equal to the norm of the Hankel operator $H_g$ with symbol $g.$ Moreover, if $H_g$ has a maximizing vector in $H^2,$ then the bounded analytic complex-valued function $q$ that minimizes the essential supremum norm $\|g-q \|_{L^\infty}$ is uniquely determined and can be explicitly calculated (see, for example, \cite[p. 196]{Young}). Furthermore, if the essential norm $\|H_g\|_e$ is less than $\|H_g\|$, then $g$ has a unique best approximant. Pure mathematicians and engineers started seeking analogues of those results for matrix- and operator-valued functions. These generalizations are not only mathematically interesting, but are essential for applications in engineering, and especially in control theory.
There has accordingly been an explosion of research in this field since 1980, on the part of both pure mathematicians and engineers. Page \cite{page} and Treil \cite{treil1} gave various extensions of the results of Adamjan, Arov and Krein to operator-valued functions. Page proved that for operator-valued mappings $T\in L^\infty(\t, \mathcal{L}(E_1,E_2)),$ $\inf\{\|T-\Phi \|:\Phi \in H^\infty(\d,\mathcal{L}(E_1,E_2)) \}$ is equal to $\|H_T\|$. Here $E_1,E_2$ are Hilbert spaces and $\mathcal{L}(E_1,E_2)$ denotes the Banach space of bounded linear operators from $E_1$ to $E_2.$ Treil extended the Adamjan, Arov and Krein theorem in \cite{aak2} to an operator-valued analogue. However, in the matrix-valued setting there are typically infinitely many functions that minimize the $L^\infty$ norm of the error function. This fact is simply illustrated by the following example. Let $G(z) = \mathrm{diag} \{ \bar{z} , 0 \}$, for $ z \in \t.$ The norm of $H_G$ in this case is easily seen to be $1,$ and hence \emph{all} matrix-valued functions $Q \in H^\infty(\d,\c^{2\times 2})$ of the form $Q(z) = \mathrm{diag}\{ 0 , q(z) \},$ where $q \in H^\infty$ and $\|q\|_{H^\infty} \leq 1,$ minimize the norm $\|G-Q\|_\infty$, yielding the error $1$. However, if one goes on to minimize in turn the essential suprema of both singular values of $G(z)-Q(z)$ over $Q \in H^\infty(\d,\c^{2\times 2})$, one finds that such a minimum occurs uniquely when $q(z)$ is equal to $0.$ This type of example suggests that the enhanced approximation criterion based on successive singular values generates the ``very best'' amongst the best approximants to $G$ by an element of $ H^\infty(\d,\c^{2\times 2})$. Such reflections led to the formulation of a strengthened approximation problem, the superoptimal approximation Problem \ref{mainproblem} explained above. In \cite{young2} N.J.
Young introduced this strengthened notion of optimal analytic approximation, subsequently called {\em superoptimal} approximation. Given a $G$ as above, find a $Q \in H^\infty(\d,\cmn)$ such that the sequence $s^\infty(G-Q)$ is lexicographically minimized. This criterion obviously constitutes a considerable strengthening of the notion of optimality, as one needs to determine a $Q\in H^\infty(\d,\cmn)$ that not only minimizes $\|G-Q\|_{L^\infty},$ but minimizes the $L^\infty$ norm of all the subsequent singular values $s_j(G(z)-Q(z))$ for $j\geq 0$. A good starting point for the superoptimal approximation problem of matrix functions is \cite{superop}. As we have said, the problem is to find, for a given $$G \in H^\infty(\d,\cmn)+C(\t,\cmn), $$ a function $Q \in H^\infty(\d,\cmn)$ such that the sequence $s^\infty(G-Q)$ is lexicographically minimized. Peller and Young proved some requisite preparatory results on ``thematic factorizations'', on the analyticity of the minors of unitary completions of inner matrix columns and on the compactness of some Hankel-type operators with matrix symbols. These results provided the foundation for their main theorem, namely that if $G$ belongs to $H^\infty(\d,\cmn)+C(\t,\cmn),$ then there exists a unique $Q \in H^\infty(\d,\cmn)$ such that the sequence $s^\infty(G-Q)$ is lexicographically minimized as $Q$ varies over $ H^\infty(\d,\cmn)$; moreover, for this $Q$, the singular values $s_j(G(z)-Q(z))$ are constant almost everywhere for $z\in\t$, for $j=0,1,2,\dots\;.$ Later, in \cite{Constr} Peller and Young presented a conceptual algorithm for the computation of the superoptimal approximant. Their algorithm is based on the theory developed in \cite{superop}. Also in \cite{Constr}, the algorithm was applied to a concrete example of a rational $2\times 2$ matrix-valued function $G$ in $H^\infty(\d,\c^{2\times 2})+C(\t,\c^{2\times 2})$ and the superoptimal approximant $\mathcal{A}G$ was calculated by hand.
Additionally, Peller and Young in \cite{PY} studied superoptimal approximation by {\em meromorphic} matrix-valued functions, that is, matrix-valued functions, bounded on the circle, that are meromorphic in $\d$ with at most $k$ poles for some prescribed integer $k$. They modified the results of \cite{superop} and established a uniqueness criterion in the case that the given matrix-valued function $G$ is in $H^\infty + C$ and has at most $k$ poles. In addition, they provided an algorithm for the calculation of the superoptimal approximant. One can extend the above results to {\em operator}-valued functions on the circle; the operator-valued superoptimal approximation problem was studied by Peller in \cite{pellroper}. He generalized the notions of \cite{superop} and proved that there exists a unique superoptimal approximant in $H^\infty(\mathcal{B})$ for functions that belong to $H^\infty(\mathcal{B})+C(\mathcal{C}),$ where $\mathcal{B}$ denotes the space of bounded linear operators and $C(\mathcal{C})$ denotes the space of continuous functions on the circle taking values in the space of compact operators. Very badly approximable functions, that is, functions that have the zero function as a superoptimal approximant, were studied in the years that followed and a considerable amount of work was published. Peller and Young's paper \cite{superop} provided the motivation for the study of this problem: there they algebraically characterised the very badly approximable matrix functions of class $H^\infty(\d,\cmn)+C(\t,\cmn). $ Their results were extended in \cite{fourblock} to the case of matrix functions $G$ for which $\|H_G\|_e$ is less than the smallest non-zero superoptimal singular value of $G.$ Very badly approximable matrix functions with entries in $H^\infty+C$ were completely characterised in \cite{pellerverybadly}.
Recent work in \cite{lpappr} by Baratchart, Nazarov and Peller explores the analytic approximation of matrix-valued functions in $L^p$ of the unit circle by matrix-valued functions from $H^p$ of the unit disk in the $L^p$ norm for $p \leq 2.$ They proved that if a given matrix-valued function $\Psi \in L^p(\t,\cmn)$ is a `respectable' matrix function, then its distance from $H^p(\d,\cmn)$ is equal to $\|H_\Psi\|,$ and they obtained a characterisation of that distance also in the case that $\Psi$ is a `weird' matrix-valued function. Furthermore, they established the notion of $p$-superoptimal approximation and proved that every $n\times n$ rational matrix function has a unique $p$-superoptimal approximant for $2\leq p <\infty.$ For the case $p=\infty$ they provided a counterexample. In a more recent paper of Condori \cite{condori}, the author considered the relation between the sum of the superoptimal singular values of admissible functions in $L^\infty(\t,\cmn)$ and the superoptimal analytic approximation problem in the space $L^\infty(\t,S_p^{m,n}),$ where $S_p^{m,n}$ denotes the space of $m\times n$ matrices endowed with the Schatten-von Neumann norm $\| \cdot \|_{S_p^{m,n}}.$ He showed that if $\Phi \in L^\infty(\t, \c^{n\times n})$ is an admissible matrix function of order $k$, then $Q \in H^\infty(\d,\c^{n\times n})$ is a best approximant function under the $L^\infty(\t,S_1^{n,n})$-norm and the singular values $s_j((\Phi-Q)(z))$ are constant almost everywhere on $\t$ for $j=0,1,\dots, k-1$ if and only if $Q$ is a superoptimal approximant to $\Phi,$ $$\mathrm{ess}\sup_{z \in \t} s_j((\Phi-Q)(z))=0$$ for $j\geq k,$ and the sum of the superoptimal singular values of $\Phi$ is equal to $$\sup \left| \int_{\t} \mathrm{trace} (\Phi(\zeta)\Psi(\zeta))\;dm(\zeta) \right| ,$$ where $m,n>1,$ $1\leq k \leq \min(m,n)$ and the supremum is taken over all $\Psi \in H_0^1(\d,\c^{n\times m})$ for which $\|\Psi\|_{L^1(\t,\c^{n\times m})} \leq 1$ and
$\mathrm{rank} \Psi(\zeta) \leq k$ almost everywhere on $\t.$ \section{Pointwise orthonormality of $\{\xi_i\}_{i=0}^j$ and $\{\bar{\eta}_i\}_{i=0}^j$ almost everywhere on $\Tt$}\label{orthonomal} These orthonormality properties will be needed for the justification of the main algorithm. \begin{proposition}\label{onxi} Let $m,n$ be positive integers with $\min(m,n) \geq 2,$ let \newline $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn)$ and let $0\leq j\leq \min(m,n)-2.$ Suppose we have applied steps $0,\dots,j$ of the superoptimal analytic approximation algorithm from Subsection \ref{Alg_statement} to $G$ and we have obtained $x_i,y_i$ as in equations \eqref{xj+1yj+1}, and $\xi_i, \eta_i$ as in equations \eqref{xij+1etaj+1} for $i=0,\cdots, j.$ Then \begin{enumerate} \item[\emph{(i)}] $\xi_0 \telwe v_1 = \xi_0 \telwe x_1, \quad \xi_0\telwe \cdots \telwe \xi_{j-1} \telwe v_j =\xi_0\telwe \cdots \telwe \xi_{j-1} \telwe x_j, $ $\bar{\eta}_0 \telwe w_1 = \bar{\eta}_0 \telwe y_1,$ \newline and $\bar{\eta}_0 \telwe\cdots \telwe \bar{\eta}_{j-1}\telwe w_j = \bar{\eta}_0 \telwe\cdots \telwe \bar{\eta}_{j-1}\telwe y_j ;$ \item[\emph{(ii)}] $\|x_j(z)\|_{\Cc^n} = \|y_j(z)\|_{\Cc^m}=|h_j(z)| $ almost everywhere on $\Tt;$ \item[\emph{(iii)}] The sets $\{\xi_i(z)\}_{i=0}^{j}$ and $\{\bar{\eta}_i(z) \}_{i=0}^j $ are orthonormal in $\Cc^n$ and $\Cc^m$ respectively for almost every $z \in \Tt.$ \end{enumerate} \end{proposition} \begin{proof} We will prove statement (ii) in Propositions \ref{x0wev1eta1wew1} and \ref{x0wev2eta2wew2}. Statement (i) is proven below in equations \eqref{xi0telx1}, \eqref{xi0xijisxi0xj}, \eqref{eta-wi-yi}. Let us prove assertion (iii).
Since the function $G$ belongs to $ H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn),$ by Hartman's theorem, the Hankel operator with symbol $G$, denoted by $H_G,$ is a compact operator, and so there exist functions $$x_0 \in H^2(\Dd,\Cc^n),\quad y_0 \in H^2(\Dd,\Cc^m)^\perp$$ such that $(x_0,y_0)$ is a Schmidt pair corresponding to the singular value $t_0= \|H_G\|\neq 0.$ By Lemma \ref{2.2}, $x_0 , \bar{z}\bar{y}_0 $ admit the inner-outer factorizations $$ x_0 = \xi_0 h_0, \quad \bar{z}\bar{y}_0 = \eta_0 h_0$$ for column matrix inner functions $\xi_0\in H^\infty(\Dd, \Cc^n)$, $ \eta_0\in H^\infty(\Dd, \Cc^m) $ and some scalar outer factor $h_0 \in H^2(\Dd, \Cc).$ By Theorem \ref{1.7}, \begin{equation}\label{normxoyo} \|x_0 (z)\|_{\Cc^n} = |h_0(z)| = \|y_0 (z)\|_{\Cc^m} \;\; \text{almost everywhere on }\; \Tt.\end{equation} Thus \begin{equation} \label{xi01}\|\xi_{0}(z)\|_{\Cc^n} =1 \;\text{almost everywhere on}\; \Tt.\end{equation} Hence (iii) of Proposition \ref{onxi} holds for $\{\xi_i(z)\}_{i=0}^j$ in the case that $j=0.$ \noindent Let $T_1$ be given by equation (\ref{T_0}). By the hypothesis \eqref{rec_step}, $T_1$ is a compact operator, and if $T_1 \neq 0,$ then there exist $v_1\in H^2(\Dd,\Cc^n)$ and $w_1\in H^2(\Dd,\Cc^m)^\perp$ such that $(\xi_0 \telwe v_1, \bar{\eta}_0\telwe w_1)$ is a Schmidt pair corresponding to $\|T_1\|=t_1.$ By Proposition \ref{xweyh2}, $\xi_0 \telwe v_1 \in H^2(\Dd,\we^2\Cc^n).$ Let $h_1$ be the scalar outer factor of $\xi_0\telwe v_1.$ We define \begin{equation}\label{x1} x_1= (I_{n} - \xi_0 \xi_0^*) v_1 \end{equation} and \begin{equation}\label{xi11} \xi_1= \frac{x_1}{h_1}. \end{equation} Then, for $z\in \Dd,$ $$ \xi_1 (z) =\displaystyle\frac{1}{h_1(z)} v_1(z)-\frac{1}{h_1(z)} \xi_0 (z) \xi_0 (z)^* v_1(z). 
$$ Note that by equation (\ref{xi01}), $$ \xi_0^*(z)\xi_0(z) = \langle \xi_0 (z), \xi_0 (z)\rangle_{\Cc^n} =1\quad\text{almost everywhere on}\; \Tt,$$ hence $$ \langle \xi_1(z), \xi_0(z)\rangle_{\Cc^n} = \xi_0^*(z) \xi_1(z)=\displaystyle\frac{1}{h_1(z)}\xi_0(z)^* v_1(z) - \displaystyle\frac{1}{h_1(z)}\xi_0 (z)^* \xi_0 (z) \xi_0 (z)^* v_1(z) =0$$ almost everywhere on $\Tt.$ Note that, by equation \eqref{x1}, for almost every $z \in \Tt,$ \begin{align} \xi_0 (z) \we v_1(z) &=\xi_0 (z) \we (x_1(z) + \xi_{0}(z) \xi_0(z)^* v_1(z) )\nonumber\vspace{2ex} \\&= \xi_0 (z) \we x_1(z) + \xi_0 (z) \we \xi_{0}(z) \xi_0(z)^* v_1(z)\nonumber\vspace{2ex} \\ &= \xi_0(z) \we x_1(z)\label{xi0telx1},\end{align} the last equality following from the pointwise linear dependence of the vectors $\xi_0$ and $z\mapsto \xi_0(z) \langle v_1(z), \xi_0(z)\rangle_{\Cc^n}$ almost everywhere on $\Tt.$ Moreover, since $h_1$ is the scalar outer factor of $\xi_0 \telwe v_1,$ for almost every $z \in \Tt,$ we have $$\begin{array}{clll}|h_1(z)|&= \| \xi_0 (z) \we v_1(z)\|_{\we^2{\Cc^n}} = \| \xi_0(z) \we x_1(z)\|_{\we^2\Cc^n}.\end{array}$$ By Lemma \ref{weon}, $$ \| \xi_0(z) \we x_1(z)\|_{\we^2\Cc^n} = \| x_1(z)- \langle x_1(z) , \xi_0(z) \rangle_{\Cc^n} \xi_0(z)\|_{\Cc^n} = \| x_1(z) \|_{\Cc^n} $$ almost everywhere on $\Tt.$ Hence, for almost every $z\in \Tt,$ \begin{equation}\label{h1=xi} |h_1(z)| = \|x_1(z)\|_{\Cc^n} \end{equation} and thus $$\| \xi_1(z) \|_{\Cc^n} = \displaystyle\frac{\|x_1(z)\|_{\Cc^n}}{|h_1(z)|} =1 \quad \text{almost everywhere on} \; \Tt.$$ \noindent Consequently, $\{ \xi_0(z) , \xi_1(z)\}$ is an orthonormal set in $\Cc^n$ for almost every $z \in \Tt.$ Hence (iii) of Proposition \ref{onxi} holds for $\{\xi_i(z)\}_{i=0}^j$ in the case that $j=1.$ \textbf{Recursive step:} Suppose the entities in equations \eqref{rec_step} have been constructed and have the stated properties.
Since by the inductive hypothesis $T_j$ is a compact operator, there exist $$v_j \in H^2(\Dd,\Cc^n),\quad w_j \in H^2(\Dd,\Cc^m)^\perp$$ such that $$ (\xi_0 \telwe \xi_1 \telwe \cdots \telwe \xi_{j-1} \telwe v_j, \overline{ \eta}_0\telwe \overline{ \eta}_1\telwe \cdots \telwe \overline{ \eta}_{j-1}\telwe w_j) $$is a Schmidt pair for $T_j$ corresponding to $\|T_j\|=t_j.$ Let us first prove that $\xi_0\telwe \xi_1 \telwe \cdots \telwe \xi_{j-1}\telwe v_j$ is an element of $ H^2(\Dd,\we^{j+1}\Cc^n).$ By hypothesis, \[x_i = (I_{n}-\xi_0 \xi_0^* -\dots - \xi_{i-1}\xi_{i-1}^*)v_i\quad \text{and}\quad \xi_i=\frac{x_i}{h_i}\]for $i=1,\dots,j-1.$ Then, for all $z\in \Dd,$ \begin{align*} \left( \xi_0\telwe\xi_1 \telwe \cdots \telwe \xi_{j-1} \telwe v_j\right) (z) &= \left(\xi_0\telwe\frac{x_1}{h_1}\telwe \cdots \telwe \frac{x_{j-1}}{h_{j-1}}\telwe v_j\right)(z)\\ &= \left(\frac{1}{h_1}\cdots \frac{1}{h_{j-1}} \xi_0 \telwe x_1 \telwe \cdots \telwe x_{j-1}\telwe v_j\right)(z). \end{align*} We obtain \[ (\xi_0\telwe\xi_1 \telwe \cdots \telwe \xi_{j-1} \telwe v_j)(z) = \left(\frac{1}{h_1}\cdots \frac{1}{h_{j-1}} \xi_0 \telwe v_1 \telwe \cdots \telwe v_{j-1}\telwe v_j\right)(z),\; \text{for all}\;z\in \Dd, \]due to the pointwise linear dependence of $\xi_k$ and $z\mapsto \xi_k(z) \langle v_{i}(z), \xi_k(z)\rangle_{\Cc^n}$ on $\Dd,$ for all $0\leq k<i\leq j-1.$ By Proposition \ref{wejanalytic}, \[\frac{1}{h_1}\cdots \frac{1}{h_{j-1}} \xi_0 \telwe v_1 \telwe \cdots \telwe v_{j-1}\telwe v_j\] is analytic on $\Dd.$ Moreover, by Proposition \ref{wel2conv}, since $\xi_0,\xi_1,\dots,\xi_{j-1}$ are pointwise orthogonal on $\Tt,$ \[ \|\xi_0\telwe \xi_1 \telwe \cdots\telwe \xi_{j-1}\telwe v_j\|_{L^2(\Tt,\we^{j+1}\Cc^n)} < \infty. \]Therefore \[ \xi_0\telwe \xi_1 \telwe \cdots\telwe \xi_{j-1}\telwe v_j \in H^2(\Dd,\we^{j+1}\Cc^n).
\] Let $h_{j}$ be the scalar outer factor of $\xi_0 \telwe \xi_1 \telwe \cdots \telwe \xi_{j-1} \telwe v_j.$ We define \begin{equation}\label{xj} x_{j} = (I_{n} - \xi_0 \xi_0^* - \cdots - \xi_{j-1} \xi_{j-1}^*)v_j\end{equation} and \begin{equation}\label{xijj} \xi_{j}=\frac{x_{j}}{ h_{j}}.\end{equation} Let us show that $\{ \xi_0(z), \cdots, \xi_{j-1}(z), \xi_{j}(z)\}$ is an orthonormal set in $\Cc^n$ almost everywhere on $\Tt.$ We have $$ \xi_{j} = \displaystyle\frac{1}{h_{j}} v_j - \displaystyle\frac{1}{h_{j} } \xi_0 \xi_0^* v_j - \cdots - \displaystyle\frac{1}{h_{j} }\xi_{j-1} \xi_{j-1}^* v_j ,$$ and so, for $i=0,\dots, j-1,$ $$ \begin{array}{cll} &\langle \xi_{j}(z), \xi_{i}(z) \rangle_{\Cc^n}= \displaystyle\frac{1}{h_j(z)}\xi_{i}^*(z) v_j(z) - \displaystyle\frac{1}{h_j(z)}\xi_{i}^*(z) \xi_0(z) \xi_0^*(z) v_j(z)- \cdots \vspace{2ex}\\ &\hspace{37ex}-\displaystyle\frac{1}{h_j(z)}\xi_{i}^*(z) \xi_{j-1}(z) \xi_{j-1}^*(z) v_j(z) \end{array}$$ almost everywhere on $\Tt.$ Note that by the inductive hypothesis, for $i,k=0,1,\cdots, j-1$ and for almost all $z \in \Tt$, $$ \xi_{i}^*(z) \xi_k (z)= \left\{ \begin{array}{ll} 0,\quad \text{for} \; i\neq k\\ 1,\quad \text{for} \;i=k\end{array}.\right.$$ Thus, for $i=0,\dots,j-1,$ $$\langle \xi_{j}(z), \xi_{i}(z) \rangle_{\Cc^n}= \displaystyle\frac{1}{h_{j}(z)}\xi_{i}^*(z) v_j(z) - \displaystyle\frac{1}{h_{j}(z)}\xi_{i}^*(z) \xi_i(z) \xi_i^*(z) v_j(z)=0 $$almost everywhere on $\Tt,$ and hence, by induction on $j$ and for all integers $j=0,\dots, r-1,$ $\{ \xi_0(z), \cdots, \xi_{j-1}(z), \xi_{j}(z)\}$ is an orthogonal set in $\Cc^n$ for almost all $z \in \Tt.$ Let us show that $$\xi_0(z) \we \cdots \we \xi_{j-1}(z)\we v_j(z) =\xi_0(z) \we \cdots \we \xi_{j-1}(z)\we x_{j}(z) $$almost everywhere on $\Tt.$ Equation \eqref{xj} yields $$ \begin{array}{cllll} &\xi_0(z) \we \cdots \we \xi_{j-1}(z)\we v_j(z) \vspace{2ex}\\&= \xi_0(z) \we \cdots \we \xi_{j-1}(z)\we (x_{j}(z)+ \xi_0 (z) \xi_0^* (z) v_j(z) \\ &\hspace{32ex}+ \cdots +
\xi_{j-1}(z)\xi_{j-1}^*(z)v_j(z)) \\ &= \xi_0(z) \we \cdots \we \xi_{j-1}(z)\we (x_{j}(z) + \xi_0 (z) \langle v_j(z), \xi_0(z)\rangle_{\Cc^n} \\ &\hspace{32ex}+ \cdots + \xi_{j-1}(z) \langle v_j(z) ,\xi_{j-1}(z)\rangle_{\Cc^n} ) \end{array}$$almost everywhere on $\Tt.$ \noindent Notice that, for $i=0,\cdots, j-1,$ the vectors $\xi_i$ and $z \mapsto \xi_i(z)\langle v_j(z), \xi_i(z)\rangle_{\Cc^n}$ are pointwise linearly dependent almost everywhere on $\Tt.$ Thus for all $i=0,\cdots,j-1,$ $$ \xi_0 (z) \we \cdots \we \xi_{j-1}(z) \we \xi_i (z) \langle v_{j}(z), \xi_i(z) \rangle_{\Cc^n} =0 $$ almost everywhere on $\Tt.$ \noindent Hence \begin{equation}\label{xi0xijisxi0xj}\xi_0(z) \we \cdots \we \xi_{j-1}(z)\we v_{j}(z) =\xi_0(z) \we \cdots \we \xi_{j-1}(z)\we x_{j}(z) \quad \text{almost everywhere on} \; \Tt. \end{equation} \noindent Next, we shall show that $\|\xi_j(z)\|_{\Cc^n}=1$ for almost all $z \in \Tt.$ Recall that $h_j$ is the scalar outer factor of $\xi_0 \telwe \xi_1 \telwe \cdots \telwe \xi_{j-1} \telwe v_j,$ and therefore $$ \begin{array}{cllll} |h_{j}(z)|&=\|\xi_0(z) \we \cdots \we \xi_{j-1}(z)\we v_j(z)\|_{\we^{j+1}\Cc^n} = \| \xi_0(z) \we \cdots \we \xi_{j-1}(z)\we x_{j}(z)\|_{\we^{j+1}\Cc^n}\\ \end{array}$$almost everywhere on $\Tt.$ \noindent By the inductive hypothesis, $\{\xi_0(z) , \cdots, \xi_{j-1}(z)\}$ is an orthonormal set in $\Cc^n$ for almost all $z \in \Tt,$ hence, by Lemma \ref{weon}, \begin{align} |h_j(z)|&= \| \xi_0(z) \we \cdots \we \xi_{j-1}(z)\we x_{j}(z)\|_{\we^{j+1}\Cc^n}\nonumber \vspace{2ex}\\ &= \| x_j(z) - \sum\limits_{i=0}^{j-1} \langle x_{j}(z), \xi_i(z)\rangle \xi_i(z) \|_{\Cc^n}\nonumber \vspace{2ex} \\ &= \|x_{j}(z)\|_{\Cc^n} \;\text{almost everywhere on}\;\Tt.\label{hj=xj} \end{align} Thus $$\| \xi_{j}(z)\|_{\Cc^n} =\frac{\| x_j(z)\|_{\Cc^n}}{|h_j(z)|} =1$$ almost everywhere on $\Tt,$ and hence, by induction on $j,$ $\{ \xi_0(z), \cdots, \xi_{j-1}(z), \xi_{j}(z)\}$ is an orthonormal set in $\Cc^n$ for almost all
$z \in \Tt,$ and for all integers $j=0,\dots,r-1.$ \noindent Next, we will prove inductively that the set $\{\bar{\eta}_i(z) \}_{i=0}^j, $ defined in equations (\ref{xij+1etaj+1}), is orthonormal. For $i=0,$ by equation (\ref{normxoyo}), we have \begin{equation} \label{eta01}\|\bar{\eta}_{0}(z)\|_{\Cc^m} =1 \;\text{almost everywhere on}\; \Tt.\end{equation} \noindent Let $T_1$ be given by equation (\ref{T_0}). $T_1$ is assumed to be a compact operator, and if $T_1 \neq 0,$ there exist $v_1\in H^2(\Dd,\Cc^n)$ and $w_1\in H^2(\Dd,\Cc^m)^\perp$ such that $(\xi_0 \telwe v_1, \bar{ \eta}_0\telwe w_1)$ is a Schmidt pair corresponding to $\|T_1\|=t_1.$ Suppose $h_1$ is the scalar outer factor of $\xi_0\telwe v_1.$ Let \begin{equation}\label{y1} y_1= (I_{m} - \bar{ \eta}_0 \eta_0^T) w_1 = w_1 -\bar{ \eta}_0 \eta_0^T w_1 \end{equation} and let $$\eta_1(z) = \frac{\bar{z} \bar{y}_1(z)}{h_1(z)} \quad \text{almost everywhere on} \; \Tt. $$ Then, $$ \bar{\eta}_1(z)= \frac{zy_1(z)}{\bar{h}_1(z)} = \frac{z w_1(z) }{\bar{h}_1(z)} - \frac{z \bar{\eta}_0(z) \eta_0^T(z) w_1(z)}{\bar{h}_1(z)} \quad \text{almost everywhere on} \; \Tt.$$ By equation (\ref{eta01}), $\left\|\bar{ \eta}_0(z)\right\|_{\Cc^m}=1$ almost everywhere on $\Tt.$ Hence $$ \begin{array}{cllllll} \langle \bar{\eta}_1(z) , \bar{\eta}_0(z) \rangle_{\Cc^m} &= \eta_0^T (z) \bar{\eta}_1(z)\vspace{2ex}\\ &=\displaystyle \frac{z \eta_0^T (z) w_1(z) }{\bar{h}_1(z)} - \frac{z \eta_0^T (z)\bar{\eta}_0(z) \eta_0^T(z) w_1(z)}{\bar{h}_1(z)}\vspace{2ex}\\ &= \displaystyle\frac{z \eta_0^T (z) w_1(z) }{\bar{h}_1(z)} - \frac{z \langle \bar{\eta}_0(z), \bar{\eta}_0(z)\rangle_{\Cc^m}\eta_0^T (z)w_1(z)}{\bar{h}_1(z)}\vspace{2ex}\\ &= \displaystyle\frac{z \eta_0^T (z) w_1(z) }{\bar{h}_1(z)} - \frac{z \eta_0^T (z) w_1(z) }{\bar{h}_1(z)}\vspace{2ex} \\ &= 0\quad \text{almost everywhere on} \; \Tt.
\end{array}$$ Recall that $h_1$ is the scalar outer factor of $\xi_0 \telwe v_1.$ By equation \eqref{h1=xi} and Proposition \ref{x0wev1eta1wew1}, $$\|x_1(z)\|_{\Cc^n} = \|y_1(z)\|_{\Cc^m} = |h_1(z)| $$almost everywhere on $\Tt,$ thus $$\|\bar{ \eta}_1(z) \|_{\Cc^m} =\frac{\|z y_1(z) \|_{\Cc^m}}{|\bar{h}_1(z)|}=1 \quad \text{almost everywhere on} \; \Tt. $$ Consequently, $\{ \bar{ \eta}_0(z) , \bar{ \eta}_1(z)\}$ is an orthonormal set in $\Cc^m$ for almost every $z \in \Tt.$ Hence (iii) of Proposition \ref{onxi} holds for $\{\bar{\eta}_i\}_{i=0}^j$ in the case that $j=1.$ \textbf{Recursive step:} Suppose the entities in equations \eqref{rec_step} have been constructed and have the stated properties. Since by the inductive hypothesis $T_j$ is a compact operator, there exist $$v_j \in H^2(\Dd,\Cc^n), \quad w_j \in H^2(\Dd,\Cc^m)^\perp$$ such that $$ (\xi_0 \telwe \xi_1 \telwe \cdots \telwe \xi_{j-1} \telwe v_j, \bar{ \eta}_0\telwe \bar{ \eta}_1\telwe \cdots \telwe \bar{ \eta}_{j-1}\telwe w_j) $$is a Schmidt pair for $T_j$ corresponding to $\|T_j\|=t_j.$ We have proved above that $$\xi_0 \telwe \cdots\telwe \xi_{j-1} \telwe v_j \in H^2(\Dd,\we^{j+1}\Cc^n).$$ Let $h_{j}$ be the scalar outer factor of $\xi_0 \telwe \xi_1 \telwe \cdots \telwe \xi_{j-1} \telwe v_j.$ We define $$y_{j} =( I_{m} - \bar{\eta}_0 \eta_0^T - \cdots - \bar{\eta}_{j-1} \eta_{j-1}^T)w_j$$ and \begin{equation}\label{etajj} \bar{\eta}_{j}=\frac{zy_{j}}{\bar{h}_{j}}.\end{equation} Let us show that $\{ \bar{ \eta}_0 (z), \dots, \bar{ \eta}_{j}(z) \}$ is an orthonormal set in $\Cc^m$ almost everywhere on $\Tt.$ We have $$ \bar{\eta}_{j} = \frac{z w_j }{\bar{h}_{j} } -\cdots -\frac{{z} \bar{\eta}_{j-1} \eta_{j-1}^T w_j }{\bar{h}_{j} } $$ and so, for $i=0,\dots, j-1,$ $$ \begin{array}{cllllll} \langle \bar{\eta}_{j}(z), \bar{\eta}_{i}(z) \rangle_{\Cc^m}&= \eta_i^T(z) \bar{\eta}_{j}(z)\vspace{2ex} \\ &= \displaystyle \frac{z \eta_i^T (z) w_j(z) }{\bar{h}_{j}(z)} -\cdots -\frac{z\eta_i^T(z)
\bar{\eta}_{j}(z)\eta_j^T(z)w_j(z)}{\bar{h}_{j}(z)} \end{array}$$ almost everywhere on $\Tt.$ \noindent Notice that, by the inductive hypothesis, for $i,k= 0 ,\dots, j-1$ and for almost all $z \in \Tt,$ $$ \eta_{i}^T(z) \bar{\eta}_k (z)= \left\{ \begin{array}{ll} 0,\quad \text{for} \; i\neq k\\ 1,\quad \text{for} \;i=k\end{array}.\right.$$ Hence, for $i=0,\dots,j-1,$ $$\langle \bar{\eta}_{j}(z), \bar{\eta}_{i}(z) \rangle_{\Cc^m}= \displaystyle \frac{z \eta_i^T(z) w_j(z) }{\bar{h}_{j}(z)} - \displaystyle \frac{z \eta_i^T (z) w_j(z) }{\bar{h}_{j}(z)}=0$$ almost everywhere on $\Tt.$ Thus by induction on $j$, for all integers $j=0,\dots,r-1,$ $\{ \bar{ \eta}_0 (z), \dots, \bar{ \eta}_{j}(z) \}$ is an orthogonal set in $\Cc^m$ almost everywhere on $\Tt.$ \noindent To complete the proof, we have to prove that $\left\|\bar{\eta}_{j}(z)\right\|_{\Cc^m}=1$ for almost all $z \in \Tt.$ Recall that $h_j$ is the scalar outer factor of $\xi_0 \telwe \xi_1 \telwe \cdots \telwe \xi_{j-1} \telwe v_j.$ By Proposition \ref{xjwevjetajwewj}, $$|h_j(z)|= \| x_j(z)\|_{\Cc^n} = \|y_j(z)\|_{\Cc^m} $$ almost everywhere on $\Tt,$ thus $$ \left\|\bar{\eta}_{j}(z)\right\|_{\Cc^m}=\displaystyle \|\frac{zy_j(z)}{\bar{h}_j(z)}\|_{\Cc^m}=1 $$ almost everywhere on $\Tt,$ and hence, $\{ \bar{ \eta}_0 (z), \dots, \bar{ \eta}_{j}(z) \}$ is an orthonormal set in $\Cc^m$ almost everywhere on $\Tt.$ Note that, for $j= 1, \dots, r-1 $, \begin{align}\label{eta-wi-yi} \bar{\eta}_0 \telwe \cdots \telwe \bita_{j-1} \telwe y_{j} &=\bar{\eta}_0 \telwe \cdots \telwe \bita_{j-1}\telwe (I_{m}-\bar{\eta}_0\eta_0^T-\dots- \bita_{j-1}\eta_{j-1}^T)w_{j}\nonumber\vspace{2ex}\\ &=\bar{\eta}_0 \telwe \cdots \telwe \bita_{j-1} \telwe w_{j} - \sum\limits_{k=0}^{j-1} \bar{\eta}_0 \telwe \cdots \telwe \bita_{j-1}\telwe \bita_k \eta_k^Tw_{j}\nonumber\vspace{2ex}\\&= \bar{\eta}_0 \telwe \cdots \telwe \bita_{j-1} \telwe w_{j} \end{align} on account of the pointwise linear dependence of $\bita_{k}$ and $z\mapsto \bita_k(z) 
\langle w_{j}(z), \bita_k(z)\rangle_{\Cc^m}$ almost everywhere on $\Tt$. \end{proof} \section{The closed subspace $X_{j+1}$ of $H^2(\Dd,\we^{j+2}\Cc^n)$} \label{Xjsubset} \noindent Notice that, although $x_0 \in H^2(\Dd, \Cc^n)$ and $\xi_0$ is inner, $x_i$ and $\xi_i$ might not be in $H^2(\Dd, \Cc^n)$ in general for $i=1,\cdots, \min(m,n)-1.$ However, for every $x \in H^2(\Dd,\Cc^n),$ the pointwise wedge product $$\xi_0\telwe\cdots\telwe\xi_j\telwe x $$ \emph{is} an element of $H^2(\Dd,\we^{j+2}\Cc^n)$ as the following proposition asserts. \begin{proposition}\label{xjsubseth2} Let $G \in H^\infty(\Dd, \CCmn)+C(\Tt,\CCmn),$ and let $j\leq n-1. $ Let the vector-valued functions $\xi_0, \xi_1, \cdots, \xi_j$ be constructed after applying steps $0,\dots,j$ of the algorithm above and be given by equations \eqref{xij+1etaj+1}. Then $$ \xi_0 \telwe \dots \telwe \xi_j \telwe H^2(\Dd,\Cc^n)$$ is a subset of $H^2(\Dd,\we^{j+2}\Cc^n).$ \end{proposition} \begin{proof} For $j=0,$ since $G\in H^\infty(\Dd,\CCmn)+C(\Tt,\CCmn),$ the Hankel operator $H_G$ is compact. There exist $x_0 \in H^2(\Dd,\Cc^n), y_0 \in H^2(\Dd,\Cc^m)^\perp$ such that $(x_0,y_0)$ is a Schmidt pair for the Hankel operator $H_G$ corresponding to the singular value $\|H_G\|.$ By Lemma $\ref{2.2},$ $x_0, y_0$ admit the inner-outer factorizations $$x_0 = \xi_0 h_0 ,\quad \bar{z} \bar{y}_0 = \eta_0 h_0$$ for some inner $\xi_0 \in H^\infty(\Dd,\Cc^n), \eta_0 \in H^\infty(\Dd,\Cc^m)$ and some scalar outer $h_0\in H^2(\Dd,\Cc).$ \noindent Then, by Proposition \ref{xweyh2}, $\xi_0 \telwe H^2(\Dd,\Cc^n) \subset H^2(\Dd,\we^2\Cc^n).$ Let us now consider the case where $j=1.$ By definition, $$X_1 = \xi_0 \telwe H^2(\Dd,\Cc^n), \quad Y_1 = \bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp$$ and, by the inductive hypothesis, $T_1 \colon X_1 \to Y_1$ given by equation (\ref{T_0}) is a compact operator. 
Suppose $\|T_1\|\neq 0$ and let $(\xi_0 \telwe v_1 , \bar{\eta}_0 \telwe w_1)$ be a Schmidt pair corresponding to $\|T_1\|,$ where $v_1 \in H^2(\Dd,\Cc^n)$ and $w_1 \in H^2(\Dd,\Cc^m)^\perp.$ We define $$x_1 = (I_{n}-\xi_0\xi_0^*)v_1.$$ Note that, by Proposition \ref{xweyh2}, $\xi_0 \telwe v_1 \in H^2(\Dd,\we^2\Cc^n).$ Let $h_1 \in H^2(\Dd,\Cc)$ be the scalar outer factor of $\xi_0 \telwe v_1 \in H^2(\Dd,\we^2\Cc^n).$ Then we define $$\xi_1 = \frac{x_1}{h_1}.$$ Note that $\xi_0$ and $z \mapsto \xi_0(z)\langle v_1(z), \xi_0(z)\rangle_{\Cc^n}$ are pointwise linearly dependent on $\Dd,$ since $\xi_0^*v_1$ is a mapping from $\Dd$ to $\Cc.$ Thus, for all $x \in H^2(\Dd,\Cc^n)$ and $z \in \Dd,$ we have $$ (\xi_0 \telwe \xi_1 \telwe x)(z) = \xi_0(z) \we \xi_1(z)\we x(z) = \xi_0(z) \we \frac{x_1(z)}{h_1(z)}\we x(z),$$ and by substituting the value of $x_1$, we find $$\begin{array}{cllll} &\displaystyle \xi_0(z) \we \frac{x_1(z)}{h_1(z)}\we x(z)\vspace{2ex}\\&=\displaystyle \frac{1}{h_1(z)} \xi_0(z) \we (v_1(z)- \xi_0(z) \xi_0(z)^* v_1(z))\we x(z)\vspace{2ex}\\ &= \displaystyle\frac{1}{h_1(z)} \xi_0(z) \we v_1(z)\we x(z) - \displaystyle\frac{1}{h_1(z)} \xi_0(z) \we \xi_0(z) \xi_0(z)^* v_1(z) \we x(z) \vspace{2ex} \\ &= \left(\displaystyle\frac{1}{h_1} \xi_0 \telwe v_1 \telwe x\right)(z). \end{array}$$ Note that $v_1\in H^2(\Dd,\Cc^n),$ $\xi_0\in H^\infty(\Dd,\Cc^n)$ and $h_1 \in H^2(\Dd,\Cc)$ is the scalar outer factor of $\xi_0 \telwe v_1$. By Proposition \ref{wejanalytic}, for every $x \in H^2(\Dd,\Cc^n),$ $$\frac{1}{h_1} \xi_0 \telwe v_1 \telwe x $$ is analytic on $\Dd.$ By Proposition \ref{wel2conv}, since $\xi_0$ and $\xi_1$ are pointwise orthogonal almost everywhere on $\Tt,$ $$\| \xi_0 \telwe \xi_1 \telwe x\|_{L^2(\Tt,\we^3\Cc^n)} < \infty.
$$ Hence, $$\xi_0 \telwe \xi_1 \telwe x \in \frac{1}{h_1} \xi_0 \telwe v_1 \telwe H^2(\Dd,\Cc^n) \subset H^2(\Dd,\we^3\Cc^n) .$$ \textbf{Recursive step:} suppose we have constructed vector-valued functions $\xi_0, \dots , \xi_{j-1},$ $\eta_0 , \dots , \eta_{j-1},$ spaces $X_j,Y_j$ and a compact operator $T_j \colon X_j \to Y_j$ after applying steps $0,\dots,j-1$ of the algorithm from Subsection \ref{Alg_statement} satisfying \begin{equation}\label{3.2.3sub} \xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe H^2(\Dd,\Cc^n) \subset H^2(\Dd,\we^{j+1}\Cc^n).\end{equation} Since $T_j$ is a compact operator, there exist vector-valued functions $v_{j}\in H^2(\Dd,\Cc^n), w_{j} \in H^2(\Dd,\Cc^m)^\perp$ such that $$(\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe v_{j}, \overline{\eta}_0 \telwe \dots \telwe \overline{\eta}_{j-1}\telwe w_{j}) $$ is a Schmidt pair for $T_j$ corresponding to $\|T_{j}\|.$ Define \begin{equation} \label{xj321} x_{j} = (I_{n} - \xi_0 \xi_0^* - \dots - \xi_{j-1}\xi_{j-1}^*)v_{j}. \end{equation} By assumption, $\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe v_{j} $ lies in $H^2(\Dd,\we^{j+1} \Cc^n).$ Let $h_j \in H^2(\Dd,\Cc)$ be the scalar outer factor of $\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe v_{j}.$ Define $\xi_{j} = \frac{x_j}{h_j}.$ Note that $\xi_i$ and $z \mapsto \xi_i(z)\langle v_j(z), \xi_i(z)\rangle_{\Cc^n}$ are pointwise linearly dependent on $\Dd$ for $i=0,\dots,j-1.$ Thus, for all $x\in H^2(\Dd,\Cc^n)$ and all $z \in \Dd,$ \begin{align} (\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe \xi_j \telwe x)(z) &= (\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe \frac{x_j}{h_j}\telwe x)(z)\nonumber\vspace{2ex}\\ &= \xi_0(z) \we \cdots \we \xi_{j-1}(z) \we \frac{1}{h_j(z)} \bigg( v_{j}(z)- \xi_0(z)\xi_0^*(z)v_j(z) -\cdots \bigg.\nonumber\\ &\hspace{30ex}- \bigg. \xi_{j-1}(z)\xi_{j-1}^*(z) v_{j}(z) \bigg) \we x(z)\nonumber\vspace{2ex}\\ &= \xi_0(z) \we \cdots \we \xi_{j-1}(z) \we \frac{1}{h_j(z)} v_{j}(z) \we x(z) \nonumber\\ &- \sum\limits_{i=0}^{j-1}\xi_0(z)\we \dots\we \xi_{j-1}(z) \we \xi_i(z)\xi_i^*(z)v_j(z)\we x(z)\nonumber \vspace{2ex}\\ &= \left(\frac{1}{h_j}\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe v_{j} \telwe x\right)(z)\label{1hjvj}.\end{align} Recall that, for $i=0,\dots,j-1,$ by the algorithm from Subsection \ref{Alg_statement}, $$x_i = (I_{n} -\xi_0\xi_0^* - \dots-\xi_{i-1}\xi_{i-1}^*)v_i $$ and $$ \xi_i = \frac{x_i}{h_i}.$$ By equation \eqref{1hjvj}, for all $z \in \Dd,$ $$(\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe \xi_j \telwe x)(z) =\left(\frac{1}{h_j}\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe v_{j} \telwe x\right)(z) . $$ Substituting $\frac{x_i}{h_i}$ for $\xi_i$ in the latter equation, where $x_i$ are given by equation \eqref{xj321} for $i=1,\dots,j-1,$ we obtain $$(\xi_0 \telwe \cdots \telwe \xi_{j-1} \telwe \xi_j \telwe x)(z)=\left(\frac{1}{h_1} \frac{1}{h_2} \cdots \frac{1}{h_j}\xi_0 \telwe v_1 \telwe \cdots \telwe v_{j-1} \telwe v_{j} \telwe x\right)(z),\; z\in\Dd $$ on account of the pointwise linear dependence of $\xi_k$ and $z \mapsto \langle v_i(z),\xi_k(z) \rangle_{\Cc^n} \xi_k(z) $ on $\Dd,$ for $0\le k<i\le j.$ By Proposition \ref{wejanalytic}, for every $x \in H^2(\Dd,\Cc^n),$ $$\frac{1}{h_1} \frac{1}{h_2} \cdots \frac{1}{h_j} \xi_0 \telwe v_1 \telwe\cdots \telwe v_j \telwe x$$ is analytic on $\Dd.$ By Proposition \ref{wel2conv}, since $\xi_0,\xi_1,\dots,\xi_j$ are pointwise orthogonal almost everywhere on $\Tt,$ $$\| \xi_0 \telwe \xi_1 \telwe \cdots \telwe \xi_j \telwe x\|_{L^2(\Tt,\we^{j+2}\Cc^n)} < \infty. $$ Thus, for every $x \in H^2(\Dd,\Cc^n),$ $$\xi_0 \telwe \xi_1 \telwe\cdots \telwe \xi_j \telwe x \in H^2(\Dd,\we^{j+2}\Cc^n) $$ and the claim has been proved.
\end{proof} \begin{proposition}\label{xjclosed} In the notation of Proposition \ref{xjsubseth2}, $$ \xi_0 \telwe \dots \telwe \xi_j \telwe H^2(\Dd,\Cc^n)$$ is a closed subspace of $H^2(\Dd,\we^{j+2}\Cc^n).$ \end{proposition} \begin{proof} Let us first show that $\xi_0 \telwe H^2(\Dd,\Cc^n)$ is a closed subspace of $H^2(\Dd,\we^2\Cc^n).$ Observe that, by Proposition \ref{xweyh2}, $\xi_0 \telwe H^2(\Dd,\Cc^n)\subset H^2(\Dd,\we^2\Cc^n).$ Let $$\Xi_0 =\{ f \in H^2(\Dd,\Cc^n): \langle f(z), \xi_0 (z) \rangle_{\Cc^n}=0\quad \text{almost everywhere on} \; \Tt \}.$$ Consider a vector-valued function $w\in H^2(\Dd,\Cc^n).$ For all $z\in \Dd,$ we may write $w$ as $$w(z)= w(z) -\langle w(z),\xi_0(z)\rangle_{\Cc^n}\xi_0(z)+\langle w(z),\xi_0(z)\rangle_{\Cc^n}\xi_0(z).$$ Then, for all $w\in H^2(\Dd,\Cc^n)$ and for all $z\in \Dd,$ $$\begin{array}{clllll}(\xi_0 \telwe w)(z)& = \xi_0(z)\we \big(w(z) -\langle w(z),\xi_0(z)\rangle_{\Cc^n}\xi_0(z)+\langle w(z),\xi_0(z)\rangle_{\Cc^n}\xi_0(z)\big)\vspace{2ex}\\&=\xi_0(z)\we \big(w(z) -\langle w(z),\xi_0(z)\rangle_{\Cc^n}\xi_0(z) \big) \end{array}$$ on account of the pointwise linear dependence of $\xi_0$ and $z \mapsto \langle w(z),\xi_0(z) \rangle_{\Cc^n} \xi_0(z) $ on $\Dd.$ Note that $$w(z)- \langle w(z),\xi_0(z)\rangle_{\Cc^n}\xi_0(z)\in \Xi_0,$$ thus $$\xi_0 \telwe H^2(\Dd,\Cc^n) \subset \xi_0 \telwe \Xi_0.
$$ By Corollary \ref{vclosed}, $\Xi_0$ is a closed subspace of $H^2(\Dd,\Cc^n).$ Moreover, since $\Xi_0 \subset H^2(\Dd,\Cc^n),$ we also have $$ \xi_0 \telwe H^2(\Dd,\Cc^n) \supset \xi_0 \telwe \Xi_0,$$and so, $$\xi_0 \telwe H^2(\Dd,\Cc^n) = \xi_0 \telwe \Xi_0 .$$ Consider the mapping $$C_{\xi_0}\colon \Xi_0 \to \xi_0 \telwe \Xi_0 $$ given by $$C_{\xi_0} w = \xi_0 \telwe w $$for all $w\in \Xi_0.$ Notice that, by Proposition \ref{onxi}, $\|\xi_0(e^{i\theta})\|_{\Cc^n}^2=1$ for almost every $\eiu \in \Tt.$ Therefore, for any $w \in \Xi_0,$ we have $$\begin{array}{cllllll} \|\xi_0 \telwe w\|_{L^2(\Tt,\we^{2}\Cc^n)}^2 &=\displaystyle \frac{1}{2\pi} \int\limits_0^{2\pi} \langle \xi_0 \telwe w, \xi_0 \telwe w \rangle (e^{i\theta})d\theta \vspace{3ex} \\ &= \displaystyle \frac{1}{2\pi} \int\limits_0^{2\pi}\left( \|\xi_0(e^{i\theta})\|_{\Cc^n}^2 \|w(e^{i\theta})\|_{\Cc^n}^2 - |\langle w(e^{i\theta}), \xi_0(e^{i\theta})\rangle|^2\right)\; d\theta \vspace{3ex}\\ &= \|w\|_{L^2(\Tt, \Cc^n)}^2, \end{array}$$ since $w$ is pointwise orthogonal to $\xi_0$ almost everywhere on $\Tt.$ Thus the mapping $$C_{\xi_0}\colon \Xi_0 \to \xi_0 \telwe \Xi_0$$ is an isometry. Furthermore, $C_{\xi_0}\colon \Xi_0 \to \xi_0 \telwe \Xi_0$ is a surjective mapping, thus $\Xi_0$ and $\xi_0 \telwe \Xi_0$ are isometrically isomorphic. Therefore, since $\Xi_0$ is a closed subspace of $H^2 (\Dd, \Cc^n),$ the space $\xi_0 \telwe \Xi_0$ is complete, hence is a closed subspace of $H^2(\Dd,\we^2\Cc^n)$.
Hence $\xi_0 \telwe H^2(\Dd,\Cc^n)$ is a closed subspace of $H^2(\Dd,\we^2\Cc^n).$ To prove that $\xi_0 \telwe \dots \telwe \xi_j \telwe H^2(\Dd,\Cc^n) $ is a closed subspace of $H^2(\Dd,\we^{j+2}\Cc^n)$, let us consider $$\Xi_j =\{f \in H^2(\Dd,\Cc^n):\langle f(z), \xi_i (z)\rangle_{\Cc^n} =0 \; \text{almost everywhere on}\; \Tt, \; \text{for}\; i=0,\cdots,j \} $$ which is the pointwise orthogonal complement of $\xi_0, \dots,\xi_j$ in $H^2(\Dd,\Cc^n).$ Let $\psi \in H^2(\Dd,\Cc^n).$ We may write $\psi$ as $$\psi(z) = \psi(z) - \sum\limits_{i=0}^j \langle \psi(z), \xi_i(z)\rangle_{\Cc^n}\xi_i(z) +\sum\limits_{i=0}^j \langle \psi(z), \xi_i(z)\rangle_{\Cc^n}\xi_i(z). $$ Then, for all $\psi \in H^2(\Dd,\Cc^n)$ and for almost all $z \in \Tt,$ $$(\xi_0 \telwe \cdots \telwe \xi_j \telwe \psi)(z) = \xi_0(z)\we\cdots\we\xi_j(z)\we \left( \psi(z)- \sum\limits_{i=0}^j \langle \psi(z), \xi_i(z)\rangle_{\Cc^n}\xi_i(z) \right)$$ due to the pointwise linear dependence of $\xi_k$ and $z \mapsto \xi_k(z) \langle \psi(z), \xi_k(z)\rangle_{\Cc^n} $ almost everywhere on $\Tt.$ \noindent Notice that $\left( \psi(z)- \sum\limits_{i=0}^j \langle \psi(z), \xi_i(z)\rangle_{\Cc^n}\xi_i(z) \right)$ is in $\Xi_j,$ thus $$\xi_0 \telwe \cdots \telwe \xi_j \telwe H^2(\Dd,\Cc^n) \subset \xi_0 \telwe \cdots \telwe \xi_j \telwe \Xi_j. $$The reverse inclusion holds by the definition of $\Xi_j,$ hence $$\xi_0 \telwe \cdots \telwe \xi_j \telwe H^2(\Dd,\Cc^n) = \xi_0 \telwe \cdots \telwe \xi_j \telwe \Xi_j.$$ Consequently, in order to prove the proposition it suffices to show that $\xi_0 \telwe \cdots \telwe \xi_j \telwe \Xi_j $ is a closed subspace of $H^2(\Dd, \we^{j+2} \Cc^n).$ By Corollary \ref{vclosed}, $\Xi_j$ is a closed subspace of $H^2(\Dd, \Cc^n),$ being an intersection of closed subspaces.
For any $f \in \Xi_j,$ $$ \begin{array}{clllll} &\|\xi_0 \telwe \xi_1 \telwe\cdots\telwe \xi_j \telwe f\|_{L^2 ( \Tt, \we^{j+2} \Cc^n)}^2 \vspace{2ex} \\ &=\displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \|\xi_0 (e^{i\theta})\|_{\Cc^n}^2 & \cdots & \cdots & \langle \xi_0 (e^{i\theta}) , f(e^{i\theta})\rangle_{\Cc^n}\\ \langle \xi_1 (e^{i\theta}) , \xi_0 (e^{i\theta}) \rangle_{\Cc^n} & \|\xi_1 (e^{i\theta})\|_{\Cc^n}^2 & \cdots & \langle \xi_1 (e^{i\theta}) , f(e^{i\theta})\rangle_{\Cc^n}\\ \vdots & \vdots & \ddots & \vdots\\ \langle f(e^{i\theta}) , \xi_0 (e^{i\theta})\rangle_{\Cc^n} & \cdots & \cdots &\|f(e^{i\theta})\|_{\Cc^n}^2 \end{pmatrix} d\theta.\end{array}$$ Note that $f$ and $\xi_i$ are pointwise orthogonal almost everywhere on $\Tt,$ and, by Proposition \ref{onxi}, $\{\xi_0(z), \dots, \xi_j(z)\}$ is an orthonormal set for almost every $z \in \Tt.$ Hence $$\begin{array}{clll} &\|\xi_0 \telwe \xi_1 \telwe\cdots\telwe \xi_j \telwe f\|_{L^2 ( \Tt, \we^{j+2} \Cc^n)}^2 \vspace{2ex}\\ &= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} 1& 0& \cdots & 0\\ 0& 1& \cdots & 0\\ \vdots & \vdots& \ddots & \vdots \\ 0 & 0 & \cdots & \|f(e^{i\theta})\|_{\Cc^n}^2 \end{pmatrix}d\theta \vspace{2ex}\\ &= \|f\|_{L^2 ( \Tt, \Cc^n)}^2.\end{array}$$ Thus $$\xi_0 \telwe \xi_1 \telwe\cdots\telwe \xi_j \telwe \cdot \colon \Xi_j \to \xi_0 \telwe \xi_1 \telwe\cdots\telwe \xi_j \telwe \Xi_j $$ is an isometry. Furthermore $$(\xi_0 \telwe \xi_1 \telwe\cdots\telwe \xi_j \telwe \cdot)\colon \Xi_j \to \xi_0 \telwe \xi_1 \telwe\cdots\telwe \xi_j \telwe \Xi_j $$ is a surjective mapping, thus $\Xi_j$ and $\xi_0 \telwe \cdots \telwe \xi_j \telwe \Xi_j$ are isometrically isomorphic. 
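The Gram-determinant formula for the squared norm of a wedge product, used in the computation above, can be checked numerically at a single point. The following is an illustrative sketch (NumPy assumed; random test vectors stand in for the pointwise values $\xi_0(e^{i\theta}),\dots,\xi_j(e^{i\theta})$ and $f(e^{i\theta})$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, j = 6, 2  # ambient dimension and index j (illustrative test values)

# Build an orthonormal set {xi_0, ..., xi_j} plus an orthogonal f via QR.
A = rng.standard_normal((n, j + 2)) + 1j * rng.standard_normal((n, j + 2))
Q, _ = np.linalg.qr(A)
xis, f = Q[:, : j + 1], 3.0 * Q[:, j + 1]  # f is orthogonal to every xi_i

# Gram matrix of (xi_0, ..., xi_j, f): entries are pairwise inner products.
V = np.column_stack([xis, f])
gram = V.conj().T @ V

# For an orthonormal set and orthogonal f, det(Gram) equals ||f||^2.
print(np.linalg.det(gram).real, np.linalg.norm(f) ** 2)
```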
Therefore, since $\Xi_j$ is a closed subspace of $H^2(\Dd,\Cc^n),$ the space $\xi_0 \telwe \cdots \telwe \xi_j \telwe \Xi_j $ is a closed subspace of $H^2(\Dd,\we^{j+2}\Cc^n).$ Hence $$\xi_0 \telwe \cdots \telwe \xi_j \telwe H^2(\Dd,\Cc^n) $$ is a closed subspace of $H^2(\Dd,\we^{j+2}\Cc^n).$ \end{proof} \section{The closed subspace $Y_{j+1}$ of $H^2(\Dd,\we^{j+2}\Cc^m)^\perp$} \label{Y_j-closed} \begin{proposition}\label{clwe} Given $\bar{\eta}_0 = \frac{zy_0}{\overline{h_0}}$ as constructed in the algorithm in Subsection \ref{Alg_statement}, the space $\bar{\eta}_0 \telwe H^2(\Dd, \Cc^m)^\perp$ is a closed subspace of $H^2(\Dd, \we^2 \Cc^m)^\perp.$ \end{proposition} \begin{proof} As in Proposition \ref{xjsubseth2}, one can show that $$\eta_0\telwe zH^2(\Dd,\Cc^m) \subset zH^2(\Dd,\we^2\Cc^m) $$and therefore $$\bar{ \eta}_0 \telwe \bar{z} \overline{ H^2(\Dd,\Cc^m)} \subset \bar{z} \overline{ H^2(\Dd,\we^2\Cc^m)}. $$Hence $$\bar{ \eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp \subset H^2(\Dd,\we^2\Cc^m)^\perp.
$$ By virtue of the fact that complex conjugation is a unitary operator on $L^2(\Tt,\Cc^m),$ an equivalent statement to Proposition \ref{clwe} is that $\eta_0 \telwe zH^2(\Dd,\Cc^m)$ is a closed subspace of $zH^2(\Dd,\we^2 \Cc^m).$ Let $$ V= \{ f \in zH^2 (\Dd,\Cc^m) \; : \; \langle f(z) , \eta_0 (z) \rangle_{\Cc^m} =0\quad \text{for almost all}\; z \in \Tt \} $$ be the pointwise orthogonal complement of $\eta_0$ in $zH^2(\Dd,\Cc^m).$ \noindent Consider $g \in zH^2 (\Dd,\Cc^m).$ We may write $g$ as $$g(z) = g(z) - \langle g(z) , \eta_0 (z) \rangle_{\Cc^m}\cdot \eta_0 (z) + \langle g(z) , \eta_0 (z) \rangle_{\Cc^m} \cdot\eta_0 (z)$$ for every $z \in \Dd.$ Then, for all $g \in zH^2(\Dd, \Cc^m)$ and for all $z \in \Dd,$ $$(\eta_0 \telwe g) (z) = \eta_0 (z) \we [g(z) - \langle g(z) , \eta_0 (z) \rangle_{\Cc^m} \eta_0(z)]$$ on account of the pointwise linear dependence of $\eta_0$ and $ z \mapsto \langle g(z) , \eta_0(z) \rangle_{\Cc^m} \eta_0(z)$ on $\Dd.$ \noindent Note that $g(z)-\langle g(z) , \eta_0 (z) \rangle_{\Cc^m}\eta_0 (z) \in V,$ thus $$\eta_0 \telwe zH^2(\Dd, \Cc^m) \subset \eta_0 \telwe V.$$ The reverse inclusion is obvious, hence $$\eta_0 \telwe zH^2(\Dd, \Cc^m) = \eta_0 \telwe V.$$ \noindent To prove the proposition, it suffices to show that $\eta_0 \telwe V$ is a closed subspace of $zH^2 ( \Dd, \we^2 \Cc^m).$ \noindent Consider the mapping $$C_{\eta_0}\colon V \to \eta_0 \telwe V$$ defined by $$ C_{\eta_0} \nu = \eta_0 \telwe \nu$$ for all $\nu \in V.$ Notice that, by Proposition \ref{onxi}, $\|\eta_0(e^{i\theta})\|_{\Cc^m}^2=1$ for almost every $\eiu \in \Tt.$ Then, for any $\upsilon \in V,$ we have $$\begin{array}{cllllll} \|\eta_0 \telwe \upsilon\|_{L^2(\Tt,\we^{2}\Cc^m)}^2 &=\displaystyle \frac{1}{2\pi} \int\limits_0^{2\pi} \langle \eta_0 \telwe \upsilon, \eta_0 \telwe \upsilon \rangle (e^{i\theta})d\theta \vspace{3ex} \\ &= \displaystyle \frac{1}{2\pi} \int\limits_0^{2\pi}\left( \|\eta_0(e^{i\theta})\|_{\Cc^m}^2 \|\upsilon(e^{i\theta})\|_{\Cc^m}^2 
- |\langle \upsilon(e^{i\theta}), \eta_0(e^{i\theta})\rangle|^2\right)\; d\theta \vspace{3ex}\\ &= \|\upsilon\|_{L^2(\Tt, \Cc^m)}^2, \end{array}$$ since $\upsilon$ is pointwise orthogonal to $\eta_0$ almost everywhere on $\Tt.$ Thus the mapping $C_{\eta_0}\colon V \to \eta_0 \telwe V$ is an isometry. \noindent Note that by Corollary \ref{vclosed}, $V$ is a closed subspace of $zH^2 (\Dd, \Cc^m).$ Furthermore, $$C_{\eta_0}\colon V \to \eta_0 \telwe V$$ is a surjective mapping, thus $V$ and $\eta_0 \telwe V$ are isometrically isomorphic. Therefore, since $V$ is a closed subspace of $zH^2(\Dd,\Cc^m),$ the space $\eta_0 \telwe V$ is complete and therefore a closed subspace of $zH^2(\Dd,\we^2\Cc^m)$. Hence $\bar{\eta}_0 \telwe H^2(\Dd,\Cc^m)^\perp$ is complete and therefore a closed subspace of $H^2(\Dd,\we^2\Cc^m)^\perp.$ \end{proof} \begin{corollary}\label{projwell} The orthogonal projection $P_{Y_1}$ from $L^2(\Tt, \we^2\Cc^m)$ onto $\bar{\eta}_0 \telwe H^2(\Dd, \Cc^m)^\perp $ is well defined. \end{corollary} \begin{proof} By Proposition \ref{H2subsetL2}, $H^2(\Dd,\we^2\Cc^m)$ can be identified with a closed subspace of $L^2(\Tt,\we^2\Cc^m),$ thus we have $$ H^2(\Dd,\we^2\Cc^m)^\perp= L^2(\Tt,\we^2\Cc^m)\ominus H^2(\Dd,\we^2\Cc^m).$$ Now the assertion follows immediately from Proposition \ref{clwe}. \end{proof} \begin{proposition}\label{clwegen} Let $0\leq j \leq m-2.$ Let the functions $\bar{\eta}_i$ be given by equations \eqref{xij+1etaj+1} in the algorithm from Subsection \ref{Alg_statement}, that is, $\bar{\eta}_i= \displaystyle \frac{zy_i}{\overline{h}_i}$ for $i =0, \cdots, j.$ Then, the space $$\bar{\eta}_0 \telwe \bar{\eta}_1\telwe\cdots\telwe \bar{\eta}_j \telwe H^2(\Dd, \Cc^m)^\perp$$ is a closed linear subspace of $H^2(\Dd,\we^{j+2}\Cc^m)^\perp.$ \end{proposition} \begin{proof} First let us show that, for every $x\in H^2(\Dd,\Cc^m),$ $$\eta_0 \telwe \eta_1 \telwe \cdots \telwe \eta_j \telwe z x \in zH^2(\Dd,\we^{j+2}\Cc^m). 
$$ Recall that $$ y_{j} = (I_{m} -\bar{\eta}_0\eta_0^T-\dots - \bar{\eta}_{j-1}\eta_{j-1}^T )w_{j}$$and \begin{equation}\label{etajwj1} \eta_0 \telwe \cdots \telwe \eta_{j-1} \telwe \bar{z}\bar{y}_{j} =\eta_0 \telwe \cdots \telwe \eta_{j-1} \telwe (\bar{z}\bar{w}_{j} - \sum\limits_{i=0}^{j-1}{\eta}_i\eta_i^* \bar{z} \bar{w}_{j} )= \eta_0 \telwe \cdots \telwe \eta_{j-1} \telwe \bar{z} \bar{w}_{j} \end{equation} because of the pointwise linear dependence of $\eta_i$ and $ z \mapsto \langle \bar{z} \bar{w}_{j}(z), \eta_i(z) \rangle_{\Cc^m} \eta_i(z)$ on $\Dd.$ \noindent By Proposition \ref{xjwevjetajwewj}, $$|h_i(z)| = \|y_i(z)\|_{\Cc^m} $$almost everywhere on $\Tt.$ \noindent Substituting $\eta_i = \frac{\bar{z}\bar{y}_i}{h_i} $ for all $i=1,\dots,j-1$ in equation \eqref{etajwj1}, we obtain $$\eta_0 \telwe \cdots \telwe \eta_{j-1} \telwe \bar{z}\bar{y}_{j}= \frac{1}{h_1} \cdots \frac{1}{h_{j-1}} \eta_0 \telwe \bar{z}\bar{w}_1 \telwe \cdots \telwe \bar{z}\bar{w}_j.$$ Observe that, by Proposition \ref{wejanalytic}, for every $x \in H^2(\Dd,\Cc^m),$ $$ \frac{1}{h_1} \cdots \frac{1}{h_j} \eta_0 \telwe \bar{z}\bar{w}_1 \telwe \cdots \telwe \bar{z}\bar{w}_j\telwe zx $$is analytic on $\Dd.$ By Proposition \ref{wel2conv}, for all $x \in H^2(\Dd,\Cc^m),$ since $\eta_0, \cdots,\eta_j$ are pointwise orthogonal almost everywhere on $\Tt,$ $$\|\eta_0 \telwe \eta_1 \telwe \cdots \telwe \eta_j \telwe zx \|_{L^2(\Tt,\we^{j+2}\Cc^m)} < \infty.
$$ Hence, for every $x\in H^2(\Dd,\Cc^m),$ $$\eta_0 \telwe \eta_1 \telwe \cdots \telwe \eta_j \telwe zx = z\frac{1}{h_1} \cdots \frac{1}{h_j} \eta_0 \telwe \bar{z}\bar{w}_1 \telwe \cdots \telwe \bar{z}\bar{w}_j \telwe x$$ is in $ zH^2(\Dd,\we^{j+2}\Cc^m) .$ \noindent Taking complex conjugates, we infer that $$Y_{j+1}\stackrel{\text{def}}{=} \bar{\eta}_0 \telwe \cdots \telwe \bar{\eta}_{j-1} \telwe \bar{\eta}_j\telwe H^2 (\Dd, \Cc^m)^\perp \subset H^2(\Dd,\we^{j+2}\Cc^m)^\perp.$$ Let us prove that $Y_{j+1}$ is a closed linear subspace of $H^2(\Dd,\we^{j+2}\Cc^m)^\perp.$ Since complex conjugation is a unitary operator on $L^2(\Tt,\Cc^m),$ an equivalent statement to the above is that $$\eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe zH^2(\Dd, \Cc^m) $$ is a closed linear subspace of $zH^2(\Dd, \we^{j+2} \Cc^m).$ \noindent Let $$V_j =\{ \varphi \in zH^2(\Dd, \Cc^m) \; : \; \langle \varphi(z) , \eta_i (z) \rangle_{\Cc^m} = 0 , \;\text{for}\; i=0, \cdots, j\}$$ be the pointwise orthogonal complement of $\eta_0 , \cdots , \eta_j$ in $zH^2(\Dd, \Cc^m).$ Consider $f \in zH^2(\Dd, \Cc^m).$ We may write $f$ as $$ f(z) = f(z) - \sum\limits_{i=0}^j \langle f(z) , \eta_i (z) \rangle\eta_i(z) + \sum\limits_{i=0}^j \langle f(z) , \eta_i (z) \rangle\eta_i(z).
$$ Then, for all $ f \in zH^2 (\Dd, \Cc^m)$ and for almost all $z \in \Tt,$ $$ (\eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe f ) (z) = \eta_0 (z) \we \eta_1 (z) \we \cdots \we \eta_j (z) \we \left( f(z) - \sum\limits_{i=0}^j \langle f(z) , \eta_i (z) \rangle\eta_i(z)\right).$$ Notice that $\left(f(z) - \sum\limits_{i=0}^j \langle f(z) , \eta_i (z) \rangle\eta_i(z)\right) \in V_j, $ thus $$ \eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe zH^2(\Dd, \Cc^m) \subset \eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe V_j .$$ The reverse inclusion holds by the definition of $V_j,$ hence $$\eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe zH^2(\Dd, \Cc^m) = \eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe V_j .$$ Consequently, in order to prove the proposition it suffices to show that $\eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe V_j $ is a closed subspace of $zH^2(\Dd, \we^{j+2} \Cc^m).$ By Corollary \ref{vclosed}, $V_j$ is a closed subspace of $zH^2(\Dd, \Cc^m),$ being an intersection of closed subspaces. 
For any $f \in V_j,$ we have $$ \begin{array}{clllll} &\|\eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe f\|_{L^2 ( \Tt, \we^{j+2} \Cc^m)}^2 \vspace{2ex} \\ &=\displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} \|\eta_0 (e^{i\theta})\|_{\Cc^m}^2 & \cdots & \cdots & \langle \eta_0 (e^{i\theta}) , f(e^{i\theta})\rangle_{\Cc^m}\\ \langle \eta_1 (e^{i\theta}) , \eta_0 (e^{i\theta}) \rangle_{\Cc^m} & \|\eta_1 (e^{i\theta})\|_{\Cc^m}^2 & \cdots & \langle \eta_1 (e^{i\theta}) , f(e^{i\theta})\rangle_{\Cc^m}\\ \vdots & \vdots & \ddots & \vdots\\ \langle f(e^{i\theta}) , \eta_0 (e^{i\theta})\rangle_{\Cc^m} & \cdots & \cdots &\|f(e^{i\theta})\|_{\Cc^m}^2 \end{pmatrix} d\theta.\end{array}$$ Note that $f$ and $\eta_i$ are pointwise orthogonal almost everywhere on $\Tt$ and, by Proposition \ref{onxi}, $\{\eta_0(z), \dots, \eta_j(z)\}$ is an orthonormal set for almost every $z \in \Tt.$ Hence $$\begin{array}{clll} &\|\eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe f\|_{L^2 ( \Tt, \we^{j+2} \Cc^m)}^2 \vspace{2ex}\\ &= \displaystyle\frac{1}{2\pi} \int_0^{2\pi} \det \begin{pmatrix} 1 & 0 & \cdots & 0\\ 0 & 1 & \cdots & 0\\ \vdots & \vdots& \ddots & \vdots\\ 0 & 0 & \cdots & \|f(e^{i\theta})\|_{\Cc^m}^2 \end{pmatrix}d\theta \vspace{2ex}\\ &= \|f\|_{L^2 ( \Tt, \Cc^m)}^2.\end{array}$$ Thus $$\eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe \cdot \colon V_j \to \eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe V_j $$ is an isometry. Furthermore $$(\eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe \cdot)\colon V_j \to \eta_0 \telwe \eta_1 \telwe\cdots\telwe \eta_j \telwe V_j $$ is a surjective mapping, thus $V_j$ and $\eta_0 \telwe \cdots \telwe \eta_j \telwe V_j$ are isometrically isomorphic.
Therefore, since $V_j$ is a closed subspace of $zH^2(\Dd,\Cc^m),$ the space $\eta_0 \telwe \cdots \telwe \eta_j \telwe V_j $ is a closed subspace of $zH^2(\Dd,\we^{j+2}\Cc^m).$ Hence $$\bar{ \eta}_0\telwe \cdots \telwe \bar{ \eta}_j \telwe H^2(\Dd,\Cc^m)^\perp $$ is a closed subspace of $H^2(\Dd,\we^{j+2}\Cc^m)^\perp.$ \end{proof} \begin{corollary}\label{projwellgen} Let $0\leq j \leq m -2.$ The orthogonal projection $$P_{Y_{j+1}}\colon L^2(\Tt, \we^{j+2}\Cc^m)\to Y_{j+1}$$ is well-defined. \end{corollary} \begin{proof} By Proposition \ref{H2subsetL2}, $H^2(\Dd,\we^{j+2}\Cc^m)$ can be identified with a closed subspace of $L^2(\Tt,\we^{j+2}\Cc^m),$ thus we have $$ H^2(\Dd,\we^{j+2}\Cc^m)^\perp= L^2(\Tt,\we^{j+2}\Cc^m)\ominus H^2(\Dd,\we^{j+2}\Cc^m).$$ Now the assertion follows immediately from Proposition \ref{clwegen}. \end{proof}
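Pointwise, the recursive step $x_j=(I_n-\xi_0\xi_0^*-\cdots-\xi_{j-1}\xi_{j-1}^*)v_j$ used throughout the algorithm is an orthogonal projection onto the complement of the previously constructed vectors, so the normalized results form an orthonormal family. A small numerical sketch of this step (random test vectors, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
xis = []  # orthonormal family built so far

for step in range(3):
    v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    # x = (I - sum_i xi_i xi_i^*) v: project v onto the orthogonal complement.
    x = v - sum(xi * np.vdot(xi, v) for xi in xis)
    xis.append(x / np.linalg.norm(x))  # normalize, extending the family

# The Gram matrix of the family should be the identity.
G = np.array([[np.vdot(a, b) for b in xis] for a in xis])
print(np.allclose(G, np.eye(3)))
```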
\section{Introduction and Physics motivation} The charmless hadronic decay $B_s^0 \rightarrow \eta^\prime \eta$ is suppressed in the Standard Model (SM) and proceeds only through transitions sensitive to Beyond-the-Standard-Model (BSM) physics~\cite{Bevan:2014iga}. BSM scenarios, such as a fourth generation of fermions, supersymmetry with broken R-parity, and a two-Higgs doublet model with flavor-changing neutral currents, could affect the branching fraction and {\it CP} asymmetry of this decay~\cite{belleiiphysicsbook}. The expected branching fraction for $\beep$ in the SM spans a range of $(2 - 4)\times10^{-5}$~\cite{bf1, bf2, bf3, bf4, bf5}. Once branching fractions for two-body decays $B_{d,s}^0 \to \eta\et$, $\eta^{\prime}\eta$, and $\eta^{\prime}\etp $ are measured, it would be possible to extract {\it CP}-violating parameters using a formalism based on SU(3)/U(3) symmetry~\cite{bf1}. To achieve this goal, at least four of these six branching fractions need to be measured. Only the branching fraction for $B_s^0 \to \eta^{\prime}\eta^{\prime}$ has been measured so far~\cite{bsepep}. In this Letter, we report the results of the first search for the decay $B_s^0 \rightarrow \eta^\prime \eta$ using the full Belle data sample of $121.4~\textrm{fb}^{-1}$ collected at the $\Upsilon(5S)$ resonance. The inclusion of the charge-conjugate decay mode is implied throughout. The Belle detector was a large-solid-angle magnetic spectrometer that operated at the KEKB asymmetric-energy $e^+e^-$ collider~\cite{KEKB}. The detector components relevant to our study include a tracking system comprising a silicon vertex detector (SVD) and a central drift chamber (CDC), a particle identification (PID) system that consists of a barrel-like arrangement of time-of-flight scintillation counters (TOF) and an array of aerogel threshold Cherenkov counters (ACC), and a CsI(Tl) crystal-based electromagnetic calorimeter (ECL).
All these components were located inside a superconducting solenoid coil that provided a 1.5~T magnetic field. A detailed description of the Belle detector can be found elsewhere~\cite{Belle}. The $\Upsilon(5S)$ resonance decays into $B_s^{*0} \mybar{B}_s^{*0}$, $B_s^{*0} \mybar{B}_s^0$, and $ B_s^0 \mybar{B}_s^0$ pairs, where the relative fractions of the two former decays are $f_{B_{s}^{*0} \mybar{B}_s^{*0}} =(87.0\pm1.7)\%$ and $f_{B_{s}^{*0} \mybar{B}_s^0}=(7.3\pm1.4)\%$~\cite{frac}, respectively. Signal $B_s^0$ mesons originate from the direct decays of $\Upsilon(5S)$ or from radiative decays of the excited vector state $B_s^{*0}$. The $\Upsilon(5S)$ production cross section is $340 \pm 16$~pb~\cite{frac}. To present our nominal result for $\mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta)$ we use the world average value for the fraction of $B_s^{(*)0}\mybar{B}_s^{(*)0}$ in $b\bar{b}$ events, $f_s = 0.201 \pm 0.031$~\cite{PDG}; the data sample is therefore estimated to contain $(16.60 \pm 2.68) \times 10^{6}$ $B_s^0$ mesons. We also report the results for $f_s \times \mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta)$. To maximize the discovery potential of the analysis and to validate the signal extraction procedure, we use a sample of background Monte Carlo (MC) simulated events equivalent to six times the data statistics. In addition, to estimate the overall reconstruction efficiency we use a high-statistics signal MC sample, where the other $B_s^{(\ast)0}$ meson decays according to known branching fractions~\cite{PDG}. Both samples are used to develop a model implemented in the unbinned extended maximum-likelihood (ML) fit to data. The MC-based model is validated with a control sample of $B^0 \rightarrow \eta^\prime K_S^0$ decays reconstructed from 711~${\rm fb^{-1}}$ of $\Upsilon(4S)$ data. We reconstruct $\eta$ candidates using pairs of electromagnetic showers not matched to the projections of charged tracks to the ECL and therefore identified as photons.
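The quoted number of $B_s^0$ mesons follows from the integrated luminosity, the production cross section, and $f_s$. A quick arithmetic check with the central values (taking the quoted 340~pb as the $e^+e^- \to b\bar{b}$ cross section at the $\Upsilon(5S)$ energy, consistent with the numbers above):

```python
lumi_pb = 121.4e3    # integrated luminosity: 121.4 fb^-1 expressed in pb^-1
sigma_pb = 340.0     # e+e- -> b bbar cross section at the Upsilon(5S) [pb]
f_s = 0.201          # fraction of Bs(*)0 Bsbar(*)0 pairs in b bbar events

n_bb = lumi_pb * sigma_pb   # ~41 million b bbar events
n_bs = 2.0 * f_s * n_bb     # two Bs0 mesons per Bs(*)0 pair
print(f"{n_bs / 1e6:.2f} million Bs0 mesons")  # close to the quoted 16.60 million
```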
We require that the reconstructed energies of these showers exceed 50 (100) MeV in the barrel (endcap) region of the ECL. The larger energy threshold for the endcaps is due to the larger beam-related background in these regions. To reject hadronic showers mimicking photons, the ratio of the energies deposited by a photon candidate in the $(3\times3)$ and $(5\times 5)$ ECL crystal arrays centered on the crystal with the largest deposited energy is required to exceed 0.75. The reconstructed invariant mass of the $\eta$ candidates is required to be $515 \le M(\gamma\gamma) \le 580$~${\rm MeV}/{\it c}^2$, which corresponds, approximately, to a $\pm3\sigma$ Gaussian resolution window around the nominal $\eta$ mass~\cite{PDG}. To suppress combinatorial background arising due to low-energy photons, the magnitude of the cosine of the helicity angle ($\cos\theta_{\textrm{hel}}$) is required to be less than 0.97, where $\theta_{\textrm{hel}}$ is the angle in the $\eta$ candidate's rest frame between the directions of its Lorentz boost from the laboratory frame and one of the photons. The $\eta^{\prime}$ candidates are formed by combining pairs of oppositely charged pions with the $\eta$ candidates. We require the reconstructed $\eta^{\prime}$ invariant mass to be in the range $920\le M(\pi^+\pi^-\eta) \le 980$~${\rm MeV}/{\it c}^{2}$, which corresponds, approximately, to the range $[-10,+6]\sigma$ of the Gaussian resolution, after performing a kinematic fit constraining the reconstructed mass of the $\eta$ candidate to the nominal $\eta$ mass~\cite{PDG}. To identify charged pion candidates, the ratios of PID likelihoods, $R_{i/\pi}=\mathcal{L}_{i}/(\mathcal{L}_{\pi}+\mathcal{L}_{i})$, are used, where $\mathcal{L}_{\pi}$ is the likelihood for the track being a pion, while $\mathcal{L}_i$ is the corresponding likelihood for the kaon ($i=K$) or electron ($i=e$) hypotheses. We require $R_{K/\pi}\le0.6$ and $R_{e/\pi}\le0.95$ for pion candidates.
The likelihood for each particle species is obtained by combining information from the CDC, TOF and ACC~\cite{nakano_pid}, and (for electrons only) the ECL~\cite{eid}. According to MC studies, these requirements reject 28\% of the background, while the resulting efficiency loss is below 3\%. Charged pion tracks are required to originate from near the interaction point (IP) by restricting their distance of closest approach to the IP to be less than 4.0~cm along the $z$ axis and less than 0.3~cm in the plane perpendicular to it. The $z$ axis is opposite to the direction of the $e^+$ beam. These selection criteria suppress beam-related backgrounds and reject poorly reconstructed tracks. To reduce systematic uncertainties associated with the track reconstruction efficiency, the transverse momenta of charged pions are required to be greater than 100~${\rm MeV}/{\it c}$. To identify $B_s^0 \rightarrow \eta^\prime \eta$ candidates we use (in natural units) the beam-energy-constrained $B_s^0$ mass, $M_{\rm bc} =\sqrt{E_{\rm beam}^2-p_{B_s}^2}$, the energy difference, $\Delta E=E_{B_s}-E_{\rm beam}$, and the reconstructed invariant mass of the $\eta^\prime$, where $E_{\rm beam}$, $p_{B_s}$ and $E_{B_s}$ are the beam energy, and the momentum and energy of the $B_s^0$ candidate, respectively. All these quantities are calculated in the $e^+e^-$ center-of-mass frame. To improve the $\Delta E$ resolution, the $\eta^{\prime}$ candidates are further constrained to the nominal $\eta^{\prime}$ mass, though most of the improvement comes from the $\eta$ mass constraint. Signal candidates are required to satisfy the selection criteria $M_{\rm bc} >5.3$~${\rm GeV}/{\it c}^2$ and $-0.4 \le \Delta E \le0.3$~GeV. In a Gaussian approximation, the $\Delta E$ resolution is approximately 40~MeV. Similarly, the $M_{\rm bc} $ resolution is 4~${\rm MeV}/c^2$.
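The two kinematic discriminants can be written out directly. A minimal sketch in natural units (GeV), with all inputs in the $e^+e^-$ center-of-mass frame; the numerical values below are invented for illustration:

```python
import math

def mbc_delta_e(e_beam, p_bs, e_bs):
    """Beam-energy-constrained mass M_bc = sqrt(E_beam^2 - p^2) and
    energy difference dE = E_Bs - E_beam, in the e+e- CM frame."""
    m_bc = math.sqrt(e_beam * e_beam - p_bs * p_bs)
    return m_bc, e_bs - e_beam

# Illustrative values (E_beam ~ half of sqrt(s) at the Y(5S); numbers invented):
m_bc, de = mbc_delta_e(e_beam=5.433, p_bs=0.30, e_bs=5.38)
```

For a well-reconstructed candidate, $M_{\rm bc}$ lands in the few-MeV-wide peak region near the $B_s^{(*)0}$ masses while $\Delta E$ is close to zero, which is what makes the pair of variables discriminating.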
To take advantage of all available information in case the data indicate signal presence, we include $M(\pi^+\pi^-\eta)$ in the three-dimensional (3D) ML fit used to statistically separate the signal from background. We define the signal region: $5.35<M_{\rm bc} <5.43$~${\rm GeV}/{\it c}^2$, $-0.25<\Delta E<0.10$~GeV, and $0.94<M(\pi^+\pi^-\eta)<0.97$~${\rm GeV}/{\it c}^2$. The area outside the signal region is considered as the sideband. To optimize the sensitivity, we use a narrower signal region, $5.39<M_{\rm bc} <5.43$~${\rm GeV}/{\it c}^2$, which contains the largest signal contribution. Hadronic continuum events from $e^+e^-\to q\bar{q}$ ($q=u,d,c,s$) are the primary source of background. Because of the large initial momenta of the light quarks, continuum events exhibit a ``jetlike'' event shape, while $B_s^{(*)0}\mybar{B}_s^{(*)0}$ events are distributed isotropically. We utilize modified Fox-Wolfram moments~\cite{ksfw}, which describe the event topology, to discriminate between the signal and continuum background. A likelihood ratio ($\mathcal{LR}$) is calculated using Fisher discriminant coefficients obtained in an optimization based on these moments. We suppress the background using a discovery-optimized selection on $\mathcal{LR}$ obtained by maximizing Punzi's figure of merit~\cite{punzi}: \begin{equation} {\rm FOM} =\frac{\varepsilon(t)}{a/2+\sqrt{B(t)}}, \label{eq:FOM} \end{equation} \noindent where $t$ is the requirement on $\mathcal{LR}$, and $\varepsilon(t)$ and $B(t)$ are the signal reconstruction efficiency and the number of background events expected in the signal region for a given value of $t$, respectively. The quantity $a$ is the desired significance (which we vary between 3 and 5) in units of standard deviations. To predict $B(t)$, we multiply the number of events in the data sideband by the ratio of the numbers of events in the signal region and sideband in the background MC sample.
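Eq.~(\ref{eq:FOM}) can be scanned over candidate thresholds. In the sketch below, the point $t=0.95$ uses the efficiency (47\%) and expected background ($B=3.3$) quoted in the text, while the other two scan points are invented for illustration:

```python
import math

def punzi_fom(eff, n_bkg, a=3.0):
    """Punzi figure of merit: eff(t) / (a/2 + sqrt(B(t)))."""
    return eff / (a / 2.0 + math.sqrt(n_bkg))

# t -> (efficiency, expected background in the signal region)
scan = {0.90: (0.55, 12.0),   # invented for illustration
        0.95: (0.47, 3.3),    # values quoted in the text
        0.99: (0.25, 0.8)}    # invented for illustration
best_t = max(scan, key=lambda t: punzi_fom(*scan[t]))
```

With these inputs the scan picks $t=0.95$, matching the threshold adopted in the analysis.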
We require signal candidates to satisfy $\mathcal{LR} \ge 0.95$, which corresponds to $B(0.95)=3.3$ and 48 background events in the signal region and sideband, respectively. This 47\%-efficient requirement removes 99\% of the background. Using MC simulation, we estimate continuum background to comprise 97\% of the remaining events. The background events containing real $\eta^{\prime}$ mesons exhibit a peak in the $M(\pi^+\pi^-\eta)$ distribution; however, they are distributed smoothly in $M_{\rm bc} $ and $\Delta E$. The fraction of this peaking background is a free parameter in our ML fits. About 14\% of the reconstructed signal MC events contain multiple candidates, arising primarily from misreconstructed $\eta$ mesons. In such events we retain the candidate with the smallest value of $\sum{\chi^2_{\eta}}+\chi^2_{\pi^+\pi^-}$, where $\chi^2_{\eta}$ denotes the $\eta$ mass-constrained fit statistic, the summation is over the two $\eta$ candidates, and $\chi^2_{\pi^+\pi^-}$ quantifies the quality of the vertex fit for the two pion tracks. Simulation shows that this procedure selects the correct $B_s^0$ candidate in 62\% of such events. The overall reconstruction efficiency is 10\%. To extract the signal yield, we perform an unbinned extended ML fit to the 3D distribution of $M_{\rm bc} $, $\Delta E$, and $M(\pi^+\pi^-\eta)$. The likelihood function is \begin{equation} \mathcal{L}=\frac{e^{-\sum_{j=1}^{3} n_j}}{N!}\prod_{i=1}^{N}\left(\sum_{j=1}^{3} n_{j}\mathcal{P}_{j}[M_{\rm bc}^i, \Delta E^i, M^i(\pi^+\pi^-\eta) ]\right), \label{eq:PDF} \end{equation} \noindent where $i$ is the event index, $N$ is the total number of events, $j$ denotes the fit component (the three components are background, correctly reconstructed signal, and misreconstructed signal described later), and the parameters $n_j$ represent the signal and background yields.
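In practice one minimizes the negative logarithm of the likelihood in Eq.~(\ref{eq:PDF}). Dropping the constant $\ln N!$, a minimal sketch of that objective is:

```python
import math

def neg_log_likelihood(yields, pdf_values):
    """-ln L for the extended ML fit, up to the constant ln N!.
    yields: [n_1, n_2, n_3] for the background, signal, and SCF components;
    pdf_values[i][j]: P_j evaluated at the fit variables of event i."""
    nll = sum(yields)  # from the Poisson factor exp(-sum_j n_j)
    for per_event in pdf_values:
        nll -= math.log(sum(n * p for n, p in zip(yields, per_event)))
    return nll
```

For a single component with unit-normalized PDF values, minimizing this expression in the yield reproduces the observed event count, as expected for an extended fit.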
Due to negligible correlations among the fit variables for both background and correctly reconstructed signal events, the probability density function (PDF) for each fit component is assumed to factorize as $\mathcal{P}[M_{\rm bc}^i, \Delta E^i, M^i(\pi^+\pi^-\eta) ] = \mathcal{P}[M_{\rm bc} ^{i}] \cdot \mathcal{P}[\Delta E^{i}] \cdot \mathcal{P}[M^i(\pi^+\pi^-\eta)]$. The signal PDF is represented by a weighted sum of the three PDFs describing the possible $B_s^0 \rightarrow \eta^\prime \eta$ signal contributions from $B_s^{(*)0}\mybar{B}_s^{(*)0}$ pairs, where the weights are fixed according to previous measurements~\cite{frac}. To validate our fitting model and adjust the PDF shape parameters used to describe the signal, we use the control sample of $B^0 \rightarrow \eta^\prime K_S^0$ decays. We reconstruct $K_S^0$ candidates via secondary vertices associated with pairs of oppositely charged pions~\cite{ks_reco} using a neural network technique~\cite{NN}. The following information is used in the network: the momentum of the $K_S^0$ candidate in the laboratory frame; the distance along the $z$ axis between the two track helices at the point of their closest approach; the flight length in the $x-y$ plane; the angle between the $K_S^0$ momentum and the vector joining the $K_S^0$ decay vertex to the IP; the angle between the pion momentum and the laboratory-frame $K_S^0$ momentum in the $K_S^0$ rest frame; the distance-of-closest-approach in the $x-y$ plane between the IP and the two pion helices; and the pion hit information in the SVD and CDC. The selection efficiency is 87\% over the momentum range of interest. We also require that the reconstructed $\pi^+\pi^-$ invariant mass is within 12~${\rm MeV}/{\it c}^2$ (about $3.5\sigma$) of the nominal $K_S^0$ mass~\cite{PDG}. We require $5.20\le M_{\rm bc} \le 5.30$~${\rm GeV}/{\it c}^2$ for $B^0$ candidates.
The control-sample signal region is $5.27<M_{\rm bc} <5.29$~${\rm GeV}/{\it c}^2$, $-0.20<\Delta E<0.10$~GeV, and $0.94<M(\pi^+\pi^-\eta)<0.97$~${\rm GeV}/{\it c}^2$. All other selection criteria are the same as those used to select $B_s^0$ candidates. This control sample is used to validate the $\eta$ and $\eta^\prime$ reconstruction and its effect on the resolution functions and PDF shape parameters. The validation of $K_S^0$ reconstruction was performed previously in a similar $B_s^0$ analysis~\cite{Pal:2015ghq}. The presence of four photons in the final state gives rise to a sizable misreconstruction probability for the signal events. We study these self-crossfeed (SCF) events using the signal MC sample. A large correlation between $M_{\rm bc} $ and $\Delta E$ for such signal events is taken into account by describing the correctly reconstructed signal and SCF components separately with two different PDF sets. SCF events comprise approximately 14\% of the reconstructed signal and are excluded from the estimate of its efficiency. The Pearson correlation coefficient for the region with the largest correlation for SCF signal events is 27\%. A sum of a Gaussian and a Crystal Ball~\cite{xbal} function is used to model the correctly reconstructed signal in each of the three fit variables. For $M_{\rm bc} $ and $M(\pi^+\pi^-\eta)$ we use a sum of these two functions with the same mean but different widths, while for $\Delta E$ both the means and widths are different. A Bukin function~\cite{bukin} and an asymmetric Gaussian are used to model the SCF contribution in $M_{\rm bc} $ and $\Delta E$, respectively. For $M(\pi^+\pi^-\eta)$, we use a sum of a Gaussian and a first-order Chebyshev polynomial. In our nominal fit to data, the fraction of correctly reconstructed signal is fixed to its MC value. The signal PDF shape parameters for $M_{\rm bc} $ and $\Delta E$ are validated using the $B^0 \rightarrow \eta^\prime K_S^0$ control sample.
We use an ARGUS~\cite{argus} function to describe the background distribution in $M_{\rm bc} $ and a first-order Chebyshev polynomial for $\Delta E$. To model the peaking part in $M(\pi^+\pi^-\eta)$ we use the signal PDF, because the peak is due to real $\eta^{\prime}$ mesons, while an additional first-order Chebyshev polynomial is used for the non-peaking contribution. The projections of the fit to the $B^0 \rightarrow \eta^\prime K_S^0$ control sample are shown in Fig.~\ref{fit_data_y4s}. \begin{figure*}[htb!] \small \begin{center} \includegraphics[width=\textwidth]{./fig/fit3D_Y4S_data_new.pdf} \end{center} \caption{Signal-region projections of the fit results on $M_{\rm bc} $, $\Delta E$, and $M(\pi^+\pi^-\eta)$ for the $B^0 \rightarrow \eta^\prime K_S^0$ control sample. Points with error bars are data, blue solid curves are the results of the fit, black dashed curves are the background component, and cyan-filled regions show the signal component. } \label{fit_data_y4s} \end{figure*} \begin{figure*}[htb!] \small \begin{center} \includegraphics[width=\textwidth]{./fig/fit3D_Y5S_data_new.pdf} \end{center} \caption{Signal-region projections of the fit results on $M_{\rm bc} $, $\Delta E$, and $M(\pi^+\pi^-\eta)$ for $B_s^0 \rightarrow \eta^\prime \eta$. The $M_{\rm bc} $ signal region of the dominant signal contribution, $5.39 < M_{\rm bc} < 5.43$~${\rm GeV}/{\it c}^2$, is used to plot the $\Delta E$ and $M(\pi^+\pi^-\eta)$ projections. Points with error bars are data, blue solid curves are the results of the fit, black dashed curves are the background component, and pink-filled regions show the signal component. The three $M_{\rm bc} $ peaks in the signal component (from right to left) correspond to contributions from $B_s^{*0} \mybar{B}_s^{*0}$, $B_s^{*0} \mybar{B}_s^0$, and $ B_s^0 \mybar{B}_s^0$ pairs. } \label{fit_data} \end{figure*} To further test and validate our fitting model, ensemble tests are carried out by generating MC pseudoexperiments. 
In these experiments we use PDFs obtained from full detector simulation and the $B^0 \rightarrow \eta^\prime K_S^0$ data. We perform 1000 pseudoexperiments for each assumed number of signal events. An ML fit is executed for each sample prepared in these experiments. The signal yield distribution obtained from these fits exhibits good linearity. We use the results of pseudoexperiments to construct classical confidence intervals (without ordering) using a procedure due to Neyman~\cite{frequentist_approach}. For each ensemble of pseudoexperiments, the lower and upper ends of the respective confidence interval represent the values of fit signal yields for which 10\% of the results lie below and above these values, respectively. These intervals are then combined to prepare a classical confidence belt~\cite{belt_method,belt_method_2} used to make a statistical interpretation of the results obtained from data. The confidence intervals prepared using this statistical method are known to slightly ``overcover'' for the number of signal events~\cite{fc}, therefore resulting in a conservative upper limit. We apply the 3D model to the data and obtain $2.7 \pm 2.5$ signal and $57.3 \pm 7.8$ background events. The signal-region projections of the fit are shown in Fig.~\ref{fit_data}. We observe no significant signal and estimate a 90\% confidence-level (CL) upper limit on the branching fraction for the decay $B_s^0 \rightarrow \eta^\prime \eta$ using the following formula: \begin{equation} \mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta) < \frac{N_{\textrm{UL}}^{90\%}}{N_{B_s^0} \times \varepsilon \times \mathcal{B}}~, \label{eq_ul} \end{equation} \noindent where $N_{B_s^0}$ is the number of $B_s^0$ mesons in the full Belle data sample, $\varepsilon$ is the overall reconstruction efficiency for the signal $B_s^0$ decay, and $\mathcal{B}$ is the product of the subdecay branching fractions for $\eta$ and $\eta^\prime$ reconstructed in our analysis. 
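Returning to the pseudoexperiments: the per-ensemble interval construction described above (leaving 10\% of the toy fit yields below the lower edge and 10\% above the upper edge) can be sketched as follows; the index convention is a simplified quantile definition:

```python
def toy_interval(fit_yields, tail=0.10):
    """Confidence interval from an ensemble of pseudoexperiment fit yields:
    the lower (upper) edge leaves a fraction `tail` of results below (above).
    Sketch only; a real analysis would interpolate the empirical quantiles."""
    ys = sorted(fit_yields)
    n = len(ys)
    lo = ys[int(tail * n)]               # 10% of toy results lie below
    hi = ys[int((1.0 - tail) * n) - 1]   # 10% of toy results lie above
    return lo, hi
```

Repeating this for each assumed true signal yield and stacking the intervals produces the classical confidence belt used to interpret the fitted yield in data.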
In Eq.~(\ref{eq_ul}), $N_{\textrm{UL}}^{90\%}$ is the signal-yield upper limit of approximately 6.6 events at 90\% CL, obtained from the confidence belt constructed using the frequentist approach~\cite{frequentist_approach}. Using Eq.~(\ref{eq_ul}) we estimate a 90\% CL upper limit on the branching fraction $\mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta) < 6.2 \times 10^{-5}$. We also estimate a 90\% CL upper limit on the product $f_s \times \mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta) < 1.2 \times 10^{-5}$. The systematic uncertainties are not included in these estimates. Sources of systematic uncertainties and their relative contributions are listed in Table~\ref{tab:lkr_sys}. The relative uncertainties on $f_s$ and $\sigma(\Upsilon(5S))$ are 15.4\% and 4.7\%, respectively. The systematic uncertainty due to $\eta$ reconstruction is 2.1\% per $\eta$ candidate~\cite{eta_syst}. The track reconstruction~\cite{track_syst} and PID systematic uncertainties are 0.35\% and 2\% per track, respectively. We estimate the systematic uncertainty due to the $\mathcal{LR}$ requirement to be 10\%, which represents the relative change in efficiency when this requirement is varied by $\pm$0.02 about the nominal value of 0.95. This range of variation is defined by the statistics of the control sample, which is used to validate the efficiency and its dependence on the $\mathcal{LR}$ requirement. The systematic uncertainty due to the signal PDF shape is estimated by varying the fixed parameters within their statistical uncertainties determined with the $B^0 \rightarrow \eta^\prime K_S^0$ data. When varying these parameters, we observe an 11\% change in the signal yield obtained from the data and use this number as an estimate of the PDF parametrization systematic uncertainty. The systematic uncertainty due to $f_{B_{s}^{(*)0} \mybar{B}_s^{(*)0}}$ is evaluated by varying the relative fractions of the possible contributions to the signal PDF; it amounts to 1.3\%.
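The numerical estimate from Eq.~(\ref{eq_ul}) can be checked with the quoted inputs. The sub-decay branching fractions below, $\mathcal{B}(\eta'\to\pi^+\pi^-\eta)\approx 0.425$ and $\mathcal{B}(\eta\to\gamma\gamma)\approx 0.394$ (the latter entering twice), are assumed PDG-like values not quoted in this excerpt:

```python
def bf_upper_limit(n_ul, n_bs, eff, bf_sub):
    """Eq. (3): upper limit on B(B_s^0 -> eta' eta) at 90% CL."""
    return n_ul / (n_bs * eff * bf_sub)

# Assumed sub-decay branching fractions (not given in the text above):
bf_sub = 0.425 * 0.394 * 0.394   # B(eta'->pi+pi-eta) x B(eta->gg)^2 ~ 0.066
ul = bf_upper_limit(n_ul=6.6, n_bs=16.60e6, eff=0.10, bf_sub=bf_sub)
# ul ~ 6e-5, consistent with the quoted 6.2e-5 given the rounded inputs
```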
When varying the SCF contribution by $\pm 50$\% relative, we observe a 4\% change in the results of the fit to data, which we use as an estimate of the SCF PDF systematic uncertainty. The relative uncertainties on the $\eta$ and $\eta^{\prime}$ branching fractions are 1\% and 1.2\%, respectively. The uncertainty due to limited MC statistics is estimated to be 0.1\%. The overall systematic uncertainties for $\mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta)$ and $f_s \times \mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta)$ are estimated by adding the individual contributions in quadrature and are 23.1\% and 17.2\%, respectively. These systematic uncertainties are included in the $N_{\textrm{UL}}^{90\%}$ estimates of approximately 7.0 and 6.9 events by smearing the fit yield distributions while constructing the confidence belt used to extract the results. We estimate the upper limits on the branching fraction $\mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta) < 6.5 \times 10^{-5}$ and on the product $f_s \times \mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta) < 1.3 \times 10^{-5}$ at 90\% CL. Finally, using the number of signal events obtained from the fit, we estimate $\mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta) = (2.5 \pm 2.2 \pm 0.6) \times 10^{-5}$ and $f_s \times \mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta) = (0.51 \pm 0.44 \pm 0.09) \times 10^{-5}$, where, for each of the two estimates, the first uncertainty is statistical and the second is systematic. We summarize the results in Table~\ref{tab:results}.
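The quadrature sums quoted above can be reproduced from the individual contributions listed in Table~\ref{tab:lkr_sys}; for $f_s \times \mathcal{B}$ the $f_s$ uncertainty cancels and is excluded from the sum:

```python
import math

sources = {  # relative systematic uncertainties in percent (Table I)
    "f_s": 15.4, "sigma(Y(5S))": 4.7, "eta reconstruction": 4.2,
    "tracking": 0.7, "PID": 4.0, "LR selection": 10.0,
    "PDF parametrization": 11.0, "f_Bs*Bs*": 1.3, "SCF PDF": 4.0,
    "B(eta)": 1.0, "B(eta')": 1.2, "MC statistics": 0.1,
}

total_bf = math.sqrt(sum(v * v for v in sources.values()))      # for B: 23.1%
total_fs_bf = math.sqrt(sum(v * v for k, v in sources.items()
                            if k != "f_s"))                     # for f_s x B: 17.2%
```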
\begin{table} \caption{Summary of systematic uncertainties.} \begin{ruledtabular} \begin{tabular}{l|c} Source & Uncertainty (\%) \\ \hline $f_s$ & 15.4 \\ $\sigma(\Upsilon(5S))$ & 4.7 \\ $\eta$ reconstruction & 4.2 \\ Tracking & 0.7 \\ PID & 4.0 \\ $\mathcal{LR}$ selection & 10.0 \\ PDF parametrization & 11.0 \\ $f_{B_{s}^{(*)0} \mybar{B}_s^{(*)0}}$ & 1.3 \\ SCF PDF & 4.0 \\ Branching fraction of $\eta$ & 1.0 \\ Branching fraction of $\eta^{\prime}$ & 1.2 \\ MC statistics & 0.1 \\ \end{tabular} \end{ruledtabular} \label{tab:lkr_sys} \end{table} \begin{table} \caption{Summary of the results for $f_s \times \mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta)$ and $\mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta)$. See the text for more information.} \begin{ruledtabular} \begin{tabular}{c|c} Quantity & Value \\ \hline \multirow{2}{*}{$f_s \times \mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta)$} & $(0.51 \pm 0.44 \pm 0.09) \times 10^{-5}$ \\ & $< 1.3 \times 10^{-5}$ @ 90\% CL \\ \hline \multirow{2}{*}{$\mathcal{B}(B_s^0 \rightarrow \eta^\prime \eta)$} & $(2.5 \pm 2.2 \pm 0.6) \times 10^{-5}$ \\ & $< 6.5 \times 10^{-5}$ @ 90\% CL \\ \end{tabular} \end{ruledtabular} \label{tab:results} \end{table} In summary, we have used the full data sample recorded by the Belle experiment at the $\Upsilon(5S)$ resonance to search for the decay $B_s^0 \rightarrow \eta^\prime \eta$. We observe no statistically significant signal and set a 90\% CL upper limit of $6.5 \times 10^{-5}$ on its branching fraction. To date, our result is the only experimental information on $B_s^0 \rightarrow \eta^\prime \eta$ and is twice as large as the most optimistic SM-based theoretical prediction. This decay can be probed further at the next-generation Belle~II experiment~\cite{belle2} at the SuperKEKB collider in Japan. 
We thank the KEKB group for the excellent operation of the accelerator; the KEK cryogenics group for the efficient operation of the solenoid; and the KEK computer group, and the Pacific Northwest National Laboratory (PNNL) Environmental Molecular Sciences Laboratory (EMSL) computing group for strong computing support; and the National Institute of Informatics, and Science Information NETwork 5 (SINET5) for valuable network support. We acknowledge support from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT) of Japan, the Japan Society for the Promotion of Science (JSPS), and the Tau-Lepton Physics Research Center of Nagoya University; the Australian Research Council including grants DP180102629, DP170102389, DP170102204, DP150103061, FT130100303; Austrian Federal Ministry of Education, Science and Research (FWF) and FWF Austrian Science Fund No.~P~31361-N36; the National Natural Science Foundation of China under Contracts No.~11435013, No.~11475187, No.~11521505, No.~11575017, No.~11675166, No.~11705209; Key Research Program of Frontier Sciences, Chinese Academy of Sciences (CAS), Grant No.~QYZDJ-SSW-SLH011; the CAS Center for Excellence in Particle Physics (CCEPP); the Shanghai Pujiang Program under Grant No.~18PJ1401000; the Shanghai Science and Technology Committee (STCSM) under Grant No.~19ZR1403000; the Ministry of Education, Youth and Sports of the Czech Republic under Contract No.~LTT17020; Horizon 2020 ERC Advanced Grant No.~884719 and ERC Starting Grant No.~947006 ``InterLeptons'' (European Union); the Carl Zeiss Foundation, the Deutsche Forschungsgemeinschaft, the Excellence Cluster Universe, and the VolkswagenStiftung; the Department of Atomic Energy (Project Identification No. 
RTI 4002) and the Department of Science and Technology of India; the Istituto Nazionale di Fisica Nucleare of Italy; National Research Foundation (NRF) of Korea Grant Nos.~2016R1\-D1A1B\-01010135, 2016R1\-D1A1B\-02012900, 2018R1\-A2B\-3003643, 2018R1\-A6A1A\-06024970, 2018R1\-D1A1B\-07047294, 2019K1\-A3A7A\-09033840, 2019R1\-I1A3A\-01058933; Radiation Science Research Institute, Foreign Large-size Research Facility Application Supporting project, the Global Science Experimental Data Hub Center of the Korea Institute of Science and Technology Information and KREONET/GLORIAD; the Polish Ministry of Science and Higher Education and the National Science Center; the Ministry of Science and Higher Education of the Russian Federation, Agreement 14.W03.31.0026, and the HSE University Basic Research Program, Moscow; University of Tabuk research grants S-1440-0321, S-0256-1438, and S-0280-1439 (Saudi Arabia); the Slovenian Research Agency Grant Nos. J1-9124 and P1-0135; Ikerbasque, Basque Foundation for Science, Spain; the Swiss National Science Foundation; the Ministry of Education and the Ministry of Science and Technology of Taiwan; and the United States Department of Energy and the National Science Foundation.
\section{Introduction} Humans are able to understand a very large variety of complex actions performed by others. Automatic monitoring of human actions is, on the other hand, a long-standing and challenging problem in computer vision. In the literature, one finds substantial efforts along the lines of temporal segmentation and recognition of continuous human action sequences \citep{Yamato1992,Rui00,Gupta2009,Wang2012,Zhou13,Koppula13}. Recent works have mostly approached this problem from the perspective of analyzing motion patterns and by matching appearance-based features for the monitoring of action sequences. Due to the very large intra-person motion variability, such approaches, however, require large, fully labeled training data and do not generalize well. Different from conventional approaches, we here introduce a novel method for action understanding that relies only on the spatiotemporal hand-object relations that occur during an action. We use our recently introduced ``Semantic Event Chain" (SEC) concept \citep{Aksoy2010,Aksoy2011} as a descriptive action representation method. SECs capture the underlying spatiotemporal structure of continuous actions by sampling only decisive key temporal points derived from the spatial interactions between hands and objects in the scene. The SEC representation is invariant to large variations in the trajectory, velocity, object type, and pose used in the action. Therefore, SECs can be employed for the classification of actions, as demonstrated in various experiments in \citep{Aksoy2010,Aksoy2011,Aksoy2013}. In this paper, we aim at analyzing long and complex action sequences in which a human is manipulating multiple objects in different orders for a specific task, such as ``{\it preparing a breakfast}" or ``{\it making a sandwich}". Such actions are commonly called ``{\it manipulations}" since the hands are intensively interacting with objects towards a goal.
Thus, instead of analyzing entire human body configurations or motions, we only (compactly) encode spatiotemporal hand-object relations by those event chains. In the context of action understanding, different taxonomies have been proposed to date in the literature. The term {\it action} is a rather general description for any type of individual behavior like {\it walking}, {\it jumping}, or {\it pushing}. In the context of this paper, more specific terms such as {\it manipulation} or {\it manipulation action} are used, denoting that we are dealing with specific actions in which hands interact with objects. Fig.~\ref{fig:action_taxonomy} shows the hierarchy of terms used in this paper. A manipulation primitive, e.g.\@\xspace {\it approach} or {\it lift}, is the smallest basic component of a manipulation. Different sequences of primitives lead to different types of atomic manipulations such as {\it pushing} or {\it cutting}. Finally, manipulation sequences or activities, e.g.\@\xspace ``{\it making a sandwich}", contain a series of chained atomic manipulations. We note that a semantic understanding of actions can happen for any of these components but sometimes also ``beneath", for example at the level of ``motions" (e.g.\@\xspace moving your hand in a certain way to perform ``punch" or ``push"). Both would amount to an ``Approach" (bottom of Fig.~\ref{fig:action_taxonomy}), but such (dynamic) levels are not included in this taxonomy; the SEC framework represents one specific level by which many manipulation actions can be distinguished, while certain other ones will be considered type-identical. We refer the reader to the Discussion section, where we will dig a bit deeper into these aspects, which -- albeit being of a philosophical origin -- have quite a strong influence on the algorithmic treatment of the ``action problem". \begin{figure}[!t] \centering \includegraphics[scale=0.8]{Figure_ActionTaxonomy.pdf} \caption{Taxonomy of manipulation actions.
} \label{fig:action_taxonomy} \end{figure} The proposed framework has two processing stages: manipulation {\it temporal segmentation} and {\it recognition}. The temporal segmentation stage detects the changes in the spatiotemporal relations that emerge between objects and hands in the scene. These detected changes result in the parsing of individual actions performed sequentially or concurrently. The recognition stage requires an alphabet of atomic manipulations (e.g.\@\xspace {\it cutting} or {\it stirring}), which is provided up-front by learned SEC models for each ``atom" using the unsupervised learning method introduced in \cite{AksoyRAS2015}. Using this, manipulation sequences can be recognized, and the framework then also deals with objects. However, the roles of individual objects are usually not unique. A cup, for example, is commonly used for {\it filling} and/or {\it drinking} actions. The same cup can, however, also be utilized as a {\it pedestal} to put something on top of it after having first turned it upside down. Thus, depending on the intended goal, the roles of objects can vary from manipulation to manipulation. We will show that we can extract object-like entities (image segments) with the proposed framework and cluster them according to their exhibited roles in each recognized manipulation type without requiring any prior object knowledge. The rest of the paper is organized as follows. We start with introducing the state of the art. We then continue with a detailed description of each processing step. Next, we provide experimental results on various datasets, and finally we finish with a discussion. \section{State of the Art} \label{sec:soa} Understanding continuous human actions is of fundamental importance in computer vision and has very broad potential application areas such as video surveillance, multimedia retrieval, virtual reality, and human-robot interaction \citep{Sukthankar2007,Pardowitz08,HoaiLD2011,Pei13}.
There is a large corpus of work on both temporal segmentation and recognition of human actions in computer vision and machine learning. Achievements obtained in these topics will now be summarized, but we cannot provide complete coverage of all works in these fields. See \cite{Poppe2010}, \cite{Ahad2011}, or \cite{Weinland2011} for comprehensive surveys. \subsection{Action Temporal Segmentation} \label{sec:actiondecomposition} Temporal segmentation, i.e.\@\xspace decomposition, is the process of segmenting the input data stream, i.e.\@\xspace action sequences, into individual action instances, i.e.\@\xspace atomic actions. The main difficulties here are the possibly large number of action combinations, the variable durations of the different atoms, and the irregularity and variability of actions performed by different people in different (scene) contexts. To cope with these problems, different approaches such as boundary detection and sliding window methods, as well as higher-level grammars, are widely used. Boundary detection methods \citep{Rui00,Weinland2006,Shiv2008} essentially investigate start and end points of actions from temporal discontinuities or extrema in the acceleration or velocities of the motion profiles. Although such approaches are attractive due to being invariant to action classes, they highly depend on the observed motion pattern, which can exhibit high intra-class variations. Conventional sliding window approaches \citep{Zhong2004,Sukthankar2007} search for correspondences between previously learned action features and the current action segment under the sliding window. The temporal segmentation performance, however, heavily depends on the recognition results, which can be affected by the predefined window size. Alternatively, higher-level grammars \citep{Peursum2004,Lv2006} build a single large network from individually modelled actions.
Such grammars are then used to model transitions between single actions to further parse action sequences by computing the minimum cost path through the network using efficient dynamic programming techniques. However, such methods require a large amount of training data to learn a state sequence for each action and also to capture the state transitions between individual actions. Along these lines, not only grammars with generative models, e.g.\@\xspace Hidden Markov Models (HMMs) \citep{Peursum2004,Lv2006}, but also discriminative frameworks based on multi-class Support Vector Machines (SVMs) \citep{HoaiLD2011} and semi-Markov models \citep{Qinfeng11} have meanwhile been proposed to perform simultaneous action segmentation and recognition. Recent work \cite{Wang2012} also introduced a probabilistic graphical model with additional substructure transition and discriminative boundary models in order to tackle the problem of continuous action segmentation and recognition. An unsupervised hierarchical bottom-up framework was presented in \cite{Zhou13} for the temporal partitioning of human motion into disjoint segments. Although all those approaches yield encouraging results, the requirement of fully labeled training data limits their transfer to new sequences. Such approaches are based on bottom-up continuous motion patterns that have high variability in appearance and shape across individual demonstrations of the same action. The computational complexity, as seen in \cite{Zhou13}, also limits their applicability to long sequences. In contrast to the aforementioned temporal segmentation approaches, we propose a method that corresponds to a top-down semantic analysis of the video data without being affected by low-level data variations in the object or motion domains.
Among the existing methods, the work in \cite{Pei13}, which is an event parsing approach based on a stochastic event grammar, is most strongly related to our framework, since it also employs binary spatial relations (e.g.\@\xspace touch, near, in, etc.) between objects and agents in the scene. In contrast to our method, this framework heavily relies on semi-supervised object recognition in order to derive atomic actions. Each action is coupled with an object; thus, a new set of atomic actions has to be learned for scenarios with novel objects. Different from this, our approach does not require any prior object information and relies solely on the semantic interaction between objects and hands in the scene. \subsection{Action Recognition} \label{sec:actionrecognition} Action recognition is the labeling process of a given image sequence, which can be considered as a four-dimensional data stream composed of spatial and temporal components. There exists extensive literature on topics related to action recognition. The previous works can be categorized into two main groups based on the action types. The first group of work \citep{Bobick2001,Sminchisescu2006,Scovanner2007,Laptev2008} benefits from intrinsic hand or body movement features and concentrates on the monitoring of full-body motions, such as {\it walking} and {\it running}. The second group covers manipulation actions (e.g.\@\xspace~{\it cutting, stirring}) in which interactions between objects and hands play the most crucial role, rendering the discriminative cues. Our proposed recognition approach falls into this group, which is in general less understood and less investigated. Only a few solutions have been proposed so far \citep{Gupta2009,Fathi2011,Kjellstrom11,Yang13,Ramirez2013}. \subsubsection{Recognition of Human Motion} \label{sec:recognitionofhumanmotion} Vision-based human action recognition methods have two processing stages: action representation and classification.
In the action representation phase, most proposed techniques extract global or local discriminative image features either in a top-down fashion by tracking regions of interest (e.g.\@\xspace a detected person in the scene) or as a collection of independent patches in a bottom-up fashion. Global feature representation methods encode each region of interest as a whole from silhouettes \citep{Bobick2001}, contours \citep{Chen2006}, or optical flow \citep{Efros03}. When using a local representation, however, patches are calculated around space-time interest points detected by corner detectors \citep{Laptev2003}, local histograms of oriented gradients (HOG) together with histograms of oriented flow (HOF) descriptors \citep{Laptev2008}, or scale-invariant feature transform (SIFT) descriptors \citep{Scovanner2007}. In the action classification phase, approaches are mostly based on generative or discriminative temporal state-space models. Generative approaches (e.g.\@\xspace HMMs \citep{Yamato1992,Hoang2012}) learn to model each action class with all variations, whereas discriminative models (e.g.\@\xspace Conditional Random Fields (CRF) \citep{Sminchisescu2006,Kjellstrom11}) learn the probability of each action conditioned on the observations without modeling the class. Although such probabilistic frameworks have by far been most widely used for action recognition, they heavily rely on low-level scene features together with the body or hand motion without employing any semantic information. In addition, classical HMM based approaches are not suitable for recognizing parallel streams of actions \citep{Graf2010} and cannot easily describe structures with repetitions or recursions \citep{Lee2013}. In contrast to generative HMM based frameworks, our event chain based action representation method also obeys the Markovian assumption, but the main difference is that all states, i.e.\@\xspace columns, in the event chains are observable. 
These states represent {\it key events}, i.e.\@\xspace topological changes in the scene. Furthermore, since detailed movement variations are not considered, event chains do not require a large corpus of training data for learning individual actions as shown in our previous works \citep{Aksoy2011,AksoyRAS2015}. \subsubsection{Recognition of Manipulation Actions} \label{sec:recognitionofmaniac} Ideas to utilize topological relations to reach the semantics of manipulation actions can be found as early as 1975. The first approach, introduced in \cite{Badler1975}, represented a scene by directed graphs in which each graph node identifies one object and edges describe relative spatial information (e.g.\@\xspace left, front, etc.) between objects. Based on object movement patterns, i.e.\@\xspace topological changes in the scene, events are defined to represent actions. The main drawback of this approach is that actions could not really be observed by vision at that time and observation was substituted by idealized hand-made image sequences. In the late nineties, causal semantics started to be used for interpreting manipulation videos. \cite{Brand1996} analyzed globally consistent causal evolution of the scene over time. The method detected meaningful changes in the motions and collisions of surfaces of foreground-segmented scene blobs. The same method was extended with heuristic and probabilistic models in \cite{Brand97} to enforce longer-term consistencies in the video parse. \red{Different from our approach, this method requires prior object information for scene blob detection and employs motion features such as the object velocity profile, which considerably harms the generalization capability of action recognition. } Along these lines, \cite{Siskind1994} presented a logical semantic notion for describing event primitives of simulated simple motions in animated line drawings.
Furthermore, a maximum-likelihood-based approach was introduced in \cite{Siskind1996} to reason about a stream of 2D ellipses, each of which abstractly represented the position, orientation, shape, and size of the manipulated objects in manipulation actions. \red{\cite{Bobick1998} applied a stochastic Context-Free Grammar (CFG) on top of an HMM based gesture detector. The discretized HMM output was fed to the CFG parser to estimate discrete symbol streams. } More recently, \cite{Fern02} suggested to also incorporate force dynamics with temporal and relational information to recognize visual events in manipulation videos. \red{\cite{Minnen2003} introduced parameterized stochastic grammars to recognize and make predictions about actions without requiring the object identity, but they were limited in recognizing semantically complex actions, including concurrent events. The work of \cite{Ryoo2006} was a hierarchical approach which started with the extraction of human body-segments and continued with the estimation of body poses at each frame. Gesture sequences were then estimated as symbolic scene states. At the highest level, a CFG was introduced to represent recursive action concepts. Although this work has similarities to our framework in terms of spatiotemporal scene representation, their framework rather focuses on human motions (e.g.\@\xspace hugging and punching) and cannot handle missing or noisy sub-events that can occur during the action. } Even today there are still only a few approaches \citep{Sridhar08,Kjellstrom11,Yang13,Nagahama2013,Ramirez2013} attempting to arrive at the semantics of manipulation actions in conjunction with assessing the manipulated objects. \cite{Sridhar08} advocates a method for encoding an entire manipulation sequence by an activity graph, which stores the complete stream of spatiotemporal object interactions. The main difficulty here is that very complex and large activity graphs need to be decomposed for the further recognition process.
In the work of \cite{Kjellstrom11}, segmented hand poses and velocities are used to classify manipulation actions. A histogram of gradients approach with a support vector machine classifier is used to categorize manipulated objects. Factorial conditional random fields are then employed to compute the correlation between objects and manipulations. However, this work does not consider interactions between the objects. Different from this, visual semantic graphs, inspired by our scene graphs, were introduced in \cite{Yang13} to recognize abstract action consequences (e.g.\@\xspace~{\it Assemble}, {\it Transfer}) only based on changes in the structure of the manipulated object without considering interactions between hands and objects. \cite{Nagahama2013} presented a method for hierarchical estimation of contact relationships (e.g.\@\xspace~{\it on}, {\it into}) between multiple objects. The previous work in \cite{Ramirez2013} suggested extraction of abstract hand movements, such as {\it moving}, {\it not \ moving} or {\it tool used}, to further reason about more specific action primitives (e.g.\@\xspace~{\it Reaching}, {\it Holding}) by employing not only hand movements but also the object information. Their methods are rather aimed at detecting actions which span only short time intervals. Although all those works to a certain extent improve the recognition of manipulations and/or objects, none of them addresses the temporal segmentation of long chained manipulation sequences into single atomic elements or even into key events, i.e.\@\xspace primitives, of individual manipulations. \red{ Specific attention has also been directed to understanding actions by means of hand-object interactions (many times: ``grasping'', \citep{Elliott1984,Cutkosky1989,Ekvall2005,Feix2009,Wimmer2011,Bullock2013,Liu2016}). This issue is complex and can be quite confounding when trying to get closer to the semantics of actions.
To be able to understand this problem we first need to better introduce our approach and we will therefore discuss the issue of hand-object interactions at greater length only later (in section~\ref{sec:discussion}). } \begin{table*}[!t] \begin{center} \caption{\red{Comparison of recent action recognition approaches}} \scalebox{1.0}{ \begin{tabular}{ l c c c c c c c c c c c c p{6cm} } \\ \multicolumn{1}{l}{Paper} & \mcrot{1}{l}{90}{\twoelementtable{Action}{Segmentation}} & \phantom{p} & \mcrot{1}{l}{90}{\twoelementtable{Parallel}{Actions}} & \phantom{p} & \mcrot{1}{l}{90}{\twoelementtable{Viewpoint}{Invariance}} & \phantom{p} & \mcrot{1}{l}{90}{Semantic} & \mcrot{1}{l}{90}{Motion} & \mcrot{1}{l}{90}{Multi-agent} & \mcrot{1}{l}{90}{\twoelementtable{Object}{Information}} & \phantom{p} & \mcrot{1}{l}{90}{Depth Cue} & \multicolumn{1}{c}{Comment} \\ \midrule \midrule \cite{Badler1975} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{\OK} & \OK & \OK & - & \multicolumn{2}{c}{\OK} & - & Not applied to real image streams \\ \hline \cite{Brand97} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \OK & \OK & - & \multicolumn{2}{c}{\OK} & - & The event detection and scoring steps are hand-tuned, not adaptive or robust \\ \hline \cite{Bobick1998} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \OK & \OK & - & \multicolumn{2}{c}{-} & - & Considers simple hand gestures \\ \hline \cite{Rui00} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & - & \OK & - & \multicolumn{2}{c}{\OK} & - & Highly depends on the observed motion pattern \\ \hline \cite{Minnen2003} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \OK & - & - & \multicolumn{2}{c}{-} & - & Not applicable to complex actions with cluttered scenes \\ \hline \cite{Zhong2004} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & - & - & \OK & \multicolumn{2}{c}{-} & - & Depends on the 
predefined window size required for temporal action segmentation \\ \hline \cite{STIP2005} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & - & \OK & \OK & \multicolumn{2}{c}{-} & - & Highly depends on the observed scene context \\ \hline \cite{Ryoo2006} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{\OK} & \OK & - & \OK & \multicolumn{2}{c}{-} & - & Poor performance when having missing or noisy sub-events \\ \hline \cite{Sridhar08} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{\OK} & \OK & - & \OK & \multicolumn{2}{c}{-} & - & Large and complex activity graphs need to be decomposed \\ \hline \cite{Gupta2009} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \OK & \OK & - & \multicolumn{2}{c}{\OK} & - & Depends on the scene context (e.g.\@\xspace object texture and pose) \\ \hline \cite{Kjellstrom11} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \OK & \OK & - & \multicolumn{2}{c}{\OK} & - & Object interactions are not considered \\ \hline \cite{Wang2011} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & - & \OK & \OK & \multicolumn{2}{c}{-} & - & Highly depends on the scene context and motion pattern \\ \hline \cite{Fathi2011} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & - & \OK & - & \multicolumn{2}{c}{\OK} & - & Object recognition follows action identification \\ \hline \cite{Yang13} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \OK & - & - & \multicolumn{2}{c}{\OK} & - & Requires prior knowledge about manipulated objects \\ \hline \cite{Ramirez2013} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{\OK} & \OK & - & \OK & \multicolumn{2}{c}{\OK} & - & Abstract approach but also employs object information \\ \hline \cite{Wang2013} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & - & \OK & \OK & \multicolumn{2}{c}{-} 
& - & Highly depends on the followed motion pattern \\ \hline \cite{Koppula13} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \OK & \OK & - & \multicolumn{2}{c}{\OK} & \OK & Complex approach, employing human skeleton information, object segments and object tracks\\ \hline \cite{Li_2015_CVPR} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} & \multicolumn{2}{c}{\OK} & - & \OK & - & \multicolumn{2}{c}{\OK} & - & Depends on the scene context and motion profiles\\ \hline \cite{Wei2016} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{-} & \OK & - & - & \multicolumn{2}{c}{\OK} & \OK & Requires human skeleton and depends on the object recognition \\ \hline \textbf{Ours} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{\OK} & \multicolumn{2}{c}{\OK} & \OK & - & \OK & \multicolumn{2}{c}{-} & \OK & Requires multi-object and hand tracking \\ \hline \bottomrule \end{tabular} \label{table:soacomparison}} \end{center} \end{table*} Recent works such as \cite{Koppula13} described a Markov random field based model for decomposing and labeling the sequences of human sub-activities together with manipulated object roles. In the modeling process they employed human skeleton information, object segments and the observed object tracks. Likewise, \cite{Gupta2009} introduced a Bayesian model by using hand trajectories and hand-object interactions while segmenting and estimating observed manipulation sequences. In \cite{Fathi2011} hierarchical models of manipulations were learned with weak supervision from an egocentric perspective without using depth information. In contrast to our framework, these approaches are not suitable for detecting and recognizing parallel streams of actions since the applied models can only assign one label to each computed temporal segment. 
Following this analysis we believe that the work presented here is the first study that applies semantic reasoning in order to decompose chained manipulation sequences and to recognize embedded serial and parallel (overlapping) manipulation streams in conjunction with the manipulated objects without employing any prior object knowledge. \red{ In Table~\ref{table:soacomparison} we provide a detailed comparison of some recent action recognition approaches together with our proposed method. This side-by-side comparison shows which approaches can perform joint action segmentation (second column), detect parallel actions (third column) and which are viewpoint invariant (fourth column). These are the three main features that come with the proposed SEC framework. The fifth column of the table indicates which of those methods employ high-level action semantics for the recognition process. Although there exist many different semantic approaches, to the best of our knowledge, the SEC framework is the only one which is fully grounded at the signal (i.e.\@\xspace pixel) level. This SEC feature leads to the extraction of key events, i.e.\@\xspace primitives, of individual manipulations and also to the clustering of objects according to their roles in an action. The next column highlights whether the motion profile of objects is being incorporated during the recognition phase. This feature makes action recognition biased to the followed movement pattern (trajectory), which can harm the method's power for generalization; this is not the case in our framework. The seventh column shows which methods can handle multi-agent action streams. This property indicates whether the recognition method can deal with cluttered scenes where more than one subject manipulates multiple objects. The last two columns respectively indicate whether the corresponding method requires prior object information or whether it depends on depth information.
Different from other approaches, SECs do not employ any object information in advance but require depth for a better performance. } \section{Method} \label{sec:method} Before describing the complete framework in detail, we will briefly provide an overview of each algorithmic step illustrated in Fig.~\ref{fig:block_diagram}. \begin{figure*}[!b] \centering \includegraphics[scale=0.65]{Figure_BlockDiagram.pdf} \caption{Block diagram of the algorithm. } \label{fig:block_diagram} \end{figure*} \begin{figure*}[!b] \centering \includegraphics[scale=0.5]{Figure_ChainedAction_07.pdf} \caption{Semantic segmentation of a sample manipulation sequence where a hand is first replacing a bucket, putting an apple down and then hiding it with the same bucket. (a)~The extracted event chain where each column corresponds to one {\it key frame}, some of which are shown on the top with original images, respective objects (colored regions), and main graphs. Rows are spatial relations between object pairs, e.\,g. between the yellow ($2$) and red buckets ($5$) in the first row. Possible spatial relations are $N$, $T$, and $A$ standing for {\it Not touching}, {\it Touching}, and {\it Absence}. Each colored block in the SEC indicates a sequence of $[N, T, \cdots, T, N]$ relations. (b)~Computed probabilities of each object to estimate the {\it manipulator}. For instance, the object number $2$ exists only in the first three rows of the SEC, therefore, detected blocks only in these rows are superimposed and assigned for that object to calculate the probabilities given on the right. (c)~Decomposed SEC segments with respect to the ground truth. Black blocks represent null actions. {\it P} and {\it S} stand for the estimated {\it primary} and {\it secondary objects}. 
(d)~Detected manipulation types at each segment.} \label{fig:samplechainedactionSEC} \end{figure*} The proposed semantic action temporal segmentation and recognition framework is triggered by the observation of manipulation actions demonstrated by a human. The image sequence of any observed manipulation is first segmented to separately track each object-like entity (including the hand) in the scene by using computer vision methods \citep{Abramov10,AbramovRGBD12}. Note that explicit object information is not provided and the method just tracks ``image segments''. Tracked image segments, i.e.\@\xspace objects, are then represented by scene graphs to derive a matrix-like manipulation representation, the so-called Semantic Event Chain (SEC). Objects are the graph nodes and edges exist between objects that touch each other. Graphs are only stored when their topology changes (i.e. when nodes or edges are formed or deleted). Hence, essentially we record the touching or un-touching events between objects here. The core algorithm to extract event chains has been described elsewhere \citep{Aksoy2011}. In the first step we create a SEC library of various atomic manipulations (e.g. {\it Cutting} or {\it Stirring}) by learning an event chain model for each individual type with a method introduced in \cite{AksoyRAS2015}. This method is model-free and is only based on the intrinsic correlations in the topology changes of the graphs, which are highly characteristic of different atoms. As mentioned, these two steps (SEC algorithm and SEC learning) have been described earlier \citep{Aksoy2011,AksoyRAS2015} and the main contribution of this paper lies elsewhere: It is the semantic segmentation and recognition of long and complex manipulation sequences, as depicted by a dashed box in Fig.~\ref{fig:block_diagram}.
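The key-frame criterion sketched above (a new main graph is stored only when a node or an edge is formed or deleted) can be illustrated with a minimal code example. This is only a simplified sketch, not the actual implementation of \cite{Aksoy2011}; the frame encoding as sets of node ids and touching pairs is an assumption made here for illustration:

```python
def extract_main_graphs(frames):
    """frames: list of (nodes, edges) per image frame, where nodes is a
    set of tracked segment ids and edges is a set of frozenset pairs of
    ids that touch each other. Returns the indices of the key frames."""
    key_frames = []
    prev = None
    for i, (nodes, edges) in enumerate(frames):
        topology = (frozenset(nodes), frozenset(edges))
        if topology != prev:          # a node or edge formed or deleted
            key_frames.append(i)
            prev = topology
    return key_frames

# Toy sequence: hand (1) approaches, touches, and releases an apple (2).
frames = [
    ({1, 2}, set()),                  # apart
    ({1, 2}, set()),                  # no change -> no key frame
    ({1, 2}, {frozenset({1, 2})}),    # touching event
    ({1, 2}, {frozenset({1, 2})}),    # no change
    ({1, 2}, set()),                  # un-touching event
]
print(extract_main_graphs(frames))    # -> [0, 2, 4]
```

Each returned index corresponds to one SEC column, which is why a demonstration of roughly a thousand frames collapses to a matrix with only a few tens of columns.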
In brief: The event chain representation of the observed manipulation is first scanned to estimate the main manipulator, i.e.\@\xspace the hand, in the scene without employing any object recognition method. Solely based on the interactions between the hand and the manipulated objects in the scene, the event chain is decomposed into segments. Those are further fragmented into sub-units to detect parallel action streams. Each parsed SEC segment is then compared with the model SECs in the library to decide whether the current SEC sample belongs to one of the known manipulation models or represents a novel manipulation. The proposed framework runs in an automated and unsupervised manner to monitor chained manipulation sequences performed either sequentially or in parallel. In the next sections we will present the core algorithmic components in detail; however, those which have been introduced elsewhere will only be briefly summarized. \subsection{Manipulation Observation} \label{sec:observation} In this work, we address the automatic temporal segmentation and monitoring of long and complex manipulation sequences in which a human is manipulating multiple objects in various orders to perform specific tasks, such as ``{\it making a sandwich}'' or ``{\it preparing a breakfast}''. During the observation phase, the demonstrated manipulation is recorded from the subject's own point of view with a static $RGB-D$ camera since we are interested in the spatiotemporal interactions between the manipulated objects and hands. The top row in Fig.~\ref{fig:samplechainedactionSEC} depicts some original scene images from a sample chained manipulation demonstration. \subsection{Segmentation and Tracking} \label{sec:segmentationandtracking} The image segmentation algorithm is based on the color and depth information fed from the Kinect device and uses phase-based optical flow \citep{Pauwels10} to track objects between consecutive frames.
Data transmission between different pre-processing sub-units is achieved with the modular system architecture described in \cite{PaponOcl2011}. Since the segmentation and tracking approaches are not at the core of this paper and were comprehensively described elsewhere \citep{Abramov10,AbramovRGBD12}, we omit details here. Note that for image segments, i.e.\@\xspace objects, we will always use more descriptive human terms like {\it hand}, {\it bucket}, etc., but we emphasize that the system has no such knowledge and relies entirely on consistently tracked image segments. \subsection{Semantic Event Chain (SEC) Extraction} \label{sec:secs} Each segmented image is represented by a graph: nodes represent object centers and edges indicate whether two objects touch each other or not. By using depth information we exclude the graph node for the background (supporting surface) since it is, in general, not employed as the main object manipulated in the action. By using an exact graph matching technique, the framework discretizes the entire graph sequence into decisive main graphs. A new main graph is identified whenever a new node or edge is formed or an existing edge or node is deleted. Thus, each main graph represents a ``key frame'' in the manipulation sequence, where a discrete change has happened. All issued main graphs form the core skeleton of the SEC, which is a matrix where rows are spatial relations (e.\,g. touching) between object pairs and columns describe the scene configuration at the time point when a new main graph has occurred. Fig.~\ref{fig:samplechainedactionSEC}~(a) depicts the SEC representation for a sample chained manipulation demonstration, in which a hand is first replacing a bucket, then putting an apple down on the table and then hiding it with the same bucket. For instance, the first row of the SEC represents the spatial relations between graph nodes $2$ and $5$ which are the yellow and red buckets, respectively.
At the top of Fig.~\ref{fig:samplechainedactionSEC}~(a) some sample {\it key frames} including original images, respective objects (colored regions), and corresponding main graphs are given to illustrate topological configurations at the related SEC columns. Possible spatial relations are {\it Not touching (N)}, {\it Touching (T)}, and {\it Absence (A)}, where $N$ means that there is no edge between two objects, i.e.\@\xspace graph nodes corresponding to two spatially separated objects, $T$ represents a touching event between two neighboring objects, and the absence of an object yields $A$. In the event chain representation, all pairs of objects need to be considered once; however, static rows which do not contain any change from $N$ to $T$ or vice versa are deleted as being irrelevant. For instance, the relation between the left and right hand is always $N$ and never switches to $T$ to trigger an event; therefore, the respective row is ignored in the event chain. Consequently, the SEC in Fig.~\ref{fig:samplechainedactionSEC}~(a) encodes relations only between objects $2,~5,~ 7,$ and $8$, although many more objects exist in the scene. Hence, the semantics of the manipulation is now represented by a $6 \times 21$ matrix despite having approximately $1100$ frames in the entire demonstration. The SEC extraction explained briefly in this section has been described in detail in \cite{Aksoy2011}. \subsection{Learning of Model SECs} \label{sec:learning} In this section, we will briefly describe both the learning method employed to explore model SECs for single atomic manipulations and the semantic similarity measure between two event chains. We, however, omit the finer details here and refer the interested reader to \cite{AksoyRAS2015} for a comprehensive description of those approaches. The main aim of the learning method is to generate a vocabulary of single atomic manipulations, e.g.\@\xspace~{\it Putting}, {\it Hiding}, or {\it Pushing}.
Such a vocabulary can then be employed to monitor the decomposed long manipulation sequences (see Fig.~\ref{fig:block_diagram}). \begin{figure}[!t] \centering \includegraphics[scale=0.5]{Figure_LearningBlockDiagram.pdf} \caption{Overview of the learning framework.} \label{fig:learning_overview} \end{figure} The learning approach essentially searches for common spatiotemporal information embedded in the rows and columns of event chains derived from a training manipulation set. Fig.~\ref{fig:learning_overview} shows an overview of the learning approach. A new model is initiated with the first SEC sample of an unknown atomic manipulation. Once the next demonstration is observed, the respective SEC is derived and compared with the already known SEC models. If the semantic similarity ($\delta$) between this novel SEC sample and any of the known models is higher than a threshold ($\tau$), the corresponding model is updated with the new sample. Otherwise, the SEC sample is labeled as a new model. The threshold value $\tau$ is directly estimated from the distribution of semantic similarities between the observed SEC sample and the known models. To update an existing model, the learning procedure just needs to search for all common rows and columns existing both in the new SEC sample and the model. In the case of having additional rows or columns in the new SEC, the model is extended by these extra ones. Finally, the model SEC consists of only those rows and columns observed frequently in the newly acquired SEC samples. The learning framework works in an on-line unsupervised manner as described in detail in \cite{AksoyRAS2015}. A batch mode implementation had already been introduced in \cite{Aksoy2011}. In order to measure the semantic similarity between two event chains, we basically compare rows and columns of SECs using simple sub-string search and counting algorithms.
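The on-line model acquisition loop described above can be summarized in a short sketch. Here \texttt{similarity} and \texttt{merge} are placeholders for the actual semantic similarity measure and the row/column model update of \cite{AksoyRAS2015}, and a fixed threshold stands in for the $\tau$ that is estimated from the similarity distribution:

```python
# Hedged sketch of the on-line SEC-model learning loop: update the best
# matching model if its similarity to the new sample exceeds tau,
# otherwise start a new model from the sample.

def learn(models, sample, similarity, merge, tau):
    if models:
        scores = [similarity(m, sample) for m in models]
        best = max(range(len(models)), key=scores.__getitem__)
        if scores[best] >= tau:
            models[best] = merge(models[best], sample)
            return models
    models.append(sample)            # unknown manipulation -> new model
    return models

# Toy illustration with strings of relations instead of full SEC matrices.
sim = lambda a, b: sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))
keep = lambda model, sample: model   # placeholder for the model update
models = []
for sample in ["NTN", "NTT", "NTN"]:
    models = learn(models, sample, sim, keep, tau=0.7)
print(models)                        # -> ['NTN', 'NTT']
```

In the toy run, the second sample is dissimilar enough to open a new model, while the third matches the first model and is merged into it.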
Relational changes are considered while comparing the rows, whereas for the columns the temporal order counts. We first search for the correspondences between rows of the two event chains since rows can be shuffled. The searching process compares and counts equal entries of one row against other rows using a standard substring search which does not rely on dimensions and allows comparing arbitrarily long manipulation actions. We then examine the order of columns to get the final similarity result. Details for the similarity calculations are given in \cite{AksoyRAS2015}. Fig.~\ref{fig:learned_sec_models} shows the learned SEC models for eight different atomic manipulations in the ManiAc dataset \citep{AksoyRAS2015} explained in Section~\ref{sec:maniacdataset}. Each arrow on top of the SEC models indicates the weight values of the most commonly observed event chain columns. Weight values depicted with arrows on the left represent how often each SEC row is obtained in the trained samples. It can be seen that in all models the rows are quite commonly observed in the trained samples as their weight values are close to $1$. This is, however, not the case for a few SEC model columns. For instance, the weight value for the last column of the \textit{Chopping} model drops to $0.27$. This is because even though each subject grasps a tool and chops an object in a similar temporal order, they leave the scene in different orders; for example, one subject first removes the hand supporting the object to be chopped and then withdraws the hand holding the tool, whereas another subject either does it the other way around or removes both hands at the same time. Another reason for smaller weight values is the noise propagated from the segmentation and tracking components, as observed in the \textit{Cutting} model.
Nevertheless, we can extract all these variations that occurred due to the nature of the manipulation or due to noise and pick the most often observed states as a representative model for each manipulation action. \begin{figure}[!t] \centering \includegraphics[trim={3cm 3cm 5cm 0.65cm},scale=0.45]{Figure_SEC_Models.pdf} \caption{Complete learned SEC models for eight different manipulations. Weight values shown with arrows on the left and top respectively indicate how often each row and column in the SEC is obtained in the trained samples. } \label{fig:learned_sec_models} \end{figure} It is obvious that these $N$-$T$-$A$ patterns in the learned SEC models are very unique, except for \textit{Cutting} and \textit{Chopping} which have a quite similar SEC structure. This is because both manipulations semantically represent the same manipulation consequence and, hence, both have the same fundamental action primitives, i.e. similar columns in the event chains. The only differences are mostly in the followed trajectories and the velocity of the movements, which are not captured by SECs. We discuss such naturally emerging high semantic similarities in Section~\ref{sec:maniacdataset}. It is also important to highlight that some SEC models have symmetric patterns, such as those in the \textit{Hiding} and \textit{Uncovering} or \textit{Taking} and \textit{Putting} models. This is fundamentally correct since playing any of these manipulations backward will lead to its symmetrical counterpart. This is a very important feature coming with the semantic event chain representation of manipulations. \subsection{Manipulation Temporal Segmentation} \label{sec:manipulationdecomposition} Once the SEC pattern of a manipulation sequence is derived, we continue with the temporal segmentation phase, which considers the semantic information embedded in the event chain.
The temporal segmentation method first searches for an object which plays the main role in the manipulation, or in other words, which acts as a {\it manipulator}. We assume that each manipulation is driven by one such main actor, e.g.\@\xspace~a {\it hand}, and that it is most frequently interacting with the objects in the scene. To make the rest of the algorithm clearer, we start with the assumption of having only {\it single-hand} manipulations; however, this can be extended to multiple hands as will be discussed in section~\ref{sec:mot_dataset}. For this framework we employ the following reasonable action descriptive rules: \begin{itemize} \item The {\it manipulator} can purposefully manipulate, i.e.\@\xspace~{\it touch}, only one object at a time, which will be named the {\it primary object}, e.g.\@\xspace~a {\it knife}. \item The manipulation sequence can consist of multiple {\it primary objects}. Each, however, has to be separately manipulated at different time intervals. For instance, the hand cutting a cucumber with a knife is not allowed to stir milk with a spoon unless it releases the knife first. \item All other objects interacting with the {\it primary objects} will be called {\it secondary objects}, e.g.\@\xspace the cucumber to be cut. \end{itemize} Those rules, first introduced in \cite{Woergoetter2013}, form the main skeleton of our proposed temporal segmentation method. We first convert these rules into meaningful spatial relational sequences to make them compatible with the SEC representation.
For instance, these rules require that the event chain must contain at least one row holding the spatial relations between the {\it manipulator} and a {\it primary object}, defined as: \begin{equation} {\it manipulator}, {\it primary~object} ~~~ \begin{bmatrix} N & T & \cdots & T & N \end{bmatrix}, \label{ntn_sequence} \end{equation} where the {\it manipulator} is first not touching (N) the {\it primary object}, then touches (T) the {\it primary object} to apply a certain task on it. Depending on the manipulation, the temporal length of the touching (T) relation can vary. Finally, the {\it manipulator} releases (N) the {\it primary object} and continues with a different {\it primary object}. We note that since no object recognition method is employed to identify graph nodes, we need to identify the {\it manipulator} and {\it primary/secondary objects} from the bare graph nodes alone, based only on the action descriptive rules introduced above. \subsubsection{Estimating the Manipulator} \label{sec:extractingmanipulator} To achieve this, we apply probabilistic reasoning to estimate object roles in the manipulation. Probability values for each object are assigned based on the similarity of its relations to Eq.~\eqref{ntn_sequence} and the length of its touching relations. Let $\xi$ be a semantic event chain of size $n \times m$ and assume that $\xi$ includes $q$ different objects, the set of which can be written as \begin{eqnarray} \mathcal{S} =\{s_{1}, s_{2}, \cdots , s_{q}\} \quad . 
\end{eqnarray} The event chain $\xi$ can then be described as: \begin{equation} \xi = \begin{bmatrix} s_{1,1},s_{1,2}\\ s_{2,1},s_{2,2}\\ \vdots \\ s_{n,1},s_{n,2}\\ \end{bmatrix} = \begin{bmatrix} r_{1,1} & r_{1,2} & \cdots & r_{1,m} \\ r_{2,1} & r_{2,2} & \cdots & r_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ r_{n,1} & r_{n,2} & \cdots & r_{n,m} \end{bmatrix}, \label{sec_matrix} \end{equation} where $s_{i,:} \in \mathcal{S}$, and $r_{i,j} \in \{ A, N, T\}$ represents the spatial relation between an object pair $s_{i,1}$ and $s_{i,2}$ at time $j$. We assign a probability value $p_{k}$ to each object $s_{k}$ existing in $\xi$ to define the likelihood of its being the {\it manipulator} as \begin{eqnarray} \mathcal{P} = \{ p_{k}: \ k \in [ 1,\cdots ,q ] \} \ \quad , \end{eqnarray} \begin{eqnarray} p_{k}= \frac{\sum^{m}_{j=1} \delta_{k,j}}{m} \label{prob_segments} \quad , \end{eqnarray} \begin{eqnarray} \resizebox{.8\hsize}{!}{$\delta_{k,j} = \left\{ \begin{array}{l l} 1 & \quad \text{if ~ $s_{k} \in s_{i,:}$ ~ , ~ $[N, T, \cdots, T, N] \in r_{i,:}$ } \\ & \quad \text{ and ~ $r_{i,j} \in [N, T, \cdots, T, N]$ , $i \in [ 1,\cdots ,n ] $ }\\ 0 & \quad \text{else}\\ \end{array} \right. \quad , $ } \end{eqnarray} where $\delta$ measures how widely the touching event $T$ extends over the temporal length of $\xi$, given the relational sequence $[N, T, \cdots, T, N]$ of Eq.~\eqref{ntn_sequence} in the rows (i.e.\@\xspace $s_{i,:}$) that include the respective object $s_{k}$. The {\it manipulator} is finally estimated as the object $s_{k^{\ast}}$ with the highest probability; that is, \begin{equation} k^{\ast}= \underset{1 \leq k \leq q}{\operatorname{arg\,max} }~( p_{k}) \label{prob_manipulator} \quad . \end{equation} In Appendix~\ref{sec:manipulatorestimationappendix} we provide the pseudocode for estimating the {\it manipulator} from Eqs.~\eqref{ntn_sequence}$-$\eqref{prob_manipulator}. 
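As a concrete illustration, the probability computation of Eqs.~\eqref{prob_segments}$-$\eqref{prob_manipulator} can be sketched in Python as follows. This is a minimal sketch with our own function names; spatial relations are encoded as the strings \texttt{'N'}, \texttt{'T'}, \texttt{'A'}, and we count only the touching columns of each $[N, T, \cdots, T, N]$ run:

```python
def touch_blocks(row):
    """Return [start, end) column intervals of maximal T-runs that are
    bordered by N on both sides, i.e. [N, T, ..., T, N] sub-sequences."""
    blocks, j, m = [], 0, len(row)
    while j < m:
        if row[j] == 'T':
            start = j
            while j < m and row[j] == 'T':
                j += 1
            if start > 0 and row[start - 1] == 'N' and j < m and row[j] == 'N':
                blocks.append((start, j))
        else:
            j += 1
    return blocks

def manipulator(sec_rows):
    """sec_rows: list of ((object_a, object_b), relation_row) SEC entries.
    Returns the object with the highest probability p_k and all p_k values;
    objects without any [N, T, ..., T, N] run receive no probability mass."""
    m = len(sec_rows[0][1])
    covered = {}  # object -> set of columns covered by its touching runs
    for (obj_a, obj_b), row in sec_rows:
        for start, end in touch_blocks(row):
            for obj in (obj_a, obj_b):
                covered.setdefault(obj, set()).update(range(start, end))
    probs = {obj: len(cols) / m for obj, cols in covered.items()}
    return max(probs, key=probs.get), probs
```

For example, given the rows (hand, knife) with relations NTNTN and (hand, board) with relations NNTNN, the hand is covered by touching runs in three of the five columns ($p = 0.6$) and is therefore selected.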
For example, the colored blocks in the SEC in Fig.~\ref{fig:samplechainedactionSEC}~(a) indicate where sequences of $[N, T, \cdots, T, N]$, similar to the one given in Eq.~\eqref{ntn_sequence}, are detected. Fig.~\ref{fig:samplechainedactionSEC}~(b) links those blocks to the corresponding objects in the SEC to indicate which object has the longest block, i.e.\@\xspace the highest probability value. For instance, as object number $2$ exists only in the first three rows of the SEC, blocks detected only in these rows will be superimposed and assigned to that object. On the right side of Fig.~\ref{fig:samplechainedactionSEC}~(b), the final probability values computed from Eq.~\eqref{prob_segments} are given. Since the blocks associated with object number $7$ cover the widest temporal stretch along the SEC, it is correctly estimated as the {\it manipulator} by Eq.~\eqref{prob_manipulator}. \subsubsection{Decomposing SECs} \label{sec:decomposingsecs} Following the estimation of the {\it manipulator}, the SEC is ready to be decomposed into shorter segments. The temporal segmentation proceeds by considering the $[N, T, \cdots, T, N]$ sequences that belong to the {\it manipulator}, since any change from $N$ to $T$ and from $T$ to $N$ defines a natural start or end point of a manipulation. It is important to note that we cannot directly treat each $[N, T, \cdots, T, N]$ sequence as a segment due to spurious spatial relations propagated from the noisy segmentation and tracking phases. Therefore, we first apply a low-pass filter to the rows involving the {\it manipulator} and then label each time interval between an $[N, T]$ and a $[T, N]$ change as a potential action segment. Each segment is assigned a confidence value indicating the frequency of the touching relation. Finally, action segments that are encapsulated by others or share a common temporal zone are merged to converge to the final temporal segmentation of the manipulation. 
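These steps, namely detecting candidate touching intervals in a manipulator row, scoring them by the rate of $T$ relations, and merging overlapping segments, can be sketched as follows. The function names are ours, relations are encoded as strings, and the default thresholds follow the values of $0.6$ used in our experiments:

```python
def candidate_segments(row):
    """Label each interval between an [N, T] and a [T, N] change in a
    (smoothed) manipulator row as a potential action segment [start, end)."""
    segments, j, m = [], 0, len(row)
    while j < m:
        if row[j] == 'T' and (j == 0 or row[j - 1] == 'N'):
            start = j
            while j < m and row[j] != 'N':
                j += 1  # tolerate flickering non-N relations inside the run
            segments.append((start, j))
        else:
            j += 1
    return segments

def confidence(row, seg):
    """Confidence value: rate of T relations within the segment."""
    start, end = seg
    return sum(r == 'T' for r in row[start:end]) / (end - start)

def merge(segments, tau_merge=0.6):
    """Fuse segments whose overlap, relative to the shorter one, reaches
    tau_merge; a segment fully covered by another is absorbed as well."""
    segs = sorted(segments)
    out = [segs[0]]
    for s, e in segs[1:]:
        ps, pe = out[-1]
        overlap = max(0, min(pe, e) - max(ps, s))
        if overlap / min(e - s, pe - ps) >= tau_merge:
            out[-1] = (ps, max(pe, e))  # fuse into one longer segment
        else:
            out.append((s, e))
    return out

def decompose(row, tau_conf=0.6, tau_merge=0.6):
    """Keep confident candidate segments and merge overlapping ones."""
    confident = [seg for seg in candidate_segments(row)
                 if confidence(row, seg) >= tau_conf]
    return merge(confident, tau_merge) if confident else []
```

For instance, a manipulator row with relations NTTATTNNTNN yields the two segments $[1, 6)$ and $[8, 9)$, the first with confidence $0.8$ despite a flickering relation in its interior.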
Let $\mathcal{A}$ be the set of action segment candidates, \begin{eqnarray} \mathcal{A} =\{a_{1}, a_{2}, \cdots , a_{l}\} \quad , \end{eqnarray} where each segment $a_{\phi}$ represents the interval between the start and end time points of an $[N, T, \cdots, T, N]$ sequence detected in a row $i$ that includes the {\it manipulator}; written as \begin{eqnarray} a_{\phi}=[t_{i}^{Start} ~~ t_{i}^{End} ) \quad , \end{eqnarray} where $t_{i}^{Start}$ , $t_{i}^{End} \in [ 1,\cdots ,m ] $ and $m$ is the column number of $\xi$ in Eq.~\eqref{sec_matrix}. Due to early vision problems, such as illumination variation or occlusion, noisy flickering spatial relations can occur in any action segment $a_{\phi}$. We therefore measure the rate of $T$ relations in each segment as a confidence value $c_{\phi}$, computed as \begin{eqnarray} c_{\phi}= \frac{\sum^{t_{i}^{End}}_{j=t_{i}^{Start}} \theta_{i,j}}{t_{i}^{End} - t_{i}^{Start} } \label{conf_val} \quad , \end{eqnarray} \begin{eqnarray} \theta_{i,j} = \left\{ \begin{array}{l l} 1 & \quad \text{if ~ $r_{i,j}=T$ }\\ 0 & \quad \text{else}\\ \end{array} \right. \quad , \end{eqnarray} \begin{figure}[!b] \centering \includegraphics[scale=0.6]{Figure_SegmentMerging.pdf} \caption{Merging action segments that share a common temporal field. } \label{fig:segmentmerging} \end{figure} where $r_{i,j}$ represents the spatial relation of the {\it manipulator} in $\xi$ in Eq.~\eqref{sec_matrix}. We then keep only the action segments whose confidence value exceeds a predefined threshold $\tau_{conf}$. The confident segments are further compared to discard those that are completely covered by others. We also merge segments that overlap by more than a threshold $\tau_{merge}$ (see Fig.~\ref{fig:segmentmerging}). Assume that $a_{1}=[t_{i}^{Start} ~~ t_{i}^{End} )$ and $a_{2}=[t_{j}^{Start} ~~ t_{j}^{End} )$ are two segments as given in Fig.~\ref{fig:segmentmerging}. 
If $\frac{|a_{1} \cap a_{2}|}{\min(|a_{1}|,|a_{2}|)} \geq \tau_{merge}$, the two segments are fused, yielding a new segment $a_{new}$ with the extent $[t_{i}^{Start} ~~ t_{j}^{End} )$ as illustrated in Fig.~\ref{fig:segmentmerging}. Note that $\tau_{conf}$ and $\tau_{merge}$ are both chosen as $0.6$ in all our experiments. Returning to the example in Fig.~\ref{fig:samplechainedactionSEC}~(b), we see four candidate action fragments, namely the red, blue, green, and gray blocks of the {\it manipulator}, i.e.\@\xspace object number $7$. However, the gray block is discarded as it is entirely covered by the red one. Thus, the remaining three blocks define the final temporal points at which the manipulation will be cut. Fig.~\ref{fig:samplechainedactionSEC}~(c) illustrates the final temporal segmentation results together with the ground truth defined by a human. Note that the end point of each block in Fig.~\ref{fig:samplechainedactionSEC}~(c) is considered as the beginning of the next consecutive one. Compared to the ground truth, the frame-wise temporal segmentation accuracy of the manipulation sequence in Fig.~\ref{fig:samplechainedactionSEC}~(c) is computed as $96\%$. \subsection{Manipulation Recognition} \label{sec:manipulationrecognition} In the recognition phase, we aim at identifying the type of manipulation performed in each decomposed SEC segment. The recognition process is based on the semantic similarities between the decomposed SEC segments and the pre-learned model SECs (see Section~\ref{sec:learning}). Once the entire event chain is decomposed into smaller units, we first distinguish the {\it primary} and {\it secondary objects} manipulated in each parsed segment. 
Recalling the action descriptive rules introduced in Section~\ref{sec:manipulationdecomposition}, we define the object that interacts most with the {\it manipulator} as the {\it primary object}, and all other objects interacting with the {\it primary object} as the {\it secondary objects}. For instance, the event chain decomposed in Fig.~\ref{fig:samplechainedactionSEC}~(c) has three main pieces, indicated by the red, blue, and green blocks, respectively. In the temporal interval of the red block (between the second and eighth columns of the SEC), object number $2$ (the yellow bucket) is estimated as the {\it primary object} since it has the most touching events with the previously detected {\it manipulator}, i.e.\@\xspace object number $7$. Next, objects $5$ and $8$ (the apple and the red bucket) are found to be {\it secondary objects} because they are the only objects sharing a touching relation with the {\it primary object} within this same temporal interval. All estimated {\it primary} and {\it secondary objects} in each parsed SEC segment are indicated in Fig.~\ref{fig:samplechainedactionSEC}~(c). The main reason for reformulating manipulations in terms of interactions between the {\it manipulator}, {\it primary} and {\it secondary objects} is twofold: First, we attempt to reduce the degree of noise in the decomposed event chain segments. As the action rules described in Section~\ref{sec:manipulationdecomposition} do not allow the {\it manipulator} to interact with any object other than the {\it primary object}, we omit, for instance, the fourth row of the SEC in Fig.~\ref{fig:samplechainedactionSEC}~(a). This is because the {\it manipulator} (object number $7$) is accidentally touching the red bucket (object number 8) as highlighted by the gray block. Details of such high-level de-noising were described in \cite{AksoyRAS2015} to efficiently cope with noisy spatiotemporal information coming from the early vision stage. 
Second, using this assumption we can also detect parallel streams of simultaneous manipulations by exploiting the fact that each manipulation has to have a unique {\it secondary object}. In other words, the detection of multiple {\it secondary objects} may imply either noisy elements in the event chain or the existence of parallel manipulations. Hence, we treat each combination of the {\it manipulator}, {\it primary} and {\it secondary objects} as a separate manipulation hypothesis and choose the one that has the highest semantic similarity with the learned SEC models as the final recognition result. For this we introduce a brute-force combinatorial process which considers all combinations of the estimated {\it secondary objects} (of which there are only a few) together with the {\it manipulator} and {\it primary object} to accurately identify the actually performed manipulations. The total number of combinations can be computed as \begin{eqnarray} \mathcal{C}= \sum^{N}_{k=1} \frac{N!}{k!(N-k)!} \label{combinatorial_hypotheses} \quad , \end{eqnarray} where $N$ is the number of {\it secondary objects} existing in a given SEC segment. We then use these combinations to generate various hypotheses, each corresponding to a set of manipulations. For instance, a hypothesis can represent either a single manipulation (e.g.\@\xspace~ {\it Hiding}) or multiple concurrently performed manipulations, such as {\it Putting} and {\it Pushing}. The crucial rule here is that each hypothesis must cover the entire {\it secondary object} set. \begin{figure}[!t] \centering \includegraphics[scale=0.6]{Figure_Combinatorial_Recognition.pdf} \caption{Detection of parallel manipulation streams in a decomposed SEC segment that has two {\it secondary objects}. {\it M}, {\it P}, and {\it S} stand for the {\it manipulator}, {\it primary} and {\it secondary objects}, respectively. 
As in the example of the SEC segment depicted in the red block in Fig.~\ref{fig:samplechainedactionSEC}~(c) (a sample key frame is also given here on the left), there are two possible hypotheses, each defining a different set of manipulation streams, depicted by unique colors. The recognition score of each stream is given below.} \label{fig:sec_hypothesis} \end{figure} \begin{figure*}[!t] \centering \includegraphics[scale=0.65]{Figure_Combinatorial_Recognition_3.pdf} \caption{The entire hypothesis set estimated in the case of three {\it secondary objects}. {\it M}, {\it P}, and {\it S} stand for the {\it manipulator}, {\it primary} and {\it secondary objects}, respectively.} \label{fig:five_sec_hypotheses} \end{figure*} For instance, the first parsed SEC segment, depicted by the red block in Fig.~\ref{fig:samplechainedactionSEC}~(c), has two {\it secondary objects} (object numbers $5$ and $8$). Fig.~\ref{fig:sec_hypothesis} illustrates the two computed hypotheses, each of which has a different object combination, i.e.\@\xspace set of manipulation streams. The first hypothesis is composed of two separate (parallel) manipulation streams, each utilizing one of the {\it secondary objects}, as indicated by unique colors, whereas the second hypothesis employs both {\it secondary objects} together in one manipulation stream. Unlike the {\it secondary objects}, the {\it manipulator} and {\it primary object} remain the same in both hypotheses. Note that, even though the scene involves many more objects, the number of hypotheses remains small because only those objects that share touching events with the {\it primary object} are considered. Thus, our approach does not suffer from combinatorial explosion. The maximum number of combinations, i.e.\@\xspace $\mathcal{C}$, observed so far in all our experiments is $15$. The entire hypothesis set for the case of three {\it secondary objects} is depicted in Fig.~\ref{fig:five_sec_hypotheses}. 
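This combinatorial step can be made concrete with a short sketch. Note that Eq.~\eqref{combinatorial_hypotheses} sums to $2^{N}-1$, the number of non-empty subsets of the {\it secondary objects}, i.e.\@\xspace candidate streams. Under our reading of the covering requirement, namely that the streams of one hypothesis partition the {\it secondary object} set into disjoint groups, two {\it secondary objects} yield two hypotheses and three yield five, consistent with Figs.~\ref{fig:sec_hypothesis} and \ref{fig:five_sec_hypotheses}. The function names below are ours:

```python
from itertools import combinations
from math import comb

def stream_count(n):
    """C = sum_{k=1}^{n} binom(n, k): number of candidate manipulation
    streams, i.e. non-empty subsets of the secondary objects (= 2^n - 1)."""
    return sum(comb(n, k) for k in range(1, n + 1))

def hypotheses(secondary):
    """Enumerate hypotheses as partitions of the secondary-object set into
    disjoint streams, so that every hypothesis covers the entire set."""
    secondary = list(secondary)
    if not secondary:
        return [[]]
    first, rest = secondary[0], secondary[1:]
    result = []
    for k in range(len(rest) + 1):
        for group in combinations(rest, k):
            stream = (first,) + group  # the stream containing `first`
            remaining = [obj for obj in rest if obj not in group]
            for sub in hypotheses(remaining):
                result.append([stream] + sub)
    return result
```

With $N = 4$, `stream_count` returns $15$, matching the maximum number of combinations observed in our experiments.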
Next, we extract and smooth the corresponding event chains of all possible streams in each hypothesis and compare them semantically with the SEC models of atomic manipulations that have previously been learned and stored in the library. In the semantic comparison process (see Section~\ref{sec:learning}), we introduce a similarity threshold $\tau_{sem}$ to decide whether two event chains belong to the same manipulation type. If the semantic similarity to all known SEC models is too low, the respective event chain is assigned to {\it Unknown}. Unless otherwise stated, we keep the threshold value $\tau_{sem}$ at $72\%$, which was found in an on-line unsupervised manner as introduced in \citep{AksoyRAS2015}. The manipulation recognition task then reduces to computing the best hypothesis, i.e.\@\xspace the one with the highest semantic similarity to the individual models in the library. Note that if a hypothesis has multiple streams, the mean similarity value is used to compare it with the other hypotheses. Fig.~\ref{fig:sec_hypothesis} shows at the bottom the final similarity scores of each hypothesis. As the first hypothesis has a much higher recognition score, our proposed approach correctly returns two parallel manipulation streams: {\it Taking~Down} the yellow bucket (object 2) from the red bucket (object 8) while {\it Uncovering} the green apple (object 5). Fig.~\ref{fig:samplechainedactionSEC}~(d) illustrates the final manipulation types recognized in each decomposed SEC segment for the manipulation sequence depicted in Fig.~\ref{fig:samplechainedactionSEC}~(a). \section{Results} \label{sec:results} In this section, we provide experimental results of our proposed temporal segmentation and recognition method on different datasets. 
We first start benchmarking with our large Manipulation Action (ManiAc) dataset \citep{AksoyRAS2015} and then continue with the recently published Manipulation Action Consequences (MAC) dataset \citep{Yang13}. The next section covers the two-hand manipulations in the Multiple Object Tracking (MOT) dataset \citep{Koo14}. A later section shows baseline experiments conducted on these three datasets. \red{ In the very last section we apply our SEC-based method to the MPII Cooking Activities dataset \citep{Rohrbach12}, which involves long and parallel actions recorded as RGB-only image streams.} \begin{figure}[!t] \centering \includegraphics[scale=0.45]{Figure_Actions_Training.pdf} \caption{The ManiAc dataset with eight different single manipulation scenarios: {\it Pushing}, {\it Hiding}, {\it Putting}, {\it Stirring}, {\it Cutting}, {\it Chopping}, {\it Taking}, and {\it Uncovering}.} \label{fig:trainingactions} \centering \includegraphics[scale=0.6]{Figure_Actions_Training_CuttingVersion.pdf} \caption{Sample images from the ManiAc dataset. In the green frame, a sample image from each demonstration of the {\it Cutting} action, performed by 5 different individuals, is given. The blue frame depicts the 30 different objects manipulated across all 120 manipulation demonstrations.} \label{fig:training_cutting_versions} \end{figure} \subsection{Manipulation Action (ManiAc) Dataset} \label{sec:maniacdataset} The ManiAc dataset, introduced in our previous work \citep{AksoyRAS2015}, covers eight different single atomic manipulation actions: {\it Pushing}, {\it Hiding}, {\it Putting}, {\it Stirring}, {\it Cutting}, {\it Chopping}, {\it Taking}, and {\it Uncovering}. The complete dataset is publicly available at \url{www.dpi.physik.uni-goettingen.de/~eaksoye/MANIAC_DATASET}. Fig.~\ref{fig:trainingactions} shows sample frames from each action type. 
In the dataset, each manipulation has 15 different versions demonstrated by 5 different individuals using in total 30 different objects. Fig.~\ref{fig:training_cutting_versions} depicts all objects used in the dataset and some sample frames from the {\it Cutting} action demonstrated by the $5$ different subjects. We use this dataset, consisting of 120 single demonstrations in total, to create a vocabulary of atomic manipulations by learning the semantics of actions, i.e.\@\xspace the model SECs, as described in Section~\ref{sec:learning}. For this, all $15$ demonstrations of each of the $8$ single atomic manipulations were employed in batch mode to derive a SEC model for each manipulation type. The $8$ learned models were then stored in the SEC library for further use in the monitoring stage. The ManiAc dataset additionally provides $20$ long and complex chained manipulation sequences, such as ``{\it making a sandwich}", ``{\it preparing a breakfast}", ``{\it pouring and stirring milk}", or ``{\it cutting and moving a piece of bread}". These chained sequences contain in total $103$ instances of the $8$ learned single atomic manipulations as well as some novel tasks, such as {\it Pouring}. All these chained manipulations were performed in different orders, either sequentially or in parallel, with novel objects in various scene contexts to make the temporal segmentation and recognition steps more challenging. Fig.~\ref{fig:testchainedactions} shows sample frames with tracked objects and scene graphs from four different chained manipulations to give an impression of the differences in the demonstrated long scenarios. The figure shows that scenes can be cluttered and that graphs can include more nodes, some of which occasionally change labels due to occlusion problems. 
Even in such problematic cases, by applying the proposed top-down semantic reasoning we can not only decompose and recognize the performed parallel or serial manipulation streams, but also extract the manipulated objects by solely considering the semantics {\it (essence)} of the manipulations. \begin{figure}[!b] \centering \includegraphics[scale=0.5]{Figure_Actions_Testing.pdf} \caption{Sample frames with respective image segments and scene graphs from four different long chained manipulation sequences in the ManiAc dataset. In the red, green, blue, and yellow frames subjects are performing different tasks, namely ``{\it making a sandwich}", ``{\it preparing a breakfast}", ``{\it pouring and stirring milk}", and ``{\it cutting and moving a piece of bread}". } \label{fig:testchainedactions} \end{figure} All manipulations in this dataset were recorded with a single Microsoft Kinect sensor, which provides both color and depth image sequences. Note that colored objects were preferred to cope with the intrinsic limitations of the Kinect device. \begin{figure*}[!t] \centering \includegraphics[scale=0.6]{Figure_Decomposition_Results.pdf} \caption{Manipulation temporal segmentation accuracy together with the true and false positive recognition rates of the detected manipulations in the $20$ chained long sequences of the ManiAc dataset.} \label{fig:decompositionaccuracy} \end{figure*} In the first stage, we analyzed the temporal segmentation accuracy of the $20$ chained manipulation sequences in the ManiAc dataset. As described in Section~\ref{sec:manipulationdecomposition}, the temporal segmentation process is bootstrapped by the estimation of the main {\it manipulator} in the scenario. We achieved a $100\%$ correct {\it manipulator} estimation rate in all chained sequences. This high precision in turn leads to robust semantic temporal segmentation of the manipulations. 
Frame-wise temporal segmentation accuracy was next computed by comparing our results with the human-defined ground truth. The blue bars in Fig.~\ref{fig:decompositionaccuracy} indicate the final temporal segmentation rates computed for each chained sequence. We obtained a $91\%$ mean temporal segmentation accuracy over all $20$ sequences. The small deviation from the ground truth stems from the noisy segmentation and tracking information, which delays the detection of the spatial relations (i.e.\@\xspace touching events) between the {\it manipulator} and the {\it primary objects} in the scene. Such delays, however, do not corrupt the recognition phase, as described below. \begin{figure*}[!b] \centering \includegraphics[scale=0.75]{Figure_Recognition_Results.pdf} \caption{Confusion matrix showing (a) the manipulation recognition accuracies of the tested manipulation types embedded in the $20$ chained sequences and (b) the usage rate of different {\it primary objects} manipulated in the known manipulation types.} \label{fig:recognitionaccuracy} \end{figure*} After the temporal segmentation process, we evaluated the recognition rates of the sequential and parallel manipulation streams in each chained sequence. As pointed out in Section~\ref{sec:manipulationrecognition}, the recognition process is essentially based on the prediction of the {\it primary} and {\it secondary objects}, followed by the comparison of the decomposed manipulation streams with the learned SEC models stored in the library. The green and red bars in Fig.~\ref{fig:decompositionaccuracy} depict the true and false positive recognition rates of the detected manipulation streams in each chained sequence when compared with the $8$ learned SEC models. We computed the mean true and false positive rates as $80\%$ and $6\%$, respectively. 
There are two reasonable explanations for the slight drop in true positives and the relatively small false positive rate in some sequences. First, the novel manipulation types (e.g.\@\xspace~{\it Pouring}) in the chained sequences had not previously been learned as SEC models. The proposed framework treats such novel manipulations as {\it Unknown} if their semantic similarities with the already known models are below the learned threshold $\tau_{sem}$ (introduced as $72\%$ in Section~\ref{sec:manipulationrecognition}). Therefore, the framework exhibits a true positive rate lower than $100\%$. The second, and most important, factor is that the learned {\it Cutting} and {\it Chopping} models are semantically similar and are thus merged in the recognition phase. This is because both manipulations have the same fundamental action primitives, i.e.\@\xspace similar columns in the event chains, and the only differences lie mostly in the trajectories followed and the velocities of the movements, which are not captured by SECs. Hence, the framework naturally merges these two manipulation types, which leads to an increase in the false positive rates. This result is fully compatible with our previous findings in \cite{AksoyRAS2015}, in which SEC models were learned on-line without any human intervention. The on-line learning framework in \cite{AksoyRAS2015} retrieved one single SEC model by naturally merging the {\it Cutting} and {\it Chopping} manipulation samples after detecting a high semantic similarity between the two types. Note that there is no ``ultimate truth" with respect to what one might call semantically similar (or dissimilar) actions. We discuss these conceptual problems at great length in the Discussion section. Here, we note that all our recognition results are intrinsically consistent and robust. 
\begin{figure*}[!t] \centering \includegraphics[scale=0.72]{Figure_Decomposition_Image.pdf} \caption{Automatic temporal segmentation and recognition results of the $20$ chained manipulation sequences versus human-labeled ground truth. The action segments are color coded. Black frames indicate the border of each manipulation stream. Gray color represents the {\it Idle} actions in which the {\it manipulator} is not interacting with any object while switching from one manipulation to the next.} \label{fig:decompositionvsgtruth} \end{figure*} To quantitatively evaluate this, Fig.~\ref{fig:recognitionaccuracy}~(a) shows a confusion matrix depicting the recognition accuracies of the $103$ tested manipulation samples, contained in the $20$ chained sequences, with respect to the $8$ learned SEC models. The first impression the figure conveys is that the tested {\it Cutting} samples are naturally interpreted as {\it Chopping} for the reason discussed above. However, no other confusion occurs, except for an observed $10\%$ misclassification rate for the {\it Pushing} action. It is also interesting to note that the novel {\it Pouring} manipulations demonstrated in the chained sequences were never confused with any of the known SEC models owing to their distinct semantics, and were thus always classified as {\it Unknown}. Fig.~\ref{fig:decompositionvsgtruth} displays the final temporal segmentation and recognition results of the chained sequences together with the human-labeled ground truth. This side-by-side comparison shows that the system successfully handled even the complicated parallel manipulation streams. Note that the lengths of the chained sequences are normalized for the sake of clarity in the display. \begin{table*}[!b] \centering \caption{Action detection rates with and without considering the role of \textit{secondary objects}. The first row identifies the $20$ chained actions in the ManiAc dataset. 
The second and third rows respectively indicate the total number of single and parallel atomic actions in each chained action. $TP^{+}$ and $FP^{+}$ represent \textit{True Positive} and \textit{False Positive} rates when \textit{secondary objects} are considered, whereas $TP^{-}$ and $FP^{-}$ show the results when \textit{secondary objects} are ignored. } \begin{center} \scalebox{0.6}{ \begin{tabular}{ |l||*{19}{c|}c||c| } \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 & 20 & Total\\ \hline \textit{Single} & 4 & 3 & 1 & 6 & 5 & 1 & 2 & 2 & 2 & 1 & 3 & 2 & 2 & 3 & 4 & 2 & 0 & 5 & 2 & 4 & 54\\ \textit{Parallel} & 0 & 0 & 2 & 0 & 3 & 2 & 2 & 2 & 0 & 3 & 0 & 0 & 0 & 0 & 0 & 6 & 10 & 8 & 11 & 0 & 49\\ \hline $TP^{+}$ & $100\%$ & $100\%$ & $100\%$ & $83\%$ & $100\%$ & $100\%$ & $100\%$ & $75\%$ & $50\%$ & $75\%$ & $33\%$ & $100\%$ & $50\%$ & $67\%$ & $75\%$ & $87\%$ & $60\%$ & $77\%$ & $92\%$ & $75\%$ & $\textbf{80}\%$\\ $TP^{-}$ & $100\%$ & $100\%$ & $33\%$ & $83\%$ & $62\%$ & $33\%$ & $50\%$ & $0\%$ & $50\%$ & $0\%$ & $33\%$ & $100\%$ & $0\%$ & $67\%$ & $50\%$ & $12\%$ & $0\%$ & $23\%$ & $15\%$ & $50\%$ & $43\%$\\ \hline $FP^{+}$ & $0\%$ & $0\%$ & $0\%$ & $17\%$ & $0\%$ & $0\%$ & $0\%$ & $0\%$ & $0\%$ & $0\%$ & $33\%$ & $0\%$ & $0\%$ & $33\%$ & $0\%$ & $12\%$ & $0\%$ & $8\%$ & $8\%$ & $0\%$ & $\textbf{5.5}\%$\\ $FP^{-}$ & $0\%$ & $0\%$ & $0\%$ & $17\%$ & $0\%$ & $0\%$ & $0\%$ & $0\%$ & $0\%$ & $0\%$ & $33\%$ & $0\%$ & $0\%$ & $33\%$ & $0\%$ & $0\%$ & $0\%$ & $8\%$ & $0\%$ & $0\%$ & $4.5\%$\\ \hline \end{tabular} } \end{center} \label{tab:secobjcontribution} \end{table*} As also addressed in Section~\ref{sec:manipulationrecognition}, we can correctly derive the {\it primary} and {\it secondary objects} involved in the perceived manipulations without requiring any object recognition framework. Fig.~\ref{fig:recognitionaccuracy}~(b) indicates the estimated {\it primary object} types that were frequently manipulated in the detected manipulation samples from the $20$ chained sequences. For instance, {\it Spoon} was the only object type primarily employed in the detected {\it Stirring} manipulations, whereas {\it Knife} and {\it Cleaver} were heavily preferred in the {\it Cutting} and {\it Chopping} tasks. 
On the other hand, other object types such as {\it Bread}, {\it Cheese}, and {\it Salami} were utilized in the {\it Taking}, {\it Putting}, and {\it Hiding} samples. This is because scenarios such as ``{\it making a sandwich}'' or ``{\it preparing a breakfast}'' required taking cheese or bread slices and putting them on top of each other, which naturally resulted in the disappearance of some objects, correctly interpreted as hiding. These findings verify that the proposed framework can also automatically discover the link between actions and objects. Results on the likewise estimated {\it secondary objects} are omitted here for brevity. Taken together, this clearly demonstrates (Fig.~\ref{fig:recognitionaccuracy}) a very high recognition rate for actions and objects in the long and complex ManiAc dataset, and the next section shows that this also holds for other data. \begin{figure*}[!t] \centering \includegraphics[scale=0.6]{Figure_MAC_Actions.pdf} \caption{Sample frames from the four different manipulation categories in the MAC dataset.} \label{fig:mac_actions} \end{figure*} \begin{figure*}[!b] \centering \includegraphics[scale=0.78]{Figure_MAC_Results.pdf} \caption{Experimental results from the MAC dataset. (a) Manipulation temporal segmentation accuracies. (b) Recognition accuracies of the decomposed manipulations with respect to the same $8$ SEC models learned from the ManiAc dataset. } \label{fig:mac_results} \end{figure*} In order to address the main contribution of the \textit{secondary object} estimation, we repeated the detection of parallel actions without taking the role of \textit{secondary objects} into account. Table~\ref{tab:secobjcontribution} shows the total number of single and parallel actions embedded in the $20$ chained actions together with the average true and false positive rates in the cases of including and excluding the role of \textit{secondary objects}.
We observe an $80\%$ true positive rate ($TP^{+}$) once \textit{secondary objects} are estimated as proposed in section~\ref{sec:manipulationrecognition}. On the other hand, when the exploration of \textit{secondary objects} is omitted, the overall average accuracy (mean of true positives, i.e.\@\xspace $TP^{-}$) drops to $43\%$ as depicted in the very last column in Table~\ref{tab:secobjcontribution}. The main drop in accuracy occurs particularly in cases with parallel action streams. For instance, activity numbers $16$ and $17$ in Table~\ref{tab:secobjcontribution} involve more parallel actions; hence, the rates of correctly detected actions drop dramatically. If the manipulation activity is composed of only single atomic actions, e.g.\@\xspace the first and second chained actions, the average accuracy remains the same. These results support the claim that \textit{secondary objects} play a crucial role only in detecting parallel actions. It is also important to note that the rate of false positives ($FP^{-}$) slightly decreases to $4.5\%$ from $5.5\%$. This small drop is an important finding: it indicates that parallel actions are treated as {\it Unknown} rather than being misclassified once \textit{secondary objects} are neglected. This also reveals the robustness of our action recognition method. \subsection{Manipulation Action Consequences (MAC) Dataset} \label{sec:macdataset} The recently published MAC dataset \citep{Yang13} contains in total $24$ manipulation demonstrations categorized under four different action types: {\it ASSEMBLE}, {\it TRANSFER}, {\it DIVIDE}, and {\it DEFORM}. Each category has $6$ different samples which were recorded with either a single Kinect device or a regular RGB camera. These $24$ demonstrations consist of a total of $31$ single atomic manipulations, some of which were presented as chained sequences with sequential and parallel manipulation streams (e.g.\@\xspace~{\it Making a sandwich}).
Fig.~\ref{fig:mac_actions} displays sample images from various scenarios in each manipulation category to indicate the level of differences between the demonstrated tasks. \begin{figure*}[!t] \centering \includegraphics[scale=0.8]{Figure_KOREAN_Pushing.pdf} \caption{The event chain representation for the two-hand {\it Pushing} action sequence in the MOT dataset. The blue and red blocks in the SEC highlight decomposed {\it pushing} actions performed by the left and right hands, respectively. On the top, sample original images with the respective objects (colored regions) and main graphs are displayed. } \label{fig:mot_pushing} \end{figure*} Since the MAC dataset is very problematic due to missing depth information and huge changes in hand position between consecutive frames (i.e.\@\xspace a frame dropping problem), we bypassed the segmentation and tracking phases and created event chains in a supervised manner using human expertise. One of the biggest advantages of our proposed semantic segmentation and recognition framework is that the cognitive system does not require an additional, exhaustive training period to test this novel dataset, since the semantics always captures the same essence of the manipulations, which does not dramatically alter with new manipulation observations. This allows us to employ the same SEC models, learned from the ManiAc dataset, in order to recognize the decomposed manipulations in the MAC dataset. Fig.~\ref{fig:mac_results}~(a) depicts the temporal segmentation results of the $24$ manipulations in the MAC dataset. The mean temporal segmentation accuracy was computed as $81\%$ over all $24$ sequences. Because some manipulation samples in the MAC dataset do not adhere to the action descriptive rules introduced in section~\ref{sec:manipulationdecomposition}, the temporal segmentation accuracy dramatically dropped, for example, for manipulation sample number $12$.
This action consists of a box rotating on a turntable as depicted by the last frame in Fig.~\ref{fig:mac_actions}. This is not a manipulation in any sense and, thus, this action is outside the scope of this paper. Therefore, our proposed framework treats these types of actions as {\it Unknown}. Fig.~\ref{fig:mac_results}~(b) displays the final recognition success of all $31$ decomposed atomic manipulations embedded in the $24$ demonstrations. The novel manipulation types, such as {\it Pouring}, {\it Opening}, and {\it Closing}, were all correctly treated as {\it Unknown} due to their unique and distinct semantics compared to the previously learned $8$ SEC models. On the other hand, the {\it Painting} sample, which does not exist among the learned SEC models, was interpreted as {\it Putting}. This is entirely correct reasoning, since the paint can be treated as an object that is being put onto other objects. (Again we point to the discussion section for an account of ``semantic similarities''.) Although almost all manipulations were recognized with high accuracy, the {\it Pushing} and {\it Cutting} samples were misinterpreted. \subsection{Multiple Object Tracking (MOT) Dataset} \label{sec:mot_dataset} The MOT dataset was recently introduced in \cite{Koo14} to investigate the multiple object tracking problem by applying dynamically updated object models without employing any prior knowledge. The dataset consists of $8$ different scenarios with a total of $23$ atomic actions. The scenarios demonstrate both single- and two-hand chained manipulation sequences, such as {\it Pushing}, {\it Stacking}, {\it Unstacking}, and {\it Occluding}. Figs.~\ref{fig:mot_pushing} and~\ref{fig:mot_occlusion} display sample frames from the {\it Pushing} and {\it Occluding} scenarios.
Since the MOT dataset already provides the segmented scene configuration in $3D$, we bypassed our image segmentation and tracking step (section~\ref{sec:segmentationandtracking}) and started directly with the event chain extraction. To cope with multiple hands in the manipulation, we biased the {\it manipulator} estimation step (section~\ref{sec:extractingmanipulator}) with the actual number of hands in the scene. Given $K$ as the number of {\it manipulators}, we computed the probability values from Eq.~\eqref{prob_segments} for all possible combinations of objects, i.e.\@\xspace graph nodes in the SEC, taken $K$ at a time. The object combination with the highest probability value, i.e.\@\xspace the longest $[N, T, \cdots, T, N]$ sequence, was then considered as representing the {\it manipulator}. Fig.~\ref{fig:mot_pushing} illustrates the two-hand {\it Pushing} action sequence from the MOT dataset together with the extracted SEC representation and some sample key frames. Object numbers $1$ and $2$ were correctly estimated as the main {\it manipulators}, and the extracted event chain was accordingly broken up into pieces, each of which is highlighted with the red and blue blocks in Fig.~\ref{fig:mot_pushing}. \begin{figure}[!b] \centering \includegraphics[scale=0.67]{Figure_KOREAN_Results.pdf} \caption{Experimental results from the MOT dataset. (a) Manipulation temporal segmentation accuracies. (b) Recognition accuracies of the decomposed manipulations with respect to the same $8$ SEC models learned from the ManiAc dataset. } \label{fig:mot_results} \end{figure} \begin{figure*}[!t] \centering \includegraphics[scale=0.8]{Figure_KOREAN_Permanence.pdf} \caption{Two-hand {\it Occluding} actions from the MOT dataset. In the red frame, object labels of the occluded and replaced objects change, which leads to failures in the temporal segmentation phase.
In the blue frame, both hands are occluding objects without carrying out any specific task; hence, {\it occluding} actions were recognized as {\it Unknown}.} \label{fig:mot_occlusion} \end{figure*} Fig.~\ref{fig:mot_results}~(a) depicts the final temporal segmentation accuracies for each of the $8$ sequences in the MOT dataset. The drop in the second manipulation sample is due to the inconsistencies in the tracking phase, i.e.\@\xspace due to a {\it segment discontinuity} problem. In this second scenario, for instance, two hands were moving objects around after occluding them completely. Each time the hands withdrew, the replaced objects emerged with novel object labels as illustrated in the red frame in Fig.~\ref{fig:mot_occlusion}. As the object labels changed unpredictably, the event chain representation could not capture any sequence of $[N, T, \cdots, T, N]$ relations, required for the manipulator estimation as described in section~\ref{sec:extractingmanipulator}, which led to failures in the temporal segmentation phase. If we exclude this second manipulation sample, the mean temporal segmentation accuracy reaches $83\%$. This result underlines the fact that our proposed semantic segmentation method is also suitable for manipulations with multiple {\it manipulators} as long as the scene objects are consistently trackable. Fig.~\ref{fig:mot_results}~(b) depicts the recognition accuracies of all $23$ atomic actions performed either sequentially or in parallel in the $8$ different scenarios. As in the case of the MAC dataset, we here again employed the SEC models, learned from the ManiAc dataset, during the process of manipulation recognition. In the monitoring stage, some versions of the {\it Pushing} manipulations were missed because of the same {\it segment discontinuity} problem pointed out above.
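The combinatorial {\it manipulator} estimation used above for multi-hand scenes can be sketched as follows. This is only a minimal illustration: the longest $[N, T, \cdots, T, N]$ run length stands in for the probability values of Eq.~\eqref{prob_segments}, and the function names are ours:

```python
from itertools import combinations

def ntn_score(relations):
    """Length of the longest [N, T, ..., T, N] run in one SEC row;
    a simplified stand-in for the probability of Eq. (prob_segments)."""
    best, i = 0, 0
    while i < len(relations):
        if relations[i] == 'N':
            j = i + 1
            while j < len(relations) and relations[j] == 'T':
                j += 1
            # a valid run needs at least one 'T' between the two 'N's
            if j < len(relations) and relations[j] == 'N' and j > i + 1:
                best = max(best, j - i + 1)
            i = j
        else:
            i += 1
    return best

def estimate_manipulators(sec_rows, k):
    """sec_rows: {object_id: ['N', 'T', 'T', 'N', ...]} relations over time;
    returns the k-object combination with the best joint N-T...T-N score."""
    return max(combinations(sec_rows, k),
               key=lambda combo: sum(ntn_score(sec_rows[o]) for o in combo))
```

For a two-hand scene one would call `estimate_manipulators(rows, 2)`, mirroring the biasing of the estimation step with the actual number of hands.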
On the other hand, all versions of the {\it Occluding} demonstrations were correctly recognized as {\it Unknown} since in such actions the hands were indeed not performing any particular task on the objects, but were rather performing random movements (to verify the stability in the segmentation). Thus, hands and objects were here not interacting to issue any {\it touching} event required by the action descriptive rules summarized by Eq.~\eqref{ntn_sequence}. The blue frame in Fig.~\ref{fig:mot_occlusion} depicts sample frames from a version of the {\it Occluding} scenario, in which two hands are randomly moving above the objects without aiming at any specific task. Results derived from the MOT dataset consequently confirm the scalability of our proposed temporal segmentation and recognition framework to multi-hand manipulation actions, even with the flexibility of replacing some submodules, such as the segmentation and tracking method. \subsection{Baseline Experiments} \label{sec:baselineexperiments} In our baseline experiments, we used appearance- and trajectory-based state-of-the-art action descriptors, such as Space-Time Interest Points (STIP), Dense Trajectories (DT) and Improved Dense Trajectories (IDT). STIPs described in \citep{STIP2005} are local image points around which the image values exhibit significant structural variations in both space and time domains. DT was introduced in \citep{Wang2011} and computes various dense features such as static appearance information, local motion information and relative motion between pixels. IDT \citep{Wang2013} extends DT by taking camera motion into account to remove false trajectories consistent with it. In all our baseline experiments with STIP, DT and IDT we used the default parameters coming with the publicly available source code. We computed Fisher vectors together with Gaussian Mixture Models in order to create the visual vocabulary from STIP features.
In the case of DT and IDT descriptors, we implemented a standard bag-of-features representation to construct a codebook. We clustered extracted DT and IDT features using K-means into a codebook of $400$ words, which is the same size as the STIP feature vocabulary. Detected DT and IDT descriptors were then assigned to their closest word in the codebook by considering the Euclidean distance. All detected STIP, DT, and IDT action descriptors were then passed to Support Vector Machines (SVM) in a one-versus-all fashion. It is here important to note that we performed several tests on various codebook sizes with different SVM kernels (e.g.\@\xspace linear and Chi2) and reported the highest results out of these experimental evaluations. Fig.~\ref{fig:baseline_features} illustrates detected STIP, DT, and IDT features on some sample frames from different datasets. \begin{figure}[!b] \centering \includegraphics[scale=0.6]{Figure_Baseline.png} \caption{Sample images with detected STIP, DT and IDT features. } \label{fig:baseline_features} \end{figure} \begin{table*}[!b] \centering \caption{Classification performance comparison among different methods on three different datasets.
Pr, Rc, and FS stand for Precision, Recall, and F-Score, respectively.} \begin{center} \scalebox{0.75}{ \begin{tabular}{ |p{5.5cm}||p{1.4cm}|p{1.4cm}|p{1.4cm}|| p{1.4cm}|p{1.4cm}|p{1.4cm}||p{1.4cm}|p{1.4cm}|p{1.4cm}| } \hline & \multicolumn{3}{|c||}{ManiAc \citep{AksoyRAS2015}} & \multicolumn{3}{|c||}{MAC \citep{Yang13}} & \multicolumn{3}{|c|}{MOT \citep{Koo14}} \\ \hline & \centering Pr& \centering Rc& \centering FS & \centering Pr & \centering Rc & \centering FS & \centering Pr & \centering Rc & $\centering$ FS \\ \hline STIP \citep{STIP2005} &\centering $20.6\%$ &\centering $31.8\%$ &\centering $25.0\%$ &\centering $36.6\%$ &\centering $24.0\%$ &\centering $29.0\%$ &\centering $46.6\%$ &\centering $31.6\%$ & $37.7\%$ \\ DT \citep{Wang2011} &\centering $10.1\%$ &\centering $4.7\%$ &\centering $6.4 \%$ &\centering $12.5\%$ &\centering $20.8\%$ &\centering $15.6\%$ &\centering $59.1\%$ &\centering $57.8\%$ & $58.4\%$ \\ IDT \citep{Wang2013} &\centering $13.7\%$ &\centering $14.0\%$ &\centering $13.9\%$ &\centering $12.5\%$ &\centering $20.0\%$ &\centering $15.3\%$ &\centering $18.3\%$ &\centering $17.2\%$ & $17.7\%$ \\ SECs (\textit{Ours}) &\centering $\textbf{91.8\%}$ &\centering $\textbf{91.0\%}$ &\centering $\textbf{91.4\%}$ &\centering $\textbf{52.6\%}$ &\centering $\textbf{53.1\%}$ &\centering $\textbf{52.8\%}$ &\centering $\textbf{93.3\%}$ &\centering $\textbf{87.5\%}$ & $\textbf{90.3\%}$ \\ \hline \end{tabular} } \end{center} \label{tab:baselineresults} \end{table*} In our first baseline experimental set up, we investigated the discriminative power of these local visual descriptors, i.e.\@\xspace STIP, DT and IDT, and compared with our SEC based approach. For this task, we used $120$ demonstrations of $8$ different atomic action types provided in the ManiAc dataset (see Section~\ref{sec:maniacdataset}). Since each action class has $15$ different versions, we employed the first $10$ samples for training and the rest for testing. 
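The bag-of-features pipeline used for the DT and IDT baselines (K-means codebook plus nearest-word assignment by Euclidean distance) can be sketched as follows. This is a minimal NumPy stand-in for illustration only, not the actual baseline implementation; the function names are ours, and the real experiments used $400$-word codebooks:

```python
import numpy as np

def build_codebook(descriptors, n_words, n_iter=20, seed=0):
    """Cluster local descriptors (e.g. DT/IDT features) into a visual
    vocabulary with plain K-means."""
    X = np.asarray(descriptors, dtype=float)
    rng = np.random.default_rng(seed)
    words = X[rng.choice(len(X), n_words, replace=False)].copy()
    for _ in range(n_iter):
        # assign every descriptor to its closest word (Euclidean distance)
        dists = np.linalg.norm(X[:, None, :] - words[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for w in range(n_words):
            members = X[labels == w]
            if len(members):  # keep empty words unchanged
                words[w] = members.mean(axis=0)
    return words

def bag_of_features(descriptors, words):
    """L1-normalized histogram of nearest-word assignments; one such
    histogram per video is what gets passed to the SVMs."""
    X = np.asarray(descriptors, dtype=float)
    dists = np.linalg.norm(X[:, None, :] - words[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(words)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length histograms are what a one-versus-all SVM can then be trained on.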
A separate SVM classifier was defined for each class type. Fig.~\ref{fig:baseline_validation} presents per-class test scores after performing the same training procedure for all four methods. Here, each bar in the plot shows the classification accuracy ($\%$), i.e.\@\xspace the number of true positive predictions out of the total number of tested samples. This first experiment shows that local visual descriptors perform poorly on some specific action types, such as \textit{Chopping}, whereas our proposed SEC method has the highest score for all action types. Over eight classes we obtained a $92.5\%$ average classification rate for the SEC method, whereas it was $37.5\%$, $37.5\%$, and $40\%$ for the STIP, DT, and IDT methods, respectively. \begin{figure}[!t] \centering \includegraphics[scale=0.2]{Figure_SEC_STIP_DT_IDT_Validation_10_5.png} \caption{Classification accuracies of SEC, STIP, DT, and IDT based methods on the ManiAc dataset. } \label{fig:baseline_validation} \end{figure} Next, we would like to address the problem of transferring the learned codebooks, i.e.\@\xspace visual vocabularies, across different datasets. In this task, we trained the same four classifiers, i.e.\@\xspace SEC, STIP, DT, and IDT based approaches, with all $15$ versions provided for each action type in the ManiAc dataset. In the testing phase, we first used the $20$ long action sequences coming with the ManiAc dataset (see Section~\ref{sec:maniacdataset}). We then measured the performance of all these already trained classifiers on the MAC and MOT datasets described in Section~\ref{sec:macdataset} and Section~\ref{sec:mot_dataset}, respectively. We here note that these experiments essentially assess the action recognition power of the state-of-the-art methods, which cannot perform temporal action segmentation.
Hence, we provided manually segmented actions for the STIP, DT and IDT methods, whereas we let our SEC framework automatically segment all these long activities as explained in Section~\ref{sec:decomposingsecs}. In the computation of precision and recall values in all methods, we treated \textit{Cutting} and \textit{Chopping} actions as type-similar due to their highly similar semantic structures. Different from conventional methods, in the SEC approach we also introduced an \textit{Unknown} class to assess zero-shot action recognition performance for novel action types. Table~\ref{tab:baselineresults} shows the average classification performance comparison among the aforementioned methods on three datasets. We again obtained quite low performances with the STIP, DT and IDT based classifiers. Our proposed SEC method has the highest precision and recall values and outperforms these state-of-the-art methods. This empirical result shows that conventional approaches lack generalization capability. Both baseline experiments indicated in Fig.~\ref{fig:baseline_validation} and Table~\ref{tab:baselineresults} demonstrate that employing the semantic information yields more accurate recognition performance compared to what the conventional approaches can achieve. This is likely because such conventional action descriptors depend heavily on the scene context and the followed trajectory patterns, which can vary greatly from one demonstration to another, as is the case for the datasets benchmarked here. For instance, as depicted in Fig.~\ref{fig:training_cutting_versions}, each of the $8$ atomic actions in the ManiAc dataset was performed by five different persons, each of whom followed different trajectories using several objects in different scene contexts. This indeed poses a challenge for the recognition task.
Furthermore, in the long activities coming with all three datasets, individual manipulation streams were performed either sequentially or concurrently at varying speeds in more cluttered scenes (see Fig.~\ref{fig:testchainedactions}). Thus, optical flow features differ significantly even for the same activity. As also reported in several other papers \cite{Li_2015_CVPR,Fathi2011}, local spatio-temporal features required by conventional methods are often captured at locations irrelevant to a performed action due to wrongly computed optical flow information. In this regard, it is more likely that the conventional models received vastly different and sparse optical flow signals from the trained and tested action sets. Unlike our SEC approach, traditional action classifiers cannot detect overlapping actions. Those missed actions also lead to significant drops in the final accuracy computation. Another reason for the low performance of conventional methods is that the test set involved novel action types, such as \textit{Pouring} and \textit{Painting} (see Fig.~\ref{fig:testchainedactions} and Fig.~\ref{fig:mac_actions}), for which we lack pre-trained action models, in order to test zero-shot action recognition performance. However, the main aim of these conventional action descriptors is to capture descriptive key features that are relevant to trained actions, which are in general short actions. Therefore, those unseen actions were misclassified by these methods. \begin{figure*}[!b] \centering \includegraphics[scale=0.6]{Figure_MPII_DecompositionImage.png} \caption{\red{Temporal segmentation result for a sample MPII cooking activity. The action segments are color coded. Bounding boxes on each frame represent the tracking results.}} \label{fig:mpIIDecompImage} \end{figure*} Consequently, conventional feature extractors are essentially based on appearance and motion in certain space-time volumes (intervals).
Most of those techniques compute local space-time gradients. As the baseline experimental results highlight, such approaches are not sufficient for modeling variations even in the same action type. Different from these unified descriptors, the SEC framework presented here can handle variations in the performed actions and detect untrained action types as \textit{Unknown}, as depicted in Figs.~\ref{fig:recognitionaccuracy},~\ref{fig:mac_results}, and~\ref{fig:mot_results}. The data presented in Fig.~\ref{fig:baseline_validation} and Table~\ref{tab:baselineresults} are clear indications of the scalability and stability of our proposed SEC approach. Furthermore, Table~\ref{tab:baselineresults} clearly supports the generalization capacity and the transferability of the learned SEC representations. In contrast to the state-of-the-art action descriptors, the proposed SEC approach can also automatically execute temporal action segmentation and additionally categorize manipulated objects according to their performed roles, all in the same coherent framework. \red{\subsection{MPII Cooking Activities Dataset}} \label{sec:mpiidataset} \red{ The MPII Cooking Activities dataset, introduced in \citep{Rohrbach12}, contains videos of different activities in the real-world cooking domain. Although this dataset has long and parallel action demonstrations, it provides RGB-only image streams, without any depth information. Therefore, this section investigates the performance of our SEC framework particularly in the case of missing depth cues.} \red{ We selected $5$ random cooking scenarios from this dataset and used the object tracking data provided in \cite{Yang16}. Tracking of objects was based on color and texture features processed by a random forest classifier at every 10th frame. A tracking-by-detection method \citep{Danelljan14} was further employed to complete the tracking process for missing frames.
Fig.~\ref{fig:mpIIDecompImage} depicts sample frames with tracking results for one of the cooking scenarios. Given the tracked objects, we extracted the corresponding SEC representation by simply measuring the intersection between object bounding boxes. We also introduced an additional ``\textit{overlapping}'' relation, which indicates that one bounding box is completely surrounded by another one. } \red{The selected $5$ scenarios involve in total $111$ demonstrations of $42$ different atomic actions such as {\it Open Fridge}, {\it Cut Bread}, {\it Wash Cucumber}, {\it Dry Hands with Towel}, etc. The complete list of atomic actions is provided in Fig.~\ref{fig:mpIIConfMatrix}. In all these $5$ scenarios, in total $20$ different objects (e.g.\@\xspace~{\it bread, knife, fridge, etc.}) were manipulated by $4$ different subjects. There exist in total $13397$ frames in all demonstrations, i.e.\@\xspace the average activity duration is about $1.5$ minutes. } \red{ A model SEC representation for each atomic action was either extracted from its first occurrence in all demonstrations or manually introduced. These model SECs were further enriched with the object identities provided by the tracking method. Due to the lack of depth cues, the interaction between hands and objects cannot be accurately parsed. Therefore, we applied an alternative brute-force search method, which scans the raw SEC matrix of each long activity and searches for the best match of model SECs by also incorporating the object information. The matched SECs not only give the action recognition result but also lead to the final temporal segmentation. } \red{ Fig.~\ref{fig:mpIIDecompImage} shows the frame-wise temporal segmentation result together with the human-defined ground truth for one of the processed MPII cooking activities. Each atomic action is indicated by a colored block, some of which were demonstrated in parallel.
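A minimal sketch of the bounding-box relation computation used for building SECs from RGB-only tracking data is given below. The relation labels ('N' for apart, 'T' for intersecting, 'O' for the additional overlapping relation) and the function name are our own shorthand:

```python
def spatial_relation(a, b):
    """Relation between two axis-aligned boxes given as (x1, y1, x2, y2):
    'N' (no intersection), 'T' (boxes intersect), or
    'O' (one box completely surrounded by the other)."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    # boxes are apart in at least one axis
    if ax2 < bx1 or bx2 < ax1 or ay2 < by1 or by2 < ay1:
        return 'N'
    # complete containment, either way round
    if (ax1 <= bx1 and ay1 <= by1 and bx2 <= ax2 and by2 <= ay2) or \
       (bx1 <= ax1 and by1 <= ay1 and ax2 <= bx2 and ay2 <= by2):
        return 'O'
    return 'T'
```

Evaluating this relation for every pair of tracked boxes in a frame yields one column of the raw SEC matrix.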
At the top of Fig.~\ref{fig:mpIIDecompImage} some sample frames with tracked object bounding boxes are given. This figure shows that our SEC framework can handle such complicated real-world sequential and parallel action demonstrations even though depth is not provided. The average frame-wise temporal segmentation accuracy for this activity is computed as $80\%$. Fig.~\ref{fig:mpIIDecompAccuracy} depicts the overall temporal segmentation accuracies as frame-wise true and false positive recognition rates in the $5$ cooking activities. The average temporal segmentation accuracy is computed as $71\%$. } \begin{figure}[!t] \centering \includegraphics[scale=0.5]{Figure_MPII_DecompositionAccuracy.png} \caption{\red{Temporal segmentation accuracy as frame-wise true and false positive recognition rates in all $5$ MPII cooking activities.}} \label{fig:mpIIDecompAccuracy} \end{figure} \begin{figure}[!b] \centering \includegraphics[scale=0.41]{Figure_MPII_ConfusionMatrix.png} \caption{\red{Confusion matrix for $42$ atomic actions demonstrated in all $5$ MPII activities.}} \label{fig:mpIIConfMatrix} \end{figure} \red{ We also measured the class-wise recognition accuracies for each of the $42$ atomic actions, which were demonstrated a total of $111$ times in the $5$ scenarios. Fig.~\ref{fig:mpIIConfMatrix} depicts the final confusion matrix between different action types. The average true and false positive rates were measured as $78\%$ and $4\%$, respectively. In this accuracy computation step, a detected action segment is counted as a true positive if there is more than a $50\%$ match with the corresponding ground truth data. Note that the false positive rate is relatively low, which also indicates that some actions are simply missed due to misaligned temporal segmentation. } \red{ The main reason for the slight drops in the temporal segmentation and recognition accuracies is the missing depth information.
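The evaluation criterion just described, counting a detection as a true positive when it matches more than $50\%$ of the ground-truth segment, can be sketched as follows. Interpreting the match as temporal overlap relative to the ground-truth segment length is our assumption:

```python
def is_true_positive(detected, ground_truth):
    """detected and ground_truth are (start_frame, end_frame) intervals.
    A detected segment counts as a true positive when its temporal overlap
    exceeds 50% of the ground-truth segment length."""
    ds, de = detected
    gs, ge = ground_truth
    overlap = max(0, min(de, ge) - max(ds, gs))
    return overlap > 0.5 * (ge - gs)
```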
This problem mainly causes incorrect spatial relation computations between tracked objects. Fig.~\ref{fig:mpIINoisyImages} shows two examples of such noisy frames. On the left, the green box (i.e.\@\xspace the left hand) touches both the dark green (i.e.\@\xspace cupboard) and dark red (i.e.\@\xspace fridge) boxes due to missing depth information. Note that the person's hand is actually far away from both objects. On the right, the bounding boxes of both hands and the cutting board still touch the bounding box of the drawer (dark blue) even though the hands are well above the drawer. We strongly underline the fact that our SEC framework benefits greatly from the depth information of the scene. The acquired results on the MPII dataset consequently suggest that the proposed SEC framework can still provide fairly good results on real-world RGB-only image streams. \\} \begin{figure}[!t] \centering \includegraphics[scale=0.45]{Figure_MPII_NoisyImages.png} \caption{\red{Noisy spatial object relations due to the lack of depth cues. }} \label{fig:mpIINoisyImages} \end{figure} \section{Discussion} \label{sec:discussion} The main contribution of our paper is a novel method for automatic semantic segmentation and recognition of long and complex manipulation sequences. The proposed framework is essentially based only on the interactions between hands and manipulated objects in the scene. Our approach can consequently parse not only sequential and concurrent (overlapping) manipulation streams but also basic manipulation primitives (e.g.\@\xspace columns of event chains) of each detected manipulation. Without requiring any prior knowledge about objects, our framework can further extract object-like scene entities (image segments) that share similar roles in the monitored manipulations. Furthermore, due to the fact that SECs do not care about the \emph{actually used objects}, the framework can be transferred -- without re-learning -- to different databases.
\subsection{The Problem of Action Semantics} In Fig.~\ref{fig:action_taxonomy} we introduced one possible taxonomy and started to discuss the problem of how to understand actions. Essentially, here we would like to make a case for a layered understanding: There is no such thing as a \emph{one and only action description}! The technical literature often bypasses this problem, and this may lead to biased and possibly impractical solutions. Philosophy may actually offer some useful thoughts here. For example, consider a man who \emph{simultaneously} moves his arm, operates the pump, replenishes the water supply, and poisons the inhabitants. Anscombe asks whether this is one or four actions \citep{Anscombe1963}? Mele discussed this by stating (\cite{Mele1992} pg. 5): \begin{quote} ``Competing models of act-individuation are aligned with competing models of the individuation of events generally; and different theories of -- or at least different vocabularies for expressing -- the relationship of the mental [semantic action understanding] to the physical [sensory perception]'' ({\it square brackets added by us}). \end{quote} The same problem surfaces in this study: The SEC framework cannot distinguish between ``punch'' and ``push''. The dynamics of these actions are very different, but their SECs are the same. And the same has been observed for ``cut'' and ``chop''. Mele's statement clearly expresses that such phenomena will necessarily happen for any action modeling approach. There are competing models possible for action understanding and they may capture actions at different levels of granularity! Thus, we are allowed to understand ``cut'' and ``chop'' as semantically similar (destruction actions \cite{Woergoetter2013,Yang13}), but we may choose to look deeper and distinguish them -- maybe in a second-stage analysis -- by way of their different motion trajectories \citep{Woergoetter2013}. Our own action learning supports this stance.
We are first trying to master the basic skill (racket hits the ball) before learning finer motion details (topspin). \red{The issue of hand-object relations (often ``grasping''), to which much attention has been devoted \citep{Elliott1984,Cutkosky1989,Ekvall2005,Feix2009,Wimmer2011,Bullock2013,Liu2016}, is related to this semantic-level problem, too. Also here there is often the confound that the literature mixes levels. Understanding the essence of a manipulation does, we would argue, hardly ever require an understanding of ``how it has been achieved''. For example, unscrewing a lid can be performed by humans, monkeys, bears, octopuses, and some other animals, all of whom employ different grasps to do so. This notwithstanding, the essence of such a manipulation remains semantically the same, and more such examples exist (say: the bending of a hook to make a tool, done by our hands or by a New Caledonian crow that uses its beak). This argument gets even stronger when thinking of robotic manipulators. On the other hand, we do not wish to deny that sometimes levels blend into each other, and semantically motivated (goal-directed) manipulation requirements (for example: I want to break off a piece of rather hard wood) may indeed enforce a certain grasp (here a power grasp and not a pinch grip). } A second intriguing observation is that the SEC framework fundamentally ignores objects. Actions are extracted independently of the actual objects involved, and the same physical thing (``cup'') can take different roles (``being filled'', ``being put on a plate'', etc.). This way, the framework creates a tight link between actions and object-roles (but not objects as such) as suggested by the concept of ``Object-Action Complexes'' \citep{Woergoetter2009,KrugerOAC2011}. \subsection{Successes and Failures} We applied our framework to three different recently published manipulation action datasets to evaluate its robustness.
In each dataset, the temporal partitioning and recognition phases are quantified with respect to the human-defined ground truth. The observed high accuracies confirm the robustness of our method. One of the most fundamental advantages of our SEC-based monitoring approach is that it requires no additional training set when evaluating novel manipulation datasets. Since SECs encode the underlying structure of manipulation actions, the SEC models already learned from our own dataset could also be employed to evaluate other datasets. This shows the generalization power of our method, which is not the case for almost all other approaches, as they are based on motion patterns. We also need to emphasize that our benchmarking results are not comparable with the results in \cite{Yang13} and \cite{Koo14}. This is because neither of these benchmark providers aims at both temporal segmentation and recognition of serial or parallel manipulation streams. For example, the method in \cite{Yang13} can only recognize abstract action consequences such as {\it Assemble} or {\it Transfer}. In this case, different manipulations like {\it Hiding} and {\it Putting} will be interpreted as the same class {\it Assemble}, whereas both can successfully be distinguished by our approach. As a strong contribution, our method consequently provides a richer action representation than that of other approaches. Note that the conventional datasets \citep{Schuldt04,Gupta07,Koppula13} have not been considered here, since they employ entire human body configurations and movements as main features and therefore do not provide hand-object features. The concept of semantic event chains has also been successfully utilized and extended by others \citep{Luo2011,Vuga2013,David2014} for monitoring purposes. The work in \cite{David2014} presented active learning of goal-directed manipulation sequences, each of which was recognized using semantic similarities between event chains.
Our scene graphs were also represented with kernels in \cite{Luo2011} to further apply different machine learning approaches. Additional trajectory information was used in \cite{Vuga2013} to reduce noisy events occurring in SECs. All these studies confirm the scalability of the event chains to various monitoring tasks. We presented our framework as a batch-type computation; that is, once the entire input stream of visual data is acquired, we first estimated the manipulator from the SEC representation and then parsed each manipulation stream respectively. However, this is not a limitation of the proposed work, since it can also run on-the-fly, i.e.\@\xspace over the course of performing the activity, as soon as any kind of hand recognition method (which is not in the scope of this paper!) is additionally provided. The main drawback of the framework presented here is the {\it segment discontinuity} problem. Since we rely heavily on tracked objects, inconsistently tracked over-segmented scenes can lead to failures in the proposed method. \subsection{Future Directions} To address some of the remaining problems, we are currently investigating {\it feature binding} and {\it object permanence} concepts as potential solutions to reduce failures due to the {\it segment discontinuity} problem. As discussed above, we are also aware of the fact that {\it touching} is a very unitary, discrete event. This allows rigorous classifications at a certain level of action granularity but stops short of the finer details of an action. Consequently, the next steps in action analysis should also involve trajectory and pose information. We strongly advocate this type of ``layered'' approach, where SECs allow classifications up to a certain semantic level and where the system can then begin ``to look deeper'', allowing for further separations into finer classes.
First attempts at such a layered analysis have been started \citep{WoergoetterTAMD2014} and will be a major focus of our future work. \section*{Appendix}
\section{Introduction} In the last decades, searches for a dark matter (DM) signal from the Sun have been performed by looking for possible excesses of neutrinos or gamma rays associated with the Sun's direction. However, as noted in Ref.~\cite{Schuster:2009fc}, several DM models recently developed to explain various experimental results also imply an associated solar flux of high-energy cosmic-ray electrons and positrons (CREs). On the other hand, no known astrophysical mechanisms are expected to generate a significant high-energy CRE $(>100 \units{GeV})$ excess associated with the Sun. A class of models in which DM annihilates to CREs through a new light intermediate state $\phi$ \cite{Pospelov:2007mp,ArkaniHamed:2008qn} has been considered to explain the excesses in local CRE fluxes reported by PAMELA~\cite{Adriani:2008zr}, ATIC~\cite{:2008zzr}, and \emph{Fermi}~\cite{Abdo:2009zk,Ackermann:2010ij}. In these scenarios, DM particles captured by the Sun through elastic scattering interactions would annihilate to $\phi$ pairs in the Sun's core; if the $\phi$ could escape the surface of the Sun before decaying to CREs, these models could produce an observable CRE flux. Another class of models in which DM scatters off nucleons predominantly via inelastic scattering has been proposed as a means of reconciling the results of DAMA and DAMA/LIBRA~\cite{Bernabei:2008yi,Bernabei:2010mq} with CDMS-II~\cite{Ahmed:2009zw,Ahmed:2010hw} and other experiments (e.g., \cite{Chang:2008gd,Finkbeiner:2009ug}; see also \cite{Savage:2008er} for a comprehensive discussion of experimental constraints). If DM is captured by the Sun only through inelastic scattering (iDM), a non-negligible fraction of DM could annihilate outside of the Sun's surface. For models in which iDM annihilates to CREs, an observable flux at energies above a few tens of GeV could be produced.
During its first year of operation, the Large Area Telescope (LAT) onboard the {\em Fermi} satellite~\cite{Atwood:2009ez} has collected a substantial number of CRE events, which has allowed a precise measurement of the energy spectrum over a broad energy range from a few $\units{GeV}$ up to $1\units{TeV}$~\cite{Abdo:2009zk,Ackermann:2010ij}. Furthermore, a directional analysis of the high-energy CRE events was performed in the Galactic reference frame~\cite{Ackermann:2010ip}, and showed no evidence of anisotropies. In this paper we use the high-energy CRE data set to search for flux variations correlated with the Sun's direction. Since the Sun is moving with respect to the Galactic reference frame, the previously-reported absence of anisotropies in the CRE flux observed in the Galactic frame does not necessarily imply a negative result. \section{Data selection} \label{sec:datasel} The {\em Fermi} LAT is a pair-conversion telescope designed to detect gamma rays in the energy range from $20 \units{MeV}$ to more than $300 \units{GeV}$. A full description of the apparatus is given in~\cite{Atwood:2009ez}. Even though it is a photon detector, it has been demonstrated that the LAT is also an excellent CRE detector~\cite{Abdo:2009zk,Ackermann:2010ij,Ackermann:2010ip}. For this analysis we used the CRE data sample collected by the LAT during its first year of operation, starting from August 4, 2008. The event selection was performed in the same way as in Ref.~\cite{Ackermann:2010ip}; approximately $1.35\times10^{6}$ CRE events with energies larger than $60\units{GeV}$ passed the selection cuts. As discussed in Ref.~\cite{Ackermann:2010ip}, the energy threshold of $60\units{GeV}$ was chosen because it is higher than the geomagnetic cutoff in any part of {\em Fermi}'s orbit. Unlike gamma rays, CREs are deflected by interactions with magnetic fields encountered during their propagation in interstellar space. 
In particular, CREs coming from the Sun are deflected by both the Sun's and the Earth's magnetic fields. Geomagnetic effects on CREs have been studied using a code that reconstructs the trajectories of charged particles in the Earth's magnetic field based on the International Geomagnetic Reference Field (IGRF) model~\cite{IGRF}. Since the {\em Fermi} LAT cannot measure the sign of the electric charge, we associated both an electron and a positron track with each CRE event detected by the LAT. Each track starts from the detection point with the same energy as the event and with a direction opposite to that of the event, and ends at a very large distance (larger than 100 Earth radii from the Earth's center). The distribution of deflection angles at different energies was analyzed. The simulation demonstrated that, at energies above $20\units{GeV}$, $90\%$ of the particles are deflected with respect to the original direction within an angle $\delta_{90 \% }$ given by the approximate formula: \begin{equation} \delta_{90\% } \approx \frac{2.8\hbox{$^\circ$}}{E(\units{TeV})} \label{eq:deflection} \end{equation} where $E$ is the particle energy. Hence, due to the geomagnetic field, the reconstructed directions of CREs with energies above $100\units{GeV}$ detected by the LAT and coming from any given direction of the sky will be spread over a cone with an angular radius of about $30\hbox{$^\circ$}$ centered on the original incoming direction. The directions of incoming CREs are also affected by the Heliospheric Magnetic Field. A detailed study of its effects on CREs is beyond the scope of this work; however, it was shown in Ref.~\cite{Roberts:2010yh} that CREs with energies of several hundreds of $\units{GeV}$ can travel through the center of the solar system without experiencing significant deflections.
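The scaling of Eq.~\ref{eq:deflection} is easy to tabulate; the following few lines (a helper of ours, illustrating the formula only -- the actual analysis traces individual trajectories through the IGRF field) reproduce the $\sim 30\hbox{$^\circ$}$ cone quoted for $100\units{GeV}$:

```python
# Eq. (eq:deflection): 90% containment cone of the geomagnetic deflection.
# Hypothetical helper of ours, not part of the analysis code.

def deflection_cone_deg(energy_gev: float) -> float:
    """Angular radius (deg) containing 90% of CREs of the given energy."""
    return 2.8 / (energy_gev / 1000.0)   # 2.8 deg / E[TeV]

cone_100gev = deflection_cone_deg(100.0)   # ~28 deg, i.e. "about 30 deg"
```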
CREs travelling in the Solar System may also suffer energy losses, mainly due to inverse Compton (IC) scattering off the photons emitted by the Sun and to synchrotron radiation (SR) emitted in interactions with the Heliospheric Magnetic Field. To study the energy loss processes of CREs travelling from the Sun to the Earth we implemented a simple toy model, in which we assumed that CREs propagate from the Sun's surface to the Earth in straight lines and with velocity $c$. Following Ref.~\cite{Orlando:2008uk}, we assumed that the Sun can be modeled as a black body with a temperature of $5777\units{K}$ and with a photon density given by: \begin{equation} \label{eq:blackbody} N_{ph}(\epsilon,r) = 0.5 n_{bb}(\epsilon) \left[ 1 - \sqrt{ 1 - \frac{R_{\odot}^{2}}{r^{2}} } ~ \right] \end{equation} where $n_{bb}(\epsilon)$ is the blackbody photon energy density (Planck's equation), $R_{\odot}$ is the solar radius and $r$ is the distance from the center of the Sun. The IC energy loss rate of CREs was then evaluated following Ref.~\cite{Schlickeiser:2009qq}: \begin{equation} \label{eq:ICloss} - \left( \frac{dE}{dt} \right)_{IC} = \frac{4}{3} \sigma_{T} c W \beta^{2} \frac{\gamma_{k}^{2} \gamma^{2}}{\gamma_{k}^{2} + \gamma^{2}} \end{equation} where $\beta c$ and $\gamma$ are respectively the velocity and the Lorentz factor of the CRE, $W$ is the photon energy density evaluated from eq.~\ref{eq:blackbody}, $\sigma_{T}$ is the Thomson cross section and $\gamma_{k}$ is given by: \begin{equation} \gamma_{k} = \frac{3 \sqrt{5} m_{e} c^{2}}{8 \pi k_{B} T} \end{equation} where $m_{e}$ is the electron mass, $k_{B}$ is the Boltzmann constant and $T=5777\units{K}$ is the temperature of the Sun's surface. The evaluation of the SR energy loss rate is not easy, because the structure of the Heliospheric Magnetic Field is rather complex~\cite{Parker:1958zz}.
However, as a first approximation, we assumed that the strength of the Heliospheric Magnetic Field drops from the Sun's surface as: \begin{equation} B(r) = B_{0} \frac{R_{\odot}^{2}}{r^{2}} \end{equation} where $B_{0}=1\units{gauss}$ is the strength of the field on the Sun's surface. We did not include the contribution of the Geomagnetic field to synchrotron energy losses because, even though the field strength at the LAT altitude is of the order of $1\units{gauss}$, the path length of CREs in the Geomagnetic field is of the order of a few Earth radii, negligible with respect to the path length in the Heliospheric Magnetic Field, which is of the order of a few solar radii. The synchrotron energy loss rate of CREs was then calculated as~\cite{Rybicki}: \begin{equation} \label{eq:syncloss} - \left( \frac{dE}{dt} \right)_{S} = \frac{4}{3} \sigma_{T} c W_{B} \beta^{2} \gamma^{2} \end{equation} where $W_{B}$ is the magnetic field energy density, which includes only the contribution from the Heliospheric Magnetic Field; in our model the resulting SR loss rate turns out to be negligible with respect to the IC energy loss rate. Using eqs.~\ref{eq:ICloss} and~\ref{eq:syncloss}, we calculated that CREs in the energy range from $60\units{GeV}$ to $1\units{TeV}$ travelling from the Sun to the Earth lose no more than $2\%$ of their initial energy. Therefore, in the calculations of the following sections, we will neglect all energy loss processes. \section{Data analysis and results} To study the CRE flux from the Sun's direction and to search for variations with respect to the average flux, we implemented two complementary analysis approaches: (i) flux asymmetry analysis and (ii) comparison of the solar flux with the isotropic flux.
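As a rough cross-check of the energy-loss estimate above, the toy model of Eqs.~\ref{eq:blackbody}--\ref{eq:syncloss} can be integrated numerically along a straight Sun--Earth path. This sketch is ours (textbook constants, a log-spaced trapezoidal grid, and a handful of sampled energies are our choices, not the paper's code) and should only be read as confirming that the losses stay at the per-cent level:

```python
import math

# Numerical cross-check of the toy energy-loss model (our own sketch).
SIGMA_T = 6.6524e-29          # Thomson cross section [m^2]
A_RAD   = 7.5657e-16          # radiation constant [J m^-3 K^-4]
T_SUN   = 5777.0              # Sun surface temperature [K]
R_SUN   = 6.957e8             # solar radius [m]
AU      = 1.496e11            # Sun-Earth distance [m]
ME_C2   = 8.1871e-14          # electron rest energy [J]
K_B     = 1.3807e-23          # Boltzmann constant [J/K]
MU_0    = 4.0e-7 * math.pi    # vacuum permeability [T m/A]
B_0     = 1.0e-4              # 1 gauss at the Sun's surface [T]
EV      = 1.6022e-19          # joules per eV

# gamma_k from the Klein-Nishina-like suppression factor of the IC formula
GAMMA_K = 3.0 * math.sqrt(5.0) * ME_C2 / (8.0 * math.pi * K_B * T_SUN)

def fractional_loss(energy_gev, n_steps=20000):
    """Fraction of its energy a CRE loses travelling straight from the Sun's
    surface to the Earth (IC off the blackbody field + SR in B ~ r^-2)."""
    gamma = energy_gev * 1e9 * EV / ME_C2
    geff2 = GAMMA_K**2 * gamma**2 / (GAMMA_K**2 + gamma**2)
    lo, hi = math.log(R_SUN), math.log(AU)
    total, prev = 0.0, None
    for i in range(n_steps + 1):
        r = math.exp(lo + (hi - lo) * i / n_steps)
        ratio2 = min(1.0, (R_SUN / r) ** 2)          # clamp rounding at r=R_sun
        w_ph = 0.5 * A_RAD * T_SUN**4 * (1.0 - math.sqrt(1.0 - ratio2))
        w_b = (B_0 * ratio2) ** 2 / (2.0 * MU_0)     # B(r) = B_0 R^2 / r^2
        dedr = (4.0 / 3.0) * SIGMA_T * (w_ph * geff2 + w_b * gamma**2)
        if prev is not None:
            total += 0.5 * (dedr + prev[1]) * (r - prev[0])
        prev = (r, dedr)
    return total / (energy_gev * 1e9 * EV)

# Worst-case relative loss over the analysis band, 60 GeV to 1 TeV.
worst = max(fractional_loss(e) for e in (60, 100, 140, 300, 1000))
```

With these inputs the worst-case relative loss, near the energy where $\gamma \approx \gamma_{k}$, comes out at the level of a couple of per cent -- of the same order as the $2\%$ figure quoted above -- so neglecting energy losses is indeed a safe approximation at the accuracy of this analysis.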
\subsection{Flux asymmetry studies} \label{sec:fluxasym} \begin{figure*}[!ht] \includegraphics[width=0.48\linewidth]{Figure_1_a.eps} \includegraphics[width=0.48\linewidth]{Figure_1_b.eps} \includegraphics[width=0.48\linewidth]{Figure_1_c.eps} \includegraphics[width=0.48\linewidth]{Figure_1_d.eps} \caption{Differential flux asymmetry between the real and the fake Sun, evaluated in cones with angular radii $\Delta \Theta = 30\hbox{$^\circ$}$ (top left panel), $45\hbox{$^\circ$}$ (top right panel), $60 \hbox{$^\circ$}$ (bottom left panel) and $90\hbox{$^\circ$}$ (bottom right panel). The fluxes are multiplied by $E^{3}$ (the energy values correspond to the bin centers) since the energy spectrum of CREs is approximately proportional to $E^{-3}$ in this energy range. Only statistical error bars are shown. \label{fig:e3flux}} \end{figure*} \begin{table*}[!ht] \begin{tabular}{||c||c|c|c||} \hline Angular radius & Maximum deviation ($\sigma_{max}$) & $P(|\sigma_{max}|)$ & $P(|\sigma|>|\sigma_{max}|)$ \\ \hline $30 \hbox{$^\circ$}$ & 2.690 & 0.007 & 0.113 \\ \hline $45 \hbox{$^\circ$}$ & -2.542 & 0.011 & 0.171 \\ \hline $60 \hbox{$^\circ$}$ & -2.806 & 0.005 & 0.082 \\ \hline $90 \hbox{$^\circ$}$ & -2.947 & 0.003 & 0.050 \\ \hline \end{tabular} \caption{For each cone used in the flux asymmetry analysis, the maximum deviations (either positive or negative) from the null value are shown, together with the corresponding probabilities of observing larger values under the hypothesis of null flux asymmetry. The last column shows the probability of finding at least one energy bin with a larger flux asymmetry than the maximum observed value. \label{tab:prob1}} \end{table*} This approach compares the CRE flux from the Sun with the flux from a fake source (fake Sun) placed in the sky position opposite to that of the Sun. To perform our analyses, we chose a custom reference frame derived from ecliptic coordinates.
The ecliptic coordinates associated with each CRE event were evaluated from equatorial coordinates using the formulae in Ref.~\cite{Duffett}. Denoting by $(\lambda, \beta)$ the pair of ecliptic coordinates (longitude and latitude, respectively) associated with any given direction, the Sun's direction will always lie in the plane $\beta=0$. In fact, since the Sun is moving eastwards along the path of the ecliptic, its ecliptic latitude will always be zero by definition, while its ecliptic longitude will always increase, describing a complete $360\hbox{$^\circ$}$ cycle in one year~\cite{Duffett}. In our custom reference frame, the coordinates associated with each direction are defined as: \begin{equation} \left\{ \begin{array}{l} \lambda^{\prime} = \lambda - \lambda_{Sun} \\ \beta^{\prime} = \beta \end{array} \right. \label{eq:coord} \end{equation} where $\lambda_{Sun}$ is the ecliptic longitude of the Sun, evaluated from the Sun's ephemeris using software interfaced to the JPL libraries~\cite{JPL}. In this reference frame, the Sun's coordinates will always be $(\lambda^{\prime}_{Sun}=0\hbox{$^\circ$}, \beta^{\prime}_{Sun}=0\hbox{$^\circ$})$. On the other hand, the fake Sun will always be located at the coordinates $(\lambda^{\prime}_{fake~Sun}=180\hbox{$^\circ$}, \beta^{\prime}_{fake~Sun}=0\hbox{$^\circ$})$. Due to the geomagnetic field's effects on CRE trajectories described in \S\ref{sec:datasel}, we consider the fluxes from extended sky regions centered on the Sun (and on the fake Sun). In particular, we compare the CRE fluxes from directions within cones of angular radii $\Delta \Theta$, centered on the position of the Sun and the fake Sun.
According to Eq.~\ref{eq:deflection}, $90\%$ of CREs with energies of $100 \units{GeV}$ are deflected within a cone of about $30\hbox{$^\circ$}$ angular radius. Since the DM models discussed in Ref.~\cite{Schuster:2009fc} predict a CRE flux excess from the Sun at energies above $100\units{GeV}$, we chose this value as the minimum angular radius of the sky regions to be investigated. To measure the fluxes from different sky regions, we first divided the sky into a grid of pixels, then evaluated the CRE fluxes from individual pixels (each pixel was treated as a point source), and finally integrated the fluxes from the pixels belonging to the selected sky regions. We used the HEALPix~\cite{Gorski:2004by} pixelization scheme, and divided the sky into $12288$ equal-area pixels, each covering a solid angle of about $10^{-3} \units{sr}$. The CRE differential fluxes from individual pixels were evaluated according to the following equation: \begin{equation} \cfrac{d\Phi_{i} (E)}{dE} = \frac{1}{\Delta E} \cfrac{N_{i}(E) \times (1-c(E))} {\mathcal{E}_{i}(E)} \label{eq:fluxdiff} \end{equation} where $d\Phi_{i}(E)/dE$ is the differential CRE flux (expressed in particles per unit energy, unit area and unit time) in the energy interval $[E, E+\Delta E]$ from the $i$th pixel, $N_{i}(E)$ is the number of observed CRE events from the $i$th pixel with energies between $E$ and $E+\Delta E$, $c(E)$ is the residual contamination (the contamination values are reported in Ref.~\cite{Ackermann:2010ij}) and $\mathcal{E}_{i}(E)$ is the exposure of the $i$th pixel, which is calculated taking into account the effective area of the instrument and the live time of the $i$th pixel. The dependence of the effective area on the CRE direction in the instrument, expressed in terms of the off-axis and azimuth angles $\theta$ and $\phi$, is also taken into account in the calculation.
The CRE flux from a cone of angular radius $\Delta \Theta$ centered on the Sun is then given by: \begin{equation} \cfrac{d \Phi_{Sun} (E | \Delta \Theta)}{dE} = \sum_{i \in ROI(\Delta \Theta)} \cfrac{d \Phi_{i} (E)}{dE} \label{eq:fluxroi} \end{equation} where $ROI (\Delta \Theta)$ denotes the set of pixels (region of interest) at an angular distance less than $\Delta \Theta$ from the Sun. The flux from the fake Sun is evaluated in a similar way. The flux asymmetry can then be evaluated as: \begin{equation} \cfrac{dA_{\Phi}(E|\Delta \Theta)}{dE} = \cfrac{d\Phi_{Sun}(E | \Delta \Theta)}{dE} - \cfrac{d\Phi_{Fake~Sun}(E | \Delta \Theta)}{dE} \label{eq:asymmetry1} \end{equation} The variable $dA_{\Phi}(E|\Delta \Theta)/dE$ defined in Eq.~\ref{eq:asymmetry1} is the difference between the CRE flux from the Sun and the fake Sun; the flux of the fake Sun is assumed to be representative of the average CRE flux across the sky. Positive (negative) values of $dA_{\Phi}(E | \Delta \Theta)/dE$ indicate an excess (deficit) of CREs from the Sun. We emphasize that this approach relies on the assumption that the flux from the fake Sun region is representative of the average CRE flux. In Fig.~\ref{fig:e3flux} the differential CRE flux asymmetries $dA_{\Phi}(E|\Delta \Theta)/dE$ between the real and the fake Sun are shown for four different ROIs, with angular radii of $30\hbox{$^\circ$}$, $45\hbox{$^\circ$}$, $60 \hbox{$^\circ$}$ and $90\hbox{$^\circ$}$. No significant CRE flux excesses or deficits from the Sun are observed at any energy. In the plots of Fig.~\ref{fig:e3flux} only statistical error bars are shown. As pointed out in Ref.~\cite{Ackermann:2010ij}, the main source of systematic uncertainties in the evaluation of CRE fluxes is the imperfect knowledge of the detector's effective area. 
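Eqs.~\ref{eq:fluxdiff}--\ref{eq:asymmetry1} amount to a masked sum over pixels. A schematic illustration with a toy four-pixel sky (our own sketch; the HEALPix machinery, real exposures and contamination values are omitted):

```python
import numpy as np

# Schematic version of the per-pixel flux, ROI sum and asymmetry
# (toy numbers of ours, for one energy bin only).
def pixel_flux(counts, contamination, exposure, delta_e):
    """Per-pixel differential flux: N_i (1 - c) / (dE * exposure)."""
    return counts * (1.0 - contamination) / (delta_e * exposure)

def roi_flux(pixel_fluxes, pixel_vecs, center_vec, radius_deg):
    """Sum pixel fluxes whose centers lie within `radius_deg` of `center_vec`."""
    cosdist = pixel_vecs @ center_vec
    in_roi = cosdist >= np.cos(np.radians(radius_deg))
    return pixel_fluxes[in_roi].sum()

# Toy sky: 4 pixels on the equator; Sun along +x, fake Sun along -x.
vecs = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, -1, 0]], float)
flux = pixel_flux(np.array([120.0, 80.0, 100.0, 100.0]),
                  contamination=0.1, exposure=2.0e4, delta_e=10.0)
asym = (roi_flux(flux, vecs, np.array([1.0, 0.0, 0.0]), 45.0)
        - roi_flux(flux, vecs, np.array([-1.0, 0.0, 0.0]), 45.0))
```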
Assuming that the effective area is affected only by a normalization error, the contribution of this error to the uncertainty on the flux difference $dA_{\Phi}(E|\Delta \Theta)/dE$ will be proportional to $|dA_{\Phi}(E|\Delta \Theta)/dE|$ (see the discussion in Ref.~\cite{D'Agostini:1993uj}), and therefore negligible with respect to the statistical error. Assuming that the measured flux asymmetries in each energy bin behave as Gaussian random variables, we expressed the excesses and deficits (with respect to the hypothesis of a null flux asymmetry) in units of $\sigma$ ($\sigma$ is the statistical error associated with each measurement), and evaluated the corresponding probabilities of measuring larger excesses or deficits under the null hypothesis. Table~\ref{tab:prob1} shows, for each value of the angular radius $\Delta \Theta$, the maximum observed deviations from the null flux asymmetry in units of $\sigma$, and the corresponding probabilities of measuring larger flux asymmetries under the null hypothesis. As shown in Table~\ref{tab:prob1}, the flux asymmetries in all of the ROIs are always within $3 \sigma$ of zero. The last column of Table~\ref{tab:prob1} shows the probabilities of finding, in each ROI, at least one energy bin with a flux asymmetry larger than the maximum observed value. The probabilities were calculated assuming that the flux asymmetries measured in each of the $17$ energy bins used in our analysis are uncorrelated. The calculations were performed taking only statistical errors into account; if systematic errors were also taken into account, the significance of the deviations of the flux asymmetries from zero would be smaller. The same analysis was repeated for integral fluxes above various energy thresholds, and again no evidence of flux asymmetries was found.
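The probability columns of Table~\ref{tab:prob1} follow from a two-sided Gaussian tail probability plus a trials factor over the $17$ independent energy bins; they can be re-derived in a few lines (our own sketch):

```python
from math import erfc, sqrt

# Re-derivation of the probability columns of the flux-asymmetry table.
def two_sided_p(sigma):
    """P(|Z| > sigma) for a standard normal deviate Z."""
    return erfc(abs(sigma) / sqrt(2.0))

def trials_p(p_single, n_bins=17):
    """P(at least one of n_bins independent bins exceeds the observed max)."""
    return 1.0 - (1.0 - p_single) ** n_bins

# Worst bin of the 30-deg ROI: 2.690 sigma -> p ~ 0.007 -> post-trials ~ 0.113
p30 = two_sided_p(2.690)
p30_trials = trials_p(0.007)
```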
\subsubsection{Evaluation of statistical upper limits on the CRE flux asymmetry} \begin{figure*} \includegraphics[width=0.48\linewidth]{Figure_2_a.eps} \includegraphics[width=0.48\linewidth]{Figure_2_b.eps} \includegraphics[width=0.48\linewidth]{Figure_2_c.eps} \includegraphics[width=0.48\linewidth]{Figure_2_d.eps} \caption{Statistical upper limits at confidence levels of $68\%$ (dotted lines), $95\%$ (dashed lines) and $99\%$ (continuous lines) for the CRE flux asymmetry between real and fake Sun, evaluated in cones with angular radii $\Delta \Theta = 30\hbox{$^\circ$}$ (top left panel), $45\hbox{$^\circ$}$ (top right panel), $60 \hbox{$^\circ$}$ (bottom left panel) and $90\hbox{$^\circ$}$ (bottom right panel).\label{fig:e3upperlimits_fake}} \end{figure*} The previous analysis did not provide any evidence of a CRE flux excess from the Sun with respect to the fake Sun, so we set statistical upper limits on this signal by following the approach outlined in Ref.~\cite{Cowan1998} (pp.~136-139). In each energy bin, the measured flux asymmetry $dA_{\Phi}(E | \Delta \Theta)/dE$ can be seen as a realization of a Gaussian random variable. Assuming the hypothesis of a CRE flux excess from the Sun, its expectation value must be non-negative: \begin{equation} \langle \frac{dA_{\Phi}(E | \Delta \Theta)}{dE} \rangle \geq 0 \end{equation} To set upper limits on $\langle dA_{\Phi}(E | \Delta \Theta)/dE \rangle$ we implemented the Bayesian method described in Ref.~\cite{Cowan1998}, assuming a uniform prior density. The $\sigma$ of the Gaussian probability distribution function associated with each measurement of $dA_{\Phi}(E | \Delta \Theta)/dE$ is the statistical error associated with the measurement. Fig.~\ref{fig:e3upperlimits_fake} shows the statistical upper limits on CRE flux asymmetries at different confidence levels, considering different cones centered on the Sun with angular radii ranging from $30\hbox{$^\circ$}$ to $90\hbox{$^\circ$}$. 
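For a Gaussian measurement $x \pm \sigma$ with a flat prior truncated at zero, the prescription of Ref.~\cite{Cowan1998} reduces to solving $\Phi((s_{up}-x)/\sigma) = 1 - (1-{\rm CL})\,\Phi(x/\sigma)$ for the upper limit $s_{up}$. A small sketch of ours, solving this by bisection:

```python
from math import erf, sqrt

# Bayesian upper limit for a Gaussian measurement with a flat prior
# truncated at zero (our sketch of the prescription cited in the text).
def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def upper_limit(x, sigma, cl=0.95):
    """Solve phi((ul - x)/sigma) = 1 - (1 - cl)*phi(x/sigma) by bisection."""
    target = 1.0 - (1.0 - cl) * phi(x / sigma)
    lo, hi = x - 10.0 * sigma, x + 10.0 * sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi((mid - x) / sigma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A null measurement (x = 0) gives the familiar 1.96-sigma bound, because
# half of the Gaussian is removed by the physical constraint mu >= 0.
ul_null = upper_limit(0.0, 1.0)
```

For measurements well above zero the limit tends to $x + 1.645\sigma$, the usual one-sided Gaussian bound, as the truncation becomes irrelevant.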
In Table~\ref{tab:ul1} the values of the upper limits at $95\%$ confidence level for the flux asymmetry $dA_{\Phi}(E | \Delta \Theta)/dE$ in the different ROIs are summarized. The calculated upper limits are also expressed in terms of fractions of the CRE flux from the region of the fake Sun. \begin{table*} \begin{tabular}{||c||c|c||c|c||c|c||c|c||} \hline \hline & \multicolumn{2}{|c||}{$\Delta \Theta = 30\hbox{$^\circ$}$} & \multicolumn{2}{|c||}{$\Delta \Theta = 45\hbox{$^\circ$}$} & \multicolumn{2}{|c||}{$\Delta \Theta = 60\hbox{$^\circ$}$} & \multicolumn{2}{|c||}{$\Delta \Theta = 90\hbox{$^\circ$}$} \\ \hline Energy & Flux UL & Fractional & Flux UL & Fractional & Flux UL & Fractional & Flux UL & Fractional \\ (GeV) & ($\units{GeV^{-1}m^{-2}s^{-1}}$) & UL & ($\units{GeV^{-1}m^{-2}s^{-1}}$) & UL & ($\units{GeV^{-1}m^{-2}s^{-1}}$) & UL & ($\units{GeV^{-1}m^{-2}s^{-1}}$) & UL \\ \hline \hline $ 60.4 - 68.2 $ & $ 3.508 \cdot 10^{-6} $ & $ 0.008 $ & $ 4.650 \cdot 10^{-6} $ & $ 0.005 $ & $ 3.934 \cdot 10^{-6} $ & $ 0.002 $ & $ 7.506 \cdot 10^{-6} $ & $ 0.002 $ \\ $ 68.2 - 77.4 $ & $ 2.114 \cdot 10^{-6} $ & $ 0.007 $ & $ 2.496 \cdot 10^{-6} $ & $ 0.004 $ & $ 3.518 \cdot 10^{-6} $ & $ 0.003 $ & $ 4.096 \cdot 10^{-6} $ & $ 0.002 $ \\ $ 77.4 - 88.1 $ & $ 2.744 \cdot 10^{-6} $ & $ 0.013 $ & $ 5.506 \cdot 10^{-6} $ & $ 0.012 $ & $ 3.131 \cdot 10^{-6} $ & $ 0.004 $ & $ 4.555 \cdot 10^{-6} $ & $ 0.003 $ \\ $ 88.1 - 101 $ & $ 2.516 \cdot 10^{-6} $ & $ 0.019 $ & $ 5.127 \cdot 10^{-6} $ & $ 0.018 $ & $ 4.400 \cdot 10^{-6} $ & $ 0.009 $ & $ 7.696 \cdot 10^{-6} $ & $ 0.008 $ \\ $ 101 - 116 $ & $ 2.190 \cdot 10^{-6} $ & $ 0.024 $ & $ 1.963 \cdot 10^{-6} $ & $ 0.010 $ & $ 2.779 \cdot 10^{-6} $ & $ 0.008 $ & $ 3.845 \cdot 10^{-6} $ & $ 0.006 $ \\ $ 116 - 133 $ & $ 1.026 \cdot 10^{-6} $ & $ 0.017 $ & $ 9.091 \cdot 10^{-7} $ & $ 0.007 $ & $ 1.935 \cdot 10^{-6} $ & $ 0.009 $ & $ 1.583 \cdot 10^{-6} $ & $ 0.004 $ \\ $ 133 - 154 $ & $ 1.471 \cdot 10^{-6} $ & $ 0.039 $ & $ 1.261 \cdot 
10^{-6} $ & $ 0.015 $ & $ 1.337 \cdot 10^{-6} $ & $ 0.010 $ & $ 1.795 \cdot 10^{-6} $ & $ 0.006 $ \\ $ 154 - 180 $ & $ 5.671 \cdot 10^{-7} $ & $ 0.021 $ & $ 5.297 \cdot 10^{-7} $ & $ 0.009 $ & $ 7.971 \cdot 10^{-7} $ & $ 0.008 $ & $ 1.435 \cdot 10^{-6} $ & $ 0.007 $ \\ $ 180 - 210 $ & $ 8.580 \cdot 10^{-7} $ & $ 0.054 $ & $ 1.348 \cdot 10^{-6} $ & $ 0.039 $ & $ 1.419 \cdot 10^{-6} $ & $ 0.025 $ & $ 1.489 \cdot 10^{-6} $ & $ 0.013 $ \\ $ 210 - 246 $ & $ 1.252 \cdot 10^{-6} $ & $ 0.133 $ & $ 9.240 \cdot 10^{-7} $ & $ 0.045 $ & $ 8.574 \cdot 10^{-7} $ & $ 0.024 $ & $ 7.953 \cdot 10^{-7} $ & $ 0.011 $ \\ $ 246 - 291 $ & $ 2.905 \cdot 10^{-7} $ & $ 0.049 $ & $ 4.411 \cdot 10^{-7} $ & $ 0.034 $ & $ 6.033 \cdot 10^{-7} $ & $ 0.027 $ & $ 1.556 \cdot 10^{-6} $ & $ 0.036 $ \\ $ 291 - 346 $ & $ 3.946 \cdot 10^{-7} $ & $ 0.111 $ & $ 5.473 \cdot 10^{-7} $ & $ 0.073 $ & $ 7.581 \cdot 10^{-7} $ & $ 0.059 $ & $ 7.778 \cdot 10^{-7} $ & $ 0.030 $ \\ $ 346 - 415 $ & $ 2.457 \cdot 10^{-7} $ & $ 0.115 $ & $ 2.925 \cdot 10^{-7} $ & $ 0.064 $ & $ 3.715 \cdot 10^{-7} $ & $ 0.048 $ & $ 6.160 \cdot 10^{-7} $ & $ 0.040 $ \\ $ 415 - 503 $ & $ 1.567 \cdot 10^{-7} $ & $ 0.140 $ & $ 2.057 \cdot 10^{-7} $ & $ 0.085 $ & $ 2.979 \cdot 10^{-7} $ & $ 0.072 $ & $ 3.187 \cdot 10^{-7} $ & $ 0.038 $ \\ $ 503 - 615 $ & $ 6.318 \cdot 10^{-8} $ & $ 0.094 $ & $ 7.052 \cdot 10^{-8} $ & $ 0.049 $ & $ 9.512 \cdot 10^{-8} $ & $ 0.041 $ & $ 1.644 \cdot 10^{-7} $ & $ 0.037 $ \\ $ 615 - 772 $ & $ 7.297 \cdot 10^{-8} $ & $ 0.251 $ & $ 1.047 \cdot 10^{-7} $ & $ 0.169 $ & $ 1.486 \cdot 10^{-7} $ & $ 0.134 $ & $ 2.501 \cdot 10^{-7} $ & $ 0.111 $ \\ $ 772 - 1000 $ & $ 6.097 \cdot 10^{-8} $ & $ 0.493 $ & $ 5.951 \cdot 10^{-8} $ & $ 0.192 $ & $ 9.720 \cdot 10^{-8} $ & $ 0.178 $ & $ 2.088 \cdot 10^{-7} $ & $ 0.196 $ \\ \hline \hline \end{tabular} \caption{Statistical upper limits at $95\%$ confidence level on the CRE flux asymmetries between the real and the fake Sun, evaluated in cones of angular radii $\Delta \Theta = 
30\hbox{$^\circ$}, 45\hbox{$^\circ$}, 60\hbox{$^\circ$}, 90\hbox{$^\circ$}$. The upper limits are also expressed in terms of fractions of the CRE flux from the fake Sun.} \label{tab:ul1} \end{table*} \subsection{Comparison with an isotropic flux} \label{sec:isotropic} The second approach used in this analysis is based on the event-shuffling technique employed in Ref.~\cite{Ackermann:2010ip}, which was used to build a simulated sample of isotropic CREs starting from the real events. Simulated events are built by randomly coupling the arrival times and the arrival directions (in local instrument coordinates) of real events. The simulated event sample used for this analysis is the same as that used in Ref.~\cite{Ackermann:2010ip} and is $100$ times larger than the real one. In this case, given the angular radius $\Delta \Theta$ of a cone centered on the Sun, we evaluated the count differences between real and simulated CREs as: \begin{equation} \Delta N (E|\Delta \Theta) = N_{real}(E | \Delta \Theta) - \alpha (E) N_{sim}(E | \Delta \Theta) \label{eq:counts} \end{equation} where $N_{real}(E|\Delta \Theta)$ and $N_{sim}(E|\Delta \Theta)$ are respectively the number of CRE events in the real and simulated data sets with energy $E$ and arrival directions within the selected cone. The parameter $\alpha(E)$ in Eq.~\ref{eq:counts} is a normalization factor. If $N_{real}(E)$ and $N_{sim}(E)$ are the total numbers of real and simulated events with energy $E$, then $\alpha(E)=N_{real}(E)/N_{sim}(E)$~\footnote{The factor $\alpha(E)$ is not exactly equal to $1/100$ (which is the exact ratio between the sizes of the real and simulated event samples), because in the randomization process only the overall number of simulated events is fixed, while the number of simulated events in individual energy bins can fluctuate. However, since the size of the simulated data sample is large, $\alpha(E) \simeq 1/100$ in each energy bin.}.
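The shuffling construction can be sketched as follows (toy arrival times and directions of ours; in the real analysis the shuffled quantities are the measured times and local instrument directions, and $\alpha(E)$ is evaluated per energy bin):

```python
import numpy as np

# Schematic event-shuffling: arrival times and instrument-frame directions
# of real events are randomly re-paired, washing out genuine anisotropies
# while preserving the exposure pattern (toy data of ours).
rng = np.random.default_rng(0)

def shuffle_events(times, local_dirs, oversample=100):
    """Build an isotropic-equivalent sample `oversample` times the real one
    by randomly re-pairing arrival times with local-frame directions."""
    sim_t = np.concatenate([rng.permutation(times) for _ in range(oversample)])
    sim_d = np.concatenate([local_dirs for _ in range(oversample)])
    return sim_t, sim_d

times = np.linspace(0.0, 3.15e7, 1000)    # one year of toy arrival times [s]
dirs = rng.standard_normal((1000, 2))     # toy instrument-frame angles
sim_t, sim_d = shuffle_events(times, dirs)
alpha = len(times) / len(sim_t)           # overall normalization, ~1/100
```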
The count difference is finally converted into a flux asymmetry according to the following equation: \begin{equation} \cfrac{dA'_{\Phi}(E | \Delta \Theta)}{dE} = \cfrac{1}{\Delta E} \cdot \cfrac{ \Delta N(E|\Delta \Theta) (1-c(E))} {\mathcal{E} (E | \Delta \Theta)} \label{eq:asymmetry2} \end{equation} where $c(E)$ is the residual contamination and $\mathcal{E}(E|\Delta\Theta)$ is the exposure of the selected sky region, which is evaluated from the instrument effective area and the live times of the pixels belonging to that sky region. We emphasize that the two approaches used in this work are complementary, but not fully equivalent. In the first case the variable $dA_{\Phi}(E | \Delta \Theta)/dE$ is built using real events from different directions (real Sun and fake Sun). On the other hand, in the second case, the variable $dA'_{\Phi}(E | \Delta \Theta)/dE$ is built using real and simulated events from the same region of the sky. In both cases the goal of the analysis is to compare the CRE flux from the Sun with the average CRE flux. In the first case, the reference flux is evaluated by looking at real events from the fake Sun region, while in the second case it is evaluated by simulating an isotropic flux and looking at simulated events from the Sun region. The second approach, however, excludes potential systematic biases when calculating flux differences. In particular, evaluating the flux from a given region of the sky requires knowledge of the exposure, which in turn depends on the effective area of the detector and on the observation live time. The effective area is calculated from Monte Carlo simulations and thus could be affected by systematics such as variations correlated with time and spacecraft position or miscalculations of its dependence on instrument coordinates.
When evaluating the flux asymmetry $dA_{\Phi}(E | \Delta \Theta)/dE$ according to Eq.~\ref{eq:asymmetry1}, the systematic uncertainties involved in the evaluation of the two terms could be different, and the result could be biased. On the other hand, when evaluating the flux difference $dA'_{\Phi}(E | \Delta \Theta)/dE$ from Eq.~\ref{eq:asymmetry2}, inaccuracies in the effective area calculation can only result in a scale error on the flux difference. \begin{figure*} \includegraphics[width=0.48\linewidth]{Figure_3_a.eps} \includegraphics[width=0.48\linewidth]{Figure_3_b.eps} \includegraphics[width=0.48\linewidth]{Figure_3_c.eps} \includegraphics[width=0.48\linewidth]{Figure_3_d.eps} \caption{Differential flux asymmetry between real and simulated events from the Sun evaluated in cones with angular radii $\Delta \Theta = 30\hbox{$^\circ$}$ (top left panel), $45\hbox{$^\circ$}$ (top right panel), $60 \hbox{$^\circ$}$ (bottom left panel) and $90\hbox{$^\circ$}$ (bottom right panel). Only statistical error bars are shown. \label{fig:e3counts}} \end{figure*} To determine whether the real counts differ significantly from the simulated ones, we performed a hypothesis test following the prescriptions of Ref.~\cite{Li:1983fv}. Denoting with $N_{real}(E | \Delta \Theta)$ and $N_{sim}(E | \Delta\Theta)$ the real and the simulated counts in the energy bin $E$, the null hypothesis for this analysis is that $N_{real}(E | \Delta \Theta) = \alpha(E) N_{sim}(E | \Delta \Theta)$. Following Ref.~\cite{Li:1983fv}, we evaluated the significance in each energy bin as: \begin{equation} \begin{split} & S(E | \Delta \Theta) = \pm \sqrt{2} \left\{ N_{real}(E | \Delta \Theta) \ln \left[ \cfrac{1 + \alpha(E)}{\alpha(E)} \right. \right. \\& \left. \cfrac{N_{real}(E | \Delta \Theta)} {N_{real}(E | \Delta \Theta) + N_{sim}(E | \Delta \Theta)} \right] + N_{sim}(E | \Delta \Theta) \\& \left. 
\ln \left[ \left(1 + \alpha(E) \right) \cfrac{N_{sim}(E | \Delta \Theta)} {N_{real}(E | \Delta \Theta) + N_{sim}(E | \Delta \Theta)} \right] \right\}^{1/2} \end{split} \label{eq:significance} \end{equation} with the convention of choosing the $+$ sign if $N_{real} (E | \Delta \Theta ) > \alpha(E) N_{sim} (E | \Delta \Theta)$ and the $-$ sign if $N_{real} (E | \Delta \Theta ) < \alpha(E) N_{sim} (E | \Delta \Theta)$. The significance values evaluated from Eq.~\ref{eq:significance} can be converted into probability values. In particular, since $S^{2}$ is a random variable following a $\chi^{2}$ distribution with $1$ degree of freedom~\cite{Li:1983fv}, one can easily evaluate the probability of observing a value of $S^{2}$ larger than the one observed. In Table~\ref{tab:prob2} the maximum deviations from the null flux asymmetries are shown for each ROI used for our analysis, together with the corresponding probabilities of finding larger deviations. The last column of the table shows the probabilities of finding, in each ROI, at least one energy bin with a flux asymmetry larger than the maximum observed value. Again, these probabilities were calculated assuming that the flux asymmetries measured in each of the $17$ energy bins used for our analysis are uncorrelated. The observed deviations from the null flux asymmetries are statistically insignificant. 
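Eq.~\ref{eq:significance}, together with its sign convention, can be written compactly. The following is a minimal illustration of the Li \& Ma formula (the function name is ours, not the analysis code), with $N_{real}$ playing the role of the "on" counts and $N_{sim}$ that of the "off" counts:

```python
import math

def li_ma_significance(n_real, n_sim, alpha):
    """Signed significance of Eq. (significance), after Li & Ma (1983):
    positive when N_real > alpha * N_sim, negative otherwise.
    S**2 follows a chi-squared distribution with 1 degree of freedom
    under the null hypothesis."""
    t1 = n_real * math.log((1.0 + alpha) / alpha
                           * n_real / (n_real + n_sim))
    t2 = n_sim * math.log((1.0 + alpha)
                          * n_sim / (n_real + n_sim))
    # max() guards against tiny negative values from floating-point rounding
    s = math.sqrt(2.0) * math.sqrt(max(t1 + t2, 0.0))
    return s if n_real >= alpha * n_sim else -s
```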
\begin{table*}[!ht] \begin{tabular}{||c||c|c|c||} \hline Angular radius & Maximum deviation ($S_{max}$) & $P(|S_{max}|)$ & $P(|S|>|S_{max}|)$ \\ \hline $30 \hbox{$^\circ$}$ & 1.925 & 0.054 & 0.611 \\ \hline $45 \hbox{$^\circ$}$ & 0.419 & 0.675 & 1.000 \\ \hline $60 \hbox{$^\circ$}$ & -0.654 & 0.513 & 1.000 \\ \hline $90 \hbox{$^\circ$}$ & 1.026 & 0.305 & 0.998 \\ \hline \end{tabular} \caption{For each cone used for the flux asymmetry analysis the maximum deviation (either positive or negative) in terms of significance from the null value is shown with the corresponding probability of finding a larger significance value. The last column shows the probability of finding at least one energy bin with a larger flux asymmetry than the maximum observed value. \label{tab:prob2}} \end{table*} The count differences were converted into flux differences according to Eq.~\ref{eq:asymmetry2}. In Fig.~\ref{fig:e3counts} the asymmetry variable $dA'_{\Phi}(E|\Delta \Theta)/dE$ between the real and simulated fluxes from the Sun is shown for the four cones with angular radii of $30\hbox{$^\circ$}$, $45\hbox{$^\circ$}$, $60 \hbox{$^\circ$}$ and $90\hbox{$^\circ$}$. Again, no evidence of a CRE signal from the Sun is observed. Similar results are obtained when integral fluxes are analyzed.
\subsubsection{Evaluation of statistical upper limits on the CRE flux from the Sun} \begin{figure*} \includegraphics[width=0.48\linewidth]{Figure_4_a.eps} \includegraphics[width=0.48\linewidth]{Figure_4_b.eps} \includegraphics[width=0.48\linewidth]{Figure_4_c.eps} \includegraphics[width=0.48\linewidth]{Figure_4_d.eps} \caption{Statistical upper limits at confidence levels of $68\%$ (dotted lines), $95\%$ (dashed lines) and $99\%$ (continuous lines) for the CRE fluxes from the Sun, evaluated in cones with angular radii $\Delta \Theta = 30\hbox{$^\circ$}$ (top left panel), $45\hbox{$^\circ$}$ (top right panel), $60 \hbox{$^\circ$}$ (bottom left panel) and $90\hbox{$^\circ$}$ (bottom right panel).\label{fig:e3upperlimits}} \end{figure*} As in \S\ref{sec:fluxasym}, our analysis in this case does not find evidence of a CRE signal from the Sun, and so we set statistical upper limits on solar CRE fluxes. Given a cone with angular radius $\Delta \Theta$ centered on the Sun, the observed counts $N_{real}(E | \Delta \Theta)$ in the energy bin $E$ can be seen as a realization of a Poisson random variable. Assuming the hypothesis of a CRE signal from the Sun, these counts will be the sum of a signal contribution, $N_{Sun}(E | \Delta \Theta)$, plus a background contribution, $N_{bkg}(E | \Delta \Theta)$: \begin{equation} N_{real}(E | \Delta \Theta) = N_{Sun}(E | \Delta \Theta) + N_{bkg}(E | \Delta \Theta). \end{equation} Both $N_{Sun}(E | \Delta \Theta)$ and $N_{bkg}(E | \Delta \Theta)$ can be seen as Poisson random variables. The only information available about $N_{Sun}(E | \Delta \Theta)$ is that, in the hypothesis of a CRE signal from the Sun, its average value (which is unknown) must be non-negative: \begin{equation} \langle N_{Sun}(E | \Delta \Theta) \rangle \geq 0.
\end{equation} % On the other hand, the average value of $N_{bkg}(E | \Delta \Theta)$ can be estimated from the randomized data sets, and is given by: \begin{equation} \langle N_{bkg}(E | \Delta \Theta) \rangle = \alpha(E) N_{sim} (E | \Delta \Theta) \end{equation} where $\alpha(E)$ is the ratio between the total real and simulated events in the energy bin $E$. The goal of our analysis is to evaluate an upper limit on the average value $\langle N_{Sun}(E | \Delta \Theta) \rangle$, that will be converted into an upper limit on the CRE flux from the Sun after properly taking into account the detector acceptance and the live time. For our calculation we implemented the Bayesian method with the assumption of a uniform prior density. The mathematical details of the method can be found in Ref.~\cite{Cowan1998} (pp.~139-142). Fig.~\ref{fig:e3upperlimits} shows the statistical upper limits at different confidence levels on the CRE fluxes from different cones centered on the Sun and with angular radii ranging from $30\hbox{$^\circ$}$ to $90\hbox{$^\circ$}$. The results are consistent with those shown in Fig.~\ref{fig:e3upperlimits_fake}. In Table~\ref{tab:ul2} the values of the upper limits at $95\%$ confidence level for the flux asymmetry $dA'_{\Phi}(E | \Delta \Theta)/dE$ in the different ROIs are summarized. The calculated upper limits are also expressed in terms of fractional excess with respect to the expected isotropic flux. 
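The Bayesian upper-limit construction with a uniform prior can be sketched numerically. For a Poisson observation $n_{obs}$ with known mean background $b$ and flat prior on the signal mean $s \geq 0$, the upper limit $s_{up}$ at confidence level CL satisfies $1-\mathrm{CL} = P(N \leq n_{obs}\,|\,s_{up}+b)\,/\,P(N \leq n_{obs}\,|\,b)$. The bisection solver below is our own minimal illustration of the method described in Ref.~\cite{Cowan1998}, not the analysis code:

```python
import math

def poisson_cdf(n, mu):
    """P(N <= n) for a Poisson random variable with mean mu."""
    term = math.exp(-mu)
    total = term
    for m in range(1, n + 1):
        term *= mu / m
        total += term
    return total

def bayesian_upper_limit(n_obs, b, cl=0.95):
    """Bayesian upper limit s_up on the Poisson signal mean with known
    background b, for a uniform prior on s >= 0, found by bisection on
    the monotonically decreasing posterior tail."""
    target = (1.0 - cl) * poisson_cdf(n_obs, b)
    lo = 0.0
    hi = n_obs + b + 10.0 * math.sqrt(n_obs + b + 1.0) + 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if poisson_cdf(n_obs, mid + b) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For $n_{obs}=0$ the limit reduces to the familiar $s_{up} = -\ln(1-\mathrm{CL})$, independent of the background.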
\begin{table*} \begin{tabular}{||c||c|c||c|c||c|c||c|c||} \hline \hline & \multicolumn{2}{|c||}{$\Delta \Theta = 30\hbox{$^\circ$}$} & \multicolumn{2}{|c||}{$\Delta \Theta = 45\hbox{$^\circ$}$} & \multicolumn{2}{|c||}{$\Delta \Theta = 60\hbox{$^\circ$}$} & \multicolumn{2}{|c||}{$\Delta \Theta = 90\hbox{$^\circ$}$} \\ \hline Energy & Flux UL & Fractional & Flux UL & Fractional & Flux UL & Fractional & Flux UL & Fractional \\ (GeV) & ($\units{GeV^{-1}m^{-2}s^{-1}}$) & UL & ($\units{GeV^{-1}m^{-2}s^{-1}}$) & UL & ($\units{GeV^{-1}m^{-2}s^{-1}}$) & UL & ($\units{GeV^{-1}m^{-2}s^{-1}}$) & UL \\ \hline \hline $ 60.4 - 68.2 $ & $ 7.344 \cdot 10^{-6} $ & $ 0.017 $ & $ 7.865 \cdot 10^{-6} $ & $ 0.008 $ & $ 8.390 \cdot 10^{-6} $ & $ 0.005 $ & $ 1.823 \cdot 10^{-5} $ & $ 0.006 $ \\ $ 68.2 - 77.4 $ & $ 4.366 \cdot 10^{-6} $ & $ 0.015 $ & $ 6.652 \cdot 10^{-6} $ & $ 0.010 $ & $ 1.033 \cdot 10^{-5} $ & $ 0.009 $ & $ 1.245 \cdot 10^{-5} $ & $ 0.006 $ \\ $ 77.4 - 88.1 $ & $ 3.060 \cdot 10^{-6} $ & $ 0.015 $ & $ 6.279 \cdot 10^{-6} $ & $ 0.014 $ & $ 5.237 \cdot 10^{-6} $ & $ 0.007 $ & $ 9.069 \cdot 10^{-6} $ & $ 0.006 $ \\ $ 88.1 - 101 $ & $ 2.987 \cdot 10^{-6} $ & $ 0.022 $ & $ 5.211 \cdot 10^{-6} $ & $ 0.018 $ & $ 6.548 \cdot 10^{-6} $ & $ 0.013 $ & $ 1.033 \cdot 10^{-5} $ & $ 0.010 $ \\ $ 101 - 116 $ & $ 2.162 \cdot 10^{-6} $ & $ 0.023 $ & $ 2.296 \cdot 10^{-6} $ & $ 0.012 $ & $ 4.093 \cdot 10^{-6} $ & $ 0.012 $ & $ 6.541 \cdot 10^{-6} $ & $ 0.010 $ \\ $ 116 - 133 $ & $ 1.471 \cdot 10^{-6} $ & $ 0.025 $ & $ 1.710 \cdot 10^{-6} $ & $ 0.013 $ & $ 3.363 \cdot 10^{-6} $ & $ 0.015 $ & $ 3.091 \cdot 10^{-6} $ & $ 0.007 $ \\ $ 133 - 154 $ & $ 1.090 \cdot 10^{-6} $ & $ 0.029 $ & $ 1.631 \cdot 10^{-6} $ & $ 0.020 $ & $ 2.384 \cdot 10^{-6} $ & $ 0.017 $ & $ 2.683 \cdot 10^{-6} $ & $ 0.010 $ \\ $ 154 - 180 $ & $ 8.568 \cdot 10^{-7} $ & $ 0.033 $ & $ 1.034 \cdot 10^{-6} $ & $ 0.019 $ & $ 1.729 \cdot 10^{-6} $ & $ 0.018 $ & $ 1.912 \cdot 10^{-6} $ & $ 0.010 $ \\ $ 180 - 210 $ & $ 8.360 \cdot 
10^{-7} $ & $ 0.052 $ & $ 1.257 \cdot 10^{-6} $ & $ 0.037 $ & $ 1.185 \cdot 10^{-6} $ & $ 0.020 $ & $ 1.614 \cdot 10^{-6} $ & $ 0.014 $ \\ $ 210 - 246 $ & $ 8.587 \cdot 10^{-7} $ & $ 0.087 $ & $ 7.968 \cdot 10^{-7} $ & $ 0.038 $ & $ 8.626 \cdot 10^{-7} $ & $ 0.024 $ & $ 9.827 \cdot 10^{-7} $ & $ 0.014 $ \\ $ 246 - 291 $ & $ 2.983 \cdot 10^{-7} $ & $ 0.050 $ & $ 4.946 \cdot 10^{-7} $ & $ 0.038 $ & $ 6.749 \cdot 10^{-7} $ & $ 0.031 $ & $ 1.062 \cdot 10^{-6} $ & $ 0.024 $ \\ $ 291 - 346 $ & $ 3.630 \cdot 10^{-7} $ & $ 0.100 $ & $ 4.318 \cdot 10^{-7} $ & $ 0.056 $ & $ 5.716 \cdot 10^{-7} $ & $ 0.043 $ & $ 7.422 \cdot 10^{-7} $ & $ 0.028 $ \\ $ 346 - 415 $ & $ 1.681 \cdot 10^{-7} $ & $ 0.074 $ & $ 2.048 \cdot 10^{-7} $ & $ 0.043 $ & $ 2.997 \cdot 10^{-7} $ & $ 0.038 $ & $ 4.661 \cdot 10^{-7} $ & $ 0.030 $ \\ $ 415 - 503 $ & $ 1.205 \cdot 10^{-7} $ & $ 0.103 $ & $ 1.504 \cdot 10^{-7} $ & $ 0.060 $ & $ 2.176 \cdot 10^{-7} $ & $ 0.051 $ & $ 2.979 \cdot 10^{-7} $ & $ 0.035 $ \\ $ 503 - 615 $ & $ 1.113 \cdot 10^{-7} $ & $ 0.184 $ & $ 1.409 \cdot 10^{-7} $ & $ 0.108 $ & $ 1.457 \cdot 10^{-7} $ & $ 0.066 $ & $ 2.082 \cdot 10^{-7} $ & $ 0.047 $ \\ $ 615 - 772 $ & $ 6.154 \cdot 10^{-8} $ & $ 0.197 $ & $ 6.662 \cdot 10^{-8} $ & $ 0.098 $ & $ 1.125 \cdot 10^{-7} $ & $ 0.097 $ & $ 2.038 \cdot 10^{-7} $ & $ 0.088 $ \\ $ 772 - 1000 $ & $ 4.344 \cdot 10^{-8} $ & $ 0.286 $ & $ 5.276 \cdot 10^{-8} $ & $ 0.162 $ & $ 9.005 \cdot 10^{-8} $ & $ 0.161 $ & $ 1.617 \cdot 10^{-7} $ & $ 0.145 $ \\ \hline \hline \end{tabular} \caption{Statistical upper limits at $95\%$ confidence level on the CRE flux asymmetries between the real and the simulated Sun, generated by the event shuffling technique, evaluated in cones of angular radii $\Delta \Theta = 30\hbox{$^\circ$}, 45\hbox{$^\circ$}, 60\hbox{$^\circ$}, 90\hbox{$^\circ$}$. 
The upper limits are also expressed in terms of fractions of the isotropic CRE flux from the simulated Sun.} \label{tab:ul2} \end{table*} \subsubsection{Spherical harmonics analysis} \begin{figure*} \includegraphics[width=0.48\linewidth]{Figure_5_a.eps} \includegraphics[width=0.48\linewidth]{Figure_5_b.eps} \includegraphics[width=0.48\linewidth]{Figure_5_c.eps} \includegraphics[width=0.48\linewidth]{Figure_5_d.eps} \caption{Angular power spectra for different minimum energies: $60 \units{GeV}$ (top left panel), $100\units{GeV}$ (top right panel), $200 \units{GeV}$ (bottom left panel), $500\units{GeV}$ (bottom right panel). The points show the quantities $\hat{C}_{l} - C_{N}$. The dashed lines and the continuous lines show respectively the $3\sigma$ and $5\sigma$ intervals for the probability distribution of the white noise. \label{fig:powspectra}} \end{figure*} The previous analyses excluded variations of the CRE flux correlated with the Sun's direction. However, due to the effects of the heliospheric magnetic field and the geomagnetic field, a CRE signal from the Sun could produce a flux excess from a direction shifted with respect to the Sun's position. Also, since CREs from the Sun are expected to be spread over a cone with a finite angular radius, an excess of CREs from the Sun could induce an anisotropy on a large angular scale. To investigate this possibility we implemented a more general analysis method, based on spherical harmonics analysis of a fluctuation sky map. This method was applied in Ref.~\cite{Ackermann:2010ip} to search for anisotropies in the CRE flux in the Galactic reference frame, while in this paper we adopted the custom coordinates in Eq.~\ref{eq:coord} derived from the ecliptic reference frame. The fluctuation sky map was built starting from the real sky map and from the simulated one, which was generated using the randomized data sets.
The analysis was performed on sky maps of the counts integrated above a given energy in order to retain sufficient statistics in each energy bin. The fluctuation in the $i$th pixel is: \begin{equation} f_{i}(>E) = \cfrac{N_{i,real}(>E) - \alpha N_{i,sim}(>E)} {\alpha N_{i,sim}(>E)}. \label{eq:fluctuation} \end{equation} The fluctuation sky map is then expanded in the basis of spherical harmonics to obtain the set of coefficients $a_{lm}$. The coefficients of the angular power spectrum are given by the variance of the $2l+1$ $a_{lm}$ coefficients at each multipole: \begin{equation} \hat{C}_{l} = \cfrac{1}{2l+1} \sum_{m=-l}^{l} | a_{lm} |^{2} \end{equation} Each coefficient $\hat{C}_{l}$ characterizes the intensity of the fluctuations on an angular scale of $\sim 180\hbox{$^\circ$} /l$. In the case of an isotropic flux, each of the coefficients $C_{l}$ can be seen as a random variable with a true value (white noise) given by: \begin{equation} C_{l} = C_{N} = \cfrac{4 \pi}{N} \end{equation} where $N$ is the total number of observed events. The confidence intervals for the $\hat{C}_{l}$ can be evaluated by noting that the random variable $(2l+1)\hat{C}_{l}/C_{l}$ follows a $\chi^{2}_{2l+1}$ distribution. In Fig.~\ref{fig:powspectra} the angular power spectra after the subtraction of the white noise contribution $C_{N}$ are shown for four different minimum energies ($60\units{GeV}$, $100\units{GeV}$, $200\units{GeV}$ and $500\units{GeV}$). The curves show the $3\sigma$ and $5\sigma$ probability intervals assuming the hypothesis of an isotropic CRE flux. All the data points lie within the $3\sigma$ interval, indicating that the measurements are consistent with the hypothesis of an isotropic CRE flux. Hence, we conclude that no preferred CRE arrival directions are observed. 
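The estimator $\hat{C}_{l}$ and the white-noise level $C_{N}$ can be sketched in a few lines. This is a minimal illustration only; the nested-list layout of the $a_{lm}$ input is an assumption of the sketch, not a convention of the analysis:

```python
import math

def power_spectrum(alm):
    """C_l estimator: for each multipole l, the mean of |a_lm|^2 over the
    2l+1 values of m.  Here alm[l] holds the sequence (a_{l,-l}, ..., a_{l,l})."""
    return [sum(abs(a) ** 2 for a in coeffs) / (2 * l + 1)
            for l, coeffs in enumerate(alm)]

def white_noise(n_events):
    """Expected white-noise power C_N = 4*pi/N for an isotropic flux of N events."""
    return 4.0 * math.pi / n_events
```

The $3\sigma$ and $5\sigma$ bands in Fig.~\ref{fig:powspectra} then follow from the fact that $(2l+1)\hat{C}_{l}/C_{l}$ is $\chi^{2}_{2l+1}$-distributed under isotropy.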
\section{Solar CRE fluxes from dark matter} We now determine constraints on DM model parameters by comparing our upper limits on solar CRE fluxes to the predicted fluxes of the two DM annihilation scenarios considered in Ref.~\cite{Schuster:2009fc}: (1) capture of DM particles by the Sun via elastic scattering interactions and subsequent annihilation to $e^{\pm}$ through an intermediate state $\phi$, and (2) capture of DM particles by the Sun via inelastic scattering interactions and subsequent annihilation of the captured DM particles outside the Sun directly to $e^{\pm}$. \subsection{Dark matter annihilation through an intermediate state} \label{sec:intstate} In this case we assume the standard scenario for WIMP capture by the Sun, namely that DM particles $\chi$ are captured by the Sun through elastic scattering interactions and then continue to lose energy through subsequent scatterings, eventually thermalizing and sinking to the core where they annihilate. In general, the only annihilation products which can escape the Sun are neutrinos; photons and charged particle final states are trapped by interactions with the dense matter in the Sun. However, recently scenarios have been proposed in which DM particles annihilate into a light intermediate state $\phi$, i.e., $\chi\chi \rightarrow \phi\phi$, with the $\phi$ subsequently decaying to standard model particles; these models have been suggested to provide a means of explaining an excess in the CRE spectrum reported by ATIC and \emph{Fermi} and in the positron fraction reported by PAMELA by DM annihilation or decay \cite{Pospelov:2008jd,Cholis:2008wq,Cholis:2008qq,Kuhlen:2009is,Bergstrom:2009fa, Grasso:2009ma,Cirelli:2010nh}. For the case considered in Ref.~\cite{Schuster:2009fc}, the $\phi$ are assumed to be able to escape the Sun without further interactions, with each $\phi$ decaying to an $e^{\pm}$ pair. 
If this decay happens outside the surface of the Sun, the $e^{\pm}$ could reach the Earth and may be detectable in the form of an observed excess of CREs from the direction of the Sun. The DM particles are assumed to annihilate at rest in the core of the Sun, so in the lab frame the energy of the $\phi$, $E_{\phi}$, is equal to the DM particle mass $m_{\chi}$. We assume $\phi$ to be a light scalar such that $m_{\phi} \ll m_{\chi}$, hence the $\phi$ are relativistic. The energy of the $\phi$ is described by the parameter $\beta_{\rm cl} = v_{\rm cl}/c$ where $v_{\rm cl}$ is the relative velocity of the lab frame and the $\phi$ rest frame (hereafter, the CM frame). The $\phi$ are assumed to lose a negligible amount of energy exiting the Sun. A $\phi$ decays into an $e^{\pm}$ pair with an isotropic angular distribution in the CM frame. Both the $e^{+}$ and $e^{-}$ have the same energy in the CM frame, parameterized by $\beta_{\rm jc} = v_{\rm jc}/c = \sqrt{1-(4 m_{e}^{2}/m_{\phi}^{2})}$, where $m_{e}$ is the electron mass and $v_{\rm jc}$ is the velocity of particle $j$ (the electron or positron) in the CM frame. However, in the lab frame the $e^{\pm}$ are boosted and so the angular distribution is no longer isotropic, and the energy of an $e^{\pm}$ in the lab frame depends on the angle at which it is emitted relative to the direction in which the $\phi$ is traveling. In the lab frame the angle at which the $e^{\pm}$ is emitted is denoted $\theta_{\rm lab}$; in the CM frame it is $\theta_{\rm cm}$. The $e^{\pm}$ are assumed to travel in straight lines and to suffer no energy losses before reaching the detector, i.e., the effects of magnetic fields are assumed to be negligible. 
The flux of $e^{\pm}$ per time, area, and energy per cosine of the detector angle from the direction $\theta_{\rm det}$ is given by integrating the differential rate of decay of $\phi$ along the line of sight in the direction of $\theta_{\rm det}$, \begin{eqnarray} \label{eq:flux} \frac{dN}{dt\, dA\, d\cos\theta_{\rm det} dE_{\rm det}}(\theta_{\rm det},E_{\rm det}) = \\ \nonumber \int_{0}^{\infty}\!\!\!dR\; \frac{dN}{dV dt}\; \frac{d\Gamma}{d\cos\theta_{\rm det}} \delta(E_{\rm det} - E), \end{eqnarray} where $R$ is the distance from the detector in the line-of-sight direction defined by $\theta_{\rm det}$. At each $\theta_{\rm det}$ we exclude the contribution from $\phi$ decays occurring in the range of $R$ within the surface of the Sun. The first term in Eq.~\ref{eq:flux} is the rate of production of $e^{\pm}$ (equal to twice the rate of $\phi$ decays) per volume at a volume element a distance $r$ from the center of the Sun, and is given by \begin{equation} \label{eq:decayrate} \frac{dN}{dV dt}\left(r\left(\theta_{\rm det},R\right)\right) = 2\, \frac{C_{\odot}e^{-r/L}}{4 \pi r^{2} L} \end{equation} where $C_{\odot}$ is the capture rate of DM particles in the Sun and the characteristic decay length $L \equiv \gamma_{\rm cl} c \tau$, where $\gamma_{\rm cl} = \gamma(\beta_{\rm cl})$ with $\gamma(\beta) = 1/\sqrt{1-\beta^{2}}$ and $\tau$ is the lifetime of the $\phi$. Equilibrium is assumed, i.e., for every two DM particles that are captured, two annihilate, so $C_{\odot}$ also represents the rate of production of $\phi$ particles by annihilation at the center of the Sun. For the range of scattering cross-sections considered here, the capture rate is in general sufficiently large that equilibrium is a valid assumption, although we discuss this issue in greater detail when presenting our constraints. We consider separately the cases of solar capture by spin-independent scattering and spin-dependent scattering. 
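Eq.~\ref{eq:decayrate} can be sketched as a short function (illustrative only; names and units are ours). Integrating it over all space recovers a total production rate of $2C_{\odot}$, as expected for two $e^{\pm}$ per $\phi$ decay at equilibrium:

```python
import math

def epm_production_rate(r, capture_rate, decay_length):
    """e+e- production rate per unit volume at distance r from the Sun's
    center, Eq. (decayrate): 2 C_sun exp(-r/L) / (4 pi r^2 L), where
    L = gamma_cl * c * tau is the characteristic decay length of the phi."""
    return (2.0 * capture_rate * math.exp(-r / decay_length)
            / (4.0 * math.pi * r ** 2 * decay_length))
```

Multiplying by the shell volume $4\pi r^{2}\,dr$ leaves the simple exponential profile $2C_{\odot}e^{-r/L}\,dr/L$, which is what concentrates the signal near the Sun for short decay lengths.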
Using DarkSUSY~\cite{Gondolo:2004sc}, we calculate the capture rate via elastic scattering $C_{\odot}$ as a function of $m_{\chi}$. A Maxwell-Boltzmann velocity distribution is assumed, with the solar velocity relative to the DM rest frame $v_{\odot} = 220 \units{km/s}$, the DM velocity dispersion $\tilde{v} = 270 \units{km/s}$, and the local DM density $\rho_{\rm DM} = 0.3 \units{GeV/cm^{3}}$. For the case of spin-independent scattering we set the spin-dependent scattering cross-section $\sigma_{\rm SD}=0 \units{cm^{2}}$ and calculate the capture rates for $\sigma_{\rm SI}=10^{-43} \units{cm^{2}}$; for spin-dependent scattering we set $\sigma_{\rm SI}=0 \units{cm^{2}}$ and calculate the capture rates for $\sigma_{\rm SD}=10^{-40} \units{cm^{2}}$. The reference values of $\sigma_{\rm SI}$ and $\sigma_{\rm SD}$ roughly correspond to the current experimental upper limits on these parameters. For the range of parameters considered here the capture rate scales linearly with $\sigma_{\rm SD}$ and $\sigma_{\rm SI}$. The second term in Eq.~\ref{eq:flux} is the angular distribution of the $e^{\pm}$ from the decay of a $\phi$, as observed in the lab frame, and expressed in terms of detector angle, \begin{equation} \frac{d\Gamma}{d\cos\theta_{\rm det}} = \frac{d\Gamma}{d\cos\theta_{\rm cm}} \; \left| \frac{d\cos\theta_{\rm cm}}{d\cos\theta_{\rm lab}}\right| \; \frac{d\cos\theta_{\rm lab}}{d\cos\theta_{\rm det}}. \end{equation} In the rest frame of the $\phi$ the decays are isotropic, so, after integrating over the azimuthal angle we can write \begin{equation} \label{eq:angdistrib} \frac{d\Gamma}{d\cos\theta_{\rm cm}} = 1/2.
\end{equation} The transformation between the CM angle and lab angle is given by~\cite{Dick:2009} \begin{equation} \left| \frac{d\cos\theta_{\rm cm}}{d\cos\theta_{\rm lab}} \right|= \frac{[\gamma_{\rm cl}^{2} (\alpha + \cos\theta_{\rm cm})^2+\sin^{2} \theta_{\rm cm}]^{3/2}}{\left|\gamma_{\rm cl}(1 + \alpha \cos\theta_{\rm cm})\right|}, \end{equation} with $\gamma_{\rm cl} = \gamma(\beta_{\rm cl})$ and $\alpha = \beta_{\rm cl}/\beta_{\rm jc} $. The lab and detector angles are related by \begin{equation} \theta_{\rm lab}=\theta_{\rm det} + \sin^{-1}\left(\frac{R \sin\theta_{\rm det}}{r}\right), \end{equation} which gives \begin{equation} \frac{d\cos\theta_{\rm lab}}{d\cos\theta_{\rm det}} = \frac{(| D_{\odot} - R\cos(\theta_{\rm det}) | + R\cos(\theta_{\rm det}) )^{2}} {r | D_{\odot} - R\cos(\theta_{\rm det}) |}. \end{equation} The delta function in Eq.~\ref{eq:flux} enforces that the energy observed at the detector is equal to the energy of the emitted $e^{\pm}$ boosted to the lab frame, \begin{equation} \label{eq:energydelta} E(\theta_{\rm cm}) = \frac{1}{2} \gamma_{\rm cl} m_{\phi } (1 + \beta_{\rm cl} \cos\theta_{\rm cm}). \end{equation} Note that because the energy in the lab frame depends only on $\theta_{\rm cm}$, and because $\theta_{\rm lab}$ is determined by $\theta_{\rm cm}$, fixing $E_{\rm det}$ corresponds to selecting only CREs emitted at the corresponding $\theta_{\rm lab}$. For a specified $\theta_{\rm det}$, the $\theta_{\rm lab}$ of particles observed along the line-of-sight $R$ varies, hence the observed energy of CREs emitted from a point along the line-of-sight is a function of $R$, i.e., $E_{\rm det}(R)$. We rewrite the delta function in Eq.~\ref{eq:flux} as the composition \begin{equation} \delta(E_{\rm det} - E(R)) =\cfrac{\delta(R-R_{0})}{\left|\cfrac{{\rm d}E}{{\rm d}R}(R_{0})\right|} \end{equation} and then perform the integration over $R$.
The parameter $R_{0}$ is the value of $R$ along the line-of-sight in the direction $\theta_{\rm det}$ where $\theta_{\rm lab}$ takes the value required to generate CREs with a given $E_{\rm det}$. We evaluate the CRE flux within a ROI of $30 \hbox{$^\circ$}$ centered on the Sun, and fix the value of $m_{\phi}=1\units{GeV}$. We calculate limits for three values of the decay length $L=5\units{AU}$, $1\units{AU}$, and $0.1\units{AU}$. Decreasing $L$ increases the observed CRE flux by condensing the region within which most $\phi$ decay. However, we emphasize that even for as large a decay length as $L=5\units{AU}$, the signal in the energy range used in this analysis is strongly peaked in the direction of the Sun and extends only a few degrees at most. Since the $\phi$ in this scenario are relativistic, in the lab frame the emitted $e^{\pm}$ are boosted along the direction the $\phi$ is moving, and so only $\phi$ exiting the Sun very close to the direction of the detector will produce decay products with large enough $\theta_{\rm lab}$ to reach the detector. In particular, for the $e^{\pm}$ to have sufficient energy to fall within the energy range of this analysis, a significant fraction of the $\phi$ energy must be deposited into the $e^{\pm}$ that reach the detector. This only occurs for $e^{\pm}$ emitted with very small $\theta_{\rm lab}$. This also leads to an energy dependence of the angular signal: for a given DM scenario, the angular extent of the flux at high energies is smaller than at lower energies. We note that decreasing $m_{\phi}$ for a fixed $m_{\chi}$ narrows the angular extent of the signal, and therefore has little impact on our results. We confirmed that for $m_{\phi}$ as large as $10\units{GeV}$, the cross-section limits vary negligibly except for a slight weakening of the limit at the lowest end of the $m_{\chi}$ range considered here. 
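The boost kinematics underlying this behavior, in particular Eq.~\ref{eq:energydelta} and the definitions of $\gamma(\beta)$ and $\beta_{\rm jc}$, can be sketched as follows (a minimal illustration in GeV units with the $m_{e} \ll m_{\phi}$ limit; function names are ours):

```python
import math

def lorentz_gamma(beta):
    """gamma(beta) = 1 / sqrt(1 - beta^2)."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def beta_jc(m_phi, m_e=0.000511):
    """CM-frame e+/- velocity: beta_jc = sqrt(1 - 4 m_e^2 / m_phi^2),
    with masses in GeV; close to 1 for m_phi >> m_e."""
    return math.sqrt(1.0 - 4.0 * m_e ** 2 / m_phi ** 2)

def lab_energy(cos_theta_cm, m_phi, beta_cl):
    """Lab-frame e+/- energy, Eq. (energydelta):
    E = (1/2) gamma_cl m_phi (1 + beta_cl cos(theta_cm)).
    Forward emission (cos_theta_cm = 1) carries nearly the full phi energy."""
    return 0.5 * lorentz_gamma(beta_cl) * m_phi * (1.0 + beta_cl * cos_theta_cm)
```

In this limit the forward and backward members of a pair share the full $\phi$ energy $\gamma_{\rm cl} m_{\phi} = m_{\chi}$, which is why only very small $\theta_{\rm lab}$ yields CREs energetic enough to fall in the analysis range.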
\begin{figure} \includegraphics[width=0.48\textwidth]{Figure_6.eps} \caption{Constraints on DM annihilation to $e^{+}e^{-}$ via an intermediate state, from solar CRE flux upper limits. Solar capture of DM is assumed to take place via spin-independent scattering. The constraints obtained for three values of the decay length $L$ of the intermediate state are shown. Models above the curves exceed the solar CRE flux upper limit at $95\%$ CL for a $30\hbox{$^\circ$}$ ROI centered on the Sun. \label{fig:intstatelimSI}} \end{figure} \begin{figure} \includegraphics[width=0.48\textwidth]{Figure_7.eps} \caption{Constraints on DM parameters for annihilation to $e^{+}e^{-}$ via an intermediate state as in Fig.~\ref{fig:intstatelimSI}, except assuming solar capture by spin-dependent scattering. \label{fig:intstatelimSD}} \end{figure} Figs.~\ref{fig:intstatelimSI} and~\ref{fig:intstatelimSD} show the constraints on $\sigma_{\rm SI}$ and $\sigma_{\rm SD}$ as a function of $m_{\chi}$, derived from the upper limits on the solar CRE flux obtained in~\S\ref{sec:isotropic}. For each $m_{\chi}$ the CRE flux in each energy bin used in this analysis was calculated, and the limit on the scattering cross-section was set by the energy bin providing the strongest constraint. The jagged shape of the curve reflects the transitions between the energy bins setting the strongest limit. Models above the curves exceed the $95\%$ CL solar CRE flux upper limit for the $30\hbox{$^\circ$}$ ROI in at least one energy bin. Ref.~\citep{Schuster:2009fc} notes that due to the Parker spiral shape of the Sun's magnetic field, CREs emitted from the Sun may be deflected in such a way as to appear to originate from a source displaced by up to $30\hbox{$^\circ$}$ from the Sun's position. 
If we instead consider larger ROIs centered on the Sun in order to accommodate the expected angular distribution of the flux of a displaced source, the constraints derived on the scattering cross-sections would be weakened by $\sim 30\%$ using the flux upper limit for the $45\hbox{$^\circ$}$ ROI, or by a factor of $\sim2$ if the $60\hbox{$^\circ$}$ ROI flux upper limit were used. The bounds on the scattering cross-sections we derive for $e^{\pm}$ final states are significantly below the typical constraints from direct detection experiments, and so we are prompted to examine more closely the validity of our assumption of equilibrium. For the limiting values we derive on elastic scattering cross-sections, capture and annihilation are effectively in equilibrium assuming an annihilation cross-section consistent with thermal relic dark matter $\langle \sigma v \rangle = 3 \times 10^{-26} \units{cm^{3}} \units{s^{-1}}$ for all values of the decay length $L$ considered here. In particular, for the limiting values of the scattering cross-sections the flux suppression relative to the equilibrium flux for any mass we consider is always less than 3\% (following the standard calculation implemented in Ref.~\cite{Gondolo:2004sc}), and thus we work under the assumption of equilibrium, noting that there remain uncertainties in the capture rate calculation at the level of a factor of a few (e.g., \cite{Sivertsson:2009nx}). Decays to $e^{\pm}$ are generally accompanied by final state radiation (FSR), so these scenarios can also be constrained by solar gamma-ray observations. Ref.~\cite{Schuster:2009au} derived bounds on the rate of decay to $e^{\pm}$ by requiring that the predicted FSR does not exceed the solar gamma-ray emission measured by \emph{Fermi}. 
However, the constraints we obtained on the elastic scattering cross-sections from the solar CRE flux correspond to constraints on the annihilation rate roughly 2-4 orders of magnitude stronger than those placed by gamma-ray constraints on FSR. The strength of the CRE limits relative to those from FSR increases for larger $m_\chi$. The relative strength of the constraints derived from the CRE flux limits compared to those from the gamma-ray measurements can be attributed in part to the fact that the FSR flux produced by annihilation to $e^{+}e^{-}$ is $\sim2$-3 orders of magnitude smaller than the CRE flux. FSR emission also must compete with a known background gamma-ray flux from the Sun~\cite{Giglietto:2010}. Furthermore, the FSR constraints in Ref.~\cite{Schuster:2009au} were derived using the preliminary \emph{Fermi} measurement of the solar spectrum which extends only to $10 \units{GeV}$, while this analysis spans CRE energies from $60 \units{GeV}$ to $\sim 1 \units{TeV}$. Since the FSR photon spectrum is harder than the measured solar gamma-ray spectrum, the strongest constraints are obtained from the highest energy bin in that analysis. Due to the fact that the FSR spectrum associated with the DM mass range considered in this analysis extends substantially higher than $10 \units{GeV}$, the existing FSR constraints are significantly less competitive than our CRE constraints. A measurement of the solar gamma-ray emission at higher energies could likely strengthen the FSR constraints to some extent. \subsection{Inelastic dark matter} We now consider the flux of $e^{\pm}$ from annihilation of DM particles captured by the Sun but with orbits which take them outside the surface of the Sun. 
In a standard WIMP scenario, DM particles captured by the Sun via elastic scattering quickly undergo subsequent scatterings which cause them to settle to the core, and hence the fraction of captured DM particles outside the surface of the Sun at any given time is negligible \cite{Sivertsson:2009nx}. However, this is not necessarily the case for inelastic dark matter (iDM)~\cite{TuckerSmith:2001hy, Finkbeiner:2007kk,ArkaniHamed:2008qn}. This class of models has garnered interest recently in light of claims that iDM could naturally explain such observations as the $511 \units{keV}$ line observed by INTEGRAL/SPI~\cite{Finkbeiner:2007kk} and the apparently inconsistent results of DAMA/LIBRA and CDMS if the DM scattered inelastically and thereby transitioned to an excited state with a slightly heavier mass. For a DM particle $\chi$ to scatter inelastically off a nucleon $N$ via the process $\chi + N \rightarrow \chi^{\star} + N$, the DM must have energy $E \ge \delta(1+m_{\chi}/m_{N})$, where $\delta=m_{\chi^{\star}}-m_{\chi}$. Particles captured by the Sun via inelastic scattering typically lose enough energy after only a few interactions to prevent further energy loss by scattering. If the elastic scattering cross-section is sufficiently small ($\sigma_{\rm n} \lesssim 10^{-47} \units{cm^{2}}$, e.g., Ref.~\cite{Schuster:2009fc}), the captured particles will be unable to thermalize and settle to the core, and instead will remain on relatively large orbits. As a result, the density of captured DM particles outside the Sun may not be negligible in an iDM scenario, and the annihilation of those particles to $e^{\pm}$ could thus produce an observable flux of CREs from the direction of the Sun.
While it is not necessary for DM to annihilate primarily to $e^{\pm}$ in order to explain the direct detection results (since direct detection experiments are not sensitive to the dominant annihilation channels), leptophilic iDM is strongly motivated since it could provide a consistent interpretation of multiple data sets \cite{Finkbeiner:2007kk, ArkaniHamed:2008qn,Batell:2009zp,Cholis:2009va}. In the following we will assume that the DM particles annihilate at rest and thus the energy of the $e^{\pm}$ produced in annihilation is well-approximated by $E_{\rm CRE} = m_{\chi}$. We will further assume that the CREs suffer no significant energy losses between production at the surface of the Sun and arrival at the detector, and so we expect a mono-energetic flux of CREs in this scenario. For simplicity, we assume all annihilations occur at the surface of the Sun (as in \cite{Schuster:2009fc}), since the density of DM falls off quickly with distance from the Sun. Naturally, $e^{\pm}$ produced in annihilations inside the surface of the Sun cannot escape the Sun, and thus do not produce a detectable flux. The isotropic flux of $e^{\pm}$ particles from the Sun is \begin{equation} \label{eq:idmfluxeqn} F = 2\, \frac{\Gamma_{\rm A, out}}{4 \pi D_{\odot}^{2}} \end{equation} where $\Gamma_{\rm A, out}$ is the annihilation rate of DM particles outside the surface of the Sun. The factor of 2 accounts for the fact that 2 CREs are emitted per annihilation of a pair of DM particles. However, it is also necessary to take into account that CREs produced on the surface of the Sun opposite to the Earth are extremely unlikely to reach the detector, so we assume the flux of CREs \emph{observable} at the detector is a factor of 2 smaller than that given by Eq.~\ref{eq:idmfluxeqn}. 
Following Refs.~\citep{Nussinov:2009ft, Menon:2009qj}, we assume that capture and annihilation of particles in this scenario is in equilibrium, i.e., $\Gamma_{\rm A} = \frac{1}{2}C_{\odot}$, where $\Gamma_{\rm A}$ is the total annihilation rate at all radii. We emphasize, however, that due to significant uncertainties in the density profile of the captured iDM particles, the assumption of equilibrium is less robust in this case than in the elastic scattering scenario. Ref.~\cite{Nussinov:2009ft} concludes that equilibrium will be attained, but notes the sizable uncertainties in this calculation. On the other hand, for the limiting cross-sections we determine for this scenario, the condition for equilibrium given in Ref.~\cite{Menon:2009qj} for inelastic capture requires a minimum annihilation cross-section that ranges from more than an order of magnitude below the thermal-relic value (for small masses and $\delta=110\units{keV}$) to a factor of $\sim 3$ above it (for larger masses and $\delta=140\units{keV}$). In light of the uncertainties in this calculation, we again work under the assumption of equilibrium when deriving limits on the scattering cross-section. Defining $f_{\rm out}$ as the fraction of captured DM particles outside the Sun at a given instant, we have \begin{equation} \Gamma_{\rm A, out} = f_{\rm out} \Gamma_{\rm A} = \frac{1}{2}f_{\rm out} C_{\odot}. \end{equation} The capture rate of iDM particles by the Sun $C_{\odot}$ was calculated by Refs.~\citep{Nussinov:2009ft, Menon:2009qj}. Both studies note that there are uncertainties in this calculation at the factor of a few level. We use the capture rate as a function of DM mass $m_{\chi}$ and mass splitting $\delta$ as given in Fig.~2 of Ref.~\citep{Menon:2009qj}, and interpolate the results shown in that figure.
The capture rates were calculated assuming the following parameters: the velocity of the Sun in the DM rest frame $v_{\odot} = 250 \units{km/s}$, the DM velocity dispersion $\tilde{v} = 250 \units{km/s}$, the local DM density $\rho_{\rm DM} = 0.3 \units{GeV/cm^{3}}$, and the cross-section per nucleon in the elastic limit $\sigma_{0} = 10^{-40} \units{cm^{2}}$. The relation between the total inelastic scattering cross-section and the total elastic scattering cross-section is given in Eq.~7 of Ref.~\citep{Menon:2009qj}. The capture rate scales linearly with $\rho_{\rm DM}$ and $\sigma_{0}$, while the dependence on $v_{\odot}$ and $\tilde{v}$ is mild over the mass range of interest ($m_{\chi} \sim 100 \units{GeV}$ to $\sim 1 \units{TeV}$). We note, however, that the constraints obtained by direct detection experiments may be more sensitive to variations in the assumed velocity distribution of the DM particles. The parameter $f_{\rm out}$ was calculated by Ref.~\citep{Schuster:2009fc} by simulating the capture of DM particles by the Sun via inelastic scattering. Here we interpolate the values of $f_{\rm out}$ as a function of $\delta$ shown in Fig.~4 of that work, which were calculated for $m_{\chi} = 1 \units{TeV}$. Those authors note that the dependence on $m_{\chi}$ is weak for the mass range of interest; thus we adopt the values of $f_{\rm out}$ determined by \cite{Schuster:2009fc} for $m_{\chi} = 1\units{TeV}$ for all masses considered. We caution that the calculation of $f_{\rm out}$ is subject to severe uncertainties, and a detailed study beyond the scope of this work is needed to more robustly estimate the value of this parameter. In particular, we note that $f_{\rm out}$ varies by more than an order of magnitude over the range of $\delta$ values considered in this study, and we therefore stress that the calculation of $f_{\rm out}$ introduces uncertainties in the derived scattering cross-section limits of at least a factor of a few.
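Combining the equilibrium relation, the outside fraction $f_{\rm out}$, and the linear scaling of the capture rate described above, the chain of factors leading to the observable flux can be sketched numerically. The reference capture rate and the $f_{\rm out}$ value below are placeholders, not numbers taken from Refs.~\citep{Menon:2009qj, Schuster:2009fc}.

```python
import math

AU_CM = 1.496e13  # Earth-Sun distance in cm

def scaled_capture_rate(c_ref_s, sigma0_cm2, rho_gev_cm3=0.3):
    """Capture rate rescaled linearly in sigma_0 and rho_DM from a
    reference value computed for sigma_0 = 1e-40 cm^2, rho = 0.3 GeV/cm^3."""
    return c_ref_s * (sigma0_cm2 / 1e-40) * (rho_gev_cm3 / 0.3)

def observable_cre_flux(capture_rate_s, f_out):
    """Observable e+/e- flux (cm^-2 s^-1) at Earth.

    Gamma_A = C/2 (capture-annihilation equilibrium),
    Gamma_A,out = f_out * Gamma_A, F = 2 Gamma_A,out / (4 pi D^2),
    and a final factor 1/2 for far-side (unobservable) production.
    """
    gamma_a_out = 0.5 * f_out * capture_rate_s
    return 0.5 * 2.0 * gamma_a_out / (4.0 * math.pi * AU_CM**2)

# Placeholder numbers: C_ref = 1e20 s^-1 and f_out = 0.1 are NOT values
# from the references, only a demonstration of the chain of factors.
F = observable_cre_flux(scaled_capture_rate(1e20, 1e-40), f_out=0.1)
```

The net scaling, $F = f_{\rm out} C_{\odot} / (8 \pi D_{\odot}^{2})$, makes explicit that the flux limit translates linearly into a limit on $\sigma_{0}$ through the capture rate.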
We calculate the flux of CREs from annihilation of DM in this scenario as a function of $m_{\chi}$ and $\sigma_{0}$ for three values of the parameter $\delta$. We then derive constraints on the $m_{\chi}$-$\sigma_{0}$ parameter space by requiring that the predicted flux of each DM model does not exceed the $95\%$ CL upper limits on solar CRE fluxes for a $30\hbox{$^\circ$}$ ROI centered on the Sun, again using the results derived in~\S\ref{sec:isotropic}. Since the region from which the DM-induced flux originates in this scenario is limited to the angular extent of the Sun, the $30\hbox{$^\circ$}$ ROI is more than sufficient to encompass all of the DM signal. \begin{figure} \includegraphics[width=0.48\textwidth]{Figure_8.eps} \caption{Constraints on iDM model parameters for three values of the mass splitting $\delta$. Models above the curves produce a solar CRE flux that exceeds the $95\%$ CL flux upper limit for a $30\hbox{$^\circ$}$ ROI centered on the Sun in one or more energy bins. \label{fig:idmlim}} \end{figure} The predicted flux is mono-energetic; however, the finite energy resolution of the LAT will result in the observed events being assigned to more than one energy bin. Since this may have a non-negligible impact on the derived scattering cross-section limits for DM masses near the energy bin edges, we convolve the predicted signal from each model with the energy resolution of the LAT and calculate its flux in each energy bin used in the analysis. We approximate the energy dispersion of the LAT as a Gaussian with $\sigma$ given by the half-width of the $68\%$ containment window (see Fig.~9 of \cite{Ackermann:2010ij}). For the energy range considered here the energy resolution ranges from $\sim 5\%$ to $\sim 14\%$. The cross-section limit at each mass is obtained from the energy bin providing the strongest constraint.
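The smearing of the mono-energetic signal over the analysis bins can be sketched as follows; the bin edges and the flat $10\%$ resolution are illustrative, whereas the analysis itself uses the energy-dependent LAT dispersion of Ref.~\cite{Ackermann:2010ij}.

```python
import math

def gaussian_bin_fractions(e_line, bin_edges, frac_resolution):
    """Fraction of a mono-energetic line at e_line landing in each bin,
    modelling the energy dispersion as a Gaussian with
    sigma = frac_resolution * e_line."""
    sigma = frac_resolution * e_line
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - e_line) / (sigma * math.sqrt(2.0))))
    return [cdf(hi) - cdf(lo) for lo, hi in zip(bin_edges[:-1], bin_edges[1:])]

# Illustrative bins (GeV): a 100 GeV line with 10% resolution is mostly
# contained in its own bin, with a few percent leaking into neighbours.
fractions = gaussian_bin_fractions(100.0, [60.0, 120.0, 240.0, 480.0, 960.0], 0.10)
```

For a line near a bin edge the leakage fractions become comparable, which is precisely the regime where the convolution matters for the derived limits.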
Fig.~\ref{fig:idmlim} shows the constraints from the solar CRE flux upper limits on iDM models in the $m_{\chi}$-$\sigma_{0}$ parameter space for three values of $\delta$. Models in the regions above the curves exceed the $95\%$ CL flux upper limit for the $30\hbox{$^\circ$}$ ROI in at least one energy bin. The rounded shape of the curves is due to accounting for the energy resolution of the LAT. These limits exclude the regions of parameter space compatible with the results of DAMA/LIBRA and CDMS (in addition to several other direct detection experiments) as determined by Ref.~\cite{Ahmed:2010hw} for $\delta=120\units{keV}$, for the range of masses accessible to our analysis ($m_\chi \gtrsim 70\units{GeV}$), assuming the dominant annihilation channel is $e^{\pm}$. Models consistent with both DAMA/LIBRA and CDMS at 90\% CL exist for values of $\delta$ ranging from $\sim 85\units{keV}$ to $\sim 135\units{keV}$~\cite{Ahmed:2010hw}; for masses from $70\units{GeV}$ to $250\units{GeV}$ the range of allowed scattering cross-sections is from $\sigma_{0} \sim 10^{-40} \units{cm^{2}}$ to $\sigma_{0} \sim 10^{-39} \units{cm^{2}}$~\cite{Chang:2008gd}. Although the uncertainties in the calculation of the DM fluxes in this scenario are significant, we emphasize that constraining $\sigma_{0} \lesssim 10^{-40}\units{cm^{2}}$ is sufficient to exclude the cross-sections of models consistent with both data sets. The bounds we derive exclude the relevant cross-sections by 1-2 orders of magnitude, and hence we conclude that the parameter space of models preferred by DAMA/LIBRA can be confidently ruled out for $m_\chi \gtrsim 70\units{GeV}$ for annihilation to $e^{\pm}$ despite the uncertainties in the flux calculation. This analysis constrains DM models in which the primary annihilation channel is to $e^{\pm}$.
We emphasize that although other annihilation channels can be probed by gamma-ray \cite{Atkins:2004qr,Batell:2009zp,Schuster:2009au} or neutrino \cite{Nussinov:2009ft,Menon:2009qj,Schuster:2009au} measurements, the upper limits on solar CRE fluxes provide a uniquely strong constraint on the $e^{\pm}$ final state, which is inaccessible to neutrino telescopes since no neutrinos are produced for this annihilation channel. \section{Conclusions} We used a sample of about $1.3 \times 10^{6}$ CRE events with energies above $60\units{GeV}$ detected by the {\em Fermi} LAT during its first year of data-taking to search for flux excesses or deficits correlated with the Sun's direction. Two analysis approaches were implemented, and neither yielded evidence of an enhancement in the CRE flux from the direction of the Sun. This result agrees with the more general one shown in Ref.~\cite{Ackermann:2010ip}, where no evidence of anisotropies was found in CRE arrival directions above $60\units{GeV}$ in the Galactic reference frame. We derived limits on DM models which generate a CRE flux from the Sun's direction for the two scenarios discussed in Ref.~\cite{Schuster:2009fc}. In the case of annihilation of DM through an intermediate state and subsequent decay to $e^{\pm}$, the upper limits on solar CRE fluxes provide significantly stronger constraints on the DM scattering cross-section than limits previously derived by constraining the FSR emission associated with this decay channel using solar gamma-ray measurements. For the iDM scenario, the solar CRE flux upper limits exclude the range of models which can reconcile the data from DAMA/LIBRA and CDMS for $m_{\chi} \gtrsim 70\units{GeV}$, assuming DM annihilates predominantly to $e^{\pm}$. 
Since direct detection experiments are not sensitive to the dominant annihilation channels of the DM particles, other data, e.g., solar gamma-ray measurements and neutrino searches, may be able to further constrain these models by excluding regions of parameter space for alternative annihilation channels. \begin{acknowledgments} The {\em Fermi} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France. The authors thank Joakim Edsj\"{o} for his valuable contribution during the preparation of this manuscript. JSG thanks J.~Beacom, B.~Dasgupta, S.~Horiuchi, D.~Malyshev, and I.~Yavin for helpful discussions. \end{acknowledgments}
\section{Introduction} \label{sec:Introduction} In this article, we present predictions for mass-loss rates and velocity structures for a grid of $\textrm{O}$-type stars, using two distinct methods for solving the wind dynamics. Mass loss is an integral characteristic of massive $\textrm{O}$-type stars. Because of their short lifetimes, massive stars are important tracers of star formation in galaxies. Furthermore, they enrich the interstellar medium with metals, both during their lives via stellar winds and when they explode at the very end of their evolution. In order to build an evolutionary framework for massive stars, it is essential to map the mass-loss processes during the various evolutionary stages, as the exact rates of mass loss greatly influence the evolutionary tracks \citep[e.g.][]{1981A&A....99...97M, 1986ARA&A..24..329C}. The effects of mass loss on the evolutionary tracks are at least two-fold: first and foremost the stellar mass is reduced, and secondly, the rotational velocity is strongly affected, as the mass also carries away angular momentum \citep[e.g.][]{1998A&A...329..551L,2000A&A...361..159M}. For the continuous stellar winds from massive stars, the outflow is thought to be driven by the transfer of energy and momentum from the radiation field to the atmosphere through the absorption of photons in atomic transitions. The exact amount of momentum and energy transfer has been the subject of both theoretical and observational studies for many decades~\citep{1970ApJ...159..879L,1975ApJ...195..157C,1986A&A...164...86P, 1996A&A...305..171P,1997ApJ...477..792D,1999A&A...350..181V,2004A&A...417.1003K,2007A&A...473..603M}.
For luminous $\textrm{O}$-type stars, with log($L/\ensuremath{\mathit{L}_{\odot}}) > 5.2$, the theoretical predictions of \cite{2000A&A...362..295V} seem to be in reasonable agreement with empirical mass-loss rates provided that $\textrm{O}$-stars are only subject to modest amounts of wind clumping (with clump filling factors of only 5-10). However, for objects with luminosities log($L/\ensuremath{\mathit{L}_{\odot}})$ below approximately 5.2, a severe drop -- by a factor of $\sim$100 -- in the empirically determined modified wind momentum (essentially the product of the mass-loss rate and the terminal velocity) has been revealed. This problem has become known in the literature as ``the weak-wind problem'' ~\citep{1996A&A...305..171P,2005A&A...441..735M,2008A&ARv..16..209P,2009A&A...498..837M}. It deserves proper investigation simply because of the enormity of the effect. It is particularly important to find out whether the problem is caused by the mass-loss diagnostics or the predictions, as both are also applied to more luminous stars, where agreement between diagnostics and predictions has seemingly been achieved. But how certain can we be that this agreement is not a coincidence if we are aware of severe problems at lower luminosity? Furthermore, we note that the oft-used mass-loss predictions of \cite{2000A&A...362..295V} are semi-empirical, in the sense that empirical values for the wind velocity structure and terminal velocity are used as input to the modelling. In order to trust our overall knowledge of the mass-loss rates from $\textrm{O}$-type stars -- at {\it all} masses and luminosities -- it is pivotal to further scrutinize the \cite{2000A&A...362..295V} assumptions, most notably that of the adopted wind dynamics.
Recently, \cite{2008A&A...492..493M} suggested a new parametrization of the line acceleration, expressing it as a function of radius rather than of the velocity gradient, as in \citeauthor{1975ApJ...195..157C} (\citeyear{1975ApJ...195..157C}; henceforth CAK) theory. The implementation of this new formalism allows for local dynamical consistency, as one can determine the energy and momentum transfer at each location in the wind through the use of Monte Carlo simulations. Although the formalism was applied with three independent starting conditions that showed convergence to the same wind parameters, it has thus far only been applied to one object, that of an O5 dwarf. For the adopted line force parametrization \citeauthor{2008A&A...492..493M} identify an exact solution in the case that the medium is isothermal. We expand on this result by also accounting for a temperature stratification. To allow for such a study we employ the new line acceleration parametrization but solve the wind dynamics consistently by applying a numerical method to the momentum equation. The purpose of our study is threefold: {\em i)} to solve the wind dynamics numerically, and compare the results to those of \cite{2008A&A...492..493M}, {\em ii)} to compute a larger grid of dynamically derived O-star mass-loss rates and wind terminal velocities, and determine the accuracy of the predictions made by \cite{2000A&A...362..295V}, and {\em iii)} to utilize the grid in order to investigate the weak-wind problem. Our paper is organized as follows. In Sect.~\ref{sec:Method}, we begin by describing the core of our method and the different treatments of the wind equation. The results are presented in Sect.~\ref{sec:Results} and discussed in Sect.~\ref{sec:discussion}. We end with the conclusions (Sect.~\ref{sec:Conclusions}).
\section{Method} \label{sec:Method} The method of \citet{1997ApJ...477..792D} and \citet{1999A&A...350..181V}, applied to derive the mass-loss rates of O and early-B type stars \citep{2000A&A...362..295V,2001A&A...369..574V}, Luminous Blue Variable stars \citep{2002A&A...393..543V} and Wolf-Rayet stars \citep{2005A&A...442..587V}, is an extension of a treatment developed by \citet{1985ApJ...288..679A}. It is based on an iteration cycle between the stellar atmosphere model {\sc isa-wind} \citep{1993A&A...277..561D} and a Monte Carlo simulation, {\sc mc-wind} \citep{1997ApJ...477..792D}, in which the energy per unit time $\Delta L$ that is extracted from the radiation field in interactions of photons with the gas is computed. From this a mass-loss rate \mdot\ is computed on the basis of which a new {\sc isa-wind} model is constructed. The predicted mass loss is the one for which the input mass-loss rate of {\sc isa-wind} equals the mass-loss rate computed by {\sc mc-wind}. As is consistently pointed out in the papers referred to above, the method does not address the equation of motion but uses a prescribed trans-sonic velocity structure. This implies that although in a global sense the method fulfills the constraint of energy conservation, it need not hold that the actual local forces acting on the gas are consistent with the force implied by the adopted velocity law. \cite{2008A&A...492..493M} relax this assumption and present an improved treatment of the problem introducing a new way to parametrize the line force. We first discuss an approach presented by these authors, which we refer to as the ``best-$\beta$'' method, as it allows one to link to empirically derived estimates of the steepness of the velocity law (characterized by a parameter $\beta$, see below). In a second step, we present solutions that numerically solve the wind dynamics.
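The iteration cycle just described can be sketched as a fixed-point search on \mdot, with the update given by the global energy argument detailed in Sect.~\ref{sec:mcwind}. The following sketch is purely illustrative: \texttt{delta\_l\_toy} is a stand-in for a full {\sc mc-wind} simulation, and all numbers are placeholders in cgs units.

```python
def mdot_from_energy(delta_l, v_inf, v_esc):
    """Mass-loss rate implied by the global energy argument
    Delta L = (1/2) Mdot (v_inf^2 + v_esc^2), in cgs units."""
    return 2.0 * delta_l / (v_inf**2 + v_esc**2)

def iterate_mdot(delta_l_of_mdot, mdot0, v_inf, v_esc, tol=1e-3, max_iter=50):
    """Fixed-point iteration: stop when input and output Mdot agree."""
    mdot = mdot0
    for _ in range(max_iter):
        new = mdot_from_energy(delta_l_of_mdot(mdot), v_inf, v_esc)
        if abs(new - mdot) / mdot < tol:
            return new
        mdot = new
    return mdot

# Toy stand-in for the Monte Carlo energy transfer (NOT a physical model):
# Delta L grows sub-linearly with Mdot, so the iteration converges.
delta_l_toy = lambda mdot: 2.14e22 * mdot**0.7  # erg/s for mdot in g/s

# v_inf = 2000 km/s, v_esc = 1000 km/s, starting guess 1e19 g/s.
mdot_conv = iterate_mdot(delta_l_toy, 1.0e19, v_inf=2.0e8, v_esc=1.0e8)
```

The convergence of such a scheme relies on the energy transfer growing more slowly than linearly with \mdot, which the toy function mimics.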
We first briefly introduce {\sc isa-wind} in Sect.~\ref{sec:isawind}, emphasizing the treatment of the heuristic velocity law, and {\sc mc-wind} in Sect.~\ref{sec:mcwind}, focusing on the determination of the mass-loss rate using the global energy argument. In Sect.~\ref{sec:lineforce} we recapitulate the essentials of the parametrization of the line force by \citeauthor{2008A&A...492..493M} and in Sect.~\ref{sec:bestbeta} the principle of their best-$\beta$ method. In the following subsection we introduce our hydrodynamical method. Finally, Sect.~\ref{sec:sonicpoint1} is devoted to a discussion of the physical conditions at the sonic point. \subsection{The model atmosphere} \label{sec:isawind} The code {\sc isa-wind} computes the structure, radiation field and ionization/excitation state of the gas of an outflowing stellar atmosphere in non local thermodynamic equilibrium (non-LTE), assuming radiative equilibrium. No artificial separation between the photosphere and wind is assumed. The temperature structure is treated in a somewhat simplified way, in that it results from initial LTE-based Rosseland opacities (i.e. it is grey). The fact that the temperature structure is not affected by possible departures of the populations from their LTE state implies that the effect of line blanketing is not treated self-consistently, although non-LTE line blocking is taken into account. Radiation transfer in spectral lines is treated in the Sobolev approximation~\citep{1960mes..book.....S}. The input stellar parameters are the luminosity $L$, the effective temperature \teff\ (specifying the radius $R$), the mass $M$ and chemical abundances. The wind is described by the mass-loss rate \mdot\ and a velocity structure, which are connected through the equation of mass continuity \beq \label{eq:continuity} \dot{M} = 4 \pi r^2 \varv(r) \rho(r), \end{equation} where $\rho(r)$ is the mass density and $\varv(r)$ is the velocity at radius $r$.
Outside the photosphere, the velocity structure is assumed to follow a $\beta$-law, i.e. \beq \label{eq:betalaw} \varv(r) = \vinf \left( 1- \frac{r'}{r}\right)^{\beta}. \end{equation} The free parameter $\beta$ is a measure of the velocity gradient. A low value of $\beta$ implies that the velocity approaches the terminal flow velocity \vinf\ relatively close to the star; for a large value this happens only further out in the wind. The $\beta$-law does not hold in the photosphere, where the line force is not the dominant term in the equation of motion: gravity and the acceleration due to the gas pressure gradient also shape the flow structure there. The radius $r'$ is a smoothing parameter that is used to connect the $\beta$-law to the (quasi) hydrostatic photosphere and must ensure that $\varv(r)$ and its spatial derivative are continuous at the point where one couples the photospheric velocity law to the $\beta$-law. The velocity structure in the photosphere is determined by solving the non-isothermal equation of motion, neglecting line radiation pressure and assuming that continuum radiation pressure is the result from Thomson scattering only. An inner boundary velocity (or density) is chosen, which may be used to tune the total Rosseland optical depth of the photosphere and wind (see below). The wind is assumed to be homogeneous, i.e. the outflowing gas is not clumped \citep[but see][]{2011A&A...526A..32M}, and the terminal velocity is chosen to be $2.6$ times the effective escape velocity from the stellar photosphere, which is in reasonable concordance with empirically determined terminal velocities of O-stars ~\citep{1995ApJ...455..269L,2000ARA&A..38..613K}. The base of the photosphere is positioned at a Rosseland optical depth of about 20-25 and the wind extends out to 20 \rstar.
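As an illustration, the $\beta$-law (Eq.~\ref{eq:betalaw}) and the continuity equation (Eq.~\ref{eq:continuity}) combine into a short sketch of the wind structure. The stellar parameters below are illustrative placeholders, not values from our grid, with $\vinf = 2.6\,\vesc$ as adopted above.

```python
import math

def beta_law(r, r_prime, v_inf, beta):
    """Velocity structure: v(r) = v_inf * (1 - r'/r)**beta."""
    return v_inf * (1.0 - r_prime / r) ** beta

def density(r_cm, v_cm_s, mdot_g_s):
    """Mass density from continuity: Mdot = 4 pi r^2 v(r) rho(r)."""
    return mdot_g_s / (4.0 * math.pi * r_cm**2 * v_cm_s)

# Illustrative O-star numbers (placeholders, not the grid of this paper):
R_STAR = 1.0e12          # cm, ~14 Rsun
V_ESC = 9.0e7            # cm/s, effective escape velocity
V_INF = 2.6 * V_ESC      # terminal velocity, as adopted in the text
MDOT = 6.3e19            # g/s, ~1e-6 Msun/yr

# At r = 5 R_*, with beta = 1 and smoothing radius r' = 0.98 R_*:
v5 = beta_law(5.0 * R_STAR, 0.98 * R_STAR, V_INF, beta=1.0)
rho5 = density(5.0 * R_STAR, v5, MDOT)
```

The sketch makes the role of $\beta$ explicit: a larger $\beta$ lowers the velocity, and hence raises the density, at any fixed radius for the same \mdot.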
\subsection{The Monte Carlo method {\sc mc-wind}} \label{sec:mcwind} The code {\sc mc-wind} uses the model atmosphere structure computed by {\sc isa-wind} to determine the total amount of energy that is transferred from the radiation field to the wind -- in interactions of photons with ions in the gas -- by means of a Monte Carlo simulation of the trajectories of photon packets emitted at the base of the photosphere and escaping through the outer boundary of the model. Each photon can travel an optical-depth-weighted (random) distance to a point of interaction. This point is determined by taking into account all the opacity the photon encounters on its path, so it includes contributions from both lines and continua. At the point of interaction the type of interaction is determined, using proper weighting functions \citep{1999A&A...350..181V}. The possible interactions are thermal absorption and emission, electron scattering and line scattering. The interaction is assumed to be coherent in the co-moving frame of the ion. In the observer's frame, however, energy can be exchanged between the radiation field and the gas. The ion involved in each interaction is recorded, such that, for instance, the contributions to the radiative force can be dissected and identified. This provides us with a powerful tool to study the nature of the line force at each location in the wind. The radiative force per unit mass equals~\citep{1985ApJ...288..679A}: \beq \label{eq:mclineforce} \mathit{g}_{\rm rad} = -\frac{1}{\mdot} \frac{d L}{d r}, \end{equation} where $dL$ is the amount of energy lost by the radiation field in a layer of thickness $dr$. Once the total amount of energy transferred to the wind is known, the mass-loss rate that can be driven {\em for the density and velocity structure of the adopted {\sc isa-wind} model} can be calculated.
Neglecting enthalpy, the global energy balance reads \beq \label{eq:mcdeltal} \Delta L = \frac{1}{2} \mdot \left(\vinf^2 + \varv_{\rm esc,N}^2\right), \end{equation} where $\Delta L$ is the total amount of energy lost by the radiation field and \beq \label{eq:vesc} \varv_{\rm esc,N} = \sqrt{\frac{2GM_*}{R_*}}, \end{equation} is the Newtonian escape velocity from the stellar surface. $G$ is the gravitational constant. A new {\sc isa-wind} atmosphere, adopting the mass-loss rate as determined in {\sc mc-wind}, is computed, followed by a new Monte Carlo simulation. This procedure is repeated until the input mass-loss rate of {\sc mc-wind} equals the output mass-loss rate. Although the mass-loss rate that is predicted in this way reflects that in a {\em global} sense the energy that is needed to drive the wind is indeed extracted from the radiation field, it does not mean that the input line force (implied by the velocity law) equals the output line force from the Monte Carlo simulation {\em locally}, i.e. the equation of motion of the wind is not solved. Here we improve on this situation using two methods. Both methods A and B require a parametrization of the line force predicted by {\sc mc-wind}. We therefore first discuss this aspect. \subsection{Line force parametrization} \label{sec:lineforce} Figure~\ref{fig:mcline} shows the Monte-Carlo line force (blue crosses) as produced in the first iteration step for a typical O3\,V star ($L = 10^{5.83}$ \ensuremath{\mathit{L}_{\odot}}, \teff = 44,600\,K and $M = 58\,\ensuremath{\mathit{M}_{\odot}}$). The Monte Carlo line force is determined in a statistical way and shows scatter. Given the delicate nature of the equation of motion it cannot be used as such and must be represented by an appropriate analytical fit function. We adopt a parametrization of the line force as a function of radius, rather than of optical depth, as opted for by \citet{1975ApJ...195..157C}.
In Sect.~\ref{sec:MCAK-theory} we show that this leads to a more accurate numerical representation of the line force, at least for the type of stars studied here. In doing so, we follow \cite{2008A&A...492..493M}, who motivate \beq \label{eq:grad} \ensuremath{\mathit{g}_{\rm rad}^{\rm line}} = \left\{ \begin{array}{rl} 0 & \hspace{6mm} \textrm{if } \hspace{2mm} r < r_{\circ} \\ \mathit{g}_{\circ} \,(1-r_{\circ}/r)^{\gamma} / r^{2} & \hspace{6mm} \textrm{if } \hspace{2mm} r \geq r_{\circ}, \\ \end{array} \right. \end{equation} where $\mathit{g}_{\circ}$, $r_{\circ}$ , and $\gamma$ are fit parameters to the Monte Carlo line force. This choice of the fit function, i.e. without any explicit dependency of the line force on the velocity gradient, implies that in our models the critical point is the sonic point. Figure~\ref{fig:mcline} shows a typical result for this fit (black dotted curve). The deviations are (as mentioned) due to scatter in the simulation. \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth,angle=0]{lineforceO3LVfirstit} \caption{The line force (blue crosses) as predicted by {\sc mc-wind} in the first iteration step. A fit (black dotted line) using Eq.~\ref{eq:grad} to represent the line force is overplotted. Note the modest scatter on the Monte Carlo results due to noise.} \label{fig:mcline} \end{center} \end{figure} \subsection{Method A: Best-$\beta$ solution} \label{sec:bestbeta} In this section, we use the line force representation Eq.~\ref{eq:grad} to determine -- after making certain assumptions -- an analytical solution of the velocity law in the outer part of the wind, following \cite{2008A&A...492..493M}. This solution can be compared to the $\beta$-law (Eq.~\ref{eq:betalaw}) and used to derive \vinf\ and the $\beta$ value that is most representative. This is useful in comparing to the often applied $\beta$-law. 
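As an aside, the fit of Eq.~\ref{eq:grad} to the noisy Monte Carlo line force (Sect.~\ref{sec:lineforce}) can be sketched with a linearized least-squares estimate: for fixed $r_{\circ}$, $\log(\mathit{g}\,r^{2})$ is linear in $\log(1-r_{\circ}/r)$. The data below are synthetic stand-ins for the {\sc mc-wind} output, not actual simulation results.

```python
import math, random

def fit_g0_gamma(r, g, r0):
    """Least-squares fit of g = g0 (1 - r0/r)**gamma / r**2 for fixed r0,
    using the linear relation log(g r^2) = log(g0) + gamma*log(1 - r0/r)."""
    xs = [math.log(1.0 - r0 / ri) for ri in r]
    ys = [math.log(gi * ri**2) for ri, gi in zip(r, g)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    gamma = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    g0 = math.exp(my - gamma * mx)
    return g0, gamma

# Synthetic stand-in for the Monte Carlo line force, with 5% scatter;
# g0, r0 and gamma are arbitrary demonstration values.
random.seed(1)
r0_true, g0_true, gamma_true = 1.0, 3.0e22, 0.8
radii = [r0_true * (1.0 + 0.05 * k) for k in range(1, 60)]
force = [g0_true * (1.0 - r0_true / ri) ** gamma_true / ri**2
         * random.gauss(1.0, 0.05) for ri in radii]
g0_fit, gamma_fit = fit_g0_gamma(radii, force, r0_true)
```

In the actual procedure the fit is redone after every Monte Carlo iteration; the linearization here is only meant to make the role of each fit parameter explicit.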
We aim to find a solution of the equation of motion \beq \label{eq:motion} v \frac{dv}{dr} = -\frac{R_* \vesc^2}{2 r^2} + \ensuremath{\mathit{g}_{\rm rad}^{\rm line}} - \frac{1}{\rho}\frac{d p}{d r}, \end{equation} where $p$ is the gas pressure and \beq \label{eq:vesceff} \vesc = v_{\rm esc,N} \sqrt{1-\Gamma}, \end{equation} is the effective surface escape velocity of the star. $\Gamma$ is the continuum radiation pressure in units of the Newtonian gravitational acceleration. Sufficiently far from the photosphere this term is dominated by radiation pressure on free electrons, i.e. $\Gamma$\,=\,$\Gamma_{\rm e}$, where $\Gamma_{\rm e}$ is essentially constant for early-type stars. Close to or in the photosphere, the acceleration due to free-free, bound-free and bound-bound processes may compete with electron scattering and should, in principle, be considered in Eq.~\ref{eq:vesceff}. For our best-$\beta$ solution, however, we assume a constant continuum acceleration, which we set to $\Gamma_{\rm e}$. Substituting the equation of state for an ideal gas and using Eq.~\ref{eq:continuity}, the term $(1/\rho)\,dp/dr$ can be written as: \beq \label{eq:pressure} \frac{1}{\rho}\frac{d p}{d r} = -\frac{a^2}{v} \frac{dv}{dr} - \frac{2 a^2}{r} + \frac{k}{m} \frac{d T}{d r}, \end{equation} where $k$ is Boltzmann's constant, $m$ the mean particle mass and $a(r)$ is the local sound speed, given by: \beq a = \sqrt{\frac{k T}{m}}. \end{equation} We assume the wind to be isothermal, such that the sound speed is constant. The equation of motion can now be rewritten as \beq \label{eq:motionfinal} a_{\circ} \left(\frac{v}{a_{\circ}} - \frac{a_{\circ}}{v} \right)\frac{dv}{dr} = -\frac{R_* \vesc^2}{2 r^2} + \frac{2 a_{\circ}^2}{r} + \ensuremath{\mathit{g}_{\rm rad}^{\rm line}}, \end{equation} where $a_{\circ}$ is the isothermal sound speed at the effective temperature of the star. 
Equation \ref{eq:motionfinal} is a critical point equation, where the left- and right-hand side vanish at the point $v(r_s)=a_{\circ}$, i.e. where $r_s$ is the radius of the sonic point. It yields several types of solutions. \citet{2008A&A...492..493M} show that for the isothermal case and a line force as described in Eq.~\ref{eq:grad}, analytical expressions for all types of solutions of Eq.~\ref{eq:motionfinal} can be constructed by means of the Lambert W function (for a further discussion of the solution of Eq.~\ref{eq:motionfinal} containing an additional centrifugal term, see \citeauthor{2001mueller} \citeyear{2001mueller}). Even for the interesting trans-sonic case of a stellar wind, the analytical solution has an intricate shape. However, a useful approximate wind solution for the velocity law can be constructed if the pressure related terms $2a^{2}/r$ and $a/v$ can be neglected. We note, however, that at the sonic point the contribution of the two pressure terms is non-negligible \citep{2008A&A...492..493M}. After some manipulation one finds that the approximate velocity law is given by: \beq \label{eq:vlawapprox} v(r) = \sqrt{ \frac{R_* v_{\rm esc}^2}{r} + \frac{2}{r_{\circ}}\frac{\mathit{g}_{\circ}}{\left( 1+\gamma \right)} \left(1-\frac{r_{\circ}}{r} \right)^{\gamma + 1} + C}, \end{equation} where $C$ is an integration constant. From this equation, the terminal wind velocity can be derived if the integration constant $C$ can be determined. This can be done assuming that at radius $r_{\circ}$ the velocity approaches zero. This yields \beq \label{eq:integrationc} C = - \frac{R_* \vesc^2}{r_{\circ}}. \end{equation} In the limit $r \rightarrow \infty$ we find that: \beq \label{eq:vinf} \vinf = \sqrt{\frac{2}{r_{\circ}} \left[ \frac{\mathit{g}_{\circ}}{1+\gamma} - \frac{R_* \vesc^2}{2} \right]}. \end{equation} The terminal velocity $\vinf$ can also be determined from the equation of motion.
At the critical point, the left-hand and right-hand sides of Eq.~\ref{eq:motionfinal} both equal zero. Solving this condition for $\mathit{g}_{\circ}$ and inserting the result in Eq.~\ref{eq:vinf}, we find \beq \label{eq:vinfnew} v_{\infty,{\rm new}} = \sqrt{\frac{2}{r_{\circ}} \left[ \left( \frac{r_s}{r_s-r_{\circ}} \right)^{\gamma} \frac{1}{(1+\gamma)} \left( \frac{R_* \vesc^2}{2} - 2 a_{\circ}^2 r_s \right) - \frac{R_* \vesc^2}{2} \right]}. \end{equation} A direct comparison to the $\beta$-law can be made for the supersonic regime of the wind and results in \beq \label{eq:beta} \beta = \frac{1+\gamma}{2}. \end{equation} Given the assumptions made in this derivation, this result is only approximately correct. To obtain the best-$\beta$ solution, the values of $\mathit{g}_{\circ}$, $r_{\circ}$, and $\gamma$ are determined in each Monte Carlo simulation by fitting the output line force. Using these values and the current value of the sonic point radius, Eqs.~\ref{eq:vinf},~\ref{eq:vinfnew} and~\ref{eq:beta} are used to determine $\vinf$ and $\beta$. \vinf\ derived from Eq.~\ref{eq:vinfnew}, the mass loss predicted in {\sc mc-wind}, and the expression derived for $\beta$ serve as input for a new {\sc isa-wind} model. The two codes are iterated until convergence is achieved. Following \cite{2008A&A...492..493M}, we assume that convergence is achieved when the values for \vinf\ derived from Eqs.~\ref{eq:vinf} and~\ref{eq:vinfnew} agree within 10 percent. This implies that our predicted terminal velocities have at least this uncertainty. Once the velocity convergence criterion is fulfilled, all fit parameters and the values for \mdot\ and the sonic point radius will be stable to within five to ten percent. \subsection{Method B: Hydrodynamic solution} \label{sec:ns} The accuracy of the best-$\beta$ solution hinges on the assumptions that the wind is isothermal and that the Eddington factor $\Gamma$ is constant (here taken to be equal to $\Gamma_{\rm e}$).
It may be expected that these assumptions have an impact on the velocity structure near the sonic point, which is where the mass-loss rate is set. To assess this impact and to improve on the physical treatment, we devise a numerical solution of the equation of motion (Eq.~\ref{eq:motionfinal}) throughout the entire photosphere and wind, referred to as the hydrodynamic solution. To this end we start our solution at the critical point $\varv = a$ and proceed both down-stream and up-stream using a $4^{\rm th}$-order Runge--Kutta method with adaptive stepsize control \citep{1992nrfa.book.....P}. Applying l'H\^{o}pital's rule \citep[see e.g.][]{1999isw..book.....L}, an expression can be devised to determine $dv/dr$ at $v(r_{\rm s}) = a$. In order to determine the location of the sonic point $r_{\rm s}$ we require \beq \label{eq:criticalp} -\frac{R_* \vesc^2}{2 r_{\rm s}^2} + \frac{2 a^2}{r_{\rm s}} + \ensuremath{\mathit{g}_{\rm rad}^{\rm line}} = 0. \end{equation} The above equation is solved numerically. \begin{figure*} \centering \resizebox{18.0cm}{!}{ \includegraphics[width=0.70\textwidth,angle=0]{velocitylaw} \includegraphics[width=0.70\textwidth,angle=0]{velocitylawsmallx} \includegraphics[width=0.70\textwidth,angle=0]{velocitylawsmallsmallx} } \caption{A comparison between the best-$\beta$ and hydrodynamic velocity laws for an O3\,V star. Here, the best-$\beta$ velocity law is the one determined by using the fit-parameters of the hydrodynamic solution. The label `approx' in the left panel refers to the approximate velocity law, as given by Eq.~\ref{eq:vlawapprox}. Three different radial regimes are plotted: the full radial range (left panel); the region around the sonic point (middle panel), in which the location of the sonic point in the hydrodynamic solution is indicated with an arrow; and the photospheric region (right panel). Note that at high velocities both methods, as well as the approximate velocity law, yield very similar velocity stratifications.
Near the sonic point the best-$\beta$ velocity law is steeper than the hydrodynamic velocity law. Its sonic point is closer to the star. Despite this difference, the right panel shows that in the photosphere, where hydrostatic equilibrium controls the equation of motion, the shape of both velocity profiles is very similar. } \label{fig:velocitylaw} \end{figure*} So far, the hydrodynamic solution assumes an isothermal medium. At the sonic point the temperature gradient is very small; therefore, the location of $r_{\rm s}$ can be reliably determined using Eq.~\ref{eq:criticalp}, even if $dT/dr$ were taken into account. The neglect of the temperature gradient in the hydrodynamic solution in the region below the sonic point has a significant impact on the structure -- for instance on the total (Rosseland) optical depth from the inner boundary to the sonic point. To solve this problem, we account for the temperature structure inward of the critical point. This requires an iterative procedure between the solution of the non-isothermal equation of motion \beq \label{eq:motionnoniso} a \left(\frac{v}{a} - \frac{a}{v} \right)\frac{dv}{dr} = -\frac{R_* v_{\rm esc}^2}{2 r^2} + \frac{2 a^2}{r} + \ensuremath{\mathit{g}_{\rm rad}^{\rm line}} - \frac{k}{m} \frac{dT}{dr}, \end{equation} and the temperature structure (see Sect.~\ref{sec:isawind}). When starting the numerical integration of the velocity structure at the sonic point $r_{\rm s}$ (now determined by applying Eq.~\ref{eq:criticalp}, but using the local value of the temperature at the sonic point) in the down-stream direction, we include the $dT/dr$ term in Eq.~\ref{eq:motionnoniso}. This implies that the location of the sonic point is not affected. In the up-stream direction the temperature gradient is negligible, and is ignored. Figure~\ref{fig:velocitylaw} compares the best-$\beta$ and hydrodynamic velocity laws for an O3 main-sequence star.
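A minimal sketch of the isothermal part of this procedure is given below (Python; the dimensionless parameter values are assumptions for illustration, the line-force fit is taken to be of the form $g_0(1-r_0/r)^{\gamma}/r^{2}$, and a fixed-step fourth-order Runge--Kutta scheme on a geometric radius grid replaces the adaptive-stepsize scheme used in the actual calculations):

```python
import math

# Illustrative dimensionless setup (assumed): R* = r0 = v_esc = 1,
# sound speed a = 0.05 v_esc, line-force fit g0*(1 - r0/r)**gamma / r**2.
R, v_esc, r0, gamma, g0, a = 1.0, 1.0, 1.0, 0.7, 5.0, 0.05

def g_line(r):
    return g0 * (1.0 - r0 / r) ** gamma / r ** 2

def rhs(r):
    """Right-hand side of the isothermal equation of motion (eq. motionfinal)."""
    return -R * v_esc ** 2 / (2.0 * r ** 2) + 2.0 * a ** 2 / r + g_line(r)

# Sonic point: rhs(r_s) = 0 (eq. criticalp), located by bisection.
lo, hi = r0 * (1.0 + 1e-9), 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if rhs(mid) < 0.0:
        lo = mid
    else:
        hi = mid
r_s = 0.5 * (lo + hi)

# l'Hopital's rule at v(r_s) = a gives dv/dr = sqrt(rhs'(r_s) / 2).
h = 1e-6
dvdr_s = math.sqrt((rhs(r_s + h) - rhs(r_s - h)) / (2.0 * h) / 2.0)

def dvdr(r, v):
    return rhs(r) / (v - a ** 2 / v)

# Fixed-step RK4 in the down-stream direction on a geometric radius grid.
n, r_max = 20000, 1.0e4
r = r_s * (1.0 + 1e-3)
v = a + dvdr_s * (r - r_s)        # step off the singular sonic point
q = (r_max / r) ** (1.0 / n)      # geometric step ratio
for _ in range(n):
    dr = r * (q - 1.0)
    k1 = dvdr(r, v)
    k2 = dvdr(r + 0.5 * dr, v + 0.5 * dr * k1)
    k3 = dvdr(r + 0.5 * dr, v + 0.5 * dr * k2)
    k4 = dvdr(r + dr, v + dr * k3)
    v += dr * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    r += dr
# For a << v_esc, v now lies close to the approximate-law terminal velocity.
```

Because the sonic point is a singular point of the equation of motion, the integration is started a small step away from it, using the l'H\^{o}pital slope for the initial velocity offset.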
The figure shows that the best-$\beta$ solution behaves very similarly to the numerical velocity law. Zooming in on the location of the sonic point, one sees that for the best-$\beta$ velocity structure $r_{\rm s}$ is positioned slightly further inward, or, alternatively, that the velocity law is steeper in the lower part of the wind. In the best-$\beta$ method, the absolute scaling of the velocity structure in the photosphere is based on the adopted velocity at the inner boundary of the model (see Sect.~\ref{sec:isawind}). Therefore, only the position of $r_{\rm s}$ as predicted by the hydrodynamical method is physically meaningful, albeit in the context of the assumption $\Gamma = \Gamma_{\rm e}$. Once this iterative procedure has converged, and the non-LTE state of the gas is computed throughout the atmosphere, we iterate between {\sc isa-wind} and {\sc mc-wind} in the same manner as described in Sect.~\ref{sec:bestbeta}. Again, save for \vinf, the fit parameters converge to an accuracy of better than ten percent in a few iteration cycles. For \vinf\ we are forced to adopt an accuracy of 20 percent. Our predicted terminal velocities have at least this uncertainty. \subsubsection{Remaining assumptions and uncertainties} In the hydrodynamic solution the contribution of bound-free and free-free opacity to the continuum radiation pressure is ignored (see Sect.~\ref{sec:isawind}). In the photosphere, the contribution of these processes to $\Gamma$ is not negligible and may in fact be of the order of $\Gamma_{\rm e}$. We use the Sobolev approximation for line radiation transfer. The Sobolev approximation becomes ill-founded for small velocity gradients $dv/dr$ or velocities lower than the sound speed. \cite{1986A&A...164...86P} showed that in the photosphere (where the velocity is very small) the line force is underestimated in the Sobolev approximation.
Further out, in the region of the sonic point, the line optical depth is overestimated compared to co-moving frame values, implying an overestimate of the line force in this region and therefore an overestimate of the mass-loss rate. A third source of uncertainty in the balance of forces at and below the sonic point is the quality of the fitting function Eq.~\ref{eq:grad} in this part of the wind, which may be uncertain by up to a factor of two. This is not expected to be a big problem deep in the photosphere, as both the fit function and the simulated Monte Carlo line force are small compared to the radiative force on free electrons, but at the sonic point it might play a role. \subsection{The line force at the sonic point: a test for the validity of the best-$\beta$ method} \label{sec:sonicpoint1} The critical point of the equation of motion is the sonic point. A dissimilarity between the sonic point and the critical point may occur when the line force is represented by an explicit function of $dv/dr$, such as in CAK and modified-CAK theory~\citep{1986A&A...164...86P}. Although these descriptions provide extremely valuable insights, they do make assumptions regarding the behavior of \ensuremath{\mathit{g}_{\rm rad}^{\rm line}}\ (see Sect.~\ref{sec:MCAK-theory}). The same applies to our method. Here we want to point out that Eq.~\ref{eq:criticalp} implies that -- whatever the description of the line force -- at the sonic point $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}} \simeq \ensuremath{\mathit{g}_{\rm eff}}$, as the pressure-gradient terms $2a^{2}/r$ and $(k/m)\,dT/dr$ are small compared to the line force. This is characteristic of monotonic winds of early-type stars \citep[see][for non-monotonic flows]{2000ApJ...532L.125F}. Here $\ensuremath{\mathit{g}_{\rm eff}} = G M_{*} (1-\Gamma) / r^{2} = R_{*} \vesc^{2} / (2r^{2})$.
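For an isothermal wind, the near-equality of line force and effective gravity at the sonic point can be made explicit: rearranging Eq.~\ref{eq:criticalp} with $\mathit{g}_{\rm eff} = R_*\vesc^2/(2r^2)$ gives $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}/\mathit{g}_{\rm eff} = 1 - 4a^2 r_{\rm s}/(R_*\vesc^2)$ at $r = r_{\rm s}$. A minimal numerical check, with illustrative (assumed) dimensionless values:

```python
# From eq. (criticalp): g_line(r_s) = g_eff(r_s) - 2*a**2/r_s, with
# g_eff = R * v_esc**2 / (2 * r**2), so that at the sonic point
#   g_line/g_eff = 1 - 4*a**2*r_s / (R * v_esc**2).
def sonic_ratio(a, v_esc, r_s, R=1.0):
    """Ratio g_line/g_eff at the sonic point of an isothermal wind."""
    return 1.0 - 4.0 * a ** 2 * r_s / (R * v_esc ** 2)

# Illustrative (assumed) values: a/v_esc = 0.05, r_s slightly above R*.
ratio = sonic_ratio(a=0.05, v_esc=1.0, r_s=1.04)   # ~0.99
```

Since $a \ll \vesc$ for O-type stars, the deviation from unity is at the percent level, which is why the criterion $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}/\mathit{g}_{\rm eff} \simeq 1$ holds irrespective of the detailed description of the line force.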
This implies that, for the velocity structure to be a physical solution, $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}/\ensuremath{\mathit{g}_{\rm eff}} \simeq 1$ must hold at the sonic point, as pointed out by e.g. \cite{1975ApJ...195..157C}. We require our best-$\beta$ solutions to fulfill this criterion. If $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}/\mathit{g}_{\rm eff}$ is not approximately equal to $1$ at the sonic point, we interpret this as a failure of the wind to become trans-sonic due to a lack of line force at the location in the wind where it is essential. Dynamical effects, such as fall back, might occur that are beyond the scope of this paper. In any case, we interpret such solutions as cases in which the wind cannot be initiated by line driving alone. For the hydrodynamical solution a failure to fulfill the above requirement implies that we do not find a solution at all. \section{Results} \label{sec:Results} \subsection{Grid} \label{sec:grid} In order to study our predictions of the wind properties of O-type stars in a systematic manner, we define a grid of main-sequence, giant and supergiant stars using the spectral calibration of \citet{2005A&A...436.1049M}, adopting theoretical effective temperature scales. This calibration is based on non-LTE models that take into account line blanketing effects and an outflowing stellar wind. We have applied solar abundances as derived by \cite{1989GeCoA..53..197A}, consistent with the predictions of \cite{2000A&A...362..295V}. For all stars in the grid, we have derived the mass-loss rate, terminal velocity and $\beta$-parameter. The hydrodynamic solution does not feature a $\beta$; rather, $\gamma$ is the parameter that describes the slope of the velocity law. To better facilitate a comparison between the different methods we have applied Eq.~\ref{eq:beta} to convert $\gamma$ into a $\beta$-value, referred to as $\beta_{\gamma}$. The calculated grid is given in Table~\ref{table:grid-results}.
The final column lists the mass-loss rate as predicted using the fitting formula of \cite{2000A&A...362..295V}, which assumes a fixed value of $\beta = 1$ and takes $\vinf = 2.6\,\vesc$ as input. Figures \ref{fig:masslossLV}, \ref{fig:masslossLIII} and \ref{fig:masslossLI} show wind properties as a function of effective temperature, for dwarfs, giants and supergiants, respectively. We present all the results of the best-$\beta$ method, i.e. prior to applying the requirement defined in Sect.~\ref{sec:sonicpoint1} that at the sonic point the acceleration due to the line force should approach the effective gravity. Having pointed this out, we first draw attention to the striking behavior in our best-$\beta$ predictions of dwarfs. In the direction of decreasing temperature, the terminal flow velocity drastically increases for spectral types O6.5 or later. For giants the O7 star shows a similar behavior. We argue below that this behavior reflects the failure of the wind to become supersonic; therefore, we interpret these solutions to be non-physical. \subsection{Early O-stars (spectral types O3 through O6)} Let us, however, first focus on stars of spectral type earlier than O6.5. The two methods give quite comparable results. The best-$\beta$ method predicts \mdot\ values that are higher by up to $\sim$0.1 to 0.3\,dex in all cases, i.e. dwarfs, giants and supergiants. The best-$\beta$ terminal flow velocities are $\sim$10 to 20 percent lower than those of the hydrodynamic solutions. These differences can be understood by focusing on the velocity structures near the sonic point. In the best-$\beta$ solution the velocity law is steeper in the region near the sonic point; therefore, the sonic point is closer to the photosphere. This leads to a higher mass-loss rate and lower terminal velocity.
The absolute value of the terminal velocity and the ratio of \vinf\ to the effective escape velocity as a function of temperature will be compared to observations in Sect.~\ref{sec:discussion}. Typical error bars on the \vinf\ determination are 10 percent for the best-$\beta$ solutions (see Sect.~\ref{sec:bestbeta}) and 20 percent for the hydrodynamic solutions (see Sect.~\ref{sec:ns}) due to Monte Carlo noise on the line force (see also Fig.~\ref{fig:mcline}). The slope of the velocity law in the best-$\beta$ solution increases slightly with luminosity class, from typically 0.85 in dwarfs, to 0.95 in giants, to 1.0 in supergiants. In the hydrodynamic models (method B) the $\beta_{\gamma}$ value is typically 0.05--0.10 lower than the corresponding best-$\beta$ solution. \begin{table*}[t] \begin{center} \caption{Model parameters, following \cite{2005A&A...436.1049M}, and predicted wind properties for dwarfs, giants and supergiants. The label ``spec'' indicates that spectroscopic masses are adopted from \citeauthor{2005A&A...436.1049M}. Predictions give the mass-loss rate, terminal velocities and $\beta$ parameters for both method A (best-$\beta$ solution) and B (hydrodynamic solution). The 11th column states whether or not the best-$\beta$ solution fulfills the requirement that at the sonic point $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}/\mathit{g}_{\rm eff} \simeq 1 $ (see Sect.~\ref{sec:sonicpoint1}). For the hydrodynamic solutions a failure of this requirement implies that we do not find a solution at all. The last column provides the $\mdot$ by~\cite{2001A&A...369..574V} when we use $\vinf = 2.6 \,v_{\rm esc}$ in their mass-loss recipe. The hydrodynamic solution does not provide a $\beta$ value, but rather the fit parameter $\gamma$. To facilitate a comparison, we applied Eq.~\ref{eq:beta} to convert this $\gamma$ into a $\beta$.
\label{table:grid-results}} \tiny \begin{tabular}{rrrrrrrrrrrrrrr} \hline\\[-9pt] \hline\\[-7pt] \multicolumn{7}{l}{Model Parameters} & \multicolumn{4}{l}{Method A} & \multicolumn{3}{l}{Method B} & Vink et al.\\[1pt] ST & \teff\ & $\log \mathit{g}_{\rm spec}$ & $\log L$ & R & $M_{\rm spec}$ & $v_{\rm esc}$ & $\log \mdot$ & \vinf & $\beta$ & $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}/\mathit{g}_{\rm eff} \simeq 1$ & $\log \mdot$ & \vinf & $\beta_{\gamma}$ & $\log \mdot$ \\[1pt] & K & cm s$^{-2}$ & \ensuremath{\mathit{L}_{\odot}} & \ensuremath{\mathit{R}_{\odot}} & \ensuremath{\mathit{M}_{\odot}} & km/sec & log \ensuremath{\mathit{M}_{\odot}}/yr & km/sec & & at $v_{s}$ & log \ensuremath{\mathit{M}_{\odot}}/yr & km/sec & & log \ensuremath{\mathit{M}_{\odot}}/yr\\[1pt] \hline\\[-7pt] \multicolumn{15}{l}{{\em Dwarfs}} \\[1.5pt] 3 & 44616 & 3.92 & 5.83 & 13.84 & 58.34 & 1054 & -5.641 & 3794 & 0.92 & yes & -5.972 & 4530 & 0.87 & -5.375 \\[1.5pt] 4 & 43419 & 3.92 & 5.68 & 12.31 & 46.16 & 1016 & -5.836 & 3599 & 0.90 & yes & -5.929 & 3973 & 0.83 & -5.571 \\[1.5pt] 5 & 41540 & 3.92 & 5.51 & 11.08 & 37.28 & 992 & -5.969 & 2838 & 0.84 & yes & -6.118 & 3394 & 0.79 & -5.829 \\[1.5pt] 5.5 & 40062 & 3.92 & 5.41 & 10.61 & 34.17 & 990 & -6.152 & 2762 & 0.84 & yes & -6.265 & 3260 & 0.77 & -6.011 \\[1.5pt] 6 & 38151 & 3.92 & 5.30 & 10.23 & 31.73 & 994 & -6.386 & 2697 & 0.81 & yes & -6.493 & 3277 & 0.77 & -6.234 \\[1.5pt] 6.5 & 36826 & 3.92 & 5.20 & 9.79 & 29.02 & 983 & [-7.243] & [6395] & [1.87] & no & -6.918 & 5244 & 0.95 & -6.427 \\[1.5pt] 7 & 35531 & 3.92 & 5.10 & 9.37 & 26.52 & 972 & [-7.340] & [7325] & [2.65] & no & -- & -- & -- & -6.624 \\[1.5pt] 7.5 & 34419 & 3.92 & 5.00 & 8.94 & 24.15 & 959 & [-7.745] & [12028] & [1.81] & no & -- & -- & -- & -6.820 \\[1.5pt] 8 & 33383 & 3.92 & 4.90 & 8.52 & 21.95 & 944 & [-7.781] & [10650] & [1.57] & no & -- & -- & -- & -7.019 \\[1.5pt] 8.5 & 32522 & 3.92 & 4.82 & 8.11 & 19.82 & 923 & [-7.802] & [9427] & [1.40] & no & -- & -- & -- & -7.167 \\[1.5pt] 9 & 31524 & 3.92 & 4.72 & 7.73 & 18.03 & 908 & [-7.818] &
[8283] & [1.21] & no & -- & -- & -- & -7.374 \\[1.5pt] 9.5 & 30488 & 3.92 & 4.62 & 7.39 & 16.46 & 892 & [-7.793] & [6704] & [1.10] & no & -- & -- & -- & -7.590 \\[1.5pt] \hline\\[-7pt] \multicolumn{14}{l}{{\em Giants}} \\[1.5pt] 3 & 42942 & 3.77 & 5.92 & 16.57 & 58.62 & 915 & -5.445 & 3275 & 0.90 & yes & -5.551 & 3756 & 0.87 & -5.182 \\[1.5pt] 4 & 41486 & 3.73 & 5.82 & 15.83 & 48.80 & 866 & -5.540 & 2945 & 0.90 & yes & -5.641 & 3272 & 0.84 & -5.303 \\[1.5pt] 5 & 39507 & 3.69 & 5.70 & 15.26 & 41.48 & 837 & -5.630 & 2460 & 0.90 & yes & -5.810 & 3053 & 0.83 & -5.491 \\[1.5pt] 5.5 & 38003 & 3.67 & 5.63 & 15.13 & 38.92 & 833 & -5.867 & 2852 & 0.96 & yes & -5.946 & 3160 & 0.83 & -5.629 \\[1.5pt] 6 & 36673 & 3.65 & 5.56 & 14.97 & 36.38 & 825 & -6.100 & 3165 & 0.98 & yes & -6.108 & 3200 & 0.84 & -5.769 \\[1.5pt] 6.5 & 35644 & 3.63 & 5.49 & 14.74 & 33.68 & 810 & -6.278 & 3534 & 1.07 & yes & -6.320 & 3743 & 0.91 & -5.902 \\[1.5pt] 7 & 34638 & 3.61 & 5.43 & 14.51 & 31.17 & 798 & [-6.804] & [7140] & [3.46] & no & -- & -- & -- & -6.016 \\[1.5pt] 7.5 & 33487 & 3.59 & 5.36 & 14.34 & 29.06 & 785 & -6.606 & 4408 & 1.20 & yes & -- & -- & -- & -6.166 \\[1.5pt] 8 & 32573 & 3.57 & 5.30 & 14.11 & 26.89 & 768 & -6.655 & 3668 & 1.05 & yes & -6.692 & 3857 & 0.93 & -6.286 \\[1.5pt] 8.5 & 31689 & 3.55 & 5.24 & 13.88 & 24.84 & 749 & -6.557 & 2266 & 0.80 & yes & -6.770 & 3490 & 0.91 & -6.409 \\[1.5pt] 9 & 30737 & 3.53 & 5.17 & 13.69 & 23.07 & 733 & -6.812 & 2960 & 0.90 & yes & -- & -- & -- & -6.564 \\[1.5pt] 9.5 & 30231 & 3.51 & 5.12 & 13.37 & 21.04 & 709 & -6.848 & 2594 & 0.89 & yes & -6.923 & 3002 & 0.85 & -6.646 \\[1.5pt] \hline\\[-7pt] \multicolumn{14}{l}{{\em Supergiants}} \\[1.5pt] 3 & 42551 & 3.73 & 6.00 & 18.47 & 66.89 & 912 & -5.347 & 3346 & 0.92 & yes & -5.445 & 3719 & 0.86 & -5.083 \\[1.5pt] 4 & 40702 & 3.65 & 5.94 & 18.91 & 58.03 & 837 & -5.387 & 2877 & 0.92 & yes & -5.497 & 3299 & 0.86 & -5.144 \\[1.5pt] 5 & 38520 & 3.57 & 5.87 & 19.48 & 50.87 & 779 & -5.561 & 2974 & 0.95 & yes & 
-5.554 & 3030 & 0.86 & -5.247 \\[1.5pt] 5.5 & 37070 & 3.52 & 5.82 & 19.92 & 48.29 & 764 & -5.611 & 2938 & 1.04 & yes & -5.664 & 3153 & 0.87 & -5.352 \\[1.5pt] 6 & 35747 & 3.48 & 5.78 & 20.33 & 45.78 & 747 & -5.751 & 3000 & 1.05 & yes & -5.814 & 3270 & 0.90 & -5.438 \\[1.5pt] 6.5 & 34654 & 3.44 & 5.74 & 20.68 & 43.10 & 732 & -5.945 & 3531 & 1.16 & yes & -5.920 & 3328 & 0.93 & -5.520 \\[1.5pt] 7 & 33326 & 3.40 & 5.69 & 21.14 & 40.91 & 715 & -5.995 & 3230 & 1.09 & yes & -6.059 & 3606 & 0.96 & -5.642 \\[1.5pt] 7.5 & 31913 & 3.36 & 5.64 & 21.69 & 39.17 & 702 & -6.036 & 2702 & 1.03 & yes & -6.116 & 3043 & 0.90 & -5.781 \\[1.5pt] 8 & 31009 & 3.32 & 5.60 & 22.03 & 36.77 & 678 & -6.058 & 2366 & 1.06 & yes & -6.181 & 2756 & 0.88 & -5.873 \\[1.5pt] 8.5 & 30504 & 3.28 & 5.58 & 22.20 & 33.90 & 644 & -6.143 & 2498 & 1.08 & yes & -6.189 & 2572 & 0.90 & -5.895 \\[1.5pt] 9 & 29569 & 3.23 & 5.54 & 22.60 & 31.95 & 629 & -6.385 & 2988 & 1.05 & yes & -6.319 & 2640 & 0.91 & -5.998 \\[1.5pt] 9.5 & 28430 & 3.19 & 5.49 & 23.11 & 30.41 & 613 & -6.487 & 2921 & 1.08 & yes & -6.449 & 2642 & 0.93 & -6.148 \\[1.5pt] \hline\\[-7pt] \end{tabular} \end{center} \normalsize \end{table*} \begin{figure*} \centering \resizebox{10.3cm}{!}{ \includegraphics[width=0.60\textwidth,angle=0]{mdotLV} \includegraphics[width=0.60\textwidth,angle=0]{betaLV} } \resizebox{10.3cm}{!}{ \includegraphics[width=0.60\textwidth,angle=0]{vinfLV} \includegraphics[width=0.60\textwidth,angle=0]{vinfvescLV} } \caption{Predicted \mdot, \vinf ~and~ $\beta$ for the main sequence stars. Best-$\beta$ solutions are given in red and hydrodynamic solutions in green. 
For comparison, theoretical results by \citet{2001A&A...369..574V} are provided in blue.} \label{fig:masslossLV} \end{figure*} \begin{figure*} \centering \resizebox{10.3cm}{!}{ \includegraphics[width=0.60\textwidth,angle=0]{mdotLIII} \includegraphics[width=0.60\textwidth,angle=0]{betaLIII} } \resizebox{10.3cm}{!}{ \includegraphics[width=0.60\textwidth,angle=0]{vinfLIII} \includegraphics[width=0.60\textwidth,angle=0]{vinfvescLIII} } \caption{Predicted \mdot, \vinf ~and~ $\beta$ for giants. Colors have the same meaning as in Fig.~\ref{fig:masslossLV}.} \label{fig:masslossLIII} \end{figure*} \begin{figure*} \centering \resizebox{10.3cm}{!}{ \includegraphics[width=0.60\textwidth,angle=0]{mdotLI} \includegraphics[width=0.60\textwidth,angle=0]{betaLI} } \resizebox{10.3cm}{!}{ \includegraphics[width=0.60\textwidth,angle=0]{vinfLI} \includegraphics[width=0.60\textwidth,angle=0]{vinfvescLI} } \caption{Predicted \mdot, \vinf ~and~ $\beta$ for supergiants. Colors have the same meaning as in Fig.~\ref{fig:masslossLV}.} \label{fig:masslossLI} \end{figure*} \subsection{Late O-stars (spectral types O6.5 through O9.5)} \label{sec:lateO} Figures~\ref{fig:masslossLV} and~\ref{fig:masslossLIII} show that for spectral type O6.5 the terminal velocity of dwarfs and giants suddenly peaks, relative to spectral type O6. We investigate this behavior in more detail in Fig.~\ref{fig:sonicpoint}, in which the line force from the Monte Carlo simulation is plotted in the region around the sonic point for the best-$\beta$ solutions of the dwarf O6 and O6.5 stars. For the O6 star, the line force at the base of the wind (below the sonic point) rises steeply. {\em At first the dominant contributors are iron lines, notably from Fe\,{\sc v}}. These transitions mainly occur between excited states, although some arise from meta-stable states that are relatively strongly populated.
Further out, the iron contribution levels out (at $\sim 30$\%) and other elements start to contribute to the force, such as carbon, nitrogen, sulfur, argon and nickel. The contribution of resonance lines of carbon and nitrogen at the sonic point amounts to $\sim$ 20\%. Note that at the sonic point the $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}/\mathit{g}_{\rm eff} \sim 1$ condition is nicely fulfilled for the O6\,V star. For the O6.5 star this is not the case. Here the line force at the base of the wind (below the sonic point) rises only gradually. The difference with the O6\,V star is that in this region iron is mainly in the form of Fe\,{\sc iv}, which for this particular spectral flux distribution is less efficient in absorbing stellar flux than are Fe\,{\sc v} lines\footnote{We note that a similar situation occurs at spectral type B1, where the relatively inefficient Fe {\sc iv} lines are replaced by the more effective Fe {\sc iii} lines \citep{1999A&A...350..181V}.}. Therefore the velocity structure will be shallower, limiting the potential of other elements to contribute to the force. As a result the sonic point starts to shift out to larger radii, and we find that at the sonic point the cumulative line acceleration is some 40\% less than the effective gravity. We therefore interpret this outcome as a failure of the wind to become supersonic at $r_{\rm s}$ and do not consider it to be a physical solution. The best-$\beta$ solutions where we clearly encounter this problem have brackets placed around the predicted wind properties as listed in Table~\ref{table:grid-results}. These include all the dwarf stars of spectral type O6.5 or later. They are to be considered non-physical. The supergiants do not suffer from this problem. In all cases $\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}/\mathit{g}_{\rm eff} \sim$\,1 was reached at the sonic point and we consider them physical solutions.
The terminal velocities for the O6.5\,I to O9.5\,I stars scatter by about 20\%, with a small hint that here also the O6.5 star has a higher \vinf. The latter occurs because elements such as silicon, iron and sulfur add to the line force in the outer wind along with the normal contribution of carbon, nitrogen and oxygen. The value of $\beta$ for the late spectral types increases to 1.05, from 1.0 for earlier spectral types. The $\beta_{\gamma}$ values associated with the hydrodynamic solutions increase marginally compared to those of early-O stars. \begin{figure} \resizebox{8cm}{!}{ \includegraphics[width=0.70\textwidth,angle=0]{forcearticleO6} } \resizebox{8cm}{!}{ \includegraphics[width=0.70\textwidth,angle=0]{forcearticleO65} } \caption{The Monte Carlo line force as a fraction of the effective gravity in the region around the sonic point for the best-$\beta$ solution of the O6\,V (top panel) and O6.5\,V (bottom panel) stars. The contribution of iron is shown separately (blue dotted line). Note that in the case of the O6.5 star the line acceleration does not balance the effective gravity at the sonic point. This is interpreted as a failure to support a line-driven wind.} \label{fig:sonicpoint} \end{figure} \section{Discussion} \label{sec:discussion} In discussing our results we first compare with previous theoretical predictions for mass-loss rates and terminal velocities in Sects.~\ref{sec:jorickmassloss} and~\ref{sec:MCAK-theory}. We compare to observations in Sect.~\ref{sec:Observations}. \subsection{Comparison to Vink et al.
mass-loss recipe} \label{sec:jorickmassloss} The Monte Carlo method by \citet{1997ApJ...477..792D} as summarized in Sect.~\ref{sec:Method} has been used by \citet{2000A&A...362..295V,2001A&A...369..574V} to compute a grid of mass-loss rates for O-type stars from which a fitting formula has been derived that provides \mdot\ as a function of luminosity, effective temperature, mass and the ratio of the terminal velocity over the effective escape velocity, i.e. $\vinf/\vesc$. This mass-loss prescription is widely used in stellar evolution predictions \citep[see e.g.][]{2003A&A...404..975M,2005A&A...443..243P, 2006ApJ...647..483L,2006A&A...452..295E,2009CoAst.158...55B,2010A&A...512L...7V}. For the spectral range that is investigated here the canonical value, based on empirical findings, for the ratio of terminal velocity over escape velocity is 2.6. To compare to the \cite{2000A&A...362..295V} results, we calculated the mass-loss rate of our grid of stars using their prescription, which assumes $\beta = 1$. The results are given in the last column of Table~\ref{table:grid-results}. Figure~\ref{fig:windenergy} shows the total energy that is extracted from the radiation field and that is transferred to the stellar wind for all three methods: best-$\beta$, hydrodynamic and the \citeauthor{2000A&A...362..295V} prescription. All three methods yield similar, but not identical, results in the regime where the best-$\beta$ and hydrodynamical method provide physical solutions. In terms of mass-loss rates, we find that the predictions with the best-$\beta$ and hydrodynamical method are on average about 0.2 to 0.3 dex lower than \citeauthor{2000A&A...362..295V}, again with the clear exception of the stars for which we fail to drive a stellar wind. As suggested by the similar wind energies, the terminal velocities predicted by our best-$\beta$ and hydrodynamical method turn out to be higher than adopted by \citeauthor{2000A&A...362..295V}.
We discuss these \vinf\ in more detail in Sect.~\ref{sec:Observations} as well as the reason why \citeauthor{2000A&A...362..295V} are able to predict \mdot\ values for late O-type dwarfs and giants, where we fail. \emph{We emphasize that if the \citeauthor{2000A&A...362..295V} prescription is used assuming the terminal velocities predicted by our best-$\beta$ or hydrodynamical method, wherever these yield physical solutions, the mass loss rates agree to within $\sim$0.1 dex.} \begin{figure*}[t!] \centering \resizebox{18cm}{!}{ \includegraphics[width=0.70\textwidth,angle=0]{windenergymainsequence} \includegraphics[width=0.70\textwidth,angle=0]{windenergygiants} \includegraphics[width=0.70\textwidth,angle=0]{windenergysupergiants} } \caption{The wind energy as a function of the effective temperature for dwarfs (left panel), giants (middle panel) and supergiants (right panel). The best-$\beta$ method is given by the red squares, the hydrodynamic method by the green circles and the \citeauthor{2000A&A...362..295V}-recipe by the blue triangles. Note that the kinetic energy in the wind is almost equal for all three methods. The mass-loss rate and terminal velocity for the three methods vary. \label{fig:windenergy}} \end{figure*} \subsection{Comparison to (Modified)CAK-theory} \label{sec:MCAK-theory} Since \citet{1970ApJ...159..879L} it is generally accepted that the winds of massive stars are driven by the transfer of momentum (and energy) from the radiation field to the atmospheric gas, and that atomic transitions play a pivotal role in this process. 
\cite{1975ApJ...195..157C} describe the force associated with atomic transitions by introducing a force multiplier \beq \label{eq:cak} M(t) = \frac{\ensuremath{\mathit{g}_{\rm rad}^{\rm line}}(t)}{\mathit{g}_{\rm e}(t)} = k\,t^{-\alpha}, \end{equation} where $k$ and $\alpha$ are fitting parameters and $t$ is an optical-depth-like parameter given by: \beq \label{eq:tforce} t = \sigma_{\rm e} \,\rho\, v_{\rm th} \left( \frac{dv}{dr} \right)^{-1}. \end{equation} Here $\rho$ is the density, $v_{\rm th}$ the thermal velocity of carbon ions at the effective temperature of the star \citep{1986A&A...164...86P} and $\sigma_{\rm e}$ the mass scattering coefficient of the free electrons. This parametrization of the line force is based on the expression of the force multiplier for a single spectral line, \beq \label{eq:cakline} M_{\rm line}(t) = \frac{\Delta\nu_{\rm D} \,F_{\nu}}{F} \frac{1}{t} \left[ 1-\exp{(-\eta t)} \right], \end{equation} where $\Delta \nu_{\rm D}$ is the Doppler shift of the frequency of the spectral line due to the thermal velocity of the particles in the wind, $F_{\nu}$ is the flux at frequency $\nu$, $F$ the total flux and $\eta$ is the ratio of line opacity to electron scattering opacity. Note that for optically thin lines $M_{\rm line}(t)$ becomes independent of $t$, whilst for optically thick lines $M_{\rm line}(t) \propto 1/t$. The cumulative effect of an ensemble of lines of various strengths is then expressed by Eq.~\ref{eq:cak}. The constant $\alpha$ in this expression is a measure of the ratio of line acceleration from optically thick lines only to the total line acceleration and $k$ is related to the overall line strength of the ensemble of lines. See \cite{2000A&AS..141...23P} for a more in-depth discussion of the CAK line force. It is assumed that $\alpha$ and $k$ are constants throughout the wind \citep[but see][]{2002ApJ...577..389K}.
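The two limits of the single-line force multiplier, and the recovery of $k$ and $\alpha$ from tabulated force-multiplier values by a least-squares fit in log--log space, can be sketched as follows (Python; the flux factor $\Delta\nu_{\rm D}F_\nu/F$ is set to unity, and the ensemble data are synthetic, with $k = 0.3$ and $\alpha = 0.6$ as assumed example values):

```python
import math

def m_line(t, eta, flux_factor=1.0):
    """Single-line force multiplier, M_line = (1/t)*(1 - exp(-eta*t)),
    with the flux factor Delta(nu_D)*F_nu/F set to 1 for illustration."""
    return flux_factor * (1.0 - math.exp(-eta * t)) / t

# Optically thin limit (eta*t << 1): M_line -> eta, independent of t.
# Optically thick limit (eta*t >> 1): M_line -> 1/t.
eta = 10.0
thin = m_line(1e-8, eta)    # ~ eta
thick = m_line(1e3, eta)    # ~ 1/t

def fit_k_alpha(ts, Ms):
    """Linear least squares of log10 M = log10 k - alpha*log10 t."""
    xs = [math.log10(t) for t in ts]
    ys = [math.log10(M) for M in Ms]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    k = 10.0 ** (ybar - slope * xbar)
    return k, -slope          # M = k * t**(-alpha)

# Synthetic ensemble obeying an exact power law (assumed k=0.3, alpha=0.6):
ts = [10.0 ** e for e in range(-5, 0)]
Ms = [0.3 * t ** -0.6 for t in ts]
k, alpha = fit_k_alpha(ts, Ms)
```

The ensemble power law $k\,t^{-\alpha}$ with $0 < \alpha < 1$ thus interpolates between the thin-line ($t^{0}$) and thick-line ($t^{-1}$) scalings of the individual lines.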
The effects of changes in the ionization structure of the wind are modeled by multiplying expression~\ref{eq:cak} by the term $(n_{\rm e}/W)^{\delta}$, introduced by \citet{1982ApJ...259..282A}. Here $n_{\rm e}$ is the electron number density, $W$ the dilution factor and $\delta$ is a constant. The parametrization of the line force as given by Eq.~\ref{eq:cak} leads to analytical expressions for $\mdot$ and $\vinf$ as a function of the fitting parameters $k$ and $\alpha$ and the stellar parameters. These expressions are given by \cite{1975ApJ...195..157C}. \cite{1986A&A...164...86P} extend these expressions to account for the finite size of the stellar disk. To allow for a comparison with Abbott's results for the behavior of the force multiplier as a function of $t$, we ignore the effect of $n_{\rm e}/W$. We calculated the force multiplier of the simulated line force, i.e. Eq.~\ref{eq:grad}, at all our radius grid points and determined the corresponding value of $t$. Typically, the optical-depth-like parameter ranges from $t = 10^{-5}$ to large $t$. At large $t$ the line force can be neglected compared to the continuum radiation force. Following \cite{1982ApJ...259..282A}, we do not consider these large $t$ points in this discussion but focus on the range $t < 10^{-0.5}$. \begin{figure}[b!] \begin{center} \resizebox{8cm}{!}{ \includegraphics[width=0.70\textwidth,angle=0]{MT}} \resizebox{8cm}{!}{ \includegraphics[width=0.70\textwidth,angle=0]{MR} } \end{center} \caption{{\em Top panel:} The logarithm of the force multiplier $M(t)$ for the hydrodynamic solution of our O3\,V model (red plusses) as a function of the optical-depth-like parameter $t$; our fit function Eq.~\ref{eq:grad} to this data (green line) and the CAK fit function Eq.~\ref{eq:cak} to this data (blue line). {\em Bottom panel:} The force multiplier as a function of radius $r$ for the same star.
Note that the CAK force multiplier is smaller than ours for large radii, resulting in a lower predicted $\vinf$.} \label{fig:cakforcemultiplyer} \end{figure} Figure~\ref{fig:cakforcemultiplyer} compares the behavior of the force multiplier for our best-$\beta$ solution of the O3\,V star. {\em Note that our Monte Carlo solution shows that $M(t)$ cannot be fully described by a strict power law, as assumed in (modified) CAK}, or equivalently, $\alpha$ is not independent of $t$ \citep{2000PhDT........45V}. Two causes can be pointed out \citep[see e.g.][]{1987A&A...184..227P,1997A&A...322..598S}: {\em i)} the presence of a diffuse field due to multiple scatterings; {\em ii)} a complex radial stratification of the excitation and ionization, specifically near the sonic point. Here the wind accelerates rapidly, which causes a sudden steep drop in the electron density. As a result, elements that happen to have two dominant ionization stages (near $r_{\rm s}$) may temporarily re-ionize. The introduction of a $\delta$-dependence of the CAK force multiplier in Eq.~\ref{eq:cak}, by adding the term $(n_{\rm e}/W)^{\delta}$, does not improve the fit to the Monte Carlo line force. This extended CAK description corresponds to a plane through the three-dimensional space spanned by the logarithms of $t$, $n_{\rm e}/W$ and $M(t)$, while the Monte Carlo line force follows a curved line through this space and is thus not confined to that plane. Our fitting function, Eq.~\ref{eq:grad}, nicely captures the curved behavior of the line force in the supersonic part of the flow.\footnote{We also compared our fitting formula to the output of the starting model of the iteration cycle of the O3\,V star. This allows us to assess whether the iteration procedure perhaps {\em forces} the line force into the shape of Eq.~\ref{eq:grad}.
However, the line force resulting from the first iteration cycle is also well represented by our description, indicating that it is quite generic.} Using the force-multiplier parameters $k$ and $\alpha$ as derived from the Monte Carlo line force, we can compute the CAK mass-loss rate and terminal velocity \citep{1975ApJ...195..157C}. The $\mdot_{\rm CAK}$ values derived from these $k$ and $\alpha$ are typically 0.0 to 0.3 dex higher than our best-$\beta$ results and our hydrodynamic solutions. A comparison of the terminal velocities is not meaningful since the slope in the supersonic part of the wind is not represented well by $\alpha$. Therefore, the velocities derived with our $k$ and $\alpha$ are of the order of the escape velocity. If we compare to the modified CAK terminal velocities, following \citet{1986A&A...164...86P}, we note that they are slightly lower than the velocities we derive (see also Sect.~\ref{sec:Observationtv}). \subsection{Comparison with observations} \label{sec:Observations} In this section, we compare our results to observations. We first compare predicted and empirical terminal velocities. Given that for stars more luminous than $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}\ our mass-loss rates agree well with the \citeauthor{2000A&A...362..295V} prescription -- which has been extensively scrutinized \citep[see e.g.][] {2004A&A...415..349R} -- we focus the comparison of empirical and predicted mass-loss rates on lower luminosity stars, for which a `weak-wind problem' has been identified. \subsubsection{Terminal velocities} \label{sec:Observationtv} Several studies have been devoted to measuring the terminal velocities of early-type stars.
Summarizing the work by \cite{1989ApJS...69..527H,1990ApJ...361..607P,1995ApJ...455..269L, 1997MNRAS.284..265H,1996A&A...305..171P}, and \cite{1999A&A...350..970K}, \cite{2000ARA&A..38..613K} derive that the average value of empirically determined terminal velocities for stars hotter than 21\,000\,K is $\vinf = 2.65\,\vesc$. The quoted accuracy of this mean value is roughly 20 percent. The \vinf\ values are ``measured'' from the maximum blue-shifted absorption $v_{\rm max}$ in resonance lines of ions such as C\,{\sc iv}, N\,{\sc v} and Si\,{\sc iv}, located in the ultraviolet part of the spectrum. These measurements are prone to systematic uncertainties that have been extensively discussed in the literature (see for instance the above references). These may work in both directions. Effects that may cause the terminal velocity to be higher than $v_{\rm max}$ are measurements from lines that are not saturated in the outer wind (where for all practical purposes \vinf\ is reached) or from ions that recombine in the outer wind. The former may be expected for stars with weak winds, the latter is more likely to occur in very dense winds. Effects that may cause \vinf\ to be smaller than $v_{\rm max}$ are the presence of turbulence in the outflow or the presence of strong atmospheric absorption at wavelengths slightly bluer than the wavelength corresponding to the terminal velocity, mistakenly attributed to absorption in the resonance line. Given the possible occurrence of these systematic effects, the uncertainty in the terminal velocity may be 10-15 percent for supergiants, and substantially larger than 20 percent for dwarfs. The error in the ratio $\vinf/\vesc$ also includes uncertainties in \vesc. The largest contribution to this error comes from uncertainties in the stellar masses, which have been derived from a comparison to tracks of stellar evolution.
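Because the escape velocity scales with the square root of the stellar mass, a mass uncertainty propagates into $\vesc$ at half its relative size. A minimal numerical sketch of this propagation (illustrative stellar parameters only, not values from this paper; the effective escape velocity is assumed to take the usual form $\sqrt{2GM(1-\Gamma_{\rm e})/R}$ with $\Gamma_{\rm e}$ the Eddington factor):

```python
import math

G_CGS = 6.674e-8                     # gravitational constant [cm^3 g^-1 s^-2]
M_SUN, R_SUN = 1.989e33, 6.957e10    # solar mass [g] and radius [cm]

def v_esc(mass, radius, gamma_e=0.0):
    """Effective escape velocity sqrt(2 G M (1 - Gamma_e) / R), in cm/s."""
    return math.sqrt(2.0 * G_CGS * mass * (1.0 - gamma_e) / radius)

# illustrative O-dwarf-like parameters (hypothetical, not from the paper)
v0 = v_esc(40 * M_SUN, 10 * R_SUN, gamma_e=0.3)
# a 40 percent mass overestimate inflates v_esc by sqrt(1.4) - 1, i.e. ~18 percent
ratio = v_esc(1.4 * 40 * M_SUN, 10 * R_SUN, gamma_e=0.3) / v0
```

A 30-40 percent mass error thus maps onto a 15-20 percent error in $\vesc$, which, combined with the measurement uncertainty in \vinf, motivates the larger total uncertainty adopted below.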
It seems realistic to adopt a 30-40 percent uncertainty in the empirical values of $\vinf/\vesc$ rather than the 20 percent quoted at the beginning of this section. Although \cite{2000ARA&A..38..613K} (and also \citeauthor{1995ApJ...455..269L} \citeyear{1995ApJ...455..269L}) conclude that the ratio $\vinf/\vesc$ is constant for O-type stars, the results of \citet{1989ApJS...69..527H} show this ratio to decrease with temperature, from about 3.5 at 31\,500\,K to about 2.4 at 43\,500\,K. A luminosity class dependence of $\vinf/\vesc$ has to our knowledge not yet been reported. \begin{figure} \centering \resizebox{8cm}{!}{ \includegraphics[width=0.70\textwidth,angle=0]{vinfvescobs} } \caption{Predictions of the ratio of terminal velocity over effective escape velocity at the surface for both the best-$\beta$ (in red) and hydrodynamic method (in green) as a function of effective temperature. The blue circles denote the data from \cite{1995ApJ...455..269L} and the black triangles show the empirical values from \cite{1989ApJS...69..527H}, in which \vinf\ is determined from the ultraviolet P-Cygni profiles. } \label{fig:observationalvelocities} \end{figure} Figure~\ref{fig:observationalvelocities} shows our predictions of $\vinf/\vesc$ plotted against temperature. Different symbol types denote best-$\beta$ and hydrodynamic results and empirical values from \cite{1989ApJS...69..527H} and \cite{1995ApJ...455..269L}. As discussed in Sections~\ref{sec:bestbeta} and~\ref{sec:ns}, our predictions of \vinf\ have individual random error bars of 10 to 20 percent. In all cases, our predictions result in terminal velocities that are larger than observed. For the main-sequence O3-O6 stars the mean predicted ratio is 3.1 for the best-$\beta$ method and 3.6 for the hydrodynamical method. This is 17\% and 36\% higher than the observed mean value of 2.65. For giants the discrepancies are respectively 30\% and 44\%. For the supergiants the largest discrepancies are found.
Using all O stars, the best-$\beta$ method over-predicts the ratio by 54\%. The hydrodynamical method yields values that are on average 60\% higher. The discrepancy between theory and observations thus seems to increase from dwarfs to giants to supergiants. Given the uncertainties, the over-prediction for the dwarfs may not be significant. The predictions show a tentative trend of a decreasing $\vinf / \vesc$ with temperature. As pointed out, recent empirical studies do not recover this behavior. Interestingly, this type of trend appears to be visible in the study by \citet{1989ApJS...69..527H}. Their trend is plotted in Fig.~\ref{fig:observationalvelocities}, featuring a slope that is comparable to the slope of our predictions. However, given the uncertainties in the current empirical estimates of \vinf, we do not feel that this can be applied to (further) scrutinize the theory. Larger predicted terminal velocities are also reported by \cite{1995ApJ...455..269L}. In their sample, dominated by supergiants, a comparison to CAK models yields over-predictions by about 33 percent, slightly less than what we find. The reason for the over-predicted \vinf\ values is unclear. Possible explanations (for part of the problem) include {\em i)} overestimated corrections for the effect of turbulence (see above), {\em ii)} a clumped and porous outer wind, preventing the flow in this part of the outflow from reaching as high a terminal velocity as predicted here and in (modified) CAK \citep[see][]{2011A&A...526A..32M}, or {\em iii)} a systematic over-estimate of stellar masses. A systematic discrepancy between masses of galactic stars derived from comparing their positions in the Hertzsprung-Russell diagram to evolutionary tracks and masses calculated from the spectroscopically determined gravity was reported by e.g. \cite{1992A&A...261..209H} \citep[but see][]{2010A&A...524A..98W}.
Improvements in both the model atmospheres and the fitting procedure seem to have reduced, but possibly not yet eliminated, the size of this discrepancy \citep{2004A&A...415..349R, 2005A&A...441..711M}. Unfortunately, progress in resolving the differences in predicted and empirical $\vinf/\vesc$ ratios quite strongly depends on our knowledge of stellar masses. Hopefully, detailed studies of very large populations, such as the VLT-FLAMES Survey of massive stars \citep{2005A&A...437..467E} and the VLT-FLAMES Tarantula Survey \citep{2011A&A...530A.108E}, may help resolve this issue. \subsubsection{Mass-loss rates: the weak-wind problem} \label{sec:weakwind} Relatively recently, analyses of appreciable samples of galactic stars using sophisticated model atmospheres have revealed a mismatch, possibly as high as a factor of 100, between empirically derived mass-loss rates and theoretical predictions for stars less luminous than about $10^{5.2}$\, \ensuremath{\mathit{L}_{\odot}}\ \citep[see e.g.][and Fig.~\ref{fig:modified}]{2005A&A...441..735M, 2007A&A...473..603M,2009A&A...498..837M}. As mass loss scales with some power of the luminosity, this problem occurs below a critical mass flux and is termed the `weak-wind' problem \citep[for a recent review, see][]{2008A&ARv..16..209P}. Proposed explanations address deficiencies in determining the empirical mass-loss rates as well as in mass-loss predictions. Regarding empirical \mdot\ determinations, it should be realized that only UV resonance lines can be used as a diagnostic in the weak-wind regime, whilst in the (let us call it) strong-wind regime H$\alpha$ and, in the Galactic case, radio fluxes may also be used. The ions that produce the UV resonance profiles, such as C\,{\sc iv}, N\,{\sc v} and Si\,{\sc iv}, often represent minor ionization species.
The ionization continua of these species border the soft X-ray regime and therefore wind material may be susceptible to (non-thermal) processes producing soft X-ray emission, such as shocks or magnetic mechanisms \citep{2005A&A...441..735M}. From a theoretical viewpoint, potential causes of the weak-wind problem include the decoupling of the major driving ions (the metals) from the bulk of the plasma at low densities, when Coulomb coupling fails, and the subsequent ionic runaway \citep{1992A&A...262..515S,1995A&A...301..823B,1996A&A...309..867B,2000A&A...359..983K, 2002ApJ...568..965O,2003A&A...402..713K}; the shadowing of wind-driving lines by photospheric lines \citep{1996A&A...309..867B}; and the neglect of curvature terms in the velocity field \citep{1998ASPC..131..245P,1999ApJ...510..355O}. The results presented in this paper point to a cause for the weak winds related to the predictions of mass loss. This potential cause was quantitatively explored by \cite{2010A&A...512A..33L}, who pointed out that the global dynamical constraint imposed by \cite{2000A&A...362..295V} and recapped in Sect.~\ref{sec:mcwind} (notably Eq.~\ref{eq:mcdeltal}) need not guarantee that the derived mass-loss rates are consistent with stationary trans-sonic flows. Here we have shown that although this assumption by \cite{2000A&A...362..295V} is justified for stars with luminosities above $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}, it is not for lower luminosities. This luminosity limit for galactic O stars agrees with the empirical limit at $\sim 10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}\ to within 0.1 dex in $\log L$. The physical cause of the different \mdot\ regimes (weak and strong winds) is a lack of line acceleration at the base of the wind. In main-sequence stars, a contribution of Fe\,{\sc v} lines is present in O6 stars and is missing in the (lower luminosity and cooler) O6.5 stars, where Fe\,{\sc iv} is more dominant (see Sect.~\ref{sec:lateO}).
The importance of the Fe\,{\sc v}/{\sc iv} ionization balance has been pointed out by \cite{2010A&A...512A..33L} and is confirmed by our results. In one fundamental aspect our results differ from those of \cite{2010A&A...512A..33L}. For the sample of low luminosity stars (i.e. less than $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}) investigated by \citet{2009A&A...498..837M}, \cite{2010A&A...512A..33L} predicts mass-loss rates that are about 1.4 dex lower than anticipated by \cite{2000A&A...362..295V}. The hydrodynamical method presented in this paper, which identifies the ability of the star to drive an outflow with a balance of the line force and the gravitational force at the sonic point, predicts that these stars do not have a wind at all. As it is clear from the shape of UV resonance lines that these stars do have stellar winds (with average mass-loss rates that are 0.8 dex lower than \citeauthor{2010A&A...512A..33L}'s predictions), our result suggests that some other mechanism is either driving the wind or supplementing the line acceleration at the base of the wind. Which force (or forces) counterbalances gravity remains to be identified, but perhaps magnetic pressure, effects of turbulence, and/or pulsations may play a role. We do point out that once material is accelerated, there is sufficient opacity available to further accelerate it to larger velocities. Interestingly, \cite{2005A&A...441..735M} report that for their sample of galactic weak-wind objects the average value of $\vinf/\vesc$ is rather close to unity, and not 2.65. Although they concede that given the low wind densities their \vinf\ values may be lower limits, they point to a mechanism of X-ray heating proposed by \cite{1994MNRAS.266..917D} that may perhaps explain these results.
In the outer atmospheres of weak winds the cooling times can become quite long, such that heating of the material in, for instance, shocks may warm up the medium and strongly modify the ionization structure, in effect canceling the line force. Modified CAK theory does not predict the weak-wind discontinuity. In this theory the adopted values for $k$ and $\alpha$ are based on the input stellar spectrum, while the dilution and excitation/ionization changes throughout the wind are described by a fixed $\delta$. Therefore, no self-consistent feedback between the wind properties and the line acceleration is accounted for. We note that in the predictions by \cite{2001A&A...375..161P} a change of slope of the modified wind momentum luminosity relation can be seen at a luminosity of about $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}. It is tempting to suggest that if \citeauthor{2001A&A...375..161P} had implemented the iterative procedure that we use, they might have identified a weak-wind regime on theoretical grounds. \begin{figure} \centering \resizebox{9cm}{!}{ \includegraphics[width=0.70\textwidth,angle=0]{_wlr_gal_total} } \caption{The modified wind momentum $D_{\rm mom} = \sqrt{\rstar/\ensuremath{\mathit{R}_{\odot}}}\, \mdot\, \vinf$ as a function of stellar luminosity using the data of \cite{2007A&A...473..603M}. Black symbols refer to mass-loss estimates based on the fitting of the H$\alpha$-profile; grey symbols are mass-loss estimates that rely strongly on ultraviolet resonance lines. Note that the H$\alpha$ estimates at $L < 10^{5.2}\,\ensuremath{\mathit{L}_{\odot}}$ are upper limits. A steep jump of about 2 dex can be seen at a luminosity of $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}. The red dots are our predictions for $D_{\rm mom}$. The squares denote the supergiants, the circles the giants and the triangles the main-sequence stars. Below $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}\ we do not find wind solutions for dwarf stars.
At higher luminosity our predictions are close to the observed values. We therefore interpret the origin of the weak-wind problem in dwarf stars to be connected to a lack of line driving for objects less bright than about $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}.} \label{fig:modified} \end{figure} \section{Conclusions} \label{sec:Conclusions} We have presented new mass-loss rates and terminal wind velocities for a grid of massive O-type stars, improving the treatment of the physics in the Monte Carlo method by \citet{1985ApJ...288..679A} and \citet{1997ApJ...477..792D} to predict wind properties of early-type stars. Two new types of solutions have been discussed. First, building on the work of \citet{2008A&A...492..493M}, we present so-called best-$\beta$ solutions in which one still assumes a $\beta$-type velocity law (see Eq.~\ref{eq:betalaw}) in the wind, but in which the terminal velocity and $\beta$ are no longer adopted but constrained by requiring that they best fit the line force (distribution). Second, we abandon the $\beta$-type velocity structure and introduce numerical solutions of the wind stratification. Our main conclusions are: \begin{enumerate} \item For stars more luminous than $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}, the best-$\beta$ and hydrodynamical methods yield $\beta$ and $\vinf$ results in agreement with each other (within 5-20 percent), whilst the mass-loss rates agree within a factor of 2. \item Furthermore, both methods are in very good agreement with the mass-loss prescription by \cite{2000A&A...362..295V} when using our terminal velocities in their recipe. This implies that the main assumption entering the method on which the \citeauthor{2000A&A...362..295V} results are based -- i.e. that the momentum equation is not solved explicitly -- {\em is not compromising their predicted \mdot\ in this luminosity range}. Terminal velocity is an input parameter to the \citeauthor{2000A&A...362..295V} recipe.
If we apply the canonical value $\vinf = 2.6\,\varv_{\rm esc}$, the discrepancy between our mass-loss rates and theirs is of the order of 0.2\,dex, although occasionally 0.6\,dex. \item At luminosities $\la 10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}\ our hydrodynamical method fails to produce an outflow because of a lack of line driving at the base of the wind. This critical luminosity coincides with the onset of the `weak-wind problem'. \item For O dwarfs the above luminosity criterion translates to a boundary between starting and failing to start a wind at O6/O6.5. The direct cause of the failure to start a wind in O6.5\,V stars is the lower luminosity and the lack of Fe\,{\sc v} lines at the base of the wind compared to O6\,V stars. \item The fact that our hydrodynamical method fails to drive a wind at $L \la 10^{5.2}$ \ensuremath{\mathit{L}_{\odot}}\ may imply that some other mechanism is driving the weak winds or is supplementing the line acceleration at the base of the wind to help drive gas and initiate the wind. \item For stars more luminous than $10^{5.2}$\,\ensuremath{\mathit{L}_{\odot}}\ we predict, using the best-$\beta$ and hydrodynamical methods, terminal velocities that are typically 35 and 45 percent higher than observed. Such over-predictions are similar to what is seen in MCAK theory \citep{1995ApJ...455..269L}. \item We predict $\beta$ values in the range 0.85 to 1.05, with a trend that supergiants have slightly higher $\beta$ values than dwarfs. This range of $\beta$ values agrees very well with empirical results by \citet{2003ApJ...586..996M}. \end{enumerate} \begin{acknowledgements} We would like to thank the referee, Achim Feldmeier, for constructive comments. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Forces developed by contracting skeletal muscle depend on the structure and geometry of the contracting fascicles, and their interaction with the surrounding connective tissues. Recent studies have highlighted the complexity of the internal structure of muscles in 3D, and the changes to this structure during contraction, e.g. \cite{Rana20133d}. However, relatively little is known about the mechanisms that relate this structure to function. It is likely that regional variations in muscle structure, tissue properties and activation patterns all contribute to the force output from the muscle. In order to understand such effects it is necessary to use a muscle model that can incorporate these complexities. An efficient way, in terms of both time and cost, to test these effects would be with a 3D finite element simulation platform based on a realistic mathematical model of muscle. Muscle models and their related simulations have evolved over the last decade to incorporate 3D structural and architectural parameters such as fascicle orientations and connective tissue properties, e.g. \cite{Oomens2003,Blemker2005,Bol2008}. Features such as fascicle activation patterns, structural changes (for instance changes in fascicle curvature and orientation) under isometric and dynamic contractions, and their effects on the force and power generated by the whole muscle have been investigated in a number of previous works, e.g. \cite{rahemi2014regionalizing, Carrasco1999}. While recent developments in imaging and signal processing techniques are enhancing our ability to measure detailed structure \cite{namburete2011computational, Rana20133d} and activation profiles, e.g. \cite{hodson2013myoelectric,kinugasa2011unique,staudenmann2009heterogeneity}, in a muscle, all the intended parameters may be hard or impossible to collect in a single experiment.
Therefore, there is a need to use mathematical models to gain insight into muscle function, where a large number of parameters can be manipulated or measured during a simulation of muscle contraction. Here we present the results of 3D finite element simulations of a skeletal muscle model that has been developed specifically to investigate the relation between the muscle's internal structure and activation patterns and its force output \cite{rahemi2014regionalizing}. The model has the ability to include detailed 3D architecture and regionalized submaximal activity in different groups of fascicles. It integrates the effects of different tendon and aponeurosis properties on the force transfer within the muscle-tendon unit from its origin to insertion. Furthermore, we have previously shown that this mathematical modelling framework can predict the deformations of the internal structure within the muscle, and the force vector developed by the whole muscle, while the activity patterns within the muscle are varied and regionalized \cite{rahemi2014regionalizing}. The main purpose of the current work is to test the validity of this modelling framework using different sets of experimental data. A validated computational model of muscle can be used to test mechanisms and investigate the effect of parameters that are difficult or impossible to measure. The second purpose of this work is to demonstrate some of the effects of the tendon and aponeurosis properties on the structural properties of the muscle during contraction. \section{Methods} A 3D finite-element model of the medial gastrocnemius (MG) muscle was developed based on the continuum theory for fibre-reinforced composite materials. The tissues were transversely isotropic and were constrained to have nearly incompressible behaviour. The mathematical framework for this work has been previously described \cite{rahemi2014regionalizing}.
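For readers unfamiliar with fibre-reinforced continuum formulations, the basic kinematic quantities such a transversely isotropic model tracks can be sketched as follows (a generic illustration, independent of the paper's actual finite element implementation; the function name is ours):

```python
import numpy as np

def fibre_kinematics(F, a0):
    """Return (J, I1, lam) for a deformation gradient F and unit reference
    fibre direction a0: volume ratio J = det(F), first invariant
    I1 = tr(C) of the right Cauchy-Green tensor C = F^T F, and
    along-fibre stretch lam = |F a0|."""
    F = np.asarray(F, dtype=float)
    a0 = np.asarray(a0, dtype=float)
    C = F.T @ F
    return float(np.linalg.det(F)), float(np.trace(C)), float(np.linalg.norm(F @ a0))

# isochoric uniaxial stretch mu along the fibre (z) direction
mu = 1.2
F = np.diag([mu ** -0.5, mu ** -0.5, mu])
J, I1, lam = fibre_kinematics(F, [0.0, 0.0, 1.0])
# J = 1 (volume preserved), lam = mu, I1 = 2/mu + mu**2
```

Nearly incompressible behaviour is enforced by penalizing deviations of $J$ from one, while the along-fibre stretch drives the active and passive fibre stress.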
The computational model was validated by comparing the force-length properties of the whole muscle to experimental measures, and also by comparing the shape, orientation and curvature of the modelled muscle fascicles to similar measures that have recently been made available through ultrasound imaging studies. A unipennate muscle belly was modelled with dimensions similar to the medial gastrocnemius in man. The model coordinate system had the Z-axis running proximal-distal along the line of action of the muscle, the Y-axis running from the deep to the superficial direction, and the X-axis running across the medial-lateral width of the muscle. This model had the same constitutive law and geometry that we have previously used \cite{rahemi2014regionalizing}. However, for this study the activation patterns and structural parameters, along with the mathematical boundary and initial conditions, were altered. The end planes of the aponeuroses were defined as the transverse planes where the aponeuroses would join onto the external tendons, and mark the proximal and distal ends of the muscle belly. Some simulations were run for isometric contractions of the muscle belly, where the end planes of the aponeuroses were fixed. Other simulations were run for the whole muscle-tendon unit with the external tendons included: for these, the proximal and distal ends of the muscle-tendon unit were fixed during contraction. Simulations in this study were done using a set of C++ libraries for finite element modelling \cite{dealii}. Each simulation was run with an increasing and uniform level of activation across all fascicles. The simulations were terminated when the nonlinear iterations did not converge within specified tolerances within a given number of steps; this point depended on the initial state and boundary conditions for each simulation. Where groups of simulations are compared together, they were compared up to the highest activation level that was commonly achieved across the set.
Each simulation took approximately 10 minutes to run [on a standalone 8-core (16-thread) computer], and this time included that for mesh initialization, matrix setup, iterative solving and result output. \subsection{Simulation vs. Experiments - Validation of a muscle model} Two sets of simulations were carried out on a muscle belly geometry (see Figure 1 in \cite{rahemi2014regionalizing}). Initially the muscle belly was a parallelepiped with 65 mm initial fascicle length and a 15 degree pennation angle, and each aponeurosis was a rectangular cuboid of 210 $\times$ 55 $\times$ 3 mm. The initial stretch values for both the muscle and aponeurosis fascicles were set to one. This stretch corresponds to the optimal length for the muscle fascicles. A set of simulations was run to map the force-length relation for the muscle belly, and a second set of simulations was run to test the trajectories of the muscle fascicles and the strains within the tissues during contraction. \subsubsection{Force-length test for isometric contractions of a muscle belly} The model of the muscle belly was adjusted to different lengths by fixing one end at its aponeurosis end plane, and passively displacing the other aponeurosis end plane to a new position. When the length of the muscle belly reached the desired length, both end planes of the aponeuroses were fixed to maintain the muscle belly at an isometric length, and the activation level in the muscle fascicles was then ramped up. The range over which the muscle belly length changed was selected so that the pre-activation stretch in the fascicles was between 0.75 and 1.35. This is close to the range of stretches in the medial gastrocnemius (MG) that have been reported when the ankle is passively moved from 30 degrees plantarflexion to 15 degrees dorsiflexion \cite{Maganaris1998}. To achieve this, the muscle belly was shortened about 6\% for the lower bound of the fascicle stretch range.
However, lengthening of the belly was selected to surpass the natural range so that the force-stretch curve could be plotted over a longer range. The simulations at different lengths reached a common activity level of 30\%. The magnitudes of the passive and total belly forces were computed along with the muscle fascicle lengths at which those forces were developed. The active muscle force was taken as the difference between the total force and the passive force for a set of common muscle fascicle lengths. \subsubsection{Internal structural changes during isometric contractions of the muscle belly} Both end planes of the aponeuroses for the initial geometry were fixed and the activation was uniformly ramped up. Geometrical properties of fascicles were measured at different activity levels, both in 2D (fascicle curvature in the mid-longitudinal and transverse planes; Figs. \ref{fig:planes}, \ref{fig:curvemap}) and in 3D (fascicle path, along-fascicle and transverse strains; Fig. \ref{fig:3dpath} and Table \ref{tab:strains}). Undeformed fascicles (Figs. \ref{fig:planes}, \ref{fig:3dpath}) were chosen as groups of points that fit along lines that connect the two aponeuroses and have 15 degrees inclination (pennation) in the initial geometry. These fascicles were then tracked throughout all simulations to measure the structural deformations at the fascicle level. The mean pennation and curvature of the fascicles, along with the along-fascicle (longitudinal) and transverse strains, were extracted from the deformed fascicle data after the contractions had been simulated. The extent of fascicle curvature across the whole muscle belly in its mid-longitudinal plane was quantified by its root-mean-square (RMS) value for each activity level (\% MVC). Fascicle sheets were defined as the 3D faces that run longitudinally through the muscle and contain fascicles that were originally in the same YZ-plane of the undeformed geometry.
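The RMS curvature measure can be illustrated with a generic finite-difference sketch (not the fascicle-tracking code used in the study): for a fascicle sampled as points $(y_i, z_i)$ in the mid-longitudinal plane, the planar curvature is $\kappa = |y'z'' - z'y''|/(y'^2 + z'^2)^{3/2}$, evaluated here with respect to the sample index, which is permissible because curvature is independent of the parameterization.

```python
import numpy as np

def rms_curvature(y, z):
    """RMS of the planar curvature of a sampled curve (y_i, z_i), using
    central finite differences; a few endpoint samples are discarded
    because one-sided differences there are less accurate."""
    dy, dz = np.gradient(y), np.gradient(z)
    ddy, ddz = np.gradient(dy), np.gradient(dz)
    kappa = np.abs(dy * ddz - dz * ddy) / (dy ** 2 + dz ** 2) ** 1.5
    return float(np.sqrt(np.mean(kappa[2:-2] ** 2)))

# sanity check: an arc of a circle of radius R has curvature 1/R everywhere
R, theta = 20.0, np.linspace(0.0, np.pi / 3, 400)
rms = rms_curvature(R * np.cos(theta), R * np.sin(theta))  # ~ 1/R
```

Applied to tracked fascicle points, the same quantity summarizes how strongly the fascicles bow at each activity level.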
Figure \ref{fig:planes}B shows the intersection of these sheets with the mid-transverse plane. \subsection{The effect of tendon and aponeurosis properties on structural changes of the muscle tendon unit} Proximal and distal tendons were attached to the geometry of the muscle belly, with the distal tendon mimicking the Achilles tendon. Both tendons had the same thickness and width as the aponeuroses, but had lengths of 20 and 160 mm for the proximal and distal tendons, respectively. Initial tests showed considerable rotations of the muscle belly during contraction as the aponeurosis end planes aligned along the line-of-action of the whole muscle-tendon unit (Fig. \ref{fig:unsupported}). To minimize this rotation, the deep aponeurosis (that was attached to the distal tendon) was constrained from moving further in the deep direction during contraction. The free end of the proximal tendon was fixed, and the free end of the distal tendon was pulled about 0.2\% of the total muscle-tendon unit length as an initialization step to settle the system into an initially stable structure. It was then fixed to keep the muscle-tendon unit isometric. Two situations were investigated: (1) the tendon and aponeurosis had the same material properties, equal to the tendon properties, and (2) the tendon and aponeurosis had distinctive material properties, which are shown below. These simulations achieved a common activation level of 10\%, and the patterns of aponeurosis and tendon strains were compared for the two material formulations. The constitutive equations for the tendon and aponeurosis are given below; their mathematical formulation and implementation can be found in \cite{rahemi2014regionalizing}.
We denote by $\lambda$ and $\sigma$ the along-fascicle stretch and stress, respectively, and by $I_1$ the first invariant of the right Cauchy-Green deformation tensor. For the tendon, the along-fascicle stress-stretch relation (in Pa) is given by \begin{equation} \label{equ:tend_stress} \sigma_{Tend}(\lambda)=\begin{cases} 10^4 \times 1.904\times(\lambda ^{68.8} -1), & 1 \leq \lambda \leq 1.07 \\ 10^4 \times 1.904\times(6758\times(\lambda-1.07)+104.1), & 1.07< \lambda. \end{cases} \end{equation} The tendon base (matrix) material strain energy (in Pa) is given by \begin{equation} \label{equ:base_tend} \Psi_{Tend}= 10^4 \times 2.857\times(I_1-3). \end{equation} For the aponeurosis, the along-fascicle stress-stretch properties are given by \begin{equation} \label{equ:apol_stress} \sigma_{Apo\ }(\lambda)=\begin{cases} 10^6 \times 3.053\times(\lambda ^{124.6} -1), & 1 \leq \lambda \leq 1.025 \\ 10^6 \times 3.053\times(17375 \times(\lambda-1.025)+20.7), & 1.025< \lambda, \end{cases} \end{equation} while the base material of the aponeurosis is given by \begin{equation*} \label{equ:base_apo} \Psi_{Apo\ }= 10^4 \times 57.84\times e^{579.6\times(I_1-3)}. \end{equation*} \section{Results} The force-length properties for the contracting muscle belly are shown in Figure \ref{fig:fl} along with selected data from experimental studies on human muscle. As the muscle was activated, the stretch in the connective tissues allowed the fascicles to shorten, and so the fascicle lengths differed between the active and passive states. The plots shown in Figure \ref{fig:fl} are all for equivalent fascicle lengths, and so the active force was calculated by subtracting the passive force (at a slightly longer belly length) from the total force for a contracting muscle.
The total and active muscle belly forces showed a peak at a fascicle stretch of 1; however, the overall shapes of the active and passive plots for the muscle belly differed from the plots for the muscle fascicles alone, due to the effects of the aponeurosis, muscle structure and pennation. This modelling framework has previously shown \cite{rahemi2014regionalizing} that the belly force and fascicle pennation become larger as the activation state of the muscle belly increases. In the current study the pennation also increased when the belly was passively shortened, and decreased when the belly was passively lengthened. The ranges of pennation for the passive and the 30\% active belly were 11.6-19.3 and 13.4-21.2 degrees, respectively, as the belly length was reduced. The muscle fascicles in the MG belly changed from their initially straight configuration to a curved state during contraction. The fascicles showed an S-shaped profile in the mid-longitudinal plane (Fig. \ref{fig:planes}), with the fascicles intersecting the aponeurosis at a lower angle than their mean orientation would predict. These curvature profiles match those that we have previously seen experimentally using ultrasound-based imaging \cite{namburete2011computational}, and both are shown in Fig. \ref{fig:curvemap}. The magnitude of the fascicle curvatures increased as the contraction level increased, and the increases in curvature matched those observed experimentally in the contracting MG (Figs. \ref{fig:curvemap}, \ref{fig:RMS}). Strain measures for muscle tissue in the centre of the muscle belly are shown for an isometric contraction at 40\% in Table \ref{tab:strains}, along with experimentally measured values \cite{wakeling2014transverse}. The transverse strains in the fascicle (mid-longitudinal: YZ) plane were much smaller than the strains normal to this plane.
Poisson's ratio in the fascicle plane, calculated as the magnitude of the ratio between the transverse and along-fascicle strains in this plane, was 0.089. The fascicle sheets bulged in both the medial and lateral directions when the muscle belly contracted (Figs. \ref{fig:planes}, \ref{fig:3dpath}), and the bulge increased as the activity level rose. The 3D paths of the fascicles showed them running along the fascicle sheets as the sheets bulged, and thus forming part of a helix, as demonstrated by their varying azimuthal angle along their length (Fig. \ref{fig:3dpath}). When the whole muscle-tendon unit was simulated (with the external tendons included), the muscle belly showed substantial rotations as the aponeurosis end planes aligned closer to the line-of-action of the muscle (Fig. \ref{fig:unsupported}). Subsequent simulations of the MTU constrained the deep aponeurosis from displacing any deeper, and this forced the bulging of the muscle belly to be in the superficial direction. This emulated a simplified set of the constraints that act on the MG within the intact leg. The final simulations (Fig. \ref{fig:multi}) showed that when a stiffer aponeurosis was used, rather than adopting the tendon properties, the strains in the aponeurosis were smaller. The strains in the muscle tissue were also more uniform when a stiffer material was used for the aponeurosis. \section{Discussion} Validating a mathematical framework, and its numerical implementation, for human muscle is a challenge, due in part to the fact that muscle forces cannot be measured directly in vivo. In this study we have compared the force output from a computational 3D FEM model with the forces estimated from studies of ankle joint flexion-extension experiments. The general pattern of the force-length relationship generated by the model matches those from the experimental studies.
Experimental measures can identify the overall shape of the muscle with MRI \cite{gilles2006anatomical} and even the internal trajectories of the muscle fascicles using diffusion-tensor MRI \cite{heemskerk2009quantitative,heemskerk2011vivo,infantolino2012arrangement}. While this information is very important, the relatively long scan times of MR imaging preclude such measurements for active contractions \cite{rana2011vivo}. However, the aim of the presented muscle model is to understand the mechanisms occurring during muscle contractions, and it is therefore important to validate the muscle model in its contracted state. For this study we have used ultrasound-based measures \cite{namburete2011computational, Rana20133d} of the internal structure during contraction (fascicle orientations, curvatures, and strains) to validate the model. The model in this study has a simplistic initial geometry that has the overall dimensions and mean fascicle pennation of the MG in man, but without the details of its geometry or internal structure. Furthermore, all the muscle fascicles within the model had the same material properties and thus represented the same fibre type. Additionally, the activation was uniform across all fascicles: again, these are gross simplifications compared to the physiological complexities and variations that occur within muscles \textit{in vivo}. Nonetheless, the emergent features from the model showed a remarkable similarity to the experimental measures that are available for comparison, giving confidence that the model can identify general features and consequences of the muscle structure that are not a result of idiosyncrasies or muscle-specific details of geometry, structure or activation. Intramuscular pressure develops within muscles during contraction \cite{Sjogaard1986,sejersted1984intramuscular,Maton2006}, and the fascicles curve around the regions of higher pressure.
Previous modelling studies \cite{Van_Leeuwen1992,vanLeeuwen1995} have shown how the curvatures in both the muscle fascicles and the aponeurosis must balance the intramuscular pressure, and indeed our current model shows curvatures developing in both these structures. However, in these previous studies the curvatures of the muscle fascicles were constrained to be constant along their lengths, whereas this was not a constraint in the current model. The muscle fascicles in the current model started straight in their initial configuration, but developed S-shaped profiles when quantified in the mid-longitudinal plane. Both the S-shaped profiles and the increases in curvature that occurred with increasing activity and muscle force mirror those that we have previously imaged for the MG using B-mode ultrasound \cite{namburete2011computational,Rana20133d}. A consequence of the S-shaped trajectories is that the angle at which the fascicles insert onto the aponeurosis can be reduced, allowing a greater component of traction along the direction of the aponeuroses, in the line of action of the whole muscle. When tracked in 3D, the muscle fascicles followed curved paths on their fascicle sheets, indicating that the change in architecture is not simply due to bulging of the sheets. The active configurations of these fascicles indicate that the S-shaped fascicles in the 2D curvature maps (Fig. \ref{fig:curvemap}) are not only the result of projecting the fascicles onto a 2D plane \cite{rana2014curve}, but also arise from curling of the fascicles along a helical path. These 3D helical paths curve around the centre of the muscle (Fig. \ref{fig:3dpath}), where the intramuscular pressure is higher. It is generally assumed that muscle fascicles are isovolumetric \cite{Baskin1967}, and isovolumetric assumptions dictate the relation between the longitudinal and transverse strains.
Poisson's ratio is the absolute value of the ratio of the transverse to the longitudinal strain, and should be 0.5 for small strains in an incompressible, elastic material. The simulations in this study showed that as the activation increased, the transverse strain (in the mid-longitudinal plane) was lower than expected, resulting in a Poisson's ratio of 0.089; however, this was compensated for by greater transverse strains in the orthogonal direction (Table \ref{tab:strains}). The muscle fascicles were represented as transversely isotropic materials in this model \cite{rahemi2014regionalizing}, and so the asymmetry in their transverse bulging must reflect asymmetries in the transverse stresses acting on the fascicles. Because the model is unipennate, there would have been a larger compressive force in the mid-longitudinal plane, which was bounded by the aponeuroses being squeezed together by the pennate fascicles, than in the medial-lateral direction, where there was no aponeurosis bounding the muscle. Indeed, the model showed the muscle belly bulging at its sides while decreasing in thickness between the aponeuroses during contraction \cite{rahemi2014regionalizing}, in a similar manner to the decreases in thickness observed for the MG in vivo \cite{randhawa2013-1}. Recently we have quantified the transverse bulging of the muscle fascicles in the MG from B-mode ultrasound images \cite{wakeling2014transverse}, showing a Poisson's ratio of 0.09; this matches the simulated results and provides confidence that emergent features of the model explain realistic features of muscle contraction. When the model was evaluated with external tendons, there was a need to constrain displacements of the geometry, since the unconstrained simulation (Fig. \ref{fig:unsupported}) showed a large displacement of the muscle in the Y-direction.
This illustrates that a range of additional boundary constraints may need to be applied to finite element models of muscle-tendon units in order to produce more realistic deformations. When the aponeurosis and tendon were given the same material properties, a pattern of non-uniform strains resulted in the aponeurosis. This non-uniformity in strain is similar to that observed in previous experiments \cite{finni2003nonuniform,Muramatsu2001}, but our modelling study shows that it can be an emergent feature of the muscle, and not necessarily due to differences between active and inactive motor units in submaximally activated muscle, as previously suggested \cite{finni2003nonuniform}. The aponeurosis strains were smaller than the tendon strains for both material formulations (Table \ref{tab:materials}; Fig. \ref{fig:multi}). Although there is an obvious jump in strain between the tendon and aponeurosis when a stiffer material is used for the aponeurosis, the difference in strains was less than 2\%. A benefit of such a material distribution would be that a more uniform distribution of strains occurs in the fascicles, which would allow the fascicles to produce more consistent forces along their lengths. The simulated results from this finite element model match the general patterns from experimental and imaging results. Whole muscle force is partly shaped by the internal geometry of the muscle fascicles and their interactions with the aponeuroses \cite{rahemi2014regionalizing}, and so cannot be explained entirely by modelling a muscle as a scaled-up muscle fibre \cite{wakeling2011modelling}. As the fascicles shorten they must increase in cross-sectional area in order to maintain their volume, but asymmetric bulging occurs due to asymmetries in the compressive stress acting on the fascicles during contraction.
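The volume bookkeeping behind this asymmetric bulging can be sketched with a quick calculation; the shortening value below is hypothetical, and only the in-plane Poisson's ratio of 0.089 is taken from the simulations:

```python
# Isovolumetric bulging with asymmetric transverse strains (illustrative).
eps_long = -0.20   # hypothetical along-fascicle strain (20% shortening)
nu_plane = 0.089   # in-plane Poisson's ratio reported for the model

# in-plane transverse strain implied by the Poisson's ratio
eps_t_plane = -nu_plane * eps_long

# out-of-plane transverse strain required to conserve volume:
# (1 + eps_long) * (1 + eps_t_plane) * (1 + eps_t_out) = 1
eps_t_out = 1.0 / ((1.0 + eps_long) * (1.0 + eps_t_plane)) - 1.0

# eps_t_plane is ~1.8% while eps_t_out is ~23%: almost all of the
# cross-sectional bulge must occur normal to the mid-longitudinal plane
```

Under these numbers the in-plane bulge is an order of magnitude smaller than the out-of-plane bulge, consistent with the asymmetry in the reported transverse strains.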
The fascicles curve and adopt S-shaped profiles that align their traction closer to the aponeurosis direction, and they curl across fascicle sheets that in turn bulge around the intramuscular pressure that develops during contraction. The material properties of the aponeuroses affect the strains in the fascicles and thus their force-generating potential. The muscle model that we have validated in this study will provide a useful tool for understanding the mechanisms that relate muscle structure to its contractile function. \section*{Acknowledgement} We gratefully acknowledge funding from the Natural Sciences and Engineering Research Council of Canada (Nilima Nigam and James M. Wakeling) and the Canada Research Chairs Program (Nilima Nigam). \bibliographystyle{plain}
\section{Introduction} \label{sec:two} Microlensing constitutes one of the major methods to detect and characterize extrasolar planets \citep{mao91, gould92a}. The method is sensitive to planets that are difficult to detect using other methods, such as cool planets at or beyond the snow line \citep{bond04, gaudi08, dong09, sumi10, muraki11} and planets at large distances \citep{janczak10}. It is also sensitive to low-mass planets \citep{beaulieu06, bennett08}, making it possible to detect terrestrial planets from ground-based observations. Due to its weak dependence on the host-star brightness, it also enables the detection of planets around low-mass stars, down to M-type dwarfs \citep{udalski05, miyake11, batista11} and even sub-stellar mass objects. In addition, it is the only method that can detect old planetary-mass objects that are not bound to stars \citep{sumi11}. Therefore, microlensing is important for a complete census of the frequency and properties of planets \citep{gould10, cassan12}. Current microlensing planet searches are being conducted based on a specially designed strategy in which survey and follow-up observations work in close coordination. There are two main reasons for this strategy. The first reason is that the probability of a lensing event is very low. For a star located in the Galactic bulge, toward which planetary microlensing searches are conducted, the chance of detecting a lensed star at a specific time is of order $10^{-6}$ \citep{udalski94, alcock00}. Considering that a planet can be detected in only a small fraction of lensing events, it is essential to maximize the detection rate of lensing events to increase the rate of planet detections. Survey observations are designed for this purpose by monitoring a large area of the Galactic bulge field. The second reason for the survey/follow-up strategy is that the duration of a planetary signal is short.
The planetary signal is a short-term perturbation to the smooth standard light curve of the primary-induced lensing event. To densely cover planetary perturbations, follow-up observations are designed to focus on events detected by survey observations. Under the current strategy of microlensing searches, high-magnification events are important targets for follow-up observations. A typical number of events alerted at a certain time by survey experiments is of order 10. Considering that each event typically lasts for several dozen days, it is difficult to follow all alerted events with a restricted number of telescopes. To maximize the planet detection efficiency, therefore, priority is given to events for which the planet detection probability is high. Currently, the highest priority is given to high-magnification events. For a lens with a planet, there exist two sets of disconnected caustics, of which one set is located away from the planet-host star (the planetary caustic) while the other is always located close to the host star (the central caustic). For a high-magnification event, the sensitivity to a planetary companion is very high because the source trajectory always passes close to the perturbation region around the central caustic induced by the planet \citep{griest98}. The efficiency of the strategy focusing on high-magnification events is demonstrated by the fact that 7 out of the 13 microlensing planets detected as of the end of 2011 were found through this channel. \begin{figure}[t] \epsscale{1.15} \plotone{f1.eps} \caption{\label{fig:one} Central caustics induced by a planetary (left panel) and a binary companion (right panel). The regions with brownish and bluish colors represent the areas where the lensing magnification is higher and lower than the corresponding single-lensing magnification, respectively.
For each tone, the color changes to darker shades as the fractional difference between the single- and binary-lensing magnifications exceeds 2\%, 4\%, 8\%, and 16\%, respectively. }\end{figure} \begin{figure}[ht] \epsscale{1.15} \plotone{f2.eps} \caption{\label{fig:two} Light curves resulting from the two source trajectories (straight lines with arrows) marked in Fig. \ref{fig:one}. }\end{figure} Perturbations near the peak of a high-magnification lensing event (central perturbations) can be produced not only by a planet but also by a binary companion \citep{han09,shin12}. For a binary lens where the projected separation between the lens components is substantially smaller than the Einstein radius (a close binary), there exists a single small set of caustics formed around the barycenter of the binary. For a binary where the projected separation is substantially larger than the Einstein radius (a wide binary), on the other hand, there exist two sets of caustics, each of which is located close to one of the lens components. Then, for a high-magnification event resulting from a source trajectory passing close to the center of mass of a close binary or close to one of the lens components of a wide binary, there can be a short-term perturbation near the peak of the lensing light curve, similar to the central perturbation induced by a planet. It is known that the central perturbation induced by a planet can generally be distinguished from that induced by a binary because the caustic shapes and the resulting magnification patterns around the two types of caustics differ from each other. In this paper, we present a case of central perturbations for which it is difficult to distinguish between the planetary and binary interpretations. In $\S$2, we describe details of the degeneracy. In $\S$3, we demonstrate the degeneracy for two microlensing events, OGLE-2011-BLG-0526 and OGLE-2011-BLG-0950/MOA-2011-BLG-336, that were detected during the 2011 observation season.
In $\S$4, we summarize the results and conclude. \section{DEGENERACY} \begin{deluxetable*}{ll} \tablecaption{Telescopes\label{table:one}} \tablewidth{0pt} \tablehead{ \multicolumn{1}{c}{event} & \multicolumn{1}{c}{telescopes} } \startdata OGLE-2011-BLG-0526 & OGLE 1.3 m Warsaw telescope at Las Campanas Observatory in Chile \\ & MiNDSTEp 1.54 m Danish telescope at La Silla Paranal Observatory in Chile \\ & PLANET 0.6 m at Perth Observatory in Australia \\ & PLANET 1.0 m at SAAO in South Africa \\ & RoboNet 2.0 m Liverpool telescope (LT) in La Palma, Spain \\ \hline OGLE-2011-BLG-0950/ & OGLE 1.3 m Warsaw telescope at Las Campanas Observatory in Chile \\ MOA-2011-BLG-336 & MOA 1.8 m at Mt. John Observatory in New Zealand \\ & $\mu$FUN 1.3 m SMARTS telescope at CTIO in Chile \\ & $\mu$FUN 0.4 m at Auckland Observatory in New Zealand \\ & $\mu$FUN 0.4 m at Farm Cove Observatory (FCO) in New Zealand \\ & $\mu$FUN 0.4 m at Kumeu Observatory in New Zealand \\ & $\mu$FUN 0.6 m at Observatorio do Pico Dos Dias (OPD) in Brazil \\ & $\mu$FUN 1.0 m at Wise Observatory in Israel \\ & MiNDSTEp 1.54 m Danish telescope at La Silla Paranal Observatory in Chile \\ & PLANET 1.0 m at SAAO in South Africa \\ & RoboNet 2.0 m Faulkes Telescope North (FTN) in Hawaii \\ & RoboNet 2.0 m Faulkes Telescope South (FTS) in Australia \\ & RoboNet 2.0 m LT in La Palma, Spain \enddata \end{deluxetable*} The pattern of central perturbations in a lensing light curve is basically determined by the shape of the central caustic. For both planetary and binary cases, the central caustics form a closed figure that is composed of concave curves that meet at cusps. The general magnification pattern is that a positive perturbation occurs when the source is located in the region outside the caustic extending from cusps while a negative perturbation occurs when the source is located in the region between cusps. 
Here a ``positive'' (``negative'') perturbation means that the magnification of the perturbed part of the light curve is higher (lower) than the magnification of the corresponding single-lensing event. The central caustics induced by a planet and by a binary companion have different shapes, and thus the resulting patterns of magnification around the two types of caustics differ from each other. In Figure \ref{fig:one}, we present the central caustics and the magnification patterns around them for representative cases of planetary and binary lenses. \begin{figure}[ht] \epsscale{1.15} \plotone{f3.eps} \caption{\label{fig:three} Light curve of OGLE-2011-BLG-0526. Also drawn is the best-fit single-lensing light curve, which is obtained with all data except those around the perturbation. The colors of the data points are chosen to match those of the labels of the observatories where the data were taken. The inset shows an enlarged view of the peak region. }\end{figure} \begin{figure}[ht] \epsscale{1.15} \plotone{f4.eps} \caption{\label{fig:four} Light curve of OGLE-2011-BLG-0950/MOA-2011-BLG-336. Notations are the same as those in Fig. \ref{fig:three}. }\end{figure} The central caustic induced by a planet has the shape of an arrowhead with four cusps. One cusp, corresponding to the sharp tip of the arrowhead-shaped caustic, is located on the star-planet axis. This cusp is strong in the sense that light curves resulting from source trajectories passing close to it exhibit strong deviations from the single-lens expectation. Two other cusps are located off the star-planet axis, corresponding to the blunt ends of the arrowhead-shaped caustic. These two cusps are moderately strong. The fourth cusp, which is located on the star-planet axis between the two off-axis cusps, is weak in the sense that it creates relatively weak deviations. Due to the weakness of this last cusp, there exists an extended region of negative perturbation between the two off-axis cusps.
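The caustic structures described above can be computed with the standard complex-coordinate binary-lens formalism: for each phase angle, the critical curve is the root set of a quartic in $z$, and the lens equation maps those roots onto the caustic. The following Python sketch is our own illustration of this textbook procedure, not the code used for the modeling; the placement of the lenses on the real axis is an arbitrary coordinate choice:

```python
import numpy as np

def binary_caustics(s, q, n_phi=200):
    """Critical curves and caustics of a binary point-mass lens.

    s: projected separation in units of the Einstein radius;
    q: mass ratio m2/m1. Lens components sit at z = 0 and z = s.
    """
    m1, m2 = 1.0 / (1.0 + q), q / (1.0 + q)
    crit, caus = [], []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False):
        e = np.exp(-1j * phi)
        # critical curve condition m1/z^2 + m2/(z-s)^2 = e^{-i phi},
        # multiplied through by z^2 (z-s)^2 to give a quartic in z
        coeffs = [e, -2.0 * s * e, s * s * e - 1.0, 2.0 * s * m1, -m1 * s * s]
        for z in np.roots(coeffs):
            crit.append(z)
            # lens equation maps each critical point onto the caustic
            caus.append(z - m1 / np.conj(z) - m2 / np.conj(z - s))
    return np.array(crit), np.array(caus)
```

For a planetary mass ratio ($q \ll 1$) the points of `caus` near the host trace the small arrowhead-shaped central caustic, while for $q \sim 1$ they trace the four-cusped astroid-like caustic; plotting both cases should reproduce the two morphologies of Fig. \ref{fig:one}.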
The central caustic induced by a wide or a close binary has an astroid shape with four cusps. Two of the cusps are located on the binary-lens axis and the other two are along a line perpendicular to the axis. The caustic is symmetric with respect to the two lines connecting the on-axis and off-axis cusps. Due to the symmetry of the caustic, all cusps are of similar strength. Regions of positive perturbations form outside the caustic, extending from the cusps, and regions of negative perturbations form between the positive-perturbation regions. \begin{deluxetable*}{l|llll|llll} \tablecaption{Best-fit Parameters\label{table:two}} \tablewidth{0pt} \tablehead{ \multicolumn{1}{c|}{parameter} & \multicolumn{4}{c|}{OGLE-2011-BLG-0526} & \multicolumn{4}{c}{OGLE-2011-BLG-0950/MOA-2011-BLG-336} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{B} & \multicolumn{1}{c}{C} & \multicolumn{1}{c|}{D} & \multicolumn{1}{c}{A} & \multicolumn{1}{c}{B} & \multicolumn{1}{c}{C} & \multicolumn{1}{c}{D} } \startdata $\chi^2$ & 423.6 & 420.0 & 422.2 & 422.9 & 3073.5 & 2968.6 & 2969.0 & 3076.9 \\ $u_0$ & 0.141$\pm$0.001 & 0.117$\pm$0.002 & 0.117$\pm$0.002 & 0.140$\pm$0.020 & (9.3$\pm$0.1)$10^{-3}$ & (8.6$\pm$0.1)$10^{-3}$ & (8.7$\pm$0.1)$10^{-3}$ & (9.0$\pm$0.3)$10^{-3}$ \\ $t_{\rm E}$ (days) & 11.63$\pm$0.08 & 12.15$\pm$0.09 & 12.37$\pm$0.10 & 11.60$\pm$1.91 & 61.39$\pm$0.67 & 65.21$\pm$0.85 & 65.27$\pm$0.76 & 62.41$\pm$1.90 \\ ${\it s}$ & 0.311$\pm$0.003 & 0.48$\pm$0.01 & 1.94$\pm$0.02 & 6.43$\pm$0.05 & 0.075$\pm$0.001 & 0.70$\pm$0.01 & 1.43$\pm$0.01 & 22.7$\pm$0.3 \\ ${\it q}$ & 0.91$\pm$0.04 & (3.5$\pm$0.2)$10^{-2}$ & (3.9$\pm$0.2)$10^{-2}$ & 28.5$\pm$10.6 & 0.83$\pm$0.09 & (5.8$\pm$0.2)$10^{-4}$ & (6.0$\pm$0.2)$10^{-4}$ & 2.36$\pm$0.21 \\ $\alpha$ & -0.795$\pm$0.010 & 4.718$\pm$0.004 & 4.718$\pm$0.004 & 0.765$\pm$0.007 & 0.739$\pm$0.005 & 4.664$\pm$0.002 & 4.664$\pm$0.002 & 0.722$\pm$0.002 \\ $\rho_{\star}$ $(10^{-3})$ & 80$\pm$2 & -- & -- & 79$\pm$7 & 3.2$\pm$0.3 &
4.6$\pm$0.1 & 4.6$\pm$0.1 & 3.4$\pm$0.3 \\ $\pi_{{\rm E},N}$ & -- & -- & -- & -- & 0.22$\pm$0.15 & -0.10$\pm$0.17 & -0.29$\pm$0.14 & 0.12$\pm$0.09 \\ $\pi_{{\rm E},E}$ & -- & -- & -- & -- & -0.04$\pm$0.03 & 0.02$\pm$0.03 & 0.03$\pm$0.02 & -0.03$\pm$0.02 \enddata \end{deluxetable*} \begin{figure}[ht] \epsscale{1.15} \plotone{f5.eps} \caption{\label{fig:five} Distribution of $\Delta\chi^2$ in the parameter space of the projected binary separation ($s$) and the mass ratio ($q$) for OGLE-2011-BLG-0526. The regions marked in red, yellow, green, sky blue, and blue correspond to those with $\Delta\chi^2 <$ $6^2$, $12^2$, $18^2$, $24^2$, and $30^2$, respectively. The cross marks represent the locations of the local minima. The lower panels show the source trajectories (straight lines with arrows) with respect to the caustics for the individual local solutions. The small orange circle on each source trajectory represents the relative scale of the source star. }\end{figure} \begin{figure}[ht] \epsscale{1.15} \plotone{f6.eps} \caption{\label{fig:six} Distribution of $\Delta\chi^2$ in the $s-q$ parameter space for OGLE-2011-BLG-0950/MOA-2011-BLG-336. The regions marked in red, yellow, green, sky blue, and blue correspond to those with $\Delta\chi^2 <$ $13^2$, $26^2$, $39^2$, $52^2$, and $65^2$, respectively. Notations are the same as in Fig. \ref{fig:five}. }\end{figure} Despite the fundamentally different caustic shapes and the resulting magnification patterns, we find a case of central perturbations for which it is difficult to distinguish between the planetary and binary interpretations. This degeneracy is illustrated in Figures \ref{fig:one} and \ref{fig:two}. The planetary lensing case for this degeneracy occurs when the source trajectory passes through the negative perturbation region behind the back end of the arrowhead-shaped central caustic with an angle between the source trajectory and the star-planet axis (source-trajectory angle) of $\alpha \sim 90^{\circ}$.
For a binary case, a similar perturbation occurs when the source trajectory passes through the negative perturbation region between two cusps of an astroid-shaped caustic with a source-trajectory angle of $\sim 45^{\circ}$. For both cases, the morphology of the resulting perturbation is that the peak of the light curve appears blunt and flat. \section{ACTUAL EVENTS} We searched for high-magnification events with similar central perturbations among those detected during the 2011 observation season. From this search, we find that two events, OGLE-2011-BLG-0526 and OGLE-2011-BLG-0950/MOA-2011-BLG-336, exhibit such central perturbations. In this section, we investigate the severity of the degeneracy by conducting detailed modeling of the light curves for these events. \begin{figure}[ht] \epsscale{1.15} \plotone{f7.eps} \caption{\label{fig:seven} Light curve of OGLE-2011-BLG-0526 near the peak region and the residuals from the 4 local solutions. The model light curve drawn over the data is based on one of the local solutions (local ``B''). The colors of the data points are chosen to match those of the labels of the observatories where the data were taken. }\end{figure} The event OGLE-2011-BLG-0526 occurred on a Galactic bulge star that is positioned at $(\alpha,\delta)_{J2000}$ = $(18^{\rm h}02^{\rm m}45^{\rm s}\hskip-2pt.37, -28^{\circ}01^{\prime}25^{\prime\prime}\hskip-2pt.8)$, which corresponds to the Galactic coordinates $(l,b)$ = $(2.69^{\circ},-2.79^{\circ})$. The event was detected and alerted to the microlensing community by the Optical Gravitational Lensing Experiment (OGLE) group. High-magnification events are usually re-alerted after the first alert. Unfortunately, no high-magnification alert was issued for this event, and thus follow-up observations were conducted using only a fraction of the telescopes available for follow-up observations. As a result, the coverage of the peak is not very dense.
The telescopes used for the observations of this event are listed in Table \ref{table:one}. The event OGLE-2011-BLG-0950/MOA-2011-BLG-336 also occurred on a Galactic bulge star, located at $(\alpha,\delta)_{J2000}$ = $(17^{\rm h}57^{\rm m}16^{\rm s}\hskip-2pt.63, -32^{\circ}39^{\prime}57^{\prime\prime}\hskip-2pt.0)$, corresponding to $(l,b)$ = $(358.07^{\circ},-4.05^{\circ})$. It was independently discovered in the survey experiments conducted by the OGLE and the Microlensing Observations in Astrophysics (MOA) groups. A high-magnification alert was issued for this event 4 days before the peak. Based on this alert, follow-up observations were conducted using 13 telescopes located in 8 different countries. As a result, the perturbation was covered more densely than that of OGLE-2011-BLG-0526. In Table \ref{table:one}, we also list the telescopes used for the observations of this event. Initial reductions of the data taken at the different observatories were carried out using photometry codes developed by the individual groups. To improve the data quality, we conducted additional photometry for all follow-up data of OGLE-2011-BLG-0950/MOA-2011-BLG-336 using codes based on difference imaging photometry. For use in the modeling, we rescaled the error bars of the data sets so that $\chi^{2}$ per degree of freedom becomes unity for each data set, where the value of $\chi^{2}$ is calculated based on the best-fit solution obtained from modeling. We also eliminated 3$\sigma$ outliers from the best-fit solution in the modeling. \begin{figure}[ht] \epsscale{1.15} \plotone{f8.eps} \caption{\label{fig:eight} Light curve of OGLE-2011-BLG-0950/MOA-2011-BLG-336 near the peak region and the residuals from the 4 local solutions. The model light curve drawn over the data is based on one of the local solutions (local ``C''). Notations are the same as those in Fig. \ref{fig:seven}.
}\end{figure} In Figures \ref{fig:three} and \ref{fig:four}, we present the light curves of the two events. Also drawn are the best-fit single-lensing light curves. For both events, the light curves are well represented by those of standard single-lensing events, except for the short-lasting perturbations near the peak. The common morphology of the perturbations is that the peak appears flat and blunt. To investigate the nature of the perturbations, we conducted binary-lens modeling of the light curves. In the modeling of each light curve, we searched for the solution of the binary-lensing parameters that best describes the observed light curve by minimizing $\chi^{2}$ in the parameter space. For OGLE-2011-BLG-0526, the time scale of the event is not long ($t_{\rm E} \sim$ 12 days), and thus we modeled the light curve using 7 basic binary-lens parameters. The first 3 of these parameters characterize the geometry of the lens-source approach: the Einstein time scale, $t_{\rm E}$, the time of the closest lens-source approach, $t_0$, and the lens-source separation at that moment, $u_0$, in units of the Einstein radius. The next 3 parameters characterize the binary lens: the mass ratio between the lens components, $q$, the projected separation in units of the Einstein radius, $s$, and the angle between the source trajectory and the binary axis, $\alpha$. The last parameter, the normalized source radius $\rho_{\star}$, describes the deviation of the light curve caused by the finite-source effect; it represents the angular source radius $\theta_{\star}$ in units of the angular Einstein radius $\theta_{\rm E}$, i.e.\ $\rho_{\star}=\theta_{\star}/\theta_{\rm E}$. For OGLE-2011-BLG-0950/MOA-2011-BLG-336, the duration of the event ($t_{\rm E} \sim$ 65 days) is relatively long.
For such a case, the motion of the source with respect to the lens may deviate from a rectilinear one due to the change of the observer's position caused by the orbital motion of the Earth around the Sun, and this deviation can cause a long-term deviation in the light curve \citep{gould92a}. Consideration of this ``parallax effect'' requires including two additional parameters, $\pi_{{\rm E},N}$ and $\pi_{{\rm E},E}$, which represent the two components of the lens parallax ${{\bf \pi}_{\rm E}}$ projected on the sky in the north and east equatorial coordinates, respectively. The direction of the parallax vector corresponds to the relative lens-source motion in the frame of the Earth at a specific time of the event. Its magnitude corresponds to the ratio of the Earth's orbital radius (1 AU) to the physical Einstein radius, $r_{\rm E}$ = $D_{L}\theta_{\rm E}$, projected on the observer plane, i.e. $\pi_{\rm E}=({\rm AU}/r_{\rm E})[(D_{\rm S}-D_{\rm L})/D_{\rm S}]$. Knowing that central perturbations can be produced either by a planet or by a binary companion, we conducted a thorough search for solutions in the $s-q$ parameter space encompassing both the planet and binary regimes to investigate the possible existence of local minima. In Figures \ref{fig:five} and \ref{fig:six}, we present the resulting distributions of $\Delta\chi^2$ in the $s-q$ parameter space for the individual events. From the distributions, it is found that there exist four distinct local minima for both events. Among them, two minima are located in the region with $s > 1$ and the other two are located in the region with $s < 1$. For each close/wide binary pair, one local minimum is located in the regime of a binary mass ratio ($q \sim 1$) and the other minimum is located in the regime of a planet mass ratio ($q \ll 1$). We designate the individual minima by ``A'' ($s < 1$ with binary $q$), ``B'' ($s < 1$ with planetary $q$), ``C'' ($s > 1$ with planetary $q$), and ``D'' ($s > 1$ with binary $q$).
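The relation $\pi_{\rm E}=({\rm AU}/r_{\rm E})[(D_{\rm S}-D_{\rm L})/D_{\rm S}]$ with $r_{\rm E}=D_{L}\theta_{\rm E}$ can be evaluated directly: since 1 AU subtends 1 arcsec at a distance of 1 pc, $r_{\rm E}$ expressed in AU is simply $D_{L}$ in pc times $\theta_{\rm E}$ in arcsec. A sketch with illustrative (not fitted) values:

```python
def lens_parallax(D_L_pc, D_S_pc, theta_E_mas):
    """pi_E = (AU / r_E) * (D_S - D_L) / D_S with r_E = D_L * theta_E.
    r_E in AU equals D_L [pc] * theta_E [arcsec] (small-angle relation)."""
    theta_E_arcsec = theta_E_mas * 1.0e-3
    r_E_au = D_L_pc * theta_E_arcsec  # physical Einstein radius in AU
    return (1.0 / r_E_au) * (D_S_pc - D_L_pc) / D_S_pc

# Hypothetical bulge-lens configuration (not the values for these events):
# D_L = 6 kpc, D_S = 8 kpc, theta_E = 0.5 mas  ->  r_E = 3 AU, pi_E = 1/12
```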
In Table \ref{table:two}, we present the lensing parameters of the individual local minima that are obtained by further refining the local solutions in the corresponding parameter space. The exact locations of the local minima are marked by ``X'' on the $\Delta\chi^2$ maps in Figures \ref{fig:five} and \ref{fig:six}. For each local solution, we also present the caustic and the source trajectory. We note that the size of the caustic for the binary with $s < 1$ is scaled by the Einstein radius corresponding to the total mass of the lens, while the caustic size for the binary with $s > 1$ is scaled by the Einstein radius corresponding to the mass of the lens component that the source approaches. The findings from the comparison of the local solutions and the corresponding lens-system geometries are summarized below. \begin{enumerate} \item For both events, the $\chi^2$ differences from the best-fit single-lensing models are very large. We find that $\Delta\chi^2$ = 1085 for OGLE-2011-BLG-0526 and $\Delta\chi^2$ = 5644 for OGLE-2011-BLG-0950/MOA-2011-BLG-336, implying that the perturbations of both events are clearly detected. \item Despite the clear signature of the perturbation, we find that the degeneracy of the four local solutions is severe. To better show the subtle differences between the local solutions, we present the residuals of the data from the individual local solutions in Figures \ref{fig:seven} and \ref{fig:eight} for OGLE-2011-BLG-0526 and OGLE-2011-BLG-0950/MOA-2011-BLG-336, respectively. We also present the enlargement of the perturbed parts of the light curve in the upper panel of each figure. For the case of OGLE-2011-BLG-0526, the $\chi^2$ difference between the planetary and binary models is $\sim$ 3, implying that the degeneracy is very severe. For the case of OGLE-2011-BLG-0950/MOA-2011-BLG-336, the planetary solution is favored over the binary solution with $\Delta\chi^2 \sim$ 105 and thus the stellar binary model is formally excluded.
However, from visual inspection of the residuals, it is found that the systematic residuals of the data from the planetary model are larger than the difference between the planetary and binary models. In addition, the CTIO, Danish, and OGLE data of overlapping coverage appear to differ from each other by an amount at least as large as the difference between the planetary and stellar binary models. Therefore, it is difficult to claim a planet discovery based on $<$1\% variations in the light curve. \item For a pair of solutions with similar mass ratios, the solutions with $s > 1$ and $s < 1$ result in a similar caustic shape. The degeneracy between these solutions, often referred to as the $s \leftrightarrow s^{-1}$ degeneracy, is known to be caused by the symmetry of the lens-mapping equation between close and wide binaries \citep{dominik99,albrow99,afonso00,an05,chung05}. \end{enumerate} The degeneracy between the pairs of solutions with planetary and binary mass ratios corresponds to the degeneracy mentioned in $\S$ 2. It should be noted that despite the large difference in caustic shape, the resulting perturbations appear very alike. The planet/binary degeneracy introduced in this work was not known before. This is mostly because the caustics induced by a planet and a binary companion have very different shapes and thus it was widely believed that perturbations induced by the two types of companions could be easily distinguished. Considering that two events of a single season suffer from this degeneracy, along with the fact that perturbations caused by non-caustic-crossing source trajectories have larger cross sections, we expect that central perturbations suffering from this degeneracy are common. \section{CONCLUSION} We introduced a new type of degeneracy in the planet/binary interpretation of central perturbations in microlensing light curves.
The planetary lensing case for this degeneracy occurs when the source trajectory passes the negative perturbation region behind the back end of the arrowhead-shaped central caustic with a source-trajectory angle of $\sim 90^{\circ}$. For a binary case, a similar perturbation occurs when the source trajectory passes through the negative perturbation region between two cusps of an astroid-shaped caustic with a source-trajectory angle of $\sim 45^{\circ}$. For both cases, the morphology of the resulting perturbation is that the peak of the light curve appears to be blunt and flat. From an investigation of events detected during the 2011 microlensing observation season, we found two events, OGLE-2011-BLG-0526 and OGLE-2011-BLG-0950/MOA-2011-BLG-336, that exhibit such perturbations. From detailed modeling of the light curves, we demonstrated the severity of the degeneracy. Considering that two events during a single season suffer from the degeneracy, we conclude that central perturbations experiencing the degeneracy should be common. \acknowledgments Work by CH was supported by the Creative Research Initiative Program (2009-0081561) of the National Research Foundation of Korea. The OGLE project has received funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013) / ERC grant agreement no. 246678. The MOA experiment was supported by grants JSPS22403003 and JSPS23340064. TS was supported by the grant JSPS23340044. Y. Muraki acknowledges support from JSPS grants JSPS23540339 and JSPS19340058. The MiNDSTEp monitoring campaign is powered by ARTEMiS (Automated Terrestrial Exoplanet Microlensing Search; Dominik et al. 2008, AN 329, 248). MH acknowledges support by the German Research Foundation (DFG). DR (boursier FRIA), OW (FNRS research fellow) and J. Surdej acknowledge support from the Communaut\'{e} fran\c{c}aise de Belgique Actions de recherche concert\'{e}es -- Acad\'{e}mie universitaire Wallonie-Europe.
KA, DMB, MD, KH, MH, CL, CS, RAS, and YT are thankful to Qatar National Research Fund (QNRF), member of Qatar Foundation, for support by grant NPRP 09-476-1-078. CS received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement 268421. This work is based in part on data collected by MiNDSTEp with the Danish 1.54 m telescope at the ESO La Silla Observatory. The Danish 1.54 m telescope is operated based on a grant from the Danish Natural Science Foundation (FNU). A. Gould and B.S. Gaudi acknowledge support from NSF AST-1103471. B.S. Gaudi, A. Gould, and R.W. Pogge acknowledge support from NASA grant NNG04GL51G. Work by J.C. Yee is supported by a National Science Foundation Graduate Research Fellowship under Grant No. 2009068160. S. Dong's research was performed under contract with the California Institute of Technology (Caltech) funded by NASA through the Sagan Fellowship Program. Research by TCH was carried out under the KRCF Young Scientist Research Fellowship Program. TCH and CUL acknowledge the support of Korea Astronomy and Space Science Institute (KASI) grant 2012-1-410-02. Dr. David Warren provided financial support for Mt. Canopus Observatory.
\section{Introduction} In this paper, we study regularity properties of the induced cost (under several criteria) of a controlled diffusion process with respect to a control topology defined by Borkar \cite{Bor89}, and the implications of these properties for the existence and, in particular, the approximation of optimal controls. We will arrive at very general approximation results for optimal control policies by quantized (finite action / piecewise constant) stationary control policies for a general class of controlled diffusions in the whole space ${\mathds{R}^{d}}$\,, as well as time-discretizations for the criteria with finite horizons. Such a problem is of significant practical consequence, and accordingly has been studied extensively in a variety of setups. Due to its wide range of applications in domains spanning mathematical finance, large deviations and robust control, vehicle and mobile robot control, and several other fields, stochastic optimal control problems for controlled diffusions have been studied extensively in the literature; see, e.g., \cite{Bor-book}, \cite{HP09-book} (finite horizon cost), \cite{BS86}, \cite{BB96} (discounted cost), \cite{Ari-12}, \cite{AA13}, \cite{BG90}, \cite{BG88I}, \cite{BG90b}, \cite{ABG-book} (ergodic cost), and the references therein\,. Typically, there are two main approaches to these problems. The first is Bellman's dynamic programming principle (DPP). The DPP approach allows one to characterize the value function of the optimal control problem as the unique solution of the associated Hamilton-Jacobi-Bellman (HJB) equation \cite{Bor-book}, \cite{HP09-book}, \cite{ABG-book}, \cite{Lions-83A}, \cite{Lions-83B}. The second is the Pontryagin maximum principle (in the stochastic framework) \cite{PBGM62}\,. For numerical methods as well as learning theoretic methods, it is imperative to arrive at rigorous approximation results.
In the continuous-time literature, most of the approximation results are built on time-discretization and mainly focus on finite horizon or discounted cost criteria; see, e.g., \cite{KD92}, \cite{KH77}, \cite{KH01}, \cite{KN98A}, \cite{KN2000A}, \cite{BR-02}, \cite{BJ-06}\,, though the ergodic control and control up to an exit time criteria have also been studied \cites{KD92,kushner2014partial}. For finite horizon criteria, a commonly adopted approach of approximating controlled diffusions by a sequence of discrete-time Markov chains via weak convergence methods was studied by Kushner, and by Kushner and Dupuis; see \cite{KD92}, \cite{KH77}, \cite{KH01}\,. These works deal with numerical procedures to construct near optimal control policies for controlled diffusion models by approximating the space of (open-loop adapted) relaxed control policies with those that are piecewise constant, and by considering the weak convergence of approximating probability measures on the path space to the measure on the continuous-time limit. It is shown in \cite{KD92}, \cite{KH77}, \cite{KH01} that if the constructed controlled Markov chain satisfies a certain ``consistency'' condition at the discrete-time sampling instants, then the state process and the corresponding value function asymptotically approximate the continuous-time state process and the associated value function. This approach has been referred to as the {\it weak convergence} approach. In an alternative program, building on finite difference approximations for Bellman's equations and utilizing their regularity properties, Krylov \cite{KN98A}, \cite{KN2000A} established the convergence rate of such approximation techniques, where finite difference approximations are studied to arrive at stability results. In particular, some estimates for the error bound of the finite-difference approximation schemes in the problem of finding viscosity or probabilistic solutions to degenerate Bellman's equations are established.
The proof technique is based on mean value theorems for stochastic integrals (as in \cite{KN2001AA}), obtained on the basis of elementary properties of the associated Bellman's equations\,. Also, for controlled non-degenerate diffusion processes, it is shown in \cite{KN99AA} that using policies which are constant on intervals of length $h^2$, one can approximate the value function with an error of order $h^{\frac{1}{3}}$\,. In \cite{BR-02}, \cite{BJ-06}, Barles et al. improved the error bounds obtained in the earlier works \cite{KN98A}, \cite{KN2000A}, \cite{KN99AA} of Krylov\,. For the finite-horizon cost case, Borkar \cite{Bor89}, \cite{Bor-book} pursued an alternative approach to show continuity (when only stationary state feedback policies are considered for finite horizon problems) in his newly introduced topology; he studied the dependence of the strategic measures (on the path space) on the control policy, via regularity properties of generator functions. However, Borkar \cite{Bor89} did not study the implications for approximations. Instead of the approaches adopted in the aforementioned studies, in this paper, utilizing regularity results of the associated Poisson equations via PDE theory, we arrive at continuity results under a relatively weaker set of assumptions on the diffusion coefficients (with the exception of Krylov's method, which is tailored for finite horizon problems). Our approach allows one to arrive at a unification of approximation methods for the finite horizon, infinite horizon discounted, control up to an exit time, and ergodic cost criteria. Accordingly, our primary approach is to utilize the regularity properties of the partial differential equations directly, first via uniqueness of solutions, and then via regularity properties of the solutions to establish consistency of the optimality equations satisfied by the limits of solutions (as policies converge).
We will see that one can obtain rather concise, direct, and general results. Additionally, our results can be used to present weaker conditions under which the weak convergence methods are applicable or under which discretized approximations can be shown to be near optimal: for example, it will be a consequence of our analysis that for many of the criteria one can utilize piecewise continuous or continuous control policies for near optimality, which then implies \cite[Assumption A2.3, pp. 322]{KD92} used for approximations under the ergodic cost criterion (where invariant measures under sampled chains can be shown to converge to the invariant measure of a continuous-time limit as the discretization gets finer). Furthermore, we do not impose uniform boundedness conditions on the drift term or (uniform) Lipschitz continuity conditions, a common assumption in \cite{KD92}, \cite{KH77}, \cite{KH01}, \cite{KN98A}, and \cite{KN2000A}. As noted above, the study of the finite action/piecewise constant approximation problem plays an important role in computing near optimal policies and in learning algorithms for controlled diffusions in ${\mathds{R}^{d}}$\,. As pointed out in \cite{RF-15N}, \cite{JPR-19P}, piecewise constant policies are also useful in numerical methods for solving HJB equations. The computational advantage comes from the fact that over the intervals on which the policy is constant, one only has to solve linear PDEs\,. In the continuous-time setup, learning problems become much more involved due to the complex structure of the dynamics and the optimality equation. One common approach to overcome these difficulties is to construct simpler models, by discretizing the time, state and action spaces, which approximate the original continuous-time model\,. In a recent work \cite{BK-22R}, the authors studied an approximate $Q$-learning algorithm for controlled diffusion models by discretizing the time, space and action spaces.
Under mild assumptions, they produced a learning algorithm which converges to an approximately optimal control policy for a discounted cost problem. They assumed that the discretization is uniform in time but that the discretization in state and action can be non-uniform\,. A similar learning algorithm for controlled diffusions is proposed in \cite{RP-97B}; this result is based on finite difference and finite element approximations (as in \cite{KD92})\,. Thus, if one can establish that learning a control model with finitely many control actions is sufficient for approximate optimality, then it will be easier to produce efficient learning algorithms for the original model\,. In the literature on discrete-time Markov decision processes (MDPs), various approximation techniques are available to address the approximation problems, e.g., approximate dynamic programming, approximate value or policy iteration, approximate linear programming, simulation based techniques, neuro-dynamic programming (or reinforcement learning), state aggregation, etc. (see \cite{Bertsekas1975A}, \cite{BVR-06}, \cite{DP-13SIAM}, \cite{SLS-18B} and the references therein)\,. For discrete-time controlled models, the near optimality of quantized policies has been studied extensively in the literature; see, e.g., \cite{KY22A}, \cite{SYL17}, \cite{SSL-16}, \cite{SST-20A}, \cite{SLS-18B}\,. In \cite{SYL17}, \cite{SSL-16}, the authors studied the finite state and finite action approximation (respectively) of fully observed MDPs with Borel state and action spaces, for both the discounted and average cost criteria\,. In the compact state space case, an explicit rate of convergence is also established in \cite{SYL17}\,. Later, these results were extended to the partially observed Markov decision process setup in \cite{KY22A}, \cite{SST-20A}; see also the references therein\,.
Recently, \cite[Section 4]{arapostathis2021optimality} established the denseness of the performance of deterministic policies with finite action spaces among the performance values attained by the set of all randomized stationary policies. \subsection*{Contributions and main results} In this manuscript, our main goal is to study the following approximation problem: for a general class of controlled diffusions in ${\mathds{R}^{d}}$, under what conditions can one approximate the optimal control policies, for both finite and infinite horizon cost criteria, by policies with finite actions or by piecewise constant policies? In order to address these questions, we first show that both finite horizon and infinite horizon (discounted/ergodic) costs are continuous as functions of the control policy under the Borkar topology \cite{Bor89}. We establish these results by exploiting the existence and uniqueness results for the associated Poisson equations (see Theorem~\ref{TContFHC} (finite horizon), Theorem~\ref{T1.1} (discounted), Theorem~\ref{T1.1Exit} (control up to an exit time), and Theorems~\ref{ergodicnearmono1}, \ref{ergodicLyap1} (ergodic)). The analysis of the ergodic cost case is relatively more involved. One of the major issues in analyzing the ergodic cost criterion under the near-monotone hypothesis is the non-uniqueness/restricted uniqueness of the solution of the associated HJB/Poisson equation (see \cite[Example~3.8.3]{ABG-book}, \cite{AA13})\,. In \cite[Example~3.8.3]{ABG-book}, \cite{AA13} it is shown that under the near-monotone hypothesis the associated HJB/Poisson equation may admit uncountably many solutions\,. In this paper, we show that under the near-monotone hypothesis the associated Poisson equation admits a unique solution in the space of compatible solution pairs (see \cite[Definition~1.1]{AA13})\,. The continuity results obtained in this paper are also useful in establishing the existence of optimal policies for the corresponding optimal control problems.
Next, utilizing Lusin's theorem and the Tietze extension theorem, we show that under the Borkar topology, quantized (finite action/piecewise constant) stationary policies are dense in the space of stationary Markov policies (see Section~\ref{DensePol})\,. Following and slightly modifying the proof technique for the denseness of stationary policies, now including time as a parameter, we establish that piecewise constant Markov policies are dense in the space of Markov policies under the Borkar topology (see Theorem~\ref{TDPCMP}). Then, using our continuity and denseness results, we deduce that for both finite and infinite horizon cost criteria, the optimal control policies can be approximated by quantized (finite action/piecewise constant) policies with arbitrary precision (see Theorem~\ref{TFiniteOptApprox1} (finite horizon), Theorem~\ref{T1.2ExitCost} (control up to an exit time), and Theorems~\ref{ErgodNearmOPT1}, \ref{TErgoOptApprox1} (infinite horizon)). The remainder of the paper is organized as follows. In Section~\ref{PD} we provide the problem formulation\,. The continuity of the discounted cost and the cost up to an exit time as functions of the control policy is proved in Section~\ref{CDiscCost}. A similar continuity result for the ergodic cost is presented in Section~\ref{CErgoCost}, where we establish these results under two types of conditions: stability or near-monotonicity. Section~\ref{DensePol} is devoted to establishing the denseness of finite action/piecewise constant stationary policies under the Borkar topology. Then, using the denseness and continuity results, we show the near optimality of finite models for the cost up to an exit time and the discounted/ergodic cost criteria in Section~\ref{NOptiFinite}. Finally, in Section~\ref{TimeDMarkov}, we analyze the denseness of piecewise constant Markov policies under the Borkar topology and then, exploiting the denseness result, we prove the near optimality of piecewise constant Markov policies for the finite horizon cost criterion.
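As a concrete (hypothetical) illustration of the quantization step, assume a deterministic stationary policy $v$ with scalar action space $\mathbb{U}=[0,1]$; nearest-neighbor projection onto a finite $\epsilon$-grid yields a finite action policy within $\epsilon/2$ of $v$ pointwise. This is only a sketch of the idea, not the construction used in the proofs (which proceeds via Lusin's theorem and the Tietze extension theorem):

```python
def quantize_policy(v, eps):
    """Project a deterministic stationary policy v: R^d -> U = [0, 1]
    onto the finite action set {0, eps, 2*eps, ..., 1}; the resulting
    policy v_eps satisfies |v_eps(x) - v(x)| <= eps / 2 for all x."""
    n = int(round(1.0 / eps))
    grid = [i * eps for i in range(n + 1)]
    def v_eps(x):
        return min(grid, key=lambda g: abs(g - v(x)))
    return v_eps
```

The continuity results then transfer the pointwise closeness of $v_{\epsilon}$ to $v$ into closeness of the induced costs.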
\subsection*{Notation:} \begin{itemize} \item For any set $A\subset\mathds{R}^{d}$, by $\uptau(A)$ we denote the \emph{first exit time} of the process $\{X_{t}\}$ from the set $A\subset\mathds{R}^{d}$, defined by \begin{equation*} \uptau(A) \,:=\, \inf\,\{t>0\,\colon X_{t}\not\in A\}\,. \end{equation*} \item $\sB_{r}$ denotes the open ball of radius $r$ in $\mathds{R}^{d}$, centered at the origin. \item $\uptau_{r}$, ${\Breve\uptau}_{r}$ denote the first exit times from $\sB_{r}$ and $\sB_{r}^c$, respectively, i.e., $\uptau_{r}:= \uptau(\sB_{r})$, and ${\Breve\uptau}_{r}:= \uptau(\sB^{c}_{r})$. \item By $\trace S$ we denote the trace of a square matrix $S$. \item For any domain $\cD\subset\mathds{R}^{d}$, the space $\cC^{k}(\cD)$ ($\cC^{\infty}(\cD)$), $k\ge 0$, denotes the class of all real-valued functions on $\cD$ whose partial derivatives up to and including order $k$ (of any order) exist and are continuous. \item $\cC_{\mathrm{c}}^k(\cD)$ denotes the subset of $\cC^{k}(\cD)$, $0\le k\le \infty$, consisting of functions that have compact support. This denotes the space of test functions. \item $\cC_{b}({\mathds{R}^{d}})$ denotes the class of bounded continuous functions on ${\mathds{R}^{d}}$\,. \item $\cC^{k}_{0}(\cD)$ denotes the subspace of $\cC^{k}(\cD)$, $0\le k < \infty$, consisting of functions that vanish in $\cD^c$. \item $\cC^{k,r}(\cD)$ denotes the class of functions whose partial derivatives up to order $k$ are H\"older continuous of order $r$. \item $\Lp^{p}(\cD)$, $p\in[1,\infty)$, denotes the Banach space of (equivalence classes of) measurable functions $f$ satisfying $\int_{\cD} \abs{f(x)}^{p}\,\mathrm{d}{x}<\infty$. \item $\Sob^{k,p}(\cD)$, $k\ge0$, $p\ge1$ denotes the standard Sobolev space of functions on $\cD$ whose weak derivatives up to order $k$ are in $\Lp^{p}(\cD)$, equipped with its natural norm (see \cite{Adams})\,.
\item If $\mathcal{X}(Q)$ is a space of real-valued functions on $Q$, $\mathcal{X}_{\mathrm{loc}}(Q)$ consists of all functions $f$ such that $f\varphi\in\mathcal{X}(Q)$ for every $\varphi\in\cC_{\mathrm{c}}^{\infty}(Q)$. In a similar fashion, we define $\Sobl^{k, p}(\cD)$. \item For $\mu > 0$, let $e_{\mu}(x) = e^{-\mu\sqrt{1+\abs{x}^2}}$\,, $x\in{\mathds{R}^{d}}$\,. Then $f\in \Lp^{p,\mu}((0, T)\times {\mathds{R}^{d}})$ if $fe_{\mu} \in \Lp^{p}((0, T)\times {\mathds{R}^{d}})$\,. Similarly, $\Sob^{1,2,p,\mu}((0, T)\times {\mathds{R}^{d}}) = \{f\in \Lp^{p,\mu}((0, T)\times {\mathds{R}^{d}}) \mid f, \frac{\partial f}{\partial t}, \frac{\partial f}{\partial x_i}, \frac{\partial^2 f}{\partial x_i \partial x_j}\in \Lp^{p,\mu}((0, T)\times {\mathds{R}^{d}}) \}$ with natural norm (see \cite{BL84-book}) \begin{align*} \norm{f}_{\Sob^{1,2,p,\mu}} =& \norm{\frac{\partial f}{\partial t}}_{\Lp^{p,\mu}((0, T)\times {\mathds{R}^{d}})} + \norm{f}_{\Lp^{p,\mu}((0, T)\times {\mathds{R}^{d}})} \\ & + \sum_{i}\norm{\frac{\partial f}{\partial x_i}}_{\Lp^{p,\mu}((0, T)\times {\mathds{R}^{d}})} + \sum_{i,j}\norm{\frac{\partial^2 f}{\partial x_i \partial x_j}}_{\Lp^{p,\mu}((0, T)\times {\mathds{R}^{d}})}\,. \end{align*} Also, we use the following convention $\norm{f}_{\Sob^{1,2,p,\mu}} = \norm{f}_{1,2,p,\mu}$\,. \end{itemize} \section{The Borkar Topology on Control Policies, Cost Criteria, and the Problem Statement}\label{PD} Let $\mathbb{U}$ be a compact metric space and $\pV=\mathscr{P}(\mathbb{U})$ be the space of probability measures on $\mathbb{U}$ with topology of weak convergence. Let $$b : {\mathds{R}^{d}} \times \mathbb{U} \to {\mathds{R}^{d}}, $$ $$ \sigma : {\mathds{R}^{d}} \to \mathds{R}^{d \times d},\, \sigma = [\sigma_{ij}(\cdot)]_{1\leq i,j\leq d},$$ be given functions. 
We consider a stochastic optimal control problem whose state evolves according to a controlled diffusion process given by the solution of the following stochastic differential equation (SDE) \begin{equation}\label{E1.1} \mathrm{d} X_t \,=\, b(X_t,U_t) \mathrm{d} t + \upsigma(X_t) \mathrm{d} W_t\,, \quad X_0=x\in{\mathds{R}^{d}}, \end{equation} where \begin{itemize} \item $W$ is a $d$-dimensional standard Wiener process, defined on a complete probability space $(\Omega, \sF, \mathbb{P})$. \item We extend the drift term $b : {\mathds{R}^{d}} \times \pV \to {\mathds{R}^{d}}$ as follows: \begin{equation*} b (x,\mathrm{v}) = \int_{\mathbb{U}} b(x,\zeta)\mathrm{v}(\mathrm{d} \zeta), \end{equation*} for $\mathrm{v}\in\pV$. \item $U$ is a $\pV$-valued adapted process satisfying the following non-anticipativity condition: for $s<t\,,$ $W_t - W_s$ is independent of $$\sF_s := \,\,\mbox{the completion of}\,\,\, \sigma(X_0, U_r, W_r : r\leq s)\,\,\,\mbox{relative to} \,\, (\sF, \mathbb{P})\,.$$ \end{itemize} The process $U$ is called an \emph{admissible} control, and the set of all admissible controls is denoted by $\mathfrak U$ (see \cite{BG90}). By a Markov control we mean an admissible control of the form $U_t = v(t,X_t)$ for some Borel measurable function $v:\mathds{R}_+\times{\mathds{R}^{d}}\to\pV$. The space of all Markov controls is denoted by $\mathfrak U_{\mathsf{m}}$\,. If the function $v$ is independent of $t$, i.e., $U_t = v(X_t)$, then $U$, or by an abuse of notation $v$ itself, is called a stationary Markov control. The set of all stationary Markov controls is denoted by $\mathfrak U_{\mathsf{sm}}$. To ensure the existence and uniqueness of strong solutions of \cref{E1.1}, we impose the following assumptions on the drift $b$ and the diffusion matrix $\upsigma$\,.
\begin{itemize} \item[\hypertarget{A1}{{(A1)}}] \emph{Local Lipschitz continuity:\/} The functions $\upsigma\,=\,\bigl[\upsigma^{ij}\bigr]\colon\mathds{R}^{d}\to\mathds{R}^{d\times d}$, $b\colon{\mathds{R}^{d}}\times\mathbb{U}\to{\mathds{R}^{d}}$ are locally Lipschitz continuous in $x$ (uniformly with respect to the other variables for $b$). In other words, for some constant $C_{R}>0$ depending on $R>0$, we have \begin{equation*} \abs{b(x,\zeta) - b(y, \zeta)}^2 + \norm{\upsigma(x) - \upsigma(y)}^2 \,\le\, C_{R}\,\abs{x-y}^2 \end{equation*} for all $x,y\in \sB_R$ and $\zeta\in\mathbb{U}$, where $\norm{\upsigma}:=\sqrt{\trace(\upsigma\upsigma^{\mathsf{T}})}$\,. We also assume that $b$ is jointly continuous in $(x,\zeta)$. \medskip \item[\hypertarget{A2}{{(A2)}}] \emph{Affine growth condition:\/} $b$ and $\upsigma$ satisfy a global growth condition of the form \begin{equation*} \sup_{\zeta\in\mathbb{U}}\, \langle b(x, \zeta),x\rangle^{+} + \norm{\upsigma(x)}^{2} \,\le\,C_0 \bigl(1 + \abs{x}^{2}\bigr) \qquad \forall\, x\in\mathds{R}^{d}, \end{equation*} for some constant $C_0>0$. \medskip \item[\hypertarget{A3}{{(A3)}}] \emph{Nondegeneracy:\/} For each $R>0$, it holds that \begin{equation*} \sum_{i,j=1}^{d} a^{ij}(x)z_{i}z_{j} \,\ge\,C^{-1}_{R} \abs{z}^{2} \qquad\forall\, x\in \sB_{R}\,, \end{equation*} and for all $z=(z_{1},\dotsc,z_{d})^{\mathsf{T}}\in\mathds{R}^{d}$, where $a:= \frac{1}{2}\upsigma \upsigma^{\mathsf{T}}$. \end{itemize} \subsection{The Borkar Topology on Control Policies}\label{B-topo} We now introduce the Borkar topology \cite{Bor89} on stationary and Markov controls. \begin{itemize} \item[•]\emph{Topology of Stationary Policies:} From \cite[Section~2.4]{ABG-book}, we have that the set $\mathfrak U_{\mathsf{sm}}$ is metrizable with a compact metric.
\begin{definition}[Borkar topology of stationary Markov policies]\label{DefBorkarTopology1A} A sequence $v_n\to v$ in $\mathfrak U_{\mathsf{sm}}$ if and only if \begin{equation}\label{BorkarTopology} \lim_{n\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \end{equation} for all $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in \cC_b({\mathds{R}^{d}}\times \mathbb{U})$ (for more details, see \cite[Lemma~2.4.1]{ABG-book}, \cite{Bor89})\,. \end{definition} \item[•]\emph{Topology of Markov Policies:} Replacing $A_n$ by $\hat{A}_n = A_n\times [0,n]$ in the proof of \cite[Theorem~3.1, Lemma~3.1]{Bor89} and following the same arguments, we obtain the following topology on the space of Markov policies $\mathfrak U_{\mathsf{m}}$\,. \begin{definition}[Borkar topology of Markov policies]\label{BKTP1} A sequence $v_n\to v$ in $\mathfrak U_{\mathsf{m}}$ if and only if \begin{equation}\label{BorkarTopologyM} \lim_{n\to\infty}\int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(t,x)\int_{\mathbb{U}}g(x,t,\zeta)v_{n}(t,x)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t = \int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(t,x)\int_{\mathbb{U}}g(x,t,\zeta)v(t,x)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t \end{equation} for all $f\in L^1({\mathds{R}^{d}}\times [0, \infty))\cap L^2({\mathds{R}^{d}}\times [0, \infty))$ and $g\in \cC_b({\mathds{R}^{d}}\times [0, \infty)\times \mathbb{U})$\,. \end{definition} \end{itemize} It is well known that under the hypotheses \hyperlink{A1}{{(A1)}}--\hyperlink{A3}{{(A3)}}, for any admissible control \cref{E1.1} has a unique weak solution \cite[Theorem~2.2.11]{ABG-book}, and under any stationary Markov strategy \cref{E1.1} has a unique strong solution which is a strong Feller (therefore strong Markov) process \cite[Theorem~2.2.12]{ABG-book}.
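As a concrete (illustrative, not from the paper) instance of Definition~\ref{DefBorkarTopology1A} in one dimension with $\mathbb{U}=\{-1,1\}$: the deterministic policies $v_n(x)=\delta_{\mathrm{sign}(\sin(nx))}$ do not converge pointwise, yet they converge in the Borkar topology to the randomized policy $v(x)=\tfrac12\delta_{-1}+\tfrac12\delta_{1}$. A numerical check of the defining integrals for sample $f$ and $g$, with the integral over ${\mathds{R}^{d}}$ truncated to $[0,2\pi]$:

```python
import math

def policy_integral(n, f, g, a=0.0, b=2.0 * math.pi, m=50000):
    """Midpoint-rule value of int f(x) g(x, u_n(x)) dx for the
    deterministic policy u_n(x) = sign(sin(n x))."""
    h = (b - a) / m
    total = 0.0
    for i in range(m):
        x = a + (i + 0.5) * h
        u = 1.0 if math.sin(n * x) >= 0.0 else -1.0
        total += f(x) * g(x, u) * h
    return total

def limit_integral(f, g, a=0.0, b=2.0 * math.pi, m=50000):
    """Same integral against the limiting randomized policy (1/2, 1/2)."""
    h = (b - a) / m
    total = 0.0
    for i in range(m):
        x = a + (i + 0.5) * h
        total += f(x) * 0.5 * (g(x, 1.0) + g(x, -1.0)) * h
    return total
```

With $f(x)=e^{-x}$ and $g(x,\zeta)=\zeta\cos x$, the limiting value is $0$, and the oscillating integrals decay like $O(1/n)$: convergence holds against test pairs $(f,g)$ even though the policies themselves oscillate ever faster.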
\subsection{Cost Criteria} Let $c\colon{\mathds{R}^{d}}\times\mathbb{U} \to \mathds{R}_+$ be the \emph{running cost} function. We assume that $c$ is bounded, jointly continuous in $(x, \zeta)$, and locally Lipschitz continuous in its first argument uniformly with respect to $\zeta\in\mathbb{U}$. We extend $c\colon{\mathds{R}^{d}}\times\pV \to\mathds{R}_+$ as follows: for $\pv \in \pV$, \begin{equation*} c(x,\pv) := \int_{\mathbb{U}}c(x,\zeta)\pv(\mathrm{d}\zeta)\,. \end{equation*} In this article, we consider the problems of minimizing the finite horizon cost, the $\alpha$-discounted cost, the ergodic cost, and the cost up to an exit time, defined as follows. \subsubsection{Finite Horizon Cost} For $U\in \mathfrak U$, the associated \emph{finite horizon cost} is given by \begin{equation}\label{FiniteCost1} \cJ_{T}(x, U) = \Exp_x^{U}\left[\int_0^{T} c(X_s, U_s) \mathrm{d}{s} + H(X_T)\right]\,, \end{equation} and the optimal value is defined as \begin{equation}\label{FiniteCost1Opt} \cJ_{T}^*(x) \,:=\, \inf_{U\in \mathfrak U}\cJ_{T}(x, U)\,. \end{equation} Then a policy $U^*\in \mathfrak U$ is said to be optimal if we have \begin{equation}\label{FiniteCost1Opt1} \cJ_{T}(x, U^*) = \cJ_{T}^*(x)\,. \end{equation} \subsubsection{Discounted Cost Criterion} For $U \in\mathfrak U$, the associated \emph{$\alpha$-discounted cost} is given by \begin{equation}\label{EDiscost} \cJ_{\alpha}^{U}(x, c) \,:=\, \Exp_x^{U} \left[\int_0^{\infty} e^{-\alpha s} c(X_s, U_s) \mathrm{d} s\right],\quad x\in{\mathds{R}^{d}}\,, \end{equation} where $\alpha > 0$ is the discount factor, $X(\cdot)$ is the solution of \cref{E1.1} corresponding to $U\in\mathfrak U$, and $\Exp_x^{U}$ is the expectation with respect to the law of the process $X(\cdot)$ with initial condition $x$. The controller tries to minimize \cref{EDiscost} over the admissible policies $\mathfrak U$\,.
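To make the discounted criterion \cref{EDiscost} concrete, the following is a Monte Carlo sketch under a stationary Markov policy, with Euler--Maruyama time stepping and the integral truncated at a horizon $T$; this is an illustrative one-dimensional stand-in, not a scheme analyzed in the paper:

```python
import math, random

def discounted_cost_mc(b, sigma, c, v, x0, alpha, T, dt, n_paths, seed=0):
    """Monte Carlo estimate of E int_0^T e^{-alpha s} c(X_s, v(X_s)) ds,
    where X follows dX = b(X, v(X)) dt + sigma(X) dW (Euler-Maruyama).
    The truncated tail contributes at most e^{-alpha T} * sup c / alpha."""
    rng = random.Random(seed)
    steps = int(round(T / dt))
    total = 0.0
    for _ in range(n_paths):
        x, acc = x0, 0.0
        for k in range(steps):
            acc += math.exp(-alpha * k * dt) * c(x, v(x)) * dt
            x += b(x, v(x)) * dt + sigma(x) * rng.gauss(0.0, math.sqrt(dt))
        total += acc
    return total / n_paths
```

For example, with the (hypothetical) coefficients $b(x,\zeta)=-x+\zeta$ and $\upsigma\equiv 1$, which satisfy \hyperlink{A1}{{(A1)}}--\hyperlink{A3}{{(A3)}}, the sanity check $c\equiv 1$ returns $\approx(1-e^{-\alpha T})/\alpha$ regardless of the dynamics.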
Thus, a policy $U^{*}\in \mathfrak U$ is said to be optimal if for all $x\in {\mathds{R}^{d}}$ \begin{equation}\label{OPDcost} \cJ_{\alpha}^{U^*}(x, c) = \inf_{U\in \mathfrak U}\cJ_{\alpha}^{U}(x, c) \,\,\, (\,=:\, \,\, V_{\alpha}(x))\,, \end{equation} where $V_{\alpha}(x)$ is called the optimal value. \subsubsection{Ergodic Cost Criterion} For $U\in \mathfrak U$, the associated \emph{ergodic cost} is given by \begin{equation}\label{ErgCost1} \sE_{x}(c, U) = \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{U}\left[\int_0^{T} c(X_s, U_s) \mathrm{d}{s}\right]\,, \end{equation} and the optimal value is defined as \begin{equation}\label{ErgCost1Opt} \sE^*(c) \,:=\, \inf_{x\in{\mathds{R}^{d}}}\inf_{U\in \mathfrak U}\sE_{x}(c, U)\,. \end{equation} Then a policy $U^*\in \mathfrak U$ is said to be optimal if we have \begin{equation}\label{ErgCost1Opt1} \sE_{x}(c, U^*) = \sE^*(c)\,. \end{equation} \subsubsection{Control up to an Exit Time}\label{exitTimeSection} For each $U\in\mathfrak U$ the associated cost is given as \begin{equation*} \hat{\cJ}_{e}^{U}(x) \,:= \, \Exp_x^{U} \left[\int_0^{\tau(O)} e^{-\int_{0}^{t}\delta(X_s, U_s) \mathrm{d} s} c(X_t, U_t) \mathrm{d} t + e^{-\int_{0}^{\tau(O)}\delta(X_s, U_s) \mathrm{d} s}h(X_{\tau(O)})\right],\quad x\in{\mathds{R}^{d}}\,, \end{equation*} where $O\subset {\mathds{R}^{d}}$ is a smooth bounded domain, $\tau(O) \,:=\, \inf\{t \geq 0: X_t\notin O\}$, $\delta(\cdot, \cdot): \bar{O}\times\mathbb{U}\to [0, \infty)$ is the discount function and $h:\bar{O}\to \mathds{R}_+$ is the terminal cost function. The optimal value is defined as \[\hat{\cJ}_{e}^{*}(x)=\inf_{U\in \mathfrak U}\hat{\cJ}_{e}^{U}(x)\,.\] We assume that $\delta\in \cC(\bar{O}\times \mathbb{U})$ and $h\in\Sob^{2,p}(O)$.
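For orientation, we note (as an illustrative special case only) that choosing a constant discount function $\delta\equiv\alpha>0$ and a vanishing terminal cost $h\equiv 0$ reduces the cost up to an exit time to a discounted cost accumulated until the exit from $O$: \begin{equation*} \hat{\cJ}_{e}^{U}(x) \,=\, \Exp_x^{U} \left[\int_0^{\tau(O)} e^{-\alpha t} c(X_t, U_t)\, \mathrm{d} t\right]\,. \end{equation*}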
\subsection{Problems Studied} The main purpose of this manuscript will be to address the following problems: \begin{itemize} \item[•]\textbf{Continuity of finite and infinite horizon costs.} Suppose $\{v_n\}_{n\in \mathds{N}}$ is a sequence of control policies which converges to another control policy $v$ in some sense (in particular, under the Borkar topology, see Subsection~\ref{B-topo}). Does this imply that \begin{itemize} \item[•]\emph{for finite horizon cost:} $\cJ_{T}(x, v_n)\to \cJ_{T}(x, v)$\,? \item[•]\emph{for discounted cost:} $\cJ_{\alpha}^{v_n}(x, c)\to \cJ_{\alpha}^{v}(x, c)$ \,? \item[•]\emph{for ergodic cost:} $\sE_{x}(c, v_n)\to \sE_{x}(c, v)$ \,? \item[•] \emph{for cost up to an exit time:} $\hat{\cJ}_{e}^{v_n}(x) \to \hat{\cJ}_{e}^{v}(x)$ \,? \end{itemize} \item[•]\textbf{Near optimality of quantized policies.} For any given $\epsilon > 0$, is it possible to construct a quantized (finite action/piecewise constant) policy $v_{\epsilon}$ such that \begin{itemize} \item[•]\emph{for finite horizon cost:} $\cJ_{T}(x, v_{\epsilon})\leq \cJ_{T}^*(x) + \epsilon$\,? \item[•]\emph{for discounted cost:} $\cJ_{\alpha}^{v_{\epsilon}}(x, c)\leq V_{\alpha}(x) + \epsilon$ \,? \item[•]\emph{for ergodic cost:} $\sE_{x}(c, v_{\epsilon})\leq \sE^*(c) + \epsilon$ \,? \item[•] \emph{for cost up to an exit time:} $\hat{\cJ}_{e}^{v_{\epsilon}}(x) \leq \hat{\cJ}_{e}^{*}(x) + \epsilon$ \,? \end{itemize} \end{itemize} In this manuscript, we show that under a mild set of assumptions the answers to the above-mentioned questions are affirmative. For the finite horizon case, we also study the time-discretization approximations as a further implication of our analysis. Let us introduce a parametric family of elliptic operators, which will be useful in our analysis\,.
With $\zeta\in \mathbb{U}$ treated as a parameter, we define a family of operators $\sL_{\zeta}$ mapping $\cC^2({\mathds{R}^{d}})$ to $\cC({\mathds{R}^{d}})$ by \begin{equation}\label{E-cI} \sL_{\zeta} f(x) \,:=\, \trace\bigl(a(x)\nabla^2 f(x)\bigr) + \,b(x,\zeta)\cdot \nabla f(x)\,, \end{equation} where $f\in \cC^2({\mathds{R}^{d}})\cap\cC_b({\mathds{R}^{d}})$\, and for $\pv \in\pV$ we extend $\sL_{\zeta}$ as follows: \begin{equation}\label{EExI} \sL_\pv f(x) \,:=\, \int_{\mathbb{U}} \sL_{\zeta} f(x)\pv(\mathrm{d} \zeta)\,. \end{equation} Also, for each $v \in\mathfrak U_{\mathsf{sm}}$, we define \begin{equation}\label{Efixstra} \sL_{v} f(x) \,:=\, \trace\bigl(a(x)\nabla^2 f(x)\bigr) + b(x,v(x))\cdot\nabla f(x)\,. \end{equation} \section{Continuity of Expected Cost under Various Criteria in Control Policies under the Borkar Topology} \subsection{Continuity for Discounted Cost/Cost up to an Exit Time}\label{CDiscCost} Since the proof techniques are similar, in this section we analyze the continuity of both the discounted cost and the cost up to an exit time with respect to the policies in the space of stationary policies under the Borkar topology (see Definition~\ref{DefBorkarTopology1A}), i.e., we show that the maps $v\mapsto \cJ_{\alpha}^{v}$ and $v\mapsto \hat{\cJ}_{e}^{v}$ are continuous on $\mathfrak U_{\mathsf{sm}}$\,. \subsubsection{Continuity of Discounted Cost} Now we prove the continuity of the discounted cost as a function of the control policies\,. \begin{theorem}\label{T1.1} Suppose Assumptions (A1)-(A3) hold. Then the map $v\mapsto \cJ_{\alpha}^{v}(x, c)$ from $\mathfrak U_{\mathsf{sm}}$ to $\mathds{R}$ is continuous. \end{theorem} \begin{proof} Let $\{v_n\}_n$ be a sequence in $\mathfrak U_{\mathsf{sm}}$ such that $v_n\to v$ in $\mathfrak U_{\mathsf{sm}}$\,.
It is known that $\cJ_{\alpha}^{v_n}(x, c)$ is a solution to the Poisson's equation (see \cite[Lemma~A.3.7]{ABG-book}) \begin{equation}\label{ET1.1A} \sL_{v_n}\cJ_{\alpha}^{v_n}(x, c) - \alpha \cJ_{\alpha}^{v_n}(x, c) = - c(x, v_n(x))\,. \end{equation} Now by standard elliptic p.d.e. estimates as in \cite[Theorem~9.11]{GilTru}, for any $p\geq d+1$ and $R >0$, we deduce that \begin{equation}\label{ET1.1B} \norm{\cJ_{\alpha}^{v_n}(x, c)}_{\Sob^{2,p}(\sB_R)} \,\le\, \kappa_1\bigl(\norm{\cJ_{\alpha}^{v_n}(x, c)}_{L^p(\sB_{2R})} + \norm{c(x, v_n(x))}_{L^p(\sB_{2R})}\bigr)\,, \end{equation} for some positive constant $\kappa_1$ which is independent of $n$\,. Since \begin{equation*} \norm{c}_{\infty} \,:=\, \sup_{(x,u)\in{\mathds{R}^{d}}\times\mathbb{U}} c(x,u) \leq M \, <\,\infty \,, \quad \text{and}\quad \cJ_{\alpha}^{v_n}(x, c) \leq \frac{\norm{c}_{\infty}}{\alpha}\,, \end{equation*} from \cref{ET1.1A} we obtain \begin{equation}\label{ETC1.3B} \norm{\cJ_{\alpha}^{v_n}(x, c)}_{\Sob^{2,p}(\sB_R)} \,\le\, \kappa_1 M\bigl(\frac{|\sB_{2R}|^{\frac{1}{p}}}{\alpha} + |\sB_{2R}|^{\frac{1}{p}}\bigr)\,. \end{equation} We know that for $1< p < \infty$, the space $\Sob^{2,p}(\sB_R)$ is reflexive and separable; hence, as a corollary of the Banach--Alaoglu theorem, every bounded sequence in $\Sob^{2,p}(\sB_R)$ has a weakly convergent subsequence (see \cite[Theorem~3.18]{HB-book}). Also, we know that for $p\geq d+1$ the space $\Sob^{2,p}(\sB_R)$ is compactly embedded in $\cC^{1, \beta}(\bar{\sB}_R)$\,, where $\beta < 1 - \frac{d}{p}$ (see \cite[Theorem~A.2.15 (2b)]{ABG-book}), which implies that every weakly convergent sequence in $\Sob^{2,p}(\sB_R)$ converges strongly in $\cC^{1, \beta}(\bar{\sB}_R)$\,.
Thus, in view of estimate \cref{ETC1.3B}, by a standard diagonalization argument and the Banach--Alaoglu theorem, we can extract a subsequence $\{\cJ_{\alpha}^{v_{n_k}}(\cdot, c)\}$ such that for some $V_{\alpha}^*\in \Sobl^{2,p}({\mathds{R}^{d}})$ \begin{equation}\label{ET1.1C} \begin{cases} \cJ_{\alpha}^{v_{n_k}}(x, c)\to & V_{\alpha}^*\quad \text{in}\quad \Sobl^{2,p}({\mathds{R}^{d}})\quad\text{(weakly)}\\ \cJ_{\alpha}^{v_{n_k}}(x, c)\to & V_{\alpha}^*\quad \text{in}\quad \cC^{1, \beta}_{loc}({\mathds{R}^{d}}) \quad\text{(strongly)}\,. \end{cases} \end{equation} In the following we will show that $V_{\alpha}^* = \cJ_{\alpha}^{v}(x, c)$. Note that \begin{align*} b(x,v_{n_k}(x))\cdot \nabla \cJ_{\alpha}^{v_{n_k}}(x, c) - b(x,v(x))\cdot \nabla V_{\alpha}^*(x) = & b(x,v_{n_k}(x))\cdot \nabla \left(\cJ_{\alpha}^{v_{n_k}}(x, c) - V_{\alpha}^*\right)(x) \\ & + \left(b(x,v_{n_k}(x)) - b(x,v(x))\right)\cdot \nabla V_{\alpha}^*(x)\,. \end{align*} Since $\cJ_{\alpha}^{v_{n_k}}(x, c)\to V_{\alpha}^*$ in $\cC^{1, \beta}_{loc}({\mathds{R}^{d}})$ and $b$ is locally bounded, on any compact set $b(x,v_{n_k}(x))\cdot \nabla \left(\cJ_{\alpha}^{v_{n_k}}(x, c) - V_{\alpha}^*\right)(x)\to 0$ strongly. Also, since $V_{\alpha}^*\in \cC^{1, \beta}_{loc}({\mathds{R}^{d}})$, in view of the topology of $\mathfrak U_{\mathsf{sm}}$, for any $\phi\in\cC_c^{\infty}({\mathds{R}^{d}})$ we have \begin{equation*} \lim_{k\to\infty}\int_{{\mathds{R}^{d}}}b(x,v_{n_k}(x))\cdot \nabla V_{\alpha}^*(x)\phi(x)\mathrm{d} x = \int_{{\mathds{R}^{d}}}b(x,v(x))\cdot \nabla V_{\alpha}^*(x)\phi(x)\mathrm{d} x\,. \end{equation*} Hence, as $k\to \infty$, we obtain \begin{equation}\label{ET1.1D} b(x,v_{n_k}(x))\cdot \nabla \cJ_{\alpha}^{v_{n_k}}(x, c) + c(x, v_{n_k}(x)) \to b(x,v(x))\cdot \nabla V_{\alpha}^*(x) + c(x, v(x))\quad\text{weakly}\,.
\end{equation} Now, multiplying by a test function $\phi\in \cC_{c}^{\infty}({\mathds{R}^{d}})$, from \cref{ET1.1A}, it follows that \begin{align*} \int_{{\mathds{R}^{d}}}\trace\bigl(a(x)\nabla^2 \cJ_{\alpha}^{v_{n_k}}(x, c)\bigr)\phi(x)\mathrm{d} x + \int_{{\mathds{R}^{d}}}\{b(x,v_{n_k}(x))\cdot \nabla \cJ_{\alpha}^{v_{n_k}}(x, c) + & c(x, v_{n_k}(x))\}\phi(x)\mathrm{d} x \\ &= \alpha\int_{{\mathds{R}^{d}}} \cJ_{\alpha}^{v_{n_k}}(x, c)\phi(x)\mathrm{d} x\,. \end{align*} Hence, using \cref{ET1.1C}, \cref{ET1.1D}, and letting $k\to\infty$ (in the sense of distributions), we obtain \begin{equation}\label{ET1.1E} \int_{{\mathds{R}^{d}}}\trace\bigl(a(x)\nabla^2 V_{\alpha}^*(x)\bigr)\phi(x)\mathrm{d} x + \int_{{\mathds{R}^{d}}} \{b(x,v(x))\cdot \nabla V_{\alpha}^*(x) + c(x, v(x))\}\phi(x)\mathrm{d} x = \alpha\int_{{\mathds{R}^{d}}} V_{\alpha}^*(x)\phi(x)\mathrm{d} x\,. \end{equation} Since $\phi\in \cC_{c}^{\infty}({\mathds{R}^{d}})$ is arbitrary and $V_{\alpha}^*\in \Sobl^{2,p}({\mathds{R}^{d}})$, from \cref{ET1.1E} we deduce that the function $V_{\alpha}^*\in \Sobl^{2,p}({\mathds{R}^{d}})\cap \cC_{b}({\mathds{R}^{d}})$ satisfies \begin{equation}\label{ET1.1F} \trace\bigl(a(x)\nabla^2 V_{\alpha}^{*}(x)\bigr) + b(x,v(x))\cdot \nabla V_{\alpha}^{*}(x) + c(x, v(x)) = \alpha V_{\alpha}^{*}(x)\,. \end{equation} Let $X$ be the solution of the SDE \cref{E1.1} corresponding to $v$. Now, applying the It$\hat{\rm o}$-Krylov formula, we obtain \begin{align*} & \Exp_x^{v}\left[ e^{-\alpha T}V_{\alpha}^{*}(X_{T})\right] - V_{\alpha}^{*}(x)\nonumber\\ & \,=\,\Exp_x^{v}\left[\int_0^{T} e^{-\alpha s}\{\trace\bigl(a(X_s)\nabla^2 V_{\alpha}^{*}(X_s)\bigr) + b(X_s, v(X_s))\cdot \nabla V_{\alpha}^{*}(X_s) - \alpha V_{\alpha}^{*}(X_s)\} \mathrm{d}{s}\right] \,.
\end{align*} Hence, by \cref{ET1.1F}, we get \begin{align}\label{ET1.1G} \Exp_x^{v}\left[ e^{-\alpha T} V_{\alpha}^{*}(X_{T})\right] - V_{\alpha}^{*}(x) \,=\,- \Exp_x^{v}\left[\int_0^{T} e^{-\alpha s}c(X_s, v(X_s))\mathrm{d}{s}\right] \,. \end{align} Since $V_{\alpha}^{*}$ is bounded and $$\Exp_x^{v}\left[ e^{-\alpha T} V_{\alpha}^{*}(X_{T})\right] = e^{-\alpha T}\Exp_x^{v}\left[ V_{\alpha}^{*}(X_{T})\right],$$ letting $T\to\infty$, it follows that \begin{equation*} \lim_{T\to\infty}\Exp_x^{v}\left[ e^{-\alpha T} V_{\alpha}^{*}(X_{T})\right] = 0\,. \end{equation*} Thus, letting $T \to \infty$ and using the monotone convergence theorem, from \cref{ET1.1G} we obtain \begin{align}\label{ET1.1H} V_{\alpha}^{*}(x) \,=\, \Exp_x^{v}\left[\int_0^{\infty} e^{-\alpha s}c(X_s, v(X_s)) \mathrm{d}{s}\right] = \cJ_{\alpha}^{v}(x, c)\,. \end{align} This completes the proof. \end{proof} \subsubsection{Continuity of Cost up to an Exit Time} Following the proof technique of Theorem~\ref{T1.1}, we now show that the cost up to an exit time (defined in Subsection~\ref{exitTimeSection}) is continuous as a function of the control policies\,. \begin{theorem}\label{T1.1Exit} Suppose Assumptions (A1)-(A3) hold. Then the map $v\mapsto \hat{\cJ}_{e}^{v}(x)$ from $\mathfrak U_{\mathsf{sm}}$ to $\mathds{R}$ is continuous\,. \end{theorem} \begin{proof} Let $\{v_n\}_n$ be a sequence in $\mathfrak U_{\mathsf{sm}}$ such that $v_n\to v$ in $\mathfrak U_{\mathsf{sm}}$\,. From \cite[Theorem~9.15]{GilTru}, it follows that there exists a unique function $\psi_n\in \Sob^{2,p}(O)$ satisfying the following Poisson's equation \begin{equation}\label{T1.1ExitA} \sL_{v_n}\psi_{n}(x) - \delta(x, v_n(x)) \psi_{n}(x) + c(x, v_n(x)) = 0\quad \text{with}\quad \psi_n = h\,\,\, \text{on}\,\,\partial{O}\,. \end{equation} Applying the It$\hat{\rm o}$-Krylov formula, one can show that $\psi_n(x) = \hat{\cJ}_{e}^{v_n}(x)$ (this stochastic representation also ensures the uniqueness of the solution of \cref{T1.1ExitA})\,.
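Spelling out this identity (take $U = v_n$ in the definition of $\hat{\cJ}_{e}^{U}$ from Subsection~\ref{exitTimeSection}), the It$\hat{\rm o}$-Krylov formula applied to $\psi_n$ up to the exit time $\tau(O)$ yields the representation \begin{equation*} \psi_n(x) \,=\, \Exp_x^{v_n} \left[\int_0^{\tau(O)} e^{-\int_{0}^{t}\delta(X_s, v_n(X_s)) \mathrm{d} s} c(X_t, v_n(X_t)) \mathrm{d} t + e^{-\int_{0}^{\tau(O)}\delta(X_s, v_n(X_s)) \mathrm{d} s}h(X_{\tau(O)})\right] \,=\, \hat{\cJ}_{e}^{v_n}(x)\,. \end{equation*}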
Now following the argument as in Theorem~\ref{T1.1}, by standard elliptic p.d.e. estimates \cite[Theorem~9.11]{GilTru}, we deduce that there exists $\psi\in \Sob^{2,p}(O)$ such that $\psi_n \to \psi$ weakly in $\Sob^{2,p}(O)$\,. Thus, closely following the proof of Theorem~\ref{T1.1}, letting $n\to \infty$\,, from \cref{T1.1ExitA} it follows that \begin{equation}\label{T1.1ExitB} \sL_{v}\psi(x) - \delta(x, v(x)) \psi(x) + c(x, v(x)) = 0\quad \text{with}\quad \psi = h\,\,\, \text{on}\,\,\partial{O}\,. \end{equation} Again, by the It$\hat{\rm o}$-Krylov formula, using \cref{T1.1ExitB} we deduce that $\psi(x) = \hat{\cJ}_{e}^{v}(x)$\,. This completes the proof of the theorem\,. \end{proof} \subsection{Continuity for Ergodic Cost}\label{CErgoCost} In this section we study the continuity of the ergodic cost with respect to policies under the Borkar topology in the space of stationary Markov policies. We will study this problem under two sets of assumptions: the first is the so-called near-monotonicity assumption on the running cost function, and the other is a Lyapunov stability assumption on the system. Our proof strategies will be slightly different under these two setups: in the former we will build on regularity properties of invariant probability measures; in the latter we will build more directly on regularity properties of solutions to HJB equations\,. \subsubsection{Under a near-monotonicity assumption}\label{NearMonotone} We assume that the running cost function $c$ is near-monotone with respect to $\sE^*(c)$, i.e., \begin{itemize} \item[\hypertarget{A4}{{(A4)}}] It holds that \begin{equation}\label{ENearmonot} \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta) > \sE^*(c)\,. \end{equation} \end{itemize} This condition penalizes the escape of probability mass to infinity. Since our running cost $c$ is bounded, it is easy to see that $\sE^*(c) \leq \norm{c}_{\infty}$\,.
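Indeed, since $0\le c\le \norm{c}_{\infty}$, for any $x\in{\mathds{R}^{d}}$ and $U\in\mathfrak U$ we have \begin{equation*} \sE_{x}(c, U) \,=\, \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{U}\left[\int_0^{T} c(X_s, U_s) \mathrm{d}{s}\right] \,\le\, \limsup_{T\to \infty}\frac{1}{T}\bigl(T \norm{c}_{\infty}\bigr) \,=\, \norm{c}_{\infty}\,, \end{equation*} and taking the infimum over $x\in{\mathds{R}^{d}}$ and $U\in\mathfrak U$ gives the bound $\sE^*(c) \leq \norm{c}_{\infty}$\,.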
It is known that under \cref{ENearmonot}, an optimal control exists in the space of stable stationary Markov controls (see \cite[Theorem~3.4.5]{ABG-book}). First, we prove that for each stable stationary Markov policy $v\in \mathfrak U_{\mathsf{sm}}$ the associated Poisson's equation admits a unique solution in a certain function space. This uniqueness result will be useful in establishing the continuity and near optimality of quantized policies. For the following supporting result, we closely follow \cite{ABG-book}\,. \begin{theorem}\label{NearmonotPoisso} Suppose that Assumptions (A1) - (A4) hold. Let $v\in\mathfrak U_{\mathsf{sm}}$ be a stable control with unique invariant measure $\eta_{v}$, such that \begin{equation}\label{ENearmonotPoisso1} \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta) > \inf_{x\in{\mathds{R}^{d}}}\sE_x(c, v)\,. \end{equation} Then, there exists a unique pair $(V^v, \rho_v)\in \Sobl^{2,p}({\mathds{R}^{d}})\times \mathds{R}$, \, $1< p < \infty$, with $V^v(0) = 0$, $\inf_{{\mathds{R}^{d}}} V^v > -\infty$ and $\rho_v = \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x})$, satisfying \begin{equation}\label{EErgonearPoisso1A} \rho_v = \sL_{v}V^v(x) + c(x, v(x))\,. \end{equation} Moreover, we have \begin{itemize} \item[(i)]$\rho_v = \inf_{{\mathds{R}^{d}}}\sE_x(c, v)$\,. \item[(ii)] for all $x\in {\mathds{R}^{d}}$ \begin{equation}\label{EErgonearPoisso1B} V^v(x) \,=\, \lim_{r\downarrow 0}\Exp_{x}^v\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v(X_t)) - \rho_v\right)\mathrm{d} t\right]\,. \end{equation} \end{itemize} \end{theorem} \begin{proof} Since $c$ is bounded, we have $\rho^{v} \,:=\, \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x}) \leq \norm{c}_{\infty}$\,.
In view of \cref{ENearmonotPoisso1}, by writing $\rho^v =\alpha \int_{{\mathds{R}^{d}}} \cJ_{\alpha}^{v}(x, c)\eta_{v}(\mathrm{d} x)$, from \cite[Lemma~3.6.1]{ABG-book} we have \begin{equation}\label{EErgonearPoisso1C} \inf_{\kappa(\rho^v)}\cJ_{\alpha}^{v}(x, c) = \inf_{{\mathds{R}^{d}}}\cJ_{\alpha}^{v}(x, c) \leq \frac{\rho^{v}}{\alpha}\,, \end{equation} where $\kappa(\rho^v) \,:=\, \{x\in {\mathds{R}^{d}}\mid \min_{\zeta\in\mathbb{U}}c(x,\zeta) \leq \rho^v\}$ and $\cJ_{\alpha}^{v}(x, c)$ is the $\alpha$-discounted cost defined as in \cref{EDiscost}. As earlier, we have that $\cJ_{\alpha}^{v}(x, c)$ is a solution to the Poisson's equation (see \cite[Lemma~A.3.7]{ABG-book}) \begin{equation}\label{EErgonearPoisso1D} \sL_{v}\cJ_{\alpha}^{v}(x, c) - \alpha \cJ_{\alpha}^{v}(x, c) = - c(x, v(x))\,. \end{equation} Since $x\mapsto\min_{\zeta\in\mathbb{U}}c(x,\zeta)$ is continuous, the set $\kappa(\rho^v)$ is closed, and \cref{ENearmonotPoisso1} implies that it is bounded. Therefore $\kappa(\rho^v)$ is compact, and hence for some $R_0>0$ we have $\kappa(\rho^v)\subset \sB_{R_{0}}$\,. This gives us $\inf_{\sB_{R_{0}}} \cJ_{\alpha}^{v}(x, c) = \inf_{{\mathds{R}^{d}}}\cJ_{\alpha}^{v}(x, c)$\,. Thus, following the arguments as in \cite[Lemma~3.6.3]{ABG-book}, we deduce that for each $R> R_0$ there exist constants $\Tilde{C}_{2}(R)$ and $\Tilde{C}_{2}(R, p)$, depending only on $d$ and $R_0$, such that \begin{equation}\label{EErgonearPoisso1E} \left(\osc_{\sB_{2R}} \cJ_{\alpha}^{v}(x, c) := \right)\sup_{\sB_{2R}} \cJ_{\alpha}^{v}(x, c) - \inf_{\sB_{2R}} \cJ_{\alpha}^{v}(x, c) \leq \Tilde{C}_{2}(R)\left(1 + \alpha\inf_{\sB_{R_0}}\cJ_{\alpha}^{v}(x, c) \right)\,, \end{equation} \begin{equation}\label{EErgonearPoisso1F} \norm{\cJ_{\alpha}^{v}(\cdot, c) - \cJ_{\alpha}^{v}(0, c)}_{\Sob^{2,p}(\sB_R)}\leq \Tilde{C}_{2}(R, p) \left(1 + \alpha\inf_{\sB_{R_0}}\cJ_{\alpha}^{v}(x, c) \right)\,.
\end{equation} Hence, by following the arguments as in \cite[Lemma~3.6.6]{ABG-book}, we conclude that there exists $(V^{v}, \Tilde{\rho}_v)\in \Sobl^{2,p}({\mathds{R}^{d}})\times \mathds{R}$ such that along a subsequence (as $\alpha\to 0$), $\cJ_{\alpha}^{v}(\cdot, c) - \cJ_{\alpha}^{v}(0, c) \to V^{v}(\cdot)$ and $\alpha\cJ_{\alpha}^{v}(0, c) \to \Tilde{\rho}_{v}$ and the pair $(V^{v}, \Tilde{\rho}_v)$ satisfies \begin{equation}\label{EErgonearPoisso1G} \sL_{v}V^{v}(x) + c(x, v(x)) = \Tilde{\rho}_{v}\,. \end{equation} We will show that the subsequential limits are unique\,. From \cref{EErgonearPoisso1C}, we get $\Tilde{\rho}_{v} \leq \rho^{v}$. Now, in view of estimates \cref{EErgonearPoisso1C} and \cref{EErgonearPoisso1F}, it is easy to see that \begin{equation}\label{EErgonearPoisso1H} \norm{V^{v}}_{\Sob^{2,p}(\sB_R)}\leq \Tilde{C}_{2}(R, p) \left(1 + M \right)\,. \end{equation} Also, for each $x\in {\mathds{R}^{d}}$, we have \begin{align}\label{EErgonearPoisso1HLower} V^{v}(x) &= \lim_{\alpha\to 0}\left(\cJ_{\alpha}^{v}(x, c) - \cJ_{\alpha}^{v}(0, c)\right) \geq \liminf_{\alpha\to 0} \left(\cJ_{\alpha}^{v}(x, c) - \inf_{{\mathds{R}^{d}}}\cJ_{\alpha}^{v}(x, c) + \inf_{{\mathds{R}^{d}}}\cJ_{\alpha}^{v}(x, c) - \cJ_{\alpha}^{v}(0, c)\right) \nonumber\\ &\geq - \limsup_{\alpha\to 0} \left(\cJ_{\alpha}^{v}(0, c) - \inf_{{\mathds{R}^{d}}}\cJ_{\alpha}^{v}(x, c)\right) + \liminf_{\alpha\to 0} \left(\cJ_{\alpha}^{v}(x, c) - \inf_{{\mathds{R}^{d}}}\cJ_{\alpha}^{v}(x, c)\right)\nonumber\\ &\geq - \limsup_{\alpha\to 0} \left(\cJ_{\alpha}^{v}(0, c) - \inf_{\sB_{R_0}}\cJ_{\alpha}^{v}(x, c)\right) + \liminf_{\alpha\to 0} \left(\cJ_{\alpha}^{v}(x, c) - \inf_{{\mathds{R}^{d}}}\cJ_{\alpha}^{v}(x, c)\right)\nonumber\\ &\geq - \limsup_{\alpha\to 0} \left(\osc_{\sB_{R_0}} \cJ_{\alpha}^{v}(x, c)\right);\quad \left(\text{since}\,\,\, \cJ_{\alpha}^{v}(x, c) - \inf_{{\mathds{R}^{d}}}\cJ_{\alpha}^{v}(x, c) \geq 0 \right)\,, \end{align} where in the third
inequality we have used the fact that $\inf_{\sB_{R_0}} \cJ_{\alpha}^{v}(x, c) = \inf_{{\mathds{R}^{d}}} \cJ_{\alpha}^{v}(x, c)$\,. Thus, from \cref{EErgonearPoisso1E}, we deduce that \begin{equation}\label{EErgonearPoisso1I} V^{v}\geq -\Tilde{C}_{2}(R_0) \left(1 + M \right)\,. \end{equation} This shows that $\inf_{{\mathds{R}^{d}}} V^{v} > -\infty$\,. Now, applying It$\hat{\rm o}$-Krylov formula and using \cref{EErgonearPoisso1G} we obtain \begin{align*} \Exp_x^{v}\left[V^{v}\left(X_{T\wedge \uptau_{R}}\right)\right] - V^v(x)\,=\, \Exp_x^{v}\left[\int_0^{T\wedge \uptau_{R}} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \mathrm{d}{t}\right]\,. \end{align*} This implies \begin{align*} \inf_{y\in{\mathds{R}^{d}}}V^{v}(y) - V^v(x)\,\leq\, \Exp_x^{v}\left[\int_0^{T\wedge \uptau_{R}} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \mathrm{d}{t}\right]\,. \end{align*}Since $v$ is stable, letting $R\to \infty$, we get \begin{align*} \inf_{y\in{\mathds{R}^{d}}}V^{v}(y) - V^v(x)\,\leq\, \Exp_x^{v}\left[\int_0^{T} \left(\Tilde{\rho}_{v} - c(X_t, v(X_t))\right) \mathrm{d}{t}\right]\,. \end{align*}Now dividing both sides of the above inequality by $T$ and letting $T\to \infty$, it follows that \begin{align*} \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{v}\left[\int_0^{T} c(X_t, v(X_t)) \mathrm{d}{t}\right] \,\leq\, \Tilde{\rho}_{v}\,. \end{align*} Thus, $\rho^v \leq \Tilde{\rho}_{v}$. This indeed implies that $\rho^v = \Tilde{\rho}_{v}$\,. The representation \cref{EErgonearPoisso1B} of $V^v$ follows by closely mimicking the argument of \cite[Lemma~3.6.9]{ABG-book}. Therefore, we have a solution pair $(V^v, \rho_v)$ to \cref{EErgonearPoisso1A} satisfying (i) and (ii). Next we want to prove that the solution pair is unique. 
To this end, let $(\hat{V}^v, \hat{\rho}_v)\in \Sobl^{2,p}({\mathds{R}^{d}})\times \mathds{R}$, \, $1< p < \infty$, with $\hat{V}^v(0) = 0$, $\inf_{{\mathds{R}^{d}}} \hat{V}^v > -\infty$ and $\hat{\rho}_v = \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x})$, satisfying \begin{equation}\label{EErgonearPoisso1J} \hat{\rho}_v = \sL_{v}\hat{V}^v(x) + c(x, v(x))\,. \end{equation} Since $\hat{V}^v$ is bounded from below, applying the It$\hat{\rm o}$-Krylov formula and using \cref{EErgonearPoisso1J} we get \begin{align}\label{EErgonearPoisso1L} \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{v}\left[\int_0^{T} c(X_t, v(X_t)) \mathrm{d}{t}\right] \,\leq\, \hat{\rho}_{v}\,. \end{align} Hence, from \cref{EErgonearPoisso1L}, it follows that \begin{align}\label{EErgonearPoisso1M} \hat{\rho}_{v} = \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x}) \leq \limsup_{T\to \infty}\frac{1}{T}\Exp_x^{v}\left[\int_0^{T} c(X_t, v(X_t)) \mathrm{d}{t}\right] \,\leq\, \hat{\rho}_{v}\,. \end{align} This implies that $\hat{\rho}_{v} = \rho_{v}$\,. Now, applying the It$\hat{\rm o}$-Krylov formula and using \cref{EErgonearPoisso1J}, we obtain \begin{align}\label{EErgonearPoisso1N} \hat{V}^v(x)\,=\, \Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}\wedge \uptau_{R}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \mathrm{d}{t} + \hat{V}^{v}\left(X_{{\Breve\uptau}_{r}\wedge \uptau_{R}}\right)\right]\,. \end{align} Since $v$ is stable and $\hat{V}^v$ is bounded from below, for all $x\in {\mathds{R}^{d}}$ we obtain \begin{equation*} \liminf_{R\to\infty}\Exp_x^{v}\left[\hat{V}^{v}\left(X_{\uptau_{R}}\right)\Ind_{\{{\Breve\uptau}_{r}\geq \uptau_{R}\}}\right]\geq \liminf_{R\to\infty}\left(\inf_{{\mathds{R}^{d}}}\hat{V}^{v}\right)\mathbb{P}_{x}\left({\Breve\uptau}_{r}\geq \uptau_{R}\right) = 0\,.
\end{equation*} In the above we have used the facts that $\uptau_{R}\to\infty$ as $R\to \infty$ and that, since $v$ is stable, $\Exp_x^{v}\left[{\Breve\uptau}_{r}\right] < \infty$ (see \cite[Theorem~2.6.10]{ABG-book})\,. Hence, letting $R\to\infty$, by Fatou's lemma it follows from \cref{EErgonearPoisso1N} that \begin{align*} \hat{V}^v(x)&\,\geq\, \Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \mathrm{d}{t} +\hat{V}^{v}\left(X_{{\Breve\uptau}_{r}}\right)\right]\nonumber\\ &\,\geq\, \Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \mathrm{d}{t}\right] +\inf_{\sB_r}\hat{V}^{v}\,. \end{align*} Since $\hat{V}^{v}(0) =0$, letting $r\to 0$, we deduce that \begin{align}\label{EErgonearPoisso1o} \hat{V}^v(x)\,\geq\, \limsup_{r\downarrow 0}\Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \hat{\rho}_{v}\right) \mathrm{d}{t} \right]\,. \end{align} From \cref{EErgonearPoisso1B} and \cref{EErgonearPoisso1o}, it is easy to see that $V^v - \hat{V}^v \leq 0$ in ${\mathds{R}^{d}}$. On the other hand, by \cref{EErgonearPoisso1A} and \cref{EErgonearPoisso1J} one has $\sL_{v}\left(V^v - \hat{V}^v\right)(x)\geq 0$ in ${\mathds{R}^{d}}$. Hence, applying the strong maximum principle \cite[Theorem~9.6]{GilTru}, one has $V^v = \hat{V}^v$. This proves uniqueness. \end{proof} Now we prove the continuity of the ergodic cost under the near-monotonicity assumption on the running cost function. \begin{theorem}\label{ergodicnearmono1} Suppose that Assumptions (A1)-(A4) hold. Let $\{v_n\}_n$ be a sequence of stable policies such that $v_n \to v$ in $\mathfrak U_{\mathsf{sm}}$\, and $\{\eta_{v_n}\}_n$ is tight.
If $$\sup_{n}\sE_x(c, v_n) < \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta),$$ then we have the following \begin{equation}\label{EErgonearOpt1A} \inf_{{\mathds{R}^{d}}}\sE_x(c, v_n) \to \inf_{{\mathds{R}^{d}}}\sE_x(c, v)\quad \text{as}\,\,\, n\to\infty\,. \end{equation} \end{theorem} \begin{proof} From Theorem~\ref{NearmonotPoisso}, we know that for each $n\in\mathds{N}$ there exists $(V^{v_n}, \rho_{v_n})\in \Sobl^{2,p}({\mathds{R}^{d}})\times \mathds{R}$, \, $1< p < \infty$, with $V^{v_n}(0) = 0$ and $\inf_{{\mathds{R}^{d}}} V^{v_n} > -\infty$, satisfying \begin{equation}\label{EErgoContnuity1A} \rho_{v_n} = \sL_{v_n}V^{v_n}(x) + c(x, v_n(x))\,, \end{equation} where $\rho_{v_n} = \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v_n(x)(\mathrm{d}{u})\eta_{v_n}(\mathrm{d}{x}) = \inf_{{\mathds{R}^{d}}}\sE_x(c, v_n)$\,. Now from \cite[Lemma~4.4]{AB-10Uni}, since we impose tightness a priori, we deduce that $\eta_{v_n} \to \eta_{v}$ in the total variation topology. Hence, the associated densities satisfy $\varphi_{v_n}\to \varphi$ in $\Lp^1({\mathds{R}^{d}})$ (see the proof of \cite[Lemma~3.2.5]{ABG-book}).
Note that \begin{align} &\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\eta_{v_{n}}(\mathrm{d} x)\, -\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta) v(x)(\mathrm{d} \zeta)\eta_{v}(\mathrm{d} x) \nonumber \\ & = \bigg(\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\varphi_{v_n}(x)\mathrm{d} x - \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\varphi(x)\mathrm{d} x \bigg) \nonumber \\ &\quad + \bigg(\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\varphi(x)\mathrm{d} x -\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x,\zeta)v(x)(\mathrm{d} \zeta)\varphi(x)\mathrm{d} x \bigg)\,. \end{align} Since $c$ is bounded, the first term on the right-hand side converges to zero because $\varphi_{v_n}\to \varphi$ in $\Lp^1({\mathds{R}^{d}})$, and the second term converges to zero by the convergence $v_n\to v$ (see Definition~\ref{DefBorkarTopology1A})\,. Hence, it follows that $\int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v_n(x)(\mathrm{d}{u})\eta_{v_n}(\mathrm{d}{x}) \to \int_{{\mathds{R}^{d}}}\int_{\mathbb{U}} c(x, u)v(x)(\mathrm{d}{u})\eta_{v}(\mathrm{d}{x})$\,. Thus, in view of Theorem~\ref{NearmonotPoisso}, we obtain $\inf_{{\mathds{R}^{d}}}\sE_x(c, v_n) \to \inf_{{\mathds{R}^{d}}}\sE_x(c, v)$ as $n\to \infty$\,. This completes the proof. \end{proof} \begin{remark} The tightness assumption is not superfluous. In view of \cite{AA13}, we know that the map $v\mapsto \inf_{{\mathds{R}^{d}}}\sE_x(c, v)$ in general may not be continuous on $\mathfrak U_{\mathsf{sm}}$ (in the sense of \cref{EErgonearOpt1A}) under the near-monotone cost criterion\,.
The reason is the following: for each $n\in\mathds{N}$ let $(V^{v_n}, \rho_{v_n})$ be the unique compatible solution pair (see \cite[Definition~1.1]{AA13}) of the equation \cref{EErgoContnuity1A}; if $(V^{v_n}, \rho_{v_n})$ converges to a solution pair $(\bar{V}, \bar{\rho})$ of the limiting equation of \cref{EErgoContnuity1A} as $n\to \infty$, the solution pair $(\bar{V}, \bar{\rho})$ may not necessarily be compatible (see \cite{AA13}). One sufficient condition which ensures this continuity is the tightness of the family of corresponding invariant measures $\{\eta_{v_n}: n\in\mathds{N}\}$\,. \end{remark} \subsubsection{Under Lyapunov stability}\label{Lyapunov stability} In this section we study the continuity of the ergodic cost criterion under a Lyapunov stability assumption. We assume the following Lyapunov stability condition on the dynamics. \begin{itemize} \item[\hypertarget{A5}{{(A5)}}] There exist a positive constant $\widehat{C}_0$ and a pair of inf-compact functions $(\Lyap, h)\in \cC^{2}({\mathds{R}^{d}})\times\cC({\mathds{R}^{d}}\times\mathbb{U})$ (i.e., the sub-level sets $\{\Lyap\leq k\} \,,\{h\leq k\}$ are compact or empty sets in ${\mathds{R}^{d}}$\,, ${\mathds{R}^{d}}\times\mathbb{U}$ respectively for each $k\in\mathds{R}$) such that \begin{equation}\label{Lyap1} \sL_{\zeta}\Lyap(x) \leq \widehat{C}_{0} - h(x,\zeta)\quad \text{for all}\,\,\, (x,\zeta)\in {\mathds{R}^{d}}\times \mathbb{U}\,, \end{equation} where $h$ ($>0$) is locally Lipschitz continuous in its first argument uniformly with respect to the second and $\Lyap > 1$. \end{itemize} A function $f\in\mathcal{O}(\Lyap)$ if $|f|\leq \widehat{C}_1 \Lyap$ for some positive constant $\widehat{C}_1$\, and $f \in {\mathfrak{o}}(\Lyap)$ if $\displaystyle{\limsup_{|x|\to \infty} \frac{|f|}{\Lyap} = 0}$\,. Now following \cite[Lemma~3.7.8]{ABG-book}, we want to prove that a certain equation admits a unique solution in a suitable function space.
This uniqueness result is crucial to obtain continuity of the map $v\mapsto \sE_x(c, v)$ on $\mathfrak U_{\mathsf{sm}}$\,. \begin{theorem}\label{TErgoExis1} Suppose that Assumptions (A1)-(A3) and (A5) hold. Then for each $v\in \mathfrak U_{\mathsf{sm}}$ there exists a unique solution pair $(\widehat{V}^v, \widehat{\rho}^{v})\in \bigl(\Sobl^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}(\Lyap)\bigr)\times \mathds{R}$ for any $p >1$ satisfying \begin{equation}\label{EErgoOpt1A} \widehat{\rho}^{v} = \sL_{v}\widehat{V}^v(x) + c(x, v(x))\quad\text{with}\quad \widehat{V}^v(0) = 0\,. \end{equation} Furthermore, we have \begin{itemize} \item[(i)]$\widehat{\rho}^{v} = \sE_x(c, v)$\,. \item[(ii)] for all $x\in{\mathds{R}^{d}}$, we have \begin{equation}\label{EErgoOpt1C} \widehat{V}^v(x) \,=\, \lim_{r\downarrow 0}\Exp_{x}^{v}\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v(X_t)) - \sE_x(c, v)\right)\mathrm{d} t\right]\,. \end{equation} \end{itemize} \end{theorem} \begin{proof} Existence of a solution pair $(\widehat{V}^v, \widehat{\rho}^{v})\in \bigl(\Sobl^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}(\Lyap)\bigr)\times \mathds{R}$ for any $p >1$ satisfying (i) and (ii) follows from \cite[Lemma~3.7.8]{ABG-book}\,. Also, it is known that along a subsequence $\alpha\cJ_{\alpha}^{v}(0, c)\to \widehat{\rho}^{v}$ and $\cJ_{\alpha}^{v}(x, c) - \cJ_{\alpha}^{v}(0, c)\to \widehat{V}^v$ uniformly over compact subsets of ${\mathds{R}^{d}}$ (see \cite[Lemma~3.7.8 (i)]{ABG-book})\,. Next we show that the sub-sequential limits are unique\,. This indeed implies the uniqueness of the solution. Let $(\bar{V}^v, \bar{\rho}^{v})\in \bigl(\Sobl^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}(\Lyap)\bigr)\times \mathds{R}$ for any $p >1$ be any other solution pair of \cref{EErgoOpt1A} with $\bar{V}^v(0) = 0$.
Thus, by the It\^o--Krylov formula, for $R>0$ we obtain \begin{align}\label{ETErgoExis1A} \Exp_{x}^{v}\left[\bar{V}^v(X_{T\wedge\uptau_{R}})\right] - \bar{V}^v(x) &= \Exp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} \sL_{v} \bar{V}^v(X_s) \mathrm{d} s\right]\nonumber\\ & = \Exp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s \right]\,. \end{align} Note that \begin{equation*} \int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s = \int_{0}^{T\wedge\uptau_{R}} \bar{\rho}^{v}\, \mathrm{d} s - \int_{0}^{T\wedge\uptau_{R}}c(X_s, v(X_s))\mathrm{d} s\,. \end{equation*} Thus, letting $R\to \infty$, by the monotone convergence theorem we get \begin{equation*} \lim_{R\to\infty}\Exp_{x}^{v}\left[\int_{0}^{T\wedge\uptau_{R}} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s \right] = \Exp_{x}^{v}\left[\int_{0}^{T} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s \right]\,. \end{equation*} Since $\bar{V}^v \in {\mathfrak{o}}{(\Lyap)}$, in view of \cite[Lemma~3.7.2 (ii)]{ABG-book}, letting $R\to\infty$ in \cref{ETErgoExis1A} we deduce that \begin{align}\label{ETErgoExis1BA} \Exp_{x}^{v}\left[\bar{V}^v(X_{T})\right] - \bar{V}^v(x) = \Exp_{x}^{v}\left[\int_{0}^{T} \left(\bar{\rho}^{v} - c(X_s, v(X_s))\right)\mathrm{d} s \right]\,. \end{align} Also, from \cite[Lemma~3.7.2 (ii)]{ABG-book}, we have \begin{equation*} \lim_{T\to\infty}\frac{\Exp_{x}^{v}\left[\bar{V}^v(X_{T})\right]}{T} = 0\,. \end{equation*} Hence, dividing both sides of \cref{ETErgoExis1BA} by $T$ and letting $T\to\infty$, we obtain \begin{align*} \bar{\rho}^{v} = \limsup_{T\to \infty}\frac{1}{T}\Exp_{x}^{v}\left[\int_{0}^{T} \left(c(X_s, v(X_s))\right)\mathrm{d} s \right]\,. \end{align*} This implies that $\bar{\rho}^{v} = \widehat{\rho}^{v}$\,.
Again, applying the It\^o--Krylov formula and using \cref{EErgoOpt1A}, we have \begin{align}\label{ETErgoExis1B} \bar{V}^v(x)\,=\, \Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}\wedge \uptau_{R}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \mathrm{d}{t} + \bar{V}^{v}\left(X_{{\Breve\uptau}_{r}\wedge \uptau_{R}}\right)\right]\,. \end{align} Also, from \cref{Lyap1}, by the It\^o--Krylov formula it follows that \begin{align*} \Exp_x^{v}\left[\Lyap\left(X_{{\Breve\uptau}_{r}\wedge \uptau_{R}}\right)\right] - \Lyap(x)\,=\, \Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}\wedge \uptau_{R}} \sL_{v}\Lyap(X_t) \mathrm{d}{t}\right] \leq \Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}\wedge \uptau_{R}} \left(\widehat{C}_0 - h(X_t, v(X_t))\right) \mathrm{d}{t}\right]\,. \end{align*} Since $h(x,\zeta) > 0$, this gives us \begin{equation*} \Exp_x^{v}\left[\Lyap\left(X_{\uptau_{R}}\right)\Ind_{\{{\Breve\uptau}_{r}\geq \uptau_{R}\}}\right]\leq \widehat{C}_0 \Exp_x^{v}\left[{\Breve\uptau}_{r}\right] + \Lyap(x)\quad \text{for all} \,\,\, r <|x|<R\,. \end{equation*} Now, it is easy to see that \begin{equation*} -\sup_{\partial{\sB_R}}\frac{|\bar{V}^{v}|}{\Lyap} \left(\widehat{C}_0 \Exp_x^{v}\left[{\Breve\uptau}_{r}\right] + \Lyap(x)\right)\leq \Exp_x^{v}\left[\bar{V}^{v}\left(X_{\uptau_{R}}\right)\Ind_{\{{\Breve\uptau}_{r}\geq \uptau_{R}\}}\right] \leq \sup_{\partial{\sB_R}}\frac{|\bar{V}^{v}|}{\Lyap} \left(\widehat{C}_0 \Exp_x^{v}\left[{\Breve\uptau}_{r}\right] + \Lyap(x)\right)\,. \end{equation*} Since $\bar{V}^v \in {\mathfrak{o}}(\Lyap)$, from the above estimate, we get \begin{equation*} \liminf_{R\to\infty}\Exp_x^{v}\left[\bar{V}^{v}\left(X_{\uptau_{R}}\right)\Ind_{\{{\Breve\uptau}_{r}\geq \uptau_{R}\}}\right] = 0\,.
\end{equation*} Thus, letting $R\to\infty$ and using Fatou's lemma in \cref{ETErgoExis1B}, it follows that \begin{align*} \bar{V}^v(x)&\,\geq\, \Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \mathrm{d}{t} +\bar{V}^{v}\left(X_{{\Breve\uptau}_{r}}\right)\right]\nonumber\\ &\,\geq\, \Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \mathrm{d}{t}\right] +\inf_{\sB_r}\bar{V}^{v}\,. \end{align*} Since $\bar{V}^{v}(0) =0$, letting $r\to 0$, we deduce that \begin{align}\label{ETErgoExis1C} \bar{V}^v(x)\,\geq\, \limsup_{r\downarrow 0}\Exp_x^{v}\left[\int_0^{{\Breve\uptau}_{r}} \left(c(X_t, v(X_t)) - \bar{\rho}^{v}\right) \mathrm{d}{t} \right]\,. \end{align} Since $\widehat{\rho}^v = \bar{\rho}^v$, from \cref{EErgoOpt1C} and \cref{ETErgoExis1C}, it follows that $\widehat{V}^v - \bar{V}^v \leq 0$ in ${\mathds{R}^{d}}$. Also, since $(\widehat{V}^v, \widehat{\rho}^v)$ and $(\bar{V}^v, \bar{\rho}^v)$ are two solution pairs of \cref{EErgoOpt1A}, we have $\sL_{v}\left(\widehat{V}^v - \bar{V}^v\right)(x) = 0$ in ${\mathds{R}^{d}}$. Hence, by the strong maximum principle \cite[Theorem~9.6]{GilTru}, one has $\widehat{V}^v = \bar{V}^v$. This proves uniqueness. \end{proof} Next we prove that the map $v\to \inf_{{\mathds{R}^{d}}}\sE_x(c, v)$ is continuous on $\mathfrak U_{\mathsf{sm}}$ under the Borkar topology\,. \begin{theorem}\label{ergodicLyap1} Suppose that Assumptions (A1)-(A3) and (A5) hold. Let $\{v_n\}_n$ be a sequence of policies in $\mathfrak U_{\mathsf{sm}}$ such that $v_n \to v$ in $\mathfrak U_{\mathsf{sm}}$\,. Then we have \begin{equation}\label{EErgoLyap1A} \inf_{{\mathds{R}^{d}}}\sE_x(c, v_n) \to \inf_{{\mathds{R}^{d}}}\sE_x(c, v)\quad \text{as}\,\,\, n\to\infty\,.
\end{equation} \end{theorem} \begin{proof} From Theorem~\ref{TErgoExis1}, we know that for each $n\in \mathds{N}$ there exists a unique solution pair $(\widehat{V}^{v_n}, \widehat{\rho}^{v_n})\in \left(\Sobl^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}(\Lyap)\right)\times \mathds{R}$ for any $p >1$ satisfying \begin{equation}\label{EErgoLyap1B} \widehat{\rho}^{v_n} = \sL_{v_n}\widehat{V}^{v_n}(x) + c(x, v_n(x))\quad\text{with}\quad \widehat{V}^{v_n}(0) = 0\,, \end{equation} where \begin{itemize} \item[(i)] $\widehat{\rho}^{v_n} = \sE_x(c, v_n)$; \item[(ii)] for all $x\in{\mathds{R}^{d}}$, we have \begin{equation*} \widehat{V}^{v_n}(x) \,=\, \lim_{r\downarrow 0}\Exp_{x}^{v_n}\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v_n(X_t)) - \sE_x(c, v_n)\right)\mathrm{d} t\right]\,. \end{equation*} \end{itemize} In view of \cref{Lyap1}, it is easy to see that each $v\in\mathfrak U_{\mathsf{sm}}$ is stable and $\inf_{v\in\mathfrak U_{\mathsf{sm}}}\eta_v(\sB_R) > 0$ for any $R>0$ (see, \cite[Lemma~3.3.4]{ABG-book} and \cite[Lemma~3.2.4(b)]{ABG-book}). Thus, from \cite[Theorem~3.7.4]{ABG-book}, it follows that \begin{equation}\label{EErgoLyap1C} \norm{\cJ_{\alpha}^{v_n}(\cdot, c) - \cJ_{\alpha}^{v_n}(0, c)}_{\Sob^{2,p}(\sB_R)}\leq \frac{\widehat{C}_{2}(R, p)}{\eta_{v_n}(\sB_{2R})} \left(\frac{\widehat{\rho}^{v_n}}{\eta_{v_n}(\sB_{2R})} + \sup_{\sB_{4R}\times \mathbb{U}}c(x,\zeta) \right)\,, \end{equation} where the positive constant $\widehat{C}_{2}(R, p)$ depends only on $R$ and $p$\,. Since the running cost is bounded, we have $\|c\|_{\infty} \leq M$ for some positive constant $M$, and hence $\widehat{\rho}^{v_n} \leq M$. Thus, from \cref{EErgoLyap1C}, we deduce that \begin{equation*} \norm{\cJ_{\alpha}^{v_n}(\cdot, c) - \cJ_{\alpha}^{v_n}(0, c)}_{\Sob^{2,p}(\sB_R)}\leq \frac{M\widehat{C}_{2}(R, p)}{\inf_{n}\eta_{v_n}(\sB_{2R})} \left(\frac{1}{\inf_{n}\eta_{v_n}(\sB_{2R})} + 1 \right)\,.
\end{equation*} This implies that \begin{equation}\label{EErgoLyap1D} \norm{\widehat{V}^{v_n}}_{\Sob^{2,p}(\sB_R)} \leq \widehat{C}_{3}(R, p)\,, \end{equation}where $\widehat{C}_{3}(R, p)$ is a positive constant which depends only on $R$ and $p$\,. Hence, by a standard diagonalization argument and the Banach--Alaoglu theorem (see, \cref{ET1.1C}), one can extract a subsequence $\{\widehat{V}^{v_{n_k}}\}$ such that for some $\widehat{V}^*\in \Sobl^{2,p}({\mathds{R}^{d}})$ we have \begin{equation}\label{EErgoLyap1E} \begin{cases} \widehat{V}^{v_{n_k}}\to & \widehat{V}^*\quad \text{in}\quad \Sobl^{2,p}({\mathds{R}^{d}})\quad\text{(weakly)}\\ \widehat{V}^{v_{n_k}}\to & \widehat{V}^*\quad \text{in}\quad \cC^{1, \beta}_{loc}({\mathds{R}^{d}}) \quad\text{(strongly)}\,. \end{cases} \end{equation}Also, since $\widehat{\rho}^{v_n} \leq M$, along a further subsequence $\widehat{\rho}^{v_{n_k}}\to \widehat{\rho}^{*}$ (without loss of generality, we denote it by the same sequence). Now, by a similar argument as in Theorem~\ref{T1.1}, multiplying both sides of \cref{EErgoLyap1B} by a test function and letting $k\to\infty$, we deduce that $(\widehat{V}^*, \widehat{\rho}^{*})\in \Sobl^{2,p}({\mathds{R}^{d}})\times \mathds{R}$ satisfies \begin{equation}\label{EErgoLyap1F} \widehat{\rho}^{*} = \sL_{v}\widehat{V}^{*}(x) + c(x, v(x))\,. \end{equation}Since $\widehat{V}^{v_n}(0) = 0$ for each $n$, we get $\widehat{V}^*(0) = 0$\,. Next we want to show that $\widehat{V}^{*}\in {\mathfrak{o}}{(\Lyap)}$. Following the proof of \cite[Lemma~3.7.8]{ABG-book} (see, eq.(3.7.47) or eq.(3.7.50)), it is easy to see that \begin{equation*} \widehat{V}^{v_n}(x) \,\leq\, \Exp_{x}^{v_n}\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v_n(X_t)) - \sE_x(c, v_n)\right)\mathrm{d} t + \widehat{V}^{v_n}(X_{{\Breve\uptau}_{r}})\right]\,.
\end{equation*} This gives us the following estimate \begin{equation}\label{EErgoLyap1G} |\widehat{V}^{v_n}(x)| \,\leq\, M\sup_{n}\Exp_{x}^{v_n}\left[\int_{0}^{{\Breve\uptau}_{r}} \left( c(X_t, v_n(X_t)) + 1\right)\mathrm{d} t + \sup_{\sB_r}|\widehat{V}^{v_n}|\right]\,. \end{equation} We know that, for $d < p < \infty$, the space $\Sob^{2,p}(\sB_R)$ is compactly embedded in $\cC^{1, \beta}(\bar{\sB}_R)$\,, where $\beta < 1 - \frac{d}{p}$ (see \cite[Theorem~A.2.15 (2b)]{ABG-book}). Thus, from \cref{EErgoLyap1D}, we obtain $\displaystyle{\sup_{n}\sup_{\sB_r}|\widehat{V}^{v_n}| < \widehat{M}}$ for some positive constant $\widehat{M}$\,. Therefore, in view of \cite[Lemma~3.7.2 (i)]{ABG-book}, from \cref{EErgoLyap1G}, we deduce that $\widehat{V}^{*}\in {\mathfrak{o}}{(\Lyap)}$\,. Since the pair $(\widehat{V}^*, \widehat{\rho}^{*})\in \left(\Sobl^{2,p}({\mathds{R}^{d}})\cap {\mathfrak{o}}{(\Lyap)}\right) \times \mathds{R}$ satisfies \cref{EErgoLyap1F}, by uniqueness of the solution of \cref{EErgoLyap1F} (see, Theorem~\ref{TErgoExis1}) it follows that $(\widehat{V}^*, \widehat{\rho}^{*})\equiv (\widehat{V}^v, \widehat{\rho}^{v})$\,. This completes the proof of the theorem\,. \end{proof} \section{Denseness of Finite Action/Piecewise Constant Stationary Policies}\label{DensePol} \subsection{Denseness of Policies with Finite Actions} Let $d_{\mathbb{U}}$ be the metric on the action space $\mathbb{U}$\,. Since $\mathbb{U}$ is compact, it is totally bounded. Thus, one can find a sequence of finite grids $\{\{\zeta_{n,i}\}_{i=1}^{k_n}\}_{n\geq 1}$ such that $$\min_{i= 1,2,\dots , k_n}d_{\mathbb{U}}(\zeta, \zeta_{n,i}) < \frac{1}{n}\quad\text{for all}\,\,\, \zeta\in \mathbb{U}\,.$$ Let $\Lambda_{n} := \{\zeta_{n,1}, \zeta_{n,2}, \dots ,\zeta_{n,k_n}\}$ and define a function $Q_n: \mathbb{U}\to \Lambda_{n}$ by \begin{equation*} Q_n(\zeta) = \argmin_{\zeta_{n,i}\in \Lambda_n} d_{\mathbb{U}}(\zeta, \zeta_{n,i})\,, \end{equation*} where ties are broken so that $Q_n$ is measurable.
The function $Q_n$ is often known as the nearest neighbor quantizer (see, \cite{SYL17}). For each $n$ the function $Q_n$ induces a partition $\{\mathbb{U}_{n,i}\}_{i=1}^{k_n}$ of the action space $\mathbb{U}$ given by \begin{equation*} \mathbb{U}_{n,i} = \{\zeta\in \mathbb{U} : Q_n(\zeta) = \zeta_{n,i}\}\,. \end{equation*} By the triangle inequality, it follows that $\text{diam}(\mathbb{U}_{n,i}):= \sup_{\zeta_1, \zeta_2 \in \mathbb{U}_{n,i}} d_{\mathbb{U}}(\zeta_1, \zeta_2) < \frac{2}{n}$\,. Now, for each $v\in\mathfrak U_{\mathsf{sm}}$ define a sequence of policies with finite actions as follows: \begin{equation}\label{DenseStra1} v_{n}(\zeta_{n,i}|x) = Q_n v(\zeta_{n,i}|x) = v(\mathbb{U}_{n,i}|x)\,. \end{equation} In the next lemma we prove that the space of stationary policies with finite actions is dense in $\mathfrak U_{\mathsf{sm}}$ with respect to the \emph{Borkar topology} (see, Definition~\ref{DefBorkarTopology1A})\,. \begin{lemma}\label{DenseBorkarTopo} For each $v\in\mathfrak U_{\mathsf{sm}}$ there exists a sequence of policies $\{v_n\}_n$ (defined as in \cref{DenseStra1}) with finite actions, satisfying \begin{equation}\label{DenseStra2} \lim_{n\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \end{equation} for all $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in \cC_b({\mathds{R}^{d}}\times \mathbb{U})$\,. \end{lemma} \begin{proof} Let $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in \cC_b({\mathds{R}^{d}}\times \mathbb{U})$.
Then from the construction of the sequence $\{v_n\}_n$, it is easy to see that \begin{align*} \Big|\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x & - \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \Big|\\ &\leq \int_{{\mathds{R}^{d}}}|f(x)|\sum_{i=1}^{k_n}\int_{\mathbb{U}_{n,i}}|g(x,\zeta_{n,i}) - g(x,\zeta)|v(x)(\mathrm{d} \zeta)\mathrm{d} x\,. \end{align*} Since $g\in\cC_b({\mathds{R}^{d}}\times \mathbb{U})$ and $\text{diam}(\mathbb{U}_{n,i}) < \frac{2}{n}$, it follows that \begin{equation*} |f(x)|\sum_{i=1}^{k_n}\int_{\mathbb{U}_{n,i}}|g(x,\zeta_{n,i}) - g(x,\zeta)|v(x)(\mathrm{d} \zeta)\rightarrow 0\quad\text{for all}\,\,\, x\in{\mathds{R}^{d}}\,. \end{equation*} Since $g$ is bounded, we have $|g| \leq M_1$ for some positive constant $M_1$. Thus, we deduce that \begin{equation*} |f(x)|\sum_{i=1}^{k_n}\int_{\mathbb{U}_{n,i}}|g(x,\zeta_{n,i}) - g(x,\zeta)|v(x)(\mathrm{d} \zeta)\leq 2M_1 |f(x)| \quad\text{for all}\,\,\, x\in{\mathds{R}^{d}}\,. \end{equation*}Since $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$, by the dominated convergence theorem, we obtain \begin{equation*} \lim_{n\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x\,. \end{equation*}This completes the proof of the lemma. \end{proof} \subsection{Denseness of Piecewise Constant Policies} Let $d_{\mathscr{P}}$ be the Prokhorov metric on $\pV$\,. Since $(\mathbb{U}, d_{\mathbb{U}})$ is separable (being a compact metric space), convergence in $(\pV, d_{\mathscr{P}})$ is equivalent to weak convergence of probability measures.
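To make the quantization step concrete, the following is a minimal sketch of the nearest neighbor quantizer $Q_n$ and the induced finite-action policy \cref{DenseStra1}; the one-dimensional action space $\mathbb{U} = [0,1]$, the uniform grid, and the finitely supported representation of $v(x)$ are illustrative assumptions:

```python
# Minimal sketch of the nearest neighbor quantizer Q_n and the induced
# finite-action policy.  Illustrative assumptions: the action space is
# U = [0, 1] with a uniform grid, and the policy v(x) is a finitely
# supported distribution represented as a dict {action: probability}.
def make_grid(n):
    # k_n = n + 1 grid points; every zeta in [0, 1] is within 1/n of the grid
    return [i / n for i in range(n + 1)]

def Q(zeta, grid):
    # argmin_i d(zeta, zeta_{n,i}); min() breaks ties toward the smaller
    # index, which keeps the quantizer well defined (and measurable)
    return min(grid, key=lambda g: abs(zeta - g))

def quantize_policy(v, grid):
    # v_n assigns to each grid point zeta_{n,i} the v-mass of its cell U_{n,i}
    out = {g: 0.0 for g in grid}
    for zeta, p in v.items():
        out[Q(zeta, grid)] += p
    return out

grid = make_grid(4)                          # grid {0, 0.25, 0.5, 0.75, 1}
vq = quantize_policy({0.1: 0.5, 0.9: 0.5}, grid)
# mass is preserved: vq is again a probability distribution on the grid
assert abs(sum(vq.values()) - 1.0) < 1e-12
```

The key property used in the proof above is visible here: the quantized policy only moves mass within each cell $\mathbb{U}_{n,i}$, whose diameter shrinks like $2/n$.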
\begin{theorem}\label{TDPCP} For each $v\in \mathfrak U_{\mathsf{sm}}$ there exists a sequence of piecewise constant policies $\{v_m\}_{m}$ in $\mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{BorkarTopology2} \lim_{m\to\infty}\int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v_{m}(x)(\mathrm{d} \zeta)\mathrm{d} x = \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x \end{equation} for all $f\in L^1({\mathds{R}^{d}})\cap L^2({\mathds{R}^{d}})$ and $g\in \cC_b({\mathds{R}^{d}}\times \mathbb{U})$\,. \end{theorem} \begin{proof} Let $\sB_{0} = \emptyset$ and define $D_{n} = \sB_{n}\setminus \sB_{n-1}$ for $n\in \mathds{N}$\,. Thus it is easy to see that ${\mathds{R}^{d}} = \cup_{n=1}^{\infty} D_{n}$. Since each $v\in \mathfrak U_{\mathsf{sm}}$ is a measurable map $v: {\mathds{R}^{d}} \to \pV$, it follows that $\hat{v}_{n}\,:=\, v\arrowvert_{D_n} : D_n \to \pV$ is a measurable map. Hence, by Lusin's theorem (see \cite[Theorem~7.5.2]{D02-book}), for any $\epsilon_n > 0$ there exist a compact set $K_{n}^{\epsilon_n}\subset D_n$ and a continuous function $\hat{v}_{n}^{\epsilon_n} : K_{n}^{\epsilon_n}\to \pV$ such that $\arrowvert D_n\setminus K_{n}^{\epsilon_n} \arrowvert < \epsilon_n$ (where $\arrowvert \cdot \arrowvert$ denotes the Lebesgue measure) and $\hat{v}_{n} \equiv \hat{v}_{n}^{\epsilon_n}$ on $K_{n}^{\epsilon_n}$\,. Again, by Tietze's extension theorem (see \cite[Theorem~4.1]{DG51}), there exists a continuous function $\tilde{v}_{n}^{\epsilon_n}: D_n \to \pV$ such that $ \tilde{v}_{n}^{\epsilon_n}\equiv \hat{v}_{n}^{\epsilon_n}$ on $K_{n}^{\epsilon_n}$\,.
\begin{itemize} \item[\textbf{Step1}] Therefore, for any $\hat{f}\in \Lp^1({\mathds{R}^{d}})\cap \Lp^2({\mathds{R}^{d}})$ and $\hat{g}\in \cC(\mathbb{U})$, we have \begin{align}\label{EBT1} &\arrowvert \int_{D_n}\hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_n}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{D_n\setminus K_{n}^{\epsilon_n}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_n\setminus K_{n}^{\epsilon_n}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{D_n\setminus K_{n}^{\epsilon_n}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert + \arrowvert\int_{D_n\setminus K_{n}^{\epsilon_n}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \|\hat{g}\|_{\infty}\int_{D_n\setminus K_{n}^{\epsilon_n}}|\hat{f}(x)|\mathrm{d} x + \|\hat{g}\|_{\infty}\int_{D_n\setminus K_{n}^{\epsilon_n}}|\hat{f}(x)|\mathrm{d} x \nonumber\\ &\leq 2\|\hat{g}\|_{\infty}\|\hat{f}\|_{\Lp^2({\mathds{R}^{d}})} \sqrt{|(D_n\setminus K_{n}^{\epsilon_n})|} \leq 2\sqrt{\epsilon_n} \|\hat{g}\|_{\infty}\|\hat{f}\|_{\Lp^2({\mathds{R}^{d}})}\quad\text{(by H\"older's inequality)}\,.
\end{align} Now, since $(\pV, d_{\mathscr{P}})$ is compact, for each $m\in \mathds{N}$ there exists a finite set $\widehat{\Lambda}_{m} = \{\mu_{m,1}, \mu_{m,2}, \dots , \mu_{m, k_m}\}$ such that $$\min_{\mu_{m, i}\in \widehat{\Lambda}_{m}}d_{\mathscr{P}}(\mu, \mu_{m, i}) < \frac{1}{m}\quad \text{for any}\quad \mu\in \pV\,.$$ Let $\widehat{Q}_{m}: \pV \to \widehat{\Lambda}_{m}$ be defined as \begin{equation*} \widehat{Q}_{m} (\mu) = \argmin_{\mu_{m,i}\in\widehat{\Lambda}_{m}} d_{\mathscr{P}}(\mu, \mu_{m,i})\,, \end{equation*} where ties are broken so that $\widehat{Q}_{m}$ is a measurable map. Hence, it induces a partition $\{\widehat{U}_{m,i}\}_{i=1}^{k_m}$ of the space $\pV$ which is given by \begin{equation*} \widehat{U}_{m,i} = \{\mu\in\pV: \widehat{Q}_{m}(\mu) = \mu_{m,i}\}\,. \end{equation*}By the triangle inequality it is easy to see that \begin{equation*} \diam(\widehat{U}_{m,i}) := \sup_{\mu_1, \mu_2\in \widehat{U}_{m,i}}d_{\mathscr{P}}(\mu_1, \mu_2) < \frac{2}{m}\,. \end{equation*} Now, for $v\in\mathfrak U_{\mathsf{sm}}$ define $D_{n,i}^m := (\tilde{v}_{n}^{\epsilon_n})^{-1}(\widehat{U}_{m,i})$. This implies that $D_{n} = \cup_{i=1}^{k_m} D_{n,i}^m$\,. Define \begin{equation*} \hat{v}_{n,m}^{\epsilon_n}(x) := \sum_{i=1}^{k_m} \mu_{m,i}\Ind_{\{D_{n,i}^m\}}(x)\quad\text{for all}\quad x\in D_n\,\,\,\text{and}\,\,\,m\in \mathds{N}\,.
\end{equation*} Therefore, we deduce that \begin{align}\label{EBT2} &\arrowvert \int_{D_n} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_n} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n,m}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \sum_{i=1}^{k_m} \arrowvert \int_{D_{n,i}^{m}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_{n,i}^{m}} \hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\mu_{m,i}(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \sum_{i=1}^{k_m} \int_{D_{n,i}^{m}} |\hat{f}(x)| \arrowvert \int_{\mathbb{U}} \hat{g}(\zeta)\tilde{v}_{n}^{\epsilon_n}(x)(\mathrm{d} \zeta) - \int_{\mathbb{U}} \hat{g}(\zeta)\mu_{m,i}(\mathrm{d} \zeta)\arrowvert \mathrm{d} x \nonumber\\ &\leq \|\hat{f}\|_{\Lp^1({\mathds{R}^{d}})}\epsilon_n \quad\text{(for large enough $m$)}\,. \end{align} Replacing $\epsilon_n$ in the construction above by $\min\left\{\frac{\epsilon_n^{2}}{16\|\hat{f}\|^{2}_{\Lp^2({\mathds{R}^{d}})}\|\hat{g}\|^{2}_{\infty}}, \frac{\epsilon_n}{2\|\hat{f}\|_{\Lp^1({\mathds{R}^{d}})}}\right\}$ (so that each of the bounds in \cref{EBT1} and \cref{EBT2} is at most $\frac{\epsilon_n}{2}$) and combining \cref{EBT1} and \cref{EBT2}, there exists $\bar{M}_0 >0$ (depending on $\hat{f}, \hat{g}$ and $\epsilon_n$) such that \begin{align}\label{EBT3} \arrowvert \int_{D_n}\hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_n}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{n,m}^{\epsilon_n}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \epsilon_n\,, \end{align} for all $m\geq \bar{M}_0$\,. \item[\textbf{Step2}] Let $\epsilon > 0$ be a small number. Now define \begin{equation}\label{EBT4} \bar{v}_{m}^{\epsilon} := \sum_{n=1}^{\infty} \hat{v}_{n,m}^{\epsilon_n}\Ind_{D_n} \quad \text{for}\,\,\, m\in \mathds{N}\,.
\end{equation} Since $\hat{f}\in\Lp^1({\mathds{R}^{d}})$, there exists $N_0 \in \mathds{N}$ such that $\int_{\sB_{N_0}^c}|\hat{f}(x)|\mathrm{d} x < \frac{\epsilon}{4\|\hat{g}\|_{\infty}}$\,. Then \begin{align*} &\arrowvert \int_{{\mathds{R}^{d}}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{\sB_{N_0}^c} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert + \arrowvert \int_{\sB_{N_0}} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \frac{\epsilon}{2} + \arrowvert \int_{\sB_{N_0}} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\,. \end{align*} Now, choose $\epsilon_{i} > 0$ for $i= 1,\dots , N_0$ such that $\sum_{i = 1}^{N_0} \epsilon_i < \frac{\epsilon}{2}$. Thus, in view of \cref{EBT3}, for each $i = 1, \dots , N_0$ there exists $M_i >0$ such that \begin{equation*} \arrowvert \int_{D_i}\hat{f}(x) \int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{i}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{D_i}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\hat{v}_{i,m}^{\epsilon_i}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \epsilon_i\,, \end{equation*} for all $m\geq M_i$\,. Hence, for $m\geq \max\{M_i,\, i = 1,\dots ,N_0\}$, we get \begin{align}\label{EBT5} \arrowvert \int_{\sB_{N_0}} \hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(v(x) & - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber \\ & \leq \sum_{i=1}^{N_0} \arrowvert\int_{D_{i}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)(\hat{v}_{i}(x) - \hat{v}_{i,m}^{\epsilon_i}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \sum_{i = 1}^{N_0} \epsilon_i < \frac{\epsilon}{2}\,.
\end{align} Therefore, for each $\epsilon >0$ there exists a positive constant $\hat{M}_0 := \max\{M_i,\, i = 1,\dots ,N_0\}$ (depending on $\hat{f}, \hat{g}, \epsilon$) such that for all $m\geq \hat{M}_0$ \begin{equation}\label{EBT6} \arrowvert \int_{{\mathds{R}^{d}}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}\hat{f}(x)\int_{\mathbb{U}} \hat{g}(\zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \epsilon\,. \end{equation} \item[\textbf{Step3}] Let $\{\hat{f}_k\}_{k\in\mathds{N}}$ and $\{h_{j}\}_{j\in\mathds{N}}$ be countable dense sets in $\Lp^1({\mathds{R}^{d}})$ and $\cC(\mathbb{U})$, respectively\,. Thus \cref{EBT6} holds true for each $\hat{f}_k$ and $h_j$\,. Let $f\in\Lp^{1}({\mathds{R}^{d}})\cap \Lp^{2}({\mathds{R}^{d}})$ and $g\in\cC_{b}({\mathds{R}^{d}}\times \mathbb{U})$\,. Since $f\in \Lp^{1}({\mathds{R}^{d}})$, for $\epsilon > 0$ there exists $N_{1}\in \mathds{N}$ such that $\int_{\sB_{N_1}^c} |f(x)|\mathrm{d} x \leq \frac{\epsilon}{4\|g\|_{\infty}}$\,. This implies \begin{align}\label{EBT7} &\arrowvert \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}} g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}} g(x,\zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{\sB_{N_1}^c} f(x)\int_{\mathbb{U}} g(x,\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert + \arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} g(x,\zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \frac{\epsilon}{2} + \arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)(v(x) - \bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\,.
\end{align} It is well known that in $\cC_b(\bar{\sB}_{N_1}\times \mathbb{U})$ the functions of the form $\{\sum_{i=1}^{m} r_{i}(x)p_i(\zeta)\}_{m\in\mathds{N}}$ form an algebra which contains the constants and separates points, where $r_i\in \cC(\bar{\sB}_{N_1})$ and $p_i \in \cC(\mathbb{U})$\,. Thus, by the Stone--Weierstrass theorem, there exists $\hat{m}$ (large enough) such that \begin{equation}\label{EBT8} \sup_{\sB_{N_1}\times \mathbb{U}} |g(x,\zeta) - \sum_{i=1}^{\hat{m}} r_{i}(x)p_i(\zeta)| \leq \frac{\epsilon}{24\|f\|_{\Lp^1({\mathds{R}^{d}})}}\,. \end{equation} Since $\{h_j\}_{j\in\mathds{N}}$ is dense in $\cC(\mathbb{U})$, for each $p_i$ we can find $h_{j(i)} \in \cC(\mathbb{U})$ such that \begin{equation}\label{EBT9} \sup_{\zeta\in \mathbb{U}} |p_i(\zeta) - h_{j(i)}(\zeta)| \leq \frac{\epsilon}{24\|f\|_{\Lp^1({\mathds{R}^{d}})}\|r_{i}\|_{\infty}}\,. \end{equation} Also, since $fr_i\in \Lp^1({\mathds{R}^{d}})$, there exists $\hat{f}_{k(i)}$ such that \begin{equation}\label{EBT9A} \int_{\sB_{N_1}} |f(x)r_i(x) - \hat{f}_{k(i)}(x)|\mathrm{d} x \leq \frac{\epsilon}{24\|f\|_{\Lp^1({\mathds{R}^{d}})}\|h_{j(i)}\|_{\infty}}\,.
\end{equation} Now, using \cref{EBT8}, \cref{EBT9}, \cref{EBT9A} we have the following \begin{align}\label{EBT10} &\arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)v(x) - \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)\bar{v}_{m}^{\epsilon}(x))(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\nonumber\\ \leq & \arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} g(x, \zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} \sum_{i}^{\hat{m}} r_{i}(x)p_i(\zeta) v(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\nonumber\\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)p_i(\zeta) v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\nonumber\\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{\sB_{N_1}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\nonumber\\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{\sB_{N_1}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{\sB_{N_1}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert \nonumber \\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)h_{j(i)}(\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{\sB_{N_1}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\nonumber\\ & + \sum_{i=1}^{\hat{m}}\arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)p_i(\zeta) \bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} r_{i}(x)h_{j(i)}(\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert\nonumber\\ & + \arrowvert \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} g(x, 
\zeta)\bar{v}_{m}^{\epsilon}(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{\sB_{N_1}} f(x)\int_{\mathbb{U}} \sum_{i}^{\hat{m}} r_{i}(x)p_i(\zeta) \bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert\nonumber\\ & \leq \frac{\epsilon}{4} + \sum_{l=1}^{N_1}\sum_{i=1}^{\hat{m}}\arrowvert \int_{D_{l}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)v(x) (\mathrm{d} \zeta)\mathrm{d} x - \int_{D_{l}} \hat{f}_{k(i)}(x)\int_{\mathbb{U}} h_{j(i)}(\zeta)\bar{v}_{l,m}^{\epsilon_l}(x) (\mathrm{d} \zeta)\mathrm{d} x\arrowvert \end{align} Now, choose $\epsilon_{l,i}$ for $l = 1,\dots , N_1$ and $i=1,\dots ,\hat{m}$ in such a way that $\sum_{l=1}^{N_1}\sum_{i=1}^{\hat{m}}\epsilon_{l,i} \leq \frac{\epsilon}{4}$\,. Thus, in view of \cref{EBT3} there exists $\hat{M}_{2} := \max\{M_{k(i),j(i)}^{l}: i=1,\dots , \hat{m}; l = 1,\dots , N_1\}$ (where $M_{k(i),j(i)}^{l}\in \mathds{N}$ is the constant obtained as in \cref{EBT3} for $i=1,\dots , \hat{m}; l = 1,\dots , N_1$). Therefore, from \cref{EBT7} and \cref{EBT10}, we conclude that \begin{align}\label{EBT11} \arrowvert \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}} g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}f(x)\int_{\mathbb{U}} g(x,\zeta)\bar{v}_{m}^{\epsilon}(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \leq \epsilon\,, \end{align} for all $m\geq \hat{M}_2$\,. This completes the proof of the theorem\,. \end{itemize} \end{proof} \begin{remark}\label{RContSta1} Following the discussions above, one can show that the space of continuous stationary policies are also dense in the space of stationary policies under Borkar topology. This is a useful result as continuity allows for many approximation results to be invoked with little effort (see e.g. \cite[Assumption A2.3, pp. 322]{KD92} where convergence properties of invariant measures corresponding to time-discretizations are facilitated). 
To see the denseness result, let, as earlier, $\{f_i\}_{i\in\mathds{N}}$ be a countable dense set in $\Lp^{1}({\mathds{R}^{d}})$\,. Now for each $i\in\mathds{N}$, define a finite measure $\nu_i$ on $({\mathds{R}^{d}}, \sB({\mathds{R}^{d}}))$, given by $$\nu_i(A) = \int_{A} |f_i(x)|\mathrm{d} x \quad \forall \,\,\, A\in \sB({\mathds{R}^{d}})\,.$$ Let $v\in\mathfrak U_{\mathsf{sm}}$. Then, as in the proof of Theorem~\ref{TDPCP}, by successive application of Lusin's theorem (see \cite[Theorem~7.5.2]{D02-book}) and Tietze's extension theorem (see \cite[Theorem~4.1]{DG51}), for any $\epsilon_i >0$ there exist a closed set $K_i\subset {\mathds{R}^{d}}$ and a continuous function $v^{i}: {\mathds{R}^{d}} \to \pV$ such that $v^{i}\equiv v$ on $K_i$ and $\nu_i({\mathds{R}^{d}}\setminus K_i) < \epsilon_{i}$\,. Hence, for any $g\in\cC_b({\mathds{R}^{d}}\times \mathbb{U})$, we have \begin{align*} &\arrowvert \int_{{\mathds{R}^{d}}}f_i(x) \int_{\mathbb{U}} g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}}f_i(x)\int_{\mathbb{U}} g(x,\zeta)v^i(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq \arrowvert \int_{{\mathds{R}^{d}}\setminus K_i} f_i(x) \int_{\mathbb{U}}g(x,\zeta)v(x)(\mathrm{d} \zeta)\mathrm{d} x - \int_{{\mathds{R}^{d}}\setminus K_i} f_i(x) \int_{\mathbb{U}} g(x,\zeta)v^i(x)(\mathrm{d} \zeta)\mathrm{d} x \arrowvert \nonumber\\ &\leq 2\|g\|_{\infty}\int_{{\mathds{R}^{d}}\setminus K_i}|f_i(x)|\mathrm{d} x \nonumber\\ & = 2\|g\|_{\infty}\nu_i({\mathds{R}^{d}}\setminus K_i) \leq 2\|g\|_{\infty}\epsilon_i\,. \end{align*} Since $\{f_i\}_{i\in\mathds{N}}$ is dense in $\Lp^{1}({\mathds{R}^{d}})$, by choosing $\epsilon_i$ appropriately, we obtain our result\,. \end{remark} \section{Near Optimality of Finite Models for Controlled Diffusions}\label{NOptiFinite} First we prove the near optimality of quantized policies for the $\alpha$-discounted cost. \begin{theorem}\label{T1.2} Suppose Assumptions (A1)-(A3) hold.
Then for each $\epsilon >0$ there exist a policy $v_{\epsilon}^*\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{ET1.2A} \cJ_{\alpha}^{v_{\epsilon}^*}(x, c) \leq \inf_{U\in \mathfrak U}\cJ_{\alpha}^{U}(x, c) + \epsilon \quad\text{and}\quad \cJ_{\alpha}^{\bar{v}_{\epsilon}^*}(x, c) \leq \inf_{U\in \mathfrak U}\cJ_{\alpha}^{U}(x, c) + \epsilon \quad\quad\text{for all}\,\, x\in{\mathds{R}^{d}}\,. \end{equation} \end{theorem} \begin{proof} From \cite[Theorem~3.5.6]{ABG-book}, it follows that there exists $v^*\in \mathfrak U_{\mathsf{sm}}$ such that $\cJ_{\alpha}^{v^*}(x, c) = \inf_{U\in \mathfrak U}\cJ_{\alpha}^{U}(x, c)$ for all $x\in {\mathds{R}^{d}}$\,. Since the map $v\mapsto \cJ_{\alpha}^{v}(x, c)$ is continuous on $\mathfrak U_{\mathsf{sm}}$ (see, Theorem~\ref{T1.1}) and the space of quantized stationary policies is dense in $\mathfrak U_{\mathsf{sm}}$ (see, Lemma~\ref{DenseBorkarTopo}), it follows that for each $\epsilon > 0$ there exists a quantized policy $v_{\epsilon}^*\in \mathfrak U_{\mathsf{sm}}$ satisfying \cref{ET1.2A}\,. Similarly, since the piecewise constant policies are dense in $\mathfrak U_{\mathsf{sm}}$ (see, Theorem~\ref{TDPCP}), we conclude that for any $\epsilon > 0$ there exists $\bar{v}_{\epsilon}^*\in\mathfrak U_{\mathsf{sm}}$ which satisfies \cref{ET1.2A}\,. This completes the proof. \end{proof} We now show that for the cost up to an exit time, the quantized (finite action/piecewise constant) policies are near optimal\,. \begin{theorem}\label{T1.2ExitCost} Suppose Assumptions (A1)-(A3) hold.
Then for each $\epsilon >0$ there exist a policy $v_{\epsilon}^*\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{ET1.2ExitCostA} \hat{\cJ}_{e}^{v_{\epsilon}^*}(x) \leq \inf_{U\in \mathfrak U}\hat{\cJ}_{e}^{U}(x) + \epsilon \quad\text{and}\quad \hat{\cJ}_{e}^{\bar{v}_{\epsilon}^*}(x) \leq \inf_{U\in \mathfrak U}\hat{\cJ}_{e}^{U}(x) + \epsilon \quad\quad\text{for all}\,\, x\in{\mathds{R}^{d}}\,. \end{equation} \end{theorem} \begin{proof} From \cite[p. 229]{B05Survey}, we know that there exists $v^*\in \mathfrak U_{\mathsf{sm}}$ such that $\hat{\cJ}_{e}^{v^*}(x) = \inf_{U\in \mathfrak U}\hat{\cJ}_{e}^{U}(x)$\,. Now, from the continuity of the map $v\mapsto \hat{\cJ}_{e}^{v}(x)$ (see Theorem~\ref{T1.1Exit}) and the density results (see Section~\ref{DensePol}), it is easy to see that for any given $\epsilon > 0$ there exist a policy $v_{\epsilon}^*\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{sm}}$ satisfying \cref{ET1.2ExitCostA}\,. This completes the proof of the theorem\,. \end{proof} Next we prove the near optimality of the quantized policies for the ergodic cost under a near-monotonicity assumption on the running cost\,.
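To make the quantization concrete: the finite-action policies invoked in the results above can be viewed as the composition of a policy with a quantizer of the action space. The following is a minimal numerical sketch under the simplifying assumptions that $\mathbb{U} = [0,1]$ and the policy is non-randomized; all names are hypothetical, and this is only an illustration, not the construction used in Lemma~\ref{DenseBorkarTopo}.

```python
import numpy as np

def quantizer(m):
    """Nearest-neighbor quantizer of the action space U = [0, 1]:
    maps each action to the closest point of a uniform m-point grid."""
    grid = np.linspace(0.0, 1.0, m)
    def Q(zeta):
        return grid[np.argmin(np.abs(grid - zeta))]
    return Q, grid

def quantize_policy(v, m):
    """Compose a (non-randomized) stationary policy v: R -> U with the
    quantizer, producing a policy taking at most m distinct actions."""
    Q, _ = quantizer(m)
    return lambda x: Q(v(x))

# A continuous policy and its quantized versions; the sup-error over a
# grid of states shrinks with the mesh 1/(m-1) of the action grid.
v = lambda x: 0.5 * (1.0 + np.tanh(x))   # maps R into (0, 1)
xs = np.linspace(-3.0, 3.0, 200)
for m in (2, 8, 64):
    v_m = quantize_policy(v, m)
    print(m, max(abs(v_m(x) - v(x)) for x in xs))
```

The sup-error bound $1/(2(m-1))$ of the nearest-neighbor quantizer is what makes the quantized cost converge once continuity of the cost in the policy is available.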
Let $$\Theta_{v} := \{v_n\mid v_n \,\,\text{is the quantized policy defined as in \cref{DenseStra1} corresponding to}\,\, v \}$$ and $$\bar{\Theta}_{v} := \{\bar{v}_n\mid \bar{v}_n \,\,\text{is the quantized policy defined as in \cref{EBT4} corresponding to}\,\, v \}\,.$$ In order to establish our result, we assume that the sets of invariant measures $$\Gamma_{v^*} := \{\eta_{v_n^*}\mid \eta_{v_n^*}\,\,\text{is the invariant measure corresponding to}\,\, v_n^*\in \Theta_{v^*}\}$$ and $$\bar{\Gamma}_{v^*} := \{\eta_{\bar{v}_n^*}\mid \eta_{\bar{v}_n^*}\,\,\text{is the invariant measure corresponding to}\,\, \bar{v}_n^*\in \bar{\Theta}_{v^*}\}$$ are tight, where $v^*\in\mathfrak U_{\mathsf{sm}}$ is an ergodic optimal control. A sufficient condition that ensures the required tightness is the existence of a non-negative inf-compact function $f\in\cC^2({\mathds{R}^{d}})$ such that $$\sL_{v_{n}^*} f(x) \leq \kappa_0 - f(x)\quad\text{and}\quad \sL_{\bar{v}_{n}^*} f(x) \leq \kappa_0 - f(x)$$ for all $n\in\mathds{N}$ and some constant $\kappa_0 >0$\,. \begin{theorem}\label{ErgodNearmOPT1} Suppose that Assumptions (A1) - (A4) hold.
Also, suppose that, corresponding to the optimal policy $v^*\in \mathfrak U_{\mathsf{sm}}$, the sets of invariant measures $\Gamma_{v^*}$ and $\bar{\Gamma}_{v^*}$ are tight and the running cost $c$ is near monotone with respect to $\sup_{v_n^*\in\Theta_{v^*}}\sE_x(c, v_{n}^*)$ and $\sup_{\bar{v}_n^*\in\bar{\Theta}_{v^*}}\sE_x(c, \bar{v}_{n}^*)$, that is, $$\sup_{v_n^*\in\Theta_{v^*}}\sE_x(c, v_{n}^*) < \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta)\quad \text{and}\quad \sup_{\bar{v}_n^*\in\bar{\Theta}_{v^*}}\sE_x(c, \bar{v}_{n}^*) < \liminf_{\norm{x}\to\infty}\inf_{\zeta\in \mathbb{U}} c(x,\zeta).$$ Then for any given $\epsilon >0$ there exist a policy $v_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{ENearmonOPTA1A} \sE_x(c, v_{\epsilon}) \leq \sE^*(c) + \epsilon\quad \text{and}\quad \sE_x(c, \bar{v}_{\epsilon}) \leq \sE^*(c) + \epsilon\,. \end{equation} \end{theorem} \begin{proof} From \cite[Theorem~3.6.10]{ABG-book}, we know that there exists a stable $v^*\in \mathfrak U_{\mathsf{sm}}$ such that $\sE_x(c, v^*) = \sE^*(c)$\,. By our assumption, the sets of invariant measures $\Gamma_{v^*}$ and $\bar{\Gamma}_{v^*}$ are tight. Thus, by the continuity result (see Theorem~\ref{ergodicnearmono1}) and the density results (see Lemma~\ref{DenseBorkarTopo}, Theorem~\ref{TDPCP}), we deduce that for each $\epsilon>0$ there exist $v_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ such that \cref{ENearmonOPTA1A} holds\,. This completes the proof. \end{proof} Now, for the ergodic cost criterion, under the Lyapunov type stability assumption we prove near optimality of quantized policies. \begin{theorem}\label{TErgoOptApprox1} Suppose that Assumptions (A1) - (A3) and (A5) hold.
Then for any given $\epsilon>0$ there exist a quantized policy $v_{\epsilon}\in \mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon} \in \mathfrak U_{\mathsf{sm}}$ such that \begin{equation}\label{ENearmonOPTA1} \sE_x(c, v_{\epsilon}) \leq \sE^*(c) + \epsilon \quad\text{and}\quad \sE_x(c, \bar{v}_{\epsilon}) \leq \sE^*(c) + \epsilon\,. \end{equation} \end{theorem} \begin{proof} From \cite[Theorem~3.7.14]{ABG-book}, we know that there exists $v^*\in \mathfrak U_{\mathsf{sm}}$ such that $\sE_x(c, v^*) = \sE^*(c)$. Now, since the spaces of quantized policies and piecewise constant policies are dense in $\mathfrak U_{\mathsf{sm}}$ (see Lemma~\ref{DenseBorkarTopo} and Theorem~\ref{TDPCP}) and the map $v\mapsto \inf_{{\mathds{R}^{d}}}\sE_x(c, v)$ is continuous on $\mathfrak U_{\mathsf{sm}}$ (see Theorem~\ref{ergodicLyap1}), for any given $\epsilon>0$ one can find a quantized policy $v_{\epsilon}\in\mathfrak U_{\mathsf{sm}}$ with finite actions and a piecewise constant policy $\bar{v}_{\epsilon} \in \mathfrak U_{\mathsf{sm}}$ such that \cref{ENearmonOPTA1} holds. \end{proof} \begin{remark}\label{RContStaNear1} In view of the continuity (see Section~\ref{CDiscCost}, Section~\ref{CErgoCost}) and the denseness (see Remark~\ref{RContSta1}) results, we have the near optimality of continuous stationary policies\,. \end{remark} \section{Finite Horizon Cost: Time Discretization of Markov Policies and Near Optimality of Piecewise Constant Policies}\label{TimeDMarkov} Recall \cref{FiniteCost1} as our cost criterion for the finite horizon setup. We will present three results in this section, where the ultimate goal is to arrive at near optimality of piecewise constant policies. While this approximation problem is well studied \cite{KD92}, \cite{HK-02A}, \cite{RF-16A}, our proof method is rather direct and appears to be new.
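To illustrate the kind of time discretization studied in this section: a piecewise constant Markov policy can be obtained from a given one by sample-and-hold on a uniform time grid. The sketch below is only illustrative (names hypothetical; a non-randomized scalar policy is assumed) and is not the approximation constructed in the proofs.

```python
import numpy as np

def piecewise_constant(v, h):
    """Sample-and-hold discretization of a Markov policy v(t, x):
    on each interval [k*h, (k+1)*h) the returned policy applies the
    frozen value v(k*h, x)."""
    def v_h(t, x):
        k = np.floor(t / h)
        return v(k * h, x)
    return v_h

# A time-varying (non-randomized, scalar) Markov policy with actions in
# [0, 1]; the error at (t, x) is bounded by the modulus of continuity
# of t -> v(t, x) over one step of length h.
v = lambda t, x: 0.5 * (1.0 + np.sin(t) * np.tanh(x))
v_h = piecewise_constant(v, h=0.1)
print(abs(v_h(0.37, 1.3) - v(0.37, 1.3)))
```

Shrinking the step $h$ makes such a policy approximate the original one, which is the mechanism behind the density result proved later in this section.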
Under uniform Lipschitz continuity and uniform boundedness assumptions on the diffusion coefficients and the running cost function, the authors of \cite{KD92}, \cite{HK-02A}, \cite{RF-16A} established similar approximation results using numerical procedures\,. \subsection*{Continuity of Finite Horizon Cost on Markov Policies under the Borkar Topology} For simplicity, in this subsection we assume that $a, b, c$ are uniformly bounded (it is possible to relax these boundedness assumptions). In particular, we assume that \begin{itemize} \item[\hypertarget{B1}{{(B1)}}] The functions $a, b, c$ are uniformly bounded, i.e., \begin{equation*} \sup_{(x,\zeta)\in {\mathds{R}^{d}}\times \mathbb{U}}\left[\abs{b(x,\zeta)} + \norm{a(x)} + \sum_{i=1}^{d} \norm{\frac{\partial{a}}{\partial x_i}(x)} + \abs{c(x, \zeta)}\right] \,\le\, \mathrm{K} \end{equation*} for some positive constant $\mathrm{K}$\,. Moreover, $H\in \Sob^{2,p,\mu}({\mathds{R}^{d}})\cap \Lp^{\infty}({\mathds{R}^{d}})$\,,\,\, $p\ge 2$\,. \end{itemize} In view of \cite[Theorem~3.3, p. 235]{BL84-book}, the optimality equation (or, the HJB equation) \begin{align*} &\frac{\partial \psi}{\partial t} + \inf_{\zeta\in \mathbb{U}}\left[\sL_{\zeta}\psi + c(x, \zeta) \right] = 0 \\ & \psi(T,x) = H(x) \end{align*} admits a unique solution $\psi\in \Sob^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})\cap \Lp^{\infty}((0, T)\times{\mathds{R}^{d}})$\,,\,\, $p\ge 2$\,. Thus, by the It\^{o}-Krylov formula (see the verification results as in \cite[Theorem~3.5.2]{HP09-book}), we know the existence of an optimal Markov policy, that is, there exists $v^*\in \mathfrak U_{\mathsf{m}}$ such that $\cJ_{T}(x, v^*) = \cJ_{T}^*(x)$\,. In the following theorem, we show that the finite horizon cost is continuous in $\mathfrak U_{\mathsf{m}}$ with respect to the Borkar topology (see Definition~\ref{BKTP1})\,. \begin{theorem}\label{TContFHC} Suppose Assumptions (A1), (A3) and (B1) hold.
Then the map $v\mapsto \cJ_{T}(x, v)$ from $\mathfrak U_{\mathsf{m}}$ to $\mathds{R}$ is continuous. \end{theorem} \begin{proof} Let $\{v_n\}$ be a sequence in $\mathfrak U_{\mathsf{m}}$ such that $v_n \to v$ in $\mathfrak U_{\mathsf{m}}$, for some $v\in\mathfrak U_{\mathsf{m}}$\,. From \cite[Theorem~3.3, p. 235]{BL84-book}, we have that for each $n\in\mathds{N}$ there exists a unique solution $\psi_n\in\Sob^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})\cap \Lp^{\infty}((0, T)\times{\mathds{R}^{d}})$\,,\,\, $p\ge 2$, to the following Poisson equation \begin{align}\label{TContFHC1A} &\frac{\partial \psi_n}{\partial t} + \left[\sL_{v_n}\psi_n + c(x, v_n(t,x)) \right] = 0 \nonumber\\ & \psi_n(T,x) = H(x)\,. \end{align} By the It\^{o}-Krylov formula, we deduce that \begin{align}\label{TContFHC1B} \psi_{n}(t,x) = \Exp_x^{v_n}\left[\int_t^{T} c(X_s, v_n(s, X_s)) \mathrm{d}{s} + H(X_T)\right]\,. \end{align} This gives us \begin{equation}\label{TContFHC1C} \norm{\psi_n}_{\infty} \leq T\norm{c}_{\infty} + \norm{H}_{\infty}\,. \end{equation} Rewriting \cref{TContFHC1A}, we get \begin{align*} &\frac{\partial \psi_n}{\partial t} + \sL_{v_n}\psi_n + \lambda_0 \psi_n = \lambda_0 \psi_n - c(x, v_n(t,x)) \nonumber\\ & \psi_n(T,x) = H(x)\,, \end{align*} for some fixed $\lambda_0 >0$\,. Thus, by the parabolic PDE estimate \cite[eq. (3.8), p. 234]{BL84-book}, we deduce that \begin{equation}\label{TContFHC1D} \norm{\psi_n}_{\Sob^{1,2,p,\mu}} \leq \kappa_1 \norm{\lambda_0 \psi_n - c(x, v_n(t,x))}_{\Lp^{p,\mu}}\,. \end{equation} Hence, from \cref{TContFHC1C} and \cref{TContFHC1D}, it follows that $\norm{\psi_n}_{\Sob^{1,2,p,\mu}} \leq \kappa_2$ for some positive constant $\kappa_2$ (independent of $n$)\,.
Since $\Sob^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})$ is a reflexive Banach space, as a corollary of the Banach--Alaoglu theorem, there exists $\psi^*\in\Sob^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})$ such that along a subsequence (which, without loss of generality, we denote by the same sequence) \begin{equation}\label{TContFHC1E} \begin{cases} \psi_n \to & \psi^*\quad \text{in}\quad \Sob^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})\quad\text{(weakly)}\\ \psi_n \to & \psi^*\quad \text{in}\quad \Sob^{0,1,p,\mu}((0, T)\times{\mathds{R}^{d}})\quad \text{(strongly)}\,. \end{cases} \end{equation} Since $v_n\to v$ in $\mathfrak U_{\mathsf{m}}$, multiplying both sides of \cref{TContFHC1A} by a test function $\phi\in\cC_c^{\infty}((0, T)\times {\mathds{R}^{d}})$ and integrating, we get \begin{align}\label{TContFHC1EA} \int_{0}^{T}\int_{{\mathds{R}^{d}}}\frac{\partial \psi_n}{\partial t}\phi(t,x)\mathrm{d} t \mathrm{d} x + & \int_{0}^{T}\int_{{\mathds{R}^{d}}}\trace\bigl(a(x)\nabla^2 \psi_n\bigr)\phi(t,x)\mathrm{d} t \mathrm{d} x \nonumber\\ & + \int_{0}^{T}\int_{{\mathds{R}^{d}}}\{b(x,v_{n}(t,x))\cdot \nabla \psi_n + c(x, v_{n}(t,x))\}\phi(t,x)\mathrm{d} t \mathrm{d} x = 0\,. \end{align} In view of \cref{TContFHC1E}, letting $n\to \infty$ in \cref{TContFHC1EA}, we obtain that \begin{align*} \int_{0}^{T}\int_{{\mathds{R}^{d}}}\frac{\partial \psi^*}{\partial t}\phi(t,x)\mathrm{d} t \mathrm{d} x + & \int_{0}^{T}\int_{{\mathds{R}^{d}}}\trace\bigl(a(x)\nabla^2 \psi^*\bigr)\phi(t,x)\mathrm{d} t \mathrm{d} x \nonumber\\ & + \int_{0}^{T}\int_{{\mathds{R}^{d}}}\{b(x,v(t,x))\cdot \nabla \psi^* + c(x, v(t,x))\}\phi(t,x)\mathrm{d} t \mathrm{d} x = 0\,. \end{align*} This implies that $\psi^*\in\Sob^{1,2,p,\mu}((0, T)\times{\mathds{R}^{d}})$ satisfies \begin{align}\label{TContFHC1F} &\frac{\partial \psi^*}{\partial t} + \left[\sL_{v}\psi^* + c(x, v(t,x)) \right] = 0 \nonumber\\ & \psi^*(T,x) = H(x)\,.
\end{align} Again, by the It\^{o}-Krylov formula, it follows that \begin{align}\label{TContFHC1G} \psi^{*}(t,x) = \Exp_x^{v}\left[\int_t^{T} c(X_s, v(s, X_s)) \mathrm{d}{s} + H(X_T)\right]\,. \end{align} Therefore, from \cref{TContFHC1B} and \cref{TContFHC1G}, we conclude that $v\mapsto \cJ_{T}(x, v)$ from $\mathfrak U_{\mathsf{m}}$ to $\mathds{R}$ is continuous. \end{proof} \subsection{Time Discretization of Markov Policies} Following, and briefly modifying, our approach so far involving stationary policies, in this section we show that piecewise constant Markov policies are dense in the space of Markov policies $\mathfrak U_{\mathsf{m}}$\,. Also, using this result we deduce the near optimality of piecewise constant Markov policies\,. \begin{theorem}\label{TDPCMP} For any $v\in \mathfrak U_{\mathsf{m}}$ there exists a sequence of piecewise constant policies $\{v_m\}_{m}$ such that \begin{equation}\label{BorkarTopology3} \lim_{m\to\infty}\int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(x,t)\int_{\mathbb{U}}g(x,t,\zeta)v_{m}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t = \int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(x,t)\int_{\mathbb{U}}g(x,t,\zeta)v(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t \end{equation} for all $f\in L^1({\mathds{R}^{d}}\times [0, \infty))\cap L^2({\mathds{R}^{d}}\times [0, \infty))$ and $g\in \cC_b({\mathds{R}^{d}}\times [0, \infty)\times \mathbb{U})$ \,. \end{theorem} \begin{proof} Let $\hat{\sB}_{0} = \emptyset$ and $\hat{\sB}_{n} = \sB_n\times [0, n)$. Then, define $\hat{D}_{n} = \hat{\sB}_{n}\setminus \hat{\sB}_{n-1}$ for $n\in \mathds{N}$\,. Now, it is clear that ${\mathds{R}^{d}}\times [0,\infty) = \cup_{n=1}^{\infty} \hat{D}_{n}$\,. Note that $\bar{v}_{n}\,:=\, v\arrowvert_{\hat{D}_n} : \hat{D}_n \to \pV$ is a measurable map.
As in Theorem~\ref{TDPCP}, by Lusin's theorem and Tietze's extension theorem, for any $\epsilon_n > 0$ there exist a compact set $\hat{K}_{n}^{\epsilon_n}\subset \hat{D}_n$ and a continuous function $\bar{v}_{n}^{\epsilon_n}: \hat{D}_n \to \pV$ such that $ \bar{v}_{n}^{\epsilon_n}\equiv \bar{v}_{n}$ on $\hat{K}_{n}^{\epsilon_n}$ and $\arrowvert(\hat{D}_n\setminus \hat{K}_{n}^{\epsilon_n}) \arrowvert < \epsilon_n$\,. Also, as in Theorem~\ref{TDPCP}, since $(\pV, d_{\mathscr{P}})$ is compact, for each $m\in \mathds{N}$ there exist a finite set $\widehat{\Lambda}_{m} = \{\mu_{m,1}, \mu_{m,2}, \dots , \mu_{m, k_m}\}$ and a quantizer $\widehat{Q}_{m}: \pV \to \widehat{\Lambda}_{m}$ which induces a partition $\{\widehat{U}_{m,i}\}_{i=1}^{k_m}$ of the space $\pV$\,. Now define $\hat{D}_{n,i}^m = (\bar{v}_{n}^{\epsilon_n})^{-1}(\widehat{U}_{m,i})$. It is easy to see that $\hat{D}_{n} = \cup_{i=1}^{k_m} \hat{D}_{n,i}^m$\,. Define \begin{equation*} \bar{v}_{n,m}^{\epsilon_n}(x,t) := \sum_{i=1}^{k_m} \mu_{m,i}\Ind_{\{\hat{D}_{n,i}^m\}}(x,t)\quad\text{for all}\quad (x,t)\in \hat{D}_n\,\,\,\text{and}\,\,\,m\in \mathds{N}\,. \end{equation*} Hence, as in the proof of Theorem~\ref{TDPCP} (see Step~$1$), for any $\hat{f}\in L^1({\mathds{R}^{d}}\times [0, \infty))\cap L^2({\mathds{R}^{d}}\times [0, \infty))$ and $\hat{g}\in \cC_b(\mathbb{U})$, there exists a positive constant $\bar{M}_0$ (depending on $\hat{f}, \hat{g}$ and $\epsilon_n$) such that \begin{align}\label{EBTM2} \arrowvert \int_{\hat{D}_n}\hat{f}(x,t) \int_{\mathbb{U}} \hat{g}(\zeta)\bar{v}_{n}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t - \int_{\hat{D}_n}\hat{f}(x,t)\int_{\mathbb{U}} \hat{g}(\zeta)\bar{v}_{n,m}^{\epsilon_n}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t\arrowvert \leq \epsilon_n\,, \end{align} for all $m\geq \bar{M}_0$\,.
Now, for any given $\epsilon > 0$, define \begin{equation}\label{EBTM3} \Tilde{v}_{m}^{\epsilon} := \sum_{n=1}^{\infty} \bar{v}_{n,m}^{\epsilon_n} \quad \text{for}\,\,\, m\in \mathds{N}\,. \end{equation} Since $\hat{f}\in\Lp^1({\mathds{R}^{d}}\times [0, \infty))$, there exists $N_0 \in \mathds{N}$ such that $\int_{\hat{\sB}_{N_0}^c}|\hat{f}(x,t)|\mathrm{d} x \mathrm{d} t < \frac{\epsilon}{4\|\hat{g}\|_{\infty}}$\,. Then, closely mimicking the argument of Theorem~\ref{TDPCP} (see Step~$2$), we have that for each $\epsilon >0$ there exists a positive constant $\hat{M}_0$ (depending on $\hat{f}, \hat{g}, \epsilon$) such that for all $m\geq \hat{M}_0$ \begin{equation}\label{EBTM4} \arrowvert \int_{[0, \infty)}\int_{{\mathds{R}^{d}}}\hat{f}(x,t)\int_{\mathbb{U}} \hat{g}(\zeta)v(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t - \int_{[0, \infty)}\int_{{\mathds{R}^{d}}}\hat{f}(x,t)\int_{\mathbb{U}} \hat{g}(\zeta)\Tilde{v}_{m}^{\epsilon}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t\arrowvert \leq \epsilon\,. \end{equation} Let $\{\hat{f}_k\}_{k\in\mathds{N}}$ and $\{h_{j}\}_{j\in\mathds{N}}$ be countable dense sets in $\Lp^1({\mathds{R}^{d}}\times [0, \infty))$ and $\cC(\mathbb{U})$, respectively\,. Suppose that $f\in\Lp^{1}({\mathds{R}^{d}}\times [0, \infty))\cap \Lp^{2}({\mathds{R}^{d}}\times [0, \infty))$ and $g\in\cC_{b}({\mathds{R}^{d}}\times [0, \infty)\times \mathbb{U})$\,. Since $f\in \Lp^{1}({\mathds{R}^{d}}\times [0, \infty))$, for given $\epsilon > 0$ there exists $N_{1}\in \mathds{N}$ such that $\int_{\hat{\sB}_{N_1}^c} |f(x,t)|\mathrm{d} x \mathrm{d} t\leq \frac{\epsilon}{4\|g\|_{\infty}}$\,. We know that in $\cC_b(\bar{\hat{\sB}}_{N_1}\times \mathbb{U})$ the functions of the form $\{\sum_{i=1}^{m} r_{i}(x,t)p_i(\zeta)\}_{m\in\mathds{N}}$ form an algebra which contains the constants, where $r_i\in \cC(\bar{\hat{\sB}}_{N_1})$ and $p_i \in \cC(\mathbb{U})$\,.
Thus, by the Stone--Weierstrass theorem, there exists $\hat{m}$ (large enough) such that \begin{equation}\label{EBTM5} \sup_{\hat{\sB}_{N_1}\times \mathbb{U}} |g(x,t,\zeta) - \sum_{i=1}^{\hat{m}} r_{i}(x,t)p_i(\zeta)| \leq \frac{\epsilon}{24\|f\|_{\Lp^1({\mathds{R}^{d}}\times [0, \infty))}}\,. \end{equation} Since $p_i \in \cC(\mathbb{U})$, one can choose $h_{j(i)} \in \cC(\mathbb{U})$ such that \begin{equation}\label{EBTM6} \sup_{\zeta\in \mathbb{U}} |p_i(\zeta) - h_{j(i)}(\zeta)| \leq \frac{\epsilon}{24\|f\|_{\Lp^1({\mathds{R}^{d}}\times [0, \infty))}\|r_{i}\|_{\infty}}\,. \end{equation} Also, since $fr_i\in \Lp^1({\mathds{R}^{d}}\times [0, \infty))$, there exists $\hat{f}_{k(i)}$ such that \begin{equation}\label{EBTM7} \int_{\hat{\sB}_{N_1}} |f(x,t)r_i(x,t) - \hat{f}_{k(i)}(x,t)|\mathrm{d} x \mathrm{d} t \leq \frac{\epsilon}{24\|f\|_{\Lp^1({\mathds{R}^{d}}\times [0, \infty))}\|h_{i}\|_{\infty}}\,. \end{equation} Thus, in view of \cref{EBTM5}, \cref{EBTM6}, \cref{EBTM7}, following the steps of Theorem~\ref{TDPCP} (see Step~$3$) we conclude that \begin{align*} \arrowvert \int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(x,t)\int_{\mathbb{U}} g(x,t,\zeta)v(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t - \int_{0}^{\infty}\int_{{\mathds{R}^{d}}}f(x,t)\int_{\mathbb{U}} g(x,t,\zeta)\Tilde{v}_{m}^{\epsilon}(x,t)(\mathrm{d} \zeta)\mathrm{d} x \mathrm{d} t\arrowvert \leq \epsilon\,, \end{align*} for all $m\geq \hat{M}_1$, for some positive constant $\hat{M}_1$\,. This completes the proof of the theorem\,. \end{proof} \subsection*{Near Optimality of Piecewise Constant Policies for Finite Horizon Cost} Now, from Theorem~\ref{TDPCMP} and Theorem~\ref{TContFHC}, we have the following near-optimality results\,. \begin{theorem}\label{TFiniteOptApprox1} Suppose that Assumptions (A1), (A3) and (B1) hold.
Then for any given $\epsilon>0$ there exists a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{m}}$ such that \begin{equation}\label{TFiniteOptApprox1A} \cJ_{T}(x, \bar{v}_{\epsilon}^*) \leq \cJ_{T}^* + \epsilon \quad\text{for all} \quad x\in{\mathds{R}^{d}}\,. \end{equation} \end{theorem} \begin{proof} From our previous discussion, we know that there exists $v^*\in \mathfrak U_{\mathsf{m}}$ such that $\cJ_{T}(x, v^*) = \cJ_{T}^*$\,. Since the space of piecewise constant policies is dense in $\mathfrak U_{\mathsf{m}}$ (see Theorem~\ref{TDPCMP}) and the map $v\mapsto \cJ_{T}(x, v)$ is continuous on $\mathfrak U_{\mathsf{m}}$ (see Theorem~\ref{TContFHC}), for any given $\epsilon>0$ one can find a piecewise constant policy $\bar{v}_{\epsilon}^* \in \mathfrak U_{\mathsf{m}}$ such that \cref{TFiniteOptApprox1A} holds\,. \end{proof} \begin{remark} In view of the existence results as in \cite[Chapter~4]{LSU67-book}, in obtaining the near optimality of piecewise constant Markov policies for finite horizon costs, one can relax the uniform boundedness assumption (B1); in particular, under (A1)-(A3) we can deduce similar results\,. This extends the results of \cite{KD92}, \cite{HK-02A}, \cite{RF-16A} to a more general control model\,. \end{remark} \section*{Conclusion} We studied regularity properties of the induced cost (under several criteria) on a controlled diffusion process with respect to a control policy space defined by Borkar \cite{Bor89}. We then studied implications of these properties on existence and, in particular, approximations for optimal controlled diffusions. Via such a unified approach, we arrived at very general approximation results for optimal control policies by quantized (finite action / piecewise constant) stationary control policies for a general class of controlled diffusions in the whole space ${\mathds{R}^{d}}$, as well as time-discretizations for the criteria with finite horizons.
\section{Introduction} \label{s:intro} Patients with chronic kidney disease (CKD) are at increased risk of end stage renal disease (ESRD). Accurate prediction of the timing of ESRD is of great importance in clinical research and practice to facilitate preparation for renal replacement therapy and individualize clinical decisions \citep{tangri2011predictive}. The typical ESRD risk equations are ``static'' prediction models in the sense that they are developed from survival regression models that relate the predictors at an earlier time point, such as baseline, to the time of ESRD \citep{echouffo2012risk, greene2017static}. Longitudinal data on those biomarkers between baseline and the terminal event are often available and potentially informative about the disease progression, but they are not used in prediction model development. In the statistical literature, the prediction of the risk of clinical events using longitudinal data is often referred to as dynamic prediction in the sense that the prediction can be updated with accumulating longitudinal data. Important fundamental work has been published in the last decade \citep{van2011dynamic, van2007dynamic, van2008dynamic, zheng2005partly, rizopoulos2011dynamic, proust2009development}. There are a number of challenges when this methodology is applied to the prediction of ESRD among the CKD population. First, CKD patients have an increased chance of mortality before reaching ESRD. Proper adjustment for competing risks is often needed in CKD studies \citep{noordzij2013we}. Second, previous literature has identified a large number of risk factors, including multiple biomarkers that are known to be causally associated with ESRD \citep{tangri2011predictive}. Put in the longitudinal context, this requires that the dynamic prediction model accommodate multiple biomarkers with tractable computation. Some biomarkers, such as the estimated GFR, have diverse nonlinear progression trajectories \citep{li2012longitudinal, li2013within}.
This feature could add to the complexity of the statistical analysis if modeling subject-specific longitudinal trajectories is needed. Third, it may take many years before a CKD patient reaches ESRD or death. The strength of association between biomarkers and the disease outcome may vary over time, leading to time-varying effects. Fourth, it is common that patients do not always follow a pre-specified clinical visits schedule. Even if the visit times are non-informative in the sense that they are not related to the health condition of the patients, the irregularly spaced and unsynchronized biomarker measurement times pose a challenge to the development of dynamic prediction model, as elucidated below. On the topic of dynamic prediction with competing risks data, \citet{rue2017bayesian} and \citet{andrinopoulou2017combined} modeled the joint distribution of longitudinal data and competing risks with shared random effect models, and estimated the parameters with Markov chain Monte Carlo. However, when a large number of random effects are needed to accommodate multiple and possibly nonlinear longitudinal trajectories per subject, fitting the joint model is computationally infeasible \citep{hickey2016joint}. \citet{cortese2010competing}, \citet{nicolaie2013adynamic} and \citet{nicolaie2013bdynamic} studied the problem using an alternative, computationally simpler approach called landmark modeling \citep{van2011dynamic}. Motivated by the specific needs of CKD research, our proposed methodology in this paper is different from the statistical literature above in some important aspects. First, the typical landmark approach involves pre-specifying a number of landmark time points distributed over the follow-up period, creating a landmark dataset at each landmark time point that consists of at-risk subjects and their predictor variables and time-to-event, and fitting the model to the stacked landmark datasets \citep{van2011dynamic, nicolaie2013adynamic}. 
The predictor variable is not always measured at the landmark times due to irregularly spaced and unsynchronized measurement times. Imputing the unknown value by the last or closest measurement is problematic because that measurement could be years apart and because the progression of CKD includes both chronic periods, when biomarkers change slowly, and acute episodes, when biomarkers change more quickly. Our proposed method does not require pre-specification of the landmark times, which is important given that there is currently no guideline in the literature on how to set the number and locations of the landmark times. It can also accommodate the irregularly spaced observational times without explicit imputation. Second, the proposed method estimates the time-varying model parameters semiparametrically without imposing a parametric shape \citep{nicolaie2013adynamic, van2011dynamic}. Lastly, our approach is embedded within the framework of the Fine-Gray sub-distribution hazard model \citep{fine1999proportional}, while the previous works were based on the cause-specific hazard model \citep{nicolaie2013adynamic}, pseudo-observations \citep{nicolaie2013bdynamic} and a multi-state model \citep{cortese2010competing}. The Fine-Gray model imposes a parsimonious relationship between predictors and the cumulative incidence function (CIF) of ESRD without a separate model for death, which is difficult to establish due to the heterogeneity in the causes of death. \section{The landmark dynamic prediction model for competing risks data}\label{sec:cmprsk_mod} \subsection{The notation and data structure} Let $T_{i}$ and $C_{i}$ be the time to the event of interest and the time of censoring for subject $i$, and $\epsilon_{i}\in \{ 1,\ldots, K \}$ be the $K$ causes of the event. We observe the follow-up time $\tilde{T_{i}}=\textrm{min}(T_{i},C_{i})$, the censoring indicator $\Delta_{i}=1(T_{i}\le C_{i})$, and the event type $\Delta_{i}\epsilon_{i}\in \{ 0,\ldots,K \}$.
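As an elementary illustration of this data structure, the observed triplet can be generated from the latent event time, censoring time and cause; the sketch below (hypothetical names) is only for illustration.

```python
def observe(T, C, eps):
    """Map the latent event time T, censoring time C and cause eps to
    the observed triplet (follow-up time, censoring indicator,
    event type); event type 0 denotes a censored observation."""
    T_tilde = min(T, C)
    Delta = 1 if T <= C else 0
    return T_tilde, Delta, Delta * eps

# K = 2 causes: 1 = ESRD (event of interest), 2 = death (competing).
print(observe(4.2, 6.0, 1))   # ESRD observed at 4.2 years
print(observe(4.2, 3.0, 2))   # censored at 3.0 years before any event
```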
Without loss of generality, we assume $K=2$ throughout this paper. In the context of the data application, event 1 denotes ESRD, the clinical event of interest, and event 2 denotes death, the competing event. Let $\boldsymbol{Y}_{i}=[ \boldsymbol{Y}_{i1},\boldsymbol{Y}_{i2},\dots,\boldsymbol{Y}_{iq} ]$ denote the $n_{i}\times q$ matrix of subject $i$, with $n_{i}$ repeated measurements for each of the $q$ covariates. This notation covers both time-dependent (longitudinal) and time-independent (baseline) covariates. The repeated measurements are made at time points $\boldsymbol{t}_{i}=\{ t_{i1},t_{i2},\dots t_{in_{i}} \}$ ($t_{ij}<\tilde{T}_{i}$), which are not necessarily the same for all subjects. At any follow-up time $u$ ($u<\tilde{T}_{i}$), we denote by $\boldsymbol{\mathcal{H}}_{i}(u)$ the observed covariate process within a history window $[u-\tau_{2},u]$, where $\boldsymbol{\mathcal{H}}_{i}(u)=\{\boldsymbol{Y}_{i}(t_{ij}), t_{ij} ~|~ u-\tau_{2}\le t_{ij}\le u ;\ j=1,\ldots,n_{i} \}$. We observe independent and identically distributed training data $\mathcal{D}_{n}=\{\tilde{T}_{i},\Delta_{i}\epsilon_{i},\boldsymbol{Y}_{i}, \boldsymbol{t}_i,i=1,\ldots,n\}$, from which the dynamic prediction model is to be developed. Our interest is to estimate, for a future individual in the same population as the training data, indexed by subscript $_{o}$, the probability of ESRD in the next $\tau_{1}$ years (called prediction horizon) given survival up to time $s$ and the covariate information in the history window: $\pi(\tau_{1}\vert s,\boldsymbol{\mathcal{H}}_{o}(s))=P(T_{o}\in(s,s+\tau_{1}],\epsilon_{o}=1\vert\tilde{T}_{o}>s,\boldsymbol{\mathcal{H}}_{o}(s))$. We assume that the distribution of $\boldsymbol{t}_{i}$ is non-informative in the sense that it is independent of $\boldsymbol{Y}_{i}$ and $T_i$. We assume independent censoring in the sense that $C$ is conditionally independent of $T$ and $\epsilon$ given the baseline covariates. 
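The extraction of the history window $\boldsymbol{\mathcal{H}}_{i}(u)$ from a subject's longitudinal record is mechanical; a schematic sketch follows (the variable names and example values are hypothetical).

```python
def history_window(t_i, Y_i, u, tau2):
    """Return H_i(u): the measurement times t_ij in [u - tau2, u]
    together with the corresponding covariate rows (one per visit)."""
    return [(t, y) for t, y in zip(t_i, Y_i) if u - tau2 <= t <= u]

# One subject, 4 visits, q = 2 covariates per visit (e.g. eGFR and
# serum creatinine; the numbers are made up for illustration).
t_i = [0.0, 0.8, 1.7, 2.9]
Y_i = [[60, 1.2], [55, 1.4], [51, 1.5], [44, 1.9]]
H = history_window(t_i, Y_i, u=3.0, tau2=2.0)
print(H)   # only the visits at 1.7 and 2.9 fall inside [1.0, 3.0]
```

Because the window is anchored at the current time $u$ rather than at pre-specified landmark times, irregularly spaced visits require no explicit imputation.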
In the following, we describe the construction of the landmark dataset. We use the notation $T(s) = T - s$ to denote the residual lifetime when a generic subject is at risk at time $s$. For subject $i$ in the development dataset, we define $T_{ij}=T_{i}-t_{ij}$ and $C_{ij}=C_{i}-t_{ij}$ as the subject-specific residual times to event and censoring, starting from $t_{ij}$. For prediction up to the horizon $\tau_{1},$ we can artificially censor the residual times at $\tau_{1}$, i.e., we observe $\tilde{T}_{ij}=\textrm{min}(T_{ij},C_{ij},\tau_{1})$ and the event indicator $\tilde{\delta}_{ij}=1\big(T_{ij}\le\textrm{min}(C_{ij},\tau_{1})\big)\times\epsilon_{i}$, where $1(\cdot)$ is the indicator function. The artificial censoring helps to reduce the chance of misspecifying certain model assumptions \citep{van2011dynamic}. Throughout this paper, we focus on modeling the relationship between $T(s)$ and $\boldsymbol{\mathcal{H}}(s)$ at any landmark time $s$. In the model development dataset, $T(s)$ and $\boldsymbol{\mathcal{H}}(s)$ are observed at the longitudinal measurement times $t_{ij}$ ($i=1,2,...,n$, $j=1,2,...,n_i$), leading to the observed outcome and predictor data $\{ \tilde{T}_{ij},\tilde{\delta}_{ij},\boldsymbol{\mathcal{H}}_{i}(t_{ij}) \}$. Therefore, we also call the $t_{ij}$ landmark times, since they are the starting times of the residual lifetime outcomes. A new prediction is often made when new measurements become available, i.e., at a new $t_{ij}$ of that subject. From this perspective, the prediction time is also called a landmark time. \subsection{Sub-distribution hazard model with baseline covariates} { \color{black} We first briefly review the sub-distribution hazard (SDH) model for competing risks \citep{fine1999proportional} with baseline covariates $\boldsymbol{X}$. For prediction, the quantity of interest is the cumulative incidence function (CIF) given $\boldsymbol{X}$: $\pi_{1}(t^*;\boldsymbol{X})=P(T\le t^*,\epsilon=1\vert\boldsymbol{X})$.
Under Fine and Gray's formulation, this CIF is modeled as \begin{equation} P(T\le t^*,\epsilon=1\vert\boldsymbol{X}) = 1 - \textrm{exp}\left( -\int_0^{t^*} \lambda_1(t|\boldsymbol{X}) dt \right) \label{eq:baseFG_CIF} \end{equation} with $\lambda_1(t|\boldsymbol{X}) = \lambda_{10}(t)\textrm{exp}(\boldsymbol{\alpha}^{T}\boldsymbol{X})$, where $\lambda_{10}(t)$ can be any non-negative function of time $t > 0$ and $\boldsymbol{\alpha}$ is a real vector. Fine and Gray further developed an interpretation of the $\lambda_1(t|\boldsymbol{X})$ function. They showed that it can be interpreted as a sub-distribution hazard, in the sense that $\lambda_{1}(t;\boldsymbol{X})=\textrm{lim}_{\Delta t \rightarrow 0}\dfrac{1}{\Delta t}P(t \le T \le t+\Delta t,\epsilon=1\vert\{T\ge t\}\cup\{T\le t\cap\epsilon\ne 1\},\boldsymbol{X})=-\dfrac{d\textrm{log}(1-\pi_{1}(t;\boldsymbol{X}))}{dt}$. Such a quantity can be viewed as the hazard function of the improper random variable $1(\epsilon=1)\times T+1(\epsilon\ne1)\times\infty$. This interpretation also helps the development of an estimation procedure that is analogous to that of the Cox model. As far as prediction is concerned, characterizing the bilateral relationship between the time to event $T$ ($\epsilon=1$) and the covariate vector $\boldsymbol{X}$ is all that is needed. The sub-distribution hazard value at a time $t$ ($t>0$) is not of direct relevance to this prediction. The sub-distribution hazard function $\lambda_1(t|\boldsymbol{X})$ serves as the internal machinery that helps the estimation of model (\ref{eq:baseFG_CIF}). This is a key observation that motivates the working model in the next subsection.
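As a numerical illustration of relation (\ref{eq:baseFG_CIF}), the CIF can be evaluated once a baseline SDH is specified. The sketch below assumes, purely for illustration, a cumulative baseline SDH $\Lambda_{10}(t) = p(1-e^{-t})$ with finite total mass $p$, so that the CIF plateaus strictly below one, reflecting the improper random variable above; it is not the semiparametric estimation procedure of Fine and Gray, and all names are hypothetical.

```python
import math

def cif(t_star, lin_pred, p=0.6):
    """P(T <= t*, eps = 1 | X) under a proportional SDH model with
    cumulative baseline SDH Lambda_10(t) = p * (1 - exp(-t)) and
    linear predictor lin_pred = alpha' X.  Because Lambda_10(inf) = p
    is finite, the CIF levels off strictly below 1."""
    Lambda1 = p * (1.0 - math.exp(-t_star)) * math.exp(lin_pred)
    return 1.0 - math.exp(-Lambda1)

# The CIF increases in t*, increases with the linear predictor, and
# plateaus below 1 as t* grows.
for t_star in (0.5, 1.0, 2.0, 10.0):
    print(t_star, cif(t_star, lin_pred=0.0))
```

The parametric baseline here is only a device to make the CIF computable in closed form; in the paper's setting $\lambda_{10}$ is left unspecified.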
} \subsection{Landmark proportional sub-distribution hazard model} { \color{black} By extending model (\ref{eq:baseFG_CIF}) to the context of dynamic prediction, we propose the following landmark proportional SDH model at landmark time $s$: \begin{equation} P(T(s)\le t^*,\epsilon=1\vert \boldsymbol{\mathcal{H}}(s), T > s ) = 1 - \textrm{exp}\left( -\int_0^{t^*} \lambda_1(t\vert \boldsymbol{\mathcal{H}}(s), s ) dt \right). \label{eq:working_model_generic} \end{equation} As the notation on the left-hand side of (\ref{eq:working_model_generic}) suggests, at any given landmark time $s$, this model is specified for those subjects still at risk ($T > s$) at that time. If we treat the given $s$ as a new baseline, then this model is equivalent to model (\ref{eq:baseFG_CIF}) specified for the residual lifetime $T(s)$ among the at-risk subjects at time $s$, given the predictor variables defined from $\boldsymbol{\mathcal{H}}(s)$. Since there are in theory infinitely many landmark times $s$, model (\ref{eq:working_model_generic}) is formulated under the working assumption that these models hold simultaneously. For a specific landmark dataset $\{\tilde{T}_{ij},\tilde{\delta}_{ij},\boldsymbol{\mathcal{H}}_{i}(t_{ij}),t_{ij};\ i=1,\ldots,n,j=1,\ldots,n_{i}\}$, this model implies that the bilateral relationship between each residual time to event $( \tilde{T}_{ij},\tilde{\delta}_{ij} )$ and the corresponding ``baseline'' predictor variables extracted from the history $\boldsymbol{\mathcal{H}}_{i}(t_{ij})$ satisfies model (\ref{eq:working_model_generic}) at the corresponding landmark time $s = t_{ij}$. 
To fit this model, we define a working SDH function as: \begin{equation} \lambda_{1}(t^{*}\vert\boldsymbol{\mathcal{H}}_{i}(t_{ij}),t_{ij})=\lambda_{10}(t^{*},t_{ij})\textrm{exp}\Big(\boldsymbol{\beta}^{T}(t_{ij})\boldsymbol{\tilde{Y}}_{i}(t_{ij})\Big),\ t^{*}\in(0,\tau_{1}],\label{eq:working_model} \end{equation} where $\lambda_{10}(t^{*},t_{ij})$ is a bivariate smooth baseline SDH function, defined on the scales of the residual lifetime $t^{*}\in(0,\tau_{1}]$ and the landmark time $t_{ij}$. We use $\boldsymbol{\tilde{Y}}_{i}(t_{ij})$ to denote the predictors at visit time $t_{ij}$, which are functions of the observed history $\boldsymbol{\mathcal{H}}_i(t_{ij})$. The time-varying coefficients $\boldsymbol{\beta}(\cdot)$ are assumed to be smooth functions so that the association can vary with the landmark time. For the bilateral relationship between $( \tilde{T}_{ij},\tilde{\delta}_{ij} )$ and $\boldsymbol{\tilde{Y}}_{i}(t_{ij})$, the corresponding value of the coefficient function is $\boldsymbol{\beta}(t_{ij})$. Our landmark dataset construction resembles that of the partly conditional model \citep{zheng2005partly}, which resets the follow-up time scale at each landmark time. From this perspective, the basic idea of the proposed methodology is more closely related to that model than to the landmark model of \citet{van2007dynamic}. However, besides the accommodation of competing risks outcomes, another difference between our approach and \citet{zheng2005partly} is that the time-varying coefficients are functions of the landmark time $t_{ij}$ instead of the derived follow-up time $t^{*}$. It therefore differs from the usual time-varying coefficient model in survival analysis that is commonly used to deal with non-proportional hazards \citep{cai2003local}. With the artificial censoring at $\tau_{1}$, the covariate effect is more likely to be constant over $t^* \in (0, \tau_1)$ (but still vary with $t_{ij}$), and the proportional sub-distribution hazards assumption is more likely to hold \citep{liu2016robust}. Model (\ref{eq:working_model}) is called a ``working'' sub-distribution hazard function because it is used to facilitate the model fitting with the estimating equations developed by \citet{fine1999proportional}. While it implies that a subject's residual sub-distribution hazard at landmark time $s$ is $ \lambda_{1}(t^{*} = 0 \vert \boldsymbol{\mathcal{H}}(s), s) $, it does not imply that this subject's sub-distribution hazard at time $s + t^*$ ($t^* > 0$), given the history $\boldsymbol{\mathcal{H}}(s)$, is still given by (\ref{eq:working_model}); the hazard at time $s + t^*$ depends on $\boldsymbol{\mathcal{H}}(s + t^*)$. In general, the hazard at time $s + t^*$ conditional on $\boldsymbol{\mathcal{H}}(s)$ depends on both the hazard at time $s + t^*$ conditional on $\boldsymbol{\mathcal{H}}(s+t^*)$ and the conditional distribution of the paths of the longitudinal covariates $\boldsymbol{\tilde{Y}}(u)$ ($u \in (s, s+t^*)$) given $\boldsymbol{\mathcal{H}}(s)$. This is elucidated by the concept of consistency \citep{Jewell1993}. Like other landmark (or partly conditional) models, the proposed model has not been proven to be a consistent prediction model. However, this working model can still be a useful prediction tool as long as (\ref{eq:working_model_generic}) provides a good approximation to the bilateral relationship between $( \tilde{T}_{ij},\tilde{\delta}_{ij} )$ and $\boldsymbol{\tilde{Y}}_{i}(t_{ij})$.} \section{Model estimation and dynamic prediction of the CIF }\label{sec:mod_est} For estimation, we extend the kernel approach of \citet{li2017dynamic} to the competing risks context and formalize the idea of borrowing information from lagging covariates \citep{andersen2003attenuation,cao2015analysis}. 
Assume that $\boldsymbol{\beta}(\cdot)$ has a continuous second derivative in a neighborhood of $s$. By local linear approximation, $\boldsymbol{\beta}(t_{ij})\approx\boldsymbol{\beta}(s)+\boldsymbol{\beta}'(s)(t_{ij}-s)$ for subject-specific time points $t_{ij}$ around $s$. The landmark dataset $\mathcal{L}_{m}$ consists of clustered multivariate time-to-event data with competing events, where the $n_{i}$ records from the same subject are correlated. For clustered competing risks data \citep{zhou2012competing}, we define the counting process for event 1 as $N_{ij}(t^{*})=1(t_{ij}\le T_{i}\le t_{ij}+t^{*},\tilde{\delta}_{ij}=1)$ and the at-risk process $R_{ij}(t^{*})=1-N_{ij}(t^{*}-)=1(T_{i}>t_{ij}+t^{*})+1(t_{ij}\le T_{i}\le t_{ij}+t^{*},\tilde{\delta}_{ij}\ne1)$. Based on a local ``working independence'' partial likelihood function \citep{zhou2012competing}, for any given landmark point $s$, we can estimate the parameters $\boldsymbol{\beta}(s)$ by solving a kernel-weighted estimating equation that borrows biomarker measurements from the neighboring time points $\{ t_{ij}\in(s-h,s+h)\}$: \begin{equation} \sum_{i=1}^{n}\sum_{j=1}^{n_{i}}K_{h}(t_{ij}-s)\int_{0}^{\tau_{1}}w_{ij}(t)\cdot\Big\{\boldsymbol{\tilde{Z}}_{ij}(1,t_{ij}-s)-\boldsymbol{\bar{Z}}(\boldsymbol{\beta}(s),t)\Big\}\cdot\textrm{d}N_{ij}(t)=\boldsymbol{0}.\label{eq:est_eq} \end{equation} Here $K(\cdot)$ is a kernel function with bounded support on $[-1,1]$, $K_{h}(x)=h^{-1}K(x/h)$, and $h$ is the bandwidth. $\boldsymbol{\tilde{Z}}_{ij}(1,t_{ij}-s)=\boldsymbol{\tilde{Y}}_{i}(t_{ij})\otimes(1,t_{ij}-s)$, with $\otimes$ denoting the Kronecker product. 
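As a concrete sketch of two ingredients of the estimating equation above, the following shows a bounded-support kernel weight $K_{h}$ (the Epanechnikov kernel, an illustrative choice; any bounded-support kernel qualifies) and the local-linear design vector $\boldsymbol{\tilde{Z}}_{ij}(1,t_{ij}-s)$.

```python
import numpy as np

def epanechnikov_h(x, h):
    """K_h(x) = K(x/h)/h with the Epanechnikov kernel K(u) = 0.75(1 - u^2)
    on [-1, 1]; records with |t_ij - s| > h receive zero weight."""
    u = np.asarray(x, dtype=float) / h
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2) / h, 0.0)

def local_linear_design(y_ij, t_ij, s):
    """Z~_ij(1, t_ij - s) = Y_i(t_ij) kron (1, t_ij - s): each predictor
    gets an intercept slot (for beta(s)) and a slope slot (for beta'(s))."""
    return np.kron(np.asarray(y_ij, dtype=float), np.array([1.0, t_ij - s]))
```

For a record with two predictors $(2, 3)$ observed at $t_{ij}=1.5$ and landmark $s=1$, the design vector is $(2,\,1,\,3,\,1.5)$, i.e., each predictor paired with its $(t_{ij}-s)$-scaled copy.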
In the above, $\boldsymbol{\bar{Z}}(\boldsymbol{\beta}(s),t)=\dfrac{\hat{\boldsymbol{S}}^{(1)}(\boldsymbol{\beta}(s),t)}{\hat{\boldsymbol{S}}^{(0)}(\boldsymbol{\beta}(s),t)}$, and \begin{align} \hat{\boldsymbol{S}}^{(r)}(\boldsymbol{\beta}(s),t)=&n^{-1}\sum_{l=1}^{n}\sum_{m=1}^{n_{l}}K_{h}(t_{lm}-s)w_{lm}(t)R_{lm}(t)\times\boldsymbol{\tilde{Z}}_{lm}(1,t_{lm}-s)^{\otimes r}\nonumber\\ &\times\textrm{exp}\Big(\boldsymbol{b}^{T}(s)\boldsymbol{\tilde{Z}}_{lm}(1,t_{lm}-s)\Big), \end{align} where $\boldsymbol{b}(s)=\{\boldsymbol{b}_{0}(s),\boldsymbol{b}_{1}(s)\}=\{\boldsymbol{\beta}(s),\boldsymbol{\beta'}(s)\}$, $\tilde{\boldsymbol{Z}}^{\otimes0}=1$ and $\tilde{\boldsymbol{Z}}^{\otimes1}=\tilde{\boldsymbol{Z}}$. The coefficient $\boldsymbol{\beta}(s)$ is estimated at each landmark time $s$ by $\boldsymbol{\hat{\beta}}(s)=\boldsymbol{\hat{b}}_{0}(s)$. The variance of $\hat{\boldsymbol{\beta}}(s)$ can be estimated by the bootstrap, which involves randomly sampling $n$ subjects from the original dataset with replacement, computing the point estimator from each bootstrap dataset, and calculating the sample variance of the point estimators across all bootstrap datasets \citep{bootstrap}. The $w_{ij}(\cdot)$ in (\ref{eq:est_eq}) denotes the inverse probability of censoring weight for competing events, modified from \citet{fine1999proportional}: \[ w_{ij}(t^{*})=1\big(C_{ij}\ge T_{ij}\wedge t^{*}\big)\frac{G(t^{*}\vert s)}{G\big(T_{ij}\wedge t^{*}\vert s\big)}, \] where $G(t^{*}\vert s)=P(C_{ij}\ge t^{*}\vert s)$ is the distribution of the residual censoring time at landmark $s$, and $\wedge$ denotes the minimum of two values. 
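The subject-level bootstrap for the variance of $\hat{\boldsymbol{\beta}}(s)$ can be sketched as follows. The estimator interface and names are hypothetical; the essential point is that resampling is done at the subject level, keeping all records of a sampled subject together so that within-subject correlation is preserved.

```python
import numpy as np

def bootstrap_se(subject_ids, estimator, data, n_boot=200, seed=0):
    """Subject-level bootstrap standard error for a scalar point estimator
    (e.g., one component of beta_hat(s)).  `estimator(data, ids)` must
    recompute the estimate using all records of the resampled subjects.
    The interface is an illustrative assumption, not the paper's code."""
    rng = np.random.default_rng(seed)
    ids = np.asarray(subject_ids)
    reps = [estimator(data, rng.choice(ids, size=len(ids), replace=True))
            for _ in range(n_boot)]
    # sample standard deviation of the bootstrap replicates
    return np.std(reps, axis=0, ddof=1)
```

As a toy check, bootstrapping the sample mean of the values $1,\ldots,20$ yields a standard error close to the theoretical $\textrm{sd}/\sqrt{n}\approx 1.3$.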
We use a kernel-weighted Kaplan-Meier estimator for the residual censoring distribution, estimated from the residual times to censoring around $s$: \[ \widehat{G}(t^{*}\vert s)=\prod_{\zeta\in\Omega,\zeta\le t^{*}}\Big\{1-\frac{\sum_{l,m}K_{h}(t_{lm}-s)\cdot1(\tilde{C}_{lm}=\zeta,\tilde{\delta}_{lm}=0)}{\sum_{l,m}K_{h}(t_{lm}-s)\cdot1(\tilde{C}_{lm}\ge\zeta)}\Big\}, \] where $\tilde{C}_{lm}$ denotes the observed residual censoring time and $\Omega$ is the set of its distinct observed values. Once we obtain the estimate of $\boldsymbol{\beta}(s)$, the baseline cumulative SDH function at time $s$ can be estimated by plugging in $\hat{\boldsymbol{\beta}}(s)$: \[ \hat{\Lambda}_{10}(t^{*},s)=\dfrac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n_{i}}K_{h}(t_{ij}-s)\int_{0}^{t^{*}}\frac{1}{\hat{\boldsymbol{S}}^{(0)}(\hat{\boldsymbol{\beta}}(s),t)}\hat{w}_{ij}(t)\textrm{d}N_{ij}(t). \] The conditional CIF for any future subject $o$ can be estimated as \begin{align} \hat{\pi}_{1}(t^{*}\vert s,\boldsymbol{\mathcal{H}}_{o}(s)) &= \hat{P}(s<T_{o}\le s+t^{*},\epsilon_{o}=1\vert\tilde{T}_{o}>s,\boldsymbol{\mathcal{H}}_{o}(s))\nonumber \\ &= 1-\textrm{exp}\Big(-\hat{\Lambda}_{10}(t^{*},s)\times\textrm{exp}\Big(\hat{\boldsymbol{\beta}}^{T}(s)\boldsymbol{\tilde{Y}}_{o}(s)\Big)\Big). \end{align} \section{Quantifying the dynamic predictive accuracy}\label{sec:prederr} In this section, we study two predictive accuracy measures, the time-dependent receiver operating characteristic (ROC) curve, in particular the area under the ROC curve (AUC), and the Brier score (BS). In the dynamic prediction framework, the time-dependent predictive accuracy measures are functions of two time scales, the landmark time $s$ and the prediction horizon $\tau_{1}.$ The following procedures for estimating the sensitivity, specificity, and BS were modified from the non-parametric kernel-weighted approach of \citet{wu2017quantifying} for competing risks data. 
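The kernel-weighted Kaplan-Meier estimator $\widehat{G}(t^{*}\vert s)$ above admits a short sketch. Inputs are flattened record-level arrays over all $(l,m)$, and the Epanechnikov kernel is again an illustrative choice of bounded-support kernel.

```python
import numpy as np

def kernel_km_censoring(t_lm, C_tilde, delta_tilde, s, h, t_star):
    """Kernel-weighted Kaplan-Meier estimate G_hat(t*|s) of the residual
    censoring distribution near landmark s (a sketch; argument names are
    ours).  delta_tilde == 0 marks a censored residual time."""
    t_lm = np.asarray(t_lm, dtype=float)
    C_tilde = np.asarray(C_tilde, dtype=float)
    delta_tilde = np.asarray(delta_tilde)
    u = (t_lm - s) / h
    k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2) / h, 0.0)
    G = 1.0
    # product over distinct observed residual censoring times zeta <= t*
    for zeta in np.unique(C_tilde[(delta_tilde == 0) & (C_tilde <= t_star)]):
        d = np.sum(k * ((C_tilde == zeta) & (delta_tilde == 0)))
        r = np.sum(k * (C_tilde >= zeta))
        if r > 0:
            G *= 1.0 - d / r
    return G
```

With all records observed exactly at the landmark (equal kernel weights), the estimator reduces to the ordinary Kaplan-Meier estimator with censoring treated as the "event".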
\subsection{The dynamic time-dependent ROC curve and AUC} At any landmark time $s$, we want to evaluate how well the risk score, i.e., the estimated CIF, discriminates between subjects with the event of interest in the window $(s,s+\tau_{1}]$ and those without. Any subject at risk at time $s$ who experiences the main event within the time interval $(s,s+\tau_{1}]$ is defined as a case: $D^{+}(s,\tau_{1})=\{i: s<T_{i}\le s+\tau_{1},\epsilon_{i}=1\}$. A subject who is event-free at $s+\tau_{1}$ is defined as a control: $D^{-}(s,\tau_{1})=\{i: T_{i}>s+\tau_{1}\}$. An alternative definition of a control uses the complementary set $\bar{D}^{+}(s,\tau_{1})=\{i: (s<T_{i}\le s+\tau_{1},\epsilon_{i}\ne1)\cup (T_{i}>s+\tau_{1})\}$, which includes subjects who experience a competing event within the time interval $(s,s+\tau_{1}]$ or remain event-free at $s+\tau_{1}$. To illustrate the ideas, we present the estimators for the former in this subsection; a similar extension can be made for the latter. For simplicity, we use the notation $U(\tau_{1}\vert s)$ to denote the individual predicted CIF (i.e., the risk score). 
Given a threshold value $c\in(0,1)$, the time-dependent sensitivity and specificity functions are defined as $Se(c,s,\tau_{1})=P\Big(U(\tau_{1}\vert s)>c\vert D^{+}(s,\tau_{1})\Big)$ and $Sp(c,s,\tau_{1})=P\Big(U(\tau_{1}\vert s)\le c\vert D^{-}(s,\tau_{1})\Big).$ The estimators of sensitivity and specificity are \begin{align*} \widehat{Se}(c,s,\tau_{1}) & =\frac{\sum_{i\in\Re_{s}}\hat{W}_{1i}^{dyn}\cdot1(U_{i}(\tau_{1}\vert s)>c)}{\sum_{i\in\Re_{s}}\hat{W}_{1i}^{dyn}}\\ \widehat{Sp}(c,s,\tau_{1}) & =\frac{\sum_{i\in\Re_{s}}(1-\sum_{k=1}^{K}\hat{W}_{ki}^{dyn})\cdot1(U_{i}(\tau_{1}\vert s)\le c)}{\sum_{i\in\Re_{s}}(1-\sum_{k=1}^{K}\hat{W}_{ki}^{dyn})}, \end{align*} where $W_{1i}^{dyn}=P\Big(T_{i}(s)\in(0,\tau_{1}],\epsilon_{i}=1\vert\tilde{T_{i}}(s),\delta_{i},U_{i}\Big)=1(\tilde{\delta}_{ij}=0)\cdot\frac{F_{1}(\tau_{1}\vert U_{i},s)-F_{1}(\tilde{T}_{i}(s)\vert U_{i},s)}{S(\tilde{T}_{i}(s)\vert U_{i},s)}+1(\tilde{\delta}_{ij}=1)$, $T_{i}(s)=T_{i}-s$, $\tilde{T_{i}}(s)=\tilde{T}_{i}-s$, and $U_{i}$ is short for $U_{i}(\tau_1\vert s)$. $\Re_s$ is the risk set within the neighborhood of $s$, which includes the most recent record at $t_{ij}$ for each subject $i$: $\{i: \tilde{T}_{i}\ge s, \vert t_{ij}-s\vert \le \vert t_{ij^{'}}-s\vert, \forall j^{'}=1,2,\ldots,n_{i},t_{ij}\in (s-h,s+h)\}$. $F_1(x\vert U_{i}, s)=P(T_{i}(s)\le x, \epsilon_{i}=1\vert U_{i},s)$ and $S(x \vert U_{i},s)=P(T_i(s)\ge x \vert U_i,s)$. To estimate the conditional probability weight $W_{1i}^{dyn}$, we treat the at-risk dataset at landmark $s$ as a new baseline dataset. The time-dependent ROC curve is a plot of the sensitivity $Se(c,s,\tau_{1})$ against 1-specificity $1-Sp(c,s,\tau_{1})$, i.e., for $x\in[0,1]$, $\widehat{ROC}(x,s,\tau_{1})=\widehat{Se}(\widehat{Sp}^{-1}(1-x,s,\tau_{1}),s,\tau_{1})$. The AUC is estimated as $\widehat{AUC}(s,\tau_{1})=\int_{0}^{1}\widehat{ROC}(x,s,\tau_{1})\textrm{d}x$. 
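The weighted sensitivity/specificity estimators and the trapezoidal AUC can be sketched as follows, with $U_i$ the risk scores and $W_i$ the case weights over the risk set. As a simplifying assumption for the sketch, we take a single cause so that $\sum_{k}\hat{W}_{ki}^{dyn}$ reduces to $\hat{W}_{1i}^{dyn}$.

```python
import numpy as np

def weighted_se_sp_auc(U, W):
    """Weighted sensitivity, specificity, and AUC from risk scores U and
    case weights W (W_i estimating P(case | observed data)); a sketch of
    the estimators above with a threshold sweep over the observed scores."""
    U = np.asarray(U, dtype=float)
    W = np.asarray(W, dtype=float)
    # Descending thresholds make (FPR, TPR) nondecreasing along the sweep.
    thresholds = np.concatenate(([np.inf], np.sort(U)[::-1], [-np.inf]))
    se = np.array([np.sum(W * (U > c)) for c in thresholds]) / np.sum(W)
    sp = np.array([np.sum((1 - W) * (U <= c)) for c in thresholds]) / np.sum(1 - W)
    fpr, tpr = 1.0 - sp, se
    # trapezoidal area under the (FPR, TPR) curve; ties get half credit
    auc = float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))
    return se, sp, auc
```

With 0/1 weights and perfectly separated scores the AUC is exactly 1; fractional weights shade each subject between case and control, matching the weighted estimators above.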
\subsection{The dynamic time-dependent Brier score} The time-dependent BS under the dynamic competing risks framework is defined as $BS(\tau_{1},s)=E\Big(1(s<T\le s+\tau_{1},\epsilon=1)-U(\tau_{1}\vert s)\big\vert T>s\Big)^{2}$. Applying the weight $W_{1i}^{dyn}$, the BS can be estimated as \[ \widehat{BS}(\tau_{1},s)=\frac{1}{n_{s}}\sum_{i=1}^{n_{s}}\Big(\hat{W}_{1i}^{dyn}\times(1-U_{i}(\tau_{1}\vert s))^{2}+(1-\hat{W}_{1i}^{dyn})\times(0-U_{i}(\tau_{1}\vert s))^{2}\Big), \] where $n_{s}$ is the number of subjects at risk at landmark time $s$. The $AUC$ and $BS$ assess different aspects of the predictive model: the $AUC$ evaluates the discrimination between cases and controls, and the $BS$ quantifies the deviation of the predicted probabilities from the observed outcomes. A model with perfect discrimination has $AUC=1$, while an $AUC$ close to 0.5 indicates poor discrimination that resembles a random guess. The $BS$ is a prediction error metric, with smaller values indicating better prediction. \section{Simulation}\label{sec:simu} The simulation in this section mainly evaluates the prediction accuracy of the proposed model. A separate simulation, which evaluates the estimation of model parameters and bandwidth selection under the assumptions of the working model, is presented in Web Appendix B. Similar to other studies evaluating the prediction accuracy of landmark models \citep{maziarz2017longitudinal}, we simulated longitudinal and competing risks data from a joint frailty model with shared random effects \citep{elashoff2008joint}. Details of the data generation process are described in Web Appendix A. The data generating model included a baseline covariate and three longitudinal biomarkers. 
We considered two scenarios: (S1) the longitudinal biomarkers are non-informative for survival, in the sense that their effects on both time-to-event outcomes are zero, and (S2) the longitudinal biomarkers are informative, in the sense that they have non-zero regression coefficients on both time-to-event outcomes. The incremental contribution of the longitudinal biomarkers to the prediction accuracy is expected to be zero under S1 and non-zero under S2. Table \ref{tab:simu_PE} presents the predictive accuracy of the proposed model under both S1 and S2. The full model (M1) includes both the longitudinal biomarkers and the baseline covariate; the null model (M0) includes only the baseline covariate. Since the data were simulated from a joint frailty model \citep{elashoff2008joint}, the proposed landmark SDH model worked under misspecification. However, regardless of whether the data generating model matches the fitted model, the predictive performance can always be evaluated. We considered both discrimination and calibration measures in Table \ref{tab:simu_PE}. For discrimination, we report the true positive (TP) fraction and the false positive (FP) fraction at a given threshold value, and the AUC as a global discrimination summary. For calibration, we used the Brier score. The predictive accuracy measures were evaluated at three landmark times $s=1,3,5$ with prediction horizon $\tau_{1}=3$. For each simulation, the proposed model was fit to a simulated training dataset and the predictive accuracy measures were calculated from another simulated validation dataset from the same distribution. When all the longitudinal biomarkers are non-informative, the predictive accuracy measures of the full model and the null model are very similar. When the three longitudinal biomarkers are informative, including them in the prediction model substantially improves both discrimination and calibration. 
\begin{sidewaystable}[ht] \caption{The means (EST) and empirical standard deviations (ESD) of estimated predictive accuracy metrics comparing the full model with longitudinal biomarkers ($M_{1}$) and the null model with only the baseline covariate ($M_{0}$) in the simulation. Prediction horizon $\tau_{1}=3$. S1: non-informative longitudinal biomarkers. S2: informative longitudinal biomarkers. AUC: area under the ROC curve comparing the group experiencing the event of interest with those who experienced competing events or were event-free. $TP(c):$ true positive fraction at threshold $c$. $FP(c):$ false positive fraction at threshold $c$. BS: Brier score. Sample size $n=500$.} \label{tab:simu_PE} \centering{}% \begin{tabular}{cccccccccccccc} \hline & & & \multicolumn{2}{c}{$AUC$} & & \multicolumn{2}{c}{$TP(0.25)$} & & \multicolumn{2}{c}{$FP(0.25)$} & & \multicolumn{2}{c}{$BS$}\tabularnewline \cline{4-5} \cline{7-8} \cline{10-11} \cline{13-14} & & & $M_{1}$ & $M_{0}$ & & $M_{1}$ & $M_{0}$ & & $M_{1}$ & $M_{0}$ & & $M_{1}$ & $M_{0}$ \tabularnewline \hline \multirow{6}{*}{S1} & \multirow{2}{*}{$s=1$} & $EST$ & 0.703 & 0.707 & & 0.514 & 0.512 & & 0.272 & 0.290 & & 0.161 & 0.165\tabularnewline & & $ESD$ & 0.031 & 0.030 & & 0.088 & 0.209 & & 0.064 & 0.173 & & 0.011 & 0.012\tabularnewline \cline{2-14} & \multirow{2}{*}{$s=3$} & $EST$ & 0.691 & 0.698 & & 0.726 & 0.675 & & 0.529 & 0.499 & & 0.199 & 0.203\tabularnewline & & $ESD$ & 0.034 & 0.032 & & 0.092 & 0.223 & & 0.099 & 0.237 & & 0.011 & 0.015\tabularnewline \cline{2-14} & \multirow{2}{*}{$s=5$} & $EST$ & 0.660 & 0.676 & & 0.751 & 0.705 & & 0.614 & 0.594 & & 0.211 & 0.223\tabularnewline & & $ESD$ & 0.050 & 0.049 & & 0.125 & 0.285 & & 0.133 & 0.309 & & 0.014 & 0.027\tabularnewline \hline \multirow{6}{*}{S2} & \multirow{2}{*}{$s=1$} & $EST$ & 0.894 & 0.611 & & 0.595 & 0.176 & & 0.091 & 0.126 & & 0.082 & 0.126\tabularnewline & & $ESD$ & 0.031 & 0.061 & & 0.117 & 0.217 & & 0.027 & 0.175 & & 
0.034 & 0.077\tabularnewline \cline{2-14} & \multirow{2}{*}{$s=3$} & $EST$ & 0.882 & 0.578 & & 0.738 & 0.477 & & 0.240 & 0.452 & & 0.142 & 0.205\tabularnewline & & $ESD$ & 0.030 & 0.059 & & 0.132 & 0.345 & & 0.122 & 0.342 & & 0.064 & 0.070\tabularnewline \cline{2-14} & \multirow{2}{*}{$s=5$} & $EST$ & 0.884 & 0.531 & & 0.831 & 0.553 & & 0.350 & 0.552 & & 0.162 & 0.242\tabularnewline & & $ESD$ & 0.030 & 0.058 & & 0.082 & 0.400 & & 0.113 & 0.392 & & 0.035 & 0.052\tabularnewline \hline \end{tabular} \end{sidewaystable} Under S2, the estimated regression parameters of the proposed SDH model are plotted in Web Figure 1, which shows that the effects of the three longitudinal biomarkers are notably different from zero and are in the direction implied by the data generating model. In contrast, the estimated regression parameters of the proposed SDH model are close to zero when the data were generated under S1. We conducted additional simulations to compare the predicted and true conditional risks at the individual level. The true conditional risk of an individual (indexed by the subscript $o$, at landmark time $s$, conditional on the biomarker history $\boldsymbol{\mathcal{H}}_{o}(s)$, and with prediction horizon $\tau_1$) is defined as $\pi(\tau_{1}\vert s, \boldsymbol{\mathcal{H}}_{o}(s))=P(T_{o}\in(s,s+\tau_{1}],\epsilon_{o}=1\vert\tilde{T}_{o}>s,\boldsymbol{\mathcal{H}}_{o}(s))$. Since the true conditional risk varies by subject, landmark time, and history, and does not have a tractable analytical expression, we calculated it empirically at nine representative landmark-time-by-history combinations (Web Table 1) as follows. We simulated data using the procedure in Web Appendix A but with one informative longitudinal biomarker $Y_2$ (the effects of $Y_1$, $Y_3$, and $X$ on the time-to-event outcomes were all set to zero). 
The true CIF for the event of interest in the next $\tau_1=3$ years was obtained empirically as the proportion of subjects with that event within $(s,s+\tau_1]$, given survival up to time $s$, among those with nearly identical $Y_2$. For illustration, we chose three target $Y_2$ values: $m = 0$, $2$ and $4$ in Web Table 1. Subjects with marker values within $\pm 0.05$ of the target value were counted in the denominator of the proportion calculation. To ensure that there were enough subjects in the denominator, we simulated a very large dataset ($n=1,000,000$) without censoring. We restricted attention to the case of a single informative biomarker ($Y_{2}$) because it is less feasible to match subjects on multiple biomarkers. The conditional CIFs for hypothetical subjects with marker values $0,2,4$ at landmark times $1,3,5$ were estimated from 500 Monte Carlo repetitions. The average estimated CIF (EST), empirical standard deviation (ESD), percent bias (\% Bias) and mean squared error (MSE) are presented in Web Table 1. Being a working model, the proposed semi-parametric landmark SDH model worked under misspecification in this simulation. Nevertheless, the results suggest that the estimated CIF has little bias and the MSE is low, which may indicate that the proposed model was flexible enough to approximate the data well at multiple landmark times, even though the simulated data do not exactly satisfy the working assumptions. Unlike the simulation results in Table \ref{tab:simu_PE}, the results in Web Table 1 pertain to the quality of predictions at the individual level. The proposed landmark SDH model is a working model, and it is not yet clear whether there exists a joint distribution of longitudinal and competing risks data such that the model holds at all landmark times. 
This is a well-known difficulty with landmark dynamic prediction models in general \citep{van2011dynamic, li2017dynamic} and is not specific to our landmark model, though limited progress has been made in problems without competing risks \citep{zheng2005partly, DataGenerationPaper, JRSSCpaper}. Due to this difficulty, researchers often evaluate the numerical performance of landmark models using data simulated from a joint model with shared random effects \citep{maziarz2017longitudinal, huang2016two}. This is also the simulation strategy that we chose. When the data generating model and the analysis model do not match, the estimated model parameters are difficult to evaluate and interpret, but the prediction accuracy can still be assessed. We designed scenarios S1 and S2 to demonstrate that incorporating informative longitudinal biomarkers improves the predictive accuracy. This qualitative conclusion is unlikely to be invalidated by the magnitude of model misspecification. We designed three longitudinal biomarkers with complicated trajectories to demonstrate that the proposed model works in situations where joint modeling approaches may be difficult to apply. \section{Application to the AASK data}\label{sec:app} The AASK study included $1,094$ African Americans of age 18 to 70 years who were diagnosed with hypertensive renal disease and had baseline eGFRs between 20 and 65 $mL/min/1.73m^{2}$ \citep{wright2002effect}. Subjects were followed every 6 months, with longitudinal data collected for up to 12 years. By the end of the study, 318 (29\%) individuals had developed ESRD, the event of research interest, and 176 (16\%) had died before ESRD. The median time to ESRD was 4.3 years and the median time to death was 5.2 years. We chose clinically relevant prediction horizons of $\tau_{1}=1$ or $3$ years and illustrate the dynamic prediction at years 3, 5, and 7. The key longitudinal biomarker is eGFR (estimated Glomerular Filtration Rate). 
Our previous publication demonstrated that this biomarker has diverse and possibly nonlinear individual progression patterns \citep{li2012longitudinal}. In addition, some CKD patients experienced acute kidney injury (AKI) during the follow-up, which may cause substantial short-term variation in the eGFR (e.g., see Figure 2 of \citet{li2017dynamic}). The number of repeated eGFR measurements ranged from 3 to 30, with over 50\% of individuals providing 17 or more measurements. In addition to the current value of eGFR at a clinical visit, we derived the rate of change in eGFR (linear eGFR slope) during the history window of $\tau_{2}=3$ years, because the eGFR slope is often used by clinicians to characterize the speed of progression in CKD \citep{Schluchter2001}. The estimation of the eGFR slope followed the approach in our recent paper \citep{li2017dynamic}. Additional biomarkers included longitudinal measurements of serum albumin (Alb), urine protein-to-creatinine ratio (UP/Cr), serum phosphorus (Phos) and urine potassium (Upot). These biomarkers were considered because they have known biological associations with disease progression and have been used in other CKD risk equations \citep{tangri2011predictive}. Also included in the prediction model were age at the time of prediction and an indicator of any hospitalization during the previous year. For the competing events of ESRD and death, we fit landmark SDH models separately using the same set of candidate predictors (Web Figure 2). The eGFR, its rate of change, and log UP/Cr were significantly associated with time to ESRD but not with time to death. In contrast, age, Alb and hospitalization were risk factors related to death. This indicates that progression to ESRD and death may be related to different pathological processes, which justifies modeling the competing events separately rather than as a composite outcome. 
After removal of the non-significant covariates, the final model for ESRD included eGFR, eGFR.slope, log UP/Cr and Phos, and the final model for death included age, Alb, log-Upot and hospitalization (Web Figure 3). We conducted bandwidth selection using 5-fold cross-validation. Predictive accuracy metrics were evaluated in the cross-validation dataset, and they were robust to different bandwidths (up to 3 digits after the decimal point). Therefore, we used the bandwidth of $h = 1.5$ in the final model, which provided a relatively smooth log-SDH ratio curve. The surface plots of the CIF for ESRD and death are illustrated in Figure \ref{fig:surface_plot}. \begin{figure}[ht] \begin{centering} \includegraphics[scale=0.7]{surf_plot_CIF} \par\end{centering} \caption{Estimated surface of the cumulative incidence function over the landmark time and prediction horizon. This shows an exemplary profile with age = 55, eGFR = 45 $ml/min/1.73m^{2}$, eGFR.slope = 0, UP/Cr = 0.3 $g/g$, albumin = 4 $g/dL$, and hospitalization within the past year. } \label{fig:surface_plot} \end{figure} Figure \ref{fig:dynpred1} presents the longitudinal profiles and individual dynamic predictions for three AASK subjects: subject 1 was event-free by the end of the study, subject 2 experienced ESRD after 7.5 years, and subject 3 died after 9.7 years. We display the biomarker values together with the real-time predicted 3-year probabilities of ESRD and death. The risk prediction was dynamically updated at each new clinical visit. Subject 1 demonstrated stable disease. Subject 2 demonstrated a persistent decline in eGFR and a notable increase in proteinuria (log-UP/Cr), which led to a drastic increase in the risk of ESRD after year 5. In contrast, the risk of death for subject 2 increased moderately, which may be explained by the Alb level and hospitalization around year 7. 
For subject 3, the relatively stable eGFR and log-UP/Cr also stabilized the subject's susceptibility to ESRD, but the frequent hospitalizations and decreasing Alb level were associated with an increased risk of death, possibly due to other co-morbidities. We did not estimate the model after year 8 because the number of observed clinical events was relatively small near the end of the follow-up. \begin{figure}[ht] \centering{}\includegraphics[scale=0.75]{dynpred_Plot_pred}\caption{Individual risk predictions for three selected subjects: subject 1 was censored (dotted vertical green line), subject 2 had ESRD (dotted vertical red line) and subject 3 died (dotted vertical black line). Three biomarkers are plotted over time: ``G'' is eGFR $(ml/min/1.73m^{2})$, ``R'' is the log urine protein-to-creatinine ratio ($g/g$), and ``A'' is albumin ($g/dL$). The connected red dots are the predicted probabilities of ESRD within a horizon of $\tau_{1}=3$ years. The gray vertical bars represent episodes of hospitalization, with the two vertical borders being the admission and discharge dates. The connected black dots are the predicted probabilities of death within $\tau_{1}=3$ years. The y-axis on the left is the scale of eGFR, and the y-axis on the right is the scale of the predicted probabilities (0 to 1). The other two biomarkers, log-UP/Cr and albumin, are re-scaled to be displayed in the same plot with eGFR but their respective scales are not shown. The dynamic predicted probabilities of ESRD are calculated using the dynamic SDH model with four predictors: eGFR, eGFR slope in the past three years, log-UP/Cr and phosphorus. The dynamic predicted probabilities of death are calculated using the dynamic SDH model with four predictors: current age, serum albumin, any hospitalization within the past year, and log urine potassium. 
} \label{fig:dynpred1} \end{figure} Figure \ref{fig:dynpred2} presents the profiles of the same three patients together with their dynamic CIFs (up to $\tau_1 = 3$ years) at landmark times $s=3,5,7$ years. For subject 1, the predicted CIFs for both ESRD and death were flat. In contrast, the predicted CIF of ESRD for subject 2 started to increase after year 5, and the increase became very prominent by year 7. This was likely caused by a combination of deteriorated renal function (eGFR) and proteinuria (log-UP/Cr). This patient reached ESRD shortly after year 7. The predicted CIF of ESRD for subject 3 stayed flat, but the CIF of death increased at year 7, after frequent hospitalizations. This subject eventually died at year 9.6 without ESRD. \begin{sidewaysfigure}[ht] \centering{}\includegraphics[scale=0.75]{dynpred_Plot_select}\caption{Individual dynamic predicted CIFs for the three selected subjects in Figure \ref{fig:dynpred1}. Each row in the panel represents one subject, and the three columns are the predictions made at landmark years $s=3,5,7$. The prediction is made at the vertical blue dashed lines. The predicted CIFs up to $\tau_{1}=3$ years are plotted for the events of ESRD (red curve) and death (black curve). Symbols in the figure are similar to Figure \ref{fig:dynpred1}.} \label{fig:dynpred2} \end{sidewaysfigure} In Table \ref{tab:AASK_PE}, we summarize the predictive accuracy of the landmark SDH models for prediction horizons $\tau_{1}=1,3$ at three landmark years $s=3,5,7$. The model for ESRD achieved good discrimination, with AUCs between 0.93 and 0.96. With a cutoff value of 0.05, the sensitivity (TP) and specificity (1-FP) could be controlled within the 0.80--0.90 range under all scenarios. The prediction accuracy metrics were similar across prediction horizons. In contrast, the model for death discriminated no better than a random guess, with AUCs around 0.5; its prediction errors were also at least twice as large as those for predicting ESRD. 
More importantly, the AUCs from the proposed model improved in comparison with previous studies, where AUCs were around 0.8 and always less than 0.9 \citep{li2017dynamic,maziarz2017longitudinal}. One possible explanation is that these previous studies treated ``time to ESRD or death'' as a composite outcome. This introduces noise and diminishes the predictive accuracy, because all-cause death is difficult to predict with the selected biomarkers, which are prognostically specific to renal disease. The ROC curves for predicting ESRD are plotted in Web Figure 4. \begin{sidewaystable}[ht] \begin{centering} \caption{Measures of predictive accuracy from the landmark SDH model for ESRD and death in the analysis of AASK data. The estimates were obtained at three landmark years, $s=3,5,7$, with prediction horizons $\tau_{1}=1,3$ years. AUC: area under the ROC curve. $TP(c)$: true positive rate at threshold $c$; $FP(c)$: false positive rate at threshold $c$; thresholds $c$ are selected to be 0.05 for ESRD and 0.01 for death. BS: Brier score.
} \label{tab:AASK_PE} \par\end{centering} \centering{}% \begin{tabular}{ccccccccccccc} \hline & & \multicolumn{2}{c}{$AUC$} & & \multicolumn{2}{c}{$TP(c)$} & & \multicolumn{2}{c}{$FP(c)$} & & \multicolumn{2}{c}{$BS$}\tabularnewline \cline{3-4} \cline{6-7} \cline{9-10} \cline{12-13} & & ESRD & Death & & ESRD & Death & & ESRD & Death & & ESRD & Death\tabularnewline \hline \multirow{3}{*}{$\tau_{1}=1$} & $s=3$ & 0.957 & 0.545 & & 0.925 & 0.568 & & 0.075 & 0.539 & & 0.024 & 0.191\tabularnewline & $s=5$ & 0.925 & 0.547 & & 0.885 & 0.585 & & 0.100 & 0.596 & & 0.026 & 0.043\tabularnewline & $s=7$ & 0.965 & 0.584 & & 0.832 & 0.692 & & 0.099 & 0.675 & & 0.021 & 0.035 \tabularnewline \hline \multirow{3}{*}{$\tau_{1}=3$} & $s=3$ & 0.944 & 0.558 & & 0.876 & 0.468 & & 0.119 & 0.378 & & 0.054 & 0.119 \tabularnewline & $s=5$ & 0.943 & 0.520 & & 0.852 & 0.498 & & 0.140 & 0.549 & & 0.052 & 0.093\tabularnewline & $s=7$ & 0.957 & 0.492 & & 0.863 & 0.343 & & 0.131 & 0.405 & & 0.048 & 0.110 \tabularnewline \hline \end{tabular} \end{sidewaystable} \section{Discussion}\label{sec:disc} For CKD patients, estimating the time to ESRD is crucial for timely treatment management. Dynamic prediction is an attractive tool for this purpose, because it is adaptive to the changing health condition and prognostic history of the patient. It enables real-time monitoring of patient risk. In this paper, we develop novel methodology for dynamic prediction of ESRD among CKD patients, and we overcome a number of analytical hurdles, including competing events of death, irregularly spaced clinical visit times, multiple biomarkers with complicated longitudinal trajectories, a time-varying at-risk population, and time-varying covariate-outcome associations. Our proposed methodology is flexible, because the model parameters are estimated semi-parametrically. Hence, it can effectively mitigate the risk of model misspecification.
This feature is very important for dynamic prediction models, because, as explained in Section \ref{sec:simu}, the landmark dynamic prediction model is a working model and needs to provide an adequate approximation to the data at all landmark times. Another advantage of the proposed methodology is that it is computationally simple, and it can be implemented through standard statistical software for competing risks analysis, regardless of how many longitudinal biomarkers are included as predictors. In this paper, the estimation process was accomplished with the available R function \texttt{coxph()} after translating the competing events into a counting process \citep{geskus2011cause}. We believe that the simplicity in computation makes the proposed methodology attractive for various practical situations, including applications with large datasets, a large number of biomarkers with complicated longitudinal trajectories, and other longitudinal prognostic information that cannot be easily modeled at an individual level (e.g., hospitalization episodes and medication history). Our kernel-based estimation approach relies on the assumption that the clinical visit times are non-informative. Future work is needed to study dynamic prediction when the frequency of clinical visits is related to the health condition of the patients. The predictors in our proposed model framework include pre-specified features extracted from the data history. Automatic extraction of predictive features from the longitudinal history is another topic that we will pursue in future research. \section*{Supplementary Materials} Web Appendices, Web Figures, and R code are available with this paper at the Biometrics website on Wiley Online Library. \section*{Acknowledgements} The authors declare no potential conflicts of interest with respect to the research, authorship and publication of this article. This research was supported by grants from the U.S.
National Institutes of Health (P30CA016672, U01DK103225, R01DK118079). \bibliographystyle{biom} \section*{\textbf{Web Appendix A}: Data Generation Procedure for Simulation} The simulation results are presented in the main text of the paper. This section presents details of the data generation procedure and parameter settings. The longitudinal processes are generated from equation (1) below. We simulated a total of $n$ subjects with independent and identically distributed data for each simulation run. \begin{align} Y_{i1}(t_{ij}) & =m_{i1}(t_{ij})+\epsilon_{i1}(t_{ij})=b_{i01}+b_{i11}\cdot t_{ij}+\epsilon_{i1}(t_{ij})\nonumber \\ Y_{i2}(t_{ij}) & =m_{i2}(t_{ij})+\epsilon_{i2}(t_{ij})=b_{i02}+b_{i12}\cdot t_{ij}^{3}+\epsilon_{i2}(t_{ij}) \label{eq:yiq-1} \\ \textrm{logit} & \{P(Y_{i3}(t_{ij})=1)\}=m_{i3}(t_{ij})=b_{i03}+b_{i13}\cdot t_{ij} . \nonumber \end{align} For both non-informative biomarker effect (S1) and informative biomarker effect (S2), the data were simulated according to the joint frailty model of longitudinal biomarkers and the competing risk event times (Elashoff et al., 2008). It includes the longitudinal sub-model and the following survival sub-model ($k=1,2$): \begin{equation} \lambda_{k}(t)=\lambda_{k0}(t)\textrm{exp}\{\gamma_{k}X_{i}+\sum_{q=1}^{3}\beta_{kq}m_{iq}(t)+v_{k}u_{i}\}. \label{eq:JM_frailty} \end{equation} The baseline hazard for the time-to-event outcome follows a Weibull distribution with scale and shape parameters of (0.02, 2.3) and (0.01, 2.4) for events 1 and 2, respectively. The longitudinal sub-model includes three longitudinal biomarkers. The first one, $Y_{i1}(.)$, is a continuous biomarker with a linear mean trajectory $m_{i1}(.)$. The second biomarker $Y_{i2}(.)$ has a nonlinear subject-specific mean trajectory. The third biomarker is binary with a logit-linear mean trajectory. For the first two biomarkers, $\epsilon_{i1}(.)$ and $\epsilon_{i2}(.)$ are random noise terms with a $N(0,0.5^2)$ distribution.
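Given a subject's random effects, the trajectory generation in equation (1) can be sketched in a few lines of Python (a minimal illustration; the function name, the dictionary of random effects, and the zero-noise option are ours, not part of the paper's code):

```python
import math
import random

def simulate_biomarkers(b, times, noise_sd=0.5, rng=None):
    """Simulate one subject's three biomarkers at the given times.

    b holds the random effects (b01, b11, b02, b12, b03, b13); the mean
    trajectories follow equation (1): linear for Y1, cubic in time for Y2,
    and logit-linear for the binary Y3.
    """
    rng = rng or random.Random(0)
    y1, y2, y3 = [], [], []
    for t in times:
        m1 = b["b01"] + b["b11"] * t       # linear mean trajectory
        m2 = b["b02"] + b["b12"] * t ** 3  # nonlinear (cubic) trajectory
        m3 = b["b03"] + b["b13"] * t       # logit-linear mean
        y1.append(m1 + rng.gauss(0.0, noise_sd))
        y2.append(m2 + rng.gauss(0.0, noise_sd))
        # Bernoulli draw with success probability expit(m3)
        y3.append(1 if rng.random() < 1.0 / (1.0 + math.exp(-m3)) else 0)
    return y1, y2, y3
```

With the population means of Web Appendix A plugged in as the random effects, $Y_{i1}$ declines linearly in $t$ while $Y_{i2}$ grows slowly with $t^3$.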
Each biomarker's longitudinal trajectory is characterized by two random effects, denoted by $\boldsymbol{b}_{ip}=(b_{i0p},b_{i1p})^T$ ($p=1,2,3$). In the case of a linear trajectory, such as the first biomarker, they represent the subject-specific random intercept and slope. We let $\boldsymbol{b}_{i}=(\boldsymbol{b}_{i1}^T,\boldsymbol{b}_{i2}^T,\boldsymbol{b}_{i3}^T)^{T}\sim MVN(\boldsymbol{\Omega},\boldsymbol{D})$, where $\boldsymbol{\Omega}=(2.8,-0.14,2.1,0.01,-1,0.3)$ denotes the population mean vector. The covariance matrix $\boldsymbol{D}$ can be decomposed into $\boldsymbol{D}=diag(\boldsymbol{\sigma}_{q})\times\boldsymbol{R}\times diag(\boldsymbol{\sigma}_{q})$, where the diagonal matrix $diag(\boldsymbol{\sigma}_{q})$ includes elements $\boldsymbol{\sigma}_{q}=(\sigma_{01},\sigma_{11},\sigma_{02},\sigma_{12},\sigma_{03},\sigma_{13})=(0.9,0.1,0.9,0.005,0.9,0.1)$ and correlation matrix $\boldsymbol{R}$. \begin{equation*} \mathbf{R} = \begin{pmatrix} 1 & 0.26 & -0.5 & -0.3 & -0.5 & -0.3 \\ & 1 & -0.65 & -0.3 & -0.5 & -0.3 \\ & & 1 & 0.35 & 0.5 & 0.3 \\ & & & 1 & 0.5 & 0.3\\ & & & & 1 & 0.3 \\ & & & & & 1 \end{pmatrix}. \end{equation*} In the survival sub-model, $u_{i}$ is the frailty term accounting for the correlation between the two competing events, and the parameter $v_{1}$ is set to 1 to ensure identifiability. We let $u_{i}\sim N(0,\sigma_{u}^2)$ where $\sigma_{u}=0.5$. For S1, $\{\beta_{1q}\}$ and $\{\beta_{2q}\}$ are all set to zero. For S2, we set $\{\beta_{1q}; q=1,2,3\}=(-1.2,0.3,1.5)$ and $\{\beta_{2q}; q=1,2,3\}=(-0.2,0.05,0.6)$. For both S1 and S2, the sub-model includes one baseline covariate $X_{i}\sim N(0.5,0.5^2)$ with regression coefficients $\gamma_1 = -1.5$ and $\gamma_2 = -1$.
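The decomposition $\boldsymbol{D}=diag(\boldsymbol{\sigma}_{q})\times\boldsymbol{R}\times diag(\boldsymbol{\sigma}_{q})$ can be assembled directly from the values above; a small pure-Python sketch (helper names are ours):

```python
def symmetrize(upper):
    """Complete a correlation matrix from its upper triangle."""
    n = len(upper)
    return [[upper[min(i, j)][max(i, j)] for j in range(n)] for i in range(n)]

def build_covariance(sigma, corr):
    """D = diag(sigma) * R * diag(sigma), returned as nested lists."""
    n = len(sigma)
    return [[sigma[i] * corr[i][j] * sigma[j] for j in range(n)] for i in range(n)]

# Values copied from Web Appendix A.
sigma = [0.9, 0.1, 0.9, 0.005, 0.9, 0.1]
R = symmetrize([
    [1.0, 0.26, -0.5, -0.3, -0.5, -0.3],
    [0.0, 1.0, -0.65, -0.3, -0.5, -0.3],
    [0.0, 0.0, 1.0, 0.35, 0.5, 0.3],
    [0.0, 0.0, 0.0, 1.0, 0.5, 0.3],
    [0.0, 0.0, 0.0, 0.0, 1.0, 0.3],
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0],
])
D = build_covariance(sigma, R)
```

Sampling $\boldsymbol{b}_i$ then only requires a multivariate-normal draw with mean $\boldsymbol{\Omega}$ and covariance $\boldsymbol{D}$ (e.g., via a Cholesky factor of $\boldsymbol{D}$).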
The censoring times are generated from a mixture of uniform distributions $\eta_{1}\textrm{Unif}(0,3)+\eta_{2}\textrm{Unif}(3,6)+\eta_{3}\textrm{Unif}(6,9)+\eta_{4}\textrm{Unif}(9,12)$, where the mixing probabilities $\eta_{1}$ to $\eta_{4}$ ($\sum_{i=1}^4{\eta_i}=1$) are chosen to control the censoring rate at approximately 25\%. For example, they equal $(0.1, 0.1, 0.2, 0.6)$ for the simulation with informative biomarkers and $(0.1, 0.1, 0.1, 0.7)$ for the simulation with non-informative biomarkers. See the description of these two simulation scenarios below. The random intercept and random slope (time effect) are assumed to be positively correlated for each biomarker. We allow $\boldsymbol{Y}_{i1}$ and $\boldsymbol{Y}_{i2}$ to have a mild negative correlation, and $\boldsymbol{Y}_{i1}$ and $\boldsymbol{Y}_{i3}$ a mild positive correlation. The measurement times $t_{ij}$ are irregularly spaced and unsynchronized among different subjects. They were generated from $t_{ij}=\tilde{t}_{j}+e_{ij}$, where $\{ \tilde{t}_{j} \}$ are the scheduled measurement times from 0 to 12 years with 0.5 increments and $e_{ij}\sim \textrm{Unif}(-0.17,0.17)$. This setup corresponds to the practical situation where the subject had a clinical visit within a two-month window around the scheduled visit times. For each simulation scenario, we used $500$ Monte Carlo repetitions and the sample size was $n=500$. \section*{\textbf{Web Appendix B}: Simulation on Local Linear Estimation} As explained in the Simulation section, the proposed landmark SDH model is a working model and it is therefore difficult to simulate data so that the model holds at all landmark times. This is a common feature of landmark (or partly conditional) modeling approaches in general. In light of this difficulty, we resort to a simple albeit approximate approach to evaluating the quality of the proposed local linear estimation, at any landmark time $s$, as described below.
We simulated a cross-sectional time-to-event data set at a given landmark $s$, e.g., $s=3$, which was treated as baseline for the purpose of this simulation. Scattered individual measurement times $\{ t_{ij} \}$ and the associated biomarker values $\boldsymbol{Y}_{i}(t_{ij})$ were simulated within a small neighborhood of $s$. The proposed landmark SDH model was used to generate independent competing risks data starting from each $t_{ij}$, following the simulation algorithm in Fine and Gray (1999). The log-SDH $\boldsymbol{\beta}(s)$ is assumed to be a quadratic function of $s$ (Web Figure 5). Note that this is not really a landmark dataset because each subject only has one $t_{ij}$. Nonetheless, this dataset exactly satisfies the landmark SDH model, so we can use it to study the numerical performance of the proposed local linear estimation in a small neighborhood of $s$. Specifically, we evaluate the bias of estimating $\boldsymbol{\beta}(s)$ and the baseline CIF (Web Figure 6), $\pi_{0}(t^{*};s)=1-\textrm{exp}\Big(-\int_{0}^{t^{*}}\lambda_{10}(t,s)dt\Big)$, as well as the selection of the bandwidth. The results are presented in Web Figure 7. The three columns from left to right are the plots of the estimated log-SDH ratio, bias percentage, and mean squared error (MSE) against different bandwidths. The rows from top to bottom correspond to the three increasing sample sizes. For the plot of the log-SDH ratio (column 1), the mean estimated $\boldsymbol{\beta}(s)$ at $s=3$ over the Monte Carlo repetitions is close to the true value (red horizontal line) at small bandwidths (e.g., 0.3 and 0.5). With increased bandwidth, the estimator shows increasing downward bias. This is because the true $\boldsymbol{\beta}(s)$ function is concave (Web Figure 5), and the local linear fit underestimates it at the peak as the bandwidth increases.
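The kernel weighting that underlies this bandwidth behavior can be sketched as follows (the Epanechnikov kernel is our illustrative choice here; the kernel actually used in the paper may differ):

```python
def epanechnikov(u):
    """Epanechnikov kernel K(u) = 0.75 (1 - u^2) for |u| <= 1, else 0."""
    return 0.75 * (1.0 - u * u) if abs(u) <= 1.0 else 0.0

def kernel_weights(obs_times, s, h):
    """Weight each measurement time by its scaled distance to landmark s.

    Observations outside the window [s - h, s + h] receive weight zero;
    a larger bandwidth h admits more points (lower variance) at the
    price of more smoothing bias near a peak of beta(s).
    """
    return [epanechnikov((t - s) / h) / h for t in obs_times]
```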
The empirical standard errors, shown in Web Figure 7 as the vertical whiskers, shrink with increased bandwidth since more data points are included in the kernel estimation. From top to bottom, the empirical standard errors decrease as the sample size increases. Column 2 shows that the bias percentage generally increases with the bandwidth, except when the bandwidth is very small, in which case larger finite-sample bias may result due to very few data points being available in the neighborhood defined by the bandwidth. In column 3, the U-shaped MSE curve is a demonstration of the typical bias-variance trade-off in kernel estimation. Overall, the percentage of absolute bias for the log-SDH ratio is very small, within 2\% for middle-ranged bandwidths (the horizontal dashed line in column 2). The results from this simulation suggest that the proposed local linear estimation works as expected from typical local polynomial estimators. \section*{\textbf{Web Appendix C}: Table and Figures} \begin{table}[htbp] \centering \caption*{\textbf{Web Table 1}: The predicted CIF at different landmark times $s$ and biomarker values $m$. The true conditional risks (True) were obtained empirically using the method described in Section 5. The average estimated CIF (EST), percent bias (\%Bias), empirical standard deviation (ESD), and mean-squared errors ($\times 1,000$) (MSE) are reported. Prediction horizon $\tau_1=3$. The result is based on 500 Monte Carlo repetitions.
} \begin{tabular}{ccccccc} \hline & & True & EST & \%Bias & ESD & MSE \\ \hline & $m=0$ & 0.167 & 0.168 & 0.703 & 0.029 & 0.836 \\ $s=1$ & $m=2$ & 0.357 & 0.350 & -2.030 & 0.025 & 0.670 \\ & $m=4$ & 0.610 & 0.639 & 4.734 & 0.052 & 3.583 \\ \hline & $m=0$ & 0.278 & 0.295 & 6.262 & 0.052 & 3.001 \\ $s=3$ & $m=2$ & 0.516 & 0.503 & -2.589 & 0.033 & 1.299 \\ & $m=4$ & 0.729 & 0.755 & 3.463 & 0.053 & 3.419 \\ \hline & $m=0$ & 0.312 & 0.327 & 4.762 & 0.089 & 8.068 \\ $s=5$ & $m=2$ & 0.505 & 0.491 & -2.795 & 0.062 & 4.104 \\ & $m=4$ & 0.681 & 0.691 & 1.407 & 0.064 & 4.160 \\ \hline \end{tabular} \label{tab:sim_predRisk} \end{table} \begin{figure}[ph] \centering{}\includegraphics[scale=0.8]{noninfo_simu_Frailty.pdf} \centering{}\includegraphics[scale=0.8]{info_simu} \caption*{\textbf{Web Figure 1}: Simulations with non-informative biomarker effect (upper panel) and informative biomarker effect (lower panel). The point estimators of the log-SDH ratios corresponding to the three longitudinal biomarkers Y1, Y2 and Y3 (solid lines) and their 95\% empirical confidence limits (dashed lines) are plotted over the landmark time grid. The point estimator and the confidence limits are defined as the average, 2.5\%, and 97.5\% quantiles of the point estimators from the Monte Carlo repetitions. The horizontal dashed lines are the reference for zero effect. The estimated effects of the three biomarkers are close to zero when the biomarkers are non-informative and deviate from zero when the biomarkers are informative. Sample size $n=500$.
} \label{fig:simu_informative} \end{figure} \begin{figure}[ph] \centering \begin{subfigure}{1.0\linewidth} \centering \includegraphics[scale=0.7]{logSDH_ESRD} \caption{ESRD} \end{subfigure} \begin{subfigure}{1.0\linewidth} \centering \includegraphics[scale=0.7]{logSDH_Death} \caption{Death} \end{subfigure} \caption*{\textbf{Web Figure 2}: The time-varying log-SDH ratios of Age, eGFR, eGFR.slope, log UP/Cr, Albumin and history of hospitalization for the outcomes of ESRD and death. Black solid curves are the time-varying log-SDH ratios and grey dashed curves are the 95\% confidence intervals from bootstrap. Red dotted lines are the reference lines of zero effect.} \label{fig:AASK_univ_betacurve} \end{figure} \begin{figure}[ph] \centering \begin{subfigure}{1.0\linewidth} \centering \includegraphics[scale=0.65]{logSDH_ESRD_Final} \caption{ESRD} \end{subfigure} \begin{subfigure}{1.0\linewidth} \centering \includegraphics[scale=0.65]{logSDH_Death_Final} \caption{Death} \end{subfigure} \caption*{\textbf{Web Figure 3}: The time-varying log-SDH ratios of Age, eGFR, eGFR.slope, log UP/Cr and Phosphorus for the outcome of ESRD; and the time-varying log-SDH ratios of Age, Albumin, log Urine Potassium, and history of hospitalization for the outcome of Death. Black solid curves are the time-varying log-SDH ratios and grey dashed curves are the 95\% confidence intervals from bootstrap. Red dotted lines are the reference lines of zero effect.} \end{figure} \begin{figure}[ph] \centering{}\includegraphics[scale=1.0]{ROCplot_tau3} \caption*{\textbf{Web Figure 4}: The time-dependent ROC curves for predicting ESRD at landmark years 3, 5, and 7, with prediction horizon $\tau_{1}=3$ years. The areas under the ROC curves (AUCs) are annotated on the plots. } \label{fig:AASK_ROC} \end{figure} \begin{figure}[ph] \begin{centering} \includegraphics[scale=0.6]{true_logSDH} \par\end{centering} \caption*{\textbf{Web Figure 5}: Simulation setting for Web Appendix B.
For each subject, one longitudinal biomarker value was simulated at a randomly picked time within a neighborhood (pink shaded interval) of the landmark time $s=3$. The curve shows the shape of the coefficient function $\beta(s)$. } \label{fig:simu_true_beta_curve} \end{figure} \vspace{3cm} \begin{figure}[ph] \begin{centering} \includegraphics[scale=0.6]{nlin_cif} \par\end{centering} \caption*{\textbf{Web Figure 6}: Estimation of the baseline CIF in the simulation of Web Appendix B. The sample sizes are 500, 1000 and 2000, respectively. The true baseline CIF (red curve) and the average estimated CIF over the Monte Carlo repetitions nearly overlap. } \label{fig:nlin_cif} \end{figure} \begin{figure}[ht] \centering{} \includegraphics[scale=0.8]{nlin_simu} \label{fig_simu_kernel} \caption*{\textbf{Web Figure 7}: Simulation results for the finite-sample performance of the local linear estimation (Web Appendix B). The average estimated log-SDH ratio, absolute bias percentage, and mean squared error (MSE) are plotted against the bandwidth on the horizontal axis. The sample sizes are 500, 1000, and 2000. The bias-variance trade-off and its dependence on the bandwidth resemble the typical behavior of local polynomial estimation. } \end{figure} \end{document}
\section{Model and Simulation Results} Though previous work found that the replotted road networks of cities have scale-free characteristics \cite{Rosvall}, there is no well-accepted model for these networks up to now. Without losing generality, our simulation starts from generating the urban transportation network according to the most general Barab\'{a}si-Albert scale-free network model \cite{BA2}. In this model, starting from $m_0$ fully connected vertices, one vertex with $m$ edges is attached at each time step in such a way that the probability $\Pi_i$ of being connected to the existing vertex $i$ is proportional to the degree $k_i$ of the vertex, i.e. $\Pi_i={k_i \over \Sigma_j k_j}$, where $j$ runs over all existing vertices. The capacity of each vertex (road) is controlled by two parameters: (1) its maximum car number $L$, which is proportional to its degree $k$ (a long road ordinarily has more intersections and can hold more cars): $L=\alpha \times k$; (2) the maximum number of cars handled per time step, which reflects the capability of intersections: $C=\beta \times L$. Motivated by the Internet information flow models \cite{Sole,Arenas, Tadic,Zhao,Wang}, the system evolves in parallel according to the following rules: 1. Add Cars - Cars are added with a given rate $R$ (cars per time step) at randomly selected vertices and each car is given a random destination. 2. Navigate Cars - If a car's destination is found in its next-nearest neighborhood, its direction will be set to the destination vertex. Otherwise, its direction will be set to a neighboring vertex $h$ with probability: $P_h={k^{\phi}_h \over \Sigma_i k^{\phi}_i}$. Here the sum runs over the neighboring vertices, and $\phi$ is an adjustable parameter. It is assumed that the cars are unaware of the entire network topology and only know the neighboring vertices' degree $k_i$. 3.
Cars Motion -- At each step, at most $C$ cars can leave a vertex (road) for other vertices, and the FIFO (first-in-first-out) queuing discipline is applied at each vertex. When the queue at a selected vertex is full, the vertex won't accept any more vehicles and the vehicle will wait for the next opportunity. Once a car arrives at its destination, it will be removed from the system. We first simulate the traffic on a network of $N=100$ vertices (roads) with $m_0=m=2$, $\alpha=5$ and $\beta=0.2$. This relatively small system can be seen as simulating the backbone of a city's urban traffic network. The selection of $\beta$ is based on the single-road traffic flow theory, which shows that the maximum flux on a highway is about $20\%$ of its maximum density \cite{Kerner,Kerner2,Kerner3,LiXB}. For simplicity, we do not consider the phenomenon that the flux decreases when the density is above $20\%$. The combination of $\alpha$ and $\beta$ can also be interpreted as: each intersection can handle one car turning for one road at each step. \begin{figure} \scalebox{0.50}[0.53]{\includegraphics{Fig2.EPS}} \caption{\label{Fig2} (color online). The overall capacity of a road network with $N=100$, $m_0=m=2$, $\alpha=5$ and $\beta=0.2$. (a) The variation of car number $N_c$ for different $R$ when $\phi=0.1$. $R_c(=13)$ is determined at the point where the $N_c$ increment rate $\omega$ increases suddenly from zero and $N_c$ increases rapidly towards the system's maximum car number. (b) The critical $R_c$ versus $\phi$. The maximum of $R_c$ corresponds to $\phi=0.1$, marked by a dashed line. The data are obtained by averaging $R_c$ over 10 network realizations. } \end{figure} To characterize the system's overall capacity, we first investigate the car number $N_c$ increment rate $\omega$ in the system: $\omega(R)=\lim_{t \rightarrow \infty} {\langle N_c(t+\Delta t)-N_c(t) \rangle \over \Delta t}$.
Here $\langle N_c(t+\Delta t)-N_c(t)\rangle$ is averaged over time windows of width $\Delta t$. Fig.\ref{Fig2}(a) shows the variation of $N_c$ with different $R$ for $\phi=0.1$. One can see that there is a critical $R_c$ ($=13$) at which $N_c$ runs quickly towards the system's maximum car number and $\omega(R)$ increases suddenly from zero. $\omega(R)=0$ corresponds to the free-flow state, in which the number of added cars is balanced by the number of removed cars. However, if $R$ exceeds the critical value $R_c$, cars will successively accumulate in the system, and congestion emerges and diffuses everywhere. Ultimately almost no cars can arrive at their destinations. Evidently, $R_c$ marks the onset of the phase transition from free flow to the jammed state. Hence, the system's overall capacity can be measured by the critical value $R_c$ under which the system can maintain its normal and efficient functioning. Fig.\ref{Fig2}(b) depicts the variation of $R_c$ versus $\phi$. The maximum overall capacity occurs at $\phi=0.1$ (slightly greater than $0.0$) with $R_c^{max}=13$. Here we give a heuristic analysis for determining the optimal value of $\phi$ with the maximum capacity. If we neglect the queue length $L$ of each vertex, for $\phi=0$, cars move in the system nearly like random walkers. There is a well-known result from graph theory that if a particle performs a random walk, in the limit of long times, the time the particle spends at a given vertex is proportional to the degree $k$ of the vertex \cite{Bollob}. It follows that the average number of cars observed at a vertex is proportional to the degree of that vertex. Meanwhile, the car-handling ability of each vertex is assumed to be proportional to its degree. Thus, in the case of $\phi=0$, this rule produces an averaging effect such that congestion does not occur earlier on vertices of any particular degree than on others.
Accordingly, $\phi=0$ results in the maximum system capacity. However, in our model, each vertex has a limited queue length $L=\alpha\times k$ and $R$ cars are generated randomly among all vertices at each time step, so small-degree vertices are slightly more easily congested. Therefore, for our traffic model, a $\phi$ slightly larger than zero maximizes the system's capacity. \begin{figure} \scalebox{0.50}[0.50]{\includegraphics{Fig3.EPS}} \caption{\label{Fig3} (color online). Average travel time $\langle T \rangle$ versus $\phi$ for $R=1$ and $2$. The data are truncated because the system jams when $\phi$ is either too large or too small. The right panel shows the variation of $\langle T \rangle$ versus $R$ when $\phi$ is fixed. The data are also truncated when the system jams. } \end{figure} We then simulate the travel time cars spend in the urban transportation system; it is another important measure of the system's efficiency. In Fig.\ref{Fig3}(a), we show the average travel time $\langle T \rangle$ versus $\phi$ under traffic loads $R=1$ and $2$. In the free-flow state, almost no congestion occurs at vertices and the time cars wait in vertex queues is negligible; therefore, the travel time is approximately equal to the actual path length in the replotted road map. But when the system is close to a jammed state, the travel time increases rapidly. One can see that the travel time is minimal when $\phi$ is close to zero. In the inset of Fig.\ref{Fig3}(b), the average travel time is much longer when $\phi$ is negative than when it is positive. These results are consistent with the above analysis that the maximum $R_c$ occurs when $\phi$ is slightly greater than zero.
Or, in other words, this effect can also be explained as follows: when $\phi>0$, cars are more likely to move to the vertices with greater degree (main roads), which enables the main roads to be used efficiently and enhances the system's overall capacity; but when $\phi$ is too large, the main roads are more likely to get jammed, and the efficiency of the system decreases. Finally, we try to reproduce the fundamental diagram (flux-density relation) of an urban traffic system. It is one of the most important criteria for evaluating the transit capacity of a traffic system. Our model reproduces the phase transition and hysteresis in the fundamental diagram. \begin{figure} \scalebox{0.50}[0.50]{\includegraphics{Fig4.EPS}} \caption{\label{Fig4} (color online). Fundamental diagram for a $N=100$ network with $m_0=m=2$, $\alpha=5$, $\beta=0.2$, and different $\phi$. The data are averaged over 10 typical simulations on one realization of the network. In each chart, the solid square line shows the flux variation when adding cars randomly to the system (increasing density), while the empty circle line shows the flux variation when drawing out cars randomly from the system (decreasing density). The data are collected and averaged at 10,000-11,0000 steps after the intervention, when the system has reached a steady state. The sudden transition density values are: 0.76 and 0.45 ($\phi=0.1$), 0.82 and 0.76 ($\phi=0.0$), 0.26 and 0.22 ($\phi=1.0$), 0.83 and 0.80 ($\phi=-0.5$). For different realizations of the network, the charts are similar in phases, but with different transition values. } \end{figure} To simulate a conservative system (constant density), we count the number of arriving cars at each time step and add the same number of cars to randomly selected vertices of the system at the beginning of the next step. The flux is calculated as the number of successful car turnings from vertex to vertex through edges per step, similar to Internet information flow.
Here we ignore the movement of cars along a given road. In fact, the flux of car turnings at intersections can, to some extent, reflect the flux on roads. In Fig.\ref{Fig4}, the fundamental diagrams for $\phi=0.1,0.0,1.0$ and $-0.5$ are shown. The curves of each diagram show four flow states: free flow, saturated flow, bistable, and jammed. For simplicity, we focus on the $\phi=0.1$ chart in the following description. As we can see, when the density is low (less than $\approx 0.1$), all cars move freely and the flux increases linearly with car density. This is the free-flow state, in which all vertices (roads) operate below their maximum handling ability $C$. Then the increase of the flux slows down and the flux gradually comes to saturation ($0.10 \sim 0.45$). In this region, the flux is restricted mainly by the handling ability $C$ of the vertices. One can see that when $\phi$ is close to zero, the saturated flux ($\approx 360$) is much higher than for other values. At higher density, the model reproduces an important characteristic of traffic flow, ``hysteresis''. It can be seen that two branches of the fundamental diagram coexist between $0.45$ and $0.76$. The upper branch is calculated by adding cars to the system, while the lower branch is calculated by removing cars from a jammed state and allowing the system to relax after the intervention. In this way a hysteresis loop can be traced (arrows in Fig.\ref{Fig4}). The hysteresis loop indicates that the system is bistable in a certain range of vehicle density. To the best of our knowledge, this is the first time that the hysteresis phenomenon has been reproduced in scale-free network traffic and in urban network traffic. To test the finite-size effect of our model, we simulate larger systems with many more vertices (roads). The simulations show a similar phase transition and hysteresis in the fundamental diagram, as shown in Fig.\ref{Fig5}(a). The sudden drop of the flux from a saturated flow to a jammed state is a first-order phase transition.
This behavior can be explained by the sudden increase of full (jammed) vertices in the system (see Fig.\ref{Fig5}(b)). According to the evolution rules, when a vertex is full of cars, the cars at neighboring vertices cannot turn to it. So the cars may also accumulate on the neighboring vertices and get jammed. This mechanism can trigger an avalanche across the system when the car density is high. As shown in Fig.\ref{Fig5}, the number of full vertices increases suddenly at the same density where the flux drops to zero, and almost no car can reach its destination. As for the lower branch of the bistable state, starting from an initial jammed configuration, the system retains some jammed vertices that are difficult to dissipate. Clearly, these vertices decrease the system's efficiency by affecting the surrounding vertices until no vertex remains jammed; thus we obtain the lower branch of the loop. \begin{figure} \scalebox{0.50}[0.55]{\includegraphics{Fig5.EPS}} \caption{\label{Fig5} (color online). (a) Fundamental diagram for a $N=1000$ network with $m_0=m=5$, $\alpha=1$, $\beta=0.2$ and $\phi=0.1$. (b) The averaged number of jammed vertices $\langle N_{jv} \rangle$. The symbols for increasing/decreasing density are the same as in Fig.\ref{Fig4}. One can see that the two sudden change points, $0.32$ and $0.21$, coincide in both charts.} \end{figure} Moreover, an important conclusion can be drawn by comparing the $\phi=0.1$ chart with the $\phi=0.0$ chart in Fig.\ref{Fig4}: the $\phi=0.1$ chart has a much broader bistable region than the $\phi=0.0$ one. This means that, when the system retreats from a heavily loaded jammed state, it is more difficult to reach a high-efficiency state if $\phi$ is greater than zero, i.e., if cars are more likely to move to main roads. In other words, although it is wise to take full advantage of the main roads when the overall traffic is light, doing so is no longer beneficial at rush hours.
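For concreteness, the navigation and capacity rules of the model can be sketched as a single-vertex update step (a simplified illustration; the function names are ours, and car destinations and removal are omitted):

```python
import random
from collections import deque

def routing_probs(neighbor_degrees, phi):
    """P_h proportional to k_h^phi over the neighboring vertices."""
    w = [k ** phi for k in neighbor_degrees]
    total = sum(w)
    return [x / total for x in w]

def step_vertex(queue, c_out, nbr_queues, nbr_limits, nbr_degrees, phi,
                rng=random):
    """Forward at most C = c_out cars out of one FIFO vertex queue.

    Each car picks a neighbor with degree-biased probability; if the
    chosen neighbor's queue is full (length >= its limit L), the head
    car waits, blocking the queue for this step (a simplification of
    the 'wait for the next opportunity' rule).
    """
    moved = 0
    probs = routing_probs(nbr_degrees, phi)
    for _ in range(min(c_out, len(queue))):
        h = rng.choices(range(len(probs)), weights=probs)[0]
        if len(nbr_queues[h]) < nbr_limits[h]:
            nbr_queues[h].append(queue.popleft())
            moved += 1
        else:
            break
    return moved
```

With $\phi=0$ all neighbors are equally likely; with $\phi>0$ high-degree vertices (main roads) attract more traffic, which is efficient at light load but, as discussed above, promotes jamming at high density.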
In conclusion, a new traffic model for scale-free networks is proposed. In the new perspective of mapping roads to vertices and intersections to edges, and by incorporating road/intersection capability limits, the model can be applied to urban traffic systems. In a systemic view of overall efficiency, the model reproduces several significant characteristics of network traffic, such as the phase transition, travel time, and fundamental diagram. A special phenomenon, the ``hysteresis'', can also be reproduced. Although the microscopic dynamics of urban traffic are not fully captured, such as the movement of cars along streets, the interaction with traffic lights, and the differences between long and short streets, our model is still a simple and good representation of urban traffic, since much empirical evidence is well reproduced by the model. Further effort is warranted to incorporate more detailed elements for better mimicking real traffic systems. Moreover, by choosing other values of the parameters, the model may be applied to other networked systems, such as communication and power systems. We thank Na-Fang Chu for her useful discussion and suggestions. This work is financially supported by the National Natural Science Foundation of China (Grant No. 10532060, 10404025) and the Australian Research Council through a Discovery Project Grant.
\section{Introduction}\label{sec:intro} The existence of strong electron correlations (SEC), due to the significant Coulomb interaction of holes in the $d_{x^2-y^2}$-orbitals of copper ions, essentially complicates the study of the low-temperature properties of cuprate high-temperature superconductors (HTSC). On the other hand, it is the large value of this interaction that allows one to integrate out the high-energy states in the three-band $p$$-$$d$ model (the Emery model), which is the most realistic one for cuprates \cite{Emery_1987,Varma_1987,Hirsch_1987,Gaididei_1988,Ovchinnikov_1989}, and to obtain the simpler spin-fermion model (SFM) \cite{Barabanov_1988,Zaanen_1988,Emery_Reiter_1988,Prelovsek_1988,Stechel_1988}. An important difference of the latter model from other effective low-energy models of cuprates, such as the Hubbard model (for example, \cite{Kohno_2018,Kitatani_2019}) or the $t$$-$$J$ model \cite{Spalek_2017}, is that the SFM explicitly takes into account the spatial separation of hole states on the copper ion and the two oxygen ions in the unit cell of the CuO$_2$-planes. Within the SFM, the concept of a spin polaron was developed \cite{Barabanov_1993,Barabanov_1996,Barabanov_1997}, which made it possible to achieve significant progress in describing the properties of cuprates both in the normal \cite{Barabanov_1997,Starykh_1995,Barabanov_Qpol_1997,Barabanov_2001,Kuzian_2003,DVB_2013} and superconducting \cite{VDB_PLA_2015,VDB_JLTP_2015,VDB_JSNM_2016} phases. In particular, in \cite{VDB_PLA_2015,VDB_JLTP_2015,VDB_JSNM_2016} it was shown that the Cooper instability develops in an ensemble of spin polarons, and the exchange interaction between the spins localized on copper ions causes an effective attraction between spin-polaron quasiparticles. Recently, in \cite{Dzebisashvili_2018}, the spin-polaron concept was used to describe the dependence of the London penetration depth $\lambda$ on the temperature $T$ in hole-doped cuprate HTSCs.
An important result of these studies was the detection of the so-called inflection point in the calculated curves of $\lambda^{-2}(T)$, which was experimentally observed, for example, in La$_{1.83}$Sr$_{0.17}$CuO$_4$ \cite{Khasanov_PRL_2007,Wojek_2011}, YBa$_2$Cu$_3$O$_{7-\delta}$ \cite{Sonier_1999,Khasanov_2007} and Bi$_{2.15}$Sr$_{1.85}$CaCu$_2$O$_{8+\delta}$ \cite{Anukool_2009}. Unfortunately, in \cite{Dzebisashvili_2018} the theoretical curves $\lambda^{-2}(T)$ exceeded the experimental ones for La$_{2-x}$Sr$_{x}$CuO$_4$ (LSCO) \cite{Panagopoulos_1999} by 30\%-40\%, both regarding the value of $\lambda^{-2}_0$ (i.e. $\lambda^{-2}$ at $T=0$) and the value of $T_c$, the temperature at which $\lambda$ diverges. It is important to note that the parameters of the SFM were not adjusted, but were chosen equal to those used earlier \cite{Barabanov_Qpol_1997,Barabanov_2001,DVB_2013,VDB_PLA_2015,VDB_JLTP_2015,VDB_JSNM_2016}. To obtain a satisfactory agreement of the $\lambda^{-2}(T)$ curves with the experimental data, it was necessary to reduce by almost a factor of two both the spin-fermion coupling parameter $J$, which significantly affects the value of the superconducting current, and the super-exchange parameter $I$, which is the coupling constant in the spin-polaron ensemble and thus determines the critical temperature $T_c$. While the two-fold reduction of $J$, used to fit the results in \cite{Dzebisashvili_2018}, could still be somehow justified (the effective parameter $J$ depends on the parameters of the original Emery model and can vary within the specified limits), the reduction of the exchange integral $I$ was only illustrative.
In this work, it will be shown that taking into account the Coulomb repulsion between the holes on oxygen ions eliminates the need to artificially underestimate the value of the super-exchange integral in order to achieve a satisfactory agreement between the theoretical and experimental temperature dependencies of the function $\lambda^{-2}(T)$ in cuprate HTSCs. The paper is organized as follows. In the second Section, the SFM is formulated and the necessary notation is introduced. The third Section describes the modification of the SFM Hamiltonian when the magnetic field is switched on, and the method of calculating the London length. In the fourth Section, the projection method, on the basis of which the spin-polaron concept is implemented, is briefly discussed, and the system of equations for the Green's functions in the superconducting phase is given. The equations for the order parameter and the spectrum of spin-polaron quasiparticles in the superconducting phase are discussed in Section \ref{sec:EqOP}. Section \ref{sec:London} presents the results of numerical calculations of the function $\lambda^{-2}(T)$. The main conclusions of the paper are formulated in the final seventh Section. \section{Spin-Fermion Model}\label{sec:SFM} The following relation between the parameters of the Emery model corresponds to the SEC regime in the cuprate HTSCs: \begin{equation}\label{cond_1} \Delta_{pd}\sim(U_d-\Delta_{pd}) \gg t_{pd}>0, \end{equation} where $U_d$ is the Coulomb repulsion parameter of two holes on a copper ion, $\Delta_{pd}$ is the charge-transfer gap between the hole states on copper and oxygen ions, and $t_{pd}$ is the hybridization parameter between the $d$- and $p$-orbitals on copper and oxygen ions, respectively. Inequalities (\ref{cond_1}) allow one to reduce the Emery model and obtain the SFM \cite{Barabanov_1988,Zaanen_1988,Emery_Reiter_1988,Prelovsek_1988,Stechel_1988}.
Using the quasi-momentum representation for Fermi operators we write the SFM Hamiltonian in the form \cite{VVV_DDM_etal_2017} \begin{eqnarray}\label{HamSF} \hat{H}_{\mathrm{sp}\textrm{-}\mathrm{f}}=\hat{H}_{\mathrm{h}}+\hat{J}+\hat{I}+\hat{U}_{p}+\hat{V}_{pp}, \end{eqnarray} where \begin{eqnarray} \hat{H}_{\mathrm{h}}&= \sum_{k\alpha}\Bigl( \xi_{k_x}a_{k\alpha}^{\dagger}a_{k\alpha} +\xi_{k_y}b_{k\alpha}^{\dagger}b_{k\alpha}\nonumber\\ &\qquad+t_{k}\bigl(a_{k\alpha}^{\dagger}b_{k\alpha} +b_{k\alpha}^{\dagger}a_{k\alpha}\bigr)\Bigr),\label{def_Hh}\\ \hat{J}&= \frac{J}{N}\sum_{f, k, q\atop{\alpha, \beta}} e^{if(q-k)} u_{k\alpha}^{\dag}\bigl(\vec{S}_f{\vec{\sigma}}_{\alpha\beta}\bigr)u_{q\beta},\label{def_J}\\ \hat{I}&= \frac{I}{2}\sum_{f, \delta}\vec{S}_f\vec{S}_{f+\delta},\label{def_I}\\ \hat{U}_{p}&= \frac{U_p}{N}\sum_{1, 2, 3, 4}\Bigl[ a^{\dag}_{1\uparrow}a^{\dag}_{2\downarrow}a_{3\downarrow}a_{4\uparrow} +(a\to b)\Bigr]~\delta_{1+2-3-4},\label{def_Up}\\ \hat{V}_{pp}&= \frac{4V_1}{N}\sum_{1, 2, 3, 4\atop{\alpha, \beta}} \phi_{3-2}~a^{\dag}_{1\alpha}b^{\dag}_{2\beta}b_{3\beta}a_{4\alpha}~\delta_{1+2-3-4}\nonumber\\ &\quad+\frac{V_2}{N}\sum_{1, 2, 3, 4\atop{\alpha, \beta}}\Bigl[ \theta^{xy}_{2-3}~a^{\dag}_{1\alpha}a^{\dag}_{2\beta}a_{3\beta}a_{4\alpha} \nonumber\\ &\quad\qquad\qquad+\theta^{yx}_{2-3}(a\to b)\Bigr]~\delta_{1+2-3-4}.\label{def_V1} \end{eqnarray} When writing (\ref{def_Hh}-\ref{def_V1}) the following notations were used \begin{eqnarray}\label{Definitions} &\xi_{k_{x(y)}}=\tilde{\varepsilon}_p+2\tau s_{k,x(y)}^2-\mu,\quad \tilde{\varepsilon}_p=\varepsilon_p+2V_{pd},\nonumber\\ &t_{k}=(2\tau-4t)s_{k,x}s_{k,y},\quad s_{k,x(y)}=\sin\bigl(k_{x(y)}/2\bigr),\nonumber\\ &\phi_{k}=\cos\frac{k_x}{2}\cdot\cos\frac{k_y}{2},\quad \theta^{xy(yx)}_k=e^{ik_{x(y)}}+e^{-ik_{y(x)}},\nonumber\\ &\tau=t_{pd}^2\bigl(1-\eta\bigr)/\Delta_{pd},\quad \eta=\Delta_{pd}/\bigl(U_d-\Delta_{pd}-2V_{pd}\bigr),\nonumber\\ &J=4t_{pd}^2\bigl(1+\eta\bigr)/\Delta_{pd},\quad 
u_{k\beta}=s_{k,x}a_{k\beta}+s_{k,y}b_{k\beta}. \end{eqnarray} The operator $\hat{H}_{\mathrm{h}}$ describes holes on oxygen ions. $a_{k\alpha}^{\dagger}$ ($a_{k\alpha}$) denotes the hole creation (annihilation) operator with quasi-momentum $k$ and spin projection $\alpha=\pm1/2$ in the oxygen-ion subsystem with the $p_x$-orbitals. The analogous operators of the oxygen-ion subsystem with the $p_y$-orbitals are denoted by $b_{k\alpha}^{\dagger}$ ($b_{k\alpha}$). The parameter $\varepsilon_p$ corresponds to the bare binding energy of the holes on oxygen ions. This energy is increased by $2V_{pd}$ to take into account the Coulomb interaction of the oxygen hole with the two nearest copper ions ($V_{pd}$ is the value of this interaction). The integral of hole hopping between the oxygen ions is denoted by $t$. The parameter $\tau$ is due to hybridization of the $p$- and $d$-orbitals on the copper and oxygen ions. $\mu$ is the chemical potential. The operator $\hat{U}_p$ defined by (\ref{def_Up}) describes the Hubbard repulsion of two holes on an oxygen ion with intensity $U_p$. For brevity, quasi-momenta and spins with the corresponding indices are denoted by numbers, for example: $1\equiv\{k_1, \sigma_1\}$. The Kronecker symbol $\delta_{1+2-3-4}$ accounts for the momentum conservation law: $\delta_{k_1+k_2-k_3-k_4}$. $N$ is the number of unit cells. The intersite Coulomb interactions of holes located at nearest-neighbor and next-nearest-neighbor oxygen ions (figure \ref{fig-1}) are described by the operator $\hat{V}_{pp}$ (see (\ref{def_V1})). The strength of these interactions is determined by the parameters $V_1$ and $V_2$, respectively. The functions $\phi_k$ and $\theta^{xy(yx)}_{k}$ appear in the transition from the Wannier representation to the quasi-momentum representation and take into account the crystal symmetry of the CuO$_2$-plane.
The operator $\hat{J}$ appears in the second order in the hybridization parameter $t_{pd}$ and is defined by (\ref{def_J}). This operator takes into account both the exchange interaction between the spins of the holes on copper and oxygen ions and the spin-correlated hoppings of the hole in the oxygen subsystem with a simultaneous flip of the localized spin. The spin on the copper ion with site index $f$ is described by the operator $\vec S_f$, and the vector $\vec{\sigma}$ in (\ref{def_J}) is composed of the Pauli matrices: $\vec{\sigma}=(\sigma^x, \sigma^y, \sigma^z)$. Finally, the operator $\hat{I}$ takes into account the super-exchange interaction between the nearest-neighbor spins on copper ions and appears in the fourth order of perturbation theory in the parameter $t_{pd}$. The vector $\delta$ in (\ref{def_I}) connects the site $f$ of the copper sublattice with the four nearest sites of the same sublattice. The SFM parameters, namely the effective hopping $\tau$ and the integrals of the $p$$-$$d$-exchange ($J$) and super-exchange ($I$) interactions, are expressed in terms of the parameters of the original Emery model (see, for example, \cite{Zaanen_1988}). The latter are known with satisfactory accuracy \cite{Hybertsen_1989,Ogata_2008,Fischer_2011}. Taking this into account, as well as the results of \cite{DVB_2013,VDB_JSNM_2016}, we have chosen the following values of the SFM parameters (in eV): $J=1.76$, $I=0.118$, $\tau=0.225$, $U_p=3$ \cite{Hybertsen_1989}. The value of the Coulomb interaction parameter $V_1$ can be estimated in the range $1$$-$$2$ eV \cite{Fischer_2011}, although, as will be shown below, the particular value of $V_1$ turns out not to be significant for d-wave superconductivity. The value of $V_2$ we estimated within $0.1$$-$$0.2$ eV according to \cite{VDKB_JMMM_2017}. For the oxygen-oxygen hopping integral we take $t=0.12$ eV, which is a reduced value as compared to the one usually used.
There are at least two reasons for choosing this value of $t$, following from our previous studies of cuprate HTSCs in both the normal phase \cite{DVB_2013} and the d-wave superconducting phase \cite{VDB_JSNM_2016}. An important circumstance to be taken into account in the spin-polaron approach is that the localized spin subsystem is in the quantum spin-liquid state. This means that the long-range magnetic order is absent in the copper ion subsystem, $\langle S^\alpha_f\rangle=0$ ($\alpha=x, y, z$), but short-range spin correlations remain. These correlations are taken into account through the spin correlation functions $C_j$, defined as the thermodynamic average of two spin operators located at a distance $r_j$: $C_j=\bigl\langle{\vec S}_f{\vec S}_{f+r_j}\bigr\rangle$, where $j$ is the number of the coordination sphere of the site $f$. In the spin-liquid phase, these correlators satisfy the sequence of equalities: $C_j=3\bigl\langle S^x_f S^x_{f+r_j}\bigr\rangle=3\bigl\langle S^y_f S^y_{f+r_j}\bigr\rangle=3\bigl\langle S^z_f S^z_{f+r_j}\bigr\rangle.$ In the low-temperature range ($\lesssim 100$ K) the spin correlators are almost independent of temperature, but strongly depend on the doping $x$. The correlators $C_j$ as functions of $x$ were calculated, for example, in \cite{Barabanov_2011} on the basis of the frustrated Heisenberg model on a square lattice in the framework of the spherically symmetric approach \cite{Shimahara_1991}. The values of $C_j$ (with $j=1, 2, 3$) used for different $x$ were taken from \cite{Barabanov_2001}. \begin{figure}[ht] \centering \includegraphics[width=0.4\textwidth]{fig1.eps} \caption{(Color online) The structure of the CuO$_2$-plane. Oxygen $p_{x}(p_y)$ orbitals and copper $d_{x^2-y^2}$ orbitals are shown.
Wavy lines denote Coulomb interactions: $U_{p(d)}$ is the on-site Coulomb repulsion of holes on an oxygen (copper) ion; $V_{1}$ and $V_{2}$ are the intersite Coulomb interactions of holes located at the nearest-neighbor and next-nearest-neighbor oxygen ions, respectively. The bold green line with arrows stands for the super-exchange interaction ($I$) between spins on the nearest-neighbor copper ions. The bold blue lines next to the letter $J$ correspond to both the spin-fermion exchange interaction and the spin-correlated hoppings. $\tau$ denotes the effective hole hoppings arising due to $p$$-$$d$-hybridization in the second order of perturbation theory, and $t$ is the integral of direct hole hoppings between nearest oxygen ions ($\tau$ and $t$ are indicated near the thin blue lines with arrows).}\label{fig-1} \end{figure} \section{The London penetration depth}\label{sec:SFMwithA} The calculation of the magnetic field penetration depth $\lambda$ in superconductors is based on the London equation: ${\vec j}=-c/(4\pi\lambda^2) {\vec A}$, where $c$ is the speed of light. In the local approximation this equation establishes a relation between the superconducting current density ${\vec j}$ and the vector potential of the magnetic field ${\vec A}$, and the proportionality coefficient between them is determined by the value of $\lambda$. To calculate the superconducting current density ${\vec j}$ in an ensemble of spin-polaron quasiparticles, we should include in Hamiltonian (\ref{HamSF}) terms accounting for the coupling to the magnetic field. This can be done via the Peierls substitution \cite{Peierls_1933,Lifshitz_1978}. Considering the vector potential ${\vec A}_{q}$ in the long-wavelength limit $q=0$ \cite{Schriffer_1964,Tinkham_1996}, we find \cite{Dzebisashvili_2018} that Hamiltonian (\ref{HamSF}) acquires an additional phase \begin{eqnarray}\label{def_alpfa} \alpha_x=\frac{eg_x}{2c\hbar}A^x_{q=0} \end{eqnarray} in the argument of the trigonometric function $s_{k, x}$ (\ref{Definitions}).
Here $g_x$ is the lattice constant and, for simplicity, we directed the vector potential along the $x$-axis. Thus, the new definition of the function $s_{k,x}$, which takes into account the magnetic field, has the form: \begin{eqnarray*}\label{skx_alpha_x} s_{k, x}=\sin\bigl(k_x/2-\alpha_x\bigr). \end{eqnarray*} It is this definition of $s_{k, x}$ which will be used further. The function $u_{k}$, which is linearly related to $s_{k, x}$, changes accordingly (see (\ref{Definitions})). The function $s_{k, y}$ remains unchanged since in this case $A^y_{q=0}=0$. The Zeeman energy determined by the spin moments of the holes is not taken into account because in the long-wavelength limit ($q\to0$) this energy tends to zero. The resulting expression for the average value of the superconducting current density, obtained in \cite{Dzebisashvili_2018} within the SFM, is as follows: \begin{eqnarray}\label{curr_den} j_x(q=0)=\frac{eg_x}{\hbar}\sum_{k\alpha}\cos\Bigl(\frac{k_x}{2}-\alpha_x\Bigr) \Bigl[2\tau s_{k, x}\langle a^{\dag}_{k\alpha}a_{k\alpha}\rangle\nonumber\\ \quad +\bigl(2\tau-4t\bigr)s_{k,y}\langle a^{\dag}_{k\alpha}b_{k\alpha}\rangle +J\langle a^{\dag}_{k\alpha}L_{k\alpha}\rangle\Bigr], \end{eqnarray} where the expressions for the thermodynamic averages in square brackets are given in the Appendix by (\ref{eqCor_abL}). Expression (\ref{curr_den}), in particular, gives the correct behavior of the current density at $T \geq T_c$. Indeed, in the normal phase the dependence of all thermodynamic averages (\ref{eqCor_abL}) on the quasi-momentum $k_x$ enters only through the difference $k_x-\alpha_x$. Therefore, a simple substitution of the integration variable $k_x \to k_x+\alpha_x$ in the integral on the right-hand side of expression (\ref{curr_den}) allows one to eliminate the phase $\alpha_x$. Since for $\alpha_x=0$ the integrand in (\ref{curr_den}) is antisymmetric in ${\vec k}$, the right-hand side of (\ref{curr_den}), as required, vanishes.
In the superconducting phase (for $T<T_c$), the dependence of the thermodynamic averages on $k_x$ is determined both by the difference $k_x-\alpha_x$ and by the sum $k_x+\alpha_x$. In this case, the integral in (\ref{curr_den}) is nonzero. The inverse square of the penetration depth, $\lambda^{-2}$, was determined numerically according to the London equation at $T<T_c$ as \begin{eqnarray*} \frac{1}{\lambda^2}=-\frac{4\pi}{c}\cdot\frac{j_x(q=0)}{A^x_{q=0}}, \end{eqnarray*} where the supercurrent density $j_x(q=0)$ is defined by (\ref{curr_den}). The described approach to calculating $\lambda$ is quite effective, especially for multi-band systems, for which the analytical dependence of the quasiparticle spectrum on the quasi-momentum is unknown and can only be obtained numerically. The proposed approach is also convenient since there is no need to carry out the cumbersome calculations connected with extracting the paramagnetic and diamagnetic parts of the supercurrent density. \section{Equations for Green's Functions}\label{sec:equations} A significant feature of the Hamiltonian of the SFM (\ref{HamSF}) is the large value of the $p$$-$$d$-exchange interaction constant $J$, which greatly exceeds the values of all the other parameters of the model. This means that, in calculating the energy structure of spin-polaron excitations and analyzing the conditions for superconducting pairing, one has to take this interaction into account exactly. An approach taking into consideration this strong $p$$-$$d$-exchange coupling, and within which the corresponding spin-polaron quasiparticle appears, is called the spin-polaron approach. For the particular implementation of this approach the Zwanzig-Mori projection technique has proved to be rather convenient \cite{Zwanzig_1961,Mori_1965,Roth_1968,Roth_1969,Rowe_1968,Tserkovnikov_1981,Plakida_book_2010,Mancini_2004}.
According to the projection technique, first of all it is necessary to introduce a minimal set of basis operators that allows one to correctly describe the quasiparticle excitations in the system. For the correct account of the strong spin-charge coupling in the SFM of interest, it is important to introduce into the specified basis, along with the bare hole operators $a_{k\alpha}$ and $b_{k\alpha}$, the operator \begin{equation*} \label{L_operator} L_{k\alpha}=\frac1N\sum_{fq\beta} e^{if(q-k)} \bigl(\vec{S}_f\vec{\sigma}_{\alpha\beta}\bigr)u_{q\beta}, \end{equation*} arising on the right-hand side of the equations of motion for $a_{k\alpha}$ and $b_{k\alpha}$. As was shown in \cite{Barabanov_1993,Barabanov_1996,Barabanov_1997,Barabanov_2001}, the three operators $a_{k\alpha}$, $b_{k\alpha}$ and $L_{k\alpha}$ are sufficient to describe the spectral properties of Fermi excitations of the cuprate HTSCs in the normal phase. To analyze the conditions for the Cooper instability, the mentioned set of three operators must be enlarged by three extra operators, $a_{-k\bar{\alpha}}^{\dag}$, $b_{-k\bar{\alpha}}^{\dag}$, $L_{-k\bar{\alpha}}^{\dag}$ ($\bar{\alpha}=-\alpha$) \cite{VDB_PLA_2015,VDB_JLTP_2015,VDB_JSNM_2016}, making it possible to introduce anomalous thermodynamic averages. The next step of the projection technique is to project the equations of motion for the basis operators (or for the corresponding Green's functions) onto the original set of basis operators. The application of this method to the SFM (\ref{HamSF}) with the above basis of six operators is described in \cite{Barabanov_2001,VDB_PLA_2015,VVV_DDM_etal_2017}.
Omitting the details of the calculations, we present the resulting closed system of equations for the Green's functions ($j=1, 2, 3$): \begin{eqnarray}\label{EqM_GF} (\omega-\xi_{x})G_{1j}&=\delta_{1j}+t_{k}G_{2j}+J_{x}G_{3j} +\Delta_{1k}F_{1j}+\Delta_{2k}F_{2j},\nonumber\\ (\omega-\xi_{y})G_{2j}&=\delta_{2j}+t_{k}G_{1j}+J_{y}G_{3j} +\Delta_{3k}F_{1j}+\Delta_{4k}F_{2j},\nonumber\\ (\omega-\xi_{L})G_{3j}&=\delta_{3j}K_{k}+\bigl(J_{x}G_{1j} +J_{y}G_{2j}\bigr)K_{k}+\frac{\Delta_{5k}}{K_k}F_{3j},\nonumber\\ (\omega+\xi_{x})F_{1j}&=\Delta_{1k}^*G_{1j} +\Delta_{3k}^*G_{2j}-t_{k}F_{2j}+J_{x}F_{3j},\nonumber\\ (\omega+\xi_{y})F_{2j}&=\Delta_{2k}^*G_{1j} +\Delta_{4k}^*G_{2j}-t_{k}F_{1j}+J_{y}F_{3j},\nonumber\\ (\omega+\xi_{L})F_{3j}&=\frac{\Delta^*_{5k}}{K_k}G_{3j} +\bigl(J_{x}F_{1j}+J_{y}F_{2j}\bigr)K_{k}. \end{eqnarray} Here, for the normal and anomalous Green's functions, we use the short notations $G_{ij}$ and $F_{ij}$, respectively. The meaning of these designations is revealed by the equalities: \begin{eqnarray*} &G_{11}=\bigl\langle\bigl\langle a_{k\uparrow}| a_{k\uparrow}^{\dag}\bigr\rangle\bigr\rangle_\omega, &F_{11}=\bigl\langle\bigl\langle a_{-k\downarrow}^{\dag}| a_{k\uparrow}^{\dag}\bigr\rangle\bigr\rangle_\omega,\nonumber\\ &G_{21}=\bigl\langle\bigl\langle b_{k\uparrow}| a_{k\uparrow}^{\dag}\bigr\rangle\bigr\rangle_\omega, &F_{21}=\bigl\langle\bigl\langle b_{-k\downarrow}^{\dag}| a_{k\uparrow}^{\dag} \bigr\rangle\bigr\rangle_\omega,\nonumber\\ &G_{31}=\bigl\langle\bigl\langle L_{k\uparrow}| a_{k\uparrow}^{\dag}\bigr\rangle\bigr\rangle_\omega,\quad &F_{31}=\bigl\langle\bigl\langle L_{-k\downarrow}^{\dag}| a_{k\uparrow}^{\dag}\bigr\rangle\bigr\rangle_\omega. \end{eqnarray*} The functions $G_{i2}$($F_{i2}$) and $G_{i3}$($F_{i3}$) ($i=1, 2, 3$) are defined in a similar way, except that the operator $a^{\dag}_{k\uparrow}$ is replaced by $b^{\dag}_{k\uparrow}$ and $L^{\dag}_{k\uparrow}$, respectively.
When writing the system (\ref{EqM_GF}) we use the functions: \begin{eqnarray}\label{notations} \xi_{x(y)}=&\xi_{k_{x(y)}},\quad J_{x(y)}=Js_{k, x(y)},\nonumber\\ \xi_L(k)=&\tilde\varepsilon_p-\mu-2t+5\tau/2-J-\tau C_1\gamma_{1k}/2 \nonumber\\ &+\bigl[(\tau-2t)\bigl(-C_1\gamma_{1k}+C_2\gamma_{2k}\bigr) +\tau C_3\gamma_{3k}/2\nonumber\\ &+JC_1(1+4\gamma_{1k})/4-IC_1(\gamma_{1k}+4)\bigr]/K_{k}, \end{eqnarray} where \begin{eqnarray} K_{k}=\bigl\langle\{L_{k\uparrow}, L^{\dag}_{k\uparrow}\}\bigr\rangle=3/4-C_1\gamma_{1k}, \end{eqnarray} and $\gamma_{jk}$ ($j=1, 2, 3$) denote the square lattice invariants: \begin{eqnarray} \gamma_{1k}&=(\cos(k_x-2\alpha_x)+\cos k_y)/2,\nonumber\\ \gamma_{2k}&=\cos (k_x-2\alpha_x)\,\cos k_y,\nonumber\\ \gamma_{3k}&=(\cos(2k_x-4\alpha_x)+\cos 2k_y)/2, \end{eqnarray} taking into account the magnetic field through the phase $\alpha_x$. The components of the superconducting order parameter $\Delta_{jk}$ are defined as anomalous thermodynamic averages: \begin{eqnarray}\label{def_Deltas} \Delta_{1k}&=\bigl\langle\bigl\{\bigl[a_{k\uparrow}, \hat H_{\mathrm{sp}\textrm{-}\mathrm{f}}\bigr], a_{-k\downarrow}\bigr\}\bigr\rangle,\nonumber\\ \Delta_{2k}&=\bigl\langle\bigl\{\bigl[a_{k\uparrow}, \hat H_{\mathrm{sp}\textrm{-}\mathrm{f}}\bigr], b_{-k\downarrow}\bigr\}\bigr\rangle,\nonumber\\ \Delta_{3k}&=\bigl\langle\bigl\{\bigl[b_{k\uparrow}, \hat H_{\mathrm{sp}\textrm{-}\mathrm{f}}\bigr], a_{-k\downarrow}\bigr\}\bigr\rangle,\nonumber\\ \Delta_{4k}&=\bigl\langle\bigl\{\bigl[b_{k\uparrow}, \hat H_{\mathrm{sp}\textrm{-}\mathrm{f}}\bigr], b_{-k\downarrow}\bigr\}\bigr\rangle,\nonumber\\ \Delta_{5k}&=\bigl\langle\bigl\{\bigl[L_{k\uparrow}, \hat H_{\mathrm{sp}\textrm{-}\mathrm{f}}\bigr], L_{-k\downarrow}\bigr\}\bigr\rangle. 
\end{eqnarray} \section{Equations for the superconducting order parameters and spin-polaron spectrum}\label{sec:EqOP} The equations for the components of the superconducting order parameter $\Delta_{jk}$ ($j=1, \dots, 5$) are obtained after calculating the commutators (and anticommutators) on the right-hand side of formulas (\ref{def_Deltas}) and projecting the result of the calculations onto the introduced basis of six operators. Since, according to the results of \cite{VVV_DDM_etal_2017}, s-wave superconductivity does not occur in the SFM, when writing the equations for $\Delta_{jk}$ we keep only those terms which correspond to the d-wave pairing. The result is given in the Appendix by (\ref{Deltas}). The components $\Delta_{2k}$ and $\Delta_{3k}$ for the d-wave pairing turn out to be zero. It is important to note that in the expressions (\ref{Deltas}) for $\Delta_{jk}$ the parameter $V_1$ of the Coulomb repulsion between the holes located on the nearest-neighbor oxygen ions is missing, since according to \cite{Izyumov_1999,VDKB_2016} it does not contribute to the d-wave pairing. The anomalous thermodynamic averages in the system of equations (\ref{Deltas}) are calculated using the spectral theorem \cite{Zubarev_1960} and the corresponding Green's functions of the system (\ref{EqM_GF}). To analyze the conditions for the Cooper instability, it is sufficient to calculate the anomalous averages in the linear approximation with respect to the components $\Delta_{jk}$.
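As a quick numerical cross-check of the field-dependent quantities entering the coefficients of (\ref{EqM_GF}), the lattice invariants $\gamma_{jk}$ and the normalization $K_k=3/4-C_1\gamma_{1k}$ defined in (\ref{notations}) can be evaluated directly. The following sketch (the value of $C_1$ used in the check is illustrative) verifies, in particular, that the phase $\alpha_x$ enters all three invariants only through the shift $k_x \to k_x-2\alpha_x$:

```python
from math import cos, pi

def invariants(kx, ky, alpha_x=0.0):
    """Square-lattice invariants gamma_{1..3}; the magnetic-field
    phase alpha_x enters only through the x-component of k."""
    g1 = (cos(kx - 2 * alpha_x) + cos(ky)) / 2
    g2 = cos(kx - 2 * alpha_x) * cos(ky)
    g3 = (cos(2 * kx - 4 * alpha_x) + cos(2 * ky)) / 2
    return g1, g2, g3

def K(kx, ky, C1, alpha_x=0.0):
    """Normalization K_k = <{L_k, L_k^+}> = 3/4 - C1 * gamma_{1k}."""
    g1, _, _ = invariants(kx, ky, alpha_x)
    return 0.75 - C1 * g1
```

Since the nearest-neighbor correlator $C_1$ is negative in the spin-liquid phase, $K_k$ stays positive over the Brillouin zone, which is what allows dividing by $K_k$ in (\ref{EqM_GF}).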
As a result, a closed set of homogeneous integral equations for the components of the superconducting order parameter $\Delta^*_{lk}$ ($l=1, 4, 5$) is obtained as follows \begin{eqnarray}\label{Deltas_spectral_theorem} \Delta_{1k}^*=&-\bigl(\cos k_x-\cos k_y\bigr)\frac{2V_2}{N}\sum_{lq} \cos q_xM^{(l)}_{11}(q)\Delta_{lq}^*,\nonumber\\ \Delta_{4k}^*=&-\bigl(\cos k_x-\cos k_y\bigr)\frac{2V_2}{N}\sum_{lq} \cos q_xM^{(l)}_{22}(q)\Delta_{lq}^*,\nonumber\\ \Delta_{5k}^*=&+\bigl(\cos k_x-\cos k_y\bigr)\frac{I}{N}\sum_{lq} \bigl(\cos q_x-\cos q_y\bigr)\nonumber\\ &\qquad\times\Bigl(M^{(l)}_{33}(q)-C_1M^{(l)}_{uu}(q)\Bigr)\Delta_{lq}^*\nonumber\\ &+\frac{U_p}{N} \sum_{lq}C_1\Bigl(\cos(k_x-2\alpha_x) M^{(l)}_{11}(q)\nonumber\\ &\quad\qquad\qquad+\cos k_y M^{(l)}_{22}(q)\Bigr)\Delta_{lq}^*\nonumber\\ &-\bigl(\cos k_x-\cos k_y\bigr)\frac{2V_2}{N}\sum_{lq}C_1\cos q_x\nonumber\\ &\qquad\times\Bigl(M^{(l)}_{11}(q)+M^{(l)}_{22}(q)\Bigr)\Delta_{lq}^*. \end{eqnarray} When writing (\ref{Deltas_spectral_theorem}) we introduced the following functions \begin{eqnarray}\label{eqM} M^{(l)}_{uu}(q)=&-s_{q,x}^2M^{(l)}_{11}(q)-s_{q,y}^2M^{(l)}_{22}(q)\nonumber\\ &-s_{q,x}s_{q,y}\bigl(M^{(l)}_{12}(q)+M^{(l)}_{21}(q)\bigr),\\\label{eqMnm} M^{(l)}_{nm}(q)=&\sum_{j=1,4}\frac{f(-E_{jq})}{2(-1)^{j+1}E_q(E_{jq} -\epsilon_{2q})(E_{jq}-\epsilon_{3q})}\nonumber\\ &\times\frac{S^{(l)}_{nm}(q,E_{jq})}{(E_{jq}+\epsilon_{2,-q})(E_{jq}+\epsilon_{3,-q})}, \end{eqnarray} where $f(E)=1/(\exp\{E/T\}+1)$ is the Fermi-Dirac distribution function, $\epsilon_{jk}$ and $E_{jk}$ are the energies of quasiparticles in the normal and superconducting states, respectively, and $S_{ij}^{(l)}(k, \omega)$ are the functions defined in the Appendix (\ref{eqsSij}).
The spectrum of Fermi excitations in the normal phase consists of three branches $\epsilon_{jk}$ ($j=1, 2, 3$) and is determined from the solution of the third-order dispersion equation \begin{eqnarray}\label{det} \mathrm{det}_{k}(\omega)=&+\bigl(\omega-\xi_{x}\bigr) \bigl(\omega-\xi_{y}\bigr)\bigl(\omega-\xi_{L}\bigr)\nonumber\\ &-2J_{x}J_{y}t_{k}K_{k}-\bigl(\omega-\xi_{y}\bigr)J_{x}^2K_{k}\nonumber\\ &-\bigl(\omega-\xi_{x}\bigr)J_{y}^2K_{k}-\bigl(\omega-\xi_{L}\bigr)t_{k}^2=0,~ \end{eqnarray} which follows from the condition of existence of a nontrivial solution of the system (\ref{EqM_GF}) at $\Delta_{jk}=0$. At the doping levels $x$ typical for cuprates, the dynamics of the holes on oxygen ions is determined solely by the lower band with the dispersion $\epsilon_{1k}$. This branch of the spectrum is characterized by a minimum in the vicinity of the ($\pi/2, \pi/2$) point of the Brillouin zone and is significantly separated from the two upper branches $\epsilon_{2k}$ and $\epsilon_{3k}$. The appearance of the lower branch is due to the strong spin-charge coupling, which induces an exchange interaction between the holes and the localized spins at the nearest copper ions, as well as spin-correlated hoppings. The features of the spectrum $\epsilon_{1k}$ without magnetic field were discussed in \cite{VDB_PLA_2015}. In our case, taking the magnetic field into account is of fundamental importance. Since the chemical potential $\mu$ in the systems under consideration lies in the lower band with the dispersion $\epsilon_{1k}$, and the upper bands, as was mentioned above, are separated by a large energy gap, the spectra $\epsilon_{2k}$ and $\epsilon_{3k}$ remain almost unchanged upon the transition to the superconducting phase, i.e. $E_{jk}=\epsilon_{jk}$ for $j=2, 3$. The derivation of the expression for the spectrum $E_{1k}$ of the lower spin-polaron band in the superconducting phase and in a weak magnetic field is described in detail in \cite{VDKKB_2019}.
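In practice it is convenient to obtain the three branches not by root-finding on the cubic (\ref{det}) but as eigenvalues of the $3\times3$ matrix of the system (\ref{EqM_GF}) at $\Delta_{jk}=0$, whose characteristic polynomial is exactly $\mathrm{det}_k(\omega)$. A Python sketch follows; the numerical inputs in the usage example are purely illustrative, and $\xi_L$ and $K_k$ are assumed precomputed:

```python
import numpy as np

def branches(xi_x, xi_y, xi_L, t_k, J_x, J_y, K_k):
    """Three normal-state branches eps_{1k} <= eps_{2k} <= eps_{3k}:
    the roots of det_k(omega) = 0, obtained as eigenvalues of the
    coefficient matrix of the equations for G_{ij} at Delta = 0."""
    M = np.array([[xi_x,       t_k,        J_x],
                  [t_k,        xi_y,       J_y],
                  [J_x * K_k,  J_y * K_k,  xi_L]])
    # For K_k > 0 this matrix is similar to a symmetric one
    # (conjugation by diag(1, 1, 1/sqrt(K_k))), so its spectrum is real.
    return np.sort(np.linalg.eigvals(M).real)
```

Expanding $\det(\omega\mathbb{1}-M)$ reproduces (\ref{det}) term by term, including the cross term $-2J_xJ_yt_kK_k$.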
The expression for the spectrum $E_{1k}$ has the form \begin{eqnarray}\label{Ek} E_{1k}=\delta\epsilon_{1k}+ \sqrt{\epsilon^2_{1k}+\Delta^2_k}, \end{eqnarray} where $\delta\epsilon_{1k}$ is a correction to the polaron spectrum $\epsilon_{1k}$ in the normal phase, which is antisymmetric in $k$ and linear in $\alpha_x$, and the gap function $\Delta^2_k$ is expressed as a sum of squares of the components of the superconducting order parameter \begin{eqnarray}\label{Delta2_k} \Delta^2_k=|\Delta_{1k}|^2+|\Delta_{4k}|^2+|\Delta_{5k}|^2/K^2_k. \end{eqnarray} Note that, formally, in the sum over $j$ on the right-hand side of expression (\ref{eqMnm}) it is necessary to take into account all the bands. However, since the upper bands (with $j=2, 3$) are empty, their contributions can be ignored. The value of the index $j=4$ in the sum over $j$ in (\ref{eqMnm}) corresponds to the spectrum $E_{4k}=-E_{1, -k}$. One can see from the system of equations (\ref{Deltas_spectral_theorem}) that the kernels of the integral equations are degenerate (separable), and the solutions of this system are to be sought in the following form \begin{eqnarray}\label{eqs_solution} \Delta_{1k}=&B_{11}\bigl(\cos k_x-\cos k_y\bigr),\nonumber\\ \Delta_{4k}=&B_{41}\bigl(\cos k_x-\cos k_y\bigr),\nonumber\\ \Delta_{5k}=&B_{51}\cos k_x+B_{52}\cos k_y+B_{53}\bigl(\cos k_x-\cos k_y\bigr)\nonumber\\ &+B_{54}\bigl(\cos k_x-\cos k_y\bigr), \end{eqnarray} where the six amplitudes $B_{ij}$ determine the contribution of the corresponding basis functions to the expansion of the order parameter components. Substituting expansion (\ref{eqs_solution}) into equations (\ref{Deltas_spectral_theorem}) and equating the factors of the corresponding trigonometric functions, we obtain a system of six algebraic equations for determining the amplitudes $B_{ij}$.
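The overall d-wave structure of the gap (\ref{Delta2_k}) that follows from ansatz (\ref{eqs_solution}) can be checked directly: once the relations between the amplitudes $B_{ij}$ found numerically below are imposed, $\Delta_k$ vanishes on the zone diagonals and is largest near $(\pi, 0)$. A sketch with purely illustrative amplitude values:

```python
from math import cos, pi

def gap_squared(kx, ky, B11=0.3, B54=3.0, K_k=0.8):
    """Delta^2_k = |D1|^2 + |D4|^2 + |D5|^2 / K_k^2 for the ansatz of
    (eqs_solution). Amplitude values are illustrative; the relations
    B11 = B41 ~ -B51 = B52 and B54/B53 ~ -10^2 are imposed."""
    B41 = B11
    B51, B52 = -B11, B11
    B53 = -B54 / 100.0
    d = cos(kx) - cos(ky)          # d-wave basis function
    D1 = B11 * d
    D4 = B41 * d
    # with B52 = -B51 the cos(kx), cos(ky) terms also reduce to d-wave form
    D5 = B51 * cos(kx) + B52 * cos(ky) + (B53 + B54) * d
    return D1**2 + D4**2 + (D5 / K_k)**2
```

The nodes along $k_x=k_y$ and the maxima at the antinodal points are the standard fingerprints of $d_{x^2-y^2}$ pairing.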
It is also necessary to add to this system an equation for the self-consistent determination of the chemical potential $\mu$: \begin{eqnarray}\label{Eq_mu} x=&\frac2N\sum_k\sum_{j=1,4} \frac{f(E_{jk})} {(-1)^{j+1}2E_{k}(E_{jk}-\varepsilon_{2k})(E_{jk}-\varepsilon_{3k})}\nonumber\\ &\times\frac{R^{x}(k,E_{jk})}{(E_{jk}+\varepsilon_{2,-k})(E_{jk}+\varepsilon_{3,-k})}, \end{eqnarray} where the function $R^{x}(k,\omega)$ is given in (\ref{eqqRx}). Numerical calculations show that the following relations between the amplitudes hold: $B_{11}=B_{41}\thickapprox-B_{51}=B_{52}$, $B_{54}/B_{51}\approx-10$, $B_{54}/B_{53}\approx-10^{2}$. Thus, the largest contribution to the order parameter component $\Delta_{5k}$ comes from the amplitude $B_{54}$, which is proportional to the exchange integral $I$. As for this exchange integral, it should be noted that its value depends on the doping $x$. In \cite{Barabanov_2001}, when calculating the exchange integral within the Heisenberg model, the effect of doping was simulated by frustration of the exchange couplings. In accordance with \cite{Barabanov_2001}, we used the product $I(1-p)$ as the exchange integral, where $p$ is the frustration parameter, varying from 0.15 to 0.275 as $x$ increases from 0.03 to 0.22. \section{Results and discussion}\label{sec:London} Calculations of the temperature dependence of the magnetic penetration depth $\lambda$, taking into account the on-site Hubbard repulsion of holes and the Coulomb interaction between holes on the next-nearest-neighbor oxygen ions, were carried out numerically on the basis of expression (\ref{curr_den}) and the self-consistent solution of the system of algebraic equations for the amplitudes $B_{ij}$ together with the chemical potential equation (\ref{Eq_mu}). It is important to note that, except for $t$, the rest of the Emery model parameters were chosen to be equal to those generally accepted for hole-doped cuprate HTSCs.
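A minimal sketch (with hypothetical amplitude values chosen only to respect the relations quoted above) makes the $d$-wave structure explicit: since $B_{52}=-B_{51}$, the component $\Delta_{5k}$ collapses onto the $\cos k_x-\cos k_y$ harmonic, and the full gap (\ref{Delta2_k}) vanishes along the zone diagonal.

```python
import numpy as np

# Hypothetical amplitude values (illustration only), chosen to respect the
# relations found numerically: B11 = B41 = -B51 = B52, B54/B51 = -10,
# B54/B53 = -1e2.
B11 = B41 = 0.05
B51, B52 = -B11, B11
B54 = -10.0 * B51
B53 = B54 / (-1.0e2)

def d_wave(kx, ky):
    return np.cos(kx) - np.cos(ky)

def gap(kx, ky):
    """|Delta_k| from eq. (Delta2_k); the positive factor 1/K_k^2 in front
    of |Delta_5k|^2 is omitted, since it cannot move the gap nodes."""
    D1 = B11 * d_wave(kx, ky)
    D4 = B41 * d_wave(kx, ky)
    # Because B52 = -B51, the cos(kx), cos(ky) pieces also combine into
    # the d-wave harmonic: D5 = (B51 + B53 + B54) * d_wave(kx, ky).
    D5 = B51 * np.cos(kx) + B52 * np.cos(ky) + (B53 + B54) * d_wave(kx, ky)
    return np.sqrt(D1**2 + D4**2 + D5**2)

k = np.linspace(-np.pi, np.pi, 201)
print(gap(k, k).max())                  # zero: gap nodes on the diagonal kx = ky
print(gap(k, np.zeros_like(k)).max())   # finite gap away from the diagonal
```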
\begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{fig2.eps} \caption{The effect of Coulomb repulsion on the temperature dependence of the inverse square of the London penetration depth in the SFM of cuprate HTSCs. Curve 1 is calculated with the values of the Coulomb interaction parameters $U_p=V_2=0$; curve 2 --- for $U_p=0$, $V_2=0.1$ eV; curve 3 --- for $U_p=3$ eV, $V_2=0$; curve 4 --- for $U_p=3$ eV, $V_2=0.1$ eV. The value of $V_1$ is not specified since, according to (\ref{Deltas_spectral_theorem}), it does not contribute to the d-wave pairing in the SFM. The other model parameters are (in eV): $\tau=0.225$, $t=0.12$, $J=1.76$, $I=0.12$ and $\alpha_x=0.002$, $x=0.17$.}\label{fig-2} \end{center} \end{figure} The calculation results are presented in figure \ref{fig-2}. Curve 1 in this figure is given for comparison. It shows the dependence $\lambda^{-2}(T)$ in the absence of Coulomb interactions ($U_p=V_1=V_2=0$). The remaining curves demonstrate the modification of the temperature dependence of $\lambda^{-2}$ as the interactions are successively switched on. Note that the parameter $V_1$, as was said above, does not enter the set of equations for the order parameter (\ref{Deltas_spectral_theorem}) \cite{VDKB_2016} and, therefore, does not affect the function $\lambda^{-2}(T)$. Curve 2 is obtained by taking into account only the interaction between the second neighbors; curve 3 --- only the Hubbard repulsion; and curve 4 --- both types of interaction. It is seen that the effect of the Coulomb interaction, in full agreement with the results of \cite{VVV_DDM_etal_2017,VDKB_JLTP_2018}, is manifested in a significant decrease in the critical temperature of the transition to the superconducting phase. The resulting decrease in $T_c$ allows us to achieve a much better agreement of the calculated temperature dependencies of $\lambda^{-2}$ with the experimental data.
Figure \ref{fig-3} compares the temperature dependencies of $\lambda^{-2}$ obtained at different doping levels within the SFM (solid lines) with those taken from the experiment on La$_{2-x}$Sr$_x$CuO$_4$ \cite{Panagopoulos_1999} (symbolic curves). The $T_c$$-$$x$ phase diagram, shown in the insert, is obtained within the spin-polaron approach and correlates well with the experimental phase diagram for LSCO superconductors in both the left boundary of the superconducting dome at $x\cong0.05$ and the maximum critical temperature $T_{max}$=39 K. At the same time, the right boundary of the theoretical dome exceeds that of the experimental dome by about 0.1. The reason is that in the present study we adopted the low-density approximation, and hence in the strongly overdoped regime our theory seems to be insufficient. As a result, the theoretical curve $\lambda^{-2}(T)$ for the large doping $x=0.24$ differs significantly from the experimental one: in real LSCO at $x=0.24$ the critical temperature is $T_c=20$ K, whereas according to the phase diagram shown in the insert $T_c=30$ K. A comparison of the temperature curves of $\lambda^{-2}$ for the same doping $x$ in figure \ref{fig-3} shows that the values of $T_c$ and $\lambda^{-2}(T=0)$ are on the whole well reproduced for $x=0.15$$-$$0.22$. It can be seen from the figure that all the theoretical temperature dependencies $\lambda^{-2}(T)$, except for $x=0.10$, are slightly convex, as in most experiments on cuprate superconductors \cite{Sonier_1999,Panagopoulos_1999,Khasanov_PRL_2007}. For the doping level $x=0.10$ (the lowest solid curve in figure \ref{fig-3}) the form of $\lambda^{-2}(T)$ is concave over the entire temperature range, which seems to be incompatible with the corresponding experimental curve measured in \cite{Panagopoulos_1999}.
This discrepancy is most likely due to the strong spin-charge fluctuations which are well developed in the strongly underdoped regime and which, in particular, result in the pseudogap (PG) behavior of cuprates. The present theory is, however, a mean-field theory; it does not take into account these spin-charge fluctuations and, therefore, the PG behavior. Since, however, the PG is weak at optimal and higher doping $x\geqslant 0.15$, we are confident that our results for $x=0.15$$-$$0.22$ are unaffected by the PG behavior. The comparison of the calculated temperature dependencies $\lambda^{-2}(T)$ in figure \ref{fig-3} with the corresponding curves from our previous paper \cite{Dzebisashvili_2018} leads to the conclusion that the main effect of taking into account the Coulomb interaction is the decrease of $T_c$. It is important that the main result of \cite{Dzebisashvili_2018}, namely the inflection point associated with the change of curvature of the function $\lambda^{-2}(T)$ and found experimentally in a number of compounds \cite{Khasanov_PRL_2007,Wojek_2011,Sonier_1999,Khasanov_2007,Anukool_2009,Howald_2018}, remained unaffected. This inflection point was considered as a confirmation of the spin-polaron concept of quasiparticles in cuprate HTSCs. \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{fig3.eps} \caption{(Color online) Temperature dependence of the inverse square of the London penetration depth at five doping levels. The solid curves are calculated theoretically. The symbolic curves are taken from the experimental work on La$_{2-x}$Sr$_{x}$CuO$_{4}$ \cite{Panagopoulos_1999}. Solid and symbolic curves corresponding to the same doping level are shown in the same color. The magnitudes of doping $x$ are indicated next to the corresponding symbols. The insert shows the doping dependence of the critical temperature. The model parameters (in eV): $\tau=0.225$, $t=0.12$, $J=1.76$, $I=0.118$, $U_p=3.3$, $V_2=0.1$.
$V_1$ is not specified since it does not contribute to the d-wave pairing in the SFM. The phase is $\alpha_x=0.002$.}\label{fig-3} \end{center} \end{figure} \section{Conclusion}\label{sec:conclusion} Within the spin-polaron concept, the effect of Coulomb repulsion on the temperature dependence of the London penetration depth $\lambda$ in cuprate high-temperature superconductors was studied. In deriving the expressions for $\lambda$, two types of Coulomb interactions were taken into account: 1) the Hubbard repulsion of two holes on one site and 2) the Coulomb repulsion of two holes located on the next-nearest-neighbor oxygen ions. The interaction of the holes on the nearest-neighbor sites was not taken into account because, according to the results of \cite{VDKB_2016}, it does not contribute to the d-wave superconductivity within the spin-fermion model. The calculation of the London penetration depth $\lambda$ was carried out on the basis of the method developed by the authors in \cite{Dzebisashvili_2018} within the spin-polaron approach, which takes into account the strong coupling between the charge and spin degrees of freedom, as well as the real structure of the CuO$_2$ planes with two oxygen ions per unit cell. On the basis of numerical calculations of the temperature dependence of the inverse square of the London penetration depth, carried out with the generally accepted values of the Emery model parameters, it was shown that taking into account the Coulomb interaction results, as expected from \cite{VVV_DDM_etal_2017,VDKB_JLTP_2018}, in a significant decrease in the critical temperature corresponding to the zeros of the function $\lambda^{-2}(T)$. This circumstance enabled us to achieve substantially better agreement of the theoretical curves with the experimental results \cite{Panagopoulos_1999} in a rather broad range of $x$ around optimal doping ($x=0.15$, 0.20 and 0.22).
At the same time, for strongly overdoped and underdoped compounds our results for $\lambda^{-2}(T)$ reveal a discrepancy with the experimental data. We argue that for large doping ($x=0.24$) this discrepancy arises because the low-density approximation was adopted in our theory, and for doping as large as $x=0.24$ this approximation may be insufficient. On the other hand, the strong spin-charge fluctuations, which are well developed in the low-doping regime due to the proximity to the antiferromagnetic region, are not properly taken into account in our theory. We suggest this to be the main reason for the discrepancy between our results for $\lambda^{-2}(T)$ and the experimental ones at doping as small as $x=0.10$. However, for cuprates with moderate doping $x=0.15$, $0.20$, and $0.22$ the proposed theory describes the experimental dependencies $\lambda^{-2}(T)$ quite well and clearly shows that accounting for the Coulomb interaction leads to an almost three-fold decrease in the value of $T_c$, but does not change the functional form of the temperature dependence of $\lambda^{-2}$ obtained earlier. In particular, the inflection point of the function $\lambda^{-2}(T)$, whose existence is considered by us as a confirmation of the spin-polaron nature of the quasiparticles in cuprates, remained intact. \bigskip \ack The work was financially supported by the Russian Foundation for Basic Research (projects \#18-02-00837 and \#20-32-70059), the Government of Krasnoyarsk Region, and the Krasnoyarsk Regional Science and Technology Support Fund (projects: \#18-42-240014 "One-orbit effective model of an ensemble of spin-polaron quasiparticles in the problem of describing the intermediate state and pseudogap behavior of cuprate superconductors" and \#18-42-243002 "Manifestation of spin-nematic correlations in spectral characteristics of electronic structure and their influence on practical properties of cuprate superconductors"). The work of K.K.K.
was supported by the Council for Grants of the President of the Russian Federation (project MK-1641.2020.2).
\section{Introduction}\label{int} In the paper \cite{SM} Smilansky suggested a mathematical model which he called ``The irreversible quantum graph''. In this model a one-dimensional quantum graph interacts with a finite system of harmonic oscillators attached at different points of the graph. Regardless of the physical meaning of this model, it is quite interesting from the mathematical point of view, since, being a singular perturbation problem, it exhibits many unusual effects. These effects appear already in the one-oscillator case. They were discussed in the survey paper \cite{S3}, see also references therein. In the simplest case (the graph is the real line, with only one oscillator attached) the problem consists in the study of a family of differential operators $\BA_\a$ on $ \mathbb R^2$, depending on the coupling parameter $\a\ge0$. The differential expression which defines the action of $\BA_\a$ does not involve $\a$; this parameter appears only in the transmission condition across a straight line in the plane. The operator $\BA_0$ admits an exhaustive description via separation of variables, and the passage to $\BA_\a$ with $\a \ne0$ can be expressed, at least formally, in terms of perturbations of quadratic forms. The main peculiarity of the problem stems from the fact that the perturbation is too strong: it is only relatively bounded but not relatively compact with respect to the operator $\BA_0$ (in the sense of quadratic forms). For this reason, the standard machinery of perturbation theory does not work. Still, it turned out to be possible to give a detailed description of the spectrum $\sigma(\BA_\a)$ for all $\a>0$. A borderline value $\a^*$ of the parameter $\a$ exists, such that the properties of $\sigma(\BA_\a)$ are quite different for $\a<\a^*$ and for $\a\ge\a^*$. For $\a<\a^*$ the absolutely continuous (a.c.) spectrum of $\BA_\a$ is the same as for $\BA_0$, including the multiplicity.
Eigenvalues appear below the bottom of $\sigma(\BA_0)$; their number grows indefinitely as $\a\nearrow\a^*$ and satisfies an asymptotic relation of a non-standard type. For $\a=\a^*$ these eigenvalues disappear and a new branch of the a.c. spectrum appears instead, filling $[0,\infty)$. For $\a$ above the threshold $\a^*$, the operator $\BA_\a$ is no longer semi-bounded and its a.c. spectrum fills the whole real line. Thus, the system exhibits a sort of phase transition as the parameter $\a$ crosses the threshold $\a^*$. The mathematical mechanism behind such a behaviour of the spectrum lies in a very special form of the transmission condition for the operator $\BA_\a$. This condition generates in a natural way an infinite Jacobi matrix which depends on the parameter $\a$ and whose spectral properties for $\a<\a^*$ and for $\a\ge\a^*$ are quite different. The papers \cite{ESI} and \cite{ESII} are devoted to the case of two oscillators, but actually their results show what happens in the general case of an arbitrary number of oscillators. It was an initiative of Des Evans to start the work on these papers, and we take pleasure in emphasizing his role in the study of this class of problems. In the present paper we investigate another family of differential operators, say $\BL_\a$, of a similar nature. It was also proposed by Smilansky (private communication). Again, all operators in the family are determined by a differential expression not depending on the parameter, and they differ by the transmission condition. As in the case of the family $\BA_\a$, a certain family of Jacobi matrices is closely related to the operator. However, the properties of the two families are rather different and another type of phase transition occurs. Namely, for large values of $\a$ the operator $\BL_\a$ has many self-adjoint realizations, and the negative spectrum of each realization is discrete and unbounded from below.
The mechanism of this transition lies in an unusual breaking of the Shapiro -- Lopatinsky ellipticity condition in several points on the interface line, and the analysis of this situation involves a study of {\it a priori} estimates for some non-elliptic pseudodifferential operators. In the last section of the paper we briefly consider yet another family $\BM_\a$ of differential operators. It looks rather similar to the family $\BL_\a$, but some important details in the behaviour of the spectrum are quite different. Taken together, the families $\BA_\a$, $\BL_\a$, and $\BM_\a$ show that presence of the coupling parameter in the boundary condition may cause quite different types of the phase transition. It is tempting to develop a general scheme which would include all these examples as special cases. \section{Stating the problem. Preliminaries}\label{red} We study a family $\BL_\a$ of differential operators on the cylinder $\Omega= \mathbb R\times \mathbb S^1$ identified with the strip $ \mathbb R\times(0, 2\pi)$ with periodic boundary conditions for all functions involved. Further on, $x$ stands for the co-ordinate on $ \mathbb R$ and $y$ for the co-ordinate on $ \mathbb S^1$. The operator $\BL_\a$ is generated by the Laplacian $-\Delta U=-U''_{x^2}-U''_{y^2}$ and two conditions at $x=0$. The first condition is the continuity \begin{equation}\label{1.-1} U(0+,y)=U(0-,y)\ \left(=U(0,y)\right) \end{equation} and the second one is a `transmission condition' at $x=0$: \begin{equation}\label{1.0} U'_x(0+,y)-U'_x(0-,y)=i\a\left(U'_y(0,y)\cos y +(U(0,y)\cos y)'_y\right). \end{equation} In \eqref{1.0} $\a$ is a real parameter. The passage $\a\mapsto -\a$ corresponds to the change of variables $y\mapsto y+\pi$, which does not affect the spectrum. For this reason it is enough to consider $\a\ge0$. 
By using the Fourier expansion \begin{equation}\label{1.u} U=(2\pi)^{-1/2}\sum_{n\in \mathbb Z} u_n(x)e^{in y} \end{equation} (in short, $U\sim\{u_n\}$), we reduce the problem formally to an infinite system of ordinary differential operators on the real axis, \begin{equation}\label{1.1} -\Delta U\sim\{-u_n''+n^2u_n\},\;x\ne 0,\qquad n\in \mathbb Z, \end{equation} coupled by the conditions \begin{gather} u_n(0+)=u_n(0-)\ \left(=u_n(0)\right),\label{1.1x}\\ u_n'(0+)-u_n'(0-)=-\a\bigl((n+1/2)u_{n+1}(0)+(n-1/2)u_{n-1}(0)\bigr). \label{1.2} \end{gather} The operator $\BL_0$ is just the standard Laplacian on the cylinder $\Omega$, with the domain $H^2(\Omega)$. Thus, for $\a=0$ the above formal reduction of the partial differential operator is legitimate, the system decouples, and we get \begin{equation}\label{1.l0} \BL_0={\sum_{n\in \mathbb Z}}^\oplus\bigl(-\frac{d^2}{dx^2}+n^2\bigr). \end{equation} Here $-\frac{d^2}{dx^2}$ stands for the self-adjoint operator in $L^2( \mathbb R)$ with the domain $H^2( \mathbb R)$ and the symbol $\sum^\oplus$ denotes the orthogonal sum of operators. The expansion \eqref{1.l0} leads to the complete description of the spectrum $\sigma(\BL_0)$: it is absolutely continuous, fills the half-line $[0,\infty)$, and its multiplicity function is given by \begin{equation}\label{1.ac} \gm_{a.c.}(\l;\BL_0)=2+4[\sqrt\l\,], \qquad\forall \l\ge 0, \end{equation} where, as usual, $[\,\cdot\,]$ denotes the integer part of a real number. For $\a\ne0$ we must first specify in what sense the conditions \eqref{1.-1} and \eqref{1.0} are understood. Suppose that $U\in L^2(\Omega)$ is a weak solution of the equation $-\Delta U=F\in L^2$ in each semi-cylinder \[\Omega_\pm=\{(x,y)\in\Omega:\pm x>0\}.\] Take any $\L\in \mathbb C\setminus[0,\infty)$ and consider the function \begin{equation}\label{1.u0} U_0=(\BL_0-\L)^{-1}(F-\L U)\in H^2(\Omega).
\end{equation} The function $W=U-U_0$ belongs to $L^2(\Omega)$ and satisfies the equation $\Delta W+\L W=0$ in each semi-cylinder $\Omega_\pm$. Let $W_\pm$ stand for the restriction of $W$ to $\Omega_\pm$. The functions $W_\pm$ can be expanded in the Fourier series \begin{equation}\label{1A:Fourier} W_\pm(x,y)=\sum_{n} w_n^\pm e^{iny}e^{-|x|\sqrt{n^2-\L}}, \qquad \re\sqrt{n^2-\L}\ge0. \end{equation} Both series converge in $L^2(\Omega_\pm)$, and \[ \int_{\Omega_\pm}|W_\pm(x,y)|^2dxdy= \sum_n\frac{|w_n^\pm|^2}{2\re{\sqrt{n^2-\L}}}.\] Hence, $W_\pm\in L^2(\Omega_\pm)$ is equivalent to $\sum_{n} |w_n^\pm|^2(n^2+1)^{-\frac12}<\infty$. It follows that for each $x\in \mathbb R$ the series in \eqref{1A:Fourier} converge in $H^{-\frac12}( \mathbb S^1)$ and moreover, $W_\pm(x,\cdot)$ are continuous as functions of $x$ with values in $H^{-\frac12}( \mathbb S^1)$. The same is true for the function $U$, and this explains the meaning of the condition \eqref{1.-1}: namely, \begin{equation}\label{1.cont} U(0+,y)=U(0-,y) \ {\text{as distributions in }}\ H^{-\frac12}( \mathbb S^1). \end{equation} \vskip0.2cm Denote by $\mathcal M(\Omega)$ the class of all functions $U\in L^2(\Omega)$ which meet the following conditions. \noindent 1. The distributions $\Delta (U\res\Omega_\pm)$ are functions in $L^2(\Omega_\pm)$. \noindent 2. The condition \eqref{1.cont} is satisfied. For any $\L\notin[0,\infty)$ we also set \begin{equation} \mathcal M_\L(\Omega)=\bigl\{W\in\mathcal M(\Omega):\Delta W+\L W=0\ {\text{in }} \Omega_\pm\bigr\}.\nonumber \end{equation} The Fourier expansion of any function $W\in\mathcal M_\L(\Omega)$ has the form \begin{equation}\label{1B:Fourier} W(x,y)=\sum_{n} w_n e^{iny}e^{-|x|\sqrt{n^2-\L}}, \end{equation} that is, for the coefficients in \eqref{1A:Fourier} we have $w_n^+=w_n^-\; (=w_n)$ and thus the function $W(x,\cdot)$ is even in $x$.
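The reduction of the transmission condition \eqref{1.0} to the coupled system \eqref{1.2} can be sanity-checked numerically on a trigonometric polynomial; this is only an illustration of the mode bookkeeping, not part of the argument.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
n = np.arange(-N, N + 1)
# Random trigonometric polynomial U(0, y) = sum_n u_n e^{i n y}:
u = rng.standard_normal(2 * N + 1) + 1j * rng.standard_normal(2 * N + 1)

y = np.linspace(0.0, 2.0 * np.pi, 512, endpoint=False)
U = u @ np.exp(1j * np.outer(n, y))                # U(0, y)
Uy = (1j * n * u) @ np.exp(1j * np.outer(n, y))    # U'_y(0, y)
alpha = 0.7

# Right-hand side of the transmission condition (1.0); note that
#   i a (U'_y cos y + (U cos y)'_y) = i a (2 U'_y cos y - U sin y).
lhs = 1j * alpha * (2.0 * Uy * np.cos(y) - U * np.sin(y))

# The same function assembled mode by mode from (1.2):
#   jump_m = -a ((m + 1/2) u_{m+1}(0) + (m - 1/2) u_{m-1}(0)).
m = np.arange(-N - 1, N + 2)
upad = np.zeros(2 * N + 5, dtype=complex)          # u_q with zero guard modes
upad[2:-2] = u
jump = -alpha * ((m + 0.5) * upad[2:] + (m - 0.5) * upad[:-2])
rhs = jump @ np.exp(1j * np.outer(m, y))

print(np.max(np.abs(lhs - rhs)))                   # agreement to machine precision
```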
We also conclude that \begin{equation} W\in\mathcal M_\L(\Omega)\Longleftrightarrow\ W(0,\cdot)\in H^{-\frac12}( \mathbb S^1).\nonumber \end{equation} Let us recall that in terms of the Fourier coefficients $w_n$ the latter inclusion is equivalent to \begin{equation}\label{1.sob} \sum_n|w_n|^2(n^2+1)^{-\frac12}<\infty. \end{equation} Differentiation in \eqref{1B:Fourier} shows that for any $W\in\mathcal M_\L(\Omega)$ the derivatives $W'_y(x,\cdot),\;W'_x(x,\cdot)$ take values in the space $H^{-\frac32}( \mathbb S^1)$. The first of them, being an even function, is continuous in the topology of this space for all $x\in \mathbb R$. The second one is continuous in the topology of $H^{-\frac32}( \mathbb S^1)$ for $x\ge0$ and for $x\le0$ separately, and its jump across the circle $\{x=0\}$ is well defined as an element in $H^{-\frac32}( \mathbb S^1)$. The decomposition $U=U_0+W$, where $U_0$ is defined by \eqref{1.u0}, shows that the same is true for any $U\in\mathcal M(\Omega)$. In particular, this gives the precise meaning to both sides in \eqref{1.0} as distributions in $H^{-\frac32}( \mathbb S^1)$. Substituting the Fourier expansion \eqref{1B:Fourier} and its differentiated forms into \eqref{1.0}, we arrive at the system \eqref{1.1}, \eqref{1.1x}, \eqref{1.2} which is equivalent to the initial problem. \vskip0.2cm The following version of the Green formula is implied by the above argument.
\begin{lem}\label{green} For any $U\in \mathcal M(\Omega)$ and $V\in H^2(\Omega)$ (so that $V(0,\cdot)\in H^{\frac32}( \mathbb S^1)$) we have \begin{equation}\label{1.gr}\begin{split} \left(\int_{\Omega_+}+\int_{\Omega_-}\right)(\Delta U \overline V-U\overline{\Delta V})&dxdy\\= -\int_{ \mathbb S^1}\left(U'_x(0+,y)-U'_x(0-,y)\right)&\overline{V(0,y)}dy, \end{split}\end{equation} where the integrals on the left-hand side are understood in the sense of distributions on $\Omega_\pm$ and the integral on the right-hand side is understood in the sense of distributions on $ \mathbb S^1$. \end{lem} Denote by $\mathcal B$ the differential operator appearing in the condition \eqref{1.0}: \begin{equation}\label{1.b} \mathcal B u=i(u'_y \cos y+(u\cos y)'_y). \end{equation} The operator $\mathcal B$ is symmetric as an operator acting in the space $L^2( \mathbb S^1)$. The following useful equality, which is valid for $U\in\mathcal M(\Omega)$ satisfying \eqref{1.0} with an arbitrary $\a\ge0$ and any $ V\in H^2(\Omega)$, is a direct consequence of Lemma~\ref{green}: \begin{equation}\label{1.gr1}\begin{split} &\left(\int_{\Omega_+}+\int_{\Omega_-}\right)(\Delta U \overline V-U\overline{\Delta V})dxdy\\&= -\a\int_{ \mathbb S^1}U(0,y)\overline{\mathcal B V(0,y)}dy. \end{split} \end{equation} Indeed, substituting \eqref{1.0} into \eqref{1.gr} and integrating by parts, we arrive at \eqref{1.gr1}. \section{The problem of self-adjointness}\label{sad} \vskip0.2cm In order to study self-adjoint realizations of $\BL_\a$ for $\a>0$, we first of all introduce two sets, $\mathcal D_\a$ and $\mathcal D_\a^\bullet$, on which the operator is well defined. It is convenient to do this in terms of the expansion \eqref{1.u}.
\begin{defn}\label{dalpha} An element $U\sim\{u_n\}\in\mathcal M(\Omega)$ lies in $\mathcal D_\a$ if and only if $u_n\res \mathbb R_\pm\in H^2( \mathbb R_\pm)$ for all $n\in \mathbb Z$, the conditions \eqref{1.1x} and \eqref{1.2} are satisfied, and \begin{equation*} \sum_{n\in \mathbb Z}\int_ \mathbb R\bigl|-u''_n+n^2u_n\bigr|^2dx<\infty. \end{equation*} An element $U\in\mathcal D_\a$ belongs to $\mathcal D_\a^\bullet$, if the number of non-zero terms $\{u_n\}$ in the expansion of $U$ is finite. \end{defn} We denote \[\BL_\a=-\Delta\res\mathcal D_\a,\qquad \BL^\bullet_\a=-\Delta\res\mathcal D^\bullet_\a.\] \begin{lem}\label{l1} The operator $\BL^\bullet_\a$ is symmetric and \[\BL_\a=(\BL^\bullet_\a)^*.\] \end{lem} The proof is standard and we skip it. \begin{thm}\label{t1} 1) For $0\le\a\le 1$ the operator $\BL_\a$ is self-adjoint and, hence, $\BL^\bullet_\a$ is essentially self-adjoint. 2) For $\a> 1$ the operator $\BL_\a$ is non-self-adjoint, and the deficiency indices of $\BL^\bullet_\a$ are $(2,2)$. \end{thm} \begin{proof} We have to check whether the equation \begin{equation}\label{1.uu} \BL_\a W=(\BL^\bullet_\a)^*W=\L W \end{equation} with $\L\neq\overline\L$ has non-zero solutions $W\in \mathcal D_\a$. If $W$ is such a solution, then $W\in\mathcal M_\L(\Omega)$ and, by \eqref{1B:Fourier}, each component in the expansion \eqref{1.u} for $W$ can be written as $w_n(x)=w_n e^{-|x|\sqrt{n^2-\L}}$. The coefficients $w_n$ should satisfy conditions \eqref{1.2} that turn into \begin{equation}\label{1.eq} (n+1/2)w_{n+1}-2\a^{-1}w_n\sqrt{n^2-\L} +(n-1/2)w_{n-1}=0. \end{equation} The analysis of the system \eqref{1.eq} is similar to the reasoning in \cite{S3}, section 4, and is based upon the classical Birkhoff -- Adams theorem, see \cite{EL}, Theorem 8.36. The formulation of its leading case, which we need for the study of the operator $\BL_\a$ with $\a\neq1$, is also reproduced in \cite{S3}. 
This theorem deals with one-sided sequences ($n\in \mathbb N$ rather than $n\in \mathbb Z$ as in our case), and we have to analyze the behaviour of $w_n$ for $n\to+\infty$ and for $n\to-\infty$ separately. For $n\to+\infty$ we find from the theorem that for $\a\neq1$ the equation \eqref{1.eq} has two linearly independent solutions $\{w_n^\pm(+)\}$ such that \begin{equation}\label{1.bap} w_n^{\pm}(+)=(\l_+^{\pm})^n n^{-1/2}\bigl(1+O(n^{-1})\bigr), \qquad \l_+^\pm=\a^{-1}\pm\sqrt{\a^{-2}-1}. \end{equation} For $n\to-\infty$ we find in the same way that the system has two linearly independent solutions $\{w_n^{\pm}(-)\}$ such that \begin{equation}\label{1.bam} w_n^{\pm}(-)=(\l_-^\pm)^n |n|^{-1/2}\bigl(1+O(|n|^{-1})\bigr), \qquad \l_-^\pm=-\a^{-1}\pm\sqrt{\a^{-2}-1}. \end{equation} If $\a<1$, we conclude from the above asymptotic formulas that both for $n>0$ and for $n<0$ only one of the basic solutions decays as $|n|\to\infty$. Hence, the space of sequences $\{w_n\}$ satisfying \eqref{1.sob} (or, equivalently, such that $W\in\mathcal M_\L(\Omega)$) is no more than one-dimensional. Suppose that $\{w_n\}$ is such a sequence, and apply the following identity for solutions of recurrence equations of the type \[Q_{n+1}C_{n+1}+P_nC_n+Q_nC_{n-1}=0,\qquad n\in \mathbb Z,\] with $Q_n$ real, \begin{equation}\label{1.id} \sum_{n=-N}^N|C_n|^2\im P_n=-Q_{N+1}\im(C_{N+1}\overline{C_N}) -Q_{-N}\im(C_{-N-1}\overline{C_{-N}}). \end{equation} The proof is straightforward and we skip it; cf. (4.23) and (4.24) in \cite{S3}. Applying \eqref{1.id} to the equation \eqref{1.eq}, we obtain \[2\a^{-1}\sum_{n=-N}^N|w_n|^2\im\sqrt{n^2-\L}=(N+1/2) \im(w_{N+1}\overline{w_N}+w_{-N-1}\overline{w_{-N}}).\] By \eqref{1.bap}, \eqref{1.bam} the right-hand side vanishes as $N\to\infty$. Since for non-real $\L$ the sign of $\im\sqrt{n^2-\L}$ is negative if $\im\L>0$ and positive if $\im\L<0$, we conclude that $w_n=0$ for all $n\in \mathbb Z$. 
It follows that for $\a<1$ the operator $\BL_\a$ is self-adjoint. If $\a>1$, then $|\l^\pm_\pm|=1$ and by \eqref{1.bap}, \eqref{1.bam} any solution $\{w_n\}$ satisfies \eqref{1.sob}. This shows that for $\a>1$ the operator $\BL_\a$ is non-self-adjoint and the deficiency indices of $\BL^\bullet_\a$ are $(2,2)$. \vskip0.2cm Now, let $\a=1$. Then the case ($c_1$) of the Birkhoff -- Adams theorem applies, and the equation \eqref{1.eq} has two linearly independent solutions of the form \[ w_n^\pm\sim n^{\pm\sqrt{-\L}},\qquad n\to \infty\] and similarly for $n\to -\infty$. For any non-real $\L$ only one of these solutions may satisfy \eqref{1.sob}. Using again the identity \eqref{1.id}, we conclude that the equation \eqref{1.eq} has no non-zero solutions satisfying \eqref{1.sob}. Hence, the operator $\BL_1$ is self-adjoint. \end{proof} \vskip0.2cm \section{Using quadratic forms. Spectrum for $\a\le1$}\label{smallalpha} For small $\a$ the simplest way to study the spectrum of the operators $\BL_\a$ is to use quadratic forms. Our argument here follows the same line as in \cite{S3}. However, again, as in section \ref{red}, we have to take into account that the sequence $\{u_n\}$ is two-sided.
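As a side remark, the dichotomy established in Theorem \ref{t1} is visible already in the moduli of the roots of the limiting characteristic equation $\l^2-2\a^{-1}\l+1=0$ of the recurrence \eqref{1.eq}; a quick numerical check (a sketch, not part of the proof):

```python
import numpy as np

def char_roots(alpha):
    """Roots of l^2 - 2 l/alpha + 1 = 0, the characteristic equation of
    the recurrence (1.eq) in the limit |n| -> infinity (leading
    Birkhoff--Adams behaviour w_n ~ l^n n^{-1/2})."""
    return np.roots([1.0, -2.0 / alpha, 1.0])

for alpha in (0.5, 0.9, 1.5, 3.0):
    print(alpha, np.abs(char_roots(alpha)))

# alpha < 1: one root inside and one outside the unit circle, so on each
# half-axis exactly one solution decays and L_alpha is self-adjoint.
# alpha > 1: both roots lie on the unit circle, every solution of (1.eq)
# satisfies (1.sob), and the deficiency indices become (2, 2).
```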
Integrating by parts in the expression for $(\BL_\a U,U)$ over the semi-cylinders $\Omega_\pm$, we find for $U\in\mathcal D^\bullet_\a$: \begin{gather*} (\BL_\a U,U)=\int_\Omega|\nabla U|^2dxdy \\-\int_{ \mathbb S^1} U'_x(0-,y)\overline{U(0,y)}dy+ \int_{ \mathbb S^1} U'_x(0+,y)\overline{U(0,y)}dy.\end{gather*} Taking into account the condition \eqref{1.0}, we obtain \begin{gather*} (\BL_\a U,U)-\int_\Omega|\nabla U|^2dxdy \\=i\a\int_{ \mathbb S^1} \bigl(U'_y(0,y)\cos y+(U(0,y)\cos y)'_y\bigr) \overline{U(0,y)}dy\\ =i\a\int_{ \mathbb S^1} \bigl(U'_y(0,y)\overline{U(0,y)}-U(0,y)\overline{U'_y(0,y)} \bigr)\cos ydy\\ =-2\a\int_{ \mathbb S^1}\im\bigl(U'_y(0,y)\overline{U(0,y)}\bigr)\cos ydy.\end{gather*} In the representation \eqref{1.u} this turns into \begin{equation}\label{2.1} \bl_\a[U]:=(\BL_\a U,U)=\bl_0[U]-\a \bb[U] \end{equation} where \begin{gather} \bl_0[U]=\sum_{n\in \mathbb Z}\int_ \mathbb R\bigl(|u'_n|^2+n^2|u_n|^2\bigr)dx,\label{2.2}\\ \bb[U]=\sum_{n\in \mathbb Z} (2n-1)\re\bigl(u_n(0)\overline{u_{n-1}(0)}\bigr).\label{2.3} \end{gather} \vskip0.2cm Completing the set $\mathcal D_0$ in the metric $\bl_0[U]+\|U\|^2_{L^2(\Omega)}$, we obtain a set which we denote by $\gd$. On $\gd$ the quadratic form $\bl_0$ is well defined and closed, and the associated self-adjoint operator in $L^2(\Omega)$ is $\BL_0$. Along with $\gd$, we need its subspace of co-dimension one, \begin{equation*} \gd'=\bigl\{U\sim\{u_n\}\in\gd:u_0(0)=0\bigr\}. \end{equation*} \begin{lem}\label{t2} For any $U\in\gd'$ the following inequality is satisfied: \begin{equation}\label{2.3b} |\bb[U]|\le \bl_0[U]-\|u_0\|^2_{L^2( \mathbb R)}. \end{equation} \end{lem} \begin{proof} Denote by $\gd^+$ (by $\gd^-$) the subspace in $\gd$ formed by the elements $U\sim\{u_n\}$ all of whose components with $n\le0$ (with $n\ge0$) vanish. For $U\in\gd^\pm$ we have $\bb[U]=\pm\bb^\pm[U]$ where \begin{equation}\label{2.pm} \bb^\pm[U]=\sum_{n>1}(2n-1) \re\left(u_{\pm n}(0)\overline{u_{\pm(n-1)}(0)}\right).
\end{equation} The estimates for $\bb^+[U]$ and for $\bb^-[U]$ are identical and we carry them out for the `plus' sign. We derive from \eqref{2.pm} that \[ |\bb^+[U]|\le\sum_{n>1} (n-1/2)\bigl(|u_n(0)|^2+|u_{n-1}(0)|^2\bigr) \le \sum_{n\ge1}2n|u_n(0)|^2.\] Now to the $n$-th term in the last sum we apply the elementary inequality \[ 2\gamma|f(0)|^2\le\int_ \mathbb R\bigl(|f'|^2+\gamma^2|f|^2\bigr)dx, \qquad \forall f\in H^1( \mathbb R),\ \gamma>0,\] with $\gamma=n$. We obtain \begin{equation*} |\bb^+[U]|\le \sum_{n\ge1}\int_ \mathbb R \bigl(|u'_n|^2+n^2|u_n|^2\bigr)dx. \end{equation*} Together with the similar inequality for $\bb^-[U]$, this yields \eqref{2.3b}. \end{proof} It is not difficult to show that the factor $1$ in front of $\bl_0[U]$ on the right-hand side of \eqref{2.3b} cannot be improved. \vskip0.2cm With Lemma \ref{t2} at our disposal, it is easy to characterize the spectral properties of the operator $\BL_\a$ for $\a<1$. \begin{thm}\label{t3} Let $0<\a\le1$. Then \noindent {1)}\hskip3.5cm $\sigma_{ess}(\BL_\a)=\sigma(\BL_0)=[0,\infty)$. \noindent {2)} The negative spectrum of $\BL_\a$ consists of exactly one non-degenerate eigenvalue. If $\a<1$, then also \noindent{3)} \hskip1cm $\sigma_{a.c.}(\BL_\a)=\sigma_{a.c.}(\BL_0)=[0,\infty),\qquad \gm_{a.c.}(\BL_\a)=\gm_{a.c.}(\BL_0)$ \noindent(cf. \eqref{1.ac}). \end{thm} The proof of the statements {1)} and {3)} basically repeats the argument in \cite{S3}, section 9, and we skip it. To justify the statement {2)}, we first of all note that by Lemma \ref{t2}, for $\a<1$ the quadratic form $\bl_\a$, restricted to the domain $\gd'$, is positive definite and closed. Since $\dim\gd/\gd'=1$, the quadratic form $\bl_\a$, considered on the whole of $\gd$, is bounded from below and also closed. The corresponding self-adjoint operator is $\BL_\a$. For $\a=1$, the quadratic form $\bl_\a$ is only closable on $\gd$, and the operator $\BL_1$ corresponds to the closure of $\bl_1$. 
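The elementary inequality $2\gamma|f(0)|^2\le\int_{\mathbb R}\bigl(|f'|^2+\gamma^2|f|^2\bigr)dx$ used in the proof of Lemma \ref{t2} can be illustrated numerically; the extremal function $f(x)=e^{-\gamma|x|}$ turns it into an equality, consistent with the remark that the factor $1$ in \eqref{2.3b} cannot be improved (a sketch):

```python
import numpy as np

def quad_form(f, df, gamma, x):
    """Trapezoid-rule value of int (|f'|^2 + gamma^2 |f|^2) dx."""
    integrand = np.abs(df)**2 + gamma**2 * np.abs(f)**2
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))

x = np.linspace(-15.0, 15.0, 60001)   # symmetric grid; x[30000] == 0
gamma = 2.0

# A generic test function (a Gaussian): strict inequality.
f = np.exp(-x**2)
df = -2.0 * x * f
print(2.0 * gamma * abs(f[30000])**2, quad_form(f, df, gamma, x))

# The extremal function f(x) = exp(-gamma |x|): both sides equal 2*gamma.
g = np.exp(-gamma * np.abs(x))
dg = -gamma * np.sign(x) * g
print(2.0 * gamma * abs(g[30000])**2, quad_form(g, dg, gamma, x))
```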
This reasoning shows that for $0<\a\le 1$ the number of negative eigenvalues of $\BL_\a$ is no more than one. In order to show that it is exactly one, it is enough to find an element $U\in\gd$ such that $\bl_\a[U]<0$. To this end, we take $U\sim\{u_n\}$ with only two non-zero components $u_0,u_1$; then the desired inequality is \begin{equation*} \int_ \mathbb R(|u'_0|^2+|u'_1|^2+|u_1|^2)dx<\a\re(u_1(0)\overline{u_0(0)}). \end{equation*} It is satisfied, for instance, if we take $u_1(x)=e^{-|x|}$ and $u_0(x)=\varepsilon^{-1/2}e^{-\varepsilon|x|}$, with $\varepsilon$ sufficiently small. \begin{rem}\label{2.unbb} For $\a>1$ the quadratic form $\bl_\a$ is unbounded from below. We have to show that for any $\a>1$ and any $M>0$ there exists an element $U\in\gd$ such that \begin{equation}\label{2.unbb1} \bl_\a[U]+M\|U\|^2_{L^2(\Omega)}<0. \end{equation} Choose a number $N\in \mathbb N$ and take $U\sim\{u_n\}$, where $u_N(x)=e^{-|x|\sqrt{N^2+M}}$, $u_{N-1}(x)=e^{-|x|\sqrt{(N-1)^2+M}}$, and all the other components $u_n$ in the expansion \eqref{1.u} are zeroes. Then \[\begin{split} &\int_ \mathbb R\left(|u'_N|^2+(N^2+M)|u_N|^2\right)dx=2\sqrt{N^2+M},\\ &\int_ \mathbb R\left(|u'_{N-1}|^2+((N-1)^2+M)|u_{N-1}|^2\right)dx=2\sqrt{(N-1)^2+M}, \end{split}\] and \[ \bl_\a[U]+M\|U\|^2_{L^2(\Omega)}=2(\sqrt{N^2+M}+\sqrt{(N-1)^2+M})-\a(2N-1).\] It is clear that for any $\a>1$ the last expression is negative, provided that $N$ is taken large enough, and we are done.\end{rem} \section{The case $\a>1$. Singular solutions}\label{SingSol} In order to reach a better understanding of self-adjoint realizations of the operator $\BL_\a$ for $\a>1$, we describe here the behaviour of the singular solutions found in section \ref{sad}. For $\a>1$ the asymptotic expressions for $w_n^\pm(\pm)$ as in \eqref{1.bap} and \eqref{1.bam} can be re-written in a simplified form.
Indeed, set \begin{equation*} y(\a)=\arccos\a^{-1}, \end{equation*} then \[\l^+_+=-\l^-_-=e^{iy(\a)},\qquad \l^+_-=-\l^-_+=e^{-iy(\a)}.\] Therefore, \begin{equation}\label{aux.yy}\begin{split} &w_n^\pm(+)=e^{\pm iny(\a)}n^{-\frac12}(1+O(n^{-1})),\ n\to+\infty;\\ &w_n^\pm(-)=(-1)^ne^{\mp iny(\a)}|n|^{-\frac12}(1+O(|n|^{-1})),\ n\to-\infty. \end{split}\end{equation} By \eqref{1.u}, \eqref{1B:Fourier}, and \eqref{aux.yy}, each $L^2$-solution of the equation \eqref{1.uu} can be represented as \[W(x,y)=W(x,y;+)+K_0e^{-|x|\sqrt{-\L}}+W(x,y;-),\] where $K_0$ is a constant, $W(x,y;+)$ is a certain linear combination of the functions \begin{equation}\label{G1.2a}\begin{split} W^\pm(x,y;+)=&\sum\limits_{n>0} w_n^\pm(+)n^{-\frac12}e^{iny}e^{-|x|\sqrt{n^2-\L}}\\ = &\sum\limits_{n>0}e^{in(y\pm y(\a))}n^{-\frac12}e^{-|x|\sqrt{n^2-\L}} \bigl(1+O(n^{-1})\bigr), \end{split}\end{equation} and $W(x,y;-)$ is a linear combination of the functions \begin{equation}\label{G1.2aa}\begin{split} W^\pm(x,y;-)=&\sum\limits_{n<0} w_n^\pm(-)|n|^{-\frac12}e^{iny}e^{-|x|\sqrt{n^2-\L}}\\ = &\sum\limits_{n<0}e^{in(y\pm y(\a)-\pi)}|n|^{-\frac12}e^{-|x|\sqrt{n^2-\L}} \bigl(1+O(|n|^{-1})\bigr). \end{split}\end{equation} Note that $\sqrt{n^2-\L}=|n|+O(n^{-1})$, and hence \[e^{-|x|\sqrt{n^2-\L}}=e^{-|x||n|}(1+|x|O(|n|^{-1})).\] Denote by $V^\pm(x,y;\pm)$ the functions obtained by replacing the factors $e^{-|x|\sqrt{n^2-\L}}$ by $e^{-|x||n|}$ in each term of the sums in \eqref{G1.2a} and \eqref{G1.2aa} and dropping the terms $O(|n|^{-1})$. The error is a bounded function rapidly decaying as $|x|\to\infty$. We have \[V^\pm(x,y;+)=\sum_{n>0}n^{-\frac12}e^{-n\left(|x|-i(y\pm y(\a))\right)}.\] The behaviour of such sums as $|x|-i(y\pm y(\a))\to 0$ is well known. Say, it can be easily derived from the equations (13.11) in Chapter II of the book \cite{Z}. 
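As a quick numerical illustration (an addition to the text, not part of the original argument), one can check for real $z>0$ that the model sum $\sum_{n\ge1}n^{-\frac12}e^{-nz}$ indeed blows up like $\Gamma(\tfrac12)\,z^{-\frac12}=\sqrt{\pi/z}$ as $z\to 0^+$:

```python
from math import exp, sqrt, pi

def v_plus(z, nmax=None):
    """Partial sum of  sum_{n>=1} n^{-1/2} e^{-n z}  for real z > 0."""
    if nmax is None:
        nmax = int(40.0 / z) + 1  # terms beyond n ~ 40/z are negligible
    return sum(exp(-n * z) / sqrt(n) for n in range(1, nmax + 1))

# As z -> 0+, v_plus(z) * sqrt(z) approaches Gamma(1/2) = sqrt(pi) ~ 1.7725,
# consistent with the z^{-1/2} singularity of V^{\pm}.
for z in (1e-2, 1e-3, 1e-4):
    print(z, v_plus(z) * sqrt(z))
```

The remaining term stays bounded (its limit is $\zeta(1/2)$), in agreement with the $O(1)$ remainder in the singular expansion.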
Denote \[{z_\pm}^+=|x|-i(y\pm y(\a)),\qquad {z_\pm}^-=|x|-i(y\pm y(\a)-\pi),\] then \[ V^\pm(x,y;+)=C\left({z_\pm}^+\right)^{-\frac12}+O(1),\qquad {z_\pm}^+\to 0,\] with an appropriate choice of the branch of the square root, and some constant $C$. In the same way, \[ V^\pm(x,y;-)=C\left({z_\pm}^-\right)^{-\frac12}+O(1),\qquad {z_\pm}^-\to 0.\] The reasoning above gives the following description of the singular solutions $W(x,y)$. These solutions also depend on the choice of $\L$, but the leading terms of their singularities do not. For this reason we do not reflect the dependence on $\L$ in our notations. \begin{prop}\label{SingSolu} The singular solutions $W(x,y)$ of the equation \eqref{1.uu} have singularities at the points $(0,y_j)\in\Omega$, where $y_j,\ j=1,2,3,4,$ are the points $\pm y(\a)$ and $\pm y(\a)+\pi(\!\!\!\mod 2\pi)$. The singularity at each point is of the form \begin{equation*} W(x,y) \sim C_j\bigl(|x|-i(y-y_j)\bigr)^{-\frac12}+O(1). \end{equation*} \end{prop} In order to explain the role of these four singular points, let us check the Shapiro -- Lopatinsky criterion for the ellipticity of the boundary-value problem $-\Delta U=F$ under the conditions \eqref{1.-1} and \eqref{1.0}. In our case this criterion determines the point $y\in \mathbb S^1$ as regular if and only if the problem \begin{equation*}\label{G7.Lopat} -\phi''(t)+\phi(t)=0,\; t\ne0,\qquad \phi'(0+)-\phi'(0-)=\pm 2\a \cos y\,\phi(0) \end{equation*} has only trivial bounded continuous solutions on the line $t\in(-\infty,\infty)$. This requirement is violated exactly at the points $y=y_j,\ j=1,2,3,4,$ where $\a|\cos y|=1$, the solution being $\phi(t)=e^{-|t|}$. On the other hand, for $\a\in[0,1)$ the Shapiro -- Lopatinsky condition is satisfied at all transition points. Therefore every weak solution of the equation $-\Delta U=\L U$ satisfying \eqref{1.0} belongs to $H^2$ in both half-cylinders $\Omega_\pm$, so it is non-singular, which explains the self-adjointness. \section{The case $\a>1$.
Spectral properties}\label{BigAlfaspec} For $\a>1$, the main technical difficulty stems from the fact that Definition~\ref{dalpha} does not describe the class $\mathcal D_\a$ in terms of standard function spaces on $\Omega$. For this reason, our argument here is rather lengthy. Let us fix some self-adjoint extension $\hat\BL_\a$ of the operator $\BL_\a^\bullet$. The spectral properties discussed in this section do not depend on the choice of the extension. We start by establishing a formula for the difference of the resolvents of the operators $\hat\BL_\a$ and $\BL_0$. The method for finding expressions of this kind is widely used; it was proposed by Birman in \cite{Bir}. First, let $\L$ be a non-real number. It belongs to the resolvent sets of both operators $\hat\BL_\a$ and $\BL_0$, and we denote by $\hat\BR_\a$, $\BR_0$ the corresponding resolvents. Take some $F,G\in L^2(\Omega)$, and consider the sesquilinear form \begin{equation}\label{G7.form1} \br[F,G]=((\hat\BR_\a-\BR_0)F,G)=(\hat\BR_\a F,G)-(F,\BR_0^*G). \end{equation} Denote \[\hat\BR_\a F=U,\qquad \BR_0^*G=V,\] then $U\in\mathcal D_\a$ and $V\in H^2(\Omega)$. Thus the sesquilinear form \eqref{G7.form1} can be re-written as \begin{equation*} (U, (\BL_0-\overline{\L}) V)-((\hat\BL_\a-\L) U, V)= \left(\int_{\Omega_+}+\int_{\Omega_-}\right) (\Delta U\overline{V}-U\overline{\Delta V})dxdy. \end{equation*} Applying \eqref{1.gr1}, we arrive at \begin{equation*} \br{[F,G]}=\a\int_{ \mathbb S^1} U(0,y) \overline{\mathcal B V(0,y)}dy, \end{equation*} where $\mathcal B$ is the operator \eqref{1.b}. Hence, the latter equality gives the representation of the operator $\hat\BR_\a-\BR_0$ as \begin{equation}\label{G7.operator4} \hat\BR_\a-\BR_0= 2\a S^*T , \qquad T=\Gamma\hat\BR_\a,\ S=\mathcal B\Gamma\BR_0^*, \end{equation} where $\Gamma$ stands for the operator of restriction of functions on $\Omega$ to the circle $x=0$.
The operator $T$ is bounded from $L^2(\Omega)$ to $H^{-\frac12}( \mathbb S^1)$, and $S$ is bounded from $L^2(\Omega)$ to $H^{\frac12}( \mathbb S^1)$, so that $S^*$ is bounded from $H^{-\frac12}( \mathbb S^1)$ to $L^2(\Omega)$. Our next step is to derive a pseudo-differential equation for the distribution $w=\Gamma W$, where \begin{equation}\label{5.xx} W=U-V_1:=\hat{\BR}_\a F-\BR_0 F,\qquad F\in L^2(\Omega). \end{equation} Evidently, $W\in\mathcal M_\L(\Omega)$ and thus $w\in H^{-\frac12}( \mathbb S^1)$. Below we denote by $\mathcal A$ the operator $-\frac{d^2}{dy^2}$ in $L^2( \mathbb S^1)$, extended to distributions on $ \mathbb S^1$. It follows from the representation \eqref{1B:Fourier} that \[W'_x(0+,y)-W'_x(0-,y)=-2\sum_nw_n\sqrt{n^2-\L}e^{iny}= -2(\mathcal A-\L)^{\frac12}w(y).\] Now, taking into account the transmission conditions for $U$ and for $V_1$, we find that \[W'_x(0+,y)-W'_x(0-,y)-\a \mathcal B W(0,y)=\a \mathcal B V_1(0,y),\] or \begin{equation}\label{5.eq} \left(2(\mathcal A-\L)^{\frac12}+\a \mathcal B\right)w=-\a \mathcal B\Gamma \BR_0F\in H^{\frac12}( \mathbb S^1). \end{equation} \vskip0.2cm The operator $\hat\BR_\a-\BR_0$ is, of course, bounded. We are going to show that, actually, it is compact. The proof is based upon the fact that the operator $T$ in \eqref{G7.operator4} acts from $L^2(\Omega)$ not only into $H^{-\frac12}( \mathbb S^1)$ but into a smaller space, $H^{-\epsilon}( \mathbb S^1)$, for any $\epsilon>0$. To show this, we need an {\it a priori} estimate for the equation \eqref{5.eq}. This equation is elliptic for $\a<1$, but for $\a>1$, which is the case we are dealing with, it is degenerate, so some more effort is needed. 
\begin{lem}\label{G7.lemma1} For any $\epsilon>0$ there exist constants $C,C'$ such that for any $w\in H^{-\frac12}( \mathbb S^1)$ \begin{equation}\label{G7.apriori1} \|w\|_{H^{-\epsilon}( \mathbb S^1)}\le C\|2(\mathcal A-\L)^{\frac12}w+\a \mathcal B w\|_{H^{\frac12}( \mathbb S^1)}+ C'\|w\|_{H^{-\frac12}( \mathbb S^1)}, \end{equation} provided that the first term on the right-hand side of \eqref{G7.apriori1} is finite. \end{lem} \begin{proof} Denote by $P_\pm$ the Riesz projections, \[ P_+f=\pi^{-1}\sum_{k\ge 0} (f,e^{iky})e^{iky},\qquad P_-f=\pi^{-1}\sum_{k< 0} (f,e^{iky})e^{iky}.\] Here the sums are understood in the sense of distributions; in particular, if $f\in H^{s}( \mathbb S^1), \; s\in \mathbb R$, both series converge in $H^{s}( \mathbb S^1)$. The operators $P_\pm$ differ by smoothing operators from pseudodifferential operators on the circle with symbols \[ p_{+}(y,\eta)=\begin{cases} 1&{\text{if}}\ \eta>0,\\ 0&{\text{if}}\ \eta<0;\end{cases}\qquad p_-(y,\eta)=1-p_{+}(y,\eta),\] see the discussion in \cite{Agr} about the Fourier series representation of pseudodifferential operators on the circle. For $w\in H^s( \mathbb S^1)$ we denote by $w_\pm$ the distributions $w_\pm=P_\pm w$. The operator $(\mathcal A-\L)^{\frac12}$ is, up to a smoothing term, the pseudodifferential operator with symbol $(\eta^2-\L)^{\frac12}=|\eta|+O(|\eta|^{-1})$. As it follows from the composition formulas for pseudodifferential operators in dimension one, the operators in \eqref{G7.apriori1} commute or almost commute with $P_\pm$: \[(\mathcal A-\L)^{\frac12}P_\pm=P_\pm(\mathcal A-\L)^{\frac12},\qquad \mathcal B P_\pm=P_\pm \mathcal B +K,\] with $K$ being a smoothing operator. 
Thus, up to an error which is an operator of order $-1$, the operator $(\mathcal A-\L)^{\frac12}$ acts on the components $w_\pm$ as differentiation, with appropriate coefficients: $$\|(\mathcal A-\L)^{\frac12}w_\pm \mp iw_\pm'\|_{H^{\frac12}( \mathbb S^1)}\le C\|w_\pm\|_{H^{-\frac12}( \mathbb S^1)}.$$ Therefore, \eqref{G7.apriori1} will follow as soon as we prove that \begin{equation}\label{G7.separated} \|w_\pm\|_{H^{-\epsilon}( \mathbb S^1)}\le C\|\pm w_\pm'+\frac{\a}{2i} \mathcal B w_\pm\|_{H^{\frac12}( \mathbb S^1)}+C'\|w_\pm\|_{H^{-\frac12}( \mathbb S^1)}. \end{equation} The estimate \eqref{G7.separated}, even with $-\epsilon$ replaced by $3/2$ on the left-hand side, would follow automatically if the operators $\pm 2i \partial_y +\a \mathcal B$ were elliptic for both signs $\pm$. This is the case for $|\a|<1$. However, for $|\a|\ge1$ these operators have points of degeneracy of ellipticity, i.e. the points where the principal symbols $(\pm 1+ \a \cos y) \eta$ vanish. Note that these are exactly the points where the singularities of the singular solutions are located, see section \ref{SingSol}. For such degenerate operators considering the principal symbol is not sufficient for getting {\it a priori} estimates, so the influence of lower order terms in $\mathcal B$ must be taken into account. We concentrate on the case of the `minus' sign in \eqref{G7.separated}. Let us denote $h(y)=\a\cos y$ and set \[u=-w_-'+\frac{\a}{2i}\mathcal B w_-=(h(y)-1)w_-'+\frac12 h'(y)w_-.\] We also set $g=(h(y)-1)^\frac12 w_-$, with a properly chosen branch of the square root. Note that $g'=(h(y)-1)^{-\frac12}u$. Our next task is to derive an estimate of $g$ in terms of $u$, assuming that $u \in H^{\frac12}( \mathbb S^1)$. \vskip0.2cm The latter assumption on $u$ implies that the function $(h(y)-1)^{-\frac12}u$ belongs to the space $H^{-\d}( \mathbb S^1)$ for an arbitrarily small $\d>0$, say $\d<1/2$.
To justify the above statement, we must show that \begin{equation}\label{5.functional} \left|\int_{ \mathbb S^1}(h(y)-1)^{-\frac12}u(y)\zeta(y)dy\right|\le C \|u\|_{H^{\frac12}( \mathbb S^1)}\|\zeta\|_{H^\d( \mathbb S^1)},\qquad \forall\zeta\in H^\d( \mathbb S^1). \end{equation} But this follows from the H\"older inequality, since $|h-1|^{-\frac12}\in L^r( \mathbb S^1)$ for any $r<2$, and by the embedding theorem $u\in L^q( \mathbb S^1)$ for any $q<\infty$ and $\zeta\in L^{\frac{2}{1-2\d}}( \mathbb S^1)$. It follows from \eqref{5.functional} that $$\| g'\|_{H^{-\d}( \mathbb S^1)}=\|(h(y)-1)^{-\frac12}u\|_{H^{-\d}( \mathbb S^1)}\le C\|u\|_{H^{\frac12}( \mathbb S^1)}.$$ Therefore, the function $g=(h(y)-1)^\frac12 w_-$ lies in $H^{1-\d}( \mathbb S^1)$ and satisfies the estimate $$\|g\|_{H^{1-\d}( \mathbb S^1)}\le C \|u\|_{H^{\frac12}( \mathbb S^1)}+ C'\|g\|_{H^{-N}( \mathbb S^1)},$$ with $N$ being arbitrarily large. By the definition of $g$, we have $w_-=(h(y)-1)^{-\frac12}g$. An estimate, similar to \eqref{5.functional} (even a simpler one, since $g\in L^\infty( \mathbb S^1)$), shows that $w_-$ belongs to $H^{-\epsilon}( \mathbb S^1)$, with the required estimate. \end{proof} The estimate, just proved, enables us to establish the compactness of the difference of resolvents $\hat\BR_\a-\BR_0$ and of several related operators and prove spectral estimates. \begin{prop}\label{G7.Prop} The operator $\hat\BR_\a-\BR_0$ is compact, moreover for its singular numbers $s_n(\hat{\BR}_\a-\BR_0)$ the estimate \begin{equation}\label{G7.Prop.Differ} s_n(\hat\BR_\a-\BR_0)=O(n^{-\frac12+\epsilon}) \end{equation} holds for any $\epsilon>0$. Further on, \begin{equation}\label{G7.Prop.DifferR} s_n((\hat\BR_\a-\BR_0)\BR_0)=O(n^{-\frac52+\epsilon}),\qquad s_n(\BR_0(\hat\BR_\a-\BR_0))=O(n^{-\frac52+\epsilon}). 
\end{equation} \end{prop} \begin{proof} It follows from the factorization \eqref{G7.operator4} that \[s_n(\hat\BR_\a-\BR_0)\le Cs_n(T),\] where we have to consider the operator $T$ as acting from $L^2(\Omega)$ to $H^{-\frac12}( \mathbb S^1)$. For $F\in L^2(\Omega)$ we define the function $W$ as in \eqref{5.xx} and take $w=W(0,\cdot)$, then $$TF=\Gamma\hat{\BR}_\a F=w+\Gamma\BR_0F.$$ The operator $\Gamma\BR_0$ acts from $L^2(\Omega)$ to $H^{\frac32}( \mathbb S^1)$, and the distribution $w$ satisfies the equation \eqref{5.eq}, whose right-hand side belongs to $H^{\frac12}( \mathbb S^1)$. Lemma \ref{G7.lemma1} applies and gives $w\in H^{-\epsilon}( \mathbb S^1)$. It follows that the operator $T$ is bounded as acting from $L^2(\Omega)$ to $H^{-\epsilon}( \mathbb S^1)$, and therefore, the singular numbers of the operator $T:L^2(\Omega)\to H^{-\frac12}( \mathbb S^1)$ are controlled by those of the embedding $H^{-\epsilon}( \mathbb S^1)\to H^{-\frac12}( \mathbb S^1)$. The latter are of the order $O(n^{-\frac12+\epsilon})$, whence the required estimate \eqref{G7.Prop.Differ}. \vskip0.2cm Further on, we factorize the operator $\BR_0(\hat\BR_\a-\BR_0)$ as \[\BR_0(\hat\BR_\a-\BR_0)=2\a\BR_0 S^*T.\] Since we already know the singular numbers estimate for the operator $T:L^2(\Omega)\to H^{-\frac12}( \mathbb S^1)$, it is sufficient for us to consider the operator $\BR_0 S^*$ as acting between the spaces $H^{-\frac12}( \mathbb S^1)$ and $L^2(\Omega)$. It is more convenient to deal with the adjoint operator \begin{equation}\nonumber S\BR_0^*=\mathcal B\Gamma{(\BR_0^*)}^2: L^2(\Omega)\to H^{\frac12}( \mathbb S^1). \end{equation} This operator is bounded as acting from $L^2(\Omega)$ to $H^{\frac52}( \mathbb S^1)$. Hence, the singular numbers of the same operator but considered as acting between the spaces $L^2(\Omega)$ and $H^{\frac12}( \mathbb S^1)$ are controlled by those of the embedding operator $H^{\frac52}( \mathbb S^1)\to H^{\frac12}( \mathbb S^1)$. 
The latter are of the order $O(n^{-2})$. This, together with the estimate for $T$, proves the second estimate in \eqref{G7.Prop.DifferR}. The first estimate in \eqref{G7.Prop.DifferR} follows from the second one by passing to adjoint operators. \end{proof} Now we arrive at our main result on the spectrum of the operator $\hat{\BL}_\a$, $\a>1$. \begin{thm}\label{G7:theorem} For $\a>1$ the spectrum of the operator $\hat{\BL}_\a$ consists of the essential spectrum filling the semi-axis $\l\ge0$ and of eigenvalues below the point $0$. The set of eigenvalues below the essential spectrum is unbounded from below, may have only $0$ and $-\infty$ as limit points, and for the counting function $n(t)=\#\{\l\in \sigma_{\mathrm{disc}}(\hat{\BL}_\a),\l\in(-t,-t_0)\}$, with any fixed $t_0>0$, the following estimate holds: \begin{equation}\label{G7:counting function} n(t)=O(t^{2+\epsilon_1})\quad\text{for any } \epsilon_1>0. \end{equation} The absolutely continuous spectrum of $\hat{\BL}_\a$ fills the half-line $\l\ge0$ and its multiplicity function coincides with that of $\BL_0$. \end{thm} \begin{rem} The estimate \eqref{G7:counting function} is rather rough. The authors believe that a more detailed analysis, based upon a further study of the degenerate equation \eqref{5.eq}, would show that the counting function has the asymptotics $n(t)\sim C t^{\frac12}$ as $t\to \infty$. Moreover, we think that the negative eigenvalues do not have $0$ as their limit point. \end{rem} \begin{proof} First, we note that due to the Weyl theorem the essential spectrum of the operators $\hat\BR_\a$ and $\BR_0$ is the same; therefore the essential spectrum of $\hat\BL_\a$ coincides with that of $\BL_0$, so it is the half-line $[0,\infty)$. Thus, the spectrum of $\hat\BL_\a$ below $0$ may consist only of eigenvalues, with possible accumulation points at $0$ and $-\infty$ only.
The latter point must be an accumulation point for eigenvalues since the operator $\hat\BL_\a$ is not semi-bounded from below, see Remark~\ref{2.unbb}. The discreteness of the negative spectrum implies that there are real regular points of the operator $\hat{\BL}_\a$: these are all the points below $0$ which are not eigenvalues. We fix such a regular $\L<0$ and consider the resolvents $\hat\BR_\a,\BR_0$ at this point. Then the above construction of the operator $\hat\BR_\a-\BR_0$ and the estimate for its eigenvalues can be repeated, this time for the chosen real $\L$. The spectrum of $\BR_0$ coincides with the interval $[0,-\L^{-1}]$, and \begin{equation*} \hat\BR_\a=\BR_0+(\hat\BR_\a-\BR_0). \end{equation*} The operator $\BR_0$ is non-negative; therefore, for any $\mu<0$ the number of eigenvalues of $\hat\BR_\a$ (counting multiplicities) in $(-\infty,\mu)$ is not greater than the number of eigenvalues of $\hat\BR_\a-\BR_0$ in the same interval. The latter quantity is estimated by means of the eigenvalue bound \eqref{G7.Prop.Differ}, which under an appropriate choice of $\epsilon=\epsilon(\epsilon_1)$ leads to \eqref{G7:counting function}, with $t_0=-\L$ and $t=-(\mu^{-1}+\L)$. In order to justify the statement on the absolutely continuous spectrum, let us consider the difference $\hat\BR_\a^3-\BR_0^3.$ We have \begin{gather*} \hat\BR_\a^3-\BR_0^3= (\hat\BR_\a-\BR_0)^3+\hat\BR_\a\BR_0(\hat\BR_\a-\BR_0)\\ +\hat\BR_\a(\hat\BR_\a-\BR_0)\BR_0+\BR_0(\hat\BR_\a-\BR_0)^2+(\hat\BR_\a-\BR_0)\BR_0^2, \end{gather*} and due to the estimates \eqref{G7.Prop.Differ} and \eqref{G7.Prop.DifferR} each term is trace class. By Kato's theorem, the absolutely continuous parts of the operators $\hat{\BL}_\a$ and ${\BL}_0$ are unitarily equivalent. \end{proof} \section{An alternative model}\label{alt} Here we briefly describe an alternative model, where a slight change in the setting leads to some major changes in the spectral behaviour.
The family $\BM_\a$ of differential operators acts on the strip $\Omega'= \mathbb R\times(0,\pi)$ and is generated by the Laplacian $-\Delta U=-U''_{x^2}-U''_{y^2}$, the Dirichlet condition $U(x,0)=U(x,\pi)=0$, and two additional conditions at $x=0$: \begin{equation*} \begin{split} U(0+,y)&=U(0-,y)\ (=U(0,y)),\\ U'_x(0+,y)-U'_x(0-,y)&=-i\a\left(U'_y(0,y)\sin y +(U(0,y)\sin y)'_y\right),\end{split} \end{equation*} cf. \eqref{1.-1}, \eqref{1.0}. \vskip0.2cm The Fourier expansion for this case has the form \[U=\sum_{n=1}^\infty u_n(x)\varphi_n (y),\qquad \varphi_n(y)=\sqrt{\frac{2}{\pi}}\sin ny\] (in short, $U\sim\{u_n\}$). The equation and the boundary and transmission conditions reduce to an infinite system of ordinary differential equations on the real axis, coupled by the conditions at $x=0$: \begin{equation}\nonumber -\Delta U\sim\{-u_n''+n^2u_n\},\qquad n\in \mathbb N; \end{equation} each $u_n$ is continuous at $x=0$; \begin{equation}\nonumber u_n'(0+)-u_n'(0-)=i\a\bigl((n+1/2)u_{n+1}(0)-(n-1/2)u_{n-1}(0)\bigr), \end{equation} with $u_0$ taken to be identically zero. For $\a=0$ the system decouples, and we get an analogue of \eqref{1.l0}, but this time with summation over $n\in \mathbb N$. From here we derive that the spectrum $\sigma(\BM_0)$ is absolutely continuous, fills the half-line $[1,\infty)$, and its multiplicity function is given by \begin{equation}\label{1A.ac} \gm_{a.c.}(\l;\BM_0)=2[\sqrt{\l}], \qquad\forall \l\ge 1. \end{equation} It is these two differences with $\BL_0$, the sequence of $u_n$ being one-sided and the spectrum of the unperturbed problem starting at $1$ rather than at $0$, that lead to the changes in the spectral properties of the perturbed operator. \vskip0.2cm The study of the self-adjointness of $\BM_\a$ for $\a>0$ follows the same line as for the operators $\BL_\a$ in section \ref{sad}. It turns out that the operator $\BM_\a$, considered on the natural domain (cf. Definition \ref{dalpha}), is self-adjoint for $\a\le 1$.
If $\a>1$, the operator has a one-parameter family $\hat\BM_\a$ of self-adjoint realizations. The singular solutions, which define these realizations by von Neumann's scheme, have two singular points $(0,y^{\pm})$, with singularities of the order $C(|x|+i(y-y^{\pm}))^{-\frac12}$. The points $y^\pm$ are the solutions of the equation $\a\sin y=1$; these are exactly the points where the Shapiro -- Lopatinsky condition is violated. Similarly to the cylinder case, the spectral analysis of the operator $\BM_\a$ for $0<\a< 1$ is based upon considering the quadratic forms. The quadratic form for $\BM_\a$ is \begin{equation}\nonumber \bm_\a[U]:=(\BM_\a U,U)=\bm_0[U]-\a \bb[U] \end{equation} where \begin{gather*} \bm_0[U]=\sum_{n\in \mathbb N}\int_ \mathbb R\bigl(|u'_n|^2+n^2|u_n|^2\bigr)dx,\\ \bb[U]=\sum_{n\ge 2}(2n-1)\im\bigl(u_n(0)\overline{u_{n-1}(0)}\bigr), \end{gather*} cf. \eqref{2.1}, \eqref{2.2} and \eqref{2.3}. The quadratic form $\bm_0$ is positive definite and closed on its natural domain, which we again denote by $\gd$. The associated self-adjoint operator in $L^2(\Omega')$ is $\BM_0$. The inequality \begin{equation}\label{2A.3a} |\bb[U]|\le \bm_0[U], \qquad U\in \gd, \end{equation} is checked in the same way as \eqref{2.3b}, and this time no second term as in \eqref{2.3b} appears. The constant factor $1$ in the estimate \eqref{2A.3a} is sharp. Hence, for $\a<1$ the quadratic form $\bm_\a$ is positive definite and closed on $\gd$. The corresponding self-adjoint operator in $L^2(\Omega')$ is $\BM_\a$. It is not difficult to show that for $\a>1$ the quadratic form $\bm_\a$ is unbounded from below. \bigskip We pass now to the description of the spectrum of $\BM_\a$. It is here where the differences with $\BL_\a$ manifest themselves, cf. \thmref{t3}. \begin{thm}\label{t3A} Let $0<\a<1$. Then \noindent 1)\hskip3.5cm $\sigma_{ess}(\BM_\a)=\sigma(\BM_0)=[1,\infty)$.
\noindent 2)\hskip1.3cm $\sigma_{a.c.}(\BM_\a)=\sigma_{a.c.}(\BM_0)=[1,\infty),\qquad \gm_{a.c.}(\BM_\a)=\gm_{a.c.}(\BM_0)$ \noindent (cf. \eqref{1A.ac}). \noindent 3) The spectrum of $\BM_\a$ below the threshold $\l_0=1$ is finite. \end{thm} We skip the proof, which basically repeats the argument in \cite{S3}, section 9. Note that one can also prove that for the pairs $\BM_\a, \BM_0$ and $\BM_0, \BM_\a$ there exist complete isometric wave operators. The quadratic form $\bm_1\res\gd$ is non-negative and closable; it generates the operator $\BM_1$. It is possible to show that its essential spectrum is the half-line $[0,\infty)$. \vskip0.2cm The analysis of the discrete spectrum of $\BM_\a$ for $\a\in (0,1)$ is based upon a version of the Birman-Schwinger principle found in \cite{S3}. Before giving its formulation, let us recall the following well-known notations. Given a real number $\l$ and a self-adjoint operator $Q$ whose spectrum in $(-\infty,\l)$ is discrete, we write $N_-(\l;Q)$ for the number of the eigenvalues $\l_n(Q)<\l$, counted according to their multiplicities. We also write $N_+(\l;Q)=N_-(-\l;-Q)$. It turns out that, within an error no greater than $1$, the number $N_-(1;\BM_\a)$ coincides with $N_+(\a^{-1};\BJ)$, where $\BJ$ is a certain infinite Jacobi matrix: \begin{equation}\label{4A.6} 0\le N_-(1;\BM_\a)-N_+(\a^{-1};\BJ)\le1. \end{equation} The reasoning is the same as in \cite{S3}, however the Jacobi matrix $\BJ$ turns out to be different: it is the zero-diagonal Jacobi matrix, with the non-diagonal entries given by \[ 2j_{n,n-1}=2j_{n-1,n}=\frac{n-1/2}{(n^2-1)^{1/4}(n^2-2n)^{1/4}}.\] Since $j_{n,n-1}\to 1/2$ as $n\to\infty$, the matrix $\BJ$ has the absolutely continuous spectrum filling the segment $[-1,1]$, and the spectrum outside this segment is discrete. Note that $\a\in(0,1)$ is equivalent to $\a^{-1}>1$, so that both terms in \eqref{4A.6} are finite.
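As a quick numerical sanity check (added here; not part of the original argument), one can evaluate the entries $j_{n,n-1}$ directly from the formula above and observe the limit $1/2$ together with the $O(n^{-2})$ rate of convergence:

```python
def j(n):
    """Off-diagonal entry j_{n,n-1} of the Jacobi matrix J (valid for n >= 3)."""
    return (n - 0.5) / (2.0 * (n**2 - 1) ** 0.25 * (n**2 - 2 * n) ** 0.25)

# j(n) tends to 1/2, and n^2 * (j(n) - 1/2) stabilizes at a positive constant,
# i.e. the deviation from 1/2 decays like n^{-2}.
for n in (10, 100, 1000):
    print(n, j(n), n**2 * (j(n) - 0.5))
```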
In order to estimate $N_+(\mu;\BJ)$, $\mu=\a^{-1}$, we use the asymptotics of $j_{n,n-1}$: \begin{equation}\label{AsJac} j_{n,n-1}= \frac12 +\frac{5}{16} n^{-2} +o(n^{-2}), \; n\to\infty. \end{equation} Using the results of Geronimo \cite{Ger1}, \cite{Ger2}, combined with some standard variational tools, one can show that $N_+(\mu;\BJ)$ can be estimated from below and from above by $|\log(\mu-1)|$, with different constants. Thus the number of eigenvalues of $\BM_\a$ in $(0,1)$ grows logarithmically as $\a\nearrow 1$. We believe that actually a logarithmic asymptotics for the eigenvalues holds. When $\a$ becomes larger than $1$, a phase transition occurs, similar to the one in the cylinder case. Each self-adjoint realization $\hat{\BM}_\a$ of the operator $\BM_\a$ is unbounded from below, with the spectrum below the point $1$ being discrete. The absolutely continuous spectrum is still the half-line $[1,\infty)$, with the same multiplicity function as for $\BM_0$. All these properties are proved using the methods presented in section~\ref{BigAlfaspec}. Some additional technical complications are caused by the fact that now we should prove estimates of the type \eqref{G7.apriori1} for operators on the interval $(0,\pi)$, rather than on the circle $ \mathbb S^1$, which is a manifold without boundary. But these complications can be overcome. \section{Acknowledgements} The work on the paper was started in April of 2005, when G. Rozenblum visited the Weizmann Institute of Science. G.R. expresses his gratitude to the Institute for its hospitality and financial support. The authors are also grateful to Y. Kannai for a very useful discussion. \bibliographystyle{amsalpha}
\section{Introduction and Related Work} A tremendous amount of recent research has focused on approaches towards generating responses for conversations in an open-domain setting \cite{radford2019language,xing2018hierarchical,wolf2019transfertransfo}. An equally challenging task for natural language generation systems is evaluating the quality of the generated responses. Evaluation of generated output is typically conducted using a combination of crowdsourced human judgments and automated metrics adopted from machine translation and text summarization \cite{dialogue-eval,novikova-etal-2017-need}. However, studies conducted by Liu \emph{et al.} \citeyearpar{dialogue-eval} and Novikova \emph{et al.} \citeyearpar{novikova-etal-2017-need} show that the automated metrics have poor correlation with human judgments. Despite their shortcomings, automated metrics like BLEU, ROUGE, and METEOR are used due to a lack of alternative metrics. This puts a major imperative on obtaining high-quality crowdsourced human judgments. Previous research that employs crowdsourced judgments has focused on metrics including \textit{ease of answering}, \textit{information flow} and \textit{coherence} \cite{Li-reinforce, dziri2018augmenting}, \textit{naturalness} \cite{asghar2018affective}, \textit{interestingness} \cite{asghar-etal-2017-deep,santhanam2019survey}, \textit{fluency} or \textit{readability} \cite{zhang-etal-2018-personalizing}, and \textit{engagement} \cite{venkatesh2018evaluating}. While experiment designs primarily use Likert scales, Belz and Kow \citeyearpar{belz2010comparing} argue that discrete scales, such as Likert scales, can be unintuitive and certain individuals may avoid extreme values in their judgments. Prior research has also shown that use of continuous scales is more viable for language evaluation \cite{novikova-etal-2018-rankme,belz2011discrete}.
Such evidence places more emphasis on a careful study towards obtaining reliable and consistent human ratings for dialogue evaluation. To address this research problem, we focus on a systematic comparison of four experimental conditions by incorporating \textbf{\textit{continuous, relative}} and \textbf{\textit{ranking scales}} for obtaining crowdsourced human judgments. In this initial study, we evaluate the use of two metrics: \textbf{\textit{Readability}} and \textbf{\textit{Coherence}}. Our key findings are: \begin{enumerate} [nolistsep,noitemsep] \item Use of Likert scales results in the lowest inter-rater consistency and agreement when compared to other experiment conditions \item Use of continuous scales results in higher inter-rater consistency and agreement \item Raters who have no prior experience in evaluating dialogue system output have greater inter-rater consistency and agreement than do those who have previously participated in such rating tasks. \end{enumerate} Our findings have the potential to help the research community in the design of their evaluation tasks to obtain higher quality human judgments for natural language generation output. \section{Data and Models} We used the Reddit conversation corpus to train our models. The Reddit conversation corpus, made available by Dziri \emph{et al.} \citeyearpar{dziri2018augmenting}, consists of data extracted from 95 top-ranked subreddits that discuss various topics such as sports, news, education and politics. The corpus contains 9M training examples, 500K development dialogues and 400K dialogues as test data.\footnote{\url{https://github.com/nouhadziri/THRED}} We trained three models on the Reddit conversational dataset described below. All the pre-trained models and supporting analysis code along with user study data are available at \url{https://www.github.com/sashank06/INLG_eval}. 
The models trained for this study include: $\bullet$ \textbf{Seq2Seq:} A simple encoder-decoder model with an attention mechanism \cite{bahdanau2014neural}. $\bullet$ \textbf{HRED:} \textit{\textbf{Hierarchical Encoder-Decoder}} \cite{serban2016building}, which incorporates an utterance and intra-utterance layer to model context. $\bullet$ \textbf{THRED:} \textit{\textbf{Topic Augmented Hierarchical Encoder-Decoder}} \cite{dziri2018augmenting}, which uses topic words along with a hierarchical encoder-decoder to produce a response. \section{Metrics} \label{metrics} For this initial study, we focus on two metrics: readability and coherence. These metrics are among those essential for evaluating the quality of generated responses \cite{novikova-etal-2017-need,dziri-etal-2019-evaluating-coherence}. We describe an automated method to compute each metric. \textbf{Readability} or Fluency measures the linguistic quality of text and helps quantify the difficulty of understanding the text for a reader \cite{gatt2018survey,novikova-etal-2017-need}. We use the Flesch Reading Ease (FRE) \cite{kincaid1975derivation} score, which is based on the number of words, syllables and sentences in the text.\footnote{\url{https://bit.ly/1IZ0FG4}} Higher readability scores indicate that the utterance is easier to read and comprehend. \textbf{Coherence} measures the ability of the dialogue system to produce responses consistent with the topic of conversation \cite{venkatesh2018evaluating}. To calculate coherence, we use the method proposed by Dziri \emph{et al.} \citeyearpar{dziri2018augmenting}. This metric computes the cosine similarity between the embedding vectors of the generated response and the target while accounting for dull and generic responses through a penalty factor.
To overcome the issue of dull and generic responses, Dziri \emph{et al.} \citeyearpar{dziri2018augmenting} introduce a penalty factor \begin{equation} P = 1 + \log \frac{2+L'}{2+L''} \end{equation} where $L'$ indicates the length of the response after dropping stop words and punctuation, and $L''$ indicates the length of the non-dull parts of the response after dropping stop words. The penalized semantic similarity (SS) score is then calculated as: \begin{equation} SS(utt_{i,j},resp_i) = P \times \big( 1-\cos(utt_{i,j},resp_i) \big) \end{equation} where $i$ represents the index of the dialogue in the dataset and $j$ denotes the index of the utterance in the conversation history. \section{Experiment Designs} In our study, we use three well-known question types: Likert scale, magnitude estimation and best-worst ranking. We chose to investigate these question types as they are commonly used across various language evaluation tasks \cite{belz2011discrete,asghar2018affective,novikova-etal-2018-rankme,kiritchenko-mohammad-2017-best}. With the help of these three types of questions, we design four rating procedures that are explained below. \textbf{Likert Scale (LS)}: is typically used in experiments for crowdsourcing human evaluation of dialogue systems \cite{asghar2018affective,lowe-etal-2017-towards}. In our experiment, we ask the raters to rate the generated responses on a 6-point scale, following Novikova \emph{et al.} \citeyearpar{novikova-etal-2018-rankme} (where 1 is the lowest and 6 is the highest on the metrics of readability and coherence). \textbf{Rank-Based Magnitude Estimation (RME)}: Prior research by Belz and Kow \citeyearpar{belz2011discrete} demonstrates through six separate experiments that continuous scales are more viable and offer distinct advantages over discrete scales in evaluation tasks.
Recently, Novikova \emph{et al.} \citeyearpar{novikova-etal-2018-rankme} adopted magnitude estimation by providing the rater with a \textit{standard value} for a reference sentence to evaluate output from goal-oriented systems. Following Novikova \emph{et al.} \citeyearpar{novikova-etal-2018-rankme}, we also set the value of the standard (reference utterance) to 100, since the reference utterance was produced by humans and is considered the gold standard. The crowdsourced workers are asked to provide a score relative to 100 (from 0 to 999) for three system-generated outputs. \textbf{Biased Magnitude Estimation (BME)}: Our third experiment design is biased magnitude estimation (BME). The main difference between the RME and BME methods is that the standard value we provide for the reference utterance is not uniformly set to 100 for all examples, but is instead calculated by automated methods (explained in Section \ref{metrics}). Our motivation for doing so is to understand whether \textbf{anchoring bias} may affect the ratings when judgments are made relative to a fixed value (100) or relative to a value calculated by automated means. Anchoring bias is the tendency to rely too heavily on one piece of information offered (the ``anchor'', in this case, the number 100) when making decisions \cite{kahneman201636}. \textbf{Best-Worst Scaling (BWS)}: Our last experiment condition is best-worst scaling (BWS), in which raters are asked to rank the generated responses in order of best to worst on both metrics (readability and coherence). This approach has previously been used to estimate emotion intensity and has been demonstrated to produce high-quality and consistent judgments from humans \cite{kiritchenko-mohammad-2017-best}. Each task includes 50 randomly sampled conversations from the test set in our corpus, along with generated responses from the three models and the ground truth (reference utterance).
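Best-worst annotations can be converted to real-valued scores with the counting procedure of Kiritchenko and Mohammad \citeyearpar{kiritchenko-mohammad-2017-best}: each item is scored by the fraction of times it was chosen best minus the fraction of times it was chosen worst. The sketch below is a simplified illustration, assuming one best/worst pick per rater and tuple:

```python
from collections import Counter

def best_worst_scores(annotations):
    """Convert best-worst annotations to real-valued scores.

    `annotations` is a list of (best_item, worst_item) pairs, one per
    rater and tuple; score(item) = (#best - #worst) / #annotations.
    """
    best = Counter(b for b, _ in annotations)
    worst = Counter(w for _, w in annotations)
    items = set(best) | set(worst)
    n = len(annotations)
    return {i: (best[i] - worst[i]) / n for i in items}
```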
For each task, we collected ratings from 40 workers with Master qualifications through Amazon Mechanical Turk. \section{Experiment Results} We organize our findings around five main research questions (RQs) outlined in this section. In what follows, we report statistical significance using two-way ANOVAs on the between-subject ratings across the four experiment conditions (Tables~\ref{reliability-all}\textendash~\ref{q2-no}). \textbf{RQ1: What is the effect of experiment design on the reliability of human ratings?} We use intra-class correlation (ICC) to measure the reliability across multiple raters \cite{shrout1979intraclass,landis1977measurement}. To compare the scores obtained from the magnitude estimation experiments to the ratings from the task using discrete Likert scales, we normalize the magnitude estimation scores on a logarithmic scale as suggested by Bard \emph{et al.} \citeyearpar{bard1996magnitude}. \begin{table}[t] \centering \small \begin{tabular}{@{}p{0.7cm}p{1.2cm}p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}@{}} \toprule & & \multicolumn{1}{c}{\textbf{Likert}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{RME} \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BME} \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BWS} \end{tabular}} \\ \midrule \multirow{2}{*}{ICC-C} & Readability & 0.75 & 0.95$\dagger$ & 0.83 & 0.75\\ \cmidrule(l){2-6} & Coherence & 0.83 & 0.92 & 0.81 & 0.80 \\ \midrule \multirow{2}{*}{ICC-A} & Readability & 0.59 & 0.95$\dagger$ & 0.83 & 0.75\\ \cmidrule(l){2-6} & Coherence & 0.77 & 0.92 & 0.81 & 0.80\\ \bottomrule \end{tabular} \caption{ICC scores on the metrics of readability and coherence for each experiment design. All values are statistically significant (p-value\textless{0.001}) except those indicated by $\dagger$. 
n$=$40 for all four designs.} \label{reliability-all} \end{table} Table \ref{reliability-all} reports the ICC scores on consistency (ICC-C) and agreement (ICC-A) for our four experiment tasks. We observe that the use of magnitude estimation with anchors (RME or BME) results in more reliable ratings than using the Likert scale or Best-Worst ranking (BWS). This result is consistent with prior research by Novikova \emph{et al.} \citeyearpar{novikova-etal-2018-rankme} and Belz and Kow \citeyearpar{belz2011discrete}. \textbf{RQ2: Does time taken to complete the survey influence reliability of the rankings?} To analyze RQ2, we calculated the total time spent by each participant from the start to the end of the experiment. We found that the BME task had the longest average completion time (43 minutes), followed by RME (42.8 minutes) and the Likert scale (33 minutes); Best-Worst ranking had the shortest average completion time (32.5 minutes). We then test the hypothesis that raters who spent longer than average time on the task would be more reliable in their ratings than those who completed it in less than average time. Table \ref{above-mean} reports the ICC scores for raters who spent more than average time on the task, while Table \ref{below-mean} reports scores for raters who spent less than average time. Surprisingly, we find that consistency and agreement among raters who spend less than average time are higher than among those who spend more time, for the Likert, BME and BWS experiment designs. When using the RME design, raters who spend more time have higher consistency and agreement.
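The paper does not spell out which ICC variant underlies ICC-C and ICC-A; a common choice is the two-way, single-measure definitions of consistency and agreement (ICC(C,1) and ICC(A,1) in McGraw and Wong's notation), which can be computed from the ANOVA mean squares as sketched below; this is an illustrative assumption, not necessarily the exact variant used:

```python
def icc_two_way(ratings):
    """Single-measure two-way ICCs for an n_subjects x k_raters matrix.

    Returns (consistency, agreement): ICC(C,1) and ICC(A,1),
    computed from the two-way ANOVA mean squares.
    """
    n, k = len(ratings), len(ratings[0])
    grand = sum(map(sum, ratings)) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ms_r = ss_rows / (n - 1)                              # between subjects
    ms_c = ss_cols / (k - 1)                              # between raters
    ms_e = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual
    icc_c = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e)
    icc_a = (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
    return icc_c, icc_a
```

A rater who systematically shifts all scores upward leaves consistency at 1 but lowers agreement, which is the distinction between the two rows in the tables.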
\begin{table}[!htbp] \centering \small \begin{tabular}{@{}p{0.7cm}p{1.2cm}p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}@{}} \toprule & & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{Likert} \\ (n=15) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{RME} \\ (n=16) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BME} \\ (n=15) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BWS} \\(n=16) \end{tabular}} \\ \midrule \multirow{2}{*}{ICC-C} & Readability & 0.58& 0.93 & 0.51 & 0.62\\ \cmidrule(l){2-6} & Coherence & 0.74 & 0.85 & 0.55 & 0.64 \\ \midrule \multirow{2}{*}{ICC-A} & Readability & 0.52 & 0.93 & 0.51 & 0.62\\ \cmidrule(l){2-6} & Coherence & 0.69 & 0.86& 0.56 & 0.64\\ \bottomrule \end{tabular} \caption{ICC scores when participants spend \textbf{above average time}. All values in this table are statistically significant with p-value\textless{0.001}} \label{above-mean} \end{table} \begin{table}[h] \centering \small \begin{tabular}{@{}p{0.7cm}p{1.2cm}p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}@{}} \toprule & & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{Likert} \\ (n=25) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{RME} \\ (n=24) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BME} \\ (n=25) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BWS}\\(n=24) \end{tabular}} \\ \midrule \multirow{2}{*}{ICC-C} & Readability & 0.61 & 0.88 & 0.81 & 0.65\\ \cmidrule(l){2-6} & Coherence & 0.66 & 0.85 & 0.75 & 0.76 \\ \midrule \multirow{2}{*}{ICC-A} & Readability & 0.36 & 0.88 & 0.81 & 0.66\\ \cmidrule(l){2-6} & Coherence & 0.55 & 0.85 & 0.75 & 0.76\\ \bottomrule \end{tabular} \caption{ICC scores when participants spend \textbf{below average time}. 
All values in this table are statistically significant with p-value\textless{0.001}} \label{below-mean} \end{table} \textbf{RQ3: Does prior experience of evaluating dialogue system output or engaging with conversational agents affect reliability of rankings?} We asked each rater two additional questions at the end of the task. The questions asked raters to indicate whether or not they had prior experience taking part in studies (a) to evaluate dialogue system output; and (b) to engage with a conversational agent. Tables \ref{q1-yes} and \ref{q1-no} show how reliable the ratings are, grouped by whether participants had prior experience taking part in studies evaluating conversational responses. We find that participants who have not taken part in prior studies are more consistent and have a higher agreement score than participants who have prior experience. These results are also validated by Tables \ref{q2-yes} and \ref{q2-no}, which show that participants with no prior experience of engaging with conversational agents are more consistent and reliable. \begin{table}[h] \centering \small \begin{tabular}{@{}p{0.7cm}p{1.2cm}p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}@{}} \toprule & & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{Likert} \\ (n=15) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{RME} \\ (n=7) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BME} \\ (n=18) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BWS}\\(n=13) \end{tabular}} \\ \midrule \multirow{2}{*}{ICC-C} & Readability & 0.45 & 0.37 & 0.51 & 0.54\\ \cmidrule(l){2-6} & Coherence & 0.38 & 0.48 & 0.55 & 0.63 \\ \midrule \multirow{2}{*}{ICC-A} & Readability & 0.35 & 0.38 & 0.52 & 0.55\\ \cmidrule(l){2-6} & Coherence & 0.32 & 0.49 & 0.55 & 0.63\\ \bottomrule \end{tabular} \caption{ICC scores when participants \textbf{have} prior experience evaluating dialogue system output. 
All values statistically significant at p-value\textless{0.001}.} \label{q1-yes} \end{table} \begin{table}[h] \centering \small \begin{tabular}{@{}p{0.7cm}p{1.2cm}p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}@{}} \toprule & & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{Likert} \\ (n=25) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{RME} \\ (n=33) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BME} \\ (n=22) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BWS}\\(n=27) \end{tabular}} \\ \midrule \multirow{2}{*}{ICC-C} & Readability & 0.71 & 0.95$\dagger$ & 0.83 & 0.70\\ \cmidrule(l){2-6} & Coherence & 0.82 & 0.92 & 0.76 & 0.72 \\ \midrule \multirow{2}{*}{ICC-A} & Readability & 0.50 & 0.95$\dagger$ & 0.83 & 0.70\\ \cmidrule(l){2-6} & Coherence & 0.75 & 0.92 & 0.77 & 0.72\\ \bottomrule \end{tabular} \caption{ICC scores when participants \textbf{do not have} prior experience evaluating dialogue system output. All values statistically significant at p-value\textless{0.001} except those indicated by $\dagger$.} \label{q1-no} \end{table} \begin{table}[h] \centering \small \begin{tabular}{@{}p{0.7cm}p{1.2cm}p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}@{}} \toprule & & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{Likert} \\ (n=18) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{RME} \\ (n=11) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BME} \\ (n=23) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BWS}\\(n=18) \end{tabular}} \\ \midrule \multirow{2}{*}{ICC-C} & Readability & 0.46 & 0.69 & 0.60 & 0.57\\ \cmidrule(l){2-6} & Coherence & 0.44 & 0.65 & 0.62 & 0.67 \\ \midrule \multirow{2}{*}{ICC-A} & Readability & 0.37 & 0.69 & 0.61 & 0.57\\ \cmidrule(l){2-6} & Coherence & 0.38 & 0.65 & 0.62 & 0.67\\ \bottomrule \end{tabular} \caption{ICC scores when participants \textbf{have} prior experience engaging with conversational agents. 
All values statistically significant at p-value\textless{0.001}.} \label{q2-yes} \end{table} \begin{table}[!htbp] \centering \small \begin{tabular}{@{}p{0.7cm}p{1.2cm}p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}@{}} \toprule & & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{Likert} \\ (n=22) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{RME} \\ (n=29) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BME} \\ (n=17) \end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[l]{@{}l@{}} \textbf{BWS}\\(n=22) \end{tabular}} \\ \midrule \multirow{2}{*}{ICC-C} & Readability & 0.70 & 0.95$\dagger$ & 0.84 & 0.67\\ \cmidrule(l){2-6} & Coherence & 0.82 & 0.91 & 0.76 & 0.68 \\ \midrule \multirow{2}{*}{ICC-A} & Readability & 0.48 & 0.95$\dagger$ & 0.84 & 0.67\\ \cmidrule(l){2-6} & Coherence & 0.75 & 0.91 & 0.76 & 0.68\\ \bottomrule \end{tabular} \caption{ICC scores when participants \textbf{do not have} prior experience engaging with conversational agents. All values statistically significant at p-value\textless{0.001} except those indicated by $\dagger$.} \label{q2-no} \end{table} \textbf{RQ4: How well do automated methods to calculate readability and coherence correlate with human ratings?} In Table~\ref{automated}, we report the correlation of the readability and coherence scores calculated using automated methods (outlined in Section~\ref{metrics}) with the human ratings. Readability scores were computed using the Flesch Reading Ease \cite{kincaid1975derivation}, and coherence scores were computed based on the method proposed by Dziri \emph{et al.} \citeyearpar{dziri2018augmenting}. We observe that the automated metrics for Readability \cite{kincaid1975derivation} and Semantic Similarity \cite{dziri2018augmenting} show low correlation with human ratings.
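For reference, the penalized semantic-similarity computation from Section \ref{metrics} can be sketched as below; the stop-word list, dull-token list, and embedding vectors are placeholders (assumptions for illustration), not the resources used by Dziri \emph{et al.}:

```python
import math

STOPWORDS = {"the", "a", "an", "is", "to", "of"}  # placeholder stop-word list
DULL = {"i", "dont", "know", "sure"}              # placeholder dull tokens

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def penalized_ss(resp_tokens, resp_vec, target_vec):
    # L' = length after dropping stop words; L'' = non-dull part of that.
    content = [w for w in resp_tokens if w not in STOPWORDS]
    non_dull = [w for w in content if w not in DULL]
    p = 1 + math.log((2 + len(content)) / (2 + len(non_dull)))
    # SS = P * (1 - cos), exactly as in the equation of Section Metrics.
    return p * (1 - cosine(resp_vec, target_vec))
```

Dull responses inflate the penalty factor $P$, so a generic reply scores worse than a content-bearing one at the same cosine distance.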
\begin{table}[!htbp] \small \centering \begin{tabular}{@{}lllll@{}} \toprule \textbf{} & \textbf{Likert} & \textbf{RME} & \textbf{BME} & \textbf{BWS} \\ \midrule & \multicolumn{4}{c}{Automated Metric} \\ \midrule Readability & 0.26 & -0.11 & -0.12 & -0.06 \\ \midrule Coherence & -0.12 & -0.13 & -0.11 & 0.01 \\ \bottomrule \end{tabular} \caption{Spearman correlation between the ratings obtained from the automated metrics and the human ratings.} \label{automated} \end{table} \textbf{RQ5: Is there any correlation between ratings of readability and coherence for each of the four experiment conditions?} To evaluate whether there is any correlation between the ratings obtained for readability and coherence through our four experimental designs, we report the Spearman correlation values in Table \ref{correaltion_within}. We find a high, statistically significant correlation between the human ratings of readability and coherence obtained through RME and BME. One likely factor affecting correlation may be anchoring bias towards the fixed value of the standard utterance provided in RME (100) and the reference value provided in BME. We aim to investigate this further in future work. \begin{table}[h] \small \centering \begin{tabular}{@{}lllll@{}} \toprule & \textbf{Likert} & \textbf{RME} & \textbf{BME} & \textbf{BWS} \\ \midrule & \multicolumn{4}{c}{Readability} \\ \midrule Coherence & 0.1 & 0.79*** & 0.77*** & 0.5*** \\ \bottomrule \end{tabular} \caption{Spearman correlation between the ratings of readability and coherence obtained in the four different experiment designs. *** p-value\textless{0.001}} \label{correaltion_within} \end{table} \section{Conclusion and Future Work} In this paper, we present our work on designing a systematic experiment with four experiment conditions to evaluate the output of dialogue systems.
Different from prior work where a similar study was conducted with output from goal-oriented systems \cite{novikova-etal-2018-rankme}, our study focuses on evaluating output in open-domain situations. Consistent with prior findings, metrics calculated using automated methods \cite{dziri-etal-2019-evaluating-coherence} were found to have low, and often negative, correlation with human judgments (c.f. Table~\ref{automated}). This finding points to the need for more effective automated metrics. We find that the use of continuous scales to obtain crowdsourced ratings provides more consistent and reliable ratings than ratings obtained through Likert scales or Best-Worst scaling. This finding is consistent with prior work conducted by Novikova \emph{et al.} \citeyearpar{novikova-etal-2018-rankme}. Novel in our study was the testing of the Best-Worst scaling method to evaluate responses against one another. Although the Best-Worst scaling method has been shown to be effective in obtaining crowdsourced ratings of emotions \cite{kiritchenko-mohammad-2017-best}, we did not find it to be effective in this study. We aim to investigate further whether this finding can be reproduced in a different experiment. Further, we were able to identify the effects of time taken to complete the task on rating reliability. We find that workers who spent less than average time on the task had higher consistency (for the Likert, BME and BWS experiment conditions) than did the workers who spent more than average time. This finding is counter-intuitive; we expected that spending more time would positively impact inter-rater consistency. Our first step in the analysis of the effects of time taken on reliability was to analyze data from workers who spent more or less than average time, which admittedly offers a limited perspective; an interesting next step would be to study these effects more thoroughly by taking into account the full distribution of the time-spent data.
We also find that \textit{lack of} prior experience of evaluating open-domain dialogue system output results in more reliable ratings. One potential explanation could be that workers with such experience have pre-conceived notions based on their past participation. One limitation of our current study is that, although we had output from three separate models, we conducted the study using data from one corpus. Reproducing our findings across additional corpora, additional metrics and other experiment designs would help substantiate these findings further. An analysis of the interaction effects between independent variables, such as time taken and prior experience, would also help strengthen the findings of our study. By using a larger sample size (n$=$40), we are able to make claims about statistical significance across experiment conditions. In future work, we plan to evaluate in depth the impact of cognitive biases such as anchoring and confirmation bias, and how they affect consistency and reliability, along with testing continuous-scale ratings with no reference value. \section*{Acknowledgments} This work was supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No FA8650-18-C-7881. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of AFRL, DARPA, or the U.S. Government. We thank the anonymous reviewers for the helpful feedback.
\section{Introduction} We consider timelike maximal surfaces in a three dimensional vacuum spacetime $({\mycal M}^{1 + 2},{\mathfrak{g}})$. These surfaces are usually referred to as relativistic strings in the literature. The case of non-compact timelike maximal graphs in Minkowski spacetimes ${\mathbb R}^{1 + n}$ is fairly well understood. Global well-posedness for sufficiently small initial data was established by Brendle \cite{Brendle-MaxSurf} and by Lindblad \cite{Lindblad}. The case of general codimension was studied by Allen, Andersson and Isenberg \cite{AllenAI}. The main focus of this paper is the case where the surface is an immersed cylinder ${\mathbb R} \times {\mathbb S}^1$, or a string, in ${\mathbb R}^{1 + 2}$. We note that local well-posedness for this problem in a larger context was studied recently by Allen, Andersson and Restuccia \cite{AllenAR}. In \cite{BHNO}, Bellettini, Hoppe, Novaga and Orlandi showed that the problem for a relativistic string in a flat spacetime can be simplified considerably in the so-called orthogonal gauge. In fact, the authors effectively reduced the problem to a homogeneous linear wave equation in one dimension, which admits a simple representation formula in terms of its initial data; this implies in particular the long-time existence of parametrizations for relativistic strings. As a consequence, they showed, among other results, that if the initial curve is a centrally symmetric convex curve and the initial velocity is zero, the string shrinks to a point in finite time. (It should be noted that the string does not become extinct there, but rather comes out of the singularity point, evolves back to its original shape, and then repeats this behavior periodically.) We also note an earlier paper by Kibble and Turok \cite{KibbleTurok} which showed that a closed string with zero initial velocity must form a singularity in finite time.
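A quick numerical illustration of the shrinking behavior just described: assuming the orthogonal-gauge reduction to the one-dimensional wave equation $\gamma_{,tt} = \gamma_{,ss}$ with zero initial velocity, d'Alembert's formula gives $\gamma(t,s) = \tfrac12\big(\gamma_0(s+t) + \gamma_0(s-t)\big)$; for the arclength-parametrized unit circle this reduces to $\gamma(t,s) = \cos t\,\gamma_0(s)$, which collapses to a point at $t = \pi/2$:

```python
import math

def gamma0(s):
    # Arclength-parametrized unit circle as the initial string.
    return (math.cos(s), math.sin(s))

def gamma(t, s):
    # d'Alembert's formula for gamma_tt = gamma_ss with zero initial velocity.
    a, b = gamma0(s + t), gamma0(s - t)
    return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

# The circle evolves by pure scaling: gamma(t, s) = cos(t) * gamma0(s),
# so every point reaches the origin at t = pi/2.
```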
By a different method, Kong and his collaborators \cite{Kong:2007} and \cite{Kong:2009} proved another representation formula (without the need to fix a gauge). Using their representation formula, they presented much numerical evidence in which singularity formation is prominent. From the above discussion (see also \cite{EggersH}), these results suggest that, for arbitrary initial data, a closed string must form a singularity in finite time. The main goal of the present paper is to confirm this statement. Let ${\mathbb R}^{1 + 2}$ denote the three dimensional Minkowski spacetime endowed with the flat metric, and $(t,x^1, x^2)$ its standard Cartesian coordinates. We prove: \begin{theorem}\label{Main1} For any smooth immersed closed curve ${\mycal C} \subset {\mathbb R}^2 = \{t = 0\}$ and smooth future-directed timelike and nowhere vanishing vector field $V$ along ${\mycal C}$, there exists no globally smooth immersed surface ${\mycal S} \subset {\mathbb R}^{1 + 2}$ which contains ${\mycal C}$ and is tangent to $V$ such that its induced metric is Lorentzian and its mean curvature vector vanishes. \end{theorem} Equivalently, the above result asserts that if one evolves a closed curve in ${\mathbb R}^{1 + 2}$ in a timelike direction such that its mean curvature is zero, it will form a singularity in finite time. On the other hand, since the PDE for the parametrization map admits a global solution, it makes sense to talk about the ``maximal surface'' after a singularity forms. In Section \ref{Sec:LocPic}, we give a detailed study of the local geometry of a maximal surface at a generic singularity. For a generic singularity propagation, we show that, locally, the time slices of the maximal surface evolve by a rigid motion: they are either translated or rotated (see Figure \ref{PerSing}). For a generic singularity formation, we show that it is locally self-similar. Self-similar singularity formation was classified by Eggers and Hoppe \cite{EggersH}.
Locally, the singularities look like a swallowtail: the first singularity is a cusp of order $4/3$ which splits into two ordinary cusps at later times (see Figure \ref{SwtailSing}). As a partial complement to the above theorem, we also establish in Proposition \ref{Prop:LBExistenceTime} a lower bound for the existence time before a singularity forms. Our estimate implies, for example, that if $\alpha$ is a non-compact curve in ${\mathbb R}^2$ such that its total absolute curvature is smaller than one-half, then there is a (possibly immersed) regular timelike maximal surface in ${\mathbb R}^{1 + 2}$ containing $\alpha$ and perpendicular to ${\mathbb R}^2$. In general vacuum spacetimes, the question of how singularities form is less clear. However, using ODE techniques for proving blow-up results for semilinear wave equations (see e.g. \cite{Levine77, John79, Kato80, Glassey81}), we can prove the following result, which implies a singularity statement in ${\mathbb R}^{1 + 2}$ when the initial curve is convex in ${\mathbb R}^2$ and the normal velocity vector is parallel along the initial curve. \begin{theorem}\label{Main2} Let $({\mycal M}^{1 + 2},{\mathfrak{g}})$ be a complete, oriented, time-oriented, globally hyperbolic, three dimensional vacuum spacetime and ${\bar \nabla}$ its connection. Let ${\mycal C}$ be a smooth embedded spacelike acausal closed curve. Along ${\mycal C}$, let $U$ be its unit tangent vector field, $V$ a smooth future-directed timelike unit vector field normal to $U$, and $\nu$ a unit (spacelike) vector field normal to both $U$ and $V$. If \[ {\mathfrak{g}}({\bar \nabla}_U U,\nu)^2 - {\mathfrak{g}}({\bar \nabla}_U V,\nu)^2 > 0 \text{ along } {\mycal C}, \] then there exists no globally smooth embedded surface ${\mycal S}^{1+1} \subset {\mycal M}^{1 + 2}$ which contains ${\mycal C}$ and is tangent to $V$ such that its induced metric is Lorentzian and its mean curvature vector vanishes. 
\end{theorem} It should be noted that in Theorem \ref{Main2}, we leave aside the issue of whether the parametrization map exists for all time. As a final remark, we note that the singularity character of maximal surfaces (as in Theorem \ref{Main1}) is a special feature of surfaces in three dimensions. In the appendix, we give a construction of a regular (two-dimensional) timelike cylindrical maximal surface in ${\mathbb R}^{1 + 3}$. We suspect that, in ${\mathbb R}^{1 + n}$ with $n \geq 3$, for generic initial data, timelike maximal cylinders are smooth; but we have not attempted to analyze this statement. \medskip {\noindent\bf Acknowledgments.} The authors would like to thank Professor Kong for useful correspondence. \section{Timelike cylindrical maximal surfaces in ${\mathbb R}^{1 + 2}$} The main goal of this section is to prove Theorem \ref{Main1} for timelike maximal surfaces in ${\mathbb R}^{1 + 2}$. \subsection{Smooth spatially closed timelike surfaces}\label{ssec:TSurfFlat} Consider a smooth immersed surface ${\mycal S}$ in ${\mathbb R}^{1 + 2}$ given by \[ {\mycal S} = \Big\{(t, x^1, x^2) = (F^0(s^1, s^2), F^1(s^1, s^2), F^2(s^1, s^2)): (s^1, s^2) \in \omega \subset {\mathbb R}^2\Big\} \] such that the induced metric \[ g = g_{ab}\,ds^a\,ds^b = \eta(F_{,s^a},F_{,s^b})\,ds^a\,ds^b \] is Lorentzian, i.e. $\det g < 0$. Here $a$ and $b$ range over $\{1,2\}$. Also assume that ${\mycal C} = {\mycal S} \cap \{t = 0\}$ is a smooth immersed closed curve. We claim that, for any fixed $t$, the cross section \[ {\mycal C}_t = {\mycal S} \cap \{t = {\rm const}\} \] is either a smooth curve or empty. To see this, assume that ${\mycal C}_t$ is non-empty and pick $p = (p^1, p^2) \in {\mycal C}_t$. Notice that $(F^0_{,s^1}(p), F^0_{,s^2}(p)) \neq 0$. 
For if this fails, we must have at $p$ that \begin{align*} 0 &> \det g = \Big[(F^1_{,s^1})^2 + (F^2_{,s^1})^2\Big]\Big[(F^1_{,s^2})^2 + (F^2_{,s^2})^2\Big] - \Big[F^1_{,s^1}\,F^1_{,s^2} + F^2_{,s^1}\,F^2_{,s^2}\Big]^2 \\ &= \Big[F^1_{,s^1}\,F^2_{,s^2} - F^1_{,s^2}\,F^2_{,s^1}\Big]^2, \end{align*} which is impossible. We thus assume without loss of generality that $F^0_{,s^1}(p) \neq 0$. Then, by the Inverse Function Theorem, we can find a function $f = f(t,s^2)$ such that $p^1 = f(t,p^2)$ and \[ F^0(f(t,s^2), s^2) = t. \] It is thus seen that, near $p$, ${\mycal C}_t$ is given by \[ \{\gamma^1(t,s^2), \gamma^2(t,s^2)\} \] where $\gamma^a(t,s^2) = F^a(f(t,s^2),s^2)$. We will show that this gives a well-parametrized curve. Assume otherwise; then we must have \[ F^a_{,s^1}\,f_{,s^2} + F^a_{,s^2} = 0. \] On the other hand, by the definition of $f$, \[ F^0_{,s^1}\,f_{,s^2} + F^0_{,s^2} = 0. \] It follows that \[ g = \big[-(F^0_{,s^1})^2 + (F^1_{,s^1})^2 + (F^2_{,s^1})^2\big]\Big\{(ds^1)^2 - 2f_{,s^2}\,ds^1\,ds^2 + f_{,s^2}^2\,(ds^2)^2\Big\}, \] which further implies \[ \det g = 0, \] violating our assumption that $g$ is Lorentzian. We have thus shown that ${\mycal C}_t$ is a smooth curve. From the foregoing discussion, ${\mycal S}$ can be represented by \[ {\mycal S} = \{F(t,s) := (t,\gamma(t,s)) \in {\mathbb R}^{1 + 2}, T_1 < t < T_2, s \in {\mathbb R}\} \] where $\gamma: (T_1, T_2) \times {\mathbb R} \rightarrow {\mathbb R}^2$. Using the timelike character of ${\mycal S}$, it is easy to see that $T_1 = - \infty$, $T_2 = + \infty$ and each curve ${\mycal C}_t$ is a closed curve. We thus have \[ {\mycal S} = \{F(t,s) := (t,\gamma(t,s)) \in {\mathbb R}^{1 + 2}, t \in {\mathbb R}, s \in {\mathbb R}\} \] and $\gamma$ is periodic in $s$ with period $\Xi > 0$. 
\subsection{The equations}\label{ssec:TheEqns} We now consider a timelike, topologically cylindrical maximal surface ${\mycal S}$ of the form \[ {\mycal S} = \{F(t,s) := (t,\gamma(t,s)) \in {\mathbb R}^{1 + 2}, 0 \leq t < T \leq \infty, s \in {\mathbb R}\} \] where $\gamma$ is periodic in $s$ with period $\Xi > 0$. The induced metric $g$ is \begin{equation} g = -(1 - |\gamma_{,t}|^2)\,dt^2 + 2\,\lr{\gamma_{,t}}{\gamma_{,s}}\,dt\,ds + |\gamma_{,s}|^2\,ds^2, \label{Dec10-met} \end{equation} where $|\cdot|$ and $\lr{\cdot}{\cdot}$ represent the Euclidean norm and dot product of vectors in ${\mathbb R}^2$. The condition that $g$ is Lorentzian becomes \begin{equation} \det g = -|\gamma_{,s}|^2\,(1 - Q) < 0, \label{Dec10-Lor} \end{equation} where \begin{equation} Q = |\gamma_{,t}|^2 - \frac{\lr{\gamma_{,t}}{\gamma_{,s}}^2}{|\gamma_{,s}|^2} \geq 0. \label{Dec10-Q} \end{equation} It should be noted that $Q$ remains unchanged under a reparametrization of the form $(t,s) \mapsto (t,\tilde s(t,s))$. Let $\nabla$ denote the connection of $g$. The Gauss equation gives \[ \nabla_{AB} F^\alpha = L_{AB}\,\nu^\alpha \] where $L$ is the second fundamental form of ${\mycal S}$ and $\nu$ is the unit normal to ${\mycal S}$. Taking the trace with respect to $g$ yields \[ \Box_g F^\alpha = H\,\nu^\alpha = 0. \] Componentwise, we get \begin{align} &\Box_g t = 0 \;,\label{Dec10-H1}\\ &\Box_g \gamma = 0 \;.\label{Dec10-H2} \end{align} In \cite{BHNO}, Bellettini, Hoppe, Novaga and Orlandi showed that these equations simplify considerably in the so-called orthogonal gauge. For completeness, we quickly rederive the reduction here. 
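The reparametrization invariance of $Q$ claimed above can be checked by expanding $\tilde\gamma_{,t} = \gamma_{,t} + w_{,t}\gamma_{,s}$, $\tilde\gamma_{,\tilde s} = w_{,\tilde s}\gamma_{,s}$, or numerically with central differences as in the sketch below; the particular $\gamma$ and reparametrization $w$ are arbitrary test choices:

```python
import math

H = 1e-5  # step size for central differences

def d_dt(f, t, s):
    return [(a - b) / (2 * H) for a, b in zip(f(t + H, s), f(t - H, s))]

def d_ds(f, t, s):
    return [(a - b) / (2 * H) for a, b in zip(f(t, s + H), f(t, s - H))]

def Q(f, t, s):
    # Q = |f_t|^2 - <f_t, f_s>^2 / |f_s|^2
    ft, fs = d_dt(f, t, s), d_ds(f, t, s)
    dot = sum(a * b for a, b in zip(ft, fs))
    return sum(a * a for a in ft) - dot ** 2 / sum(a * a for a in fs)

def gamma(t, s):  # an arbitrary test parametrization
    return ((1 + 0.2 * t) * math.cos(s), (1 + 0.1 * t) * math.sin(s))

def w(t, u):      # an arbitrary reparametrization s = w(t, u)
    return u + 0.3 * math.sin(u) + 0.1 * t

def gamma_tilde(t, u):
    return gamma(t, w(t, u))
```

Evaluating $Q$ for `gamma_tilde` at $(t,\tilde s)$ and for `gamma` at the corresponding point $(t, w(t,\tilde s))$ gives the same value up to discretization error.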
From the first equation, we have \begin{equation} -\partial_t \Big(\frac{|\gamma_{,s}|}{\sqrt{1 - Q}}\Big) + \partial_s \Big(\frac{\lr{\gamma_{,t}}{\gamma_{,s}}}{|\gamma_s|\,\sqrt{1 - Q}}\Big) = 0 .\label{Dec10-H1*} \end{equation} We now look for a reparametrization, say $s = w(t,{\tilde s})$, $\tilde s = {\tilde w}(t,s)$, such that in the new coordinates $(t,{\tilde s})$, \begin{equation} \partial_{{\tilde s}} \Big(\frac{\lr{{\tilde\gamma}_{,t}}{{\tilde\gamma}_{,{\tilde s}}}}{|{\tilde\gamma}_{{\tilde s}}|\,\sqrt{1 - Q}}\Big) = 0 .\label{Dec10-Gauge1} \end{equation} In the above ${\tilde\gamma}(t,{\tilde s}) = \gamma(t,s) = \gamma(t,w(t,{\tilde s}))$. We have \[ {\tilde\gamma}_{,t} = \gamma_{,t} + w_{,t}\gamma_{,s} \text{ and } {\tilde\gamma}_{,{\tilde s}} = w_{,{\tilde s}}\,\gamma_{,s}. \] As $w(t,{\tilde w}(t,s)) = s$, we also have \[ w_{,{\tilde s}} = {\tilde w}_{,s}^{-1} \text{ and } w_{,t} = - {\tilde w}_{,s}^{-1}\,{\tilde w}_{,t}. \] We hence get \[ \partial_{{\tilde s}} \Big(\frac{\lr{{\tilde\gamma}_{,t}}{{\tilde\gamma}_{,{\tilde s}}}}{|{\tilde\gamma}_{{\tilde s}}|\,\sqrt{1 - Q}}\Big) = {\tilde w}_{,s}^{-1}\,\partial_s\Big(\frac{\lr{\gamma_{,t}}{\gamma_{,s}} + {\tilde w}_{,s}^{-1}\,{\tilde w}_{,t}\,|\gamma_{,s}|^2}{|\gamma_{,s}|\,\sqrt{1-Q}}\Big). \] From this, it is easy to see that, to achieve \eqref{Dec10-Gauge1}, we solve for ${\tilde w}$ from \begin{equation} \left\{\begin{array}{l} {\tilde w}_{,t} - \mu\,{\tilde w}_{,s} = 0,\\ {\tilde w}(0,s) = s, \end{array}\right. \label{Dec10-Gauge1Cond1} \end{equation} where $\mu$ is a solution to \[ \partial_s\Big(\frac{\lr{\gamma_{,t}}{\gamma_{,s}} + \mu\,|\gamma_{,s}|^2}{|\gamma_{,s}|\,\sqrt{1-Q}}\Big) = 0. \] It is easy to see that \begin{equation} \mu = - \frac{\lr{\gamma_{,t}}{\gamma_{,s}}}{|\gamma_{,s}|^2} + \frac{C(t)\,\sqrt{1 - Q}}{|\gamma_{,s}|} .\label{Dec10-mu} \end{equation} Now \eqref{Dec10-Gauge1Cond1} is a linear transport equation and can be solved easily. 
The solution ${\tilde w}$ is constant along characteristics, which are integral curves of the ODE \begin{equation} \dot s(t) = \mu(t,s(t)). \label{May09-CharEq} \end{equation} Since $\mu$ is smooth and periodic in $s$, $\mu$ is uniformly Lipschitz in $s$ for $t$ belonging to any closed subinterval of $[0,T)$. It follows that the integral curves of \eqref{May09-CharEq} exist, are smooth, and do not cross for $t \in [0,T)$. Consequently, ${\tilde w}(t,\cdot)$ is strictly increasing in $s$, which shows that ${\tilde s}$ is a valid reparametrization. Also, by the periodicity of $\mu$, \[ {\tilde w}(t,s+\Xi) - {\tilde w}(t,s) = \Xi, \] which implies that ${\tilde\gamma}$ is periodic in ${\tilde s}$ with the same period $\Xi$. In the new coordinates $(t,{\tilde s})$, \eqref{Dec10-H1*} becomes \[ \partial_t \Big(\frac{|{\tilde\gamma}_{,{\tilde s}}|}{\sqrt{1 - Q}}\Big) = 0, \] and so \begin{equation} \frac{|{\tilde\gamma}_{,{\tilde s}}|}{\sqrt{1 - Q}} = \rho({\tilde s}). \label{Dec10-ConsLaw} \end{equation} \begin{proposition}[\cite{BHNO}]\label{BHNO} Assume that ${\mycal S}$ is a regular timelike maximal surface in ${\mathbb R}^{1 + 2}$ which is diffeomorphic to $[0, T) \times {\mathbb S}^1$. There exists a smooth parametrization \begin{eqnarray*} [0,T)\times {\mathbb R} &\rightarrow& {\mycal S}\\ (t,s) &\mapsto& (t,\gamma(t,s)) \end{eqnarray*} of ${\mycal S}$ such that $\gamma$ is periodic with period $\Xi > 0$ in $s$, $\gamma_{,s}$ is nowhere zero, and \begin{align} &\lr{\gamma_{,t}}{\gamma_{,s}} = 0 ,\label{Dec10-Gauge1*}\\ &|\gamma_{,t}|^2 + |\gamma_{,s}|^2 = 1 ,\label{Dec10-ConsLaw*}\\ &\gamma_{,tt} - \gamma_{,ss} = 0 .\label{Dec11-WaveEq} \end{align} Conversely, if $\gamma$ is a regular solution to \eqref{Dec11-WaveEq} which satisfies \eqref{Dec10-Gauge1*} and \eqref{Dec10-ConsLaw*} at the initial time, then it gives rise to a regular timelike maximal surface in ${\mathbb R}^{1 + 2}$ for at least some positive time.
\end{proposition} \noindent{\bf Proof.\;} We first pick a parametrization of ${\mycal S}$ such that, on the initial curve ${\mycal C}$, $|\gamma_{,s}(0,s)|^2 = 1 - Q(0,s)$. We then define a new coordinate system $(t,{\tilde s})$ by solving \eqref{Dec10-Gauge1Cond1}-\eqref{Dec10-mu} with $C \equiv 0$. Since $\mu$ is smooth, so are the characteristic curves of \eqref{Dec10-Gauge1Cond1}, which implies that $(t,{\tilde s})$ is a smooth coordinate system on ${\mycal S}$ for as long as $(t,s)$ is. It is straightforward to check that, in the new coordinates, $|{\tilde\gamma}_{,{\tilde s}}(0,{\tilde s})|^2 = 1 - Q(0,{\tilde s})$. Equations \eqref{Dec10-Gauge1*}, \eqref{Dec10-ConsLaw*} and \eqref{Dec11-WaveEq} then follow from equation \eqref{Dec10-Gauge1}, the conservation law \eqref{Dec10-ConsLaw} and equation \eqref{Dec10-H2}. For the converse, assume that $\gamma$ satisfies \eqref{Dec11-WaveEq}. We show that if \eqref{Dec10-Gauge1*} and \eqref{Dec10-ConsLaw*} hold at the initial time $t = 0$, then they hold everywhere. To this end, consider $\phi = |\gamma_{,t} + \gamma_{,s}|^2$. By \eqref{Dec11-WaveEq}, \[ \phi_{,t} = 2\lr{\gamma_{,t} + \gamma_{,s}}{\gamma_{,tt} + \gamma_{,ts}} = 2\lr{\gamma_{,t} + \gamma_{,s}}{\gamma_{,ss} + \gamma_{,ts}} = \phi_{,s}. \] This implies \[ \phi_{,tt} - \phi_{,ss} = 0. \] On the other hand, as \eqref{Dec10-Gauge1*} and \eqref{Dec10-ConsLaw*} hold initially, we have \[ \phi = 1 \text{ and } \phi_{,t} = \phi_{,s} = 0 \text{ at time } t = 0. \] By uniqueness for the linear wave equation, this implies that $|\gamma_{,t} + \gamma_{,s}|^2 = \phi = 1$ for all time. Similarly, $|\gamma_{,t} - \gamma_{,s}|^2 = 1$ for all time. These two identities imply \eqref{Dec10-Gauge1*} and \eqref{Dec10-ConsLaw*}, since their difference equals $4\lr{\gamma_{,t}}{\gamma_{,s}}$ and their sum equals $2\,(|\gamma_{,t}|^2 + |\gamma_{,s}|^2)$. The conclusion follows from the discussion preceding the proposition. \hfill$\square$\medskip \subsection{Unavoidable bad parametrization} We now give the proof of Theorem \ref{Main1}.
Suppose, for the sake of contradiction, that there is a smooth maximal surface ${\mycal S}$ which contains ${\mycal C}$ and is tangential to $V$. By Proposition \ref{BHNO}, there is a parametrization $(t,s) \mapsto (t,\gamma(t,s))$ of ${\mycal S}$ with $\gamma$ periodic in $s$ and $\gamma_{,s}$ nowhere zero, such that \eqref{Dec10-Gauge1*}, \eqref{Dec10-ConsLaw*} and \eqref{Dec11-WaveEq} hold. Let $\alpha(s) = \gamma(0,s)$ and $\beta(s) = \gamma_{,t}(0,s)$. Then \begin{equation} \lr{\beta}{\alpha_{,s}} = 0 \text{ and } |\alpha_{,s}|^2 + |\beta|^2 = 1 .\label{Dec11-Init} \end{equation} From the linear wave equation \eqref{Dec11-WaveEq}, we see that \begin{equation} \gamma(t,s) = \frac{1}{2}\big(\alpha(s+t) + \alpha(s-t)\big) + \frac{1}{2}\int_{s-t}^{s + t} \beta(\xi)\,d\xi .\label{Dec11-RepF} \end{equation} It is readily seen that, if $\alpha(s)$ and $\beta(s)$ are smooth, then $\gamma$ is smooth. Furthermore, if $\alpha$ is a regular parametrization of the initial curve, i.e. $\alpha_{,s}(\cdot)$ is nowhere zero, then for some $T > 0$, $\gamma(t,\cdot)$ gives a regular parametrization for $0 < t < T$. From \eqref{Dec11-RepF}, we see that $\gamma(t,\cdot)$ fails to be a regular parametrization if and only if there exists $s$ such that \[ \alpha_{,s}(s + t) + \beta(s + t) + \alpha_{,s}(s - t) - \beta(s - t) = 0 . \] We thus introduce \begin{equation} a(s) := \alpha_{,s}(s) + \beta(s) \text{ and } b(s) := \alpha_{,s}(s) - \beta(s) .\label{Dec27-ab} \end{equation} The following lemma shows that a bad parametrization always occurs, which contradicts the construction of the coordinate system $(t,s)$ of ${\mycal S}$ and thus concludes the proof of Theorem \ref{Main1}. \begin{lemma}\label{NonemptyBPara} There exist $s$ and $r$ such that \begin{equation} a(s) + b(r) = 0 .\label{Dec11-SingCond} \end{equation} \end{lemma} \noindent{\bf Proof.\;} Let ${\mycal C}$ denote the initial curve defined by $\alpha$. Set \[ A = \Big\{a(s): s\in {\mycal C}\Big\} \text{ and } B = \Big\{-b(s): s\in {\mycal C}\Big\} \;.
\] Evidently, $A$ and $B$ are closed, non-empty, connected subsets of ${\mathbb S}^1$. If one of them is equal to ${\mathbb S}^1$, \eqref{Dec11-SingCond} holds trivially. Assume thus that neither $A$ nor $B$ is ${\mathbb S}^1$. Arguing indirectly, assume further that \eqref{Dec11-SingCond} fails, i.e. $A$ and $B$ are disjoint. Then $A \cup B$ cannot be ${\mathbb S}^1$ either, and there is a connected interval $I \subset {\mathbb S}^1$ such that $A \Subset I$ and $B \Subset {\mathbb S}^1 \setminus I$. This implies that there exist a unit vector $n$ and a real number $\lambda \in (-1,1)$ such that \[ \lr{p}{n} > \lambda > \lr{q}{n} \text{ for any } p \in A \text{ and } q \in B \;. \] Therefore, if $L$ is the period of $\alpha$, then \[ L\,\lambda < \int_0^L \lr{a(s)}{n}\,ds = \int_0^L \frac{d}{ds}\lr{\alpha(s)}{n}\,ds + \int_0^L \lr{\beta(s)}{n}\,ds = \int_0^L \lr{\beta(s)}{n}\,ds \;, \] and \[ L\,\lambda > \int_0^L \lr{-b(s)}{n}\,ds = \int_0^L \frac{d}{ds}\lr{-\alpha(s)}{n}\,ds + \int_0^L \lr{\beta(s)}{n}\,ds = \int_0^L \lr{\beta(s)}{n}\,ds \;. \] This absurdity proves the result. \hfill$\square$\medskip \begin{remark}\label{Com:NEmBPa} In fact, if ${\mycal C}$ has non-zero rotation index, one can show that, for any $r \in {\mycal C}$, there exists $s = s(r) \in {\mycal C}$ such that \eqref{Dec11-SingCond} holds. To see this, fix $r \in {\mycal C}$. Define \[ H(\tau,s) = \frac{\alpha_{,s}(s) + \tau\,\beta(s)}{|\alpha_{,s}(s) + \tau\,\beta(s)|}. \] By \eqref{Dec11-Init}, $|\alpha_{,s} + \tau\,\beta|^2 = |\alpha_{,s}|^2 + \tau^2|\beta|^2 > 0$ since $\alpha_{,s}$ is nowhere zero, so $H$ is a continuous map of $[0,1] \times {\mycal C}$ into ${\mathbb S}^1$. Moreover, at $\tau = 0$, $H(0,\cdot)$ is the tangent map of ${\mycal C}$, i.e. it assigns to each point of ${\mycal C}$ the unit tangent vector of ${\mycal C}$ at that point. As ${\mycal C}$ has non-zero rotation index, $H(0,\cdot)$ has non-zero degree. By the homotopy invariance of the degree, $H(1,\cdot) = a(\cdot)$ also has non-zero degree. In particular, for any $r \in {\mycal C}$, there exists $s \in {\mycal C}$ such that $a(s) = -b(r)$.
Here we have used $|b| = 1$. When ${\mycal C}$ has zero rotation index, such a strong conclusion as in the previous paragraph is no longer available. For example, when $\beta \equiv 0$, there exists $r_0 \in {\mycal C}$ such that $a(s) + b(r_0) = \alpha_{,s}(s) + \alpha_{,s}(r_0) \neq 0$ for all $s \in {\mycal C}$. \end{remark} \subsection{General analysis of the (spatial) unit tangent map} In the rest of the section, we analyze the local picture where the orthogonal gauge fails. To analyze what happens when the parametrization breaks down, we look into the unit tangent map \[ U(t,s) = \frac{\gamma_{,s}(t,s)}{|\gamma_{,s}(t,s)|}. \] In terms of $a$ and $b$, \begin{equation} U(t,s) = \frac{a(s + t) + b(s - t)}{|a(s + t) + b(s - t)|} .\label{Dec26-UTangM} \end{equation} We note that \eqref{Dec11-Init} implies that $|a| = |b| = 1$. Thus, there exist smooth functions $\zeta$ and $\eta$ such that \begin{equation} a_{,s}(s) = \zeta(s)\,a^\perp(s) \text{ and } b_{,s}(s) = \eta(s)\,b^\perp(s) ,\label{Dec27-xieta} \end{equation} where the perpendicular rotation $v^\perp$ of a vector $v = (v_1, v_2)$ is defined as $v^\perp = (-v_2, v_1)$. \begin{lemma}\label{BDCritW} If $\gamma_{,s}(t_0,s_0) = 0$, i.e. \begin{equation} a(s_0 + t_0) + b(s_0 - t_0) = 0 ,\label{Dec27-SingCondXX} \end{equation} and if \begin{equation} \zeta(s_0 + t_0) \neq \eta(s_0 - t_0) ,\label{Dec30-SingCondKey} \end{equation} then the unit tangent map $U(t_0,\cdot)$ of the curve ${\mycal C}_{t_0} = \gamma(t_0,\cdot)$ is discontinuous at $s_0$. More specifically, it reverses direction across $s_0$. \end{lemma} \noindent{\bf Proof.\;} Let $e = a(s_0 + t_0) = - b(s_0 - t_0)$, $s_0^+ = s_0 + t_0$ and $s_0^- = s_0 - t_0$. By \eqref{Dec27-xieta}, \begin{align*} a(s + t_0) + b(s - t_0) &= (s - s_0)\,[\zeta(s_0^+) - \eta(s_0^-)]e^\perp + O(|s - s_0|^2), \end{align*} where the big $O$ notation is meant for $s$ close to $s_0$.
This implies that \begin{align} U(t_0,s) &= \frac{(s - s_0)\,[\zeta(s_0^+) - \eta(s_0^-)]}{|s - s_0|\,|\zeta(s_0^+) - \eta(s_0^-)|}\,e^\perp + O(|s - s_0|). \label{Jan05-X1} \end{align} This shows that $U(t_0, s)$ reverses direction as $s$ changes across $s_0$. \hfill$\square$\medskip \begin{remark}\label{rem:genericsing} In fact, under the hypotheses of Lemma \ref{BDCritW}, \begin{align*} \gamma(t_0,s) &= \gamma(t_0,s_0) + \frac{1}{4}\,(s - s_0)^2\,[\zeta(s_0^+) - \eta(s_0^-)]\,e^\perp\\ &\qquad\qquad + \frac{1}{12}\,(s - s_0)^3 \Big\{[\zeta_{,s}(s_0^+) - \eta_{,s}(s_0^-)]e^\perp - [\zeta^2(s_0^+) - \eta^2(s_0^-)]\,e\Big\}\\ &\qquad\qquad + \frac{1}{48}(s - s_0)^4\,\Big\{[\zeta_{,ss}(s_0^+) - \eta_{,ss}(s_0^-) - \zeta^3(s_0^+) + \eta^3(s_0^-)]e^\perp\\ &\qquad\qquad\qquad\qquad - 3[\zeta(s_0^+)\,\zeta_{,s}(s_0^+) - \eta(s_0^-)\,\eta_{,s}(s_0^-)]\,e\Big\} + O(|s - s_0|^5). \end{align*} Thus, if $\zeta^2(s_0^+) \neq \eta^2(s_0^-)$, the curve ${\mycal C}_{t_0}$ has an ordinary cusp at $s_0$. \end{remark} \begin{remark}\label{rem:swallowtail} We claim that if $t_0 > 0$ is the smallest time such that $\gamma_{,s}(t_0, s_0) = 0$ for some $s_0$, then \[ \zeta(s_0 + t_0) = \eta(s_0 - t_0). \] Assume this claim for the moment and assume in addition that \[ \zeta_{,s}(s_0 + t_0) \neq \eta_{,s}(s_0 - t_0). \] Then the curve ${\mycal C}_{t_0}$ has a cusp of order $4/3$ at $s_0$. Furthermore, for $t > t_0$, this singularity splits up into two ordinary cusps. This picture is consistent with \cite{EggersH}. See Section \ref{Sec:LocPic} for a more detailed discussion. \end{remark} To prove the claim above, note that \[ \partial_s |a(s + t) + b(s - t)|^2 = 2\,\lr{a^\perp(s + t)}{b(s - t)}\,(\zeta(s + t) - \eta(s - t)) \] and \[ \partial_s^2\Big|_{(t,s) = (t_0,s_0)} |a(s + t) + b(s - t)|^2 = 2\,(\zeta(s_0^+) - \eta(s_0^-))^2.
\] Thus, by the implicit function theorem, if $\zeta(s_0^+) \neq \eta(s_0^-)$, then there exist some $\epsilon > 0$ and a smooth map $S: (t_0 - \epsilon, t_0 + \epsilon) \rightarrow {\mathbb R}$ such that $S(t_0) = s_0$ and \[ 0 = \partial_s |a(S(t) + t) + b(S(t) - t)|^2 = 2\,\lr{a^\perp(S(t) + t)}{b(S(t) - t)}\,(\zeta(S(t) + t) - \eta(S(t) - t)). \] Moreover, shrinking $\epsilon$ if necessary, we may assume that $\zeta(S(t) + t) \neq \eta(S(t) - t)$ for $t \in (t_0 - \epsilon, t_0 + \epsilon)$. This implies that \[ \lr{a^\perp(S(t) + t)}{b(S(t) - t)} = 0 \text{ for } t \in (t_0 - \epsilon, t_0 + \epsilon). \] Since $|a| = |b| = 1$, this means $b(S(t) - t) = \pm a(S(t) + t)$; as $a(s_0^+) + b(s_0^-) = 0$, the continuity of $a$ and $b$ forces $2\gamma_{,s}(t,S(t)) = a(S(t) + t) + b(S(t) - t) = 0$ for all $t \in (t_0 - \epsilon, t_0 + \epsilon)$. In particular, $\gamma_{,s}$ vanishes at times $t < t_0$, which contradicts the minimality of $t_0$. \subsection{The case of non-zero rotation index} In the following discussion, we write \[ a = (\cos\psi, \sin \psi) \text{ and } b = -(\cos {\tilde \psi}, \sin{\tilde \psi}). \] Note that $\psi' = \zeta$ and ${\tilde \psi}' = \eta$. Since $a$ and $b$ are periodic and have the same degree, say $d$ (see Remark \ref{Com:NEmBPa}), we have \[ \psi(s + L) - \psi(s) = {\tilde \psi}(r + L) - {\tilde \psi}(r) = 2\,d\,\pi, \] where $L$ is the period of $\alpha$. Also, since \[ \alpha_{,s} = \frac{1}{2}( a + b) = \sin\frac{\psi - {\tilde \psi}}{2} \Big(- \sin \frac{\psi + {\tilde \psi}}{2}, \cos \frac{\psi + {\tilde \psi}}{2}\Big) \] and $\alpha_{,s}$ is nowhere vanishing, we infer that the range of $\psi - {\tilde \psi}$ does not intersect $2\pi{\mathbb Z}$. \begin{lemma}\label{NZI::DoubleImFlat} Assume that $\alpha$ has non-zero rotation index and the unit tangent map is continuous. If $\psi(s_0) = \psi(s_1) = {\tilde \psi}(r_0)$ for some $s_0$, $s_1$ and $r_0$ with $0 < s_1 - s_0 < L$, then $\psi$ is constant in $(s_0, s_1)$. \end{lemma} \noindent{\bf Proof.\;} We will only consider the case where the rotation index $d$ of $\alpha$ is positive. The other case can be proved similarly.
Arguing indirectly, we assume that $\psi$ is non-constant in $(s_0, s_1)$. Then \begin{equation} \text{either }\max_{[s_0,s_1]} \psi > \psi(s_0) \text{ or } \min_{[s_0,s_1]} \psi < \psi(s_0). \label{Jan15-1} \end{equation} Assume for now that the former case holds. Set \[ M = \min\Big(2d\pi,\max_{[s_0,s_1]} \psi - \psi(s_0)\Big) > 0, \] and define \[ s_- = \sup\{ s < s_1: \psi(s) = \psi(s_0) + M/4\} \text{ and } s_+ = \inf\{s > s_-: \psi(s) = \psi(s_0) + 3M/4\}. \] By the mean value theorem, there exists $s_2 \in [s_-,s_+]$ such that \[ \psi'(s_2) = \frac{\psi(s_+) - \psi(s_-)}{s_+ - s_-} > 0. \] By the definition of $s_\pm$, we also have $\psi(s_0) + M/4 < \psi(s_2) \leq \psi(s_0) + 3M/4$. Since ${\tilde \psi}(r_0) = \psi(s_0)$ and ${\tilde \psi}(r_0 + L) = \psi(s_0) + 2d\pi$, the intermediate value theorem implies that there exists $r_2 \in [r_0, r_0 + L]$ such that ${\tilde \psi}(r_2) = \psi(s_2)$. Now let \[ s_3 = \sup\{s: \psi(\hat s) > \psi(s_2) \text{ for all } s_2 < \hat s < s\}. \] Since $\psi'(s_2) > 0$ and $\psi(s_1) = \psi(s_0) < \psi(s_2)$, $s_3$ exists and $s_2 < s_3 < s_1$. Furthermore, $\psi(s_3) = \psi(s_2)$ and $\psi'(s_3) \leq 0$. We thus end up with \[ a(s_2) = a(s_3) = -b(r_2), \quad \zeta(s_2) > 0 \geq \zeta(s_3). \] Therefore, \[ \text{either } \zeta(s_2) \neq \eta(r_2) \text{ or } \zeta(s_3) \neq \eta(r_2). \] Then Lemma \ref{BDCritW} applies, yielding that the unit tangent map must be discontinuous somewhere, a contradiction. If the second case in \eqref{Jan15-1} holds, the argument is similar, using the values of ${\tilde \psi}(r)$ for $r \in [r_0 - L, r_0]$. \hfill$\square$\medskip \begin{corollary}\label{NZI::Mon} Assume that $\alpha$ has non-zero rotation index and $\kappa$ is always finite. Then $\psi$ and ${\tilde \psi}$ are either both non-increasing or both non-decreasing. \end{corollary} \noindent{\bf Proof.\;} Again, we will only consider the case where the rotation index $d$ of $\alpha$ is positive.
By Remark \ref{Com:NEmBPa}, there exists $r_0$ such that $a(0) + b(r_0) = 0$. We can further assume that ${\tilde \psi}(r_0) = \psi(0)$. We claim that $\psi([0,L]) \subset [\psi(0),\psi(L)]$. Define \[ s_+ = \inf\{s > 0: \psi(s) = \psi(L)\} \leq L \text{ and } s_- = \sup\{s < L: \psi(s) = \psi(0)\} \geq 0. \] Since $\psi(0) = \psi(s_-) = {\tilde \psi}(r_0)$, Lemma \ref{NZI::DoubleImFlat} shows that $\psi$ is constant in $(0,s_-)$. Similarly, $\psi$ is constant in $(s_+,L)$. The claim follows easily. We now show that $\psi$ is non-decreasing. Assume otherwise; then, for some $0 \leq s_0 < s_1 \leq L$, $\psi(s_0) > \psi(s_1)$. By the claim, $s_0 > 0$. Thus, by the intermediate value theorem, there exists $s_2 \in (0,s_0)$ such that $\psi(s_2) = \psi(s_1)$. By Lemma \ref{NZI::DoubleImFlat}, $\psi$ is constant in $(s_2,s_1)$, contradicting the assumption that $\psi(s_0) > \psi(s_1)$. We hence conclude that $\psi$ is non-decreasing. Similarly, ${\tilde \psi}$ is non-decreasing. \hfill$\square$\medskip \begin{proposition}\label{NZI::NoSmooth} For any smooth initial data $\alpha$ and $\beta$ such that $\alpha$ has non-zero rotation index, there exists a time $T$ such that the curvature $\kappa(T,\cdot)$ of the curve ${\mycal C}_T$ blows up. \end{proposition} \noindent{\bf Proof.\;} We will only consider the case where the rotation index $d$ of $\alpha$ is positive. Assume, for some initial data $\alpha$ and $\beta$, that the curvature function $\kappa$ of the solution $\gamma$ remains finite for all time. This implies in particular that the unit tangent map $U(t,s)$ is a continuous function. We will show that the curve $\gamma(t,\cdot)$ then contracts to a point in finite time, which results in a contradiction. By \eqref{Dec26-UTangM}, \[ U(t,s) = {\rm sgn}\Big(\sin \frac{\psi(s + t) - {\tilde \psi}(s - t)}{2}\Big) \Big(- \sin \frac{\psi(s + t) + {\tilde \psi}(s - t)}{2}, \cos\frac{\psi(s + t) + {\tilde \psi}(s - t)}{2}\Big), \] where $\rm sgn$ denotes the sign function.
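Indeed, since $2\gamma_{,s}(t,s) = a(s + t) + b(s - t)$, the sum-to-product identities give

```latex
\[
a(s+t) + b(s-t)
= 2\sin\frac{\psi(s+t) - {\tilde\psi}(s-t)}{2}
  \Big(-\sin\frac{\psi(s+t) + {\tilde\psi}(s-t)}{2},\;
       \cos\frac{\psi(s+t) + {\tilde\psi}(s-t)}{2}\Big),
\]
% normalizing by |\gamma_{,s}| = |\sin\frac{\psi(s+t) - {\tilde\psi}(s-t)}{2}|
% produces the sign factor in the formula for U.
```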
It follows that, for any $t$, \begin{equation} \text{the function $\psi(\cdot + t) - {\tilde \psi}(\cdot - t)$ does not change sign,} \label{Jan15-2} \end{equation} for otherwise $U$ would be discontinuous somewhere. By Corollary \ref{NZI::Mon}, $\psi$ and ${\tilde \psi}$ are both non-decreasing. (Here we have also used the fact that $\psi(L) - \psi(0) = 2\pi\,d > 0$.) Using Remark \ref{Com:NEmBPa} and the mean and intermediate value theorems as in the proof of Lemma \ref{NZI::DoubleImFlat}, we can find $s_0$ and $t_0$ such that \begin{equation} \psi(s_0 + t_0) = {\tilde \psi}(s_0 - t_0) \label{Jan15-3} \end{equation} and \begin{equation} \psi'(s_0 + t_0) = {\tilde \psi}'(s_0 - t_0) > 0. \label{Jan15-4} \end{equation} By \eqref{Jan15-2}, we have either \begin{equation} \psi(s + t_0) \geq {\tilde \psi}(s - t_0) \text{ for all } s \label{Jan15-5} \end{equation} or \begin{equation} \psi(s + t_0) \leq {\tilde \psi}(s - t_0) \text{ for all } s. \label{Jan15-5*} \end{equation} Assume for now that \eqref{Jan15-5} holds. By \eqref{Jan15-3} and \eqref{Jan15-4}, for some $\delta_0 > 0$, there holds \[ \psi(s_0 + t_0 - \delta) < {\tilde \psi}(s_0 - t_0 + \delta) \text{ for all } \delta \in (0,\delta_0). \] Thus, by \eqref{Jan15-2}, \begin{equation} \psi(s + t_0 - \delta) \leq {\tilde \psi}(s - t_0 + \delta) \text{ for all } s \text{ and for all } \delta \in (0,\delta_0). \label{Jan15-6} \end{equation} From \eqref{Jan15-5} and \eqref{Jan15-6} we deduce that \[ \psi(s + t_0 - \delta) \leq {\tilde \psi}(s - t_0 + \delta) \leq \psi(s + t_0 + \delta) \text{ for all } s \text{ and for all } \delta \in (0,\delta_0). \] Sending $\delta \rightarrow 0$, we thus get \[ \psi(s + t_0) \equiv {\tilde \psi}(s - t_0). \] This implies that \[ 2\gamma_{,s}(t_0,s) = a(s + t_0) + b(s - t_0) \equiv 0, \] which shows that ${\mycal C}_{t_0}$ is actually a point. The case where \eqref{Jan15-5*} holds can be handled similarly.
We get \[ \psi(s + t_0 + \delta) \geq {\tilde \psi}(s - t_0 - \delta) \geq \psi(s + t_0 - \delta) \text{ for all } s \text{ and for all } \delta \in (0,\delta_0). \] This again forces $\psi(s + t_0) \equiv {\tilde \psi}(s - t_0)$ and thereby concludes the proof. \hfill$\square$\medskip \subsection{The case of zero rotation index} We next switch to the case where the rotation index of $\alpha$ is zero. We have \begin{lemma}\label{ZI::NoPlateau} Assume that $\alpha$ has zero rotation index and $U$ is continuous. For any $r$, there is at most one $s$ modulo the period such that $\psi(s) = {\tilde \psi}(r)$. \end{lemma} \noindent{\bf Proof.\;} Assume by contradiction that there exist $s_0$, $s_1$ and $r_0$ with $0 < s_1 - s_0 < L$ such that $\psi(s_0) = \psi(s_1) = {\tilde \psi}(r_0)$. Define \[ M = \max_{[s_0, s_0+L]} \psi \text{ and } m = \min_{[s_0, s_0 + L]} \psi. \] Since the curve is closed, $\int_0^L \alpha_{,s}\,ds = 0$; as $\alpha_{,s} = \frac{1}{2}(a + b)$ is nowhere zero, $a$ cannot be constant, for otherwise $\int_0^L b\,ds = -L\,a$ would force $b \equiv -a$ and hence $\alpha_{,s} \equiv 0$. In particular, $\psi$ is not a constant function. Thus, either $M > \psi(s_0)$ or $m < \psi(s_0)$. In the sequel, we will assume that $M > \psi(s_0)$. The case where $m < \psi(s_0)$ can be treated similarly. Furthermore, by replacing $(s_0, s_1)$ by $(s_1, s_0 + L)$ if necessary, we can assume that $M$ is achieved in $[s_0, s_1]$. We claim that ${\tilde \psi}(r) \leq \psi(s_0)$ for all $r$. If not, we can argue using the mean and intermediate value theorems as in the proof of Lemma \ref{NZI::DoubleImFlat} to find $s_2$, $s_3$ and $r_2$ such that \[ \psi(s_2) = \psi(s_3) = {\tilde \psi}(r_2) \text{ and } \zeta(s_2) > 0 \geq \zeta(s_3). \] In view of Lemma \ref{BDCritW}, this contradicts the continuity of $U$. The claim follows. Now, consider the interval $(s_1, s_0 + L)$. If the minimum value of $\psi$ in this interval were less than $\psi(s_0)$, the same argument would lead to another contradiction. We thus have \begin{equation} \psi(s) \geq \psi(s_0) \geq {\tilde \psi}(r) \text{ for any $s$ and $r$}.
\label{Jan15-X1} \end{equation} We next show that \begin{equation} \psi(s) - {\tilde \psi}(r) \leq 2\pi \text{ for any $s$ and $r$}. \label{Jan15-X2} \end{equation} Arguing indirectly, assume that \eqref{Jan15-X2} fails. By the intermediate value theorem, there exist $s_2$ and $r_2$ such that $\psi(s_2) = {\tilde \psi}(r_2) + 2\pi$. Furthermore, we can assume that $s_2 \in (s_0, s_1)$. If $\psi(s_2) = \psi(s_0)$, we can use the intermediate value theorem again to find $s_2'$ and $r_2'$ such that $\psi(s_0) < \psi(s_2') = {\tilde \psi}(r_2') + 2\pi$. We thus assume that $\psi(s_2) > \psi(s_0)$. If $\psi(s_2) < M$, the intermediate value theorem implies that there exists $s_3 \in (s_2, s_1)$ such that $\psi(s_3) = \psi(s_2)$. The argument leading to \eqref{Jan15-X1} then implies that $\psi$ can only take values either in $(-\infty, \psi(s_2)]$ or in $[\psi(s_2),\infty)$, which is obviously not the case. We thus get \[ \psi(s) = M \text{ whenever there exists $r$ such that } \psi(s) = {\tilde \psi}(r) + 2\pi. \] Since $\psi$ achieves values in $(\psi(s_0),M)$, this implies that ${\tilde \psi}(r) \geq M - 2\pi$ for all $r$, and hence \eqref{Jan15-X2} holds after all, a contradiction. We have thus shown \eqref{Jan15-X2}. We now revisit the proof of Lemma \ref{NonemptyBPara}. Define $A = \{a(s)\}$ and $B = \{-b(r)\}$. By \eqref{Jan15-X1} and \eqref{Jan15-X2}, $A$ and $B$ intersect in exactly two points: \[ A \cap B = \Big\{(\cos \psi(s_0),\sin\psi(s_0)), (\cos M, \sin M)\Big\}. \] This implies that there exist a unit vector $n$ and a real number $\lambda \in (-1,1)$ such that \[ \lr{p}{n} \geq \lambda \geq \lr{q}{n} \text{ for any } p \in A \text{ and } q \in B \;. \] Furthermore, since neither $\psi$ nor ${\tilde \psi}$ is constant, there exist $s$ and $r$ such that \[ \lr{a(s)}{n} > \lambda > \lr{-b(r)}{n} \;. \] We can then argue as in the proof of Lemma \ref{NonemptyBPara} to reach a contradiction.
\hfill$\square$\medskip \begin{proposition}\label{ZI::NoSmooth} For any smooth initial data $\alpha$ and $\beta$ such that $\alpha$ has zero rotation index, there exists a time $T$ such that the unit tangent map $U(T,\cdot)$ of the curve ${\mycal C}_T$ is discontinuous. \end{proposition} \noindent{\bf Proof.\;} Assume, for the sake of contradiction, that the curvature function remains finite for all time, so that $U$ is continuous. By Lemma \ref{NonemptyBPara}, there exist $s_0$ and $r_0$ such that $a(s_0) + b(r_0) = 0$. We can thus assume that $\psi(s_0) = {\tilde \psi}(r_0)$. Let \[ M = \max_{[s_0, s_0+L]} \psi \text{ and } m = \min_{[s_0, s_0 + L]} \psi. \] We claim that $\psi(s_0) \in \{M, m\}$. Assume otherwise that $m < \psi(s_0) < M$. Fix $s_-$ and $s_+$ with $0 < s_+ - s_- < L$ such that $\psi(s_-) = m$ and $\psi(s_+) = M$. Then, by the intermediate value theorem, there exist $s_1 \in (s_-, s_+)$ and $s_2 \in (s_+, s_- + L)$ such that $\psi(s_1) = \psi(s_2) = \psi(s_0) = {\tilde \psi}(r_0)$. This violates the conclusion of Lemma \ref{ZI::NoPlateau}. The claim follows. In the sequel, we assume that $\psi(s_0) = m$. The other case can be handled similarly. By symmetry, ${\tilde \psi}(r_0)$ is also an extremal value of ${\tilde \psi}$. If it were the minimal value, we could find $s_1$ and $r_1$ such that $\psi(s_1) = {\tilde \psi}(r_1)$ and the common value is not extremal, contradicting the above claim. Thus, \[ {\tilde \psi}(r_0) = \max {\tilde \psi}, \] which implies \[ \psi(s) \geq {\tilde \psi}(r) \text{ for any $s$ and $r$}. \] As in the proof of Lemma \ref{ZI::NoPlateau}, we next show that \[ \psi(s) - {\tilde \psi}(r) \leq 2\pi \text{ for any $s$ and $r$}. \] If this were not the case, we could find $s_1$ and $r_1$ such that $\psi(s_1) = \hat \psi(r_1)$, where $\hat\psi = {\tilde \psi} + 2\pi$. Using the intermediate value theorem, we can further assume that $\psi(s_1) > m$. Again, $\psi(s_1)$ is an extremal value of $\psi$, which must be the maximal value. Similarly, $\hat\psi(r_1)$ is the minimal value of $\hat\psi$.
We thus get \[ 0 = \max \psi - \min \hat\psi = \max \psi - \min {\tilde \psi} - 2\pi. \] We can now argue as in the proof of Lemma \ref{ZI::NoPlateau} to get a contradiction. \hfill$\square$\medskip \subsection{A lower bound for the blow up time} Having proved a singularity statement, we would like to see how long a solution stays smooth before it develops a singularity. The estimate should depend on the initial curve $\alpha_*: [p,q] \rightarrow {\mathbb R}^2$ and the initial (normal) velocity field $\partial_t + \beta_*$ along $\alpha_*$. To clarify the notation, $\alpha(s) = \gamma(0,s)$ and $\beta(s) = \gamma_{,t}(0,s)$ are reparametrizations of $\alpha_*$ and $\beta_*$, i.e. $\alpha = \alpha_* \circ \Phi$ and $\beta = \beta_* \circ \Phi$ for some diffeomorphism $\Phi$. Furthermore, the estimate should be local, because the speed of propagation is finite for the wave equation. For this latter reason, in this section we do not assume that $\alpha_*$ is a closed curve. It is useful to define the timelikeness index of the (prospective) maximal surface along $\alpha_*$ to be \[ j(\alpha_*,{\mycal S}) = j(\alpha_*,\beta_*) := L(\alpha_*)\left\{\int_{\alpha_*} \frac{1}{\sqrt{-|V_*|^2}}\,|d\alpha_*|\right\}^{-1} = L(\alpha_*)\left\{\int_{\alpha_*} \frac{1}{\sqrt{1 - |\beta_*|^2}}\,|d\alpha_*|\right\}^{-1}, \] where $L(\alpha_*)$ denotes the length of $\alpha_*$ and $|V_*|^2 = -1 + |\beta_*|^2$ is the Minkowski length squared of $V_*$. Note that by definition $\beta_*$ is normal to $\alpha_*$ and has Euclidean norm smaller than $1$. Hence \[ j(\alpha_*,\beta_*) \in (0,1]. \] \begin{proposition}\label{Prop:LBExistenceTime} Let $\alpha_*: [p,q] \rightarrow {\mathbb R}^2$ be a (not necessarily closed) smooth curve in ${\mathbb R}^2 = \{t = 0\} \subset {\mathbb R}^{1 + 2}$, $U_*$ its unit tangent vector field, and $V_* = \partial_t + \beta_*$ a smooth timelike vector field along $\alpha_*$ and normal to $\alpha_*$.
Let $\partial_t + a_*$ and $-\partial_t + b_*$ be the null vector fields belonging to the span of $\{U_*,V_*\}$ such that \[ \lr{a_*}{U_*} > 0 \text{ and } \lr{b_*}{U_*} > 0. \] If the timelikeness index along $\alpha_*$ and the curvatures of $a_*$ and $b_*$ satisfy \[ j(\alpha_*,\beta_*) > \frac{3}{2}\int_{\alpha_*} \left[|\nabla_{U_*} a_*| + |\nabla_{U_*} b_*| \right]\,|d\alpha_*|, \] then there exist two smooth functions $p, q: [0,T] \rightarrow {\mathbb R}$ with $T = L(\alpha_*)/(2\,j(\alpha_*,\beta_*))$, $p(0) = p$, $q(0) = q$, $p(T) = q(T)$ and a map $\gamma: \Omega \rightarrow {\mathbb R}^2$ with $\Omega = \{(t,x): t \in [0,T], x \in [p(t),q(t)]\}$ such that the map $(t,x) \mapsto (t,\gamma(t,x))$ defines a regular timelike maximal surface which contains $\alpha_*$, is tangential to $V_* = \partial_t + \beta_*$, and whose lateral boundaries are two null curves. \end{proposition} \noindent{\bf Proof.\;} Switching to the orthogonal gauge as before, we can drop the subscript $*$. Note that $a_*$ and $b_*$ coincide with the vector fields $a$ and $b$ defined in \eqref{Dec27-ab}. We will show that the map \begin{align*} \Omega = \{(t,s): t \in [0,T], p + t \leq s \leq q - t\} &\rightarrow {\mathbb R}^{1 + 2},\\ (t,s) &\mapsto (t,\gamma(t,s)) \end{align*} defines a smooth maximal surface. (Note that, in this gauge, $|\alpha_{,s}|^2 = 1 - |\beta|^2$, so that $q - p = \int_p^q ds = \int_{\alpha} \frac{|d\alpha|}{\sqrt{1 - |\beta|^2}} = L/j$, i.e. $2T = q - p$.) To this end, it suffices to show that $\gamma_{,s}(t,s) \neq 0$ for $(t,s) \in \Omega$. We first estimate $|\alpha_{,s}| = |\gamma_{,s}(0,\cdot)|$. Using the functions $\zeta$ and $\eta$ defined in \eqref{Dec27-xieta}, we estimate for $x,y \in [p,q]$: \begin{align*} |\alpha_{,s}(y)| - |\alpha_{,s}(x)| &= \int_x^y \frac{\lr{\alpha_{,s}(z)}{\alpha_{,ss}(z)}}{|\alpha_{,s}(z)|}\,dz\\ &= \int_x^y \frac{\lr{a(z) + b(z)}{a_{,s}(z) + b_{,s}(z)}}{2|a(z) + b(z)|}\,dz\\ &= \int_x^y \frac{\lr{a(z) + b(z)}{a^\perp(z)\,\zeta(z) + b^\perp(z)\,\eta(z)}}{2|a(z) + b(z)|}\,dz\\ &= \int_x^y [\zeta(z)-\eta(z)]\frac{\lr{a^\perp(z)}{b(z)}}{2|a(z) + b(z)|}\,dz\\ &= \frac{1}{4}\int_x^y [\zeta(z)-\eta(z)]\lr{\frac{a(z) + b(z)}{|a(z) + b(z)|}}{\underbrace{a^\perp(z) - b^\perp(z)}_{=2\beta^\perp(z)}}\,dz. \end{align*} This implies that \[ \Big||\alpha_{,s}(x)| - |\alpha_{,s}(y)| \Big| \leq \frac{1}{2}\int_p^q |\zeta(z) - \eta(z)|\,|\beta(z)|\,dz . \] Averaging in the $y$ variable, it follows that \[ \Big| |\alpha_{,s}(x)| - \underbrace{\frac{L}{2T}}_{= j}\Big| = \Big| |\alpha_{,s}(x)| - \frac{1}{2T}\int_p^q |\alpha_{,s}(y)|\,dy\Big| \leq \frac{1}{2}\int_p^q |\zeta(z) - \eta(z)|\,|\beta(z)|\,dz. \] We thus deduce that \begin{equation} |\alpha_{,s}(x)| \geq j - \frac{1}{2}\int_p^q |\zeta(z) - \eta(z)|\,|\beta(z)|\,dz \text{ for } x \in [p,q]. \label{11Oct11-alphasBnd} \end{equation} Next, for any $(t,s) \in \Omega$ we have \begin{align*} |\gamma_{,s}(t,s)| &= \frac{1}{2}|a(s + t) + b(s - t)|\\ &\geq \frac{1}{2}|a(s - t) + b(s - t)| - \frac{1}{2}|a(s + t) - a(s - t)|\\ &\geq |\alpha_{,s}(s - t)| - \frac{1}{2}\left|\int_{s - t}^{s + t} a^\perp(z)\,\zeta(z)\,dz\right|\\ &\geq |\alpha_{,s}(s - t)| - \frac{1}{2}\int_p^q |\zeta(z)|\,dz. \end{align*} By symmetry, we have \[ |\gamma_{,s}(t,s)| \geq |\alpha_{,s}(s + t)| - \frac{1}{2}\int_p^q |\eta(z)|\,dz. \] Averaging the last two bounds and using \eqref{11Oct11-alphasBnd} together with $|\zeta - \eta| \leq |\zeta| + |\eta|$, we obtain \[ |\gamma_{,s}(t,s)| \geq j - \frac{1}{4}\int_p^q [|\zeta(z)| + |\eta(z)|]\,[2\,|\beta(z)| + 1]\,dz \text{ for } (t,s) \in \Omega. \] Now, noticing that \[ \nabla_U a = \frac{1}{|\alpha_{,s}|} a_{,s} = \frac{1}{|\alpha_{,s}|} a^\perp\,\zeta \text{ and } \nabla_U b = \frac{1}{|\alpha_{,s}|} b^\perp\,\eta, \] we arrive at \[ |\gamma_{,s}(t,s)| \geq j - \frac{1}{4}\int_{\alpha} [|\nabla_U a| + |\nabla_U b|]\,[2\,|\beta| + 1]\,|d\alpha| \text{ for } (t,s) \in \Omega. \] Note that, as $\partial_t + \beta$ is timelike, $|\beta| < 1$ and so $2\,|\beta| + 1 < 3$. Hence, by the hypothesis of the proposition, the right hand side of the above inequality is positive. We conclude the proof. \hfill$\square$\medskip As a consequence of the above result we have \begin{corollary} For any closed curve ${\mycal C} \subset {\mathbb R}^2 = \{t = 0\}\subset {\mathbb R}^{1 + 2}$ and any future-directed timelike field $V$ along ${\mycal C}$, there exist a constant $T_* > 0$ and a regular timelike maximal surface ${\mycal S}$ containing ${\mycal C}$ and tangential to $V$ in the time slab $\{0 \leq t < T_*\}$. Furthermore, the maximal existence time $T_*$ is finite and, for any positive $l$ which is smaller than the length of the final curve $\gamma(T_*,\cdot)$, there holds \[ \limsup_{t \rightarrow T_*} \sup\Bigg\{ \frac{1}{j(\Gamma,{\mycal S})}\,\int_\Gamma \left[|\nabla_{U} a| + |\nabla_{U} b| \right]\,|d\Gamma|\Bigg\} \geq \frac{2}{3}, \] where the supremum is taken over all connected sub-arcs $\Gamma$ of length $l$ of the curve $\gamma(t,\cdot)$, $U$ is the unit tangent to $\Gamma$, and $\partial_t + a$ and $-\partial_t + b$ are the null vector fields along $\Gamma$ tangent to ${\mycal S}$. In particular, the surface ${\mycal S}$ becomes null somewhere on the final curve. \end{corollary} \section{Local picture at a singularity in ${\mathbb R}^{1 + 2}$}\label{Sec:LocPic} In this section, we study the local picture at a singularity.
Let $\alpha: [-1,1] \rightarrow {\mathbb R}^2$ and $\beta: [-1,1] \rightarrow {\mathbb R}^2$ be two smooth maps such that \begin{enumerate}[(a)] \item $\alpha$ defines a continuous curve which is smooth away from $\alpha(0) = 0$, \item $\lr{\alpha'}{\beta} = 0$ and $|\alpha'|^2 + |\beta|^2 = 1$ in $[-1,1]$, \item $|\beta(0)| = 1$. \end{enumerate} Note that (c) implies \begin{equation} \lr{\beta'(0)}{\beta(0)} = 0 \text{ and } \lr{\beta''(0)}{\beta(0)} + |\beta'(0)|^2 \leq 0, \label{Jan12-X1} \end{equation} and (b) and (c) imply \begin{equation} \alpha'(0) = 0 \text{ and } \lr{\alpha''(0)}{\beta(0)} = 0. \label{Jan12-X2} \end{equation} In particular, as $\beta(0) \neq 0$, \[ \text{$\alpha''(0)$ and $\beta'(0)$ are collinear.} \] In addition, (b) implies that \begin{align} &|\alpha''(0)|^2 + \lr{\beta''(0)}{\beta(0)} + |\beta'(0)|^2 = 0, \label{Jan12-X3}\\ &\lr{\alpha'''(0)}{\beta(0)} + 2\lr{\alpha''(0)}{\beta'(0)} = 0. \label{Jan12-X4} \end{align} In view of \eqref{Dec11-RepF}, define \[ \gamma(t,s) = \frac{1}{2}(\alpha(s + t) + \alpha(s - t)) + \frac{1}{2}\int_{s - t}^{s + t} \beta(z)\,dz \] for \[ (t,s) \in \Omega := \big\{ (t,s): |t| + |s| \leq 1\big\}. \] As shown earlier, $\gamma$ defines a regular timelike maximal surface ${\mycal S}$ away from points where $\gamma_{,s}(t,s) = 0$. We would like to analyze its local behavior near $\gamma(0,0)$. We note that a spacetime dilation of a maximal surface remains a maximal surface. Thus, it would be natural to consider the limit $n\gamma(\frac{t}{n},\frac{s}{n})$ as $n \rightarrow \infty$. (Note that the rescaling of the parametrization variables is to ensure the gauge conditions \eqref{Dec10-Gauge1*} and \eqref{Dec10-ConsLaw}.) It is easy to see that the limit is the map $(t,s) \mapsto \beta(0)t$ which parametrizes a null plane in ${\mathbb R}^{1 + 2}$. This approximation of the original maximal surface is rather crude. In what follows, we would like to obtain a better description.
We start with an analysis of the zero set of $\gamma_{,s}$. We have \[ \gamma_{,s}(t,s) = \frac{1}{2}(\alpha'(s + t) + \alpha'(s - t)) + \frac{1}{2}(\beta(s + t) - \beta(s - t)) = \frac{1}{2}(a(s + t) + b(s - t)), \] where $a$ and $b$ are defined in \eqref{Dec27-ab}. Recalling the functions $\xi$ and $\eta$ defined in \eqref{Dec27-xieta}, we have \begin{align*} 2\partial_s |\gamma_{,s}|^2(t,s) &= \lr{a'(s + t) + b'(s - t)}{a(s + t) + b(s - t)}\\ &= \lr{a^\perp(s + t)}{b(s - t)}(\xi(s + t) - \eta(s - t)). \end{align*} and \[ 2 \partial_{s}^2 |\gamma_{,s}|^2(0,0) = (\xi(0) - \eta(0))^2 = 4|\alpha''(0)|^2. \] \subsection{Generic singularity propagation}\label{ssec:LPicPer} Let us first consider the case $\alpha''(0) \neq 0$. Note that this implies in particular that the curve defined by $\alpha$ has a cusp at $\alpha(0)$. In this case, we can find some $\epsilon > 0$ such that the solutions to $\partial_s |\gamma_{,s}|^2(t,s) = 0$ in $(-\epsilon,\epsilon)^2 \subset \Omega$ are given by a smooth curve $\Gamma = \{(t,S(t)): t \in (-\epsilon,\epsilon)\}$. Furthermore, as $\alpha''(0) \neq 0$, $\xi(S(t) + t) - \eta(S(t) - t) \neq 0$ for $t \in (-\epsilon,\epsilon)$, and so $\lr{a^\perp(S(t) + t)}{b(S(t) - t)} = 0$. Continuity then implies that $2\gamma_{,s}(t,S(t)) = a(S(t) + t) + b(S(t) - t) = 0$. In this case, locally around $\gamma(0,0)$, the singularities of ${\mycal S}$ are given by $\{(t, \gamma(t,S(t)))\}$, which is null and tangent to $\partial_t + \beta(0)$. Also, as \[ 2 \partial_t\partial_{s} |\gamma_{,s}|^2(0,0) = \xi(0)^2 - \eta(0)^2 = 4\lr{\alpha''(0)}{\beta'(0)}, \] we have \[ S'(0) = -\frac{\lr{\alpha''(0)}{\beta'(0)}}{|\alpha''(0)|^2}. \] We have thus shown: \begin{proposition} Let ${\mycal S}$ be a ``timelike maximal'' surface defined by a smooth map $(t,s) \mapsto \gamma(t,s)$ satisfying $\lr{\gamma_{,t}}{\gamma_{,s}} = 0$ and $|\gamma_{,t}|^2 + |\gamma_{,s}|^2 = 1$. If $\gamma(t_0,s_0)$ is a singular point of ${\mycal S}$ (i.e.
$\gamma_{,s}(t_0, s_0) = 0$) and if $\gamma_{,ss}(t_0,s_0) \neq 0$, then locally around $\gamma(t_0,s_0)$ singularities of ${\mycal S}$ are cusps and propagate along the null curve $t \mapsto (t, \gamma(t,S(t)))$ where $S$ solves \[ \left\{\begin{array}{l} S'(t) = -\frac{\lr{\gamma_{,ss}(t,S(t))}{\gamma_{,ts}(t,S(t))}}{|\gamma_{,ss}(t,S(t))|^2},\\ S(t_0) = s_0. \end{array}\right. \] \end{proposition} Examples of exact solutions with the above behavior are given by \[ \gamma(t,s) = \frac{1}{2}\Big(\frac{1}{\lambda_1} + \frac{1}{\lambda_2} - \frac{\cos \lambda_1(s - t)}{\lambda_1} - \frac{\cos \lambda_2(s + t)}{\lambda_2}, - \frac{\sin \lambda_1(s - t)}{\lambda_1} + \frac{\sin \lambda_2(s + t)}{\lambda_2}\Big), \] and ones obtained by sending $\lambda_1 \rightarrow 0$ or $\lambda_2 \rightarrow 0$. The picture of the corresponding ${\mycal S}$ for $\lambda_1 = 3$ and $\lambda_2 = 1$ is given in Figure \ref{PerSing}. In this example, every time slice has two cusp singularities. Those singularities propagate along null helices. As an evolution of curves in ${\mathbb R}^2$, it is a rotation: the time slice at time $t$ is obtained by rotating the initial curve by $\frac{3}{2}t$ radians around some point. \begin{figure}[h] \begin{center} \includegraphics[width=.4\textwidth]{PersistentSingularity.pdf} \caption{A maximal surface in which any time slice has exactly two singularities which travel along two null curves.} \label{PerSing} \end{center} \end{figure} Let us show that the behavior seen above is typical: Infinitesimally around a singularity of the present type, the time slices ${\mycal C}_t$ of ${\mycal S}$ are evolved by a rigid motion, namely either a translation or a rotation. The key idea is that the curve $t \mapsto \gamma(t,S(t))$ of zeroes of $\gamma_{,s}$ can be approximated up to second order around $t = 0$ by its osculating circle.
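The exact solution family displayed above can be sanity-checked numerically. The following sketch (our own illustration, with the values $\lambda_1 = 3$, $\lambda_2 = 1$ of Figure \ref{PerSing}) verifies by finite differences that the family satisfies the gauge conditions $\lr{\gamma_{,t}}{\gamma_{,s}} = 0$ and $|\gamma_{,t}|^2 + |\gamma_{,s}|^2 = 1$, and that $\gamma_{,s}$ vanishes at $(t,s) = (0,0)$, so that $\gamma(0,0)$ is a singular point.

```python
import numpy as np

# Exact-solution family from the text, with lambda_1 = 3, lambda_2 = 1.
l1, l2 = 3.0, 1.0

def gamma(t, s):
    u, v = l1 * (s - t), l2 * (s + t)
    return 0.5 * np.array([1.0/l1 + 1.0/l2 - np.cos(u)/l1 - np.cos(v)/l2,
                           -np.sin(u)/l1 + np.sin(v)/l2])

def d(f, t, s, h=1e-6):
    # central finite differences in t and s
    gt = (f(t + h, s) - f(t - h, s)) / (2 * h)
    gs = (f(t, s + h) - f(t, s - h)) / (2 * h)
    return gt, gs

rng = np.random.default_rng(0)
for t, s in rng.uniform(-1.0, 1.0, size=(20, 2)):
    gt, gs = d(gamma, t, s)
    assert abs(gt @ gs) < 1e-6                  # <gamma_t, gamma_s> = 0
    assert abs(gt @ gt + gs @ gs - 1.0) < 1e-6  # |gamma_t|^2 + |gamma_s|^2 = 1

_, gs0 = d(gamma, 0.0, 0.0)                     # (0,0) is a singular point
assert np.linalg.norm(gs0) < 1e-6
print("gauge conditions and singular point verified")
```

Since $\gamma$ is a sum of a function of $s - t$ and a function of $s + t$, the wave equation $\gamma_{,tt} - \gamma_{,ss} = 0$ holds automatically; only the two pointwise gauge constraints need checking.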
We note that, up to cubic terms, the Taylor expansion of $\gamma$ around $(0,0)$ is \[ \gamma(t,s) = \beta(0)\,t + \frac{1}{2}\alpha''(0)(t^2 + s^2) + \beta'(0)\,ts + \frac{1}{6}\,\beta''(0)\,(t^3 + 3\,ts^2) + \frac{1}{6}\,\alpha'''(0)\,(3t^2 s + s^3) + \ldots \] Let \[ e = \beta(0), p = \lr{\alpha''(0)}{e^\perp} \text{ and } q = \lr{\beta'(0)}{e^\perp}. \] Using \eqref{Jan12-X1}-\eqref{Jan12-X4}, we have \begin{align*} \gamma(t,s) &= t\,e - \frac{1}{6}\,[(p^2 + q^2)(t^3 + 3\,t\,s^2) + 2pq(3t^2\,s + s^3)]e\\ &\qquad + \frac{1}{2}[p(t^2 + s^2) + 2q\,ts]\,e^\perp + O(|t|^4 + |s|^4) e + O(|t|^3 + |s|^3)e^\perp. \end{align*} In particular, \begin{align*} \alpha(s) &= -\frac{pq}{3}\,s^3\,e + \frac{p}{2}\,s^2\,e^\perp + O(s^4)\,e + O(s^3)\,e^\perp. \end{align*} The curvature of the curve $t \mapsto \gamma(t,S(t))$ at $t = 0$ is \[ k_0 = \lr{\alpha''(0) - \frac{\lr{\alpha''(0)}{\beta'(0)}}{|\alpha''(0)|^2}\,\beta'(0)}{\beta(0)^\perp} = p - \frac{q^2}{p}. \] If $k_0 \neq 0$, i.e. $p \neq \pm q$, then \begin{align*} \gamma(t,s) &= \frac{1}{2}\Big(-\frac{1}{p-q}\,\sin [(p-q)(s - t)] + \frac{1}{p + q}\sin [(p + q)(s + t)]\Big)\,e\\ &\qquad + \frac{1}{2}\Big(\frac{2p}{p^2 - q^2} - \frac{1}{p - q}\,\cos [(p-q)(s - t)] - \frac{1}{p + q}\cos [(p + q)(s + t)] \Big)e^\perp\\ &\qquad + O(|t|^4 + |s|^4) e + O(|t|^3 + |s|^3)e^\perp. \end{align*} Hence, if we set $X(t,s) = \lr{\gamma(t,s)}{e^\perp} - \frac{p}{p^2 - q^2}$ and $Y(t,s) = \lr{\gamma(t,s)}{e}$, we have \begin{align*} X(t,s) &= \cos k_0 t\,X\big(0,s + \frac{q}{p} t\big) + \sin k_0 t\,Y\big(0,s + \frac{q}{p}t\big) + O(|t|^3 + |s|^3),\\ Y(t,s) &= -\sin k_0 t\,X\big(0,s + \frac{q}{p} t\big) + \cos k_0 t\,Y\big(0,s + \frac{q}{p}t\big) + O(|t|^4 + |s|^4).
\end{align*} This shows that, infinitesimally, ${\mycal C}_t$ ``is'' the image of a rotation of ${\mycal C}_0$ by $k_0\,t$ radians about $\frac{p}{p^2 - q^2}e^\perp$ (which is the center of the osculating circle of the curve of zeroes of $\gamma_{,s}$ at $s = t = 0$). The case where $k_0 = 0$ can be obtained by considering the limit $p \rightarrow q$ or $p \rightarrow - q$. For example, when $p = q$, we have \begin{align*} \gamma(t,s) &= t\,e - \frac{1}{3}\,p^2(s + t)^3e + \frac{1}{2}\,p\,(s + t)^2\,e^\perp + O(|t|^4 + |s|^4) e + O(|t|^3 + |s|^3)e^\perp\\ &= t\,e + \alpha(s + t) + O(|t|^4 + |s|^4) e + O(|t|^3 + |s|^3)e^\perp. \end{align*} This shows that, infinitesimally, the curve ${\mycal C}_t$ ``is'' a translation of ${\mycal C}_0$. The exact solution approximant is \[ \gamma^*(t,s) = \frac{1}{2}\big(t - s + \frac{1}{2p}\,\sin 2p(s + t)\big)e + \frac{1}{4p}\big(1 - \cos 2p(s + t)\big)e^\perp. \] \subsection{Generic singularity formation}\label{ssec:LPicSw} Let us consider next the case $\alpha''(0) = 0$. Note that condition (b) together with $\alpha''(0) = 0$ implies that \[ \text{$\alpha'''(0)$ and $\beta'(0)$ are collinear.} \] To make the situation not too degenerate, we make an empirical ansatz that \begin{equation} \beta'(0) \neq 0 \text{ and } \alpha'''(0) \neq 0. \label{5Jan12-EmA} \end{equation} As we have said earlier, this is the generic case for singularity formation. Arguing as in the previous case but considering zeros of $\partial_t |\gamma_{,s}|^2$ instead, we see that the singularities of ${\mycal S}$ around $\gamma(0,0)$ are given by a curve $\Gamma = \{(T(s),s)\}$ where $T$ is smooth and \[ T'(s) = -\frac{\lr{\alpha''(s)}{\beta'(s)}}{|\beta'(s)|^2}. \] Furthermore, note that \begin{equation} T'(0) = 0 \text{ and } T''(0) = -\frac{\lr{\alpha'''(0)}{\beta'(0)}}{|\beta'(0)|^2} \neq 0, \label{23Jan12-J1} \end{equation} which implies that the singularities lie either all in the future or in the past.
In addition, by Remark \ref{rem:swallowtail}, $\gamma(t_0,s_0)$ is a cusp of order $4/3$ and other singularities are regular cusps. We thus have: \begin{proposition} Let ${\mycal S}$ be a ``timelike maximal'' surface defined by a smooth map $(t,s) \mapsto \gamma(t,s)$ satisfying $\lr{\gamma_{,t}}{\gamma_{,s}} = 0$ and $|\gamma_{,t}|^2 + |\gamma_{,s}|^2 = 1$. If $\gamma(t_0,s_0)$ is a singular point of ${\mycal S}$ (i.e. $\gamma_{,s}(t_0, s_0) = 0$) and if $\gamma_{,ts}(t_0,s_0) \neq 0$, then locally around $\gamma(t_0,s_0)$ singularities of ${\mycal S}$ lie along the null curve $s \mapsto (T(s),\gamma(T(s),s))$ where $T$ solves \[ \left\{\begin{array}{l} T'(s) = -\frac{\lr{\gamma_{,ss}(T(s),s)}{\gamma_{,ts}(T(s),s)}}{|\gamma_{,ts}(T(s),s)|^2},\\ T(s_0) = t_0. \end{array}\right. \] Furthermore, if $\gamma_{,ss}(t_0,s_0) = 0$ and $\gamma_{,sss}(t_0,s_0) \neq 0$, then those singularities are cusps (except possibly $\gamma(t_0,s_0)$) and the curve $s \mapsto (T(s),\gamma(T(s),s))$ lies either all in the past or in the future (depending on whether $\lr{\gamma_{,sss}}{\gamma_{,t}}(t_0,s_0)$ is positive or negative, respectively) and splits into two null curves which join together at $(t_0,\gamma(t_0,s_0))$ as a cusp. \end{proposition} An example is given by \begin{align*} \alpha(s) &= \Big(\frac{1}{3}\,\sin^3 s, \frac{2}{3} -\frac{2}{3}\cos s - \frac{1}{3}\sin^2 s\,\cos s\Big),\\ \beta(s) &= (-\sqrt{1 - \sin^4 s}\sin s, \sqrt{1 - \sin^4 s}\,\cos s). \end{align*} The picture of the corresponding ${\mycal S}$ is given in Figure \ref{SwtailSing}. \begin{figure}[h] \begin{center} \begin{tabular}{|c|c|} \hline \includegraphics[width=.4\textwidth]{SwallowtailSingularity.pdf}&\includegraphics[width=.4\textwidth]{SwallowtailSingularitySlices.pdf}\\ (a) & (b)\\ \hline \end{tabular} \caption{(a) A maximal surface with swallowtail-type singularity.
(b) Time slices of the maximal surface in (a).} \label{SwtailSing} \end{center} \end{figure} As we discussed before, the set of singularities looks like a swallowtail. Under a self-similar assumption, this was proved by Eggers and Hoppe \cite{EggersH}. It is natural to ask whether the prototype of generic singularity formation is of self-similar type. First note that, by \eqref{23Jan12-J1}, for $s$ close to $0$, $T(s) \sim s^2$. Thus, to see the local picture of singularities, one should look at the scale $t \sim s^2$. In this scale, the Taylor expansion up to ``quartic terms'' of $\gamma$ at $(0,0)$ is \begin{align*} \gamma(t,s) &= \beta(0)\,t + \beta'(0)\,ts + \frac{1}{2}\,\beta''(0)\,ts^2 + \frac{1}{6}\,\alpha'''(0)\,s^3\\ &\qquad + \frac{1}{24}\,\alpha''''(0)\,s^4 + O(|t|^{5/2} + |s|^5). \end{align*} Now let \[ e = \beta(0), u = \lr{\alpha'''(0)}{e^\perp} \text{ and } q = \lr{\beta'(0)}{e^\perp}. \] A simple computation leads to \begin{align*} \gamma(t,s) &= \big[t - \frac{1}{2}\,q^2\,ts^2 - \frac{1}{8}\,qu\,s^4 \big]\,e + \big[q\,ts + \frac{1}{6}\,u\,s^3\big]\,e^\perp\\ &\qquad + O(|t|^{5/2} + |s|^5)\,e + O(|t|^2 + |s|^4)\,e^\perp. \end{align*} This shows that, infinitesimally, ${\mycal C}_t$ is self-similar: \[ \gamma(n^{-2}t, n^{-1}s) = \Big[n^{-2}t + n^{-4}\,\big(\lr{\gamma(t,s)}{e} - t\big)\Big]e + n^{-3}\lr{\gamma(t,s)}{e^\perp}\,e^\perp + O(n^{-5})\,e + O(n^{-4})\,e^\perp. \] \section{Timelike maximal surfaces in general vacuum spacetimes} In this section we give the proof of Theorem \ref{Main2}. Let $({\mycal M}^{1 + 2},{\mathfrak{g}})$ be a smooth oriented, time-oriented, globally hyperbolic Lorentzian manifold which satisfies the Einstein vacuum equation: \begin{equation} {\overline{\rm Ric}}_{\alpha\beta} = 0. \label{VacBackground} \end{equation} Here ${\overline{\rm Ric}}$ is the Ricci curvature of ${\mathfrak{g}}$. Let $t$ be a global time function on ${\mycal M}$.
\subsection{Adapted coordinates} Consider in ${\mycal M}$ a closed spacelike acausal embedded curve ${\mycal C}$ and a timelike (embedded) surface ${\mycal S}$ which contains ${\mycal C}$. We claim that ${\mycal C}$ is a Cauchy curve for ${\mycal S}$. Indeed, since ${\mycal C}$ is acausal, it suffices to show that each inextendible causal curve in ${\mycal S}$ must intersect ${\mycal C}$. Let $\lambda$ be an inextendible causal curve in ${\mycal S}$. Since ${\mycal C}$ is compact, the range of $t|_{{\mycal C}}$ is bounded. Also, by the global hyperbolicity of ${\mycal M}$, $t\big|_{\lambda}$ can attain any value in ${\mathbb R}$, and in particular the value zero. The last two statements evidently imply that $\lambda$ intersects ${\mycal C}$. The claim is proved. By the above claim, ${\mycal S}$ is globally hyperbolic (this can also be seen by noting that $t\big|_{{\mycal S}}$ defines a time function on ${\mycal S}$) and ${\mycal S}$ is homeomorphic to ${\mathbb R} \times {\mycal C}$. Next, we follow Kulkarni \cite{Kulkarni} to define a `canonical' parametrization of ${\mycal S}$ as follows. Assume that ${\mycal C}$ is parametrized by $\{\gamma(s): s \in [0,\Xi]\}$ (where $\gamma(0) = \gamma(\Xi)$). Let $\widehat{\mycal S} \approx {\mathbb R} \times {\mathbb R}$ be the universal cover of ${\mycal S}$. Let $\widehat{\mycal C}$ be the lift of ${\mycal C}$ and $\hat s$ be the lift of the parameter $s$. For any point $\hat p \in \widehat {\mycal S}$, the null lines passing through $\hat p$ intersect $\widehat{\mycal C}$ at $\hat p_-$ and $\hat p_+$, whose $\hat s$-parameters are $x_-$ and $x_+$ with $x_- \leq x_+$. We then set \[ \hat\tau(\hat p) = \left\{\begin{array}{ll} 0 &\text{ if } \hat p \in \widehat{\mycal C},\\ \frac{1}{2}(x_+ - x_-) &\text{ if $\hat p$ is in the future of $\widehat{\mycal C}$},\\ -\frac{1}{2}(x_+ - x_-) &\text{ if $\hat p$ is in the past of $\widehat{\mycal C}$}, \end{array}\right. \] and \[ \hat\xi(\hat p) = \frac{1}{2}(x_+ + x_-).
\] Then $(\hat\tau,\hat\xi)$ defines a global parametrization of $\widehat {\mycal S}$. This descends to a parametrization $(\tau,\xi)$ of ${\mycal S}$. Note that ${\mycal C} = \{\tau = 0\}$. Now, note that both $\partial_\tau + \partial_\xi$ and $\partial_\tau - \partial_\xi$ are null. Hence, the metric $g$ induced by ${\mathfrak{g}}$ on ${\mycal S}$ takes the form \[ g = A(-d\tau^2 + d\xi^2) \] where $A$ is nowhere zero. Since $\xi \equiv s$ on ${\mycal C}$, which is spacelike, $A$ is positive. We thus have \[ g = e^{2u(\tau,\xi)}\,[-d\tau^2 + d\xi^2]. \] Here and in the rest of the paper, $\xi$ is assumed to take values in ${\mathbb R}$ and all functions are periodic in $\xi$ with a fixed period $\Xi > 0$. Near ${\mycal S}$, we can complete $\{\tau,\xi\}$ to a local coordinate system $\{\rho,\tau,\xi\}$ such that ${\mycal S}$ is at $\rho = 0$, $\partial_\rho$ is normal to ${\mycal S}$ and ${\mathfrak{g}}(\partial_\rho,\partial_\rho) = 1$. We thus have \begin{align} {\mathfrak{g}} &= - (e^{2u(\tau,\xi)} + 2\rho\,M(\rho,\tau,\xi))\,d\tau^2 + 4\rho\,N(\rho,\tau,\xi)\,d\tau\,d\xi + (e^{2u(\tau,\xi)} + 2\rho\,P(\rho,\tau,\xi))\,d\xi^2\nonumber\\ &\qquad\qquad + d\rho^2 + 2\rho\,(Q(\rho,\tau,\xi)\,d\tau + S(\rho,\tau,\xi)\,d\xi)\,d\rho. \label{BackgroundMetric} \end{align} Here all functions depending on $\rho$ are smooth up to $\rho = 0$. \subsection{The governing equations} We assume henceforth that ${\mycal S}$ is maximal. It is easy to see that the second fundamental form of ${\mycal S}$ is \begin{equation} h = -M(\tau,\xi)\,d\tau^2 + 2N(\tau,\xi)\,d\tau\,d\xi + P(\tau,\xi)\,d\xi^2, \label{2ndForm} \end{equation} where, by a standard abuse of notation, \[ M(\tau,\xi) = M(0,\tau,\xi), N(\tau,\xi) = N(0,\tau,\xi) \text{ and } P(\tau,\xi) = P(0,\tau,\xi). \] Since ${\mycal S}$ is maximal, we thus have \begin{equation} H = {\rm tr}_{g} h = e^{-2u}\,M + e^{-2u}\,P = 0 \text{ along } {\mycal S}. \label{ZeroMCurv} \end{equation} We next derive the constraint equations on ${\mycal S}$.
Let ${\bar \nabla}$ and $\nabla$ denote the Levi-Civita connections of ${\mathfrak{g}}$ and $g$, respectively, and ${\bar R}$ denote the curvature tensor of ${\mathfrak{g}}$ on ${\mycal M}$, \[ {\bar R}(X,Y,Z,W) = {\mathfrak{g}}(({\bar \nabla}_X {\bar \nabla}_Y - {\bar \nabla}_Y {\bar \nabla}_X - {\bar \nabla}_{[X,Y]})Z,W). \] By the Codazzi equation and the Einstein vacuum equation \eqref{VacBackground}, \begin{align*} &0 = {\overline{\rm Ric}}(\partial_\xi,\partial_\rho) = -e^{-2u}{\bar R}(\partial_\tau, \partial_\xi,\partial_\rho, \partial_\tau) = -e^{-2u}[\nabla_\tau h(\partial_\xi,\partial_\tau) - \nabla_\xi h(\partial_\tau, \partial_\tau)],\\ &0 = {\overline{\rm Ric}}(\partial_\tau,\partial_\rho) = e^{-2u}\,{\bar R}(\partial_\xi,\partial_\tau,\partial_\rho, \partial_\xi) = e^{-2u}[\nabla_\xi h(\partial_\tau,\partial_\xi) - \nabla_\tau h(\partial_\xi, \partial_\xi)]. \end{align*} Rewriting using \eqref{ZeroMCurv}, we get \begin{align} 0 &= \nabla_\tau h(\partial_\xi,\partial_\tau) - \nabla_\xi h(\partial_\tau, \partial_\tau)\nonumber\\ &= \partial_\tau N + \partial_\xi M + h_{\tau\tau}\,{\bar \Gamma}_{\tau\xi}^\tau + h_{\tau\xi}({\bar \Gamma}_{\tau\xi}^\xi - {\bar \Gamma}_{\tau\tau}^\tau) - h_{\xi\xi}{\bar \Gamma}_{\tau\tau}^\xi\nonumber\\ &= \partial_\tau N + \partial_\xi M - M\,{\bar \Gamma}_{\tau\xi}^\tau + N({\bar \Gamma}_{\tau\xi}^\xi - {\bar \Gamma}_{\tau\tau}^\tau) - P{\bar \Gamma}_{\tau\tau}^\xi\nonumber\\ &= \partial_\tau N + \partial_\xi M ,\label{Codazzi1}\\ 0 &= \nabla_\xi h(\partial_\tau,\partial_\xi) - \nabla_\tau h(\partial_\xi, \partial_\xi)\nonumber\\ &= \partial_\xi N - \partial_\tau P + h_{\tau\tau}(-{\bar \Gamma}_{\xi\xi}^\tau) + h_{\tau\xi}({\bar \Gamma}_{\xi\tau}^\tau - {\bar \Gamma}_{\xi\xi}^\xi) + h_{\xi\xi}{\bar \Gamma}_{\xi\tau}^\xi\nonumber\\ &= \partial_\xi N - \partial_\tau P + M\,{\bar \Gamma}_{\xi\xi}^\tau + N({\bar \Gamma}_{\xi\tau}^\tau - {\bar \Gamma}_{\xi\xi}^\xi) + P\,{\bar \Gamma}_{\xi\tau}^\xi\nonumber\\ &= \partial_\xi N -
\partial_\tau P .\label{Codazzi2} \end{align} Next, by the Gauss equation and the Einstein vacuum equation \eqref{VacBackground}, \begin{align*} 0 &= -e^{-2u}{\overline{\rm Ric}}(\partial_\tau,\partial_\tau) + e^{-2u}{\overline{\rm Ric}}(\partial_\xi,\partial_\xi) -{\overline{\rm Ric}}(\partial_\rho,\partial_\rho)\\ &= -2\,e^{-4u}{\bar R}(\partial_\xi, \partial_\tau, \partial_\tau, \partial_\xi)\\ &= 2K + 2\,e^{-4u}[ h_{\xi\xi}\,h_{\tau\tau} - h_{\xi\tau}^2]\\ &= 2K - 2\,e^{-4u}( M\,P + N^2). \end{align*} Here $K$ is the Gaussian curvature of ${\mycal S}$, \begin{align*} K &= -e^{-2u}[-\partial_{\tau\tau}u + \partial_{\xi\xi}u]. \end{align*} We thus have, by \eqref{ZeroMCurv}, \begin{equation} -\partial_{\tau\tau}u + \partial_{\xi\xi}u = e^{-2u}(M^2 - N^2). \label{Gauss} \end{equation} To summarize, we have derived the following equations, which holds on ${\mycal S}$, \begin{align*} &M + P = 0 ,\\ &\partial_\tau N + \partial_\xi M= 0 ,\\ &\partial_\xi N + \partial_\tau M = 0 ,\\ &-\partial_{\tau\tau}u + \partial_{\xi\xi}u = e^{-2u}(M^2 - N^2) . \end{align*} \subsection{A blow up result} We now give the proof of Theorem \ref{Main2}. First, note that $U = e^{-u}\,\partial_\xi$, $V = e^{-u}\,\partial_\tau$ and $\nu = \pm\partial_\rho$. By hypothesis, \begin{align*} 0 < {\mathfrak{g}}({\bar \nabla}_{e^{-u}\,\partial_\xi} (e^{-u}\,\partial_\xi),\nu)^2 - {\mathfrak{g}}({\bar \nabla}_{e^{-u}\,\partial_\xi}(e^{-u}\,\partial_\tau),\nu)^2 = e^{-4u}\,(P^2 - N^2) \text{ along } {\mycal C}. \end{align*} Thus, \eqref{ZeroMCurv} implies that \[ M^2 - N^2 > 0 \text{ on } {\mycal C}. \] In particular, both $M - N$ and $M + N$ do not change sign on ${\mycal C}$. 
On the other hand, by \eqref{Codazzi1}, \eqref{Codazzi2} and \eqref{ZeroMCurv}, \[ \partial_\tau(M + N) + \partial_\xi(M + N) = -\partial_\tau(M - N) + \partial_\xi(M - N) = 0, \] which implies that \[ M(\tau,\xi) + N(\tau,\xi) = M(0,\xi - \tau) + N(0,\xi - \tau) \text{ and } M(\tau,\xi) - N(\tau,\xi) = M(0,\xi + \tau) - N(0,\xi + \tau). \] We conclude from the above discussion that both $M + N$ and $M - N$ do not change sign along ${\mycal S}$ and are periodic in $\tau$. As $M^2 - N^2 > 0$ on ${\mycal C}$, it follows that \[ M^2 - N^2 > a > 0 \text{ along } {\mycal S}. \] Recalling \eqref{Gauss}, we arrive at \begin{equation} -u_{,\tau\tau} + u_{,\xi\xi} \geq a\,e^{-2u}. \label{uBlowupEqn} \end{equation} We now follow a standard ODE technique to show that $u$ blows up in finite time (in either the future or the past or both). Let \[ w(\tau) = \frac{1}{\Xi}\int_0^\Xi u(\tau,\xi)\,d\xi. \] Reversing time orientation if necessary, we can assume that $w'(0) \leq 0$. We have \[ w'' = \frac{1}{\Xi}\int_0^\Xi u_{\tau\tau}\,d\xi = \frac{1}{\Xi}\int_0^\Xi (u_{\tau\tau} - u_{\xi\xi})\,d\xi \leq -\frac{a}{\Xi}\int_0^\Xi e^{-2u}\,d\xi \leq -a\,e^{-2w}, \] where the last step uses Jensen's inequality. This implies that $w'$ is a strictly decreasing function. As $w'(0) \leq 0$, we thus have that $w'(\tau) < 0$ for all $\tau > 0$. We hence get \[ \frac{d}{d\tau} (w')^2 \geq a\,\frac{d}{d\tau}e^{-2w}. \] In other words, $(w')^2 - a\,e^{-2w}$ is increasing. In particular, \[ (w'(\tau))^2 \geq a\,e^{-2w(\tau)} - c_1 \text{ for all $\tau \geq 0$ and some constant $c_1 > 0$.} \] Using the differential inequality $w'' \leq -a\,e^{-2w}$ and that $w'$ is strictly decreasing, we can find $\tau_0 > 0$ and $\delta > 0$ such that $w'(\tau) < -\delta$ for all $\tau > \tau_0$. This implies that for $\tau_1 > 0$ sufficiently large, \[ a\,e^{-2w(\tau)} > c_1 \text{ for } \tau \geq \tau_1.
\] As $w' < 0$, the last two displayed inequalities give \[ w'(\tau) \leq -\sqrt{a}\,e^{-w(\tau)} + c_2 < 0 \text{ for all $\tau \geq \tau_1$ and some constant $c_2 > 0$.} \] This implies that $\ln\frac{e^{-w}}{\sqrt{a}\,e^{-w} - c_2}$ is differentiable for $\tau \geq \tau_1$ and \[ \frac{d}{d\tau}\left(\ln\frac{e^{-w}}{\sqrt{a}\,e^{-w} - c_2}\right) \leq -c_2 \text{ for all $\tau \geq \tau_1$}. \] Therefore, with $c_3 = \ln\frac{e^{-w(\tau_1)}}{\sqrt{a}\,e^{-w(\tau_1)} - c_2}$, there holds \[ \ln\frac{e^{-w}}{\sqrt{a}\,e^{-w} - c_2} \leq c_3 - c_2(\tau - \tau_1) \text{ for } \tau \geq \tau_1. \] As the left hand side is bounded from below by $-\frac{1}{2}\ln a$, this results in a contradiction for large $\tau$.
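The finite-time blow-up forced by \eqref{uBlowupEqn} can also be seen on the comparison ODE itself. The sketch below is our own illustration, not part of the proof: with the arbitrary normalization $a = 1$ and data $w(0) = w'(0) = 0$, the equation $w'' = -a\,e^{-2w}$ has the exact solution $w(t) = \ln\cos t$ (so $w \to -\infty$ at the finite time $t = \pi/2$), and a simple integrator reproduces this.

```python
import math

# Comparison ODE from the proof: w'' = -a e^{-2w}.  With the illustrative
# data a = 1, w(0) = 0, w'(0) = 0 the exact solution is w(t) = ln(cos t),
# which blows up (w -> -infinity) at the finite time t = pi/2.
a, h = 1.0, 1e-6
w, wp, t = 0.0, 0.0, 0.0
while t < 1.4:                          # stop shortly before the blow-up time
    wp -= h * a * math.exp(-2.0 * w)    # semi-implicit Euler step for w'
    w += h * wp                         # then update w
    t += h

assert abs(w - math.log(math.cos(t))) < 1e-2   # tracks the exact solution
assert wp < -5.0                               # w' is already strongly negative
print(round(t, 3), round(w, 4), round(wp, 3))
```

The rapid decay of $w$ corresponds to the blow-up of the averaged conformal factor $e^{-2w}$, which is the mechanism behind the contradiction above.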
\section{Introduction} Structure models of quasicrystals are usually based on the assumption of either energetic or entropic stabilization of the material. Random tilings as possible structure models of the second kind were proposed \cite{Elser} soon after the discovery of quasicrystals and have been studied thoroughly since, see e.g.\ \cite{Henley91,Richard,Gier} and references therein. Nonetheless, it is still not clear which mechanism dominates, although experiments indicate a stochastic component in many cases \cite{JRB}. Scaling arguments \cite{Henley88,Henley91} predict a singular continuous contribution to the diffraction spectrum for two-dimensional random tilings in addition to the usual Bragg part and continuous background. This should be visible in diffraction images of materials with so-called T-phases, see \cite{Baake} and references therein, though it is not obvious how to distinguish the different contributions. This underlines the necessity of investigating the diffraction of random tilings in more detail. In this article, we illustrate recently established results \cite{BH} by means of the so-called dart-rhombus tiling. This two-dimensional model has crystallographic symmetries and can be mapped onto the dimer model on the Fisher lattice. After introducing the tiling and the necessary mathematical tools, we calculate the two-point correlation functions and from them the diffraction spectrum. As in other crystallographic examples, the spectrum can be shown to consist only of a Bragg part and an absolutely continuous background, i.e.\ there is no singular continuous component. \section{The dart-rhombus random tiling} The dart-rhombus tiling is a filling of the plane, without gaps and overlaps, with $60^{\circ}$-rhombi of side $1$ and darts made of two rhombus halves (Fig.\ \ref{tiling}). \begin{figure}[ht] \centerline{\epsfxsize=6cm \epsfbox{figs/tiling.eps}} \caption{The dart-rhombus tiling as dimer model on the Fisher lattice.
The dots represent the atomic scatterers.} \label{tiling} \end{figure} In addition to the usual face-to-face condition, we impose an alternation condition on the rhombi, such that neighbouring rhombi of equal orientation are excluded. Finally, to avoid pathological lines of alternating darts, we demand that two neighbouring darts must not share a short edge. These rules force the darts to form closed loops in a background of alternating rhombi. The minimal total rhombus density obviously is $1/3$. This tiling can be mapped onto the fully packed dimer model on an Archimedean tiling known as Fisher's lattice in the context of statistical mechanics. In order to control the densities of the different prototiles, we weigh them using activities $y_i,z_i$ (Fig.\ \ref{elemFisher}), with $z_i=e^{\beta\mu_i^{}}$ etc., where $\mu_i^{}$ are chemical potentials and $\beta$ the inverse temperature. \begin{figure}[ht] \centerline{ \begin{minipage}[t]{3cm} \vspace{-3cm}\epsfxsize=3cm \epsfbox{figs/elemFisher.eps} \end{minipage} \epsfxsize=5cm \epsfbox{figs/dartpatch.eps}} \caption{Elementary cell of the Fisher lattice with activities assigned to each bond and a typical random tiling with $\rho_1=0.21$, $\rho_2=0.19$, $\rho_3=0.17$.} \label{elemFisher} \end{figure} The grand-canonical partition function is therefore equal to the dimer generating function. For any periodic graph with an even number of sites in the elementary cell, the latter can be computed as the Pfaffian\footnote{This is basically the square root of the determinant of an even antisymmetric matrix \cite[Ch.\ IV.2]{MW}.} of the suitably activity-weighted adjacency matrix $\bs A$ \cite{Kasteleyn}. The calculation of the Pfaffian is simplified considerably by imposing periodic boundary conditions, but in the infinite volume limit the result holds also for free ones \cite[Lemma 1]{BH}. Let us denote the rhombus densities by $\rho_i^{}$ ($i=1,2,3$) and the dart densities by $\sigma_j^{}$ ($j=1,\dots,6$).
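As a minimal sanity check of the Pfaffian method (our own illustration on the smallest nontrivial graph, not on the Fisher lattice itself): for the $2\times 2$ square grid with a Kasteleyn orientation, the Pfaffian of the signed adjacency matrix indeed counts the dimer configurations.

```python
import numpy as np
from itertools import combinations

# Smallest nontrivial check of Kasteleyn's method: the 2x2 square grid
# (vertices 0,1 on top and 2,3 below; edges 01, 23, 02, 13) has exactly
# two dimer configurations, {01,23} and {02,13}.  The orientation
# 0->1, 1->3, 2->3, 2->0 gives the single inner face an odd number of
# clockwise edges, as Kasteleyn's rule requires.
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 3), (2, 3), (2, 0)]:
    A[i, j], A[j, i] = 1.0, -1.0

# For a 4x4 antisymmetric matrix: Pf(A) = a01 a23 - a02 a13 + a03 a12.
pf = A[0, 1] * A[2, 3] - A[0, 2] * A[1, 3] + A[0, 3] * A[1, 2]

# Brute-force count of perfect matchings for comparison.
edges = [(0, 1), (2, 3), (0, 2), (1, 3)]
count = sum(1 for e, f in combinations(edges, 2) if len({*e, *f}) == 4)

assert abs(pf) == count == 2
assert abs(np.linalg.det(A) - pf ** 2) < 1e-12   # det(A) = Pf(A)^2
print(pf, count)
```

The last assertion illustrates the footnoted fact that the Pfaffian is a square root of the determinant; on the Fisher lattice the same construction runs with activity-weighted entries.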
There are several constraints on the densities. Closed dart loops require equal densities of opposite darts: \begin{equation} \label{constsigma} \sigma_1^{}=\sigma_5^{}, \quad \sigma_2^{}=\sigma_6^{}, \quad \sigma_3^{}=\sigma_4^{}. \end{equation} Moreover, as each dart is accompanied by a corresponding rhombus, the remaining rhombi occur with equal frequency owing to the alternation condition, \begin{equation} \label{constrho} \rho_1^{}-\sigma_1^{}=\rho_2^{}-\sigma_2^{}=\rho_3^{}-\sigma_3^{}. \end{equation} Including the normalization constraint (the sum of the densities is $1$), the number of independent parameters (activities or densities) reduces to three. We exploit this freedom by setting all activities except $z_1,z_2$ and $z_3$ equal to $1$. The dart-rhombus tiling undergoes second order phase transitions at \begin{equation} \begin{aligned} 1+z_1^2+z_2^2+z_3^2&=2\max\{1,z_1^2,z_2^2,z_3^2\}\;\text{or}\\ z_1^2+z_2^2+z_3^2&=2 \max \{z_1^2,z_2^2,z_3^2\}, \end{aligned} \end{equation} with logarithmic (Onsager type) or square root divergence (Kasteleyn type), respectively. The point of maximum entropy is fixed by symmetry to $\rho_i^{}=\frac{1}{6}$, $\sigma_j^{}=\frac{1}{12}$, where darts and rhombi occupy half of the tiling area each. For further details see \cite{Richard,Hoeffe}. \section{Diffraction theory} For simplicity, we assume kinematic diffraction in the Fraunhofer picture \cite{Cowley}, i.e.\ diffraction at infinity from single scattering. The diffracted intensity $\widehat{\gamma}_{\omega}$ (a positive measure) is calculated as the Fourier transform of the autocorrelation $\gamma_{\omega}$ (see \cite{BH} for details). It is known that every positive measure admits a unique decomposition into three parts $\mu=\mu_{pp}^{}+\mu_{sc}^{} +\mu_{ac}^{}$ with respect to Lebesgue's measure, where $pp$, $sc$ and $ac$ stand for pure point, singular continuous and absolutely continuous \cite{RS}.
In a diffraction spectrum, $\mu_{pp}^{}$ are the Bragg peaks and $\mu_{ac}^{}$ the usual diffuse background or Laue scattering. A singular continuous part can be encountered in 1D substitutional sequences, cf.\ \cite{Enter}, and is also expected for 2D quasicrystalline random tilings \cite{Henley88,Henley91,BH}. Consider the so-called weighted Dirac comb \cite{Cordoba} \begin{equation} \omega=\sum_{x\in\tilde{\Gamma}} w(x)\delta_x \end{equation} on a lattice $\tilde{\Gamma}$, where $\delta_x$ is the unit point measure (Dirac measure) concentrated at $x$, and $w(x)\in\{0,1\}$ is chosen in order to obtain a specific member of the random tiling ensemble. Its autocorrelation $\gamma_{\omega}$ is (almost surely) \begin{equation} \label{auto} \gamma_{\omega}=\sum_{z\in\Delta} \nu(z) \delta_z. \end{equation} Here, $\Delta=\tilde{\Gamma}-\tilde{\Gamma}=\tilde{\Gamma}$ is the set of difference vectors and the autocorrelation coefficient $\nu(z)$ can be calculated according to \begin{equation} \nu(z) \; = \; \lim_{R\to\infty} \frac{1}{{\rm vol}(B_R)} \sum_{\stackrel{\scriptstyle y \in \tilde{\Gamma}_R} {\scriptstyle y+z \in \tilde{\Gamma}}} \overline{w(y)} \, w(y+z) \, , \end{equation} where $B_R$ is the ball of radius $R$ around the origin, $\tilde{\Gamma}_R=\tilde{\Gamma}\cap B_R$ and \raisebox{1ex}{$\overline{\hphantom{x}}$} denotes complex conjugation. Thus, $\nu(z)$ is simply the probability of having two scatterers at distance $z$, which a.s.\ exists. We use standard Fourier theory of tempered distributions; for our conventions see \cite{BH}. \section{Diffraction of the dart-rhombus tiling} We decorate the tiling with point scatterers $\delta_x$ according to Fig.\ \ref{tiling}. More realistic atomic profiles can be handled with the convolution theorem \cite[Ch.\ IX]{RS}. The scatterers may have complex strengths $h_{\rho_i}$, resp.\ $h_{\sigma_j}$, where the strengths of opposite darts are supposed to be equal. This constraint simplifies the calculations but is not necessary.
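To make the autocorrelation coefficient concrete, the following sketch (our own toy example: an i.i.d.\ Bernoulli occupation of $\mathbb{Z}$ with density $p$, not the dart-rhombus ensemble) estimates $\nu(z)$ empirically; one finds $\nu(0) \approx p$ and $\nu(z) \approx p^2$ for $z \neq 0$, i.e.\ exactly the probability of finding two scatterers at distance $z$.

```python
import numpy as np

# Toy ensemble for the autocorrelation coefficient: occupy each site of
# {0,...,N-1} independently with probability p (a "lattice gas" on Z).
rng = np.random.default_rng(1)
p, N = 0.3, 200_000
w = (rng.random(N) < p).astype(float)

def nu(z):
    # empirical frequency of pairs of scatterers at distance z
    if z == 0:
        return float(w.mean())
    return float(np.mean(w[:-z] * w[z:]))

assert abs(nu(0) - p) < 0.01          # nu(0) = density p
for z in (1, 2, 5):
    assert abs(nu(z) - p * p) < 0.01  # nu(z) = p^2 for z != 0
print(nu(0), nu(1))
```

For such an uncorrelated comb the Fourier transform of $\nu$ splits into the constant $p^2$ (Bragg part) plus the decaying remainder (absolutely continuous part), the same splitting used for the dart-rhombus tiling below.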
The point set of all possible atomic positions is a Kagom\'e grid with minimal vertex distance $1/2$. We write it as a triangular lattice $\Gamma$ with a rhombic elementary cell $E$ containing the nine scatterer positions for the different tiles (cf.\ Fig.\ \ref{elemFisher}). Introducing basis vectors $e_1=(\sqrt{3},0)^t$, $e_2=1/2(\sqrt{3},3)^t$ for $\Gamma$ and $E$, the positions $p$ in $E$ (with corresponding density) are given by ($a=1/4(\sqrt{3},1)^t$, $b=1/4(\sqrt{3},-1)^t$) \begin{align} \label{pos} p_{\rho_1^{}}=&\,3a,& p_{\rho_2^{}}=&\,2a-b,& p_{\rho_3^{}}=&\,a+b, \nonumber\\ p_{\sigma_1^{}}=&\,a,& p_{\sigma_2^{}}=&\,2a+b,& p_{\sigma_3^{}}=&\,3a-b, \\ \quad p_{\sigma_4^{}}=&\,3a+b,& p_{\sigma_5^{}}=&\,5a,& p_{\sigma_6^{}}=&\,4a-b. \nonumber \end{align} We now have to calculate the autocorrelation or the joint occupation probability of the dimers. As we will see, the autocorrelation coefficients can be split into a constant term and one decreasing with the distance between the scatterers. After taking the Fourier transform, the first will yield the Bragg peaks whereas the second will be responsible for the continuous part of the spectrum; we will show that this can be represented by a continuous function and hence contains no singular contribution. Using Gibbs' weak phase rule \cite{Ruelle,BH}, one can prove the ergodicity of the model and thus justify the calculation of the diffraction spectrum via the ensemble average. Let $\eta^{}_{\bs{kk}'}$ be the occupation variable that takes the value $1$ if the bond between $\bs{k}$ and $\bs{k}'$ is occupied and $0$ otherwise; $\bar{\eta}_{\bs{kk}'}^{}:=1-\eta^{}_{\bs{kk}'}$.
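Before proceeding, the position list \eqref{pos} can be verified numerically (our own sketch, independent of the dimer calculation): written as integer combinations $m\,a + n\,b$, the nine scatterer sites have minimal mutual distance $1/2$, consistent with the Kagom\'e grid described above.

```python
import numpy as np
from itertools import combinations

# Scatterer positions (pos) in the elementary cell, written as m*a + n*b.
a = np.array([np.sqrt(3.0), 1.0]) / 4.0
b = np.array([np.sqrt(3.0), -1.0]) / 4.0
coeffs = {  # (m, n) coefficients read off from (pos)
    "rho1": (3, 0), "rho2": (2, -1), "rho3": (1, 1),
    "sig1": (1, 0), "sig2": (2, 1),  "sig3": (3, -1),
    "sig4": (3, 1), "sig5": (5, 0),  "sig6": (4, -1),
}
pts = {k: m * a + n * b for k, (m, n) in coeffs.items()}

# The nine positions have minimal mutual distance 1/2, as stated for the
# Kagome grid of possible atomic positions.
dmin = min(np.linalg.norm(p - q) for p, q in combinations(pts.values(), 2))
assert abs(dmin - 0.5) < 1e-12
print(round(dmin, 6))
```

This follows from $|a|=|b|=1/2$ and $\lr{a}{b}=1/8$, so that $|m\,a+n\,b|^2 = (m^2+n^2+mn)/4 \geq 1/4$ for integer $(m,n)\neq(0,0)$.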
As was shown in \cite{BH}, the probability $P_{\alpha\beta}$ of bonds $\alpha$ and $\beta$ being occupied simultaneously is given by \begin{align} \label{auto2d} P_{\alpha\beta} =& \langle \eta^{}_{\bs{k}_{\alpha}^{}\bs{k}'_{\alpha}} \eta^{}_{\bs{k}_{\beta}^{}\bs{k}'_{\beta}}\rangle \nonumber\\ =& \langle\eta^{}_{\bs{k}_{\alpha}^{}\bs{k}'_{\alpha}}\rangle\langle \eta^{}_{\bs{k}_{\beta}^{}\bs{k}'_{\beta}} \rangle \nonumber\\ &+\langle \bar{\eta}_{\bs{k}_{\alpha}^{}\bs{k}'_{\alpha}} \bar{\eta}_{\bs{k}_{\beta}^{}\bs{k}'_{\beta}}\rangle-\langle \bar{\eta}_{\bs{k}_{\alpha}^{}\bs{k}'_{\alpha}}\rangle\langle \bar{\eta}_{\bs{k}_{\beta}^{}\bs{k}'_{\beta}} \rangle \\ =& \, \rho_{\alpha}^{}\, \rho_{\beta}^{} \nonumber\\ &-A_{\bs{k}_{\alpha}^{}\bs{k}'_{\alpha}}A_{\bs{k}_{\beta}^{} \bs{k}'_{\beta}} (A^{-1}_{\bs{k}_{\alpha}^{}\bs{k}_{\beta}^{}} A^{-1}_{\bs{k}'_{\alpha} \bs{k}'_{\beta}} -A^{-1}_{\bs{k}_{\alpha}^{}\bs{k}'_{\beta}} A^{-1}_{\bs{k}'_{\alpha}\bs{k}_{\beta}^{}}), \nonumber \end{align} with $\rho_{\alpha}^{}$ the density of dimers that can occupy the bond $(\bs{k}_{\alpha}^{} \bs{k}'_{\alpha})$ and $\bs A$ the weighted adjacency matrix. We consider the constant part of the autocorrelation first. Combining (\ref{auto}) and (\ref{auto2d}) we get for the point set of scatterers with density $2/\sqrt{3}$ \begin{equation} (\gamma_{\omega}^{})_{const}^{}= \omega_{\Gamma}^{}*\bigg(\frac{2}{\sqrt{3}}\sum_{\tau,\tilde{\tau}\in \{\rho_i^{},\sigma_j^{}\}}\!\!\! (h_{\tau} h_{\tilde{\tau}} \tau\tilde{\tau})\delta_{p_{\tau}-p_{\tilde{\tau}}}\bigg), \nonumber \end{equation} where $*$ denotes convolution. 
Computing the Fourier transform with Poisson's summation formula \cite[Eq.\ 12]{BH} using (\ref{pos}) and (\ref{constsigma}), the pure point part of the spectrum is (almost surely) \begin{align} (\widehat{\gamma})_{pp}=&\frac{4}{3}\sum_{(k,l)\in\Gamma^*} \bigg|h_{\rho_1} \rho_1^{}+(-1)^k h_{\rho_2}\rho_2^{}+(-1)^{l} h_{\rho_3}\rho_3^{} \nonumber\\ &+2\cos{\textstyle \frac{\pi(k+l)}{3}}\Bigl((-1)^{k+l} h_{\sigma_1}\sigma_1^{} \\ &+(-1)^{l} h_{\sigma_2}\sigma_2^{}+(-1)^k h_{\sigma_3}\sigma_3^{} \Bigr)\bigg|^2 \delta_{(k,l)}^{}, \nonumber \end{align} where $\Gamma^*$ is spanned by $e_1^*=\big(\frac{1}{\sqrt{3}},-\frac{1}{3} \big)$, $e_2^*=\big(0,\frac{2}{3}\big)$. It remains to calculate the other part of (\ref{auto2d}). $A_{\bs{k}_{\alpha}^{} \bs{k}'_{\alpha}}$ is nonvanishing only if $\bs{k}_{\alpha}^{}$ and $\bs{k}'_{\alpha}$ are connected; in this case $A_{\bs{k}_{\alpha}^{}\bs{k}'_{\alpha}}=\epsilon z_{\alpha}$, with $\epsilon=\pm 1$ according to the direction of the arrows in Fig.\ \ref{elemFisher}. Since $\bs{A}$ is the adjacency matrix of a graph that is an $(m,n)$-periodic array of elementary cells with toroidal boundary conditions and therefore cyclic, it can be reduced to the diagonal form $\bs{\Lambda}=\text{diag}\{\lambda_{\bs j}\}$ by a Fourier-type similarity transformation with matrix $S_{\bs{kk}'}= (mn)^{-1/2}\exp(2\pi i (k_1^{} k'_1/m +k_2^{} k'_2/n))$. $\bs{A}^{-1}$ is then determined by \cite{FS} \begin{equation} \label{diag} A^{-1}_{\bs{kk}'} = \left(\bs{S\Lambda}^{-1}\bs{S}^{-1}\right)_{\bs{kk}'} = \sum_{\bs{j}=(1,1)}^{(m,n)} S_{\bs{kj}}^{} \lambda_{\bs{j}}^{-1}S^{\dagger}_{\bs{k}'\bs{j}} \, . \end{equation} In the infinite volume limit, the sums approach integrals (Weyl's Lemma), and by introducing ${\bs r}=\bs{k}'-\bs{k}$, $\varphi_1^{}=2\pi j_1^{}/m$ etc.
we obtain \begin{equation} \label{inverse} A^{-1}_{\bs{kk}'} = \frac{1}{4 \pi^2}\int\limits_0^{2 \pi} \int\limits_0^{2 \pi}\lambda^{-1}(\varphi_1^{},\varphi_2^{}) e^{i\bs{\varphi}\cdot{\bs r}} d\varphi_1^{} d\varphi_2^{}. \end{equation} To determine $\lambda^{-1}$, observe that the inverse of \begin{equation} \lambda= \begin{pmatrix} 0 & z_1 & z_3 & 0 & -e^{-i\varphi_1^{}} & 0 \\ -z_1 & 0 & z_2 & 0 & 0 & -e^{-i\varphi_2^{}} \\ -z_3 & -z_2 & 0 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 & z_3 & z_2 \\ e^{i\varphi_1^{}} & 0 & 0 & -z_3 & 0 & z_1 \\ 0 & e^{i\varphi_2^{}} & 0 & -z_2 & -z_1 & 0 \end{pmatrix} \nonumber \end{equation} can be computed easily by any computer-algebra package but unfortunately does not fit onto this page. Defining the coupling function $[x,y]_{p_1 p_2}$ for two dimers in elementary cells at distance $\bs{r} = x e_1 + y e_2$, with $(x,y)\in \mathbb Z^2$, where the dimers occupy positions $p_1$, resp.\ $p_2$ in each elementary cell, we rewrite (\ref{inverse}) in more explicit form \begin{equation} [x,y]_{p_1 p_2}=\frac{1}{4 \pi^2}\int\limits_0^{2\pi}\int\limits_0^{2\pi} \frac{g(p_1,p_2, \varphi_1^{},\varphi_2^{})e^{i\bs \varphi\cdot\bs r}}{\det\left( \lambda(\varphi_1^{},\varphi_2^{})\right)}d\varphi_1^{} d\varphi_2^{}, \nonumber \end{equation} where the determinant of $\lambda$ is given by \begin{equation} \det(\lambda)=a+2b \cos\varphi_1^{} +2c\cos\varphi_2^{}+2d\cos(\varphi_1^{}-\varphi_2^{}), \nonumber \end{equation} with $a=z_1^4+z_2^4+z_3^4+1$, $b=z_1^2 z_2^2 -z_3^2$, $c=z_1^2 z_3^2-z_2^2$ and $d=z_2^2 z_3^2-z_1^2$; $g$ can be taken from the corresponding entry in $\lambda^{-1}$ and is finite. In order to determine the spectral type of the diffraction, we are interested in the asymptotic behaviour of $[x,y]_{p_1 p_2}$ for large $\bs r$ . 
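The closed form of $\det(\lambda)$ can be checked numerically against the $6\times 6$ matrix above (a sketch; the sample values of the $z_i$ and of the angles are arbitrary):

```python
import numpy as np

def lam(z1, z2, z3, p1, p2):
    # the 6x6 matrix lambda(phi1, phi2) from the text
    return np.array([
        [0.0,  z1,   z3,   0.0, -np.exp(-1j*p1), 0.0],
        [-z1,  0.0,  z2,   0.0,  0.0, -np.exp(-1j*p2)],
        [-z3, -z2,   0.0,  1.0,  0.0,  0.0],
        [0.0,  0.0, -1.0,  0.0,  z3,   z2],
        [np.exp(1j*p1), 0.0, 0.0, -z3, 0.0, z1],
        [0.0, np.exp(1j*p2), 0.0, -z2, -z1, 0.0],
    ])

def det_closed_form(z1, z2, z3, p1, p2):
    # det(lambda) = a + 2b cos(phi1) + 2c cos(phi2) + 2d cos(phi1 - phi2)
    a = z1**4 + z2**4 + z3**4 + 1
    b = z1**2 * z2**2 - z3**2
    c = z1**2 * z3**2 - z2**2
    d = z2**2 * z3**2 - z1**2
    return a + 2*b*np.cos(p1) + 2*c*np.cos(p2) + 2*d*np.cos(p1 - p2)

z1, z2, z3, p1, p2 = 0.6, 1.1, 0.8, 0.7, 1.9
det_num = np.linalg.det(lam(z1, z2, z3, p1, p2))
print(det_num.real, det_closed_form(z1, z2, z3, p1, p2))  # the two agree
```

Since $\lambda$ is skew-Hermitian, the determinant is real, as the cosine formula requires.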
Substituting $v=e^{-i\varphi_1^{}}$ and $w=e^{-i\varphi_2^{}}$, we obtain that $I=4 \pi^2 [x,y]_{p_1 p_2}$ is \begin{equation} \int\limits_{S^1\times S^1} \frac{-g(p_1,p_2,v,w) v^{-x}w^{-y}\;dv dw} {v^2(bw+d)+v(aw+c(w^2+1))+w(b+dw)},\nonumber \end{equation} with $S^1$ the unit circle. \begin{figure}[ht] \centerline{\epsfxsize=6cm \epsfbox{figs/dartfou.eps}} \caption{Diffraction image of the tiling of Fig.\ \ref{elemFisher} with scatterers of equal strength $1$. It was calculated numerically by means of standard FFT, because this is simpler than using the exact expression for the $ac$ part.} \label{diffr} \end{figure} We integrate over $v$ for $x < 0$; the case of positive $x$ can be treated analogously. The integrand is singular at \begin{equation} v_{\pm}=\frac{-\alpha\pm\sqrt{\alpha^2-4\beta}} {2(bw+d)} \end{equation} with $\alpha=(aw+c(w^2+1))$ and $\beta=w(bw+d)(b+dw)$, but only $v_+$ lies inside the unit circle. Thus, \begin{equation} \label{int} I = 2\pi i\int\limits_{S^1}\frac{g(p_1,p_2,v_+,w) v_+^{-x}w^{-y}} {(v_+-v_-)(bw+d)}dw. \end{equation} Away from the phase transitions, it can be shown that $|v_+|<1$. With $\tilde{v}_+=\max_{\varphi_2} v_+$ we get \begin{align} |I|&\leq 2\pi\int_{S^1} \frac{|g(p_1,p_2,v_+,w)||v_+|^{-x}} {|(v_+-v_-)(bw+d)|}dw \nonumber \\ &\leq 2 \pi |\tilde{v}_+|^{-x} \int_{S^1}\frac{|g(p_1,p_2,v_+,w)|} {|(v_+-v_-)(bw+d)|}dw \nonumber \\ &=\mathcal O\left(e^{-t_1 |x|}\right), \end{align} for some positive constant $t_1$, because the remaining integral stays finite. Since the coupling function is invariant under interchange of $x$ and $y$, this can be shown for $y$ as well. As $I$ is maximal for $x=0$ for arbitrary but fixed $y$ and vice versa, we conclude that \begin{equation} [x,y]_{p_1 p_2}=\mathcal O\left(e^{-(t_1|x|+t_2|y|)}\right). \end{equation} The non-constant part of the correlation function in (\ref{auto2d}) consists basically of products of $[x,y]$. 
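The pole structure used in the residue computation can also be checked directly: since $b$, $d$ are real and $|w|=1$, one has $|bw+d| = |b+dw|$, so the product of the two roots has modulus one and (away from criticality) exactly one of them lies inside the unit circle. A numerical sketch with arbitrary sample values:

```python
import numpy as np

z1, z2, z3 = 0.5, 0.7, 0.9
a = z1**4 + z2**4 + z3**4 + 1
b = z1**2 * z2**2 - z3**2
c = z1**2 * z3**2 - z2**2
d = z2**2 * z3**2 - z1**2

w = np.exp(1j * 0.8)  # a point on the unit circle
# roots v_+- of  v^2 (bw + d) + v (aw + c(w^2 + 1)) + w (b + dw) = 0
roots = np.roots([b*w + d, a*w + c*(w**2 + 1), w*(b + d*w)])
moduli = sorted(abs(r) for r in roots)
print(moduli[0] * moduli[1])  # ~ 1: the poles have reciprocal moduli
```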
With such an asymptotic behaviour, one can show that its Fourier transform indeed converges towards a continuous function on $\mathbb R^2/\Gamma$ (cf \cite[Addendum]{BH}). At the Kasteleyn phase transitions, we get crystals consisting only of one rhombus orientation and the corresponding darts (cf \cite{Richard}). The spectrum thus displays Bragg peaks only. For the Onsager case, we re-substitute $w=e^{-i\varphi_2}$ in (\ref{int}). Because of the symmetry of the kernel, it is sufficient to integrate from $0$ to $\pi$. The kernel reaches its maximum value 1 only at $\varphi_2=0$ or $\pi$ and remains smaller elsewhere. Using the same argument as in the treatment of the lozenge tiling in \cite{BH}, we estimate the kernel by a decreasing/increasing straight line. From the resulting asymptotic behaviour we conclude that the diffuse part of the spectrum is an absolutely continuous measure as well. \section{Acknowledgement} I am grateful to Michael Baake for valuable discussions and to Robert V.\ Moody and the Dept.\ of Mathematics, University of Alberta, for hospitality, where part of this work was done. \begin{footnotesize} \begin{frenchspacing}
\section{Introduction} \input{acl2023.bbl} \bibliographystyle{acl_natbib} \clearpage \section{Appendix} \begin{table*}[htb] \newcolumntype{?}{!{\vrule width 1pt}} \newcolumntype{C}{>{\centering\arraybackslash}p{40em}} \centering \renewcommand\arraystretch{1.0} \small \scalebox{0.95}{ \begin{tabular}{@{}l@{}} \toprule Let’s add a sentence as the first sentence of the context to let the hypothesis more likely to hold true and explain why. \\ Context: \textcolor{blue}{ Tara always wanted jewelry. Her birthday was coming up. Test went to the store. He gave her a really nice necklace}\\ \textcolor{blue}{She adored him for the gift. }\\ Hypothesis: \textcolor{orange}{Test was being a good friend \textbf{starts before} he give her a really nice necklace}\\ Add what sentence as the first sentence of the context and why is the hypothesis more likely to hold true? \\ \textcolor{teal}{Test and Tara always hanged out together.} \\ \textcolor{teal}{This makes the statement true because normally people will only hang out frequently with their friends and friends will} \\ \textcolor{teal}{send each other gifts on their birthdays. } \\ \#\#\# \\ Context: \textcolor{blue}{ Tara always wanted jewelry. Her birthday was coming up. Test went to the store. He gave her a really nice necklace}\\ \textcolor{blue}{She adored him for the gift. }\\ Hypothesis: \textcolor{orange}{Test was being a good friend \textbf{starts after} he give her a really nice necklace}\\ Add what sentence as the first sentence of the context and why is the hypothesis more likely to hold true? \\ \textcolor{teal}{Test had always had the biggest crush on his classmate Tara even though she didn't talk to him much.} \\ \textcolor{teal}{This makes the statement true because it indicates that Test and Tara's relationship wasn't close prior to Test giving Tara the gift.} \\ \#\#\# \\ Context: \textcolor{blue}{Tim's tooth was hurting like crazy. He could barely eat or drink. His dentist took a look around in his mouth. 
One of }\\ \textcolor{blue}{his teeth was rotten. Once the tooth was pulled, Tim felt fine.} \\ Hypothesis: \textcolor{orange}{Tim scheduled an appointment with his dentist starts after his tooth was hurting like crazy} \\ Add what sentence as the first sentence of the context and why is the hypothesis more likely to hold true? \\\bottomrule \end{tabular} } \caption{ \label{tb:prompt} A sample prompt with an instance for two hypothetical changes to make the event pair's temporal relation "more before" or "more after". } \end{table*} \begin{table*}[htb] \newcolumntype{?}{!{\vrule width 1pt}} \newcolumntype{C}{>{\centering\arraybackslash}p{40em}} \centering \renewcommand\arraystretch{1.0} \small \scalebox{0.95}{ \begin{tabular}{@{}l@{}} \toprule Let's find out an event that is unmentioned but can be inferred from the context and the temporal relation between the two events are \\ not deterministic. The new event should not be longer than ten words and include only one verb. \\ Context: \textcolor{blue}{ Tara always wanted jewelry. Her birthday was coming up. Test went to the store. He gave her a really nice necklace}\\ \textcolor{blue}{She adored him for the gift. }\\ What is an event that is unmentioned but has some role and can be inferred from the context? \\ \textcolor{teal}{Test was being a good friend} \\ \textcolor{teal}{It can be inferred from She adored him for the gift.} \\ \#\#\# \\ Context: \textcolor{blue}{Tim's tooth was hurting like crazy. He could barely eat or drink. His dentist took a look around in his mouth. One of }\\ \textcolor{blue}{his teeth was rotten. Once the tooth was pulled, Tim felt fine.}\\ What is an event that is unmentioned but has some role and can be inferred from the context? \\ \textcolor{teal}{Tim scheduled an appointment with his dentist} \\ \textcolor{teal}{It can be inferred from Tim's tooth was hurting like crazy.} \\ \#\#\# \\ Context: \textcolor{blue}{Lily went to a nice restaurant. She ordered a steak. 
To her dismay the steak was rare. Lily was rather upset. She had }\\ \textcolor{blue}{to send it back.}\\ What is an event that is unmentioned but has some role and can be inferred from the context? \\\bottomrule \end{tabular} } \caption{ \label{tb:prompt1} A sample prompt to generate an implicit event given the context. } \end{table*} \begin{figure*}[h] \centering \scalebox{0.8}{ \includegraphics[width=1\textwidth]{mturk.png}} \caption{\label{fig:mturk}The interface for differential explanation annotation.} \end{figure*} \begin{figure*}[h] \centering \scalebox{0.9}{ \includegraphics[width=1\textwidth]{test.png}} \caption{\label{fig:test}The interface for the qualification test of differential explanation annotation.} \end{figure*} \section{Conclusion} We introduce a novel differential analysis framework and a dataset named \mbox{\textsc{Today}}{} that aims to interpret and evaluate if a temporal model can make correct predictions instead of using spurious information. We demonstrate that existing temporal models fall short in the performance on \mbox{\textsc{Today}}{}. We further show that training on a temporal relation benchmark together with \mbox{\textsc{Today}}{} leads to a more generic temporal reasoning model, resulting in improved performance on \mbox{\textsc{Tracie}}, \mbox{\textsc{Matres}}, and \mbox{\textsc{Today}}{}. Finally, we follow \mbox{\textsc{Today}}{}'s formulation and distill GPT-3 to construct useful incidental supervision for the model by creating a training pipeline that combines GPT-3 with weak explanation verifiers to solicit a large set of cheap and automatic explanations. Despite these advances, the gap in performance on \mbox{\textsc{Today}}{} between using additional sentences only versus including human-annotated gold explanation sentences indicates that \mbox{\textsc{Today}}{} continues to be a challenging task for future work towards generic temporal reasoning. 
\section{Dataset} \label{sec:dataset} In this section, we introduce the evaluation framework and the collection process of \mbox{\textsc{Today}}{}. \subsection{Task overview} The \mbox{\textsc{Today}}{} dataset and its overall framework are designed to evaluate systems' ability to make temporal predictions with plausible reasons. Existing datasets, including \mbox{\textsc{Matres}}, \textsc{Torque}, and \mbox{\textsc{Tracie}}, annotate only common event pairs that align with human common sense. In other words, if an event pair does not strongly imply a temporal relation (e.g., over 80\% confidence), it will not be annotated and tested on systems. This allows pre-trained language models with millions of parameters to exploit annotation artifacts and certain priors that do not necessarily hold in specific contexts. For example, we know ``lunch'' is usually before ``dinner'', but this also depends on whether they are performed by the same subject, at the same location, and/or on the same day. Unfortunately, current models often memorize such relations as immutable facts, leading to prediction errors in instances that are less common in real life. This intuition inspires us to build a framework to evaluate how much spurious information and how many priors models are using. \vpara{Temporal Explanations} An ideal way to evaluate whether models are doing the right thing when making predictions is to let them explain why a certain prediction is made, and to evaluate the faithfulness and plausibility of the explanations. However, such an evaluation framework is almost impossible to achieve with current progress in natural language processing, for two main reasons: 1) it is extremely difficult to collect gold explanations that are sufficient to cover all possible sets of explanations, and 2) it is impossible to automatically evaluate system generations with existing summarization metrics.
\vpara{Temporal Differential Analysis} Because of the aforementioned challenges in directly evaluating system explanations, we propose an alternative that is a close proxy to the ideal form, namely temporal differential analysis. The core of temporal differential analysis is to check if models can correctly identify how a subtle change to the context may affect the temporal relations of a given event pair. The intuition behind this choice is two-fold: 1) it is much easier for both annotators and models to produce an explanation if they know which dimension to focus on; 2) this provides a binary evaluation that is deterministic and trustworthy in terms of reflecting how much spurious information models are using. Specifically, our differential analysis process is defined below. Given an original context $\mathcal{C}$, event 1 $\mathcal{E}_1$ and event 2 $\mathcal{E}_2$, we assume a gold distribution $\mathbb{D}=\{P_{before}, P_{after}, P_{same}\}$ on the temporal relation between $\mathcal{E}_1$ and $\mathcal{E}_2$ with respect to $\mathcal{C}$, where $P_{before}, P_{after}, P_{same}$ are the probabilities of the temporal relation being before, after and simultaneous, and they sum to 1. We then annotate two additional sentences $\mathcal{AS}_{before}$ and $\mathcal{AS}_{after}$, where the temporal relation distribution between $\mathcal{E}_1$ and $\mathcal{E}_2$ with respect to $\mathcal{AS}_{before}+\mathcal{C}$ has an increased $P_{before}$, while similarly the distribution using $\mathcal{AS}_{after}+\mathcal{C}$ as the context has a higher $P_{after}$. Table~\ref{tb:example} shows an example instance of our temporal differential analysis, where an additional sentence $\mathcal{AS}_{before}$ affects the temporal relation between the two events and shifts the label distribution towards ``before''.
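The resulting binary check can be sketched as follows (the probability values and function names are hypothetical; in the actual evaluation, the distributions come from model predictions):

```python
def shift_direction(p_orig, p_new):
    """Which relation did the additional sentence shift the
    distribution towards: 'before', 'after', or 'none'?"""
    if p_new["before"] > p_orig["before"]:
        return "before"
    if p_new["after"] > p_orig["after"]:
        return "after"
    return "none"

def differential_correct(p_orig, p_new, intended):
    # a model passes an instance iff its predicted shift matches the
    # annotated direction of the additional sentence AS
    return shift_direction(p_orig, p_new) == intended

# hypothetical distributions for context C vs. AS_before + C
p_c = {"before": 0.45, "after": 0.40, "same": 0.15}
p_asc = {"before": 0.70, "after": 0.20, "same": 0.10}
print(differential_correct(p_c, p_asc, "before"))  # True
```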
We conduct a pilot human study for this formulation and find that it is easy to annotate and achieve substantial improvement over the explanation quality compared with directly asking for explanations on an event pair. We, therefore, adopt this formulation and create our evaluation dataset \mbox{\textsc{Today}}{} through a multi-stage annotation process as detailed below. \begin{table}[t] \newcolumntype{?}{!{\vrule width 1pt}} \newcolumntype{C}{>{\centering\arraybackslash}p{40em}} \centering \renewcommand\arraystretch{1.0} \small{ \begin{tabular}{@{}l@{}} \toprule Example \\ \midrule \textbf{Context $\mathcal{C}$}: Dave wanted to make a great scientific \\ discovery. Dave worked with algae to make electricity. \\ Dave discovered he could make electricity with algae! \\ Dave was awarded for his great discovery. \\ \midrule \textbf{Additional Sentence 1 ($\mathcal{AS}_{before}$)}: Dave was a scientist. \\ \midrule \textbf{Event 1 ($\mathcal{E}_1$)}: Dave applied for a grant for his project. \\ \textbf{Event 2 ($\mathcal{E}_2$)}: Dave worked with algae to make electricity. \\ \midrule \textbf{Explanation}: The additional sentence implies Dave was \\ a scientist and normally a scientist has to apply for a grant \\ before he starts the project. \\\bottomrule \end{tabular} } \caption{ \label{tb:example} An example of temporal differential analysis, where $\mathcal{AS}$ shifts the temporal relation between $\mathcal{E}_1$ and $\mathcal{E}_2$ to be more ``before''. See \S \ref{sec:dataset} for more detail. } \end{table} \subsection{Dataset Construction} Following the definition of the temporal differential analysis framework above, we collect a dataset to carry out the actual evaluation. Each instance in \mbox{\textsc{Today}}{} contains a context $\mathcal{C}$, an event pair $\mathcal{E}_1$, $\mathcal{E}_2$, and an additional sentence of either $\mathcal{AS}_{before}$ or $\mathcal{AS}_{after}$. 
In addition, we also annotate a human explanation $Exp$ regarding why the additional sentence affects the temporal relation between the two events. \mbox{\textsc{Today}}{} is constructed in three steps: 1) event pair generation, 2) additional sentence and explanation annotation, and 3) annotation verification and cleaning. We detail this pipeline below. \vpara{Generating $\mathcal{C}$ and $\mathcal{E}$.} We randomly sample short stories from the ROCStories dataset~\cite{mostafazadeh-etal-2016-corpus} as the context $\mathcal{C}$. For each story, we use GPT-3 to generate an implicit event phrase based on an explicit event phrase selected by GPT-3 at the same time. An implicit event is an event that is not explicitly mentioned by the given context but is inferable and relevant. A sample prompt used to construct an event pair is shown in Appendix Table~\ref{tb:prompt1}. We do this for two main reasons: 1) events that are not explicitly mentioned by the context provide more uncertainty, so that the event pair does not come with a deterministic temporal relation decided by the context; 2) this is closer to the format of \mbox{\textsc{Tracie}}{}, against which we aim to compare system performance changes. \vpara{Crowdsourcing $\mathcal{AS}$ and $Exp$.} Given $\mathcal{C}$ and the $\mathcal{E}$'s, we use Amazon Mechanical Turk and ask crowd annotators to write potential $\mathcal{AS}_{before}$ and $\mathcal{AS}_{after}$ with respect to the provided information. The guidelines ask annotators to write additional sentences that can be added to the beginning of the context, to prevent models from using text positional information. The annotator is also asked to explain why they wrote $\mathcal{AS}$ and why it affects the temporal relation distribution. We use this as $Exp$. We design an annotation interface that is intuitive and filled with examples, and at the same time, we require annotators to pass a rigorous qualification test to demonstrate proper understanding.
We list our interfaces and tests in Appendix Figures~\ref{fig:mturk} and~\ref{fig:test}. \vpara{Annotation Verification.} We employ an additional verification stage for the human-written instances from the previous step. We provide annotators with the formatted textual entailment instance and ask whether the entailment label changes in the expected direction. We collect two individual verifications per instance, and the instances accepted by all annotators appear in the test set. \subsection{Statistics} We collect 1,000 instances that are agreed upon by both verifications, and construct a silver training set from the remaining 1,241 instances. \section{Experiment} \label{sec:experiment} In this section, we conduct experiments to show that 1) existing systems do not truly understand temporal relations, and 2) \mbox{\textsc{Today}}{} and subsequent incidental supervision signals can partially address this issue and contribute to generic temporal reasoning models. \subsection{Datasets, Metrics, and Settings} We evaluate start-time temporal relation predictions with \mbox{\textsc{Tracie}}{}~\cite{zhou-etal-2021-temporal}, \mbox{\textsc{Matres}}{}~\cite{ning-etal-2018-multi}, and \mbox{\textsc{Today}}{}. Following the settings in \cite{zhou-etal-2021-temporal}, we treat \mbox{\textsc{Matres}}{} as a binary classification benchmark and use accuracy as the evaluation metric for all three datasets. We set $\epsilon$ in equation~\ref{eq:marginrankingloss} to be 0.1. We assign $\alpha$ in equation~\ref{eq:loss} to be 10. All models and baselines follow a standard TE setup and default parameters. All T5 experiments are trained with the same number of steps and repeated with three seeds. \subsection{Baselines and Systems} We use T5-large implemented by~\citet{wolf-etal-2020-transformers} as our base temporal reasoning model.
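The loss configuration above can be sketched in scalar form (the referenced equations are not reproduced in this excerpt, so the exact formulation is an assumption; $\epsilon=0.1$ and $\alpha=10$ are the stated values):

```python
def margin_ranking_loss(score_pos, score_neg, eps=0.1):
    # hinge-style ranking loss: the score of the label-consistent
    # direction should beat the opposite direction by a margin eps
    return max(0.0, score_neg - score_pos + eps)

def combined_loss(ce_loss, mr_loss, alpha=10.0):
    # cross-entropy on hard-labeled data (TRACIE/MATRES) plus
    # alpha-weighted margin ranking on TODAY-style soft instances
    return ce_loss + alpha * mr_loss

print(margin_ranking_loss(0.9, 0.2))  # margin satisfied -> 0.0
print(margin_ranking_loss(0.5, 0.6))  # margin violated -> ~0.2
print(combined_loss(0.3, 0.2))        # 0.3 + 10 * 0.2
```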
We compare our proposed models with a host of baselines, including GPT-3~\cite{brown2020language} and PatternTime~\cite{zhou-etal-2021-temporal}. We also compare variations of our proposed model based on the same T5-large model: T5(T), where T5 is finetuned on the \mbox{\textsc{Tracie}}{} training set; T5(T+O), where T5 is finetuned on the \mbox{\textsc{Tracie}}{} and \mbox{\textsc{Today}}{} training sets; and T5(T+O+G), where T5 is finetuned on the \mbox{\textsc{Tracie}}{} training set, the \mbox{\textsc{Today}}{} training set, and verifier-filtered GPT-3-generated incidental supervision. We repeat this setting by replacing the \mbox{\textsc{Tracie}}{} training set with the \mbox{\textsc{Matres}}{} training set and the \mbox{\textsc{Tracie}}{} + \mbox{\textsc{Matres}}{} combined training set, respectively. Note that we only include 1.5k (10\%) training instances for \mbox{\textsc{Matres}}{} to match the size of the other training data. We collect 5,000 initial GPT-3-generated incidental supervision instances, of which 4,811 remain after similarity-based filtering. We apply cross-entropy loss to the \mbox{\textsc{Tracie}}{} and \mbox{\textsc{Matres}}{} training sets and margin ranking loss to the \mbox{\textsc{Today}}{} training set and the GPT-3-generated supervision. \subsection{Inference} \label{sec:inference} For the \mbox{\textsc{Today}}{} test set, we utilize GPT-3 to generate three candidate explanation sentences based on the additional sentence for each relation direction of each test instance. We then rely on the explanation sentence verifier to choose the final explanation sentence. Specifically, we adopt the explanation sentence with the highest score under the explanation sentence verifier.
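The selection step just described is a plain argmax over verifier scores; a minimal sketch (with a toy dictionary standing in for the finetuned verifier):

```python
def select_explanation(candidates, verifier_score):
    # adopt the generated explanation sentence with the highest
    # score under the explanation sentence verifier
    return max(candidates, key=verifier_score)

cands = ["exp A", "exp B", "exp C"]  # hypothetical generations
toy_scores = {"exp A": 0.2, "exp B": 0.9, "exp C": 0.4}
print(select_explanation(cands, toy_scores.get))  # exp B
```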
To enhance the explanation sentence verifier's capacity to identify an incorrect explanation sentence given a correct additional sentence, the explanation sentence verifier is further finetuned with GPT-3 generated training set with the same setting. \begin{table*}[t] \centering \small \begin{tabular}{lccccccc} \toprule Data & Loss & \mbox{\textsc{Tracie}}{} & \mbox{\textsc{Matres}}{} & \mbox{\textsc{Today}}{} & \mbox{\textsc{Today}}{} (gen. exp.) & \mbox{\textsc{Today}}{} (gold exp.) & Average \\ \cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5}\cmidrule(lr){6-6}\cmidrule(lr){7-7}\cmidrule(lr){8-8} GPT-3 & FewShot&52.3&50.1&46.8&-&-&49.7 \\ PatternTime & Distant & 77.0&73.0 &54.1&59.3&67.7&68.0\\ \cmidrule(lr){1-8} T5 (O) & MR &50.6&49.8&52.9&53.7&55.7&51.1\\ T5 (O+G) & MR&55.4&52.3&55.0&57.8&66.5&54.2 \\ \cmidrule(lr){1-8} T5 (M) & CE & 52.7 & 81.2 & 52.5&55.3& 57.5 & 62.1\\ T5 (M+O) & CE + MR & 51.5&81.7 &57.4&60.5&82.7&63.5\\ T5 (M+O+G) & CE + MR &49.9& 82.9&61.4&61.9&\textbf{82.9}& 64.8 \\ \cmidrule(lr){1-8} T5 (T) & CE & 66.2 & 63.2 & 52.3&55.0&56.0 & 60.7\\ T5 (T+O) & CE + MR & 72.9 & 69.4 &59.9 &61.7& 81.6 & 67.4\\ T5 (T+O+G) & CE + MR &73.5& 68.8& 62.1&63.1&82.0&68.1\\ \cmidrule(lr){1-8} T5 (M+T) & CE & 66.2&82.0&52.5&54.7&58.5&66.9 \\ T5 (M+T+O) & CE + MR & 73.0 & 83.5 & 57.9&60.8& 77.8& 71.5\\ T5 (M+T+O+G) & CE + MR & 73.3&83.9&\textbf{63.2}&63.1&81.6 & 73.5\\ \cmidrule(lr){1-8} PatternTime (all) & CE + MR &\textbf{79.9}& \textbf{86.3}&62.9&\textbf{63.4}&82.3&\textbf{76.4}\\ \bottomrule \end{tabular} \caption{System performances under different supervision data and loss function settings across three binary temporal benchmarks. For simplicity, we use T to denote \mbox{\textsc{Tracie}}{} training data, and similarly M for \mbox{\textsc{Matres}}{}, O for \mbox{\textsc{Today}}{} (ours), and G for GPT-3 generated incidental supervision. \mbox{\textsc{Today}}{} only includes the additional sentence. 
\mbox{\textsc{Today}}{} (gold exp.) includes the additional sentence and the gold explanation sentence for each instance, while \mbox{\textsc{Today}}{} (gen exp.) includes the additional sentence and the explanation sentence generated by GPT-3 after filtering for each instance. Average denotes the average binary accuracy of \mbox{\textsc{Tracie}}{}, \mbox{\textsc{Matres}}{} and \mbox{\textsc{Today}}{} for each setting. All T5 experiments are trained with the same number of steps and repeated with three seeds.} \label{tab:maintable} \end{table*} \subsection{Main Results} Table~\ref{tab:maintable} shows system performances under different supervision data and loss function settings across three binary temporal benchmarks. The performance of existing systems, i.e., GPT-3 and PatternTime, on \mbox{\textsc{Today}}{} is unsatisfactory, revealing a gap between current temporal prediction and truly faithful temporal reasoning. We observe that the average binary accuracy over \mbox{\textsc{Tracie}}{}, \mbox{\textsc{Matres}}{} and \mbox{\textsc{Today}}{} improves with increasingly diversified training data, achieving the largest increase from 51.1\% to 73.5\% under the unified T5 training setting, which indicates that the model generalizes better. In particular, applying all the training data to PatternTime increases the average binary accuracy by 8.4\%. The use of explanations contributes an increase of 5.6\% on the average accuracy compared to merely using the temporal reasoning data, which further verifies the effectiveness of explanations in guiding models to reason correctly, more like a human, on this task.
We also show that the \mbox{\textsc{Today}}{} supervision contributes towards a better temporal reasoning model, with a 6.7\% increase on \mbox{\textsc{Tracie}}{} when trained with \mbox{\textsc{Tracie}}{} only, a 0.5\% increase on \mbox{\textsc{Matres}}{} when trained with \mbox{\textsc{Matres}}{} only, and a 6.8\% increase on \mbox{\textsc{Tracie}}{} and a 1.5\% increase on \mbox{\textsc{Matres}}{} when trained with \mbox{\textsc{Tracie}}{} and \mbox{\textsc{Matres}}{} together. An average increase of 6\% on \mbox{\textsc{Today}}{} without an explanation sentence further suggests that the temporal model is moving in the right reasoning direction, focusing on the differential details that shift the temporal relation in the context. With GPT-3-generated incidental supervision, the model performance further improves on all metrics, with average increases of 0.5\%, 0.8\%, 3.8\%, and 1.3\% on \mbox{\textsc{Matres}}{}, \mbox{\textsc{Tracie}}{}, \mbox{\textsc{Today}}{}, and average accuracy, respectively. This illustrates that LLMs can provide cheap but effective incidental supervision to benefit the model. We also notice that there is a large gap between performance on \mbox{\textsc{Today}}{} with and without the gold explanation sentence. This is because a correct explanation sentence can further elaborate and explain the additional sentence, i.e., the differential component. We follow the method in \S\ref{sec:inference} to generate an explanation for the \mbox{\textsc{Today}}{} test set and further improve over the no-explanation setting by approximately 2\%, while the performance is still suboptimal compared to including the gold explanation sentence. The reason is that the explanation verifier cannot reliably choose the correct explanation between the two candidate explanations of opposite temporal relations. We leave generating and identifying high-quality explanation sentences to future work.
\begin{table}[t] \centering \small { \begin{tabular}{lccccccc} \toprule Data &\#GPT& T & M & \mbox{\textsc{Today}}{} & Avg \\ \cmidrule(lr){1-1}\cmidrule(lr){2-2}\cmidrule(lr){3-3}\cmidrule(lr){4-4}\cmidrule(lr){5-5} \cmidrule(lr){6-6} Ours&1475&73.3&83.9&63.2&73.5\\ No Exp&1867&73.7&83.5&61.2&72.8\\ No Addition&2529&70.2&81.4&59.5&70.4\\ No General&2079&71.0&81.8&59.5&70.8\\ More \#GPT&2483&74.6&84.0&63.2&73.9\\ \bottomrule \end{tabular} } \caption{Ablation study for LLM generated supervision. We test the model performance under different verifier settings. We also test the setting where we include more verifier-filtered GPT-3 data (filtered by three verifiers). \#GPT refers to the total number of verifier-filtered GPT-3 data under each setting. T refers to \mbox{\textsc{Tracie}}{}, M refers to \mbox{\textsc{Matres}}{}, and Avg refers to Average.} \label{tab:ablation} \end{table} \subsection{Ablation Studies and Analysis} We conduct several ablation studies to understand our models' improvements better. Table~\ref{tab:ablation} demonstrates the results of our model with different settings of verifiers. The results have proved the effectiveness of all the verifiers. The explanation sentence verifier has the least influence. This is expected as we ask GPT-3 to generate an additional sentence followed by an explanation sentence, which largely increases its chance of being coherent as a single generation. We also utilize similarity-based filtering to drop the explanations that are almost identical to the hypothesis, which alleviates one of the major problems of GPT-3 generated explanations. The additional sentence verifier and the general verifier are more crucial as the quality of incidental supervision heavily relies on if it can first correctly interpret the differences in the context and then draw a corresponding reasonable conclusion. 
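The similarity-based filtering mentioned above, dropping explanations that merely restate the hypothesis, can be sketched with a standard string-similarity ratio (the 0.9 threshold is an illustrative assumption):

```python
from difflib import SequenceMatcher

def too_similar(explanation, hypothesis, threshold=0.9):
    # drop GPT-3 explanations that are almost identical to the hypothesis
    ratio = SequenceMatcher(None, explanation.lower(), hypothesis.lower()).ratio()
    return ratio >= threshold

hyp = "Tim scheduled an appointment with his dentist"
print(too_similar("Tim scheduled an appointment with his dentist.", hyp))     # True
print(too_similar("Severe tooth pain usually prompts a dental visit.", hyp))  # False
```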
We also see that including more filter-verified GPT-3 data can further enhance the model performance, suggesting the usefulness of LLMs for generating supervision signals to empower smaller models. Since the smaller T5 model with LLM-distilled knowledge performs much better than the LLM itself, this also directs us to study the trade-off between model scaling and data scaling in temporal reasoning. \section{Introduction} \begin{figure}[h] \centering \scalebox{0.48}{ \includegraphics[width=1\textwidth]{eventpair.png}} \caption{\label{fig:main} An example of temporal differential analysis: when additional sentence 1 is added to the context, a human judges the relation to shift towards \textbf{before}, whereas adding additional sentence 2 shifts the relation towards \textbf{after}.} \end{figure} Temporal relation extraction \cite{pustejovsky2003timebank, chambers-etal-2014-dense} is traditionally viewed as an information extraction task, where a model uses explicit temporal signals such as ``before'' to identify the temporal order of events. While these models have contributed to many downstream pipelines, they are not enough for more complicated tasks such as timeline generation, where most event pairs do not come with explicit clues. Such implicit temporal relation extraction \cite{zhou-etal-2021-temporal} requires temporal reasoning that relies on common sense and a semantic understanding of the context. In recent work, a popular approach to these predictions is to finetune pre-trained language models (PLMs) with annotated supervision data. Unfortunately, existing temporal benchmarks \cite{pustejovsky2003timebank, cassidy-etal-2014-annotation, ning-etal-2018-multi} only annotate hard labels and ignore the fact that temporal labels based on common sense can be soft and nondeterministic. This allows models to exploit spurious signals and annotation artifacts easily. 
For example, a model may learn to predict ``lunch'' before ``dinner'' regardless of the surrounding context, yet most existing benchmarks will not challenge such beliefs because most ``lunch'' annotations will happen to be before ``dinner.'' This means that the current high performances of existing models may be misleading and the community may have a false sense of models' generalizability. In this work\footnote{We will release data and code upon publication.}, we bridge this evaluation gap with a novel benchmark that evaluates whether a temporal reasoning model makes the correct predictions for the right reasons by properly identifying potential alternatives (e.g., ``dinner'' can be before ``lunch'' under certain contexts). Our intuition is to ask models to \textit{explain} temporal relation predictions since the most viable way for humans to demonstrate insight into these problems is by providing satisfactory explanations. While the motivation is sound, automatically evaluating the plausibility of model explanations is extremely difficult. As a result, we use an approximation of such explanations, which we call \textbf{temporal differential analysis}. Under this setting, we select event pairs whose temporal relations are not 100\% deterministic based on the context, meaning that both before/after relations are possible if additional information regarding the context is given. Then, we annotate a hypothetical change in the form of an additional sentence added to the beginning of the context. As Figure~\ref{fig:main} shows, this hypothetical change will shift the event pair's temporal relation distribution, making it either ``\textit{more before}'' or ``\textit{more after}''. Each hypothetical change is also annotated with a human explanation of why the change affects the temporal relation. 
We collect 2,241 such event pairs with a rigorous human annotation pipeline and call the resulting dataset \mbox{\textsc{Today}}{} (\textbf{t}emp\textbf{o}ral \textbf{d}ifferenti\textbf{a}l anal\textbf{y}sis). If a model is generic enough to provide proper explanations for its temporal decisions, it can also distinguish subtle context changes and understand how each change will affect the distribution of temporal relations. We find that models that achieve relatively high in-domain test performances are brittle and demonstrate minimal capabilities for differentiating subtle context changes that affect temporal relations. For example, the PatternTime model \cite{zhou-etal-2021-temporal} that achieves 77\% binary accuracy on \mbox{\textsc{Tracie}}{} \cite{zhou-etal-2021-temporal} - a dataset with similar contexts and events - drops dramatically to 54\% on \mbox{\textsc{Today}}{}, which is barely above random guessing. To mitigate this gap, we propose a general technique that uses temporal explanations that \mbox{\textsc{Today}}{} annotates. Specifically, we argue that explanations of temporal relations are a great proxy for understanding temporal reasoning. We show models trained with \mbox{\textsc{Today}}{}'s task formulation and explanation annotation are better at perceiving cross-dataset supervision and achieve superior performances on multiple datasets with a single model. We also find that while large language models (LLMs) are not good enough for temporal differential analysis, they sometimes produce reasonable explanations for a given temporal relation. We design a pipeline that automatically collects supervision signals based on this finding. The pipeline starts with giving GPT-3 \cite{brown2020language} an instance from \mbox{\textsc{Today}}{} and a hypothetical temporal relation and then uses GPT-3 to generate several explanations. 
Finally, we train an explanation verifier based on human annotation, which selects the generated explanations that are more likely to be plausible. We show that adding such explanations from GPT-3 further boosts the performance across our benchmarks. Our contribution is threefold. 1) We design a novel evaluation framework and collect a new dataset \mbox{\textsc{Today}}{} that uses differential analysis to test whether systems can perform temporal reasoning with the right reasons; 2) We show that the \mbox{\textsc{Today}}{} supervision, especially the use of explanations, contributes towards a generic temporal reasoning model; 3) We use LLMs to generate pseudo explanations and filter them with a novel explanation verification model and show that such distant supervision signals are helpful. \section{Modeling} \label{sec:model} In this section, we show how to fully use \mbox{\textsc{Today}}{}'s supervision signals, especially the explanations, to build a more generic temporal reasoning model. \vpara{Joint Learning.}\mbox{\textsc{Today}}{} only annotates temporal distribution shifts instead of absolute relations. This means that an instance may have a gold label ``before'' (i.e., the additional sentence $\mathcal{AS}$ makes the relation more ``before'' compared to the original context), yet the likelihood of ``after'' can still be higher, and the \textit{argmax} label will be ``after''. As a result, a model cannot sufficiently learn to predict absolute labels with only supervision signals from \mbox{\textsc{Today}}{}. To mitigate this issue, we propose a joint learning model that requires joint supervision from a dataset that annotates hard labels for temporal relations, such as \mbox{\textsc{Matres}}{} or \mbox{\textsc{Tracie}}{}. 
\vpara{Modeling.} We adopt \mbox{\textsc{Tracie}}{}'s formulation~\cite{zhou-etal-2021-temporal} to format the temporal reasoning task into textual entailment and use a sequence-to-sequence pre-trained language model as the base of our system. Specifically, the input sequence consists of the premise, which is $\mathcal{AS} + \mathcal{C} + Exp$\footnote{$\mathcal{AS}$ and $Exp$ only apply for relative label instances, such as those in \mbox{\textsc{Today}}{}.} in our case, as well as the hypothesis, which is $\mathcal{E}_1$ \keywordCode{starts [r]} $\mathcal{E}_2$. Here $r$ is a hypothetical relation we plug into the hypothesis since systems are unaware of the gold label from the input sequence. The output sequence contains an entailment label, which is \keywordCode{answer: positive} for entail, and \keywordCode{answer: negative} for contradiction. \vpara{Hard Label Instances.} As we note above, a system does not know the gold label when plugging in the hypothetical relation in the hypothesis. As a result, at learning time, we construct two entailment instances for a temporal relation instance with an absolute hard label. The first instance uses a hypothesis that is $\mathcal{E}_1$ \keywordCode{starts before} $\mathcal{E}_2$. We want the model to learn to output \keywordCode{answer: positive} for entail if the gold label is also ``before'', and \keywordCode{answer: negative} for contradiction if the gold label is ``after''. The second instance uses $\mathcal{E}_1$ \keywordCode{starts after} $\mathcal{E}_2$ as the hypothesis, where the output sequences are reversed compared to the first one. We use the regular cross-entropy loss for optimization and denote the loss as $\ell_{CE}$. At test time, we similarly construct two entailment instances for each event pair and conduct a simple probability-based vote to infer a final ``before/after'' relation. 
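To make the entailment formatting concrete, here is a minimal sketch (ours, not the authors' released code; the helper name and string templates are illustrative assumptions) of how one hard-labeled event pair becomes the two training instances described above:

```python
# Hypothetical sketch of turning a hard-label temporal instance into two
# textual-entailment sequences, following the formulation described above.
# For TODAY-style instances the premise would be AS + context + Exp.

def make_instances(context, event1, event2, gold_label):
    """Build the two entailment instances for one hard-labeled event pair."""
    instances = []
    for r in ("before", "after"):
        hypothesis = f"{event1} starts {r} {event2}"
        # Output "answer: positive" (entail) when the plugged-in relation
        # matches the gold label, "answer: negative" (contradiction) otherwise.
        target = "answer: positive" if r == gold_label else "answer: negative"
        instances.append({
            "input": f"premise: {context} hypothesis: {hypothesis}",
            "output": target,
        })
    return instances

pair = make_instances("Tom cooked a meal before noon.", "lunch", "dinner", "before")
```

At test time the same two instances are built and the final ``before/after'' label is inferred by comparing the entailment probabilities of the two sequences.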
\vpara{Relative Label Instances.} For instances that do not annotate absolute hard labels, we similarly construct two entailment instances for each event pair in training and evaluation time. However, instead of asking the model to use a cross-entropy loss to learn to output entailment labels, we employ a marginal ranking loss and ask the model to increase the probability of the entailment sequence if the plugged-in relation $r$ is the same as the gold label\footnote{Here ``gold label'' refers to the direction that $\mathcal{AS}$ shifts the temporal distribution to.} $r_g$, and vice versa. Specifically, we want \footnote{For simplicity, we omit $Exp$ and $\mathcal{E}$ in the condition.} \beq{ \begin{cases} p(\mathrm{ent} |(\mathcal{AS}+\mathcal{C}), r)> p(\mathrm{ent} |\mathcal{C},r) & r = r_g \\ p(\mathrm{con}|(\mathcal{AS}+\mathcal{C}),r)> p(\mathrm{con}|\mathcal{C},r) & r = \neg r_g \end{cases} } where $\mathrm{ent}$ and $\mathrm{con}$ represent entailment and contradiction, respectively. The loss function we use can subsequently be written as \beqn{ \label{eq:marginrankingloss} \ell_{MR} &= {\rm max}(0,\epsilon+p_{o_g}-p_{g}) \\&+ {\rm max}(0,\epsilon+p_{w}-p_{o_w})\\ p_{g} &= p(\mathrm{ent}|(\mathcal{AS}+\mathcal{C}),r_g) \\ p_{o_{g}} &= p(\mathrm{ent}|\mathcal{C},r_g) \\ p_{w} &= p(\mathrm{ent}|(\mathcal{AS}+\mathcal{C}),\neg r_g) \\ p_{o_{w}} &= p(\mathrm{ent}|\mathcal{C},\neg r_g) } \normalsize where $\epsilon$ is a margin separating the logits. The actual probability of entailment is computed by the word logits in the output sequence of our sequence-to-sequence model. \vpara{Aggregated Loss Function.} The final loss function we use for training is \begin{equation} \label{eq:loss} \ell = \alpha \ell_{CE} + \ell_{MR} \end{equation} \normalsize where $\alpha$ reduces the two losses into the same scale. 
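The ranking and aggregation above can be sketched in a few lines (an illustration of ours, not the actual training code; in the model the probabilities come from the word logits of the output sequence, and the values of $\epsilon$ and $\alpha$ here are arbitrary placeholders):

```python
# Sketch of the margin ranking loss l_MR and the aggregated loss l = alpha*l_CE + l_MR.

def margin_ranking_loss(p_g, p_og, p_w, p_ow, eps=0.1):
    """l_MR = max(0, eps + p_og - p_g) + max(0, eps + p_w - p_ow).

    p_g  : p(entail | AS + C, r_gold)       p_og : p(entail | C, r_gold)
    p_w  : p(entail | AS + C, not r_gold)   p_ow : p(entail | C, not r_gold)
    """
    return max(0.0, eps + p_og - p_g) + max(0.0, eps + p_w - p_ow)

def aggregated_loss(l_ce, l_mr, alpha=1.0):
    """Combine the cross-entropy and ranking losses; alpha rescales l_CE."""
    return alpha * l_ce + l_mr
```

The loss is zero exactly when adding $\mathcal{AS}$ raises the entailment probability of the gold relation (and lowers that of the opposite relation) by at least the margin $\epsilon$.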
As a result, the proposed model is a general-purpose temporal reasoning model that can predict both hard-label temporal relations for an event pair and probability changes for differential analysis as proposed in \mbox{\textsc{Today}}{}. \section{LLM Incidental Supervision} As we both hypothesize and later show in \S\ref{sec:experiment}, human-annotated explanations benefit generic temporal reasoning models, as they encourage models to learn to use the correct signals. However, it is extremely difficult and expensive to crowdsource such explanations for training purposes, since collecting an instance costs \$1 on average. On the other hand, large language models (LLMs) can produce a large number of generated explanations at a much lower cost. Yet, these generated explanations are mostly unusable as they are simply model guesses based on textual correlations. In this section, we introduce a knowledge distillation method that combines the benefits of both human annotations and LLM generations by training verification models based on our seed annotation, which are then used to select generations that are more likely to be plausible. Compared to previous work \cite{Wiegreffe2021ReframingHC}, we propose a verification system composed of multiple models that individually verify different aspects of automatically generated explanations. We detail our pipeline below. \subsection{Temporal Explanations from GPT-3} We adopt the same event pair generation and context selection process as detailed in \S\ref{sec:dataset}. We design a prompt, as shown in Table~\ref{tb:prompt}, that provides GPT-3 with contexts and event pairs, and ask GPT-3 to generate additional sentences, how these sentences will change the temporal relation, and why. The prompt contains a few examples, which makes this setting few-shot. 
\subsection{Verification System} \label{sec:gev} \vpara{Similarity-based Filtering.} We filter out GPT-3 instances that use the exact same sentences from the context as the additional sentence, or that repeat the event pairs and temporal relations as explanations. We use Sentence-BERT~\cite{reimers-gurevych-2019-sentence} and a similarity threshold of $0.95$ to perform this filtering. \vpara{General Explanation Verifier.} We use the generic temporal relation model proposed in \S\ref{sec:model}, trained on \mbox{\textsc{Today}}{} and an additional temporal relation dataset\footnote{Depending on the task, we choose different temporal relation datasets.}, to verify whether the generated additional sentence $\mathcal{AS}$ shifts the temporal relation in the direction that it is supposed to. \vpara{Additional Sentence Verifier.} The general explanation verifier cannot sufficiently identify whether only part of the GPT-3 generation is correct. For example, a generated instance may have a sub-optimal $\mathcal{AS}$ but a convincing $Exp$, which would deceive our temporal relation model. To address this, we train a separate $\mathcal{AS}$ verification model on \mbox{\textsc{Today}}{} that does not use $Exp$ as input. We follow the same training scheme as in \S\ref{sec:model} and, similarly, use whether $\mathcal{AS}$ shifts the temporal relation as expected as our filtering criterion. \vpara{Explanation Sentence Verifier.} We also train a binary classification model to individually check the plausibility of $Exp$. To generate negative $Exp$ instances, for each instance in the \mbox{\textsc{Today}}{} training set, we ask GPT-3 to generate three possible explanation sentences. 
We use the one that is the least similar to the human-annotated $Exp$ according to Sentence-BERT as the negative instance, which we denote as $Exp_{neg}$. We finetune the base sequence-to-sequence model with the positive and negative explanations and optimize the negative log-likelihood of the positive explanation: \beqn{ \ell^{E} &= -\log\frac{e^{p_{pos}}}{e^{p_{pos}}+e^{p_{neg}}}\\ p_{pos} &= p(\mathrm{ent}|(\mathcal{AS}+\mathcal{C}+Exp),r_g)\\ p_{neg} &= p(\mathrm{ent}|(\mathcal{AS}+\mathcal{C}+Exp_{neg}),r_g) } We filter out all GPT-3 generated instances whose explanation is deemed negative by this binary classification model. \section{Related Work} \stitle{Temporal Reasoning Models.} Significant effort has been devoted to temporal reasoning, a challenging task that requires models to recognize not only the connection between event mentions but their context as well. Several statistical learning models \cite{mani2007three, ning-etal-2017-structured,ning-etal-2018-cogcomptime} have been proposed to characterize events based on features and learn to predict the temporal relations between event pairs. Recently, data-driven temporal reasoning approaches \cite{trong2022selecting, wang2022extracting, liu2021discourse, mathur-etal-2021-timers, zhou-etal-2020-temporal, han-etal-2019-joint} have achieved great improvements over these feature-based models on benchmarks and are generally built upon deep neural models that predict temporal labels in an end-to-end fashion. Nevertheless, the lack of interpretability has made these neural models too untrustworthy to be deployed in real-world applications \cite{yin-etal-2022-sensitivity}, especially in critical areas such as healthcare, finance, and government. The differential analysis approach first introduced in this paper provides a new paradigm for evaluating the interpretability of temporal reasoning models. 
\stitle{Temporal Relation Datasets.} From different perspectives, multiple research projects have focused on constructing temporal reasoning benchmarks. A series of remarkable datasets, TimeBank \cite{pustejovsky2003timebank}, TempEval 1-3 \cite{verhagen-etal-2007-semeval, verhagen-etal-2010-semeval, uzzaman-etal-2013-semeval}, TimeBank-Dense \cite{cassidy-etal-2014-annotation}, RED \cite{ogorman-etal-2016-richer}, \textsc{Matres} \cite{ning-etal-2018-multi} and so forth, are annotated on newswire articles for events and temporal relations between events. \textsc{Torque} \cite{ning-etal-2020-torque} examines models' capability in temporal reasoning in the format of reading comprehension, whereas a contrast set for \textsc{Matres} is introduced in \cite{gardner-etal-2020-evaluating} to provide a local view of models' decision boundaries. However, none of these datasets provide reasons for these temporal decisions; thus, current temporal models tend to learn superficial temporal cues given the supervision. In contrast, the newly introduced framework, \mbox{\textsc{Today}}{}, bridges this gap by providing supervision signals under subtle context changes and corresponding explanations in the meantime. \stitle{Explanations.} The community has been studying explanations and how they can help the reasoning tasks such as question answering. Several models have been proposed \cite{Rajani2019ExplainYL, Latcinnik2020ExplainingQA, kumar-talukdar-2020-nile, ZRYR22}, as well as evaluation benchmarks that aim to test if existing systems can properly utilize explanations \cite{Camburu2018eSNLINL, Aggarwal2021ExplanationsFC}. Our work is closely related to this line of effort as we attempt to build a proxy benchmark that can be automatically evaluated for temporal explanations. The recent findings on large pre-trained language models have inspired several works to use them as explanation generators \cite{Wiegreffe2021ReframingHC, marasovic-beltagy-et-al-2022-feb}.
\section{Introduction} Still-growing interest in extremal black holes is motivated by their unusual and not fully understood nature. The problems of entropy, semiclassical configurations, interactions with matter, and the information paradox have not been resolved completely. Apart from their global structure and behavior, the near-horizon region is also of interest~\cite{gp}. In particular, of equal importance is the question of the singularities that reside at the centers of most black holes, hidden from an external observer. Regular black holes (RBHs) have been considered, dating back to Bardeen~\cite{BAR}, as a way of avoiding the curvature singularity beyond the event horizon~\cite{RBH}. Their causal structures are similar to that of a Reissner-Nordstr\"{o}m (RN) black hole, with the singularity replaced by de Sitter space-time~\cite{Dymn1}. In addition to various RBHs~\cite{HAY}, the action of Einstein gravity coupled to nonlinear electrodynamics provides a magnetically charged RBH~\cite{0006014}. This solution is characterized by two integration constants and a free parameter. The integration constants are related to the Arnowitt-Deser-Misner (ADM) mass $M$ and a magnetic charge $Q$, while the free parameter $a$ is adjusted to make the line element regular at the center. Moreover, it allows an exact treatment using the Lambert function~\cite{0010097}. Here we note that this extremal RBH has the near-horizon geometry of $AdS_2\times S^2$, as does the extremal RN black hole~\cite{0403109,0606185}. On the other hand, string theory suggests that higher curvature terms can be added to Einstein gravity~\cite{Zwiebach}. Black holes in higher-curvature gravity \cite{Callan} have been extensively studied, showing spectacular progress in the microscopic counting of black hole entropy. For a review, see \cite{deWit}. In theories with higher curvature corrections, the classical entropy deviates from the Bekenstein-Hawking entropy and can be calculated using Wald's Noether charge formalism \cite{Wald}. 
It exhibits exact agreement with string theory predictions, both in the BPS \cite{Behrndt} and non-BPS \cite{Goldstein,AE} cases. Recently, Sen has proposed a so-called ``entropy function'' method for calculating the entropy of $n$-dimensional extremal black holes, which is effective even in the presence of higher curvature terms. Here the extremal black holes are characterized by the near-horizon geometry of $AdS_{2}\times S^{n-2}$ and the corresponding isometry~\cite{Sen}. The method states that the entropy of extremal black holes can be obtained by extremizing the entropy function with respect to some moduli on the horizon. Extremizing an entropy function is equivalent to solving the Einstein equations in the near-horizon region. An entropy function usually depends only on the near-horizon geometry and decouples from the data at infinity. This describes the attractor behavior. This method has been applied to many solutions, including extremal black holes in higher dimensions, rotating black holes, and various non-supersymmetric black holes~\cite{SSen}. We note that the near-horizon isometry SO(2,1) and the long throat of the $AdS_2$ sector are the two ingredients of the attractor mechanism~\cite{klr}. On the other hand, Cai and Cao~\cite{CC} have proposed a generalized entropy formula based on Wald's Noether charge formalism. However, this generalized entropy function approach has the drawback that there is no way to combine the full equations of motion with the attractor mechanism. Very recently, we have investigated a magnetically charged RBH~\cite{MKP}. It turned out that the entropy function approach does not work for deriving the Bekenstein-Hawking entropy of the extremal RBH, while the generalized entropy formula is suitable for the RBH case. This is mainly because the entropy function depends nonlinearly on the near-horizon geometry ($Q^2$) as well as on the data at infinity ($M$). In this paper, we address this issue of the extremal RBH again. 
We introduce a new attractor mechanism to find the Bekenstein-Hawking entropy of the extremal RBH. The important point is to find a 2D dilaton gravity with dilaton potential $V(\phi)$ by performing the dimensional reduction of 4D Einstein gravity including matter and then a conformal transformation \cite{NSN,FNN}. Then, the new attractor equations are given by $\nabla^2\phi=V(\phi)$ and $V^{\prime}(\phi)=-\bar{R}_2$. For a constant dilaton $\phi=\phi_0$, these can be used to find the location $r=r_{e}$ of the degenerate horizon. Finally, we use the generalized entropy formula based on Wald's Noether charge formalism to derive the desired Bekenstein-Hawking entropy. The organization of this work is as follows. In Sec. 2, we show the procedure for obtaining a 2D dilaton gravity from a 4D Einstein gravity coupled to matter. In order to test whether this approach works for calculating the entropy of an extremal black hole, we study a toy model of the RN black hole in Sec. 3. We briefly review the magnetically charged RBH in Sec. 4. In Sec. 5, we show that Sen's entropy function approach does not work for the regular black hole. Sec. 6 is devoted to finding the entropy of an extremal RBH by using the generalized entropy function approach. We obtain the Bekenstein-Hawking entropy of an extremal RBH by using the 2D dilaton gravity approach with a conformal transformation in Sec. 7. Finally, we discuss our results in Sec. 8. \section{Dimensional Reduction Approach} We start with the four-dimensional (4D) action \begin{equation} \label{action} I=\frac{1}{16\pi}\int d^4x \sqrt{-g}[R-{\cal L}_M(B)] \end{equation} where ${\cal L}_M(B)$ is the Lagrangian for matter. For our purpose, we consider the spherically symmetric metric \begin{equation} \label{metric} ds^2=-U(r)dt^2+\frac{1}{U(r)}dr^2+ b^2(r) d\Omega^2_2, \end{equation} where $b(r)$ plays the role of the radius of the two-sphere $S^2$. 
The 4D Ricci scalar $R$ is calculated as \begin{equation} R=-U''-\frac{1}{b^2}\Big[4b b' U'+2U (b'^2+2b b'')-2\Big], \end{equation} where the prime denotes the derivative with respect to $r$. After the dimensional reduction by integrating the action (\ref{action}) over $S^2$, the reduced effective action in two dimensions~\cite{NSN} can be written as \begin{equation} I^{(2)}=\frac{1}{4}\int d^2x\sqrt{-g} [b^2 R_2+2g^{\mu\nu}\nabla_\mu b\nabla_\nu b+2-4 b^2{\cal L}_M], \end{equation} where $R_2=-U''(r)$ is the 2D Ricci scalar. It is convenient to eliminate the kinetic term by using the conformal transformation \begin{equation} \label{conft} \bar{g}_{\mu\nu}=\sqrt{\phi}~g_{\mu\nu},~\phi=\frac{b^2(r)}{4}. \end{equation} This transformation transfers the information of the 4D action (\ref{action}) to the 2D dilaton potential, provided the 4D action admits the black hole solution. That is, we may obtain a good $s$-wave approximation to the 4D black hole by eliminating the kinetic term. Unless one makes the conformal transformation, the information is split into the kinetic and the potential terms. Now, let us choose the dilaton as the squared radius of $S^2$ ($\phi=r^2/4$). Then, the reparameterized action takes the form \begin{equation}\label{repara-action} \bar{I}^{(2)}=\int d^2x \sqrt{-\bar{g}}[\phi\bar{R}_2+V(\phi)] \equiv\int d^2x\sqrt{-\bar{g}}\bar{F}, \end{equation} where the Ricci scalar and the dilaton potential are given by \begin{equation} \bar{R}_2=-\frac{U''}{\sqrt{\phi}},~V(\phi)=\frac{1}{2\sqrt{\phi}}-\sqrt{\phi}{\cal L}_M(B), \end{equation} respectively. This is a 2D dilaton gravity (for a review, see ~\cite{gkv}) with $G_2=1/2$~\cite{JT}. Also $\bar{F}$ will play the role of the entropy function. The two equations of motion are derived as \begin{eqnarray} \label{newatt1} \nabla^2\phi=V(\phi),\\ \bar{R}_2=-V'(\phi), \label{newatt2} \end{eqnarray} where $V'(\phi)$ denotes the derivative with respect to $\phi$. These equations will play the role of a new attractor. 
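As a sanity check on the dimensional reduction, the quoted expression for the 4D Ricci scalar must vanish for the vacuum Schwarzschild solution $U=1-2M/r$, $b=r$. A small numerical sketch of ours (function names are illustrative, not from the paper):

```python
# Check of R = -U'' - (1/b^2)[4 b b' U' + 2 U (b'^2 + 2 b b'') - 2]
# for the metric ansatz ds^2 = -U dt^2 + U^{-1} dr^2 + b^2 dOmega^2.

def ricci_4d(U, Up, Upp, b, bp, bpp):
    """Evaluate the quoted reduction of the 4D Ricci scalar."""
    return -Upp - (4.0*b*bp*Up + 2.0*U*(bp**2 + 2.0*b*bpp) - 2.0)/b**2

def schwarzschild_R(r, M=1.0):
    """Vacuum case U = 1 - 2M/r, b = r: the scalar curvature must vanish."""
    U, Up, Upp = 1.0 - 2.0*M/r, 2.0*M/r**2, -4.0*M/r**3
    return ricci_4d(U, Up, Upp, r, 1.0, 0.0)
```

Evaluating `schwarzschild_R` at any $r>2M$ returns zero (up to rounding), confirming that the quoted formula is consistent with the vacuum solution.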
Note that in the case of nonsupersymmetric attractors in four dimensions~\cite{Goldstein}, the conditions for an attractor are that $\partial_iV_{eff}(\phi_{i0})=0$ and that $\partial_i\partial_jV_{eff}(\phi_{i0}) $ should have positive eigenvalues at the critical point $\phi_i=\phi_{i0}$. If these conditions are satisfied, the attractor mechanism works and the entropy is given by the effective potential at the horizon. For our case, the corresponding conditions for the attractor are \begin{equation}\label{attcon} V(\phi_0)=0, ~~~V'(\phi_0)\neq 0 \end{equation} for the constant dilaton $\phi= \phi_0$. A solution to equation (\ref{newatt1}) provides a constant dilaton $\phi_0$ when $V(\phi_0)=0$. Considering the connection $\phi_0=r_e^2/4$, the solution to equation (\ref{newatt2}) gives us information on the 2D spacetime \begin{equation} \bar{R}_2|_{r=r_e}=-\frac{U''(r_e)}{\sqrt{\phi_0}} \end{equation} which is the constant curvature of the $AdS_2$ spacetime. After the conformal transformation, the generalized entropy formula derived from Wald's Noether charge formalism is obtained as \begin{equation} \bar{S}_{BH}=\frac{4\pi \sqrt{\phi_0}}{U''(r_e)}(q e - \bar{F}), \end{equation} which is slightly different from the entropy formula $S_{BH}=\frac{4\pi }{U''(r_e)}(q e - F)$ proposed by Cai and Cao \cite{CC}. Finally, we have the generalized entropy function \begin{equation} \bar{F}(\phi_0)= - \sqrt{\phi_0} U''(r_e) \end{equation} at the degenerate horizon. Considering $\phi_0= \frac{1}{4}r_e^2$, the desired Bekenstein-Hawking entropy is obtained as \begin{equation} \bar{S}_{BH}=-\frac{4\pi\sqrt{\phi_0}}{U''(r_e)}\bar{F}(\phi_0)=4\pi\phi_0 = \pi r_e^2 \end{equation} for a magnetically charged black hole with $e=0$. \section{Reissner-Nordstr\"om black hole} We consider a toy model of Einstein-Maxwell theory to test whether our approach works for obtaining the proper entropy of the extremal RN black hole. 
In this case, we have \begin{equation} {\cal L}_M(B)=F_{\mu\nu}F^{\mu\nu}=\frac{2Q^2}{r^4} \end{equation} with $F_{\theta\varphi}=Q\sin\theta$. Then, the potential is given by \begin{equation} V(\phi)=\frac{1}{2\sqrt{\phi}}\Big[1-\frac{Q^2}{4\phi}\Big] \end{equation} whose form is depicted in Fig. 1. When $V(\phi_0)=0$, one has the solution to Eq. (\ref{newatt1}) as \begin{equation} \phi_0=\frac{Q^2}{4}. \end{equation} In this case, we have the $AdS_2$ spacetime with the curvature \begin{equation} \bar{R}_2|_{r=r_e}=-V'(\phi_0)=-4/Q^3. \end{equation} Finally, since the generalized entropy function is given by \begin{equation} \bar{F}^{RN}(\phi_0)= -\sqrt{\phi_0} U''(r_e), \end{equation} we obtain the entropy of the extremal RN black hole as \begin{equation} \bar{S}^{RN}_{BH} = 4\pi\phi_0=\pi Q^2. \end{equation} As shown in Fig. 1, one cannot find the degenerate horizon in the $Q^2=0$ case because it corresponds to the Schwarzschild black hole. \begin{figure}[t!] \centering \includegraphics{fig1.eps} \caption{The solid curve: the dilaton potential $V(\phi)$ for the RN black hole with $Q=1$. $V(\phi)=0$ at $\phi=\phi_0=0.25$ denotes the degenerate horizon. For $Q\not=0$, one always finds a point with $V(\phi)=0$. The large-dashed curve is for the Schwarzschild case with $Q^2=0$, where there is no point with $V(\phi)=0$.} \label{fig1} \end{figure} \section{Regular black hole} We briefly review the magnetically charged RBH~\cite{0403109,0606185}. In this case, ${\cal L}_M(B)$ in Eq. (\ref{action}) is a functional of $B= F_{\mu\nu}F^{\mu\nu} $ defined by \begin{equation} \label{lagr} {\cal L}_M(B)=B \cosh^{-2}\left[a \left(\frac{B}{2}\right)^{1/4} \right], \end{equation} where the free parameter $a$ will be adjusted to guarantee regularity at the center. In the limit $a\to 0$, we recover the Einstein-Maxwell theory of the previous section. 
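Since the $a\to 0$ limit recovers the Einstein-Maxwell case, the RN attractor relations are a convenient consistency check. The following numerical sketch of ours (the charge value and finite-difference step are arbitrary choices) verifies $V(\phi_0)=0$, $\bar{R}_2=-4/Q^3$, and $\bar{S}_{BH}=\pi Q^2$:

```python
import math

# Cross-check of the RN attractor data quoted above:
# V(phi) = (1/(2*sqrt(phi))) * (1 - Q^2/(4*phi)); the degenerate horizon
# sits at V(phi0) = 0, with AdS_2 curvature R2 = -V'(phi0) and S = 4*pi*phi0.

def V(phi, Q):
    return (1.0/(2.0*math.sqrt(phi)))*(1.0 - Q**2/(4.0*phi))

Q = 1.3                              # arbitrary sample charge
phi0 = Q**2/4.0                      # zero of V: the degenerate horizon
h = 1e-6                             # step for a central difference
Vprime = (V(phi0 + h, Q) - V(phi0 - h, Q))/(2.0*h)
R2 = -Vprime                         # expected: -4/Q^3
S = 4.0*math.pi*phi0                 # expected: pi*Q^2
```

The computed values reproduce $\phi_0=Q^2/4$, $\bar{R}_2=-4/Q^3$, and $\bar{S}^{RN}_{BH}=\pi Q^2$ to numerical precision.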
To determine the metric function in (\ref{metric}), defined by \begin{equation} U(r)\,=\,1\,-\,\frac{2 m(r)}{r}, \end{equation} we have to solve the Einstein equations. From the variation of the action (\ref{action}) together with the matter (\ref{lagr}) with respect to the vector potential $A_\mu$, the equations of motion are given by \begin{equation} \label{maxwell1} \nabla _{\mu}\left( \frac{d{\cal L}(B)}{dB} F^{\mu\nu}\right) =0, \end{equation} \begin{equation} \label{maxwell2} \nabla _{\mu}\,^{\ast }F^{\mu\nu}=0, \end{equation} where the asterisk denotes the Hodge dual. The solution to Eqs. (\ref{maxwell1}) and (\ref{maxwell2}) is $F_{\theta\varphi}=Q\sin \theta$ for the magnetically charged case. On the other hand, the variation of the action with respect to the metric $g_{\mu\nu}$ leads to the Einstein equation \begin{equation} \label{eineq} R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}R = 8\pi T_{\mu\nu} \end{equation} with the stress-energy tensor \begin{equation} T_{\mu\nu}=\frac{1}{4\pi }\left( \frac{d {\cal L}\left( B\right) }{dB}F_{\rho \mu}F^{\rho}_{\nu}-\frac{1}{4}g_{\mu\nu} {\cal L}\left( B\right) \right). \end{equation} Solving this Einstein equation, the mass distribution is determined to be \begin{equation} \label{quadrature1} m(r)\,=\,\frac{1}{4}\int^r {\cal L}[B(r')]r'^{2} dr'\, + C, \end{equation} where $C$ is an integration constant. Imposing the condition for the ADM mass $M(=m(\infty)=C)$, the mass distribution takes the form \begin{equation} m(r)\,=M-\frac{Q^{3/2}}{2a} \tanh\left(\frac{aQ^{1/2}}{r} \right). \end{equation} Moreover, setting $a\,=\,Q^{3/2}/2M$ determines the metric function completely as \begin{equation} U(r)\,=\,1\,-\,\frac{2 M}{r}\left(1\,-\,\tanh\frac{Q^{2}}{2Mr} \right). \label{Gr} \end{equation} At this stage we note that $U(r)$ is regular as $r \to 0$, in contrast to the RN case (the $a=0$ limit), whose metric function $1-2M/r+Q^2/r^2$ diverges as $r^{-2}$ in that limit. 
In order to find the location of the horizon from $U(r)=0$, we use the Lambert functions $W_i (\xi)$ defined by the formula $W(\xi)e^{W(\xi)}=\xi$ \cite{0403109}. As is shown in Fig. 2, $W_0(\xi)$ and $W_{-1}(\xi)$ are real branches. Their values at the branch point $\xi=-1/e$ are the same as $W_{0}(-1/e)=W_{-1}(-1/e)=-1$. Here we set $W_{0}(1/e) \equiv w_0$ because it plays an important role in finding the location of the degenerate horizon of the extremal RBH. \begin{figure}[t!] \centering \includegraphics{fig2.eps} \caption{The two real branches of the Lambert function $W_0(\xi)$ (upper curve) and $W_{-1}(\xi)$ (lower curve) are depicted for the solution to the RBH. The degenerate event horizon at ($q_e,x_e$) corresponds to the branch point of the Lambert function at $\xi=-1/e$. } \label{fig2} \end{figure} We note that the mass $M$ is a free parameter. Introducing a reduced radial coordinate $x=r/M$ and a charge-to-mass ratio $q=Q/M$, the condition for the event horizon is given by \begin{equation} U(x(q))\,=\,1\,-\,\frac{2 }{x}\left(1\,-\,\tanh\frac{q^{2}}{2x} \right)=0. \label{Grn} \end{equation} Here one finds the outer $x_+$ and inner $x_-$ horizons as \begin{equation} x_+(q)=-\frac{q^2}{W_0(-\frac{q^2e^{q^2/4}}{4})-q^2/4}, ~~x_-(q)=-\frac{q^2}{W_{-1}(-\frac{q^2e^{q^2/4}}{4})-q^2/4}. \end{equation} For $q^2/4=q^2_{e}/4=w_0$, the two horizons $x_+$ and $x_-$ merge into a degenerate event horizon\footnote{The near horizon geometry of the degenerate horizon is $U(r) \simeq h(r-r_{e})^2$ with $U'(r_{e})=0$ and $U''(r_{e})=2h$.
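The extremal values that follow from $w_0=W_0(1/e)$ can be reproduced numerically. The sketch below solves $w\,e^w=1/e$ by Newton's method (a stdlib-only stand-in for a library Lambert-W routine) and then checks that $U(x)$ has a degenerate zero at $x_e=4w_0/(1+w_0)$ for $q_e=2\sqrt{w_0}$ (units with $M=1$):

```python
import math

# Solve w * exp(w) = 1/e for the principal branch by Newton's method.
w = 0.3
for _ in range(50):
    f = w * math.exp(w) - math.exp(-1.0)
    w -= f / (math.exp(w) * (1.0 + w))
w0 = w                                  # W_0(1/e) ~ 0.2785

q_e = 2.0 * math.sqrt(w0)               # extremal charge-to-mass ratio, ~1.056
x_e = 4.0 * w0 / (1.0 + w0)             # degenerate horizon radius, ~0.871

def U(x, q):
    return 1.0 - (2.0 / x) * (1.0 - math.tanh(q**2 / (2.0 * x)))

h = 1e-5                                # U'(x_e) should vanish at a degenerate zero
Up = (U(x_e + h, q_e) - U(x_e - h, q_e)) / (2.0 * h)
```

Both $U(x_e)$ and the numerical derivative $U'(x_e)$ vanish to high accuracy, confirming that the two horizons merge there.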
Introducing new coordinates $r= r_{e} + \varepsilon/(hy)$ and $\tilde{t}=t/\varepsilon$ with $h=\frac{(1+w_0)^3}{32M^2 w_0^2}.$ Expanding the function $U(r)$ in terms of $\varepsilon$, retaining quadratic terms and subsequently taking the limit of $\varepsilon \to 0$, the line element~\cite{0403109} becomes $ds^2_{NH} \simeq \frac{1}{hy^2} \left( -dt^2 + dy^2 \right) + r^2_{e} d\Omega_2^2.$ Moreover, using the Poincar\'{e} coordinate $y =1/u$, one could rewrite the above line element as the standard form of $AdS_2\times S^2$: $ ds^2_{NH} \simeq \frac{1}{h} \left( - u^2 dt^2 + \frac{1}{u^2}du^2 \right) + r^2_{e} d\Omega_2^2$.} at \begin{equation} x_{e}=\frac{4q^2_{e}}{4+ q^2_{e}}=\frac{4w_0}{1+w_0}, \end{equation} where we use the relation of $(q_{e}^2/4)e^{q_{e}^2/4}=1/e=w_0e^{w_0}$. That is, the degenerate event horizon appears at $(q_{e}=1.056, x_{e}=0.871)$ when $x_+=x_-=x_e$. We note that in finding the location of the degenerate horizon, first we choose $q=q_e$ and then determine $x=x_e$. For $q>q_{e}$, there is no horizon, while for $q<q_e$, two horizons appear. For our purpose, let us define the Bekenstein-Hawking entropy for the magnetically charged extremal RBH as \begin{equation} \label{BHRBH} S_{BH}= \pi r^2_{e} =\pi M^2 x^2_{e}=\pi Q_e^2\Big[\frac{4 q_{e}}{4+ q^2_{e}}\Big]^2 \end{equation} with $Q_e=Mq_e$. \section{Entropy function approach} The magnetically charged extremal RBH is an interesting object because its near horizon geometry is given by the topology of $AdS_2\times S^2$ and its action is already known. Let us attempt to derive the black hole entropy in Eq. (\ref{BHRBH}) using Sen's entropy function approach.
For this purpose, we consider an extremal black hole solution whose near horizon geometry is given by $AdS_2\times S^2$ with the magnetically charged configuration \begin{eqnarray} \label{e11} && ds^2\equiv g_{\mu\nu}dx^\mu dx^\nu = v_1\left(-r^2 dt^2+{dr^2\over r^2}\right) + v_2~ d \Omega^2_2, \\ \label{e12} && F_{\theta\phi} = {Q} \, \sin\theta\, , \end{eqnarray} where $v_i (i=1,2)$ are constants to be determined. Now, let us define the Lagrangian density $f(v_i, Q)$ as the remaining part after integrating the action (\ref{action}) over $S^2$ as follows~\cite{0505122}: \begin{equation} \label{e2} {f}(v_i, Q) = \frac{1}{16\pi} \int d\theta\, d\phi\, \sqrt{- g}\, \left[R\, -\,{\cal L}_M(B) \right]\,. \end{equation} Since $ R=-\frac{2}{v_1}+\frac{2}{v_2}$ and $B=\frac{2{Q}^2}{{v_2}^2}$, we obtain \begin{equation} {f}(v_i, {Q})=\frac{1}{2}v_{1}v_2\left[-\frac{1}{v_1}+\frac{1}{v_2} \, - \frac{Q^2}{v^2_2}\cosh^{-2}\Big(\frac{Q^2}{2M\sqrt{v_2}}\Big) \right]. \end{equation} Here, we choose the free parameter $a=Q^{3/2}/2M$ to have a RBH solution. For the magnetically charged extremal RBH, the entropy function is given by \begin{equation} \label{e31} {\cal F}(v_i, {Q}) = -2\pi {f}(v_i, {Q}) \, . \end{equation} In this case, the extremal values $v_i^e$ are determined by extremizing the function ${\cal F}(v_i, {Q}) $ with respect to $v_i$: \begin{eqnarray} \label{e33} \frac{\partial {\cal F}}{\partial v_1}&=&0 \Rightarrow v_2=Q^2\cosh^{-2} \left[\frac{Q^2}{2M\sqrt{v_2}}\right], \\ \frac{\partial {\cal F}}{\partial v_2}&=&0 \Rightarrow \frac{1}{v_1}=\frac{Q^2}{v^2_2}\cosh^{-2}\left[\frac{Q^2}{2M\sqrt{v_2}}\right] -\frac{Q^2}{v_2}\frac{\partial}{\partial v_2}\left(\cosh^{-2}\left[\frac{Q^2}{2M\sqrt{v_2}}\right]\right).\nonumber\\ \label{e332} \end{eqnarray} Actually, these are conventional attractor equations. Using the above relations, the entropy function at the extremum is given by \begin{equation} {\cal F}(v^e_{2}, Q)=\pi v^e_2. 
\end{equation} In order to find the extremal value of $v^e_2$, we introduce $Q = \tilde{q} M$ and $v^e_2 = M^2 \tilde{x}^2 $. Then, Eqs. (\ref{e33}) and (\ref{e332}) can be rewritten as the following equations \begin{eqnarray} \label{sol1}\frac{\tilde{x}^2}{\tilde{q}^2}&=&\cosh^{-2}(\frac{\tilde{q}^2}{2\tilde{x}}), \\ \label{sol2}\frac{1}{v_1}&=& \frac{\tilde{q}^2}{M^2{\tilde{x}}^4} \cosh^{-2}(\tilde{q}^2/2\tilde{x}) -\frac{\tilde{q}^4}{2M^2\tilde{x}^5} \frac{\sinh(\tilde{q}^2/2\tilde{x})}{\cosh^{3}(\tilde{q}^2/2\tilde{x})}, \end{eqnarray} which are identical to the Einstein equation in the near horizon geometry of the $AdS_2\times S^2$ in Ref.{~\cite{0403109}} except that $1/v_1$ is replaced by $h$. This means that the entropy function approach is equivalent to solving the Einstein equation on the $AdS_2 \times S^2$ background (the attractor equations). Of course, Eqs. (\ref{e33}) and (\ref{e332}) are not the full Einstein equation in (\ref{eineq}). \begin{figure}[t!] \centering \includegraphics{fig3.eps} \caption{Plot of the curvature radius $\tilde{x}$ of $S^2$ versus the parameter $\tilde{q}$. The two curves denote the solution space of the near horizon geometry to Eq. (\ref{sol1}), while the line denotes $\tilde{x}=\tilde{q}$ for the RN case with its extremal point $\tilde{x}=\tilde{q}=1$ ($\square$). The upper curve includes a point of $\bullet$, which corresponds to an extremal RBH. However, the lower curve belongs to the unphysical solution space because of negative $v_1$. $\diamond$ denotes the critical point at ($\tilde{q}_c,\tilde{x}_c$). (a)-(d) are introduced to connect the 2D dilaton potential in Fig. 4.}\label{fig.3} \end{figure} Since the above equations are nonlinear transcendental equations, we cannot solve them analytically. In the limit of $a \to 0$, one easily finds the extremal RN case such that $v^e_2=M^2\tilde{q}^2(\tilde{x}=\tilde{q})$, $v_1^e=v_2^e=Q^2$. Instead, we have to numerically solve Eq.
(\ref{sol1}) because of the nonlinearity between $\tilde{x}$ and $\tilde{q}$. Their solutions are depicted in Fig. 3. In this figure, the solid line corresponds to the solution space in which each set of $(\tilde{q}, \tilde{x})$ resides on the subspace $S^2$. There are two branches: the upper and lower ones which merge at the critical point of $ (\tilde{q}_c,\tilde{x}_c)=(1.325,0.735)$. Since the lower branch gives negative $v_1$, so that the geometry becomes de Sitter space instead of $AdS_2$, this branch should be ruled out. Note that the magnetically charged extremal RBH corresponds to the point $(\tilde{q}_{e},\tilde{x}_{e})=(1.056, 0.871)$. However, there is no way to arrive at this point even though the solution space contains such a point. In this case, the entropy function takes the form \begin{equation}\label{sen1} {\cal F}= \pi v^e_2= \pi M^2 \tilde{x}^2 =\pi M^2\tilde{q}^2 \cosh^{-2}\left(\frac{\tilde{q}^2}{2\tilde{x}}\right). \end{equation} We note that this entropy function depends on both $\tilde{q}$ and $\tilde{x}$, in contrast to the RN case of ${\cal F}_{RN}=\pi M^2 \tilde{x}^2=\pi Q^2$. Hence, unless one knows $\tilde{q}=q_e$ and $\tilde{x}=x_e$, we cannot obtain the Bekenstein-Hawking entropy of $S_{BH}=\pi M^2x_{e}^2$ in Eq. (\ref{BHRBH}). \section{Generalized entropy function approach} Before performing the conformal transformation, the entropy formula based on Wald's Noether charge formalism \cite{CC} takes the form \begin{equation} \label{WEF} S_{BH}=\frac{4\pi }{U''(r_e)}\left(q e - F(r_e)\right), \end{equation} where the generalized entropy function $F$ is given by \begin{equation} F(r_e)= \frac{1}{16\pi}\int_{r=r_e} d\theta d\varphi \sqrt{-g}\left[R-{\cal L}_M\right] \end{equation} with \begin{eqnarray} && R=-\frac{r^2U''+4 r U'+2U -2}{r^2},\\ && {\cal L}_M=\frac{2Q^2}{r^4}\cosh^{-2}\left[\frac{Q^2}{2Mr}\right]. \end{eqnarray} In this approach, one has to know the location of the degenerate horizon (the solution to the full Einstein equations).
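The numerical solution of Eq. (\ref{sol1}) is simple to reproduce. The sketch below solves $\tilde{x}=\tilde{q}\cosh^{-1}(\tilde{q}^2/2\tilde{x})$ for the upper-branch root at $\tilde{q}=1.056$ by bisection; the bracket is chosen so that it excludes the lower-branch root:

```python
import math

# Upper-branch root of the attractor equation x = q / cosh(q^2/(2x))
# at the extremal charge ratio q ~ 1.056 (bisection, stdlib only).
q = 1.056

def f(x):
    return x - q / math.cosh(q**2 / (2.0 * x))

lo, hi = 0.7, 1.0                      # brackets only the upper-branch root
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
x_root = 0.5 * (lo + hi)
```

The root lands at $\tilde{x}\approx 0.871$, i.e. on the point $\bullet$ of the upper curve in Fig. 3.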
After the integration over the angular coordinates, the generalized entropy function leads to \begin{equation} F(r_e)= \left.\frac{1}{4}\Big[-r^2U''(r)+2-r^2{\cal L}_M\Big]\right|_{r=r_e}=-\frac{1}{4}U''(r_e)r_e^2 \end{equation} because of ${\cal L}_M \mid_{r=r_e}=\frac{2}{r^2_e}$ and $U(r_e)=U'(r_e)=0$. Finally, for $e=0$ we obtain the correct form of the entropy from Eq. (\ref{WEF}) as \begin{equation} \label{WEFe1} S_{BH}=-\frac{4\pi }{U''(r_e)}F(r_e)=\pi r_e^2. \end{equation} Even though we find the Bekenstein-Hawking entropy using the generalized entropy formula, there is still no way to fix the location $r=r_e$ of the degenerate horizon. Hence we have to find another approach to calculate the entropy of an extremal RBH naturally. \section{2D dilaton gravity approach} Now, the remaining issue is how to incorporate the full equations of motion into the extremizing process to find the entropy of an extremal RBH. We start with the action of the 2D dilaton gravity in Eq. (\ref{repara-action}) \begin{equation}\label{JTA} \bar{I}_{RBH}=\int d^2x \sqrt{-\bar{g}}\left[\phi\bar{R}_2+V(\phi)\right]=\int d^2x \sqrt{-\bar{g}} \bar{F}, \end{equation} where the Ricci scalar and the dilaton potential are \begin{equation} \bar{R}_2=-\frac{U''}{\sqrt{\phi}},~V(\phi)=\frac{1}{2\sqrt{\phi}}- \frac{Q^2}{8\phi^{3/2}}\cosh^{-2}\left[\frac{Q^2}{4M\sqrt{\phi}}\right], \end{equation} respectively. The two equations of motion are obtained as \begin{eqnarray} \label{tee33} \nabla^2\phi&=& V(\phi), \\ \bar{R}_2&=&-V'(\phi) \label{tee34}, \end{eqnarray} which give the equations for a new attractor. The solution to these equations provides the ground state for the $AdS_2$-gravity of Jackiw-Teitelboim theory \cite{JT}. Without any gauge-fixing, the solution to Eq. (\ref{tee33}) may be a constant dilaton $\phi_0$ when $V(\phi_0)=0$, which implies \begin{equation} \label{tee44} \phi_0=\frac{Q^2}{4}\frac{1}{\cosh^{2}[\frac{Q^2}{4M\sqrt{\phi_0}}]}. \end{equation} \begin{figure}[t!]
\centering \includegraphics{fig4.eps} \caption{The solid curve: the dilaton potential $V(\phi)$ for the extremal RBH with $M=1,Q_e=1.056$ ((b) in Fig. 3). $V(\phi_0)=0$ is at $\phi_0=0.019$ (unphysical) and $\phi_0=0.19$ ($r_e=0.87$), where the latter denotes the degenerate horizon. The large-dashed curve is for no horizon with $Q=1.4$ ((d) in Fig. 3) where there is no point of $V(\phi)=0$. The small-dashed curve is for the critical case with $Q=1.325$ ((c) in Fig. 3), which implies one point of $V(\phi_0)=0$. The dotted curve is for another extremal black hole with $Q=0.8$ ((a) in Fig. 3). } \label{fig4} \end{figure} However, this is equivalent to Eq. (\ref{sol1}), which is one of the attractor equations in Sen's entropy function approach. As is shown in Fig. 4, there exists a point of $V(\phi_0)=0$ if $Q \le Q_c=1.325$, where $Q_c$ corresponds to the critical point $(\tilde{x}_c, \tilde{q}_c)$. Hence, $V(\phi_0)=0$ is simply another representation of the attractor equation (\ref{sol1}). On the other hand, one may expect that the solution to Eq. (\ref{tee34}) gives us some information on the location of the degenerate horizon. It can be rewritten as \begin{equation} U''(r)|_{r=r_e}=\frac{2Q^2}{16\phi^2_0} \frac{1}{\cosh^{2}(\frac{Q^2}{4M\sqrt{\phi_0}})} -\frac{Q^4}{32M\phi^{5/2}_0} \frac{\sinh(\frac{Q^2}{4M\sqrt{\phi_0}})}{\cosh^{3}(\frac{Q^2}{4M\sqrt{\phi_0}})} \label{tee45}, \end{equation} which is unfortunately nothing but another attractor equation (\ref{sol2}). Hence, it seems difficult to determine the location of the degenerate horizon of an extremal RBH using the conventional attractor equations of (\ref{tee44}) and (\ref{tee45}). In order to find the location of the degenerate horizon, we have to find the general solution to Eqs.
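The two zeros of $V(\phi)$ quoted in the caption of Fig. 4 are easy to reproduce. Writing the zero condition as $\phi=(Q^2/4)\cosh^{-2}[Q^2/(4M\sqrt{\phi})]$, bisection on each bracket gives (values $M=1$, $Q=1.056$ as in the figure):

```python
import math

# Zeros of the dilaton potential for M = 1, Q = 1.056: expected near
# phi0 ~ 0.019 (unphysical) and phi0 ~ 0.19 (degenerate horizon).
M, Q = 1.0, 1.056

def f(phi):
    # V(phi) = 0 rewritten as phi - (Q^2/4)/cosh^2(Q^2/(4 M sqrt(phi))) = 0
    return phi - (Q**2 / 4.0) / math.cosh(Q**2 / (4.0 * M * math.sqrt(phi)))**2

def bisect(lo, hi):
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

phi_small = bisect(0.005, 0.05)        # unphysical root
phi_large = bisect(0.10, 0.25)         # degenerate horizon, r_e = 2*sqrt(phi0)
```

The larger root reproduces $r_e=2\sqrt{\phi_0}\approx 0.87$, in agreement with the degenerate horizon found in Sec. 3.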
(\ref{tee33}) and (\ref{tee34}) by choosing a conformal gauge of $g_{tx}=0$ as~\cite{GKL} \begin{eqnarray} \frac{d\phi}{dx}&=&2(J(\phi)-{\cal C}), \\ ds^2&=&-(J(\phi)-{\cal C})dt^2+\frac{dx^2}{J(\phi)-{\cal C}}, \end{eqnarray} where the 2D mass function $J(\phi)$ is given by \begin{equation} J(\phi)=\int^{\phi}V(\tilde{\phi})d\tilde{\phi}=\sqrt{\phi}+M \tanh\Big(\frac{Q^2}{4M\sqrt{\phi}}\Big). \end{equation} Here ${\cal C}$ is a coordinate-invariant constant of integration, which is identified with the mass $M$ of the extremal black hole. A necessary condition that a 2D dilaton gravity admits an extremal RBH is that there exists at least one curve of $\phi=\phi_0={\rm const}$ such that $J(\phi)=M$. Actually, we have an important relation between the 4D metric function $U(r)$ and the 2D mass function $J(\phi)$ as \begin{equation} \sqrt{\phi}U(r(\phi))=J(\phi)-M \end{equation} with $r=2\sqrt{\phi}$. In addition, $J(\phi)$ should be monotonic in a neighborhood of $\phi_0$ with the attractor conditions $J'(\phi_0)=V(\phi_0)=0$ and $J''(\phi_0)=V'(\phi_0)\not=0$ to have the extremal black hole. These correspond to the attractor conditions in Eq. (\ref{attcon}). First, $J(\phi)=M$ determines the horizons $r=r_\pm$ \begin{equation} \label{EHJT} U(\phi_\pm)=1-\frac{M}{\sqrt{\phi_\pm}} \Big[1-\tanh \Big(\frac{Q^2}{4M\sqrt{\phi_\pm}}\Big)\Big]=0 \to U(r_\pm)=0. \end{equation} Considering the connection of $\phi_0=\frac{1}{4}r_e^2$, the attractor conditions of $J'(\phi_0)=0$ and $J''(\phi_0)\not=0$ imply the extremal configuration \begin{equation} \label{cforebh} U'(r_e)=0, ~~U''(r_e)\not=0. \end{equation} Then, combining Eq. (\ref{EHJT}) with Eq. (\ref{cforebh}) leads to the condition for finding the degenerate horizon $r=r_e$. Following Sec. 4, for $Q_e=Mq_e= 2 \sqrt{w_0}M$, we find the location of the degenerate horizon, $r_e=Mx_e=4Mw_0/(1+w_0)$.
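As a consistency check (not part of the original text), one can verify numerically that the 2D mass function is indeed an antiderivative of the dilaton potential, $J'(\phi)=V(\phi)$; the parameters below are illustrative:

```python
import math

# Finite-difference check that J(phi) = sqrt(phi) + M tanh(Q^2/(4 M sqrt(phi)))
# satisfies J'(phi) = V(phi).
M, Q = 1.0, 1.056

def J(phi):
    return math.sqrt(phi) + M * math.tanh(Q**2 / (4.0 * M * math.sqrt(phi)))

def V(phi):
    s = math.sqrt(phi)
    return 1.0 / (2.0 * s) - Q**2 / (8.0 * phi * s) / math.cosh(Q**2 / (4.0 * M * s))**2

phi, h = 0.3, 1e-6
Jp = (J(phi + h) - J(phi - h)) / (2.0 * h)   # central-difference J'(phi)
```

The agreement is at the level of the finite-difference truncation error, confirming the quadrature for $J(\phi)$.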
Here, we have the $AdS_2$ spacetime with negative constant curvature \begin{equation} \bar{R}_2|_{r=r_e}=-\frac{2h}{\sqrt{\phi_0}}= -\frac{1}{\sqrt{\phi_0}}U''(r_e)=-\frac{(1+w_0)^4}{32M^3w^3_0}. \end{equation} The generalized entropy function takes the form \begin{equation} \bar{F}^{RBH}(\phi_0)= - \sqrt{\phi_0} U''(r_e). \end{equation} Finally, for the magnetically charged extremal RBH, the desired Bekenstein-Hawking entropy is given by \begin{equation} \bar{S}^{RBH}_{BH}=-\frac{4\pi\sqrt{\phi_0}}{U''(r_e)}\bar{F}^{RBH}(\phi_0) =4\pi\phi_0=\pi r_e^2. \end{equation} Given $\tilde{x}=x_e$, $\tilde{q}=q_e$, this entropy can be exactly recovered from Eq. (\ref{sen1}) in the entropy function approach as \begin{equation} {\cal F}(x_e,q_e)=\pi M^2 q_e^2\cosh^{-2}\left(\frac{q^2_e}{2x_e}\right)=\pi r^2_e=\pi M^2x^2_e. \end{equation} Here, we also note that the $AdS_2$ curvature $\bar{R}_2(U''(r_e))$ and $\sqrt{\phi_0}$ are irrelevant to determining the entropy of the extremal RBH. Furthermore, we confirm that the entropy is invariant under the conformal transformation because $\sqrt{\phi_0}$ is a conformal factor~\cite{CC}. \section{Discussions} We have discussed a magnetically charged RBH derived from the coupled action of Einstein gravity and nonlinear electrodynamics. Although the action is simple, it is very interesting to investigate its extremal black hole because its action is nonlinear on the Maxwell side. This black hole solution is parameterized by the ADM mass $M$ and magnetic charge $Q$, while the free parameter $a=Q^{3/2}/2M$ is adjusted to make the resultant line element regular at the center. It turned out that the entropy function approach does not lead to a correct entropy of the Bekenstein-Hawking entropy. This is mainly because the magnetically charged extremal RBH is obtained from the coupled system of Einstein gravity and nonlinear electrodynamics.
In the limit of $a \to 0$ (Einstein-Maxwell theory), one finds the condition of $M=Q$ for the RN black hole. In this case, all approaches mentioned in this work provide the Bekenstein-Hawking entropy $S_{BH}^{RN}=\pi Q^2$ because of its linearity $\tilde{x}=\tilde{q}$, where $r=M\tilde{x}$ and $Q=M\tilde{q}$. For the $a=Q^{3/2}/2M$ case of the extremal RBH, there is no linearity between $\tilde{x}$ and $\tilde{q}$ and instead, their connection is determined by the nonlinearity of $\tilde{x}=\tilde{q}\cosh^{-1}(\tilde{q}^2/2\tilde{x})$ in Eq. (\ref{sol1}). It follows that the entropy function approach is sensitive to whether the nature of the central region of the black hole is regular or singular. The two attractor equations in Eqs. (\ref{e33}) and (\ref{e332}) are not enough to determine the entropy of the extremal RBH because we have many solutions satisfying the same attractor equations in Fig. 3. That is, Eq. (\ref{e33}) of $\tilde{x}=\tilde{q}\cosh^{-1}(\tilde{q}^2/2\tilde{x})$ does not imply the condition for determining the degenerate horizon of $U(x)=U'(x)=0,~U''(x)\not=0$. Solving the Einstein equations in the near horizon geometry is not sufficient to obtain the entropy of the extremal RBH. Hence, to find the correct form of the entropy of an extremal black hole, we introduce the generalized entropy formula combined with a 2D dilaton gravity. In this case, the new attractor equations are given by Eqs. (\ref{tee33}) and (\ref{tee34}), which contain full information on the location of the degenerate horizon. Using the 2D dilaton gravity approach, the new attractor equations provide the conditions (\ref{EHJT}) and (\ref{cforebh}) for determining the location of the degenerate horizon. Also we check that Eq. (\ref{e33}) is satisfied with $\tilde{x}=x_e$ and $\tilde{q}=q_e$, corresponding to $\bullet$ in Fig. 3. At this stage, we would like to mention the difference between the RN and RBH black holes in obtaining entropies of their extremal black holes.
In the case of the RN black hole, the attractor equation of $\tilde{x}=\tilde{q}$ ($r=Q$) with the free parameter $M$ is enough to determine the Bekenstein-Hawking entropy as the extremal black hole entropy. This means that the extremal condition of $\tilde{x}=\tilde{q}=1$ ($r=M=Q$) is not necessary for finding the extremal entropy. However, for the RBH with the free parameter $M$, we have to know the extremal position $x_e$ and the charge $q_e$ exactly to obtain the entropy because the attractor equation of $\tilde{x}=\tilde{q}\cosh^{-1}(\tilde{q}^2/2\tilde{x})$ is nonlinear. In this work, we have succeeded in finding the entropy of the extremal RBH by using the 2D dilaton gravity approach. This approach provides the location of the horizon together with the attractor conditions for the degenerate horizon, which are $U(x_e)=U'(x_e)=0,U''(x_e) \not=0$. We stress that this is not possible if one does not use the dilaton gravity approach known as the $s$-wave approximation of 4D gravity theory. Using Sen's entropy function approach, one can get $U'(x_e)=0$ and $U''(x_e) \not=0$ only partly. In other words, although Sen's entropy function approach is known to provide the Einstein equation in the near horizon geometry of $AdS_2$ spacetime as the attractor equations, this does not work for the regular black hole. Formally, we have to use the full Einstein equations to find the entropy of an extremal regular black hole. However, noting that the $s$-wave approximation preserves the attractor conditions, we have used the 2D dilaton gravity approach to find the location of the degenerate horizon for simplicity, instead of solving the full Einstein equations. In conclusion, we have successfully obtained the entropy of an extremal regular black hole from the generalized entropy formula based on Wald's Noether charge formalism combined with the 2D dilaton gravity approach. \vspace{0.5cm} \medskip \section*{Acknowledgments} We would like to thank L.-M. Cao for useful discussions. Two of us (Y. S. Myung and Y.-J.
Park) were supported by the Science Research Center Program of the Korea Science and Engineering Foundation through the Center for Quantum Spacetime of Sogang University with grant number R11-2005-021. Y. S. Myung was also in part supported by the Korea Research Foundation (KRF-2006-311-C00249) funded by the Korea Government (MOEHRD). Y.-W. Kim was supported by the Korea Research Foundation Grant funded by Korea Government (MOEHRD): KRF-2007-359-C00007.
\section{Introduction} The tunneling decay of particles trapped in an external potential $V(\vec{x})$ is a problem often encountered in nuclear, atomic and molecular physics. The most famous example is arguably the alpha decay of nuclei for which Gamow calculated the decay rates semiclassically \cite{Gamo28}. For a particle with mass $m$ such a decay can be described by resonance eigenstates $\psi_S$ of the Schr\"odinger equation \begin{equation} \left( - \frac{\hbar^2}{2m} \nabla^2 + V(\vec{x}) \right) \psi_S={\cal E} \psi_S \label{SE_comp} \end{equation} with complex eigenenergy ${\cal E}=E_S-{ \rm i } \Gamma/2$, often referred to as Siegert resonances, where the wavefunction satisfies outgoing wave (or Siegert) boundary conditions \cite{Sieg39}, i.~e. the wavefunction is given by an outgoing plane wave for $|\vec{x}| \rightarrow \infty$ (cf.~ equation (\ref{BC_S})). The imaginary part is also referred to as the decay rate since it leads to an exponential decay of the wavefunction, \begin{equation} \fl \quad \quad \quad \psi_S(\vec{x},t)=\exp(-{ \rm i } {\cal E }t/\hbar) \psi_S(\vec{x},0)=\exp(-\Gamma t/(2\hbar)) \exp(-{ \rm i } E_S t/\hbar) \psi_S(\vec{x},0) \,. \label{wf_decay} \end{equation} On the other hand, the tunneling decay of trapped particles is closely related to the problem of scattering of particles off the same potential since quantities like the transmission coefficient $|T(E)|^2$ or the scattering cross section show characteristic peaks near the resonance energies ${\cal E}$, which can be described by a Lorentz or Breit-Wigner profile \begin{equation} |T(E)|^2=\frac{(\Gamma/2)^2}{(E-E_S)^2+(\Gamma/2)^2} \end{equation} the width of which is given by the decay rate $\Gamma$ \cite{Sieg39}. As mentioned above, the decay rates can be calculated in the manner of Gamow using semiclassical approximations. Though these approximations are straightforward and easy to use they are often not very precise, only providing an order of magnitude.
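Note that the Breit-Wigner profile equals $1$ at $E=E_S$ and drops to $1/2$ at $E=E_S\pm\Gamma/2$, so its full width at half maximum is exactly $\Gamma$; a two-line check with illustrative numbers:

```python
# Breit-Wigner profile |T(E)|^2: its FWHM equals the decay rate Gamma
# (E_S and Gamma below are arbitrary illustrative values).
E_S, Gamma = 1.0, 0.1

def breit_wigner(E):
    return (Gamma / 2)**2 / ((E - E_S)**2 + (Gamma / 2)**2)
```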
On the other hand there are powerful complex-scaling based methods (see e.~g.~\cite{Mois98,Mois11}) (including related methods such as complex absorbing potentials) where the spatial coordinate is rotated to the complex plane, $\vec{x} \rightarrow \vec{x} \exp({ \rm i } \theta)$ by some sufficiently large angle $\theta$ to make the resonance wavefunctions square integrable, enabling the use of the usual techniques for calculating ordinary bound states. These methods are precise and highly efficient yet quite sophisticated and, apart from rare exceptions, only suited for numerical calculations. Alternative techniques, like, e.g., the stabilization method \cite{Mand93a} have both some advantages and disadvantages compared to complex scaling and are, however, not very intuitive and only aim at numerical applications. While there are a number of excellent texts (see, e.~g.~\cite{Mois98,Tayl72,Mois11} and references therein) that discuss the mathematical aspects of the problem (e.~g.~analytical continuation of the wavefunction in the complex plane) and the computational methods mentioned above, a complementary treatment assuming a different point of view could prove valuable. In this article we want to draw attention to the scarcely known Siegert approximation method for calculating complex resonances {\cite{Sieg39,08nlLorentz}} which is both intuitive and easy to implement but does not rely on semiclassical arguments. It yields good results for narrow resonances (i.e. $\Gamma/2 \ll E_S$, where the resonance energy $E_S$ is measured relative to the potential energy at $|\vec{x}| \rightarrow \infty$) and it can in some cases lead to closed form analytical approximations for the decay coefficient. 
Another advantage of this method is its straightforward applicability to resonances of the nonlinear Schr\"odinger equation that occur, e.~g., in the context of trapped Bose-Einstein condensates \cite{08nlLorentz,09ddshell} since it does not require properties like the linearity or analyticity of the differential equation. While direct complex scaling \cite{Schl06a,Schl06b} has successfully been applied to nonlinear Siegert resonances it is much less efficient than in the linear case and requires substantial modifications. Complex absorbing potential methods, which have proven more efficient in this context \cite{Mois04a,10nlws}, also require considerable modifications compared to the linear case. In the derivation of the Siegert approximation method given here, complex resonances are reviewed from an alternative, rather intuitive point of view which complements the usual more mathematical treatments of the problem by emphasizing some important aspects like the role of the continuity equation in this context as well as the similarities and differences between complex resonance states and so-called transmission resonances, i.~e.~scattering states corresponding to the maxima of the transmission coefficient (or scattering cross section respectively) for real eigenenergies. This paper is organized as follows: In section \ref{sec:Sieg} the Siegert approximation method is described, focussing on resonances of the one-dimensional linear Schr\"odinger equation for the sake of simplicity. In section \ref{sec:apps} the method is illustrated by means of several analytical and numerical applications. Finally, the main aspects of the article are summarized in section \ref{sec:summary}. \ref{app:num} contains a MATLAB code for calculating resonances. \section{The Siegert approximation method} \label{sec:Sieg} For narrow resonances (i.e.
$\Gamma/2 \ll E$), one can often neglect the decay rate $\Gamma$ in (\ref{SE_comp}) and thus obtain an approximate resonance wavefunction $\psi_{\rm approx}$ and an approximate real part $E_{\rm approx}$ of the resonance energy with very little effort. For the sake of simplicity we first consider symmetric one-dimensional finite range potentials \begin{equation} V(x)=\left\{ \begin{array}{cl} 0 & |x|>a\\ V_x(x) & |x| \le a \end{array} \right. \end{equation} with finite range $a$ and $V(-x)=V(x)$. An example of such a potential is the double barrier shown in the right panel of figure \ref{fig:Siegert} (bold blue curve) which can safely be assumed to be approximately equal to zero for $|x| \ge 20$. Also shown is the square $|\psi(x)|^2$ of the most stable resonance wavefunction which is strongly localized between the potential maxima and inherits the symmetry of the double barrier potential. Apart from the Siegert resonances $\psi_S$ with complex energies $E_S -{ \rm i } \Gamma/2$ obtained for outgoing wave boundary conditions \begin{equation} \psi_S'(\pm a)= \pm { \rm i } k_S \psi_S(\pm a), \quad \quad k_S=\sqrt{2m(E_S-{ \rm i } \Gamma/2)}/\hbar \label{BC_S} \end{equation} (the prime denotes a derivative by $x$) these potentials also possess so-called transmission (or unit) resonances $\psi_T(x)$ corresponding to real energies $E_T$ for which the potential is completely transparent, i.e. for the corresponding transmission coefficient we have $|T(E_T)|^2=1$. We will see further below that for narrow resonances $\psi_T(x)$ and $E_T$ are good approximations to $\psi_S$ and $E_S$ respectively.
To obtain the transmission coefficient $|T|^2$ we consider transmission through the potential $V(x)$ which is characterized by the following boundary conditions for the scattering wavefunction $\psi(x)$: On the left hand side we have a superposition of an incoming and a reflected plane wave \begin{equation} \psi(x)=A \exp({ \rm i } kx)+B \exp(-{ \rm i } kx), \quad x <-a \end{equation} where $k=\sqrt{2mE}/\hbar$ is the wavenumber corresponding to an energy $E$ of the incoming wave. On the right hand side we only have an outgoing wave, \begin{equation} \psi(x)=C \exp({ \rm i } kx) , \quad x>a \,. \end{equation} Thus the transmission coefficient reads \begin{equation} |T|^2=|C/A|^2 \,. \end{equation} We immediately see that for the transmission resonances with $|T|^2=1$ we have $|C|=|A|$ (and $B=0$), so that we can always achieve $C=A$ by multiplying the wavefunction with a constant phase factor. Thus the boundary conditions for the transmission resonances $\psi_T$ can be written as \begin{equation} \psi_T=A \exp({ \rm i } k_T x), \quad |x|>a \end{equation} or equivalently as \begin{equation} \psi_T'(\pm a)=i k_T \psi_T(\pm a) \text { with } k_T=\sqrt{2m E_T}/\hbar \,. \label{BC_T} \end{equation} The symmetry of both the Siegert resonance wavefunction $|\psi_S(-x)|^2=|\psi_S(x)|^2$ and the transmission resonance wavefunction $|\psi_T(-x)|^2=|\psi_T(x)|^2$ implies that the derivatives of $|\psi_S|^2$ and $|\psi_T|^2$ must vanish at $x=0$. Thus the respective boundary conditions (\ref{BC_S}) and (\ref{BC_T}) can be recast in the form \begin{equation} (|\psi_\alpha(x)|^2)'\big|_{x=0}=0, \quad \psi_\alpha'(a)={ \rm i } k_\alpha \psi_\alpha(a), \quad \alpha \in \{S,T\} \label{BC_Sym} \end{equation} with $k_\alpha=\sqrt{2m {\cal E}_\alpha}/\hbar$, ${\cal E}_S=E_S -{ \rm i } \Gamma/2$, ${\cal E}_T=E_T$.
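The scattering boundary-value problem just described is straightforward to treat numerically: impose a purely outgoing wave at the right edge, integrate the Schr\"odinger equation leftwards, and decompose the result at the left edge into incident and reflected plane waves. The sketch below does this with a fourth-order Runge-Kutta integrator for an illustrative double-Gaussian barrier (all parameters are assumptions, not taken from the text; units with $\hbar=m=1$):

```python
import cmath, math

# Illustrative symmetric double-Gaussian barrier
V0, d, sig = 2.0, 3.0, 0.5

def V(x):
    return V0 * (math.exp(-(x - d)**2 / (2 * sig**2))
                 + math.exp(-(x + d)**2 / (2 * sig**2)))

def transmission(E, xL=-8.0, xR=8.0, h=0.005):
    k = math.sqrt(2 * E)
    psi = cmath.exp(1j * k * xR)        # outgoing wave psi = e^{ikx} at x = xR
    dpsi = 1j * k * psi

    def rhs(x, p, dp):                  # psi'' = 2 (V - E) psi
        return dp, 2.0 * (V(x) - E) * p

    x = xR
    for _ in range(int(round((xR - xL) / h))):   # RK4, stepping leftwards
        k1p, k1d = rhs(x, psi, dpsi)
        k2p, k2d = rhs(x - h/2, psi - h/2*k1p, dpsi - h/2*k1d)
        k3p, k3d = rhs(x - h/2, psi - h/2*k2p, dpsi - h/2*k2d)
        k4p, k4d = rhs(x - h, psi - h*k3p, dpsi - h*k3d)
        psi -= h/6 * (k1p + 2*k2p + 2*k3p + k4p)
        dpsi -= h/6 * (k1d + 2*k2d + 2*k3d + k4d)
        x -= h
    # decompose psi = A e^{ikx} + B e^{-ikx} at the left edge
    A = 0.5 * (psi + dpsi / (1j * k)) * cmath.exp(-1j * k * x)
    B = 0.5 * (psi - dpsi / (1j * k)) * cmath.exp(1j * k * x)
    return 1.0 / abs(A)**2, A, B        # |T|^2 = |C/A|^2 with C = 1

T2, A, B = transmission(1.0)
```

Current conservation requires $|A|^2=|B|^2+|C|^2$, which the integrator reproduces to high accuracy; scanning `transmission` over $E$ then locates the transmission resonances as maxima of $|T|^2$.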
Therefore it is quite intuitive that for $\Gamma/2 \ll E_S$ we can make the approximations $E_S \approx E_T$ and \begin{equation} \psi_S(x) \approx\left\{ \begin{array}{cl} \psi_T(x) & 0 <x \le a\\ \psi_T(-x) & -a \le x<0 \end{array} \right. \,. \label{psi_app} \end{equation} This was shown rigorously by Siegert \cite{Sieg39}. Note that this approximate correspondence between complex energy Siegert resonances describing decay and real energy transmission resonances, which generally holds for narrow resonance widths, is in fact one of the main reasons for considering complex resonances in the context of scattering \cite{Sieg39}. Now we have approximations for the wavefunction and the real part of the eigenenergy but what about the imaginary part, i.e. the decay rate? To this end let us assume that the potential $V(x)$ has local maxima at $x= \pm b$ with $0< b < a$ (as, for example, the potential shown in figure \ref{fig:Siegert}). Then the probability of finding the particle described by our Schr\"odinger equation (\ref{SE_comp}) 'inside' the potential well, i.e. in the region $ -b \le x \le +b$ between the potential maxima is given by the norm \begin{equation} N=\int_{-b}^b |\psi_S(x)|^2 { \rm d } x \approx 2 \int_{0}^b |\psi_T(x)|^2 { \rm d } x\, \end{equation} of the wavefunction inside the well. The exponential decay behaviour of the resonance wavefunction given in equation (\ref{wf_decay}) implies that the norm $N$ decays according to \begin{equation} \partial_t N=-\frac{\Gamma}{\hbar}N \end{equation} so that the decay rate can be written as \begin{equation} \Gamma=-\hbar \frac{\partial_t N}{N} \,.
\label{gamma} \end{equation} The time derivative $\partial_t N= \int_{-b}^b \partial_t |\psi_S(x,t)|^2 { \rm d } x \approx 2 \int_{0}^b \partial_t |\psi_T(x,t)|^2 { \rm d } x$ of the norm can be found by means of the continuity equation for the resonance wavefunction $\psi_S$ which reads \begin{equation} \partial_t \rho_S = - j_S' \label{cont} \end{equation} with $\rho_S(x,t)=|\psi_S(x,t)|^2$ and the probability current \begin{equation} j_S= - \frac{{ \rm i } \hbar}{2m} \left(\psi_S^* \psi_S'-\psi_S \psi_S'^* \right) \,. \end{equation} Integrating the continuity equation (\ref{cont}) from $x=-b$ to $x=b$ we obtain $\partial_t N= -(j_S(b)-j_S(-b))$. Equation (\ref{psi_app}) implies that the currents are approximately given by $j_S(b) \approx j_T(b)$ and $j_S(-b) \approx -j_T(b)$. Furthermore, the current $j_T$ corresponding to the transmission resonance $\psi_T$ does not depend on the position $x$ and in particular $j_T(b)=j_T(a)$. For $x \ge a$ the transmission resonance wavefunction is given by a plane wave $\psi_T(x)=\psi_T(a) \exp({ \rm i } k_T (x-a))$ so that the current at $x=a$ simply reads $j_T(a)=\hbar k_T |\psi_T(a)|^2/m$. Thus the decay coefficient (\ref{gamma}) becomes \begin{equation} \Gamma \approx \hbar \frac{j_T(a)}{\int_{0}^b |\psi_T(x)|^2 { \rm d } x} = \frac{\hbar^2 k_T}{m} \frac{|\psi_T(a)|^2}{\int_{0}^b |\psi_T(x)|^2 { \rm d } x}\,. \label{Gamma_s} \end{equation} We call (\ref{Gamma_s}) the Siegert formula. Note that it only depends on $E_T$ and $\psi_T$ so that the exact values $E_S$ and $\psi_S$ are not required. In general, the Siegert approximation method consists of two steps: \begin{enumerate} \item \label{I1} Neglect at first the imaginary part $\Gamma/2$ (also called decay coefficient) of the complex resonance energy in the Schr\"odinger equation for the Siegert resonance wave function in order to obtain approximations to both the wave function and the real part of the resonance energy.
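As a worked illustration of the Siegert formula (\ref{Gamma_s}) (using an example not treated in this article), consider the double-delta barrier $V(x)=g[\delta(x-a)+\delta(x+a)]$ with $\hbar=m=1$, for which the single-barrier transfer matrices are known in closed form. The sketch below locates the lowest transmission resonance by scanning $|T(k)|^2$, builds $\psi_T$ between the barriers, and evaluates $\Gamma$ from (\ref{Gamma_s}) with $b=a$; the values $g=5$, $a=1$ are assumptions:

```python
import cmath, math

g, a = 5.0, 1.0                        # assumed barrier strength and half-width

def M_delta(k, x0):
    # transfer matrix of one delta barrier g*delta(x - x0) in the plane-wave
    # basis (e^{ikx}, e^{-ikx}); psi is continuous and psi' jumps by 2*g*psi
    gam = g / k
    return [[1 - 1j*gam, -1j*gam*cmath.exp(-2j*k*x0)],
            [1j*gam*cmath.exp(2j*k*x0), 1 + 1j*gam]]

def mat2(P, Q):
    return [[P[0][0]*Q[0][0] + P[0][1]*Q[1][0], P[0][0]*Q[0][1] + P[0][1]*Q[1][1]],
            [P[1][0]*Q[0][0] + P[1][1]*Q[1][0], P[1][0]*Q[0][1] + P[1][1]*Q[1][1]]]

def T2(k):
    # |T|^2 = 1/|M22|^2 for the combined barrier (det M = 1)
    M = mat2(M_delta(k, a), M_delta(k, -a))
    return 1.0 / abs(M[1][1])**2

# scan for the lowest transmission resonance, expected slightly below pi/(2a)
kT = max((0.8 + i * 0.75 / 20000 for i in range(20001)), key=T2)
ET = kT**2 / 2.0

# wavefunction between the barriers with unit outgoing amplitude (C = 1)
alpha = 1 + 1j * g / kT
beta = -1j * (g / kT) * cmath.exp(2j * kT * a)
n = 2000                               # trapezoid rule for the norm integral
N = sum((0.5 if i in (0, n) else 1.0)
        * abs(alpha * cmath.exp(1j * kT * a * i / n)
              + beta * cmath.exp(-1j * kT * a * i / n))**2
        for i in range(n + 1)) * a / n
Gamma = kT * 1.0 / N                   # Siegert formula; |psi_T(a)|^2 = 1
```

For these parameters the resonance sits slightly below the value $k=\pi/(2a)$ of an impenetrable well, and the resulting $\Gamma$ comes out small compared to $E_T$, i.e. the resonance is narrow as the method requires.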
(In the symmetric barrier case discussed above this is done by calculating the energies and scattering states for which the transmission probability of the corresponding scattering problem has a local maximum or, equivalently, by directly using the symmetry of the problem expressed in the boundary conditions given in equation (\ref{BC_Sym}).) \item \label{I2} Use the continuity equation to obtain a Siegert formula, i.~e.~an approximate expression for the decay coefficient $\Gamma$ (as given in equation (\ref{Gamma_s}) for the symmetric barrier case) which only depends on the approximate energy and approximate wavefunction calculated in step (\ref{I1}). \end{enumerate} The Siegert approximation method yields good results for narrow resonances, i.~e.~whenever the imaginary part $\Gamma/2$ of the resonance energy is small compared to the real part $E$. Unlike common discussions of complex resonance states, the above derivation is based on the intuitive picture of a matter wave flowing out of a potential well, emphasizing the role of the conservation of the probability current expressed by the continuity equation. The above treatment for symmetric potentials can be straightforwardly generalized to resonances of asymmetric barrier potentials where the matter wave is localized between two maxima at $x=b_-<0$ and $x=b_+>0$ with two finite ranges $a_-,b_-$ and $a_+,b_+$, but then the actual calculations in step (\ref{I1}) are generally more difficult because the symmetry condition (\ref{BC_Sym}) no longer applies. Now one has to consider the problem of transmission through the barrier and calculate the states $\psi_{\rm approx}$ and the respective real energies $E_{\rm approx}$ which correspond to local maxima of the transmission coefficient with $|T(E_{\rm approx})|^2<1$.
In analogy to the symmetric case one finds that the relation (\ref{Gamma_s}) for the decay coefficient is generalized to \begin{equation} \Gamma_{\rm approx}=\frac{\hbar^2 k_{\rm approx} |\psi_{\rm approx}(a_+)|^2+\hbar^2 k_{\rm approx} |\psi_{\rm approx}(a_-)|^2}{m \int_{b_-}^{b_+} |\psi_{\rm approx}(x)|^2 { \rm d } x} \label{Gamma_asym} \end{equation} where $k_{\rm approx}=\sqrt{2mE_{\rm approx}}/\hbar$. An important special case of an asymmetric barrier is a trap which is open on one side only whereas there is an impenetrable barrier on the other side. If the impenetrable barrier is on the left hand side (at $x=a_-$) we obtain the boundary condition \begin{equation} |\psi_{\rm approx}(a_-)|^2=0 \label{dings} \end{equation} and the wavefunction $\psi_{\rm approx}$ and the corresponding energy $E_{\rm approx}$ can be obtained by finding an approximate solution to the system of equations given by (\ref{dings}) and \begin{equation} \psi_{\rm approx}'(a_+)= { \rm i } k_{\rm approx} \psi_{\rm approx}(a_+) \,. \label{Gamma_oneside} \end{equation} An example of such a calculation is given in section \ref{subsec:delta_shell}. Note that for such a potential exact solutions for real energies do not exist. We further note that potentials with infinite range can usually be approximated by finite range potentials by choosing appropriate values for $a_\pm$ and that the Siegert approximation method can be generalized to two and three dimensions (see \cite{08nlLorentz} for a detailed discussion). \section{Applications} \label{sec:apps} \subsection{The finite square well potential} \label{subsec:square_well} \begin{figure}[htb] \begin{center} \includegraphics[width=0.3\textwidth] {boxpot.eps} \includegraphics[width=0.3\textwidth] {DShell.eps} \includegraphics[width=0.3\textwidth] {DGauss.eps} \caption{\label{fig:Siegert} Barrier potentials (bold blue) and approximate squared wavefunctions $|\psi|^2$ (red) of the corresponding most stable resonance states (units with $\hbar=m=1$).
Left: Finite square well (\ref{square}) with $V_0=-5$, $L=3$. Middle: Delta Shell potential (\ref{Shell}) with $\lambda=10$, $L=1$ (The Dirac delta function is symbolically represented by a vertical line). Right: Double barrier potential (\ref{double}) with $V_0=1$, $\lambda=0.1$.} \end{center} \end{figure} As a first analytically solvable example we consider the finite square well potential \begin{equation} V(x)=\left\{ \begin{array}{cl} 0 & |x|>L\\ V_0<0 & |x| \le L \end{array} \right. \label{square} \end{equation} with width $2L$ which is shown in the left panel of figure \ref{fig:Siegert}. As discussed at the beginning of the preceding section, the complex resonances for such a symmetric potential can be calculated approximately by finding its transmission resonances which satisfy the symmetry and boundary conditions (\ref{BC_Sym}) with $\alpha=T$. With the notation of the preceding section we can identify $b=a=L$, i.e. both the position of the local maximum and the range of the potential are given by the box length parameter $L$. Dropping the index $T$ we can write the transmission resonance wave function inside the square well as a superposition of plane waves, \begin{equation} \psi(x)=F { \rm e }^{{ \rm i } q x} + G { \rm e }^{-{ \rm i } q x} \, , \quad |x|<L \label{f1} \end{equation} where $q=\sqrt{2m(E-V_0)}/\hbar$ is real. The symmetry condition $|\psi(-x)|^2=|\psi(x)|^2$ leads to $FG^*=GF^*$ so that \begin{equation} |\psi(x)|^2=|F|^2+|G|^2+2FG^*\cos(2qx) \, . \label{fq1} \end{equation} The second condition in (\ref{BC_Sym}) reads $\psi'(L)={ \rm i } k \psi(L)$. Inserting Eq.~(\ref{f1}) we obtain \begin{equation} G=\frac{q-k}{q+k} { \rm e }^{2 { \rm i } qL} F \, . \label{G1} \end{equation} The condition $FG^*=GF^*$ then implies $\exp( 4 { \rm i } qL)=1$ which leads to the resonance condition $2qL=n \pi$ with integer $n$.
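The resonance condition $2qL=n\pi$ can be cross-checked numerically: the textbook transmission probability of a square well of width $2L$, $|T(E)|^2=\left[1+\frac{(q^2-k^2)^2}{4q^2k^2}\sin^2(2qL)\right]^{-1}$, equals unity exactly at these energies and drops below unity away from them. The following short Python sketch (an illustration only; the function names are ours, and Python here merely replaces the MATLAB used in the appendix) checks this for the parameters of the left panel of figure \ref{fig:Siegert}:

```python
import math

# Square well parameters in units hbar = m = 1 (left panel of the figure):
# depth V0 = -5, half-width L = 3.
V0, L = -5.0, 3.0

def transmission(E):
    """Textbook transmission probability |T(E)|^2 of the square well."""
    k = math.sqrt(2 * E)            # wavenumber outside the well
    q = math.sqrt(2 * (E - V0))     # wavenumber inside the well
    s = math.sin(2 * q * L)
    return 1.0 / (1.0 + (q**2 - k**2)**2 / (4 * q**2 * k**2) * s**2)

# Smallest n with E_n = V0 + (n*pi/(2L))^2/2 > 0 (resonance condition 2qL = n*pi):
n = 1
while V0 + (n * math.pi / (2 * L))**2 / 2 <= 0:
    n += 1
E_n = V0 + (n * math.pi / (2 * L))**2 / 2

print(n, E_n)                # first above-threshold resonance
print(transmission(E_n))     # 1.0 up to rounding (perfect transmission)
print(transmission(E_n + 0.05), transmission(E_n - 0.05))  # < 1 off resonance
```

For $V_0=-5$ and $L=3$ the first transmission resonance above threshold occurs at $n=7$.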
Thus we arrive at the celebrated formula \begin{equation} E_n=V_0+ \frac{\hbar^2 q^2}{2m}=V_0+ \frac{\hbar^2 \pi^2}{8mL^2}n^2 , \label{ERlin} \end{equation} for the transmission resonance energies of a finite square well potential. The number $n$ must be sufficiently large to make $E_n$ positive. Inserting Eq.~(\ref{G1}) into (\ref{fq1}) leads to \begin{equation} |\psi(x)|^2=|F|^2 \left[1 +2(-1)^n\frac{q-k}{q+k}\cos(2qx) + \left(\frac{q-k}{q+k}\right)^2 \right] \end{equation} (cf.~left panel of figure \ref{fig:Siegert}) and in particular \begin{equation} |\psi(L)|^2=|F|^2 \left[1+\frac{q-k}{q+k} \right]^2=|F|^2 \left(\frac{2q}{q+k} \right)^2 \, . \label{fa1} \end{equation} Inserting $|\psi(L)|^2$ and the integral \begin{eqnarray} \int_{0}^L |\psi(x)|^2 { \rm d } x &=& L|F|^2 \left( 1+ \left(\frac{q-k}{q+k}\right)^2\right) \nonumber \\ &=& L|F|^2 \frac{(q+k)^2+(q-k)^2}{(q+k)^2} \nonumber \\ &=& 2L|F|^2 \frac{q^2+k^2}{(q+k)^2} \end{eqnarray} into the Siegert formula (\ref{Gamma_s}) we obtain the decay coefficient \begin{equation} \Gamma_n =\frac{2 \hbar}{L} \sqrt{ \frac{2 E_n}{m} } \frac{q^2}{q^2+k^2} \end{equation} or \begin{equation} \Gamma_n=\frac{2 \hbar}{L} \sqrt{ \frac{2}{m} } \frac{\sqrt{E_n}(E_n-V_0)}{2E_n-V_0} \, \label{Gamma1} \end{equation} which is the well known textbook result for the decay coefficient (or resonance width) of a finite square well potential (see e.g. \cite{Mess91}), that is usually obtained by expanding the transmission coefficient in a Taylor series around $E=E_n$. \subsection{Delta Shell potential} \label{subsec:delta_shell} As an analytically solvable example for complex resonances in asymmetric potentials we consider the one-dimensional delta-shell potential \begin{equation} V(x)= \left\{ \begin{array}{cl} +\infty & x \le 0 \\ (\hbar^2/m) \lambda \, \delta(x-L) & x>0\\ \end{array} \right.
\, \text{ with } \lambda, L>0 \label{Shell} \end{equation} consisting of an infinitely high potential barrier at $x=0$ and a delta barrier at $x=L$ which is illustrated in the middle panel of figure \ref{fig:Siegert}. To find the solution of the corresponding Schr\"odinger equation \begin{equation} \left[ -\frac{\hbar^2}{2m} \frac{{ \rm d }^2}{{ \rm d } x^2} + V(x) \right] \psi(x) = \left(E-{ \rm i } \Gamma/2 \right) \psi(x) \end{equation} we make the ansatz \begin{equation} \psi(x)= \left\{ \begin{array}{cl} I\, \sin(k x) & 0 \le x \le L \\ C \, { \rm e }^{{\rm i}kx} & x>L \end{array} \right. \, \text{ and } E- { \rm i } \Gamma/2 =\frac{\hbar^2 k^2}{2m} \label{DShell_ansatz} \end{equation} for the wavefunction which satisfies the required outgoing wave (Siegert) boundary condition for $x \rightarrow \infty$. The matching conditions for the wavefunction at $x=L$ read \begin{equation} \psi(L-\epsilon)=\psi(L+\epsilon)\,, \quad \quad \psi'(L-\epsilon)+2 \lambda \psi(L)=\psi'(L+\epsilon) \end{equation} where the discontinuity in the derivative is caused by the delta function potential \cite{Mess91}. This leads to \begin{equation} I \sin(k L)=C \, { \rm e }^{{\rm i}kL} \, , \quad \quad k I \cos(kL)+2 \lambda I \sin(k L)= { \rm i } k C \, { \rm e }^{{\rm i}kL} \end{equation} or \begin{equation} k \cos(kL)+(2 \lambda -{ \rm i } k)\sin(k L)= 0 \,. \label{Dlin} \end{equation} The real and imaginary part of the eigenenergy can now be found by numerically solving the transcendental equation (\ref{Dlin}) in the complex plane. In the following we show how the Siegert approximation method presented in section \ref{sec:Sieg} can be used to obtain a convenient approximation in an analytically closed form. To obtain the approximate real part of the energy and approximate wavefunction as required in step (\ref{I1}) of the method we make the following approximation: Imagine that the delta potential at $x=L$ is infinitely strong, i.e.
the limit $\lambda \rightarrow \infty$. Then the system is a closed box of length $L$ and the wavenumber satisfies $k L= n \pi$ with an integer $n$. For a strong but still finite delta potential, i.e. $|k|\ll\lambda$, we therefore assume $k L= n \pi + \delta L$ with $\delta L <0$ and $|\delta L| \ll 1$. Inserting this ansatz into the real part of Eq.~(\ref{Dlin}) and expanding it up to second order in $\delta L$ yields \begin{equation} \delta =\frac{2 \lambda L +1}{n \pi L}-\sqrt{\left(\frac{2 \lambda L +1}{ n \pi L}\right)^2+\frac{2}{L^2}} \, . \end{equation} Inserting $k= n \pi/L +\delta$ into equation (\ref{DShell_ansatz}) yields approximations for the resonance wavefunction and the real part of the eigenenergy. An approximation of the imaginary part is obtained by inserting these results into the Siegert formula (\ref{Gamma_asym}), \begin{equation} \fl \quad \quad \quad \Gamma/2 \approx \frac{\hbar^2k |\psi(L)|^2}{2m\int_0^L |\psi(x)|^2 { \rm d } x}=\frac{2 \hbar^2}{m} \left(\frac{n \pi}{L}+\delta \right)^2\frac{\sin^2(\delta L)}{2n \pi +2\delta L-\sin(2 \delta L)}, \end{equation} where we have identified $b_+=a_+=L$ and $b_-=a_-=0$. For a potential with $\lambda=10$, $L=1$ and scaled units $\hbar=m=1$ the Siegert approximation yields ${\cal E}_{\rm approx}=4.481-{ \rm i } 0.062$ for the most stable resonance (cf.~middle panel of figure \ref{fig:Siegert}) which is in good agreement with the numerically exact result ${\cal E}_{\rm exact}=4.487-{ \rm i } 0.061$. A treatment of the equivalent problem within the context of the nonlinear Schr\"odinger equation can be found in \cite{09ddshell}. \subsection{Double barrier} \label{subsec:DGauss} As an example of a numerical problem we consider the double barrier potential \begin{equation} V(x)=\frac{V_0}{2} x^2 \exp(-\lambda x^2) \label{double} \end{equation} with $V_0=1$ and $\lambda=0.1$ so that the position of the potential maxima is given by $b = 1/\sqrt{\lambda} \approx 3.16$ using units where $\hbar=m=1$.
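Before turning to the numerics, the delta-shell estimate just quoted is easy to reproduce. The following Python sketch (not part of the original treatment; Python is used here purely for illustration in place of the MATLAB of the appendix) evaluates $\delta$, the approximate wavefunction $\psi(x)=\sin(kx)$ and the Siegert quotient $\Gamma/2=\hbar^2 k\,|\psi(L)|^2/(2m\int_0^L|\psi(x)|^2{ \rm d } x)$, with the integral done in closed form:

```python
import math

# Delta-shell parameters in units hbar = m = 1 (as in the text).
lam, L, n = 10.0, 1.0, 1       # n = 1: most stable resonance

# Perturbative shift of the box wavenumber, k = n*pi/L + delta
c = (2 * lam * L + 1) / (n * math.pi * L)
delta = c - math.sqrt(c**2 + 2 / L**2)

k = n * math.pi / L + delta
E = 0.5 * k**2                                 # real part of the energy

# Siegert formula with psi(x) = sin(k x):
#   integral_0^L sin^2(kx) dx = L/2 - sin(2kL)/(4k)   (closed form)
norm = L / 2 - math.sin(2 * k * L) / (4 * k)
Gamma_half = k * math.sin(k * L)**2 / (2 * norm)

print(E, Gamma_half)   # approx 4.481 and 0.062, as quoted in the text
```

Running this reproduces ${\cal E}_{\rm approx}\approx 4.481-{ \rm i }\,0.062$ to the quoted digits.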
In order to calculate the approximate resonance wavefunction and real part of the energy as required in step (\ref{I1}) of the method we solve the boundary value problem given by equation (\ref{BC_Sym}) by means of a shooting procedure. We choose the cutoff parameter for the potential $a=20$ to ensure that $V(x) \approx 0$ for $|x|>a$ so that the wavefunction is well approximated by a plane wave in that region. Starting with initial conditions given in equation (\ref{BC_Sym}) we integrate the Schr\"odinger equation from $x=-a$ to $x=0$ using a standard Runge-Kutta solver. By means of a bisection method the real energy $E$ is adapted such that the boundary condition at $x=0$ is satisfied. The decay rate is again obtained by inserting this value of $E$ and the corresponding wavefunction into the Siegert formula (\ref{Gamma_s}). More details on the actual numerical implementation of the method can be found in \ref{app:num}. \begin{table}[htbp] \caption{\label{tab-compare-Sieg-cs} Complex eigenenergies for the three most stable resonances of the potential (\ref{double}) calculated using complex scaling (CS) and the Siegert approximation (S).} \begin{indented} \item[] \begin{tabular}{lll} \br $n$ & ${\cal E}_{\rm S}$ & ${\cal E}_{\rm CS}$ \\ \mr 1 & $0.4601- { \rm i } \, 9.62 \times \, 10^{-7}$ & $0.4601 - { \rm i } \, 9.6204 \times \, 10^{-7}$ \\ 2 & $1.2804 -{ \rm i } \, 1.70 \times \, 10^{-3}$ & $1.2804 - { \rm i } \, 1.6737 \times \, 10^{-3}$ \\ 3 & $1.88 {\lineup \0\0}- { \rm i } \, 7 {\lineup \0} \times \, 10^{-2}$ & $1.8531 - { \rm i } \, 6.7240 \times \, 10^{-2}$ \\ \br \end{tabular} \end{indented} \end{table} The right panel of figure \ref{fig:Siegert} shows the potential (\ref{double}) and the square $|\psi|^2$ of the most stable resonance wavefunction calculated with the Siegert approximation method.
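For readers who prefer Python to the MATLAB listing of the appendix, the shooting procedure just described can be sketched as follows (a simplified port, not the author's original program; the step number and bisection tolerance are our own choices):

```python
import math

V0, lam = 1.0, 0.1          # double barrier parameters, hbar = m = 1
a, N = 20.0, 4000           # cutoff and number of Runge-Kutta steps

def V(x):
    return 0.5 * V0 * x * x * math.exp(-lam * x * x)

def crit(E):
    """Integrate psi'' = 2(V - E) psi from x = -a to x = 0 with plane-wave
    initial data psi' = i k psi, and return d|psi|^2/dx at x = 0
    (this vanishes at a transmission resonance)."""
    k = math.sqrt(2 * E)
    dx = a / N
    x, psi, dpsi = -a, 1e-3 + 0j, 1j * math.sqrt(2 * E) * 1e-3
    f = lambda p, d, xx: (d, 2 * (V(xx) - E) * p)   # first-order system
    for _ in range(N):                              # classical RK4 steps
        k1 = f(psi, dpsi, x)
        k2 = f(psi + 0.5*dx*k1[0], dpsi + 0.5*dx*k1[1], x + 0.5*dx)
        k3 = f(psi + 0.5*dx*k2[0], dpsi + 0.5*dx*k2[1], x + 0.5*dx)
        k4 = f(psi + dx*k3[0], dpsi + dx*k3[1], x + dx)
        psi  += dx * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]) / 6
        dpsi += dx * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]) / 6
        x += dx
    return 2 * (psi.conjugate() * dpsi).real

def bisect(f, e1, e2, tol=1e-7):
    """Bisection for the root of f in [e1, e2] (a sign change is assumed)."""
    f1 = f(e1)
    while e2 - e1 > tol:
        mid = 0.5 * (e1 + e2)
        fm = f(mid)
        if f1 * fm <= 0:
            e2 = mid
        else:
            e1, f1 = mid, fm
    return 0.5 * (e1 + e2)

E_res = bisect(crit, 0.3, 0.6)
print(E_res)    # real part of the ground state resonance, cf. the table
```

With the window $[0.3,0.6]$ used in the appendix this locates the real part of the most stable resonance.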
Table \ref{tab-compare-Sieg-cs} compares our results for the three most stable resonances with numerically exact results calculated with the complex scaling method described in \cite{02computing}. We see that our simple approximation yields very good results for the ground state since its decay rate is small. The values for the first and second excited states demonstrate that the Siegert approximation becomes less accurate with increasing decay rates. A generalization of the same problem to the nonlinear Schr\"odinger equation is straightforward and can be found in \cite{08nlLorentz}. \section{Summary and conclusion} \label{sec:summary} In this article the Siegert approximation method for calculating complex resonance states was presented in an intuitive and straightforward manner which at the same time clearly points out the similarities and differences between resonances of transmission coefficients (or similar quantities) and resonances in the complex plane (Siegert resonances) as well as the role of the continuity equation in this context. It was illustrated by two analytically solvable example problems and a numerical application. The author hopes that the present article, in addition to drawing attention to a useful and easily applicable computational tool, offers an alternative, rather intuitive point of view for a better understanding of complex resonances which complements other, more technical treatments. \ack The author would like to thank Hans J\"urgen Korsch and Nina Lorenz for helpful discussions and comments. Financial support from a scholarship of the Universit\'e Libre de Bruxelles is gratefully acknowledged. \begin{appendix} \section{Numerical calculation} \label{app:num} The following commented MATLAB code implements the numerical algorithm for calculating resonances of the one-dimensional symmetric finite range potential described in section \ref{subsec:DGauss}.
For pedagogical reasons the program makes use of several global variables, which give the code a simple structure but make it less flexible and elegant. For the same reasons, and in order to achieve compatibility with both MATLAB and the Open Source software OCTAVE, the integration of the Schr\"odinger equation is performed by means of a straightforward implementation of the classical Runge-Kutta method, which requires a rather high number of grid points. Thus the present code can be made a lot more efficient by using more sophisticated integrators like, e.g., MATLAB's {\verb ODE45 } or OCTAVE's {\verb lsode }. The program can be straightforwardly adapted to other symmetric potentials by changing the function $V(x)$ and providing the corresponding position $b$ of the potential maximum that confines the wavefunction as well as a suitable cutoff parameter $a$. For a potential open on one side only the boundary condition criterion $crit$ must be modified according to equation (\ref{dings}). \begin{lstlisting}[language=Matlab, breaklines=true]
function [E_res, Gamma, psi, Dpsi, x]= Num_Siegert
% Siegert approximation for the double barrier potential:
% shooting from x=a<0 to x=0 combined with a bisection in the energy.
global V0 lambda m hbar a Nx
V0=1; lambda=0.1; hbar=1; m=1;   % potential parameters and units
a=-20; Nx=10000;                 % cutoff and number of grid points
Emin=0.3; Emax=0.6;              % energy window for the bisection
E_res=bisect(@solve_SE,Emin,Emax,10^-5)
[crit,psi,Dpsi,x]=solve_SE(E_res);
k=sqrt(2*m*E_res)/hbar;
dx=(0-a)/Nx;
b=1./sqrt(lambda);               % position of the potential maximum
% Siegert formula for the decay coefficient:
Gamma=hbar^2*k/m*abs(psi(1)).^2./(sum(abs(psi).^2.*(x>=-b)*dx))
Gamma_half=Gamma/2
psi=psi./sqrt(sum(abs(psi).^2.*(x>=-b).*dx));  % normalize inside the well
figure(1); hold on
plot(x,abs(psi).^2)
plot(-x,abs(psi).^2)
plot(x,V(x),'r')
plot(-x,V(x),'r')
hold off
xlabel('x')

function [crit,psi_v,Dpsi_v,x_v]=solve_SE(E)
% integrate the Schroedinger equation from x=a to x=0 with a classical
% Runge-Kutta scheme; crit = d|psi|^2/dx at x=0 vanishes at a resonance
global m hbar a Nx
psi_v=[]; Dpsi_v=[]; x_v=[];
k=sqrt(2*m*E)/hbar;
x=a; psi=0.001; Dpsi=i*k*psi;    % plane wave boundary condition at x=a
x_v=[x_v x]; psi_v=[psi_v psi]; Dpsi_v=[Dpsi_v Dpsi];
dx=(0-a)/Nx;
for j=1:Nx
  y0=[psi; Dpsi];
  dy_dx0=Schroedinger(y0,x,E);
  yA=y0+0.5*dx*dy_dx0; dy_dxA=Schroedinger(yA,x+0.5*dx,E);
  yB=y0+0.5*dx*dy_dxA; dy_dxB=Schroedinger(yB,x+0.5*dx,E);
  yC=y0+dx*dy_dxB;     dy_dxC=Schroedinger(yC,x+dx,E);
  y1=y0+dx*(dy_dx0+2*(dy_dxA+dy_dxB)+dy_dxC)/6;
  psi=y1(1); Dpsi=y1(2); x=a+j*dx;
  x_v=[x_v x]; psi_v=[psi_v psi]; Dpsi_v=[Dpsi_v Dpsi];
end;
crit=(psi*Dpsi'+psi'*Dpsi);

function dy_dx=Schroedinger(y,x,E)
global m hbar
dy_dx = [y(2); 2*m/hbar^2 * (V(x)-E).*y(1)];

function potential = V(x)
global V0 lambda
potential = V0.*0.5.*x.^2.*exp(-lambda.*x.^2);

function [x0] = bisect(func,x1,x2,tol)
% simple bisection for a root of func in [x1,x2]
f1=feval(func,x1);
if f1 == 0, x0=x1; return; end
f2=feval(func,x2);
if f2 == 0, x0=x2; return; end
if f1*f2 >= 0 | x1 >= x2
  x1, f1
  x2, f2
  error('No root found due to initial values x1, x2');
end
for i=1:100
  x0 = 0.5*(x2+x1);
  if x2 - x1 < tol
    return;
  end
  f0=feval(func,x0);
  if f0*f1 <= 0
    x2=x0; f2=f0;
  else
    x1=x0; f1=f0;
  end
end
error('No root found')
\end{lstlisting} \end{appendix} \section*{References}
\section{Introduction} In this paper we discuss the explicit formulas for the solution of the following singular generalized heat and wave equations on the Euclidean space $\R^n$:\\ $$ \left \{\begin{array}{cc}\left(\frac{\partial}{\partial t}+\frac{k}{t}\right) u(t,X)=\Delta u(t,X);(t,X)\in \R^\ast_+\times \R^n\\ u(0,X)=f(X) ; f\in C^\infty(\R^n)\end{array} \right.\eqno(1.1) $$ $$ \left \{\begin{array}{cc}\left(\frac{\partial}{\partial t}+\frac{k}{t}\right)\left(\frac{\partial}{\partial t}-\frac{k}{t}\right) w(t, X)=\Delta w(t, X); (t, X)\in \R^\ast_+\times \R^n\\ w(0,X)=0, w_t(0, X)=g(X), g\in C^\infty(\R^n)\end{array} \right.\eqno(1.2)$$ where $$\Delta=\frac{\partial^{2}}{\partial X_{1}^{2}}+\frac{\partial^{2}}{\partial X ^{2}_{2}}+...+\frac{\partial^{2}}{\partial X^{2}_{n}}\eqno(1.3)$$ is the usual $n$-dimensional Euclidean Laplacian on $\R^n$ and $k$ is a real number.\\ The mathematical interest in these equations comes mainly from the fact that the time inverse potential $\frac{k}{t}$ (resp. the time inverse square potential $\frac{k(1-k)}{t^2}$) is homogeneous of degree $-1$ (resp. $-2$) and therefore scales exactly like $\partial/\partial t$ (resp. $\partial^{2}/\partial t^{2}$).\\ An inconvenience of the time-dependent potential is the absence of a relation between the semi-group of the Schr\"odinger equation and the spectral properties of the operator. The space inverse potential $k/x$ is called the Coulomb potential and is widely studied in the physical and mathematical literature $[1]$.\\ The space inverse square potential $k(1-k)/x^2$ arises in several contexts, one of them being the Schr\"odinger equation in non-relativistic quantum mechanics (Reed and Simon $[7]$). For example, the Hamiltonian for a spin-zero particle in a Coulomb field gives rise to a Schr\"odinger operator involving the space inverse square potential (Case $[2]$).
The Cauchy problem for the wave equation with the space inverse square potential in Euclidean space $\R^n$ has been extensively studied (Cheeger and Taylor $[3]$; Planchon et al. $[6]$). The case most frequently considered is $k=0$; the equations in $(1.1)$ and $(1.2)$ then turn into the classical heat and wave equations on the Euclidean space $\R^n$, and these equations appear in several branches of mathematics and physics (Folland $[4]$, p.143, 171). The main objective of this paper is to solve the Cauchy problems $(1.1)$ and $(1.2)$. \section{Singular heat equation} {\bf Theorem 2.1} The generalized singular heat equation in $(1.1)$ has the following general solution\\ $\varphi(t,X)=A t^{-n/2}{}_1F_1\left(\frac {n}{2}-k, \frac{ n}{2}, \frac{|X|^2}{4 t}\right)+$\\ $$B t^{-n/2}\exp{\left(-\frac{|X|^{2}}{4t}\right)}U\left(k, \frac{n}{2},\frac{|X|^{2}}{4t}\right)\eqno(2.1)$$ where $A$ and $B$ are complex constants and ${}_1F_1\left(a, c, z\right)$ and $U(a, c, z)$ are the confluent hypergeometric functions of the first and the second kind given respectively by ([5], p.263):\\ $${}_1F_1(a; c; z)=\sum_{k=0}^{\infty}\frac{(a)_k}{(c)_k k!} z^k\ \ \ \ c\neq 0,-1,-2,\ldots
\eqno(2.2) $$ $$U(a, c, z)=\frac{\pi}{\sin\pi c}\left[\frac{{}_1F_1(a; c; z)}{\Gamma(c)\Gamma(1+a-c)}-z^{1-c}\frac{{}_1F_1(a+1-c; 2-c; z)}{\Gamma(a)\Gamma(2-c)}\right]\eqno(2.3)$$ where as usual $(a)_n$ is the Pochhammer symbol defined by $$ (a)_n=\frac{\Gamma(a+n)}{\Gamma(a)}\eqno(2.4) $$ and $\Gamma$ is the classical Euler function.\\ {\bf Proof} Using the geodesic polar coordinates centred at $X$, $Y=X+r\omega , r>0 ;\omega \in S^{n-1}$, where $S^{n-1}$ is the sphere of dimension $n-1$, and setting $y=r^{2}$ in the generalized singular heat equation in $(1.1)$ we obtain $$\left[4y(\partial^{2}/\partial y^{2})+2n(\partial/\partial y)\right]\Psi(t,y)=\left[(\partial/\partial t)+(k/t)\right]\Psi (t,y)\eqno(2.5)$$ By the change of function and of variable $$\Psi (t,y)=t^{-n/2}\Phi(t,y) ; \quad z=-y/4t \eqno(2.6)$$ the equation $(2.5)$ is transformed into the following confluent hypergeometric equation $$\left[z(d^{2}/d z^{2}) +((n/2)-z)(d/dz)\right]\Phi(z)-((n/2)-k)\Phi(z)=0\eqno(2.7)$$ with parameters $a=n/2-k; c=n/2$ ($[5]$ p.$268$). Appropriate independent solutions of this equation are ($[5]$ p.$270$): ${}_1F_{1}\left(a,c,z\right)$ and $\exp(z) U(c-a,c,-z)$.
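The series $(2.2)$ and the combination $(2.3)$ are straightforward to evaluate numerically, which gives a quick (non-rigorous) sanity check of the formulas: truncating $(2.2)$ reproduces ${}_1F_1(a;a;z)=e^z$ and the Kummer transformation ${}_1F_1(a;c;z)=e^z\,{}_1F_1(c-a;c;-z)$, and $(2.3)$ reproduces the small-$z$ behaviour $U(a,c,z)\sim(\Gamma(c-1)/\Gamma(a))z^{1-c}$ for $1<\Re c<2$ (cf. $(2.12)$). A short Python sketch (the function names are ours, not from the paper):

```python
import math

def hyp1f1(a, c, z, terms=100):
    # truncated Kummer series (2.2)
    s, t = 1.0, 1.0
    for n in range(terms):
        t *= (a + n) / ((c + n) * (n + 1)) * z
        s += t
    return s

def hypU(a, c, z, terms=100):
    # second-kind function via the combination (2.3); c must not be an integer
    pref = math.pi / math.sin(math.pi * c)
    t1 = hyp1f1(a, c, z, terms) / (math.gamma(c) * math.gamma(1 + a - c))
    t2 = z**(1 - c) * hyp1f1(a + 1 - c, 2 - c, z, terms) \
         / (math.gamma(a) * math.gamma(2 - c))
    return pref * (t1 - t2)

# 1F1(a;a;z) = e^z
print(hyp1f1(1.0, 1.0, 0.7), math.exp(0.7))
# Kummer transformation
print(hyp1f1(0.3, 1.7, 0.5), math.exp(0.5) * hyp1f1(1.4, 1.7, -0.5))
# U(a,c,z) ~ Gamma(c-1)/Gamma(a) z^(1-c) as z -> 0 for 1 < c < 2
z = 1e-6
print(hypU(0.8, 1.5, z) / (math.gamma(0.5) / math.gamma(0.8) * z**(-0.5)))
```

The last ratio tends to $1$ as $z\to 0$, in agreement with the asymptotics used below.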
From the formulas $(2.5),(2.6)$ and $(2.7)$ we conclude that the function $\varphi$ in $(2.1)$ is the general solution of the generalized singular heat equation in $(1.1)$.\\ {\bf Theorem 2.2} For $n\geq 2$ and $ k\neq 0,-1,-2,...$, the Cauchy problem for the singular generalized heat equation $(1.1)$ has the unique solution given by $$u(t,X)=\int_{\R^{n}}H^{k}_{n}(t, X, Y)f(Y)dm(Y)\eqno(2.8)$$ where $$H^k_n(t,X,Y)=\Gamma(k)(4\pi t)^{-n/2}\exp{\left(-\frac{|X-Y|^{2}}{4t}\right)}U\left(k, \frac{n}{2}, \frac{|X-Y|^{2}}{4t}\right)\eqno(2.9)$$ and $U(a, c, z)$ is the confluent hypergeometric function of the second kind given in $(2.3)$.\\ {\bf Proof} In view of Theorem 2.1, to finish the proof of the theorem it remains to show the limit condition in $(1.1)$; for this we recall the asymptotic behavior of the confluent hypergeometric function $ U(a,c,z)$ ($[5]$ p.$288$-$289$):\\ for $ z\rightarrow +\infty $\\ $$U(a,c,z)=z^{-a}+ O(|z|^{-a-1}) \eqno(2.10)$$ and for $z\longrightarrow 0$ $$U(a,c,z)=(1/\Gamma(a))\left[\log z+ \psi(a)-2\gamma \right]+O(|z\log z|) ,c=1 \eqno(2.11)$$ $$U(a,c,z)=(\Gamma(c-1)/\Gamma(a))z^{1-c}+ O(1) , 1<\Re c <2 \eqno(2.12)$$ $$U(a,c,z)=(\Gamma(c-1)/\Gamma(a))z^{1-c}+ O(|\log z|) , c=2 \eqno(2.13)$$ $$U(a,c,z)=(\Gamma(c-1)/\Gamma(a))z^{1-c}+ O(|z|^{\Re c-2}) , \Re c\geq 2, c\neq 2\eqno(2.14)$$ Using the geodesic polar coordinates centred at $X$, and by setting $y=r^{2};z=y/4t$ in $(1.1)$ we get $$u(t,X)=(\Gamma(k)/2\pi^{n/2})\int^{\infty}_{0}\exp(-z)U(k,n/2,z)z^{(n/2)-1}f_X^{\#}(\sqrt{4tz})dz\eqno(2.15)$$ with $$f_X^{\#}(r)=\int_{S^{n-1}}f(X+r\omega)d\omega\eqno(2.16)$$ Taking the limit in $(2.15)$, in view of the formulas $(2.10)-(2.14)$ we can interchange the limit and the integral and we obtain $$\lim_{t\longrightarrow 0}u(t, X)=(\Gamma(k)/2\pi^{n/2}) f_X^{\#}(0)\int^{\infty}_{0}\exp(-z)U(k, n/2, z)z^{(n/2)-1}dz\eqno(2.17)$$ and by the formula ($[1]$ p.$266$): $$\int\exp(-z)U(a,c,z)z^{c-1}dz=-\exp(-z)z^{c}U(a, c+1, z)\eqno(2.18)$$
$$\lim_{t\rightarrow 0}u(t, X)= \left[-\frac{\Gamma(k)}{2\pi^{n/2}} f_X^{\#}(0)\exp(-z)U(k,n/2+1,z)z^{n/2}\right]^{\infty}_0\eqno(2.19)$$ using again the formulas $(2.10)$ and $(2.14)$ we have $$\lim_{t\rightarrow 0}u(t, X)= \frac{\Gamma(k)}{2\pi^{n/2}}\frac{\Gamma(n/2)}{\Gamma(k)}f_X^{\#}(0)=\frac{\Gamma(n/2)}{2\pi^{n/2}}\frac{2\pi^{n/2}}{\Gamma(n/2)}f(X)=f(X)\eqno(2.20)$$ The uniqueness is clear from the properties of the confluent hypergeometric equation ($[5]$ p.268-270). \section{The generalized singular wave equation on $\R^n$} {\bf Theorem 3.1} The generalized singular wave equation in $(1.2)$ has the following general solution $$w(t, X, Y)=A t^{1-n}{}_2F_1\left(\frac {n-k}{2}, \frac{ n-1+k}{2},\frac{n+1}{2}, 1-\frac{|X-Y|^2}{t^2}\right)+$$ $$B \left(t^2-|X-Y|^2\right)^{(1-n)/2}{}_2F_1\left(\frac {1-k}{2}, \frac{ k}{2},\frac{3-n}{2}, 1-\frac{|X-Y|^2}{t^2}\right)\eqno(3.1)$$ where $A$ and $B$ are complex constants and ${}_2F_1\left(a, b, c, z\right)$ is the Gauss hypergeometric function given by $$ F(a,b,c;z)=\sum_{n=0}^{\infty}\frac{(a)_n(b)_n}{(c)_n n!}z^n, \quad |z|<1. \eqno(3.2)$$ {\bf Proof} Using the geodesic polar coordinates centred at $X$, $Y=X+r\omega , r>0 ;\omega \in S^{n-1}$, and setting $y=r^{2}$ and $x=t^2$ in the generalized singular wave equation in $(1.2)$ we obtain \\ $\left[4y(\partial^{2}/\partial y^{2})+2n(\partial/\partial y)\right]\Psi(x,y)=$\\ $$\left[4x(\partial^{2}/\partial x^{2})+2(\partial/\partial x)+(k(1-k)/x)\right]\Psi(x,y)\eqno(3.3)$$ Setting $$\Psi(x,y)=x^{-(n-1)/2}\Phi(x,y); \quad z=y/x\eqno(3.4)$$ we obtain the following Gauss hypergeometric equation\\ $$z(1-z)\frac{d^{2}}{dz^{2}}\Phi(z)+[n/2-(n+1/2)z]\frac{d}{dz}\Phi(z)-((n-k)(n-1+k)/4)\Phi(z)=0\eqno(3.5)$$ with parameters: $a=(n-k)/2; b=(n-1+k)/2; c=n/2$.\\ The hypergeometric equation $(3.5)$ has the following system of solutions $([5]$, p.42-43$)$ $$\Phi_1(z)=F((n-k)/2,(n-1+k)/2;(n+1)/2,1-z)\eqno(3.6)$$ and $$\Phi_2(z)=(1-z)^{(1-n)/2}F((1-k)/2,k/2;(3-n)/2,1-z)\eqno(3.7)$$ hence the following functions satisfy the
generalized singular wave equation in $(1.2)$\\ $\varphi_1^k(t, X, Y)= t^{1-n}\times$ $$F\left((n-k)/2,(n-1+k)/2;(n+1)/2,1-\frac{|X-Y|^2}{t^2}\right)\eqno(3.8)$$ $\varphi_2^k(t, X, Y)=\left(t^2-|X-Y|^2\right)^{(1-n)/2}\times$\\ $$F\left((1-k)/2,k/2;(3-n)/2,1-\frac{|X-Y|^2}{t^2}\right)\eqno(3.9)$$ and the proof of Theorem 3.1 is finished.\\ In the remainder of this section we present several lemmas.\\ {\bf Lemma 3.2} For $X, Y\in \R^n$ and $t\in \R^+$ set\\ $W_2(t, X, Y)=$ $$c_2 \left(t^2-|X-Y|^2\right)^{-1/2}{}_2F_1\left(\frac {1-k}{2}, \frac{ k}{2},\frac{1}{2}, 1-\frac{|X-Y|^2}{t^2}\right)\eqno(3.10)$$ with $$c_2=\frac{\Gamma(1+k/2)\Gamma((3-k)/2)}{\pi^{3/2}}\eqno(3.11)$$ and for $n$ even, $n\geq 4$,\\ $W_n(t, X, Y)=$ $$c_n \left(t^2-|X-Y|^2\right)^{(1-n)/2}{}_2F_1\left(\frac {1-k}{2}, \frac{ k}{2},\frac{3-n}{2}, 1-\frac{|X-Y|^2}{t^2}\right)\eqno(3.12)$$ with $$c_n=\frac{2^{n/2-1}(n-3)!!\Gamma(n/2)}{((n-2)/2)!\pi^{(n-1)/2}}c_2\eqno(3.13)$$ Then for $$A_{x}^a= ( a|X-Y|^2)^{-1} x^{1-a}\frac{\partial}{\partial x} x^a=(a |X-Y|^2)^{-1}\left(x\frac{\partial}{\partial x}+a\right)\eqno(3.14)$$ the following formulas hold:\\ i) $$A^{(n-3)/2}_{t^2}W_n^k(t, X, Y)=W^{k}_{n+2}(t, X, Y)\eqno(3.15)$$ ii) $$W_n^k(t, X, Y)=c_nA_{t^2}^{\frac{n-3}{2}} A_{t^2}^{\frac{n-5}{2}} ... A_{t^2}^{\frac{1}{2}}W_2^k(t, X, Y)\eqno(3.16)$$ iii) For $g\in C_0^\infty(\R^n)$ we have\\ $A_{x}^{\frac{n-3}{2}} A_{x}^{\frac{n-5}{2}} ...
A_{x}^{\frac{1}{2}}\left[x^{1/2}g(\sqrt{x z})\right]=$\\ $$(\frac{y}{2})^{(n-2)/2}\frac{((n-2)/2)!}{(n-3)!!} x^{1/2} g(\sqrt{x z})+x\sum^{(n-2)/2}_{i=1}b_{i}x^{(i-1)/2}z^{i/2}g^{(i)}(\sqrt{x z})\eqno(3.17)$$ where $b_i$, $i=1,2,...,(n-2)/2$, are real constants.\\ {\bf Proof}: To show $i)$ we use the formula $([5]$ p.$41)$ $$\frac{d}{dz}z^{c-1}F(a,b;c,z)=(c-1)z^{c-2}F(a,b;c-1,z)\eqno(3.18)$$ $ii)$ follows from $i)$.\\ iii) can be shown by induction over even $n\geq 4$.\\ {\bf Lemma 3.3} For $z\longrightarrow 0$ we have:\\ i) $ F((1+k)/2,1-k/2;(n+1)/2,1-z)= $ $$\frac{\Gamma((n+1)/2)\Gamma((2-n)/2)}{\Gamma((1 +k)/2)\Gamma((2-k)/2)}z^{(n-2)/2} +O(1)\eqno(3.19) $$ ii) $F((1-k)/2,k/2;1/2,1-z)=$ $$\frac{\Gamma(1/2)}{\Gamma(k/2)\Gamma((1-k)/2)}[1+o(\log z)]\eqno(3.20) $$ $k\neq 0,-2,-4,\ldots$\\ {\bf Proof} i) is easily seen from the formula $([5]$, p.47$)$\\ $F(a, b, c, z)=\frac{\Gamma(c)\Gamma(c-a-b)}{\Gamma(c-b)\Gamma(c-a)}F(a, b, a+b-c+1, 1-z)+$ $$(1-z)^{c-a-b}\frac{\Gamma(c)\Gamma(a+b-c)}{\Gamma(b)\Gamma(a)}F(c-a, c-b, c-a-b+1, 1-z)\eqno(3.21) $$ ii) is a consequence of the formula $([5]$ p.$44)$\\ $F(a,b;a+b,z)=\left(\Gamma(a+b)/\Gamma(a)\Gamma(b)\right)\sum^{\infty}_{n=0}\times$\\ $$((a)_{n}(b)_{n}/(n!)^{2})[2\psi(n+1)-\psi(a+n)-\psi(b+n)-\log (1-z)](1-z)^{n}\eqno(3.22) $$ $\arg(1-z)<\pi$; $|1-z|<1$\\ \section{Cauchy problem for the singular wave equation on $\R^n$, $n$ odd} {\bf Theorem 4.1} Suppose $n$ is odd and $$W_n^k(t, X, Y)=C_n t^{1-n}{}_2F_1\left(\frac {n-k}{2}, \frac{ n-1+k}{2},\frac{n+1}{2}, 1-\frac{|X-Y|^2}{t^2}\right)\eqno(4.1)$$ with \\ $ C_n=\frac{\Gamma(n/2) k(k-1)}{2\pi^{n/2}(n-1)}\times$\\ $$\left[\frac{\Gamma((n-k)/2)\Gamma((n-1-k)/2)}{\Gamma((n-k)/2)\Gamma((n-1-k)/2)-\Gamma(n/2)\Gamma((n-1)/2)}\right]\eqno(4.2)$$ If $g\in C_0^\infty(\R^n)$, the function $$ w(t, X)=\int_{|X-Y|< t} W_n^k(t, X, Y)g(Y)dY\eqno(4.3)$$ solves the Cauchy problem for the generalized singular wave equation $(1.2)$.\\ {\bf Proof} In view of Theorem 3.1, we see that
the kernel in $(4.1)$ satisfies the generalized singular wave equation in $(1.2)$ and hence the function $w(t, X)$ in $(4.3)$ satisfies the same equation. To complete the proof of Theorem 4.1 it remains to show the limit conditions. Using the geodesic polar coordinates and setting $y=r^{2};x=t^{2},z=y/x$ in $(4.3)$ we have\\ $ w(t,X)=C_n\int^{1}_{0}\frac{1}{2} g_X^{\#}(t\sqrt{z})z^{(n-2)/2} \times $ $$x^{1/2}F((n-k)/2,(n-1+k)/2;(n+1)/2,1-z)dz\eqno (4.4) $$ with $g_X^{\#}(r)$ as in $(2.16)$. By the formula $([5]$ p.$47)$ $$F(a,b;c,z)=(1-z)^{c-a-b}F(c-a,c-b;c,z)\eqno(4.5)$$ we can write\\ $$ w(t,X)=C_n\frac{t}{2}\int^{1}_{0}F((1+k)/2,1-k/2;(n+1)/2,1-z)g_X^{\#}(t\sqrt{z})dz\eqno(4.6) $$ hence by taking the limit in $(4.6)$, using the formula $(3.19)$ we can interchange the integral and the limit to obtain $$\lim_{t\rightarrow 0}w(t,X)=0.\eqno(4.7) $$ For the second condition we differentiate the expression in $(4.6)$; using again $(3.19)$ we can differentiate under the integral sign to obtain\\ $\frac{\partial }{\partial t} w(t,X)=$\\ $$C_n\frac{1}{2}\int^{1}_{0}g_X^{\#}(t\sqrt{z})F((1+k)/2,1-k/2;(n+1)/2,1-z)dz +tO(1)\eqno(4.8) $$ Hence by taking the limit of $(4.8)$ in view of $(3.19)$ we can interchange the limit and the integral to write $$\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t,X)=C_n\frac{1}{2}g_X^{\#}(0)\int^{1}_{0}F((1+k)/2,1-k/2;(n+1)/2,1-z)dz\eqno(4.9) $$ $$\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t,X)=C_n\frac{1}{2}g_X^{\#}(0)\int^{1}_{0}F((1+k)/2,1-k/2;(n+1)/2, z)dz\eqno(4.10) $$ In view of the formula ($[5]$ p.$41$) $$\frac{d}{dz} F(a,b;c,z)=\frac{a b}{c}F(a+1,b+1;c+1,z)\eqno(4.11) $$ we obtain\\ $\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t,X)=-C_n(1/2)g_X^{\#}(0)\frac{n-1}{k(k-1)}\times$ $$\left[F((k-1)/2,-k/2;(n-1)/2, z)\right]^{1}_{0}\eqno(4.12) $$ that is\\ $\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t,X)=-C_n(1/2)g_X^{\#}(0)\frac{n-1}{k(k-1)}\times$ $$\left[F((k-1)/2,-k/2;(n-1)/2, 1)-1\right]\eqno(4.13) $$ And by the formula $(4.2)$
and ($[5]$ p.$40$):\\ $$F(a,b;c,1)=\left(\Gamma(c)\Gamma(c-a-b)/\Gamma(c-a)\Gamma(c-b)\right); \Re(a+b-c)<0\eqno(4.14) $$ $c\neq 0,-1,-2,-3,\ldots$\\ we obtain $\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t,X)=g(X)$.\\ \\ \section{Cauchy problem for the singular wave equation on the Euclidean plane $\R^2$ } {\bf Theorem 5.1} Suppose $n=2$ and $$W_2^k(t, X, Y)=c_2\left(t^{2}-|X-Y|^{2}\right)^{\frac{-1}{2}} F\left(\frac{k}{2}, \frac{1-k}{2}, \frac{1}{2};1-\frac{|X-Y|^{2}}{t^{2}}\right)\eqno(5.1)$$ with $$c_2=\frac{\Gamma(1+k/2)\Gamma((3-k)/2)}{\pi^{3/2}}\eqno(5.2)$$ If $g\in C^\infty_0(\R^2)$, the function $$ w(t, X)=\int_{|X-Y|< t} W_2^k(t, X, Y)g(Y)dY\eqno(5.3)$$ solves the Cauchy problem $(1.2)$.\\ {\bf Proof} From Theorem 3.1 we see that the function $w(t, X)$ in $(5.3)$ satisfies the generalized singular wave equation in $(1.2)$.\\ Now to show the limit conditions, by the geodesic polar coordinates and the change of variables $y=r^2; x=t^2, z=y/x$ in $(5.3)$, we have for $n=2$\\ $$w(t,X)=c_2(t/2)\int^1_0(1-z)^{-1/2}F((1-k)/2, k/2, 1/2, 1-z)g_X^{\#}(t\sqrt{z})dz\eqno(5.4) $$ By taking the limit in $(5.4)$ we can use the formula $(3.20)$ to interchange the limit and the integral and to obtain $$\lim_{t\rightarrow 0}w(t,X)=0\eqno(5.5) $$ Now to show the second condition we differentiate the expression $(5.4)$; in view of the formula $(3.20)$ we can differentiate under the integral sign to obtain\\ $\frac{\partial}{\partial t}w(t,X)=c_2\frac{1}{2} \int^{1}_{0}(1-z)^{-1/2}\times$\\ $$F((1-k)/2,k/2;1/2,1-z)g_X^{\#}(t\sqrt{z})dz+t\,O(1)\eqno(5.6) $$ Using again the formula $(3.20)$ we can interchange the limit and the integral and we have $$\lim_{t\longrightarrow 0}\frac{\partial}{\partial t} w(t,X)=c_2 \frac{1}{2}g_X^{\#}(0)\int^{1}_{0}(1-z)^{-1/2}F((1-k)/2,k/2;1/2,1-z)dz\eqno(5.7) $$ $$\lim_{t\longrightarrow 0}\frac{\partial}{\partial t} w(t,X)=c_2 \frac{1}{2}g_X^{\#}(0) \int^{1}_{0}z^{-1/2}F((1-k)/2,k/2;1/2,z)dz\eqno(5.8) $$ we have by the formula
$(3.18)$, $$\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t,X)=c_2\frac{1}{2}g_X^{\#}(0)\, 2z^{1/2}\left[F((1-k)/2,k/2;3/2,z)\right]_0^1, \eqno(5.9)$$ and from $(4.14)$, applied with $a=(1-k)/2$, $b=k/2$, $c=3/2$ (so that $c-a-b=1$), $$\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t, X)=\frac{c_2\,\pi^{3/2}}{\Gamma((3-k)/2)\,\Gamma(1+k/2)}\,g(X)=g(X).\eqno(5.10)$$ \section{Cauchy problem for the singular wave equation on $\R^n$, $n\geq 4$ even} {\bf Theorem 6.1} Suppose $n$ is even and $n\geq 4$, let $W_2^k(t, X, Y)$ be as in Theorem 5.1, let $A^a_{x}$ be as in $(3.11)$, and let $$c_n=\frac{(n-3)!!\,\Gamma(n/2)}{2^{1-n/2}((n-2)/2)!\,\pi^{(n-1)/2}}.\eqno(6.1)$$ If $g\in C^\infty_0(\R^n)$, the function $$ w(t, X)=c_nA_{t^2}^{\frac{n-3}{2}} A_{t^2}^{\frac{n-5}{2}} \cdots A_{t^2}^{\frac{1}{2}}\int_{|X-Y|< t} W_2^k(t, X, Y)g(Y)dY\eqno(6.2)$$ solves the Cauchy problem $(1.2)$.\\ {\bf Proof} In view of Theorem 3.1, the function $w(t, X)$ in $(6.2)$ satisfies the generalized singular wave equation in $(1.2)$.\\ To finish the proof we show the limit conditions in the even case $n\geq 4$: using geodesic polar coordinates and setting $y=r^{2}$, $x=t^{2}$, $z=y/x$ in $(6.2)$, we have, for even $n\geq 4$, $$w(t, X)=c_nc_2B_t\left[(t/2)\int^{1}_{0}(1-z)^{-1/2} F((1-k)/2,k/2;1/2,1-z)g_X^{\#}(t\sqrt{z})dz\right]\eqno(6.3)$$ with $$B_t=A_{t^2}^{\frac{n-3}{2}} A_{t^2}^{\frac{n-5}{2}} \cdots 
A_{t^2}^{\frac{1}{2}}$$ Using formula iii) of Lemma 3.2 we have $$w(t, X)=C_n\frac{t}{2} \int^{1}_{0}(1-z)^{-1/2}F((1-k)/2, k/2;1/2, 1-z)g_X^{\#}(t\sqrt{z})dz+ t^2 \sum_{i=0}^{(n-2)/2}b_{i}t^{i-1}\int_0^{1}(1-z)^{-1/2}F((1-k)/2, k/2;1/2,1-z)\, z^{i/2}\,\widetilde{g}_X^{(i)}(t\sqrt{z})dz \eqno(6.4)$$ with $$C_n=c_n c_2\, 2^{-(n-2)/2}\frac{((n-2)/2)!}{(n-3)!!}. $$ Taking the limit of the expression $(6.3)$ and using the formula $(3.20)$ we can interchange the integral and the limit to obtain $$\lim_{t\rightarrow 0}w(t,X)=0.\eqno(6.5)$$ For the second condition we differentiate the expression $(6.4)$; by $(3.17)$ we can differentiate under the integral sign: $$\frac{\partial}{\partial t}w(t,X)=C_n \frac{1}{2} \int^{1}_{0}(1-z)^{-1/2}F((1-k)/2,k/2;1/2,1-z)\,g_X^{\#}(t\sqrt{z})dz +t\,O(1).\eqno(6.6)$$ Taking now the limit in $(6.6)$ and using again $(3.20)$ we can interchange the limit and the integral: $$\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t,X)=C_n \frac{1}{2}\, g_X^{\#}(0)\int^{1}_{0}(1-z)^{-1/2}F((1-k)/2, k/2;1/2, 1-z)dz\eqno(6.7)$$ $$\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t,X)=C_n \frac{1}{2}\, g_X^{\#}(0) \int^{1}_{0}z^{-1/2}F((1-k)/2,k/2;1/2,z)dz.\eqno(6.8)$$ By the formula $(3.18)$ we have $$\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t, X)=C_n \frac{1}{2}\, g_X^{\#}(0)\, 2z^{1/2}\left[F((1-k)/2,k/2;3/2,z)\right]_0^1.\eqno(6.9)$$ Using the formula $(4.14)$ we have $$\lim_{t\rightarrow 0}\frac{\partial}{\partial t}w(t, X) =C_n\, \frac{ \pi^{(n+2)/2}}{\Gamma((3-k)/2)\,\Gamma(1+k/2)\,\Gamma(n/2)}\,g(X)=g(X).\eqno(6.10)$$ \section{Applications} {\bf Remark 7.1} We have $$\lim_{k\rightarrow 0}\left[\Gamma (k)\right]^{-1}H^{k}_{n}(t, X,Y)=K_{n}(t, X, Y)\eqno(7.1)$$ where $$K_{n}(t, 
X,Y)=(4\pi t)^{-n/2}\exp{\left(-|X-Y|^{2}/4t\right)}\eqno(7.2)$$ is the classical heat kernel on $\R^{n}$.\\ {\bf Corollary 7.2} The generalized Cauchy problem for the heat equation on $\R^n$, $$ \left \{\begin{array}{cc}\left(\frac{\partial}{\partial t}-\Delta_X\right)v(t, X)=0&;\ (t, X)\in \R^\ast_+\times \R^n\\ \lim_{t\rightarrow 0} t^{-k}v(t, X)=v_0(X) &;\ v_0\in C^\infty_0(\R^n),\end{array} \right.\eqno(7.3) $$ has the unique solution given by $$v(t, X)=\int_{\R^n}K^k_n(t, X, Y)v_0(Y)dm(Y)\eqno(7.4)$$ where $$K^k_n(t, X, Y)=\Gamma(k)t^k(4\pi t)^{-n/2}\exp{\left(-\frac{|X-Y|^2}{4t}\right)}U\left(k, \frac{n}{2}, \frac{|X-Y|^2}{4t}\right).\eqno(7.5)$$ {\bf Proof} The proof of this corollary is simple and is omitted.\\ {\bf Corollary 7.3} We have $$\lim_{k\longrightarrow 0}W_n^k(t, X, Y)=W_n(t, X, Y)\eqno(7.6)$$ where $$W_n(t, X, Y)=(2\pi)^{-n/2}\left(t^2-|X-Y|^2\right)^{(1-n)/2}\eqno(7.7)$$ is the classical wave kernel on $\R^n$ $[4]$.\\ \noindent {\bf Proof} The proof of this corollary is simple and is left to the reader.
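As a quick numeric sanity check on the classical kernel $(7.2)$ (an illustration added here, not part of the original argument): the heat kernel integrates to one over $\R^n$. For $n=1$, using only the Python standard library:

```python
import math

# Sanity check on (7.2): the classical heat kernel integrates to 1 over R^n.
# Here n = 1; composite midpoint rule on [-30, 30], which captures the
# Gaussian tail to machine precision for t = 0.5.
def heat_kernel_1d(t, x):
    """K_1(t, x, 0) = (4*pi*t)^(-1/2) * exp(-x^2 / (4*t))."""
    return (4.0 * math.pi * t) ** -0.5 * math.exp(-x * x / (4.0 * t))

def midpoint(f, a, b, n=100000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

t = 0.5
total = midpoint(lambda x: heat_kernel_1d(t, x), -30.0, 30.0)
print(total)  # approximately 1.0
```

The same mass-one property underlies the limit relation $(7.1)$.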
\section{Introduction} Graph Neural Networks (GNNs)~\citep{DBLP:conf/iclr/KipfW17, DBLP:conf/iclr/VelickovicCCRLB18, NEURIPS2018_53f0d7c5} have achieved great practical successes in many real-world applications, such as chemistry \citep{pires2015pkcsm}, molecular biology \citep{huber2007graphs}, social networks \citep{cho2011friendship} and epidemic modelling \citep{simon2011exact}. For most of these applications, explaining predictions made by a GNN model is crucial for establishing trust with end-users, identifying the cause of a prediction, and even discovering potential deficiencies of a GNN model before massive deployment. Ideally, an explanation should be able to answer questions like ``\textit{Would the prediction of the GNN model change if a certain part of an input molecule is removed?}'' in the context of predicting whether an artificial molecule is active for a certain type of proteins~\cite{jiang2020drug, XIONG2021}, or \textit{``Would an item recommended still be recommended if a customer had not purchased some other items in the past?''} for a GNN built for recommendation systems~\cite{fan2019graph, yin2019deeper}. Counterfactual explanations~\cite{moraffah2020causal} in the form of ``\textit{If X had not occurred, Y would not have occurred}''~\cite{molnar2019} are the principled way to answer such questions and thus are highly desirable for GNNs. In the context of GNNs, a counterfactual explanation identifies a small subset of edges of the input graph instance such that removing those edges significantly changes the prediction made by the GNN. Counterfactual explanations are usually concise and easy to understand~\cite{moraffah2020causal, sokol2019counterfactual} because they align well with the human intuition to describe a causal situation~\cite{molnar2019}. 
To make explanations more trustworthy, the counterfactual explanation should be robust to noise, that is, slight changes on an input graph do not change the explanation significantly. How to produce robust counterfactual explanations on predictions made by general graph neural networks is a novel problem that has not been systematically studied before. As to be discussed in Section~\ref{sec:rw}, most GNN explanation methods~\citep{NEURIPS2019_d80b7040, NEURIPS2020_e37b08dd, yuan2020xgnn, DBLP:conf/iclr/VelickovicCCRLB18, pope2019explainability} are neither counterfactual nor robust. These methods mostly focus on identifying a subgraph of an input graph that achieves a high correlation with the prediction result. Such explanations are usually not counterfactual because, due to the high non-convexity of GNNs, removing a subgraph that achieves a high correlation does not necessarily change the prediction result. Moreover, many existing methods~\cite{NEURIPS2019_d80b7040, NEURIPS2020_e37b08dd, DBLP:conf/iclr/VelickovicCCRLB18, pope2019explainability} are not robust to noise and may change significantly upon slight modifications on input graphs, because the explanation of every single input graph prediction is independently optimized to maximize the correlation with the prediction; thus an explanation can easily overfit the noise in the data. In this paper, we develop RCExplainer, a novel method to produce robust counterfactual explanations on GNNs. The key idea is to first model the common decision logic of a GNN by a set of decision regions, where each decision region governs the predictions on a large number of graphs, and then extract robust counterfactual explanations by a deep neural network that explores the decision logic carried by the linear decision boundaries of the decision regions. We make the following contributions. 
First, we model the decision logic of a GNN by a set of decision regions, where each decision region is induced by a set of linear decision boundaries of the GNN. We propose an unsupervised method to find decision regions for each class such that each decision region governs the prediction of multiple graph samples predicted to be the same class. The linear decision boundaries of the decision region capture the common decision logic on all the graph instances inside the decision region, thus do not easily overfit the noise of an individual graph instance. By exploring the common decision logic encoded in the linear boundaries, we are able to produce counterfactual explanations that are inherently robust to noise. Second, based on the linear boundaries of the decision region, we propose a novel loss function to train a neural network that produces a robust counterfactual explanation as a small subset of edges of an input graph. The loss function is designed to directly optimize the explainability and counterfactual property of the subset of edges, such that: 1) the subgraph induced by the edges lies within the decision region, thus has a prediction consistent with the input graph; and 2) deleting the subset of edges from the input graph produces a remainder subgraph that lies outside the decision region, thus the prediction on the remainder subgraph changes significantly. Last, we conduct comprehensive experimental study to compare our method with the state-of-the-art methods on fidelity, robustness, accuracy and efficiency. All the results solidly demonstrate the superior performance of our approach. 
\section{Related work} \label{sec:rw} The existing GNN explanation methods~\cite{yuan2020xgnn, DBLP:conf/iclr/VelickovicCCRLB18, NEURIPS2019_d80b7040, pope2019explainability, NEURIPS2020_e37b08dd} generally fall into two categories: model level explanation~\cite{yuan2020xgnn} and instance level explanation~\cite{DBLP:conf/iclr/VelickovicCCRLB18, NEURIPS2019_d80b7040, pope2019explainability, NEURIPS2020_e37b08dd}. A model level explanation method~\cite{yuan2020xgnn} produces a high-level explanation about the general behaviors of a GNN independent from input examples. This may be achieved by synthesizing a set of artificial graph instances such that each artificial graph instance maximizes the prediction score on a certain class. The weakness of model level explanation methods is that an input graph instance may not contain an artificial graph instance, and removing an artificial graph from an input graph does not necessarily change the prediction. As a result, model level explanations are substantially different from counterfactual explanations, because the synthesized artificial graphs do not provide insights into how the GNN makes its prediction on a specific input graph instance. The instance level explanation methods~\cite{DBLP:conf/iclr/VelickovicCCRLB18, NEURIPS2019_d80b7040, pope2019explainability, NEURIPS2020_e37b08dd} explain the prediction(s) made by a GNN on a specific input graph instance or multiple instances by identifying a subgraph of an input graph instance that achieves a high correlation with the prediction on the input graph. GNNExplainer~\citep{NEURIPS2019_d80b7040} removes redundant edges from an input graph instance to produce an explanation that maximizes the mutual information between the distribution of subgraphs of the input graph and the GNN's prediction. 
Following the same idea by \citet{NEURIPS2019_d80b7040}, PGExplainer~\citep{NEURIPS2020_e37b08dd} parameterizes the generation process of explanations by a deep neural network, and trains it to maximize a similar mutual information based loss used by GNNExplainer~\citep{NEURIPS2019_d80b7040}. The trained deep neural network is then applied to generate explanations for a single input graph instance or a group of input graphs. MEG~\cite{numeroso2021meg} incorporates strong domain knowledge in chemistry with a reinforcement learning framework to produce counterfactual explanations on GNNs specifically built for compound prediction, but the heavy reliance on domain knowledge largely limits its applicability on general GNNs. Some studies~\cite{pope2019explainability, DBLP:conf/iclr/VelickovicCCRLB18} also adapt the existing explanation methods of image-oriented deep neural networks to produce instance level explanations for GNNs. Pope et al.~\citep{pope2019explainability} extend several gradient based methods~\cite{selvaraju2017grad, simonyan2014deep, zhang2018top} to explain predictions made by GNNs. The explanations are prone to gradient saturation \citep{glorot2010understanding} and may also be misleading \citep{NEURIPS2018_294a8ed2} due to the heavy reliance on noisy gradients. Velickovic et al.~\cite{DBLP:conf/iclr/VelickovicCCRLB18} extend the attention mechanism~\cite{denil2017programmable, duan2017one} to identify the nodes in an input graph that contribute the most to the prediction. This method has to retrain the GNN with the altered architecture and the inserted attention layers. Thus, the explanations may not be faithful to the original GNN. Instance level explanations are usually not counterfactual because, due to the non-convexity of GNNs, removing an explanation subgraph from the input graph does not necessarily change the prediction result. 
Moreover, those methods~\cite{NEURIPS2019_d80b7040, NEURIPS2020_e37b08dd, DBLP:conf/iclr/VelickovicCCRLB18, pope2019explainability} are usually not robust to noise because the explanation of every single input graph prediction is independently optimized. Thus, an explanation can easily overfit the noise inside input graphs and may change significantly upon slight modifications on input graphs. To tackle the weaknesses in the existing methods, in this paper, we directly optimize the counterfactual property of an explanation. Our explanations are also much more robust to modifications on input graphs, because they are produced from the common decision logic on a large group of similar input graphs, which do not easily overfit the noise of an individual graph sample. Please note that our study is substantially different from adversarial attacks on GNNs. The adversarial attacking methods~\cite{zugner2019adversarial, zugner2018adversarial, xu2020adversarial, xu2019topology, jin2019latent} and the most recent CF-GNNExplainer~\cite{lucic2021cf} use adversarial examples as explanations and only focus on changing the predicted labels of GNNs, but totally ignore the explainability of the generated adversarial examples~\cite{freiesleben2020counterfactual}. Thus, the adversarial examples generated by adversarial attacks do not align well with the human intuition. On the contrary, our method directly optimizes the explainability of an explanation and requires that the subgraph induced by the explanation lies within the decision region at a large distance from the decision boundaries. We also require that the explanation is generally valid for a large set of similar graph instances by extracting it from the common linear decision boundaries of a large decision region. \section{Problem Formulation} Denote by $G = \{V, E\}$ a graph where $V = \{v_1, v_2, \ldots, v_n\}$ is the set of $n$ nodes and $E \subseteq V\times V$ is the set of edges. 
The edge structure of a graph $G$ is described by an adjacency matrix $\mathbf{A}\in\{0,1\}^{n\times n}$, where $\mathbf{A}_{ij} = 1$ if there is an edge between node $v_i$ and $v_j$; and $\mathbf{A}_{ij}=0$ otherwise. Denote by $\phi$ a GNN model that maps a graph to a probability distribution over a set of classes denoted by $C$. Let $D$ denote the set of graphs that are used to train the GNN model $\phi$. We focus on GNNs that adopt piecewise linear activation functions, such as MaxOut~\citep{goodfellow2013maxout} and the family of ReLU~\citep{glorot2011deep,he2015delving,nair2010rectified}. The robust counterfactual explanation problem is defined as follows. \begin{definition}[Robust Counterfactual Explanation Problem] Given a GNN model $\phi$ trained on a set of graphs $D$, for an input graph $G=\{V, E\}$, our goal is to explain why $G$ is predicted by the GNN model as $\phi(G)$ by identifying a small subset of edges $S\subseteq E$, such that (1) removing the set of edges in $S$ from $G$ changes the prediction on the remainder $\{V, E-S\}$ of $G$ significantly; and (2) $S$ is stable with respect to slight changes on the edges of $G$ and the feature representations of the nodes of $G$. \end{definition} In the definition, the first requirement requires that the explanation $S$ is counterfactual, and the second requirement requires that the explanation is robust to noisy changes on the edges and nodes of $G$. \section{Method} \label{sec:method} In this section, we first introduce how to extract the common decision logic of a GNN on a large set of graphs with the same predicted class. This is achieved by a decision region induced by a set of linear decision boundaries of the GNN. Then, based on the linear boundaries of the decision region, we propose a novel loss function to train a neural network that produces robust counterfactual explanations. Last, we discuss the time complexity of our method when generating explanations. 
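The counterfactual requirement in the definition above can be checked mechanically: remove the candidate edge set $S$ and compare predictions. A minimal sketch with a stand-in classifier (the toy predictor and graph below are illustrative, not the GNNs studied in this paper):

```python
# Toy illustration of the counterfactual requirement: removing the edge set S
# from G must change the prediction.  `predict` is a stand-in, NOT a real GNN.
def predict(adj):
    # Toy rule: class 1 iff the (undirected) graph has at least 3 edges.
    n_edges = sum(adj[i][j] for i in range(len(adj)) for j in range(i))
    return 1 if n_edges >= 3 else 0

def remove_edges(adj, S):
    out = [row[:] for row in adj]
    for i, j in S:
        out[i][j] = out[j][i] = 0
    return out

def is_counterfactual(adj, S):
    return predict(adj) != predict(remove_edges(adj, S))

# Triangle {0,1,2} plus the pendant edge (2,3): four edges, class 1.
A = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
print(is_counterfactual(A, [(0, 1), (1, 2)]))  # True: only 2 edges remain
print(is_counterfactual(A, [(2, 3)]))          # False: 3 edges remain
```

The second (robustness) requirement is addressed in Section~\ref{sec:method} through shared decision regions rather than per-instance optimization.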
\subsection{Modelling Decision Regions} Following the routines of many deep neural network explanation methods~\citep{selvaraju2017grad,zeiler2014visualizing}, we extract the decision region of a GNN in the $d$-dimensional output space $\mathbb{O}^d$ of the last convolution layer of the GNN, because the features generated by the last convolution layer are more conceptually meaningful and more robust to noise than the raw features of input graphs, such as vertices and edges~\cite{zugner2019certifiable,bojchevski2019certifiable}. Denote by $\phi_{gc}$ the mapping function realized by the graph convolution layers that maps an input graph $G$ to its graph embedding $\phi_{gc}(G)\in \mathbb{O}^d$, and by $\phi_{fc}$ the mapping function realized by the fully connected layers that maps the graph embedding $\phi_{gc}(G)$ to a predicted distribution over the classes in $C$. The overall prediction $\phi(G)$ made by the GNN can be written as $ \phi(G)=\phi_{fc}(\phi_{gc}(G)). $ For GNNs that adopt piecewise linear activation functions for the hidden neurons, such as MaxOut~\citep{goodfellow2013maxout} and the family of ReLU~\citep{glorot2011deep,he2015delving,nair2010rectified}, the decision logic of $\phi_{fc}$ in the space $\mathbb{O}^d$ is characterized by a piecewise linear decision boundary formed by connected pieces of decision hyperplanes in $\mathbb{O}^d$~\citep{NEURIPS2018_294a8ed2}. We call these hyperplanes \textbf{linear decision boundaries (LDBs)}, and denote by $\mathcal{H}$ the set of LDBs induced by $\phi_{fc}$. The set of LDBs in $\mathcal{H}$ partitions the space $\mathbb{O}^d$ into a large number of convex polytopes. A convex polytope is formed by a subset of LDBs in $\mathcal{H}$. All the graphs whose graph embeddings are contained in the same convex polytope are predicted as the same class~\cite{chu2018exact}. 
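Concretely, the polytope containing a graph embedding is determined by the sign pattern of the embedding with respect to the LDBs: embeddings on the same side of every hyperplane lie in the same polytope. A small sketch under this abstraction (the boundaries and points are made-up values):

```python
# A convex polytope of the hyperplane arrangement is identified by the sign
# pattern of w.x + b over the boundaries; embeddings with the same pattern
# share a polytope (and hence a predicted class).
def polytope_id(x, boundaries):
    def side(w, b):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1
    return tuple(side(w, b) for w, b in boundaries)

boundaries = [([1.0, 0.0], 0.0),    # hyperplane x1 = 0
              ([0.0, 1.0], -1.0)]   # hyperplane x2 = 1
print(polytope_id([2.0, 3.0], boundaries))  # (1, 1)
print(polytope_id([2.0, 0.5], boundaries))  # (1, -1): a different polytope
```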
Therefore, the LDBs of a convex polytope encode the common decision logic of $\phi_{fc}$ on all the graphs whose graph embeddings lie within the convex polytope~\cite{chu2018exact}. Here, a graph $G$ is \textbf{covered} by a convex polytope if the graph embedding $\phi_{gc}(G)$ is contained in the convex polytope. Based on the above insight, we model the \textbf{decision region} for a set of graph instances as a convex polytope that satisfies the following two properties. First, the decision region should be induced by a subset of the LDBs in $\mathcal{H}$. In this way, when we extract counterfactual explanations from the LDBs, the explanations are loyal to the real decision logic of the GNN. Second, the decision region should cover many graph instances in the training dataset $D$, and all the covered graphs should be predicted as the same class. In this way, the LDBs of the decision region capture the common decision logic on all the graphs covered by the decision region. Here, the requirement of covering a larger number of graphs ensures that the common decision logic is general, and thus it is less likely to overfit the noise of an individual graph instance. As a result, the counterfactual explanations extracted from the LDBs of the decision region are insensitive to slight changes in the input graphs. Our method can be easily generalized to incorporate prediction confidence in the coverage measure, such as considering the count of graphs weighted by prediction confidence. To keep our discussion simple, we do not pursue this detail further in the paper. Next, we illustrate how to extract a decision region satisfying the above two requirements. The key idea is to find a convex polytope covering a large set of graph instances in $D$ that are predicted as the same class $c\in C$. 
Denote by $D_c\subseteq D$ the set of graphs in $D$ predicted as a class $c\in C$, by $\mathcal{P}\subseteq \mathcal{H}$ a set of LDBs that partition the space $\mathbb{O}^d$ into a set of convex polytopes, and by $r(\mathcal{P}, c)$ the convex polytope induced by $\mathcal{P}$ that covers the largest number of graphs in $D_c$. Denote by $g(\mathcal{P}, c)$ the number of graphs in $D_c$ covered by $r(\mathcal{P}, c)$, and by $h(\mathcal{P}, c)$ the number of graphs in $D$ that are covered by $r(\mathcal{P}, c)$ but are not predicted as class $c$. We extract a decision region covering a large set of graph instances in $D_c$ by solving the following constrained optimization problem. \begin{equation}\label{eq:separation} \max_{\mathcal{P} \subseteq \mathcal{H}} g(\mathcal{P}, c), \textrm{ s.t. } h(\mathcal{P}, c)=0 \end{equation} This formulation realizes the two properties of decision regions because $\mathcal{P}\subseteq \mathcal{H}$ ensures that the decision region is induced by a subset of LDBs in $\mathcal{H}$, maximizing $g(\mathcal{P}, c)$ requires that $r(\mathcal{P}, c)$ covers a large number of graphs in $D_c$, and the constraint $h(\mathcal{P}, c)=0$ ensures that all the graphs covered by $r(\mathcal{P}, c)$ are predicted as the same class $c$. Once we find a solution $\mathcal{P}$ to the above problem, the decision region $r(\mathcal{P}, c)$ can be easily obtained by first counting the number of graphs in $D_c$ covered by each convex polytope induced by $\mathcal{P}$, and then selecting the convex polytope that covers the largest number of graphs in $D_c$. \subsection{Extracting Decision Regions} The optimization problem in Equation~\eqref{eq:separation} is intractable for standard GNNs, mainly because it is impractical to compute $\mathcal{H}$, the set of all LDBs of a GNN. The number of LDBs in $\mathcal{H}$ of a GNN is exponential with respect to the number of neurons in the worst case~\cite{montufar2014number}. 
To address this challenge, we substitute $\mathcal{H}$ with a sample $\tilde{\mathcal{H}}$ of LDBs drawn from $\mathcal{H}$. An LDB in the space $\mathbb{O}^d$ can be written as $\mathbf{w}^\top \mathbf{x}+b=0$, where $\mathbf{x}\in \mathbb{O}^d$ is a variable, $\mathbf{w}$ is the basis term, and $b$ is the bias. Following \citep{chu2018exact}, for any input graph $G$, a linear boundary can be sampled from $\mathcal{H}$ by computing \begin{equation}\label{eq:basis} \begin{split} \mathbf{w} = \frac{\partial \left(\max_1(\phi_{fc}(\boldsymbol{\alpha})) - \max_2(\phi_{fc}(\boldsymbol{\alpha}))\right)}{\partial \boldsymbol{\alpha}}|_{\boldsymbol{\alpha}=\phi_{gc}(G)}, \end{split} \end{equation} and \begin{equation}\label{eq:bias} \begin{split} b = \textstyle\max_1(\phi_{fc}(\boldsymbol{\alpha}))-\textstyle\max_{2}(\phi_{fc}(\boldsymbol{\alpha}))-\mathbf{w}^T\boldsymbol{\alpha}|_{\boldsymbol{\alpha}=\phi_{gc}(G)}, \end{split} \end{equation} where $\max_1(\phi_{fc}(\boldsymbol{\alpha}))$ and $\max_2(\phi_{fc}(\boldsymbol{\alpha}))$ are the largest and the second largest values in the vector $\phi_{fc}(\boldsymbol{\alpha})$, respectively. Given an input graph $G$, Equations~\eqref{eq:basis} and \eqref{eq:bias} identify one LDB from $\mathcal{H}$. Thus, we can sample a subset of input graphs uniformly from $D$, and use Equations~\eqref{eq:basis} and \eqref{eq:bias} to derive a sample of LDBs as $\tilde{\mathcal{H}}\subset \mathcal{H}$. Now, we substitute $\mathcal{H}$ in Equation~\eqref{eq:separation} by $\tilde{\mathcal{H}}$ to produce the following problem. \begin{equation}\label{eq:practical} \max_{\mathcal{P} \subseteq \tilde{\mathcal{H}}} g(\mathcal{P}, c), \textrm{ s.t. } h(\mathcal{P}, c) \leq \delta, \end{equation} where $\delta \geq 0$ is a tolerance parameter to keep this problem feasible. The parameter $\delta$ is required because substituting $\mathcal{H}$ by $\tilde{\mathcal{H}}$ ignores the LDBs in $\mathcal{H}\setminus \tilde{\mathcal{H}}$. 
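For intuition, Equations~\eqref{eq:basis} and \eqref{eq:bias} can be traced by hand in the special case where $\phi_{fc}$ is a single linear layer, for which the gradient of $\max_1-\max_2$ is exactly the difference of the top-two rows of the weight matrix (the layer and numbers below are illustrative; in general the gradient is computed through the full network by automatic differentiation):

```python
# LDB sampling in the special case phi_fc(a) = W a + c (one linear layer):
# the gradient of max1 - max2 at alpha is W[c1] - W[c2] for the top-2
# classes c1, c2, and b follows from the bias equation.
def linear_ldb(W, c, alpha):
    logits = [sum(wi * ai for wi, ai in zip(row, alpha)) + ci
              for row, ci in zip(W, c)]
    order = sorted(range(len(logits)), key=lambda i: -logits[i])
    c1, c2 = order[0], order[1]                           # top-2 classes
    w = [W[c1][k] - W[c2][k] for k in range(len(alpha))]  # basis term
    b = (logits[c1] - logits[c2]) - sum(wk * ak for wk, ak in zip(w, alpha))
    return w, b

W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]   # 3 classes, 2-d embeddings
c = [0.0, 0.5, 0.0]
w, b = linear_ldb(W, c, [2.0, 1.0])
print(w, b)  # [1.0, -1.0] -0.5: the boundary x1 - x2 - 0.5 = 0
```

On this toy layer, $\mathbf{w}^\top\mathbf{x}+b$ equals the logit gap between the top-two classes everywhere, so its zero set is exactly their decision boundary.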
Thus, the convex polytope $r(\mathcal{P}, c)$ induced by subset of boundaries in $\tilde{\mathcal{H}}$ may contain instances that are not predicted as class $c$. We directly set $\delta=h(\tilde{\mathcal{H}}, c)$, which is the smallest value of $\delta$ that keeps the practical problem feasible. The problem in Equation \eqref{eq:practical} can be proven to be a Submodular Cost Submodular Cover (SCSC) problem \citep{NIPS2013_a1d50185} (see Appendix~\ref{sec:apx_proof} for proof) that is well known to be NP-hard \citep{crawford2019submodular}. We adopt a greedy boundary selection method to find a good solution to this problem \citep{wolsey1982analysis}. Specifically, we initialize $\mathcal{P}$ as an empty set, and then iteratively select a new boundary $h$ from $\tilde{\mathcal{H}}$ by \begin{equation}\label{eq:greedy1} \begin{split} h = \underset{h\in \tilde{\mathcal{H}} \setminus \mathcal{P}}{\arg\min} \frac{g( \mathcal{P}, c) - g(\mathcal{P} \cup \{h\}, c) + \epsilon} {h(\mathcal{P},c) - h(\mathcal{P}\cup \{h\}, c)}, \end{split} \end{equation} where $g( \mathcal{P}, c) - g(\mathcal{P} \cup \{h\}, c)$ is the decrease of $g( \mathcal{P}, c)$ when adding $h$ into $\mathcal{P}$, and $h(\mathcal{P},c) - h(\mathcal{P}\cup \{h\}, c)$ is the decrease of $h(\mathcal{P},c)$ when adding $h$ into $\mathcal{P}$. Both $g( \mathcal{P}, c)$ and $h(\mathcal{P},c)$ are non-increasing when adding $h\in \tilde{\mathcal{H}}$ into $\mathcal{P}$ because adding a new boundary $h$ may only exclude some graphs from the convex polytope $r(\mathcal{P}, c)$. Intuitively, in each iteration, Equation~\eqref{eq:greedy1} selects a boundary $h\in \tilde{\mathcal{H}}$ such that adding $h$ into $\mathcal{P}$ reduces $g( \mathcal{P}, c)$ the least and reduces $h(\mathcal{P},c)$ the most. In this way, we can quickly reduce $h(\mathcal{P},c)$ to be smaller than $\delta$ without decreasing $g( \mathcal{P}, c)$ too much, which produces a good feasible solution to the practical problem. 
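The greedy rule of Equation~\eqref{eq:greedy1} can be sketched by abstracting each candidate boundary as the set of sample ids it keeps inside its half-space (a toy abstraction, not the paper's implementation):

```python
# Greedy boundary selection sketch: pick the boundary minimizing
# (decrease in g + EPS) / (decrease in h) until at most `delta`
# wrongly-classified samples remain covered.
EPS = 1e-6

def greedy_select(keeps, pos, neg, delta=0):
    # keeps: {name: set of kept sample ids}; pos / neg: ids predicted as
    # class c / as other classes.
    covered, chosen = pos | neg, []
    while len(covered & neg) > delta:
        def ratio(name):
            keep = keeps[name]
            dg = len(covered & pos) - len(covered & keep & pos)
            dh = len(covered & neg) - len(covered & keep & neg)
            return (dg + EPS) / dh if dh > 0 else float("inf")
        best = min((h for h in keeps if h not in chosen), key=ratio)
        chosen.append(best)
        covered &= keeps[best]
    return chosen, covered

pos, neg = {1, 2, 3, 4}, {5, 6}
keeps = {"h1": {1, 2, 3, 4, 6},    # drops only the negative sample 5
         "h2": {1, 2, 5, 6},       # drops half of the positives
         "h3": {1, 2, 3, 4, 5}}    # drops only the negative sample 6
chosen, covered = greedy_select(keeps, pos, neg)
print(sorted(chosen), covered)  # h1 and h3 are picked; all positives kept
```

Boundaries that cut away many positives ("h2" above) are never chosen, mirroring how the ratio in Equation~\eqref{eq:greedy1} trades coverage against purity.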
We add a small constant $\epsilon$ to the numerator such that, when there are multiple candidates of $h$ that do not decrease $g( \mathcal{P}, c)$, we can still select the $h$ that reduces $h(\mathcal{P},c)$ the most. We apply a peeling-off strategy to iteratively extract multiple decision regions. For each class $c\in C$, we first solve the practical problem once to find a decision region $r(\mathcal{P}, c)$, then we remove the graphs covered by $r(\mathcal{P}, c)$ from $D_c$. If there are remaining graphs predicted as class $c$, we continue finding the decision regions using the remaining graphs until all the graphs in $D_c$ are removed. When all the graphs in $D_c$ are removed for each class $c\in C$, we stop the iteration and return the set of decision regions we found. \subsection{Producing Explanations} In this section, we introduce how to use the LDBs of decision regions to train a neural network that produces a robust counterfactual explanation as a small subset of edges of an input graph. We form explanations as a subset of edges because GNNs make decisions by aggregating messages passed on edges. Using edges instead of vertices as explanations can provide better insights into the decision logic of GNNs. \subsubsection{The Neural Network Model} Denote by $f_\theta$ the neural network to generate a subset of edges of an input graph $G$ as the robust counterfactual explanation on the prediction $\phi(G)$. $\theta$ represents the set of parameters of the neural network. For experiments, our explanation network $f_\theta$ consists of two fully connected layers with a ReLU activation and a hidden dimension of 64. For any two connected vertices $v_i$ and $v_j$ of $G$, denote by $\mathbf{z}_i$ and $\mathbf{z}_j$ the embeddings produced by the last convolution layer of the GNN for the two vertices, respectively. 
The neural network $f_\theta$ takes $\mathbf{z}_i$ and $\mathbf{z}_j$ as the input and outputs the probability for the edge between $v_i$ and $v_j$ to be part of the explanation. This can be written as \begin{equation}\label{eq:mlp} \begin{split} \mathbf{M}_{ij} = f_\theta(\mathbf{z}_i, \mathbf{z}_j), \end{split} \end{equation} where $\mathbf{M}_{ij}$ denotes the probability that the edge between $v_i$ and $v_j$ is contained in the explanation. When there is no edge between $v_i$ and $v_j$, that is, $\mathbf{A}_{ij} = 0$, we set $\mathbf{M}_{ij}=0$. For an input graph $G=\{V, E\}$ with $n$ vertices and a trained neural network $f_\theta$, $\mathbf{M}$ is an $n$-by-$n$ matrix that carries the complete information to generate a robust counterfactual explanation as a subset of edges, denoted by $S\subseteq E$. Concretely, we obtain $S$ by selecting all the edges in $E$ whose corresponding entries in $\mathbf{M}$ are larger than 0.5. \subsubsection{Training Model $f_\theta$} For an input graph $G=(V, E)$, denote by $S\subseteq E$ the subset of edges produced by $f_\theta$ to explain the prediction $\phi(G)$, our goal is to train a good model $f_\theta$ such that the prediction on the subgraph $G_S$ induced by $S$ from $G$ is consistent with $\phi(G)$; and deleting the edges in $S$ from $G$ produces a remainder subgraph $G_{E\setminus S}$ such that the prediction on $G_{E\setminus S}$ changes significantly from $\phi(G)$. Since producing $S$ by $f_\theta$ is a discrete operation that is hard to incorporate in an end-to-end training process, we define two proxy graphs to approximate $G_S$ and $G_{E\setminus S}$, respectively, such that the proxy graphs are determined by $\theta$ through continuous functions that can be smoothly incorporated into an end-to-end training process. The proxy graph of $G_S$, denoted by $G_\theta$, is defined by regarding $\mathbf{M}$ instead of $\mathbf{A}$ as the adjacency matrix. 
That is, $G_\theta$ has exactly the same graph structure as $G$, but the edge weights of $G_\theta$ are given by the entries in $\mathbf{M}$ instead of $\mathbf{A}$. Here, the subscript $\theta$ means $G_\theta$ is determined by $\theta$. The proxy graph of $G_{E\setminus S}$, denoted by $G'_\theta$, also has the same graph structure as $G$, but the edge weight between each pair of vertices $v_i$ and $v_j$ is defined as \begin{equation} \mathbf{M}'_{ij}=\left\{ \begin{split} & 1 - \mathbf{M}_{ij} & \textrm{ if } \mathbf{A}_{ij} = 1 \\ & 0 & \textrm{ if } \mathbf{A}_{ij} = 0\\ \end{split} \right. \end{equation} The edge weights of both $G_\theta$ and $G'_\theta$ are determined by $\theta$ through continuous functions, thus we can smoothly incorporate $G_\theta$ and $G'_\theta$ into an end-to-end training framework. As discussed later in this section, we use a regularization term to force the value of each entry of $\mathbf{M}$ to be close to either 0 or 1, such that $G_\theta$ and $G'_\theta$ better approximate $G_{S}$ and $G_{E\setminus S}$, respectively. We formulate our loss function as \begin{equation}\label{eq:loss_net} \mathcal{L}(\theta) = \sum_{G \in D} \left\{\lambda\mathcal{L}_{same}(\theta, G) + (1-\lambda)\mathcal{L}_{opp}(\theta, G) + \beta\mathcal{R}_{sparse}(\theta, G) + \mu \mathcal{R}_{discrete}(\theta, G)\right\}, \end{equation} where $\lambda\in[0, 1]$, $\beta \geq 0$ and $\mu\geq 0$ are the hyperparameters controlling the importance of each term. The influence of these parameters is discussed in Appendix~\ref{sec:apx_hyper}. The first term of our loss function requires that the prediction of the GNN on $G_\theta$ is consistent with the prediction on $G$. Intuitively, this means that the edges with larger weights in $G_\theta$ dominate the prediction on $G$. We formulate this term by requiring $G_\theta$ to be covered by the same decision region covering $G$. 
Denote by $\mathcal{H}_G$ the set of LDBs that induce the decision region covering $G$, and by $|\mathcal{H}_G|$ the number of LDBs in $\mathcal{H}_G$. For the $i$-th LDB $h_i \in \mathcal{H}_G$, let $\mathcal{B}_i(\mathbf{x})=\mathbf{w}_i^\top\mathbf{x}+b_i$, where $\mathbf{w}_i$ and $b_i$ are the basis and bias of $h_i$, respectively, and $\mathbf{x}\in \mathbb{O}^d$ is a point in the space $\mathbb{O}^d$. The sign of $\mathcal{B}_i(\mathbf{x})$ indicates whether a point $\mathbf{x}$ lies on the positive side or the negative side of $h_i$, and the absolute value $|\mathcal{B}_i(\mathbf{x})|$ is proportional to the distance of a point $\mathbf{x}$ from $h_i$. Denoting by $\sigma(\cdot)$ the standard sigmoid function, we formulate the first term of our loss function as \begin{equation}\label{eq:loss_same} \begin{split} \mathcal{L}_{same}(\theta, G) = \frac{1}{|\mathcal{H}_G|}\sum_{ h_i\in\mathcal{H}_G} \sigma\left(-\mathcal{B}_i(\phi_{gc}(G))* \mathcal{B}_i(\phi_{gc}(G_\theta))\right), \end{split} \end{equation} such that minimizing $\mathcal{L}_{same}(\theta, G)$ encourages the graph embeddings $\phi_{gc}(G)$ and $\phi_{gc}(G_\theta)$ to lie on the same side of every LDB in $\mathcal{H}_G$. Thus, $G_\theta$ is encouraged to be covered by the same decision region covering $G$. The second term of our loss function optimizes the counterfactual property of the explanations by requiring the prediction on $G'_\theta$ to be significantly different from the prediction on $G$. Intuitively, this means that the set of edges with larger weights in $G_\theta$ is a good counterfactual explanation, because reducing the weights of these edges significantly changes the prediction.
Following the above intuition, we formulate the second term as \begin{equation}\label{eq:loss_opp} \begin{split} \mathcal{L}_{opp}(\theta, G) = \min_{h_i\in\mathcal{H}_G} \sigma\left(\mathcal{B}_i(\phi_{gc}(G))* \mathcal{B}_i(\phi_{gc}(G'_\theta))\right), \end{split} \end{equation} such that minimizing $\mathcal{L}_{opp}(\theta, G)$ encourages the graph embeddings $\phi_{gc}(G)$ and $\phi_{gc}(G'_\theta)$ to lie on opposite sides of at least one LDB in $\mathcal{H}_G$. This further means that $G'_\theta$ is encouraged not to be covered by the decision region covering $G$; thus, the prediction on $G'_\theta$ can be changed significantly from the prediction on $G$. Similarly to \citep{NEURIPS2019_d80b7040}, we use an L1 regularization $ \mathcal{R}_{sparse}(\theta, G) = \|\mathbf{M}\|_1 $ on the matrix $\mathbf{M}$ produced by $f_\theta$ on an input graph $G$ to keep $\mathbf{M}$ sparse, such that only a small number of edges in $G$ are selected as the counterfactual explanation. We also follow \citep{NEURIPS2019_d80b7040} to use an entropy regularization \begin{equation} \mathcal{R}_{discrete}(\theta, G) = -\frac{1}{|\mathbf{M}|}\sum_{i,j}(\mathbf{M}_{ij}\log(\mathbf{M}_{ij})+(1-\mathbf{M}_{ij})\log(1-\mathbf{M}_{ij})) \end{equation} to push each entry $\mathbf{M}_{ij}$ to be close to either 0 or 1, such that $G_\theta$ and $G'_\theta$ approximate $G_{S}$ and $G_{E\setminus S}$ well, respectively. Now we can use the graphs in $D$ and the extracted decision regions to train the neural network $f_\theta$ in an end-to-end manner by minimizing $\mathcal{L}(\theta)$ over $\theta$ using backpropagation. Once we finish training $f_\theta$, we can first apply $f_\theta$ to produce the matrix $\mathbf{M}$ for an input graph $G=(V, E)$, and then obtain the explanation $S$ by selecting all the edges in $E$ whose corresponding entries in $\mathbf{M}$ are larger than 0.5.
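The loss terms above can be sketched in a few lines (a minimal NumPy illustration, not the released implementation; the boundary bases $\mathbf{w}_i$ are assumed to be stacked as the rows of \texttt{W}, with biases in \texttt{b}):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loss_same(W, b, emb_g, emb_g_theta):
    """L_same: average over LDBs of sigmoid(-B_i(phi(G)) * B_i(phi(G_theta)));
    small when both embeddings lie on the same side of every boundary."""
    bg = W @ emb_g + b          # B_i values for G
    bt = W @ emb_g_theta + b    # B_i values for G_theta
    return float(np.mean(sigmoid(-bg * bt)))

def loss_opp(W, b, emb_g, emb_g_prime):
    """L_opp: min over LDBs of sigmoid(B_i(phi(G)) * B_i(phi(G'_theta)));
    small when at least one boundary separates the two embeddings."""
    return float(np.min(sigmoid((W @ emb_g + b) * (W @ emb_g_prime + b))))

def reg_sparse(M):
    """L1 regularizer encouraging a sparse edge mask."""
    return float(np.abs(M).sum())

def reg_discrete(M, eps=1e-12):
    """Mean binary entropy pushing mask entries toward 0 or 1."""
    M = np.clip(M, eps, 1.0 - eps)
    return float(-np.mean(M * np.log(M) + (1 - M) * np.log(1 - M)))
```

Note that when the two embeddings agree on every boundary, `loss_same` is driven toward 0, while a single crossed boundary is enough to drive `loss_opp` toward 0, mirroring the mean-versus-min structure of the two equations.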
We do not need the extracted boundaries for inference as the decision logic of the GNN is already distilled into the explanation network $f_\theta$ during training. As discussed in Appendix~\ref{sec:apx_node_cls}, our method can be easily extended to generate robust counterfactual explanations for node classification tasks. Our method is highly efficient with a time complexity $O(|E|)$ for explaining the prediction on an input graph $G$, where $|E|$ is the total number of edges in $G$. Additionally, the neural network $f_\theta$ can be directly used without retraining to predict explanations on unseen graphs. Thus, our method is significantly faster than the other methods \cmmnt{\citep{NEURIPS2019_d80b7040,pope2019explainability, yuan2021explainability,NEURIPS2020_8fb134f2}} \citep{NEURIPS2019_d80b7040,pope2019explainability,NEURIPS2020_8fb134f2} that require retraining each time they generate explanations on a new input graph. \section{Experiments} \label{sec:experiment} We conduct a series of experiments to compare our method with the state-of-the-art methods including GNNExplainer~\citep{NEURIPS2019_d80b7040}, PGExplainer~\citep{NEURIPS2020_e37b08dd}, PGM-Explainer~\citep{NEURIPS2020_8fb134f2}\cmmnt{, SubgraphX~\citep{yuan2021explainability}} and CF-GNNExplainer~\citep{lucic2021cf}. For the methods that identify a set of vertices as an explanation, we use the set of vertices to induce a subgraph from the input graph, and then use the set of edges of the induced subgraph as the explanation. For the methods that identify a subgraph as an explanation, we directly use the set of edges of the identified subgraph as the explanation. To demonstrate the effectiveness of the decision regions, we derive another baseline method named RCExp-NoLDB that adopts the general framework of RCExplainer but does not use the LDBs of decision regions to generate explanations.
Instead, RCExp-NoLDB directly maximizes the prediction confidence on class $c$ for $G_\theta$ and minimizes the prediction confidence on class $c$ for $G'_\theta$. We evaluate the explanation performance on two typical tasks: the graph classification task that uses a GNN to predict the class of an input graph, and the node classification task that uses a GNN to predict the class of a graph node. For the graph classification task, we use one synthetic dataset, BA-2motifs~\citep{NEURIPS2020_e37b08dd}, and two real-world datasets, Mutagenicity~\citep{kazius2005derivation} and NCI1~\citep{4053093}. For the node classification task, we use the same four synthetic datasets as used by GNNExplainer~\cite{NEURIPS2019_d80b7040}, namely, {BA-\sc{shapes}}, {BA-\sc{Community}}, {\sc{tree-cycles}} and {\sc{tree-grid}}. Limited by space, we report here only the key results on the graph classification task for fidelity, robustness and efficiency. Please refer to Appendix~\ref{sec:apx_impldetails} for details on datasets, baselines and the experiment setups. A detailed experimental comparison on the node classification task is given in Appendix~\ref{sec:apx_exp}, where we show that our method produces extremely accurate explanations. CF-GNNExplainer~\cite{lucic2021cf} is included only in the node classification results, because the source code of CF-GNNExplainer is not available and~\cite{lucic2021cf} reports performance only on node classification tasks. \subsection{Fidelity} \textbf{Fidelity} is measured by the decrease of prediction confidence after removing the explanation (i.e., a set of edges) from the input graph~\citep{pope2019explainability}. We use fidelity to evaluate how counterfactual the generated explanations are on the datasets Mutagenicity, NCI1 and BA-2motifs. A large fidelity score indicates stronger counterfactual characteristics. It is important to note that fidelity may be sensitive to the sparsity of explanations.
The sparsity of an explanation $S$ with respect to an input graph $G=(V, E)$ is $sparsity(S, G) =1 - \frac{|S|}{|E|}$, that is, the fraction of edges remaining after the explanation is removed from $G$. We only compare explanations with the same level of sparsity. Figure~\ref{fig:fidelity} shows the fidelity results. Our approach achieves the best fidelity performance at all levels of sparsity. The results validate the effectiveness of our method in producing highly counterfactual explanations. RCExplainer also significantly outperforms RCExp-NoLDB. This confirms that using LDBs of decision regions extracted from GNNs produces more faithful counterfactual explanations. \cmmnt{SubgraphX does not perform as well as reported by \citet{yuan2021explainability}. The fidelity performance reported by \citet{yuan2021explainability} is obtained by setting the features of nodes that are part of the explanation to $0$ but not removing the explanation edges from the input graph. This does not remove the message passing roles of the explanation nodes from the input graph because the edges connected to those nodes can still pass messages. In our experiments, we fix this problematic setting by directly blocking the messages that are passed on the edges in the explanation. Appendix~\ref{sec:apx_impldetails} provides more details.} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/nosubx_last_fidelity_rcexplainer.pdf} \end{center} \caption{Fidelity performance averaged across 10 runs for the datasets at different levels of sparsity.
} \label{fig:fidelity} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/consistent_robust_hzlegend_k8_rcexplainer.pdf} \end{center} \caption{Noise robustness (AUC) averaged across 10 runs for the datasets at different levels of noise.} \label{fig:robustness} \end{figure*} \subsection{Robustness Performance} In this experiment, we evaluate the robustness of all methods by quantifying how much an explanation changes after adding noise to the input graph. For an input graph $G$ and the explanation $S$, we produce a perturbed graph $G'$ by adding random noise to the node features and randomly adding or deleting some edges of the input graph such that the prediction on $G'$ is consistent with the prediction on $G$. Using the same method, we obtain the explanation $S'$ on $G'$. Considering the top-$k$ edges of $S$ as the ground-truth and comparing $S'$ against them, we compute a receiver operating characteristic (ROC) curve and evaluate the robustness by the area under the curve (AUC) of the ROC curve. We report results for $k=8$ in Figure~\ref{fig:robustness}. Results for other values of $k$ are included in Appendix~\ref{sec:apx_exp}, where we observe a similar trend. Figure~\ref{fig:robustness} shows the AUC of GNNExplainer, PGExplainer, RCExp-NoLDB and RCExplainer at different levels of noise. A higher AUC indicates better robustness. The noise percentage indicates the proportion of nodes and edges that are modified.\cmmnt{Baselines such as PGM-Explainer and SubgraphX are not included in this experiment as they do not output the edge weights that are required for computing AUC.} PGM-Explainer is not included in this experiment as it does not output the edge weights that are required for computing AUC. We present additional robustness experiments in Appendix~\ref{sec:apx_exp} where we extend all the baselines to report node- and edge-level accuracy.
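The AUC computation just described can be sketched as follows (our own minimal version, using the Mann-Whitney formulation of ROC AUC; the function and variable names are illustrative, not from the released code):

```python
def topk_robustness_auc(weights_clean, weights_noisy, k):
    """Treat the top-k edges of the clean explanation as positives and the
    noisy-graph edge weights as classification scores; return the ROC AUC
    as the probability that a positive edge outscores a negative one."""
    order = sorted(range(len(weights_clean)),
                   key=lambda i: weights_clean[i], reverse=True)
    positives = set(order[:k])
    pos = [weights_noisy[i] for i in positives]
    neg = [weights_noisy[i] for i in range(len(weights_noisy))
           if i not in positives]
    # Mann-Whitney statistic: ties count as half a win
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

If the explanation on the noisy graph ranks the same top-$k$ edges highest, the AUC is 1; random edge weights drive it toward 0.5.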
GNNExplainer performs the worst on most of the datasets, since it optimizes each graph independently without considering other graphs in the training set. Even when no noise is added, the AUC of GNNExplainer is significantly lower than 1 because different runs produce different explanations for the same graph prediction. PGExplainer is more robust than GNNExplainer because the neural network it trains to produce explanations implicitly considers all the graphs used for training. Our method achieves the best AUC on all the datasets, because the common decision logic carried by the decision regions of a GNN is highly robust to noise. PGExplainer achieves performance comparable to our method on the Mutagenicity dataset, because the samples of this dataset share a lot of common structures such as carbon rings, which makes it easier for the neural network trained by PGExplainer to identify these structures in the presence of noise. However, for BA-2motifs and NCI1, this is harder as samples share very few structures, and thus the AUC of PGExplainer drops significantly. RCExplainer also significantly outperforms RCExp-NoLDB on these datasets, which highlights the role of decision boundaries in making our method highly robust.
\cmmnt{ \begin{table}[t] \small \begin{center} \begin{tabular}{cccccc} \toprule {\bf{Method}} & {GNNExplainer} & {PGExplainer} & {PGM-Explainer} & {SubgraphX} & {RCExplainer} \\ \midrule \bf{Time} & 1.2s $\pm$ 0.2 & \textbf{0.01s} $\pm$ 0.03 & 13.1s $\pm$ 3.9 & 77.8s $\pm$ 4.5 & \textbf{0.01s} $\pm$ 0.02\\ \bottomrule \end{tabular} \end{center} \caption{Average time cost for producing an explanation on a single graph sample.} \label{table:time} \end{table} } \begin{table}[t] \small \begin{center} \begin{tabular}{ccccc} \toprule {\bf{Method}} & {GNNExplainer} & {PGExplainer} & {PGM-Explainer} & {RCExplainer} \\ \midrule \bf{Time} & 1.2s $\pm$ 0.2 & \textbf{0.01s} $\pm$ 0.03 & 13.1s $\pm$ 3.9 & \textbf{0.01s} $\pm$ 0.02\\ \bottomrule \end{tabular} \end{center} \caption{Average time cost for producing an explanation on a single graph sample.} \label{table:time} \end{table} \subsection{Efficiency} We evaluate efficiency by comparing the average computation time taken for inference on unseen graph samples. Table \ref{table:time} shows the results on the Mutagenicity dataset. Since our method can also be used directly on unseen data without any retraining, it is as efficient as PGExplainer and significantly faster than GNNExplainer \cmmnt{,} and PGM-Explainer\cmmnt{ and SubgraphX}. \section{Conclusion} \label{sec:conclude} In this paper, we develop a novel method for producing counterfactual explanations on GNNs. We extract decision boundaries from the given GNN model to formulate an intuitive and effective counterfactual loss function. We optimize this loss to train a neural network to produce explanations with strong counterfactual characteristics. Since the decision boundaries are shared by multiple samples of the same predicted class, explanations produced by our method are robust and do not overfit the noise. Our experiments on synthetic and real-life benchmark datasets strongly validate the efficacy of our method.
In this work, we focus on GNNs that belong to Piecewise Linear Neural Networks (PLNNs). Extending our method to other families of GNNs and to tasks such as link prediction remains an interesting future direction. Our method will benefit multiple fields where GNNs are intensively used. By allowing users to better interpret the predictions of complex GNNs, it will promote transparency, trust and fairness in society. However, there also exist some inherent risks. A generated explanation may expose private information if our method is not coupled with an adequate privacy protection technique. Also, some of the ideas presented in this paper may be adopted and extended to improve adversarial attacks. Without appropriate defense mechanisms, the misuse of such attacks poses a risk of disruption in the functionality of GNNs deployed in the real world. That said, we firmly believe that these risks can be mitigated through increased awareness and proactive measures. \section{Illustration of RCExplainer's training} \label{sec:apx_key} \begin{figure*}[h] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/illustration.pdf} \end{center} \caption{For the training of RCExplainer, decision boundaries are extracted from the feature space of graph embeddings after the last graph convolution layer. After processing, a subset of boundaries is obtained and used to train an explanation neural network that takes edge activations from the convolution layers of the GNN as input and predicts a mask over the adjacency matrix for the given graph sample. The counterfactual loss is used to optimize the explanation network.} \label{fig:training} \end{figure*} \section{Node classification} \label{sec:apx_node_cls} Our method is directly applicable to the task of node classification with a few simple modifications.
Instead of extracting Linear Decision Boundaries (LDBs) in the feature space of graph embeddings, we operate on the feature space of node embeddings obtained after the last graph convolution layer. We use the greedy method described in Equation~\eqref{eq:greedy1} to find the decision regions for each class, except that, for node classification, the functions $g(\cdot)$ and $h(\cdot)$ denote the coverage of nodes rather than graphs. The next step, training the explanation network $f_\theta$ to generate counterfactual explanations for node classification, is identical to the procedure described in Section~\ref{sec:method} except for one difference. For node classification, since a node's prediction is influenced only by its local neighborhood, we only need to consider the computation graph of the given node while generating the explanation. The computation graph of a node is defined as the $k$-hop neighborhood of the node, where $k$ is the number of graph convolution layers in the given GNN model $\phi$. In other words, the GNN performs $k$ steps of message passing through its graph convolution layers during the forward pass to effectively convolve the $k$-hop neighborhood of the given node. Hence, the output of $f_\theta$ is a mask over the adjacency matrix of the computation graph of the given node. The edges with mask values larger than 0.5 are chosen from the computation subgraph to form the explanation subset $S$ that can explain the original node classification prediction. \section{Interpreting individual boundaries} \label{sec:apx_bdry} \begin{figure*}[h] \begin{center} \includegraphics[width=0.8\linewidth]{figures/case.png} \end{center} \caption{(a) Three classes and the motifs associated with each class are shown. All samples of the same class contain the same two motifs. (b) Explanation results for an instance of class $c_1$ are shown w.r.t.\ both boundaries separately.
In both cases, RCExplainer correctly identifies the motif (highlighted in black) that is associated with class $c_1$ but not with the class that lies on the other side of the given boundary.} \label{fig:case} \end{figure*} We present a case study to demonstrate that our method can be adapted to answer the question, \textit{``Which substructures make the samples of one class different from the samples of another specific class, and therefore can be masked to flip the prediction between the two given classes?''}. This is useful in various fields; for instance, in drug discovery, where the classes correspond to different possible chemical properties of a drug compound, researchers are often interested in understanding the role of chemical structures that result in a prediction of a particular property instead of another specific one. Also, this is especially helpful for debugging in cases where one expects a particular output for the given input but the GNN's prediction does not agree with the expectation. This case corresponds to a more constrained setting of counterfactual explanations as the target prediction is also predetermined. Let the original predicted label and the target label on a given graph $G$ be denoted by $c_i$ and $c_j$, respectively. Since our method explicitly models the boundaries separating the samples of one class from the samples of other classes, our method can be easily adapted to answer such questions. If we can interpret only the boundary separating the samples of the given two classes, this would allow us to uncover the substructures that make the samples of the first class different from those of the other class.
To address this, we modify the loss terms to \begin{equation}\label{eq:loss_same2} \begin{split} \mathcal{L}_{same}(\theta, G) = \sigma(-\mathcal{B}_{ij}(\phi_{gc}(G))* \mathcal{B}_{ij}(\phi_{gc}(G_\theta))), \end{split} \end{equation} \begin{equation}\label{eq:loss_opp2} \begin{split} \mathcal{L}_{opp}(\theta, G) = \sigma(\mathcal{B}_{ij}(\phi_{gc}(G))* \mathcal{B}_{ij}(\phi_{gc}(G'_\theta))), \end{split} \end{equation} where $\mathcal{B}_{ij}$ refers to the specific boundary in the set $\mathcal{P}$ separating the samples with predicted label $c_i$ from the samples with predicted label $c_j$. Since we are only concerned about changing the outcome from $c_i$ to $c_j$, we need to consider only the boundary separating these classes while formulating the loss for the network. We verify this on a synthetic graph classification dataset with 3 classes, $c_1$, $c_2$ and $c_3$, such that each graph sample contains exactly 2 motifs. Both motifs jointly determine the class because each possible pair of classes shares exactly one motif, as shown in Figure~\ref{fig:case}(a). We show explanation results produced by RCExplainer on an instance of class $c_1$ in Figure~\ref{fig:case}(b). For a given graph sample of class $c_1$, we separately find explanations with respect to each of the two boundaries $\mathcal{B}_{12}$ and $\mathcal{B}_{13}$: $\mathcal{B}_{12}$ separates $c_1$ from $c_2$, while $\mathcal{B}_{13}$ separates $c_1$ from $c_3$. We can see in Figure~\ref{fig:case}(b) that optimizing our method w.r.t.\ $\mathcal{B}_{12}$ correctly identifies the motif (ABCD) in the sample that is not associated with class $c_2$. The other motif (EFGH), which is also associated with $c_2$, is not considered important by the method. When we find the explanation for the same graph sample but with respect to the boundary $\mathcal{B}_{13}$, the results are the opposite and align with our expectations.
In this case, the motif (EFGH) that is not associated with $c_3$ is highlighted instead of the motif (ABCD). We observe similar behavior on the instances of other classes, where interpreting an instance with respect to a single boundary correctly identifies the motif that distinguishes the given class from the other class. In conclusion, the above case study demonstrates that our method can highlight the motif unique to the class $c_i$ by interpreting the boundary $\mathcal{B}_{ij}$ separating the classes $c_i$ and $c_j$. Removing the highlighted motif from the given sample causes a drop in the confidence of the original predicted label $c_i$ while increasing the confidence of the class $c_j$. \section{Proof: Decision region extraction is an instance of SCSC optimization} \label{sec:apx_proof} Now we prove that the optimization problem in Equation~\eqref{eq:practical} is an instance of the Submodular Cover Submodular Cost (SCSC) problem. Equation~\eqref{eq:practical} can be written as \begin{equation}\label{eq:practical2} \min_{\mathcal{P} \subseteq \tilde{\mathcal{H}}} D_c-g(\mathcal{P}, c), \textrm{ s.t. } D'_c-h(\mathcal{P}, c) \geq D'_c-\delta. \end{equation} Maximizing $g(\mathcal{P}, c)$ means maximizing the coverage, by the set of boundaries $\mathcal{P}$, of the samples of class $c$, denoted by $D_c$. This can be seen as minimizing $D_c-g(\mathcal{P}, c)$, which denotes the number of graph samples of class $c$ that are not covered by $\mathcal{P}$ and thus exclusive to $D_c$. $D'_c$ in the constraint is equal to $D-D_c$, the set of graph samples in the dataset $D$ that do not belong to class $c$. Let us denote $D_c-g(\mathcal{P}, c)$ by the function $g'(\mathcal{P})$ and $D'_c-h(\mathcal{P}, c)$ by $h'(\mathcal{P})$. To prove that the optimization problem in Equation~\eqref{eq:practical2} is an instance of the SCSC problem, we prove that the functions $g'(\mathcal{P})$ and $h'(\mathcal{P})$ are submodular with respect to $\mathcal{P}$.
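Before the formal argument, the diminishing-returns inequality in question can be checked numerically on a toy configuration (a sketch with made-up points and halfspaces, not data from the paper; \texttt{uncovered} plays the role of $g'$):

```python
import numpy as np

def uncovered(points, boundaries):
    """g'(P): number of class-c sample points falling outside the polytope
    obtained by keeping the positive side of every chosen boundary (w, b)."""
    inside = np.ones(len(points), dtype=bool)
    for w, b in boundaries:
        inside &= (points @ np.asarray(w) + b) >= 0
    return int(len(points) - inside.sum())

# four toy sample points and two halfspace boundaries
pts = np.array([[0.5, 0.5], [2.0, 0.5], [0.5, 2.0], [2.0, 2.0]])
h1 = ([-1.0, 0.0], 1.0)    # keeps x <= 1
h  = ([-1.0, -1.0], 1.5)   # keeps x + y <= 1.5
P, Q = [], [h1]            # P is a subset of Q

# marginal increase in uncovered samples when h is added to each set
gain_P = uncovered(pts, P + [h]) - uncovered(pts, P)
gain_Q = uncovered(pts, Q + [h]) - uncovered(pts, Q)
# submodularity: the marginal cost of h w.r.t. P is at least that w.r.t. Q
```

Here `gain_P >= gain_Q` holds because the polytope induced by the larger set $\mathcal{Q}$ already covers fewer points, so the new boundary has fewer points left to cut away, which is exactly the inequality proved next.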
To show that the function $g'(\mathcal{P})$ is submodular with respect to $\mathcal{P}$, we show that for any two sets of LDBs denoted by $\mathcal{P} \subseteq \tilde{\mathcal{H}}$ and $\mathcal{Q} \subseteq \tilde{\mathcal{H}}$, if $\mathcal{P} \subseteq \mathcal{Q}$ then \begin{equation}\label{eq:submod} g'(\mathcal{P}+\{h\})-g'(\mathcal{P}) \geq g'(\mathcal{Q}+\{h\})-g'(\mathcal{Q}) \end{equation} is always satisfied for a linear decision boundary $h \in \tilde{\mathcal{H}}\setminus\mathcal{Q}$. As discussed in Section~\ref{sec:method}, the LDBs in $\mathcal{P}$ induce a convex polytope $r(\mathcal{P}, c)$ that has the maximum coverage of samples of class $c$. Adding a new boundary $h$ to $\mathcal{P}$ may remove (separate) some samples of class $c$ from $r(\mathcal{P}, c)$ and lower its coverage. This reduction in coverage is denoted by the term $g'(\mathcal{P}+\{h\})-g'(\mathcal{P})$ on the left-hand side of Equation~\eqref{eq:submod}. Similarly, the term $g'(\mathcal{Q}+\{h\})-g'(\mathcal{Q})$ on the right-hand side of Equation~\eqref{eq:submod} denotes the reduction in coverage for the subset $\mathcal{Q}$. Now, since $\mathcal{P} \subseteq \mathcal{Q}$, the set of graph samples contained in the polytope $r(\mathcal{Q}, c)$ is a subset of the graph samples contained in the polytope $r(\mathcal{P}, c)$. Hence, adding a new LDB $h$ to $\mathcal{P}$ removes at least as many samples from the polytope $r(\mathcal{P}, c)$ as adding it to $\mathcal{Q}$ removes from the polytope $r(\mathcal{Q}, c)$. Therefore, the function $g'(\mathcal{P})$ is submodular with respect to $\mathcal{P}$. Similarly, we can prove that the function $h'(\mathcal{P})$ is submodular with respect to $\mathcal{P}$. This concludes the proof. \section{Implementation details} \label{sec:apx_impldetails} \paragraph{Datasets.} Table~\ref{table:datasets_props} shows the properties of all the datasets used in the experiments.
The last row corresponds to the test accuracy of the GCN model we train on the corresponding dataset. \begin{table}[h] \small \begin{center} \begin{tabular}{l@{\hskip 0.07in}c@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}c} \toprule & \begin{tabular}[t]{@{}c@{}}BA- \\ \sc{shapes} \end{tabular} & \begin{tabular}[t]{@{}c@{}}BA- \\ \sc{Community} \end{tabular} & \begin{tabular}[t]{@{}c@{}}\sc{tree-} \\ \sc{cycles} \end{tabular} & \begin{tabular}[t]{@{}c@{}}\sc{tree-} \\ \sc{grid} \end{tabular} & \begin{tabular}[t]{@{}c@{}}BA- \\ 2motifs \end{tabular} & \raisebox{-.5\height}{Mutagenicity} & \raisebox{-.5\height}{NCI1}\\ \midrule \# of Nodes (avg) & 700 & 1400 & 871 & 1020 & 25 & 30.32 & 29.87\\ \# of Edges (avg) & 2050 & 4460 & 970 & 2540 & 25.48 & 30.77 & 32.30\\ \# of Graphs & 1 & 1 & 1 & 1 & 700 & 4337 & 4110 \\ \# of Classes & 4 & 8 & 2 & 2 & 2 & 2 & 2\\ Base & BA graph & BA graph & Tree & Tree & BA graph & --- & ---\\ \raisebox{-.5\height}{Motifs} & \raisebox{-.5\height}{House} & \raisebox{-.5\height}{House} & \raisebox{-.5\height}{Cycle} & \raisebox{-.5\height}{Grid} & \begin{tabular}[t]{@{}c@{}}House \& \\ Cycle \end{tabular} & \raisebox{-.5\height}{---} & \raisebox{-.5\height}{---} \\ License & Apache 2.0 & Apache 2.0 & Apache 2.0 & Apache 2.0 & --- & --- & --- \\ \midrule Test accuracy & 0.98 & 0.95 & 0.99 & 0.99 & 0.91 & 0.91 & 0.84\\ \bottomrule \end{tabular} \end{center} \caption{Properties of the datasets used and the test accuracy of the corresponding trained GCN models.} \label{table:datasets_props} \end{table} \paragraph{Baselines.} For the baselines, we use publicly available implementations provided in~\citep{NEURIPS2019_d80b7040,holdijk2021re,NEURIPS2020_8fb134f2,liu2021dig} to obtain the results.
The implementation of GNNExplainer provided by~\citep{NEURIPS2019_d80b7040} is licensed under the Apache 2.0 license \cmmnt{while the implementation of SubgraphX provided by~\citep{liu2021dig} is licensed under the GNU General Public License v3.0}. We use the default parameters provided by the authors for the baselines. For the local baseline RCExp-NoLDB, we use the same setup as RCExplainer except that we do not use LDBs for training the explanation network $f_\theta$. The loss function denoted by $\mathcal{L}_{conf}$ for this baseline aligns with the loss functions of GNNExplainer and PGExplainer except that we introduce a second term to enforce the counterfactual characteristics. We directly maximize the confidence of the original predicted class $c$ on the masked graph $G_{\theta}$ and minimize the confidence of the original predicted class for the remainder graph $G'_{\theta}$. $\mathcal{L}_{conf}$ can be expressed as: \begin{equation}\label{eq:loss_conf} \begin{split} \mathcal{L}_{conf}(\theta, G) = -\log(P_\phi(Y=c|X=G_\theta)) - \frac{\eta}{\log(P_\phi(Y=c|X=G'_\theta))}, \end{split} \end{equation} where $P_\phi(Y|X=G_x)$ corresponds to the conditional probability distribution learned by the GNN model $\phi$ for input graph $G_x$. $Y$ is the random variable over the set of classes $\mathcal{C}$, and $X$ is the random variable representing possible input graphs for the GNN $\phi$. Here, $\eta$ is a hyperparameter that represents the weight of the second term in the loss function. The loss $\mathcal{L}_{conf}$ is jointly minimized with the regularizers $\mathcal{R}_{sparse}$ and $\mathcal{R}_{discrete}$ specified in Section~\ref{sec:method}. \paragraph{Training details.} We follow \citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd} and use the same architecture to train a GNN model with 3 graph convolution layers for generating explanations on each dataset.
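The baseline objective $\mathcal{L}_{conf}$ defined above can be sketched numerically as follows (a standalone illustration; the clipping constant \texttt{eps} is our own safeguard against $\log 0$, not part of the stated formula):

```python
import math

def loss_conf(p_theta_c, p_prime_c, eta=1.0, eps=1e-12):
    """L_conf = -log P(c | G_theta) - eta / log P(c | G'_theta):
    low when G_theta keeps class-c confidence high while G'_theta loses it."""
    # clip probabilities away from 0 and 1 so both logs are finite and nonzero
    p_theta_c = min(max(p_theta_c, eps), 1.0 - eps)
    p_prime_c = min(max(p_prime_c, eps), 1.0 - eps)
    return -math.log(p_theta_c) - eta / math.log(p_prime_c)
```

A mask with the desired counterfactual behavior (confidence near 1 on $G_\theta$, near 0 on $G'_\theta$) yields a small loss, while a mask whose removal barely changes the prediction is penalized by the second term.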
Consistent with prior works~\citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd,NEURIPS2020_8fb134f2}, we use a (80/10/10)\% random split for training/validation/test for each dataset. We use the Adam optimizer to tune the parameters of $f_\theta$ and set the learning rate to $0.001$. We train our method for $600$ epochs. For node classification datasets, we set $\lambda$ to $0.85$, $\beta$ to $0.006$ and $\mu$ to $0.66$. For graph classification datasets, we set $\lambda$ to $0.1$ and $\mu$ to $0.66$. $\beta$ is set to $6\times 10^{-5}$ for BA-2motifs and NCI1, and to $6\times 10^{-4}$ for Mutagenicity. We also scale the combined loss by a factor of $15$ for all the datasets. The number of LDBs to be sampled from the GNN for each class is set to $50$. Empirically, we find that this is enough, as the subset of LDBs selected greedily from this set is able to cover all the samples of the given class. Our codebase is built on top of the implementations provided by \citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd}. All of the experiments are conducted on a Linux machine with an Intel i7-8700K processor and an Nvidia GeForce GTX 1080 Ti GPU with 11GB memory. Our code is implemented using Python 3.8.5 with PyTorch 1.8.1 and CUDA 10.0. \section{Additional experiments} \label{sec:apx_exp} \textbf{Fidelity.} As described in Section~\ref{sec:experiment}, the counterfactual characteristic of an explanation is measured by using fidelity as an evaluation metric. It is defined as the drop in confidence of the original predicted class after masking the produced explanation in the original graph~\citep{pope2019explainability}. Since we produce explanations as edges, we mask the edges in the input graph to calculate the drop.
Fidelity for the input graph $G$ and the produced explanation $S$ is formally written as \begin{equation}\label{eq:fidelity} \begin{split} fidelity(S,G) = P_\phi(Y=c|X=G)-P_\phi(Y=c|X=G_{E\setminus S}), \end{split} \end{equation} where $c$ denotes the class predicted by $\phi$ for $G$. As discussed in Section~\ref{sec:experiment}, explanations are most useful if they are sparse (concise). Sparsity is defined as the fraction of total edges that are present in $E$ but not in $S$: \begin{equation}\label{eq:sparsity} \begin{split} sparsity(S,G) =1-\frac{|S|}{|E|}. \end{split} \end{equation} However, since \cmmnt{the approaches like SubgraphX and} PGM-Explainer \cmmnt{do} does not report the importance ranking of edges of $G$, it is not feasible to completely control the edge sparsity of the desired explanation. Hence, we take samples with similar sparsity levels for comparison. Consistent with prior works~\citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd}, we compute fidelity for the samples that are labeled positive; for instance, in the Mutagenicity dataset, we compute fidelity for the compounds that are labeled as mutagenic. The results are presented in Figure~\ref{fig:fidelity}. \cmmnt{As reported in Figure~\ref{fig:fidelity}, the results obtained for SubgraphX are significantly lower than those reported by \citet{yuan2021explainability}. We believe this is the result of the problematic setting adopted in~\citep{yuan2021explainability} and implemented in~\citep{liu2021dig} for computing the fidelity. To be specific, while computing the drop in confidence, the features of the nodes present in the explanation are set to 0 without removing the edges incident on these nodes. As message passing is still allowed on these edges, the first graph convolution results in updating the representation of the nodes to non-zero.
As the features of these nodes are no longer zero, the subsequent graph convolutions would also allow these nodes to participate in updating the representations of their neighboring nodes.} \cmmnt{A simple fix to this problem is to mask the edges incident on these nodes while computing the fidelity. This would ensure that these nodes do not participate in message passing irrespective of the number of graph convolutions. Adopting this setting allows us to get the results reported in Figure~\ref{fig:fidelity}. We also note that masking only the edges and not setting the nodes to zero also yields similar performance for SubgraphX as reported in Figure~\ref{fig:fidelity}.} \begin{figure*}[h] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/dotted_robust_auc4_12.pdf} \end{center} \caption{Noise robustness (AUC) averaged across 10 runs for the datasets at different levels of noise.} \label{fig:robust_auck_4_12} \end{figure*} \begin{figure*}[h!] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/nosubx_dotted_robust_node_acc.pdf} \end{center} \caption{Noise robustness (node accuracy) averaged across 10 runs for the datasets at different levels of noise.} \label{fig:robust_nodeacc} \end{figure*} \textbf{Robustness to noise.} As discussed in Section~\ref{sec:experiment}, we use AUC to compare the robustness of different methods. AUC is defined as the area under the receiver operating characteristic (ROC) curve of a binary classifier. We consider the top-$k$ edges of the produced explanation $S$ for the input graph $G$ as ground truth. After we obtain the explanation $S'$ for the noisy sample $G'$, we formulate this as a binary classification problem.
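This edge-labeling and AUC computation can be sketched as follows (a dependency-free toy using the Mann--Whitney formulation of AUC; in our setup the scores would be the mask weights predicted by $f_\theta$, and the edge sets below are illustrative):

```python
def roc_auc(labels, scores):
    # Probability that a randomly chosen positive edge receives a higher
    # score than a randomly chosen negative edge (ties count 1/2).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def label_edges(noisy_edges, topk_edges):
    # An edge of G' is positive if it appears in the top-k edges of S.
    return [1 if e in topk_edges else 0 for e in noisy_edges]
```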
For $G'$, the mask weight of an edge predicted by the explanation network $f_\theta$ is the probability of the corresponding edge being classified as positive. Due to space constraints, we reported only the results for $k=8$ in Section~\ref{sec:experiment}. Now, we report the results for $k=4$ and $k=12$ in Figure~\ref{fig:robust_auck_4_12}, where we observe a trend similar to that in Figure~\ref{fig:robustness}. RCExplainer outperforms the rest of the methods by a big margin on BA-2motifs and NCI1. Since AUC evaluation requires that the explanation method outputs the importance weights for the edges of a noisy sample $G'$, we cannot use this for comparing \cmmnt{approaches like SubgraphX and} PGM-Explainer that \cmmnt{do} does not provide this information. Therefore, to provide a more comprehensive comparison, we use node accuracy as a measure to compare all the baselines. For calculating node accuracy, we consider the top-$k$ important nodes in the explanation for the original graph $G$ as ground truth and compare them with the top-$k$ important nodes obtained through the explanation for the noisy graph $G'$. However, the challenge is that GNNExplainer, PGExplainer, RCExp-NoLDB and RCExplainer do not rank nodes based on their importance. To address this, we use edge weights to obtain the node weights. We approximate the node weights as: \begin{equation}\label{eq:nodeweight} \begin{split} \mathbf{a}_i = \max_{j\in \{1,\ldots,|V|\}}(\mathbf{M}_{ij}), \end{split} \end{equation} where $\mathbf{a}_i$ denotes the weight of the node $v_i$ and $\mathbf{M}$ is the weighted adjacency mask predicted by $f_\theta$. We believe this is a valid approximation because for an important edge to exist in the explanation subgraph, the nodes connected by this edge must also be considered important and be present in the explanation subgraph. Now, using these node weights, we can obtain the ground-truth set of nodes by picking the top-$k$ important nodes of the explanation on $G$.
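The node-weight approximation in Equation~\eqref{eq:nodeweight} and the resulting top-$k$ node accuracy can be sketched as below; the mask matrices here are toy inputs rather than outputs of $f_\theta$:

```python
def node_weights(M):
    # a_i = max_j M[i][j]: a node inherits the weight of its strongest edge.
    return [max(row) for row in M]

def topk_nodes(weights, k):
    # Indices of the k highest-weighted nodes.
    return set(sorted(range(len(weights)), key=lambda i: -weights[i])[:k])

def node_accuracy(M_orig, M_noisy, k):
    # Fraction of the top-k nodes of the original explanation that are
    # recovered among the top-k nodes of the noisy explanation.
    ground_truth = topk_nodes(node_weights(M_orig), k)
    predicted = topk_nodes(node_weights(M_noisy), k)
    return len(ground_truth & predicted) / k
```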
Comparing the top-$k$ important nodes of the explanation on $G'$ with the ground-truth set of nodes gives us the accuracy. We present the node accuracy plots for $k=2,4 \text{ and } 8$ in Figure~\ref{fig:robust_nodeacc}. We also note that the comparison is not completely fair to GNNExplainer, PGExplainer and our method because of the approximation used to extend these methods for computing node-level accuracy. Despite the approximation, our method significantly outperforms all the other methods. GNNExplainer \cmmnt{,} and PGM-Explainer \cmmnt{and SubgraphX} perform consistently worse, as expected, because they optimize each sample independently to obtain an explanation. \cmmnt{One other way to compare all the approaches on robustness would be to compute edge-level accuracy instead of node-level accuracy. This would require extending SubgraphX and PGM-Explainer to obtain important edges from the explanation subgraph. However, it is more challenging as SubgraphX only provides a subgraph as an output. To obtain the top-$k$ important edges, we can randomly sample $k$ edges from the returned explanation subgraph, which consists of slightly more than $k$ edges.
The random sampling would make this evaluation more approximate and perhaps would further degrade the performance of SubgraphX; therefore, we do not report these results.} \textbf{Node classification.} \begin{table}[h] \small \begin{center} \resizebox{\textwidth}{!}{\def\arraystretch{1.5}\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\textbf{BA-\sc{\textbf{Shapes}}}} & \multicolumn{2}{c|}{\textbf{BA-\sc{ \textbf{Community}}}} & \multicolumn{2}{c|}{\sc{\textbf{tree-cycles}}} & \multicolumn{2}{c|}{\sc{\textbf{tree-grid}}} \\ \cline{2-9} & AUC & Acc & AUC & Acc & AUC & Acc & AUC & Acc \\ \hline GNNExplainer & 0.925 & 0.729 & 0.836 & 0.750 & 0.948 & 0.862 & 0.875 & 0.741\\ PGExplainer & 0.963 & 0.932 & 0.945 & 0.872 & 0.987 & 0.924 & 0.907 & 0.871\\ PGM-Explainer & \textit{n.a.} & 0.965 & \textit{n.a.} & \textbf{0.926} & \textit{n.a.} & 0.954 &\textit{n.a.} & 0.885\\ CF-GNNExplainer & \textit{n.a.} & 0.960 & \textit{n.a.} & \textit{n.a.} & \textit{n.a.} & 0.940 & \textit{n.a.} & 0.960\\ \hline RCExplainer (Ours) & \begin{tabular}[t]{@{}c@{}}\textbf{0.998} \\ $\pm$ 0.001 \end{tabular} & \begin{tabular}[t]{@{}c@{}} \textbf{0.973} \\ $\pm$ 0.003 \end{tabular} & \begin{tabular}[t]{@{}c@{}}\textbf{0.995} \\ $\pm$ 0.002 \end{tabular} & \begin{tabular}[t]{@{}c@{}} 0.916 \\ $\pm$ 0.009 \end{tabular} & \begin{tabular}[t]{@{}c@{}}\textbf{0.993} \\ $\pm$ 0.003 \end{tabular} & \begin{tabular}[t]{@{}c@{}} \textbf{0.993} \\ $\pm$ 0.003\end{tabular} & \begin{tabular}[t]{@{}c@{}}\textbf{0.995} \\ $\pm$ 0.002 \end{tabular} & \begin{tabular}[t]{@{}c@{}} \textbf{0.974} \\ $\pm$ 0.005\end{tabular}\\ \hline \end{tabular} } \end{center} \caption{AUC and accuracy evaluation on synthetic node classification datasets.} \vspace{-0.1in} \label{table:node_acc_results} \end{table} We evaluate our method on four synthetic node classification datasets used by GNNExplainer~\cite{NEURIPS2019_d80b7040}, namely, {BA-\sc{shapes}}, {BA-\sc{Community}}, {\sc{tree-cycles}} and
{\sc{tree-grid}}. Following~\citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd}, we formalize the explanation problem as binary classification of edges and adopt the area under the ROC curve (AUC) as the evaluation metric. This evaluation is only possible for synthetic datasets where we can consider the motifs as reasonable approximations of the explanation ground truth. The edges that are part of a motif are labelled positive and the rest of the edges are labelled negative during the evaluation. We show the results in Table~\ref{table:node_acc_results}, where we demonstrate that our method is extremely accurate and achieves a close-to-optimal AUC score on all of the datasets. This is solid evidence of our method's ability to capture the behavior of the underlying GNN better and produce consistently accurate explanations to justify the original predictions. Please note that PGM-Explainer does not provide edge weights, so the AUC evaluation is not applicable to it. Also, since the implementation of CF-GNNExplainer is not available, we only report the results available in \citep{lucic2021cf}. \section{Hyperparameter analysis} \label{sec:apx_hyper} \textbf{Number of LDBs.} \begin{figure*}[h!] \begin{center} \includegraphics[width=0.65\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/newfig/boundary_bacommunity_fidelity.pdf} \end{center} \caption{Fidelity vs Sparsity plots on BA-Community for different numbers of sampled LDBs.} \label{fig:ldb} \end{figure*} As mentioned in Section~\ref{sec:method}, we sample LDBs from the decision logic of the GNN to form a candidate pool from which some boundaries are selected by the greedy method. In Figure~\ref{fig:ldb}, we show the effect of the number of sampled candidate boundaries on the performance on the BA-Community dataset. As we increase the number of sampled LDBs from 10 to 50, the fidelity improves, and saturation is achieved once 50 LDBs are sampled.
This is consistent with expectations: as more boundaries are sampled, the quality of the decision region improves. When there are enough boundaries that can result in a good decision region after greedy selection, the performance saturates. \textbf{Choosing $\lambda$.} \begin{figure*}[h!] \begin{center} \includegraphics[width=0.65\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/lambda_bamotifs_fidelity.pdf} \end{center} \caption{Fidelity vs Sparsity plots on BA-2motifs for different values of $\lambda$.} \label{fig:lambda} \end{figure*} In Figure~\ref{fig:lambda}, we show the effect of the $\lambda$ hyperparameter introduced in Equation~\eqref{eq:loss_net}. We show the fidelity performance of our method on the BA-2motifs dataset for different values of $\lambda$ ranging from 0.01 to 0.90. The fidelity results are worst for $\lambda=0.9$, as the second term $\mathcal{L}_{opp}$ of the loss, which enforces the counterfactual behavior of explanations, is given very little weight in the combined loss. Setting $\lambda=0.1$ gives the best results for fidelity. \section{Qualitative results} \label{sec:apx_qual} \textbf{Qualitative results.} We present sample results produced by GNNExplainer, PGExplainer and RCExplainer in Table \ref{table:qualitative}. Our method consistently identifies the right motifs with high precision and is also able to handle tricky cases. For instance, in Figure (q), note that our method is able to identify the right motif in the presence of another ``house-structure''. The other structure contains the query node, but as it also contains nodes from the other community, it is not the right explanation for the prediction on the given query node. In Figure (t), our method is able to correctly identify both NO2 groups present in the compound, and as discussed before, NO2 groups attached to carbon rings are known to make the compounds mutagenic~\citep{debnath1991structure}.
The edges connecting nitrogen (N) atoms to the carbon (C) atoms are given the highest weights in the explanation. This is very intuitive in a counterfactual sense, as masking these edges would break the NO2 groups off the carbon ring and push the prediction of the compound towards ``Non-Mutagenic''. \begin{table}[ht] \begin{center} \resizebox{\textwidth}{!}{\def\arraystretch{1.2}\begin{tabular}{|c@{\hskip 0.01in}|c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}|c} \hline & & {BA-\sc{shapes}} & & {BA-\sc{Community}} & & {\sc{tree-cycles}} & & {\sc{tree-grid}} & & Mutagenicity\\ \hline Motifs & \raisebox{-3.3\height}{(a)} & \raisebox{-.5\height}{\includegraphics[width=0.04\linewidth]{newfig/syn1.png}} & \raisebox{-3.3\height}{(b)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/syn2.png}} & \raisebox{-3.3\height}{(c)} & \raisebox{-.5\height}{\includegraphics[width=0.04\linewidth]{newfig/syn3_cycle.png}} & \raisebox{-3.3\height}{(d)} & \raisebox{-.5\height}{\includegraphics[width=0.04\linewidth]{newfig/syn4_grid.png}} & \raisebox{-3.3\height}{(e)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/mutagenicity_gt_nclr.png}} \\ \hline GNNExp. & \raisebox{-6.3\height}{(f)} & \raisebox{-.5\height}{\includegraphics[width=0.15\linewidth]{newfig/gnn_correct_ex_syn1.png}} & \raisebox{-6.3\height}{(g)} & \raisebox{-.5\height}{\includegraphics[width=0.12\linewidth]{newfig/gnn_syn2_ex.png}} & \raisebox{-6.3\height}{(h)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/gnn_syn3_cycle_ex.png}} & \raisebox{-6.3\height}{(i)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/gnn_ex_syn4.png}} & \raisebox{-6.3\height}{(j)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/gnn_mutagenicity_sample_hz_nclr.png}} \\ \hline PGExp.
& \raisebox{-6.3\height}{(k)} & \raisebox{-.5\height}{\includegraphics[width=0.15\linewidth]{newfig/pge_correct_ex_syn1.png}} & \raisebox{-6.3\height}{(l)} & \raisebox{-.5\height}{\includegraphics[width=0.12\linewidth]{newfig/pge_syn2_ex.png}} & \raisebox{-6.3\height}{(m)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/pge_syn3_cycle_ex.png}} & \raisebox{-6.3\height}{(n)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/pge_ex_syn4.png}} & \raisebox{-6.3\height}{(o)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/pge_mutagenicity_sample_hz_nclr.png}} \\ \hline RCExp. & \raisebox{-6.3\height}{(p)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/correct_ex_syn1.png}} & \raisebox{-6.3\height}{(q)} & \raisebox{-.5\height}{\includegraphics[width=0.12\linewidth]{newfig/syn2_ex.png}} & \raisebox{-6.3\height}{(r)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/syn3_cycle_ex.png}} & \raisebox{-6.3\height}{(s)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/ex_syn4.png}} & \raisebox{-6.3\height}{(t)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/mutagenicity_sample_hz_nclr.png}} \\ \hline \end{tabular} } \end{center} \caption{Qualitative results produced by GNNExplainer, PGExplainer and RCExplainer. The motifs present in the corresponding dataset are shown in the first row and the corresponding explanations are shown in the later rows. First four columns correspond to node classification where the node highlighted in red is being explained and the node colors denote different labels. Last column corresponds to graph classification where the prediction is explained for a mutagenic sample and the colors denote different atoms. 
Explanations are highlighted in black.} \vspace{-0.1in} \label{table:qualitative} \end{table} \section{Code} \label{sec:apx_disc} We make our code available as part of the supplemental material to the reviewers to facilitate replication of the results. \section{Illustration of RCExplainer's training} \label{sec:apx_key} \begin{figure*}[h] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/illustration.pdf} \end{center} \caption{For training of RCExplainer, decision boundaries are extracted from the feature space of graph embeddings after the last graph convolution layer. After processing, a subset of boundaries is obtained and used to train an explanation neural network that takes edge activations from the convolution layers of the GNN as input and predicts a mask over the adjacency matrix for the given graph sample. The counterfactual loss is used to optimize the explanation network.} \label{fig:training} \end{figure*} \section{Node classification} \label{sec:apx_node_cls} Our method is directly applicable to the task of node classification with a few simple modifications. Instead of extracting Linear Decision Boundaries (LDBs) in the feature space of graph embeddings, we operate on the feature space of node embeddings obtained after the last graph convolution layer. We use the greedy method described in Equation~\eqref{eq:greedy1} to find the decision regions for each class, except that for node classification, the functions $g(\cdot)$ and $h(\cdot)$ denote the coverage of nodes rather than graphs. The next step, training the explanation network $f_\theta$ to generate counterfactual explanations for node classification, is identical to the procedure described in Section~\ref{sec:method} except for one difference. For node classification, since a node's prediction is only influenced by its local neighborhood, we only need to consider the computation graph of the given node while generating the explanation.
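Extracting this computation graph (the $k$-hop neighborhood around the query node) can be sketched with a depth-bounded BFS; the adjacency-list representation below is an assumption for illustration, not our actual data format:

```python
from collections import deque

def computation_graph(adj, node, k):
    # Nodes and edges reachable within k hops of `node` -- the receptive
    # field covered by k graph convolution layers.
    seen = {node}
    frontier = deque([(node, 0)])
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append((v, d + 1))
    # Keep only edges whose both endpoints lie inside the k-hop ball.
    edges = {(min(u, v), max(u, v)) for u in seen for v in adj[u] if v in seen}
    return seen, edges
```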
The computation graph of a node is defined as the $k$-hop neighborhood of the node, where $k$ refers to the number of graph convolution layers in the given GNN model $\phi$. In other words, the GNN performs $k$ steps of message passing through its graph convolution layers during the forward pass to effectively convolve the $k$-hop neighborhood of the given node. Hence, the output of $f_\theta$ is a mask over the adjacency matrix of the computation graph of the given node. The edges with mask values greater than 0.5 are chosen from the computation subgraph to form the explanation subset $S$ that can explain the original node classification prediction. \section{Interpreting individual boundaries} \label{sec:apx_bdry} \begin{figure*}[h] \begin{center} \includegraphics[width=0.8\linewidth]{figures/case.png} \end{center} \caption{(a) Three classes and the motifs associated with each class are shown. All samples of the same class contain the same two motifs. (b) Explanation results for an instance of class $c_1$ are shown w.r.t.\ both of the boundaries separately. In both cases, RCExplainer correctly identifies the motif (highlighted in black) that is associated with the class $c_1$ but not associated with the class that lies on the other side of the given boundary.} \label{fig:case} \end{figure*} We present a case study to demonstrate that our method can be adapted to answer the question, \textit{``Which substructures make the samples of one class different from the samples of another specific class, and therefore can be masked to flip the prediction between the two given classes?''}. This is useful in various fields; for instance, in drug discovery, where the classes correspond to different possible chemical properties of a drug compound, researchers are often interested in understanding the role of chemical structures that result in a prediction of a particular property instead of another specific one.
Also, this is especially helpful for debugging in cases where one expects a particular output for the given input but the GNN's prediction does not agree with the expectation. This case corresponds to a more constrained setting of counterfactual explanations, as the target prediction is also predetermined. Let the original predicted label and the target label on a given graph $G$ be denoted by $c_i$ and $c_j$, respectively. Since our method explicitly models the boundaries separating the samples of one class from the samples of other classes, it can be easily adapted to answer such questions. If we can interpret only the boundary separating the samples of the given two classes, we can uncover the substructures that make the samples of the first class different from those of the other class. To address this, we modify the loss terms to \begin{equation}\label{eq:loss_same2} \begin{split} \mathcal{L}_{same}(\theta, G) = \sigma(-\mathcal{B}_{ij}(\phi_{gc}(G))* \mathcal{B}_{ij}(\phi_{gc}(G_\theta))), \end{split} \end{equation} \begin{equation}\label{eq:loss_opp2} \begin{split} \mathcal{L}_{opp}(\theta, G) = \sigma(\mathcal{B}_{ij}(\phi_{gc}(G))* \mathcal{B}_{ij}(\phi_{gc}(G'_\theta))), \end{split} \end{equation} where $\mathcal{B}_{ij}$ refers to the specific boundary in the set $\mathcal{P}$ separating the samples with predicted label $c_i$ from the samples with predicted label $c_j$. Since we are only concerned with changing the outcome from $c_i$ to $c_j$, we need to consider only the boundary separating these classes while formulating the loss for the network. We verify this on a synthetic graph classification dataset with 3 classes, $c_1$, $c_2$ and $c_3$, such that each graph sample contains exactly 2 motifs. Both motifs jointly determine the class because each possible pair of classes shares exactly one motif, as shown in Figure~\ref{fig:case}(a).
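To make the boundary-specific loss terms above concrete, here is a minimal numeric sketch, under the assumption that $\mathcal{B}_{ij}(\cdot)$ returns a signed value whose sign indicates on which side of the boundary an embedding falls (the numeric inputs are toy values):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def loss_same(b_orig, b_masked):
    # Small when the masked graph G_theta stays on the same side of the
    # boundary B_ij as the original graph (signs agree).
    return sigmoid(-b_orig * b_masked)

def loss_opp(b_orig, b_remainder):
    # Small when the remainder graph G'_theta crosses to the other side
    # of B_ij (signs disagree), enforcing the counterfactual flip.
    return sigmoid(b_orig * b_remainder)
```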
We show explanation results produced by RCExplainer on an instance of class $c_1$ in Figure~\ref{fig:case}(b). For a given graph sample of class $c_1$, we separately find explanations with respect to each of the two boundaries $\mathcal{B}_{12}$ and $\mathcal{B}_{13}$: $\mathcal{B}_{12}$ separates $c_1$ from $c_2$, while $\mathcal{B}_{13}$ separates $c_1$ from $c_3$. We can see in Figure~\ref{fig:case}(b) that optimizing our method w.r.t.\ $\mathcal{B}_{12}$ correctly identifies the motif (ABCD) in the sample that is not associated with the class $c_2$. The other motif (EFGH), which is also associated with $c_2$, is not considered important by the method. When we find the explanation for the same graph sample but with respect to the boundary $\mathcal{B}_{13}$, the results are opposite and align with expectations. In this case, the motif (EFGH) that is not associated with $c_3$ is highlighted instead of the motif (ABCD). We observe similar behavior on the instances of other classes, where interpreting an instance with respect to a single boundary correctly identifies the motif that distinguishes the given class from the other class. In conclusion, the above case study demonstrates that our method can highlight the motif unique to the class $c_i$ by interpreting the boundary $\mathcal{B}_{ij}$ separating the classes $c_i$ and $c_j$. Removing the highlighted motif from the given sample causes a drop in the confidence of the original predicted label $c_i$ while increasing the confidence for the class $c_j$. \section{Proof: Decision region extraction is an instance of SCSC optimization} \label{sec:apx_proof} Now we prove that the optimization problem in Equation~\eqref{eq:practical} is an instance of the Submodular Cover Submodular Cost (SCSC) problem. Equation~\eqref{eq:practical} can be written as \begin{equation}\label{eq:practical2} \min_{\mathcal{P} \subseteq \tilde{\mathcal{H}}} D_c-g(\mathcal{P}, c), \textrm{ s.t. } D'_c-h(\mathcal{P}, c) \geq D'_c-\delta.
\end{equation} Maximizing $g(\mathcal{P}, c)$ denotes maximizing the coverage of the set of boundaries $\mathcal{P}$ for the samples of class $c$, denoted by $D_c$. This can be seen as minimizing $D_c-g(\mathcal{P}, c)$, which denotes the number of graph samples of class $c$ that are not covered by $g(\mathcal{P}, c)$ and thus exclusive to $D_c$. $D'_c$ in the constraint is equal to $D-D_c$, which denotes the set of graph samples in the dataset $D$ that do not belong to the class $c$. Let us denote $D_c-g(\mathcal{P}, c)$ by the function $g'(\mathcal{P})$ and $D'_c-h(\mathcal{P}, c)$ by $h'(\mathcal{P})$. To prove that the optimization problem in Equation~\eqref{eq:practical2} is an instance of the SCSC problem, we prove that the functions $g'(\mathcal{P})$ and $h'(\mathcal{P})$ are submodular with respect to $\mathcal{P}$. For the function $g'(\mathcal{P})$ to be submodular with respect to $\mathcal{P}$, we show that for any two arbitrary sets of LDBs denoted by $\mathcal{P} \subseteq \tilde{\mathcal{H}}$ and $\mathcal{Q} \subseteq \tilde{\mathcal{H}}$, if $\mathcal{P} \subseteq \mathcal{Q}$ then \begin{equation}\label{eq:submod} g'(\mathcal{P}+\{h\})-g'(\mathcal{P}) \geq g'(\mathcal{Q}+\{h\})-g'(\mathcal{Q}) \end{equation} is always satisfied for a linear decision boundary $h \in \tilde{\mathcal{H}}\setminus\mathcal{Q}$. As discussed in Section~\ref{sec:method}, the LDBs in $\mathcal{P}$ induce a convex polytope $r(\mathcal{P}, c)$ that has the maximum coverage of samples of class $c$. Adding a new boundary $h$ to $\mathcal{P}$ may remove (separate) some samples of class $c$ from $r(\mathcal{P}, c)$ and lower its coverage. This reduction in coverage is denoted by the term $g'(\mathcal{P}+\{h\})-g'(\mathcal{P})$ on the left-hand side of Equation~\eqref{eq:submod}. Similarly, the term $g'(\mathcal{Q}+\{h\})-g'(\mathcal{Q})$ on the right-hand side of Equation~\eqref{eq:submod} denotes the reduction in coverage for the subset $\mathcal{Q}$.
Now, since $\mathcal{P} \subseteq \mathcal{Q}$, the set of graph samples contained in the polytope $r(\mathcal{Q}, c)$ is a subset of the graph samples contained in the polytope $r(\mathcal{P}, c)$. Hence, adding a new LDB $h$ to $\mathcal{P}$ cannot remove fewer samples from the polytope $r(\mathcal{P}, c)$ than it removes from the polytope $r(\mathcal{Q}, c)$. Therefore, the function $g'(\mathcal{P})$ is submodular with respect to $\mathcal{P}$. Similarly, we can prove that the function $h'(\mathcal{P})$ is submodular with respect to $\mathcal{P}$. This concludes the proof. \section{Implementation details} \label{sec:apx_impldetails} \paragraph{Datasets.} Table~\ref{table:datasets_props} shows the properties of all the datasets used in the experiments. The last row corresponds to the test accuracy of the GCN model we train on the corresponding dataset. \begin{table}[h] \small \begin{center} \begin{tabular}{l@{\hskip 0.07in}c@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}c} \toprule & \begin{tabular}[t]{@{}c@{}}BA- \\ \sc{shapes} \end{tabular} & \begin{tabular}[t]{@{}c@{}}BA- \\ \sc{Community} \end{tabular} & \begin{tabular}[t]{@{}c@{}}\sc{tree-} \\ \sc{cycles} \end{tabular} & \begin{tabular}[t]{@{}c@{}}\sc{tree-} \\ \sc{grid} \end{tabular} & \begin{tabular}[t]{@{}c@{}}BA- \\ 2motifs \end{tabular} & \raisebox{-.5\height}{Mutagenicity} & \raisebox{-.5\height}{NCI1}\\ \midrule \# of Nodes (avg) & 700 & 1400 & 871 & 1020 & 25 & 30.32 & 29.87\\ \# of Edges (avg) & 2050 & 4460 & 970 & 2540 & 25.48 & 30.77 & 32.30\\ \# of Graphs & 1 & 1 & 1 & 1 & 700 & 4337 & 4110 \\ \# of Classes & 4 & 8 & 2 & 2 & 2 & 2 & 2\\ Base & BA graph & BA graph & Tree & Tree & BA graph & --- & ---\\ \raisebox{-.5\height}{Motifs} & \raisebox{-.5\height}{House} & \raisebox{-.5\height}{House} & \raisebox{-.5\height}{Cycle} & \raisebox{-.5\height}{Grid} & \begin{tabular}[t]{@{}c@{}}House \& \\ Cycle \end{tabular} &
\raisebox{-.5\height}{---} & \raisebox{-.5\height}{---} \\ License & Apache 2.0 & Apache 2.0 & Apache 2.0 & Apache 2.0 & --- & --- & --- \\ \midrule Test accuracy & 0.98 & 0.95 & 0.99 & 0.99 & 0.91 & 0.91 & 0.84\\ \bottomrule \end{tabular} \end{center} \caption{Properties of the datasets used and the test accuracy of the corresponding trained GCN models.} \label{table:datasets_props} \end{table} \paragraph{Baselines.} For the baselines, we use publicly available implementations provided in~\citep{NEURIPS2019_d80b7040,holdijk2021re,NEURIPS2020_8fb134f2,liu2021dig} to obtain the results. The implementation of GNNExplainer provided by~\citep{NEURIPS2019_d80b7040} is licensed under the Apache 2.0 license \cmmnt{while the implementation of SubgraphX provided by~\citep{liu2021dig} is licensed under GNU General Public License v3.0}. We use the default parameters provided by the authors for the baselines. For the local baseline RCExp-NoLDB, we use the same setup as RCExplainer except that we do not use LDBs for training the explanation network $f$. The loss function denoted by $\mathcal{L}_{conf}$ for this baseline aligns with the loss functions of GNNExplainer and PGExplainer except that we introduce a second term to enforce the counterfactual characteristics. We directly maximize the confidence of the original predicted class $c$ on the masked graph $G_{\theta}$ and minimize the confidence of the original predicted class for the remainder graph $G'_{\theta}$. $\mathcal{L}_{conf}$ can be expressed as: \begin{equation}\label{eq:loss_conf} \begin{split} \mathcal{L}_{conf}(\theta, G) = -\log(P_\phi(Y=c|X=G_\theta)) - \frac{\eta}{\log(P_\phi(Y=c|X=G'_\theta))}, \end{split} \end{equation} where $P_\phi(Y|X=G_x)$ corresponds to the conditional probability distribution learnt by the GNN model $\phi$ for the input graph $G_x$. $Y$ corresponds to the random variable representing the set of classes $\mathcal{C}$ and $X$ is the random variable representing possible input graphs for the GNN $\phi$.
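As a toy numeric illustration of $\mathcal{L}_{conf}$ (the probabilities below are made-up values, not outputs of the trained GNN):

```python
import math

def l_conf(p_masked, p_remainder, eta=0.1):
    # First term keeps the confidence in class c high on the masked graph
    # G_theta. The second term, -eta / log(p_remainder), grows as the
    # remainder graph G'_theta retains confidence in c, so minimizing it
    # enforces the counterfactual behavior. eta weights the second term.
    return -math.log(p_masked) - eta / math.log(p_remainder)
```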
Here $\eta$ is a hyperparameter that represents the weight of the second term in the loss function. The loss $\mathcal{L}_{conf}$ is jointly minimized with the regularizers $\mathcal{R}_{sparse}$ and $\mathcal{R}_{discrete}$ specified in Section~\ref{sec:method}. \paragraph{Training details.} We follow \citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd} and use the same architecture to train a GNN model with 3 graph convolution layers for generating explanations on each dataset. Consistent with prior works~\citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd,NEURIPS2020_8fb134f2}, we use (80/10/10)\% random split for training/validation/test for each dataset. We use Adam optimizer to tune the parameters of $f_\theta$ and set learning rate to $0.001$. We train our method for $600$ epochs. For node classification datasets, we set $\lambda$ to $0.85$, $\beta$ to $0.006$ and $\mu$ to $0.66$. For graph classification datasets, we set $\lambda$ to $0.1$, $\mu$ to $0.66$. $\beta$ is set to $6\times 10^{-5}$ for BA-2motifs and NCI1, and to $6\times 10^{-4}$ for Mutagenicity. We also scale the combined loss by factor of $15$ for all the datasets. The number of LDBs to be sampled from GNN for each class is set to $50$. Empirically, we find that this is enough as the subset of LDBs selected greedily from this set is able to cover all the samples of the given class. Our codebase is built on the top of implementations provided by \citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd}. All of the experiments are conducted on a Linux machine with an Intel i7-8700K processor and a Nvidia GeForce GTX 1080 Ti GPU with 11GB memory. Our code is implemented using python 3.8.5 with Pytorch 1.8.1 that uses CUDA version 10.0. \section{Additional experiments} \label{sec:apx_exp} \textbf{Fidelity.} As described in Section~\ref{sec:experiment}, counterfactual characteristic of an explanation is measured by using fidelity as an evaluation metric. 
It is defined as drop in confidence of the original predicted class after masking the produced explanation in the original graph~\citep{pope2019explainability}. Since, we produce explanations as edges, we mask the edges in the input graph to calculate the drop. Fidelity for the input graph $G$ and the produced explanation $S$ is formally written as \begin{equation}\label{eq:fidelity} \begin{split} fidelity(S,G) = P_\phi(Y=c|X=G)-P_\phi(Y=c|X=G_{E\setminus S}), \end{split} \end{equation} where $c$ denotes the class predicted by $\phi$ for $G$. As discussed in Section~\ref{sec:experiment}, explanations are mostly useful, if they are sparse (concise). Sparsity is defined as the fraction of total edges that are present in $E$ but not in $S$: \begin{equation}\label{eq:sparsity} \begin{split} sparsity(S,G) =1-\frac{|E|}{|S|}, \end{split} \end{equation} However, since \cmmnt{the approaches like SubgraphX and} PGM-Explainer \cmmnt{do} does not report the importance ranking of edges of $G$, it's not feasible to completely control the edge sparsity of the desired explanation. Hence, we take samples with similar sparsity level for comparison. Consistent with prior works~\citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd}, we compute fidelity for the samples that are labelled positive, for instance in Mutagenicity dataset, we compute fidelity for the compounds that are labelled as mutagenic. The results are presented in Figure~\ref{fig:fidelity}. \cmmnt{As reported in Figure~\ref{fig:fidelity}, the results obtained for SubgraphX are significantly lower than those reported by \citet{yuan2021explainability}. We believe, this is the result of the problematic setting adopted in~\citep{yuan2021explainability} and implemented in~\citep{liu2021dig} for computing the fidelity. To be specific, while computing the drop in confidence, the features of the nodes present in the explanation are set to 0 without removing the edges incident on these nodes. 
As the message passing is still allowed on these edges, the first graph convolution updates the representations of these nodes to non-zero values. As the features of these nodes are now not set to zero, the subsequent graph convolutions would also allow these nodes to participate in updating the representations of their neighboring nodes.} \cmmnt{A simple fix to this problem is to mask the edges incident on these nodes while computing the fidelity. This would ensure that these nodes do not participate in message passing irrespective of the number of graph convolutions. Adopting this setting allows us to get the results obtained in Figure~\ref{fig:fidelity}. We also note that masking only the edges and not setting the nodes to zero also yields similar performance for SubgraphX as reported in Figure~\ref{fig:fidelity}.} \begin{figure*}[h] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/dotted_robust_auc4_12.pdf} \end{center} \caption{Noise robustness (AUC) averaged across 10 runs for the datasets at different levels of noise.} \label{fig:robust_auck_4_12} \end{figure*} \begin{figure*}[h!] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/nosubx_dotted_robust_node_acc.pdf} \end{center} \caption{Noise robustness (node accuracy) averaged across 10 runs for the datasets at different levels of noise.} \label{fig:robust_nodeacc} \end{figure*} \textbf{Robustness to noise.} As discussed in Section~\ref{sec:experiment}, we use AUC to compare the robustness of different methods. AUC is defined as the area under the receiver operating characteristic (ROC) curve of a binary classifier. We consider the top-$k$ edges of the produced explanation $S$ for the input graph $G$ as ground-truth. After we obtain the explanation $S'$ for the noisy sample $G'$, we formulate this as a binary classification problem.
For each edge in $G'$, if it is present in the top-$k$ edges of the produced explanation $S$, then it is labeled positive, and negative otherwise. For $G'$, the mask weight of an edge predicted by the explanation network $f_\theta$ is the probability of the corresponding edge being classified as positive. Due to space limitations, we only reported the results for $k=8$ in Section~\ref{sec:experiment}. Now, we report the results for $k=4$ and $k=12$ in Figure~\ref{fig:robust_auck_4_12}, where we observe a trend similar to that in Figure~\ref{fig:robustness}. RCExplainer outperforms the rest of the methods by a big margin on BA-2motifs and NCI1. Since AUC evaluation requires that the explanation method outputs the importance weights for the edges of a noisy sample $G'$, we cannot use this for comparing \cmmnt{approaches like SubgraphX and} PGM-Explainer, which \cmmnt{do} does not provide this information. Therefore, to provide a more comprehensive comparison, we use node accuracy as a measure to compare all the baselines. For calculating node accuracy, we consider the top-$k$ important nodes in the explanation for the original graph $G$ as ground-truth and compare them with the top-$k$ important nodes obtained through the explanation for the noisy graph $G'$. However, the challenge is that GNNExplainer, PGExplainer, RCExp-NoLDB and RCExplainer do not rank nodes based on their importance. To address this, we use the edge weights to obtain the node weights. We approximate the node weights as: \begin{equation}\label{eq:nodeweight} \begin{split} \mathbf{a}_i = \max_{j\in \{1,\ldots,|V|\}}(\mathbf{M}_{ij}), \end{split} \end{equation} where $\mathbf{a}_i$ denotes the weight of the node $v_i$ and $\mathbf{M}$ is the weighted adjacency mask predicted by $f_\theta$. We believe this is a valid approximation because for an important edge to exist in the explanation subgraph, the nodes connected by this edge must also be considered important and be present in the explanation subgraph.
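As an illustration (not from the released implementation; the function names and the toy mask are ours), Equation~\eqref{eq:nodeweight} and the resulting top-$k$ node accuracy can be sketched in a few lines of numpy:

```python
import numpy as np

def node_weights(M: np.ndarray) -> np.ndarray:
    """a_i = max_j M_ij: score each node by the largest weight
    among its incident edges in the predicted adjacency mask M."""
    return M.max(axis=1)

def topk_node_accuracy(M_orig: np.ndarray, M_noisy: np.ndarray, k: int) -> float:
    """Fraction of the top-k nodes of the original explanation that are
    recovered by the explanation of the noisy graph."""
    top_orig = set(np.argsort(-node_weights(M_orig))[:k])
    top_noisy = set(np.argsort(-node_weights(M_noisy))[:k])
    return len(top_orig & top_noisy) / k

# Toy 3-node mask: edge (0, 1) dominates, so nodes 0 and 1 rank highest.
M = np.array([[0.0, 0.9, 0.1],
              [0.9, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
print(node_weights(M))              # per-node importance weights
print(topk_node_accuracy(M, M, 2))  # identical masks recover all top-k nodes
```

With an unchanged mask the accuracy is trivially $1.0$; in the experiments $M_{noisy}$ comes from re-running $f_\theta$ on the perturbed graph.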
Now, using these node weights, we can obtain the ground-truth set of nodes by picking the top-$k$ important nodes of the explanation on $G$. Comparing the top-$k$ important nodes of the explanation on $G'$ with the ground-truth set of nodes gives us the accuracy. We present the node accuracy plots for $k=2,4 \text{ and } 8$ in Figure~\ref{fig:robust_nodeacc}. We also note that the comparison is not completely fair to GNNExplainer, PGExplainer and our method because of the approximation used to extend these methods for computing node-level accuracy. Despite the approximation, our method significantly outperforms all the other methods. GNNExplainer \cmmnt{,} and PGM-Explainer \cmmnt{and SubgraphX} perform consistently worse as expected because they optimize each sample independently to obtain an explanation. \cmmnt{One other way to compare all the approaches on robustness would be to compute edge-level accuracy instead of node-level accuracy. This would require extending SubgraphX and PGM-Explainer to obtain important edges from the explanation subgraph. However, it is more challenging as SubgraphX only provides a subgraph as an output. To obtain the top-$k$ important edges, we can randomly sample $k$ edges from the returned explanation subgraph that consists of slightly more than $k$ edges.
The random sampling would make this evaluation more approximate and perhaps would further degrade the performance of SubgraphX; therefore, we do not report these results.} \textbf{Node classification.} \begin{table}[h] \small \begin{center} \resizebox{\textwidth}{!}{% \def\arraystretch{1.5} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{2}{c|}{\textbf{BA-\sc{\textbf{Shapes}}}} & \multicolumn{2}{c|}{\textbf{BA-\sc{ \textbf{Community}}}} & \multicolumn{2}{c|}{\sc{\textbf{tree-cycles}}} & \multicolumn{2}{c|}{\sc{\textbf{tree-grid}}} \\ \cline{2-9} & AUC & Acc & AUC & Acc & AUC & Acc & AUC & Acc \\ \hline GNNExplainer & 0.925 & 0.729 & 0.836 & 0.750 & 0.948 & 0.862 & 0.875 & 0.741\\ PGExplainer & 0.963 & 0.932 & 0.945 & 0.872 & 0.987 & 0.924 & 0.907 & 0.871\\ PGM-Explainer & \textit{n.a.} & 0.965 & \textit{n.a.} & \textbf{0.926} & \textit{n.a.} & 0.954 &\textit{n.a.} & 0.885\\ CF-GNNExplainer & \textit{n.a.} & 0.960 & \textit{n.a.} & \textit{n.a.} & \textit{n.a.} & 0.940 & \textit{n.a.} & 0.960\\ \hline RCExplainer (Ours) & \begin{tabular}[t]{@{}c@{}}\textbf{0.998} \\ $\pm$ 0.001 \end{tabular} & \begin{tabular}[t]{@{}c@{}} \textbf{0.973} \\ $\pm$ 0.003 \end{tabular} & \begin{tabular}[t]{@{}c@{}}\textbf{0.995} \\ $\pm$ 0.002 \end{tabular} & \begin{tabular}[t]{@{}c@{}} 0.916 \\ $\pm$ 0.009 \end{tabular} & \begin{tabular}[t]{@{}c@{}}\textbf{0.993} \\ $\pm$ 0.003 \end{tabular} & \begin{tabular}[t]{@{}c@{}} \textbf{0.993} \\ $\pm$ 0.003\end{tabular} & \begin{tabular}[t]{@{}c@{}}\textbf{0.995} \\ $\pm$ 0.002 \end{tabular} & \begin{tabular}[t]{@{}c@{}} \textbf{0.974} \\ $\pm$ 0.005\end{tabular}\\ \hline \end{tabular} } \end{center} \caption{AUC and accuracy evaluation on synthetic node classification datasets.} \vspace{-0.1in} \label{table:node_acc_results} \end{table} We evaluate our method on four synthetic node classification datasets used by GNNExplainer~\cite{NEURIPS2019_d80b7040}, namely, {BA-\sc{shapes}}, {BA-\sc{Community}}, {\sc{tree-cycles}} and
{\sc{tree-grid}}. Following~\citep{NEURIPS2019_d80b7040,NEURIPS2020_e37b08dd}, we formalize the explanation problem as a binary classification of edges and adopt the area under the ROC curve (AUC) as the evaluation metric. This evaluation is only possible for synthetic datasets where we can consider the motifs as reasonable approximations of the explanation ground truth. The edges that are part of a motif are labelled positive and the rest of the edges are labelled negative during the evaluation. We show the results in Table~\ref{table:node_acc_results}, where we demonstrate that our method is extremely accurate and achieves a close-to-optimal AUC score on all of the datasets. This is solid evidence of our method's ability to capture the behavior of the underlying GNN better and produce consistently accurate explanations to justify the original predictions. Please note that PGM-Explainer does not provide edge weights, so AUC is not applicable to it. Also, since the implementation of CF-GNNExplainer is not available, we only report those results that are available in \citep{lucic2021cf}. \section{Hyperparameter analysis} \label{sec:apx_hyper} \textbf{Number of LDBs.} \begin{figure*}[h!] \begin{center} \includegraphics[width=0.65\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/newfig/boundary_bacommunity_fidelity.pdf} \end{center} \caption{Fidelity vs Sparsity plots on BA-Community for different numbers of sampled LDBs.} \label{fig:ldb} \end{figure*} As mentioned in Section~\ref{sec:method}, we sample LDBs from the decision logic of the GNN to form a candidate pool from which some boundaries are selected by the greedy method. In Figure~\ref{fig:ldb}, we show the effect of the number of sampled candidate boundaries on the performance on the BA-Community dataset. As we increase the number of sampled LDBs from 10 to 50, the fidelity improves, and saturation is achieved once 50 LDBs are sampled.
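For reference, the fidelity and sparsity metrics of Equations~\eqref{eq:fidelity} and \eqref{eq:sparsity} that underlie these curves reduce to one-liners. The sketch below is our own illustration with hypothetical function names; it assumes the predicted-class probabilities before and after masking are already available from the GNN:

```python
def fidelity(p_full: float, p_masked: float) -> float:
    """Drop in the predicted class's probability after removing the
    explanation edges S from G (Equation eq:fidelity)."""
    return p_full - p_masked

def sparsity(n_edges: int, n_expl_edges: int) -> float:
    """Fraction of the edges of G that are NOT part of the explanation S."""
    return 1.0 - n_expl_edges / n_edges

# A 16-edge graph explained by 4 edges, whose removal drops the
# predicted-class probability from 0.9 to 0.4.
print(fidelity(0.9, 0.4))   # 0.5
print(sparsity(16, 4))      # 0.75
```

A good counterfactual explanation scores high on fidelity while keeping sparsity high, i.e., it changes the prediction by removing only a few edges.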
This is consistent with expectations: as more boundaries are sampled, the quality of the decision region improves. When there are enough boundaries that can result in a good decision region after greedy selection, the performance saturates. \textbf{Choosing $\lambda$.} \begin{figure*}[h!] \begin{center} \includegraphics[width=0.65\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/lambda_bamotifs_fidelity.pdf} \end{center} \caption{Fidelity vs Sparsity plots on BA-2motifs for different values of $\lambda$.} \label{fig:lambda} \end{figure*} In Figure~\ref{fig:lambda}, we show the effect of the hyperparameter $\lambda$ introduced in Equation~\eqref{eq:loss_net}. We show the fidelity performance of our method on the BA-2motifs dataset for different values of $\lambda$ ranging from 0.01 to 0.90. The fidelity results are worst for $\lambda=0.9$, as the second term $\mathcal{L}_{opp}$ of the loss, which enforces the counterfactual behavior of explanations, carries very little weight in the combined loss. Setting $\lambda=0.1$ gives the best results for fidelity. \section{Qualitative results} \label{sec:apx_qual} \textbf{Qualitative results.} We present sample results produced by GNNExplainer, PGExplainer and RCExplainer in Table \ref{table:qualitative}. Our method consistently identifies the right motifs with high precision and is also able to handle tricky cases. For instance, in Figure (q), note that our method is able to identify the right motif in the presence of another ``house-structure''. The other structure contains the query node, but since it also contains nodes from the other community, it is not the right explanation for the prediction on the given query node. In Figure (t), our method is able to correctly identify both NO2 groups present in the compound, and as discussed before, NO2 groups attached to carbon rings are known to make the compounds mutagenic~\citep{debnath1991structure}.
The edges connecting nitrogen (N) atoms to the carbon (C) atoms are given the highest weights in the explanation. This is very intuitive in the counterfactual sense, as masking these edges would break the NO2 groups from the carbon ring and push the prediction of the compound towards ``Non-Mutagenic''. \begin{table}[ht] \begin{center} \resizebox{\textwidth}{!}{% \def\arraystretch{1.2} \begin{tabular}{|c@{\hskip 0.01in}|c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}c@{\hskip 0.01in}|c} \hline & & {BA-\sc{shapes}} & & {BA-\sc{Community}} & & {\sc{tree-cycles}} & & {\sc{tree-grid}} & & Mutagenicity\\ \hline Motifs & \raisebox{-3.3\height}{(a)} & \raisebox{-.5\height}{\includegraphics[width=0.04\linewidth]{newfig/syn1.png}} & \raisebox{-3.3\height}{(b)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/syn2.png}} & \raisebox{-3.3\height}{(c)} & \raisebox{-.5\height}{\includegraphics[width=0.04\linewidth]{newfig/syn3_cycle.png}} & \raisebox{-3.3\height}{(d)} & \raisebox{-.5\height}{\includegraphics[width=0.04\linewidth]{newfig/syn4_grid.png}} & \raisebox{-3.3\height}{(e)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/mutagenicity_gt_nclr.png}} \\ \hline GNNExp. & \raisebox{-6.3\height}{(f)} & \raisebox{-.5\height}{\includegraphics[width=0.15\linewidth]{newfig/gnn_correct_ex_syn1.png}} & \raisebox{-6.3\height}{(g)} & \raisebox{-.5\height}{\includegraphics[width=0.12\linewidth]{newfig/gnn_syn2_ex.png}} & \raisebox{-6.3\height}{(h)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/gnn_syn3_cycle_ex.png}} & \raisebox{-6.3\height}{(i)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/gnn_ex_syn4.png}} & \raisebox{-6.3\height}{(j)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/gnn_mutagenicity_sample_hz_nclr.png}} \\ \hline PGExp.
& \raisebox{-6.3\height}{(k)} & \raisebox{-.5\height}{\includegraphics[width=0.15\linewidth]{newfig/pge_correct_ex_syn1.png}} & \raisebox{-6.3\height}{(l)} & \raisebox{-.5\height}{\includegraphics[width=0.12\linewidth]{newfig/pge_syn2_ex.png}} & \raisebox{-6.3\height}{(m)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/pge_syn3_cycle_ex.png}} & \raisebox{-6.3\height}{(n)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/pge_ex_syn4.png}} & \raisebox{-6.3\height}{(o)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/pge_mutagenicity_sample_hz_nclr.png}} \\ \hline RCExp. & \raisebox{-6.3\height}{(p)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/correct_ex_syn1.png}} & \raisebox{-6.3\height}{(q)} & \raisebox{-.5\height}{\includegraphics[width=0.12\linewidth]{newfig/syn2_ex.png}} & \raisebox{-6.3\height}{(r)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/syn3_cycle_ex.png}} & \raisebox{-6.3\height}{(s)} & \raisebox{-.5\height}{\includegraphics[width=0.1\linewidth]{newfig/ex_syn4.png}} & \raisebox{-6.3\height}{(t)} & \raisebox{-.5\height}{\includegraphics[width=0.2\linewidth]{newfig/mutagenicity_sample_hz_nclr.png}} \\ \hline \end{tabular} } \end{center} \caption{Qualitative results produced by GNNExplainer, PGExplainer and RCExplainer. The motifs present in the corresponding dataset are shown in the first row and the corresponding explanations are shown in the later rows. First four columns correspond to node classification where the node highlighted in red is being explained and the node colors denote different labels. Last column corresponds to graph classification where the prediction is explained for a mutagenic sample and the colors denote different atoms. 
Explanations are highlighted in black.} \vspace{-0.1in} \label{table:qualitative} \end{table} \section{Code} \label{sec:apx_disc} We make our code available as part of the supplemental material to the reviewers to facilitate replication of the results. \section*{Checklist} \newpage \section{Introduction} Graph Neural Networks (GNNs)~\citep{DBLP:conf/iclr/KipfW17, DBLP:conf/iclr/VelickovicCCRLB18, NEURIPS2018_53f0d7c5} have achieved great practical successes in many real-world applications, such as chemistry \citep{pires2015pkcsm}, molecular biology \citep{huber2007graphs}, social networks \citep{cho2011friendship} and epidemic modelling \citep{simon2011exact}. For most of these applications, explaining predictions made by a GNN model is crucial for establishing trust with end-users, identifying the cause of a prediction, and even discovering potential deficiencies of a GNN model before massive deployment. Ideally, an explanation should be able to answer questions like ``\textit{Would the prediction of the GNN model change if a certain part of an input molecule is removed?}'' in the context of predicting whether an artificial molecule is active for a certain type of protein~\cite{jiang2020drug, XIONG2021}, or \textit{``Would a recommended item still be recommended if a customer had not purchased some other items in the past?''} for a GNN built for recommendation systems~\cite{fan2019graph, yin2019deeper}. Counterfactual explanations~\cite{moraffah2020causal} in the form of ``\textit{If X had not occurred, Y would not have occurred}''~\cite{molnar2019} are the principled way to answer such questions and thus are highly desirable for GNNs. In the context of GNNs, a counterfactual explanation identifies a small subset of edges of the input graph instance such that removing those edges significantly changes the prediction made by the GNN.
Counterfactual explanations are usually concise and easy to understand~\cite{moraffah2020causal, sokol2019counterfactual} because they align well with human intuition in describing a causal situation~\cite{molnar2019}. To make explanations more trustworthy, a counterfactual explanation should be robust to noise, that is, slight changes to an input graph should not change the explanation significantly. How to produce robust counterfactual explanations for predictions made by general graph neural networks is a novel problem that has not been systematically studied before. As discussed in Section~\ref{sec:rw}, most GNN explanation methods~\citep{NEURIPS2019_d80b7040, NEURIPS2020_e37b08dd, yuan2020xgnn, DBLP:conf/iclr/VelickovicCCRLB18, pope2019explainability} are neither counterfactual nor robust. These methods mostly focus on identifying a subgraph of an input graph that achieves a high correlation with the prediction result. Such explanations are usually not counterfactual because, due to the high non-convexity of GNNs, removing a subgraph that achieves a high correlation does not necessarily change the prediction result. Moreover, many existing methods~\cite{NEURIPS2019_d80b7040, NEURIPS2020_e37b08dd, DBLP:conf/iclr/VelickovicCCRLB18, pope2019explainability} are not robust to noise and may change significantly upon slight modifications to input graphs, because the explanation of every single input graph prediction is independently optimized to maximize the correlation with the prediction; thus an explanation can easily overfit the noise in the data. In this paper, we develop RCExplainer, a novel method to produce robust counterfactual explanations on GNNs.
The key idea is to first model the common decision logic of a GNN by a set of decision regions, where each decision region governs the predictions on a large number of graphs, and then extract robust counterfactual explanations by a deep neural network that explores the decision logic carried by the linear decision boundaries of the decision regions. We make the following contributions. First, we model the decision logic of a GNN by a set of decision regions, where each decision region is induced by a set of linear decision boundaries of the GNN. We propose an unsupervised method to find decision regions for each class such that each decision region governs the prediction of multiple graph samples predicted to be the same class. The linear decision boundaries of the decision region capture the common decision logic on all the graph instances inside the decision region, and thus do not easily overfit the noise of an individual graph instance. By exploring the common decision logic encoded in the linear boundaries, we are able to produce counterfactual explanations that are inherently robust to noise. Second, based on the linear boundaries of the decision region, we propose a novel loss function to train a neural network that produces a robust counterfactual explanation as a small subset of edges of an input graph. The loss function is designed to directly optimize the explainability and counterfactual property of the subset of edges, such that: 1) the subgraph induced by the edges lies within the decision region, thus has a prediction consistent with the input graph; and 2) deleting the subset of edges from the input graph produces a remainder subgraph that lies outside the decision region, thus the prediction on the remainder subgraph changes significantly. Last, we conduct a comprehensive experimental study to compare our method with the state-of-the-art methods on fidelity, robustness, accuracy and efficiency.
All the results solidly demonstrate the superior performance of our approach. \section{Related work} \label{sec:rw} The existing GNN explanation methods~\cite{yuan2020xgnn, DBLP:conf/iclr/VelickovicCCRLB18, NEURIPS2019_d80b7040, pope2019explainability, NEURIPS2020_e37b08dd} generally fall into two categories: model level explanation~\cite{yuan2020xgnn} and instance level explanation~\cite{DBLP:conf/iclr/VelickovicCCRLB18, NEURIPS2019_d80b7040, pope2019explainability, NEURIPS2020_e37b08dd}. A model level explanation method~\cite{yuan2020xgnn} produces a high-level explanation about the general behaviors of a GNN independent of input examples. This may be achieved by synthesizing a set of artificial graph instances such that each artificial graph instance maximizes the prediction score on a certain class. The weakness of model level explanation methods is that an input graph instance may not contain an artificial graph instance, and removing an artificial graph from an input graph does not necessarily change the prediction. As a result, model level explanations are substantially different from counterfactual explanations, because the synthesized artificial graphs do not provide insights into how the GNN makes its prediction on a specific input graph instance. The instance level explanation methods~\cite{DBLP:conf/iclr/VelickovicCCRLB18, NEURIPS2019_d80b7040, pope2019explainability, NEURIPS2020_e37b08dd} explain the prediction(s) made by a GNN on a specific input graph instance or multiple instances by identifying a subgraph of an input graph instance that achieves a high correlation with the prediction on the input graph. GNNExplainer~\citep{NEURIPS2019_d80b7040} removes redundant edges from an input graph instance to produce an explanation that maximizes the mutual information between the distribution of subgraphs of the input graph and the GNN's prediction.
Following the same idea as \citet{NEURIPS2019_d80b7040}, PGExplainer~\citep{NEURIPS2020_e37b08dd} parameterizes the generation process of explanations by a deep neural network, and trains it to maximize a mutual information based loss similar to that used by GNNExplainer~\citep{NEURIPS2019_d80b7040}. The trained deep neural network is then applied to generate explanations for a single input graph instance or a group of input graphs. MEG~\cite{numeroso2021meg} incorporates strong domain knowledge in chemistry with a reinforcement learning framework to produce counterfactual explanations on GNNs specifically built for compound prediction, but the heavy reliance on domain knowledge largely limits its applicability to general GNNs. Some studies~\cite{pope2019explainability, DBLP:conf/iclr/VelickovicCCRLB18} also adapt the existing explanation methods of image-oriented deep neural networks to produce instance level explanations for GNNs. Pope et al.~\citep{pope2019explainability} extend several gradient based methods~\cite{selvaraju2017grad, simonyan2014deep, zhang2018top} to explain predictions made by GNNs. The explanations are prone to gradient saturation \citep{glorot2010understanding} and may also be misleading \citep{NEURIPS2018_294a8ed2} due to the heavy reliance on noisy gradients. Velickovic et al.~\cite{DBLP:conf/iclr/VelickovicCCRLB18} extend the attention mechanism~\cite{denil2017programmable, duan2017one} to identify the nodes in an input graph that contribute the most to the prediction. This method has to retrain the GNN with the altered architecture and the inserted attention layers. Thus, the explanations may not be faithful to the original GNN.
Moreover, those methods~\cite{NEURIPS2019_d80b7040, NEURIPS2020_e37b08dd, DBLP:conf/iclr/VelickovicCCRLB18, pope2019explainability} are usually not robust to noise because the explanation of every single input graph prediction is independently optimized. Thus, an explanation can easily overfit the noise inside input graphs and may change significantly upon slight modifications to input graphs. To tackle the weaknesses in the existing methods, in this paper, we directly optimize the counterfactual property of an explanation. Our explanations are also much more robust to modifications to input graphs, because they are produced from the common decision logic on a large group of similar input graphs, which does not easily overfit the noise of an individual graph sample. Please note that our study is substantially different from adversarial attacks on GNNs. Adversarial attack methods~\cite{zugner2019adversarial, zugner2018adversarial, xu2020adversarial, xu2019topology, jin2019latent} and the most recent CF-GNNExplainer~\cite{lucic2021cf} use adversarial examples as explanations and only focus on changing the predicted labels of GNNs, but totally ignore the explainability of the generated adversarial examples~\cite{freiesleben2020counterfactual}. Thus, the adversarial examples generated by adversarial attacks do not align well with human intuition. On the contrary, our method directly optimizes the explainability of an explanation and requires that the subgraph induced by the explanation lies within the decision region at a large distance from the decision boundaries. We also require that the explanation is generally valid for a large set of similar graph instances by extracting it from the common linear decision boundaries of a large decision region.
The edge structure of a graph $G$ is described by an adjacency matrix $\mathbf{A}\in\{0,1\}^{n\times n}$, where $\mathbf{A}_{ij} = 1$ if there is an edge between node $v_i$ and $v_j$; and $\mathbf{A}_{ij}=0$ otherwise. Denote by $\phi$ a GNN model that maps a graph to a probability distribution over a set of classes denoted by $C$. Let $D$ denote the set of graphs that are used to train the GNN model $\phi$. We focus on GNNs that adopt piecewise linear activation functions, such as MaxOut~\citep{goodfellow2013maxout} and the family of ReLU~\citep{glorot2011deep,he2015delving,nair2010rectified}. The robust counterfactual explanation problem is defined as follows. \begin{definition}[Robust Counterfactual Explanation Problem] Given a GNN model $\phi$ trained on a set of graphs $D$, for an input graph $G=\{V, E\}$, our goal is to explain why $G$ is predicted by the GNN model as $\phi(G)$ by identifying a small subset of edges $S\subseteq E$, such that (1) removing the set of edges in $S$ from $G$ changes the prediction on the remainder $\{V, E-S\}$ of $G$ significantly; and (2) $S$ is stable with respect to slight changes to the edges of $G$ and the feature representations of the nodes of $G$. \end{definition} In the definition, the first condition requires that the explanation $S$ is counterfactual, and the second requires that the explanation is robust to noisy changes to the edges and nodes of $G$. \section{Method} \label{sec:method} In this section, we first introduce how to extract the common decision logic of a GNN on a large set of graphs with the same predicted class. This is achieved by a decision region induced by a set of linear decision boundaries of the GNN. Then, based on the linear boundaries of the decision region, we propose a novel loss function to train a neural network that produces robust counterfactual explanations. Last, we discuss the time complexity of our method when generating explanations.
\subsection{Modelling Decision Regions} Following the routines of many deep neural network explanation methods~\citep{selvaraju2017grad,zeiler2014visualizing}, we extract the decision region of a GNN in the $d$-dimensional output space $\mathbb{O}^d$ of the last convolution layer of the GNN, because the features generated by the last convolution layer are more conceptually meaningful and more robust to noise than the raw features of input graphs, such as vertices and edges~\cite{zugner2019certifiable,bojchevski2019certifiable}. Denote by $\phi_{gc}$ the mapping function realized by the graph convolution layers that maps an input graph $G$ to its graph embedding $\phi_{gc}(G)\in \mathbb{O}^d$, and by $\phi_{fc}$ the mapping function realized by the fully connected layers that maps the graph embedding $\phi_{gc}(G)$ to a predicted distribution over the classes in $C$. The overall prediction $\phi(G)$ made by the GNN can be written as $ \phi(G)=\phi_{fc}(\phi_{gc}(G)). $ For GNNs that adopt piecewise linear activation functions for the hidden neurons, such as MaxOut~\citep{goodfellow2013maxout} and the family of ReLU~\citep{glorot2011deep,he2015delving,nair2010rectified}, the decision logic of $\phi_{fc}$ in the space $\mathbb{O}^d$ is characterized by a piecewise linear decision boundary formed by connected pieces of decision hyperplanes in $\mathbb{O}^d$~\citep{NEURIPS2018_294a8ed2}. We call these hyperplanes \textbf{linear decision boundaries (LDBs)}, and denote by $\mathcal{H}$ the set of LDBs induced by $\phi_{fc}$. The set of LDBs in $\mathcal{H}$ partitions the space $\mathbb{O}^d$ into a large number of convex polytopes. A convex polytope is formed by a subset of the LDBs in $\mathcal{H}$. All the graphs whose graph embeddings are contained in the same convex polytope are predicted as the same class~\cite{chu2018exact}.
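To make the geometry concrete, the following sketch (our own illustration, not part of the method's implementation) checks polytope membership by the side of each LDB $\mathbf{w}_k^\top \mathbf{x}+b_k=0$ on which an embedding falls; two embeddings with the same sign vector lie in the same convex polytope:

```python
import numpy as np

def sign_pattern(x, W, b):
    """Side of each linear decision boundary w_k^T x + b_k = 0 on which
    the embedding x lies; the tuple identifies x's convex polytope."""
    return tuple(np.sign(W @ x + b).astype(int))

def same_polytope(x1, x2, W, b):
    """Two embeddings share a polytope iff their sign patterns agree."""
    return sign_pattern(x1, W, b) == sign_pattern(x2, W, b)

# Two axis-aligned boundaries partition a 2-D embedding space into
# four polytopes (the quadrants, in this toy example).
W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
print(sign_pattern(np.array([2.0, 3.0]), W, b))   # (1, 1)
print(same_polytope(np.array([2.0, 3.0]), np.array([0.5, 4.0]), W, b))  # True
```

Embeddings strictly on a boundary (sign 0) are omitted from this toy check for simplicity.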
Therefore, the LDBs of a convex polytope encode the common decision logic of $\phi_{fc}$ on all the graphs whose graph embeddings lie within the convex polytope~\cite{chu2018exact}. Here, a graph $G$ is \textbf{covered} by a convex polytope if the graph embedding $\phi_{gc}(G)$ is contained in the convex polytope. Based on the above insight, we model the \textbf{decision region} for a set of graph instances as a convex polytope that satisfies the following two properties. First, the decision region should be induced by a subset of the LDBs in $\mathcal{H}$. In this way, when we extract counterfactual explanations from the LDBs, the explanations are faithful to the real decision logic of the GNN. Second, the decision region should cover many graph instances in the training dataset $D$, and all the covered graphs should be predicted as the same class. In this way, the LDBs of the decision region capture the common decision logic on all the graphs covered by the decision region. Here, the requirement of covering a large number of graphs ensures that the common decision logic is general, and thus it is less likely to overfit the noise of an individual graph instance. As a result, the counterfactual explanations extracted from the LDBs of the decision region are insensitive to slight changes in the input graphs. Our method can be easily generalized to incorporate prediction confidence in the coverage measure, such as considering the count of graphs weighted by prediction confidence. To keep our discussion simple, we do not pursue this detail further in the paper. Next, we illustrate how to extract a decision region satisfying the above two requirements. The key idea is to find a convex polytope covering a large set of graph instances in $D$ that are predicted as the same class $c\in C$.
Denote by $D_c\subseteq D$ the set of graphs in $D$ predicted as a class $c\in C$, by $\mathcal{P}\subseteq \mathcal{H}$ a set of LDBs that partition the space $\mathbb{O}^d$ into a set of convex polytopes, and by $r(\mathcal{P}, c)$ the convex polytope induced by $\mathcal{P}$ that covers the largest number of graphs in $D_c$. Denote by $g(\mathcal{P}, c)$ the number of graphs in $D_c$ covered by $r(\mathcal{P}, c)$, and by $h(\mathcal{P}, c)$ the number of graphs in $D$ that are covered by $r(\mathcal{P}, c)$ but are not predicted as class $c$. We extract a decision region covering a large set of graph instances in $D_c$ by solving the following constrained optimization problem. \begin{equation}\label{eq:separation} \max_{\mathcal{P} \subseteq \mathcal{H}} g(\mathcal{P}, c), \textrm{ s.t. } h(\mathcal{P}, c)=0 \end{equation} This formulation realizes the two properties of decision regions because $\mathcal{P}\subseteq \mathcal{H}$ ensures that the decision region is induced by a subset of LDBs in $\mathcal{H}$, maximizing $g(\mathcal{P}, c)$ requires that $r(\mathcal{P}, c)$ covers a large number of graphs in $D_c$, and the constraint $h(\mathcal{P}, c)=0$ ensures that all the graphs covered by $r(\mathcal{P}, c)$ are predicted as the same class $c$. Once we find a solution $\mathcal{P}$ to the above problem, the decision region $r(\mathcal{P}, c)$ can be easily obtained by first counting the number of graphs in $D_c$ covered by each convex polytope induced by $\mathcal{P}$, and then selecting the convex polytope that covers the largest number of graphs in $D_c$. \subsection{Extracting Decision Regions} The optimization problem in Equation~\eqref{eq:separation} is intractable for standard GNNs, mainly because it is impractical to compute $\mathcal{H}$, the set of all the LDBs of a GNN. The number of LDBs in $\mathcal{H}$ of a GNN is exponential with respect to the number of neurons in the worst case~\cite{montufar2014number}.
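Given a candidate set of boundaries stacked row-wise, $r(\mathcal{P}, c)$, $g(\mathcal{P}, c)$ and $h(\mathcal{P}, c)$ can be computed by grouping graph embeddings by their sign patterns with respect to the boundaries; the sketch below is our own illustration with hypothetical names, not the paper's implementation:

```python
import numpy as np
from collections import Counter

def decision_region(P_w, P_b, embeddings, preds, c):
    """Return r(P, c) as a sign pattern, together with g(P, c) and h(P, c).

    P_w, P_b stack the boundaries in P row-wise; `preds` holds the class
    predicted by the GNN for each graph embedding.
    """
    patterns = [tuple(np.sign(P_w @ e + P_b).astype(int)) for e in embeddings]
    # r(P, c): the polytope covering the most graphs predicted as class c.
    counts = Counter(p for p, y in zip(patterns, preds) if y == c)
    region, g = counts.most_common(1)[0]
    # h(P, c): graphs inside r(P, c) that are predicted as some other class.
    h = sum(1 for p, y in zip(patterns, preds) if p == region and y != c)
    return region, g, h

# Toy setup: two boundaries, four 2-D embeddings, predicted classes [0,0,1,1].
P_w = np.array([[1.0, 0.0], [0.0, 1.0]])
P_b = np.zeros(2)
embs = [np.array(v) for v in [(1.0, 1.0), (2.0, 2.0), (-1.0, 1.0), (1.0, 2.0)]]
preds = [0, 0, 1, 1]
print(decision_region(P_w, P_b, embs, preds, c=0))  # ((1, 1), 2, 1)
```

In the toy example, the polytope with sign pattern $(1,1)$ covers two class-0 graphs ($g=2$) but also one class-1 graph ($h=1$), so the constraint $h(\mathcal{P},c)=0$ is not yet satisfied.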
To address this challenge, we substitute $\mathcal{H}$ by a sample $\tilde{\mathcal{H}}$ of LDBs drawn from $\mathcal{H}$. An LDB in the space $\mathbb{O}^d$ can be written as $\mathbf{w}^\top \mathbf{x}+b=0$, where $\mathbf{x}\in \mathbb{O}^d$ is a variable, $\mathbf{w}$ is the basis term, and $b$ corresponds to the bias. Following \citep{chu2018exact}, for any input graph $G$, a linear boundary can be sampled from $\mathcal{H}$ by computing \begin{equation}\label{eq:basis} \begin{split} \mathbf{w} = \frac{\partial \left(\max_1(\phi_{fc}(\boldsymbol{\alpha})) - \max_2(\phi_{fc}(\boldsymbol{\alpha}))\right)}{\partial \boldsymbol{\alpha}}|_{\boldsymbol{\alpha}=\phi_{gc}(G)}, \end{split} \end{equation} and \begin{equation}\label{eq:bias} \begin{split} b = \textstyle\max_1(\phi_{fc}(\boldsymbol{\alpha}))-\textstyle\max_{2}(\phi_{fc}(\boldsymbol{\alpha}))-\mathbf{w}^T\boldsymbol{\alpha}|_{\boldsymbol{\alpha}=\phi_{gc}(G)}, \end{split} \end{equation} where $\max_1(\phi_{fc}(\boldsymbol{\alpha}))$ and $\max_2(\phi_{fc}(\boldsymbol{\alpha}))$ are the largest and the second largest values in the vector $\phi_{fc}(\boldsymbol{\alpha})$, respectively. Given an input graph $G$, Equations~\eqref{eq:basis} and \eqref{eq:bias} identify one LDB from $\mathcal{H}$. Thus, we can sample a subset of input graphs uniformly from $D$, and use Equations~\eqref{eq:basis} and \eqref{eq:bias} to derive a sample of LDBs as $\tilde{\mathcal{H}}\subset \mathcal{H}$. Now, we substitute $\mathcal{H}$ in Equation~\eqref{eq:separation} by $\tilde{\mathcal{H}}$ to produce the following problem. \begin{equation}\label{eq:practical} \max_{\mathcal{P} \subseteq \tilde{\mathcal{H}}} g(\mathcal{P}, c), \textrm{ s.t. } h(\mathcal{P}, c) \leq \delta, \end{equation} where $\delta \geq 0$ is a tolerance parameter to keep this problem feasible. The parameter $\delta$ is required because substituting $\mathcal{H}$ by $\tilde{\mathcal{H}}$ ignores the LDBs in $\mathcal{H}\setminus \tilde{\mathcal{H}}$.
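For intuition, when $\phi_{fc}$ is a single linear layer $\phi_{fc}(\boldsymbol{\alpha}) = W\boldsymbol{\alpha} + \mathbf{b}$, the gradient in Equation~\eqref{eq:basis} is simply the difference of the two top-scoring weight rows, and Equation~\eqref{eq:bias} reduces to the corresponding logit-margin computation. The sketch below works under that simplifying assumption (a multi-layer $\phi_{fc}$ would need automatic differentiation); the function name and data layout are our own.

```python
def sample_ldb_linear(W, b, alpha):
    """Eqs. (basis)/(bias) specialized to phi_fc(a) = W a + b.

    The LDB separating the top-2 classes at alpha is w^T x + b_out = 0 with
    w = W[c1] - W[c2] (the gradient of max1 - max2) and
    b_out = (max1 - max2) - w^T alpha.
    """
    logits = [sum(wi * ai for wi, ai in zip(row, alpha)) + bi
              for row, bi in zip(W, b)]
    order = sorted(range(len(logits)), key=logits.__getitem__, reverse=True)
    c1, c2 = order[0], order[1]          # top-2 classes at alpha
    w = [W[c1][k] - W[c2][k] for k in range(len(alpha))]
    b_out = (logits[c1] - logits[c2]) - sum(wk * ak for wk, ak in zip(w, alpha))
    return w, b_out
```

In this linear case the returned boundary is exactly the locus where the top two logits tie, which is what the gradient of $\max_1 - \max_2$ captures locally for a piecewise linear network.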
Thus, the convex polytope $r(\mathcal{P}, c)$ induced by a subset of boundaries in $\tilde{\mathcal{H}}$ may contain instances that are not predicted as class $c$. We directly set $\delta=h(\tilde{\mathcal{H}}, c)$, which is the smallest value of $\delta$ that keeps the practical problem feasible. The problem in Equation \eqref{eq:practical} can be proven to be a Submodular Cost Submodular Cover (SCSC) problem \citep{NIPS2013_a1d50185} (see Appendix~\ref{sec:apx_proof} for the proof) that is well known to be NP-hard \citep{crawford2019submodular}. We adopt a greedy boundary selection method to find a good solution to this problem \citep{wolsey1982analysis}. Specifically, we initialize $\mathcal{P}$ as an empty set, and then iteratively select a new boundary $h$ from $\tilde{\mathcal{H}}$ by \begin{equation}\label{eq:greedy1} \begin{split} h = \underset{h\in \tilde{\mathcal{H}} \setminus \mathcal{P}}{\arg\min} \frac{g( \mathcal{P}, c) - g(\mathcal{P} \cup \{h\}, c) + \epsilon} {h(\mathcal{P},c) - h(\mathcal{P}\cup \{h\}, c)}, \end{split} \end{equation} where $g( \mathcal{P}, c) - g(\mathcal{P} \cup \{h\}, c)$ is the decrease of $g( \mathcal{P}, c)$ when adding $h$ into $\mathcal{P}$, and $h(\mathcal{P},c) - h(\mathcal{P}\cup \{h\}, c)$ is the decrease of $h(\mathcal{P},c)$ when adding $h$ into $\mathcal{P}$. Both $g( \mathcal{P}, c)$ and $h(\mathcal{P},c)$ are non-increasing when adding $h\in \tilde{\mathcal{H}}$ into $\mathcal{P}$ because adding a new boundary $h$ may only exclude some graphs from the convex polytope $r(\mathcal{P}, c)$. Intuitively, in each iteration, Equation~\eqref{eq:greedy1} selects a boundary $h\in \tilde{\mathcal{H}}$ such that adding $h$ into $\mathcal{P}$ reduces $g( \mathcal{P}, c)$ the least and reduces $h(\mathcal{P},c)$ the most. In this way, we can quickly reduce $h(\mathcal{P},c)$ to no larger than $\delta$ without decreasing $g( \mathcal{P}, c)$ too much, which produces a good feasible solution to the practical problem.
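The greedy selection of Equation~\eqref{eq:greedy1} can be sketched as follows. The data layout is our own simplification: boundaries are `(w, b)` pairs, and a fixed reference embedding identifies which polytope is being tracked.

```python
def greedy_boundaries(H, ref, embeddings, preds, c, delta, eps=1e-6):
    """Greedy approximation to the SCSC problem: repeatedly add the boundary
    minimizing the ratio in Eq. (greedy1) until h(P, c) <= delta."""
    def same_side(w, b, x, y):
        return ((sum(wi * xi for wi, xi in zip(w, x)) + b >= 0)
                == (sum(wi * yi for wi, yi in zip(w, y)) + b >= 0))

    def counts(P):  # g: covered graphs of class c; h: covered graphs of other classes
        g = h = 0
        for a, y in zip(embeddings, preds):
            if all(same_side(w, b, a, ref) for (w, b) in P):
                g, h = (g + 1, h) if y == c else (g, h + 1)
        return g, h

    P, remaining = [], list(H)
    g, h = counts(P)
    while h > delta and remaining:
        best, best_ratio = None, None
        for cand in remaining:
            g2, h2 = counts(P + [cand])
            if h - h2 <= 0:                      # excludes no wrongly covered graph
                continue
            ratio = (g - g2 + eps) / (h - h2)    # Eq. (greedy1)
            if best_ratio is None or ratio < best_ratio:
                best, best_ratio = cand, ratio
        if best is None:
            break                                # no candidate reduces h further
        P.append(best)
        remaining.remove(best)
        g, h = counts(P)
    return P, g, h
```

The small constant `eps` plays the role of the $\epsilon$ discussed next: it breaks ties among candidates that do not decrease $g(\mathcal{P}, c)$ at all.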
We add a small constant $\epsilon$ to the numerator such that, when there are multiple candidates of $h$ that do not decrease $g( \mathcal{P}, c)$, we can still select the $h$ that reduces $h(\mathcal{P},c)$ the most. We apply a peeling-off strategy to iteratively extract multiple decision regions. For each class $c\in C$, we first solve the practical problem once to find a decision region $r(\mathcal{P}, c)$, and then remove the graphs covered by $r(\mathcal{P}, c)$ from $D_c$. If there are remaining graphs predicted as the class $c$, we continue finding the decision regions using the remaining graphs until all the graphs in $D_c$ are removed. When all the graphs in $D_c$ are removed for each class $c\in C$, we stop the iteration and return the set of decision regions we found. \subsection{Producing Explanations} In this section, we introduce how to use the LDBs of decision regions to train a neural network that produces a robust counterfactual explanation as a small subset of edges of an input graph. We form explanations as a subset of edges because GNNs make decisions by aggregating messages passed on edges. Using edges instead of vertices as explanations can provide better insights into the decision logic of GNNs. \subsubsection{The Neural Network Model} Denote by $f_\theta$ the neural network that generates a subset of edges of an input graph $G$ as the robust counterfactual explanation on the prediction $\phi(G)$. $\theta$ represents the set of parameters of the neural network. For experiments, our explanation network $f$ consists of 2 fully connected layers with a ReLU activation and a hidden dimension of 64. For any two connected vertices $v_i$ and $v_j$ of $G$, denote by $\mathbf{z}_i$ and $\mathbf{z}_j$ the embeddings produced by the last convolution layer of the GNN for the two vertices, respectively.
The neural network $f_\theta$ takes $\mathbf{z}_i$ and $\mathbf{z}_j$ as the input and outputs the probability for the edge between $v_i$ and $v_j$ to be part of the explanation. This can be written as \begin{equation}\label{eq:mlp} \begin{split} \mathbf{M}_{ij} = f_\theta(\mathbf{z}_i, \mathbf{z}_j), \end{split} \end{equation} where $\mathbf{M}_{ij}$ denotes the probability that the edge between $v_i$ and $v_j$ is contained in the explanation. When there is no edge between $v_i$ and $v_j$, that is, $\mathbf{A}_{ij} = 0$, we set $\mathbf{M}_{ij}=0$. For an input graph $G=(V, E)$ with $n$ vertices and a trained neural network $f_\theta$, $\mathbf{M}$ is an $n$-by-$n$ matrix that carries the complete information to generate a robust counterfactual explanation as a subset of edges, denoted by $S\subseteq E$. Concretely, we obtain $S$ by selecting all the edges in $E$ whose corresponding entries in $\mathbf{M}$ are larger than 0.5. \subsubsection{Training Model $f_\theta$} For an input graph $G=(V, E)$, denote by $S\subseteq E$ the subset of edges produced by $f_\theta$ to explain the prediction $\phi(G)$. Our goal is to train a good model $f_\theta$ such that the prediction on the subgraph $G_S$ induced by $S$ from $G$ is consistent with $\phi(G)$, and that deleting the edges in $S$ from $G$ produces a remainder subgraph $G_{E\setminus S}$ such that the prediction on $G_{E\setminus S}$ changes significantly from $\phi(G)$. Since producing $S$ by $f_\theta$ is a discrete operation that is hard to incorporate in an end-to-end training process, we define two proxy graphs to approximate $G_S$ and $G_{E\setminus S}$, respectively, such that the proxy graphs are determined by $\theta$ through continuous functions that can be smoothly incorporated into an end-to-end training process. The proxy graph of $G_S$, denoted by $G_\theta$, is defined by regarding $\mathbf{M}$ instead of $\mathbf{A}$ as the adjacency matrix.
That is, $G_\theta$ has exactly the same graph structure as $G$, but the edge weights of $G_\theta$ are given by the entries in $\mathbf{M}$ instead of $\mathbf{A}$. Here, the subscript $\theta$ means $G_\theta$ is determined by $\theta$. The proxy graph of $G_{E\setminus S}$, denoted by $G'_\theta$, also has the same graph structure as $G$, but the edge weight between each pair of vertices $v_i$ and $v_j$ is defined as \begin{equation} \mathbf{M}'_{ij}=\left\{ \begin{split} & 1 - \mathbf{M}_{ij} & \textrm{ if } \mathbf{A}_{ij} = 1 \\ & 0 & \textrm{ if } \mathbf{A}_{ij} = 0\\ \end{split} \right. \end{equation} The edge weights of both $G_\theta$ and $G'_\theta$ are determined by $\theta$ through continuous functions; thus we can smoothly incorporate $G_\theta$ and $G'_\theta$ into an end-to-end training framework. As discussed later in this section, we use a regularization term to force the value of each entry $\mathbf{M}_{ij}$ to be close to either 0 or 1, such that $G_\theta$ and $G'_\theta$ better approximate $G_{S}$ and $G_{E\setminus S}$, respectively. We formulate our loss function as \begin{equation}\label{eq:loss_net} \mathcal{L}(\theta) = \sum_{G \in D} \left\{\lambda\mathcal{L}_{same}(\theta, G) + (1-\lambda)\mathcal{L}_{opp}(\theta, G) + \beta\mathcal{R}_{sparse}(\theta, G) + \mu \mathcal{R}_{discrete}(\theta, G)\right\}, \end{equation} where $\lambda\in[0, 1]$, $\beta \geq 0$ and $\mu\geq 0$ are the hyperparameters controlling the importance of each term. The influence of these parameters is discussed in Appendix~\ref{sec:apx_hyper}. The first term of our loss function requires that the prediction of the GNN on $G_\theta$ is consistent with the prediction on $G$. Intuitively, this means that the edges with larger weights in $G_\theta$ dominate the prediction on $G$. We formulate this term by requiring $G_\theta$ to be covered by the same decision region covering $G$.
Denote by $\mathcal{H}_G$ the set of LDBs that induce the decision region covering $G$, and by $|\mathcal{H}_G|$ the number of LDBs in $\mathcal{H}_G$. For the $i$-th LDB $h_i \in \mathcal{H}_G$, write $\mathcal{B}_i(\mathbf{x})=\mathbf{w}_i^\top\mathbf{x}+b_i$, where $\mathbf{w}_i$ and $b_i$ are the basis and bias of $h_i$, respectively, and $\mathbf{x}\in \mathbb{O}^d$ is a point in the space $\mathbb{O}^d$. The sign of $\mathcal{B}_i(\mathbf{x})$ indicates whether a point $\mathbf{x}$ lies on the positive side or the negative side of $h_i$, and the absolute value $|\mathcal{B}_i(\mathbf{x})|$ is proportional to the distance of a point $\mathbf{x}$ from $h_i$. Denoting by $\sigma(\cdot)$ the standard sigmoid function, we formulate the first term of our loss function as \begin{equation}\label{eq:loss_same} \begin{split} \mathcal{L}_{same}(\theta, G) = \frac{1}{|\mathcal{H}_G|}\sum_{ h_i\in\mathcal{H}_G} \sigma\left(-\mathcal{B}_i(\phi_{gc}(G))* \mathcal{B}_i(\phi_{gc}(G_\theta))\right), \end{split} \end{equation} such that minimizing $\mathcal{L}_{same}(\theta, G)$ encourages the graph embeddings $\phi_{gc}(G)$ and $\phi_{gc}(G_\theta)$ to lie on the same side of every LDB in $\mathcal{H}_G$. Thus, $G_\theta$ is encouraged to be covered by the same decision region covering $G$. The second term of our loss function optimizes the counterfactual property of the explanations by requiring the prediction on $G'_\theta$ to be significantly different from the prediction on $G$. Intuitively, this means that the set of edges with larger weights in $G_\theta$ forms a good counterfactual explanation because reducing the weights of these edges significantly changes the prediction.
Following the above intuition, we formulate the second term as \begin{equation}\label{eq:loss_opp} \begin{split} \mathcal{L}_{opp}(\theta, G) = \min_{h_i\in\mathcal{H}_G} \sigma\left(\mathcal{B}_i(\phi_{gc}(G))* \mathcal{B}_i(\phi_{gc}(G'_\theta))\right), \end{split} \end{equation} such that minimizing $\mathcal{L}_{opp}(\theta, G)$ encourages the graph embeddings $\phi_{gc}(G)$ and $\phi_{gc}(G'_\theta)$ to lie on the opposite sides of at least one LDB in $\mathcal{H}_G$. This further means that $G'_\theta$ is encouraged not to be covered by the decision region covering $G$; thus the prediction on $G'_\theta$ can be changed significantly from the prediction on $G$. Similarly to \citep{NEURIPS2019_d80b7040}, we use an L1 regularization $ \mathcal{R}_{sparse}(\theta, G) = \|\mathbf{M}\|_1 $ on the matrix $\mathbf{M}$ produced by $f_\theta$ on an input graph $G$ to produce a sparse matrix $\mathbf{M}$, such that only a small number of edges in $G$ are selected as the counterfactual explanation. We also follow \citep{NEURIPS2019_d80b7040} to use an entropy regularization \begin{equation} \mathcal{R}_{discrete}(\theta, G) = -\frac{1}{|\mathbf{M}|}\sum_{i,j}(\mathbf{M}_{ij}\log(\mathbf{M}_{ij})+(1-\mathbf{M}_{ij})\log(1-\mathbf{M}_{ij})) \end{equation} to push the value of each entry $\mathbf{M}_{ij}$ to be close to either 0 or 1, such that $G_\theta$ and $G'_\theta$ approximate $G_{S}$ and $G_{E\setminus S}$ well, respectively. Now we can use the graphs in $D$ and the extracted decision regions to train the neural network $f_\theta$ in an end-to-end manner by minimizing $\mathcal{L}(\theta)$ over $\theta$ using back propagation. Once we finish training $f_\theta$, we can first apply $f_\theta$ to produce the matrix $\mathbf{M}$ for an input graph $G=(V, E)$, and then obtain the explanation $S$ by selecting all the edges in $E$ whose corresponding entries in $\mathbf{M}$ are larger than 0.5.
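The terms of Equation~\eqref{eq:loss_net} can be sketched in plain Python as follows. In practice these quantities would be computed on autograd tensors so that gradients flow back to $\theta$; the boundary encoding as `(w, b)` pairs and the function names are our own simplification.

```python
import math

def _sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def _B(w, b, x):  # B_i(x) = w_i^T x + b_i
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def loss_same(boundaries, emb_G, emb_Gtheta):
    """Eq. (loss_same): keep G_theta on the same side of every LDB as G."""
    return sum(_sigmoid(-_B(w, b, emb_G) * _B(w, b, emb_Gtheta))
               for (w, b) in boundaries) / len(boundaries)

def loss_opp(boundaries, emb_G, emb_Gprime):
    """Eq. (loss_opp): push G'_theta across at least one LDB."""
    return min(_sigmoid(_B(w, b, emb_G) * _B(w, b, emb_Gprime))
               for (w, b) in boundaries)

def reg_sparse(M):
    """L1 regularizer on the edge-probability matrix M."""
    return sum(abs(m) for row in M for m in row)

def reg_discrete(M, eps=1e-12):
    """Entropy regularizer pushing each M_ij towards 0 or 1."""
    n = sum(len(row) for row in M)
    ent = -sum(m * math.log(m + eps) + (1.0 - m) * math.log(1.0 - m + eps)
               for row in M for m in row)
    return ent / n

def total_loss(boundaries, emb_G, emb_Gtheta, emb_Gprime, M,
               lam=0.5, beta=0.01, mu=0.01):
    # Eq. (loss_net) for a single graph G
    return (lam * loss_same(boundaries, emb_G, emb_Gtheta)
            + (1.0 - lam) * loss_opp(boundaries, emb_G, emb_Gprime)
            + beta * reg_sparse(M) + mu * reg_discrete(M))
```

Note how the product of signed distances makes the side agreement differentiable: it is large and positive when the two embeddings are far on the same side of a boundary, and negative when they are separated.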
We do not need the extracted boundaries for inference as the decision logic of the GNN is already distilled into the explanation network $f$ during the training. As discussed in Appendix~\ref{sec:apx_node_cls}, our method can be easily extended to generate robust counterfactual explanations for node classification tasks. Our method is highly efficient with a time complexity $O(|E|)$ for explaining the prediction on an input graph $G$, where $|E|$ is the total number of edges in $G$. Additionally, the neural network $f_\theta$ can be directly used without retraining to predict explanations on unseen graphs. Thus our method is significantly faster than the other methods \cmmnt{\citep{NEURIPS2019_d80b7040,pope2019explainability, yuan2021explainability,NEURIPS2020_8fb134f2}} \citep{NEURIPS2019_d80b7040,pope2019explainability,NEURIPS2020_8fb134f2} that require retraining each time explanations are generated on a new input graph. \section{Experiments} \label{sec:experiment} We conduct a series of experiments to compare our method with the state-of-the-art methods including GNNExplainer~\citep{NEURIPS2019_d80b7040}, PGExplainer~\citep{NEURIPS2020_e37b08dd}, PGM-Explainer~\citep{NEURIPS2020_8fb134f2}\cmmnt{, SubgraphX~\citep{yuan2021explainability}} and CF-GNNExplainer~\citep{lucic2021cf}. For the methods that identify a set of vertices as an explanation, we use the set of vertices to induce a subgraph from the input graph, and then use the set of edges of the induced subgraph as the explanation. For the methods that identify a subgraph as an explanation, we directly use the set of edges of the identified subgraph as the explanation. To demonstrate the effectiveness of the decision regions, we derive another baseline method named RCExp-NoLDB that adopts the general framework of RCExplainer but does not use the LDBs of decision regions to generate explanations.
Instead, RCExp-NoLDB directly maximizes the prediction confidence of class $c$ for $G_\theta$ and minimizes the prediction confidence of class $c$ for $G'_\theta$. We evaluate the explanation performance on two typical tasks: the graph classification task that uses a GNN to predict the class of an input graph, and the node classification task that uses a GNN to predict the class of a graph node. For the graph classification task, we use one synthetic dataset, BA-2motifs~\citep{NEURIPS2020_e37b08dd}, and two real-world datasets, Mutagenicity~\citep{kazius2005derivation} and NCI1~\citep{4053093}. For the node classification task, we use the same four synthetic datasets as used by GNNExplainer~\cite{NEURIPS2019_d80b7040}, namely, {BA-\sc{shapes}}, {BA-\sc{Community}}, {\sc{tree-cycles}} and {\sc{tree-grid}}. Limited by space, we only report here the key results on the graph classification task for fidelity, robustness and efficiency. Please refer to Appendix~\ref{sec:apx_impldetails} for details on datasets, baselines and the experiment setups. Detailed experimental comparison on the node classification task will be discussed in Appendix~\ref{sec:apx_exp} where we show that our method produces extremely accurate explanations. CF-GNNExplainer~\cite{lucic2021cf} is only included in the results of node classification, because the source code of CF-GNNExplainer is not available and~\cite{lucic2021cf} reports performance only on node classification tasks. \subsection{Fidelity} \textbf{Fidelity} is measured by the decrease of prediction confidence after removing the explanation (i.e., a set of edges) from the input graph~\citep{pope2019explainability}. We use fidelity to evaluate how counterfactual the generated explanations are on the datasets Mutagenicity, NCI1 and BA-2motifs. A large fidelity score indicates stronger counterfactual characteristics. It is important to note that fidelity may be sensitive to the sparsity of explanations.
The sparsity of an explanation $S$ with respect to an input graph $G=(V, E)$ is $sparsity(S, G) =1 - \frac{|S|}{|E|}$, that is, the percentage of edges remaining after the explanation is removed from $G$. We only compare explanations with the same level of sparsity. Figure \ref{fig:fidelity} shows the results about fidelity. Our approach achieves the best fidelity performance at all levels of sparsity. The results validate the effectiveness of our method in producing highly counterfactual explanations. RCExplainer also significantly outperforms RCExp-NoLDB. This confirms that using LDBs of decision regions extracted from GNNs produces more faithful counterfactual explanations. \cmmnt{SubgraphX does not perform as well as reported by \citet{yuan2021explainability}. The fidelity performance reported by \citet{yuan2021explainability} is obtained by setting the features of nodes that are part of the explanation to $0$ but not removing the explanation edges from the input graph. This does not remove the message passing roles of the explanation nodes from the input graph because the edges connected to those nodes still can pass messages. In our experiments, we fix this problematic setting by directly blocking the messages that are passed on the edges in the explanation. Appendix~\ref{sec:apx_impldetails} provides more details.} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/nosubx_last_fidelity_rcexplainer.pdf} \end{center} \caption{Fidelity performance averaged across 10 runs for the datasets at different levels of sparsity. 
} \label{fig:fidelity} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[width=1.0\linewidth]{NeurIPS 2021-Counterfactual Interpretation on GNNs/figures/consistent_robust_hzlegend_k8_rcexplainer.pdf} \end{center} \caption{Noise robustness (AUC) averaged across 10 runs for the datasets at different levels of noise.} \label{fig:robustness} \end{figure*} \subsection{Robustness Performance} In this experiment, we evaluate the robustness of all methods by quantifying how much an explanation changes after adding noise to the input graph. For an input graph $G$ and the explanation $S$, we produce a perturbed graph $G'$ by adding random noise to the node features and randomly adding or deleting some edges of the input graph such that the prediction on $G'$ is consistent with the prediction on $G$. Using the same method, we obtain the explanation $S'$ on $G'$. Considering the top-$k$ edges of $S$ as the ground-truth and comparing $S'$ against them, we compute a receiver operating characteristic (ROC) curve and evaluate the robustness by the area under the curve (AUC) of the ROC curve. We report results for $k=8$ in Figure~\ref{fig:robustness}. Results for other values of $k$ are included in Appendix~\ref{sec:apx_exp} where we observe a similar trend. Figure~\ref{fig:robustness} shows the AUC of GNNExplainer, PGExplainer, RCExp-NoLDB and RCExplainer at different levels of noise. A higher AUC indicates better robustness. The percentage of noise shows the proportion of nodes and edges that are modified.\cmmnt{Baselines such as PGM-Explainer and SubgraphX are not included in this experiment as they do not output the edge weights that are required for computing AUC.} PGM-Explainer is not included in this experiment as it does not output the edge weights that are required for computing AUC. We present additional robustness experiments in Appendix~\ref{sec:apx_exp} where we extend all the baselines to report node and edge level accuracy.
GNNExplainer performs the worst on most of the datasets, since it optimizes each graph independently without considering other graphs in the training set. Even when no noise is added, the AUC of GNNExplainer is significantly lower than 1 because different runs produce different explanations for the same graph prediction. PGExplainer is more robust than GNNExplainer because the neural network it trains to produce explanations implicitly considers all the graphs used for training. Our method achieves the best AUC on all the datasets, because the common decision logic carried by the decision regions of a GNN is highly robust to noise. PGExplainer achieves performance comparable to our method on the Mutagenicity dataset, because the samples of this dataset share a lot of common structures such as carbon rings, which makes it easier for the neural network trained by PGExplainer to identify these structures in the presence of noise. However, for BA-2motifs and NCI1, this is harder as samples share very few structures and thus the AUC of PGExplainer drops significantly. RCExplainer also significantly outperforms RCExp-NoLDB on these datasets, which highlights the role of decision boundaries in making our method highly robust.
\cmmnt{ \begin{table}[t] \small \begin{center} \begin{tabular}{cccccc} \toprule {\bf{Method}} & {GNNExplainer} & {PGExplainer} & {PGM-Explainer} & {SubgraphX} & {RCExplainer} \\ \midrule \bf{Time} & 1.2s $\pm$ 0.2 & \textbf{0.01s} $\pm$ 0.03 & 13.1s $\pm$ 3.9 & 77.8s $\pm$ 4.5 & \textbf{0.01s} $\pm$ 0.02\\ \bottomrule \end{tabular} \end{center} \caption{Average time cost for producing an explanation on a single graph sample.} \label{table:time} \end{table} } \begin{table}[t] \small \begin{center} \begin{tabular}{ccccc} \toprule {\bf{Method}} & {GNNExplainer} & {PGExplainer} & {PGM-Explainer} & {RCExplainer} \\ \midrule \bf{Time} & 1.2s $\pm$ 0.2 & \textbf{0.01s} $\pm$ 0.03 & 13.1s $\pm$ 3.9 & \textbf{0.01s} $\pm$ 0.02\\ \bottomrule \end{tabular} \end{center} \caption{Average time cost for producing an explanation on a single graph sample.} \label{table:time} \end{table} \subsection{Efficiency} We evaluate efficiency by comparing the average computation time taken for inference on unseen graph samples. Table \ref{table:time} shows the results on the Mutagenicity dataset. Since our method can also be directly used for unseen data without any retraining, it is as efficient as PGExplainer and significantly faster than GNNExplainer \cmmnt{,} and PGM-Explainer\cmmnt{ and SubgraphX}. \section{Conclusion} \label{sec:conclude} In this paper, we develop a novel method for producing counterfactual explanations on GNNs. We extract decision boundaries from the given GNN model to formulate an intuitive and effective counterfactual loss function. We optimize this loss to train a neural network to produce explanations with strong counterfactual characteristics. Since the decision boundaries are shared by multiple samples of the same predicted class, explanations produced by our method are robust and do not overfit the noise. Our experiments on synthetic and real-life benchmark datasets strongly validate the efficacy of our method.
In this work, we focus on GNNs that belong to Piecewise Linear Neural Networks (PLNNs). Extending our method to other families of GNNs and to tasks such as link prediction remains an interesting future direction. Our method will benefit multiple fields where GNNs are intensively used. By allowing the users to interpret the predictions of complex GNNs better, it will promote transparency, trust and fairness in society. However, there also exist some inherent risks. A generated explanation may expose private information if our method is not coupled with an adequate privacy protection technique. Also, some of the ideas presented in this paper may be adopted and extended to improve adversarial attacks. Without appropriate defense mechanisms, the misuse of such attacks poses a risk of disruption in the functionality of GNNs deployed in the real world. That said, we firmly believe that these risks can be mitigated through increased awareness and proactive measures.
\section{Introduction}\label{intro} Towards the end of the introduction of \cite{hrushovski:kazhdan:integration:vf} three hopes for the future of the theory of motivic integration are mentioned. We propose to investigate one of them in a series of papers: additive invariants and integration in $o$\nobreakdash-minimal valued fields. A prototype of such valued fields is $\mathds{R} \dpar{ t^{\mathds{Q}}}$, the generalized power series field over $\mathds{R}$ with exponents in $\mathds{Q}$. One of the cornerstones of the methodology of \cite{hrushovski:kazhdan:integration:vf} is $C$\nobreakdash-minimality, which is the right analogue of $o$\nobreakdash-minimality for algebraically closed valued fields and other closely related structures that epitomizes the behavior of definable subsets of the affine line. It, of course, fails in an $o$\nobreakdash-minimal valued field, mainly due to the presence of a total ordering. Thus the construction we seek has to be carried out in a different framework, which affords a similar type of normal forms for definable subsets of the affine line, a special kind of weak $o$\nobreakdash-minimality; this framework is van den Dries and Lewenberg's theory of $T$-convex valued fields \cite{DriesLew95, Dries:tcon:97}. The reader is referred to the opening discussions in \cite{DriesLew95, Dries:tcon:97} for a more detailed introduction to $T$\nobreakdash-convexity and a summary of fundamental results. In those papers, how the valuation is expressed is somewhat inconsequential. In contrast, we shall work exclusively with a fixed two-sorted language $\lan{T}{RV}{}$ --- see \S~\ref{defn:lan} and Example~\ref{exam:RtQ} for a quick grasp of the central features of this language --- since such a language is a part of the preliminary setup of any Hrushovski-Kazhdan style integration. 
Throughout this paper, let $T$ be a complete power-bounded $o$\nobreakdash-minimal $\lan{T}{}{}$\nobreakdash-theory extending the theory $\usub{\textup{RCF}}{}$ of real closed fields. For the real field $\mathds{R}$, the condition of being power-bounded is the same as that of being polynomially bounded. However, for nonarchimedean real closed fields, the former condition is more general and is indeed more natural. The language $\lan{T}{}{}$ extends the language $\{<, 0, 1, +, -, \times\}$ of ordered rings. Let $\mdl R \coloneqq (R, <, \ldots)$ be a model of $T$. By definition, a $T$\nobreakdash-convex subring $\OO$ of $\mdl R$ is a convex subring of $\mdl R$ such that, for every definable (no parameters allowed) continuous function $f : R \longrightarrow R$, we have $f(\OO) \subseteq \OO$. The convexity of $\OO$ implies that it is a valuation ring of $\mdl R$. For instance, if $\mdl R$ is nonarchimedean and $\mathds{R} \subseteq R$ then the convex hull of $\mathds{R}$ forms a valuation ring of $\mdl R$ and, accordingly, the infinitesimals form its maximal ideal. Such a convex hull is $T$\nobreakdash-convex if no definable continuous function can grow so fast as to stretch the standard real numbers into infinity. Let $\OO$ be a \emph{proper} $T$\nobreakdash-convex subring of $\mdl R$. The theory $T_{\textup{convex}}$ of the pair $(\mdl R, \OO)$, suitably axiomatized in the language $\lan{}{convex}{}$ that expands $\lan{T}{}{}$ with a new unary relation symbol, is complete, and if $T$ admits quantifier elimination and is universally axiomatizable then $T_{\textup{convex}}$ admits quantifier elimination as well. 
Since $T$ is power-bounded, the definable subsets of $R$ afford a type of normal form, a special kind of weak $o$\nobreakdash-minimality (see \cite{mac:mar:ste:weako}), which we dub Holly normal form (since it was first studied by Holly in \cite{holly:can:1995}); in a nutshell, every definable subset of $R$ is a boolean combination of intervals and (valuative) discs. Clearly this is a natural generalization of $o$\nobreakdash-minimality in the presence of valuation. A number of desirable properties of definable sets in $R$ depends on the existence of such a normal form. For instance, every subset of $R$ defined by a principal type assumes one of the following four forms: a point, an open disc, a closed disc, and a half thin annulus, and, furthermore, these four forms are distinct in the sense that no definable bijection between any two of them is possible. Let $\vv : R^{\times} \longrightarrow \Gamma$ be the valuation map induced by $\OO$, $\K$ the corresponding residue field, and $\upharpoonright : \OO \longrightarrow \K$ the residue map. There is a canonical way of turning $\K$ into a model of $T$ as well, see \cite[Remark~2.16]{DriesLew95}. Let $\MM$ be the maximal ideal of $\OO$. Let $\RV = R^{\times} / (1 + \MM)$ and $\rv : R^{\times} \longrightarrow \RV$ be the quotient map. Note that, for each $a \in R$, the map $\vv$ is constant on the set $a + a\MM$, and hence there is an induced map $\vrv : \RV \longrightarrow \Gamma$. The situation is illustrated in the following commutative diagram \begin{equation*} \bfig \square(0,0)/^{ (}->`->>`->>`^{ (}->/<600, 400>[\OO \smallsetminus \MM`R^{\times}`\K^{\times}` \RV;`\upharpoonright`\rv`] \morphism(600,0)/->>/<600,0>[\RV`\Gamma;\vrv] \morphism(600,400)/->>/<600,-400>[R^{\times}`\Gamma;\vv] \efig \end{equation*} where the bottom sequence is exact. 
This structure may be expressed and axiomatized in a natural two-sorted first-order language $\lan{T}{RV}{}$, in which $R$ is referred to as the $\VF$-sort and $\RV$ is taken as a new sort. Informally, $\lan{T}{RV}{}$ is viewed as an extension of $\lan{}{convex}{}$. We expand $(\mdl R, \OO)$ to an $\lan{T}{RV}{}$-structure. The main construction in this paper is carried out in such a setting. For concreteness, the reader is welcome to take $R = \mathds{R} \dpar{ t^{\mathds{Q}} }$ and $\OO = \mathds{R} \llbracket t^{\mathds{Q}} \rrbracket$ in the remainder of this introduction (see Example~\ref{exam:RtQ} below for more on this generalized power series field). For a description of the ideas and the main results of the Hrushovski-Kazhdan style integration theory, we refer the reader to the original introduction in \cite{hrushovski:kazhdan:integration:vf} and also the introductions in \cite{Yin:int:acvf, Yin:int:expan:acvf}. There is also a quite comprehensive introduction to the same materials in \cite{hru:loe:lef} and, more importantly, a specialized version that relates the Hrushovski-Kazhdan style integration to the geometry and topology of Milnor fibers over the complex field. The method expounded there will be featured in a sequel to this paper as well. In fact, since much of the work below is closely modeled on that in \cite{hrushovski:kazhdan:integration:vf, Yin:special:trans, Yin:int:acvf, hru:loe:lef}, the reader may simply substitute the term ``theory of power-bounded $T$-convex valued fields'' for ``theory of algebraically closed valued fields'' or more generally ``$V$-minimal theories'' in those introductions and thereby acquire a quite good grip on what the results of this paper look like. For the reader's convenience, however, we shall repeat some of the key points, perhaps with minor changes here and there. Let $\VF_*$ and $\RV[*]$ be two categories of definable sets that are respectively associated with the $\VF$-sort and the $\RV$-sort as follows. 
In $\VF_*$, the objects are the definable subsets of cartesian products of the form $\VF^n \times \RV^m$ and the morphisms are the definable bijections. On the other hand, for technical reasons (particularly for keeping track of ambient dimensions), $\RV[*]$ is formulated in a somewhat complicated way and is hence equipped with a gradation by ambient dimensions (see Definition~\ref{defn:c:RV:cat}). The Grothendieck semigroup of a category $\mdl C$, denoted by $\gsk \mdl C$, is the free semigroup generated by the isomorphism classes of $\mdl C$, subject to the usual scissor relation $[A \smallsetminus B] + [B] = [A]$, where $[A]$, $[B]$ denote the isomorphism classes of the objects $A$, $B$ and ``$\smallsetminus$'' is a certain binary operation, usually just set subtraction. Sometimes $\mdl C$ is also equipped with a binary operation --- for example, cartesian product --- that induces multiplication in $\gsk \mdl C$, in which case $\gsk \mdl C$ becomes a (commutative) semiring. The formal groupification of $\gsk \mdl C$, which is then a ring, is denoted by $\ggk \mdl C$. The main construction of the Hrushovski-Kazhdan integration theory is a canonical --- that is, functorial in a suitable way --- homomorphism from the Grothendieck semiring $\gsk \VF_*$ of $\VF_*$ to the Grothendieck semiring $\gsk \RV[*]$ of $\RV[*]$ modulo a semiring congruence relation $\isp$ on the latter. In fact, it turns out to be an isomorphism. This construction has three main steps. \begin{enumerate}[{Step} 1.] \item First we define a lifting map $\bb L$ from the set of objects of $\RV[*]$ into the set of objects of $\VF_*$ (Definition~\ref{def:L}). Next we single out a subclass of isomorphisms in $\VF_*$, which are called special bijections (Definition~\ref{defn:special:bijection}), and show that for any object $A$ in $\VF_*$ there is a special bijection $T$ on $A$ and an object $\bm U$ in $\RV[*]$ such that $T(A)$ is isomorphic to $\bb L \bm U$ (Corollary~\ref{all:subsets:rvproduct}). 
This implies that $\bb L$ is ``essentially surjective'' on objects, meaning that it is surjective on isomorphism classes of $\VF_*$. For this result alone we do not have to limit our means to special bijections. However, in Step~3 below, special bijections become an essential ingredient in computing the semiring congruence relation $\isp$. \item We show that, for any two isomorphic objects $\bm U_1$, $\bm U_2$ of $\RV[*]$, their lifts $\bb L \bm U_1, \bb L \bm U_2$ in $\VF_*$ are isomorphic as well (Corollary~\ref{RV:lift}). This implies that $\bb L$ induces a semiring homomorphism $ \gsk \RV[*] \longrightarrow \gsk \VF_*, $ which is also denoted by $\bb L$. This homomorphism is surjective by Step~1 and hence, modulo the semiring congruence relation $\isp$ --- that is, the kernel of $\bb L$ --- the inverse $\int_+$ of the homomorphism $\bb L$ is an isomorphism of semirings. \item A number of important properties of classical integration can already be verified for $\int_+$ and hence, morally, this third step is not necessary. For applications, however, it is much more satisfying to have a precise description of the semiring congruence relation $\isp$. The basic notion used in the description is that of a blowup of an object in $\RV[*]$, which is essentially a restatement of the trivial fact that there is an additive translation from $1 + \MM$ onto $\MM$ (Definition~\ref{defn:blowup:coa}). We then show that, for any two objects $\bm U_1$, $\bm U_2$ in $\RV[*]$, there are isomorphic blowups $\bm U_1^{\flat}$, $\bm U_2^{\flat}$ of them if and only if $\bb L \bm U_1$, $\bb L \bm U_2$ are isomorphic (Proposition~\ref{kernel:L}). The ``if'' direction essentially contains a form of Fubini's theorem and is the most technically involved part of the construction. \end{enumerate} We call the semiring homomorphism $\int_+$ thus obtained a Grothendieck homomorphism. 
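The scissor relation and the universality of such invariants can be illustrated in miniature, far away from the valued-field setting of this paper: for the category of finite sets with bijections as the only isomorphisms, the Grothendieck semiring is $\mathds{N}$ and the universal invariant is cardinality. The following Python sketch is a toy illustration only, not part of the construction above:

```python
# Toy model: the Grothendieck semiring of finite sets with bijections.
# The universal additive and multiplicative invariant is cardinality,
# so [A \ B] + [B] = [A] for B a subset of A, and [A x B] = [A][B].

def cls(A):
    """Isomorphism class of a finite set = its cardinality."""
    return len(A)

A = {0, 1, 2, 3, 4}
B = {1, 3}                      # a subset of A

# Scissor relation: [A \ B] + [B] = [A]
assert cls(A - B) + cls(B) == cls(A)

# Multiplicativity under cartesian product: [A x B] = [A] * [B]
product = {(a, b) for a in A for b in B}
assert cls(product) == cls(A) * cls(B)

# A bijection (here: a relabelling) does not change the class.
relabelled = {f"x{a}" for a in A}
assert cls(relabelled) == cls(A)
```

In the paper's setting the role of cardinality is played by the Grothendieck homomorphism $\int_+$, and the interesting content lies in identifying the kernel $\isp$; the toy case has trivial kernel.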
If the objects carry volume forms and the Jacobian transformation preserves the integral, that is, the change of variables formula holds, then it may be called a motivic integration; we will not consider this case here and postpone it to a future installment. When the semirings are formally groupified, this Grothendieck homomorphism is accordingly recast as a ring homomorphism, which is denoted by $\int$ and is understood as a (universal) additive invariant. The structure of the Grothendieck ring $\ggk \RV[*]$ may be significantly elucidated. To wit, it can be expressed as a tensor product of two other Grothendieck rings $\ggk \RES[*]$ and $\ggk \Gamma[*]$, that is, there is an isomorphism of graded rings: \[ \bb D: \ggk \RES[*] \otimes_{\ggk \Gamma^{c}[*]} \ggk \Gamma[*] \longrightarrow \ggk \RV[*], \] where $\RES[*]$ is essentially the category of definable sets in $\mathds{R}$ (as a model of the theory $T$) and $\Gamma[*]$ is essentially the category of definable sets over $\mathds{Q}$ (as an $o$\nobreakdash-minimal group), both graded by ambient dimension, and $\Gamma^{c}[*]$ is the full subcategory of $\Gamma[*]$ of finite objects, whose Grothendieck ring admits a natural embedding into $\ggk \RES[*]$ as well. 
This isomorphism results in various retractions from $\ggk \RV[*]$ into $\ggk \RES[*]$ or $\ggk \Gamma[*]$ and, when combined with the Grothendieck homomorphism $\int$ and the two Euler characteristics in $o$\nobreakdash-minimal groups (one is a truncated version of the other), yields a (generalized) Euler characteristic \[ \textstyle \Xint{\textup{G}} : \ggk \VF_* \to^{\sim} ( \mathds{Z} \oplus \bigoplus_{i \geq 1} (\mathds{Z}[Y]/(Y^2+Y))X^i) / (1 + 2YX + X), \] which is actually an isomorphism, and two specializations to $\mathds{Z}$: \[ \textstyle \Xint{\textup{R}}^g, \Xint{\textup{R}}^b: \ggk \VF_* \longrightarrow \mathds{Z}, \] determined by the assignments $Y \longmapsto -1$ and $Y \longmapsto 0$ or, equivalently, $X \longmapsto 1$ and $X \longmapsto -1$ (see Proposition~\ref{prop:eu:retr:k} and Theorem~\ref{thm:ring}). We will demonstrate the significance of these two specializations, as opposed to only one, in a future paper that is dedicated to the study of generalized (real) Milnor fibers in the sense of \cite{hru:loe:lef}. For certain purposes, the difference between model theory and algebraic geometry is somewhat easier to bridge if one works over the complex field, as is demonstrated in \cite{hru:loe:lef}; however, over the real field, although they do overlap significantly, the two worlds seem to diverge in their methods and ideas. Our results should be understood in the context of ``$o$\nobreakdash-minimal geometry'' \cite{dries:1998, DrMi96} as opposed to real algebraic geometry. 
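The equivalence of the two descriptions of the specializations can be sanity-checked by elementary arithmetic: any ring homomorphism from the target ring displayed above to $\mathds{Z}$ must kill the relations $Y^2 + Y$ and $1 + 2YX + X$, and then $Y \longmapsto -1$ forces $X \longmapsto 1$ while $Y \longmapsto 0$ forces $X \longmapsto -1$. A minimal Python sketch of this computation (purely illustrative):

```python
# Any ring homomorphism to Z must send the relations
#   Y^2 + Y       and       1 + 2*Y*X + X
# to 0.  Given Y in {-1, 0} (the roots of Y^2 + Y), solve for the
# forced value of X.

def forced_X(Y):
    assert Y * Y + Y == 0           # Y must satisfy Y^2 + Y = 0
    # 1 + 2*Y*X + X = 0  =>  X * (2*Y + 1) = -1  =>  X = -1 / (2*Y + 1)
    X = -1 // (2 * Y + 1)
    assert 1 + 2 * Y * X + X == 0   # both relations are killed
    return X

assert forced_X(-1) == 1    # the specialization Y -> -1 forces X -> 1
assert forced_X(0) == -1    # the specialization Y -> 0  forces X -> -1
```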
In general, the various Grothendieck rings considered in real algebraic geometry bring about lesser collapse of ``algebraic data'' --- since there are far fewer morphisms in the background --- and can yield much finer invariants, and hence are more faithful to the geometry in this regard, although the flip side of the story is that the invariants are often computationally intractable (especially when resolution of singularities is involved) and specializations are often needed in practice. For instance, the Grothendieck ring of real algebraic varieties may be specialized to $\mathds{Z}[X]$, which is called the virtual Poincar\'e polynomial (see \cite{mccrory:paru:virtual:poin}). Our method here does not seem to be suited for recovering invariants at this level, at least not directly. The role of $T$-convexity in this paper cannot be overemphasized. However, it does not quite work if the exponential function is included in the theory $T$. It remains a worthy challenge to find a suitable framework in which the construction of this paper may be extended to that case. Much of the content of this paper is extracted from the preprint \cite{Yin:int:tcvf}, which contains a more comprehensive study of $T$\nobreakdash-convex valued fields. This auxiliary part of the theory we are developing may be regarded as a sequel to or a variation on the themes of the work in \cite{DriesLew95, Dries:tcon:97}. It has become clear that some of the technicalities thereof may be of independent interest. For instance, the valuative or infinitesimal version of Lipschitz continuity plays a crucial role in proving the existence of Lipschitz stratifications in an arbitrary power-bounded $o$\nobreakdash-minimal field (this proof has been published in \cite{halyin} and the result cited there is Corollary~\ref{part:rv:cons}). 
Also, in a future paper, we will use the main result here to show that, in both the real and the complex cases, the Euler characteristic of the topological Milnor fiber coincides with that of the motivic Milnor fiber, avoiding the algebro-geometric machinery employed in \cite[Remark~8.5.5]{hru:loe:lef}. \section{Basic results in $T$-convex valued fields} In this section, we first describe the two-sorted language $\lan{T}{RV}{}$ for $o$\nobreakdash-minimal valued fields and axiomatize the $\lan{T}{RV}{}$-theory $\TCVF$. We then show that $\TCVF$ admits quantifier elimination. Some of the results in \cite{DriesLew95, Dries:tcon:97} that are crucial for our construction are also translated into the present setting. \subsection{Some notation}\label{subs:nota} Recall from the introduction above that $T$ is a complete power-bounded $o$\nobreakdash-minimal $\lan{T}{}{}$\nobreakdash-theory extending the theory $\usub{\textup{RCF}}{}$ of real closed fields. \begin{conv} For the moment, by \emph{definable} we mean definable with arbitrary parameters from the structure in question. But later --- starting in \S~\ref{def:VF} --- we will abandon this practice and work with a fixed set of parameters. The reason for this change will be made abundantly clear when it happens. \end{conv} \begin{defn}[Power-bounded]\label{defn.powBd} Suppose that $\mdl R$ is an $o$\nobreakdash-minimal real closed field. A \emph{power function} in $\mdl R$ is a definable endomorphism of the multiplicative group $\mdl R^+$. We say that $\mdl R$ is \emph{power-bounded} if every definable function $f \colon \mdl R \longrightarrow \mdl R$ is eventually dominated by a power function, that is, there exists a power function $g$ such that $\abs{f(x)} \le g(x)$ for all sufficiently large $x$. A complete $o$\nobreakdash-minimal theory extending $\usub{\textup{RCF}}{}$ is \emph{power-bounded} if all its models are. 
\end{defn} Every power function $f$ in $\mdl R$ may be understood as a function of the form $x \longmapsto x^\lambda$, where $\lambda = \tfrac{d}{d x} f(1)$. The collection of all such $\lambda$ forms a subfield, called the \emph{field of exponents} of $\mdl R$. We will quote the results on power-bounded structures directly from \cite{DriesLew95, Dries:tcon:97} and hence do not need to know more about them beyond what has already been said. At any rate, a concise and lucid account of the essentials may be found in \cite[\S~3]{Dries:tcon:97}. \begin{rem}[Functional language]\label{rem:cont} We shall need a generality that is due to Lou van den Dries (private communication). It states that the theory $T$ can be reformulated in another language \emph{all} of whose primitives, except the binary relation $\leq$, are function symbols that are interpreted as \emph{continuous} functions in all the models of $T$. Actually, for this to hold, we only need to assume that $T$ is a complete $o$\nobreakdash-minimal theory that extends $\usub{\textup{RCF}}{}$. More precisely, working in any model of $T$, it can be shown that all definable sets are boolean combinations of sets of the form $f(x) = 0$ or $g(x) > 0$, where $f$ and $g$ are definable total continuous functions. In particular, this holds in the prime model $\mdl P$ of $T$. Taking all definable total continuous functions in $\mdl P$ and the ordering $<$ as the primitives in a new language $\lan{T'}{}{}$, we see that $T$ can be reformulated as an \emph{equivalent} $\lan{T'}{}{}$-theory $T'$ in the sense that the syntactic categories of $T$ and $T'$ are naturally equivalent. In traditional but less general and more verbose model-theoretic jargon, this just says that if a model of $T$ is converted to a model of $T'$ in the obvious way then the two models are bi\"{i}nterpretable via the identity map, and vice versa. 
The theory $T'$ also admits quantifier elimination, but it is not universally axiomatizable in $\lan{T'}{}{}$. To see this, suppose for contradiction that it is. Then, by the argument in the proof of \cite[Corollary~2.15]{DMM94}, every definable function $f$ in a model of $T'$, in particular, multiplicative inverse, is given piecewise by terms. But all terms define total continuous functions. This means that, by $o$\nobreakdash-minimality, multiplicative inverse near $0$ is given by two total continuous functions, which is absurd. Now, we may and do extend $T'$ by definitions so that it is universally axiomatizable in the resulting language. Thus every substructure of a model of $T'$ is actually a model of $T'$ and, as such, is an elementary substructure. In fact, since $T'$ has definable Skolem functions, we shall abuse notation slightly and redefine $T$ to be $T'^{\textup{df}}$, where $T'^{\textup{df}}$ is in effect a Skolemization of $T'$ (see \cite[\S\S~2.3--2.4]{DriesLew95} for further explanation). Note that the language of $T$ contains additional function symbols only and some of them must be interpreted in all models of $T$ as discontinuous functions for the reason given above. To summarize, the main point is that $T$ admits quantifier elimination, is universally axiomatizable, is a definitional extension of $T'$, and all the primitives of $\lan{T'}{}{}$, except $\leq$, define continuous functions in all the models of $T'$. The syntactical maneuver of passing through $T'$ just described will only be used in Theorem~\ref{thm:complete} below, and it is not really necessary if one works with a concrete $o$\nobreakdash-minimal extension of $\usub{\textup{RCF}}{}$ such as $\usub{T}{an}$ defined in \cite{DMM94} (also see Example~\ref{exam:RtQ}). \end{rem} We shall work with a sufficiently saturated model $\mdl R \coloneqq (R, <, \ldots)$ of $T$ unless suggested otherwise. Its field of exponents is denoted by $\mathds{K}$. 
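Definition~\ref{defn.powBd} can be illustrated numerically over the real field (an illustration only; definability is of course not a computational notion). Power functions are multiplicative endomorphisms $x \longmapsto x^\lambda$, and a function such as $x \longmapsto \sqrt{x^2 + 1}$, definable in any model of $\usub{\textup{RCF}}{}$, is eventually dominated by the power function $x \longmapsto x^{3/2}$; the exponent $3/2$ is chosen only for illustration, any exponent greater than $1$ would do.

```python
import math

# A power function is an endomorphism of the multiplicative group of
# positive elements; over the reals it is x -> x**lam for a fixed lam.
def power(lam):
    return lambda x: x ** lam

g = power(1.5)

# Endomorphism property: g(x*y) = g(x)*g(y) for positive x, y.
for x, y in [(2.0, 3.0), (0.5, 7.0), (10.0, 0.1)]:
    assert math.isclose(g(x * y), g(x) * g(y), rel_tol=1e-12)

# Eventual domination: |f(x)| <= g(x) for all sufficiently large x
# (here x^3 >= x^2 + 1 holds for all x >= 2, so each check succeeds).
f = lambda x: math.sqrt(x * x + 1)
assert all(abs(f(x)) <= g(x) for x in [2.0, 10.0, 100.0, 1e6])
```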
\begin{nota}[Coordinate projections]\label{indexing} For each $n \in \mathds{N}$, let $[n]$ abbreviate the set $\{1, \ldots, n\}$. For any $E \subseteq [n]$, we write $\pr_E(A)$ for the projection of $A$ into the coordinates contained in $E$. In practice, it is often more convenient to use simple standard descriptions as subscripts. For example, if $E$ is a singleton $\{i\}$ then we shall always write $E$ as $i$ and $\tilde E \coloneqq [n] \smallsetminus E$ as $\tilde i$; similarly, if $E = [i]$, $\{k: i \leq k \leq j\}$, $\{k: i < k < j\}$, $\{\text{all the coordinates in the sort $S$}\}$, etc., then we may write $\pr_{\leq i}$, $\pr_{[i, j]}$, $\pr_{(i, j)}$, $\pr_{S}$, etc.; in particular, $A_{\VF}$ and $A_{\RV}$ stand for the projections of $A$ into the $\VF$-sort and $\RV$-sort coordinates, respectively. Unless otherwise specified, by writing $a \in A$ we shall mean that $a$ is a finite tuple of elements (or ``points'') of $A$, whose length, denoted by $\lh(a)$, is not always indicated. If $a = (a_1, \ldots, a_n)$ then, for all $1 \leq i < j \leq n$, following the notational scheme above, $a_i$, $a_{\tilde i}$, $a_{\leq i}$, $a_{[i, j]}$, $a_{[i, j)}$, etc., are shorthand for the corresponding subtuples of $a$. We shall write $\{t\} \times A$, $\{t\} \cup A$, $A \smallsetminus \{t\}$, etc., simply as $t \times A$, $t \cup A$, $A \smallsetminus t$, etc., when no confusion can arise. For $a \in \pr_{\tilde E} (A)$, the fiber $\{b : ( b, a) \in A \} \subseteq \pr_E(A)$ over $a$ is denoted by $A_a$. Note that, in the discussion below, the distinction between the two sets $A_a$ and $A_a \times a$ is usually immaterial and hence they may and often shall be tacitly identified. In particular, given a function $f : A \longrightarrow B$ and $b \in B$, the pullback $f^{-1}(b)$ is sometimes written as $A_b$ as well. This is a special case since functions are identified with their graphs. 
This notational scheme is especially useful when the function $f$ has been clearly understood in the context and hence there is no need to spell it out all the time. \end{nota} \begin{nota}[Subsets and substructures]\label{nota:sub} By a definable set we mean a definable subset in $\mdl R$, and by a subset in $\mdl R$ we mean a subset in $R$, by which we mean a subset of $R^n$ for some $n$, unless indicated otherwise. Similarly for other structures or sets in place of $\mdl R$ that have been clearly understood in the context. Often the ambient total ordering in $\mdl R$ induces a total ordering on a definable set $S$ of interest with a distinguished element $e$. Then it makes sense to speak of the positive and the negative parts of $S$ relative to $e$, which are denoted by $S^+$ and $S^-$, respectively. Also write $S^+_e$ for $S^+ \cup e$, etc. There may also be a natural absolute value map $S \longrightarrow S^+_e$, which is always denoted by $| \cdot |$; typically $S$ is a sort and $S^\times \coloneqq S \smallsetminus e$ is equipped with a (multiplicatively written) group structure, in which case the absolute value map is usually given as a component of a (splitting) short exact sequence \[ \pm 1 \longrightarrow S^\times \longrightarrow S^+ \quad \text{or} \quad S^+ \longrightarrow S^\times \longrightarrow \pm 1. \] Note that $e$ cannot be the identity element of $S^\times$. We will also write $A < e$ to mean that $A \subseteq S$ and $a < e$ for all $a \in A$, etc. If $\phi(x)$ is a formula then $\phi(x) < e$ denotes the subset of $S$ defined by the formula $\phi(x) \wedge x < e$. Substructures of $\mdl R$ are written as $\mdl S \subseteq \mdl R$. As has been pointed out above, all substructures $\mdl S$ of $\mdl R$ are actually elementary substructures. If $A \subseteq \mdl R^n$ is a set definable with parameters coming from $\mdl S$ then $A(\mdl S)$ is the subset in $\mdl S$ defined by the same formula, that is, $A(\mdl S) = A \cap \mdl S^n$. 
Given a substructure $\mdl S \subseteq \mdl R$ and a set $A \subseteq \mdl R$, the substructure generated by $A$ over $\mdl S$ is denoted by $\langle \mdl S , A \rangle$ or $\mdl S \langle A \rangle$. Clearly $\langle \mdl S , A \rangle$ is the definable closure of $A$ over $\mdl S$. Later, we will expand $\mdl R$ and introduce more sorts and structures. In that situation we will write $\mdl S \langle A \rangle_T$ or $\langle \mdl S , A \rangle_T$ to emphasize that this is the $\lan{T}{}{}$\nobreakdash-substructure generated by $A$ over the $\lan{T}{}{}$\nobreakdash-reduct of $\mdl S$. \end{nota} \begin{nota}[Topology] The default topology on $\mdl R$ is of course the order topology and the default topology on $\mdl R^n$ is the corresponding product topology. Given a subset $S$ in $\mdl R$, we write $\cl(S)$ for its topological closure, $\ito(S)$ for its interior, and $\partial S \coloneqq \cl(S) \setminus S$ for its frontier (not to be confused with the boundary $\cl(S) \setminus \ito(S)$ of $S$, which is also sometimes denoted by $\partial S$). The same topological discourse applies to a definable set if the ambient total ordering of $\mdl R$ induces a total ordering on it. \end{nota} \subsection{The theory $\TCVF$}\label{defn:lan} The language $\lan{T}{RV}{}$ for $o$\nobreakdash-minimal valued fields --- the theory $T$ may vary, of course --- has the following sorts and symbols: \begin{itemize} \item A sort $\VF$, which uses the language $\lan{T}{}{}$. \item A sort $\RV$, whose basic language is that of groups, written multiplicatively as $\{1, \times, {^{-1}} \}$, together with a constant symbol $0_{\RV}$ (for notational ease, henceforth this will be written simply as $0$). \item A unary predicate $\K^{\times}$ in the $\RV$-sort. The union $\K^{\times} \cup \{0\}$ is denoted by $\K$, which is more conveniently thought of as a sort and, as such, employs the language $\lan{T}{}{}$ as well, where the constant symbols $0$, $1$ are shared with the $\RV$-sort. 
\item A binary relation symbol $\leq$ in the $\RV$-sort. \item A function symbol $\rv : \VF \longrightarrow \RV_0$. \end{itemize} We shall write $\RV$ to mean the $\RV$-sort without the element $0$, and $\RV_0$ otherwise, etc., although quite often the difference is immaterial. \begin{defn}\label{defn:tcf} The axioms of the theory $\usub{\textup{TCVF}}{}$ of \emph{$T$-convex valued fields} in the language $\lan{T}{RV}{}$ are presented here informally. Many of them are clearly redundant as axioms, and we try to phrase some of these in such a way as to indicate so. The list also contains additional notation that will be used throughout the paper. \begin{enumerate}[({Ax.} 1)] \item The $\lan{T}{}{}$\nobreakdash-reduct of the $\VF$-sort is a model of $T$. Recall from Notation~\ref{nota:sub} that $\VF^+ \subseteq \VF$ is the subset of positive elements and $\VF^- \subseteq \VF$ the subset of negative elements. \item \label{ax:rv} The quadruple $(\RV, 1, \times, {^{-1}})$ forms an abelian group. Inversion is augmented by $0^{-1} = 0$. Multiplication is augmented by $t \times 0 = 0 \times t = 0$ for all $t \in \RV$. The map $\rv : \VF^{\times} \longrightarrow \RV$ is a surjective group homomorphism augmented by $\rv(0) = 0$. \item The binary relation $\leq$ is a total ordering on $\RV_{0}$ such that, for all $t, t' \in \RV_{0}$, $t < t'$ if and only if $\rv^{-1}(t) < \rv^{-1}(t')$. The distinguished element $0 \in \RV_0$ is more aptly referred to as the \emph{middle element} of $\RV_{0}$. Clearly $\RV^+ = \rv(\VF^+)$ and $\RV^- = \rv(\VF^-)$ (see Notation~\ref{nota:sub}). It follows from (Ax.~\ref{ax:rv}) that $\RV^+$ is an ordered convex subgroup of $\RV$ and the quotient group $\RV / \RV^+$ is isomorphic to the group $\pm 1 \coloneqq \rv(\pm 1)$. This gives rise to an absolute value map on $\RV_{0}$, which is compatible with the absolute value map on $\VF$ in the sense that $\rv(\abs{a}) = \abs{\rv(a)}$ for all $a \in \VF$. 
\item \label{ax:K} The set $\K^{\times}$ forms a \emph{nontrivial} subgroup of $\RV$ and the set $\K^{+} = \K^{\times} \cap \RV^+$ forms a convex subgroup of $\RV^+$. The quotient groups $\RV / \K^+$, $\RV^{+} / \K^+$ are denoted by $\Gamma$, $\Gamma^+$ and the corresponding quotient maps by $\vrv$, $\vrv^+$. Also set $\vrv(0) = 0 \in \Gamma_0$. Since $\K^+$ is convex, $\Gamma^+$ is an ordered group, where the induced ordering is also denoted by $\leq$, and the absolute value map on $\RV_0$ descends to $\Gamma_0$ in the obvious sense. \item Let $\leq^{-1}$ be the ordering on $\Gamma^+_0$ inverse to $\leq$ and $\absG_{\infty} \coloneqq (\Gamma^+_0, +, \leq^{-1})$ the resulting \emph{additively} written ordered abelian group with the top element $\infty$. The composition \[ \abval : \VF \to^{\rv} \RV_0 \to^{\abs{ \cdot}} \RV^+_0 \to^{\vrv^+} \Gamma^+_0 \] is a (nontrivial) valuation with respect to the ordering $\leq^{-1}$, with valuation ring $\OO = \rv^{-1}(\RV^{\circ}_0)$ and maximal ideal $\MM = \rv^{-1}(\RV^{\circ\circ}_0)$, where, denoting $\vrv^+ \circ \abs{ \cdot}$ by $\abvrv$, \begin{align*} \RV^{\circ}_0 &= \{t \in \RV: 1 \leq^{-1} \abvrv(t) \}, \\ \RV^{\circ \circ}_0 &= \{t \in \RV: 1 <^{-1} \abvrv(t)\}. \end{align*} \item \label{ax:t:model} The $\K$-sort (recall that $\K$ is informally referred to as a sort) is a model of $T$ and, as a field, is the residue field of the valued field $(\VF, \OO)$. The natural quotient map $\OO \longrightarrow \K$ is denoted by $\upharpoonright$. For notational convenience, we extend the domain of $\upharpoonright$ to $\VF$ by setting $\upharpoonright(a) = 0$ for all $a \in \VF \smallsetminus \OO$. The following function is also denoted by $\upharpoonright$: \[ \RV \to^{\rv^{-1}} \VF \to^{\upharpoonright} \K. \] \item \label{ax:tcon} ($T$-convexity). Let $f : \VF \longrightarrow \VF$ be a continuous function defined by an $\lan{T}{}{}$\nobreakdash-formula. Then $f(\OO) \subseteq \OO$. 
\item Suppose that $\phi$ is an $\lan{T}{}{}$\nobreakdash-formula that defines a continuous function $f : \VF^m \longrightarrow \VF$. Then $\phi$ also defines a continuous function $\ol f : \K^m \longrightarrow \K$. Moreover, for all $a \in \OO^m$, we have $\upharpoonright(f(a)) = \ol f(\upharpoonright(a))$. \label{ax:match} \end{enumerate} \end{defn} By (Ax.~\ref{ax:t:model}) and Remark~\ref{rem:cont}, (Ax.~\ref{ax:match}) can be simplified as: for all function symbols $f$ of $\lan{T'}{}{}$ and all $a \in \OO^m$, $\upharpoonright(f(a)) = \ol f(\upharpoonright(a))$. Then it is routine to check that, except the surjectivity of the map $\rv$ and the nontriviality of the value group $\abs{\Gamma}$ (this is an existential axiom and is actually expressed in (Ax.~\ref{ax:K})), $\TCVF$ is also universally axiomatized. Let $\mdl S$ be a substructure of a model $\mdl M$ of $\TCVF$. We say that $\mdl S$ is \emph{$\VF$-generated} if $\RV_0(\mdl S) = \rv(\VF(\mdl S))$. Thus $\mdl S$ is indeed a model of $\TCVF$ if it is $\VF$-generated and $\Gamma(\mdl S)$ is nontrivial. At any rate, $\VF(\mdl S)$, $\upharpoonright(\VF(\mdl S))$, and $\K(\mdl S)$ are all models of $T$. For $A \subseteq \VF(\mdl M) \cup \RV(\mdl M)$, the substructure generated by $A$ over $\mdl S$ is denoted by $\langle \mdl S , A \rangle$ or $\mdl S \langle A \rangle$. Clearly $\VF(\langle \mdl S , A \rangle) = \langle \mdl S , A \rangle_T$ (see Notation~\ref{nota:sub}). \begin{rem}\label{signed:Gam} Although the behavior of the valuation map $\abval$ in the traditional sense is coded in $\TCVF$, we shall work with the \emph{signed} valuation map, which is more natural in the present setting: \[ \vv : \VF \to^{\rv} \RVV \to^{\vrv} \GAA, \] where the ordering $\leq$ on the \emph{signed value group} $\Gamma_0$ no longer needs to be inverted. 
It is also tempting to use the ordering $\leq$ in the \emph{value group} $\abs{\Gamma}_{\infty}$ instead of its inverse, but this makes citing results in the literature a bit awkward. We shall actually abuse the notation and denote the ordering $\leq^{-1}$ in $\abs{\Gamma}_{\infty}$ also by $\leq$; this should not cause confusion since the ordering on $\Gamma_0$ will rarely be used (we will indicate so explicitly when it is used). The axioms above guarantee that the ordered abelian group ${\GAA} /{\pm 1}$ (here $\vv(\pm 1)$ is just written as $\pm 1$) with the bottom element $0$ is isomorphic to $\abs{\Gamma}_{\infty}$ if either one of the orderings is inverted. So $\abval$ may be thought of as the composition $\vv/{\pm 1} : \VF \longrightarrow {\GAA} /{\pm 1}$. \end{rem} \begin{conv}\label{how:gam} Semantically we shall treat the value group $\GAA$ as an imaginary sort. However, syntactically any reference to $\GAA$ may be eliminated in the usual way and we can still work with $\lan{T}{RV}{}$-formulas for the same purpose. \end{conv} \begin{exam}\label{exam:RtQ} Here our main reference is \cite{DMM94}. A restricted analytic function $\mathds{R}^m \longrightarrow \mathds{R}$ is given on the cube $[-1, 1]^n$ by a power series in $n$ variables over $\mathds{R}$ that converges in a neighborhood of $[-1, 1]^n$, and $0$ elsewhere. Let $\lan{}{an}{}$ be the language that extends the language of ordered rings with a new function symbol for each restricted analytic function, $\usub{\mathds{R}}{an}$ the real field with its natural $\lan{}{an}{}$-structure, and $\usub{T}{an}$ the $\lan{}{an}{}$-theory of $\usub{\mathds{R}}{an}$. Obviously $\usub{T}{an}$ is polynomially bounded. More importantly, it is universally axiomatizable and admits quantifier elimination in a slightly enlarged language, and hence there is no longer any need to extend $\usub{T}{an}$ by definitions as we have arranged in \S~\ref{subs:nota}. 
(This language is of course more natural than a brute force definitional extension that achieves the same thing, but we do not really care what it is). A generalized power series with coefficients in the field $\mathds{R}$ and exponents in the additive group $\mathds{Q}$ is a formal sum $x = \sum_{q \in \mathds{Q}} a_q t^q$ such that its support $\supp(x) = \{q \in \mathds{Q} : a_q \neq 0\}$ is well-ordered. Let $\mathds{R} \dpar{ t^{\mathds{Q}} }$, $K$ for short, be the set of all such series. Addition and multiplication in $K$ are defined in the expected way, and this makes $K$ a field, generally referred to as a Hahn field. We consider $\mathds{R}$ as a subfield of $K$ via the map $a \longmapsto at^0$. The map $ K^\times \longrightarrow \mathds{Q}$ given by $x \longmapsto \min\supp(x)$ is indeed a valuation. Its valuation ring $\mathds{R} \llbracket t^{\mathds{Q}} \rrbracket$, $\OO$ for short, consists of those series $x$ with $\min\supp(x) \geq 0$ and its maximal ideal $\MM$ of those series $x$ with $\min\supp(x) > 0$. Its residue field admits a section onto $\mathds{R}$ and hence is isomorphic to $\mathds{R}$. It is well-known that $(K, \OO)$ is a henselian valued field and $K$ is real closed. Restricted analytic functions may be naturally interpreted in $K$. According to \cite[Corollary~2.11]{DMM94}, with its naturally induced ordering, $K$ is indeed an elementary extension of $\usub{\mathds{R}}{an}$ and hence a model of $\usub{T}{an}$. We turn $K$ into a model of $\TCVF$, with signed valuation, as follows. First of all, set $\RV = K^{\times} / (1 + \MM)$. Let $\rv : K^\times \longrightarrow \RV$ be the quotient map. The leading term of a series in $K^\times$ is its first term with nonzero coefficient. It is easy to see that two series $x$, $y$ have the same leading term if and only if $\rv(x) = \rv(y)$ and hence $\RV$ is isomorphic to the subgroup of $K^\times$ consisting of all the leading terms. 
There is a natural isomorphism $a_qt^q \longmapsto (q, a_q)$ from this latter group of leading terms to the group $\mathds{Q} \oplus \mathds{R}^\times$, through which we may identify $\RV$ with $\mathds{Q} \oplus \mathds{R}^\times$. Since $1 + \MM$ is a convex subset of $K^\times$, the total ordering on $K^\times$ induces a total ordering $\leq$ on $\RV$. This ordering $\leq$ is the same as the lexicographic ordering on $\mathds{Q} \oplus \mathds{R}^+$ or $\mathds{Q} \oplus \mathds{R}^-$ via the identification just made. Let $\mathds{R}^{+}$ be the multiplicative group of the positive reals and $\RV^{+} = \mathds{Q} \oplus\mathds{R}^+ $. Observe that $\mathds{R}^{+}$ is a convex subgroup of $\RV$. The quotient group $\Gamma \coloneqq (\mathds{Q} \oplus \mathds{R}^\times) / \mathds{R}^{+}$ is naturally isomorphic to the subgroup $\pm e^{\mathds{Q}} \coloneqq e^{\mathds{Q}} \cup - e^{\mathds{Q}}$ of $\mathds{R}^\times$ so that $\mathds{Q}$ is identified with $e^\mathds{Q}$ via the map $q \longmapsto e^q$. Adding a new symbol $\infty$ to $\RV$, now it is routine to interpret $K$ as an $\lan{T}{RV}{}$-structure, with $T = \usub{T}{an}$ and the signed valuation given by \[ x \longmapsto \rv(x) = (q, a_q) \longmapsto \sgn(a_q)e^{-q}, \] where $\sgn(a_q)$ is the sign of $a_q$. It is also a model of $\TCVF$: all the axioms are more or less immediately derivable from the valued field structure, except (Ax.~\ref{ax:tcon}), which holds since $\usub{T}{an}$ is polynomially bounded, and (Ax.~\ref{ax:match}), which follows from \cite[Proposition~2.20]{DriesLew95}. \end{exam} \subsection{Quantifier elimination} Recall from \S~\ref{intro} that $T_{\textup{convex}}$ is the $\lan{}{convex}{}$-theory of pairs $(\mdl R, \OO)$ with $\mdl R \models T$ and $\OO$ a \emph{proper} $T$-convex subring. We may and shall view $T_{\textup{convex}}$ as the $\lan{}{convex}{}$-reduct of $\TCVF$. 
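Example~\ref{exam:RtQ} can be made quite concrete for series of finite support. The Python sketch below is an illustration only (the representation of a series as a finite dictionary of exponent-coefficient pairs is our own simplification); it implements the leading term, the identification $\RV \cong \mathds{Q} \oplus \mathds{R}^\times$, and the signed valuation $x \longmapsto \sgn(a_q)e^{-q}$:

```python
import math
from fractions import Fraction

# A finite-support generalized power series sum of a_q * t^q is
# represented as a dict mapping the exponent q to the coefficient a_q.

def leading(x):
    """Leading term of a nonzero series: (min supp(x), its coefficient).
    Via the identification RV = Q (+) R^x, this pair is rv(x)."""
    q = min(e for e, a in x.items() if a != 0)
    return (q, x[q])

def mul(x, y):
    """Multiplication of finite-support series."""
    z = {}
    for qx, ax in x.items():
        for qy, ay in y.items():
            z[qx + qy] = z.get(qx + qy, 0) + ax * ay
    return z

def signed_val(x):
    """Signed valuation x -> sgn(a_q) * e^(-q)."""
    q, a = leading(x)
    return math.copysign(math.exp(-q), a)

x = {Fraction(1, 2): 3.0, Fraction(2): -1.0}   # 3 t^(1/2) - t^2
y = {Fraction(1, 2): 3.0, Fraction(5): 4.0}    # 3 t^(1/2) + 4 t^5

assert leading(x) == leading(y)                # same leading term: rv(x) = rv(y)
assert leading(mul(x, y))[0] == Fraction(1)    # min supp is additive: 1/2 + 1/2
assert signed_val(x) == signed_val(y)          # signed valuation factors through rv
```

In particular, the last assertion reflects the fact that the map $\vv$ is constant on the fibers of $\rv$, so that the signed valuation descends to $\RV$.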
\begin{thm}\label{tcon:qe} The theory $T_{\textup{convex}}$ admits quantifier elimination and is complete. \end{thm} \begin{proof} See \cite[Theorem~3.10, Corollary~3.13]{DriesLew95}. \end{proof} That $\OO$ is a proper subring cannot be expressed by a universal axiom. Of course, we can always add a new constant symbol $\imath$ to $\lan{}{convex}{}$ and an axiom ``$\imath$ is in the maximal ideal'' to $T_{\textup{convex}}$ so that $T_{\textup{convex}}$ may indeed be formulated as a universal theory. In that case, every substructure of a model of $T_{\textup{convex}}$ is a model of $T_{\textup{convex}}$ and, moreover, $T_{\textup{convex}}$ has definable Skolem functions given by $\lan{T}{}{}(\imath)$-terms (this is an easy consequence of our assumption on $T$, quantifier elimination in $T_{\textup{convex}}$, and universality of $T_{\textup{convex}}$, as in \cite[Corollary~2.15]{DMM94}). We shall not implement this maneuver formally, even though the resulting properties may come in handy occasionally. \begin{rem}\label{res:exp} According to \cite[Remark~2.16]{DriesLew95}, there is a natural way to expand the residue field $\K$ of the $T_{\textup{convex}}$-model $(\mdl R, \OO)$ to a $T$\nobreakdash-model as follows. Let $\mdl R' \subseteq \OO$ be a maximal subfield with respect to the property of being an elementary $\lan{T}{}{}$\nobreakdash-substructure of $\mdl R$. It follows that $\mdl R'$ is isomorphic to $\K$ as fields via the residue map $\upharpoonright$. Then we can expand $\K$ to a $T$\nobreakdash-model so that the restriction of $\upharpoonright$ to $\mdl R'$ becomes an isomorphism of $\lan{T}{}{}$\nobreakdash-structures. This expansion procedure does not depend on the choice of $\mdl R'$. \end{rem} \begin{prop}\label{uni:exp} Every $T_{\textup{convex}}$-model expands to a unique $\TCVF$-model up to isomorphism. \end{prop} \begin{proof} Let $(\mdl R, \OO)$ be a $T_{\textup{convex}}$-model. 
It is enough to show that there is a canonical $\TCVF$-model expansion $(\mdl R, \RVV(\mdl R))$ of $(\mdl R, \OO)$, where $\mdl R$ is the $\VF$-sort, such that any other such expansion $(\mdl R, \RVV)$ is isomorphic to it. This canonical expansion is constructed as follows. Let $\RV(\mdl R)$ be the quotient group $\mdl R^\times / (1 + \MM)$ and $\rv : \mdl R^\times \longrightarrow \RV(\mdl R)$ the quotient map. As in Example~\ref{exam:RtQ}, it is routine to convert the pair $(\mdl R, \RVV(\mdl R))$ into an $\lan{T}{RV}{}$-structure and check that it satisfies all the axioms in Definition~\ref{defn:tcf}, where (Ax.~\ref{ax:t:model}) is implied by the construction just described above. We shall refer to the obvious bijection between $(\mdl R, \RVV(\mdl R))$ and $(\mdl R, \RVV)$ as the identity map. This map commutes with all the primitives of $\lan{T}{RV}{}$ except, possibly, those in the $\K$-sort. This is where the syntactical maneuver in Remark~\ref{rem:cont} comes in. Recall that all the functional primitives of $\lan{T'}{}{}$ define continuous functions in all the models of $T'$ and $T$ is a definitional extension of $T'$. It follows from (Ax.~\ref{ax:match}) that the identity map indeed induces an $\lan{T}{}{}$\nobreakdash-isomorphism between the two $\K$-sorts. Thus the two expansions are isomorphic. \end{proof} \begin{thm}\label{thm:complete} The theory $\TCVF$ is complete. \end{thm} \begin{proof} By Proposition~\ref{uni:exp}, every embedding between two $T_{\textup{convex}}$-models, which is necessarily elementary, expands uniquely to an $\lan{T}{RV}{}$-embedding between two $\TCVF$-models. This latter embedding is indeed elementary since $\TCVF$ admits quantifier elimination, which will be shown below. It follows that the theory $\TCVF$ is complete. But here we do not really need to go through that route. 
We can simply observe that, by the proof of Proposition~\ref{uni:exp}, $T_{\textup{convex}}$ and $\TCVF$ are equivalent in the sense mentioned in Remark~\ref{rem:cont}, and hence they are both complete if one of them is. \end{proof} \begin{conv} From now on, we shall work in the model $\mmdl$ of $\TCVF$, which is the unique $\lan{T}{RV}{}$-expansion of the sufficiently saturated $T_{\textup{convex}}$-model $(\mdl R, \OO)$. We shall write $\VF(\mmdl)$ simply as $\VF$ or $\mdl R$, depending on the context, $\RVV(\mmdl)$ as $\RV_0$, etc. A subset in $\mmdl$ may simply be referred to as a set. When we work in the $\lan{T}{}{}$\nobreakdash-reduct $\mdl R$ of $\mmdl$ instead of $\mmdl$, or just wish to emphasize that a set is definable in $\mdl R$ instead of $\mmdl$, the symbol ``$\lan{T}{}{}$'' or ``$T$'' will be inserted into the corresponding places in the terminology. \end{conv} Let $\mdl S \subseteq \mdl R$ be a small substructure and $a, b \in \mdl R \smallsetminus \mdl S$ such that they make the same cut in (the ordering of) $\mdl S$. By $o$\nobreakdash-minimality, there is an automorphism $\sigma$ of $\mdl R$ over $\mdl S$ such that $\sigma(a) = b$. Recall that the field of exponents of $\mdl R$ is denoted by $\mathds{K}$. \begin{thm}\label{theos:qe} The theory $\TCVF$ admits quantifier elimination. \end{thm} \begin{proof} We shall run the usual Shoenfield test for quantifier elimination. To that end, let $\mdl M$ be a model of $\TCVF$, $\mdl S$ a substructure of $\mdl M$, and $\sigma : \mdl S \longrightarrow \mmdl$ an embedding. All we need to do is to extend $\sigma$ to an embedding $\mdl M \longrightarrow \mmdl$. The construction is more or less a variation of that in the proof of \cite[Theorem~3.10]{Yin:QE:ACVF:min}. The strategy is to reduce the situation to Theorem~\ref{tcon:qe}. 
In the process of doing so, instead of the dimension inequality of the general theory of valued fields, the Wilkie inequality \cite[Corollary~5.6]{Dries:tcon:97} is used (see \cite[\S~3.2]{DriesLew95} for the notion of ranks of $T$-models). Note that, to use this inequality, we need to assume that $T$ is power-bounded. Let $\mdl S_* = \langle \VF(\mdl S) \rangle$ and $t \in \RV(\mdl S) \smallsetminus \RV(\mdl S_*)$. Note that if such a $t$ does not exist then we have $\mdl S = \mdl S_*$ and its $\lan{}{convex}{}$-reduct is an $\lan{}{convex}{}$-substructure of the $\lan{}{convex}{}$-reduct of $\mdl M$, and hence an embedding as desired can be easily obtained by applying Theorem~\ref{tcon:qe} and Proposition~\ref{uni:exp}. Let $a \in \VF(\mdl M)$ with $\rv(a) = t$ and $b \in \VF$ with $\rv(b) = \sigma(t)$. Observe that, according to $\sigma$, $a$ and $b$ must make the same cut in $\VF(\mdl S)$ and $\VF(\sigma(\mdl S))$, respectively, and hence there is an $\lan{T}{}{}$\nobreakdash-isomorphism \[ \bar \sigma : \langle \mdl S_*, a \rangle_T \longrightarrow \langle \sigma(\mdl S_*), b \rangle_T \] with $\bar \sigma(a) = b$ and $\bar \sigma \upharpoonright \VF(\mdl S) = \sigma \upharpoonright \VF(\mdl S)$. We shall show that $\bar \sigma$ expands to an isomorphism between $\langle \mdl S_*, a \rangle$ and $\langle \sigma(\mdl S_*), b \rangle$ that is compatible with $\sigma$. Case (1): There is an $a_1 \in \langle \mdl S_*, a \rangle_T$ such that \[ \abs{\OO(\mdl S_*) } < a_1 < \abs{ \VF(\mdl S_*) \smallsetminus \OO(\mdl S_*) }. \] Set $\abs{\Gamma}(\mdl S_*) = G$. Since $\OO(\langle\mdl S_*, a \rangle)$ is $T$-convex, by \cite[Lemma~5.4]{Dries:tcon:97} and \cite[Remark~3.8]{DriesLew95}, \begin{itemize} \item either $a_1 \in \OO(\langle\mdl S_*, a \rangle)$ and $\absG(\langle \mdl S_*, a \rangle) = G$ or \item $a_1 \notin \OO(\langle\mdl S_*, a \rangle)$ and $\absG(\langle \mdl S_*, a \rangle) \cong G \oplus \mathds{K}$. 
\end{itemize} By the Wilkie inequality, if \[ \absG(\langle \mdl S_*, a \rangle) \cong G \oplus \mathds{K} \] then $\K(\langle \mdl S_*, a \rangle) = \K(\mdl S_*)$ and hence $\abvrv(t) \notin G$, which implies $\abvrv(\sigma(t)) \notin \sigma(G)$; conversely, if \[ \absG(\langle \sigma(\mdl S_*), b \rangle) \cong \sigma(G) \oplus \mathds{K} \] then $\abvrv(t) \notin G$. Therefore \[ \absG(\langle \mdl S_*, a \rangle) \cong G \oplus \mathds{K} \quad \text{if and only if} \quad \absG(\langle \sigma(\mdl S_*), b \rangle) \cong \sigma(G) \oplus \mathds{K}, \] which, by \cite[Remark~3.8]{DriesLew95}, is equivalent to saying that $a_1 \in \OO(\langle\mdl S_*, a \rangle)$ if and only if $\bar \sigma(a_1) \in \OO(\langle \sigma(\mdl S_*), b \rangle)$. Subcase (1a): $a_1 \in \OO(\langle \mdl S_*, a \rangle)$. Subcase~(1a) of the proof of \cite[Theorem~3.10]{DriesLew95} shows that $\bar \sigma$ expands to an $\lan{}{convex}{}$-isomorphism and hence to an $\lan{T}{RV}{}$-isomorphism, which is also denoted by $\bar \sigma$. Since $\absG(\langle \mdl S_*, a \rangle) = G$, we may assume $t \in \K(\mdl M)$. By the Wilkie inequality, $\K(\langle \mdl S_*, a \rangle)$ is precisely the $T$-model generated by $t$ over $\K(\mdl S_*)$. So $\RV(\langle \mdl S_*, a \rangle) = \langle \RV(\mdl S_*), t \rangle$ and \[ \bar \sigma \upharpoonright \RV(\langle \mdl S_*, a \rangle) = \sigma \upharpoonright \RV(\langle \mdl S_*, a \rangle). \] Subcase (1b): $a_1 \notin \OO(\langle \mdl S_*, a \rangle)$. As above, Subcase~(1b) of the proof of \cite[Theorem~3.10]{DriesLew95} shows that $\bar \sigma$ expands to an $\lan{T}{RV}{}$-isomorphism and this time $\K(\langle \mdl S_*, a \rangle) = \K(\mdl S_*)$. Again it is clear that \[ \bar \sigma \upharpoonright \RV(\langle \mdl S_*, a \rangle) = \sigma \upharpoonright \RV(\langle \mdl S_*, a \rangle). \] Case (2): Case (1) fails.
Then there is also no $b_1 \in \langle \sigma(\mdl S_*), b \rangle_T$ such that \[ \abs{ \OO(\sigma(\mdl S_*)) } < b_1 < \abs{ \VF(\sigma(\mdl S_*)) \smallsetminus \OO(\sigma(\mdl S_*)) }. \] Using Case~(2) of the proof of \cite[Theorem~3.10]{DriesLew95}, compatibility between $\bar \sigma$ and $\sigma$ may be deduced as in Case (1) above. Iterating this procedure, we may assume $\mdl S = \mdl S_*$. The theorem follows. \end{proof} \begin{cor} For every set $A \subseteq \VF$, $\langle A \rangle$ is an elementary substructure of $\mmdl$ if and only if $\Gamma(\langle A \rangle)$ is nontrivial, that is, $\Gamma(\langle A \rangle) \neq \pm 1$. \end{cor} \begin{cor}\label{trans:VF} Every parametrically $\lan{T}{RV}{}$-definable subset of $\VF^n$ is parametrically $\lan{}{convex}{}$-definable. \end{cor} This corollary already follows from Proposition~\ref{uni:exp}. Anyway, it enables us to transfer results in the theory of $T$-convex valued fields \cite{DriesLew95, Dries:tcon:97} into our setting, which we shall do without further explanation. We include here a couple of generalities on immediate isomorphisms. Their proofs are built on that of Theorem~\ref{theos:qe} and hence we shall skip some details. \begin{defn} Let $\mdl M$, $\mdl N$ be substructures and $\sigma : \mdl M \longrightarrow \mdl N$ an $\lan{T}{RV}{}$-isomorphism. We say that $\sigma$ is an \emph{immediate isomorphism} if $\sigma(t) = t$ for all $t \in \RV(\mdl M)$. \end{defn} Note that if $\sigma$ is an immediate isomorphism then, \emph{ex post facto}, $\RV(\mdl M) = \RV(\mdl N)$. \begin{lem}\label{imm:ext} Every immediate isomorphism $\sigma : \mdl M \longrightarrow \mdl N$ can be extended to an immediate automorphism of $\mmdl$. \end{lem} \begin{proof} Let $\mdl M_* = \langle \VF(\mdl M) \rangle$ and $\mdl N_* = \langle \VF(\mdl N) \rangle$. Let $t \notin \RV(\mdl M_*)$ and $a \in \rv^{-1}(t)$. Since $\sigma$ is immediate, $a$ makes the same cut in $\VF(\mdl M)$ and $\VF(\mdl N)$ according to $\sigma$.
By the proof of Theorem~\ref{theos:qe}, $\sigma$ may be extended to an immediate isomorphism $\langle \mdl M, a \rangle \longrightarrow \langle \mdl N, a \rangle$. Iterating this procedure, we reach a stage where the assertion simply follows from Theorem~\ref{tcon:qe}. \end{proof} We have something much stronger. For that, the following crucial property is needed. \begin{prop}[Valuation property]\label{val:prop} Let $\mdl M$ be a $\VF$-generated substructure and $a \in \VF$. Suppose that $\Gamma(\langle \mdl M, a \rangle) \neq \Gamma(\mdl M)$. Then there is a $d \in \VF(\mdl M)$ such that $\vv(a - d) \notin \vv(\mdl M)$. \end{prop} \begin{proof} For the polynomially bounded case, see~\cite[Proposition~9.2]{DriesSpei:2000} and the remark thereafter. Apparently this is established in full generality (power-bounded) in \cite{tyne}, which is in a repository that is password-protected. \end{proof} \begin{lem}\label{imm:iso} Let $\sigma : \mdl M \longrightarrow \mdl N$ be an immediate isomorphism. Let $a \in \VF \smallsetminus \VF(\mdl M)$ and $b \in \VF \smallsetminus \VF(\mdl N)$ such that $\rv(a - c) = \rv(b -\sigma(c))$ for all $c \in \VF(\mdl M)$. Then $\sigma$ may be extended to an immediate isomorphism $\bar \sigma : \langle \mdl M, a \rangle \longrightarrow \langle \mdl N, b \rangle$ with $\bar \sigma(a) = b$. \end{lem} Observe that, since every element of $\VF(\langle \mdl M, a \rangle) = \langle \mdl M, a \rangle_T$ is of the form $f(a, c)$, where $c \in \VF(\mdl M)$ and $f$ is a function symbol of $\lan{T}{}{}$, and similarly for $\langle \mdl N, b \rangle$, the lemma is equivalent to saying that $\rv(a - c) = \rv(b -\sigma(c))$ for all $c \in \VF(\mdl M)$ implies $\rv(f(a,c)) = \rv(f(b,\sigma(c)))$ for all $c \in \VF(\mdl M)$ and all function symbols of $\lan{T}{}{}$. \begin{proof} Without loss of generality, we may assume that $\mdl M$, $\mdl N$ are $\VF$-generated. 
According to $\sigma$, $a$ and $b$ must make the same cut respectively in $\VF(\mdl M)$ and $\VF(\mdl N)$, and hence there is an $\lan{T}{}{}$\nobreakdash-isomorphism $\bar \sigma : \langle \mdl M, a \rangle_T \longrightarrow \langle \mdl N, b \rangle_T$ with $\bar \sigma(a) = b$ that extends $\sigma \upharpoonright \VF(\mdl M)$. We shall first show that $\bar \sigma$ expands to an $\lan{T}{RV}{}$-isomorphism. There are two cases to consider, corresponding to the two cases in the proof of Theorem~\ref{theos:qe}. Case (1): There is an $a' \in \langle \mdl M, a \rangle_T$ such that \[ \abs{ \OO(\mdl M)} < a' < \abs{ \VF(\mdl M) \smallsetminus \OO(\mdl M) }. \] Let $f$ be a function symbol of $\lan{T}{}{}$ and $c \in \VF(\mdl M)$ such that $f(a, c) = a'$. Let $b' = \bar \sigma(f(a, c))$. Then we also have \[ \abs{\OO(\mdl N)} < b' < \abs{ \VF(\mdl N) \smallsetminus \OO(\mdl N) }. \] If $a' \notin \OO(\langle \mdl M, a \rangle)$ then $\Gamma(\langle \mdl M, a \rangle) \neq \Gamma(\mdl M)$. By the valuation property, there is a $d \in \VF(\mdl M)$ such that $\vv(a - d) \notin \Gamma(\mdl M)$. Then $\vv(b - \sigma(d)) \notin \Gamma(\mdl N)$ and hence, by the Wilkie inequality, $\OO(\langle \mdl N, b\rangle)$ is the convex hull of $\OO(\mdl N)$ in $\langle \mdl N, b \rangle_T$. This implies $b' \notin \OO(\langle \mdl N, b \rangle)$. By symmetry and \cite[Remark~3.8]{DriesLew95}, we see that $a' \in \OO(\langle \mdl M, a \rangle)$ if and only if $b' \in \OO(\langle \mdl N, b \rangle)$, and hence \[ \bar \sigma(\OO(\langle \mdl M, a \rangle)) = \OO(\langle \mdl N, b \rangle). \] Case (2): Case (1) fails. We may proceed exactly as in Case (2) of the proof of Theorem~\ref{theos:qe}. This concludes our proof that $\bar \sigma$ expands to an $\lan{T}{RV}{}$-isomorphism. Next, we show that $\bar \sigma$ is indeed immediate. If $\RV(\langle \mdl M, a \rangle) = \RV(\mdl M)$ then also $\RV(\langle \mdl N, b \rangle) = \RV(\mdl N)$, and there is nothing more to be done. 
So suppose $\RV(\langle \mdl M, a \rangle) \neq \RV(\mdl M)$. We claim that there is a $d \in \VF(\mdl M)$ such that $\rv(a - d) \notin \RV(\mdl M)$. We consider two (mutually exclusive) cases. Case (1): $\Gamma(\langle \mdl M, a \rangle) \neq \Gamma(\mdl M)$. Then the valuation property gives such a $d$ directly. Case (2): $\K(\langle \mdl M, a \rangle) \neq \K(\mdl M)$. Let $a'$ be as above. Let $\OO'$ be the $T$\nobreakdash-convex subring of $\langle \mdl M, a \rangle_T$ that does not contain $a'$, that is, $\OO'$ is the convex hull of $\OO(\mdl M)$ in $\langle \mdl M, a \rangle_T$. Let $\vv'$, $\Gamma'(\langle \mdl M, a \rangle)$ be the corresponding signed valuation map and signed value group. Then the valuation property yields a $d \in \VF(\mdl M)$ such that $\vv'(a - d) \notin \Gamma'(\mdl M)$. Since \[ \abs{\Gamma'}(\langle \mdl M, a \rangle) \cong \abs{\Gamma}(\mdl M) \oplus \mathds{K}, \] there is a $\gamma \in \abs{\Gamma}(\mdl M)$ such that exactly one of the following two relations holds: \begin{gather*} \abs{ \OO_\gamma(\mdl M)} < \abs{a- d} < \abs{ \VF(\mdl M) \smallsetminus \OO_\gamma(\mdl M) },\\ \abs{ \MM_\gamma(\mdl M)} < \abs{a- d} < \abs{ \VF(\mdl M) \smallsetminus \MM_\gamma(\mdl M) }, \end{gather*} where \[ \OO_\gamma = \{c \in \VF: \abval(c) \geq \gamma\} \quad \text{and} \quad \MM_\gamma = \{c \in \VF: \abval(c) > \gamma\}. \] It is not hard to see that, in either case, $\rv(a - d) \notin \RV(\mdl M)$. Since $\rv(a - d) = \rv(b - \sigma(d)) \eqqcolon t$, by the Wilkie inequality, $\RV(\langle \mdl M, a \rangle) = \langle \RV(\mdl M), t \rangle$ and hence $\bar \sigma$ must be immediate. \end{proof} \subsection{Fundamental structure of $T$-convex valuation} We review some fundamental facts concerning the valuation in $\mmdl$. Additional notation and terminology are also introduced. Recall \cite[Theorem~A]{Dries:tcon:97}: The structure of definable sets in the $\K$-sort is precisely that given by the theory $T$.
Recall \cite[Theorem~B]{Dries:tcon:97}: The structure of definable sets in the (imaginary) $\abs \Gamma$-sort is precisely that given by the $o$\nobreakdash-minimal theory of nontrivially ordered vector spaces over $\mathds{K}$. The structure of definable sets in the (imaginary) $\Gamma$-sort is the same one modulo the sign. In particular, every definable function in the $\Gamma$-sort is definably piecewise $\mathds{K}$-linear modulo the sign. \begin{lem}\label{gk:ortho} If $f : \Gamma \longrightarrow \K$ is a definable function then $f(\Gamma)$ is finite. Similarly, if $g : \K \longrightarrow \Gamma$ is a definable function then $g(\K)$ is finite. \end{lem} \begin{proof} See \cite[Proposition~5.8]{Dries:tcon:97}. \end{proof} Note that \cite[Theorem~B, Proposition~5.8]{Dries:tcon:97} require that $T$ be power-bounded. \begin{nota}\label{gamma:what} Recall Convention~\ref{how:gam}. There are two ways of treating an element $\gamma \in \Gamma$: as a point, when we study $\Gamma$ as an independent structure, or as a subset of $\mmdl$, when we need to remain in the realm of definable sets in $\mmdl$. The former perspective simplifies the notation but is of course dispensable. We shall write $\gamma$ as $\gamma^\sharp$ when we want to emphasize that it is the set $\vrv^{-1}(\gamma)$ in $\mmdl$ that is being considered. More generally, if $I$ is a set in $\Gamma$ then we write $I^\sharp = \bigcup\{\gamma^\sharp: \gamma \in I\}$. Similarly, if $U$ is a set in $\RV$ then $U^\sharp$ stands for $\bigcup\{\rv^{-1}(t): t \in U\}$. \end{nota} Since $\TCVF$ is a weakly $o$\nobreakdash-minimal theory (see \cite[Corollary~3.14]{DriesLew95} and Corollary~\ref{trans:VF}), we can use the dimension theory of \cite[\S~4]{mac:mar:ste:weako} in $\mmdl$.
\begin{defn} The \emph{$\VF$-dimension} of a definable set $A$, denoted by $\dim_{\VF}(A)$, is the largest natural number $k$ such that, possibly after re-indexing of the $\VF$-coordinates, $\pr_{\leq k}(A_t)$ has nonempty interior for some $t \in A_{\RV}$. \end{defn} For all substructures $\mdl M$ and all $a \in \VF$, $\VF(\dcl_{\mdl M}( a)) = \langle \mdl M , a \rangle_T$, where $\dcl_{\mdl M}(a)$ is the definable closure of $a$ over $\mdl M$. This implies that the exchange principle with respect to definable closure --- or algebraic closure, which is the same thing since there is an ordering --- holds in the $\VF$-sort, because it holds for $T$\nobreakdash-models. Therefore, by \cite[\S~4.12]{mac:mar:ste:weako}, we may equivalently define $\dim_{\VF}(A)$ to be the maximum of the algebraic dimensions of the fibers $A_t$, $t \in A_{\RV}$. Algebraic dimension is defined for (any sort of) any theory whose models have the exchange property with respect to algebraic closure, or more generally any suitable notion of closure. In the present setting, the algebraic dimension of a set $B \subseteq \VF^n$ that is definable over a substructure $\mdl M$ is just the maximum of the ranks of the $T$\nobreakdash-models $\langle \mdl M , b \rangle_T$, $b \in B$, relative to the $T$\nobreakdash-model $\VF(\mdl M)$ (again, see \cite[\S~3.2]{DriesLew95} for the notion of ranks of $T$-models). It can be shown that this does not depend on the choice of $\mdl M$. Yet another way to define this notion of $\VF$-dimension is to imitate \cite[Definition~4.1]{Yin:special:trans}, since we have: \begin{lem}\label{altVFdim} If $\dim_{\VF}(A) = k$ then $k$ is the smallest number such that there is a definable injection $f: A \longrightarrow \VF^k \times \RV^l$. \end{lem} \begin{proof} This is immediate by a straightforward argument combining the exchange principle, Lemma~\ref{RV:no:point} below, and compactness. Alternatively, we may just quote \cite[Theorem~4.11]{mac:mar:ste:weako}.
\end{proof} \begin{rem}[$\RV$-dimension and $\Gamma$-dimension]\label{rem:RV:weako} It is routine to verify that the axioms concerning only the $\RV$-sort are all universal except for the one asserting that $\K^{\times}$ is a proper subgroup, which is existential. These axioms also amount to a weakly $o$\nobreakdash-minimal theory, and the exchange principle holds for this theory. Therefore, we can use the dimension theory of \cite[\S~4]{mac:mar:ste:weako} directly in the $\RV$-sort as well. We call it the $\RV$-dimension and the corresponding operator is denoted by $\dim_{\RV}$. Note that $\dim_{\RV}$ does not depend on parameters (see \cite[\S~4.12]{mac:mar:ste:weako}) and agrees with the $o$\nobreakdash-minimal dimension in the $\K$-sort (see \cite[\S~4.1]{dries:1998}) whenever both are applicable. Similarly we shall use $o$\nobreakdash-minimal dimension in the $\Gamma$-sort and call it the $\Gamma$-dimension. The corresponding operator is denoted by $\dim_{\Gamma}$. \end{rem} \begin{lem}\label{dim:cut:gam} Let $U \subseteq \RV^n$ be a definable set with $\dim_{\RV}(U) = k$. Then $\dim_{\RV}(U_{\gamma}) = k$ for some $\gamma \in \vrv(U)$. \end{lem} Here $U_{\gamma}$ denotes the pullback of $\gamma$ along the obvious function $\vrv \upharpoonright U$, in line with the convention set in the last paragraph of Notation~\ref{indexing}. \begin{proof} By \cite[Theorem~4.11]{mac:mar:ste:weako} we may assume $n=k$. Then, for some $\gamma \in \vrv(U)$, $U_{\gamma}$ contains an open subset of $\RV^n$. The lemma follows. \end{proof} \begin{lem}\label{gam:red:K} Let $D \subseteq \Gamma^n$ be a definable set with $\dim_{\Gamma}(D) = k$. Then $D^\sharp$ is definably bijective to a disjoint union of finitely many sets of the form $(\K^+)^{n-k} \times D'^\sharp$, where $D' \subseteq \Gamma^k$. \end{lem} \begin{proof} Over a definable finite partition of $D$, we may assume that $D \subseteq (\Gamma^+)^n$ and the restriction $\pr_{\leq k} \upharpoonright D$ is injective.
It follows from \cite[Theorem~B]{Dries:tcon:97} that the induced function $f : D_{\leq k} \longrightarrow D_{>k}$ is piecewise $\mathds{K}$-linear. Thus, for every $\gamma \in D_{\leq k}$ and every $t \in \gamma^\sharp$ there is a $t$-definable point in $f(\gamma)^\sharp$. The assertion follows. \end{proof} Taking the disjoint union of finitely many definable sets will of course introduce extra bookkeeping coordinates, but we shall suppress this in the notation. \begin{rem}[$o$\nobreakdash-minimal sets in $\RV$]\label{omin:res} The theory of $o$\nobreakdash-minimality, in particular its terminologies and notions, may be applied to a set $U \subseteq \RV^n$ such that $\vrv(U)$ is a singleton or, more generally, is finite. For example, we shall say that $U$ is a \emph{cell} if the multiplicative translation $U / u \subseteq (\K^+)^n$ of $U$ by some $u \in U$ is an $o$\nobreakdash-minimal cell (see \cite[\S~3]{dries:1998}); this definition does not depend on the choice of $u$. Similarly, the \emph{$o$\nobreakdash-minimal Euler characteristic} $\chi(U)$ of such a set $U$ is the $o$\nobreakdash-minimal Euler characteristic of $U / u$ (see \cite[\S~4.2]{dries:1998}). This definition may be extended to disjoint unions of finitely many (not necessarily disjoint) sets $U_i \subseteq \RV^n \times \Gamma^m$ such that each $\vrv(U_i)$ is finite. \end{rem} \begin{thm}\label{groth:omin} Let $U$, $V$ be definable sets in $\RV$ with $\vrv(U)$, $\vrv(V)$ finite. Then there is a definable bijection between $U$ and $V$ if and only if \[ \dim_{\RV}(U) = \dim_{\RV}(V) \quad \text{and} \quad \chi(U) = \chi(V). \] \end{thm} \begin{proof} See \cite[\S~8.2.11]{dries:1998}.
\end{proof} \begin{defn}[Valuative discs]\label{defn:disc} A set $\mathfrak{b} \subseteq \VF$ is an \emph{open disc} if there is a $\gamma \in |\Gamma|$ and a $b \in \mathfrak{b}$ such that $a \in \mathfrak{b}$ if and only if $\abval(a - b) > \gamma$; it is a \emph{closed disc} if $a \in \mathfrak{b}$ if and only if $\abval(a - b) \geq \gamma$. The point $b$ is a \emph{center} of $\mathfrak{b}$. The value $\gamma$ is the \emph{valuative radius} or simply the \emph{radius} of $\mathfrak{b}$, which is denoted by $\rad (\mathfrak{b})$. A set of the form $t^\sharp$, where $t \in \RV$, is called an \emph{$\RV$-disc} (recall Notation~\ref{gamma:what}). A closed disc with a maximal open subdisc removed is called a \emph{thin annulus}. A set $\mathfrak{p} \subseteq \VF^n \times \RV_0^m$ of the form $(\prod_{i \leq n} \mathfrak{b}_i) \times t$ is an (\emph{open, closed, $\RV$-}) \emph{polydisc}, where each $\mathfrak{b}_i$ is an (open, closed, $\RV$-) disc. The \emph{polyradius} $\rad(\mathfrak{p})$ of $\mathfrak{p}$ is the tuple $(\rad(\mathfrak{b}_1), \ldots, \rad(\mathfrak{b}_n))$, whereas the \emph{radius} of $\mathfrak{p}$ is $\min \rad(\mathfrak{p})$. If all the discs $\mathfrak{b}_i$ are of the same valuative radius then $\mathfrak{p}$ is referred to as a \emph{ball}. The open and the closed polydiscs centered at a point $a \in \VF^n$ with polyradius $\gamma \in |\Gamma|^n$ are denoted by $\mathfrak{o}(a, \gamma)$ and $\mathfrak{c}(a, \gamma)$, respectively. The \emph{$\RV$-hull} of a set $A$, denoted by $\RVH(A)$, is the union of all the $\RV$-polydiscs whose intersections with $A$ are nonempty. If $A$ equals $\RVH(A)$ then $A$ is called an \emph{$\RV$-pullback}. \end{defn} The map $\abval$ is constant on a disc if and only if it does not contain $0$, which is the case if and only if it is contained in an $\RV$-disc. If two discs have nonempty intersection then one of them contains the other.
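For instance, here is a sketch of the last fact in the case of two closed discs $\mathfrak{b}_1$, $\mathfrak{b}_2$ with a common point $a$ and $\rad(\mathfrak{b}_1) \leq \rad(\mathfrak{b}_2)$: since any point of a disc may serve as a center of it, for every $b \in \mathfrak{b}_2$ and any center $c$ of $\mathfrak{b}_1$ we have
\[
\abval(b - c) \geq \min\{\abval(b - a), \abval(a - c)\} \geq \rad(\mathfrak{b}_1),
\]
and hence $\mathfrak{b}_2 \subseteq \mathfrak{b}_1$; the other combinations of open and closed discs are handled similarly.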
Many such elementary facts about discs will be used throughout the rest of the paper without further explanation. \begin{nota}[The definable sort $\DC$ of discs]\label{disc:exp} At times it will be more convenient to work in the traditional expansion $\mdl R_{\rv}^{\textup{eq}}$ of $\mmdl$ by all definable sorts. However, for our purpose, a much simpler expansion $\mdl R_{\rv}^{\bullet}$ suffices. This expansion has only one additional sort $\DC$ that contains, as elements, all the open and closed discs (since each point in $\VF$ may be regarded as a closed disc of valuative radius $\infty$, for convenience, we may and occasionally do think of $\VF$ as a subset of $\DC$). Heuristically, we may think of a disc that is properly contained in an $\RV$-disc as a ``thickened'' point of certain stature in $\VF$. For each $\gamma \in \absG$, there are two additional cross-sort maps $\VF \longrightarrow \DC$ in $\mdl R_{\rv}^{\bullet}$, one sends $a$ to the open disc, the other to the closed disc, of radius $\gamma$ that contain $a$. The expansion $\mdl R_{\rv}^{\bullet}$ can help reduce the technical complexity of our discussion. However, as is the case with the imaginary $\Gamma$-sort, it is conceptually inessential since, for the purpose of this paper, all allusions to discs as (imaginary) elements may be eliminated in favor of objects already definable in $\mmdl$. Whether parameters in $\DC$ are used or not shall be indicated explicitly, if it is necessary. Note that it is redundant to include in $\DC$ discs centered at $0$, since they may be identified with their valuative radii. For a disc $\mathfrak{a} \subseteq \VF$, the corresponding imaginary element in $\DC$ is denoted by $\code{\mathfrak{a}}$ when notational distinction makes the discussion more streamlined; $\code{\mathfrak{a}}$ may be heuristically thought of as the ``name'' of $\mathfrak{a}$. 
Conversely, a set $D \subseteq \DC$ is often identified with the set $\{\mathfrak{a} : \code \mathfrak{a} \in D\}$, in which case $\bigcup D$ denotes a subset of $\VF$. \end{nota} \begin{nota}\label{nota:tor} For each $\gamma \in |\Gamma|$, let $\MM_\gamma$ and $\OO_{\gamma}$ be the open and closed discs around $0$ with radius $\gamma$, respectively. Assume $\gamma \geq 0$. Let $\RV_{\gamma} = \VF^{\times} / (1 + \MM_\gamma)$, which is a subset of $\DC$. It is an abelian group and also inherits an ordering from $\VF^\times$. The canonical map $\VF^{\times} \longrightarrow \RV_{\gamma}$ is denoted by $\rv_{\gamma}$ and is augmented by $\rv_{\gamma}(0) = 0$. If $\code \mathfrak{b} \in \DC$, $b \in \mathfrak{b}$, and $\rad(\mathfrak{b}) \leq \abval(b) + \gamma$ then $\mathfrak{b}$ is a union of discs of the form $\rv_{\gamma}^{-1}(\code \mathfrak{a})$. In this case, we shall abuse the notation slightly and write $\code \mathfrak{a} \in \mathfrak{b}$, $\mathfrak{b} \subseteq \RV_{\gamma}$, etc. For each $\code \mathfrak{a} \in \RV_{\gamma}$, let $\tor (\code \mathfrak{a}) \subseteq \RV_{\gamma}$ be the $\code \mathfrak{a}$-definable subset such that $\rv^{-1}_{\gamma}(\tor (\code \mathfrak{a}))$ forms the smallest closed disc containing $\mathfrak{a}$. Set \begin{align*} \tor^{\times}(\code \mathfrak{a}) & = \tor (\code \mathfrak{a}) \smallsetminus \code \mathfrak{a},\\ \tor^+(\code \mathfrak{a}) &= \{t \in \tor(\code \mathfrak{a}): t > \code \mathfrak{a}\},\\ \tor^-(\code \mathfrak{a}) &= \{t \in \tor(\code \mathfrak{a}): t < \code \mathfrak{a}\}. \end{align*} If $\code \mathfrak{a} = (\code {\mathfrak{a}_1}, \ldots, \code {\mathfrak{a}_n})$ with $\code {\mathfrak{a}_i} \in \RV_{\gamma_i}$ then $\prod_i \tor(\code{\mathfrak{a}_i})$ is simply written as $\tor(\code \mathfrak{a})$; similarly for $\tor^{\times}(\code \mathfrak{a})$, $\tor^+(\code \mathfrak{a})$, etc. 
If $\gamma = 0$ then we may, for all purposes, identify $\tor^{\times} (\code \mathfrak{a})$, $\tor (\code \mathfrak{a})$, etc., with $\tor^{\times}(\alpha) \coloneqq \abvrv^{-1}(\alpha) \subseteq \RV$, $\tor(\alpha) \coloneqq \tor^{\times}(\alpha) \cup \{0\}$, etc., where $\alpha = \rad(\mathfrak{a})$. \end{nota} \begin{rem}[$\K$-torsors]\label{rem:K:aff} Let $\code \mathfrak{a} \in \RV_{\gamma}$ and $\alpha = \rad(\mathfrak{a})$. Since, via additive translation by $\code \mathfrak{a}$, there is a canonical $\code \mathfrak{a}$-definable order-preserving bijection \[ \aff_{\goedel{\mathfrak{a}}} :\tor(\code \mathfrak{a}) \longrightarrow \tor(\alpha), \] we see that $\code \mathfrak{a}$-definable subsets of $\tor(\code \mathfrak{a})^n$ naturally correspond to those of $\tor(\alpha)^n$. If there is an $\code \mathfrak{a}$-definable $t \in \tor^{\times}(\alpha)$ then, via multiplicative translation by $t$, this correspondence may be extended to $\code \mathfrak{a}$-definable subsets of $\tor(0)^n = \K^n$. More generally, for any $t \in \tor^{\times}(\alpha)$, the induced bijection $\tor(\code \mathfrak{a}) \longrightarrow \K$ is denoted by $\aff_{\goedel \mathfrak{a}, t}$. Consequently, $\tor(\code \mathfrak{a})$ may be viewed as a $\K$-torsor and, as such, is equipped with much of the structure of $\K$. \end{rem} \begin{defn}[Derivation between $\K$-torsors]\label{rem:tor:der} Let $\code \mathfrak{a}$, $\alpha$ be as above. Let $\goedel \mathfrak{b} \in \RV_{\delta}$ and $\beta = \rad(\mathfrak{b})$. Let $f : \tor(\code \mathfrak{a}) \longrightarrow \tor(\code \mathfrak{b})$ be a function. We define the \emph{derivative} $\tfrac{d}{d x} f$ of $f$ at any point $\code \mathfrak{d} \in \tor(\code \mathfrak{a})$ as follows. Choose any $t \in \tor^{\times}(\alpha)$ and any $s \in \tor^{\times}(\beta)$. 
Consider the function \[ f_{\goedel \mathfrak{a}, \goedel \mathfrak{b}, t,s} : \K \to^{\aff^{-1}_{\goedel \mathfrak{a}, t}} \tor(\code \mathfrak{a}) \to^f \tor(\goedel \mathfrak{b}) \to^{\aff_{\goedel \mathfrak{b}, s}} \K. \] Put $r = \aff_{\goedel \mathfrak{a}, t}(\goedel \mathfrak{d})$ and suppose that $\frac{d}{dx} f_{\goedel \mathfrak{a}, \goedel \mathfrak{b}, t,s}(r) \in \K$ exists. Then we set \[ \tfrac{d}{d x} f(\goedel \mathfrak{d}) = s t^{-1} \tfrac{d}{d x} f_{\goedel \mathfrak{a}, \goedel \mathfrak{b}, t,s}(r) \in \tor(\beta - \alpha). \] It is routine to check that this construction does not depend on the choice of $t$ and $s$, and hence the derivative $\tfrac{d}{d x} f(\goedel \mathfrak{d})$ is well-defined. \end{defn} \begin{defn}[$\vv$-intervals] Let $\mathfrak{a}$, $\mathfrak{b}$ be discs, not necessarily disjoint. The subset $\mathfrak{a} < x < \mathfrak{b}$ of $\VF$, if it is not empty, is called an \emph{open $\vv$-interval} and is denoted by $(\mathfrak{a}, \mathfrak{b})$, whereas the subset \[ \{a \in \VF : \ex{x \in \mathfrak{a}, y \in \mathfrak{b}} ( x \leq a \leq y) \} \] if it is not empty, is called a \emph{closed $\vv$-interval} and is denoted by $[\mathfrak{a}, \mathfrak{b}]$. The other $\vv$-intervals $[\mathfrak{a}, \mathfrak{b})$, $(-\infty, \mathfrak{b}]$, etc., are defined in the obvious way, where $(-\infty, \mathfrak{b}]$ is a closed (or half-closed) $\vv$-interval that is unbounded from below. Let $A$ be such a $\vv$-interval. The discs $\mathfrak{a}$, $\mathfrak{b}$ are called the \emph{end-discs} of $A$. If $\mathfrak{a}$, $\mathfrak{b}$ are both points in $\VF$ then of course we just say that $A$ is an interval and if $\mathfrak{a}$, $\mathfrak{b}$ are both $\RV$-discs then we say that $A$ is an $\RV$-interval.
If $A$ is of the form $(\mathfrak{a}, \mathfrak{b}]$ or $[\mathfrak{b}, \mathfrak{a})$, where $\mathfrak{a}$ is an open disc and $\mathfrak{b}$ is the smallest closed disc containing $\mathfrak{a}$, then $A$ is called a \emph{half thin annulus} and the \emph{radius} of $A$ is $\rad(\mathfrak{b})$. Two $\vv$-intervals are \emph{disconnected} if their union is not a $\vv$-interval. \end{defn} Obviously the open $\vv$-interval $(\mathfrak{a}, \mathfrak{b})$ is empty if $\mathfrak{a}$, $\mathfrak{b}$ are not disjoint. Equally obvious is that a $\vv$-interval is definable over some substructure $\mdl S$ if and only if its end-discs are definable over $\mdl S$. \begin{rem}[Holly normal form]\label{rem:HNF} By the valuation property Proposition~\ref{val:prop} and \cite[Proposition~7.6]{Dries:tcon:97}, we have an important tool called \emph{Holly normal form} \cite[Theorem~4.8]{holly:can:1995} (henceforth abbreviated as HNF); that is, every definable subset of $\VF$ is a unique union of finitely many definable pairwise disconnected $\vv$-intervals. This is obviously a generalization of the $o$\nobreakdash-minimal condition. \end{rem} \section{Definable sets in $\VF$}\label{def:VF} From here on, we shall work with a fixed small substructure $\mdl S$ of $\mdl R_{\rv}$, also occasionally of $\mdl R_{\rv}^{\bullet}$ (primarily in this section). The conceptual reason for this move is that the Grothendieck rings in our main construction below change their meaning if the set of parameters changes. In particular, allowing all parameters trivializes the whole construction somewhat. For instance, every definable set will contain a definable point. Consequently, all Galois actions on the classes of finite definable sets are killed off, and this is highly undesirable for motivic integration in algebraically closed valued fields. Admittedly, this problem is not as severe in our setting. Anyway, we follow the practice in \cite{hrushovski:kazhdan:integration:vf}. 
Note that $\mdl S$ is regarded as a part of the language now and hence, contrary to the usual convention in the model-theoretic literature, ``$\emptyset$-definable'' or ``definable'' only means ``$\mdl S$-definable'' instead of ``parametrically definable'' if no other qualifications are given. To simplify the notation, we shall not mention $\mdl S$ and its extensions in context if no confusion can arise. For example, the definable closure operator $\dcl_{\mdl S}$, etc., will simply be written as $\dcl$, etc. For the moment we do not require that $\mdl S$ be $\VF$-generated or $\Gamma(\mdl S)$ be nontrivial. When we work in $\mdl R_{\rv}^{\bullet}$ --- either by introducing parameters of the form $\code \mathfrak{a}$ or the phrase ``in $\mdl R_{\rv}^{\bullet}$'' --- the substructure $\mdl S$ may contain names for discs that may or may not be definable from $\VF(\mdl S) \cup \RV(\mdl S)$. \subsection{Definable functions and atomic open discs} The structural analysis of definable sets in $\VF$ below is, for the most part, of a rather technical nature. One highlight is Corollary~\ref{part:rv:cons}. It is a crucial ingredient of the proof in \cite{halyin} that all definable closed sets in an arbitrary power-bounded $o$\nobreakdash-minimal field admit Lipschitz stratification. \begin{conv}\label{topterm} Since apart from $\leq$ the language $\lan{T}{}{}$ only has function symbols, we may and shall assume that, in any $\lan{T}{RV}{}$-formula, every $\lan{T}{}{}$\nobreakdash-term occurs in the scope of an instance of the function symbol $\rv$. For example, if $f(x)$, $g(x)$ are $\lan{T}{}{}$\nobreakdash-terms then the formula $f(x) < g(x)$ is equivalent to $\rv(f(x) - g(x)) < 0$. The $\lan{T}{}{}$\nobreakdash-term $f(x)$ in $\rv(f(x))$ shall be referred to as a \emph{top $\lan{T}{}{}$\nobreakdash-term}. \end{conv} We begin by studying definable functions between various regions of the structure. 
\begin{lem}\label{Ocon} Let $f : \OO \longrightarrow \VF$ be a definable function. Then for some $\gamma \in \GAA$ and $a \in \OO$ we have $\vv(f(b)) = \gamma$ for all $b > a$ in $\OO$. \end{lem} \begin{proof} See \cite[Proposition~4.2]{Dries:tcon:97}. \end{proof} Note that this is false if $T$ is not power-bounded. A definable function $f$ is \emph{quasi-$\lan{T}{}{}$\nobreakdash-definable} if it is a restriction of an $\lan{T}{}{}$\nobreakdash-definable function (with parameters in $\VF(\mdl S)$, of course). \begin{lem}\label{fun:suba:fun} Every definable function $f : \VF^n \longrightarrow \VF$ is piecewise quasi-$\lan{T}{}{}$\nobreakdash-definable; that is, there are a definable finite partition $A_i$ of $\VF^n$ and $\lan{T}{}{}$\nobreakdash-definable functions $f_i: \VF^n \longrightarrow \VF$ such that $f \upharpoonright A_i = f_i \upharpoonright A_i$ for all $i$. \end{lem} \begin{proof} By compactness, this is immediately reduced to the case $n = 1$. In that case, let $\phi(x, y)$ be a quantifier-free formula that defines $f$. Let $\tau_i(x, y)$ enumerate the top $\lan{T}{}{}$\nobreakdash-terms in $\phi(x, y)$. For each $a \in \VF$ and each $\tau_i(a, y)$, let $B_{a, i} \subseteq \VF$ be the characteristic finite subset of the function $\tau_i(a, y)$ given by $o$\nobreakdash-minimal monotonicity (see \cite[\S~3.1]{dries:1998}). It is not difficult to see that if $f(a) \notin \bigcup_i B_{a, i}$ then there would be a $b \neq f(a)$ such that \[ \rv(\tau_i(a, b)) = \rv(\tau_i(a, f(a))) \] for all $i$ and hence $\phi(a, b)$ holds, which is impossible since $f$ is a function. The lemma follows. \end{proof} This lemma is just a variation of \cite[Lemma~2.6]{Dries:tcon:97}. \begin{cor}[Monotonicity]\label{mono} Let $A \subseteq \VF$ and $f : A \longrightarrow \VF$ be a definable function.
Then there is a definable finite partition of $A$ into $\vv$-intervals $A_i$ such that every $f \upharpoonright A_i$ is quasi-$\lan{T}{}{}$\nobreakdash-definable, continuous, and monotone (constant or strictly increasing or strictly decreasing). Consequently, each $f(A_i)$ is a $\vv$-interval. \end{cor} \begin{proof} This is immediate by Lemma~\ref{fun:suba:fun}, $o$\nobreakdash-minimal monotonicity, and HNF. \end{proof} This corollary is a version of \cite[Corollary~2.8]{Dries:tcon:97}, slightly finer due to the presence of HNF. \begin{cor}\label{uni:fun:decom} For the function $f$ in Corollary~\ref{mono}, there is a definable function $\pi : A \longrightarrow \RV^2$ such that, for each $t \in \RV^2$, $f \upharpoonright A_t$ is either constant or injective, where $A_t = \pi^{-1}(t)$. \end{cor} \begin{proof} This follows easily from monotonicity. Also, the proof of \cite[Lemma~4.11]{Yin:QE:ACVF:min} still works. \end{proof} \begin{lem}\label{RV:no:point} Given a tuple $t = (t_1, \ldots, t_n) \in \RV^n$, if $a \in \VF$ is $t$-definable then $a$ is definable. Similarly, for $\gamma = (\gamma_1, \ldots, \gamma_n) \in \Gamma^n$, if $t \in \RV$ is $\gamma$-definable then $t$ is definable. \end{lem} \begin{proof} The first assertion follows directly from Lemma~\ref{fun:suba:fun}. It can also be easily seen through an induction on $n$ with the trivial base case $n=0$. For any $b \in t_n^\sharp$, by the inductive hypothesis, we have $a \in \VF(\langle b \rangle)$. If $a$ were not definable then, by the exchange principle, we would have $b \in \VF(\langle a \rangle)$ and hence $t_n^\sharp \subseteq \VF(\langle a \rangle)$, which is impossible. The second assertion is similar, using the exchange principle in the $\RV$-sort (see Remark~\ref{rem:RV:weako}). \end{proof} \begin{cor}\label{function:rv:to:vf:finite:image} Let $U \subseteq \RV^m$ be a definable set and $f : U \longrightarrow \VF^n$ a definable function. Then $f(U)$ is finite. \end{cor} \begin{proof} We may assume $n=1$.
Then this is immediate by Lemma~\ref{RV:no:point} and compactness. \end{proof} There is a more general version of Lemma~\ref{RV:no:point} that involves parameters in the $\DC$-sort: \begin{lem}\label{ima:par:red} Let $\code \mathfrak{a} = (\code{\mathfrak{a}_1}, \ldots, \code{\mathfrak{a}_n}) \in \DC^n$. If $a \in \VF$ is $\code \mathfrak{a}$-definable then $a$ is definable. \end{lem} \begin{proof} We proceed by induction on $n$. Let $b \in \mathfrak{a}_n$ and $t \in \RV$ such that $\abvrv(t) = \rad(\mathfrak{a}_n)$. Then $a$ is $(\code {\mathfrak{a}_1}, \ldots, \code {\mathfrak{a}_{n-1}}, t, b)$-definable. By the inductive hypothesis and Lemma~\ref{RV:no:point}, we have $a \in \VF(\langle b \rangle)$. If $a$ were not definable then we would have $b \in \VF(\langle a \rangle)$ and hence $\mathfrak{a}_n \subseteq \VF(\langle a \rangle)$, which is impossible unless $\mathfrak{a}_n$ is a definable point in $\VF$. \end{proof} \begin{lem}\label{open:K:con} In $\mdl R^{\bullet}_{\rv}$, let $\mathfrak{a} \subseteq \VF$ be a definable open disc and $f : \mathfrak{a} \longrightarrow \K$ a definable nonconstant function. Then there is a definable proper subdisc $\mathfrak{b} \subseteq \mathfrak{a}$ such that $f \upharpoonright (\mathfrak{a} \smallsetminus \mathfrak{b})$ is constant. \end{lem} \begin{proof} If $\mathfrak{b}_1$ and $\mathfrak{b}_2$ are two proper subdiscs of $\mathfrak{a}$ such that $f \upharpoonright (\mathfrak{a} \smallsetminus \mathfrak{b}_1)$ and $f \upharpoonright (\mathfrak{a} \smallsetminus \mathfrak{b}_2)$ are both constant then $\mathfrak{b}_1$ and $\mathfrak{b}_2$ must be concentric, that is, $\mathfrak{b}_1 \cap \mathfrak{b}_2 \neq \emptyset$, for otherwise $f$ would be constant. Therefore, it is enough to show that $f \upharpoonright (\mathfrak{a} \smallsetminus \mathfrak{b})$ is constant for some proper subdisc $\mathfrak{b} \subseteq \mathfrak{a}$. To that end, without loss of generality, we may assume that $\mathfrak{a}$ is centered at $0$. 
For each $\gamma \in \vv(\mathfrak{a}) \subseteq \Gamma$, by \cite[Theorem~A]{Dries:tcon:97} and $o$\nobreakdash-minimality, $f(\vv^{-1}(\gamma))$ contains a $\gamma$-definable element $t_{\gamma} \in \K$. By weak $o$\nobreakdash-minimality, $f(\vv^{-1}(\gamma)) = \{t_{\gamma}\}$ for all but finitely many $\gamma \in \vv(\mathfrak{a})$. Let $g : \vv(\mathfrak{a}) \longrightarrow \K$ be the definable function given by $\gamma \longmapsto t_{\gamma}$. By Lemma~\ref{gk:ortho}, the image of $g$ is finite. The assertion follows. \end{proof} Alternatively, we may simply quote \cite[Theorem~1.2]{jana:omin:res}. \begin{defn} Let $D$ be a set of parameters. We say that a (not necessarily definable) nonempty set $A$ \emph{generates a (complete) $D$-type} if, for every $D$-definable set $B$, either $A \subseteq B$ or $A \cap B = \emptyset$. In that case, $A$ is \emph{$D$-type-definable} if no set that properly contains $A$ also generates a $D$-type. If $A$ is $D$-definable and generates a $D$-type, or equivalently, if $A$ is both $D$-definable and $D$-type-definable, then we say that $A$ is \emph{$D$-atomic} or \emph{atomic over $D$}. \end{defn} We simply say ``atomic'' when $D = \emptyset$. In the literature, a type may be a partial type, in which case a type-definable set can have nontrivial intersection with a definable set. In this paper, since partial types do not play a role, we shall not carry the superfluous qualifier ``complete'' in our terminology. \begin{rem}[Taxonomy of atomic sets]\label{rem:type:atin} It is not hard to see that, by HNF, if $\mathfrak{i} \subseteq \VF$ is atomic then $\mathfrak{i}$ must be a $\vv$-interval. In fact, there are only four possibilities for $\mathfrak{i}$: a point, an open disc, a closed disc, and a half thin annulus. There are no ``meaningful'' relations between them; see Lemma~\ref{atom:type}. \end{rem} \begin{lem}\label{atom:gam} In $\mdl R^{\bullet}_{\rv}$, let $\mathfrak{a}$ be an atomic set.
Then $\mathfrak{a}$ remains $\gamma$-atomic for all $\gamma \in \Gamma$. Moreover, if $\mathfrak{a} \subseteq \VF^n$ is an open polydisc then it remains $\code \mathfrak{a}$-atomic. \end{lem} \begin{proof} The first assertion is a direct consequence of definable choice in the $\Gamma$-sort. For the second assertion, let $\gamma = \rad(\mathfrak{a})$. If $\mathfrak{a}$ were not $\code \mathfrak{a}$-atomic then, by compactness, there would be a $\gamma$-definable subset $A \subseteq \VF^n$ such that $A \cap \mathfrak{a}$ is nonempty and, for every open polydisc $\mathfrak{b}$ with $\gamma = \rad(\mathfrak{b})$, if $A \cap \mathfrak{b}$ is nonempty then it is a proper subset of $\mathfrak{b}$ --- this contradicts the first assertion that $\mathfrak{a}$ is $\gamma$-atomic. \end{proof} Recall from \cite[Definition~4.5]{mac:mar:ste:weako} the notion of a cell in a weakly $o$\nobreakdash-minimal structure. In our setting, it is easy to see that, by HNF, we may require that the images of the bounding functions $f_1$, $f_2$ of a cell $(f_1, f_2)_A$ in the $\VF$-sort be contained in $\DC$; cell decomposition \cite[Theorem~4.6]{mac:mar:ste:weako} holds accordingly. Cells are in general not invariant under coordinate permutations; however, by cell decomposition, an atomic subset of $\VF^n$ must be a cell and must remain so under coordinate permutations. \begin{lem}\label{open:rv:cons} In $\mdl R^{\bullet}_{\rv}$, let $\mathfrak{a} \subseteq \VF^n$ be an atomic open polydisc and $f : \mathfrak{a} \longrightarrow \VF$ a definable function. If $f$ is not constant then $f(\mathfrak{a})$ is an (atomic) open disc; in particular, $\rv \upharpoonright f(\mathfrak{a})$ is always constant. \end{lem} \begin{proof} By atomicity, $f(\mathfrak{a})$ must be an atomic $\vv$-interval. We proceed by induction on $n$. For the base case $n=1$, suppose for contradiction that $f(\mathfrak{a})$ is a closed disc (other than a point) or a half thin annulus.
By monotonicity, we may assume that $f$ is, say, strictly increasing on $\mathfrak{a}$. Then $f^{-1}$ violates Lemma~\ref{Ocon}, a contradiction. For the case $n > 1$, suppose for contradiction again that $f(\mathfrak{a})$ is a closed disc or a half thin annulus. By the inductive hypothesis, for every $a \in \pr_{1}(\mathfrak{a})$ there is a maximal open subdisc $\mathfrak{b}_a \subseteq f(\mathfrak{a})$ that contains $f(\mathfrak{a}_a)$, similarly for every $a \in \pr_{>1}(\mathfrak{a})$. It follows that $f(\mathfrak{a})$ is actually contained in a maximal open subdisc of $f(\mathfrak{a})$, which is absurd. \end{proof} \begin{cor}\label{poly:open:cons} Let $f : \VF^n \longrightarrow \VF$ be a definable function and $\mathfrak{a} \subseteq \VF^n$ an open polydisc. If $(\rv \circ f) \upharpoonright \mathfrak{a}$ is not constant then there is an $\code \mathfrak{a}$-definable nonempty proper subset of $\mathfrak{a}$. \end{cor} Here is a strengthening of Lemma~\ref{atom:gam}: \begin{lem}\label{atom:self} Let $B \subseteq \VF^n$ be $\lan{T}{}{}$\nobreakdash-type-definable and $\mathfrak{a} = \mathfrak{a}_1 \times \ldots \times \mathfrak{a}_n \subseteq B$ an open polydisc. Then, for all $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_n)$ in $\mathfrak{a}$, there is an immediate automorphism $\sigma$ of $\mmdl$ with $\sigma(a) = b$. Consequently, $\mathfrak{a}$ is $(\code \mathfrak{a}, t)$-atomic for all $t \in \RV$. \end{lem} \begin{proof} To see that the first assertion implies the second, suppose for contradiction that there is an $(\code \mathfrak{a}, t)$-definable nonempty proper subset $A \subseteq \mathfrak{a}$. Let $a \in A$, $b \in \mathfrak{a} \smallsetminus A$, and $\sigma$ be an immediate automorphism of $\mmdl$ with $\sigma(a) = b$. Then $\sigma$ is also an immediate automorphism of $\mmdl$ over $\langle \code \mathfrak{a}, t \rangle$, contradicting the assumption that $A$ is $(\code \mathfrak{a}, t)$-definable.
For the first assertion, by Lemma~\ref{imm:ext}, it is enough to show that there is an immediate isomorphism $\sigma : \langle a \rangle \longrightarrow \langle b \rangle$ sending $a$ to $b$. Write \[ \mathfrak{a}' = \mathfrak{a}_1 \times \ldots \times \mathfrak{a}_{n-1}, \quad a' = (a_1, \ldots, a_{n-1}), \quad b' = (b_1, \ldots, b_{n-1}). \] Then, by induction on $n$ and Lemma~\ref{imm:iso}, it is enough to show that, for any immediate isomorphism $\sigma' : \langle a' \rangle \longrightarrow \langle b' \rangle$ sending $a'$ to $b'$ and any $\lan{T}{}{}$\nobreakdash-definable function $f : \VF^{n-1} \longrightarrow \VF$, \[ \rv(a_n - f(a')) = \rv(b_n - \sigma'(f(a'))). \] This is clear for the base case $n=1$, since $\mathfrak{a}$ must be disjoint from $\VF(\mdl S)$. For the case $n > 1$, we choose an immediate automorphism of $\mmdl$ extending $\sigma'$, which is still denoted by $\sigma'$; this is possible by Lemma~\ref{imm:ext}. By the inductive hypothesis and Lemma~\ref{open:rv:cons}, $f(\mathfrak{a}') = f(\sigma'(\mathfrak{a}')) = \sigma'(f(\mathfrak{a}'))$ is either a point or an open disc. Since $B$ is $\lan{T}{}{}$\nobreakdash-type-definable, it follows that $f(\mathfrak{a}')$ must be disjoint from $\mathfrak{a}_n$ and hence the desired condition is satisfied. \end{proof} \begin{cor}\label{part:rv:cons} Let $A \subseteq \VF^n$ and $f : A \longrightarrow \VF$ be an $\lan{T}{}{}$\nobreakdash-definable function. Then there is an $\lan{T}{}{}$\nobreakdash-definable finite partition $A_i$ of $A$ such that, for all $i$, if $\mathfrak{a} \subseteq A_i$ is an open polydisc then $\rv \upharpoonright f(\mathfrak{a})$ is constant and $f(\mathfrak{a})$ is either a point or an open disc. \end{cor} \begin{proof} For $a \in A$, let $D_a \subseteq A$ be the $\lan{T}{}{}$\nobreakdash-type-definable subset containing $a$. 
By Lemma~\ref{atom:self}, every open polydisc $\mathfrak{a} \subseteq D_a$ is $\code \mathfrak{a}$-atomic and hence, by Lemma~\ref{open:rv:cons}, the assertion holds for $\mathfrak{a}$. Then, by compactness, the assertion must hold in a definable subset $A_a \subseteq A$ that contains $a$; by compactness again, it holds in finitely many definable subsets $A_1, \ldots, A_m$ of $A$ with $\bigcup_i A_i = A$. Then the partition of $A$ generated by $A_1, \ldots, A_m$ is as desired. \end{proof} \begin{rem}\label{rem:LT:com} Clearly the conclusion of Corollary~\ref{part:rv:cons} still holds if we replace ``$\lan{T}{}{}$\nobreakdash-definable'' with ``definable'' everywhere therein. Moreover, its proof works almost verbatim in all situations where we want to partition an $\lan{T}{}{}$\nobreakdash-definable set $A \subseteq \VF^n$ into finitely many $\lan{T}{}{}$\nobreakdash-definable pieces $A_i$ such that a certain definable property, not necessarily $\lan{T}{}{}$\nobreakdash-definable, holds on every open polydisc (or other imaginary element) contained in $A_i$. \end{rem} Here is a variation of Lemma~\ref{atom:self}. \begin{lem}\label{atom:exp} Let $\mathfrak{a} \subseteq \VF^n$ be an $\code \mathfrak{a}$-atomic open polydisc. Let $e \in \VF^{\times}$ with $\abval(e) \gg 0$ (here $\gg$ stands for ``sufficiently larger than''). Then $\mathfrak{a}$ is $(\code \mathfrak{a}, e)$-atomic. \end{lem} \begin{proof} The argument is somewhat similar to that in the proof of Lemma~\ref{atom:self}. We proceed by induction on $n$. Write $\mathfrak{a} = \mathfrak{a}_1 \times \ldots \times \mathfrak{a}_n$ and $\mathfrak{a}' = \mathfrak{a}_1 \times \ldots \times \mathfrak{a}_{n-1}$. Let $(a', a_n)$ and $(b', b_n)$ be two points in $\mathfrak{a}' \times \mathfrak{a}_n$. By the inductive hypothesis and Lemma~\ref{atom:self}, there is an immediate isomorphism $\sigma' : \langle a', e \rangle \longrightarrow \langle b', e \rangle$ with $\sigma'(e) = e$ and $\sigma'(a') = b'$.
Thus, it is enough to show that, for every $\lan{T}{}{}$\nobreakdash-definable function $f : \VF^{n} \longrightarrow \VF$, \[ \rv(a_n - f(e, a')) = \rv(b_n - \sigma'(f(e, a'))). \] Suppose for contradiction that we can always find an $e \in \VF^{\times}$ that is arbitrarily close to $0$ such that $f(e, \mathfrak{a}') \cap \mathfrak{a}_n \neq \emptyset$ (this must hold for some such $f$, for otherwise we are already done by compactness); more precisely, by weak $o$\nobreakdash-minimality, without loss of generality, there is an open interval $(0, \epsilon) \subseteq \VF^+$ such that $f(e, \mathfrak{a}') \cap \mathfrak{a}_n \neq \emptyset$ for all $e \in (0, \epsilon)$. For each $a' \in \mathfrak{a}'$, let $f_{a'}$ be the $a'$-$\lan{T}{}{}$\nobreakdash-definable function on $\VF^+$ given by $b \longmapsto f(b, a')$. By $o$\nobreakdash-minimal monotonicity, there is an $\code{\mathfrak{a}'}$-definable function $l : \mathfrak{a}' \longrightarrow \VF^+$ such that $f^*_{a'} \coloneqq f_{a'} \upharpoonright A_{a'}$ is continuous and monotone (of the same kind) for all $a' \in \mathfrak{a}'$, where $A_{a'} \coloneqq (0, l(a'))$. By Lemma~\ref{open:rv:cons}, $l(\mathfrak{a}')$ is either a point or an open disc. Thus the $\vv$-interval $(0, l(\mathfrak{a}'))$ is nonempty, which implies that $f^*_{a'}(e) \in \mathfrak{a}_n$ for some $a' \in \mathfrak{a}'$ and some $e \in A_{a'}$. In that case, we must have that, for all $a' \in \mathfrak{a}'$, $\mathfrak{a}_n \subseteq f^*_{a'}(A_{a'})$ and hence $f^*_{a'}$ is bijective. By $o$\nobreakdash-minimality in the $\Gamma$-sort, $\abval((f^*_{a'})^{-1}(\mathfrak{a}_n))$ has to be a singleton, say $\{\beta_{a'}\}$; in fact, the function given by $a' \longmapsto \beta_{a'}$ has to be constant and hence we may write $\beta_{a'}$ as $\beta$. It follows that, for all $e \in \VF^+$ with $\abval(e) > \beta$, $f(e, \mathfrak{a}') \cap \mathfrak{a}_n = \emptyset$, a contradiction.
\end{proof} Next we come to the issue of finding definable points in definable sets. As we have mentioned above, this is a trivial issue if the space of parameters is not fixed. \begin{lem}\label{S:def:cl} The substructure $\mdl S$ is definably closed. \end{lem} \begin{proof} By Lemma~\ref{RV:no:point}, we have $\VF(\dcl( \mdl S)) = \VF(\mdl S)$. Suppose that $t \in \RV$ is definable. By the first sentence of Remark~\ref{rem:RV:weako}, if $\vrv(\RV(\mdl S))$ is nontrivial then $\RV(\mdl S)$ is a model of the reduct of $\TCVF$ to the $\RV$-sort and hence, by quantifier elimination, is an elementary substructure of $\RV$, which implies $t \in \RV(\mdl S)$. On the other hand, if $\vrv(\RV(\mdl S))$ is trivial then $\RV(\mdl S) = \K(\mdl S)$ and it is not hard, though a bit tedious, to check, using quantifier elimination again, that $t \in \K(\mdl S)$. \end{proof} If $\mdl S$ is $\VF$-generated and $\Gamma(\mdl S)$ is nontrivial then $\mdl S$ is an elementary substructure and hence every definable set contains a definable point. This, of course, fails if $\mdl S$ carries extra $\RV$-data, by the above lemma. However, we do have: \begin{lem}\label{clo:disc:bary} Every definable closed disc $\mathfrak{b}$ contains a definable point. \end{lem} \begin{proof} Suppose for contradiction that $\mathfrak{b}$ does not contain a definable point. Since $\mmdl$ is sufficiently saturated, there is an open disc $\mathfrak{a}$ that is disjoint from $\VF(\mdl S)$ and properly contains $\mathfrak{b}$. Let $a \in \mathfrak{a} \smallsetminus \mathfrak{b}$ and $b \in \mathfrak{b}$. Clearly $\rv(c - b) = \rv(c - a)$ for all $c \in \VF(\mdl S)$. As in the proof of Lemma~\ref{atom:self}, there is an immediate automorphism $\sigma$ of $\mmdl$ such that $\sigma(a) = b$. This means that $\mathfrak{b}$ is not definable, which is a contradiction. \end{proof} Notice that the argument above does not work if $\mathfrak{b}$ is an open disc. 
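For instance (an illustration of ours, not from the original text, assuming that $\RV(\mdl S) \smallsetminus \rv(\VF(\mdl S))$ contains a nonzero element $t$), consider the fiber
\[
\mathfrak{a} = t^{\sharp} = \rv^{-1}(t) \subseteq \VF,
\]
which is a definable open disc. If some $a \in \mathfrak{a}$ were definable then, since $\mdl S$ is definably closed (Lemma~\ref{S:def:cl}), we would have $a \in \VF(\mdl S)$ and hence $t = \rv(a) \in \rv(\VF(\mdl S))$, contradicting the choice of $t$. So such an open disc contains no definable point at all.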
\begin{cor}\label{open:disc:def:point} Let $\mathfrak{a} \subseteq \VF$ be a disc and $A$ a definable subset of $\VF$. If $\mathfrak{a} \cap A$ is a nonempty proper subset of $\mathfrak{a}$ then $\mathfrak{a}$ contains a definable point. \end{cor} \begin{proof} It is not hard to see that, by HNF, if $\mathfrak{a} \cap A$ is a nonempty proper subset of $\mathfrak{a}$ then $\mathfrak{a}$ contains a definable closed disc and hence the claim is immediate by Lemma~\ref{clo:disc:bary}. \end{proof} \begin{lem}\label{one:atomic} Let $A \subseteq \VF$ be a definable set that contains infinitely many open discs of radius $\beta$. Then one of these discs $\mathfrak{a}$ is $(\code \mathfrak{a}, \beta)$-atomic. \end{lem} \begin{proof} By Lemmas~\ref{atom:gam} and \ref{atom:self}, it is enough to show that some open disc $\mathfrak{a} \subseteq A$ of radius $\beta$ is contained in a type-definable set. Suppose for contradiction that this is not the case. By Corollary~\ref{open:disc:def:point} and HNF, for every definable set $B \subseteq A$, we have either $\mathfrak{a} \cap B = \emptyset$ or $\mathfrak{a} \subseteq B$ for all but finitely many such open discs $\mathfrak{a} \subseteq A$. The claim then follows by passing to $\mdl R^{\bullet}_{\rv}$ and applying compactness (with the parameter $\beta$). \end{proof} \subsection{Contracting from $\VF$ to $\RV$} We can relate definable sets in $\VF$ to those in $\RV$, specifically $\RV$-pullbacks, through a procedure called contraction. A more comprehensive study of the latter is postponed to the next section. \begin{defn}[Disc-to-disc]\label{defn:dtdp} Let $A$, $B$ be two subsets of $\VF$ and $f : A \longrightarrow B$ a bijection. We say that $f$ is \emph{concentric} if, for all open discs $\mathfrak{a} \subseteq A$, $f(\mathfrak{a})$ is also an open disc; if both $f$ and $f^{-1}$ are concentric then $f$ has the \emph{disc-to-disc property} (henceforth abbreviated as ``dtdp'').
More generally, let $f : A \longrightarrow B$ be a bijection between two sets $A$ and $B$, each with exactly one $\VF$-coordinate. For each $(t, s) \in f_{\RV}$, let $f_{t, s} = f \cap (\VF^2 \times \{(t, s)\})$, which is called a \emph{$\VF$-fiber} of $f$. We say that $f$ has \emph{dtdp} if every $\VF$-fiber of $f$ has dtdp. \end{defn} We are somewhat justified in not specifying ``open disc'' in the terminology since if $f$ has dtdp then, for all open discs $\mathfrak{a} \subseteq A$ and all closed discs $\mathfrak{c} \subseteq \mathfrak{a}$, $f(\mathfrak{c})$ is also a closed disc. In fact, this latter property is stronger: if $f(\mathfrak{c})$ is a closed disc for all closed discs $\mathfrak{c} \subseteq A$ then $f$ has dtdp. But we shall only be concerned with open discs, so we ask for it directly. \begin{lem}\label{open:pro} Let $f : A \longrightarrow B$ be a definable bijection between two sets $A$ and $B$, each with exactly one $\VF$-coordinate. Then there is a definable finite partition $A_i$ of $A$ such that each $f \upharpoonright A_i$ has dtdp. \end{lem} \begin{proof} By compactness, we may simply assume that $A$ and $B$ are subsets of $\VF$. Then we may proceed exactly as in the proof of Corollary~\ref{part:rv:cons}, using Lemmas~\ref{open:rv:cons} and~\ref{atom:self} (also see Remark~\ref{rem:LT:com}). \end{proof} \begin{defn} Let $A$ be a subset of $\VF^n$. The \emph{$\RV$-boundary} of $A$, denoted by $\partial_{\RV}A$, is the definable subset of $\rv(A)$ such that $t \in \partial_{\RV} A$ if and only if $t^\sharp \cap A$ is a proper nonempty subset of $t^\sharp$. The definable set $\rv(A) \smallsetminus \partial_{\RV}A$, denoted by $\ito_{\RV}(A)$, is called the \emph{$\RV$-interior} of $A$. \end{defn} Obviously, $A \subseteq \VF^n$ is an $\RV$-pullback if and only if $\partial_{\RV} A$ is empty.
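As a toy computation (ours, not from the original text, assuming that $0$ and $1$ are definable points of $\VF$), take $n = 1$ and $A = [0, 1]$. Then
\[
\partial_{\RV} A = \{\rv(1)\} \quad \text{and} \quad \ito_{\RV}(A) = \rv(A) \smallsetminus \{\rv(1)\},
\]
since the fiber $\rv(1)^{\sharp} = \{x \in \VF : \vv(x - 1) > 0\}$ meets $A$ in the nonempty proper subset $\{x \in \rv(1)^{\sharp} : x \leq 1\}$, whereas every other fiber $t^{\sharp}$ with $t \in \rv(A)$, including $0^{\sharp} = \{0\}$, lies entirely inside $A$. In particular, $A$ is not an $\RV$-pullback.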
Note that $\partial_{\RV}A$ is in general different from the topological boundary $\partial(\rv(A))$ of $\rv(A)$ in $\RV^n$ and neither one of them includes the other. \begin{lem}\label{RV:bou:dim} Let $A$ be a definable subset of $\VF^n$. Then $\dim_{\RV}(\partial_{\RV} A) < n$. \end{lem} \begin{proof} We proceed by induction on $n$. The base case $n=1$ follows immediately from HNF. For the inductive step, since $\partial_{\RV} A_a$ is finite for every $i \in [n]$ and every $a \in \pr_{\tilde i}(A)$, by Corollary~\ref{open:disc:def:point} and compactness, there are a definable finite partition $A_{ij}$ of $\pr_{\tilde i}(A)$ and, for each $A_{ij}$, finitely many definable functions $f_{ijk} : A_{ij} \longrightarrow \VF$ such that \[ \textstyle\bigcup_k \rv(f_{ijk}(a)) = \partial_{\RV} A_a \quad \text{for all } a \in A_{ij}. \] By Corollary~\ref{part:rv:cons}, we may assume that if $t^\sharp \subseteq A_{ij}$ then the restriction $\rv \upharpoonright f_{ijk}(t^\sharp)$ is constant. Hence each $f_{ijk}$ induces a definable function $C_{ijk} : \ito_{\RV}(A_{ij}) \longrightarrow \RVV$. Let \[ \textstyle C = \bigcup_{i, j, k} C_{ijk} \quad \text{and} \quad B = \bigcup_{i,j} \bigcup_{t \in \partial_{\RV} A_{ij}} \rv(A)_t. \] Obviously $\dim_{\RV}(C) < n$. By the inductive hypothesis, for all $A_{ij}$ we have $\dim_{\RV}(\partial_{\RV} A_{ij}) < n-1$. Thus $\dim_{\RV}(B) < n$. Since $\partial_{\RV} A \subseteq B \cup C$, the claim follows. \end{proof} For $(a, t) \in \VF^n \times \RV_0^m$, we write $\rv(a,t)$ to mean $(\rv(a), t)$, similarly for other maps. \begin{defn}[Contractions]\label{defn:corr:cont} A function $f : A \longrightarrow B$ is \emph{$\rv$-contractible} if there is a (necessarily unique) function $f_{\downarrow} : \rv(A) \longrightarrow \rv(B)$, called the \emph{$\rv$-contraction} of $f$, such that \[ (\rv \upharpoonright B) \circ f = f_{\downarrow} \circ (\rv \upharpoonright A).
\] Similarly, it is \emph{$\upharpoonright$-contractible} (resp.\ \emph{$\vv$-contractible}) if the same holds in terms of $\upharpoonright$ (resp.\ $\vv$ or $\vrv$, depending on the coordinates) instead of $\rv$. \end{defn} The subscripts in these contractions will be written as $\downarrow_{\rv}$, $\downarrow_{\upharpoonright}$, etc., if they occur in the same context and therefore need to be distinguished from one another notationally. \begin{lem}\label{fn:alm:cont} For every definable function $f : \VF^n \longrightarrow \VF$ there is a definable set $U \subseteq \RV^n$ with $\dim_{\RV}(U) < n$ such that $f \upharpoonright (\VF^n \smallsetminus U^\sharp)$ is $\rv$-contractible. \end{lem} \begin{proof} By Corollary~\ref{poly:open:cons}, for any $t \in \RV^n$, if $\rv(f(t^\sharp))$ is not a singleton then $t^\sharp$ has a $t$-definable proper subset. By compactness, there is a definable subset $A \subseteq \VF^n$ such that $t \in \partial_{\RV} A$ if and only if $\rv(f(t^\sharp))$ is not a singleton. So the assertion follows from Lemma~\ref{RV:bou:dim}. \end{proof} For any definable set $A$, a property holds \emph{almost everywhere} in $A$ or \emph{for almost every point} in $A$ if it holds away from a definable subset of $A$ of a smaller $\VF$-dimension. This terminology will also be used with respect to other notions of dimension. \begin{rem}[Regular points] Let $f : \VF^n \longrightarrow \VF^m$ be a definable function. By Lemma~\ref{fun:suba:fun} and $o$\nobreakdash-minimal differentiability, $f$ is $C^p$ almost everywhere for all $p$ (see \cite[\S~7.3]{dries:1998}). For each $p$, let $\reg^p(f) \subseteq \VF^n$ be the definable subset of regular $C^p$-points of $f$. If $p=0$ then we write $\reg(f)$, which is simply the subset of the regular points of $f$. Assume $n=m$. 
If $a \in \reg(f)$ and $f$ is $C^1$ in a neighborhood of $a$ then $\reg^1(f)$ contains a neighborhood of $a$ on which the sign of the Jacobian of $f$, which is denoted by $\jcb_{\VF} f$, is constant. If $f$ is locally injective on a definable open subset $A \subseteq \VF^n$ then $f$ is regular almost everywhere in $A$ and hence, for all $p$, $\dim_{\VF}(A \smallsetminus \reg^p(f)) < n$. By \cite[Theorem~A]{Dries:tcon:97}, the situation is quite similar if $f$ is a (parametrically) definable function of the form $\tor(\alpha)^n \longrightarrow \tor(\beta)^m$, $\alpha, \beta \in \absG$, and $\dim_{\VF}$ is replaced by $\dim_{\RV}$, in particular, if $f$ is such a function from $\K^n$ into $\K^m$, or more generally, from $\tor(u)$ into $\tor(v)$, where $u \in \RV^n_{\alpha}$ and $v \in \RV^m_{\beta}$ (see Notation~\ref{rem:K:aff} and Definition~\ref{rem:tor:der}). \end{rem} \begin{rem}[$\rv$-contraction of univariate functions]\label{contr:uni} Suppose that $f$ is a definable function from $\OO^\times$ into $\OO$. By monotonicity, there are a definable finite set $B \subseteq \OO^\times$ and a definable finite partition of $A \coloneqq \OO^\times \smallsetminus B$ into infinite $\vv$-intervals $A_i$ such that both $f$ and $\tfrac{d}{d x} f$ are quasi-$\lan{T}{}{}$\nobreakdash-definable, continuous, and monotone on each $A_i$. If $\rv(A_i)$ is not a singleton then let $U_i \subseteq \K$ be the largest open interval contained in $\rv(A_i)$. Let \[ A^*_i = U_i^\sharp, \quad U = \textstyle{\bigcup_i U_i}, \quad A^* = U^\sharp, \quad f^* = f \upharpoonright A^*. \] By Lemma~\ref{fn:alm:cont}, we may refine the partition such that both $f^*$ and $\frac{d}{d x} f^*$ are $\rv$-contractible. By Lemma~\ref{gk:ortho}, $\vv \upharpoonright f^*(A^*_i)$ and $\vv \upharpoonright \tfrac{d}{d x} f^*(A^*_i)$ must be constant, say $\alpha_i$ and $\beta_i$, respectively. 
So it makes sense to speak of $\tfrac{d}{d x} f^*_{\downarrow_{\rv}}$ on each $U_i$, which a priori is not the same as $(\tfrac{d}{d x} f^*)_{\downarrow_{\rv}}$. Deleting finitely many points from $U$ if necessary, we assume that $f^*_{\downarrow_{\rv}}$, $(\tfrac{d}{d x} f^*)_{\downarrow_{\rv}}$, and $\tfrac{d}{d x} f^*_{\downarrow_{\rv}}$ are all continuous monotone functions on each $U_i$. We claim that $\abs{\beta_i} = \abs{\alpha_i}$ unless $f^*_{\downarrow_{\rv}} \upharpoonright U_i$ is constant. Suppose for contradiction that $f^*_{\downarrow_{\rv}} \upharpoonright U_i$ is not constant and $\abs{\beta_i} \neq \abs{\alpha_i}$. First examine the case $\abs{\beta_i} < \abs{\alpha_i}$. A moment of reflection shows that, then, $f^* \upharpoonright A^*_i$ would increase or decrease too fast to confine $f^*(A_i^*)$ in $\vv^{-1}(\alpha_i)$. Dually, if $\abs{\beta_i} > \abs{\alpha_i}$ then $f^* \upharpoonright A^*_i$ would increase or decrease too slowly to make $f^*_{\downarrow_{\rv}}(U_i)$ contain more than one point. In either case, we have reached a contradiction. Actually, a similar estimate shows that if $\abs{\beta_i} = \abs{\alpha_i} < \infty$ then $f^*_{\downarrow_{\rv}} \upharpoonright U_i$ cannot be constant. Finally, we show that $\abs{\beta_i} = \abs{\alpha_i}$ implies $(\tfrac{d}{d x} f^*)_{\downarrow_{\rv}} = \tfrac{d}{d x} f^*_{\downarrow_{\rv}}$ on $U_i$ (note that if $\abs{\beta_i} > \abs{\alpha_i}$ then $\tfrac{d}{d x} f^*_{\downarrow_{\rv}} = 0$). Suppose for contradiction that, say, \[ (\tfrac{d}{d x} f^*)_{\downarrow_{\rv}}(\rv(a)) > \tfrac{d}{d x} f^*_{\downarrow_{\rv}}(\rv(a)) > 0 \] for some $a \in A^*_i$. Then there is an open interval $I \subseteq U_i$ containing $\rv(a)$ such that $(\tfrac{d}{d x} f^*)_{\downarrow_{\rv}}(I) > \tfrac{d}{d x} f^*_{\downarrow_{\rv}}(I)$. It follows that $f^*_{\downarrow_{\rv}}(I)$ is properly contained in $\rv(f^*(I^\sharp)) = f^*_{\downarrow_{\rv}}(I)$, which is absurd. The other cases are similar. 
\end{rem} The higher-order multivariate version is more complicated to state than to prove: \begin{lem}\label{univar:der:contr} Let $A \subseteq (\OO^\times)^n$ be a definable $\RV$-pullback with $\dim_{\RV}(\rv(A)) = n$ and $f : A \longrightarrow \OO$ a definable function. Let $p \in \mathds{N}^n$ be a multi-index of order $\abs{p} = d$ and $k \in \mathds{N}$ with $k \gg d$. Suppose that $f$ is $C^k$ and, for all $q \leq p$, $\frac{\partial^q}{\partial x^q} f$ is $\rv$-contractible and its contraction $(\frac{\partial^q}{\partial x^q} f)_{\downarrow_{\rv}}$ is also $C^k$. Then there is a definable set $V \subseteq \rv(A)$ with $\dim_{\RV}(V) < n$ and $U \coloneqq \rv(A) \smallsetminus V$ open such that, for all $a \in U^\sharp$ and all $q' < q \leq p$ with $\abs{q'} + 1 = \abs{q}$, exactly one of the following two conditions holds: \begin{itemize} \item either $\frac{\partial^{q}}{\partial x^{q}} f(a) = 0$ or $\abval (\frac{\partial^{q'}}{\partial x^{q'}} f(a)) < \abval (\frac{\partial^{q}}{\partial x^{q}} f(a))$, \item $(\frac{\partial^{q - q'}}{\partial x^{q - q'}} \frac{\partial^{q'}}{\partial x^{q'}} f)_{\downarrow_{\rv}}(\rv (a)) = \frac{\partial^{q - q'}}{\partial x^{q - q'}}(\frac{\partial^{q'}}{\partial x^{q'}} f)_{\downarrow_{\rv}}(\rv( a)) \neq 0$. \end{itemize} If the first condition never occurs then, for all $q \leq p$, we actually have $(\frac{\partial^q}{\partial x^q} f )_{\downarrow_{\rv}} = \frac{\partial^{q}}{\partial x^{q}} f_{\downarrow_{\rv}}$ on $U$. At any rate, for all $q \leq p$, we have $(\frac{\partial^q}{\partial x^q} f )_{\downarrow_{\upharpoonright}} = \frac{\partial^{q}}{\partial x^{q}} f_{\downarrow_{\upharpoonright}}$ on $U$. \end{lem} \begin{proof} First observe that, by induction on $d$, it is enough to consider the case $d = 1$ and $p = (0, \ldots, 0, 1)$.
For each $a \in \pr_{<n}(A)$, by the discussion in Remark~\ref{contr:uni}, there is an $a$-definable finite subset $V_{a}$ of $\rv(A)_{\rv(a)}$ such that the assertion holds for the restriction $f \upharpoonright (A_a \smallsetminus V_{a}^\sharp)$. Let $A^* = \bigcup_{a \in \pr_{<n}(A)} a \times V_{a}^\sharp \subseteq A$. By Lemma~\ref{RV:bou:dim}, $\dim_{\RV}(\partial_{\RV} A^*) < n$ and hence $\dim_{\RV}(\rv(A^*)) < n$. Therefore, by Lemma~\ref{fn:alm:cont}, there is a definable open set $U \subseteq \ito(\rv(A) \smallsetminus \rv(A^*))$ that is as desired. \end{proof} Suppose that $f = (f_1, \ldots, f_m) : A \longrightarrow \OO^m$ is a sequence of definable $\upharpoonright$-contractible functions, where the set $A$ is as in Lemma~\ref{univar:der:contr}. Let $P(x_1, \ldots, x_m)$ be a partial differential operator with definable $\upharpoonright$-contractible coefficients $a_i : A \longrightarrow \OO$ and $P_{\downarrow_{\upharpoonright}}(x_1, \ldots, x_m)$ the corresponding operator with $\upharpoonright$-contracted coefficients $a_{i\downarrow_{\upharpoonright}} : \upharpoonright(A) \longrightarrow \K$. Note that both $P(f) : A \longrightarrow \OO$ and $P_{\downarrow_{\upharpoonright}}(f_{\downarrow_{\upharpoonright}}) : \upharpoonright(A) \longrightarrow \K$ are defined almost everywhere. By Lemma~\ref{univar:der:contr}, such an operator $P$ almost commutes with $\upharpoonright$: \begin{cor}\label{rv:op:comm} For almost all $t \in \rv(A)$ and all $a \in t^\sharp$, \[ \upharpoonright(P(f)(a)) = P_{\downarrow_{\upharpoonright}}(f_{\downarrow_{\upharpoonright}})(\upharpoonright(a)). \] \end{cor} \begin{cor} Let $U$, $V$ be definably connected subsets of $(\K^+)^n$ and $f : U^\sharp \longrightarrow V^\sharp$ a definable $\upharpoonright$-contractible function. Suppose that $f_{\downarrow_{\upharpoonright}} : U \longrightarrow V$ is continuous and locally injective.
Then there is a definable subset $U^* \subseteq U$ of $\RV$-dimension $< n$ such that the sign of $\jcb_{\VF} f$ is constant on $(U \smallsetminus U^*)^\sharp$. \end{cor} \begin{proof} This follows immediately from Corollary~\ref{rv:op:comm} and \cite[Theorem~3.2]{pet:star:otop}. \end{proof} \begin{lem}\label{atom:type} In $\xmdl$, let $\mathfrak{a} \subseteq \VF$ be an atomic subset and $f : \mathfrak{a} \longrightarrow \VF$ a definable injection. Then $\mathfrak{a}$ and $f(\mathfrak{a})$ must be of the same one of the four possible forms (see Remark~\ref{rem:type:atin}). \end{lem} \begin{proof} This is trivial if $\mathfrak{a}$ is a point. The case of $\mathfrak{a}$ being an open disc is covered by Lemma~\ref{open:rv:cons}. So we only need to show that if $\mathfrak{a}$ is a closed disc then $f(\mathfrak{a})$ cannot be a half thin annulus. We shall give two proofs. The first one works only when $T$ is polynomially bounded, but is more intuitive and much simpler. Suppose that $T$ is polynomially bounded. Suppose for contradiction that $\code \mathfrak{a}$ is of the form $\tor(\goedel \mathfrak{m})$ for some $\goedel \mathfrak{m} \in \RV_{\gamma}$ and $\goedel{f(\mathfrak{a})}$ is of the form $\tor^+(\goedel \mathfrak{n})$ for some $\goedel \mathfrak{n} \in \RV_{\delta}$. By Lemma~\ref{open:pro} and monotonicity, $f$ induces an increasing (or decreasing, which can be handled similarly) bijection $f_{\downarrow} : \tor(\goedel \mathfrak{m}) \longrightarrow \tor^+(\goedel \mathfrak{n})$. In fact, for all $p \in \mathds{N}$, \[ \tfrac{d^p}{d x^p} f_{\downarrow} : \tor(\goedel \mathfrak{m}) \longrightarrow \tor^{+}(\delta - p \gamma) \] cannot be constant and hence must be continuous, surjective, and increasing. Using additional parameters, we can translate $f_{\downarrow}$ into a function $\K \longrightarrow \K^+$; by elementary differential calculus, this function cannot be polynomially bounded, which is a contradiction. We move on to the second proof.
The argument is essentially the same as that in the proof of \cite[Lemma~3.45]{hrushovski:kazhdan:integration:vf}. Consider the group \[ G \coloneqq \aut(\tor(\goedel \mathfrak{m}) / \K) \leq \aut(\xmdl / \K). \] Suppose for contradiction that $G$ is finite. Since every $G$-orbit is finite, every point in $\tor(\goedel \mathfrak{m})$ is $\K$-definable. It follows that there exists a nonconstant definable function $\tor(\goedel \mathfrak{m}) \longrightarrow \K$. But this is not possible since $\mathfrak{a}$ is atomic. So $G$ is infinite. Let $\Lambda$ be the group of affine transformations of $\K$, that is, $\Lambda = \K^{\times} \ltimes \K$, where the first factor is the multiplicative group of $\K$ and the second the additive group of $\K$. Every automorphism in $G$ is a $\K$-affine transformation of $\tor(\goedel \mathfrak{m})$ and hence $G$ is a subgroup of $\Lambda$. For each $\K$-definable relation $\phi$ on $\tor(\goedel \mathfrak{m})$, let $G_{\phi} \subseteq \Lambda$ be the definable subgroup of $\K$-affine transformations that preserve $\phi$. So $G = \bigcap_{\phi} G_{\phi}$. Since there is no infinite descending chain of definable subgroups of $\Lambda$, we see that $G$ is actually an infinite definable group. Then we may choose two nontrivial automorphisms $g, g' \in G$ whose fixed points are distinct. It follows that the commutator of $g$, $g'$ is a translation and hence, by $o$\nobreakdash-minimality, $G$ contains all the translations, that is, $\K \leq G$. By a similar argument, every automorphism in $H \coloneqq \aut(\tor^+(\goedel \mathfrak{n}) / \K)$ is a $\K$-linear transformation of $\tor^+(\goedel \mathfrak{n})$ and hence $H = \K^+ \leq \K^{\times}$. Now any definable bijection between $\tor(\goedel \mathfrak{m})$ and $\tor^+(\goedel \mathfrak{n})$ would induce a definable group isomorphism $\K \longrightarrow \K^+$, that is, an exponential function, which of course contradicts the assumption that $T$ is power-bounded.
\end{proof} \begin{defn}[$\vv$-affine and $\rv$-affine]\label{rvaffine} Let $\mathfrak{a}$ be an open disc and $f : \mathfrak{a} \longrightarrow \VF$ an injection. We say that $f$ is \emph{$\vv$-affine} if there is a (necessarily unique) $\gamma \in \Gamma$, called the \emph{shift} of $f$, such that, for all $a, a' \in \mathfrak{a}$, \[ \abval(f(a) - f(a')) = \gamma + \abval(a - a'). \] We say that $f$ is \emph{$\rv$-affine} if there is a (necessarily unique) $t \in \RV$, called the \emph{slope} of $f$, such that, for all $a, a' \in \mathfrak{a}$, \[ \rv(f(a) - f(a')) = t \rv(a - a'). \] \end{defn} Obviously $\rv$-affine implies $\vv$-affine. For instance, an affine function $x \longmapsto cx + d$ with $c \neq 0$ is $\rv$-affine with slope $\rv(c)$, since $\rv$ is multiplicative. With the extra structure afforded by the total ordering, we can reproduce (an analogue of) \cite[Lemma~3.18]{Yin:int:acvf} with a somewhat simpler proof: \begin{lem}\label{rv:lin} In $\xmdl$, let $f : \mathfrak{a} \longrightarrow \mathfrak{b}$ be a definable bijection between two atomic open discs. Then $f$ is $\rv$-affine and hence $\vv$-affine with shift $\rad(\mathfrak{b}) - \rad(\mathfrak{a})$. \end{lem} \begin{proof} Since $f$ has dtdp by Lemma~\ref{open:pro}, for all $\rad(\mathfrak{a}) < \delta$ and all \[ \mathfrak{d} \coloneqq \tor(\goedel \mathfrak{c}) \subseteq \rv_{\delta- \abval(\mathfrak{a})}(\mathfrak{a}), \] it induces a $\goedel \mathfrak{d}$-definable $C^1$ function $f_{\goedel \mathfrak{d}} : \mathfrak{d} \longrightarrow \tor(\goedel{f(\mathfrak{c})})$. The codomain of its derivative $\tfrac{d}{d x} f_{\goedel \mathfrak{d}}$ can be narrowed down to either $\tor^+(\epsilon - \delta)$ or $\tor^{-}(\epsilon - \delta)$, where $\epsilon = \rad(f(\mathfrak{c}))$. By Lemma~\ref{open:rv:cons}, there is a $t \in \RV$ such that $\tfrac{d}{d x} f(\mathfrak{a}) \subseteq t^\sharp$. By Lemma~\ref{atom:gam}, $\mathfrak{a}$ remains atomic over $\delta$.
Then, by (an accordingly modified version of) Remark~\ref{contr:uni}, we must have that, for all $\mathfrak{d}$ as above, all $\goedel \mathfrak{c} \in \mathfrak{d}$, and all $a \in \mathfrak{c}$, \[ \tfrac{d}{d x} f_{\goedel \mathfrak{d}}(\goedel \mathfrak{c}) = \rv(\tfrac{d}{d x} f(a)) = t \] and hence \[ \aff_{\goedel{f(\mathfrak{c})}} \circ f_{\goedel \mathfrak{d}} \circ \aff^{-1}_{\goedel \mathfrak{c}} : \tor(\delta) \longrightarrow \tor(\epsilon) \] is a linear function given by $u \longmapsto tu$ (see Definition~\ref{rem:tor:der} for the notation). It follows that, for \begin{itemize} \item $a$ and $a'$ in $\mathfrak{a}$, \item $\mathfrak{d}$ the smallest closed disc containing $a$ and $a'$, \item $\mathfrak{c}$ and $\mathfrak{c}'$ the maximal open subdiscs of $\mathfrak{d}$ containing $a$ and $a'$, respectively, \end{itemize} we have \[ \rv(f(a) - f(a')) = \rv(f(\mathfrak{c}) - f(\mathfrak{c}')) = t \rv(\mathfrak{c} - \mathfrak{c}') = t \rv(a - a'). \] That is, $f$ is $\rv$-affine. Moreover, it is clear from dtdp that $\abvrv(t) = \rad(\mathfrak{b}) - \rad(\mathfrak{a})$. \end{proof} \section{Grothendieck semirings}\label{sect:groth} In this section, we define various categories of definable sets and explore the relations between their Grothendieck semirings. The first main result is that the Grothendieck semiring $\gsk \RV[*]$ of the $\RV$-category $\RV[*]$ can be naturally expressed as a tensor product of the Grothendieck semirings of two of its full subcategories $\RES[*]$ and $\Gamma[*]$. The second main result is that there is a natural surjective semiring homomorphism from $\gsk \RV[*]$ onto the Grothendieck semiring $\gsk \VF_*$ of the $\VF$-category $\VF_*$. \begin{hyp}\label{hyp:gam} By (the proof of) Lemma~\ref{S:def:cl}, every definable set in $\RV$ contains a definable point if and only if $\Gamma(\mdl S) \neq \pm 1$. Thus, from now on, we shall assume that $\Gamma(\mdl S)$ is nontrivial. 
\end{hyp} \subsection{The categories of definable sets} As in Definition~\ref{defn:dtdp}, an $\RV$-fiber of a definable set $A$ is a set of the form $A_a$, where $a \in A_{\VF}$. The $\RV$-fiber dimension of $A$ is the maximum of the $\RV$-dimensions of its $\RV$-fibers and is denoted by $\dim^{\fib}_{\RV}(A)$. \begin{lem}\label{RV:fiber:dim:same} Suppose that $f : A \longrightarrow A'$ is a definable bijection. Then $\dim^{\fib}_{\RV}(A) = \dim^{\fib}_{\RV} (A')$. \end{lem} \begin{proof} Let $\dim^{\fib}_{\RV}(A) = k$ and $\dim^{\fib}_{\RV}(A') = k'$. For each $a \in \pr_{\VF}(A)$, let $h_{a} : A_a \longrightarrow A'_{\VF}$ be the $a$-definable function induced by $f$ and $\pr_{\VF}$. By Corollary~\ref{function:rv:to:vf:finite:image}, the image of $h_{a}$ is finite. It follows that $k \leq k'$. Symmetrically we also have $k \geq k'$ and hence $k = k'$. \end{proof} \begin{defn}[$\VF$-categories]\label{defn:VF:cat} The objects of the category $\VF[k]$ are the definable sets of $\VF$-dimension $\leq k$ and $\RV$-fiber dimension $0$ (that is, all the $\RV$-fibers are finite). Any definable bijection between two such objects is a morphism of $\VF[k]$. Set $\VF_* = \bigcup_k \VF[k]$. \end{defn} \begin{defn}[$\RV$-categories]\label{defn:c:RV:cat} The objects of the category $\RV[k]$ are the pairs $(U, f)$ with $U$ a definable set in $\RVV$ and $f : U \longrightarrow \RV^k$ a definable finite-to-one function. Given two such objects $(U, f)$, $(V, g)$, any definable bijection $F : U \longrightarrow V$ is a \emph{morphism} of $\RV[k]$. \end{defn} Set $\RV[{\leq} k] = \bigoplus_{i \leq k} \RV[i]$ and $\RV[*] = \bigoplus_{k} \RV[k]$; similarly for the other categories below. \begin{nota}\label{0coor} We emphasize that if $(U, f)$ is an object of $\RV[k]$ then $f(U)$ is a subset of $\RV^k$ instead of $\RV_0^k$, while $0$ can occur in any coordinate of $U$. An object of $\RV[*]$ of the form $(U, \id)$ is often written as $U$. 
More generally, if $f : U \longrightarrow \RV_0^k$ is a definable finite-to-one function then $(U, f)$ denotes the obvious object of $\RV[{\leq} k]$. Often $f$ will be a coordinate projection (every object in $\RV[*]$ is isomorphic to an object of this form). In that case, $(U, \pr_{\leq k})$ is simply denoted by $U_{\leq k}$ and its class in $\gsk \RV[k]$ by $[U]_{\leq k}$, etc. \end{nota} \begin{rem}\label{fintoone} Alternatively, we could allow only injections instead of finite-to-one functions in defining the objects of $\RV[k]$. Insofar as the Grothendieck semigroup $\gsk \RV[k]$ is concerned, this is not more restrictive in our setting since for any $\bm U \coloneqq (U, f) \in \RV[k]$ there is a definable finite partition $\bm U_i \coloneqq (U_i, f_i)$ of $\bm U$, in other words, $[\bm U] = \sum_i [\bm U_i]$ in $\gsk \RV[k]$, such that each $f_i$ is injective. It is technically more convenient to work with finite-to-one functions, though (for instance, we can take finite disjoint unions). \end{rem} In the above definitions and other similar ones below, all morphisms are actually isomorphisms and hence the categories are all groupoids. For the case $k = 0$, expressions such as $\RV^0$, and their interactions with other objects, should be interpreted in the natural way. For instance, $\RV^0$ may be treated as the empty tuple. So the categories $\VF[0]$, $\RV[0]$ are equivalent. About the position of ``$*$'' in the notation: ``$\VF_*$'' suggests that the category is filtered and ``$\RV[*]$'' suggests that the category is graded. \begin{defn}[$\RES$-categories]\label{defn:RES:cat} The category $\RES[k]$ is the full subcategory of $\RV[k]$ such that $(U, f) \in \RES[k]$ if and only if $\vrv(U)$ is finite. \end{defn} \begin{rem}[Explicit description of ${\gsk \RES[k]}$]\label{expl:res} Let $\RES$ be the category whose objects are the definable sets $U$ in $\RVV$ with $\vrv(U)$ finite and whose morphisms are the definable bijections.
The obvious forgetful functor $\RES[*] \longrightarrow \RES$ induces a surjective semiring homomorphism $\gsk \RES[*] \longrightarrow \gsk \RES$, which is clearly not injective. The semiring $\gsk \RES$ is actually generated by isomorphism classes $[U]$ with $U$ a set in $\K^+$. By Theorem~\ref{groth:omin}, we have the following explicit description of $\gsk \RES$. Its underlying set is $(0 \times \mathds{N}) \cup (\mathds{N}^+ \times \mathds{Z})$. For all $(a, b), (c, d) \in \gsk \RES$, \[ (a, b) + (c, d) = (\max\{a, c\}, b+d), \quad (a, b) \times (c, d) = (a + c, b \times d). \] By the computation in \cite{kage:fujita:2006}, the dimensional part is lost in the groupification $\ggk \RES$ of $\gsk \RES$, that is, $\ggk \RES = \mathds{Z}$, which is of course much simpler than $\gsk \RES$. However, following the philosophy of \cite{hrushovski:kazhdan:integration:vf}, we shall work with Grothendieck semirings whenever possible. By Lemma~\ref{gk:ortho}, if $(U, f) \in \RES[*]$ then $\vrv(f(U))$ is finite as well. Therefore the semiring $\gsk \RES[*]$ is generated by isomorphism classes $[(U, f)]$ with $f$ a bijection between two sets in $\K^+$. As above, each $\gsk \RES[k]$ may be described explicitly as well. The semigroup $\gsk \RES[0]$ is canonically isomorphic to the semiring $(0, 0) \times \mathds{N}$. For $k > 0$, the underlying set of $\gsk \RES[k]$ is $\bigcup_{0 \leq i \leq k}((k, i) \times \mathds{Z})$, and its semigroup operation is given by \[ (k, i, a) + (k, i', a') = (k, \max\{i, i'\}, a + a'). \] Moreover, multiplication in $\gsk \RES[*]$ is given by \[ (k, i, a) \times (l, j, b) = (k+l, i + j, a \times b). \] \end{rem} \begin{defn}[$\Gamma$-categories]\label{def:Ga:cat} The objects of the category $\Gamma[k]$ are the finite disjoint unions of definable subsets of $\Gamma^k$. Any definable bijection between two such objects is a \emph{morphism} of $\Gamma[k]$. 
The category $\Gamma^{c}[k]$ is the full subcategory of $\Gamma[k]$ such that $I \in \Gamma^{c}[k]$ if and only if $I$ is finite. \end{defn} Clearly $\gsk \Gamma^c[k]$ is naturally isomorphic to $\mathds{N}$ for all $k$ and hence $\gsk \Gamma^c[*] \cong \mathds{N}[X]$. \begin{nota}\label{nota:RV:short} We introduce the following shorthand for distinguished elements in the various Grothendieck semigroups and their groupifications (and closely related constructions): \begin{gather*} \bm 1_{\K} = [\{1\}] \in \gsk \RES[0], \quad [1] = [(\{1\}, \id)] \in \gsk \RES[1],\\ [\bm T] = [(\K^+, \id)] \in \gsk \RES[1], \quad [\bm A] = 2 [\bm T] + [1] \in \gsk \RES[1],\\ \bm 1_{\Gamma} = [\Gamma^0] \in \gsk \Gamma[0], \quad [e] = [\{1\}] \in \gsk \Gamma[1], \quad [\bm H] = [(0,1)] \in \gsk \Gamma[1],\\ [\bm P] = [(\RV^{\circ \circ}, \id)] - [1] \in \ggk \RV[1]. \end{gather*} Here $\RV^{\circ \circ} = \RV^{\circ \circ}_0 \smallsetminus 0$. Note that the interval $\bm H$ is formed in the signed value group $\Gamma$, whose ordering is inverse to that of the value group $\abs \Gamma_\infty$ (recall Remark~\ref{signed:Gam}). The interval $(1, \infty) \subseteq \Gamma$ is denoted by $\bm H^{-1}$. As in~\cite{hrushovski:kazhdan:integration:vf}, the elements $[\bm P]$ and $\bm 1_{\K} + [\bm P]$ in $\ggk \RV[*]$ play special roles in the main construction (see Proposition~\ref{kernel:L} and the remarks thereafter). \end{nota} The following lemma is a generality proven elsewhere. It is only needed to prove Lemma~\ref{gam:pulback:mono}. \begin{lem}\label{gen:mat:inv} Let $K$ be an integral domain and $M$ a torsion-free $K$-module, the latter viewed as the main sort of a first-order structure of some expansion of the usual $K$-module language.
Let $\mathfrak{F}$ be a class of definable functions in the sort $M$ such that \begin{itemize} \item all the identity functions are in $\mathfrak{F}$, \item all the functions in $\mathfrak{F}$ are definably piecewise $K$-linear, that is, they are definably piecewise of the form $x \longmapsto N x + c$, where $N$ is a matrix with entries in $K$ and $c$ is a definable point, \item $\mathfrak{F}$ is closed under composition, inversion, composition with $\mgl(K)$-transformations ($K$-linear functions with invertible matrices), and composition with coordinate projections. \end{itemize} If $g : D \longrightarrow E$ is a bijection in $\mathfrak{F}$, where $D, E \subseteq M^n$, then $g$ is definably a piecewise $\mgl_n(K)$-transformation. \end{lem} \begin{proof} See \cite[Lemma~2.29]{Yin:int:expan:acvf}. \end{proof} \begin{lem}\label{gam:pulback:mono} Let $g$ be a $\Gamma[k]$-morphism. Then $g$ is definably a piecewise $\mgl_k(\mathds{K})$-transformation modulo the sign, that is, a piecewise $\mgl_k(\mathds{K}) \times \mathds{Z}_2$-transformation. Consequently, $g$ is a $\vrv$-contraction (recall Definition~\ref{defn:corr:cont}). \end{lem} \begin{proof} For the first claim, it is routine to check that Lemma~\ref{gen:mat:inv} is applicable to the class of definable functions in the $\abs \Gamma$-sort. The second claim follows from the fact that the natural actions of $\mgl_k(\mathds{K})$ on $(\RV^+)^k$ and $(\Gamma^+)^k$ commute with the map $\vrv$. \end{proof} \begin{rem}\label{why:glz} In \cite{hrushovski:kazhdan:integration:vf}, $\Gamma[k]$-morphisms are by definition piecewise $\mgl_k(\mathds{Z})$-transformations. This is because, in the setting there, the $\vrv$-contractions are precisely the piecewise $\mgl_k(\mathds{Z})$-transformations, which form a proper subclass of definable bijections in the $\Gamma$-sort, which in general are piecewise $\mgl_k(\mathds{Q})$-transformations.
\end{rem} \begin{lem}\label{G:red} For all $I \in \Gamma[k]$ there are finitely many definable sets $H_i \subseteq \Gamma^{n_i}$ with $\dim_{\Gamma}(H_i) = n_i \leq k$ such that $[I] = \sum_i [H_i] [e]^{k -n_i}$ in $\gsk \Gamma[k]$. \end{lem} \begin{proof} We do induction on $k$. The base case $k = 0$ is trivial. For the inductive step $k > 0$, the claim is also trivial if $\dim_{\Gamma}(I) = k$; so let us assume that $\dim_{\Gamma}(I) < k$. By \cite[Theorem~B]{Dries:tcon:97}, we may partition $I$ into finitely many definable pieces $I_i$ such that each $I_i$ is the graph of a definable function $I'_i \longrightarrow \Gamma$, where $I'_i \in \Gamma[k-1]$. So the claim simply follows from the inductive hypothesis. \end{proof} \begin{rem}\label{gam:res} There is a natural map $\Gamma[*] \longrightarrow \RV[*]$ given by $I \longmapsto \bm I \coloneqq (I^\sharp, \id)$ (see Notation~\ref{gamma:what}). By Lemma~\ref{gam:pulback:mono}, this map induces a homomorphism $\gsk \Gamma[*] \longrightarrow \gsk \RV[*]$ of graded semirings. By \cite[Theorem~A]{Dries:tcon:97} and Theorem~\ref{groth:omin}, this homomorphism restricts to an injective homomorphism $\gsk \Gamma^{c}[*] \longrightarrow \gsk \RES[*]$ of graded semirings. There is also a similar semiring homomorphism $\gsk \Gamma^c[*] \longrightarrow \gsk \RES$, but it is not injective. \end{rem} \begin{ques} Is the homomorphism $\gsk \Gamma[*] \longrightarrow \gsk \RV[*]$ above injective? \end{ques} Now, the map from $\gsk \RES[*] \times \gsk \Gamma[*]$ to $\gsk \RV[*]$ naturally determined by the assignment \[ ([(U, f)], [I]) \longmapsto [(U \times I^\sharp, f \times \id)] \] is well-defined and is clearly $\gsk \Gamma^{c}[*]$-bilinear. Hence it induces a $\gsk \Gamma^{c}[*]$-linear map \[ \bb D: \gsk \RES[*] \otimes_{\gsk \Gamma^{c}[*]} \gsk \Gamma[*] \longrightarrow \gsk \RV[*], \] which is a homomorphism of graded semirings. We shall abbreviate ``$\otimes_{\gsk \Gamma^{c}[*]}$'' as ``$\otimes$'' below. 
Note that, by the universal mapping property, groupifying a tensor product in the category of $\gsk \Gamma^{c}[*]$-semimodules is the same, up to isomorphism, as taking the corresponding tensor product in the category of $\ggk \Gamma^{c}[*]$-modules. We will show that $\bb D$ is indeed an isomorphism of graded semirings. \subsection{The tensor expression} Heuristically, $\RV$ may be viewed as a union of infinitely many one-dimensional vector spaces over $\K$. Weak $o$\nobreakdash-minimality states that every definable subset of $\RV$ is nontrivial only within finitely many such one-dimensional spaces. The tensor expression of $\gsk \RV[*]$ we seek may be thought of as a generalization of this phenomenon to all definable sets in $\RV$. \begin{lem}\label{resg:decom} Let $A \subseteq \RV^k \times \Gamma^l$ be an $\alpha$-definable set, where $\alpha \in \Gamma$. Set $\pr_{\leq k}(A) = U$ and suppose that $\vrv(U)$ is finite. Then there is an $\alpha$-definable finite partition $U_i$ of $U$ such that, for each $i$ and all $t, t' \in U_i$, we have $A_t = A_{t'}$. \end{lem} \begin{proof} By stable embeddedness, for every $t \in U$, $A_t$ is $(\vrv(t), \alpha)$-definable in the $\Gamma$-sort alone. Since $\vrv(U)$ is finite, the assertion simply follows from compactness. \end{proof} \begin{lem}\label{gam:tup:red} Let $\beta$, $\gamma = (\gamma_1, \ldots, \gamma_m)$ be finite tuples in $\Gamma$. If there is a $\beta$-definable nonempty proper subset of $\gamma^\sharp$ then, for some $\gamma_i$ and every $t \in \gamma^\sharp_{\tilde i}$, $\gamma^\sharp_i$ contains a $t$-definable point. Consequently, if $U$ is such a subset of $\gamma^\sharp$ then either $U$ contains a definable point or there exists a subtuple $\gamma_* \subseteq \gamma$ such that $\pr_{\gamma_*}(U) = \gamma^\sharp_*$, where $\pr_{\gamma_*}$ denotes the obvious coordinate projection, and there is a $\beta$-definable function from $\gamma^\sharp_*$ into $(\gamma \smallsetminus \gamma_*)^\sharp$. 
\end{lem} \begin{proof} For the first claim we do induction on $m$. The base case $m = 1$ simply follows from $o$\nobreakdash-minimality in the $\K$-sort and Lemma~\ref{RV:no:point}. For the inductive step $m > 1$, let $U$ be a $\beta$-definable nonempty proper subset of $\gamma^\sharp$. By the inductive hypothesis, we may assume \[ \{ t \in \pr_{>1}(U) : U_t \neq \gamma^\sharp_1\} = \gamma^\sharp_{> 1}. \] Then $\gamma_1$ is as desired. The second claim follows easily from the first. \end{proof} \begin{lem}\label{RV:decom:RES:G} Let $U \subseteq \RV^m$ be a definable set. Then there are finitely many definable sets of the form $V_i \times D_i \subseteq (\K^+)^{k_i} \times \Gamma^{l_i}$ such that $k_i + l_i = m$ for all $i$ and $[U] = \sum_i [V_i \times D_i^\sharp]$ in $\gsk \RV[*]$. \end{lem} \begin{proof} The case $m=1$ is an immediate consequence of weak $o$\nobreakdash-minimality in the $\RV$-sort. For the case $m>1$, by Lemma~\ref{gam:tup:red}, compactness, and a routine induction on $m$, over a definable finite partition of $U$, we may assume that $U$ is a union of sets of the form $t \times \gamma^\sharp$, where $t \in (\K^+)^k$, $\gamma \in \Gamma^l$, and $k+l=m$. Then the assertion follows from Lemma~\ref{resg:decom}. \end{proof} Let $Q$ be a set of parameters in $\mdl R^{\bullet}_{\rv}$. We say that a $Q$-definable set $I \subseteq \Gamma^m$ is \emph{$Q$-reducible} if $I^\sharp$ is $Q$-definably bijective to $\K^+ \times I_{\tilde i}^\sharp$, where $i \in [m]$ and $I_{\tilde i} = \pr_{\tilde i}(I)$. 
For every $t \in (\K^+)^{n}$ and every $\alpha \in \Gamma^m$, the following are equivalent: \begin{itemize} \item $\alpha$ is $(t,\alpha)$-reducible; \item there is a $(t,\alpha)$-definable nonempty proper subset of $\alpha^\sharp$ (by Lemma~\ref{gam:tup:red}); \item there is an $\alpha$-definable set $U \subseteq (\K^+)^{n}$ containing $t$ such that $\alpha$ is $(u,\alpha)$-reducible for every $u \in U$ (by Lemma~\ref{gam:tup:red} again); \item $\alpha$ is $\alpha$-reducible (by $o$\nobreakdash-minimality in the $\K$-sort and Lemma~\ref{RV:no:point}). \end{itemize} We say that a definable set $A$ in $\RV$ is \emph{$\Gamma$-tamped} of \emph{height} $l$ if there are $U \in \RES[k]$ and $I \in \Gamma[l]$ with $\dim_{\Gamma}(I) = l$ such that $A = U \times I^\sharp$. In that case, there is only one way to write $A$ as such a product, and if $B = V \times J^\sharp \subseteq A$ is also $\Gamma$-tamped then the coordinates occupied by $J^\sharp$ are also occupied by $I^\sharp$; in particular, $\dim_{\Gamma}(J) = l$ if and only if $V \subseteq U$ and $J \subseteq I$. \begin{lem}\label{Gtamp} Let $A = U \times I^\sharp$, $B = V \times J^\sharp$ be $\Gamma$-tamped sets of the same height $l$, where $U$, $V$ are sets in $\K^+$. Let $f$ be a definable bijection whose domain contains $A$ and whose range contains $B$. Suppose that $B \smallsetminus f(A)$, $A \smallsetminus f^{-1}(B)$ do not have $\Gamma$-tamped subsets of height $l$.
Then there are finitely many $\Gamma$-tamped sets $A_i = U_i \times I_i^\sharp \subseteq U \times I^\sharp$ and $B_i = V_i \times J_i^\sharp \subseteq V \times J^\sharp$ such that \begin{itemize} \item $A \smallsetminus \bigcup_i A_i$ and $B \smallsetminus \bigcup_i B_i$ do not have $\Gamma$-tamped subsets of height $l$, \item each restriction $f \upharpoonright A_i$ is of the form $p_i \times q_i$, where $p_i : U_i \longrightarrow V_i$, $q_i : I_i^\sharp \longrightarrow J_i^\sharp$ are bijections and the latter $\vrv$-contracts to a $\Gamma[*]$-morphism $q_{i \downarrow} : I_i \longrightarrow J_i$. \end{itemize} \end{lem} \begin{proof} Let $t \times \alpha^\sharp \subseteq A$. If $t \times \alpha^\sharp \subseteq A \smallsetminus f^{-1}(B)$ then, by Lemma~\ref{resg:decom}, it is contained in a definable set $U' \times I'^\sharp \subseteq A \smallsetminus f^{-1}(B)$ with $U' \subseteq U$ and $I' \subseteq I$. Since $A \smallsetminus f^{-1}(B)$ does not have $\Gamma$-tamped subsets of height $l$, we must have $\dim_{\Gamma}(I') < l$. It follows from (the proof of) Lemma~\ref{gam:red:K} that $I'$ is piecewise reducible, which implies that $\alpha$ is $\alpha$-reducible. At any rate, if $\alpha$ is $(t,\alpha)$-reducible then $\alpha$ is $\alpha$-reducible and hence there is a reducible subset of $I$ that contains $\alpha$. Now remove all the reducible subsets of $I$ from $I$ and call the resulting set $\bar I$; similarly for $\bar J$. Then, for all $t \in U$ and all $\alpha \in \bar I$, $f(t \times \alpha^\sharp)$ must be contained in a set of the form $s \times \beta^\sharp$, for otherwise it would have a $(t,\alpha)$-definable nonempty proper subset and hence would be $(t,\alpha)$-reducible.
In fact, $f(t \times \alpha^\sharp) = s \times \beta^\sharp$, for otherwise $\beta$ is $(t,\alpha)$-reducible and hence, by $o$\nobreakdash-minimality in the $\K$-sort and the assumption $\dim_{\Gamma}(I) = \dim_{\Gamma}(J) = l$, a $(t,\alpha)$-definable subset of $\alpha^\sharp$ can be easily constructed. For the same reason, we must actually have $\beta \in \bar J$. It follows that $f(U \times \bar I^\sharp) = V \times \bar J^\sharp$. Then, by compactness, there are finitely many reducible subsets $I_i$ of $I$ such that, for all $t \in U$ and all $\alpha \in I_* = I \smallsetminus \bigcup_i I_i$, $f(t \times \alpha^\sharp) = s \times \beta^\sharp$ for some $s \in V$ and $\beta \in J$. Applying Lemma~\ref{resg:decom} to (the graph of) the function on $U \times I_*$ induced by $f$, the lemma follows. \end{proof} \begin{prop}\label{red:D:iso} $\bb D$ is an isomorphism of graded semirings. \end{prop} \begin{proof} Surjectivity of $\bb D$ follows immediately from Lemma~\ref{RV:decom:RES:G}. For injectivity, let $\bm U_i \coloneqq (U_i, f_i)$, $\bm V_j \coloneqq (V_j, g_j)$ be objects in $\RES[*]$ and $I_i$, $J_j$ objects in $\Gamma[*]$ such that $\bb D([\bm U_i] \otimes [I_i])$, $\bb D([\bm V_j] \otimes [J_j])$ are elements of $\gsk \RV[l]$ for all $i$, $j$. Set \[ \textstyle M_i = U_i \times I_i^\sharp, \quad N_j = V_j \times J_j^\sharp, \quad M = \biguplus_i M_i, \quad N = \biguplus_j N_j. \] Suppose that there is a definable bijection $f : M \longrightarrow N$. We need to show \[ \textstyle \sum_i [\bm U_i] \otimes [I_i] = \sum_j [\bm V_j] \otimes [J_j]. \] By Lemma~\ref{gam:red:K}, we may assume that all $M_{i}$, $N_{j}$ are $\Gamma$-tamped.
By $o$\nobreakdash-minimal cell decomposition, without changing the sums, we may assume that each $U_i$ is a disjoint union of finitely many copies of $(\K^+)^i$ and thereby re-index $M_i$ more informatively as $M_{i, m} = U_i \times I_m^\sharp$, where $I_m$ is an object in $\Gamma[m]$; similarly each $N_j$ is re-indexed as $N_{j, n}$. By Lemma~\ref{dim:cut:gam}, the respective maxima of the numbers $i+m$, $j+n$ are the $\RV$-dimensions of $M$, $N$ and hence must be equal; denote this common value by $p$. Let $q$ be the largest $m$ such that $i + m = p$ for some $M_{i, m}$ and $q'$ the largest $n$ such that $j + n = p$ for some $N_{j, n}$. It is not hard to see that we may arrange $q = q'$. We now proceed by induction on $q$. The base case $q=0$ is rather trivial. For the inductive step, by Lemma~\ref{Gtamp}, we see that certain products contained in $M_{p-q, q}$, $N_{p-q, q}$ give rise to the same sum and the inductive hypothesis may be applied to the remaining portions. \end{proof} We may view $\Gamma$ as a double cover of $\abs \Gamma$ via the identification $\Gamma / {\pm 1} = \abs \Gamma$. Consequently, we can associate two Euler characteristics $\chi_{\Gamma,g}$, $\chi_{\Gamma, b}$ with the $\Gamma$-sort, induced by those on $\abs \Gamma$ (see \cite{kage:fujita:2006} and also~\cite[\S~9]{hrushovski:kazhdan:integration:vf}). They are distinguished by \[ \chi_{\Gamma, g}(\bm H) = \chi_{\Gamma, g}(\bm H^{-1}) = -1 \quad \text{and} \quad \chi_{\Gamma, b}(\bm H) = \chi_{\Gamma, b}(\bm H^{-1}) = 0. \] Similarly, there is an Euler characteristic $\chi_{\K}$ associated with the $\K$-sort (there is only one). We shall denote all of these Euler characteristics simply by $\chi$ if no confusion can arise. Using these $\chi$ and the groupification of $\bb D$ (also denoted by $\bb D$), we can construct various retractions from the Grothendieck ring $\ggk \RV[*]$ to (certain localizations of) the Grothendieck rings $\ggk \RES[*]$ and $\ggk \Gamma[*]$.
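To see concretely how $\chi_{\Gamma, g}$ and $\chi_{\Gamma, b}$ differ, here is a minimal worked example, assuming only the standard properties of an $o$\nobreakdash-minimal Euler characteristic (additivity over disjoint definable sets and the value $1$ on singletons): for any definable point $\gamma \in \Gamma$,
\[
\chi_{\Gamma, g}(\bm H \uplus \{\gamma\} \uplus \bm H^{-1}) = -1 + 1 - 1 = -1
\quad \text{and} \quad
\chi_{\Gamma, b}(\bm H \uplus \{\gamma\} \uplus \bm H^{-1}) = 0 + 1 + 0 = 1,
\]
so the two invariants disagree already on this simple one-dimensional set.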
\begin{lem}\label{gam:euler} The Euler characteristics naturally induce three graded ring homomorphisms: \[ \mdl E_{\K} : \ggk \RES[*] \longrightarrow \mathds{Z}[X] \quad \text{and} \quad \mdl E_{\Gamma, g}, \mdl E_{\Gamma, b} : \ggk \Gamma[*] \longrightarrow \mathds{Z}[X]. \] \end{lem} \begin{proof} For $U \in \RES[k]$ and $I \in \Gamma[k]$, we set $\mdl E_{\K, k}([U]) = \chi(U)$ (see Remark~\ref{omin:res}) and $\mdl E_{\Gamma, k}([I]) = \chi(I)$, where $\chi$ stands for $\chi_{\Gamma, g}$ or $\chi_{\Gamma, b}$, yielding $\mdl E_{\Gamma, g}$ or $\mdl E_{\Gamma, b}$, respectively. These maps are well-defined and they induce graded ring homomorphisms $\mdl E_{\K} \coloneqq \sum_k \mdl E_{\K, k} X^k$ and $\mdl E_{\Gamma} \coloneqq \sum_k \mdl E_{\Gamma, k} X^k$ as desired. \end{proof} By the computation in \cite{kage:fujita:2006}, $\ggk \Gamma[*]$ is canonically isomorphic to the graded ring \[ \textstyle \mathds{Z}[X, Y^{(2)}] \coloneqq \mathds{Z} \oplus \bigoplus_{i \geq 1} (\mathds{Z}[Y]/(Y^2+Y))X^i, \] where $YX$ represents the class $[\bm H] = [\bm H^{-1}]$ in $\ggk \Gamma[1]$. Thus $\mdl E_{\Gamma, g}$, $\mdl E_{\Gamma, b}$ are also given by \[ \mathds{Z}[X, Y^{(2)}] \two^{Y \longmapsto -1}_{Y \longmapsto 0} \mathds{Z}[X]. \] \begin{rem}[Explicit description of ${\ggk \RV[*]}$]\label{rem:poin} Of course, $\mdl E_{\K}$ is actually an isomorphism. The homomorphism $\gsk \Gamma^{c}[*] \longrightarrow \gsk \RES[*]$ in Remark~\ref{gam:res} and $\mdl E_{\K}$ then induce an isomorphism $\mdl E_{\K^c} : \ggk \Gamma^{c}[*] \longrightarrow \mathds{Z}[X]$. But this isomorphism is different from the groupification $\mdl E_{\Gamma^c}$ of the canonical isomorphism $\gsk \Gamma^{c}[*] \cong \gsk \mathds{N}[*]$. This latter isomorphism $\mdl E_{\Gamma^c}$ is also induced by $\mdl E_{\Gamma, g}$, $\mdl E_{\Gamma, b}$ (the two homomorphisms agree on $\ggk \Gamma^{c}[*]$). They are distinguished by $\mdl E_{\K^c}([e]) = -X$ and $\mdl E_{\Gamma^c}([e]) = X$.
We have a commutative diagram \[ \bfig \hSquares(0,0)/<-`->`->`->`->`<-`->/[{\ggk \RES[*]}`{\ggk \Gamma^{c}[*]}`{\ggk \Gamma[*]}`\mathds{Z}[X]`\mathds{Z}[X]`{\mathds{Z}[X, Y^{(2)}]}; ``\mdl E_{\K}`\mdl E_{\Gamma^c}`\cong`\tau`] \efig \] where $\tau$ is the involution determined by $X \longmapsto -X$. The graded ring \[ \mathds{Z}[X] \otimes_{\mathds{Z}[X]} \mathds{Z}[X, Y^{(2)}] \] may be identified with $\mathds{Z}[X, Y^{(2)}]$ via the isomorphism given by $x \otimes y \longmapsto \tau(x)y$. Consequently, by Proposition~\ref{red:D:iso}, there is a graded ring isomorphism \[ \ggk \RV[*] \to^{\sim} \mathds{Z}[X, Y^{(2)}] \quad \text{with} \quad \bm 1_{\K} + [\bm P] \longmapsto 1 + 2YX + X. \] Setting \[ \mathds{Z}^{(2)}[X] = \mathds{Z}[X, Y^{(2)}] / (1 + 2YX + X), \] we see that there is a canonical ring isomorphism \[ \bb E_{\Gamma}: \ggk \RV[*] / (\bm 1_{\K} + [\bm P]) \to^{\sim} \mathds{Z}^{(2)}[X]. \] There are exactly two ring homomorphisms $\mathds{Z}^{(2)}[X] \longrightarrow \mathds{Z}$ determined by the assignments $Y \longmapsto -1$ and $Y \longmapsto 0$ or, equivalently, $X \longmapsto 1$ and $X \longmapsto -1$. Combining these with $\bb E_{\Gamma}$, we see that there are exactly two ring homomorphisms \[ \bb E_{\Gamma,g}, \bb E_{\Gamma,b}: \ggk \RV[*] / (\bm 1_{\K} + [\bm P]) \longrightarrow \mathds{Z}. \] \end{rem} \begin{prop}\label{prop:eu:retr:k} There are two ring homomorphisms \[ \bb E_{\K, g}: \ggk \RV[*] \longrightarrow \ggk \RES[*][[\bm A]^{-1}] \quad \text{and} \quad \bb E_{\K, b}: \ggk \RV[*] \longrightarrow \ggk \RES[*][[1]^{-1}] \] such that \begin{itemize} \item their ranges are precisely the zeroth graded pieces of their respective codomains, \item $\bm 1_{\K} + [\bm P]$ vanishes under both of them, \item for all $x \in \ggk \RES[k]$, $\bb E_{\K, g} (x) = x [\bm A]^{-k}$ and $\bb E_{\K, b}(x) = x [1]^{-k}$. 
\end{itemize} \end{prop} \begin{proof} We first define, for each $n$, a homomorphism \[ \bb E_{g, n}: \ggk \RV[n] \longrightarrow \ggk \RES[n] \] as follows. By Proposition~\ref{red:D:iso}, there is an isomorphism \[ \textstyle \bb D_n : \bigoplus_{i + j = n} \ggk \RES[i] \otimes \ggk \Gamma[j] \to^{\sim} \ggk \RV[n]. \] Let the group homomorphism $\mdl E_{g, j} : \ggk \Gamma[j] \longrightarrow \mathds{Z}$ be defined as in Lemma~\ref{gam:euler}, using $\chi_{\Gamma, g}$. Let \[ E_{g}^{i, j}: \ggk \RES[i] \otimes \ggk \Gamma[j] \longrightarrow \ggk \RES[i {+} j] \] be the group homomorphism determined by $x \otimes y \longmapsto \mdl E_{g, j}(y) x [\bm T]^{j}$. Let \[ \textstyle E_{g, n} = \sum_{i + j = n} E_{g}^{i, j} \quad \text{and} \quad \bb E_{g, n} = E_{g, n} \circ \bb D_n^{-1}. \] Note that, due to the presence of the tensor $\otimes_{\ggk \Gamma^{c}[*]}$ and the replacement of $y$ with $\mdl E_{g, j}(y) [\bm T]^{j}$, an issue of compatibility arises between the various components of $E_{g, n}$. In our setting, this is easily resolved since all definable bijections are allowed in $\Gamma[*]$ and hence $\gsk \Gamma^c[*]$ is generated by isomorphism classes of the form $[e]^k$. In the setting of \cite{hrushovski:kazhdan:integration:vf}, however, one has to pass to a quotient ring to achieve compatibility (see Remark~\ref{why:glz} and also \cite[\S~2.5]{hru:loe:lef}). Now, it is straightforward to check the equality \[ \bb E_{g, n}(x)\bb E_{g, m}(y) = \bb E_{g, n+m}(xy). \] The group homomorphisms $\tau_{m, k} : \ggk \RES[m] \longrightarrow \ggk \RES[m{+}k]$ given by $x \longmapsto x [\bm A]^k$ determine a colimit system and the group homomorphisms \[ \textstyle\bb E_{g, \leq n} \coloneqq \sum_{m \leq n} \tau_{m, n-m} \circ \bb E_{g, m} : \ggk \RV[{\leq} n] \longrightarrow \ggk \RES[n] \] determine a homomorphism of colimit systems.
Hence we have a ring homomorphism: \[ \colim{n} \bb E_{g, \leq n} : \ggk \RV[*] \longrightarrow \colim{\tau_{n, k}} \ggk \RES[n]. \] For all $n \geq 1$ we have \[ \bb E_{g, \leq n}(\bm 1_{\K} + [\bm P]) = [\bm A]^n - 2[\bm T][\bm A]^{n-1} - [1] [\bm A]^{n-1} = 0. \] This yields the desired homomorphism $\bb E_{\K, g}$ since the colimit in question can be embedded into the zeroth graded piece of $\ggk \RES[*][[\bm A]^{-1}]$. The construction of $\bb E_{\K, b}$ is completely analogous, with $[\bm A]$ replaced by $[1]$ and $\chi_{\Gamma, g}$ by $\chi_{\Gamma, b}$. \end{proof} Since the zeroth graded pieces of both $\ggk \RES[*][[\bm A]^{-1}]$ and $\ggk \RES[*][[1]^{-1}]$ are canonically isomorphic to $\mathds{Z}$, the homomorphisms $\bb E_{\K, g}$, $\bb E_{\K, b}$ are just the homomorphisms $\bb E_{\Gamma, g}$, $\bb E_{\Gamma, b}$ in Remark~\ref{rem:poin}, more precisely, $\bb E_{\K, g} = \bb E_{\Gamma, g}$ and $\bb E_{\K, b} = \bb E_{\Gamma, b}$. \section{Generalized Euler characteristic} From here on, our discussion will be of an increasingly formal nature. Many statements are exact copies of those in \cite{Yin:special:trans, Yin:int:acvf, Yin:int:expan:acvf} and often the same proofs work, provided that the auxiliary results are replaced by the corresponding ones obtained above. For the reader's convenience, we will write down all the details. \subsection{Special bijections} Our first task is to connect $\gsk \VF_*$ with $\gsk \RV[*]$, more precisely, to establish a surjective homomorphism $\gsk \RV[*] \longrightarrow \gsk \VF_*$. Notice the direction of the arrow. The main instrument in this endeavor is special bijections. \begin{conv}\label{conv:can} We reiterate \cite[Convention~2.32]{Yin:int:expan:acvf} here, with a different terminology, since this trivial-looking convention is actually quite crucial for understanding the discussion below, especially the parts that involve special bijections. 
For any set $A$, let \[ \can(A) = \{(a, \rv(a), t) : (a, t) \in A \text{ and } a \in \pr_{\VF}(A)\}. \] The natural bijection $\can : A \longrightarrow \can(A)$ is called the \emph{regularization} of $A$. We shall tacitly substitute $\can(A)$ for $A$ if it is necessary or is just more convenient. Whether this substitution has been performed or not should be clear in context (or rather, it is always performed). \end{conv} \begin{defn}[Special bijections]\label{defn:special:bijection} Let $A$ be a (regularized) definable set whose first coordinate is a $\VF$-coordinate (of course nothing is special about the first $\VF$-coordinate, we choose it simply for notational ease). Let $C \subseteq \RVH(A)$ be an $\RV$-pullback (see Definition~\ref{defn:disc}) and \[ \lambda: \pr_{>1}(C \cap A) \longrightarrow \VF \] a definable function whose graph is contained in $C$. Recall Notation~\ref{nota:tor}. Let \[ \textstyle C^{\sharp} = \bigcup_{x \in \pr_{>1} (C)} \MM_{\abvrv(\pr_1(x_{\RV}))} \times x \quad \text{and} \quad \RVH(A)^{\sharp} = C^{\sharp} \uplus (\RVH(A) \smallsetminus C), \] where $x_{\RV} = \pr_{\RV}(x)$. The \emph{centripetal transformation $\eta : A \longrightarrow \RVH(A)^{\sharp}$ with respect to $\lambda$} is defined by \[ \begin{cases} \eta (a, x) = (a - \lambda(x), x), & \text{on } C \cap A,\\ \eta = \id, & \text{on } A \smallsetminus C. \end{cases} \] Note that $\eta$ is injective. The inverse of $\eta$ is naturally called the \emph{centrifugal transformation with respect to $\lambda$}. The function $\lambda$ is referred to as the \emph{focus} of $\eta$ and the $\RV$-pullback $C$ as the \emph{locus} of $\lambda$ (or $\eta$). A \emph{special bijection} $T$ on $A$ is an alternating composition of centripetal transformations and regularizations. By Convention~\ref{conv:can}, we shall only display the centripetal transformations in such a composition. 
The \emph{length} of such a special bijection $T$, denoted by $\lh(T)$, is the number of centripetal transformations in $T$. The range of $T$ is sometimes denoted by $A^{\flat}$. \end{defn} For functions between sets that have only one $\VF$-coordinate, composing with special bijections on the right and inverses of special bijections on the left obviously preserves dtdp. \begin{lem}\label{inverse:special:dim:1} Let $T$ be a special bijection on $A \subseteq \VF \times \RV^m$ such that $A^{\flat}$ is an $\RV$-pullback. Then there is a definable function $\epsilon : \pr_{\RV} (A^{\flat}) \longrightarrow \VF$ such that, for every $\RV$-polydisc $\mathfrak{p} = t^\sharp \times s \subseteq A^{\flat}$, $(T^{-1}(\mathfrak{p}))_{\VF} = t^\sharp + \epsilon(s)$. \end{lem} \begin{proof} It is clear that $\mathfrak{p}$ is the image of an open polydisc $\mathfrak{a} \times r \subseteq A$. Let $T'$ be $T$ with the last centripetal transformation deleted. Then $T'(\mathfrak{a} \times r)$ is also an open polydisc $\mathfrak{a}' \times r'$. The range of the focus map of the last centripetal transformation of $T$ contains a point in the smallest closed disc containing $\mathfrak{a}'$. This point can be transported back by the previous focus maps to a point in the smallest closed disc containing $\mathfrak{a}$. The lemma follows easily from this observation. \end{proof} Note that, since $\dom(\epsilon) \subseteq \RV^l$ for some $l$, by Corollary~\ref{function:rv:to:vf:finite:image}, $\ran(\epsilon)$ is actually finite. A definable set $A$ is called a \emph{deformed $\RV$-pullback} if there is a special bijection $T$ on $A$ such that $A^{\flat}$ is an $\RV$-pullback. \begin{lem}\label{simplex:with:hole:rvproduct} Every definable set $A \subseteq \VF \times \RV^m$ is a deformed $\RV$-pullback. \end{lem} \begin{proof} By compactness and HNF this is immediately reduced to the situation where $A \subseteq \VF$ is contained in an $\RV$-disc and is a $\vv$-interval with end-discs $\mathfrak{a}$, $\mathfrak{b}$.
This may be further divided into several cases according to whether $\mathfrak{a}$, $\mathfrak{b}$ are open or closed discs and whether the ends of $A$ are open or closed. In each of these cases, Lemma~\ref{clo:disc:bary} is applied in much the same way as its counterpart is applied in the proof of \cite[Lemma~4.26]{Yin:QE:ACVF:min}. It is a tedious exercise and is left to the reader. \end{proof} Here is an analogue of \cite[Theorem~5.4]{Yin:special:trans} (see also \cite[Theorem~4.25]{Yin:int:expan:acvf}): \begin{thm}\label{special:term:constant:disc} Let $F(x) = F(x_1, \ldots, x_n)$ be an $\lan{T}{}{}$-term. Let $u \in \RV^n$ and $R : u^\sharp \longrightarrow A$ be a special bijection. Then there is a special bijection $T : A \longrightarrow A^\flat$ such that $F \circ R^{-1} \circ T^{-1}$ is $\rv$-contractible. In a commutative diagram, \[ \bfig \square(0,0)/`->`->`->/<1500,400>[A^\flat`\VF`\rv(A^\flat)`\RV_0; `\rv`\rv`(F \circ R^{-1} \circ T^{-1})_{\downarrow}] \morphism(0,400)<500,0>[A^\flat`A; T^{-1}] \morphism(500,400)<500,0>[A`u^\sharp; R^{-1}] \morphism(1000,400)<500,0>[u^\sharp`\VF; F] \efig \] \end{thm} \begin{proof} First observe that if the assertion holds for one $\lan{T}{}{}$-term then it holds simultaneously for any finite number of $\lan{T}{}{}$-terms, since $\rv$-contractibility is preserved by further special bijections on $A^\flat$. We do induction on $n$. For the base case $n=1$, by Corollary~\ref{part:rv:cons} and Remark~\ref{rem:LT:com}, there is a definable finite partition $B_i$ of $u^\sharp$ such that, for all $i$, if $\mathfrak{a} \subseteq B_i$ is an open disc then $\rv \upharpoonright F(\mathfrak{a})$ is constant. By consecutive applications of Lemma~\ref{simplex:with:hole:rvproduct}, we obtain a special bijection $T$ on $A$ such that each $(T \circ R) (B_i)$ is an $\RV$-pullback. Clearly $T$ is as required. For the inductive step, we may concentrate on a single $\RV$-polydisc $\mathfrak{p} = v^\sharp \times (v, r) \subseteq A$. 
Let $\phi(x, y)$ be a quantifier-free formula that defines the function $\rv \circ F$. Recall Convention~\ref{topterm}. Let $G_{i}(x)$ enumerate the top $\lan{T}{}{}$-terms of $\phi$. For $a \in v_1^\sharp$, write $G_{i,a} = G_{i}(a, x_2, \ldots, x_n)$. By the inductive hypothesis, there is a special bijection $R_{a}$ on $(v_2, \ldots, v_n)^\sharp$ such that every $G_{i,a} \circ R_a^{-1}$ is $\rv$-contractible. Let $U_{k, a}$ enumerate the loci of the components of $R_{a}$ and $\lambda_{k, a}$ the corresponding focus maps. By compactness, \begin{itemize} \item for each $i$, there is a quantifier-free formula $\psi_i$ such that $\psi_i(a)$ defines $(G_{i,a} \circ R_a^{-1})_{\downarrow}$, \item there is a quantifier-free formula $\theta$ such that $\theta(a)$ determines the sequence $\rv(U_{k, a})$ and the $\VF$-coordinates targeted by $\lambda_{k, a}$. \end{itemize} Let $H_{j}(x_1)$ enumerate the top $\lan{T}{}{}$-terms of the formulas $\psi_i$, $\theta$. Applying the inductive hypothesis again, we obtain a special bijection $T_1$ on $v_1^\sharp$ such that every $H_{j} \circ T_1^{-1}$ is $\rv$-contractible. This means that, for every $\RV$-polydisc $\mathfrak{q} \subseteq T_1(v_1^\sharp)$ and all $a_1, a_2 \in T_1^{-1}(\mathfrak{q})$, \begin{itemize} \item the formulas $\psi_i(a_1)$, $\psi_i(a_2)$ define the same $\rv$-contraction, \item the special bijections $R_{a_1}$, $R_{a_2}$ may be glued together in the obvious sense to form one special bijection on $\{a_1, a_2\} \times (v_2, \ldots, v_n)^\sharp$. \end{itemize} Consequently, $T_1$ and $R_{a}$ naturally induce a special bijection $T$ on $\mathfrak{p}$ such that every $G_{i} \circ T^{-1}$ is $\rv$-contractible. This implies that $F \circ R^{-1} \circ T^{-1}$ is $\rv$-contractible and hence $T$ is as required. \end{proof} \begin{cor}\label{special:bi:term:constant} Let $A \subseteq \VF^n$ be a definable set and $f : A \longrightarrow \RV^m$ a definable function.
Then there is a special bijection $T$ on $A$ such that $A^\flat$ is an $\RV$-pullback and the function $f \circ T^{-1}$ is $\rv$-contractible. \end{cor} \begin{proof} By compactness, we may assume that $A$ is contained in an $\RV$-polydisc $\mathfrak{p}$. Let $\phi$ be a quantifier-free formula that defines $f$. Let $F_i(x, y)$ enumerate the top $\lan{T}{}{}$-terms of $\phi$. For $s \in \RV^{m}$, let $F_{i, s} = F_{i}(x, s)$. By Theorem~\ref{special:term:constant:disc}, there is a special bijection $T$ on $\mathfrak{p}$ such that each function $F_{i, s} \circ T^{-1}$ is $\rv$-contractible. This means that, for each $\RV$-polydisc $\mathfrak{q} \subseteq T(\mathfrak{p})$, \begin{itemize} \item either $T^{-1}(\mathfrak{q}) \subseteq A$ or $T^{-1}(\mathfrak{q}) \cap A = \emptyset$, \item if $T^{-1}(\mathfrak{q}) \subseteq A$ then $(f \circ T^{-1})(\mathfrak{q})$ is a singleton. \end{itemize} So $T \upharpoonright A$ is as required. \end{proof} \begin{defn}[Lifting maps]\label{def:L} Let $U$ be a set in $\RV$ and $f : U \longrightarrow \RV^k$ a function. Let $U_f$ stand for the set $\bigcup \{f(u)^\sharp \times u: u \in U\}$. The \emph{$k$th lifting map} \[ \mathbb{L}_k: \RV[k] \longrightarrow \VF[k] \] is given by $(U,f) \longmapsto U_f$. The map $\mathbb{L}_{\leq k}: \RV[{\leq} k] \longrightarrow \VF[k]$ is given by $\bigoplus_{i} \bm U_i \longmapsto \biguplus_{i} \bb L_i \bm U_i$. Set $\mathbb{L} = \bigcup_k \mathbb{L}_{\leq k}$. \end{defn} \begin{cor}\label{all:subsets:rvproduct} Every definable set $A \subseteq \VF^n \times \RV^m$ is a deformed $\RV$-pullback. In particular, if $A \in \VF_*$ then there are a $\bm U \in \RV[{\leq} n]$ and a special bijection from $A$ onto $\mathbb{L}_{{\leq} n}(\bm U)$. \end{cor} \begin{proof} For the first assertion, by compactness, we may assume $A \subseteq \VF^n$. Then it is a special case of Corollary~\ref{special:bi:term:constant}. The second assertion follows from Lemma~\ref{RV:fiber:dim:same}.
\end{proof} \begin{defn}[Lifts]\label{def:lift} Let $F: (U, f) \longrightarrow (V, g)$ be an $\RV[k]$-morphism. Then $F$ induces a definable finite-to-finite correspondence $F^\dag \subseteq f(U) \times g(V)$. Since $F^\dag$ can be decomposed into finitely many definable bijections, for simplicity, we assume that $F^\dag$ is itself a bijection. Let $F^{\sharp} : f(U)^\sharp \longrightarrow g(V)^\sharp$ be a definable bijection that $\rv$-contracts to $F^\dag$. Then $F^\sharp$ is called a \emph{lift} of $F$. By Convention~\ref{conv:can}, we shall think of $F^\sharp$ as a definable bijection $\bb L(U, f) \longrightarrow \bb L(V, g)$ that $\rv$-contracts to $F^\dag$. \end{defn} \begin{lem}\label{simul:special:dim:1} Let $f : A \longrightarrow B$ be a definable bijection between two sets that have exactly one $\VF$-coordinate each. Then there are special bijections $T_A : A \longrightarrow A^{\flat}$, $T_B : B \longrightarrow B^{\flat}$ such that $A^{\flat}$, $B^{\flat}$ are $\RV$-pullbacks and $f^{\flat}_{\downarrow}$ is bijective in the commutative diagram \[ \bfig \square(0,0)/->`->`->`->/<600,400>[A`A^{\flat}`B`B^{\flat}; T_A`f``T_B] \square(600,0)/->`->`->`->/<600,400>[A^{\flat}`\rv(A^{\flat})`B^{\flat} `\rv(B^{\flat}); \rv`f^{\flat}`f^{\flat}_{\downarrow}`\rv] \efig \] Thus, if $A, B \in \VF_*$ then $f^{\flat}$ is a lift of $f^{\flat}_{\downarrow}$, where the latter is regarded as an $\RV[1]$-morphism between $\rv(A^{\flat})_{1}$ and $\rv(B^{\flat})_1$ (recall Notation~\ref{0coor}). \end{lem} \begin{proof} By Corollaries~\ref{special:bi:term:constant}, \ref{all:subsets:rvproduct}, and Lemma~\ref{open:pro}, we may assume that $A$, $B$ are $\RV$-pullbacks, $f$ is $\rv$-contractible and has dtdp, and there is a special bijection $T_B: B \longrightarrow B^{\flat}$ such that $(T_B \circ f)^{-1}$ is $\rv$-contractible. Let $T_B = \eta_{n} \circ \ldots \circ \eta_{1}$, where each $\eta_{i}$ is a centripetal transformation (and regularization maps are not displayed). 
Then it is enough to construct a special bijection $T_A = \zeta_{n} \circ \ldots \circ \zeta_{1}$ on $A$ such that, for each $i$, both $f_i \coloneqq T_{B, i} \circ f \circ T_{A, i}^{-1}$ and $T_{A, i} \circ (T_B \circ f)^{-1}$ are $\rv$-contractible, where $T_{B, i} = \eta_{i} \circ \ldots \circ \eta_{1}$ and $T_{A, i} = \zeta_{i} \circ \ldots \circ \zeta_{1}$. To that end, suppose that $\zeta_i$ has been constructed for each $i \leq k < n$. Let $A_{k} = T_{A, k}(A)$ and $B_k = T_{B, k}(B)$. Let $D \subseteq B_k$ be the locus of $\eta_{k+1}$ and $\lambda$ the corresponding focus map. Since $f_k$ is $\rv$-contractible and has dtdp, each $\RV$-polydisc $\mathfrak{p} \subseteq B_k$ is a union of disjoint sets of the form $f_k(\mathfrak{q})$, where $\mathfrak{q} \subseteq A_k$ is an $\RV$-polydisc. For each $t = (t_1, t_{\tilde 1}) \in \dom(\lambda)$, let $O_{t}$ be the set of those $\RV$-polydiscs $\mathfrak{q} \subseteq A_k$ such that $f_k(\mathfrak{q}) \subseteq t^\sharp_1 \times t$. Let \begin{itemize} \item $\mathfrak{q}_{t} \in O_{t}$ be the $\RV$-polydisc with $(\lambda(t), t) \in \mathfrak{o}_{ t} \coloneqq f_k(\mathfrak{q}_t)$, \item $C = \bigcup_{t \in \dom(\lambda)} \mathfrak{q}_{t} \subseteq A_k$ and $a_{t} = f_k^{-1}(\lambda( t), t) \in \mathfrak{q}_{t}$, \item $\kappa : \pr_{>1} (C) \longrightarrow \VF$ the corresponding focus map given by $\pr_{>1} (\mathfrak{q}_{t}) \longmapsto \pr_1(a_{t})$, \item $\zeta_{k+1}$ the centripetal transformation determined by $C$ and $\kappa$. \end{itemize} For each $t \in \dom(\lambda)$, $f_{k+1}$ restricts to a bijection between the $\RV$-pullbacks $\zeta_{k+1}(\mathfrak{q}_{t})$ and $\eta_{k+1}(\mathfrak{o}_{t})$ that is $\rv$-contractible in both ways and, for any $\mathfrak{q} \in O_{t}$ with $\mathfrak{q} \neq \mathfrak{q}_{t}$, $f_{k+1}(\mathfrak{q})$ is an open polydisc contained in an $\RV$-polydisc. So $f_{k+1}$ is $\rv$-contractible. 
On the other hand, it is clear that, for any $\RV$-polydisc $\mathfrak{p} \subseteq B^{\flat}$, $T_{A, k} \circ (T_B \circ f)^{-1}(\mathfrak{p})$ does not contain any $a_{t}$ and hence, by the construction of $T_{A, k}$, $T_{A, k+1} \circ (T_B \circ f)^{-1}$ is $\rv$-contractible. \end{proof} \begin{hyp}\label{hyp:point} The following lemma is used directly only once in Corollary~\ref{RV:lift}. It should have been presented right after Definition~\ref{defn:corr:cont}. We place it here because this is the first place in this paper, in fact, one of the only two places, the other being Lemma~\ref{blowup:same:RV:coa}, where we need to assume that every definable $\RV$-disc contains a definable point. The easiest way to guarantee this is to assume that $\mdl S$ is $\VF$-generated, which, together with Hypothesis~\ref{hyp:gam}, implies that it is a model of $\TCVF$ and is indeed an elementary substructure (so every definable set contains a definable point). This assumption will be in effect throughout the rest of the paper. \end{hyp} \begin{lem}\label{RVlift} Every definable bijection $f : U \longrightarrow V$ between two subsets of $\RV^k$ can be lifted, that is, there is a definable bijection $f^{\sharp} : U^\sharp \longrightarrow V^\sharp$ that $\rv$-contracts to $f$. \end{lem} \begin{proof} We do induction on $n = \dim_{\RV}(U) = \dim_{\RV}(V)$. If $n=0$ then $U$ is finite and hence, for every $u \in U$, the $\RV$-polydisc $u^\sharp$ contains a definable point, similarly for $V$, in which case how to construct an $f^{\sharp}$ as desired is obvious. For the inductive step, by weak $o$\nobreakdash-minimality in the $\RV$-sort, there are definable finite partitions $U_i$, $V_i$ of $U$, $V$ and injective coordinate projections \[ \pi_i : U_i \longrightarrow \RV^{k_i}, \quad \pi'_i : V_i \longrightarrow \RV^{k_i}, \] where $\dim_{\RV}(U_i) = \dim_{\RV}(V_i) = k_i$; the obvious bijection $\pi_i(U_i) \longrightarrow \pi'_i(V_i)$ induced by $f$ is denoted by $f_i$. 
Observe that if every $f_i$ can be lifted as desired then, by the construction in the base case above, $f$ can be lifted as desired as well. Therefore, without loss of generality, we may assume $k = n$. For $u \in U$ and $a \in u^\sharp$, the $\RV$-polydisc $f(u)^\sharp$ contains an $a$-definable point and hence, by compactness, there is a definable function $f^{\sharp} : U^\sharp \longrightarrow V^\sharp$ that $\rv$-contracts to $f$. By Lemma~\ref{RV:bou:dim}, $\dim_{\RV}(\partial_{\RV}f^{\sharp}(U^\sharp)) < n$ and hence, by the inductive hypothesis, we may assume that $f^{\sharp}$ is surjective. Then there is a definable function $g : V^\sharp \longrightarrow U^\sharp$ such that $f^{\sharp}(g(b)) = b$ for all $b \in V^\sharp$. By Lemma~\ref{RV:bou:dim} and the inductive hypothesis again, we may further assume that $g$ is also a surjection, which just means that $f^{\sharp}$ is a bijection as desired. \end{proof} The following corollary is an analogue of \cite[Proposition~6.1]{hrushovski:kazhdan:integration:vf}. \begin{cor}\label{RV:lift} For every $\RV[k]$-morphism $F : (U, f) \longrightarrow (V, g)$ there is a $\VF[k]$-morphism $F^\sharp$ that lifts $F$. \end{cor} \begin{proof} As in Definition~\ref{def:lift}, we may assume that the finite-to-finite correspondence $F^\dag$ is actually a bijection. Then this is immediate by Lemma~\ref{RVlift}. \end{proof} \begin{cor}\label{L:sur:c} The lifting map $\bb L_{\leq k}$ induces a surjective homomorphism, which is sometimes simply denoted by $\bb L$, between the Grothendieck semigroups \[ \gsk \RV[{\leq} k] \epi \gsk \VF[k]. \] \end{cor} \begin{proof} By Corollary~\ref{RV:lift}, every $\RV[k]$-isomorphism can be lifted. So $\bb L_{\leq k}$ induces a map on the isomorphism classes, which is easily seen to be a semigroup homomorphism. By Lemma~\ref{altVFdim} and Corollary~\ref{all:subsets:rvproduct}, this homomorphism is surjective.
\end{proof} \subsection{$2$-cells} The remaining object of this section is to identify the kernels of the semigroup homomorphisms $\bb L$ in Corollary~\ref{L:sur:c} and thereby complete the construction of the universal additive invariant. We begin with a discussion of the issue of $2$-cells, as in \cite[\S~4]{Yin:int:acvf}. The notion of a $2$-cell, which corresponds to that of a bicell in \cite{cluckers:loeser:constructible:motivic:functions}, may look strange and is, perhaps, only of technical interest. It arises when we try to prove some analogue of Fubini's theorem, such as Lemma~\ref{contraction:perm:pair:isp} below. The difficulty is that, although the interaction between $\rv$-contractions and special bijections for definable sets of $\VF$-dimension $1$ is in a sense ``functorial'' (see Lemma~\ref{simul:special:dim:1}), we are unable to extend the construction to higher $\VF$-dimensions. This is the concern of \cite[Question~7.9]{hrushovski:kazhdan:integration:vf}. It has also occurred in \cite{cluckers:loeser:constructible:motivic:functions} and actually may be traced back to the construction of the $o$\nobreakdash-minimal Euler characteristic in \cite{dries:1998}; see \cite[Section~1.7]{cluckers:loeser:constructible:motivic:functions}. Anyway, in this situation, a natural strategy for $\rv$-contracting the isomorphism class of a definable set of higher $\VF$-dimension is to apply the result for $\VF$-dimension $1$ parametrically and proceed with one $\VF$-coordinate at a time. As in the classical theory of integration, this strategy requires some form of Fubini's theorem: for a well-behaved integration (or additive invariant in our case), an integral should yield the same value when it is evaluated along different orders of the variables. By induction, this problem is immediately reduced to the case of two variables. 
A $2$-cell is a definable subset of $\VF^2$ with certain symmetrical (or ``linear'' in the sense described in Remark~\ref{2cell:linear} below) internal structure that satisfies this Fubini-type requirement. Now the idea is that, if we can find a definable partition for every definable set such that each piece is a $2$-cell indexed by some $\RV$-sort parameters, then, by compactness, every definable set satisfies the Fubini-type requirement. This kind of partition is achieved in Lemma~\ref{decom:into:2:units}. \begin{lem}\label{bijection:dim:1:decom:RV} Let $f : A \longrightarrow B$ be a definable bijection between two subsets of $\VF$. Then there is a special bijection $T$ on $A$ such that $A^\flat$ is an $\RV$-pullback and, for each $\RV$-polydisc $\mathfrak{p} \subseteq A^\flat$, $f \upharpoonright T^{-1}(\mathfrak{p})$ is $\rv$-affine. \end{lem} \begin{proof} By Lemma~\ref{rv:lin} and compactness, for all but finitely many $a \in A$ there is an $a$-definable $\delta_a \in \abs{\Gamma}$ such that $f \upharpoonright \mathfrak{o}(a, \delta_a)$ is $\rv$-affine. Without loss of generality, we may assume that, for all $a \in A$, $\delta_a$ exists and is the least element that satisfies this condition. Let $g : A \longrightarrow \abs{\Gamma}$ be the definable function given by $a \longmapsto \delta_a$. By Corollary~\ref{special:bi:term:constant}, there is a special bijection $T$ on $A$ such that $A^\flat$ is an $\RV$-pullback and, for every $\RV$-polydisc $\mathfrak{p} \subseteq A^\flat$, $(g \circ T^{-1}) \upharpoonright \mathfrak{p}$ is constant. By Lemmas~\ref{one:atomic} and \ref{rv:lin}, we must have $(g \circ T^{-1})(\mathfrak{p}) \leq \rad(\mathfrak{p})$, for otherwise the choice of $\delta_a$ is violated for some $a \in T^{-1}(\mathfrak{p})$. So $T$ is as required. \end{proof} \begin{lem}\label{bijection:rv:one:one} Let $A \subseteq \VF^2$ be a definable set such that $\mathfrak{a}_1 \coloneqq \pr_1(A)$ and $\mathfrak{a}_2 \coloneqq \pr_2(A)$ are open discs.
Suppose that there is a definable bijection $f : \mathfrak{a}_1 \longrightarrow \mathfrak{a}_2$ that has dtdp and, for each $a \in \mathfrak{a}_1$, there is a $t_a \in \RVV$ with $A_a = t_a^\sharp + f(a)$. Then there is a special bijection $T$ on $\mathfrak{a}_1$ such that $\mathfrak{a}_1^\flat$ is an $\RV$-pullback and, for each $\RV$-polydisc $\mathfrak{p} \subseteq \mathfrak{a}_1^\flat$, $\rv$ is constant on the set \[ \{a - f^{-1}(b) : a \in T^{-1}(\mathfrak{p}) \text{ and } b \in A_a \}. \] \end{lem} \begin{proof} For each $a \in \mathfrak{a}_1$, let $\mathfrak{b}_a$ be the smallest closed disc that contains $A_a$. Since $A_a - f(a) = t_a^\sharp$, we have $f(a) \in \mathfrak{b}_a$ but $f(a) \notin A_a$ if $t_a \neq 0$. Hence $a \notin f^{-1}(A_a)$ if $t_a \neq 0$ and $\{a\} = f^{-1}(A_a)$ if $t_a = 0$. Since $f^{-1}(A_a)$ is a disc or a point, in either case, the function on $f^{-1}(A_a)$ given by $b \longmapsto \rv(a - b)$ is constant. The function $h : \mathfrak{a}_1 \longrightarrow \RVV$ given by $a \longmapsto \rv(a - f^{-1}(A_a))$ is definable. Now we apply Corollary~\ref{special:bi:term:constant} as in the proof of Lemma~\ref{bijection:dim:1:decom:RV}. The lemma follows. \end{proof} \begin{defn}\label{defn:balance} Let $A$, $\mathfrak{a}_1$, $\mathfrak{a}_2$, and $f$ be as in Lemma~\ref{bijection:rv:one:one}. We say that $f$ is \emph{balanced in $A$} if $f$ is actually $\rv$-affine and there are $t_1, t_2 \in \RVV$, called the \emph{paradigms} of $f$, such that, for every $a \in \mathfrak{a}_1$, \[ A_a = t_2^\sharp + f(a) \quad \text{and} \quad f^{-1}(A_a) = a - t_1^\sharp. \] \end{defn} \begin{rem}\label{2cell:linear} Suppose that $f$ is balanced in $A$ with paradigms $t_1$, $t_2$. If one of the paradigms is $0$ then the other one must be $0$. In this case $A$ is just the (graph of the) bijection $f$ itself. Assume that $t_1$, $t_2$ are nonzero. 
Let $\mathfrak{B}_1$, $\mathfrak{B}_2$ be, respectively, the sets of closed subdiscs of $\mathfrak{a}_1$, $\mathfrak{a}_2$ of radii $\abs{\vrv(t_1)}$, $\abs{\vrv(t_2)}$. Let $a_1 \in \mathfrak{b}_1 \in \mathfrak{B}_1$ and $\mathfrak{o}_1$ be the maximal open subdisc of $\mathfrak{b}_1$ containing $a_1$. Let $\mathfrak{b}_2 \in \mathfrak{B}_2$ be the smallest closed disc containing the open disc $\mathfrak{o}_2 \coloneqq A_{a_1}$. Then, for all $a_2 \in \mathfrak{o}_2$, we have \[ \mathfrak{o}_2 = t_2^\sharp + f(\mathfrak{o}_1) = A_{a_1} \quad \text{and} \quad A_{a_2} = f^{-1}(\mathfrak{o}_2) + t_1^\sharp = \mathfrak{o}_1. \] This internal symmetry of $A$ is illustrated by the following diagram: \[ \bfig \dtriangle(0,0)|amb|/.``<-/<600,250>[\mathfrak{o}_1`f^{-1}(\mathfrak{o}_2)`\mathfrak{o}_2; \pm t_1^\sharp`\times`f^{-1}] \ptriangle(600,0)|amb|/->``./<600,250>[\mathfrak{o}_1`f(\mathfrak{o}_1)`\mathfrak{o}_2; f`` \pm t_2^\sharp] \efig \] Since $f$ is $\rv$-affine, we see that its slope must be $-t_2/t_1$ (recall Definition~\ref{rvaffine}). If we think of $\mathfrak{b}_1$, $\mathfrak{b}_2$ as $\tor(\code {\mathfrak{o}_1})$, $\tor(\code {\mathfrak{o}_2})$ then the set $A \cap (\mathfrak{b}_1 \times \mathfrak{b}_2)$ may be thought of as the ``line'' in $\tor(\code {\mathfrak{o}_1}) \times \tor(\code {\mathfrak{o}_2})$ given by the equation \[ x_2 = - \tfrac{t_2}{t_1}(x_1 - \code{\mathfrak{o}_1}) + (\code{\mathfrak{o}_2} - t_2). \] Thus, by Lemma~\ref{simul:special:dim:1}, the obvious bijection between $\pr_1(A) \times t_2^\sharp$ and $t_1^\sharp \times \pr_2(A)$ is the lift of an $\RV[{\leq}2]$-morphism modulo special bijections; see Lemma~\ref{2:unit:contracted} below for details. The slope of $f$ will play a more important role when volume forms are introduced into the categories (in a sequel). \end{rem} \begin{defn}[$2$-cell]\label{def:units} We say that a set $A$ is a \emph{$1$-cell} if it is either an open disc contained in a single $\RV$-disc or a point in $\VF$. 
We say that $A$ is a \emph{$2$-cell} if \begin{enumerate} \item $A$ is a subset of $\VF^2$ contained in a single $\RV$-polydisc and $\pr_1(A)$ is a $1$-cell, \item there is a function $\epsilon : \pr_1 (A) \longrightarrow \VF$ and a $t \in \RV_0$ such that, for every $a \in \pr_1(A)$, $A_a = t^\sharp + \epsilon(a)$, \item one of the following three possibilities occurs: \begin{enumerate} \item $\epsilon$ is constant, \item $\epsilon$ is injective, has dtdp, and $\rad(\epsilon(\pr_1(A))) \geq \abs{\vrv(t)}$,\label{2cell:3b} \item $\epsilon$ is balanced in $A$. \end{enumerate} \end{enumerate} The function $\epsilon$ is called the \emph{positioning function} of $A$ and the element $t$ the \emph{paradigm} of $A$. More generally, a set $A$ with exactly one $\VF$-coordinate is a \emph{$1$-cell} if, for each $t \in \pr_{>1}(A)$, $A_t$ is a $1$-cell in the above sense; the parameterized version of the notion of a $2$-cell is formulated in the same way. \end{defn} A $2$-cell is definable if all the relevant ingredients are definable. Naturally we will only be concerned with definable $2$-cells. Notice that Corollary~\ref{all:subsets:rvproduct} implies that for every definable set $A$ with exactly one $\VF$-coordinate there is a definable function $\pi: A \longrightarrow \RV^l$ such that every fiber $A_s$ is a $1$-cell. This should be understood as $1$-cell decomposition and the next lemma as $2$-cell decomposition. \begin{lem}[$2$-cell decomposition]\label{decom:into:2:units} For every definable set $A \subseteq \VF^2$ there is a definable function $\pi: A \longrightarrow \RV^m$ such that every fiber $A_s$ is an $s$-definable $2$-cell. \end{lem} \begin{proof} By compactness, we may assume that $A$ is contained in a single $\RV$-polydisc. For each $a \in \pr_1 (A)$, by Corollary~\ref{all:subsets:rvproduct}, there is an $a$-definable special bijection $T_a$ on $A_a$ such that $A_a^\flat$ is an $\RV$-pullback.
By Lemma~\ref{inverse:special:dim:1}, there is an $a$-definable function $\epsilon_a : (A_a^\flat)_{\RV} \longrightarrow \VF$ such that, for every $(t, s) \in (A_a^\flat)_{\RV}$, we have \[ T_a^{-1}(t^\sharp \times (t, s)) = t^\sharp + \epsilon_a(t, s). \] By compactness, we may glue these functions together, that is, there is a definable set $C \subseteq \pr_1(A) \times \RV^l$ and a definable function $\epsilon : C \longrightarrow \VF$ such that, for every $a \in \pr_1(A)$, $C_a = (A_a^\flat)_{\RV}$ and $\epsilon \upharpoonright C_a = \epsilon_a$. For $(t, s) \in C_{\RV}$, write $\epsilon_{(t, s)} = \epsilon \upharpoonright C_{(t, s)}$. By Corollary~\ref{uni:fun:decom} and compactness, we are reduced to the case that each $\epsilon_{(t, s)}$ is either constant or injective. If no $\epsilon_{(t, s)}$ is injective then we can finish by applying Corollary~\ref{all:subsets:rvproduct} to each $C_{(t, s)}$ and then compactness. Suppose that some $\epsilon_{(t, s)}$ is injective. Then, by Lemmas~\ref{open:pro} and \ref{bijection:dim:1:decom:RV}, we are reduced to the case that $C_{(t, s)}$ is an open disc and $\epsilon_{(t, s)}$ is $\rv$-affine and has dtdp. Write $\mathfrak{b}_{(t, s)} = \ran (\epsilon_{(t, s)})$. If $\rad(\mathfrak{b}_{(t, s)}) \geq \abvrv(t)$ then $\epsilon_{(t, s)}$ satisfies the condition (\ref{2cell:3b}) in Definition~\ref{def:units}. So let us suppose $\rad(\mathfrak{b}_{(t, s)}) < \abvrv(t)$. Then \[ \textstyle \mathfrak{b}_{(t, s)} = \bigcup_{a \in C_{(t, s)}} (t^\sharp + \epsilon_{(t, s)}(a)). \] By Lemma~\ref{bijection:rv:one:one}, we are further reduced to the case that there is an $r \in \RV$ such that, for every $a \in C_{(t, s)}$, \[ \rv(a - \epsilon_{(t, s)}^{-1}(t^\sharp + \epsilon_{(t, s)}(a))) = r \quad \text{and hence} \quad \epsilon_{(t, s)}^{-1}(t^\sharp + \epsilon_{(t, s)}(a)) = a - r^\sharp. \] So, in this case, $\epsilon_{(t, s)}$ is balanced. Now we are done by compactness. 
\end{proof} To extend Lemma~\ref{simul:special:dim:1} to all definable bijections, we need not only $2$-cell decomposition but also the following notions. Let $A \subseteq \VF^{n} \times \RV^{m}$, $B \subseteq \VF^{n} \times \RV^{m'}$, and $f : A \longrightarrow B$ be a definable bijection. \begin{defn}\label{rela:unary} We say that $f$ is \emph{relatively unary} or, more precisely, \emph{relatively unary in the $i$th $\VF$-coordinate}, if $(\pr_{\tilde{i}} \circ f)(x) = \pr_{\tilde{i}}(x)$ for all $x \in A$, where $i \in [n]$. If $f \upharpoonright A_y$ is also a special bijection for every $y \in \pr_{\tilde{i}} (A)$ then we say that $f$ is \emph{relatively special in the $i$th $\VF$-coordinate}. \end{defn} Obviously the inverse of a relatively unary bijection is a relatively unary bijection. Also note that every special bijection on $A$ is a composition of relatively special bijections. Choose an $i \in [n]$. By Corollary~\ref{all:subsets:rvproduct} and compactness, there is a bijection $T_i$ on $A$, relatively special in the $i$th $\VF$-coordinate, such that $T_i(A_a)$ is an $\RV$-pullback for every $a \in \pr_{\tilde i}(A)$. Note that $T_i$ is not necessarily a special bijection on $A$, since the special bijections in the $i$th $\VF$-coordinate for distinct $a, a' \in \pr_{\tilde i}(A)$ with $\rv(a) = \rv(a')$ may not even be of the same length. Let \[ \textstyle A_i = \bigcup_{a \in \pr_{\tilde i}(A)} a \times (T_i(A_a))_{\RV} \subseteq \VF^{n-1} \times \RV^{m_i}. \] Write $\hat T_i : A \longrightarrow A_i$ for the function naturally induced by $T_i$. For any $j \in [n{-}1]$, we repeat the above procedure on $A_i$ with respect to the $j$th $\VF$-coordinate and thereby obtain a set $A_{j} \subseteq \VF^{n-2} \times \RV^{m_j}$ and a function $\hat T_{j} : A_i \longrightarrow A_{j}$. The relatively special bijection on $T_i(A)$ induced by $\hat T_{j}$ is denoted by $T_j$. 
Continuing thus, we obtain a sequence of bijections $T_{\sigma(1)}, \ldots, T_{\sigma(n)}$ and a corresponding function $\hat T_{\sigma} : A \longrightarrow \RV^{l}$, where $\sigma$ is the permutation of $[n]$ in question. The composition $T_{\sigma(n)} \circ \ldots \circ T_{\sigma(1)}$, which is referred to as the \emph{lift} of $\hat T_{\sigma}$, is denoted by $T_{\sigma}$. \begin{defn}\label{defn:standard:contraction} Suppose that there is a $k \in 0 \cup [m]$ such that $(A_a)_{\leq k} \in \RV[k]$ for every $a \in A_{\VF}$. In particular, if $k=0$ then $A \in \VF_*$. By Lemma~\ref{RV:fiber:dim:same}, $\hat T_{\sigma}(A)_{\leq n+k}$ is an object of $\RV[{\leq} l{+}k]$, where $\dim_{\VF}(A) = l$. The function $\hat T_{\sigma}$ --- or the object $\hat T_{\sigma}(A)_{\leq n+k}$ --- is referred to as a \emph{standard contraction} of the set $A$ with the \emph{head start} $k$. \end{defn} The head start of a standard contraction is usually implicit. In fact, it is always $0$ except in Lemma~\ref{isp:VF:fiberwise:contract}, and can be circumvented even there. This seemingly needless gadget only serves to make the above definition more streamlined: If $A \in \VF_*$ then the intermediate steps of a standard contraction of $A$ may or may not result in objects of $\VF_*$ and hence the definition cannot be formulated entirely within $\VF_*$. \begin{rem}\label{special:dim:1:RV:iso} In Lemma~\ref{simul:special:dim:1}, clearly $\rv(A^{\flat})$, $\rv(B^{\flat})$ are standard contractions of $A$, $B$. Indeed, if $A, B \in \VF_*$ then $[\rv(A^{\flat})]_{\leq 1} = [\rv(B^{\flat})]_{\leq 1}$. \end{rem} \begin{lem}\label{bijection:partitioned:unary} There is a definable finite partition $A_i$ of $A$ such that each $f \upharpoonright A_i$ is a composition of relatively unary bijections. \end{lem} \begin{proof} This is an easy consequence of weak $o$\nobreakdash-minimality. 
In more detail, for each $a \in \pr_{< n}(A)$ there are an $a$-definable finite partition $A_{ai}$ of $A_a$ and injective coordinate projections $\pi_i : f(A_{ai}) \longrightarrow \VF \times \RV^{m'}$. Therefore, by compactness, there are a definable finite partition $A_{i}$ of $A$, definable injections $f_i : A_i \longrightarrow \VF^{n} \times \RV^{m'}$, and $j_i \in [n]$ such that, for all $x \in A_i$, \[ \pr_{< n}(x) = \pr_{< n}(f_i(x)) \quad \text{and} \quad \pr_{n \cup [m']}(f_i(x)) = \pr_{j_i \cup [m']}(f(x)). \] The claim now follows from compactness and an obvious induction on $n$. \end{proof} For the next two lemmas, let $12$ and $21$ denote the permutations of $[2]$. \begin{lem}\label{2:unit:contracted} Let $A \subseteq \VF^2$ be a definable $2$-cell. Then there are standard contractions $\hat T_{12}$, $\hat R_{21}$ of $A$ such that $[\hat T_{12}(A)]_{\leq 2} = [\hat R_{21}(A)]_{\leq 2}$. \end{lem} \begin{proof} Let $\epsilon$ be the positioning function of $A$ and $t \in \RV_0$ the paradigm of $A$. If $t = 0$ then $A$ is (the graph of) the function $\epsilon : \pr_1(A) \longrightarrow \pr_2(A)$, which is either a constant function or a bijection. In the former case, since $A$ is essentially just an open ball, the lemma simply follows from Corollary~\ref{all:subsets:rvproduct}. In the latter case, there are relatively special bijections $T_2$, $R_1$ on $A$ in the coordinates $2$, $1$ such that \[ T_2(A) = \pr_1(A) \times 0 \times 0 \quad \text{and} \quad R_1(A) = 0 \times \pr_2(A) \times 0. \] So the lemma follows from Remark~\ref{special:dim:1:RV:iso}. For the rest of the proof we assume $t \neq 0$. If $\epsilon$ is not balanced in $A$ then $A = \pr_1(A) \times \pr_2(A)$ is an open polydisc. By Corollary~\ref{all:subsets:rvproduct}, there are special bijections $T_1$, $T_2$ on $\pr_1(A)$, $\pr_2(A)$ such that $\pr_1(A)^\flat$, $\pr_2(A)^\flat$ are $\RV$-pullbacks. 
In this case the standard contractions determined by $(T_1, T_2)$ and $(T_2, T_1)$ are essentially the same. Suppose that $\epsilon$ is balanced in $A$. Let $r$ be the other paradigm of $\epsilon$. Recall that $\epsilon : \pr_1 (A) \longrightarrow \pr_2(A)$ is again a bijection. Let $T_2$ be the relatively special bijection on $A$ in the coordinate $2$ given by $(a, b) \longmapsto (a, b - \epsilon(a))$ and $R_1$ the relatively special bijection on $A$ in the coordinate $1$ given by $(a, b) \longmapsto (a - \epsilon^{-1}(b), b)$, where $(a, b) \in A$. Clearly \[ T_2(A) = \pr_1(A) \times t^\sharp \times t \quad \text{and} \quad R_1(A) = r^\sharp \times \pr_2(A) \times r. \] So, again, the lemma follows from Remark~\ref{special:dim:1:RV:iso}. \end{proof} \begin{lem}\label{subset:partitioned:2:unit:contracted} Let $A \subseteq \VF^2 \times \RV^m$ be an object in $\VF_*$. Then there are a definable injection $f : A \longrightarrow \VF^2 \times \RV^l$, relatively unary in both coordinates, and standard contractions $\hat T_{12}$, $\hat R_{21}$ of $f(A)$ such that $[\hat T_{12}(f(A))]_{\leq 2} = [\hat R_{21}(f(A))]_{\leq 2}$. \end{lem} \begin{proof} By Lemma~\ref{decom:into:2:units} and compactness, there is a definable function $f: A \longrightarrow \VF^2 \times \RV^l$ such that $f(A)$ is a $2$-cell and, for each $(a, t) \in A$, $f(a, t) = (a, t, s)$ for some $s \in \RV^{l-m}$. By Lemma~\ref{2:unit:contracted} and compactness, there are standard contractions $\hat T_{12}$, $\hat R_{21}$ of $f(A)$ into $\RV^{k+l}$ such that the following diagram commutes \[ \bfig \Vtriangle(0,0)/->`->`->/<400,400>[\hat T_{12}(f(A))`\hat R_{21}(f(A))`\RV^l; F`\pr_{> k}`\pr_{> k}] \efig \] and $F$ is an $\RV[{\leq} 2]$-morphism $\hat T_{12}(f(A))_{\leq 2} \longrightarrow \hat R_{21}(f(A))_{\leq 2}$.
\end{proof} \subsection{Blowups and the main theorems} The central notion for understanding the kernels of the semigroup homomorphisms $\bb L$ is that of a blowup: \begin{defn}[Blowups]\label{defn:blowup:coa} Let $\bm U = (U, f) \in \RV[k]$, where $k > 0$, such that, for some $j \leq k$, the restriction $\pr_{\tilde j} \upharpoonright f(U)$ is finite-to-one. Write $f = (f_1, \ldots, f_k)$. The \emph{elementary blowup} of $\bm U$ in the $j$th coordinate is the pair $\bm U^{\flat} = (U^{\flat}, f^{\flat})$, where $U^{\flat} = U \times \RV^{\circ \circ}_0$ and, for every $(t, s) \in U^{\flat}$, \[ f^{\flat}_{i}(t, s) = f_{i}(t) \text{ for } i \neq j \quad \text{and} \quad f^{\flat}_{j}(t, s) = s f_{j}(t). \] Note that $\bm U^{\flat}$ is an object in $\RV[{\leq} k]$ (actually in $\RV[k{-}1] \oplus \RV[k]$) because $f^{\flat}_{j}(t, 0) = 0$. Let $\bm V = (V, g) \in \RV[k]$ and $C \subseteq V$ be a definable set. Suppose that $F : \bm U \longrightarrow \bm C$ is an $\RV[k]$-morphism, where $\bm C = (C, g \upharpoonright C) \in \RV[k]$. Then \[ \bm U^{\flat} \uplus (V \smallsetminus C, g \upharpoonright (V \smallsetminus C)) \] is a \emph{blowup of $\bm V$ via $F$}, denoted by $\bm V^{\flat}_F$. The subscript $F$ is usually dropped in context if there is no danger of confusion. The object $\bm C$ (or the set $C$) is referred to as the \emph{locus} of $\bm V^{\flat}_F$. A \emph{blowup of length $n$} is a composition of $n$ blowups. \end{defn} \begin{rem} In an elementary blowup, the condition that the coordinate of interest is definably dependent (the coordinate projection is finite-to-one) on the other ones is needed so that the resulting objects stay in $\RV[{\leq} k]$. In the setting of \cite{hrushovski:kazhdan:integration:vf}, this condition is also needed for matching blowups with special bijections, since, otherwise, we would not be able to use (a generalization of) Hensel's lemma to find enough centers of $\RV$-discs to construct focus maps. 
In our setting, Lemma~\ref{RVlift} plays the role of Hensel's lemma, which is more powerful, and hence ``algebraicity'' is no longer needed for this purpose (see Lemma~\ref{blowup:same:RV:coa}). \end{rem} If there is an elementary blowup of $(U, f) \in \RV[k]$ then, \textit{a posteriori}, $\dim_{\RV}(f(U)) < k$. Also, there is at most one elementary blowup of $(U, f)$ with respect to any coordinate of $f(U)$. We should have included the coordinate that is blown up as a part of the data. However, in context, either this is clear or it does not need to be spelled out, and we shall suppress mentioning it below for notational ease. \begin{lem}\label{blowup:equi:class:coa} Let $\bm U, \bm V \in \RV[{\leq} k]$ such that $[\bm U] = [\bm V]$ in $\gsk \RV[{\leq} k]$. Let $\bm U_1$, $\bm V_1$ be blowups of $\bm U$, $\bm V$ of lengths $m$, $n$, respectively. Then there are blowups $\bm U_2$, $\bm V_2$ of $\bm U_1$, $\bm V_1$ of lengths $n$, $m$, respectively, such that $[\bm U_2] = [\bm V_2]$. \end{lem} \begin{proof} Fix an isomorphism $I: \bm U \longrightarrow \bm V$. We do induction on the sum $l = m + n$. For the base case $l = 1$, without loss of generality, we may assume $n = 0$. Let $C$ be the blowup locus of $\bm U_1$. Clearly $\bm V$ may be blown up by using the same elementary blowup as $\bm U_1$, where the blowup locus is changed to $I(C)$, and the resulting blowup is as required. 
\[ \bfig \square(0,0)/.`=``./<500,900>[\bm U`\bm U^{\flat}` \bm V`\bm V^{\flat};1```1] \square(500,0)/.```./<1000,900>[\bm U^{\flat}`\bm U_1` \bm V^{\flat}`\bm V_1; m - 1```n - 1] \morphism(500,900)/./<500,-300>[\bm U^{\flat}`\bm U^{\flat\flat};1] \morphism(500,0)/./<500,300>[\bm V^{\flat}` \bm V^{\flat\flat};1] \morphism(1000,300)/=/<0,300>[\bm V^{\flat\flat}` \bm U^{\flat\flat};] \morphism(1500,900)/./<500,0>[\bm U_1`\bm U_1^{\flat};1] \morphism(1500,0)/./<500,0>[\bm V_1`\bm V_1^{\flat};1] \morphism(1000,600)/./<1000,0>[\bm U^{\flat\flat}`\bm U^{\flat3}; m - 1] \morphism(1000,300)/./<1000,0>[\bm V^{\flat\flat}`\bm V^{\flat3}; n - 1] \morphism(2000,900)/=/<0,-300>[\bm U_1^{\flat}`\bm U^{\flat3};] \morphism(2000,0)/=/<0,300>[\bm V_1^{\flat}`\bm V^{\flat3};] \morphism(2000,600)/./<1000,0>[\bm U^{\flat3}`\bm U^{\flat4};n - 1] \morphism(2000,300)/./<1000,0>[\bm V^{\flat3}`\bm V^{\flat4};m - 1] \morphism(3000,300)/=/<0,300>[\bm V^{\flat4}`\bm U^{\flat4};] \morphism(2000,900)/./<1000,0>[\bm U_1^{\flat}`\bm U_2;n - 1] \morphism(2000,0)/./<1000,0>[\bm V_1^{\flat}`\bm V_2;m - 1] \morphism(3000,0)/=/<0,300>[\bm V_2`\bm V^{\flat4};] \morphism(3000,900)/=/<0,-300>[\bm U_2`\bm U^{\flat4};] \efig \] We proceed to the inductive step. How the isomorphic blowups are constructed is illustrated above. Write $\bm U = (U, f)$ and $\bm V = (V, g)$. Let $\bm U^{\flat}$, $\bm V^{\flat}$ be the first blowups in $\bm U_1$, $\bm V_1$ and $C$, $D$ their blowup loci, respectively. Let $\bm U'^{\flat}$, $\bm V'^{\flat}$ be the corresponding elementary blowups contained in $\bm U^{\flat}$, $\bm V^{\flat}$. If, say, $n = 0$, then by the argument in the base case $\bm V$ may be blown up to an object that is isomorphic to $\bm U^{\flat}$ and hence the inductive hypothesis may be applied. So assume $m,n > 0$. Let $A = C \cap I^{-1}(D)$ and $B = I(C) \cap D$. 
Since $(A, f \upharpoonright A)$ and $(B, g \upharpoonright B)$ are isomorphic, the blowups of $\bm U'$, $\bm V'$ with the loci $(A, f \upharpoonright A)$ and $(B, g \upharpoonright B)$ are isomorphic. Then, it is not hard to see that the blowup $\bm U^{\flat\flat}$ of $\bm U^{\flat}$ using the locus $I^{-1}(D) \smallsetminus C$ and its corresponding blowup of $\bm V'$ and the blowup $\bm V^{\flat\flat}$ of $\bm V^{\flat}$ using the locus $I(C) \smallsetminus D$ and its corresponding blowup of $\bm U'$ are isomorphic. Applying the inductive hypothesis to the blowups $\bm U^{\flat\flat}$, $\bm U_1$ of $\bm U^{\flat}$, we obtain a blowup $\bm U^{\flat3}$ of $\bm U^{\flat\flat}$ of length $m - 1$ and a blowup $\bm U_1^{\flat}$ of $\bm U_1$ of length $1$ such that they are isomorphic. Similarly, we obtain a blowup $\bm V^{\flat3}$ of $\bm V^{\flat\flat}$ of length $n - 1$ and a blowup $\bm V_1^{\flat}$ of $\bm V_1$ of length $1$ such that they are isomorphic. Applying the inductive hypothesis again to the blowups $\bm U^{\flat3}$, $\bm V^{\flat3}$ of $\bm U^{\flat\flat}$, $\bm V^{\flat\flat}$, we obtain a blowup $\bm U^{\flat4}$ of $\bm U^{\flat3}$ of length $n - 1$ and a blowup $\bm V^{\flat4}$ of $\bm V^{\flat3}$ of length $m - 1$ such that they are isomorphic. Finally, applying the inductive hypothesis to the blowups $\bm U^{\flat4}$, $\bm U_1^{\flat}$ of $\bm U^{\flat3}$, $\bm U_1^{\flat}$ and the blowups $\bm V^{\flat4}$, $\bm V_1^{\flat}$ of $\bm V^{\flat3}$, $\bm V_1^{\flat}$, we obtain a blowup $\bm U_2$ of $\bm U_1^{\flat}$ of length $n - 1$ and a blowup $\bm V_2$ of $\bm V_1^{\flat}$ of length $m - 1$ such that $\bm U^{\flat4}$, $\bm U_2$, $\bm V^{\flat4}$, and $\bm V_2$ are all isomorphic. So $\bm U_2$, $\bm V_2$ are as desired. \end{proof} \begin{cor}\label{blowup:equi:class} Let $[\bm U] = [\bm U']$ and $[\bm V] = [\bm V']$ in $\gsk \RV[{\leq} k]$. If there are isomorphic blowups of $\bm U$, $\bm V$ then there are isomorphic blowups of $\bm U'$, $\bm V'$. 
\end{cor} \begin{defn}\label{defn:isp} Let $\isp[k]$ be the set of pairs $(\bm U, \bm V)$ of objects of $\RV[{\leq} k]$ such that there exist isomorphic blowups $\bm U^{\flat}$, $\bm V^{\flat}$. Set $\isp[*] = \bigcup_{k} \isp[k]$. \end{defn} We will just write $\isp$ for all these sets when there is no danger of confusion. By Corollary~\ref{blowup:equi:class}, $\isp$ may be regarded as a binary relation on isomorphism classes. \begin{lem}\label{isp:congruence:vol} $\isp[k]$ is a semigroup congruence relation and $\isp[*]$ is a semiring congruence relation. \end{lem} \begin{proof} Clearly $\isp[k]$ is reflexive and symmetric. If $([\bm U_1], [\bm U_2])$, $([\bm U_2], [\bm U_3])$ are in $\isp[k]$ then, by Lemma~\ref{blowup:equi:class:coa}, there are blowups $\bm U_1^{\flat}$ of $\bm U_1$, $\bm U_{2}^{\flat 1}$ and $\bm U_{2}^{\flat 2}$ of $\bm U_2$, and $\bm U_3^{\flat}$ of $\bm U_3$ such that they are all isomorphic. So $\isp[k]$ is transitive and hence is an equivalence relation. For any $[\bm W] \in \gsk \RV[l]$, the following are easily checked: \[ ([\bm U_1 \uplus \bm W], [\bm U_2 \uplus \bm W])\in \isp,\quad ([\bm U_1 \times \bm W], [\bm U_2 \times \bm W])\in \isp. \] These yield the desired congruence relations. \end{proof} Let $\bm U = (U, f)$ be an object of $\RV[k]$ and $T$ a special bijection on $\bb L \bm U$. The set $(T(\mathbb{L} \bm U))_{\RV}$ is simply denoted by $U_{T}$ and the object $(U_{T})_{\leq k} \in \RV[{\leq} k]$ by $\bm U_{T}$. \begin{lem}\label{special:to:blowup:coa} The object $\bm U_T$ is isomorphic to a blowup of $\bm U$ of the same length as $T$. \end{lem} \begin{proof} By induction on the length $\lh (T)$ of $T$ and Lemma~\ref{blowup:equi:class:coa}, this is immediately reduced to the case $\lh (T) = 1$. For that case, let $\lambda$ be the focus map of $T$. Without loss of generality, we may assume that the locus of $\lambda$ is $\mathbb{L} \bm U$. Then it is clear how to construct an (elementary) blowup of $\bm U$ as desired. 
\end{proof} \begin{lem}\label{kernel:dim:1:coa} Suppose that $[A] = [B]$ in $\gsk \VF[1]$ and $\bm U, \bm V \in \RV[{\leq} 1]$ are two standard contractions of $A$, $B$, respectively. Then $([\bm U], [\bm V]) \in \isp$. \end{lem} \begin{proof} By Lemma~\ref{simul:special:dim:1}, there are special bijections $T$, $R$ on $\bb L \bm U$, $\bb L \bm V$ such that $\bm U_{T}$, $\bm V_{R}$ are isomorphic. So the assertion follows from Lemma~\ref{special:to:blowup:coa}. \end{proof} \begin{lem}\label{blowup:same:RV:coa} Let $\bm U^{\flat}$ be a blowup of $\bm U = (U, f) \in \RV[{\leq} k]$ of length $l$. Then $\bb L \bm U^{\flat}$ is isomorphic to $\bb L \bm U$. \end{lem} \begin{proof} By induction on $l$ this is immediately reduced to the case $l=1$. For that case, without loss of generality, we may assume that $\pr_{\tilde 1} \upharpoonright f(U)$ is injective and $\bm U^{\flat}$ is an elementary blowup in the first coordinate. So it is enough to show that there is a focus map into the first coordinate with locus $f(U)^\sharp$. This is guaranteed by Hypothesis~\ref{hyp:point}. \end{proof} \begin{lem}\label{isp:VF:fiberwise:contract} Let $A'$, $A''$ be definable sets with $A'_{\VF} = A''_{\VF} \eqqcolon A \subseteq \VF^n$. Suppose that there is a $k \in \mathds{N}$ such that, for every $a \in A$, $([A'_a]_{\leq k}, [A''_a]_{\leq k}) \in \isp$. Let $\hat T_{\sigma}$, $\hat R_{\sigma}$ be respectively standard contractions of $A'$, $A''$. Then \[ ([\hat T_{\sigma}(A')]_{\leq n+k}, [\hat R_{\sigma}(A'')]_{\leq n+k}) \in \isp. \] \end{lem} Note that the condition $([A'_a]_{\leq k}, [A''_a]_{\leq k}) \in \isp$ makes sense only over the substructure $\mdl S \langle a \rangle$. \begin{proof} By induction on $n$ this is immediately reduced to the case $n=1$. So assume $A \subseteq \VF$. Let $\phi'$, $\phi''$ be quantifier-free formulas that define $A'$, $A''$, respectively. 
Let $\theta$ be a quantifier-free formula such that, for every $a \in A$, $\theta(a)$ defines the necessary data (two blowups and an $\RV[*]$-morphism) that witness the condition $([A'_a]_{\leq k}, [A''_a]_{\leq k}) \in \isp$. Applying Corollary~\ref{special:bi:term:constant} to the top $\lan{T}{}{}$\nobreakdash-terms of $\phi'$, $\phi''$, and $\theta$, we obtain a special bijection $F: A \longrightarrow A^{\flat}$ such that $A^{\flat}$ is an $\RV$-pullback and, for all $\RV$-polydiscs $\mathfrak{p} \subseteq A^{\flat}$ and all $a_1, a_2 \in F^{-1}(\mathfrak{p})$, \begin{itemize} \item $A'_{a_1} = A'_{a_2}$ and $A''_{a_1} = A''_{a_2}$, \item $\theta(a_1)$ and $\theta(a_2)$ define the same data. \end{itemize} The second item implies that the data defined by $\theta$ over $F^{-1}(\mathfrak{p})$ is actually $\rv(\mathfrak{p})$-definable. Let $B' = \bigcup_{a \in A} F(a) \times A'_a$, similarly for $B''$. Note that $B'$, $B''$ are obtained through special bijections on $A'$, $A''$. For all $t \in A'_{\RV}$, $B'_t$ is an $\RV$-pullback that is $t$-definably bijective to the $\RV$-pullback $T_{\sigma}(A')_t$. By Lemma~\ref{kernel:dim:1:coa}, we have, for all $t \in A'_{\RV}$ \[ ([(B'_{\RV})_t]_1, [\hat T_{\sigma}(A')_t]_1) \in \isp \] and hence, by compactness, \[ ([B'_{\RV}]_{\leq k+1}, [\hat T_{\sigma}(A')]_{\leq k+1}) \in \isp. \] The same holds for $B''$ and $\hat R_{\sigma}(A'')$. On the other hand, by the second item above, for every $\RV$-polydisc $\mathfrak{p} \subseteq A^{\flat}$, we have $((B'_{\RV})_{\rv(\mathfrak{p})}, (B''_{\RV})_{\rv(\mathfrak{p})}) \in \isp$ and hence, by compactness, \[ ([B'_{\RV}]_{\leq k+1}, [B''_{\RV}]_{\leq k+1}) \in \isp. \] Since $\isp$ is a congruence relation, the lemma follows. \end{proof} \begin{cor}\label{contraction:same:perm:isp} Let $A', A'' \in \VF_*$ with exactly $n$ $\VF$-coordinates each and $f : A' \longrightarrow A''$ be a relatively unary bijection in the $i$th coordinate. 
Then for any permutation $\sigma$ of $[n]$ with $\sigma(1) = i$ and any standard contractions $\hat T_{\sigma}$, $\hat R_{\sigma}$ of $A'$, $A''$, \[ ([\hat T_{\sigma}(A')]_{\leq n}, [\hat R_{\sigma}(A'')]_{\leq n}) \in \isp. \] \end{cor} \begin{proof} This is immediate by Lemmas~\ref{kernel:dim:1:coa} and \ref{isp:VF:fiberwise:contract}. \end{proof} The following lemma is essentially a version of Fubini's theorem (also see Theorem~\ref{semi:fubini} below). \begin{lem}\label{contraction:perm:pair:isp} Let $A \in \VF_*$ with exactly $n$ $\VF$-coordinates. Suppose that $i, j \in [n]$ are distinct and $\sigma_1$, $\sigma_2$ are permutations of $[n]$ such that \[ \sigma_1(1) = \sigma_2(2) = i, \quad \sigma_1(2) = \sigma_2(1) = j, \quad \sigma_1 \upharpoonright \set{3, \ldots, n} = \sigma_2 \upharpoonright \set{3, \ldots, n}. \] Then, for any standard contractions $\hat T_{\sigma_1}$, $\hat T_{\sigma_2}$ of $A$, \[ ([\hat T_{\sigma_1}(A)]_{\leq n}, [\hat T_{\sigma_2}(A)]_{\leq n}) \in \isp. \] \end{lem} \begin{proof} Let $ij$, $ji$ denote the permutations of $E \coloneqq \{i, j\}$. By compactness and Lemma~\ref{isp:VF:fiberwise:contract}, it is enough to show that, for any $a \in \pr_{\tilde E}(A)$ and any standard contractions $\hat T_{ij}$, $\hat T_{ji}$ of $A_a$, \[ ([\hat T_{ij}(A_a)]_{\leq 2}, [\hat T_{ji}(A_a)]_{\leq 2}) \in \isp. \] To that end, fix an $a \in \pr_{\tilde E}(A)$. By Lemma~\ref{subset:partitioned:2:unit:contracted}, there are a definable bijection $f$ on $A_a$ that is relatively unary in both $\VF$-coordinates and standard contractions $\hat R_{ij}$, $\hat R_{ji}$ of $f(A_a)$ such that \[ [\hat R_{ij}(f(A_a))]_{\leq 2} = [\hat R_{ji}(f(A_a))]_{\leq 2}. \] So the desired property follows from Corollary~\ref{contraction:same:perm:isp}. \end{proof} The following proposition is the culmination of the preceding technicalities; it identifies the congruence relation $\isp$ with that induced by $\bb L$. 
\begin{prop}\label{kernel:L} For $\bm U, \bm V \in \RV[{\leq} k]$, \[ [\bb L \bm U] = [\bb L \bm V] \quad \text{if and only if} \quad ([\bm U], [\bm V]) \in \isp. \] \end{prop} \begin{proof} The ``if'' direction simply follows from Lemma~\ref{blowup:same:RV:coa} and Proposition~\ref{L:sur:c}. For the ``only if'' direction, we show a stronger claim: if $[A] = [B]$ in $\gsk \VF_*$ and $\bm U, \bm V \in \RV[{\leq} k]$ are two standard contractions of $A$, $B$ then $([\bm U], [\bm V]) \in \isp$. We do induction on $k$. The base case $k = 1$ is of course Lemma~\ref{kernel:dim:1:coa}. For the inductive step, suppose that $F : \bb L \bm U \longrightarrow \bb L \bm V$ is a definable bijection. By Lemma~\ref{bijection:partitioned:unary}, there is a partition of $\bb L \bm U$ into definable sets $A_1, \ldots, A_n$ such that each restriction $F_i = F \upharpoonright A_i$ is a composition of relatively unary bijections. Applying Corollary~\ref{special:bi:term:constant} as before, we obtain two special bijections $T$, $R$ on $\bb L \bm U$, $\bb L \bm V$ such that $T(A_i)$ and $(R \circ F)(A_i)$ are $\RV$-pullbacks for each $i$. By Lemma~\ref{special:to:blowup:coa}, it is enough to show that, for each $i$, there are standard contractions $\hat T_{\sigma}$, $\hat R_{\tau}$ of $T(A_i)$, $(R \circ F)(A_i)$ such that \[ ([(\hat T_{\sigma} \circ T)(A_i)]_{\leq k}, [(\hat R_{\tau} \circ R \circ F)(A_i)]_{\leq k}) \in \isp. \] To that end, first note that each $(R \circ F \circ T^{-1}) \upharpoonright T(A_i)$ is a composition of relatively unary bijections, say \[ T(A_i) = B_1 \to^{G_1} B_2 \cdots B_l \to^{G_l} B_{l+1} = (R \circ F)(A_i).
\] For each $j \leq l - 2$, we can choose five standard contractions \[ [U_j]_{\leq k}, \quad [U_{j+1}]_{\leq k}, \quad [U'_{j+1}]_{\leq k}, \quad [U''_{j+1}]_{\leq k}, \quad [U_{j+2}]_{\leq k} \] of $B_j$, $B_{j+1}$, $B_{j+1}$, $B_{j+1}$, $B_{j+2}$ with the permutations $\sigma_{j}$, $\sigma_{j+1}$, $\sigma'_{j+1}$, $\sigma''_{j+1}$, $\sigma_{j+2}$ of $[k]$, respectively, such that \begin{itemize} \item $\sigma_{j+1}(1)$ and $\sigma_{j+1}(2)$ are the $\VF$-coordinates targeted by $G_{j}$ and $G_{j+1}$, respectively, \item $\sigma''_{j+1}(1)$ and $\sigma''_{j+1}(2)$ are the $\VF$-coordinates targeted by $G_{j+1}$ and $G_{j+2}$, respectively, \item $\sigma_{j} = \sigma_{j+1}$, $\sigma''_{j+1} = \sigma_{j+2}$, and $\sigma'_{j+1}(1) = \sigma''_{j+1}(1)$, \item the relation between $\sigma_{j+1}$ and $\sigma'_{j+1}$ is as described in Lemma~\ref{contraction:perm:pair:isp}. \end{itemize} By Corollary~\ref{contraction:same:perm:isp} and Lemma~\ref{contraction:perm:pair:isp}, all the adjacent pairs of these standard contractions are $\isp$-congruent, except $([U'_{j+1}]_{\leq k}, [U''_{j+1}]_{\leq k})$. Since we can choose $[U'_{j+1}]_{\leq k}$, $[U''_{j+1}]_{\leq k}$ so that they start with the same contraction in the first targeted $\VF$-coordinate of $B_{j+1}$, the resulting sets from this step are the same. So, applying the inductive hypothesis in each fiber over the just contracted coordinate, we see that this last pair is also $\isp$-congruent. This completes the ``only if'' direction. \end{proof} This proposition shows that the semiring congruence relation on $\gsk \RV[*]$ induced by $\bb L$ is generated by the pair $([1], \bm 1_{\K} + [(\RV^{\circ \circ}, \id)])$ and hence its corresponding ideal in the graded ring $\ggk \RV[*]$ is generated by the element $\bm 1_{\K} + [\bm P]$ (see Notation~\ref{nota:RV:short} and Remark~\ref{gam:res}). 
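As a concrete illustration of how this generating pair arises (a minimal sketch; the choice of the one-point object is ours, for illustration only): take $k = 1$ and $\bm U = (U, f)$ with $U = \{1\}$ and $f = \id$, so that $[\bm U] = [1]$ and the finite-to-one condition of Definition~\ref{defn:blowup:coa} holds trivially. The elementary blowup is then

```latex
% Elementary blowup of the one-point object (U, id) with U = {1}:
\[
\bm U^{\flat}
  = \bigl(\{1\} \times \RV^{\circ\circ}_0,\; (1, s) \longmapsto s \cdot 1\bigr)
  \cong \bigl(\RV^{\circ\circ}_0,\; \id\bigr).
\]
% The fibre over s = 0 is sent to 0 and hence lands in RV[0], while the
% remainder is the object (RV^{oo}, id) of RV[1]; so in the Grothendieck
% semigroup
\[
[\bm U^{\flat}] = \bm 1_{\K} + [(\RV^{\circ\circ}, \id)].
\]
```

By Lemma~\ref{blowup:same:RV:coa}, $\bb L \bm U^{\flat}$ is isomorphic to $\bb L \bm U$, which is the content of the congruence $([1], \bm 1_{\K} + [(\RV^{\circ\circ}, \id)]) \in \isp$.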
\begin{thm}\label{main:prop} For each $k \geq 0$ there is a canonical isomorphism of Grothendieck semigroups \[ \textstyle \int_{+} : \gsk \VF[k] \longrightarrow \gsk \RV[{\leq} k] / \isp \] such that \[ \textstyle \int_{+} [A] = [\bm U]/ \isp \quad \text{if and only if} \quad [A] = [\bb L\bm U]. \] Putting these together, we obtain a canonical isomorphism of Grothendieck semirings \[ \textstyle \int_{+} : \gsk \VF_* \longrightarrow \gsk \RV[*] / \isp. \] \end{thm} \begin{proof} This is immediate by Corollary~\ref{L:sur:c} and Proposition~\ref{kernel:L}. \end{proof} \begin{thm}\label{thm:ring} The Grothendieck semiring isomorphism $\int_+$ naturally induces a ring isomorphism: \[ \textstyle \Xint{\textup{G}} : \ggk \VF_* \to \ggk \RV[*] / (\bm 1_{\K} + [\bm P]) \to^{\bb E_{\Gamma}} \mathds{Z}^{(2)}[X], \] and two ring homomorphisms onto $\mathds{Z}$: \[ \textstyle \Xint{\textup{R}}^g, \Xint{\textup{R}}^b: \ggk \VF_* \to \ggk \RV[*] / (\bm 1_{\K} + [\bm P]) \two^{\bb E_{\Gamma, g}}_{\bb E_{\Gamma, b}} \mathds{Z}. \] \end{thm} \begin{proof} This is just a combination of Theorem~\ref{main:prop} and Remark~\ref{rem:poin} (or Proposition~\ref{prop:eu:retr:k}). \end{proof} Let $F$ be a definable set with $A \coloneqq F_{\VF} \subseteq \VF^n$. Then $F$ may be viewed as a representative of a \emph{definable} function $\bm F : A \longrightarrow \gsk \RV[*] / \isp$ given by $a \longmapsto [F_a] / \isp$. Note that the class $[F_a]$ depends on the parameter $a$ and hence can only be guaranteed to lie in the semiring $\gsk \RV[*]$ constructed over $\mdl S \langle a \rangle$ instead of $\mdl S$, but we abuse the notation. Similarly, for distinct $a, a' \in A$, there is a priori no way to compare $[F_a]$ and $[F_{a'}]$ unless we work over the substructure $\mdl S \langle a, a' \rangle$; given another definable set $G$ with $A = G_{\VF}$, the corresponding definable function $\bm G$ is the same as $\bm F$ if $\bm G(a) = \bm F(a)$ over $\mdl S \langle a \rangle$ for all $a \in A$. 
The set of all such functions is denoted by $\fn_+(A)$, which is a semimodule over $\gsk \RV[*] / \isp$. Let $E \subseteq [n]$ be a nonempty set. Then, for each $a \in \pr_{E}(A)$, the definable function in $\fn_+(A_a)$ represented by $F_a$ is denoted by $\bm F_a$. Let $\bb L F = \bigcup_{a \in A} a \times F_a^\sharp$ and then set $\int_{+A} \bm F = \int_+ [\bb L F]$, which, by Proposition~\ref{kernel:L} and compactness, does not depend on the representative $F$. Thus there is a canonical homomorphism of semimodules: \[ \textstyle \int_{+A} : \fn_+(A) \longrightarrow \gsk \RV[*] / \isp. \] \begin{thm}\label{semi:fubini} For all $\bm F \in \fn_+(A)$ and all nonempty sets $E, E' \subseteq [n]$, \[ \textstyle \int_{+ a \in \pr_{E}(A)} \int_{+ A_a} \bm F_a = \int_{+ a \in \pr_{E'}(A)} \int_{+ A_a} \bm F_a. \] \end{thm} \begin{proof} This is clear since both sides equal $\int_{+A} \bm F$. \end{proof}
\section*{ABSTRACT} The Cosmic Neutrino Background (\textbf{CNB}) consists of primordial neutrinos that decoupled when the Universe was very young. Its detection is complicated, especially if we take into account the neutrino mass and a possible breaking of Lorentz invariance at high energy, but it has a fundamental relevance for the study of the Big Bang. In this paper we will see that a Lorentz violation does not produce important modifications, but the mass does. We will show that the current neutrino velocity, with respect to the comoving frame of the Universe expansion, is of the order of $1065$ $\left[\frac{km}{s}\right]$, much less than the velocity of light. Besides, we will see that the neutrino distribution is complex due to the planetary motion. This prediction differs totally from the usual massless case, where we would get a correction similar to the Dipolar Moment of the \textbf{CMB}.\\ \section*{INTRODUCTION} From the beginning, the photons and all other particles were coupled, forming a plasma that evolved under the influence of the Universe expansion. At the epoch when the photons dominated the expansion, the neutrinos decoupled from the plasma and evolved independently. One of the most recent discoveries about the neutrino is its mass. This has relevant effects in the Standard Model and in some of its characteristics, distinguishing the neutrino from the photon. One of them, which we will study, is its velocity. Thus, we will analyze the evolution of the neutrinos' kinetic energy from their decoupling until today.\\ Another phenomenon that we will study, directly related to the first one, is the neutrino distribution. The detection of the Cosmic Microwave Background of photons (\textbf{CMB}) is the best proof of the Big-Bang scenario \cite{abg}; it has helped to check or refute models that describe it and to study the composition of the Universe.
Because of this, it is important to study the Cosmic Neutrino Background (\textbf{CNB}), especially the form of its distribution function, in order to account for the effect of the peculiar velocity of the planet (named the Dipolar Moment in the \textbf{CMB}) and to optimize the detection. This is already complicated due to the weak interaction that neutrinos have with ordinary matter. The calculation will be done for photons and neutrinos in parallel.\\ Finally, we will include a Lorentz Invariance Violation (LIV) represented by an alteration of the dispersion relation of the energy, given by \cite{liv,Alfaro1,Alfaro2,Alfaro3}: $$E^2 = v_{max}^2p^2 + m^2c^4$$ where $v_{max} = c(1-\alpha)$ is the maximum attainable particle velocity, with $\alpha \sim (10^{-22}-10^{-23})$. The motivation to use this LIV comes from the possibility that, at the high energies available in the Big Bang, some LIV takes place due to Quantum Gravity \cite{liv,Alfaro2,Alfaro3}. If such a LIV exists, the first problem is the appearance of a privileged reference system, but fortunately a natural candidate exists: the one where the \textbf{CMB} is isotropic. A LIV without a preferred frame, as in Double Special Relativity \cite{dsr}, will not be considered here. \\ \section{ENERGY AND VELOCITY OF PRIMORDIAL NEUTRINOS.} Initially the neutrinos were in thermal equilibrium with the rest of matter. For this, it is necessary that $\Gamma_{i} \gg H$, where $\Gamma_{i}$ is the interaction rate of the species $i$, $H \propto T^2$ is Hubble's constant and $T$ the temperature. While the neutrinos are kept in equilibrium, their distribution is given by Fermi-Dirac statistics: $$f_{eq}(E,T) = \frac{1}{e^{\frac{E-\mu}{k_BT}}+1}$$ During the cosmic expansion, the temperature diminishes down to a point where $\Gamma_{\nu} \lesssim H$ and $\Gamma_{i \neq \nu} \gg H$. This means that the neutrinos left equilibrium and decoupled from the rest of the matter.
We will call $T_{\nu,D}$ the neutrino decoupling temperature, obtained when we impose $\Gamma_{\nu} \simeq H(T)$. To see what happens with the distribution, we perform the following analysis. At a time $t_0$, an observer sees in any direction a quantity $dN = f d^3r d^3p$ of neutrinos in a volume $d^3r$ and with momenta between $\vec{p}$ and $\vec{p} +d\vec{p}$. After a time $dt$, the neutrinos have not interacted, so $dN$ remains constant, but the volume in which they are contained has increased by a factor $\left(\frac{R(t_0+dt)}{R(t_0)} \right)^3$ and the momenta have diminished by $\frac{R(t_0)}{R(t_0+dt)}$, because of the expansion of the Universe. This means that $f(E,T_{\nu})$ is constant in time. Therefore, for $t> t_D$ (or $T_{\nu} < T_{\nu,D}$), with $t_D$ the moment at which the decoupling is produced, the distribution function is given by \cite{Early uni}: \begin{equation} \label{feq} f[E(p(t)),T_{\nu}(t)] = f_{eq}[E(p_D),T_{\nu,D}] = f_{eq}\left[E\left(p(t)\frac{R(t)}{R_D}\right),T_{\nu,D}\right] \end{equation} where the subscript $D$ refers to the age of decoupling. In addition, we know that the number of neutrinos, the total energy and the energy per neutrino are given by: \begin{equation} \label{Nf} N_{\nu} = \frac{gV}{(2\pi\hbar)^3} \int f(p,T_{\nu})d^3p \end{equation} \begin{equation} \label{Ef} E_{\nu} = \frac{gV}{(2\pi\hbar)^3} \int E(p) f(p,T_{\nu})d^3p \end{equation} \begin{equation} \label{ef} \varepsilon_{\nu} = \frac{E_{\nu}}{N_{\nu}} \end{equation} where: $$E^2(p) = v_{max}^2 p^2 + m^2c^4$$ to allow for a small LIV in the dispersion relation.\\ Now we can determine the distribution function that the neutrinos will have after being decoupled.
It is possible to express the energy of the neutrinos (high energies and small masses) as $E(t) = v_{max,\nu} p(t)$ during the decoupling (we use an expansion of zeroth order in the mass because $f$ depends exponentially on $E$), and since $p_D = p(t) \frac{R(t)}{R_D}$ we obtain, using (\ref{feq}): \begin{equation} \label{f_Rel} f[p,T_{\nu}] = \frac{1}{e^{\frac{v_{max,\nu}p}{k_BT_{\nu}}}+1} \end{equation} with $T_{\nu} = T_{\nu, D} \frac{R_D}{R(t)}$ and $\mu_{\nu} =0$ because of the weak interaction that the neutrinos have with matter. This means that the distribution of neutrinos after decoupling is a Fermi distribution with temperature $T_{\nu}$, therefore $RT_{\nu} = cte$. Replacing it in (\ref{Nf}) and (\ref{Ef}): $$N_{\nu} = \frac{gV}{(2\pi\hbar)^3} \int \frac{1}{e^{\frac{v_{max,\nu}p}{k_BT_{\nu}}}+1} d^3p$$ $$E_{\nu} = \frac{gV}{(2\pi\hbar)^3} \int \frac{E(p)}{e^{\frac{v_{max,\nu}p}{k_BT_{\nu}}}+1} d^3p$$ Naturally, $N_{\nu}$ will be constant in time. Using the change of variable $x = \frac{v_{max,\nu} p}{k_BT_{\nu}}$, we obtain: \begin{equation} \label{N} N_{\nu} = \frac{3gV\zeta(3)(k_BT_{\nu})^3}{4\pi^2\hbar^3v_{max,\nu}^3} \end{equation} where $\zeta(3) = 1.2021$ is Riemann's zeta function evaluated at $3$. We see that, in fact, $N_{\nu}$ is kept constant in time because $V \propto R^{3}(t)$ and $T_{\nu} \propto R^{-1}(t)$. To determine $E_{\nu}$, we must compute the integral, which is complicated in the general case. Thus, we will analyze the extreme cases where the neutrinos remain relativistic and where they do not. Due to the spherical symmetry, the velocity is only radial, therefore we need only determine its modulus.
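As a quick sanity check (a Python sketch, not part of the original derivation), the Fermi integral behind (\ref{N}) can be evaluated with a simple quadrature and compared against its closed form $\int_0^\infty x^2/(e^x+1)\,dx = \tfrac{3}{2}\zeta(3)$:

```python
import math

def fermi_integral(n, upper=60.0, steps=200_000):
    # Trapezoidal evaluation of int_0^upper x^n / (e^x + 1) dx.
    # The integrand decays like x^n e^{-x}, so upper = 60 is ample.
    h = upper / steps
    total = 0.0
    for i in range(1, steps):
        x = i * h
        total += x**n / (math.exp(x) + 1.0)
    return h * total

zeta3 = 1.2020569031595943
print(fermi_integral(2))   # numerical value of the integral
print(1.5 * zeta3)         # closed form (3/2) * zeta(3); the two agree
```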
The modulus of the velocity of a particle is given by: $$v = \frac{\partial \varepsilon}{\partial p}$$ where $\varepsilon$ and $p$ are the energy and the momentum of a particle, related by our dispersion relation: $$\varepsilon^2 = v_{max}^2 p^2 + m^2c^4$$ While the particle remains relativistic, expanding the derivative to second order in the mass ($\varepsilon \gg mc^2$), we obtain: \begin{equation} \label{vrel} v_{\nu} \simeq v_{max,\nu}\left(1-\frac{1}{2}\left(\frac{mc^2}{\varepsilon}\right)^2\right) \end{equation} Notice that we must use $E(p) = v_{max} p$ to calculate $E_{\nu}$, to the order of approximation in the mass that we are considering.\\ Now, if the particle becomes Non-Relativistic, the energy and the velocity of a particle to second order in the momentum ($pv_{max, \nu} \ll mc^2$) will be: $$\varepsilon_{\nu} \simeq mc^2 + \left(\frac{v_{max,\nu}}{c}\right)^2\frac{p^2}{2m}$$ \begin{equation} \label{vnorel} v_{\nu} = \frac{\partial \varepsilon}{\partial p} \simeq \left(\frac{v_{max,\nu}}{c}\right)^2\frac{p}{m} = v_{max,\nu}\sqrt{2\left(\frac{\varepsilon}{mc^2}-1\right)} \end{equation} where we see that, to keep the order in the momentum, the calculation must be done up to second order in the expression of $E(p)$, therefore $E(p) = mc^2 + \left(\frac{v_{max,\nu}}{c}\right)^2\frac{p^2}{2m}$.\\ \subsection{Relativistic Neutrinos} As we said, to determine $E_{\nu}$ we must use $E(p) = v_{max} p$.
With this, we obtain the expression:\\ \begin{equation} \label{Erelaun} E_{\nu} = \frac{7\pi^2gV(k_BT_{\nu})^4}{240\hbar^3v_{max,\nu}^3} \end{equation} Using (\ref{N}) and (\ref{Erelaun}) in (\ref{ef}) and (\ref{vrel}), we obtain: \begin{equation} \label{ener_rel} \varepsilon_{\nu} = \frac{7\pi^4}{180\zeta(3)}k_BT_{\nu} \end{equation} \begin{equation} \label{veldesrel_rel} v_{\nu} = v_{max,\nu}\left(1-\frac{1}{2}\left(\frac{180\zeta(3)m_{\nu}c^2}{7\pi^4k_BT_{\nu}}\right)^2\right) \end{equation} If we define the relative velocity between the neutrinos and the photons as $\Delta v = c - v_{\nu}$, the result is: $$\Delta v = \Delta v_{max}+\frac{v_{max,\nu}}{2}\left(\frac{180\zeta(3)m_{\nu}c^2}{7\pi^4k_BT_{\nu}}\right)^2$$ where $\Delta v_{max} = c - v_{max, \nu} = c\alpha_{\nu}$. We can see that this factor vanishes if the violation does not exist. Evaluating numerically: \begin{equation} \label{c-veldesrel_rel} \frac{\Delta v}{c} = \alpha_{\nu}\left(1-5.04 \times 10^{-2}\left(\frac{M_{\nu}}{k_BT_{\nu}}\right)^2\right)+5.04 \times 10^{-2}\left(\frac{M_{\nu}}{k_BT_{\nu}}\right)^2 \end{equation} where we have separated the LIV-dependent part from the rest and $M_{\nu} \equiv m_{\nu}c^2$.\\ \subsection{Non-Relativistic Neutrinos} In this case we have $E(p) = m_{\nu} c^2 + \left(\frac{v_{max, \nu}}{c}\right)^2\frac{p^2}{2m_{\nu}}$, therefore, when we evaluate $E_{\nu}$ using (\ref{N}), we obtain:\\ \begin{equation} \label{Eyanorel}E_{\nu} = N_{\nu}m_{\nu}c^2 \left(1 + \frac{1}{2}\left(\frac{k_BT_{\nu}}{m_{\nu}c^2}\right)^2 \frac{I_4}{I_2}\right) \end{equation} with $I_n = \int_0^{\infty} \frac{x^n}{e^x +1} dx = \left(1-\frac{1}{2^n} \right) n! \zeta(n+1)$.
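The numerical factor $5.04\times 10^{-2}$ quoted in (\ref{c-veldesrel_rel}) follows directly from the coefficient in (\ref{ener_rel}); a short Python check (the value of $\zeta(3)$ is hard-coded):

```python
import math

zeta3 = 1.2020569031595943
# Mean energy per relativistic neutrino: eps = coef * k_B * T_nu
coef = 7 * math.pi**4 / (180 * zeta3)
# Mass-correction factor in Delta v / c: (1/2) * (180 zeta(3) / (7 pi^4))^2
mass_factor = 0.5 / coef**2
print(coef)         # ~ 3.15
print(mass_factor)  # ~ 5.04e-2, as quoted in the text
```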
Then, evaluating in (\ref{ef}) and (\ref{vnorel}), we have: \begin{equation} \label{eneryanorel} \varepsilon_{\nu} = m_{\nu}c^2 \left(1 + 15\frac{\zeta(5)}{2\zeta(3)}\left(\frac{k_BT_{\nu}}{m_{\nu}c^2}\right)^2 \right) \end{equation} \begin{equation} \label{veldesrel_norel} v_{\nu} = v_{max,\nu}\sqrt{15\frac{\zeta(5)}{\zeta(3)}}\frac{k_BT_{\nu}}{M_{\nu}} \end{equation} giving a relative velocity: \begin{equation} \label{c-veldesrel_norel}\frac{\Delta v}{c} = \alpha_{\nu}\sqrt{15\frac{\zeta(5)}{\zeta(3)}}\frac{k_BT_{\nu}}{M_{\nu}} + \left(1-\sqrt{15\frac{\zeta(5)}{\zeta(3)}}\frac{k_BT_{\nu}}{M_{\nu}}\right) \end{equation} where we have separated the LIV part from the rest and $\zeta(5) = 1.0369$. Since the neutrino velocity cannot be higher than its maximum velocity, we must see for which temperatures this approximation is valid. We have $v_{\nu}> v_{max, \nu}$ if $k_BT_{\nu}> \sqrt{\frac{\zeta(3)}{15\zeta(5)}} M_{\nu}$. It means that the approximation is valid if $k_BT_{\nu} \ll \sqrt{\frac{\zeta(3)}{15\zeta(5)}} M_{\nu} \sim 0.28 M_{\nu}$.\\ \subsection{Numerical Results and Analysis} At the age of decoupling of the neutrinos, we know that $k_BT_{\nu,D} \simeq (2 - 4)$ [MeV] and, currently, $k_BT_{\nu, 0} = 1.68 \times 10^{-4}$ [eV]. In addition, from cosmological parameters we know \cite{masa}: $$\sum_{i} m_{\nu_i} \leq 0.17 [eV]$$ which clearly indicates that the neutrinos are relativistic at the moment of decoupling. There exist many estimations of the masses of the neutrinos, but none of them is very precise. Thus, we will use $m_{\nu} \simeq 0.17 [eV]$. This way, we are sure of staying within the correct limits and we will find the maximum effect that the mass could have on the velocity of the neutrinos. Moreover, none of these estimations is below $k_BT_{\nu,0}$, therefore the neutrinos are Non-Relativistic nowadays.\\ Before discussing the results, we will analyze the effect of the LIV.
For this, we compare our relativistic expressions with our Non-Relativistic ones. If we observe these expressions, we can see that both are proportional to $v_{max, \nu}$, which is the only quantity that depends on $\alpha_{\nu}$. It means that the percentage difference between the cases with and without LIV is always: $$\frac{v_{\nu}(0) - v_{\nu}(\alpha_{\nu})}{v_{\nu}(0)}100\% = \alpha_{\nu} 100\% = 1 \times 10^{-20} \%$$ Therefore, it is not possible that this LIV has an important effect on the neutrinos, so we will continue our calculations using $\alpha=0$.\\ Previously we mentioned that our Non-Relativistic approximation is valid if $\frac{k_BT_{\nu}}{M_{\nu}} \leq 0.28$. At present we have $\frac{k_BT_{\nu}}{M_{\nu}} \simeq 10^{-3}$, fulfilling the Non-Relativistic bound, but with a mass $100$ times smaller ($\sim 2 \times 10^{-3}$ [eV]) the bound is not respected. However, such a mass does not correspond to the relativistic case either. To remain relativistic, we would need a mass $10000$ times smaller or less ($\sim 2 \times 10^{-5}$ [eV]).\\ In Figure \ref{graf_vel_rel} the evolution of the velocity of the neutrinos due to the expansion of the Universe is represented graphically. We define the dimensionless quantities $z = \frac{M_{\nu}}{k_BT_{\nu}}$, $y =\frac{v_{\nu}}{c}$. It is indicated in the graph that time grows towards bigger values of $z$. Clearly, we see that the neutrinos suffer a rapid deceleration from the time of decoupling. Then, this deceleration begins to diminish slowly, and the velocity approaches zero.\\ All the estimations of $M_{\nu}$ indicate that we are in a zone dominated by the Non-Relativistic approximation. The estimation for the smallest masses ranges between $10^{-4}$ and $10^{-3}$ [eV]. Remembering that our upper limit is $0.17$ [eV], we see that we are currently in the region $0.6 < z <1012$, which is a very wide range.
Evaluating numerically in (\ref{veldesrel_norel}), we obtain $v_{\nu} = 3.55 \times 10^{-3} c = 1065$ [$\frac{km}{s}$], with a mass of $0.17$ [eV]. This velocity will be bigger if we use smaller neutrino masses.\\ Up to now, we have assumed that the neutrinos are not affected by the galactic potential; they are free particles and are not relics bound to the Milky Way \cite{white1}-\cite{white2}. To check this point, we consider the relation between the kinetic and potential energy of a neutrino in the Milky Way. That is: $$\frac{m_{\nu}v^2}{2} = \frac{GMm_{\nu}}{R}$$ $$v = \sqrt{\frac{2GM}{R}}$$ where $v$ is the limit velocity at which the potential energy is comparable with the kinetic energy. Evaluating with $M \simeq 2 \times 10^{42}$ [kg] and $R \simeq 4.7 \times 10^{20}$ [m], the mass and radius of the Milky Way respectively, we obtain $v \simeq 754$ $\left[\frac{km}{s} \right]$. Since $v_{\nu} \gtrsim 1000$ $\left[\frac{km}{s} \right]$, our assumption is justified.\\ \section{THE CNB DISTRIBUTION.} To determine the effective neutrino distribution (the distribution seen from Earth), we need to use equation (\ref{f_Rel}) in the comoving system of the Universe. In addition, we will use the photon distribution at decoupling, that is: $$f(p,T) = \frac{1}{e^{\frac{pc}{kT}}-1}$$ which corresponds to an ultra-relativistic Bose-Einstein distribution with $RT = cte$. Currently, it has a temperature of $2.73$ [K]. Moreover, we saw in the previous section that a LIV of the form: $$E^2 = v_{max}^2p^2 + m^2c^4$$ does not produce a marked difference in the energy and velocity in comparison to the usual dispersion relation ($v_{max} = c$). Because of this, the usual rules of special relativity are valid, for instance the Lorentz transformations for the space-time and energy-momentum coordinates. Then, we can compare the neutrino distribution in the comoving system and on Earth.
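Both velocity scales estimated above, the present neutrino velocity from (\ref{veldesrel_norel}) with $\alpha = 0$ and the galactic limit velocity, can be reproduced with a short Python sketch (rounded constants):

```python
import math

zeta3, zeta5 = 1.2020569031595943, 1.0369277551433699
kT_nu0 = 1.68e-4            # current neutrino temperature k_B T_nu,0 [eV]
M_nu   = 0.17               # assumed neutrino mass m_nu c^2 [eV]
c_kms  = 2.998e5            # speed of light [km/s]

# v_nu = sqrt(15 zeta(5)/zeta(3)) * (k_B T_nu / M_nu) * c
v_nu = math.sqrt(15 * zeta5 / zeta3) * (kT_nu0 / M_nu) * c_kms
print(v_nu)                 # ~ 1065 km/s

# Velocity at which the Milky Way's potential energy matches the kinetic energy
G = 6.674e-11               # m^3 kg^-1 s^-2
M_gal, R_gal = 2e42, 4.7e20 # galactic mass [kg] and radius [m]
v_lim = math.sqrt(2 * G * M_gal / R_gal) / 1e3
print(v_lim)                # ~ 754 km/s
```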
It is possible to demonstrate (see Appendix): \begin{equation} \label{f=f'} f'(p',T') = f(p,T) \end{equation} \begin{equation} \label{E(E')} E = \gamma (E' - v_t p'\cos(\theta')) \end{equation} where the primed quantities refer to the reference system of the Earth and the unprimed ones to the comoving system. $\theta'$ is the angle between the line of sight and the direction of the Earth's motion, and $v_t$ is the Earth's velocity \cite{vel pec}. We can see that the distribution function is invariant under Lorentz transformations and that the energy changes with the viewing angle.\\ Now we will analyze some cases: first, the photons of the \textbf{CMB}, to guide us, because they are already very well known, and secondly, the neutrinos. Using expression (\ref{E(E')}) we will determine $p$ as a function of $p'$.\\ \subsection{Photons} In this case, we have $E = cp$, therefore expression (\ref{E(E')}) reduces to:\\ $$p = \frac{1 - \frac{v_t}{c}\cos(\theta')}{\sqrt{1-\left(\frac{v_t}{c}\right)^2}}p'$$ Replacing in (\ref{f=f'}), we obtain: $$f'(p',T'_{\gamma}) = f\left(\frac{1-\frac{v_t}{c}\cos(\theta')}{\sqrt{1-\left(\frac{v_t}{c}\right)^2}}p',T_{\gamma}\right)$$ As the photons, after being decoupled, keep a distribution of the form: $$f_{\gamma} = \frac{1}{e^{\frac{pc}{k_BT_{\gamma}}}-1}$$ we can rewrite our expression as: $$f'(p',T'_{\gamma}) = f\left(p',T_{\gamma}\frac{\sqrt{1-\left(\frac{v_t}{c}\right)^2}}{1 - \frac{v_t}{c}\cos(\theta')}\right)$$ Therefore, the photon distribution detected from Earth, $f'$, in a specific direction, will be of the same form as the one detected in the comoving system of the Universe, but with a different temperature given by: $$T'_{\gamma} = T_{\gamma}\frac{\sqrt{1-\left(\frac{v_t}{c}\right)^2}}{1 - \frac{v_t}{c}\cos(\theta')}$$ If we consider that $v_t \ll c$, we have: $$T'_{\gamma} \simeq T_{\gamma}\left(1 + \frac{v_t}{c}\cos(\theta')\right)$$ \begin{equation} \label{Dipol} \frac{\Delta T_{\gamma}}{T_{\gamma}} \simeq
\frac{v_t}{c}\cos(\theta') \end{equation} which is known as the Dipolar Moment and is of the order of $10^{-3}$.\\ \subsection{Neutrinos} Now we have particles with mass. Currently, the neutrinos are Non-Relativistic, therefore $E = m_{\nu} c^2 + \frac{p^2}{2m_{\nu}}$ for both the comoving and Earth systems. Evaluating in (\ref{E(E')}) and using the approximation $v_t \ll c$ up to second order in $p'$ and $v_t$, we obtain: $$p^2 = p'^2 - 2m_{\nu}v_tp'\cos(\theta') + m_{\nu}^2v_t^2$$ Evaluating in (\ref{f=f'}), we have: \begin{equation} \label{f'_neutrin} f'(p',T'_{\nu}) = f\left(\sqrt{p'^2 - 2m_{\nu}v_tp'\cos(\theta') + m_{\nu}^2v_t^2},T_{\nu}\right) \end{equation} In this case it is impossible to find a relation between $T'_{\nu}$ and $T_{\nu}$, but we know that the distribution is given by (\ref{f_Rel}). Therefore, we can only display the effects graphically. To facilitate our analysis, it will be helpful to define the number of neutrinos per solid angle $d\Omega'$ and momentum as: $$\frac{dN}{d\Omega'} = \frac{gV}{(2\pi \hbar)^3}f'(p',T'_{\nu})p'^2dp'$$ With this, we can obtain the distribution function of the number of particles: \begin{equation} \label{F'} F'(p',T'_{\nu}) = \frac{gV}{(2\pi \hbar)^3}f'(p',T'_{\nu})p'^2 \end{equation} In our case, the distribution function $F'$ will be: $$F'(p',T'_{\nu}) \propto \frac{p'^2}{e^{\frac{\sqrt{p'^2 - 2m_{\nu}v_tp'\cos(\theta') + m_{\nu}^2v_t^2}c}{k_BT_{\nu}}}+1}$$ \subsection{ANALYSIS} To carry out our analysis, it is useful to introduce the dimensionless variables $x = \frac{p'c}{k_BT_{\nu}}$, $a(\theta') = \frac{m_{\nu} v_tc}{k_BT_{\nu}} \cos(\theta')$ and $b = a(0)$. With these parameters, our distribution is: \begin{equation} \label{F_rel}F' \propto \frac{x^2}{e^{\sqrt{x^2 - 2ax + b^2}}+1} \end{equation} Considering a terrestrial velocity $v_t \simeq 300$ [$\frac{km}{s}$], we can see that $b \simeq 1$ for $M_{\nu} = 0.17$ [eV]. It means that $-1 \leq a(\theta') \leq 1$.
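The claim $b \simeq 1$, and the angular asymmetry of (\ref{F_rel}), can be checked numerically (a Python sketch using the values quoted in the text):

```python
import math

M_nu   = 0.17             # neutrino mass m_nu c^2 [eV] (upper bound used here)
kT_nu  = 1.68e-4          # current neutrino temperature [eV]
beta_t = 300.0 / 2.998e5  # Earth's velocity v_t / c

b = M_nu * beta_t / kT_nu # b = a(0)
print(b)                  # ~ 1 for M_nu = 0.17 eV, as claimed

def F(x, a):
    # Dimensionless distribution F' ~ x^2 / (exp(sqrt(x^2 - 2 a x + b^2)) + 1)
    return x**2 / (math.exp(math.sqrt(x*x - 2*a*x + b*b)) + 1.0)

# More neutrinos arrive along the Earth's motion (a = +b) than against it:
print(F(2.3, b) > F(2.3, -b))  # True
```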
This range will be smaller if we use a smaller mass, but then the Non-Relativistic approximation is less precise.\\ Figure \ref{grafF-rel} shows $F'$; here we have used a value of $b\sim 1$ and some representative values of $a(\theta')$ (see Table \ref{direc}).\\ \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline \textbf{$a(\theta')$} & \textbf{Direction of Observation}\\ \hline \hline $1$ & Along the terrestrial motion \\ $0.5$ & $60^\circ$ from the terrestrial motion \\ $0$ & Perpendicular to the terrestrial motion \\ $-0.5$ & $120^\circ$ from the terrestrial motion \\ $-1$ & Against the terrestrial motion \\ \hline \end{tabular} \caption{Directions of Observation.}{{\footnotesize Values of $a(\theta')$ used in Figure \ref{grafF-rel} with the corresponding direction of observation.}} \label{direc} \end{center} \end{table} Let us recall that the distribution $F'$ represents the number of particles that come from a certain direction with a certain momentum. We see in Figure \ref{grafF-rel} that the distribution suffers a loss of homogeneity, which translates into more neutrinos observed along the direction of the Earth's motion; at the same time, the form of the distribution function is altered much more with respect to the distribution in the comoving system. If we move away from this direction, the number of detected neutrinos diminishes considerably and small momenta are favored.\\ The distribution maximum must fulfill the equation: \begin{equation} \label{n_eq}\left(2\sqrt{x_{max}^2 - 2ax_{max} + b^2} - x_{max}^2 + ax_{max}\right)e^{\sqrt{x_{max}^2 - 2ax_{max} + b^2}} + 2\sqrt{x_{max}^2 - 2ax_{max} + b^2}=0 \end{equation} It is complicated to find a general expression for $x_{max}$, although a numerical treatment is readily available. As an example, we can study the extreme cases $a(\theta') = b$ and $a(\theta') =-b$, where $b \simeq 1$ for $M_{\nu} = 0.17$ [eV].
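Equation (\ref{n_eq}) is transcendental, but its root is easily found numerically, e.g. by bisection (a pure-Python sketch with $b = 1$; the bracketing interval is an assumption chosen to isolate the peak):

```python
import math

b = 1.0  # b ~ 1 for M_nu = 0.17 eV

def stationarity(x, a):
    # Left-hand side of the maximum condition (eq. n_eq):
    # (2g - x^2 + a x) e^g + 2g, with g = sqrt(x^2 - 2 a x + b^2)
    g = math.sqrt(x*x - 2*a*x + b*b)
    return (2*g - x*x + a*x) * math.exp(g) + 2*g

def x_max(a, lo=1.5, hi=5.0):
    # Bisection: the condition changes sign exactly once in [lo, hi].
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if stationarity(lo, a) * stationarity(mid, a) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(round(x_max(+b), 3))  # ~ 2.463 (looking along the Earth's motion)
print(round(x_max(-b), 3))  # ~ 2.091 (looking against it)
```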
Evaluating in (\ref{n_eq}), we obtain: $$x_{max}(a=b) = 2.463$$ $$x_{max}(a=-b) = 2.091$$ This means that the momenta of the majority of detected neutrinos will satisfy: $$2.091 \leq \frac{p'c}{k_BT_{\nu}} \leq 2.463$$ This will be useful information for planning the detectors.\\ Now, if we use a smaller mass, the differences between the distributions in different directions diminish. In Figures \ref{grafF-rel_10} and \ref{grafF-rel_100} we can see the distributions with a mass $10$ and $100$ times smaller, for which the Non-Relativistic approximation can still be valid. For comparison, the photon distribution appears in Figure \ref{grafF-fot} for different observation angles. Comparing this with Figure \ref{grafF-rel_100}, we see that the effect produced in the neutrino distribution is always bigger than the one produced in the photons.\\ This happens because the photons always travel at a much greater velocity than the terrestrial one ($c \gg v_t$), therefore the Earth is almost at rest with respect to the comoving system, so the isotropy hardly changes. On the other hand, since the neutrinos have mass, they are subject to a deceleration as the Universe expands (see Figure \ref{graf_vel_rel}), so that currently the neutrinos are Non-Relativistic, with a velocity not much higher than $v_t$. This means that the effect of the terrestrial motion begins to be important in the addition of velocities and will increase with time due to the constant cooling of the neutrinos. It will reach the point at which the neutrino velocity is much smaller than $v_t$ and, practically, the planetary motion predominates. This will be reflected in an increase of the distribution in the direction of the terrestrial motion.\\ \section*{CONCLUSION.} The mass of the neutrinos brings important modifications to their velocity. Without mass, the neutrinos would have maintained a constant velocity equal to the velocity of light, $c$.
On the other hand, with non-zero masses, their velocity is affected by a strong deceleration (see Figure \ref{graf_vel_rel}), therefore they are Non-Relativistic nowadays. As we have developed an expression for the velocity with respect to the comoving system of the expansion of the Universe, it is necessary to use the addition of velocities to determine the mean neutrino velocity relative to Earth. Thus, we use Lorentz transformations, since the LIV did not bring any important effect. The difference produced in the velocity with and without LIV is $\sim 10^{-20}$ \%, which is totally negligible. Then, we can use the invariance of the distribution function to relate the comoving system to the terrestrial one.\\ In the same way, the mass of the neutrinos brings important changes to the distribution. Unlike the case of the photons, it was not possible to introduce a term similar to the Dipolar Moment, because the temperature would depend on $p'$. The greater the mass, the greater the effect. In addition, the distribution is widely favored in the direction of the Earth's motion, but if we move away from this direction, the number of neutrinos diminishes. In any case, the variation depends greatly on the mass. As time goes by, the neutrinos will keep cooling, gradually diminishing their velocity. This means that at some moment the velocity of the neutrinos will be less than the terrestrial speed. In the future, the neutrinos will be almost still in comparison to the Earth's velocity. At that moment, we will only detect the neutrinos that ``crash'' into the Earth as it advances.\\ To sum up, we see that the existence of the neutrino mass produces a relatively important effect in their evolution, which is reflected in the perception that we have of them, especially in the loss of homogeneity of the distribution function.
Thus, for their detection it is advisable to use neutrino detectors directed along the terrestrial motion, or to use a satellite located at one of the Lagrange points of the Solar System to keep the isotropic distribution of the comoving system, as the \textbf{Planck Surveyor} satellite that will observe the \textbf{CMB} \cite{planck}.\\ \section*{Acknowledgments} The authors want to thank A. Reisenegger for an interesting discussion. The work of JA and PG was partially supported by Fondecyt \# 1060646.
\section{Introduction} In April 2019 the \textit{Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019} (Cth) criminalised the failure to report and/or remove Australian-related Abhorrent Violent Material (AVM)\footnote{``the most egregious, violent audio, visual material perpetrated by the perpetrator or accomplice''} by ISPs, hosting and content providers\cite{AVM}. Events such as the Christchurch terror attack of March 2019 demonstrated the potential not only for publication of such acts, but also their re-distribution \textit{en masse} after minor modification in order to avoid traditional (cryptographic hash-based) detection and blocking methods by online content providers. The mandatory reporting of such materials by multiple providers worldwide would rapidly overwhelm law enforcement's ability to review and investigate, unless a reliable, portable and acceptable means for measuring similarity is adopted. In August 2019 Facebook open-sourced the algorithms and released reference implementations of PDQ and TMK + PDQF, their internally used image and video similarity measurement algorithms\cite{FBnews}. At the time of writing, these are freely available at the \texttt{ThreatExchange} github repository\footnote{https://github.com/facebook/ThreatExchange/tree/master/hashing}. In this report, we run the algorithms through a battery of tests based on typically encountered modification scenarios, using real-world data taken from actual Australian Federal Police (AFP) investigations. Moving beyond mere performance, we also provide a reference implementation of our own, demonstrating the portability and accessibility of such algorithms when effectively packaged and presented through standards-based APIs. \section{Background} Cryptographic hashes such as MD5 and SHA-1 have long formed the basis for identifying identical data at the \textit{binary} level.
This makes them exceedingly powerful at detecting identical materials - for example, copies of a known file, but by design, completely useless for detecting \textit{similar} materials. Investigations involving large quantities of shared electronic materials (such as AVM and child exploitation materials (CEM)) typically involve light (if not entirely imperceptible) changes to these materials as they are shared. Images and videos, for example, often have watermarks/text added and formats changed (e.g. jpg $\rightarrow$ png), either intentionally by the offenders or as a by-product of the services used to effect sharing and transfer. The efficacy of cryptographic hashes in detecting perceptibly identical materials is therefore limited. Such limitations would be regarded as inconveniences and inefficiencies in most workplaces, but the true cost of exposure to such materials is only now becoming apparent - not only to front line investigators, but also supporting personnel such as digital forensic examiners and intelligence analysts. Each failure to automatically recognise perceptibly identical materials results in another instance of exposure. Given the sheer volume of materials currently being encountered and seized by Australian police, these repeat exposures could conservatively be estimated in the hundreds of thousands per annum within Australian law enforcement alone. Unlike binary-level digest algorithms such as MD5 and SHA-1, perceptual hashes are a mode of fuzzy hashing operating on materials as rendered to the end user, making them highly suitable for detecting lightly or imperceptibly altered materials. Microsoft has for some years made available PhotoDNA, an image similarity algorithm, on a no-cost basis to law enforcement and other practitioners - primarily for use in detecting CEM. Whilst zero cost, users are required to sign a licensing agreement prior to receiving access to the code, with usage largely limited to organisations dealing directly with CEM.
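The binary-level brittleness described above is easy to demonstrate (a minimal Python sketch using toy byte strings, not real image data):

```python
import hashlib

# A single added byte (one pixel tweaked, one watermark byte appended) yields
# a completely different MD5 digest, so binary-level hashes cannot flag
# near-duplicate media.
data_a = b"the same image rendered to the user"
data_b = data_a + b"."   # imperceptible one-byte modification

print(hashlib.md5(data_a).hexdigest())
print(hashlib.md5(data_b).hexdigest())
print(hashlib.md5(data_a).digest() == hashlib.md5(data_b).digest())  # False
```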
PhotoDNA has been unquestionably effective within child protection, and Microsoft is to be commended for its early, proactive support in the field. However, the limited rollout and support of this algorithm across law enforcement beyond child protection limits its usability. The release and open-sourcing of similarity based hashing algorithms is a welcome development, not only as a means for detecting likely duplicate materials within an organisation, but also for the free sharing of such intelligence at a technical level. Of course, this is on the assumption that they are sufficiently accurate, robust and efficient for deployment and use. \subsection{The Algorithms} In this section we provide a very brief overview of the PDQ and TMK + PDQF algorithms. Readers seeking a more in-depth understanding should refer to \texttt{hashing.pdf}\footnote{\url{https://github.com/facebook/ThreatExchange/blob/master/hashing/hashing.pdf}} within the project repository. \subsubsection{PDQ} \label{PDQIntro} PDQ, in the words of the authors a \textbf{P}erceptual algorithm utilising a \textbf{D}iscrete Cosine Transform and outputting (amongst others) a \textbf{Q}uality metric, is inspired by and an evolution of the (DCT flavour of) pHash \citep{pHash}, a perceptual hashing algorithm familiar to many within digital forensics. PDQ is stored as a 256 bit hash, representable as a 64 character hexadecimal string. We will not attempt to describe the algorithm's design here, but will oversimplify to state that the hash represents the output of a 16x16 transform of the original image via subimage averaging followed by a quantized discrete cosine transform, with each bit being 1 if greater than the overall median in the respective transform position, and 0 otherwise. A quality score broadly proportional to the number of interesting features is introduced to identify image frames that are typically too visually uniform to be useful.
Similarity between two hashes is measured via Hamming distance\footnote{aka 'edit distance' - how many differences there are between two strings/vectors of identical length. For example, the Hamming distance between \textbf{abc} and \textbf{abd} is 1. No attempt is made to measure the distance between changed characters - changing the second vector to \textbf{abz} still results in a Hamming distance of 1.}, with a distance of 0 indicating perceptually/syntactically identical content\footnote{Assuming `quality' imagery}. Statistically, a mean distance of 128 would be anticipated for randomly selected hash pairs, with the authors reporting 30 or less being a good measure for confidently matching materials. \subsubsection{TMK + PDQF} \textbf{TMK + PDQF} (`TMK' for brevity) uses a modified version of PDQ\footnote{The final conversion to binary of each 16x16 transform value is removed, leaving the original figure} for image similarity, combined with the \textbf{T}emporal \textbf{M}atch \textbf{K}ernel for measuring time-related information. Its operation is summarised thus within documentation: (1) resampling videos to a common frame rate (15 frames per second), (2) calculating a frame descriptor, (3) computing averages within various periods and (4) generating a hash from the trigonometrically weighted averages. Given the (in our opinion) greatly increased complexity from PDQ, readers are perhaps best referred to the original paper at \citep{Poullot:2015:TMK:2733373.2806228} for further information. TMK stores hashes as 258KB binaries (extension \texttt{.tmk}), but the authors are quick to note that the first kilobyte is sufficient to differentiate most videos. This 1KB value (known as the level 1 feature) is an unweighted average of all frame features (i.e. PDQF), and forms the first phase of comparison. The level 2 feature is a collection of weighted averages, totalling 256KB.
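The Hamming-distance comparison used for PDQ reduces to an XOR and a bit count over the 64-character hexadecimal hashes, e.g. (our own helper, not project code):

```python
def pdq_distance(hex_a, hex_b):
    # Hamming distance between two equal-length hexadecimal hashes
    if len(hex_a) != len(hex_b):
        raise ValueError("hashes must be of identical length")
    # XOR the underlying integers, then count the differing bits
    return bin(int(hex_a, 16) ^ int(hex_b, 16)).count("1")
```

Following the authors' guidance, two images would be treated as a confident match when this returns 30 or less.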
Metadata regarding the calculation is also included within each file, bringing the total size to 258KB. TMK utilises a two-phase match comparison process: \begin{enumerate} \item{Phase 1: Cosine similarity is calculated between the level 1 (i.e. 1KB) features of two hashes, with a score between -1.0 and 1.0 (inclusive) generated. Higher scores indicate greater similarity. A match threshold of 0.7 is recommended by the authors, and is used throughout our testing.} \item{Phase 2: The level 2 (i.e. 256KB) features of two hashes are compared and a score in the range 0.0-1.0 generated. Higher scores indicate closer matches, with 0.7 being recommended as a match threshold. This recommendation is adopted within our testing.} \end{enumerate} \section{The Test Environment} All tests and activities conducted within this report were carried out on a Dell Precision 5530 notebook computer running Ubuntu Desktop 18.04, installed in August 2019 and patched (using \texttt{apt-get update}) on the day of commencing these experiments. The PDQ and TMK algorithms and associated software were obtained via a \texttt{git clone} of the Facebook ThreatExchange repository \cite{Fa19}. We then installed \texttt{ffmpeg}, a prerequisite for TMK, and performed \texttt{make} commands for the \texttt{C++} versions\footnote{A python library has since been included within the repository} of both algorithms. The make process includes sense-checks and tests at conclusion - the TMK process reported errors, but these were caused by an incorrect path setting for the ffmpeg binary. We manually performed the tests and observed nil reported errors. Beyond this minor hiccup, the installation process was fast and uncomplicated, though we recommend users have at least a basic understanding of software compilation and builds for this stage.
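The phase 1 comparison described above is a standard cosine similarity over the level 1 feature vectors; a generic sketch (ours, independent of the packaged \texttt{tmk-two-level-score} binary):

```python
import math

def cosine_similarity(a, b):
    # Returns a score in [-1.0, 1.0]; higher indicates greater similarity
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

def phase1_pass(feat_a, feat_b, threshold=0.7):
    # Phase 2 (the 256KB features) is only attempted on a phase 1 pass
    return cosine_similarity(feat_a, feat_b) >= threshold
```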
\subsection{Test Corpus} All tests reported within this paper were conducted on a corpus of 225,887 images and 3,366 videos manually reviewed by Police and annotated as child exploitation materials. These files were sourced exclusively from 10 investigations undergoing finalisation by one Australian jurisdiction's Joint Anti Child Exploitation Team (JACET) in July 2019 - the relative skew between media types reflects the materials as located. These matters may remain subject to further judicial proceedings, therefore no further details of their provenance will be provided. \begin{figure}[h] \includegraphics[width=\columnwidth]{Runtime_Histogram.pdf} \caption{Test corpus video runtimes} \label{CEMRuntimes} \end{figure} For tests benefitting from separate corpora, we utilised approximately 322,490 images of lawful pornography first used by \cite{DALINS201840}, plus a partial (approx. 596,000 images) download of the Google Open Images Dataset \cite{OpenImages}. \section{Testing Imagery (PDQ)} Once installed and built, the PDQ software is presented as a series of executables - we utilised \texttt{pdq-photo-hasher} for our tests. We developed a python script automating test corpus (refer Section \ref{pdqmeth}) generation, copying each file from a source drive, calling the pdq hashing binary for each variation, parsing the results (via regex) and then calculating the respective Hamming distances. \subsection{Methodology} \label{pdqmeth} Each image within our test corpus was subjected to the following transformations, as illustrated in Figure \ref{fig:treatments}: \begin{enumerate} \item{Format change - the image is converted to a random selection of JPEG, TIFF, PNG, or bitmap formats.} \item{Watermark - The AFP logo was added to the bottom right corner, being $\frac{1}{4}$ the size of the image's shorter dimension.
The logo is non-transparent, with numerous colour shifts and rendered surfaces.} \item{Text - `AiLECS' is added to the image in the top left-hand corner, with the font size being the largest before the text itself extends beyond $\frac{1}{2}$ the size of the shorter dimension.} \item{Thumbnail - the image is reduced in size to 32, 64, 128 and 256 pixels for the longer dimension, akin to file previews automatically generated in applications such as Windows Explorer and MacOS Finder. Figure \ref{sampleThumbnail} displays a 32 pixel thumbnail.} \item{Cropping - a random proportion of the image greater than 0\% and less than 100\% is selected and retained, using the existing image centre and aspect ratio. Figure \ref{sampleCropping} displays cropping areas for ratios 0.5-0.9 (i.e. retain half $\rightarrow$ retain 90\%) inclusive, using increments of 0.1} \item{Rotation - the image is randomly rotated between 1 and 359 degrees inclusive. Figure \ref{sampleRotation} displays a 45\degree rotation. } \end{enumerate} \begin{figure*} \centering \begin{subfigure}{.4\columnwidth} \includegraphics[width=\columnwidth]{watermark.pdf} \caption{Watermark} \label{sampleWatermark} \end{subfigure} \begin{subfigure}{.4\columnwidth} \includegraphics[width=\columnwidth]{text.pdf} \caption{Text} \label{sampleText} \end{subfigure} \begin{subfigure}{.4\columnwidth} \includegraphics[width=\columnwidth]{cropping.pdf} \caption{Cropping} \label{sampleCropping} \end{subfigure} \begin{subfigure}{.4\columnwidth} \includegraphics[width=\columnwidth]{rotation.pdf} \caption{Rotation} \label{sampleRotation} \end{subfigure} \begin{subfigure}{.4\columnwidth} \includegraphics[width=\columnwidth]{thumbnail.pdf} \caption{Thumbnail} \label{sampleThumbnail} \end{subfigure} \caption{Image treatments for test corpus. Format change not shown.} \label{fig:treatments} \end{figure*} In order to compare processing overheads, we timed the original hashing of each image using the PDQ algorithm, and also MD5. 
In order to maintain consistency with the PDQ process, the MD5 was calculated using an external binary (\texttt{md5sum}) rather than from a buffer within the python script. Furthermore, this step was carried out at random either at the first encounter of each file, or later in the process in order to compensate for any operating system or device-level caching. \subsection{Results} We observed the PDQ algorithm to perform strongly when minor/imperceptible changes are made or features are \textit{added} to the rendered image, whilst struggling with removal or alteration. \subsection{Format Change} As anticipated, the algorithm was robust to format changes, reflecting its use of the rendered image (rather than underlying binary values) for hash calculation. 99.96\% of format changed images met the match threshold of 30, necessitating the use of a log scale for the y axis in Figure \ref{PDQFormatChanges}. To summarise, over 94.85\% of format changes resulted in no hash alteration (i.e. a Hamming distance of 0), with a further 5.01\% reporting a Hamming distance of 2. No Hamming distances of 1 were identified, in keeping with the reasoning detailed in Section \ref{PDQIntro}. \begin{figure}[h] \includegraphics[width=\columnwidth]{Format_Hamming_Count_Log.pdf} \caption{Hamming distances resulting from format changes - note use of log scale} \label{PDQFormatChanges} \end{figure} \subsection{Watermark} PDQ was not as robust to the introduction of a watermark as it was to changes to underlying formats. Figure \ref{PDQWatermarkText} shows that just over half of all watermarked images would be missed if the Hamming distance threshold of 30 was strictly adhered to. Whilst disappointing, the test does involve a rather intrusive change to the image, with a completely opaque watermark featuring numerous features and colour shifts. This should be regarded as an extreme example of such alteration.
\begin{figure}[h] \includegraphics[width=\columnwidth]{Watermark_Text_Hamming_Means.pdf} \caption{Hamming distances resulting from text and watermark introductions} \label{PDQWatermarkText} \end{figure} \subsection{Text} Being less intrusive than the aforementioned watermarking, this test saw PDQ perform more strongly. Figure \ref{PDQWatermarkText} shows that the vast majority of text changes resulted in Hamming distances less than 20, with most less than 10. On a side note, the strong dips around 1 and 7 on the x axis most likely reflect the algorithm's bias to returning even Hamming distances (as documented within the project) rather than any statistical error. \subsection{Thumbnail} As with format changes, the algorithm performed strongly when responding to what really constitutes a loss in resolution. Figure \ref{PDQThumbnailPlot} displays that using the 30 Hamming distance ceiling would almost guarantee detection of all 256 pixel (long side) images as duplicates of their originals. The loss of granularity is reflected in the results for the 128 and 64 pixel forms - both would detect most instances, but the latter would miss many. The 32 pixel thumbnail plot demonstrates the limitations of perceptual hashing when dealing with low quality imagery, with the series diving and spiking as it encounters odd-numbered Hamming distances. As discussed in Section \ref{PDQIntro}, quality images result in a PDQ hash containing an equal number of 0 and 1 values, leading to a far higher likelihood of even Hamming distances. As the sample in Figure \ref{sampleThumbnail} shows, the quality of a 32px thumbnail is quite low; legible content in such a format is extremely limited.
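The even-distance bias has a simple arithmetic basis: if two 256-bit hashes each contain exactly 128 ones (as the median split produces for quality imagery), their Hamming distance equals $256 - 2|a \wedge b|$ and is therefore always even. A quick numerical check of this property (our illustration only, not project code):

```python
import random

def hamming(a, b):
    # Bitwise difference count between two equal-length bit vectors
    return sum(x != y for x, y in zip(a, b))

def balanced_hash(rng):
    # Random 256-bit vector with exactly 128 ones, mimicking the
    # equal 0/1 split of a quality PDQ hash
    bits = [1] * 128 + [0] * 128
    rng.shuffle(bits)
    return bits
```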
\begin{figure}[h] \includegraphics[width=\columnwidth]{Thumbnails_Hamming_Plots.pdf} \caption{Hamming distances for generated thumbnails} \label{PDQThumbnailPlot} \end{figure} \subsection{Cropping} The algorithm struggled with cropping, and is not itself designed to cope with such changes - instead, additional hashes can be generated to provide what amounts to hypothetical changes, including rotation. This option is available within the binary used, and was utilised for this test and for rotation. Figure \ref{PDQCropping} displays that a small ($<$5\%) removal of an image will result in Hamming distances beyond our working threshold, with results nearing random distributions by around 20\% removal (i.e. a crop ratio of 80\%). When the Dihedral-Transform hashes are utilised, an average of around 57 for the closest hash is returned across the cropping spectrum, though the use of these additional hashes would lead to (a) query time inefficiencies and (b) a far greater chance of false positives. Of course, we would also need to set a threshold nearing 60 to utilise this approach, again increasing the likelihood of false positives. \begin{figure}[h] \includegraphics[width=\columnwidth]{Crops_Hamming_Means.pdf} \caption{Hamming distances for cropped images} \label{PDQCropping} \end{figure} \subsection{Rotation} The algorithm's performance largely mirrored that observed for cropping. Beyond slight rotations (less than 5\degree from the original), performance falls away sharply, with only the calculation of additional hashes capable of reducing distances away from near-random distributions. As discussed, such an approach would be a good backstop if rotated/cropped images are anticipated, but otherwise would probably lead to undesirable performance impacts.
\begin{figure}[h] \includegraphics[width=\columnwidth]{Rotation_Mean_Hammings.pdf} \caption{Hamming distances for rotated images} \label{PDQRotation} \end{figure} \subsection{Speed} PDQ hash generation involved a time overhead compared to that required for MD5. Figure \ref{PDQvsMD5} displays that whilst MD5 digests could be calculated for 90\% of files within 0.02 seconds, PDQ was largely completed around 0.08 seconds. This is a very reasonable overhead considering PDQ is a perceptual hash (and therefore requires image rendering rather than simple binary reads and calculations). We did \textbf{not} pre-process imagery to accelerate the process, keeping all data 'as is' for processing. This performance can therefore be regarded as close to worst-case for real-world use, particularly considering the use of an external binary rather than calling integrated code. \begin{figure}[h] \includegraphics[width=\columnwidth]{Timing_Probabilities.pdf} \caption{Time taken for PDQ \& MD5 hash calculations} \label{PDQvsMD5} \end{figure} \subsection{Entropy} A fuzzy hash's values across a sufficiently large corpus of dissimilar materials should resemble a random distribution. Such a `flat' distribution of values indicates a lack of bias (intentional or otherwise) within the algorithm, and in this instance, supports the use of a simple edit distance calculation for measuring perceived similarities. Figure \ref{MeanPDQBits} displays the mean bit values for each PDQ hash generated over each of our test corpora, plus across \textit{all} test corpora. The plot contains some peaks, indicating some biases - particularly within the CEM corpus. A degree of bias is to be anticipated on a per corpus basis, given the propensity for certain features (e.g. head/shoulders against a flat background for passport photos) to appear in specific genres. This hypothesis is supported by the smoother plot for the combined corpora series.
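The per-position means plotted in this test can be derived directly from the hexadecimal hashes; a sketch of the calculation (our own helper; de-duplication by MD5 and PDQ value is assumed to have occurred upstream):

```python
import numpy as np

def mean_bit_values(hex_hashes):
    # Expand each 64-character hash into its 256 bits (one row per hash),
    # then average down the columns to get per-position means
    bits = np.array([[int(b) for ch in h for b in format(int(ch, 16), "04b")]
                     for h in hex_hashes])
    return bits.mean(axis=0)
```

An unbiased algorithm over a large, diverse corpus would produce means hovering around 0.5 at every position.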
\begin{figure}[h] \includegraphics[width=\columnwidth]{Bit_Values_PDQ_Plots.pdf} \caption{Average (mean) bit values for PDQ hashes across multiple corpora (duplicate MD5 and PDQ values removed)} \label{MeanPDQBits} \end{figure} \subsection{Discussion} We found PDQ to be a robust and capable algorithm for ``non-adversarially'' (in the words of the project authors) altered imagery. In particular, we found performance when dealing with resized or converted images to be near perfect, as demonstrated by Figure \ref{PDQFormatChanges}'s sparseness - the worst result being $\frac{1}{15}$ of an acceptable match threshold away from zero. As advised in project documentation, image rotation and cropping result in greatly decreased performance, and given their impact on materials shown, should reasonably be regarded as adversarial - in other words, being undertaken in order to defeat detection, rather than any other purpose. The use of additional (Dihedral-Transform) hashes can help combat this, but our tests demonstrate how their effective use would require a substantial change in match thresholds. Not discussed yet is the impact this would have on query times, a topic we cover later in this paper. \section{Testing Video (TMK + PDQF)} As with PDQ, we utilised executables packaged with the project for our tests - in this case, \texttt{tmk-hash-video} for hash generation. Unlike with PDQ, given the relative complexity of calculating similarities, we used the packaged binary (\texttt{tmk-two-level-score}) for hash comparison. The TMK algorithm takes a two-phase approach - if the first phase passes a pre-defined match threshold, then the second phase is attempted. If both results are \textbf{higher} (note contrast to PDQ) than the threshold, then a video is regarded as a match. For our tests, we followed the recommended threshold of 0.7 for both phases.
Note that \texttt{tmk-two-level-score} does not output scores if no match is detected for phase 1 - hence, setting the threshold to -1 in effect forces full calculations. \subsection{Methodology} We undertook a series of transformations akin to those utilised in testing PDQ, but adapted for use in video formats. The tests consisted of: \begin{enumerate} \item{Bitrate alteration - the visual bitrate (if applicable) or overall bitrate of the video is reduced to a randomly selected ratio between 0-100\%, exclusive.} \item{Cropping - as with still images, a randomly selected ratio in the range 0-100\% (exclusive) of the original vision is retained, centred on the existing visual centre and maintaining the existing aspect ratio.} \item{Format change - the video is converted to a random selection of 'mpg', 'mp4', 'flv', 'mkv' and 'avi' formats.} \item{Half scaling - effectively the video equivalent of thumbnailing, though in this case rather than following pre-defined sizes, the video's existing visual dimensions are halved.} \item{Text - `AiLECS' (white, 40pt font) is scrolled across the screen for ten seconds, commencing one second into the video. This is repeated every 20 seconds thereafter.} \item{Title - a five second title screen (consisting of the AFP logo) is added to the video's beginning. No content is overwritten.} \item{Trimming - five seconds of content are removed from the beginning of the video.} \item{Watermark - The AFP logo was added to vision, following the same placement and sizing rules detailed within Section \ref{pdqmeth}.} \end{enumerate} \subsection{Results} Figure \ref{OverallTMK} displays the overall performance of the TMK algorithm across our tests. As with still imagery (PDQ), the algorithm performs strongly when the perceived content changes are minimal - bitrate, scaling and format changes are all recognised as the `same' material. Watermark and cropping underperformed, akin to that seen with still imagery.
Interestingly, adding a five second title screen also saw a dramatic loss of performance in recognising largely `like' materials. \begin{figure*} \centering \includegraphics[width=\textwidth]{Combined_Pie_Results.pdf} \caption{Pass/fail rates for all video tests - Phase 2 assumes pass at Phase 1.} \label{OverallTMK} \end{figure*} \subsection{Bitrate alteration} The algorithm performed extremely strongly with bitrate alterations, comfortably detecting each changed file as a match. \subsection{Cropping} As with PDQ's performance on still imagery, the TMK algorithm struggled with cropping. Removing more than 20\% of the visuals made detection improbable during both phases, but as Figure \ref{CroppingTMK} demonstrates, phase 1 performance drops rapidly in relation to crop ratio, until a floor of mean performance is reached around the 0.6 (i.e. removal of 40\%) level. Phase 2 follows this pattern, though with a more linear drop in confidence against cropping. In an investigative context, we would recommend removal of more than 10\% be regarded as out of scope for reliable match detection. \begin{figure*} \centering \includegraphics[width=\textwidth]{Cropping_Scores.pdf} \caption{TMK scores vs cropping ratios. Horizontal bars indicate maximum, mean and minimum values for each series.} \label{CroppingTMK} \end{figure*} \subsection{Format change} Results in detecting content subjected to format changes closely followed those for bitrate alteration, with all files comfortably detected as duplicate content. \subsection{Half Scaling} Results for scaling closely followed those for bitrate and format alteration, though with marginally higher confidences within the phase 1 test. All, however, are comfortably above the match threshold of 0.7 and would only affect cases where extremely high thresholds are adopted. \subsection{Text} As the text overlay is less visually intrusive than the watermark, the algorithm was largely successful in detecting duplicate content.
We had assumed shorter runtimes could result in lower confidences (given the proportionately higher level of visual change), but Figure \ref{TMKTextRuntime} shows this not to be the case. \begin{figure} \centering \includegraphics[width=\columnwidth]{Runtime_Text_Scores.pdf} \caption{Mean TMK scores for text test, taken against video running time. Note low correlation of scores to runtime, particularly within phase 1.} \label{TMKTextRuntime} \end{figure} \subsection{Title} Overall, performance was disappointing in this test, with Figure \ref{TMKTitleRuntime} showing the majority of results falling below the 0.7 threshold for both phases. We originally hypothesised that the impact of new materials' insertion into a single location within a video would decrease as its relative proportion to running time decreased - in other words, the smaller the proportion of a video being new, the smaller the change in score. It is difficult to confirm or disprove this theory. Prima facie, Figure \ref{TMKTitleRuntime} shows this not to be the case, as the clustering of Phase 2 scores (indicating a corresponding pass at Phase 1) occurs around $\leq 1000$ seconds. This, however, is in line with the underlying distribution of runtimes within the test corpus. The absence of Phase 2 scores at longer running times could simply reflect a seemingly random distribution of successful Phase 1 tests, given the relative scarcity of sample data. \begin{figure} \centering \includegraphics[width=\columnwidth]{Runtime_Title_Scores.pdf} \caption{Mean TMK scores for title test, taken against video running time.} \label{TMKTitleRuntime} \end{figure} \subsection{Trimming} The TMK algorithm's performance in detecting `like' materials after the \textit{removal} of content stands in stark contrast to addition.
Figure \ref{TMKTrimRuntime} displays the results - reversing the hypothesis given in the previous section, it would appear that performance does improve as the relative proportion of content altered (in this case, removed) decreases. This, however, occurs at relatively short runtimes, with results approaching maximum at around 300 seconds. The only failure depicted on this plot is at an extremely short runtime, reflecting the substantial (if not entire) removal of content from that file. \begin{figure} \centering \includegraphics[width=\columnwidth]{Runtime_Trim_Scores.pdf} \caption{Mean TMK scores for trim test, taken against video running time.} \label{TMKTrimRuntime} \end{figure} \subsection{Watermark} TMK's watermarking performance was similar to PDQ's. Figure \ref{TMKWatermarkScore} displays a mean score for videos falling below the recommended threshold of 0.7 within phase 1. On a more positive note, phase 2 performance was robust in cases where phase 1 was passed. \begin{figure} \centering \includegraphics[width=\columnwidth]{Watermark_TMK_Scores.pdf} \caption{TMK scores for watermark test. Phase 2 results only include materials passing phase 1.} \label{TMKWatermarkScore} \end{figure} \subsection{Speed} Whilst the wording is partly confusing, the project documentation lists TMK hashes as taking approximately 30x video runtime, depending on storage density. We experienced better speeds, with average hash calculation times being approximately $\frac{1}{40}$ (2.5\%) of runtime - probably due to our use of local (SATA3) based storage. Figure \ref{TMKRuntime} shows the correlation, though with variability most likely introduced by varying file sizes/compression levels and any required conversion to supported video formats.
\begin{figure} \centering \includegraphics[width=\columnwidth]{Runtime_TMK_Calculation_Time.pdf} \caption{TMK Calculation times vs video running times} \label{TMKRuntime} \end{figure} \subsection{Entropy} The storage and comparison of PDQ hashes are relatively simple, particularly given their compact size and simple format - 256-bit hashes (64 hexadecimal characters) being readily transportable within a text file, or stored as fields within databases. In comparison, the TMK algorithm (as released) stores hashes as a 258KB file - approximately 8,000 times larger. The authors make a note of the first 1KB being sufficient for differentiating most videos, but we are unaware of any firmer statistics regarding such performance. \subsection{Discussion} We observed the TMK + PDQF algorithm to work as stated within project documentation. It is a robust and well performing algorithm in the uses for which it was designed - format, resolution and compression changes result in near perfect performance, closely followed by `light-touch' alterations such as text. Heavier, ``adversarial'' changes (as described by the project authors) such as the addition of a title screen, watermarking and cropping beyond marginal levels result in rapid performance drops. \section{Implementation Discussion} Beyond integration and licensing concerns, arguably the greatest roadblock to the widespread law enforcement adoption of perceptual hashing is query time performance. Unlike cryptographic digests such as MD5 and SHA-1, where the smallest non-zero Hamming distance and the maximum value are equivalent in meaning, fuzzy hashes such as PDQ require comparison of a large proportion (if not the entirety) of each value to establish similarity. This is substantially more computationally intensive than comparing cryptographic digests - in the case of MD5, 99.6\% $(1 - \frac{1}{2^{8}})$ of digests can be disregarded after the eighth bit\footnote{Second character if displayed as a hexadecimal string}.
Assuming a threshold of 30, a search for matching images using PDQ requires comparison of at least 30 bits (7.5 hex characters), almost four times more than the overwhelming majority of MD5 digests. The PDQ documentation includes advice and mathematical proofs on the feasibility of indexing methods such as multi-index hashing (MIH) \citep{6248043} for minimising query times. We have incorporated MIH as part of our reference implementation (refer Section \ref{Implementation}), observing a drop in query times of about 33\% when compared with linear search across our test corpus, a result becoming more pronounced as the corpus size increases. Unlike the PDQ authors, we dropped the use of caching within the index, finding the overall speed improvement within our tests to be marginal. This could be due to our use of python rather than a pre-compiled executable, and in any case, cache performance is entirely dependent upon individual usage patterns. Our implementation of MIH is available for download at \url{https://github.com/AiLECS/pyMIH} or via \texttt{pypi/pip} as \texttt{pyMIH}. As described earlier in this paper, TMK is also capable of accelerated performance through indexing, with the relatively lightweight outputs of Phase 1 ample for rapidly reducing the number of candidates at lookup. Facebook has also open-sourced \texttt{faiss} \citep{Faiss}, an efficient search/indexing project with great potential for performance improvements. At time of writing, integration between TMK and faiss is faulty. A workaround has been provided, with a fix to transparently avoid the issue reportedly in the works. \subsection{Reference Implementation} \label{Implementation} Indexing can improve performance in fuzzy hash lookups, though it will never replicate the speed of cryptographic hashes - hence their continuing use, despite well known limitations.
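The MIH approach mentioned above rests on a pigeonhole argument: split each 256-bit hash into 32 disjoint 8-bit substrings, and any pair within Hamming distance 30 must agree exactly on at least one substring, since 30 differing bits cannot touch all 32 chunks. A minimal index built on that observation (a simplified sketch, not our \texttt{pyMIH} release):

```python
from collections import defaultdict

CHUNKS, CHUNK_BITS = 32, 8   # 32 x 8 = 256 bits

def chunks_of(bits):
    # Disjoint 8-bit substrings of a 256-bit vector, as hashable tuples
    return [tuple(bits[i:i + CHUNK_BITS])
            for i in range(0, len(bits), CHUNK_BITS)]

class ToyMIH:
    def __init__(self):
        self.tables = [defaultdict(set) for _ in range(CHUNKS)]
        self.stored = []

    def add(self, bits):
        idx = len(self.stored)
        self.stored.append(bits)
        for table, chunk in zip(self.tables, chunks_of(bits)):
            table[chunk].add(idx)

    def query(self, bits, radius=30):
        # Candidates share at least one exact chunk; verify true distance
        cands = set()
        for table, chunk in zip(self.tables, chunks_of(bits)):
            cands |= table.get(chunk, set())
        return sorted(i for i in cands
                      if sum(a != b for a, b in
                             zip(self.stored[i], bits)) <= radius)
```

Only the (usually small) candidate set undergoes a full Hamming comparison, which is where the query-time savings arise.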
Computational overheads are acceptable in smaller datasets, but grow rapidly as large-scale databases develop and grow. On-demand (i.e. `cloud') computing is one step towards overcoming delays, either by providing more resources, or additional resources closer to the edge - i.e. where the demand is. Given existing dependencies, the PDQ binaries are currently only deployable on *nix and MacOS systems, precluding their use on the vast majority of corporate systems. We have packaged the PDQ implementation as a Docker image\footnote{Available for download at \url{https://github.com/JDalins/PDQContainer}}, exposing the algorithm via a RESTful API service for accessibility. Whilst containerisation doesn't accelerate computation, it massively improves portability, allowing the service to spin up and down across heterogeneous host operating systems and infrastructure. The image itself is 1.2GB, making it easily transported over standard internet connections. A TMK image is under construction, but due to the additional search overheads, will not be released until a Faiss-based search back-end is completed. We anticipate release of this image by mid-late October 2019. Further development of both images will be dependent on performance and user demand. \section{Discussion} A common lament within digital forensics (and law enforcement more broadly) is the gap between R\&D and implemented, accessible tools. Microsoft's free (in dollar terms) release of PhotoDNA was an unprecedented step at the time, but its ongoing status as a proprietary algorithm limits its further distribution, evolution and application. The release of free software, both in terms of monetary cost and restrictions on application, significantly reduces the non-technical overheads restricting implementation of novel technologies into existing forensic tools and workflows.
In our case, a simple \texttt{git clone} followed by two \texttt{make} commands was sufficient for establishing a basic tool - the entire process taking less than twenty minutes. Development of the python script for automating testing took a further two days, making this rudimentary deployment a weekend project. In its current form, external dependencies limit the project's deployment in environments other than *nix and MacOS - the AFP (as with many other LEAs) is predominantly a Windows environment, with in-house software development typically gravitating to .NET. The ability to call hashing algorithms directly from existing software would represent a massive leap in the deployment potential for tools such as these. .NET wrappers or ports\footnote{Most likely in the form of a dll}, released and supported as part of the project, would certainly simplify deployment of these algorithms within such environments. Our use of the PDQ and TMK algorithms is basic, with scripts calling command line applications rather than integrating tightly with our tools. Substantial performance increases could be achieved through tighter coupling of libraries with host applications - the ability to directly pass image/video data in memory (rather than via writes to file systems) alone would accelerate performance dramatically. The AFP is moving to a loosely coupled microservices based architecture - primarily to ensure agility in delivering the tools demanded by different crime threat domains, but also ensuring their availability and adaptability across the organisation and beyond.
The deployment of services such as image/video similarity measurement via a container exposing a RESTful API accelerates both implementation and use: specialist client applications such as digital forensic tools can utilise basic HTTP libraries for the vast bulk of integration work, whereas containers can be reliably orchestrated and rapidly deployed to areas of demand, reducing network overheads (a critical concern given Australia's geography). Our reference implementation of such a service demonstrates the concept's viability, though it must be noted that the underlying database deployment for these microservices is going to be a matter for individual organisations. In this review we have identified the potential value the PDQ and TMK algorithms bring to law enforcement, particularly in reducing repeat practitioner exposure to known (but even imperceptibly altered) materials. The open sourcing of such algorithms presents a major (if not unprecedented) opportunity for LEAs and their partners to not only use and share leading-edge tools, but also guide and contribute to their development and evolution. The initiative now rests with practitioners to take the opportunity provided by releases such as this, and where suitable, continue the development of such tools into deployable and sustainable products. \section{The AiLECS Lab} The work depicted within this paper was undertaken as part of the Artificial Intelligence for Law Enforcement \& Community Safety (AiLECS) Lab, a joint Australian Federal Police/Monash University initiative formed as part of Monash University's ``IT for Social Good'' program. The lab's mission is to research and develop ethical uses of machine learning within law enforcement, with a particular focus on the use of advanced data analytics in the identification and classification of psychologically harmful materials. For more information, please refer to \url{https://www.monash.edu/it/ailecs}.
\section{Introduction} The coherent transfer of quantum states between light and atoms has been the subject of much research, both experimentally and theoretically, motivated by potential applications in quantum computing, quantum cryptography and teleportation. While the transfer of quantum states from light to a single atom can in principle be achieved by cavity QED techniques \cite{Cirac}, the required strong-coupling regime is experimentally very difficult to reach. To overcome this difficulty the use of atomic ensembles, rather than single atoms, has been proposed \cite{Kozhekin,Fleischhauer1,Kuzmich,Sherson} and implemented for storage of classical light pulses \cite{Liu,Phillips}, and recently storage of non-classical pulses has been demonstrated \cite{Julsgaard,Eisaman}. One such light storage scheme is based on electromagnetically induced transparency (EIT) \cite{Harris} in ensembles of $\Lambda$-type atoms \cite{Fleischhauer1}. While the storage and retrieval of light pulses with this scheme has been demonstrated utilizing many different types of atomic ensembles, including Bose-Einstein condensates \cite{Liu} and thermal gasses \cite{Phillips} as well as solid state media \cite{Turukhin,Longdell}, the non-trivial manipulation of, and interaction between, stored pulses is hampered by the inherent trade-off between storage time and field amplitude. A step towards overcoming this problem was taken with the suggestion of using standing wave fields to create a periodic modulation of either the dispersive \cite{Andre1} or the absorptive \cite{Bajcsy} properties of the medium, inducing a photonic bandgap \cite{Yariv} and creating a stationary light pulse. Schemes to implement a controlled phase gate using these techniques have been proposed using either the dispersive \cite{Friedler} or the absorptive \cite{Andre2} grating technique, but both are still hampered by a trade-off between storage time and field amplitude. 
For the latter case, only thermal gas media have been considered, and a detailed theoretical treatment of this case is given in \cite{Zimmer}. This theory, however, does not apply to media comprised of stationary atoms such as ultra cold gasses or solid state media \cite{Hansen}. In this article we present a detailed theoretical treatment of the creation of stationary light pulses by the absorptive grating technique for media comprised of stationary atoms. We find that the loss of excitations inherent to the thermal gas case is absent for stationary atoms, making such media ideally suited for the kind of non-linear optical interactions envisaged in \cite{Andre2}. Furthermore we find interesting quantitative and qualitative differences between the thermal gas and ultra cold gas cases when quasi-standing wave coupling fields are considered. These differences are important for the proposed controlled phase gate scheme \cite{Andre2} in stationary atom media. In Sec.~\ref{Sec:Standing_wave_polaritons_cold} we present a detailed account of our theory of stationary light pulses in media comprised of stationary atoms and compare the results to the thermal gas case. The theory is complemented in Sec.~\ref{Sec:Non-adiabatic} by a calculation of non-adiabatic corrections. A summary of our results is provided in Sec.~\ref{Sec:Summary}. Appendix \ref{Sec:Standing_wave_polaritons_thermal} contains a brief review of the theory of stationary light pulses in thermal gas media \cite{Zimmer}, reformulated in terms of polariton fields, used for comparison with the stationary atom case. \section{Standing wave polaritons in ensembles of stationary atoms} \label{Sec:Standing_wave_polaritons_cold} We consider an ensemble of $N$ non-moving $\Lambda$ atoms interacting with probe and coupling lasers propagating parallel to the $z$ axis. 
The two lower states $\ket{b}$ and $\ket{c}$ of the atoms (see Fig.~\ref{Fig:level_diagram}) are assumed to be nearly degenerate, such that the magnitude of the wave vectors of the probe and coupling lasers can be considered identical $(k_p\simeq k_c=k)$. \begin{figure} \includegraphics{lambda.eps} \caption{\label{Fig:level_diagram}The 3-level $\Lambda$ atom. The quantized probe field couples the ground state $\ket{b}$ to the excited state $\ket{a}$, while the classical coupling field couples the metastable state $\ket{c}$ to $\ket{a}$.} \end{figure} The Hamiltonian for the $N$-atom problem is \begin{equation} \hat{H}=\hat{H}_F+\sum_{j=1}^N\left(\hat{H}_A^j+\hat{H}_L^j +\hat{H}_V^j\right) \end{equation} where $\hat{H}_F$ and $\hat{H}_A$ describe the free electromagnetic field and the atoms, $\hat{H}_L$ describes the interaction of the atoms with the probe and coupling fields, and $\hat{H}_V$ describes the interaction with the vacuum field modes. The individual terms are given by \begin{subequations} \begin{align} \hat{H}_F&=\sum_m\hbar\omega_m\hat{a}_m^\dag\hat{a}_m \\ \hat{H}_A^j&=\hbar\omega_{cb}\hat{\sigma}_{cc}^j+\hbar\omega_{ab} \hat{\sigma}_{aa}^j \\ \hat{H}_L^j&=-\bigl(\hat{\mathbf{E}}_p+\hat{\mathbf{E}}_c\bigr)\cdot \bigl(\mathbf{d}_{ba}\hat{\sigma}_{ba}^j+\mathbf{d}_{ca} \hat{\sigma}_{ca}^j+\mathrm{h.a.}\bigr) \\ \hat{H}_V^j&=-\hat{\mathbf{E}}_V\cdot\bigl(\mathbf{d}_{ba}\hat{\sigma}_{ba}^j +\mathbf{d}_{ca}\hat{\sigma}_{ca}^j+\mathrm{h.a.}\bigr) \end{align} \end{subequations} We introduce slowly varying field operators for the electromagnetic field.
Since we are allowing for standing wave fields, we write the field operator as a superposition of two traveling wave fields propagating in opposite directions \begin{equation} \hat{\mathbf{E}}_{p,c}(z,t)=\sqrt{\frac{\hbar\omega_{p,c}}{2\varepsilon_0V}} \mathbf{e}_{p,c}E_{p,c}(z,t)e^{-i\omega_{p,c}t}+\text{h.c.} \end{equation} where the operators $E_{p,c}$ are given by \begin{equation}\label{Eq:standingwave_field_decomp2} E_{p,c}(z,t)=E_{p,c}^+(z,t)e^{ikz}+E_{p,c}^-(z,t)e^{-ikz}. \end{equation} The field operators $E_{p,c}^\pm$ are the slowly varying field operators for the forward and backward propagating components of the probe and coupling fields with carrier frequencies $\omega_{p,c}$, and $\mathbf{e}_{p,c}$ are the respective polarization vectors. We define continuum atomic operators $\hat{\sigma}_{\mu\nu}$ by summing over the individual atoms in a small volume $V$, and introduce slowly varying atomic operators $\sigma_{\mu\nu}$ defined by \begin{subequations}\label{Eq:SV_atom_opr_stand} \begin{align} \hat{\sigma}_{ba}&=\sigma_{ba}e^{-i\omega_pt} \\ \hat{\sigma}_{ca}&=\sigma_{ca}e^{-i\omega_ct} \\ \hat{\sigma}_{bc}&=\sigma_{bc}e^{-i\left(\omega_p-\omega_c\right)t}. \end{align} \end{subequations} Notice that the operators defined by (\ref{Eq:SV_atom_opr_stand}) are slowly varying in \emph{time}, but not in \emph{space}. 
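Although $E_{p,c}^\pm$ are operators, the spatial structure of such a two-component superposition is easy to visualize classically: the intensity is modulated as $\abs{E^+}^2+\abs{E^-}^2+2\abs{E^+}\abs{E^-}\cos(2kz+\phi)$, the modulation that later generates the spatial Fourier series in the optical coherence. A quick numerical sanity check of this identity, treating the amplitudes as complex c-numbers (an illustration only, not part of the operator formalism):

```python
import cmath
import math

def intensity(Ep, Em, k, z):
    """|E(z)|^2 for E(z) = Ep*e^{ikz} + Em*e^{-ikz}, with Ep, Em complex."""
    return abs(Ep * cmath.exp(1j * k * z) + Em * cmath.exp(-1j * k * z)) ** 2

def modulated_intensity(Ep, Em, k, z):
    """Equivalent closed form |Ep|^2 + |Em|^2 + 2|Ep||Em|cos(2kz + phi),
    where phi = arg(Ep) - arg(Em)."""
    phi = cmath.phase(Ep) - cmath.phase(Em)
    return (abs(Ep) ** 2 + abs(Em) ** 2
            + 2.0 * abs(Ep) * abs(Em) * math.cos(2.0 * k * z + phi))
```

The two expressions agree pointwise for arbitrary complex amplitudes, which is the form used for the coupling field below.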
The Heisenberg-Langevin equations for these operators in the rotating wave approximation are \begin{subequations} \begin{align} \dot{\sigma}_{aa}&=-i\bigl(g_pE_p^\dag\sigma_{ba}+\Omega_c^*\sigma_{ca} -\mathrm{h.a.}\bigr)-\gamma\sigma_{aa}+F_{aa} \\ \dot{\sigma}_{bb}&=i\bigl(g_pE_p^\dag\sigma_{ba}-\mathrm{h.a.}\bigr) +\gamma_b\sigma_{aa}+F_{bb} \\ \dot{\sigma}_{cc}&=i\bigl(\Omega_c^*\sigma_{ca}-\mathrm{h.a.}\bigr) +\gamma_c\sigma_{aa}+F_{cc} \\ \dot{\sigma}_{ba}&=i\bigl(g_pE_p(\sigma_{bb}-\sigma_{aa})+\Omega_c \sigma_{bc}\bigr)-\Gamma_{ba}\sigma_{ba}+F_{ba} \\ \dot{\sigma}_{ca}&=i\bigl(\Omega_c(\sigma_{cc}-\sigma_{aa})+g_pE_p \sigma_{bc}^\dag\bigr)-\Gamma_{ca}\sigma_{ca}+F_{ca} \\ \dot{\sigma}_{bc}&=i\bigl(\Omega_c^*\sigma_{ba}-g_pE_p\sigma_{ca}^\dag \bigr)-\Gamma_{bc}\sigma_{bc}+F_{bc} \end{align} \end{subequations} where $\gamma=\gamma_b+\gamma_c$ is the decay rate of the excited state $\ket{a}$ into the two lower states. The complex decay rates $\Gamma_{\mu\nu}$ are given by \begin{align} \Gamma_{ba}&=\gamma_{ba}-i\delta_p, \\ \Gamma_{ca}&=\gamma_{ca}-i\delta_c, \\ \Gamma_{bc}&=\gamma_{bc}-i\Delta, \end{align} where $\gamma_{\mu\nu}$ are the dephasing rates of the respective coherences, $\delta_{p,c}$ are the one-photon detunings of the probe and coupling lasers, respectively, and $\Delta$ is the two-photon detuning. We have also assumed that the coupling field can be treated as a classical field with Rabi frequency $\Omega_c$ given by \begin{equation} \Omega_c(z,t)=\Omega_c^+(z,t)e^{ikz}+\Omega_c^-(z,t)e^{-ikz}. \end{equation} In the following we shall disregard the noise operators $F_{\mu\nu}$ since we will be considering the adiabatic limit. \subsection{Weak probe approximation} In order to solve the propagation problem, we assume that the probe field is weak compared to the coupling field and that the probe photon density is small compared to the atomic density. In this case the Heisenberg-Langevin equations can be solved perturbatively. 
To first order in the probe field amplitude, the relevant Heisenberg-Langevin equations are \begin{subequations}\label{Eq:weak_probe_HL_stand} \begin{align} \sigma_{ba}&=\frac{1}{i\Omega_c^*}\left(\Gamma_{bc}+\ddt{}\right) \sigma_{bc}\label{Eq:weak_probe_HL_stand_a} \\ \sigma_{bc}&=-\frac{g_pE_p}{\Omega_c}-\frac{i}{\Omega_c}\left( \Gamma_{ba}+\ddt{}\right)\sigma_{ba} \end{align} \end{subequations} Combining equations (\ref{Eq:weak_probe_HL_stand}) we can obtain a differential equation for $\sigma_{bc}$ \begin{equation}\label{Eq:diffeqn_sigmabc} \sigma_{bc}=-\frac{g_pE_p}{\Omega_c}-\frac{1}{\Omega_c}\left( \Gamma_{ba}+\ddt{}\right)\left[\frac{1}{\Omega_c^*}\left( \Gamma_{bc}+\ddt{}\right)\sigma_{bc}\right] \end{equation} \subsection{Adiabatic limit} In order to solve equation (\ref{Eq:diffeqn_sigmabc}), we consider the adiabatic limit in which the fields vary slowly in time. Introducing a characteristic timescale $T$ of the slowly varying operators, we expand $\sigma_{bc}$ in powers of $(\gamma_{ba}T)^{-1}$. To zeroth order we find \begin{equation}\label{Eq:sigma_bc_stand_adia} \sigma_{bc}=-\frac{g_pE_p}{\Omega_c}. \end{equation} Inserting this expression into (\ref{Eq:weak_probe_HL_stand_a}), we find an expression for $\sigma_{ba}$ valid in the adiabatic limit \begin{equation}\label{Eq:sigma_ba_stand_adia} \sigma_{ba}=-\frac{1}{i\Omega_c^*}\left(\Gamma_{bc}+\ddt{}\right) \left(\frac{g_pE_p}{\Omega_c}\right). 
\end{equation} By inserting the field decomposition (\ref{Eq:standingwave_field_decomp2}) into the adiabatic expression for $\sigma_{ba}$ (\ref{Eq:sigma_ba_stand_adia}) we get \begin{equation}\label{Eq:sigma_ba_stand_adia2} \begin{split} \sigma_{ba}&=\frac{-g_p\left(\Gamma_{bc}+\ddt{}\right)}{i\Omega (1+2\abs{\kappa^+}\abs{\kappa^-}\cos(2kz+\phi))} \\ &\quad\times\left(\frac{E_p^+e^{ikz}+E_p^-e^{-ikz}}{\Omega}\right), \end{split} \end{equation} where we have also introduced the time dependent total Rabi frequency $\Omega(t)=\sqrt{\abs{\Omega_c^+}^2+\abs{\Omega_c^-}^2}$ and the ratios $\kappa^\pm=\frac{\Omega_c^\pm}{\Omega}$ which are assumed to be constant. The phase angle $\phi$ is defined by the relation \begin{equation} \kappa^+\kappa^{-*}=\abs{\kappa^+}\abs{\kappa^-}e^{i\phi}. \end{equation} \subsection{Polariton field} We now introduce a dark-state polariton (DSP) field analogous to the DSP field defined in \cite{Fleischhauer1} \begin{equation}\label{Eq:def_polariton_stand} E_p^\pm(z,t)=\cos\theta(t)\Psi^\pm(z,t), \end{equation} where the angle $\theta$ is given by the total coupling laser Rabi frequency through \begin{equation} \tan\theta(t)=\frac{g_p\sqrt{N_\mathbf{r}}}{\Omega(t)}. \end{equation} By inserting the definition (\ref{Eq:def_polariton_stand}) of the DSP field into (\ref{Eq:sigma_ba_stand_adia2}) we get \begin{equation}\label{Eq:sigma_ba_stand_adia3} \begin{split} \sqrt{N_\mathbf{r}}\sigma_{ba}&=\frac{-\bigl(\Gamma_{bc}+\ddt{} \bigr)}{i\Omega\bigl(1+2\abs{\kappa^+}\abs{\kappa^-}\cos(2kz +\phi)\bigr)}\\ &\quad\times\Bigl(\sin\theta\bigl(\Psi^+e^{ikz}+\Psi^-e^{-ikz}\bigr)\Bigr). \end{split} \end{equation} To derive wave equations for the components of the DSP field, we need to expand the optical coherence $\sigma_{ba}$ in spatial Fourier components. 
We do this by inserting the Fourier series \begin{equation}\label{Eq:Fourier_series1} \frac{1}{1+y\cos x}=\frac{a_0}{2}+\sum_{n=1}^\infty a_n\cos(nx), \end{equation} where $y=2\abs{\kappa^+}\abs{\kappa^-}$ and $x=2kz+\phi$, into (\ref{Eq:sigma_ba_stand_adia3}). Note that $y\leq 1$, which guarantees the existence of the Fourier series except in the case of a standing wave coupling field $(y=1)$. Fortunately, we can treat this case successfully by considering the limit $y\rightarrow 1$ at the end of our calculation. Inserting the Fourier series into (\ref{Eq:sigma_ba_stand_adia3}) we find \begin{equation}\label{Eq:sigma_ba_Fourier} \begin{split} \sqrt{N_\mathbf{r}}\sigma_{ba}&=\frac{i}{\Omega}\left(\frac{a_0}{2}+ \sum_{n=1}^\infty \frac{a_n}{2} \left( e^{in(2kz+\phi)} +e^{-in(2kz+\phi)}\right)\right) \\ &\quad\times\left(\Gamma_{bc}+\ddt{}\right) \left(\sin\theta\left(\Psi^+e^{ikz}+\Psi^-e^{-ikz}\right)\right). \end{split} \end{equation} From (\ref{Eq:sigma_ba_Fourier}) we see that $\sigma_{ba}$ can be written as \begin{equation}\label{Eq:sigma_ba_expansion} \sigma_{ba}=\sum_{n=-\infty}^\infty \sigma_{ba}^{(2n+1)} e^{i(2n+1)kz}. \end{equation} To derive a set of wave equations for the polariton field components, we need to calculate the components $\sigma_{ba}^{+1}$ and $\sigma_{ba}^{-1}$ of the expansion (\ref{Eq:sigma_ba_expansion}) which we label $\sigma_{ba}^\pm$ for brevity. These components are given by \begin{subequations}\label{Eq:sigma_ba_Fourier2} \begin{align} \sqrt{N_\mathbf{r}}\sigma_{ba}^+&=\frac{i}{2\Omega}\left(\Gamma_{bc} +\ddt{}\right)[\sin\theta(a_0\Psi^++a_1e^{i\phi}\Psi^-)] \\ \sqrt{N_\mathbf{r}}\sigma_{ba}^-&=\frac{i}{2\Omega}\left(\Gamma_{bc} +\ddt{}\right)[\sin\theta(a_0\Psi^-+a_1e^{-i\phi}\Psi^+)]. \end{align} \end{subequations} We see that we only need to calculate the first two Fourier coefficients $a_0$ and $a_1$ of the expansion (\ref{Eq:sigma_ba_Fourier}).
These are given by \begin{subequations}\label{Eq:Fouriercoef_a} \begin{align} a_0&=\frac{1}{\pi}\int_{-\pi}^\pi \frac{\mathrm{d}x}{1+y\cos x} =\frac{2}{\sqrt{1-y^2}} \\ a_1&=\frac{1}{\pi}\int_{-\pi}^\pi \frac{\cos x\mathrm{d}x}{1+y\cos x} =2\frac{\sqrt{1-y^2}-1}{y\sqrt{1-y^2}}. \end{align} \end{subequations} Inserting the adiabatic expression (\ref{Eq:sigma_ba_Fourier2}) for $\sigma_{ba}^\pm$ into the wave equations for the probe field components \begin{subequations}\label{Eq:probe_waveeqn} \begin{align} \left(\ddt{}+c\ddz{}\right)E_p^+(z,t)&=ig_pN_\mathbf{r}\sigma_{ba}^+(z,t)\\ \left(\ddt{}-c\ddz{}\right)E_p^-(z,t)&=ig_pN_\mathbf{r}\sigma_{ba}^-(z,t), \end{align} \end{subequations} we obtain a set of coupled wave equations for the DSP field components \begin{widetext} \begin{subequations}\label{Eq:Polariton_waveeqn_cold1} \begin{align} \ddt{\Psi^+}+c\cos^2\theta'\ddz{\Psi^+}&=-\sin^2\theta'\biggl[ \Gamma_{bc}\Psi^++se^{i\phi}\biggl(\Gamma_{bc}+\ddt{}\biggr)\Psi^- -s\dot{\theta}\frac{\cos\theta}{\sin\theta}\Bigl(y\Psi^+-e^{i\phi} \Psi^-\Bigr)\biggr]\\ \ddt{\Psi^-}-c\cos^2\theta'\ddz{\Psi^-}&=-\sin^2\theta'\biggl[ \Gamma_{bc}\Psi^-+se^{-i\phi}\biggl(\Gamma_{bc}+\ddt{}\biggr)\Psi^+ -s\dot{\theta}\frac{\cos\theta}{\sin\theta}\Bigl(y\Psi^--e^{-i\phi} \Psi^+\Bigr)\biggr], \end{align} \end{subequations} \end{widetext} where we have introduced a new angle $\theta'$ defined by \begin{equation} \tan\theta'=\sqrt{\frac{a_0}{2}}\frac{g_p\sqrt{N_\mathbf{r}}}{\Omega} =\sqrt{\frac{a_0}{2}}\tan\theta, \end{equation} as well as the constant \begin{equation} s=\frac{a_1}{a_0}=\frac{\sqrt{1-y^2}-1}{y}. \end{equation} Since we are considering the adiabatic limit in which the coupling field Rabi frequency changes slowly in time, we shall neglect the last term on the rhs.~of (\ref{Eq:Polariton_waveeqn_cold1}) in the following. 
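These closed forms for $a_0$ and $a_1$ can be confirmed by direct numerical quadrature; because the integrand is smooth and $2\pi$-periodic, the trapezoid rule over one period converges very rapidly. A quick Python check (an illustration only, not part of the derivation):

```python
import math

def a_n_numeric(n, y, steps=20000):
    """a_n = (1/pi) * integral_{-pi}^{pi} cos(n x)/(1 + y cos x) dx,
    evaluated by the trapezoid rule over one full period."""
    h = 2.0 * math.pi / steps
    total = 0.0
    for k in range(steps + 1):
        x = -math.pi + k * h
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid end-point weights
        total += w * math.cos(n * x) / (1.0 + y * math.cos(x))
    return total * h / math.pi

def a0_closed(y):
    """Closed form a_0 = 2 / sqrt(1 - y^2)."""
    return 2.0 / math.sqrt(1.0 - y * y)

def a1_closed(y):
    """Closed form a_1 = 2 (sqrt(1 - y^2) - 1) / (y sqrt(1 - y^2))."""
    s = math.sqrt(1.0 - y * y)
    return 2.0 * (s - 1.0) / (y * s)
```

The numerical and closed-form values agree to high precision for any $y<1$, while both $a_0$ and $\abs{a_1}$ diverge as $y\rightarrow 1$, consistent with the standing wave limit being reached only at the end of the calculation.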
\subsection{Low group velocity limit} In the experimentally relevant \emph{low group velocity limit} $\cos^2\theta\ll 1$, the wave equations (\ref{Eq:Polariton_waveeqn_cold1}) take the simpler form \begin{subequations}\label{Eq:polariton_waveeqn2a} \begin{align} \left(\Gamma_{bc}+\ddt{}\right)\Psi^+ +\abs{\kappa^+}^2v_g \ddz{\Psi^+}&=\kappa^+\kappa^{-*}v_g\ddz{\Psi^-}, \\ \left(\Gamma_{bc}+\ddt{}\right)\Psi^- -\abs{\kappa^+}^2v_g \ddz{\Psi^-}&=-\kappa^{+*}\kappa^-v_g\ddz{\Psi^+}, \end{align} \end{subequations} in the case where $\abs{\kappa^+}\geq\abs{\kappa^-}$. In the opposite case, $\abs{\kappa^+}\leq\abs{\kappa^-}$, the wave equations are \begin{subequations}\label{Eq:polariton_waveeqn2b} \begin{align} \left(\Gamma_{bc}+\ddt{}\right)\Psi^+ +\abs{\kappa^-}^2v_g \ddz{\Psi^+}&=\kappa^+\kappa^{-*}v_g\ddz{\Psi^-}, \\ \left(\Gamma_{bc}+\ddt{}\right)\Psi^- -\abs{\kappa^-}^2v_g \ddz{\Psi^-}&=-\kappa^{+*}\kappa^-v_g\ddz{\Psi^+}. \end{align} \end{subequations} We have introduced the group velocity $v_g=c\cos^2\theta$ in equations (\ref{Eq:polariton_waveeqn2a}) and (\ref{Eq:polariton_waveeqn2b}), and have also made use of the fact that in the low group velocity limit, $\cos^2\theta'\simeq \sqrt{1-y^2}\cos^2\theta$. \subsection{Initial conditions} We shall consider the same kind of experiment as in \cite{Bajcsy} in which a probe pulse, propagating under the influence of a copropagating \emph{traveling wave} coupling field, is stored in the medium and subsequently retrieved by a \emph{standing wave} coupling field with $\abs{\kappa^+}\geq\abs{\kappa^-}$. Assuming that the standing wave coupling field is switched on at $t=0$, we need to find the initial conditions for the two components of the DSP field $\Psi^\pm(z,0)$. 
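The low group velocity equations above already show why a pure standing wave traps the pulse: for $\kappa^\pm=1/\sqrt{2}$ and equal components $\Psi^+=\Psi^-$, the advection and cross-coupling terms cancel and the polariton field does not evolve, whereas a traveling wave coupling field simply advects $\Psi^+$ at $v_g$. A minimal numerical sketch of the right-hand sides, treating the polariton amplitudes as c-numbers and assuming real $\kappa^\pm$ and $\Gamma_{bc}=0$:

```python
import math

def dsp_time_derivatives(kp2, km2, vg, dpsi_p_dz, dpsi_m_dz):
    """Time derivatives of (Psi+, Psi-) from the low-group-velocity wave
    equations for |kappa+| >= |kappa-|, with real kappas and Gamma_bc = 0,
    given the spatial derivatives of the two components at one point."""
    kpkm = math.sqrt(kp2 * km2)  # kappa+ * kappa- for real amplitudes
    dt_p = vg * (kpkm * dpsi_m_dz - kp2 * dpsi_p_dz)
    dt_m = vg * (kp2 * dpsi_m_dz - kpkm * dpsi_p_dz)
    return dt_p, dt_m
```

For $\kappa^{+2}=\kappa^{-2}=1/2$ and identical spatial profiles the returned derivatives vanish (a stationary pulse); for $\kappa^-=0$ the first equation reduces to pure advection of $\Psi^+$.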
The initial condition for the Raman coherence is \begin{equation}\label{Eq:sigma_bc_init_cond_stand} \sqrt{N_\mathbf{r}}\sigma_{bc}(z,0)=-\Psi(z,0), \end{equation} where $\Psi(z,0)$ is a known function of $z$ determined by the DSP field prior to switching on the standing wave coupling field. Using (\ref{Eq:sigma_bc_stand_adia}) and the definition of the DSP field in the standing wave case (\ref{Eq:def_polariton_stand}), along with the initial condition (\ref{Eq:sigma_bc_init_cond_stand}), we get \begin{equation} \Psi(z,0)\left(\kappa^+e^{ikz}+\kappa^-e^{-ikz}\right)=\Psi^+(z,0) e^{ikz}+\Psi^-(z,0)e^{-ikz}. \end{equation} From this expression we see that the initial conditions for the components of the DSP field $\Psi^\pm(z,0)$ are \begin{equation}\label{Eq:Psi_pm_init_cond} \Psi^+(z,0)=\kappa^+\Psi(z,0),\qquad\Psi^-(z,0)=\kappa^-\Psi(z,0). \end{equation} With the initial conditions (\ref{Eq:Psi_pm_init_cond}), we find the solution \begin{widetext} \begin{subequations}\label{Eq:DSP_stand_sol2} \begin{align} \Psi^+(z,t)&=\frac{\kappa^+}{2}\Biggl[\left(1+ \frac{\beta}{\abs{\kappa^+}^2}\right)\Psi(z-\beta r(t),0)+\left(1- \frac{\beta}{\abs{\kappa^+}^2}\right) \Psi(z+\beta r(t),0)\Biggr]e^{-\Gamma_{bc}t}, \\ \Psi^-(z,t)&=\frac{\kappa^-}{2}\Biggl[\Psi(z-\beta r(t),0)+\Psi(z+\beta r(t),0) \Biggr]e^{-\Gamma_{bc}t}, \end{align} \end{subequations} where $\beta=\sqrt{\abs{\kappa^+}^2(\abs{\kappa^+}^2-\abs{\kappa^-}^2)}$ and $r(t)=\int_0^t c\cos^2\theta(t')\mathrm{d}t'$. \end{widetext} \subsection{Probe retrieval by a standing wave coupling field} To determine the solution for a standing wave coupling field, we let $\kappa^\pm\rightarrow\frac{1}{\sqrt{2}}$ in the solution (\ref{Eq:DSP_stand_sol2}). In this limit we find the solution \begin{subequations} \begin{align} \Psi^+(z,t)&=\frac{1}{\sqrt{2}}\Psi(z,0)e^{-\Gamma_{bc}t}, \\ \Psi^-(z,t)&=\frac{1}{\sqrt{2}}\Psi(z,0)e^{-\Gamma_{bc}t}.
\end{align} \end{subequations} As an example, we take the initial condition for the DSP field to be $\Psi(z,0)=\Psi_0\exp(-(z/L_p)^2)$, where $L_p$ is the characteristic length of the stored probe pulse. The polariton amplitude $\Psi_0$ is related to the initial probe field amplitude $E_0$ by $\Psi_0=E_0/\cos\theta_0$, where $\theta_0$ is determined by the Rabi frequency of the traveling wave coupling field prior to storage. The components of the retrieved probe field found from (\ref{Eq:def_polariton_stand}) are \begin{subequations} \begin{align} E_p^+(z,t)&=\frac{1}{\sqrt{2}}\frac{\cos\theta(t)}{\cos\theta_0} E_0\exp(-(z/L_p)^2)e^{-\Gamma_{bc}t}, \\ E_p^-(z,t)&=\frac{1}{\sqrt{2}}\frac{\cos\theta(t)}{\cos\theta_0} E_0\exp(-(z/L_p)^2)e^{-\Gamma_{bc}t}. \end{align} \end{subequations} In Fig.~\ref{Fig:stand} we compare the retrieval of an initially stored probe pulse by a standing wave coupling field in the thermal gas and ultra cold gas cases. The time dependence of the angle $\theta$ is assumed to be given by $\cos^2\theta(t)=\cos^2\theta_0\tanh(t/T_s)$ for $t\geq 0$, where $T_s$ is the characteristic switching time. For simplicity, we have assumed zero Raman dephasing $(\Gamma_{bc}=0)$ and taken the characteristic length of the stored probe pulse to be $L_p=v_{g,0} T_s$, where $v_{g,0}=c\cos^2\theta_0$. The probe field photon density averaged over many wavelengths $\abs{E_p^+}^2+\abs{E_p^-}^2$, in units of the photon density prior to storage $\abs{E_0}^2$, is plotted as a function of $z$ in units of $L_p$ and $t$ in units of $T_s$. In both the stationary atom case and the thermal gas case we see that the stored probe pulse is revived into a stationary probe field, but we note that in the stationary atom case, the diffusive broadening of the probe field, evident in the thermal gas case, is absent. The solution for thermal gas media is based on the theory in \cite{Zimmer}, which is reviewed briefly in appendix \ref{Sec:Standing_wave_polaritons_thermal}. 
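For the switching profile above $(\cos^2\theta(t)=\cos^2\theta_0\tanh(t/T_s)$, $\Gamma_{bc}=0)$, the averaged photon density reduces to $\tanh(t/T_s)\,e^{-2(z/L_p)^2}$ in units of $\abs{E_0}^2$: the Gaussian shape is frozen in place and only the overall amplitude switches on. A minimal numerical sketch of this profile:

```python
import math

def probe_density(z, t, Lp=1.0, Ts=1.0):
    """Averaged probe photon density |E_p^+|^2 + |E_p^-|^2 in units of
    |E_0|^2, for standing wave retrieval with Gamma_bc = 0 and the
    switching profile cos^2(theta(t)) = cos^2(theta_0) * tanh(t/Ts)."""
    return math.tanh(t / Ts) * math.exp(-2.0 * (z / Lp) ** 2)

# The spatial shape is time-independent: the ratio of densities at two
# points does not change as the coupling field switches on, i.e. the
# revived probe field is stationary.
r_early = probe_density(1.0, 0.5) / probe_density(0.0, 0.5)
r_late = probe_density(1.0, 5.0) / probe_density(0.0, 5.0)
```

This makes the contrast with the thermal gas case explicit: there the same retrieval is accompanied by diffusive broadening, so the corresponding density ratio drifts in time.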
The medium is characterized by the absorption length in the absence of EIT $l_a=0.1\times L_p$, which roughly corresponds to the conditions in \cite{Bajcsy}. \begin{figure} \subfigure[\ Stationary atoms] {\includegraphics[width=4cm]{standcold_Epsq.eps}}\label{Fig:standa} \subfigure[\ Thermal gas] {\includegraphics[width=4cm]{standhot_Epsq.eps}}\label{Fig:standb} \caption{\label{Fig:stand}(color online) Retrieval of a stored probe pulse with a standing wave coupling field. The probe field energy density, in units of $\frac{\hbar\omega_p}{V}\abs{E_0}^2$, is plotted for a medium comprised of stationary atoms (a) and thermal atoms (b) as a function of $z$ in units of the pulse length $L_p$, and $t$ in units of the switching time $T_s$. The absorption length of the media is taken to be $l_a=0.1\times L_p$.} \end{figure} \subsection{Probe retrieval by a quasi-standing wave coupling field} We shall now study the situation in which the probe field is retrieved by a quasi-standing wave coupling field. In the previous section, we saw that in the thermal gas case a quasi-standing wave coupling field leads to a drift of the revived probe pulse in the direction of the stronger of the two coupling field components. In the ultra cold gas case considered here, we find from the solution (\ref{Eq:DSP_stand_sol2}) that the revived probe pulse instead splits into two parts. A stronger part which propagates in the direction of the stronger of the coupling field components, and a weaker part which propagates in the opposite direction. Fig.~\ref{Fig:qstand1} shows the solution (\ref{Eq:DSP_stand_sol2}) with the same initial conditions as in Fig.~\ref{Fig:stand}, but with $\kappa^+ =\sqrt{0.55}$ and $\kappa^-=\sqrt{0.45}$. Fig.~\ref{Fig:qstand2} compares the retrieval of a stored probe pulse by a quasi-standing wave coupling field in thermal and ultra cold gas media. 
The splitting of the revived probe pulse is clearly evident in the cold gas case, indicating a qualitative difference between the thermal gas and the ultra cold gas cases. The cause of this difference is the coupling to the high spatial-frequency components of the Raman coherence $\hat{\sigma}_{bc}$ in the ultra cold gas case. This splitting of the probe pulse is very important when considering various schemes for interacting pulses. In the phase-gate proposal of Andr\'e et al. \cite{Andre2}, a small imbalance in the two components of the coupling field is used to propagate a quasi-stationary light pulse across a stored excitation in a thermal gas medium. As is evident from Fig.~\ref{Fig:qstand2} this scheme would not work in media comprised of stationary atoms, since a large part of the revived probe field would then propagate in the wrong direction. \begin{figure} \subfigure[\ $\Psi^+(z,t)$]{\includegraphics[width=4cm]{qstandcold_Psip.eps}} \label{Fig:qstand1a} \subfigure[\ $\Psi^-(z,t)$]{\includegraphics[width=4cm]{qstandcold_Psim.eps}} \label{Fig:qstand1b} \caption{\label{Fig:qstand1}Retrieval of a stored probe pulse with a quasi-standing wave coupling field ($\kappa^+=\sqrt{0.55}$, $\kappa^-=\sqrt{0.45}$). Figures (a) and (b) show the polariton amplitudes $\Psi^\pm$ in units of $\Psi_0$ as a function of $z$ and $t$, in units of $L_p$ and $T_s$, respectively.} \end{figure} \begin{figure} \subfigure[\ Cold gas]{\includegraphics[width=4cm]{qstandcold_Epsq.eps}}\label{Fig:qstand2a} \subfigure[\ Thermal gas]{\includegraphics[width=4cm]{qstandhot_Epsq.eps}}\label{Fig:qstand2b} \caption{\label{Fig:qstand2}Retrieval of a stored probe pulse with a quasi-standing wave coupling field. The probe field energy density, in units of $\frac{\hbar\omega_p}{V}\abs{E_0}^2$, is shown for both the ultra cold gas case (a) and for the thermal gas case (b). 
Parameters are the same as in Fig.~\ref{Fig:qstand1}.} \end{figure} \subsection{Calculation of the Raman coherence} To calculate the Raman coherence of the atoms, we use the zeroth order expression (\ref{Eq:sigma_bc_stand_adia}) for $\sigma_{bc}$ \begin{equation} \sqrt{N_\mathbf{r}}\sigma_{bc}=-\frac{g_p\sqrt{N_\mathbf{r}} E_p}{\Omega_c}. \end{equation} By inserting the decompositions of the probe and coupling fields, as well as the definition (\ref{Eq:def_polariton_stand}) of the DSP field, we get \begin{equation} \sqrt{N_\mathbf{r}}\sigma_{bc}=-\sin\theta\frac{\Psi^+(z,t)e^{ikz}+ \Psi^-(z,t)e^{-ikz}}{\kappa^+e^{ikz}+\kappa^-e^{-ikz}}. \end{equation} Inserting the solution (\ref{Eq:DSP_stand_sol2}) into this expression, we find by a binomial expansion \begin{equation} \begin{split} \sqrt{N_\mathbf{r}}\sigma_{bc}&=-\frac{1}{2}\sin\theta\biggl(\Psi(z-\beta r,0)+\Psi(z+\beta r,0) \\ &\quad+\frac{\beta}{\abs{\kappa^+}^2}[\Psi(z-\beta r,0)-\Psi(z+\beta r,0)] \\ &\quad\times\sum_{n=0}^\infty\left(-\frac{\kappa^-}{\kappa^+} \right)^n e^{-2inkz}\biggr)e^{-\Gamma_{bc}t}. \end{split} \end{equation} From this expression we see that the Raman coherence can be written as \begin{equation}\label{Eq:sigma_bc_expansion} \sigma_{bc}(z,t)=\sum_{n=-\infty}^\infty\sigma_{bc}^{(2n)}(z,t)e^{2inkz}, \end{equation} where the dc component is \begin{equation} \begin{split} \sqrt{N_\mathbf{r}}\sigma_{bc}^{(0)}&=-\frac{1}{2}\sin\theta\biggl[ \biggl(1+\frac{\beta}{\abs{\kappa^+}^2}\biggr)\Psi(z-\beta r(t),0)\\ &+\biggl(1-\frac{\beta}{\abs{\kappa^+}^2}\biggr)\Psi(z+\beta r(t),0) \biggr]e^{-\Gamma_{bc}t}.
\end{split} \end{equation} For the rapidly varying components of the Raman coherence we find \begin{equation} \begin{split} \sqrt{N_\mathbf{r}}\sigma_{bc}^{(-2n)}&=-\frac{1}{2}\sin\theta \frac{\beta}{\abs{\kappa^+}^2}\bigl(\Psi(z-\beta r(t),0) \\ &-\Psi(z+\beta r(t),0)\bigr)\left(-\frac{\kappa^-}{\kappa^+} \right)^n e^{-\Gamma_{bc}t} \end{split} \end{equation} and \begin{equation} \sqrt{N_\mathbf{r}}\sigma_{bc}^{(2n)}=0 \end{equation} where, in both cases, $n>0$. In the case of a perfect standing wave coupling field, only the dc component of the Raman coherence is present, which is given by \begin{equation} \sqrt{N_\mathbf{r}}\sigma_{bc}^{(0)}(z,t)=-\sin\theta(t)\Psi(z,0) e^{-\Gamma_{bc}t}. \end{equation} In the quasi-standing wave case, the rapidly varying components of the Raman coherence $\sigma_{bc}^{(2n)}$ with \emph{negative} values of $n$ attain a small but non-vanishing value, becoming progressively smaller with decreasing $n$. The rapidly varying components of the Raman coherence with \emph{positive} values of $n$ all vanish. An asymmetry in the Raman coherence is to be expected, since neither the coupling field nor the revived probe field is symmetric in $z$. \section{Non-adiabatic corrections}\label{Sec:Non-adiabatic} In \cite{Fleischhauer2} it was shown that the finite length of the probe pulse leads to a broadening of the pulse envelope due to dispersion. In this section we shall investigate the same effect in the standing wave case and show that the dispersive broadening vanishes in the case of a pure standing wave coupling field. Our starting point is the differential equation (\ref{Eq:diffeqn_sigmabc}) for the Raman coherence $\sigma_{bc}$. To first order in $(\gamma_{ba}T)^{-1}$ we find \begin{equation} \sigma_{bc}=-\frac{g_pE_p}{\Omega_c}+ \frac{\Gamma_{ba}}{\abs{\Omega_c}^2}\ddt{}\left( \frac{g_pE_p}{\Omega_c}\right), \end{equation} where we have assumed $\Gamma_{bc}=0$ to simplify the calculations.
Inserting this expression into (\ref{Eq:weak_probe_HL_stand_a}) and introducing the DSP fields defined in (\ref{Eq:def_polariton_stand}), we get \begin{widetext} \begin{equation}\label{Eq:sigma_ba_stand_nonadia} \begin{split} \sqrt{N_\mathbf{r}}\sigma_{ba}&=\frac{-\sin\theta}{i\Omega\bigl(1+ 2\abs{\kappa^+}\abs{\kappa^-}\cos(2kz+\phi)\bigr)}\ddt{}\left( \Psi^+e^{ikz}+\Psi^-e^{-ikz}\right) \\ &+\frac{\Gamma_{ba}}{g_p^2N_\mathbf{r}} \frac{\sin\theta\tan^2\theta}{i\Omega\bigl(1+2\abs{\kappa^+} \abs{\kappa^-}\cos(2kz+\phi)\bigr)^2}\dddtdt{}\left(\Psi^+e^{ikz}+ \Psi^-e^{-ikz}\right) \end{split} \end{equation} \end{widetext} where we have assumed that the coupling laser Rabi frequency changes slowly enough to set $\dot{\theta}=0$ in the equations. As in section \ref{Sec:Standing_wave_polaritons_cold} we need to find the Fourier components $\sigma_{ba}^\pm$. To do this we apply the Fourier series (\ref{Eq:Fourier_series1}) and we introduce \begin{equation} \frac{1}{(1+y\cos x)^2}=\frac{d_0}{2}+\sum_{n=1}^\infty d_n\cos(nx) \end{equation} where, as before, $y=2\abs{\kappa^+}\abs{\kappa^-}$ and $x=2kz+\phi$. 
Inserting the Fourier series into (\ref{Eq:sigma_ba_stand_nonadia}), we get \begin{subequations}\label{Eq:sigma_ba_stand_nonadia2} \begin{align} \begin{split} \sqrt{N_\mathbf{r}}\sigma_{ba}^+&=\frac{\sin\theta}{2i\Omega}\biggl( -\ddt{}\left(a_0\Psi^++a_1e^{i\phi}\Psi^-\right) \\ &+\frac{\Gamma_{ba}}{g_p^2N_\mathbf{r}}\tan^2\theta\dddtdt{}\left(d_0 \Psi^++d_1e^{i\phi}\Psi^-\right)\biggr) \end{split} \\ \begin{split} \sqrt{N_\mathbf{r}}\sigma_{ba}^-&=\frac{\sin\theta}{2i\Omega}\biggl( -\ddt{}\left(a_0\Psi^-+a_1e^{-i\phi}\Psi^+\right) \\ &+\frac{\Gamma_{ba}}{g_p^2N_\mathbf{r}}\tan^2\theta\dddtdt{}\left(d_0 \Psi^-+d_1e^{-i\phi}\Psi^+\right)\biggr) \end{split} \end{align} \end{subequations} The Fourier coefficients $a_{0,1}$ have already been calculated and are given by (\ref{Eq:Fouriercoef_a}), while the Fourier coefficients $d_{0,1}$ are given by \begin{subequations}\label{Eq:Fouriercoef_d} \begin{align} d_0&=\frac{1}{\pi}\int_{-\pi}^\pi\frac{\mathrm{d}x}{(1+y\cos x)^2} =\frac{2}{(1-y^2)^{3/2}}, \\ d_1&=\frac{1}{\pi}\int_{-\pi}^\pi\frac{\cos x\mathrm{d}x}{(1+y\cos x)^2}=-\frac{2y}{(1-y^2)^{3/2}}. \end{align} \end{subequations} We now insert the expressions (\ref{Eq:sigma_ba_stand_nonadia2}) into (\ref{Eq:probe_waveeqn}) to obtain a set of coupled wave equations for the DSP fields \begin{widetext} \begin{subequations} \begin{align} \ddt{\Psi^+}+c\cos^2\theta'\ddz{\Psi^+}&=-\sin^2\theta'\biggl[ se^{i\phi}\ddt{\Psi^-}-\frac{\Gamma_{ba}}{g_p^2N_\mathbf{r}} \tan^2\theta\dddtdt{}\left( s'\Psi^++s''e^{i\phi}\Psi^-\right)\biggr] \\ \ddt{\Psi^-}-c\cos^2\theta'\ddz{\Psi^-}&=-\sin^2\theta'\biggl[ se^{-i\phi}\ddt{\Psi^+}-\frac{\Gamma_{ba}}{g_p^2N_\mathbf{r}} \tan^2\theta\dddtdt{}\left( s'\Psi^-+s''e^{-i\phi}\Psi^+\right)\biggr] \end{align} \end{subequations} \end{widetext} where we have introduced the constants \begin{equation} s=\frac{a_1}{a_0}\qquad s'=\frac{d_0}{a_0}\qquad s''=\frac{d_1}{a_0}. 
\end{equation} Once again we consider the low group velocity limit $\cos^2\theta\ll 1$. With this approximation the wave equations simplify to \begin{widetext} \begin{subequations}\label{Eq:Polariton_waveeqn3} \begin{align} \ddt{\Psi^+}+c\cos^2\theta'\ddz{\Psi^+}&=-se^{i\phi}\ddt{\Psi^-} +\frac{\Gamma_{ba}}{g_p^2N_\mathbf{r}}\tan^2\theta\dddtdt{}\left( s'\Psi^++s''e^{i\phi}\Psi^-\right)\\ \ddt{\Psi^-}-c\cos^2\theta'\ddz{\Psi^-}&=-se^{-i\phi}\ddt{\Psi^+} +\frac{\Gamma_{ba}}{g_p^2N_\mathbf{r}}\tan^2\theta\dddtdt{}\left( s'\Psi^-+s''e^{-i\phi}\Psi^+\right) \end{align} \end{subequations} \end{widetext} To the same order of approximation, we can replace the second time derivatives of the DSP fields with the second time derivative of the zeroth order solution. Differentiating both sides of (\ref{Eq:Polariton_waveeqn3}) with respect to $t$, and discarding derivatives of order greater than two, we get \begin{subequations} \begin{align} \dddtdt{\Psi^+}+c\cos^2\theta'\ddz{}\ddt{\Psi^+}&=-se^{i\phi} \dddtdt{\Psi^-} \\ \dddtdt{\Psi^-}-c\cos^2\theta'\ddz{}\ddt{\Psi^-}&=-se^{-i\phi} \dddtdt{\Psi^+}, \end{align} \end{subequations} where we once again assume that the coupling laser Rabi frequency changes slowly. Using (\ref{Eq:Polariton_waveeqn3}) we can solve for the second time derivatives of the DSP field. We find \begin{equation}\label{Eq:Second_deriv_relation} \dddtdt{\Psi^\pm}=\frac{1-y^2}{1-s^2}\left(c\cos^2\theta\right)^2 \dddzdz{\Psi^\pm}, \end{equation} where we exploited the fact that in the low group velocity limit $\cos^2\theta'\simeq\sqrt{1-y^2}\cos^2\theta$. 
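The closed forms for the Fourier coefficients $d_0$ and $d_1$ used above can be confirmed by direct numerical quadrature in the same way as $a_0$ and $a_1$; the integrand is again smooth and $2\pi$-periodic, so a simple trapezoid rule converges rapidly (a check only, not part of the derivation):

```python
import math

def d_n_numeric(n, y, steps=20000):
    """d_n = (1/pi) * integral_{-pi}^{pi} cos(n x)/(1 + y cos x)^2 dx,
    evaluated by the trapezoid rule over one full period."""
    h = 2.0 * math.pi / steps
    total = 0.0
    for k in range(steps + 1):
        x = -math.pi + k * h
        w = 0.5 if k in (0, steps) else 1.0  # trapezoid end-point weights
        total += w * math.cos(n * x) / (1.0 + y * math.cos(x)) ** 2
    return total * h / math.pi

def d0_closed(y):
    """Closed form d_0 = 2 / (1 - y^2)^{3/2}."""
    return 2.0 / (1.0 - y * y) ** 1.5

def d1_closed(y):
    """Closed form d_1 = -2y / (1 - y^2)^{3/2}."""
    return -2.0 * y / (1.0 - y * y) ** 1.5
```

Both coefficients diverge as $y\rightarrow 1$, but their ratio $s''=d_1/a_0$ remains finite, which is what enters the wave equations.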
Inserting (\ref{Eq:Second_deriv_relation}) into (\ref{Eq:Polariton_waveeqn3}), and assuming that $\abs{\kappa^+}\geq\abs{\kappa^-}$, the coupled wave equations take the form \begin{widetext} \begin{subequations}\label{Eq:Polariton_waveeqn4} \begin{align} \ddt{\Psi^+}+\abs{\kappa^+}^2v_g\ddz{\Psi^+}&=\kappa^+\kappa^{-*}v_g \ddz{\Psi^-}+\frac{\abs{\kappa^+}^2l_av_g}{\sqrt{1-y^2}}\dddzdz{} \left(\abs{\kappa^+}^2\Psi^+-\kappa^+\kappa^{-*}\Psi^-\right) \\ \ddt{\Psi^-}-\abs{\kappa^+}^2v_g\ddz{\Psi^-}&=-\kappa^{+*}\kappa^-v_g \ddz{\Psi^+}+\frac{\abs{\kappa^+}^2l_av_g}{\sqrt{1-y^2}}\dddzdz{} \left(\abs{\kappa^+}^2\Psi^--\kappa^{+*}\kappa^-\Psi^+\right) \end{align} \end{subequations} \end{widetext} To solve the coupled wave equations (\ref{Eq:Polariton_waveeqn4}) we proceed by Fourier transforming with respect to $z$, such that $\Psi^\pm(z,t)\to\tilde{\Psi}^\pm(q,t)$, and find the solution \begin{widetext} \begin{subequations} \begin{align} \begin{split} \tilde{\Psi}^+(q,t)&=\frac{1}{2d}\biggl(\bigl(b\tilde{\Psi}^-(q,0)- (\abs{\kappa^+}^2-d)\tilde{\Psi}^+(q,0)\bigr)\exp\bigl(iq\lambda_+ r(t)\bigr) \\ &+\bigl( (\abs{\kappa^+}^2+d)\tilde{\Psi}^+(q,0)-b\tilde{\Psi}^- (q,0)\bigr)\exp\bigl(iq\lambda_-r(t)\bigr)\biggr) \end{split} \\ \begin{split} \tilde{\Psi}^-(q,t)&=\frac{1}{2d}\biggl(\bigl(-b^*\tilde{\Psi}^+(q,0) +(\abs{\kappa^+}^2+d)\tilde{\Psi}^-(q,0)\bigr)\exp\bigl(iq\lambda_+ r(t)\bigr) \\ &+\bigl(b^*\tilde{\Psi}^+(q,0)-(\abs{\kappa^+}^2-d)\tilde{\Psi}^- (q,0)\bigr)\exp\bigl(iq\lambda_-r(t)\bigr)\biggr) \end{split} \end{align} \end{subequations} \end{widetext} where \begin{equation} b=\kappa^+\kappa^{-*}(1-iq\xi),\quad \lambda^\pm=i\abs{\kappa^+}^2\xi q\pm d \end{equation} and \begin{equation} \xi=\frac{\abs{\kappa^+}^2l_a}{\sqrt{1-y^2}}, \end{equation} \begin{equation} d=\sqrt{\abs{\kappa^+}^2\left(\abs{\kappa^+}^2-\abs{\kappa^-}^2\right) -\abs{\kappa^+}^2\abs{\kappa^-}^2\xi^2q^2}. 
\end{equation} Inserting the initial conditions (\ref{Eq:Psi_pm_init_cond}) for the DSP field and considering the limit $\kappa^\pm\rightarrow \tfrac{1}{\sqrt{2}}$, corresponding to a pure standing wave coupling field, the solution becomes \begin{subequations} \begin{align} \Psi^+(z,t)&=\frac{1}{\sqrt{2}}\Psi(z,0), \\ \Psi^-(z,t)&=\frac{1}{\sqrt{2}}\Psi(z,0). \end{align} \end{subequations} From this solution it is clear that the broadening of the pulse envelope due to dispersion is absent in the case of a pure standing wave coupling field. The effect \emph{is} present in the case of a quasi-standing wave coupling field. If we consider the limiting case of a traveling wave coupling field $(\kappa^-\rightarrow 0)$, we find the same dispersion term in the wave equation (\ref{Eq:Polariton_waveeqn4}) that is given in \cite{Fleischhauer2}. \section{Summary}\label{Sec:Summary} In this article we have presented a detailed theoretical treatment of stationary light pulses in media composed of stationary atoms, such as ultra cold gases and solid state media. We found that, contrary to the thermal gas case, the achievable trapping time is limited only by the Raman dephasing rate of the atoms and such media are thus ideally suited for the kind of nonlinear optical interactions envisaged in \cite{Friedler,Andre2}. It was also shown that the behavior of the probe pulse when employing quasi-stationary coupling fields is significantly different for moving and non-moving atoms. This fact must be taken into account when considering schemes for interacting pulses. Although, to the best of our knowledge, no experiment with stationary light pulses in ultra cold media has yet been reported, several experiments on normal EIT and light storage have been performed with ultra cold gases \cite{Liu} and solid state media \cite{Turukhin,Longdell}. These experiments have also demonstrated the possibility of using beam geometries other than copropagating probe and coupling lasers.
We therefore expect that the experimental demonstration of stationary light pulses in such media is within present-day capabilities. \begin{acknowledgments} We gratefully acknowledge stimulating discussions with M. Fleischhauer and F. Zimmer, and we thank A. Andr\'e and M. Lukin for communicating their results on stationary pulses in thermal gases prior to publication in \cite{Zimmer}. This work is supported by the European Integrated Project SCALA and the ONR-MURI collaboration on quantum metrology with atomic systems. \end{acknowledgments}
\section{Band structure of the spin filter using micropillars of different diameters} In this section, we consider micropillars of two different diameters to construct the honeycomb lattice. In the considered system the two types of micropillars have different mode localization energies due to their different diameters, which automatically gives rise to the onsite detuning, $\delta$. In Fig.~\ref{FigS1}(a) a schematic diagram of such a system of micropillars arranged in a honeycomb lattice is considered, which is infinite along the $x$ direction and finite along the $y$ direction with zigzag edges. The band structure is calculated the same way as done in the main text. The diameters of the micropillars are considered to be $2.4~\mu$m and $1.9~\mu$m \cite{Klembt_2018}, which represent the A and B sublattices, respectively, and the depth of the micropillars is the same, taken to be 16.5 meV. In order to realize the coupling between the two neighboring sites, the micropillars have an overlap between them such that the unit cell size along the $x$ direction is 3 $\mu$m. In Fig.~\ref{FigS1}(b) the band structure of the system is plotted without taking into account the Zeeman splitting, $2\Delta_z$, and the TE-TM splitting, $\Delta_T$. There is a trivial bandgap, $\varepsilon_g$, which is due to the breaking of the inversion symmetry. Since the edges of the lattice are composed of different types of micropillars, the edge states located at the two edges of the lattice are no longer symmetric. There is no term ($\Delta_z=0$) to break the symmetry between the $\sigma_{+}$ and $\sigma_{-}$ spin polarized polaritons and as a result the band structure is degenerate for both spins. The effect of $\Delta_z$ is shown in Fig.~\ref{FigS1}(c), which lifts the degeneracy between the two spins and the band structure of the $\sigma_+$ and $\sigma_-$ polarized polaritons gets shifted in energy by $2\Delta_z$.
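The qualitative features described here can be reproduced with a one-dimensional tight-binding model of a zigzag honeycomb ribbon. The sketch below is a minimal illustration, not the full polariton model of the main text: the hopping $t$, the staggered detuning $\pm\delta/2$, and the ribbon width are hypothetical values, the Zeeman term enters as a rigid spin-dependent shift $\pm\Delta_z$, and the TE-TM term is omitted. At $k=\pi$ the intra-row bonds vanish, so the outermost A and B sites decouple and pin the edge states at exactly $\pm\delta/2$ plus the Zeeman shift:

```python
import numpy as np

# Zigzag honeycomb ribbon with staggered onsite detuning +/- delta/2 (A/B sites)
# and a spin-diagonal Zeeman shift s*Dz (s = +1 or -1); no TE-TM coupling, so
# the two spin blocks are independent.
t, delta, Dz, W = 1.0, 0.5, 0.18, 20   # hopping, detuning, Zeeman, number of A-B rows

def h_ribbon(k, s):
    """Bloch Hamiltonian of one spin at momentum k (sites A0,B0,A1,B1,...)."""
    n = 2 * W
    H = np.zeros((n, n), dtype=complex)
    for j in range(W):
        H[2 * j, 2 * j] = 0.5 * delta + s * Dz          # A site
        H[2 * j + 1, 2 * j + 1] = -0.5 * delta + s * Dz  # B site
        H[2 * j, 2 * j + 1] = -t * (1.0 + np.exp(1j * k))  # two in-row A-B bonds
        H[2 * j + 1, 2 * j] = np.conj(H[2 * j, 2 * j + 1])
        if j < W - 1:
            H[2 * j + 1, 2 * j + 2] = -t                 # B_j - A_{j+1} rung
            H[2 * j + 2, 2 * j + 1] = -t
    return H

E_pi_up = np.linalg.eigvalsh(h_ribbon(np.pi, +1))  # sigma_+ spectrum at k = pi
E_pi_dn = np.linalg.eigvalsh(h_ribbon(np.pi, -1))  # sigma_- spectrum at k = pi
edge_gap = np.min(np.abs(E_pi_up - Dz))            # edge-state energy = delta/2
```

The two spin spectra are identical up to a rigid shift of $2\Delta_z$, mirroring the shifted bands in Fig.~\ref{FigS1}(c), and reversing the sign of $\Delta_z$ swaps the roles of the two spins.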
Figs.~\ref{FigS1}(d-e) show the SF band structure taking into account both the $\Delta_z$ and $\Delta_T$ terms. The parameters are chosen such that the $\sigma_-$ spin polarized edge state is surrounded by the $\sigma_+$ spin polarized bulk modes. Hence, one can show in the same way as done in the main text that the $\sigma_-$ spin polarized polaritons will propagate robustly along the lower edge from left to right (determined by the group velocity of the edge state, which is positive in this case) and the system will act as a SF where $\sigma_-$ spin polarized polaritons will be filtered out. It should be noted that, by reversing the sign of the magnetic field (which changes $\Delta_z$ to $-\Delta_z$), one can easily change the spin polarization of the SF edge state (see Fig.~\ref{FigS1_2}). \begin{figure}[H] \includegraphics[width=0.5\textwidth]{Sup1.pdf} \caption{(a) Schematic diagram of a honeycomb lattice having zigzag edges consisting of two different types of micropillars having different diameters. Band structure of the system under consideration for $\Delta_z=\Delta_T=0$ in (b); for $\Delta_z = 0.18$ meV and $\Delta_T=0$ meV$\mu$m$^2$ in (c); and for $\Delta_z = 0.18$ meV and $\Delta_T=0.1$ meV$\mu$m$^2$ in (d-e). In (b) and (d) the bulk modes are represented in blue whereas green and red represent the states located at the two edges.
In this case the SF works for $\sigma_+$ spin polarized polaritons.} \label{FigS1_2} \end{figure} \section{Effect of TE-TM splitting on the band structure} In this section we obtain the band structure by solving Eqs. (1-3) of the main text corresponding to different $\Delta_T$. In Fig.~\ref{FigS2}(a) the band structure of the system without $\Delta_z$ and $\Delta_T$ is plotted, which shows the trivial bandgap similar to Fig.~\ref{FigS1}(b). Fig.~\ref{FigS2}(b) shows the effect of $\Delta_z$, which shifts the band structure for $\sigma_\pm$ spin polarized polaritons in energy. In Figs.~\ref{FigS2}(c-d) the band structures corresponding to $\Delta_T=0.09$ meV$\mu$m$^2$ are plotted, which show that the SF regime can still exist even for a lower value (than the one used in the main text) of TE-TM splitting. However, the band structure starts to deviate from the SF regime for larger values of TE-TM splitting, $\Delta_T=0.24$ meV$\mu$m$^2$, as shown in Figs.~\ref{FigS2}(e-f). The SF edge state starts to be flat and also two edge states start to appear within the same energy interval. To conclude, there is a great degree of freedom in choosing the parameters as long as $\varepsilon_g\simeq 2\Delta_z \ge \Delta_T$. \begin{figure}[H] \centering \includegraphics[width=0.49\textwidth]{Sup2v2.pdf} \caption{Band structure of the system under consideration for $\Delta_z=\Delta_T=0$ in (a), for $\Delta_z = 0.18$ meV and $\Delta_T=0$ meV$\mu$m$^2$ in (b), for $\Delta_z = 0.18$ meV and $\Delta_T=0.09$ meV$\mu$m$^2$ in (c-d), and for $\Delta_z = 0.18$ meV and $\Delta_T=2.4$ meV$\mu$m$^2$ in (e-f). The colour codings are the same as those in Fig.~\ref{FigS1}.} \label{FigS2} \end{figure} \section{Effect of the polariton lifetime on the propagation distance} In this section, the propagation distance for different polariton lifetimes is plotted using Eqs. (1-2) and (4) in the main text. For polaritons with a lifetime of 30 ps the propagation distance is about 60 $\mu$m.
As expected, the propagation distance increases with polariton lifetime and the propagation distance reaches around 100 $\mu$m for 50 ps. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{Sup3.pdf} \caption{Spin propagation in real space for different values of $\Gamma$ which corresponds to the polariton lifetime of 30 ps in (a), 40 ps in (b), 50 ps in (c), and 60 ps in (d). The red arrows indicate the position of the continuous pump. All other parameters were kept the same as those in Fig. 3(a) in the main text.} \label{FigS3} \end{figure}
\section{Introduction} \label{sec:Introduction} The solution of the linear Boltzmann equation (LBE) is computationally expensive due to its high dimensionality. Studies of numerical techniques focus mostly on the steady mono-energetic form, which is the building block for more complex cases. Angular discretization is generally performed by discrete ordinates, which is straightforward to implement. Spatial discretization is most often performed with the discontinuous Galerkin method. Whatever the discretization used, some form of iteration is needed to resolve the system of equations. Source iteration, the most widespread technique, works well for optically thin media but is less suitable for thick scattering materials. As the slowest mode of convergence in thick scattering media has little angular dependence, this can be remedied by diffusion synthetic acceleration (DSA), which accelerates the reduction of these errors through the solution of a diffusion problem. When used as a preconditioner in a Krylov method, this practically gives rise to an unconditionally effective and stable method. For anisotropic scattering and in thin media, DSA is still ineffective. Multigrid methods, where the problem is formulated on multiple levels, lead to a more successful approach. The basic principle of the multigrid approach is to smooth short-wavelength errors on successive grids, combined with an exact solver on the coarsest grid. DSA can be viewed as a two-grid technique, switching between complete transport on the fine level and diffusion on the coarse level. Our previous paper \cite{Lathouwers2019} has an extensive discussion of the literature on multigrid approaches, which we only briefly summarize here. The multigrid technique for radiation transport problems was pioneered by Morel and Manteuffel \cite{Morel1991}, who constructed an angular multigrid method for the one-dimensional $S_N$ equations.
The method was extended to two dimensions by Pautz \cite{Pautz1999} by the introduction of high-frequency filtering to increase the stability of the method. Various references have investigated the efficiency of angular multigrid schemes for both the $P_N$ and the $S_N$ equations \cite{Lee2010a,Lee2010b,Oliveira2000,Turcksin2012,Drumm2017}. These papers have shown the great benefit of multigrid: speedups of up to 10 compared to single-grid methods are reported. In previous work we have introduced a novel finite element angular discretization of the transport equation \cite{Kophazi2015}. The method offers the capability to anisotropically refine the sphere to focus on important directions. In a later paper we added a discretization scheme for the Fokker-Planck small-angle scatter term \cite{Hennink2017} and an angular multigrid scheme for its efficient solution \cite{Lathouwers2019}. The scheme utilizes the multigrid method as a preconditioner for a Krylov method and was found to be highly effective compared to the single-mesh preconditioner. The purpose of this paper is to widen the scope of these previously introduced methods to the more general case of highly anisotropic scatter as modeled by a Legendre scatter expansion. This paper is outlined as follows. In Section 2, we summarize the discontinuous Galerkin discretization in space-angle that our method is based on. The standard source iteration procedure is described in Section 3. The angular multigrid procedure proposed in the present work is discussed in Section 4, comprising the mesh hierarchy and the inter-grid transfers. Section 5 describes a newly formulated sweep methodology for efficient coarse mesh solution compatible with high-order scatter. The method is tested on a set of model problems. Final conclusions are drawn in Section 6.
\section{Space-angle discretization of the transport equation} \label{sec:Space_angle_discretization} The space-angle discretization of the Boltzmann equation has been presented in detail in \cite{Kophazi2015} and \cite{Hennink2017}, and will therefore be described briefly by only covering the essentials. Details can be found in the original references. Particle transport with highly anisotropic scatter is described by the linear Boltzmann transport equation. We neglect energy dependence in this paper as the focus is on accelerating the single group iterative method. The linear mono-energetic Boltzmann equation reads \begin{equation} \mathbf{\Omega} \cdot \nabla \phi(\textbf{r},\mathbf{\Omega}) + \Sigma_t(\textbf{r}) \phi(\textbf{r},\mathbf{\Omega}) = Q(\textbf{r},\mathbf{\Omega}) \label{eq:LBE} \end{equation} where $\textbf{r}$ is the spatial coordinate, $\mathbf{\Omega}$ is the unit direction vector, $\phi$ is the angular flux density, $Q$ is the volumetric source density including scatter, $\Sigma_t$ is the total macroscopic cross section. The angular flux for incoming directions is specified on the domain boundary, $\Gamma_I$, \begin{equation} \phi (\textbf{r},\mathbf{\Omega}) = \phi_{in} (\textbf{r},\mathbf{\Omega}), \; \textbf{r} \in \Gamma_I, \; \mathbf{\Omega} \cdot \hat{\textbf{n}} < 0 \end{equation} \subsection{Phase space elements} The spatial domain $V$ is made up of elements $V_k$, where $k$ is the index of the spatial element. A discontinuous solution space $S_{h,p}$, is defined containing polynomials of order $p$ at most. This is a standard approach and we focus our attention on the angular discretization. The construction of angular elements is based on hierarchical sectioning of the unit sphere into patches, $D_p$, where $p$ is the patch index. The coordinate planes divide the sphere into cardinal octants, which are spherical triangles. We also assign a level, $l_p$, to a patch. 
The spherical triangles at the coarsest level are assigned a level of $l_p = 0$. Spherical triangles with $l_p = 1$ are obtained by halving the edges of the $l_p = 0$ patches and subsequently connecting the resulting points with great circles. Every patch is hereby split into four daughters. This procedure can be repeated to arbitrary depth and applied locally on the sphere (anisotropic refinement). The angular subdivision of the sphere is described by a set $P$ of patch indices such that $\cup_{p \in P} D_p \equiv D$ where $D$ denotes the unit sphere surface and $D_p \cap D_q \equiv \emptyset$ for $\forall p,q \in P, p \neq q$. The phase space mesh is then obtained by assigning an angular subdivision $P_k$ to each spatial element $V_k$, providing a high level of flexibility. \subsection{Angular basis functions} Two sets of angular basis functions, $\psi_{[p] d} (\mathbf{\Omega})$, are used throughout this paper. Both are local to the patch $D_p$ by setting $\psi_{[p] d} (\mathbf{\Omega})=0$ if $\mathbf{\Omega} \notin D_p$ and are discontinuous at the patch boundary. Here $\psi_{[p] d} (\mathbf{\Omega})$ denotes the $d$-th basis function on the patch with index $p$. The locality property of the functions ensures that the streaming-removal terms will not couple non-overlapping patches. As explained later, this eases the use of sweep-based algorithms. The two basis function sets are: \begin{enumerate} \item{\textbf{Const} A unit value function on the patch.} \item{\textbf{Lin} A nodal set of three functions satisfying $\psi_{[p] d} (\mathbf{\Omega}_{d'}) = \delta_{dd'}$ where $\mathbf{\Omega}_{d'}$ are the patch vertices. These functions result from projecting the standard Lagrange functions on a specific flat triangle on the octahedron onto the sphere.
The flat triangle is formed by projecting the vertices of the patch to the octahedron.} \end{enumerate} \subsection{Discretization} The flux in each energy group is written as a product of spatial and angular basis functions as \begin{equation} \phi (\textbf{r},\mathbf{\Omega}) = \sum_{k,i} \sum_{p \in P_k,d} \phi\indices{_{[k,p]}^{i,d}} \Phi_{[k]i} (\textbf{r}) \Psi_{[p]d} (\mathbf{\Omega}) \label{eq:Expansion} \end{equation} where $\Phi_{[k]i} (\textbf{r})$ is the $i$-th spatial basis function of element $k$ and $\Psi_{[p]d} (\mathbf{\Omega})$ is the $d$-th angular basis function on patch (angular element) $p$. Substituting the expansion given by \egyref{eq:Expansion} into the transport equation (\egyref{eq:LBE}), multiplying with a test function in space and angle and subsequently integrating over the complete phase space leads to \begin{equation} \label{eq:complete} \sum_f \Upsilon_{[f,j,q]lm} - \sum_{\xi=1}^3 \sum_i \sum_d V_{[j]li\xi} \phi\indices{_{[j,q]}^{id}} A\indices{_{[q,p]md}^\xi} + \Sigma_t \sum_i \sum_d N_{[j]li} \phi\indices{_{[j,q]}^{id}} M_{[q]md} = Q_{[j,q]lm}, \end{equation} where the streaming term reads \begin{equation} \Upsilon_{[f,j,q]lm} = \int_{\partial V_j^f} \Phi_{[j]l} (\textbf{r}) \sum_{\begin{array}{c} k \in \{j,j_f'\} \\ i \end{array}} \sum_{\begin{array}{c} p \in P_k \\ d \end{array}} \phi\indices{_{[k,p]}^{id}} \Phi_{[k]i} (\textbf{r}) A_{[f,q,p]md} d \textbf{r} \label{eq:Surface_Integral} \end{equation} and we made use of the following shorthand notations: \begin{eqnarray} N_{[j]li} = \int_{V_j} \Phi_{[j]l} (\textbf{r}) \Phi_{[j]i} (\textbf{r}) d \textbf{r} \\ V_{[j]li\xi} = \int_{V_j} \nabla_\xi \Phi_{[j]l} (\textbf{r}) \Phi_{[j]i} (\textbf{r}) d \textbf{r} \\ A\indices{_{[q,p]md}^\xi} = \int_{D_q} \Omega^\xi \Psi_{[q]m} (\mathbf{\Omega}) \Psi_{[p]d} (\mathbf{\Omega}) d \mathbf{\Omega} \\ A_{[f,q,p]md} = \sum_{\xi=1}^3 {\hat{n}}_{[f] \xi} A\indices{_{[q,p]md}^{\xi}} \\ A_{[f,q]md} = A_{[f,q,q]md} \\ M_{[q,p]md} = \int_{D_q} \Psi_{[q]m}
(\mathbf{\Omega}) \Psi_{[q]d} (\mathbf{\Omega}) d \mathbf{\Omega} \\ M_{[q]md} = M_{[q,q]md} \\ Q_{[j,q]lm} = \int_{V_j} \int_{D_q} \Phi_{[j]l} (\textbf{r}) \Psi_{[q]m} (\mathbf{\Omega}) Q(\textbf{r},\mathbf{\Omega}) d \mathbf{\Omega} d \textbf{r}. \end{eqnarray} Here, element $j$ has faces indexed by $f$ with its neighbor at face $f$ denoted as $j_f'$, and ${\hat{n}}_{[f] \xi}$ is the $\xi$ coordinate of the outward normal of face $f$ in element $j$. The mass matrices in space and angle are denoted by $N$ and $M$, respectively. $V$ contains the volumetric streaming integral and $A$ is the angular Jacobian. The face integrals from streaming need to be made unique by an upwinding procedure. In $S_N$ schemes this is straightforward. Here, the angular components on a patch need to be treated simultaneously. Various authors (e.g. \cite{Pain2006}) have introduced Riemann procedures to separate the surface terms into inward and outward contributions. In previous papers \cite{Kophazi2015,Hennink2017} we have derived how the numerical flux needs to be evaluated in the different situations, i.e. where the neighbor is either (i) equally refined, (ii) coarser or (iii) finer. To prevent repetition and concentrate on the multigrid aspects, we refer the reader to the original references for a more in-depth discussion. \section{Single-grid solution approach} \label{sec:solver} The discrete transport equation is written as \begin{equation} L {\boldsymbol \phi} = S {\boldsymbol \phi} + {\bf f}. \end{equation} Here, $L$ and $S$ are the discrete transport and scatter operators, respectively, and ${\bf f}$ is the vector containing the independent source. We use preconditioned BiCGSTAB \cite{bicgstab} to solve the linear system with $L^{-1}$ as preconditioner, corresponding to Krylov-accelerated source iteration \begin{equation} L {\boldsymbol \phi}^{k+1} = S {\boldsymbol \phi}^k + {\boldsymbol f}.
\end{equation} In discrete ordinates codes the operator $L^{-1}$ is easily applied as the ordinates are independent. Here, we use a finite element discretization in angle, where such a sweep is no longer exact. The cause is possible bi-directionality due to some directions on a patch being incoming whereas others are outgoing with respect to the face. To precondition the system we have devised the following sweep algorithm: \begin{itemize} \item An $S_2$ ordinate set is used and for each ordinate, corresponding to a particular octant, the sweep order of the elements is determined. \item For each direction, we traverse the spatial elements and each angular element in the octant is visited and the corresponding linear system is solved. In more mathematical terms we perform a block Gauss-Seidel iteration for a given ordering of the spatio-angular elements. Splitting $L$ into implicit and explicit part, $L=L_I + L_E$, the iteration reads % \begin{equation} L_I {\bf \phi}^{k+1} = (S - L_E) {\bf \phi}^k + {\bf f} \end{equation} % and the preconditioner then is $L_I$. \end{itemize} We have previously demonstrated that this sweep algorithm is an effective preconditioner (see \cite{Kophazi2015} for more details). In problems where scatter is highly anisotropic the procedure unfortunately becomes less effective. \section{An angular multigrid preconditioner} \label{sec:Angular_Multigrid_Preconditioner} The multigrid method smooths the error on a given mesh and then transfers the residual to a coarser mesh where the lower frequency errors can be more effectively attenuated. As we deal with a linear problem, we use the linear multigrid algorithm as shown in Figure~\ref{fig:lmg}. 
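Before turning to the multigrid components, the structure of the single-grid iteration of the previous section can be illustrated on a toy dense system (our illustration, not the transport discretization itself; the matrix sizes and scalings are arbitrary choices). A triangular "sweepable" operator $L$ and a weak coupling $S$ give an iteration matrix $L^{-1}S$ with spectral radius well below one, so the fixed-point iteration converges to the solution of $(L-S)\boldsymbol{\phi} = {\bf f}$:

```python
import numpy as np

# Toy source iteration phi <- L^{-1}(S phi + f): L is lower triangular, so
# applying L^{-1} is a forward substitution (the analogue of a transport sweep);
# S is a weak scattering-like coupling, hence rho(L^{-1} S) < 1.
rng = np.random.default_rng(0)
n = 40
L = np.eye(n) + 0.01 * np.tril(rng.standard_normal((n, n)), -1)
S = 0.3 * rng.uniform(0.0, 1.0, (n, n)) / n
f = rng.uniform(-1.0, 1.0, n)

phi = np.zeros(n)
for _ in range(200):
    phi = np.linalg.solve(L, S @ phi + f)      # one "sweep" per iteration

phi_exact = np.linalg.solve(L - S, f)          # direct solution for comparison
err = np.linalg.norm(phi - phi_exact)
rho = np.max(np.abs(np.linalg.eigvals(np.linalg.solve(L, S))))
```

As in the transport case, the closer the spectral radius of $L^{-1}S$ gets to one (strong, anisotropic scatter), the slower this iteration becomes, which motivates the multigrid preconditioner.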
\begin{figure} Algorithm $LMG(\boldsymbol{\phi}_l,{\bf{f}}_l,l)$ \\ if $(l=0)$ then \\ \mbox{} \mbox{} $S(\boldsymbol{\phi}_l,{\bf{f}}_l,\nu_{coarse})$ \\ else \\ \mbox{} \mbox{} $S(\boldsymbol{\phi}_l,{\bf{f}}_l,\nu_{pre})$ \\ \mbox{} \mbox{} ${\bf{r}}_l = {\bf{f}}_l - A_l \boldsymbol{\phi}_l$ \\ \mbox{} \mbox{} ${\bf{f}}_{l-1} = R {\bf{r}}_l$ \\ \mbox{} \mbox{} $\boldsymbol{\phi}_{l-1} = 0$ \\ \mbox{} \mbox{} $LMG(\boldsymbol{\phi}_{l-1},{\bf{f}}_{l-1},l-1)$ \\ \mbox{} \mbox{} $\boldsymbol{\phi}_l = \boldsymbol{\phi}_l + P \boldsymbol{\phi}_{l-1}$ \\ \mbox{} \mbox{} $S(\boldsymbol{\phi}_l,{\bf{f}}_l,\nu_{post})$ \\ endif \\ end Algorithm LMG \\ \caption{Recursive linear multigrid algorithm based on the V-cycle (\cite{Wesseling1992}). The number of pre- and post-smoothing steps is $\nu_{pre}$ and $\nu_{post}$, respectively. The restriction and prolongation operators are denoted by $R$ and $P$.} \label{fig:lmg} \end{figure} Since the multigrid method has been extensively documented (see e.g. \cite{Wesseling1992}), we only discuss the particular multigrid components. \subsection{Multigrid components} A nested series of spherical meshes is obtained by refining a uniformly discretized sphere consisting of 8 spherical triangles (the octants). The coarsest mesh, $T_0$, has level $0$. Refined meshes are constructed by refining specific angular elements. Angular meshes are constrained to possess only up to two-irregularity, i.e. neighboring angular elements can differ by two levels at most. A series of triangulations $\{T_l\}$ is obtained, where the maximum angular element level on $T_l$ is $l_p=l$. The finest angular mesh is $T_L$. The maximum occurring element refinement in the problem is $l_p=L$. An example of a series of spherical meshes is shown in Figure~\ref{fig:circ_meshes}.
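The LMG recursion of Figure~\ref{fig:lmg} can be exercised on a simple model problem. The sketch below is a minimal 1-D Poisson analogue (our illustration, not the transport solver): damped Jacobi takes the role of the smoother $S$, the coarse operators are rediscretized on each level as in the DCA approach, and linear interpolation with full-weighting restriction stands in for $P$ and $R$:

```python
import numpy as np

# 1-D Poisson analogue of the LMG V-cycle: damped-Jacobi smoothing, rediscretized
# coarse operators, linear-interpolation prolongation and full-weighting
# restriction. Only interior nodes are stored; level 0 is solved directly.

def laplacian(m, h):
    return ((np.diag(2.0 * np.ones(m))
             - np.diag(np.ones(m - 1), 1)
             - np.diag(np.ones(m - 1), -1)) / h**2)

def prolong(vc):
    vpad = np.concatenate(([0.0], vc, [0.0]))
    v = np.zeros(2 * len(vc) + 1)
    v[1::2] = vc                            # coincident coarse/fine nodes
    v[0::2] = 0.5 * (vpad[:-1] + vpad[1:])  # new fine nodes: linear interpolation
    return v

def restrict(vf):
    return 0.25 * vf[0:-2:2] + 0.5 * vf[1::2] + 0.25 * vf[2::2]

def lmg(phi, f, level, h, nu=3, omega=2.0 / 3.0):
    A = laplacian(len(f), h)
    if level == 0:
        return np.linalg.solve(A, f)        # exact solve on the coarsest level
    for _ in range(nu):                     # pre-smoothing (damped Jacobi)
        phi = phi + omega * (h**2 / 2.0) * (f - A @ phi)
    ec = lmg(np.zeros((len(f) - 1) // 2), restrict(f - A @ phi), level - 1, 2 * h)
    phi = phi + prolong(ec)                 # coarse-grid correction
    for _ in range(nu):                     # post-smoothing
        phi = phi + omega * (h**2 / 2.0) * (f - A @ phi)
    return phi

n, levels = 63, 4                           # 63 -> 31 -> 15 -> 7 -> 3 interior nodes
h = 1.0 / (n + 1)
A, f = laplacian(n, h), np.ones(n)
phi = np.zeros(n)
for _ in range(10):                         # ten V-cycles
    phi = lmg(phi, f, levels, h)
res = np.linalg.norm(f - A @ phi) / np.linalg.norm(f)
```

The residual reduction per cycle is roughly mesh-independent, which is the behavior one hopes to recover for the angular hierarchy as well.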
\begin{figure}[!ht] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Circ5_at_0.jpg} \caption{$T_0$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Circ5_at_1.jpg} \caption{$T_1$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Circ5_at_2.jpg} \caption{$T_2$} \end{subfigure} \\ \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Circ5_at_3.jpg} \caption{$T_3$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Circ5_at_4.jpg} \caption{$T_4$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Circ5_at_5.jpg} \caption{$T_5$} \end{subfigure} \caption{A set of nested angular meshes. Mesh $T_5$ is the finest mesh. The mesh elements are shown as bisected triangles rather than as spherical triangles, an artifact of the plotting routine.} \label{fig:circ_meshes} \end{figure} The prolongation operator follows naturally from the use of a nested discontinuous finite element space, i.e. the complete angular solution can be prolongated without approximation from the coarse to the fine grid by Galerkin projection. The restriction operator is the transpose of the prolongation operator, i.e. $R = P^T$. The standard source iteration procedure is used as the smoother. Hence, we perform iterations of the form \begin{equation} L_l \phi_l^{k+1} = S_l \phi_l^k + f_l \end{equation} where $k$ is the iteration index. The coarse mesh problems are formulated by direct discretization (Discretization Coarse Grid Approximation, DCA). This procedure is compatible with our matrix-free implementation of the transport solver. Solution on the coarsest level could be performed by source iteration. This is, however, not effective for highly anisotropic scatter. In Section~\ref{subsec:alt_sweep} we describe a more effective approach.
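A minimal runnable sketch of the V-cycle of Figure~\ref{fig:lmg} is given below, with a damped-Jacobi smoother standing in for the transport sweep and, for the illustration only, a Galerkin coarse operator instead of the DCA used in our solver; names and shapes are illustrative:

```python
import numpy as np

def smooth(A, phi, f, nu, omega=2.0 / 3.0):
    """Damped-Jacobi smoother standing in for the transport sweep S."""
    D = np.diag(A)
    for _ in range(nu):
        phi = phi + omega * (f - A @ phi) / D
    return phi

def lmg(phi, f, level, A, R, P, nu_pre=1, nu_post=1, nu_coarse=500):
    """Recursive V-cycle following the LMG pseudocode."""
    if level == 0:
        return smooth(A[0], phi, f, nu_coarse)       # coarsest-level solve
    phi = smooth(A[level], phi, f, nu_pre)           # pre-smoothing
    r = f - A[level] @ phi                           # residual r_l
    f_c = R[level] @ r                               # restrict: f_{l-1} = R r_l
    e_c = lmg(np.zeros_like(f_c), f_c, level - 1,
              A, R, P, nu_pre, nu_post, nu_coarse)   # coarse-grid correction
    phi = phi + P[level] @ e_c                       # prolongate and correct
    return smooth(A[level], phi, f, nu_post)         # post-smoothing
```

In the transport setting the coarsest-level loop is replaced by the dedicated coarse grid solver discussed in Section~\ref{subsec:alt_sweep}.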
\section{Results} \label{sec:Results} Although our method is general in terms of the scattering functions used, we focus on the Fokker-Planck equivalent scatter kernel. For such scatter kernels, the Legendre moments are given by \begin{equation} \sigma_{s,n} = \frac{\alpha}{2} \left( N(N+1) - n(n+1) \right) \label{eqn:scatter_moments} \end{equation} where $\alpha$ is the momentum transfer coefficient. In the present paper we will vary the scatter order, $N$, to increase the forward-peaked nature of the scatter kernel. The normalized scatter cross section for varying $N$ is shown in Figure~\ref{fig:scatter_for_diff_orders}. \begin{figure} \includegraphics[width=0.9\columnwidth]{./scatter.jpg} \caption{Scatter cross section normalized to maximum value as function of $\mu$ for different expansion orders, $N$.} \label{fig:scatter_for_diff_orders} \end{figure} \subsection{Preliminary multigrid performance} \label{subsec:prel_mg_perf} In this section we present a preliminary study of multigrid performance and a discussion of the aspects that are crucial for obtaining an efficient method. We apply the basic multigrid algorithm to a radiation transport problem in a cube of size $5 \times 5 \times 5\ cm^3$. The geometry is meshed with $30 \times 30 \times 30$ hexahedral elements. The meshes were produced by Gmsh \cite{gmsh2009}. Boundary conditions are vacuum on all sides and the problem is driven by a uniform isotropic unit strength source in the cube. The transport cross section is equal to 1 and the scatter moments are calculated from Equation~\ref{eqn:scatter_moments}. We use the multigrid V-cycle with the transport sweep as smoother, the inter-grid transfers based on Galerkin projection discussed previously, and a thorough coarse grid solution. The stop criterion is chosen as $||r|| / ||b|| < 10^{-8}$, where $r$ is the residual vector and $b$ the right hand side. The stop criterion for the coarse grid solver is set to $10^{-5}$.
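The moments of Equation~\ref{eqn:scatter_moments} and the resulting kernel are easily tabulated. The sketch below evaluates them, assuming the standard $(2n+1)/(4\pi)$ Legendre reconstruction for the kernel (the normalization used for the figure is an assumption here):

```python
import numpy as np

def fp_scatter_moments(N, alpha=1.0):
    """Legendre moments sigma_{s,n} = (alpha/2)(N(N+1) - n(n+1)), n = 0..N."""
    n = np.arange(N + 1)
    return 0.5 * alpha * (N * (N + 1) - n * (n + 1))

def scatter_kernel(mu, N, alpha=1.0):
    """Reconstruct sigma_s(mu) = sum_n (2n+1)/(4 pi) sigma_{s,n} P_n(mu)."""
    moments = fp_scatter_moments(N, alpha)
    coeffs = (2.0 * np.arange(N + 1) + 1.0) / (4.0 * np.pi) * moments
    return np.polynomial.legendre.legval(mu, coeffs)
```

Evaluating `scatter_kernel` on a grid of $\mu$ values reproduces the increasingly forward-peaked shapes of Figure~\ref{fig:scatter_for_diff_orders} as $N$ grows.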
Uniformly refined angular meshes are used. Angular discretization is based on constant and linear basis functions. For different angular refinement levels on the finest mesh (l = 1, 2, or 3), we investigate the effects of the number of smoothing cycles and restricting the scatter order on the coarse level to $N_r < N$ on the multigrid performance. The results for the number of iterations obtained are given in Table~\ref{tab:opt_mg_perf}. \begin{table} \centering \caption{Iteration counts for isotropically refined angular meshes (l) up to level 3 and scatter orders N of 4,8 and 16 for the 3D box problem with uniform isotropic source for single grid (SG), different multigrid cycles and limited scatter orders $N_r$ on the coarsest level.} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline basis & l & $N_r$ & SG & V(1,1) & V(2,1) & SG & V(1,1) & V(2,1) & SG & V(1,1) & V(2,1) \\ \hline \hline & & & \multicolumn{3}{|c|} {$N=4$} & \multicolumn{3}{|c|} {$N=8$} & \multicolumn{3}{|c|} {$N=16$} \\ \hline \hline const & 1 & 0 & 39 & 14 & 14 & 93 & 22 & 22 & 179 & 42 & 42 \\ const & 1 & 1 & & 7 & 7 & & 12 & 11 & & 18 & 16 \\ const & 1 & 2 & & 7 & 7 & & 11 & 11 & & 19 & 19 \\ \hline const & 2 & 0 & 43 & 15 & 14 & 103 & 33 & 33 & 187 & 49 & 42 \\ const & 2 & 1 & & 9 & 8 & & 17 & 15 & & 24 & 23 \\ const & 2 & 2 & & 9 & 8 & & 17 & 15 & & 26 & 23 \\ \hline const & 3 & 0 & 44 & 17 & 15 & 121 & 34 & 34 & 253 & 111 & 210 \\ const & 3 & 1 & & 9 & 8 & & 19 & 18 & & 50 & 34 \\ const & 3 & 2 & & 9 & 8 & & 20 & 17 & & 40 & 36 \\ const & 3 & 3 & & & & & 17 & 14 & & 32 & 28 \\ \hline lin & 1 & 0 & 44 & 15 & 14 & 127 & 45 & 33 & 287 & 245 & 231 \\ lin & 1 & 1 & & 6 & 5 & & 15 & 14 & & 51 & 47 \\ lin & 1 & 2 & & 5 & 4 & & 12 & 9 & & 26 & 22 \\ lin & 1 & 3 & & & & & 9 & 9 & & 22 & 21 \\ \hline lin & 2 & 0 & 45 & 14 & 14 & 123 & 42 & 32 & 294 & & \\ lin & 2 & 1 & & 5 & 4 & & 14 & 11 & & 43 & 41 \\ lin & 2 & 2 & & 4 & 4 & & 9 & 8 & & 23 & 21 \\ lin & 2 & 3 & & & & & 9 & 8 & & 22 & 18 \\ \hline lin 
& 3 & 0 & 44 & 13 & 12 & 124 & 31 & 31 & 284 & & \\ lin & 3 & 1 & & 4 & 4 & & 11 & 12 & & 31 & 36 \\ lin & 3 & 2 & & 4 & 3 & & 8 & 7 & & 22 & 22 \\ lin & 3 & 3 & & & & & 4 & 3 & & 22 & 20 \\ \hline \end{tabular} \label{tab:opt_mg_perf} \end{table} The main conclusions that can be drawn from this table are as follows: (i) Source iteration is an excellent smoother, with V(1,1) not performing significantly worse than V(2,1). Considering the amount of work in smoothing, V(1,1) is favorable in all cases, and V(2,1) is not considered in the remainder of the paper. (ii) Multigrid performance depends strongly on the choice of scatter order, $N_r$, in the coarse grid problem. The iteration counts decrease with increasing $N_r$. The iteration counts are consistently much lower than when using the single grid solver, the more so for higher scatter anisotropy. Timings are not given in this table as the coarse grid problem has been deliberately solved accurately to investigate optimal multigrid performance, and this part dominates the solution time in this approach. It is concluded that, in order to build an effective acceleration technique for anisotropic scatter using multigrid, the coarse grid problem needs to be based on a scatter order that is sufficiently high. Using standard source iteration as the solver on the coarse angular mesh, however, requires many steps to reduce the error adequately, defeating the goal of a cheap solution on the coarse angular mesh. An alternative approach is required for the coarse level. \subsection{Alternative coarse grid solver} \label{subsec:alt_sweep} The results using the exact solution at the coarsest grid indicate clearly that the coarse grid operator needs to include sufficiently high scatter orders for the multigrid algorithm to be efficient. The use of isotropic or linear anisotropic scatter leads to poor convergence, ruling out DSA and its variations.
Standard source iteration is a (reasonably) efficient smoother but not a good solution method for anisotropic scattering media, even on the coarsest angular grid. Our previous work on the Fokker-Planck equation \cite{Lathouwers2019} has shown that for that case a good solver is obtained by performing a block Gauss-Seidel iteration over the angular elements, with 10 such sweeps being sufficient on the coarse level. Here, in the Legendre-scatter case, we follow the same route: instead of using source iteration, a Gauss-Seidel technique is used. Such a Gauss-Seidel technique is easily implemented by first expanding the source vector given by the usual Legendre summation \begin{equation} Q(\textbf{r},\mathbf{\Omega}) = \sum_{n,o} \sigma_{s,n} \Phi_{n,o} (\textbf{r}) Y_{n,o}(\mathbf{\Omega}) \end{equation} with the flux moments defined as \begin{equation} \Phi_{n,o} (\textbf{r}) = \int_{4 \pi} \phi(\textbf{r},\mathbf{\Omega}) Y_{n,o}(\mathbf{\Omega})d \mathbf{\Omega} \end{equation} Inserting and expanding the flux moments leads to the discrete source, $Q_{[j,q]lm}$ \begin{equation} Q_{[j,q]lm} = \sum_{p \in P_j,d,i} N_{[j]li} \sum_{n,o} \sigma_{s,n} X_{[p],d,n,o} X_{[q],m,n,o}\phi\indices{_{[j,p]}^{id}} \end{equation} where \begin{equation} X_{[q],m,n,o} = \int_{D_q} \Psi_{[q]m} (\mathbf{\Omega}) Y_{n,o}(\mathbf{\Omega}) d \mathbf{\Omega} \label{eq:X} \end{equation} These integrals are pre-calculated and stored. On a given spatial element $j$ and angular element $q$, a linear system can be solved, considering all flux unknowns outside $j$, $q$ as given. The order in which the spatial and angular elements are visited is equal to the order in the standard sweep. This iteration has been shown to be effective on the coarse grid. As in our previous work, 10 such sweeps are sufficient. This coarse grid solver is used throughout the remainder of this work.
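Ignoring the spatial factors $N_{[j]li}$ for clarity, the angular part of the discrete source on a single element is a moment transform built from the pre-calculated integrals of Eq.~(\ref{eq:X}). A schematic sketch, with hypothetical array names and shapes:

```python
import numpy as np

def angular_scatter_source(phi, X, sigma):
    """Angular part of the discrete scatter source on one element.

    Project the angular expansion weights phi onto spherical-harmonic
    moments (X.T @ phi), weight each moment (n,o) by its Legendre cross
    section sigma_{s,n}, and map back to the angular basis (X @ ...).
    X[m, (n,o)] holds the pre-calculated integrals of the angular basis
    functions against the spherical harmonics.
    """
    return X @ (sigma * (X.T @ phi))
```

The matrix `X` is computed once per angular element and reused in every sweep, which is what makes the Gauss-Seidel iteration cheap to apply.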
\subsection{Reduced multigrid scatter order} \label{subsec:red_scat_order} The algorithm discussed in Section~\ref{subsec:alt_sweep}, replacing the coarse mesh solver by the alternative sweep solver, is not optimal. In particular, the large number of scatter moments involved for large $N$ is detrimental to performance. In the present section we investigate the performance of the multigrid algorithm when operating with reduced scatter order in the multigrid preconditioning stage. We consider the same problem as before, i.e. the box geometry with isotropic uniform source with an angular mesh that is 3 times isotropically refined. The multigrid preconditioner uses a reduced maximum scatter order $N_r$. We use 10 sweeps on the coarsest grid. The stop criterion is $||r|| / ||b|| < 10^{-8}$. The results of varying $N_r$ for different angular basis functions, and different cycles (V(1,0) and V(1,1)), are shown in Figure~\ref{fig:Lr_effect}. \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./mg_const_iters_vs_mg_L.jpg} \caption{Iteration counts for constant angular basis functions.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./mg_lin_iters_vs_mg_L.jpg} \caption{Iteration counts for linear angular basis functions.} \end{subfigure} \\ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./mg_const_timing_vs_mg_L.jpg} \caption{Computational time for constant angular basis functions.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./mg_lin_timing_vs_mg_L.jpg} \caption{Computational time for linear angular basis functions.} \end{subfigure} \caption{Effectiveness of the multigrid algorithm for varying levels of the scatter order for the multigrid preconditioner using the V(1,0) and V(1,1) cycles for the 3D box geometry with uniform source and isotropic angular refinement. Iteration counts given in a and b.
Computational times are given in c and d. Constant angular basis functions used in a and c. Linear basis is used in b and d.} \label{fig:Lr_effect} \end{figure} The number of iterations required decreases with increasing $N_r$, and linear basis functions require fewer iterations than constant basis functions. The latter may be related to the interpolation operators in the constant basis function case not being accurate enough: mesh-independent multigrid convergence can only be obtained when the condition $m_P + m_R > 2m$ is met, where $m_P$ and $m_R$ are the orders of the polynomials that can be exactly prolongated and restricted, respectively, and $2m$ is the order of the differential equation. Although this could be improved by using more complex transfer operators, the main target of the paper is the higher-order basis functions; hence this approach has not been pursued. Concerning the computational time, there is a clear optimum choice of $N_r$. On the one hand, a small value of $N_r$ incurs little computational cost but is not very effective. On the other hand, large values of $N_r$ give more optimal multigrid performance but at great expense per iteration. The optimal choices adopted in the remainder of this paper are $N_r=2,4,6,7,8,9$ for $N=4,8,12,16,20,24$, respectively. The performance is not very sensitive to this choice as long as $N_r$ is not chosen on the low side, where the computational cost rises quickly with decreasing $N_r$. \subsection{Final results on cubic domain} \label{subsec:cubic_domain} To illustrate the efficiency of the final multigrid procedure, using the alternative sweep on the coarsest grid as solver and the reduced scatter order in the multigrid preconditioner, we compare the single mesh algorithm to the multigrid algorithm in terms of the number of Krylov iterations and the computational time used.
Figure~\ref{fig:cubic_final_results} illustrates the main results, where the parameters varied are the multigrid cycle used (V(1,0) and V(1,1)) and the scatter order $N$, which is varied between the relatively smooth case of $N=4$ and the very forward-peaked $N=24$. \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./iters_const.jpg} \caption{Iteration counts for constant angular basis functions.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./iters_lin.jpg} \caption{Iteration counts for linear angular basis functions.} \end{subfigure} \\ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./timing_const.jpg} \caption{Computational time for constant angular basis functions.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./timing_lin.jpg} \caption{Computational time for linear angular basis functions.} \end{subfigure} \caption{Number of Krylov iterations and computational time for the standard sweep and the multigrid preconditioner using the V(1,0) and V(1,1) cycles for the 3D box geometry with uniform source and isotropic angular refinement. Iteration counts given in a and b. Computational times are given in c and d. Constant angular basis functions used in a and c. Linear basis is used in b and d.} \label{fig:cubic_final_results} \end{figure} It is clear that the multigrid preconditioner greatly enhances performance compared to the single grid case, both in terms of the number of Krylov iterations required and, more importantly, in terms of computational effort. As seen before, the constant basis functions do not lead to the same performance enhancements as the linear basis. For both sets of basis functions, the multigrid advantage increases with the scatter order.
The reduction in computational time ranges from a factor of 3 for the constant basis case (at $N=24$) to a factor of 6.5 for the linear basis case (at $N=20$, as the single grid run at $N=24$ was considered too expensive). \subsection{Cubic domain with a boundary source} \label{subsec:bc_cubic_domain} Following Turcksin and Morel \cite{Turcksin2012} and our previous work on Fokker-Planck solution using multigrid \cite{Lathouwers2019}, we consider the same cubic domain as earlier but impose a boundary source composed of a pencil beam in the z-direction on part of the cube surface \begin{equation} \phi (\textbf{r},\mathbf{\Omega}) = \delta(\mathbf{\Omega} - \mathbf{\Omega}_z), \; z=0, \; 2 < x,y < 3 \end{equation} The Dirac function is used in the DG formalism (see \cite{Lathouwers2019}). To capture the forward-peaked radiative field due to the boundary condition used, we use a refined angular mesh concentrating elements on the z-pole of the sphere. For a given maximum level $l_{max}$ in the angular mesh, elements with $\Omega_z > 0.97$ are assigned this refinement level, elements with $0.9 < \Omega_z < 0.97$ are assigned level $l_{max} - 1$, elements with $0.8 < \Omega_z < 0.9$ are assigned level $l_{max} - 2$, elements with $0.6 < \Omega_z < 0.8$ are assigned level $l_{max} - 3$, and all remaining elements are assigned level $l_{max} - 4$. The angular meshes contain 20, 32, 44, 128, 440 and 1388 angular elements for $l_{max}$ between 1 and 6. Some of these meshes are shown in Figure~\ref{fig:non_iso_meshes}. The non-isotropic refinement is one of the advantages of the present method over discrete ordinates techniques for highly non-isotropic radiative fields.
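The level-assignment rule above translates directly into code; a sketch, assuming the element's $\Omega_z$ is evaluated at a representative point such as its centroid:

```python
def refinement_level(omega_z, l_max):
    """Angular refinement level assigned from the element's Omega_z value."""
    if omega_z > 0.97:
        return l_max
    if omega_z > 0.9:
        return l_max - 1
    if omega_z > 0.8:
        return l_max - 2
    if omega_z > 0.6:
        return l_max - 3
    return l_max - 4
```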
\begin{figure}[!ht] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Cube_bc_mesh2.jpg} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Cube_bc_mesh4.jpg} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=0.9\columnwidth]{./Cube_bc_mesh6.jpg} \end{subfigure} \caption{Non-isotropically refined angular meshes corresponding to $l_{max}$ = 2 (left), 4 (middle), 6 (right). The meshes are viewed from the positive $\Omega_z$-direction.} \label{fig:non_iso_meshes} \end{figure} The results of the multigrid preconditioned method versus the regular sweep preconditioner are shown in Figure~\ref{fig:bc_results}. \begin{figure}[!ht] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./BC_iters_const.jpg} \caption{Iteration counts for constant angular basis functions.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./BC_iters_lin.jpg} \caption{Iteration counts for linear angular basis functions.} \end{subfigure} \\ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./BC_timing_const.jpg} \caption{Computational time for constant angular basis functions.} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.9\columnwidth]{./BC_timing_lin.jpg} \caption{Computational time for linear angular basis functions.} \end{subfigure} \caption{Number of Krylov iterations and computational time for the standard sweep and the multigrid preconditioner using the V(1,0) and V(1,1) cycles for the 3D box geometry with boundary source and anisotropic angular refinement with varying depth $l_{max}$. Iteration counts given in a and b. Computational times are given in c and d. Constant angular basis functions used in a and c. 
Linear basis is used in b and d.} \label{fig:bc_results} \end{figure} As in the isotropically refined case, the multigrid-based solver becomes increasingly efficient w.r.t. the single grid case with higher scatter order $N$. Here, the savings are even greater, i.e. up to around a factor of 6 for the constant basis functions ($N=24$) and up to a factor of around 9 for the linear basis ($N=24$). Finally, the choice of multigrid cycling strategy has little influence on the efficiency. \section{Conclusions} \label{sec:Conclusions} The standard transport sweep is not an effective preconditioner for the transport equation in the presence of highly forward-peaked scatter. Diffusion synthetic acceleration is also known not to be effective in these cases. In this work we have adapted an angular multigrid preconditioner originally developed for the Fokker-Planck equation to the case of Legendre scatter modeling. Interpolation operators are defined naturally through the hierarchic nature of the discontinuous Galerkin angular discretization. Smoothing on each level is effected by the standard source iteration technique. A new sweep algorithm was developed that is better capable of solving the coarse mesh problem, even for high scatter orders. The multigrid preconditioner was used in several test cases. The first test case consists of a 3D geometry with a uniform isotropic source. Uniform angular refinement is used. It was found that the most effective strategy is to use a reduced scatter order for the multigrid preconditioner. The optimal value was found to increase slightly with the scatter order, $N$. Using this optimal value, a comparison was done with the standard sweep preconditioner in terms of the number of Krylov iterations required and the computational time to solve the problem. In all cases the multigrid preconditioner outperformed the sweep algorithm, especially for the linear angular basis.
With increasing scatter order, the difference between sweep and multigrid becomes greater. Another test case is a 3D geometry with a uni-directional boundary source. Here, the angular meshes are anisotropically refined to capture the anisotropic radiation field induced by the boundary condition. The multigrid preconditioner was again highly effective compared to the sweep preconditioner, with similar trends as found in the volumetric source case.
\section{Implementation Details} \label{appendix:implementation} \begin{table*}[t] \centering \caption{Hyperparameter specifications.} \begin{tabular}{ccccccccccc} \toprule Dataset & \(p_{e,1}\) & \(p_{e,2}\) & \(p_{f,1}\) & \(p_{f,2}\) & \(p_\tau\) & \(\tau\) & \makecell{Learning\\rate} & \makecell{Training\\epochs} & \makecell{Hidden\\dimension} & \makecell{Activation\\function} \\ \midrule Wiki-CS & 0.2 & 0.4 & 0.1 & 0.1 & 0.7 & 0.6 & 0.01 & 3,000 & 256 & PReLU \\ Amazon-Computers & 0.5 & 0.5 & 0.2 & 0.1 & 0.7 & 0.1 & 0.01 & 1,500 & 128 & PReLU \\ Amazon-Photo & 0.3 & 0.5 & 0.1 & 0.1 & 0.7 & 0.3 & 0.1 & 2,000 & 256 & ReLU \\ Coauthor-CS & 0.3 & 0.2 & 0.3 & 0.4 & 0.7 & 0.4 & 0.0005 & 1,000 & 256 & RReLU \\ Coauthor-Physics & 0.4 & 0.1 & 0.1 & 0.4 & 0.7 & 0.5 & 0.01 & 1,500 & 128 & RReLU \\ \bottomrule \end{tabular} \label{tab:hyperparameters} \end{table*} \subsection{Computing Infrastructures} \paragraph{Software infrastructures.} All models are implemented using PyTorch Geometric 1.6.1 \cite{Fey:2019wv}, PyTorch 1.6.0 \cite{Paszke:2019vf}, and NetworkX 2.5 \cite{Hagberg:2008tk}. All datasets used throughout the experiments are available in the PyTorch Geometric library. \paragraph{Hardware infrastructures.} We conduct experiments on a computer server with four NVIDIA Tesla V100S GPUs (with 32GB memory each) and twelve Intel Xeon Silver 4214 CPUs. \subsection{Hyperparameter Specifications} All model parameters are initialized with Glorot initialization \cite{Glorot:2010uc} and trained using the Adam SGD optimizer \cite{Kingma:2015us} on all datasets. The \(\ell_2\) weight decay factor is set to \(10^{-5}\) and the dropout rate \cite{Srivastava:2014cg} is set to zero on all datasets. The probability parameters controlling the sampling process, \(p_{e,1}, p_{f,1}\) for the first view and \(p_{e,2}, p_{f,2}\) for the second view, are all selected between 0.0 and 0.4, since the original graph will be overly corrupted when the probability is set too large.
Note that to generate different contexts for nodes in the two views, \(p_{e,1}\) and \(p_{e,2}\) should be distinct, and the same holds for \(p_{f,1}\) and \(p_{f,2}\). All dataset-specific hyperparameter configurations are summarized in Table \ref{tab:hyperparameters}. \section{Detailed Proofs} \label{appendix:proofs} \subsection{Proof of Theorem 1} \begin{theorem} \label{thm:objective-InfoMax} Let \(\bm{X}_i = \{ \bm{x}_k \}_{k \in \mathcal{N}(i)}\) be the neighborhood of node \(v_i\) that collectively maps to its output embedding, where \(\mathcal{N}(i)\) denotes the set of neighbors of node \(v_i\) specified by GNN architectures, and \(\bm{X}\) be the corresponding random variable with a uniform distribution \(p(\bm{X}_i) = \nicefrac{1}{N}\). Given two random variables \(\bm{U, V} \in \mathbb{R}^{F^\prime}\) being the embeddings in the two views, with their joint distribution denoted as \(p(\bm{U}, \bm{V})\), our objective \(\mathcal{J}\) is a lower bound of the MI between the encoder input \(\bm{X}\) and the node representations in the two graph views \(\bm{U, V}\). Formally, \begin{equation} \mathcal{J} \leq I(\bm{X}; \bm{U}, \bm{V}). \end{equation} \end{theorem} \begin{proof} We first show the connection between our objective \(\mathcal{J}\) and the InfoNCE objective \cite{vandenOord:2018ut,Poole:2019vk}, which is defined as \[I_\text{NCE}(\bm U; \bm V) \triangleq \mathbb{E}_{\prod_{i} p( {\bm u}_i, {\bm v}_i)} \left[ \frac{1}{N} \sum_{i=1}^N \log \frac{e^{\theta(\bm{u}_i, \bm{v}_i)}}{\frac{1}{N}\sum_{j = 1}^{N} e^{\theta(\bm{u}_i, \bm{v}_j)}} \right], \] where the critic function is defined as \(\theta (\bm{x}, \bm{y}) = s(g(\bm{x}), g(\bm{y}))\). We further define \(\rho_r({\bm{u}}_i) = \sum_{j \neq i}^N \exp(\theta({\bm u}_i, {\bm u}_j) / \tau)\) and \(\rho_c({\bm u}_i) = \sum_{j=1}^N \exp(\theta ({\bm u}_i, {\bm v}_j) / \tau)\) for convenience of notation. \(\rho_r({\bm v}_i)\) and \(\rho_c({\bm v}_i)\) can be defined symmetrically.
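For concreteness, the per-node pairwise objective \(\ell(\bm u_i, \bm v_i)\) built from these quantities can be sketched in NumPy; the projection \(g\) is taken as the identity and the critic as a plain inner product (the simplifications also used in Theorem 2), so the array names and the choice of similarity are illustrative:

```python
import numpy as np

def pairwise_losses(U, V, tau=0.5):
    """Per-node contrastive losses l(u_i, v_i).

    The positive pair sits in the numerator; the denominator collects both
    inter-view pairs (u_i, v_k) and intra-view pairs (u_i, u_k) with k != i,
    i.e. rho_c(u_i) + rho_r(u_i) in the notation of the proof.
    """
    uv = U @ V.T / tau              # inter-view similarities
    uu = U @ U.T / tau              # intra-view similarities
    pos = np.diag(uv)
    exp_uu = np.exp(uu)
    np.fill_diagonal(exp_uu, 0.0)   # exclude k = i from the intra-view sum
    denom = np.exp(uv).sum(axis=1) + exp_uu.sum(axis=1)
    return -(pos - np.log(denom))
```

As expected, the loss is smaller when positive pairs are well aligned than when they are not.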
Then, our objective \(\mathcal{J}\) can be rewritten as \begin{equation} \mathcal{J} = \mathbb E_{\prod_{i} p( {\bm u}_i, {\bm v}_i)} \left[ \frac{1}{N} \sum_{i=1}^N \log \frac {\exp(\theta( {\bm u}_i, {\bm v}_i) / \tau)} {\sqrt{\left( \rho_c( {\bm u}_i) + \rho_r({\bm u}_i) \right) \left( \rho_c( {\bm v}_i) + \rho_r({\bm v}_i) \right)}} \right]. \end{equation} Using the notation of \(\rho_c\), the InfoNCE estimator \(I_\text{NCE}\) can be written as \begin{equation} I_\text{NCE}( {\bm U}, {\bm V}) = \mathbb E_{\prod_{i} p( {\bm u}_i, {\bm v}_i)} \left[ \frac{1}{N} \sum_{i=1}^N \log \frac {\exp(\theta( {\bm u}_i, {\bm v}_i) / \tau)} {\rho_{c}( {\bm u}_i)} \right]. \end{equation} Therefore, \begin{equation} \begin{aligned} 2\mathcal{J} & = I_\text{NCE}(\bm U, \bm V) - \mathbb E_{\prod_i p( {\bm u}_i, {\bm v}_i)} \left[ \frac{1}{N} \sum_{i=1}^N \log \left( 1 + \frac {\rho_r( {\bm u}_i)} {\rho_c( {\bm u}_i)} \right) \right] \\ & \quad + I_\text{NCE}(\bm V, \bm U) - \mathbb E_{\prod_i p( {\bm u}_i, {\bm v}_i)} \left[ \frac{1}{N} \sum_{i=1}^N \log \left( 1 + \frac {\rho_r( {\bm v}_i)} {\rho_c( {\bm v}_i)} \right) \right] \\ & \leq I_\text{NCE}(\bm U, \bm V) + I_\text{NCE}(\bm V, \bm U). \\ \end{aligned} \end{equation} According to \citet{Poole:2019vk}, the InfoNCE estimator is a lower bound of the true MI, i.e. \begin{equation} I_\text{NCE}(\bm{U}, \bm{V}) \le I(\bm{U}; \bm{V}). \end{equation} Thus, we arrive at \begin{equation} 2\mathcal{J} \leq I(\bm U; \bm V) + I(\bm V; \bm U) = 2 I(\bm U; \bm V), \end{equation} which leads to the inequality \begin{equation} \label{eq:objective-uv} \mathcal J \le I( {\bm U}; {\bm V}). \end{equation} We now apply the data processing inequality \cite{Cover:2006ei}, which states that, for all random variables \(\bm{X}, \bm{Y}, \bm{Z}\) satisfying the Markov relation \(\bm{X} \rightarrow \bm{Y} \rightarrow \bm{Z}\), the inequality \(I(\bm{X}; \bm{Z}) \leq I(\bm{X}; \bm{Y})\) holds.
Then, we observe that \(\bm{X}, \bm{U}, \bm{V}\) satisfy the relation \(\bm{U} \leftarrow \bm{X} \rightarrow \bm{V}\). Since \(\bm{U}\) and \(\bm{V}\) are conditionally independent after observing \(\bm{X}\), the relation is Markov equivalent to \(\bm{U} \rightarrow \bm{X} \rightarrow \bm{V}\), which leads to \(I(\bm{U}; \bm{V}) \leq I(\bm{U}; \bm{X})\). We further notice that the relation \(\bm{X} \rightarrow (\bm{U}, \bm{V}) \rightarrow \bm{U}\) holds, and hence it follows that \(I(\bm{X}; \bm{U}) \leq I(\bm{X}; \bm{U}, \bm{V})\). Combining the two inequalities yields the required inequality \begin{equation} \label{eq:data-processing} I(\bm U; \bm V) \leq I(\bm X; \bm U, \bm V). \end{equation} Following Eq. (\ref{eq:objective-uv}) and Eq. (\ref{eq:data-processing}), we finally arrive at the inequality \begin{equation} \mathcal{J} \leq I(\bm X; \bm U, \bm V), \end{equation} which concludes the proof. \end{proof} \subsection{Proof of Theorem 2} \begin{theorem} \label{thm:objective-triplet-loss} When the projection function \(g\) is the identity function and we measure embedding similarity by simply taking the inner product, and further assuming that positive pairs are far more aligned than negative pairs, i.e. \(\bm{u}_i^\top \bm{v}_k \ll \bm{u}_i^\top \bm{v}_i\) and \(\bm{u}_i^\top \bm{u}_k \ll \bm{u}_i^\top \bm{v}_i\), minimizing the pairwise objective \(\ell(\bm u_i, \bm v_i)\) coincides with maximizing the triplet loss, as given in the sequel \begin{equation} \begin{split} - \ell (\bm u_i, \bm v_i) \propto 4N \tau + \sum_{j \neq i}\Bigg[ &\left(\| {\bm u_i} - {\bm v_i} \|^2 - \| {\bm u_i} - {\bm v_j} \|^2\right)\\ &+ \left(\| {\bm u_i} - {\bm v_i} \|^2 - \| {\bm u_i} - {\bm u_j} \|^2\right) \Bigg].
\end{split} \end{equation} \end{theorem} \begin{proof} Based on the assumptions, we can rearrange the pairwise objective as \begin{equation} \begin{aligned} - \ell(\bm{u}_i, \bm{v}_i) & = - \log \frac {e^{\left( \bm{u}_i^\top \bm{v}_{i} / \tau\right)}} {\sum_{k=1}^{N} e^{\left( \bm{u}_i^\top \bm{v}_k / \tau\right)} + \sum_{k \neq i}^{N} e^{\left( \bm{u}_i^\top \bm{u}_k / \tau\right)}} \\ & = \log \left( 1 + \sum_{k \neq i}^{N} e^{\left( \frac { {\bm u}_i^\top {\bm v}_k - {\bm u}_i^\top {\bm v}_i} {\tau} \right)} + \sum_{k \neq i}^{N} e^{\left( \frac { {\bm u}_i^\top {\bm u}_k - {\bm u}_i^\top {\bm v}_i} {\tau} \right)} \right). \end{aligned} \end{equation} By a first-order Taylor expansion, \begin{equation} \begin{aligned} & \quad - \ell( {\bm u}_i, {\bm v}_i) \\ & \approx \sum_{k \neq i}^{N} \exp\left( \frac { {\bm u}_i^\top {\bm v}_k - {\bm u}_i^\top {\bm v}_i} {\tau} \right) + \sum_{k \neq i}^{N} \exp\left( \frac { {\bm u}_i^\top {\bm u}_k - {\bm u}_i^\top {\bm v}_i} {\tau} \right) \\ & \approx 2 + \frac 1 \tau \left[ \sum_{k \neq i}^{N} ( {\bm u}_i^\top {\bm v}_k - {\bm u}_i^\top {\bm v}_i) + \sum_{k \neq i}^{N} ( {\bm u}_i^\top {\bm u}_k - {\bm u}_i^\top {\bm v}_i) \right] \\ & = 2 - \frac 1 {2 \tau}\sum_{k \neq i}^{N} \left( \| {\bm u_i} - {\bm v_k} \|^2 - \| {\bm u_i} - {\bm v_i} \|^2 + \| {\bm u_i} - {\bm u_k} \|^2 - \| {\bm u_i} - {\bm v_i} \|^2 \right) \\ & \propto 4N\tau + \sum_{k \neq i}^{N} \left(\| {\bm u_i} - {\bm v_i} \|^2 - \| {\bm u_i} - {\bm v_k} \|^2 + \| {\bm u_i} - {\bm v_i} \|^2 - \| {\bm u_i} - {\bm u_k} \|^2\right) , \end{aligned} \end{equation} which concludes the proof. \end{proof} \section{Conclusion} \label{sec:conclusion} In this paper, we have developed a novel graph contrastive representation learning framework with adaptive augmentation. Our model learns representations by maximizing the agreement of node embeddings between views that are generated by adaptive graph augmentation.
The proposed adaptive augmentation scheme first identifies important edges and feature dimensions via network centrality measures. Then, on the topology level, we randomly remove edges by assigning large removal probabilities to unimportant edges, forcing the model to recognize network connectivity patterns. On the node attribute level, we corrupt attributes by adding more noise to unimportant feature dimensions to emphasize the underlying semantic information. We have conducted comprehensive experiments using various real-world datasets. Experimental results demonstrate that our proposed GCA\xspace method consistently outperforms existing state-of-the-art methods and even surpasses several supervised counterparts. \section{Experiments} \label{sec:experiments} In this section, we conduct experiments to evaluate our model by answering the following questions. \begin{itemize} \item \textbf{RQ1}. Does our proposed GCA\xspace outperform existing baseline methods on node classification? \item \textbf{RQ2}. Do all proposed adaptive graph augmentation schemes benefit the learning of the proposed model? How does each graph augmentation scheme affect model performance? \item \textbf{RQ3}. Is the proposed model sensitive to hyperparameters? How do key hyperparameters impact the model performance? \end{itemize} We begin with a brief introduction of the experimental setup, and then we proceed to details of experimental results and their analysis. \subsection{Experimental Setup} \subsubsection{Datasets} For comprehensive comparison, we use five widely used datasets, namely Wiki-CS, Amazon-Computers, Amazon-Photo, Coauthor-CS, and Coauthor-Physics, to study the performance of transductive node classification. The datasets are collected from real-world networks from different domains; their detailed statistics are summarized in Table \ref{tab:dataset-statistics}. \begin{itemize} \item \textbf{Wiki-CS} \cite{Mernyei:2020wh} is a reference network constructed based on Wikipedia.
The nodes correspond to articles about computer science and edges are hyperlinks between the articles. Nodes are labeled with ten classes, each representing a branch of the field. Node features are calculated as the average of pretrained GloVe \cite{Pennington:2014kh} word embeddings of the words in each article. \item \textbf{Amazon-Computers} and \textbf{Amazon-Photo} \cite{Shchur:2018vv} are two networks of co-purchase relationships constructed from Amazon, where nodes are goods and two goods are connected when they are frequently bought together. Each node has a sparse bag-of-words feature encoding product reviews and is labeled with its category. \item \textbf{Coauthor-CS} and \textbf{Coauthor-Physics} \cite{Shchur:2018vv} are two academic networks, which contain co-authorship graphs based on the Microsoft Academic Graph from the KDD Cup 2016 challenge. In these graphs, nodes represent authors and edges indicate co-authorship relationships; that is, two nodes are connected if they have co-authored a paper. Each node has a sparse bag-of-words feature based on the paper keywords of the author. The label of an author corresponds to their most active research field. \end{itemize} Among these datasets, Wiki-CS has dense numerical features, while the other four datasets only contain sparse one-hot features. For the Wiki-CS dataset, we evaluate the models on the public splits shipped with the dataset \cite{Mernyei:2020wh}. Regarding the other four datasets, since they have no public splits available, we instead randomly split the datasets, where 10\%, 10\%, and the remaining 80\% of nodes are selected for the training, validation, and test set, respectively.
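The random 10/10/80 split described above can be sketched as follows (a minimal illustration with hypothetical function names; the released implementation may handle seeding differently):

```python
import numpy as np

def random_split(num_nodes, train_ratio=0.1, val_ratio=0.1, seed=0):
    """Randomly partition node indices into train/validation/test sets."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(num_nodes * train_ratio)
    n_val = int(num_nodes * val_ratio)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]  # the remaining 80% of nodes
    return train_idx, val_idx, test_idx

# e.g., Amazon-Photo has 7,650 nodes
train_idx, val_idx, test_idx = random_split(7650)
```

Each run of the evaluation uses a different seed, producing a different partition of the same node set.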
\begin{table}[t] \begin{threeparttable} \small \centering \caption{Statistics of datasets used in experiments.} \label{tab:dataset-statistics} \begin{tabular}{ccccc} \toprule Dataset & \#Nodes & \#Edges & \#Features & \#Classes \\ \midrule Wiki-CS\tnotex{fn:wikics} & 11,701 & 216,123 & 300 & 10 \\ Amazon-Computers\tnotex{fn:amazon-computers} & 13,752 & 245,861 & 767 & 10 \\ Amazon-Photo\tnotex{fn:amazon-photo} & 7,650 & 119,081 & 745 & 8 \\ Coauthor-CS\tnotex{fn:coauthor-cs} & 18,333 & 81,894 & 6,805 & 15 \\ Coauthor-Physics\tnotex{fn:coauthor-phy} & 34,493 & 247,962 & 8,415 & 5 \\ \bottomrule \end{tabular} \begin{tablenotes}[flushleft] \scriptsize{ \item[1] \label{fn:wikics} \url{https://github.com/pmernyei/wiki-cs-dataset/raw/master/dataset} \item[2] \label{fn:amazon-computers} \url{https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_computers.npz} \item[3] \label{fn:amazon-photo} \url{https://github.com/shchur/gnn-benchmark/raw/master/data/npz/amazon_electronics_photo.npz} \item[4] \label{fn:coauthor-cs} \url{https://github.com/shchur/gnn-benchmark/raw/master/data/npz/ms_academic_cs.npz} \item[5] \label{fn:coauthor-phy} \url{https://github.com/shchur/gnn-benchmark/raw/master/data/npz/ms_academic_phy.npz} } \end{tablenotes} \end{threeparttable} \addtocounter{footnote}{+5} \end{table} \subsubsection{Evaluation protocol.} For every experiment, we follow the linear evaluation scheme introduced in \citet{Velickovic:2019tu}, where each model is first trained in an unsupervised manner; then, the resulting embeddings are used to train and test a simple \(\ell_2\)-regularized logistic regression classifier. We train the model for twenty runs over different data splits and report the averaged performance on each dataset for fair evaluation. Moreover, we measure performance in terms of accuracy in these experiments.
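To make the linear evaluation protocol concrete, the following sketch fits an \(\ell_2\)-regularized logistic regression classifier on frozen embeddings (the scikit-learn-based setup and all names here are illustrative assumptions, not our exact released code):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_evaluate(emb, labels, train_idx, test_idx, weight_decay=1.0):
    """Fit an l2-regularized logistic regression on frozen embeddings
    and report test accuracy; the encoder itself is never updated."""
    # scikit-learn's C is the inverse of the regularization strength
    clf = LogisticRegression(C=1.0 / weight_decay, max_iter=1000)
    clf.fit(emb[train_idx], labels[train_idx])
    return clf.score(emb[test_idx], labels[test_idx])
```

The embeddings are treated as fixed inputs, so only the linear classifier's weights are optimized during evaluation.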
\subsubsection{Baselines.} We consider representative baseline methods belonging to the following two categories: (1) traditional methods, including DeepWalk \cite{Perozzi:2014ib} and node2vec \cite{Grover:2016ex}, and (2) deep learning methods, including Graph Autoencoders (GAE, VGAE) \cite{Kipf:2016ul}, Deep Graph Infomax (DGI) \cite{Velickovic:2019tu}, Graphical Mutual Information Maximization (GMI) \cite{Peng:2020gw}, and Multi-View Graph Representation Learning (MVGRL) \cite{Hassani:2020un}. Furthermore, we report the performance obtained using a logistic regression classifier on raw node features and that of DeepWalk with embeddings concatenated with input node features. To directly compare our proposed method with supervised counterparts, we also report the performance of two representative models, Graph Convolutional Networks (GCN) \cite{Kipf:2016tc} and Graph Attention Networks (GAT) \cite{Velickovic:2018we}, which are trained in an end-to-end fashion. For all baselines, we report their performance based on their official implementations. \subsubsection{Implementation details.} We employ a two-layer GCN \cite{Kipf:2016tc} as the encoder for all deep learning baselines due to its simplicity. The encoder architecture is formally given by \begin{align} \GC_i (\bm{X}, \bm{A}) & = \sigma \left( \hat{\bm{D}}^{-\frac{1}{2}} \hat {\bm{A}} \hat{\bm{D}}^{-\frac{1}{2}} \bm{X} \bm{W}_i \right), \\ f(\bm X, \bm A) & = \GC_2 ( \GC_1 ( \bm{X}, \bm{A} ), \bm{A} ), \end{align} where \(\hat{\bm{A}} = \bm{A} + \bm{I}\) is the adjacency matrix with self-loops, \(\hat{\bm{D}}\) is the diagonal degree matrix with \(\hat{D}_{ii} = \sum_j \hat{A}_{ij}\), \(\sigma(\cdot)\) is a nonlinear activation function, e.g., \(\operatorname{ReLU}(\cdot) = \max(0, \cdot)\), and \(\bm{W}_i\) is a trainable weight matrix. For experimental specifications, including details of the configurations of the optimizer and hyperparameter settings, we refer interested readers to Appendix \ref{appendix:implementation}.
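For concreteness, one forward pass of the two-layer GCN encoder above can be sketched with dense NumPy operations (a didactic version; a practical implementation would use sparse matrices and learned, randomly initialized weights):

```python
import numpy as np

def gcn_layer(X, A, W):
    """One GCN layer: ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
    A is expected to be a dense float adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(d ** -0.5)
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ X @ W, 0)      # ReLU activation

def encoder(X, A, W1, W2):
    """Two-layer GCN encoder f(X, A) = GC2(GC1(X, A), A)."""
    return gcn_layer(gcn_layer(X, A, W1), A, W2)
```

Both layers share the same normalized adjacency; only the weight matrices \(W_1\) and \(W_2\) differ.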
\begin{table*}[t] \centering \caption{Summary of performance on node classification in terms of accuracy in percentage with standard deviation. Available data for each method during the training phase is shown in the second column, where \(\bm{X}, \bm{A}, \bm{Y}\) correspond to node features, the adjacency matrix, and labels respectively. The highest performance of unsupervised models is highlighted in boldface; the highest performance of supervised models is underlined. OOM indicates Out-Of-Memory on a 32GB GPU.} \label{tab:node-classification} \begin{tabular}{ccccccc} \toprule Method & Training Data & Wiki-CS & Amazon-Computers & Amazon-Photo & Coauthor-CS & Coauthor-Physics \\ \midrule Raw features & \(\bm{X}\) & 71.98 ± 0.00 & 73.81 ± 0.00 & 78.53 ± 0.00 & 90.37 ± 0.00 & 93.58 ± 0.00 \\ node2vec & \(\bm{A}\) & 71.79 ± 0.05 & 84.39 ± 0.08 & 89.67 ± 0.12 & 85.08 ± 0.03 & 91.19 ± 0.04 \\ DeepWalk & \(\bm{A}\) & 74.35 ± 0.06 & 85.68 ± 0.06 & 89.44 ± 0.11 & 84.61 ± 0.22 & 91.77 ± 0.15 \\ DeepWalk + features & \(\bm{X}, \bm{A}\) & 77.21 ± 0.03 & 86.28 ± 0.07 & 90.05 ± 0.08 & 87.70 ± 0.04 & 94.90 ± 0.09 \\ \midrule GAE & \(\bm{X}, \bm{A}\) & 70.15 ± 0.01 & 85.27 ± 0.19 & 91.62 ± 0.13 & 90.01 ± 0.71 & 94.92 ± 0.07 \\ VGAE & \(\bm{X}, \bm{A}\) & 75.63 ± 0.19 & 86.37 ± 0.21 & 92.20 ± 0.11 & 92.11 ± 0.09 & 94.52 ± 0.00 \\ DGI & \(\bm{X}, \bm{A}\) & 75.35 ± 0.14 & 83.95 ± 0.47 & 91.61 ± 0.22 & 92.15 ± 0.63 & 94.51 ± 0.52 \\ GMI & \(\bm{X}, \bm{A}\) & 74.85 ± 0.08 & 82.21 ± 0.31 & 90.68 ± 0.17 & OOM & OOM \\ MVGRL & \(\bm{X}, \bm{A}\) & 77.52 ± 0.08 & 87.52 ± 0.11 & 91.74 ± 0.07 & 92.11 ± 0.12 & 95.33 ± 0.03 \\ \rowcolor{lightgray!50} \textbf{GCA\xspace-DE} & \(\bm{X}, \bm{A}\) & 78.30 ± 0.00 & \textbf{87.85 ± 0.31} & 92.49 ± 0.09 & \textbf{93.10 ± 0.01} & 95.68 ± 0.05 \\ \rowcolor{lightgray!50} \textbf{GCA\xspace-PR} & \(\bm{X}, \bm{A}\) & \textbf{78.35 ± 0.05} & 87.80 ± 0.23 & \textbf{92.53 ± 0.16} & 93.06 ± 0.03 & 95.72 ± 0.03 \\ \rowcolor{lightgray!50} 
\textbf{GCA\xspace-EV} & \(\bm{X}, \bm{A}\) & 78.23 ± 0.04 & 87.54 ± 0.49 & 92.24 ± 0.21 & 92.95 ± 0.13 & \textbf{95.73 ± 0.03} \\ \specialrule{0.5pt}{0.5pt}{1pt} \midrule GCN & \(\bm{X}, \bm{A}, \bm{Y}\) & 77.19 ± 0.12 & 86.51 ± 0.54 & 92.42 ± 0.22 & \underline{93.03 ± 0.31} & \underline{95.65 ± 0.16} \\ GAT & \(\bm{X}, \bm{A}, \bm{Y}\) & \underline{77.65 ± 0.11} & \underline{86.93 ± 0.29} & \underline{92.56 ± 0.35} & 92.31 ± 0.24 & 95.47 ± 0.15 \\ \bottomrule \end{tabular} \end{table*} \begin{table*}[t] \centering \caption{Performance of model variants on node classification in terms of accuracy in percentage with standard deviation. We use the degree centrality in all variants. The highest performance is highlighted in boldface.} \label{tab:ablation-study} \begin{tabular}{cccccccc} \toprule Variant & Topology & Attribute & Wiki-CS & Amazon-Computers & Amazon-Photo & Coauthor-CS & Coauthor-Physics \\ \midrule GCA\xspace--T--A & Uniform & Uniform & 78.19 ± 0.01 & 86.25 ± 0.25 & 92.15 ± 0.24 & 92.93 ± 0.01 & 95.26 ± 0.02 \\ GCA\xspace--T & Uniform & Adaptive & 78.23 ± 0.02 & 86.72 ± 0.49 & 92.20 ± 0.26 & 93.07 ± 0.01 & 95.59 ± 0.04 \\ GCA\xspace--A & Adaptive & Uniform & 78.25 ± 0.02 & 87.66 ± 0.30 & 92.23 ± 0.20 & 93.02 ± 0.01 & 95.54 ± 0.02 \\ \rowcolor{lightgray!50} \textbf{GCA\xspace} & Adaptive & Adaptive & \textbf{78.30 ± 0.01} & \textbf{87.85 ± 0.31} & \textbf{92.49 ± 0.09} & \textbf{93.10 ± 0.01} & \textbf{95.68 ± 0.05} \\ \bottomrule \end{tabular} \end{table*} \subsection{Performance on Node Classification (RQ1)} The empirical performance is summarized in Table \ref{tab:node-classification}. Overall, from the table, we can see that our proposed model shows strong performance across all five datasets. GCA\xspace consistently performs better than unsupervised baselines by considerable margins on all transductive node classification tasks. The strong performance verifies the superiority of the proposed contrastive learning framework.
On the two Coauthor datasets, we note that existing baselines have already obtained high enough performance; our method GCA\xspace still pushes that boundary forward. Moreover, we particularly note that GCA\xspace is competitive with models \emph{trained with label supervision} on all five datasets. We make other observations as follows. Firstly, on some datasets (Coauthor-CS and Coauthor-Physics), the performance of traditional contrastive learning methods like DeepWalk is inferior to that of the simple logistic regression classifier that only uses raw features, which suggests that these methods may be ineffective in utilizing node features. Unlike traditional work, we see that GCN-based methods, e.g., GAE, are capable of incorporating node features when learning embeddings. However, we note that on certain datasets (Wiki-CS), their performance is still worse than DeepWalk + features, which we believe can be attributed to their na\"ive method of selecting negative samples that simply chooses contrastive pairs based on edges. This fact further demonstrates the important role of selecting negative samples based on augmented graph views in contrastive representation learning. Moreover, compared to existing baselines DGI, GMI, and MVGRL, our proposed method performs strong, adaptive data augmentation in constructing augmented views, leading to better performance. Note that, although MVGRL employs diffusion to incorporate global information into augmented views, it still fails to adaptively consider the impact of different edges in the input graph. The superior performance of GCA\xspace verifies that our proposed adaptive data augmentation scheme is able to help improve embedding quality by preserving important patterns during perturbation. Secondly, we observe that all three variants of GCA\xspace with different node centrality measures outperform existing contrastive baselines on all datasets.
We also notice that GCA\xspace-DE and GCA\xspace-PR, with the degree and PageRank centrality respectively, are two strong variants that achieve the best or competitive performance on all datasets. This result indicates that our model is not limited to specific choices of centrality measures and verifies the effectiveness and generality of our proposed framework. In summary, the superior performance of GCA\xspace compared to existing state-of-the-art methods verifies the effectiveness of our proposed GCA\xspace framework that performs data augmentation adaptive to the graph structure and attributes. \subsection{Ablation Studies (RQ2)} In this section, we substitute the proposed topology- and attribute-level augmentation with their uniform counterparts to study the impact of each component of GCA\xspace. GCA\xspace--T--A denotes the model with uniform topology and node attribute augmentation schemes, where the probabilities of dropping edges and masking features are set to be the same for all edges and feature dimensions. The variants GCA\xspace--T and GCA\xspace--A are defined similarly, except that we substitute the topology and the node attribute augmentation scheme with uniform sampling in the two models respectively. Degree centrality is used in all the variants for fair comparison. Note that the downgraded variant GCA\xspace--T--A falls back to our preliminary work GRACE \cite{Zhu:2020vf}. The results are presented in Table \ref{tab:ablation-study}, where we can see that both the topology-level and node-attribute-level adaptive augmentation schemes improve model performance consistently on all datasets. In addition, combining the adaptive augmentation schemes on the two levels further benefits the performance. On the Amazon-Computers dataset, our proposed GCA\xspace gains a 1.5\% absolute improvement compared to the base model with no adaptive augmentation enabled.
The results verify the effectiveness of our adaptive augmentation schemes on both the topology and node attribute levels. \subsection{Sensitivity Analysis (RQ3)} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{figures/sensitivity.pdf} \caption{The performance of GCA\xspace with varying hyperparameters on the Amazon-Photo dataset in terms of node classification accuracy.} \label{fig:sensitivity-analysis} \end{figure} In this section, we perform sensitivity analysis on critical hyperparameters of GCA\xspace, namely the four probabilities \(p_{e,1},p_{f,1},p_{e,2}\), and \(p_{f,2}\) that determine the generation of graph views, to show the stability of the model under perturbation of these hyperparameters. We conduct transductive node classification by varying these parameters from 0.1 to 0.9. For the sake of brevity, we set \(p_e = p_{e,1} = p_{e,2}\) and \(p_f = p_{f,1} = p_{f,2}\) to control the magnitude of the proposed topology- and node-attribute-level augmentation. We only change these four parameters in the sensitivity analysis; all other parameters remain the same as previously described. The results on the Amazon-Photo dataset are shown in Figure \ref{fig:sensitivity-analysis}. From the figure, it can be observed that node classification accuracy is relatively stable when the parameters are not too large, as shown by the plateau in the figure. We thus conclude that, overall, our model is insensitive to these probabilities, demonstrating its robustness to hyperparameter perturbation. If the probability is set too large (e.g., \(> 0.5\)), the original graph will be heavily undermined. For example, when \(p_e = 0.9\), almost all existing edges are removed, leading to isolated nodes in the generated graph views. Under such circumstances, it is hard for the GNN to learn useful information from node neighborhoods.
Therefore, the learned node embeddings in the two graph views are not distinctive enough, which makes the contrastive objective difficult to optimize. \section{Introduction} \begin{figure*} \centering \includegraphics[width=\linewidth]{figures/model.pdf} \caption{Our proposed deep Graph Contrastive representation learning with Adaptive augmentation (GCA\xspace) model. We first generate two graph views via stochastic augmentation that is adaptive to the graph structure and attributes. Then, the two graphs are fed into a shared Graph Neural Network (GNN) to learn representations. We train the model with a contrastive objective, which pulls together the representations of the same node in the two views while pushing them away from the representations of all other nodes. N.B., we define the negative samples as all other nodes in the two views. Therefore, negative samples come from two sources, intra-view (in purple) and inter-view nodes (in red).} \label{fig:model} \end{figure*} Over the past few years, graph representation learning has emerged as a powerful strategy for analyzing graph-structured data. Graph representation learning using Graph Neural Networks (GNNs) has received considerable attention; it aims to transform nodes into low-dimensional dense embeddings that preserve graph attributive and structural features. However, existing GNN models are mostly established in a supervised manner \cite{Kipf:2016tc,Velickovic:2018we,Hu:2019vq}, which requires abundant labeled nodes for training. Recently, Contrastive Learning (CL), as a revitalization of the classical Information Maximization (InfoMax) principle \cite{Linsker:1988ho}, has achieved great success in many fields, e.g., visual representation learning \cite{Tian:2019vw,He:2020tu,Bachman:2019wp} and natural language processing \cite{Weston:2008kg,Kavukcuoglu:2013to}. These CL methods seek to maximize the Mutual Information (MI) between the input (e.g., images) and its representations (i.e.,
image embeddings) by contrasting positive pairs with negative-sampled counterparts. Inspired by previous CL methods, Deep Graph Infomax (DGI) \cite{Velickovic:2019tu} marries the power of GNNs with InfoMax-based methods. DGI first augments the original graph by simply shuffling node features. Then, a contrastive objective is proposed to maximize the MI between node embeddings and a global summary embedding. Following DGI, GMI \cite{Peng:2020gw} proposes two contrastive objectives to directly measure the MI between the input and the representations of nodes and edges respectively, without explicit data augmentation. Moreover, to supplement the input graph with more global information, MVGRL \cite{Hassani:2020un} proposes to augment the input graph via graph diffusion kernels \cite{Klicpera:2019vc}. Then, it constructs graph views by uniformly sampling subgraphs and learns to contrast node representations with global embeddings across the two views. Despite the prosperous development of graph CL methods, data augmentation schemes, which have proved to be a critical component for visual representation learning \cite{Wu:2020tj}, remain rarely explored in the existing literature. Unlike the abundant data transformation techniques available for images and texts, graph augmentation schemes are non-trivial to define in CL methods, since graphs are far more complex due to their non-Euclidean nature. We argue that the augmentation schemes used in the aforementioned methods suffer from two drawbacks. Firstly, simple data augmentation in \emph{either} the structural domain \emph{or} the attribute domain, such as feature shuffling in DGI \cite{Velickovic:2019tu}, is not sufficient for generating diverse neighborhoods (i.e., contexts) for nodes, especially when node features are sparse, leading to difficulty in optimizing the contrastive objective. Secondly, previous work ignores the discrepancy in the impact of nodes and edges when performing data augmentation.
For example, if we construct graph views by \emph{uniformly} dropping edges, removing some influential edges will deteriorate the embedding quality. As the representations learned by the contrastive objective tend to be \emph{invariant} to corruption induced by the data augmentation scheme \cite{Xiao:2020vt}, the data augmentation strategies should be \emph{adaptive} to the input graph to reflect its intrinsic patterns. Again, taking the edge-removing scheme as an example, we can assign larger probabilities to unimportant edges and lower probabilities to important ones when randomly removing edges. This scheme is then able to guide the model to ignore the noise introduced on unimportant edges and thus learn important patterns underneath the input graph. To this end, we propose a novel contrastive framework for unsupervised graph representation learning, as shown in Figure \ref{fig:model}, which we refer to as \underline{G}raph \underline{C}ontrastive learning with \underline{A}daptive augmentation, GCA\xspace for brevity. In GCA\xspace, we first generate two correlated graph views by performing stochastic corruption on the input. Then, we train the model using a contrastive loss to maximize the agreement between node embeddings in these two views. Specifically, we propose a joint, adaptive data augmentation scheme at both the topology and node attribute levels, namely removing edges and masking features, to provide diverse contexts for nodes in different views, so as to boost the optimization of the contrastive objective. Moreover, we identify important edges and feature dimensions via centrality measures. Then, on the topology level, we adaptively drop edges by assigning large removal probabilities to unimportant edges to highlight important connective structures. On the node attribute level, we corrupt attributes by adding more noise to unimportant feature dimensions, to force the model to recognize underlying semantic information.
The core contribution of this paper is two-fold: \begin{itemize} \item Firstly, we propose a general contrastive framework for unsupervised graph representation learning with strong, adaptive data augmentation. The proposed GCA\xspace framework jointly performs data augmentation on both the topology and attribute levels, adaptive to the graph structure and attributes, which encourages the model to learn important features from both aspects. \item Secondly, we conduct comprehensive empirical studies using five public benchmark datasets on node classification under the commonly-used linear evaluation protocol. GCA\xspace consistently outperforms existing methods, and our unsupervised method even surpasses its supervised counterparts on several transductive tasks. \end{itemize} To make the results of this work reproducible, we make all the code publicly available at \url{https://github.com/CRIPAC-DIG/GCA}. The remainder of the paper is organized as follows. We briefly review related work in Section \ref{sec:related-work}. In Section \ref{sec:method}, we present the proposed GCA\xspace model in detail. The results of the experiments are analyzed in Section \ref{sec:experiments}. Finally, we conclude the paper in Section \ref{sec:conclusion}. For interested readers, additional configurations of experiments and details of proofs are provided in Appendices \ref{appendix:implementation} and \ref{appendix:proofs}, respectively. \section{The Proposed Method} \label{sec:method} In this section, we present GCA\xspace in detail, starting with the overall contrastive learning framework, followed by the proposed adaptive graph augmentation schemes. Finally, we provide the theoretical justification behind our method. \subsection{Preliminaries} Let \(\mathcal{G} = (\mathcal{V}, \mathcal{E})\) denote a graph, where \(\mathcal{V} = \{ v_1, v_2, \cdots, v_N\}\) and \(\mathcal{E} \subseteq \mathcal V \times \mathcal V\) represent the node set and the edge set respectively.
We denote the feature matrix and the adjacency matrix as \(\bm{X} \in \mathbb{R}^{N \times F}\) and \(\bm{A} \in \{0,1\}^{N \times N}\), where \(\bm{x}_i \in \mathbb{R}^{F}\) is the feature of \(v_i\), and \(\bm{A}_{ij} = 1\) iff \((v_i, v_j) \in \mathcal{E}\). In the unsupervised setting, no class information of nodes in \(\mathcal{G}\) is given during training. Our objective is to learn a GNN encoder \(f(\bm{X}, \bm{A}) \in \mathbb{R}^{N \times F^\prime}\) that receives the graph features and structure as input and produces node embeddings in low dimensionality, i.e., \(F^\prime \ll F\). We denote \(\bm{H} = f(\bm{X}, \bm{A})\) as the learned representations of nodes, where \(\bm{h}_i\) is the embedding of node \(v_i\). These representations can be used in downstream tasks, such as node classification and community detection. \subsection{The Contrastive Learning Framework} The proposed GCA\xspace framework follows the common graph CL paradigm, in which the model seeks to maximize the agreement of representations between different views \cite{Zhu:2020vf,Hassani:2020un}. To be specific, we first generate two graph views by performing stochastic graph augmentation on the input. Then, we employ a contrastive objective that enforces the encoded embeddings of each node in the two different views to agree with each other and to be discriminable from the embeddings of other nodes. In our GCA\xspace model, at each iteration, we sample two stochastic augmentation functions \(t \sim \mathcal{T}\) and \(t' \sim \mathcal{T}\), where \(\mathcal{T}\) is the set of all possible augmentation functions.
Then, we generate two graph views, denoted as \(\widetildeto{X}{\mathcal G}_1 = t(\mathcal G)\) and \(\widetildeto{X}{\mathcal G}_2 = t'(\mathcal G)\), and denote the node embeddings in the two generated views as \(\bm{U} = f(\widetildeto{X}{\bm{X}}_1, \widetildeto{X}{\bm A}_1)\) and \(\bm{V} = f(\widetildeto{X}{\bm{X}}_2, \widetildeto{X}{\bm A}_2)\), where \(\widetildeto{X}{\bm{X}}_{\ast}\) and \(\widetildeto{X}{\bm{A}}_{\ast}\) are the feature matrices and adjacency matrices of the views. After that, we employ a contrastive objective, i.e., a discriminator, that distinguishes the embeddings of the same node in these two different views from other node embeddings. For any node \(v_i\), its embedding generated in one view, \(\bm{u}_i\), is treated as the anchor, its embedding generated in the other view, \(\bm{v}_i\), forms the positive sample, and the remaining embeddings in the two views are naturally regarded as negative samples. Mirroring the InfoNCE objective \cite{vandenOord:2018ut} in our multi-view graph CL setting, we define the pairwise objective for each positive pair \((\bm{u}_i, \bm{v}_i)\) as \begin{equation} \begin{split} \ell &(\bm{u}_i, \bm{v}_i) =\\ &\log \frac {e^{\theta\left(\bm{u}_i, \bm{v}_{i} \right) / \tau}} {\underbrace{e^{\theta\left(\bm{u}_i, \bm{v}_{i} \right) / \tau}}_{\text{positive pair}} + \underbrace{\sum_{k \neq i} e^{\theta\left(\bm{u}_i, \bm{v}_{k} \right) / \tau}}_{\text{inter-view negative pairs}} + \underbrace{\sum_{k \neq i}e^{\theta\left(\bm{u}_i, \bm{u}_k \right) / \tau}}_{\text{intra-view negative pairs}}},\\ \end{split} \label{eq:pairwise-loss} \end{equation} where \(\tau\) is a temperature parameter. We define the critic \(\theta(\bm{u}, \bm{v}) = s(g(\bm{u}), g(\bm{v}))\), where \(s(\cdot, \cdot)\) is the cosine similarity and \(g(\cdot)\) is a non-linear projection that enhances the expressive power of the critic function \cite{Chen:2020wj,Tschannen:2020uo}.
The projection function \(g\) in our method is implemented with a two-layer perceptron model. Given a positive pair, we naturally define negative samples as all other nodes in the two views. Therefore, negative samples come from two sources, namely inter-view and intra-view nodes, corresponding to the second and the third term in the denominator of Eq. (\ref{eq:pairwise-loss}), respectively. Since the two views are symmetric, the loss \(\ell(\bm{v}_i, \bm{u}_i)\) for the other view is defined analogously. The overall objective to be maximized is then defined as the average over all positive pairs, formally given by \begin{equation} \mathcal{J} = \frac{1}{2N} \sum_{i = 1}^{N} \left[\ell(\bm{u}_i, \bm{v}_i) + \ell(\bm{v}_i, \bm{u}_i)\right]. \label{eq:overall-loss} \end{equation} To sum up, at each training epoch, GCA\xspace first draws two data augmentation functions \(t\) and \(t'\), and then generates two graph views \(\widetildeto{X}{\mathcal{G}}_1 = t(\mathcal{G})\) and \(\widetildeto{X}{\mathcal{G}}_2 = t'(\mathcal{G})\) of graph \(\mathcal{G}\) accordingly. Then, we obtain node representations \(\bm{U}\) and \(\bm{V}\) of \(\widetildeto{X}{\mathcal{G}}_1\) and \(\widetildeto{X}{\mathcal{G}}_2\) using a GNN encoder \(f\). Finally, the parameters are updated by maximizing the objective in Eq. (\ref{eq:overall-loss}). The training algorithm is summarized in Algorithm \ref{algo:training}.
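The pairwise objective in Eq. (\ref{eq:pairwise-loss}) and the overall objective in Eq. (\ref{eq:overall-loss}) can be sketched as follows (a simplified sketch in which the projection \(g\) is omitted, so the critic reduces to cosine similarity; all names are illustrative):

```python
import numpy as np

def pairwise_loss(U, V, i, tau=0.5):
    """InfoNCE-style objective for anchor u_i: positive v_i,
    inter-view negatives v_k and intra-view negatives u_k (k != i)."""
    def cos(a, B):
        # cosine similarity between vector a and every row of B
        return (B @ a) / (np.linalg.norm(B, axis=1) * np.linalg.norm(a))
    between = np.exp(cos(U[i], V) / tau)  # similarities to the other view
    within = np.exp(cos(U[i], U) / tau)   # similarities within the same view
    pos = between[i]
    neg = between.sum() - pos + within.sum() - within[i]  # exclude k = i
    return np.log(pos / (pos + neg))

def overall_objective(U, V, tau=0.5):
    """Symmetric average over all positive pairs."""
    N = U.shape[0]
    return sum(pairwise_loss(U, V, i, tau) + pairwise_loss(V, U, i, tau)
               for i in range(N)) / (2 * N)
```

Because the logarithm of a ratio below one is negative, the objective is bounded above by zero and is maximized during training.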
\begin{algorithm}[ht] \DontPrintSemicolon\SetNoFillComment \caption{The GCA\xspace training algorithm} \label{algo:training} \For {\(epoch \gets 1, 2, \cdots\)} { Sample two stochastic augmentation functions \(t \sim \mathcal{T}\) and \(t' \sim \mathcal{T}\)\; Generate two graph views \(\widetildeto{X}{\mathcal{G}}_1 = t(\mathcal{G})\) and \(\widetildeto{X}{\mathcal{G}}_2 = t'(\mathcal{G})\) by performing corruption on \(\mathcal{G}\)\; Obtain node embeddings \(\bm{U}\) of \(\widetildeto{X}{\mathcal{G}}_1\) using the encoder \(f\)\; Obtain node embeddings \(\bm{V}\) of \(\widetildeto{X}{\mathcal{G}}_2\) using the encoder \(f\)\; Compute the contrastive objective \(\mathcal{J}\) with Eq. (\ref{eq:overall-loss})\; Update parameters by applying stochastic gradient ascent to maximize \(\mathcal{J}\)\; } \end{algorithm} \subsection{Adaptive Graph Augmentation} In essence, CL methods that maximize agreement between views seek to learn representations that are \emph{invariant} to the perturbation introduced by the augmentation schemes \cite{Xiao:2020vt}. In the GCA\xspace model, we propose to design augmentation schemes that tend to keep important structures and attributes unchanged, while perturbing possibly unimportant links and features. Specifically, we corrupt the input graph by randomly removing edges and masking node features, where the removal or masking probabilities are skewed toward unimportant edges or features, that is, higher for unimportant edges or features and lower for important ones. From an amortized perspective, we emphasize important structures and attributes over randomly corrupted views, which guides the model to preserve fundamental topological and semantic graph patterns. \subsubsection{Topology-level augmentation.} For topology-level augmentation, we consider a direct way of corrupting the input graph by randomly removing edges \cite{Zhu:2020vf}.
Formally, we sample a modified subset \(\widetildeto{X}{\mathcal{E}}\) from the original \(\mathcal E\) with probability \begin{equation} P \{ (u, v) \in \widetildeto{X}{\mathcal E} \} = 1 - p^e_{uv}, \end{equation} where \((u, v) \in \mathcal{E}\) and \(p^e_{uv}\) is the probability of removing \((u, v)\). \(\widetildeto{X} {\mathcal{E}}\) is then used as the edge set in the generated view. \(p^e_{uv}\) should reflect the importance of the edge \((u, v)\), such that the augmentation function is more likely to corrupt unimportant edges while keeping important connective structures intact in augmented views. In network science, node centrality is a widely-used measure that quantifies the influence of nodes in the graph \cite{Newman:2018aw}. We define the edge centrality \(w^e_{uv}\) of edge \((u, v)\) to measure its influence based on the centrality of the two connected nodes. Given a node centrality measure \(\varphi_c(\cdot) : \mathcal V \rightarrow \mathbb R^+\), we define edge centrality as the average of the two adjacent nodes' centrality scores, i.e., \(w^e_{uv} = (\varphi_c(u) + \varphi_c(v))/2\); on directed graphs, we simply use the centrality of the tail node, i.e., \(w^e_{uv} = \varphi_c(v)\), since the importance of an edge is generally characterized by the node it points to \cite{Newman:2018aw}. Next, we calculate the removal probability of each edge based on its centrality value. Since node centrality values like degrees may vary across orders of magnitude \cite{Newman:2018aw}, we first set \(s^e_{uv} = \log w^e_{uv}\) to alleviate the impact of nodes with heavily dense connections.
The probabilities can then be obtained via a normalization step that transforms the values into probabilities, defined as \begin{equation} p^e_{uv} = \min \left( \frac {s^e_{\max} - s^e_{uv}} {s^e_{\max} - \mu^e_s} \cdot p_e , p_\tau \right), \end{equation} where \(p_e\) is a hyperparameter that controls the overall probability of removing edges, \(s^e_{\max}\) and \(\mu^e_s\) are the maximum and the average of \(s^e_{uv}\) respectively, and \(p_\tau < 1\) is a cut-off probability, used to truncate the probabilities since extremely high removal probabilities would lead to overly corrupted graph structures. For the node centrality function, we use three centrality measures, namely degree centrality, eigenvector centrality, and PageRank centrality, due to their simplicity and effectiveness. \paragraph{Degree centrality.} Node degree itself can be a centrality measure \cite{Newman:2018aw}. On directed networks, we use in-degrees, since the influence of a node in directed graphs is mostly bestowed by the nodes pointing at it \cite{Newman:2018aw}. Although node degree is one of the simplest centrality measures, it is quite effective and illuminating. For example, in citation networks where nodes represent papers and edges represent citation relationships, nodes with the highest degrees are likely to correspond to influential papers. \paragraph{Eigenvector centrality.} The eigenvector centrality \cite{Newman:2018aw,Bonacich:1987up} of a node is calculated as the corresponding component of the eigenvector associated with the largest eigenvalue of the adjacency matrix. Unlike degree centrality, which assumes that all neighbors contribute equally to the importance of a node, eigenvector centrality also takes the importance of neighboring nodes into consideration.
By definition, the eigenvector centrality of each node is proportional to the sum of the centralities of its neighbors; nodes that are either connected to many neighbors or connected to influential nodes will thus have high eigenvector centrality values. On directed graphs, we use the right eigenvector to compute the centrality, which corresponds to incoming edges. Note that since only the leading eigenvector is needed, the computational burden of calculating the eigenvector centrality is negligible. \paragraph{PageRank centrality.} The PageRank centrality \cite{Newman:2018aw,Page:1999wg} is defined as the PageRank weights computed by the PageRank algorithm. The algorithm propagates influence along directed edges, and nodes that gather the most influence are regarded as important. Formally, the centrality values are defined by \begin{equation} \bm\sigma = \alpha \bm{A} \bm{D}^{-1} \bm\sigma + \bm{1}, \end{equation} where \(\bm\sigma \in \mathbb{R}^N\) is the vector of PageRank centrality scores for each node and \(\alpha\) is a damping factor that prevents sinks in the graph from absorbing all ranks from other nodes connected to the sinks. We set \(\alpha=0.85\) as suggested in \citet{Page:1999wg}. For undirected graphs, we execute PageRank on transformed directed graphs, where each undirected edge is converted to two directed edges. \begin{figure}[b] \centering \subfloat[Degree]{ \includegraphics[width=0.31\linewidth,frame]{figures/karate_degree.pdf} } \subfloat[Eigenvector]{ \includegraphics[width=0.31\linewidth,frame]{figures/karate_evc.pdf} } \subfloat[PageRank]{ \includegraphics[width=0.31\linewidth,frame]{figures/karate_pr.pdf} } \caption{Visualization of edge centrality computed by three schemes in the Karate club dataset, where centrality values are shown in terms of the thickness of edges.
Node colors indicate two classes inside the network; two coaches are in orange.} \label{fig:visualization-edge-weights} \end{figure} To gain an intuition of these proposed adaptive structural augmentation schemes, we calculate edge centrality scores of the famous Karate club dataset \cite{Zachary:1977fs}, containing two groups of students led by two coaches respectively. The edge centrality values calculated by different schemes are visualized in Figure \ref{fig:visualization-edge-weights}. As can be seen in the figure, though the three schemes exhibit subtle differences, all of the augmentation schemes tend to emphasize edges that connect the two coaches (in orange) inside the two groups and pay less attention to links between peripheral nodes across groups. This verifies that the proposed node-centrality-based adaptive topology augmentation scheme can recognize fundamental structures of the graph. \subsubsection{Node-attribute-level augmentation.} On the node attribute level, similar to the salt-and-pepper noise in digital image processing \cite{Gonzalez:2018dp}, we add noise to node attributes by randomly masking a fraction of dimensions with zeros in node features. Formally, we first sample a random vector \(\widetildeto{X}{\bm{m}} \in \{ 0, 1 \}^F\) where each dimension is drawn independently from a Bernoulli distribution, i.e., \(\widetildeto{X}{m}_i \sim \text{Bern}(1 - p^f_i), \forall i\). Then, the generated node features \(\widetildeto{X} {\bm X}\) are computed by \begin{equation} \widetildeto{X} {\bm{X}} = [ \bm{x}_1 \circ \widetildeto{X}{\bm{m}}; \bm{x}_2 \circ \widetildeto{X}{\bm{m}}; \cdots; \bm{x}_N \circ \widetildeto{X}{\bm{m}} ]^\top. \end{equation} Here \([\cdot ; \cdot ]\) is the concatenation operator, and \(\circ\) is the element-wise multiplication. Similar to topology-level augmentation, the probability \(p^f_i\) should reflect the importance of the \(i\)-th dimension of node features.
We assume that feature dimensions frequently appearing in influential nodes should be important, and define the weights of feature dimensions as follows. For sparse one-hot node features, i.e. \(x_{ui} \in \{ 0, 1 \}\) for any node \(u\) and feature dimension \(i\), we calculate the weight of dimension \(i\) as \begin{equation} w^f_i = \sum_{u \in \mathcal V} x_{ui} \cdot \varphi_c (u), \end{equation} where \(\varphi_c (\cdot)\) is a node centrality measure that is used to quantify node importance. The first term \(x_{ui} \in \{ 0, 1 \}\) indicates the occurrence of dimension \(i\) in node \(u\), and the second term \(\varphi_c(u)\) measures the node importance of each occurrence. To provide some intuition behind the above definition, consider a citation network where each feature dimension corresponds to a keyword. Then, keywords that frequently appear in highly influential papers should be considered informative and important. For dense, continuous node features \(\bm{x}_u\) of node \(u\), where \(x_{ui}\) denotes the feature value at dimension \(i\), we cannot directly count the occurrence of each one-hot encoded value. Instead, we measure the magnitude of the feature value at dimension \(i\) of node \(u\) by its absolute value \(| x_{ui} |\). Formally, we calculate the weights by \begin{equation} w^f_i = \sum_{u \in \mathcal V} |x_{ui}| \cdot \varphi_c (u). \end{equation} Similar to topology augmentation, we normalize the weights to obtain the probabilities representing feature importance. Formally, \begin{equation} p_i^f = \min \left( \frac {s^f_{\max} - s^f_{i}} {s^f_{\max} - \mu_s^f} \cdot p_f, p_\tau \right), \end{equation} where \(s^f_i = \log w^f_i\), \(s^f_{\max}\) and \(\mu^f_s\) are the maximum and the average value of \(s^f_i\) respectively, and \(p_f\) is a hyperparameter that controls the overall magnitude of feature augmentation.
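As a concrete illustration, the two adaptive augmentation steps described above (centrality-based edge removal and feature masking) can be sketched in a few lines of NumPy. The function names and toy inputs below are illustrative rather than taken from our released implementation, and the weights are assumed to be strictly positive and non-uniform so that the logarithm and the normalization are well defined:

```python
import numpy as np

def centrality_to_probs(weights, p_overall, p_tau=0.9):
    # Normalization used at both levels: p = min((s_max - s) / (s_max - mu_s) * p_overall, p_tau),
    # with s = log(w); assumes strictly positive, non-uniform weights.
    s = np.log(weights)
    p = (s.max() - s) / (s.max() - s.mean()) * p_overall
    return np.minimum(p, p_tau)

def drop_edges(edges, node_centrality, p_e=0.3, rng=None):
    # Keep edge (u, v) with probability 1 - p^e_{uv}; w^e_{uv} averages the endpoint centralities.
    rng = rng or np.random.default_rng(0)
    w = np.array([(node_centrality[u] + node_centrality[v]) / 2 for u, v in edges])
    p = centrality_to_probs(w, p_e)
    return [e for e, keep in zip(edges, rng.random(len(edges)) > p) if keep]

def mask_features(X, node_centrality, p_f=0.3, rng=None):
    # Mask whole feature dimensions with probability p^f_i, i.e. m_i ~ Bern(1 - p^f_i).
    rng = rng or np.random.default_rng(1)
    w = (np.abs(X) * node_centrality[:, None]).sum(axis=0)  # w^f_i = sum_u |x_ui| * phi_c(u)
    p = centrality_to_probs(w, p_f)
    mask = (rng.random(X.shape[1]) > p).astype(X.dtype)
    return X * mask  # the same mask is broadcast over all nodes
```

Applying these two functions with independent hyperparameters \((p_{e,1}, p_{f,1})\) and \((p_{e,2}, p_{f,2})\) yields the two corrupted views.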
Finally, we generate two corrupted graph views \(\widetildeto{X} {\mathcal{G}}_1, \widetildeto{X} {\mathcal{G}}_2\) by jointly performing topology- and node-attribute-level augmentation. In GCA\xspace, the probabilities \(p_e\) and \(p_f\) differ between the two views to provide a diverse context for contrastive learning, where the probabilities for the first and the second view are denoted by \(p_{e,1}, p_{f,1}\) and \(p_{e,2}, p_{f,2}\) respectively. In this paper, we propose and evaluate three model variants, denoted as GCA\xspace-DE, GCA\xspace-EV, and GCA\xspace-PR. The three variants employ degree, eigenvector, and PageRank centrality measures respectively. Note that all centrality and weight measures are only dependent on the topology and node attributes of the original graph. Therefore, they only need to be computed once and do not bring much computational burden. \subsection{Theoretical Justification} In this section, we provide theoretical justification behind our model from two perspectives, i.e. MI maximization and the triplet loss. Detailed proofs can be found in Appendix \ref{appendix:proofs}. \paragraph{Connections to MI maximization.} Firstly, we reveal the connections between our loss and MI maximization between node features and the embeddings in the two views. The InfoMax principle has been widely applied in representation learning literature \cite{Tian:2019vw,Bachman:2019wp,Poole:2019vk,Tschannen:2020uo}. MI quantifies the amount of information obtained about one random variable by observing the other random variable. \begin{theorem} \label{thm:objective-InfoMax} Let \(\bm{X}_i = \{ \bm{x}_k \}_{k \in \mathcal{N}(i)}\) be the neighborhood of node \(v_i\) that collectively maps to its output embedding, where \(\mathcal{N}(i)\) denotes the set of neighbors of node \(v_i\) specified by GNN architectures, and \(\bm{X}\) be the corresponding random variable with a uniform distribution \(p(\bm{X}_i) = \nicefrac{1}{N}\).
Given two random variables \(\bm{U, V} \in \mathbb{R}^{F'}\) being the embedding in the two views, with their joint distribution denoted as \(p(\bm{U}, \bm{V})\), our objective \(\mathcal{J}\) is a lower bound of MI between encoder input \(\bm{X}\) and node representations in two graph views \(\bm{U, V}\). Formally, \begin{equation} \mathcal{J} \leq I(\bm{X}; \bm{U}, \bm{V}). \end{equation} \end{theorem} \begin{proof}[Proof sketch] We first observe that our objective \(\mathcal{J}\) is a lower bound of the InfoNCE objective \cite{vandenOord:2018ut,Poole:2019vk}, defined by \(I_\text{NCE}(\bm U; \bm V) \triangleq \mathbb{E}_{\prod_i {p(\bm u_i, \bm v_i)}} \left[ \frac{1}{N} \sum_{i=1}^N \log \frac{e^{\theta(\bm{u}_i, \bm{v}_i)}}{\frac{1}{N}\sum_{j = 1}^{N} e^{\theta(\bm{u}_i, \bm{v}_j)}} \right] \). Since the InfoNCE estimator is a lower bound of the true MI, the theorem directly follows from the application of data processing inequality \cite{Cover:2006ei}, which states that \(I(\bm U; \bm V) \leq I(\bm X; \bm U, \bm V)\). \end{proof} \begin{remark} Theorem \ref{thm:objective-InfoMax} reveals that maximizing \(\mathcal{J}\) is equivalent to explicitly maximizing a lower bound of the MI \(I(\bm X; \bm U, \bm V)\) between input node features and learned node representations. Recent work further provides empirical evidence that optimizing a stricter bound of MI may not lead to better downstream performance on visual representation learning \cite{Tschannen:2020uo,Tian:2020vw}, which further highlights the importance of the design of data augmentation strategies. When optimizing \(I(\bm U; \bm V)\), a lower bound of \(I(\bm X; \bm U, \bm V)\), we encourage the model to encode shared information between the two views. From the amortized perspective, corrupted views will follow a skewed distribution where important link structures and features are emphasized. 
By contrasting the two views, the model is enforced to encode the emphasized information into representations, which improves embedding quality. However, as the objective is not defined specifically on negative samples generated by the augmentation function, it remains challenging to derive the relationship between specific augmentation functions and the lower bound. We shall leave it for future work. \end{remark} \paragraph{Connections to the triplet loss.} Alternatively, we may also view the optimization problem in Eq. (\ref{eq:overall-loss}) as a classical triplet loss, commonly used in deep metric learning. \begin{theorem} \label{thm:objective-triplet-loss} When the projection function \(g\) is the identity function and we measure embedding similarity by simply taking the inner product, i.e. \(s(\bm{u}, \bm{v}) = \bm{u}^\top \bm{v}\), and further assuming that positive pairs are far more aligned than negative pairs, i.e. \(\bm{u}_i^\top \bm{v}_k \ll \bm{u}_i^\top \bm{v}_i\) and \(\bm{u}_i^\top \bm{u}_k \ll \bm{u}_i^\top \bm{v}_i\), minimizing the pairwise objective \(\ell(\bm{u}_i, \bm{v}_i)\) coincides with maximizing the triplet loss, as given in the sequel \begin{equation} \begin{split} & - \ell (\bm u_i, \bm v_i) \propto \\ & 4N \tau + \sum_{j \neq i}\left( \| {\bm u_i} - {\bm v_i} \|^2 - \| {\bm u_i} - {\bm v_j} \|^2 + \| {\bm u_i} - {\bm v_i} \|^2 - \| {\bm u_i} - {\bm u_j} \|^2\right). \end{split} \end{equation} \end{theorem} \begin{remark} Theorem \ref{thm:objective-triplet-loss} draws connections between the objective and the classical triplet loss. In other words, we may regard the problem in Eq. (\ref{eq:overall-loss}) as learning graph convolutional encoders to encourage positive samples being further away from negative samples in the embedding space. 
Moreover, by viewing the objective from the metric learning perspective, we highlight the importance of appropriate data augmentation schemes, which is often neglected in previous InfoMax-based methods. Specifically, as the objective pulls together the representations of each node in the two corrupted views, the model is enforced to encode information in the input graph that is insensitive to perturbation. Since the proposed adaptive augmentation schemes tend to keep important link structures and node attributes intact under perturbation, the model is guided to encode essential structural and semantic information into the representation, which improves the quality of embeddings. Lastly, the contrastive objective used in GCA\xspace is cheap to optimize, since we do not have to generate negative samples explicitly and all computation can be performed in parallel. In contrast, the triplet loss is known to be computationally expensive \cite{Schroff:2015wo}. \end{remark} \section{Related Work} \label{sec:related-work} In this section, we first briefly review prior work on contrastive representation learning. Then, we review graph representation learning methods. Finally, we provide a summary of comparisons between the proposed method and related work. \subsection{Contrastive Representation Learning} Being popular in self-supervised representation learning, contrastive methods aim to learn discriminative representations by contrasting positive and negative samples. For visual data, negative samples can be generated using a multiple-stage augmentation pipeline \cite{Chen:2020wj,Bachman:2019wp,Falcon:2020uv}, consisting of color jitter, random flip, cropping, resizing, rotation \cite{Gidaris:2018wr}, color distortion \cite{Larsson:2017vt}, etc. Existing work \cite{Wu:2018kw,Tian:2019vw,He:2020tu} employs a memory bank for storing negative samples. Other work \cite{Bachman:2019wp,Ye:2019we,Chen:2020wj} explores in-batch negative samples.
For an image patch as the anchor, these methods usually find a global summary vector \cite{Hjelm:2019uk,Bachman:2019wp} or patches in neighboring views \cite{vandenOord:2018ut,Henaff:2020ta} as the positive sample, and contrast them with negative-sampled counterparts, such as patches of other images within the same batch \cite{Hjelm:2019uk}. Theoretical analysis sheds light on the reasons behind their success \cite{Poole:2019vk}. Objectives used in these methods can be seen as maximizing a lower bound of MI between input features and their representations \cite{Linsker:1988ho}. However, recent work \cite{Tschannen:2020uo} reveals that downstream performance in evaluating the quality of representations may strongly depend on the bias that is encoded not only in the convolutional architectures but also in the specific estimator of the InfoMax objective. \subsection{Graph Representation Learning} Many traditional methods for unsupervised graph representation learning inherently follow the contrastive paradigm \cite{Perozzi:2014ib,Grover:2016ex,Kipf:2016ul,Hamilton:2017wa}. Prior work on unsupervised graph representation learning focuses on local contrastive patterns, which force neighboring nodes to have similar embeddings. For example, in the pioneering work DeepWalk \cite{Perozzi:2014ib} and node2vec \cite{Grover:2016ex}, nodes appearing in the same random walk are considered as positive samples. Moreover, to model probabilities of node co-occurrence pairs, many studies resort to Noise-Contrastive Estimation (NCE) \cite{Gutmann:2012eq}. However, these random-walk-based methods have been proved to be equivalent to factorizing some forms of graph proximity (e.g., multiplication of the adjacency matrix to model high-order connections) \cite{Qiu:2018ez} and thus tend to overemphasize the encoded structural information. Also, these methods are known to be error-prone with inappropriate hyperparameter tuning \cite{Perozzi:2014ib,Grover:2016ex}.
Recent work on Graph Neural Networks (GNNs) employs more powerful graph convolutional encoders over conventional methods. Among them, considerable literature has grown up around the theme of supervised GNNs \cite{Kipf:2016tc,Velickovic:2018we,Hu:2019vq,Wu:2019vz}, which require labeled datasets that may not be accessible in real-world applications. Along the other line of development, unsupervised GNNs have received less attention. Representative methods include GraphSAGE \cite{Hamilton:2017tp}, which incorporates DeepWalk-like objectives. Recent work DGI \cite{Velickovic:2019tu} marries the power of GNNs and CL, focusing on maximizing MI between global graph-level and local node-level embeddings. Specifically, to implement the InfoMax objective, DGI requires an injective readout function to produce the global graph-level embedding. However, the injective property of the graph readout function is too restrictive to fulfill, such that the quality of the graph embedding may deteriorate. In contrast to DGI, our preliminary work \cite{Zhu:2020vf} proposes not to rely on an explicit graph embedding, but rather to focus on maximizing the agreement of node embeddings across two corrupted views of the graph. Following DGI, GMI \cite{Peng:2020gw} employs two discriminators to directly measure MI between input and representations of both nodes and edges without data augmentation; MVGRL \cite{Hassani:2020un} proposes to learn both node- and graph-level representations by performing node diffusion and contrasting node representations to augmented graph summary representations. Moreover, GCC \cite{Qiu:2020gq} proposes a pretraining framework based on CL. It proposes to construct multiple graph views by sampling subgraphs based on random walks and then learns model weights with several feature engineering schemes. However, these methods do not explicitly consider adaptive graph augmentation at both structural and attribute levels, leading to suboptimal performance.
Unlike these methods, the adaptive data augmentation at both topology and attribute levels used in our GCA is able to preserve important patterns underlying the graph through stochastic perturbation. \paragraph{Comparisons with related graph CL methods.} In summary, we provide a brief comparison between the proposed GCA and other state-of-the-art graph contrastive representation learning methods, including DGI \cite{Velickovic:2019tu}, GMI \cite{Peng:2020gw}, and MVGRL \cite{Hassani:2020un}, in Table \ref{tab:comparison}, where the last two columns denote data augmentation strategies at the topology and attribute levels respectively. It can be seen that the proposed GCA method simplifies the previous node--global contrastive scheme by defining the contrastive objective at the node level. Most importantly, GCA is the only one that proposes adaptive data augmentation on both topology and attribute levels. \begin{table} \centering \caption{Comparison with related work.} \begin{tabular}{cccc} \toprule Method & \makecell{Contrastive\\objective} & Topology & Attribute \\ \midrule DGI & Node--global & Uniform & --- \\ GMI & Node--node & --- & --- \\ MVGRL & Node--global & Uniform & --- \\ \textbf{GCA\xspace} & \textbf{Node--node} & \textbf{Adaptive} & \textbf{Adaptive} \\ \bottomrule \end{tabular} \label{tab:comparison} \end{table} \section*{Discussions on Broader Impact} This paper presents a novel graph contrastive learning framework, and we believe it will benefit the graph machine learning community both theoretically and practically. Our proposed self-supervised graph representation learning techniques help alleviate the label scarcity issue when deploying machine learning applications in the real world, saving substantial human annotation effort. For example, our GCA\xspace framework can be plugged into existing recommender systems to produce high-quality embeddings for users and items to resolve the cold start problem.
Since our work mainly serves as a plug-in for existing machine learning models, it does not raise new ethical concerns. However, the GCA\xspace model may still give biased outputs (e.g., gender bias, ethnicity bias), as the provided data itself may be strongly biased during the processes of data collection, graph construction, etc.
\section*{Supplementary Material} \subsection{The dielectric constant} Here we elaborate on the understanding of the dielectric constant $\epsilon_c$. In BKT theory, the vortex system is described by the Hamiltonian \begin{eqnarray} \frac{{\cal H}_v}{k_BT}=&-&\pi K\int d^2{\mathbf r} \int d^2{\mathbf r'}n(\mathbf r)n(\mathbf r')\log\frac{|{\mathbf r}-{\mathbf r}'|}{R_0} \nonumber\\ &-&\log y \int d^2{\mathbf r} n^2(\mathbf r), \end{eqnarray} where the stiffness $K=n_s\hbar^2/4mk_BT$ and the vortex fugacity $y=e^{-E_c/k_BT}$ obey the renormalization group (RG) equations [\onlinecite{Kosterlitz74, Jose77}] \begin{eqnarray} \frac{d}{dl}K^{-1}(l)&=&4\pi^3y^2(l),\nonumber\\ \frac{d}{dl}y(l)&=&[2-\pi K(l)]y(l). \label{KTRG} \end{eqnarray} Here $l=\ln(r/\xi)$ is the RG scale, $\xi$ is the coherence length, and $E_c$ is the vortex core energy. \begin{figure} \begin{centering} \includegraphics[width=0.6\linewidth]{EcC.pdf} \includegraphics[width=0.48\linewidth]{Ec.pdf} \includegraphics[width=0.48\linewidth]{Ecec.pdf} \end{centering} \caption{(a): The dielectric constant $\epsilon_c$ as a function of the dimensionless vortex core energy $C$. The dashed line is a fit to the power-law behavior. (b): Renormalization of the dielectric constant $\epsilon(r)$, for different temperatures $T=T_{\rm BKT}, 0.95T_{\rm BKT}, 0.9T_{\rm BKT}$ (from top to bottom). Here $\epsilon_c=90, C=0.0599$. (c): The ratio of vortex core energy and BKT transition temperature as a function of the dielectric constant, $E_c/k_BT_{\rm BKT}= (A^{1/\theta}/2\pi)\epsilon_c^{-(1-\theta)/\theta}$, with $\theta=0.83$. } \label{Ec} \end{figure} One can define a scale-dependent dielectric constant $\epsilon(r)=K(0)/K(l)$, which measures the renormalization of the stiffness $K$ due to the screening of vortex-antivortex pairs. Without screening, $K$ takes the bulk value $K(0)=\Phi_0^2d/16\pi^3\lambda^2_{\rm b}(T)k_BT$, with $\lambda_{\rm b}$ the bulk penetration depth.
Including the effect of screening, $K$ changes with the scale $r$. One of the most important experimental consequences of the BKT theory is that, at the BKT transition temperature, the renormalized $K$, i.e. $K(l=\infty)$, approaches a universal value [\onlinecite{Nelson77}], which can be read off directly from the above RG equations to be $K(\infty)=2/\pi$. At $T=T_{\rm BKT}, r=\infty$, the scale-dependent dielectric constant becomes of the form $\epsilon(r=\infty,T_{\rm BKT})=\Phi_0^2d/32\pi^2\lambda^2_{\rm b}(T_{\rm BKT})k_BT_{\rm BKT}\equiv\epsilon_c$. $\epsilon_c$ is a nonuniversal number. It takes different values for different systems. For conventional superconductors, e.g. InO$_x$, it is typically 1.1 to 1.9. For YBa$_2$Cu$_3$O$_7$ thin films, it is much larger, $\epsilon_c\simeq$ 4.6 [\onlinecite{Firoy88}] or 6 [\onlinecite{Matsuda93}]. The penetration depth is correspondingly renormalized with respect to the bulk value, with $\lambda^{-2}=\lambda^{-2}_{\rm b}/\epsilon(r=\infty)$. At the transition, the renormalized penetration depth satisfies the relation [\onlinecite{Nelson77}] $k_BT_{\rm BKT} =\Phi_0^2d/32\pi^2\lambda^2$ (Eq.~(4) in the main text), which is universal in the sense that, different from $\epsilon_c$, this relation is identical for different systems. Thus to determine whether a superconducting transition is of the BKT type, it is crucial to measure the penetration depth $\lambda$ and to check whether such a universal relation between $\lambda$ and $T_{\rm BKT}$ is satisfied. Such a relation has been observed in superfluid helium thin films [\onlinecite{Bishop78}]. We can parameterize the vortex fugacity in terms of a dimensionless quantity $C$, with $y(0)=\exp[-CK(0)/4]$ [\onlinecite{Davis90}]. $C$ is directly proportional to the vortex core energy, with $E_c=E_0C$ and $E_0=\Phi_0^2d/64\pi^3\lambda^2_{\rm b}=(\epsilon_c/2\pi)k_BT_{\rm BKT}$. The vortex core energy can thus be written as $E_c=(C\epsilon_c/2\pi)k_BT_{\rm BKT}$.
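The arithmetic connecting $C$, $\epsilon_c$, and $E_c$ can be checked with a few lines of Python, using the power-law fit $\epsilon_c\simeq AC^{-\theta}$ (with $A\simeq 8.62$ and $\theta\simeq 0.83$) to the numerical RG solution discussed below; the script is only an illustration of these relations, not part of the analysis:

```python
import math

# Fitted constants from the numerical solution of the RG equations (see text).
A, theta = 8.62, 0.83

def eps_c(C):
    # Dielectric constant from the dimensionless core energy: eps_c ~ A * C**(-theta).
    return A * C ** (-theta)

def Ec_over_kT_BKT(C):
    # Vortex core energy in units of k_B * T_BKT: E_c = (C * eps_c / 2 pi) * k_B * T_BKT.
    return C * eps_c(C) / (2 * math.pi)
```

For $C=0.0599$ this gives $\epsilon_c\approx 89$ and $E_c\approx 0.85\,k_BT_{\rm BKT}$, consistent with the values quoted in this section.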
From the above RG equations, one can see that the renormalized fugacity vanishes at the transition, i.e. $y(r=\infty, T_{\rm BKT})=0$. Now we proceed to quantify the relation between the vortex core energy $E_c$ (or its dimensionless counterpart $C$) and the dielectric constant $\epsilon_c$. With the initial conditions $K(0)=2\epsilon_c/\pi$, $y(0)=e^{-CK(0)/4}$ and the final conditions $K(\infty)=2/\pi$, $y(\infty)=0$, we can numerically solve the RG equations. We find that $\epsilon_c=2, 4.6, 6, 90$ correspond to $C=7.27, 2.24, 1.583, 0.0599$ respectively (see Fig.~\ref{Ec}(a)). Following the RG flow (Fig.~\ref{Ec}(b)), one can see that only very close to the transition temperature does the dielectric constant change substantially with scale. When moving away from $T_{\rm BKT}$, $\epsilon(r)$ quickly settles down to its infrared value $\epsilon_{\infty}$, and $\epsilon_{\infty}$ decreases significantly with decreasing temperature [\onlinecite{Davis90}]. It is interesting to notice that for $\epsilon_c\gtrsim 5$, $\epsilon_c$ and $C$ have a power-law scaling, $\epsilon_c\simeq AC^{-\theta}$, with the coefficient $A\simeq 8.62$ and the power $\theta\simeq 0.83$ (see Fig.~\ref{Ec}(a)). The dielectric constant and the vortex core energy thus obey the relation $\epsilon_c\simeq A(E_c/E_0)^{-\theta}$. A large dielectric constant corresponds to a small vortex core energy. For $\epsilon_c=90, C=0.0599$, the vortex core energy is $E_c=(C\epsilon_c/2\pi)k_BT_{\rm BKT}\simeq (2.7/\pi) k_BT_{\rm BKT}$ \footnote{In BCS theory, the vortex core energy can be estimated as the loss of condensation energy within the vortex core, $E_c\simeq \pi \xi^2d\epsilon_{\rm cond}$, with the condensation energy density $\epsilon_{\rm cond}=N(0)\Delta^2/2$, the density of states at the Fermi level $N(0)\simeq 3n/2v_F^2m$, the BCS gap $\Delta$, and the coherence length $\xi=\hbar v_F/\pi\Delta$. Assuming $n_s=n$ at $T=0$, we have $E_c\simeq (1.9/\pi)k_BT_{\rm BKT}$ (see e.g. [\onlinecite{Mondal11}]).
In the XY model, one has instead $E_c\simeq \pi k_BT_{\rm BKT}$ [\onlinecite{Nagaosa99}]. }. Taking $T_{\rm BKT}\simeq 1.6K$, one obtains $E_c\simeq 0.13 {\rm meV}$. For YBCO thin films [\onlinecite{Matsuda93}], we have $E_c\simeq (1.583\times 6/2\pi)\times 7{\rm meV}\simeq 10.6{\rm meV}$, which is one order of magnitude larger than that of the heavy fermion superlattice [\onlinecite{Mizukami11}]. For large $\epsilon_c$, we have $E_c/k_BT_{\rm BKT}\simeq (A^{1/\theta}/2\pi)\epsilon_c^{-(1-\theta)/\theta}$ (see Fig.~\ref{Ec}(c)). Due to the small power $(1-\theta)/\theta\simeq 1/5$, for a given $T_{\rm BKT}$, a small change in the vortex core energy leads to a significant change in the dielectric constant. Increasing $\epsilon_c$ from 5 to 90, the vortex core energy only changes from $1.54 k_BT_{\rm BKT}$ to $0.85 k_BT_{\rm BKT}$. In the presence of competing orders, the vortex core energy is reduced, $E_c=E_c^{(0)}-|\delta E_c|$. As shown in the main text, $|\delta E_c|$ increases as one approaches the QCP. The dielectric constant becomes a function of the distance to the QCP, \begin{equation} \epsilon_c=A\left[ \frac{E_c^{(0)}-V_0e^{-2\sqrt{a}}(3+6\sqrt{a}+4a)}{E_0} \right]^{-\theta}, \end{equation} where $a=\alpha\lambda^4/g^2\mu_B^2\Phi_0^2$ and $\alpha$ is the distance to the QCP. $V_0$ and $a$ depend on the material-specific parameters $g, \gamma$. In order to determine quantitatively the evolution of the dielectric constant near the QCP, more material-specific microscopic calculations are needed. \subsection{Effect of the interface} At the interface, the Yb ions disorder (due to cross diffusion and displacements) and act as nonmagnetic impurities that locally suppress superconductivity in the CeCoIn$_5$ layers [\onlinecite{Bauer11}].
The superconducting order parameter is strongly suppressed near the impurity sites, and it recovers the bulk value over a distance on the order of the coherence length [\onlinecite{Franz97,Xiang95,Franz96}], $\xi(T)\simeq \nu \xi_0/\sqrt{1-T/T_{c0}}$, with $T_{c0}$ the bulk superconducting transition temperature, $\xi_0$ the BCS coherence length, and $\nu$ a number of order unity. When the thickness of the CeCoIn$_5$ layers is large, $d>\xi(T)$, the areas of defect-depressed order parameter do not overlap, and the gap is not affected by the defects. When the thickness of the CeCoIn$_5$ layers becomes smaller than $\xi(T)$, the depressed areas start to overlap, and the superconducting gap in the CeCoIn$_5$ layers is suppressed. At low temperatures with $T\ll T_{c0}$, $\xi(T)$ is of order $\xi_0$, which is about the thickness of four layers of CeCoIn$_5$. So we expect that for $n\gg 4$ the gap has the same value as in the bulk material, while for $n\lesssim 4$ the gap is suppressed. This explains the experimental observation that the Pauli-limited upper critical field, which is a direct measure of the gap, retains the bulk value for $n=5,7$, and is suppressed for $n=3$. \begin{figure}[h] \begin{centering} \includegraphics[width=0.48\linewidth]{defect1.pdf} \includegraphics[width=0.48\linewidth]{defect2.pdf} \end{centering} \caption{Illustration of the effect of Yb ions as pair-breaking nonmagnetic impurities for $d>\xi$ and $d<\xi$. In the shaded regions of size the coherence length $\xi$ around the Yb ions, superconductivity is suppressed. } \end{figure} \bibliographystyle{apsrev}
\section{Introduction} \label{sec:intro} Determining cloud coverage over a specific location at a given time plays an important role in forecasting key weather-related parameters like rainfall, humidity, and solar irradiance~\cite{ma2018application}. Additionally, the exact spread of the clouds has been proven to affect the power generated by photovoltaic systems~\cite{xiang2017very}. While many studies of cloud analysis have been conducted using satellite images, they generally suffer from low temporal and/or spatial resolution, limiting their utility. This has led to a growing popularity of ground-based sky cameras, also known as Whole-Sky-Imagers~\cite{long2006retrieving,dev2017color}. Although images captured by these cameras have good temporal resolution with localized focus, they are too noisy and have limited information, making them difficult to segment. Traditional image processing techniques for image segmentation tasks have generally been outperformed by the more recent advent of deep learning based methods~\cite{sultana2020evolution}. This also holds for the task of sky/cloud image segmentation. Using a total of $1128$ annotated images, Dev~et~al.~\cite{dev2019cloudsegnet} trained a deep convolutional neural network (\textit{CloudSegNet}) and reported a maximum F-score of nearly $0.90$ as compared to less than $0.80$ for previous efforts without deep networks~\cite{long2006retrieving,dev2017nighttime}. Similarly, Xie~et~al.~\cite{xie2020segcloud} reported a pixel-wise classification accuracy of more than $95\%$ after training their proposed architecture called \textit{SegCloud}. Although deep learning architectures have shown great promise for the task, they need significantly large volumes of labelled data to effectively optimize the large number of parameters and hyperparameters. The scarcity of such data in the case of cloud/sky image segmentation makes it highly difficult to achieve high accuracy and robustness.
Hence, an automated method to generate such data would be of great help in building highly accurate cloud/sky image segmentation models. In recent years, Generative Adversarial Networks (GANs) and their variants have been successfully used to generate synthetic images which look very similar to real ones~\cite{goodfellow2014generative}. Modified GAN architectures have also been suggested in the literature to append class labels as conditions for the GANs to generate images~\cite{antoniou2017data}. While such variants help to generate class labels corresponding to the auto-generated images for image classification tasks, it is still very difficult to generate ground-truth segmentation maps for image segmentation tasks. Using a publicly available dataset of sky/cloud images with corresponding segmentation labels (Section~\ref{sec:dataset}), we train a GAN to automatically generate new sky/cloud images (Section~\ref{sec:GANimageGen}). Ground-truth segmentation labels are then estimated by an unsupervised clustering algorithm (Section~\ref{sec:gt_estimation}). A simple regression model is then trained on both the non-augmented set (Section~\ref{sec:PLS}) and the augmented set (Section~\ref{sec:gan_augmentation}) to draw a meaningful comparison. The obtained results are discussed in Section~\ref{sec:results}, and the paper is concluded in Section~\ref{sec:conclusion}. \section{Dataset}\label{sec:dataset} In this study, we use a relatively small dataset of night-time images called the SWINSEG dataset~\cite{dev2017nighttime}. The dataset contains a total of $115$ images (with $500\times500$ pixel resolution) depicting night-time cloud/sky patterns. Furthermore, the ground-truth binary segmentation maps are also provided in the dataset. One of the major difficulties while dealing with night-time cloud/sky images is the blur which occurs between the edges of sky and clouds.
To resolve this issue, we use the $R-B$ channel only for our analysis throughout this study~\cite{dev2017nighttime}. Fig.~\ref{fig:datasetwithRBchannel} shows a few images from the dataset along with the extracted $R-B$ channel and the provided ground-truth binary maps. \begin{figure}[!ht] \centering \includegraphics[trim={0 302 0 0},clip,width=0.75\columnwidth]{datasetImages_R-Bchannel.png} \caption{Sample images from the used SWINSEG dataset. \textit{First column:} original $RGB$ images. \textit{Second column:} extracted $R-B$ channel. \textit{Last column:} corresponding ground-truth binary segmentation maps.} \label{fig:datasetwithRBchannel} \end{figure} For the purpose of training the image segmentation model, we split the given dataset into a training set, a validation set, and a test set as follows: \begin{itemize} \setlength\itemsep{0em} \item Training Set:\tabto{3cm} $69$ Images ($60\%$) \item Validation Set:\tabto{3cm} $18$ Images ($15.65\%$) \item Test Set:\tabto{3cm} $28$ Images ($24.35\%$) \end{itemize} \section{Methodology}\label{sec:methodology} The task is divided into two stages. In the first stage, we train a GAN architecture to generate cloud/sky images, followed by the estimation of the corresponding ground-truth binary maps. The second stage deals with the training of a prediction model to perform the cloud image segmentation. This is done using the Partial Least Squares (PLS) regression model~\cite{wegelin2000asurvey}. \subsection{GAN for cloud/sky image generation}\label{sec:GANimageGen} Generative Adversarial Networks (GANs) are highly valued for their ability to generate synthetic images which look similar to real ones~\cite{goodfellow2014generative}. A GAN is composed of two parts, namely, a generator and a discriminator.
While the generator's task is to generate images from latent noise such that the discriminator fails to classify them as `fake', the discriminator's task is to successfully segregate the fake images generated by the generator from the real images. Fig.~\ref{fig:GANarchitecture} shows the GAN architecture used in this study. \begin{figure}[!ht] \centering \includegraphics[width=0.98\columnwidth]{GAN.pdf} \caption{GAN architecture used to generate synthetic cloud/sky images which look similar to the original images.} \label{fig:GANarchitecture} \vspace{-0.3cm} \end{figure} Before proceeding with the training of the defined GAN, the training data is augmented with the basic image transformations of rotation and reflection. The dataset is thus expanded 16-fold by rotating the original images (by $90^{\circ}$, $180^{\circ}$, and $270^{\circ}$) and then reflecting the resulting images both vertically and horizontally. Post augmentation, the images are normalized to the range $[-1, 1]$ for the training process (see equation~\ref{eq:GANpre-normalization}). This means that the generator also learns to construct images in the same range. Hence, before finally using the generator to generate images for the second stage of this study, we convert them back to the actual range of $[0, 255]$ (see equation~\ref{eq:GANpost-denormalization}). \vspace{-0.2cm} \begin{equation} pixel_{[-1, 1]} = \frac{pixel_{[0, 255]}}{127.5} - 1 \label{eq:GANpre-normalization} \end{equation} \vspace{-0.4cm} \begin{equation} pixel_{[0, 255]} = \lfloor (pixel_{[-1, 1]} \times 127.5) + 127.5 \rceil \label{eq:GANpost-denormalization} \end{equation}\vspace{-0.1cm} The GAN is finally trained using the Adam optimizer~\cite{kingma2014adam} (with a learning rate of $0.00025$) and the cross-entropy loss function, with a batch size of $32$ and for $1000$ epochs, using the TensorFlow $2.1.0$ library on an MX150 GPU (with $2$GB memory) running CUDA $10.1$ and cuDNN $7.6$.
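The normalization and de-normalization steps of equations~\ref{eq:GANpre-normalization} and \ref{eq:GANpost-denormalization} can be sketched as follows (a minimal NumPy sketch; the function names are our own, not from the paper's code):

```python
import numpy as np

def normalize(pixels):
    """Map 8-bit pixel values in [0, 255] to [-1, 1] for GAN training."""
    return pixels.astype(np.float32) / 127.5 - 1.0

def denormalize(pixels):
    """Map generator outputs in [-1, 1] back to integers in [0, 255].

    np.rint implements the round-to-nearest bracket notation of the paper.
    """
    return np.rint(pixels * 127.5 + 127.5).astype(np.uint8)
```

The round trip is lossless for 8-bit inputs, so no information is lost by training the generator in the $[-1,1]$ range.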
Fig.~\ref{fig:GANgenImages} shows some of the cloud/sky images generated by the trained generator. Since all the original images were converted to the $R-B$ channel (as described in Section~\ref{sec:dataset}), the generated images correspond to the processed $R-B$ channel. \begin{figure}[!ht] \centering \includegraphics[trim={0 0 0 302},clip,width=0.75\columnwidth]{GAN-generated-images.png} \caption{Sample cloud/sky images (in $R-B$ channel) that were generated by the trained GAN.} \label{fig:GANgenImages} \vspace{-0.25cm} \end{figure} \subsubsection{Ground truth estimation} \label{sec:gt_estimation} Since the GAN only generates the sky/cloud images, we estimate the corresponding binary segmentation maps using an unsupervised clustering algorithm, as proposed by Dev~et~al.~\cite{dev2014systematic}. The clustering algorithm gives pixel-wise maps, which are smoothened to estimate area-wise maps similar to the ones present in the SWINSEG dataset (see Fig.~\ref{fig:GANimageGTestimation} for reference results). We use the smoothened binary maps as the reference ground-truth maps for the images generated by the GAN. \begin{figure}[!ht] \centering \includegraphics[width=0.75\columnwidth]{GANgeneratedimageandgroundtruthestimation.png} \caption{Starting from left, (a) a sample image which was generated using the trained generator of the GAN, (b) ground-truth estimate of the binary map (obtained using the unsupervised clustering algorithm), (c) smoothened binary map (to be finally used as the estimated ground-truth value).} \label{fig:GANimageGTestimation} \vspace{-0.5cm} \end{figure} \subsection{PLS model to perform image segmentation}\label{sec:PLS} PLS regression has already been used to perform cloud/sky image segmentation tasks for small image datasets~\cite{dev2017color}. Although more complex deep learning algorithms may perform better for this task, they also need huge datasets for effective training, which is not the case here.
This regression technique projects both the source and target variables to a lower-dimensional space before attempting to fit the regression model. The number of dimensions of the projected space is also called the `number of components' ($n\_comp$), which is a hyper-parameter of this technique. It is optimized by computing the coefficient of determination ($R^2$) for different values of $n\_comp$ on both the training and validation sets. We pick the value of $n\_comp$ for which the value of $R^2$ on the validation set is maximum, i.e. $n\_comp=8$ (see Fig.~\ref{fig:HPtuningPLS}). \begin{figure}[!ht] \centering \includegraphics[trim={0 0 0 65},clip,width=0.88\columnwidth]{optimizingPlotPLS_reduced.pdf} \caption{Hyper-parameter tuning to determine the optimal number of PLS components.} \label{fig:HPtuningPLS} \vspace{-0.5cm} \end{figure} \subsubsection{GAN Augmentation} \label{sec:gan_augmentation} Augmenting the training set with images generated from the GAN is not straightforward, because the ground-truth maps were only estimated and not manually assigned. As a result, some GAN-generated images and their corresponding binary maps (i.e. the estimated and smoothened ground-truth maps), or generated `data points', are highly incoherent or inaccurate. Such data points, when added to the training set, act as strong outliers for the general trend. Since a regression model does not ignore such outliers by default, they throw the model off its original trajectory, resulting in a low score (or a high error rate). We remove all such outliers by utilizing the validation set once again. We augment the training set with one data point at a time and recalculate the value of $R^2$ on the training set and the validation set (see Fig.~\ref{fig:favourableGANimages}). If the newly obtained value of $R^2$ on the validation set is less than the original value of $R^2$ on the validation set, the data point is declared an outlier, or unfavourable.
Only the favourable data points are finally added to the training set, on which the final results are reported. \begin{figure}[!ht] \centering \includegraphics[width=0.88\columnwidth]{optimizingPlotPLS_ganAugmented.pdf} \caption{Finding favorable data points which were generated by the process described in Section~\ref{sec:GANimageGen}.} \label{fig:favourableGANimages} \vspace{-0.4cm} \end{figure} \section{Results}\label{sec:results} To evaluate the effectiveness of augmenting the training set with the GAN-generated data points, we calculate the value of the coefficient of determination ($R^{2}$) for the trained PLS model with and without augmentation. Table~\ref{table:R2scoreComparison} shows that post augmentation, $R^2$ decreases on the training set but increases on the test set. This shows that augmenting with GAN-generated images helps in better generalization of the model. \begin{table}[htb!] \small \centering \begin{tabular}{p{3.25cm}||C{1.9cm}|C{1.9cm}} \hline Cases & $R^2$ (Training) & $R^2$ (Test) \\ \hline\hline Without Augmentation & $\bm{0.568}$ & $0.372$\\ After Augmentation & $0.539$ & $\bm{0.377}$\\ \hline \end{tabular} \caption{Coefficient of determination ($R^{2}$) as calculated when the PLS model was trained without augmenting the training set and after augmenting the training set.} \label{table:R2scoreComparison} \end{table} Since the PLS model generates real-valued outputs in the range $(-\infty, \infty)$, a threshold value (say, $thr$) needs to be identified to convert the predicted image into a binary segmentation map. For a particular value of $thr$, we create a confusion matrix for each image, considering clouds as positives and sky as negatives. The confusion matrix is used to plot the ROC (receiver operating characteristic) curve for each image and determine the optimal value of $thr$ (see Fig.~\ref{fig:rocTestImages}).
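The per-image threshold selection from the ROC curve can be sketched as follows (a minimal sketch using scikit-learn's \texttt{roc\_curve}; picking the operating point by Youden's $J$ statistic is our own illustrative choice, as the paper does not state its exact criterion):

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(gt_mask, pls_scores):
    """Pick the threshold maximizing TPR - FPR (Youden's J) on one image.

    gt_mask: binary ground-truth map (1 = cloud, 0 = sky).
    pls_scores: real-valued PLS predictions for the same pixels.
    """
    fpr, tpr, thresholds = roc_curve(gt_mask.ravel(), pls_scores.ravel())
    return thresholds[np.argmax(tpr - fpr)]

def binarize(pls_scores, thr):
    """Convert real-valued predictions into a binary segmentation map."""
    return (pls_scores >= thr).astype(np.uint8)
```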
Using the optimal values of $thr$, the average values of \textit{Precision}, \textit{Recall} and \textit{F-Score} over the entire test set are computed. Table~\ref{table:PrecisionRecallFScore} shows an improvement in the \textit{F-Score} for the post-augmentation case. \begin{figure}[!ht] \centering \begin{center} \includegraphics[width=0.88\columnwidth]{27_roc.pdf} \vspace{-0.4cm} \end{center} \caption{ROC curves for a sample image from the test set, computed for the cases without and with augmentation.} \label{fig:rocTestImages} \vspace{-0.35cm} \end{figure} \begin{table}[htb!] \small \centering \begin{tabular}{L{0.34\columnwidth}||C{0.145\columnwidth}|C{0.145\columnwidth}|C{0.145\columnwidth}} \hline Cases & Precision & Recall & F-Score \\ \hline\hline Without Augmentation & $0.846$ & $\bm{0.749}$ & $0.776$\\ After Augmentation & $\bm{0.862}$ & $0.744$ & $\bm{0.781}$\\ \hline \end{tabular} \vspace{0.1cm} \caption{Values of Precision, Recall and F-Score metrics when computed over the entire test set for the two cases.} \label{table:PrecisionRecallFScore} \end{table} \vspace{-0.5cm} \section{Conclusion \& Future Work}\label{sec:conclusion} In this paper, we demonstrated the effectiveness of GANs in data augmentation for cloud/sky image segmentation tasks. While GANs were used for generating raw cloud/sky images, the corresponding ground-truth binary segmentation maps were estimated using an unsupervised clustering algorithm. Even so, we observed a slight improvement in both metrics, i.e. the coefficient of determination and the F-score, when the training set was augmented with GAN-generated images. In the future, we intend to modify the GAN architecture in order to generate ground-truth segmentation maps alongside the cloud/sky images. Further, we plan to verify their effectiveness when used alongside more complex prediction models like deep convolutional neural networks. \vspace{-0.3cm}
\section{Introduction} Neutron stars observed in nature are magnetized objects with a magnetic field strength at the surface in the range of $10^{9}$-$10^{13}$~G~\cite{LGS}. For a special class of neutron stars, such as soft gamma-ray repeaters and anomalous X-ray pulsars, the field strength can be much larger and is estimated to be about $10^{14}$-$10^{15}$~G~\cite{TD}. These strongly magnetized objects are called magnetars~\cite{DT} and comprise about $10\%$ of the whole population of neutron stars~\cite{K}. However, in the interior of a magnetar the magnetic field strength may be even larger, reaching values of about $10^{18}$~G~\cite{CBP,BPL}. The existence of such ultrastrong magnetic fields is not yet excluded, because what can be learned from magnetar observations, through their periods and spin-down rates or through hydrogen spectral lines, are only their surface fields. There is still no general consensus regarding the mechanism generating such strong magnetic fields of magnetars, although different scenarios have been suggested, e.g., a turbulent dynamo amplification mechanism in a neutron star with a rapidly rotating core in the first moments after it goes supernova~\cite{TD}, or the possibility of spontaneous spin ordering in the dense quark core of a neutron star~\cite{ST}. Under such circumstances, the issue of interest is the behavior of neutron star matter in a strong magnetic field~\cite{CBP,BPL,CPL,PG}. In the recent study~\cite{PG}, neutron star matter was approximated by pure neutron matter in model considerations with the effective Skyrme and Gogny forces.
It has been shown that the behavior of the spin polarization of neutron matter in the high density region in a strong magnetic field crucially depends on whether neutron matter develops a spontaneous spin polarization (in the absence of a magnetic field) at several times nuclear matter saturation density, as is usual for the Skyrme forces, or whether the appearance of a spontaneous polarization is not allowed at the relevant densities (or is delayed to much higher densities), as in the case of the Gogny D1P force. In the former case, a ferromagnetic transition to a totally spin polarized state occurs, while in the latter case a ferromagnetic transition is excluded at all relevant densities and the spin polarization remains quite low even in the high density region. Note that the issue of the spontaneous appearance of spin polarized states in neutron and nuclear matter is a controversial one. On the one hand, the models with the Skyrme effective nucleon-nucleon (NN) interaction predict the occurrence of spontaneous spin instability in nuclear matter at densities in the range from $\varrho_0$ to $4\varrho_0$ for different parametrizations of the NN potential~\cite{R}-\cite{RPV} ($\varrho_0 = 0.16\,{\rm fm}^{-3}$ is the nuclear saturation density). For the Gogny effective interaction, a ferromagnetic transition in neutron matter occurs at densities larger than $7\varrho_0$ for the D1P parametrization and is not allowed for the D1 and D1S parametrizations~\cite{LVRP}. However, for the D1S Gogny force an antiferromagnetic phase transition happens in symmetric nuclear matter at the density $3.8\varrho_0$~\cite{IY2}. On the other hand, for models with the realistic NN interaction, no sign of spontaneous spin instability has been found so far at any isospin asymmetry up to densities well above $\varrho_0$~\cite{PGS}-\cite{BB}. Here we study the thermodynamic properties of spin polarized neutron matter in a strong magnetic field in the model with the Skyrme effective forces.
As a framework for consideration, we choose a Fermi liquid approach for the description of nuclear matter~\cite{AKPY,AIP,IY3}. Proceeding from the minimum principle for the thermodynamic potential, we get the self-consistent equations for the spin order parameter and the chemical potential of neutrons. In the absence of a magnetic field, the self-consistent equations have two degenerate branches of solutions for the spin polarization parameter, corresponding to the cases when the majority of neutron spins are oriented along, or opposite to, the spin quantization axis (positive and negative spin polarization, respectively). In the presence of a magnetic field, these branches are modified differently. A thermodynamically stable branch corresponds to the state with the majority of neutron spins aligned opposite to the magnetic field. In a strong magnetic field, the branch corresponding to the positive spin polarization splits into two branches, both with positive spin polarization. These last solutions were missed in the study of Ref.~\cite{PG}. We perform a thermodynamic analysis based on the comparison of the respective free energies and arrive at the conclusion that metastable states with the majority of neutron spins directed along the strong magnetic field can form in neutron matter. The appearance of such metastable states is possible due to the strong spin-dependent medium correlations in neutron matter with the Skyrme forces at high densities. Note that we consider the thermodynamic properties of spin polarized states in neutron matter in a strong magnetic field up to the high density region relevant for astrophysics. Nevertheless, we take into account the nucleon degrees of freedom only, although other degrees of freedom, such as pions, hyperons, kaons, or quarks, could be important at such high densities.
\section{Basic equations} The normal (nonsuperfluid) states of neutron matter are described by the normal distribution function of neutrons $f_{\kappa_1\kappa_2}=\mbox{Tr}\,\varrho a^+_{\kappa_2}a_{\kappa_1}$, where $\kappa\equiv({\bf{p}},\sigma)$, ${\bf p}$ is the momentum, $\sigma$ is the projection of spin on the third axis, and $\varrho$ is the density matrix of the system~\cite{I,IY}. In what follows, it will be assumed that the third axis is directed along the external magnetic field $\bf{H}$. The energy of the system is specified as a functional of the distribution function $f$, $E=E(f)$, and determines the single particle energy \begin{eqnarray} \varepsilon_{\kappa_1\kappa_2}(f)=\frac{\partial E(f)}{\partial f_{\kappa_2\kappa_1}}. \label{1} \end{eqnarray} The self-consistent matrix equation for determining the distribution function $f$ follows from the minimum condition of the thermodynamic potential~\cite{AKPY,AIP} and is \begin{eqnarray} f=\left\{\mbox{exp}(Y_0\varepsilon+ Y_4)+1\right\}^{-1}\equiv \left\{\mbox{exp}(Y_0\xi)+1\right\}^{-1}.\label{2}\end{eqnarray} Here the quantities $\varepsilon$ and $Y_4$ are matrices in the space of $\kappa$ variables, with $Y_{4\kappa_1\kappa_2}=Y_{4}\delta_{\kappa_1\kappa_2}$; $Y_0=1/T$ and $Y_{4}=-\mu_0/T$ are the Lagrange multipliers, $\mu_0$ being the chemical potential of neutrons and $T$ the temperature. Given the possibility for alignment of neutron spins along or opposite to the magnetic field $\bf H$, the normal distribution function of neutrons and the single particle energy can be expanded in the Pauli matrices $\sigma_i$ in spin space: \begin{align} f({\bf p})&= f_{0}({\bf p})\sigma_0+f_{3}({\bf p})\sigma_3,\label{7.2}\\ \varepsilon({\bf p})&= \varepsilon_{0}({\bf p})\sigma_0+\varepsilon_{3}({\bf p})\sigma_3.
\nonumber \end{align} Using Eqs.~\p{2} and \p{7.2}, one can explicitly express the distribution functions $f_{0},f_{3}$ in terms of the quantities $\varepsilon$: \begin{align} f_{0}&=\frac{1}{2}\{n(\omega_{+})+n(\omega_{-}) \},\label{2.4} \\ f_{3}&=\frac{1}{2}\{n(\omega_{+})-n(\omega_{-})\}.\nonumber \end{align} Here $n(\omega)=\{\exp(Y_0\omega)+1\}^{-1}$ and \begin{align} \omega_{\pm}&=\xi_{0}\pm\xi_{3},\label{omega}\\ \xi_{0}&=\varepsilon_{0}-\mu_{0},\; \xi_{3}=\varepsilon_{3}.\nonumber\end{align} As follows from the structure of the distribution functions $f$, the quantity $\omega_{\pm}$, being the exponent in the Fermi distribution function $n$, plays the role of the quasiparticle spectrum. The spectrum is twofold split due to the spin dependence of the single particle energy $\varepsilon({\bf p})$ in Eq.~\p{7.2}. The branches $\omega_{\pm}$ correspond to neutrons with spin up and spin down. The distribution functions $f$ should satisfy the normalization conditions \begin{align} \frac{2}{\cal V}\sum_{\bf p}f_{0}({\bf p})&=\varrho,\label{3.1}\\ \frac{2}{\cal V}\sum_{\bf p}f_{3}({\bf p})&=\varrho_\uparrow-\varrho_\downarrow\equiv\Delta\varrho.\label{3.2} \end{align} Here $\varrho=\varrho_{\uparrow}+\varrho_{\downarrow}$ is the total density of neutron matter, and $\varrho_{\uparrow}$ and $\varrho_{\downarrow}$ are the neutron number densities with spin up and spin down, respectively. The quantity $\Delta\varrho$ may be regarded as the neutron spin order parameter. It determines the magnetization of the system, $M=\mu_n \Delta\varrho$, $\mu_n$ being the neutron magnetic moment. The magnetization may contribute to the internal magnetic field $B=H+4\pi M$.
However, we will assume, analogously to Refs.~\cite{PG,BPL}, that the contribution of the magnetization to the magnetic field $B$ remains small for all relevant densities and magnetic field strengths, and, hence, \begin{align} B\approx H.\label{approx}\end{align} This assumption holds true due to the tiny value of the neutron magnetic moment $\mu_n=-1.9130427(5)\mu_N\approx-6.031\cdot10^{-18}$ MeV/G~\cite{A} ($\mu_N$ being the nuclear magneton) and is confirmed numerically by finding solutions of the self-consistent equations in the two approximations corresponding to preserving and neglecting the contribution of the magnetization. In order to get the self-consistent equations for the components of the single particle energy, one has to specify the energy functional of the system. In view of the approximation~\p{approx}, it reads~\cite{AIP,IY} \begin{align} E(f)&=E_0(f,H)+E_{int}(f)+E_{field},\label{enfunc} \\ {E}_0(f,H)&=2\sum\limits_{ \bf p}^{} \varepsilon_0({\bf p})f_{0}({\bf p})-2\mu_n H\sum\limits_{ \bf p}^{} f_{3}({\bf p}),\nonumber \\ {E}_{int}(f)&=\sum\limits_{ \bf p}^{}\{ \tilde\varepsilon_{0}({\bf p})f_{0}({\bf p})+ \tilde\varepsilon_{3}({\bf p})f_{3}({\bf p})\},\nonumber\\ E_{field}&=\frac{H^2}{8\pi}\cal V,\nonumber\end{align} where \begin{align}\tilde\varepsilon_{0}({\bf p})&=\frac{1}{2\cal V}\sum_{\bf q}U_0^n({\bf k})f_{0}({\bf q}),\;{\bf k}=\frac{{\bf p}-{\bf q}}{2}, \label{flenergies}\\ \tilde\varepsilon_{3}({\bf p})&=\frac{1}{2\cal V}\sum_{\bf q}U_1^n({\bf k})f_{3}({\bf q}).\nonumber \end{align} Here $\varepsilon_0({\bf p})=\frac{{\bf p}^{\,2}}{2m_{0}}$ is the free single particle spectrum, $m_0$ is the bare mass of a neutron, $U_0^n({\bf k}), U_1^n({\bf k})$ are the normal Fermi liquid (FL) amplitudes, and $\tilde\varepsilon_{0},\tilde\varepsilon_{3}$ are the FL corrections to the free single particle spectrum. Note that in this study we are not interested in the total energy density and pressure in the interior of a neutron star.
For this reason, the field contribution $E_{field}$, being the energy of the magnetic field in the absence of matter, can be omitted. Using Eqs.~\p{1} and \p{enfunc}, we get the self-consistent equations in the form \begin{align}\xi_{0}({\bf p})&=\varepsilon_{0}({\bf p})+\tilde\varepsilon_{0}({\bf p})-\mu_0,\; \xi_{3}({\bf p})=-\mu_nH+\tilde\varepsilon_{3}({\bf p}).\label{14.2} \end{align} To obtain numerical results, we utilize the effective Skyrme interaction. The amplitude of the NN interaction for the Skyrme effective forces reads~\cite{VB} \begin{align}\hat v({\bf p},{\bf q})&=t_0(1+x_0P_\sigma)+\frac{1}{6}t_3(1+x_3P_\sigma)\varrho^\beta \label{49}\\&+\frac{1}{2\hbar^2} t_1(1+x_1P_\sigma)({\bf p}^2+{\bf q}^2) +\frac{t_2}{\hbar^2}(1+x_2P_\sigma){\bf p}{\bf q},\nonumber\end{align} where $P_\sigma=(1+{{\boldsymbol\sigma_1\boldsymbol\sigma_2}})/2$ is the spin exchange operator, and $t_i, x_i$ and $\beta$ are phenomenological parameters specifying a given parametrization of the Skyrme interaction. In Eq.~\p{49}, the spin-orbit term, irrelevant for uniform matter, was omitted. The normal FL amplitudes can be expressed in terms of the Skyrme force parameters~\cite{AIP,IY3}: \begin{align} U_0^n({\bf k})&=2t_0(1-x_0)+\frac{t_3}{3}\varrho^\beta(1-x_3)\label{101}\\&\quad +\frac{2}{\hbar^2}[t_1(1-x_1)+3t_2(1+x_2)]{\bf k}^{2}, \nonumber\\ U_1^n({\bf k})&=-2t_0(1-x_0)-\frac{t_3}{3}\varrho^\beta(1-x_3)\label{102}\\&\quad +\frac{2}{\hbar^2}[t_2(1+x_2)-t_1(1-x_1)]{\bf k}^{2}\equiv a_n+b_n{\bf k}^{2}.\nonumber\end{align} Furthermore, we do not take into account the effective tensor forces, which lead to a coupling of the momentum and spin degrees of freedom \cite{HJ,D,FMS} and, correspondingly, to anisotropy in the momentum dependence of the FL amplitudes with respect to the spin quantization axis.
Then \begin{align} \xi_{0}&=\frac{p^2}{2m_{n}}-\mu,\label{4.32}\\ \xi_{3}&=-\mu_nH+(a_n+b_n\frac{{\bf p}^{2}}{4})\frac{\Delta\varrho}{4}+\frac{b_n}{16}\langle {\bf q}^{2}\rangle_{3}, \label{4.33} \end{align} where the effective neutron mass $m_{n}$ is defined by the formula \begin{align} \frac{\hbar^2}{2m_{n}}=\frac{\hbar^2}{2m_0}+\frac{\varrho}{8} [t_1(1-x_1)+3t_2(1+x_2)],\label{181}\end{align} and the renormalized chemical potential $\mu$ should be determined from Eq.~\p{3.1}. The quantity $\langle {\bf q}^{2}\rangle_{3}$ in Eq.~\p{4.33} is the second order moment of the distribution function $f_3$: \begin{align} \langle {\bf q}^{2}\rangle_{3}&=\frac{2}{V}\sum_{\bf q}{\bf q}^2f_{3}({\bf q}).\label{6.11}\end{align} In view of Eqs.~\p{4.32}, \p{4.33}, the branches $\omega_\pm\equiv\omega_\sigma$ of the quasiparticle spectrum in Eq.~\p{omega} read \begin{equation} \omega_\sigma=\frac{p^2}{2m_{\sigma}}-\mu+\sigma\bigl(-\mu_nH+\frac{a_n\Delta\varrho}{4} +\frac{b_n}{16}\langle {\bf q}^{2}\rangle_{3}\bigr),\label{spectrud}\end{equation} where $m_\sigma$ is the effective mass of a neutron with spin up ($\sigma=+1$) and spin down ($\sigma=-1$), \begin{align} \frac{\hbar^2}{2m_{\sigma}}&=\frac{\hbar^2}{2m_0} +\frac{\varrho_\sigma}{2} t_2(1+x_2)\label{187}\\&\quad+\frac{\varrho_{-\sigma}}{4}[t_1(1-x_1)+t_2(1+x_2)],\; \varrho_{+(-)}\equiv\varrho_{\uparrow(\downarrow)}.\nonumber\end{align} Note that for totally spin polarized neutron matter \begin{align} \frac{m_0}{m^*}=1+\frac{\varrho m_0}{\hbar^2}t_2(1+x_2),\label{masspol} \end{align} where $m^*$ is the effective neutron mass in the fully polarized state. Since usually for Skyrme parametrizations $t_2<0$, we have the constraint $x_2\leq-1$, which guarantees the stability of totally polarized neutron matter at high densities.
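To make the origin of this constraint explicit (an elaboration of Eq.~\p{masspol}, not an additional result): stability requires $m^*>0$ at all densities, i.e.
\begin{align*}
\frac{m_0}{m^*}=1+\frac{\varrho m_0}{\hbar^2}\,t_2(1+x_2)>0 \quad\text{for all }\varrho,
\end{align*}
which, since $t_2<0$ and $\varrho$ may become arbitrarily large, is possible only if $t_2(1+x_2)\geq0$, i.e. $x_2\leq-1$.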
It follows from Eq.~\p{spectrud} that the effective chemical potential $\mu_\sigma$ for neutrons with spin up ($\sigma=1$) and spin down ($\sigma=-1$) can be determined as \begin{align} \mu_\sigma=\mu+\sigma\bigl(\mu_nH-\frac{a_n\Delta\varrho}{4} -\frac{b_n}{16}\langle {\bf q}^{2}\rangle_{3}\bigr).\end{align} Thus, with account of the expressions \p{2.4} for the distribution functions $f$, we obtain the self-consistent equations \p{3.1}, \p{3.2}, and \p{6.11} for the effective chemical potential $\mu$, the spin order parameter $\Delta\varrho$, and the second order moment $\langle {\bf q}^{2}\rangle_{3}$. \section{Solutions of self-consistent equations at $T=0$. Thermodynamic stability} Here we directly solve the self-consistent equations at zero temperature and present the neutron spin order parameter as a function of density and magnetic field strength. In numerically solving the self-consistent equations, we utilize the SLy4 and SLy7 Skyrme forces~\cite{CBH}, which were constrained originally to reproduce the results of microscopic neutron matter calculations (pressure versus density curve). Note that the density dependence of the nuclear symmetry energy, calculated with these Skyrme interactions, yields neutron star models in broad agreement with observables such as the minimum rotation period, the gravitational mass-radius relation, the binding energy released in supernova collapse, etc.~\cite{RMK}. Besides, these Skyrme parametrizations satisfy the constraint $x_2\leq-1$ obtained from Eq.~\p{masspol}. We consider magnetic fields up to the values allowed by the scalar virial theorem. For a neutron star with mass $M$ and radius $R$, equating the magnetic field energy $E_H\sim (4\pi R^3/3)(H^2/8\pi)$ with the gravitational binding energy $E_G\sim GM^2/R$, one gets the estimate $H_{max}\sim\frac{M}{R^2}(6G)^{1/2}$.
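As a quick numerical sanity check of this scaling (a sketch; the CGS values of $G$, $M_\odot$, and $R_\odot$ below are standard constants, not taken from the paper):

```python
import math

# Standard CGS constants (assumed, not from the paper)
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33   # solar mass, g
R_SUN = 6.957e10   # solar radius, cm

def h_max(mass_g, radius_cm):
    """Virial-theorem estimate H_max ~ (M / R^2) * sqrt(6 G), in gauss."""
    return mass_g / radius_cm**2 * math.sqrt(6.0 * G)

# Typical neutron star: M = 1.5 M_sun, R = 1e-5 R_sun (about 7 km)
print(f"H_max ~ {h_max(1.5 * M_SUN, 1e-5 * R_SUN):.1e} G")  # of order 1e18 G
```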
For a typical neutron star with $M=1.5M_{\odot}$ and $R=10^{-5}R_\odot$, this yields a maximum magnetic field strength $H_{max}\sim10^{18}$~G. This magnitude can be expected in the interior of a magnetar, while recent observations report surface values up to $H\sim 10^{15}$~G, as inferred from the hydrogen spectral lines~\cite{IShS}. In order to characterize spin ordering in neutron matter, it is convenient to introduce a neutron spin polarization parameter \begin{equation} \Pi=\frac{\varrho_{\uparrow}-\varrho_{\downarrow}}{\varrho}\equiv\frac{\Delta\varrho}{\varrho}. \end{equation} Fig.~\ref{fig1} shows the dependence of the neutron spin polarization parameter on density, normalized to the nuclear saturation density $\varrho_0$, at zero temperature in the absence of a magnetic field. The spontaneous polarization develops at $\varrho=3.70\varrho_0$ for the SLy4 interaction ($\varrho_0=0.16\ \rm{fm}^{-3}$) and at $\varrho=3.59\varrho_0$ for the SLy7 interaction ($\varrho_0=0.158\ \rm{fm}^{-3}$), which reflects the instability of neutron matter with the Skyrme interaction at such densities against spin fluctuations. Since the self-consistent equations at $H=0$ are invariant with respect to a global flip of the neutron spins, we have two branches of solutions for the spin polarization parameter, $\Pi_0^+(\varrho)$ (upper) and $\Pi_0^-(\varrho)$ (lower), which differ only by sign, $\Pi_0^+(\varrho)=-\Pi_0^-(\varrho)$.
\begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig1.eps} \end{center} \vspace{-2ex} \caption{(Color online) Neutron spin polarization parameter as a function of density at vanishing temperature and magnetic field.} \label{fig1}\vspace{-0ex} \end{figure} \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig2.eps} \end{center} \vspace{-2ex} \caption{(Color online) Neutron spin polarization parameter as a function of density at $T=0$ and different magnetic field strengths for (a) SLy4 interaction and (b) SLy7 interaction. The branches of spontaneous polarization $\Pi_0^-,\Pi_0^+$ are shown by solid curves.} \label{fig2}\vspace{-0ex} \end{figure} Fig.~\ref{fig2} shows the neutron spin polarization parameter as a function of density for a set of fixed values of the magnetic field. The branches of spontaneous polarization are modified by the magnetic field differently, since the self-consistent equations at $H\not=0$ lose the invariance with respect to a global flip of the spins. At nonvanishing $H$, the lower branch $\Pi_1(\varrho)$, corresponding to the negative spin polarization, extends down to very low densities. There are three characteristic density domains for this branch. At low densities $\varrho\lesssim 0.5\varrho_0$, the absolute value of the spin polarization parameter increases with decreasing density. At intermediate densities $0.5\varrho_0\lesssim\varrho\lesssim3\varrho_0$, there is a plateau in the $\Pi_1(\varrho)$ dependence, whose characteristic value depends on $H$, e.g., $\Pi_1\approx-0.08$ at $H=10^{18}$~G. At densities $\varrho\gtrsim3\varrho_0$, the magnitude of the spin polarization parameter increases with density, and neutrons become totally polarized at $\varrho\approx6\varrho_0$.
Note that the results in the low-density domain should be considered as a first approximation to the real, more complex picture, since, as discussed in detail in Ref.~\cite{PG}, low density neutron-rich matter in $\beta$-equilibrium possesses a frustrated state, ``nuclear pasta'', arising as a result of the competition between long-range Coulomb interactions and short-range nuclear forces. In our case, where pure neutron matter is considered, there is no mechanical instability due to the absence of the Coulomb interaction. However, the possibility of the appearance of low-density nuclear magnetic pasta and its impact on the neutrino opacities in the protoneutron star early cooling stage should be explored in a more detailed analysis. \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig3.eps} \end{center} \vspace{-2ex} \caption{(Color online) Energy per neutron as a function of density at $T=0$ for different branches $\Pi_1(\varrho)$-$\Pi_3(\varrho)$ of solutions of the self-consistent equations at $H=10^{18}$~G for (a) SLy4 and (b) SLy7 interactions, including a spontaneously polarized state.} \label{fig3}\vspace{-0ex} \end{figure} Let us consider the modification of the upper branch of spontaneous polarization $\Pi_0^+(\varrho)$ at nonvanishing magnetic field. It is seen from Fig.~\ref{fig2} that, beginning from some threshold density, the self-consistent equations at a given density now have two positive solutions for the spin polarization parameter (apart from one negative solution). These solutions belong to two branches, $\Pi_2(\varrho)$ and $\Pi_3(\varrho)$, characterized by a different dependence on density. For the branch $\Pi_2(\varrho)$, the spin polarization parameter decreases with density and tends to zero, while for the branch $\Pi_3(\varrho)$ it increases with density and saturates.
These branches appear stepwise at the same threshold density $\varrho_{\rm th}$, which depends on the magnetic field and is larger than the critical density of spontaneous spin instability in neutron matter. For example, for the SLy7 interaction, $\varrho_{\rm th}\approx 3.80\,\varrho_0$ at $H=5\cdot 10^{17}$~G, and $\varrho_{\rm th}\approx 3.92\,\varrho_0$ at $H= 10^{18}$~G. The magnetic field, due to the negative value of the neutron magnetic moment, tends to orient the neutron spins opposite to the magnetic field direction. As a result, the spin polarization parameter for the branches $\Pi_2(\varrho)$, $\Pi_3(\varrho)$ with positive spin polarization is smaller than that for the branch of spontaneous polarization $\Pi_0^+$, and, vice versa, the magnitude of the spin polarization parameter for the branch $\Pi_1(\varrho)$ with negative spin polarization is larger than the corresponding value for the branch of spontaneous polarization $\Pi_0^-$. Note that the impact of even such a strong magnetic field as $H=10^{17}$~G is small: the spin polarization parameter for all three branches $\Pi_1(\varrho)$-$\Pi_3(\varrho)$ is either close to zero or close to its value in the state with spontaneous polarization, which is governed by the spin-dependent medium correlations. Thus, at densities larger than $\varrho_{\rm th}$, we have three branches of solutions: one of them, $\Pi_1(\varrho)$, with negative spin polarization, and two others, $\Pi_2(\varrho)$ and $\Pi_3(\varrho)$, with positive polarization. In order to clarify which branch is thermodynamically preferable, we should compare the corresponding free energies. Fig.~\ref{fig3} shows the energy per neutron as a function of density at $T=0$ and $H=10^{18}$~G for these three branches, compared with the energy per neutron for a spontaneously polarized state [the branches $\Pi_0^\pm(\varrho)$].
It is seen that the state with the majority of neutron spins oriented opposite to the direction of the magnetic field [the branch $\Pi_1(\varrho)$] has the lowest energy. This result is intuitively clear, since the magnetic field tends to orient the neutron spins opposite to $\bf{H}$, as mentioned earlier. However, the state described by the branch $\Pi_3(\varrho)$, with positive spin polarization, has an energy very close to that of the thermodynamically stable state. This means that despite the presence of a strong magnetic field $H\sim 10^{18}$~G, the state with the majority of neutron spins directed along the magnetic field can be realized as a metastable state in the dense core of a neutron star in the model with the Skyrme effective interaction. In this scenario, since such states exist only at densities $\varrho\geqslant\varrho_{\rm th}$, as the density decreases (going from the interior to the outer regions of a magnetar), a metastable state with positive spin polarization changes at the threshold density $\varrho_{\rm th}$ into a thermodynamically stable state with negative spin polarization. \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig4.eps} \end{center} \vspace{-2ex} \caption{(Color online) Spin polarization parameter as a function of the magnetic field strength at $T=0$ for different branches $\Pi_1(H)$-$\Pi_3(H)$ of solutions of the self-consistent equations at $\varrho=4\varrho_0$ and for the branch $\Pi_1(H)$ at $\varrho=2\varrho_0$ for (a) SLy4 interaction and (b) SLy7 interaction.} \label{fig4}\vspace{-0ex} \end{figure} At this point, note some important differences between the results of our study and those obtained in Ref.~\cite{PG}. First, in the study~\cite{PG} of neutron matter in a strong magnetic field, only one branch of solutions for the spin polarization parameter was found in the model with the Skyrme interaction (for the same SLy4 and SLy7 parametrizations).
In fact, we have seen that the degenerate branches of spontaneous polarization (at zero magnetic field) with positive and negative spin polarization are modified differently by the magnetic field, and, as a result, in the Skyrme model there are, in general, three different branches of solutions of the self-consistent equations at nonvanishing magnetic field. Besides, the only branch considered in Ref.~\cite{PG}, corresponding to our thermodynamically stable branch $\Pi_1$, is characterized by positive spin polarization, contrary to our result with $\Pi_1<0$. This disagreement is explained by the incorrect sign in front of the magnetic field term in the equation for the quasiparticle spectrum in Ref.~\cite{PG} (analogous to Eq.~\p{spectrud} in our case). Clearly, in the equilibrium configuration the majority of neutron spins are aligned opposite to the magnetic field. \begin{figure}[tb] \begin{center} \includegraphics[width=8.6cm,keepaspectratio]{fig5.eps} \end{center} \vspace{-0ex} \caption{(Color online) Same as in Fig.~\ref{fig4} but for the energy per neutron.} \label{fig5} \end{figure} Fig.~\ref{fig4} shows the spin polarization parameter as a function of the magnetic field strength at zero temperature for different branches $\Pi_1(H)$-$\Pi_3(H)$ of solutions of the self-consistent equations at $\varrho=4\varrho_0$, compared with that for the branch $\Pi_1(H)$ at $\varrho=2\varrho_0$. It is seen that up to the field strength $H=10^{17}$~G, the influence of the magnetic field is rather marginal. For the branches $\Pi_1(H)$ and $\Pi_2(H)$, the magnitude of the spin polarization parameter increases with the field strength, while for $\Pi_3(H)$ it decreases. Interestingly, as is clearly seen from the top panel for the SLy4 interaction, at the given density there exists some maximum magnetic field strength $H_m$ at which the branches $\Pi_2$ and $\Pi_3$ merge and beyond which ($H>H_m$) they do not continue.
Fig.~\ref{fig5} shows the energy of neutron matter per particle as a function of the magnetic field strength at $T=0$ under the same assumptions as in Fig.~\ref{fig4}. It is seen that the state with negative spin polarization [branch $\Pi_1(H)$] becomes increasingly preferable with increasing magnetic field, although the total effect of changing the magnetic field strength by two orders of magnitude on the energy corresponding to all three branches $\Pi_1(H)$-$\Pi_3(H)$ remains small. It is also seen that increasing the density by a factor of two increases the energy per neutron roughly by a factor of three, reflecting the fact that the medium correlations play a more important role in building the energetics of the system than the impact of a strong magnetic field. \section{Conclusions} We have considered spin polarized states in neutron matter at a strong magnetic field in the model with the Skyrme effective NN interaction (SLy4, SLy7 parametrizations). The self-consistent equations for the spin polarization parameter and chemical potential of neutrons have been obtained and analyzed at zero temperature. It has been shown that the thermodynamically stable branch of solutions for the spin polarization parameter as a function of density corresponds to the case when the majority of neutron spins are oriented opposite to the direction of the magnetic field (negative spin polarization). This branch extends from very low densities to the high-density region where the spin polarization parameter saturates and, respectively, neutrons become totally spin polarized. Besides, beginning from some threshold density $\varrho_{\rm th}$, which depends on the magnetic field strength, the self-consistent equations also have two other branches (upper and lower) of solutions for the spin polarization parameter, corresponding to the case when the majority of neutron spins are oriented along the magnetic field (positive spin polarization).
For example, for the SLy7 interaction, $\varrho_{\rm th}\approx 3.80\,\varrho_0$ at $H=5\cdot 10^{17}$~G, and $\varrho_{\rm th}\approx 3.92\,\varrho_0$ at $H= 10^{18}$~G. The spin polarization parameter along the upper branch increases with density and saturates, while along the lower branch it decreases and vanishes. The free energy corresponding to the upper branch turns out to be very close to the free energy corresponding to the thermodynamically preferable branch with negative spin polarization. As a consequence, at a strong magnetic field, the state with positive spin polarization can be realized as a metastable state in the high-density region of neutron matter, which, as the density decreases (going from the interior to the outer regions of a magnetar), changes at the threshold density $\varrho_{\rm th}$ into a thermodynamically stable state with negative spin polarization. In this study, we have considered the zero temperature case, but as was shown in Ref.~\cite{PG}, the influence of finite temperatures on spin polarization remains moderate in the Skyrme model, at least up to the temperatures relevant for protoneutron stars, and, hence, one can expect that the considered scenario will be preserved at finite temperatures as well. The possible existence of a metastable state with positive spin polarization will affect the neutrino opacities of neutron star matter in a strong magnetic field and, hence, will change the cooling rates of a neutron star compared to those in the scenario with the majority of neutron spins oriented opposite to the magnetic field~\cite{PG2}. The calculations of the neutron spin polarization parameter and energy per neutron show that the influence of the magnetic field remains small at field strengths up to $10^{17}$~G. Note that in Ref.~\cite{PG} the analysis was also carried out for the Gogny effective NN interaction (D1S, D1P parametrizations) up to densities $4\varrho_0$.
Since for the D1S parametrization there is no spontaneous ferromagnetic transition in neutron matter at any relevant density, and for the D1P parametrization this transition occurs at a density larger than $7\varrho_0$~\cite{LVRP}, no sign of a ferromagnetic transition at a strong magnetic field was found in Ref.~\cite{PG} up to densities $4\varrho_0$ for these Gogny forces. According to our analysis, one can expect that the metastable states with positive spin polarization in neutron matter at a strong magnetic field could appear at densities larger than $7\varrho_0$ for the D1P parametrization, while the scenario with the only branch of solutions corresponding to negative spin polarization would be realized for the D1S force. It is also worth noting that in the present study neutron star matter was approximated by pure neutron matter. This approximation allows one to obtain a qualitative description of the spin polarization phenomena and should be considered a first step towards a more realistic description of neutron stars taking into account a finite fraction of protons under the charge neutrality and beta-equilibrium conditions. In particular, some admixture of protons can affect the onset densities of enhanced polarization in neutron star matter with the Skyrme interaction. \section*{ACKNOWLEDGEMENTS} J.Y. was supported by grant R32-2008-000-10130-0 from the WCU project of MEST and NRF through Ewha Womans University.
\section{Introduction} Performing real-world tasks such as cooking meals or assembling furniture requires an agent to determine long-term strategies. This is often formulated as a \emph{planning} problem. In traditional AI literature, symbolic planners have shown remarkable capability in solving high-level reasoning problems by planning in human-interpretable symbolic spaces~\cite{fikes1971strips,mcdermott1998pddl}. However, classical symbolic planning methods typically abstract away perception with ground-truth symbols and rely on pre-defined planning domains to specify the causal effects of actions. These assumptions significantly restrict the applicability of these methods in real environments, where states are high-dimensional (e.g., color images) and it is tedious, if not impossible, to specify a detailed planning domain. A solution to planning without relying on predefined action models and symbols is to learn to \textit{plan from observations}. Recent works have shown that deep networks can capture the environment dynamics directly in the observation space~\cite{oh2015action,finn2017deep,agrawal2016learning} or a learned latent space~\cite{watter2015embed,hafner2018learning,kurutach2018learning}. With a learned dynamics model, these methods can plan a sequence of actions towards a desired goal through forward prediction. However, these learned models are far from accurate in long-term predictions due to compounding errors over multiple steps. Moreover, due to the action-conditioned nature of these models, they are bound to use myopic sampling-based action selection for planning~\cite{agrawal2016learning,finn2017deep}.
Such a strategy may be sufficient for simple short-horizon tasks, e.g., pushing an object to a location, but it falls short in tasks that involve high-level decision making over a longer timescale, e.g., making a meal. In this work, we aim to combine the merits of planning from observation with the high-level reasoning ability and interpretability of classical planners. We propose a learning-to-plan method that can generate a long-term plan towards a symbolic task goal from high-dimensional observation inputs. As discussed above, the key challenge is that planning in either symbolic or observation space requires accurate forward models that are hard to obtain, namely symbolic planning domains and observation-space dynamics models. Instead, we propose to \emph{plan backward} in a symbolic space \emph{conditioned on the current observation}. Similar to forward planning, backward planning in symbolic space (formally known as regression planning~\cite{waldinger1975achieving,korf1987planning} or pre-image backchaining~\cite{lozano1984automatic,kaelbling2011hierarchical,kaelbling2017pre}) also relies on a planning domain to expand the search space starting from the final goal until the current state is reached. Our key insight is that by conditioning on the current observation, we can train a planner to directly predict a \emph{single path} in the search space that connects the final goal to the current observation. The resulting plan is a sequence of intermediate goals that can be used to guide a low-level controller to interact with the environment and achieve the final task goal. We present \textit{Regression Planning Networks (RPN)}, a neural network architecture that learns to perform regression planning (backward planning) in a symbolic planning space conditioned on environment observations.
Central to the architecture is a \emph{precondition network} that takes as input the current observation and a symbolic goal and iteratively predicts a sequence of intermediate goals in reverse order. In addition, the architecture exploits the compositional structure of the symbolic space by modeling the dependencies among symbolic subgoals with a \emph{dependency network}. Such dependency information can be used to decompose a complex task goal into simpler subgoals, an essential mechanism for learning complex plans and generalizing to new task goals. Finally, we present an algorithm that combines these networks to perform regression planning and invoke low-level controllers for executing the plan in the environment. An overview of our method is illustrated in Fig.~\ref{fig:pull}. We train RPN with supervision from task demonstration data. Each demonstration consists of a sequence of intermediate symbolic goals, their corresponding environment observations, and a final symbolic task goal. An advantage of our approach is that the trained RPN models can compose seen plans to solve novel tasks that are outside the training dataset. As we show in the experiments, when trained to cook two dishes with fewer than three ingredients, RPN can plan for a three-course meal with more ingredients with near-optimal performance. In contrast, we observe that the performance of methods that lack the essential components of our RPN degrades significantly when facing new tasks. We demonstrate the capabilities of RPN in solving tasks in two domains: a grid world environment that illustrates the essential features of RPN, and a 3D kitchen environment where we tackle the challenges of longer-horizon tasks and increased complexity of visual observations.
\begin{figure}[t] \begin{minipage}[c]{0.38\textwidth} \caption{ Regression (backward) planning with Regression Planning Networks (RPN): Starting from the final symbolic goal $g$, our learning-based planner iteratively predicts a sequence of intermediate goals conditioned on the current observation $o_t$ until it reaches a goal $g^{(-T)}$ that is reachable from the current state using a low-level controller.} \label{fig:pull} \end{minipage}\hfill \raggedleft \begin{minipage}[c]{0.58\textwidth} \includegraphics[width=1.\textwidth]{figures/pull_v3.pdf} \end{minipage} \end{figure} \section{Related Work} Although recent, the body of prior work on learning to plan from observation is already large. Methods in model-based RL~\cite{oh2015action,agrawal2016learning,finn2017deep,hafner2018learning} have focused on building action-conditioned forward models and performing sampling-based planning. However, learning to make accurate predictions with high-dimensional observations is still challenging~\cite{oh2015action,watter2015embed,finn2017deep,hafner2018learning}, especially for long-horizon tasks. Recent works have proposed to learn structured latent representations for planning~\cite{kurutach2018learning,corneil2018efficient,asai2018classical}. For example, Causal InfoGAN~\cite{kurutach2018learning} learns a latent binary representation that can be used jointly with a graph-planning algorithm. However, similar to model-based RL, learning such representations relies on reconstructing the full input space, which can be difficult to scale to challenging visual domains. Instead, our method directly plans in a symbolic space, which allows more effective long-term planning and interpretability, while still taking high-dimensional observations as input. Our work is also closely related to Universal Planning Networks~\cite{srinivas2018universal}, which propose to learn planning computation from expert demonstrations.
However, their planning-by-gradient-descent scheme is not ideal for a non-differentiable symbolic action space, and they require detailed action trajectories as training labels, which are agent-specific and can be hard to obtain in the case of human demonstrations. Our method does not require an explicit action space and learns directly from high-level symbolic goal information, which makes it agent-agnostic and adaptive to different low-level controllers. Our method is inspired by classical symbolic planning, especially a) goal-regression planning~\cite{waldinger1975achieving} (also known as pre-image backchaining~\cite{lozano1984automatic,kaelbling2011hierarchical,kaelbling2017pre}), where the planning process regresses from a goal instead of progressing towards it, and b) the idea of partial-order planning~\cite{korf1987planning,weld1994introduction}, where the ordering of the subgoals within a goal is exploited to reduce search complexity. However, these methods require (1) a complete specification of a symbolic planning domain~\cite{fikes1971strips} and (2) the initial symbolic state, either given or obtained from a highly accurate symbol grounding module~\cite{harnad1990symbol,kaelbling2013integrated}; both can be hard to obtain for real-world task domains. In contrast, our method does not perform explicit symbolic grounding and can generate a plan directly from a high-dimensional observation and a task goal. Our network architecture design is inspired by recursive networks for natural language syntax modeling~\cite{irsoy2014deep,socher2013recursive} and program induction~\cite{reed2016neural,cai2017making}. Given a goal and an environment observation, our RPN predicts a set of predecessor goals that need to be completed before achieving the goal. The regression planning process is then to apply RPN recursively by feeding the predicted goals back to the network until a predicted goal is reachable from the current observation.
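This recursion can be sketched in a few lines. The `nets` object and its method names below are our hypothetical stand-ins for the learned modules, not an API from the paper:

```python
def regression_plan(obs, goal, nets, max_depth=10):
    """Schematic regression-planning recursion (names are illustrative).

    `nets` is any object exposing two callables:
      nets.reachable(obs, g)    -> bool, stand-in for a learned reachability test
      nets.precondition(obs, g) -> g',  stand-in for learned precondition prediction
    Starting from the final goal, we regress until a goal deemed reachable by a
    low-level controller is found, then return the goals in execution order.
    """
    g, regressed = goal, []
    for _ in range(max_depth):
        regressed.append(g)
        if nets.reachable(obs, g):
            # reversed: execute from the nearest intermediate goal to the final goal
            return list(reversed(regressed))
        g = nets.precondition(obs, g)  # regress one step further back from the goal
    raise RuntimeError("no reachable intermediate goal within the depth budget")
```

With a stub that regresses, say, `On(apple, table)` to `InHand(apple)` to `IsOpen(fridge)` and deems only the last reachable, the function returns the goals in the order a controller would attempt them.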
\section{Problem Definition and Preliminaries} \label{sec:preliminary} \subsection{Zero-shot Task Generalization with a Hierarchical Policy} The goal of zero-shot task generalization is to achieve task goals that are not seen during training~\cite{andreas2017modular,oh2017zero,sohn2018multitask}. Each task goal $g$ belongs to a set of valid goals $\mathcal{G}$. We consider an environment with transition probability $\mathcal{O}\times \mathcal{A} \times \mathcal{O} \rightarrow \mathbb{R}$, where $\mathcal{O}$ is a set of environment observations and $\mathcal{A}$ a set of primitive actions. Given a symbolic task goal $g$, the objective of an agent is to arrive at $o \in \mathcal{O}_g$, where $\mathcal{O}_g \subset \mathcal{O}$ is the set of observations where $g$ is satisfied. We adopt a hierarchical policy setup where given a final goal $g$ and the current observation $o_t$, a high-level policy $\mu: \mathcal{O}\times \mathcal{G} \rightarrow \mathcal{G}$ generates an intermediate goal $g' \in \mathcal{G}$, and a low-level policy $\pi: \mathcal{O}\times \mathcal{G} \rightarrow \mathcal{A}$ acts in the environment to achieve the intermediate goal. We assume a low-level policy can only perform short-horizon tasks. In this work, we focus on learning an effective high-level policy $\mu$ and assume the low-level policy can be either a pre-trained agent or a motion planner. For evaluation we consider a zero-shot generalization setup~\cite{oh2017zero,sohn2018multitask} where only a subset of the task goals, $\mathcal{G}_{train} \subset \mathcal{G}$, is available during training, and the agent has to achieve a disjoint set of test task goals $\mathcal{G}_{test} \subset \mathcal{G}$, where $\mathcal{G}_{train} \cap \mathcal{G}_{test} = \emptyset$. \subsection{Regression Planning} \label{ssec:rp} In this work, we formulate the high-level policy $\mu$ as a learning-based regression planner. 
Goal-regression planning~\cite{waldinger1975achieving,korf1987planning,lozano1984automatic,kaelbling2011hierarchical,kaelbling2017pre} is a class of symbolic planning algorithms in which the planning process runs backward from the goal instead of forward towards the goal. Given an initial symbolic state, a symbolic goal, and a planning domain that defines actions by their pre-conditions and post-effects (i.e., \emph{action operators}), a regression planner starts by considering all action operators that might lead to the goal and in turn expands the search space by enumerating all preconditions of the action operators. The process repeats until the current symbolic state satisfies the preconditions of an operator. A plan is then a sequence of action operators that connects the goal back to the current state. An important distinction in our setup is that, because we do not assume access to these high-level action operators (from a symbolic planning domain) or the current symbolic state, we cannot explicitly perform such an exhaustive search. Instead, our model learns to predict the preconditions that need to be satisfied in order to achieve a goal, conditioned on the current environment observation. Such ability enables our method to perform regression planning without explicit action operators, planning domain definition, or symbolic states. We now define the essential concepts of regression planning adopted by our method. Following~\cite{kaelbling2011hierarchical}, we define a goal $g \in \mathcal{G}$ as the conjunction of a set of logical \emph{atoms}, each consisting of a \emph{predicate} and a list of object arguments, e.g., \texttt{On(pot, stove)$\wedge \neg$IsOpen(fridge)}. We denote each atom in $g$ a \textbf{subgoal} $g_i$. A subgoal can also be viewed as a goal with a single atom.
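To make this goal format concrete, a conjunction of atoms could be encoded with a small data structure like the following (an illustrative sketch; the names are ours, not an API from the paper):

```python
from typing import NamedTuple, Tuple

class Atom(NamedTuple):
    """A logical atom: a predicate applied to object arguments, possibly negated."""
    predicate: str
    args: Tuple[str, ...]
    positive: bool = True

# The goal On(pot, stove) AND NOT IsOpen(fridge) as a conjunction of atoms;
# each atom plays the role of one subgoal g_i.
goal = (
    Atom("On", ("pot", "stove")),
    Atom("IsOpen", ("fridge",), positive=False),
)
```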
We define the \textbf{preconditions} of a goal $g$ or a subgoal $g_i$ as another \textbf{intermediate goal} $g' \in \mathcal{G}$ that needs to be satisfied before attempting $g$ or $g_i$, conditioned on the environment observation. An intuitive example is that \texttt{Open(fridge)} is a precondition of \texttt{InHand(apple)} if the fridge is closed and the apple is in the fridge. \section{Method} \label{sec:method} Our primary contribution is to introduce a learning formulation of regression planning and propose Regression Planning Networks (RPN) as a solution. Here, we summarize the essential regression planning steps to be posed as learning problems, introduce the state representation used by our model, and explain our learning approach to solving regression planning. \textbf{Subgoal Serialization:} The idea of subgoal serialization stems from partial-order planning~\cite{korf1987planning,weld1994introduction}, where a planning goal is broken into subgoals, and the plans for each subgoal can be combined to reduce the search complexity. The challenge is to execute the subgoal plans in an order such that a plan does not undo an already achieved subgoal. The process of finding such orderings is called subgoal serialization~\cite{korf1987planning}. Our method explicitly models the dependencies among subgoals and formulates subgoal serialization as a directed graph prediction problem (Sec.~\ref{ssec:ss}). This is an essential component for our method to learn to achieve complex goals and generalize to new goals. \textbf{Precondition Prediction: } Finding the predecessor goals (preconditions) that need to be satisfied before attempting to achieve another goal is an essential step in planning backward. As discussed in Sec.~\ref{ssec:rp}, symbolic regression planners rely on a set of high-level action operators defined in a planning domain to enumerate valid preconditions.
The challenge here is to directly predict the preconditions of a goal given an environment observation without assuming a planning domain. We formulate the problem of predicting preconditions as a graph node classification problem in Sec.~\ref{ssec:pc}. The overall regression planning process is then as follows: Given a task goal, we (1) decompose the final task goal into subgoals and find the optimal ordering of completing the subgoals (subgoal serialization), (2) predict the preconditions of each subgoal, and (3) set the preconditions as the final goal and repeat (1) and (2) recursively. We implement the process with a recursive neural network architecture and an algorithm that invokes the networks to perform regression planning (Sec.~\ref{sec:algo}). \textbf{Object-Centric Representation:} To bridge between the symbolic representation of the goals and the raw observations, we adopt an object-centric state representation~\cite{wang2018deep,janner2018reasoning,wu2017learning}. The general form is that each object is represented as a continuous-valued feature vector extracted from the observation. We extend such representation to $n$-ary relationships among the objects, where each relationship has its corresponding feature, akin to a scene-graph feature representation~\cite{xu2017scene}. We refer to objects and their $n$-ary relationships as \emph{entities} and their features as \emph{entity features}, $e$. Such factorization reflects that each goal atom $g_i$ indicates the \emph{desired} symbolic state of an entity in the scene. For example, the goal \texttt{On(A, B)} indicates that the desired state of the binary entity \texttt{(A, B)} is A on top of B.
We assume that either the environment observation is already in such an entity-centric representation, or there exists a perception function $F$ that maps an observation $o$ to a set of entity features $e_t^i \in \mathbb{R}^{D}, i \in \{1...N\}$, where $N$ is the number of entities in an environment and $D$ is the feature dimension. As an example, $F$ can be a 2D object detector, and the features are simply the resulting image patches. \subsection{Learning Subgoal Serialization} \label{ssec:ss} We pose subgoal serialization as a learning problem. We say that a subgoal $g_i$ \emph{depends on} subgoal $g_j$ if $g_j$ needs to be completed before attempting $g_i$. For example, \texttt{InHand(apple)} depends on \texttt{IsOpen(Fridge)} if the apple is in the fridge. The process of \emph{subgoal serialization}~\cite{korf1987planning} is to find the optimal order to complete all subgoals by considering all dependencies among them. Following the taxonomy introduced by Korf \textit{et al}.~\cite{korf1987planning}, we consider four cases: we say that a set of subgoals is \emph{independent} if the subgoals can be completed in any order and \emph{serializable} if they can be completed in a fixed order. Often a subset of the subgoals needs to be completed together, e.g., \texttt{Heated(pan)} and \texttt{On(stove)}, in which case these subgoals are called a \emph{subgoal block} and $g$ is \emph{block-serializable}. \emph{Non-serializable} is a special case of \emph{block-serializable} where the entire $g$ is a subgoal block. To see how to pose subgoal serialization as a learning problem, we note that the dependencies among a set of subgoals can be viewed as a \emph{directed graph}, where the nodes are individual subgoals and the directed edges are dependencies. Dependence and independence between a pair of subgoals can be expressed as a directed edge and its absence. We express a subgoal block as a \emph{complete subgraph} and likewise a non-serializable goal as a complete graph.
The dependence between a subgoal block and a subgoal or another subgoal block can then be an edge between \emph{any} node in the subgraph and an outside node or any node in another subgraph. For simplicity, we refer to both subgoals and subgoal blocks interchangeably from now on. Now, we can formulate the subgoal serialization problem as a graph prediction problem. Concretely, given a goal $g=\{g_1, g_2, ..., g_K\}$ and the corresponding entity features $e^g_t=\{e^{g_1}_t, e^{g_2}_t, ... e^{g_K}_t\}$, our subgoal dependency network is then: \begin{align} f_{dependency}(e^g_t, g) = \phi_\theta(\{[e^{g_i}_t, e^{g_j}_t, g_i, g_j]\}_{i,j=1}^K) = \{dep(g_i, g_j)\}^{K}_{i,j=1}, \end{align} where $dep(g_i, g_j) \in [0, 1]$ is a score indicating whether $g_i$ depends on $g_j$. $\phi_\theta$ is a learnable network and $[\cdot, \cdot]$ is concatenation. We describe a subgoal serialization algorithm in Sec.~\ref{sec:algo}. \subsection{Learning Precondition Prediction} \label{ssec:pc} We have discussed how to find the optimal order of completing a set of subgoals. The next step is to find the \emph{preconditions} of a subgoal or subgoal block, an essential step in planning backward. The preconditions of a subgoal form another goal that needs to be completed before attempting the subgoal at hand. To formulate this as a learning problem, we note that the subgoal and its preconditions may not share the same set of entities. For example, the precondition of \texttt{InHand(apple)} may be \texttt{IsOpen(fridge)}. Hence a subgoal may map to any subgoals grounded on any entities in the scene. To realize such intuition, we formulate the precondition problem as a \emph{node classification problem}~\cite{kipf2016semi}, where each node corresponds to a pair of goal predicate and entity in the scene.
We consider three target classes: \texttt{True} and \texttt{False} correspond to the logical state of the goal predicate, and a third \texttt{Null} class indicates that the predicate is not part of the precondition. Concretely, given a goal or subgoal set $g=\{g_1, ... , g_K\}$ and all entity features $e_t$, the precondition network is then: \begin{align} f_{precondition}(e_t, g) = \phi_\psi(\Delta(e_t, g)) = g^{(-1)}, \end{align} where $g^{(-1)}$ is the predicted precondition of $g$, and $\phi_\psi$ is a learnable network. Note that $g$ may only map to a subset of $e_t$. $\Delta$ fills the missing predicates with the \texttt{Null} class before concatenating $e_t$ and $g$. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{figures/model_v1.pdf} \caption{Given a task goal, $g$, and the current observation, $o_{t}$, RPN performs subgoal serialization (Algorithm~\ref{alg:serialize}) to find the highest priority subgoal and estimates whether the subgoal is reachable by one of the low-level controllers. If not, RPN predicts the preconditions of the subgoal and recursively uses them as the new goal for regression planning.} \label{fig:model} \vspace{-0.4cm} \end{figure} \subsection{Learning Subgoal Satisfaction and Reachability} Subgoal serialization determines the order of completing subgoals, but some of the subgoals might have already been completed. Here we use a learnable module to determine whether a subgoal is already satisfied. We formulate the subgoal satisfaction problem as a single-entity classification problem because whether a subgoal is satisfied does not depend on any other entity features. Similarly, we use another module to determine whether a subgoal is reachable by a low-level controller from the current observation. We note that the reachability of a subgoal by a low-level controller may depend on the state of other entities.
For example, whether we can launch a grasping planner to fetch an apple from the fridge depends on whether the fridge door is open. Hence we formulate it as a binary classification problem conditioned on all entity features. Given a goal $g$ and its subgoals $\{g_1, ... , g_K\}$, the models can be expressed as: \begin{align} f_{satisfied}(e^{g_i}_t, g_i) = \phi_\alpha([e^{g_i}_t, g_i]) = sat(g_i) \;\;\; f_{reachable}(e_t, g) = \phi_\beta([e_t, g]) = rec(g), \end{align} where $sat(g_i) \in [0, 1]$ indicates if a subgoal $g_i \in g$ is satisfied, and $rec(g) \in [0, 1]$ indicates if the goal $g$ is reachable by a low-level controller given the current observation. \input{algo1.tex} \subsection{Regression Planning with RPN} \label{sec:algo} Having described the essential components of RPN, we introduce an algorithm that invokes the network at inference time to generate a plan. Given the entity features $e_t$ and the final goal $g$, the first step is to serialize the subgoals. We start by finding all unsatisfied subgoals with $f_{satisfied}(\cdot)$ and constructing the input nodes for $f_{dependency}(\cdot)$, which in turn predicts a directed graph structure. Then we use the Bron-Kerbosch algorithm~\cite{akkoyunlu1973enumeration} to find all complete subgraphs and construct a DAG among all subgraphs. Finally, we use topological sorting to find the subgoal block that has the highest priority to be completed. The subgoal serialization subroutine is summarized in Algorithm~\ref{alg:serialize}. Given a subgoal, we first check if it is \emph{reachable} by a low-level controller with $f_{reachable}(\cdot)$, and invoke the controller with the subgoal if it is deemed reachable. Otherwise, $f_{precondition}(\cdot)$ is used to find the preconditions of the subgoal, which are set as the new goal. The overall process is illustrated in Fig.~\ref{fig:model} and is summarized as an algorithm in the Appendix.
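The serialization pipeline described above can be sketched in a few lines of Python. This is our own minimal illustration, not the released code: it thresholds the pairwise scores $dep(g_i, g_j)$ into a directed graph over the unsatisfied subgoals and orders them by a Kahn-style topological sort; for brevity, the Bron-Kerbosch clique-merging step is replaced by a fallback that treats mutually dependent subgoals as a single block.

```python
# Minimal sketch of subgoal serialization (illustrative, not the paper's code).
# dep_scores[i][j] ~ dep(g_i, g_j): the predicted score that g_i depends on g_j.

def serialize_subgoals(dep_scores, satisfied, threshold=0.5):
    """Return indices of unsatisfied subgoals in a dependency-respecting order."""
    n = len(satisfied)
    pending = [i for i in range(n) if not satisfied[i]]
    # g_i depends on g_j means g_j must be completed before g_i
    deps = {i: {j for j in pending if j != i and dep_scores[i][j] > threshold}
            for i in pending}
    order = []
    while pending:
        # subgoals whose pending dependencies are all resolved (Kahn's algorithm)
        ready = [i for i in pending if not (deps[i] & set(pending))]
        if not ready:
            # cyclic dependencies: treat the remaining subgoals as one block
            ready = list(pending)
        for i in ready:
            order.append(i)
            pending.remove(i)
    return order
```

For instance, with $dep(g_0, g_1) = 0.9$ and $g_2$ already satisfied, `serialize_subgoals([[0, 0.9, 0], [0, 0, 0], [0, 0, 0]], [False, False, True])` returns `[1, 0]`: the prerequisite $g_1$ is scheduled before $g_0$.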
\subsection{Supervision and Training} \label{sec:training} \noindent\textbf{Supervision from demonstrations:} We parse the training labels from task demonstrations generated by a hard-coded expert. A task demonstration consists of a sequence of intermediate goals $\{g^{(0)}, ... g^{(T)}\}$ and the corresponding environment observations $\{o^{(0)}, ..., o^{(T)}\}$. In addition, we also assume the dependencies among the subgoals $\{g_0, .. g_N\}$ of a goal are given in the form of a directed graph. A detailed discussion on training supervision is included in the Appendix. % \noindent\textbf{Training:} We train all sub-networks with full supervision. Due to the recursive nature of our architecture, a long planning sequence can be optimized in parallel by considering the intermediate goals and their preconditions independently of the planning history. More details are included in the Appendix. % \section{Experiments} Our experiments aim to (1) illustrate the essential features of RPN, especially the effect of regression planning and subgoal serialization, (2) test whether RPN can achieve zero-shot generalization to new task instances, and (3) test whether RPN can directly learn from visual observation inputs. We evaluate our method on two environments: an illustrative Grid World environment~\cite{gym_minigrid} that dissects different aspects of the generalization challenges (Sec.~\ref{ss:gridworld}), and a simulated Kitchen 3D domain (Sec.~\ref{ss:kitchen3d}) that features complex visual scenes and long-horizon tasks in BulletPhysics~\cite{BulletPhysics}. We evaluate \textbf{RPN} including all components introduced in Sec.~\ref{sec:method} to perform regression planning with the algorithm of Sec.~\ref{sec:algo} and compare it against the following baselines and ablation versions: 1) \textbf{E2E}, a reactive planner adopted from Pathak \textit{et al}.~\cite{pathak2018zero} that learns to plan by imitating the expert trajectory.
Because we do not assume a high-level action space, we train the planner to directly predict the next intermediate goal conditioned on the final goal and the current observation. The comparison to E2E is important to understand the effect of the inductive biases embedded in RPN. 2) \textbf{SS-only} shares the same network architecture as RPN, but instead of performing regression planning, it directly plans the next intermediate goal based on the highest-priority subgoal produced by Algorithm~\ref{alg:serialize}. This baseline evaluates in isolation the capability of our proposed subgoal serialization to decompose complex task goals into simpler subgoals. Similarly, in 3) \textbf{RP-only} we replace subgoal serialization (Algorithm~\ref{alg:serialize}) with a single network, measuring the capabilities of the backward planning alone. \subsection{Grid World} \label{ss:gridworld} In this environment we remove the complexity of learning the visual features and focus on comparing the planning capabilities of RPN and the baselines. The environment is the 2D grid world built on~\cite{gym_minigrid} (see Table~\ref{table:gridworld}, left). The state space is factored into object-centric features, which consist of object types (door, key, etc.), object states (e.g., door is open), object colors (six unique colors), and their locations relative to the agent. The goals are provided as a grounded symbolic expression as described in Sec.~\ref{sec:preliminary}, e.g., \texttt{Open(door\_red)$\wedge$Open(door\_blue)}. Both the expert demonstrator and the low-level controllers are $A^{*}$-based search algorithms. Further details on the training and evaluation setup are in the Appendix. In the grid world we consider two domains: \noindent\textbf{DoorKey:} Six pairs of doors and keys, where a locked door can only be unlocked by the key of the same color. Doors are randomly initialized to be locked or unlocked.
The training tasks consist of opening $D=2$ randomly selected doors (the other doors can be in any state). The evaluation tasks consist of opening $D\in \{4, 6\}$ doors, measuring the generalization capabilities of the methods to deal with new tasks composed of multiple instances of similar subtasks. The key to solving tasks involving more than two doors is to model opening each door as an independent subgoal. \noindent\textbf{RoomGoal:} Six rooms connected to a central area by possibly locked and closed doors. The training data is evenly sampled from two tasks: \textbf{k-d} (key-door) is to open a randomly selected (possibly locked) door without getting into the room. \textbf{d-g} (door-goal) is to reach a tile by getting through a closed but unlocked door. In evaluation, the agent is asked to reach a specified tile by getting through a \emph{locked} door (\textbf{k-d-g}), measuring the capability of the methods to compose plans learned from the two training tasks to form a longer unseen plan. \input{results_gridworld.tex} \textbf{Results:} The results for both domains are shown in Table~\ref{table:gridworld}, right. In \textit{DoorKey}, all methods except E2E almost perfectly learn to reproduce the training tasks. The performance drops significantly for the three baselines when increasing the number of doors, $D$. RP-only degrades significantly due to its inability to decompose the goal into independent parts, while the performance of SS-only degrades because, in addition to interpreting the goal, it also needs to determine if a key is needed and the color of the key to pick. However, it still achieves a 21\% success rate at $D=6$. RPN maintains a 64\% success rate even for $D=6$, although it has been trained with very few samples where all six doors are initialized as closed or locked. Most of the failures (21\% of the episodes) are due to RPN not being able to predict any precondition while no subgoals are reachable (full error breakdown in the Appendix).
In \textit{RoomGoal}, all methods almost perfectly learn the two training tasks. In particular, E2E achieves perfect performance, but it only achieves a 3.2\% success rate in the k-d-g long evaluation task. In contrast, both RP-only and RPN achieve optimal performance also on the k-d-g evaluation task, showing that our regression planning mechanism is sufficient to solve new tasks by composing learned plans, even when the planning steps connecting the plans have never been observed during training. \subsection{Kitchen 3D} \label{ss:kitchen3d} This environment features complex visual scenes and very long-horizon tasks composed of tabletop cooking and sorting subtasks. We test in this environment the full capabilities of each component in RPN, and whether the complete regression planning mechanism can solve previously unseen task instances without dropping performance while coping directly with high-dimensional visual inputs. \begin{wrapfigure}{r}{0.28\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/kitchen_setup.png} \caption{The Kitchen 3D environment. An agent (not shown) is tasked to prepare a meal with a variable number of dishes and ingredients.} \label{fig:kitchen} \end{wrapfigure} \textbf{Cooking:} The task is for a robotic agent to prepare a meal consisting of a variable number of dishes, each involving a variety of ingredients and different cookwares. As shown in Fig.~\ref{fig:kitchen}, the initial layout consists of a set of ingredients and plates randomly located at the center of the workspace surrounded by (1) a stove, (2) a sink, (3) two cookwares, and (4) three serving areas. There are two types of ingredients, fruits and vegetables, and six ingredients in total. An ingredient needs to be cleaned at the sink before cooking. An ingredient can be cooked by setting up the correct cookware at the stove, activating the stove, and placing the ingredient on the cookware. Fruits can only be cooked in the small pan and vegetables in the big pot.
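The cooking rules above induce exactly the kind of precondition structure that regression planning exploits. As a toy illustration (the predicate table below is our own paraphrase of the stated rules, not the environment's actual goal representation), backward chaining over such a table recovers the full preparation sequence for one ingredient:

```python
# Toy precondition table paraphrasing the cooking rules above
# (predicate names are illustrative, not the environment's actual symbols).
PRECONDITIONS = {
    "Cooked(x)": ["Cleaned(x)", "On(x, cookware)", "Activated(stove)"],
    "Cleaned(x)": ["On(x, sink)", "Activated(sink)"],
    "On(x, cookware)": ["On(cookware, stove)"],
}

def regress(goal, table):
    """Depth-first regression: emit every precondition before the goal itself."""
    plan = []
    for precondition in table.get(goal, []):
        plan.extend(regress(precondition, table))
    plan.append(goal)
    return plan
```

Here `regress("Cooked(x)", PRECONDITIONS)` places the sink, cookware, and stove subgoals before `Cooked(x)` itself, mirroring the kind of backward planning trace that RPN produces.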
% \begin{figure}[t] \centering \includegraphics[width=1.0\linewidth]{figures/kitchen_qual.png} \vspace{-0.5cm} \caption{ Visualization of RPN solving a sample cooking task with one dish and two ingredients ($I=2$, $D=1$): (Top) Visualization of the environment after a goal is achieved (zoom in to see details), and (Bottom) the regression planning trace generated by RPN. An additional video illustration is in the supplementary material.} \vspace{-0.3cm} \label{fig:kitchen} \end{figure} The environment is simulated with~\cite{BulletPhysics}. A set of low-level controllers interact with objects to complete a subgoal; e.g., \texttt{On(tomato, sink)} invokes an RRT-based motion planner~\cite{lavalle1998rapidly} to pick up the tomato and place it in the sink. For the object-centric representation, we assume access to object bounding boxes of the input image $o_t$ and use a CNN-based encoder to encode individual image patches into $e_t$. The encoder is trained end-to-end with the rest of the model. More details on the network architectures and evaluation setup are available in the Appendix. We focus on evaluating the ability to generalize to tasks that involve a different number of dishes ($D$) and ingredients ($I$). The training tasks are to cook randomly chosen $I=3$ ingredients into $D=2$ dishes. The evaluation tasks are to cook meals with $I \in \{2, ..., 6\}$ ingredients and $D \in \{1, ..., 3\}$ dishes. In addition, cooking steps may vary depending on the order of cooking each ingredient; e.g., the agent has to set up the correct cookware before cooking an ingredient, or turn on the stove/sink if these steps have not already been done for the previous ingredient. \input{results_kitchen.tex} \textbf{Results: } As shown in Table~\ref{table:kitchen}, RPN is able to achieve near-optimal performance on all tasks, showing that our method achieves strong zero-shot generalization even with visual inputs.
In comparison, E2E performs poorly on both training and evaluation tasks, and RP-only achieves high accuracy on training tasks, but its performance degrades significantly as the generalization difficulty increases. This shows that regression planning is effective in modeling long-term plans but generalizes poorly to new task goals. SS-only performs worse than RP-only on training tasks, but it is able to maintain reasonable performance across evaluation tasks by decomposing task goals into subgoals. In Fig.~\ref{fig:kitchen}, we visualize a sample planning trajectory generated by RPN on a two-ingredient, one-dish task ($I=2, D=1$). RPN is able to resolve the optimal order of completing each subgoal. In this case, RPN chooses to cook the cabbage first and then the banana. Note that the steps for cooking a banana are different from those for cooking cabbage: the agent does not have to activate the stove and the sink, but it has to set a pan on the stove in addition to the pot because fruits can only be cooked with the pan. RPN is able to correctly generate different plans for the two ingredients and complete the task. \section{Conclusions} We present Regression Planning Networks, a learning-to-plan method that plans backward in an abstract symbolic space conditioned on high-dimensional environment observation inputs. We show that each component in RPN plays an important role in learning complex long-horizon tasks and in generalizing to new task goals. In future work, we plan to extend the regression planning mechanism to more complex but structured planning spaces such as geometric planning~\cite{kaelbling2013integrated} (e.g., including the object pose as part of a goal) and programs~\cite{reed2016neural}. We also intend to learn high-level plans from visual demonstration datasets such as instructional videos~\cite{gu2018ava} to extend to real-world tasks.
\section{Appendix} \subsection{Architecture} \label{sec:architecture} Below we provide the details of our model sizes and architectures in the Kitchen 3D environment. The image encoder architecture is shared across all models. For Grid World, we use the same architecture but reduce all layer sizes by a factor of two. We use ReLU for activation. We train all models in all experiments with a batch size of 128 and the ADAM optimizer~\cite{kingma2014adam} with learning rate $10^{-3}$ on a single GTX 1080 Ti GPU. We run validation on a hold-out set every epoch. The models are implemented with PyTorch~\cite{paszke2017automatic}. \begin{table}[h] \small \begin{tabular}{r|c|c|c|c} \hline \multicolumn{1}{l|}{} & $f_{precondition}$ & $f_{dependency}$ & $f_{reachable}$ & $f_{satisfied}$ \\ \hline RPN & MLP(128, 128, 128) & MLP(128, 128, 128) & \multicolumn{1}{c|}{MLP(128, 64, 64)} & MLP(128, 128, 128) \\ \hline RP-only & Same as RPN & N/A & Same as RPN & N/A \\ \hline SS-only & N/A & Same as RPN & N/A & Same as RPN \\ \hline E2E & \multicolumn{4}{c}{MLP(256, 256, 256)} \\ \hline Image encoder & \multicolumn{4}{c}{\begin{tabular}[c]{@{}c@{}}{[}Conv2D(k=3, c=64), MaxPool(2, 2),\\ Conv2D(k=3, c=128), MaxPool(2, 2),\\ Conv2D(k=3, c=32), MaxPool(2, 2){]}\end{tabular}} \\ \hline \end{tabular} \end{table} \subsection{Regression Planning Algorithm} Here we summarize the full regression planning algorithm (the subroutine \textsc{SubgoalSerialization} is included in the main text). We set the maximum regression depth to $M=10$ for all experiments. \begin{algorithm}[H] \caption{\textsc{RegressionPlanning}} \label{alg:plan} \begin{algorithmic} \State \textbf{Inputs:} Current entity features $e_t$, final goal $g$, maximum regression depth $M$ \State \textbf{Outputs:} Intermediate goal to be executed by a low-level controller.
\State $i \leftarrow 0$ \While{$i < M$} \State $g' \leftarrow \textsc{SubgoalSerialization}(e_t, g)$ \State $rec(g') \leftarrow f_{reachable}(e_t, g') $ \If{$rec(g') > 0.5$} \State \Return $g'$ \EndIf \State $g \leftarrow f_{precondition}(e_t, g')$ \State $i \leftarrow i + 1$ \EndWhile \end{algorithmic} \end{algorithm} \subsection{Experiment Details} \subsubsection{Grid World} \textbf{Environment:} The environment is built on~\cite{gym_minigrid}, where an agent moves around a 2D grid to interact with objects. There are three types of objects in our setup: doors, keys, and tiles indicating goal locations, with six unique colors for each object type. Doors can be in one of three states: (1) open, (2) closed, (3) closed and locked. A door can only be unlocked by the key of the same color, and a key can be used only once. Both the expert demonstrator and the low-level controllers are $A^{*}$-based search algorithms. A low-level controller can be invoked to execute subgoals, e.g., \texttt{Holding(key\_red)}. \textbf{Planning space:} We express task goals as conjunctive expressions such as \texttt{Open(door\_red) $\wedge$ Open(door\_blue)}. The symbolic planning space includes four unary predicates: \{\texttt{Open}, \texttt{Locked}, \texttt{Holding}, \texttt{On}\}, where \texttt{Open} and \texttt{Locked} are for door-related goals, \texttt{Holding} is for picking up keys, and \texttt{On} is for indicating a goal tile that the agent should reach in RoomGoal. \textbf{State representation:} We factor the grid state information into object-centric features. Each feature is the concatenation of a set of one-hot vectors in the order of (1) object type (door, key, tile), (2) color (6 in total), (3) object state (open, closed, locked, holding), and (4) object location relative to the agent. \textbf{Evaluation: }In the DoorKey domain, the task is to open $D$ out of 6 doors.
We generate 5000 $D=2$ training task demonstration trajectories by randomly sampling the state of the doors and the locations of keys and the agent. The evaluation tasks are opening $D\in\{4, 6\}$ doors. For RoomGoal, the training tasks are evenly sampled from (1) opening a possibly locked door without getting into the room (k-d) and (2) opening an unlocked door and reaching a goal tile (d-g). We generate 2500 task demonstrations for each training task and sample evenly from these trajectories during training. All evaluation results are reported by running 1000 trials for each task instance. \subsubsection{Kitchen 3D} \textbf{Environment:} The 3D environment is built using Bullet Physics~\cite{BulletPhysics}. We use a disembodied gripper both for gathering training data and for evaluation. To minimize the effect of the low-level controllers and focus on evaluating the high-level planners, we assume the controllers are macros equipped with RRT motion planners, and the picking and placing movements are coded as setting and removing motion constraints between an object and the gripper. The placing poses are sampled randomly on the target placing surface with a collision checking algorithm provided by Bullet Physics. The controllers in addition have an atomic action to activate the stove and the sink. \textbf{Planning space:} The symbolic planning space includes four predicates: \{\texttt{On}, \texttt{Cooked}, \texttt{Cleaned}, \texttt{Activated}\}, where \texttt{On} indicates a desired binary relationship between a pair of objects, and the others are unary predicates for specifying desired object states. A typical cooking goal specifies the ingredients, the mapping from ingredients to plates, and the serving area where each plate should be placed. \textbf{Rendering: }Because we directly take image observations as input, the state changes of the objects (e.g., an ingredient is cooked, the stove is activated) should be reflected visually.
All such visual state changes are implemented by swapping the mesh textures and/or setting the transparency of the texture. For example, cooking an ingredient darkens its texture, and cleaning an object makes the texture semi-transparent. In the future, we plan to make such visual changes more realistic and to include gradual changes of state instead of instantaneous ones. We set the gripper to be invisible when rendering to minimize the effect of occlusion. \textbf{State representation: }We render the scene into $320 \times 240$ RGB images with PyBullet's built-in renderer. For the object-centric representation, we crop the images into image patches for individual objects using object bounding boxes. We reshape the object bounding boxes to the minimum enclosing square that covers the full object and expand the box sizes by a factor of 1.1 to emulate an object detector. We resize all image crops to $24 \times 24$ and scale the pixels by $\frac{1}{255}$ before feeding them to the image encoder (Sec.~\ref{sec:architecture}) and the rest of the network. The unary entity features are encodings of the individual image crops, and the binary entity features are concatenations of encoding pairs. \textbf{Scene setup: }In the scene setup, the stove, the sink, and the tray that initially holds the cookwares have fixed locations. All ingredients and plates are initialized at random locations on the table. We have six ingredients in total, each falling into one of two types: fruits and vegetables. Fruits can only be cooked with the small pan, and vegetables can only be cooked with the big pot. \textbf{Evaluation: }For training, we generate a total of 10800 trajectories for the task of cooking one dish with two randomly selected ingredients ($I=2, D=1$). Both the choice of plates and the serving areas are random. All evaluation results are reported with 1000 trials for each evaluation task. The standard error is reported over 5 evaluation trials with different random seeds.
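The bounding-box preprocessing just described can be sketched as follows. This is an illustration of the stated steps (minimum enclosing square, $1.1\times$ expansion), not the released code; the subsequent resize to $24 \times 24$ and scaling by $\frac{1}{255}$ are omitted since they depend on the image library used.

```python
# Sketch of the crop-box computation described above (illustrative only):
# minimum enclosing square of a bounding box, expanded by a factor of 1.1
# and clamped to the 320x240 image.

def square_crop_box(x0, y0, x1, y1, img_w=320, img_h=240, expand=1.1):
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    half = max(x1 - x0, y1 - y0) / 2.0 * expand
    return (max(0, int(cx - half)), max(0, int(cy - half)),
            min(img_w, int(cx + half)), min(img_h, int(cy + half)))
```

For a $20 \times 40$ box centered at $(110, 120)$, this yields the $44 \times 44$ square $(88, 98, 132, 142)$.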
\subsection{Obtaining Training Supervision} We use expert demonstration trajectories annotated with intermediate goal information as training data. We envision a few sources of such demonstrations. First, one can provide intermediate goal trajectories and use hard-coded policies to follow the intermediate goals as instructions to generate the corresponding environment observations. We use such a setup in this work and generate training data on simple tasks, from which we train our RPN to generalize to more complex tasks. Second, we also intend to extend our work to use human demonstrations annotated with sub-task information (e.g., the data of the instructional video dataset~\cite{gu2018ava}) as training data. We include in Fig.~\ref{fig:data} a sample intermediate goal trajectory and the subgoal-dependency information used to generate training data for the Kitchen 3D environment ($I=2, D=1$). The precondition training labels are parsed by recursively tracing, in the list of intermediate goals, the subgoals that are part of the final goal. Labels for learning subgoal dependencies are provided as directed graphs, as shown in the figure. \textit{Satisfied} and \textit{Reachable} labels can be directly parsed while stepping through the intermediate goal list during execution. \begin{figure}[H] \centering \fbox{\includegraphics[width=1.0\linewidth]{figures/data.pdf}} \caption{A sample intermediate goal trajectory used to generate training data for the Kitchen 3D environment ($I=2, D=1$). Each row in the intermediate goals is a step to be completed by a low-level policy. Dependencies are global information and are used when applicable.
This type of annotation is commonly provided as labels in instructional video datasets, where video segments are annotated with step-by-step sub-task information.} \label{fig:data} \end{figure} \subsection{Additional Results} \subsubsection{DoorKey: Error Breakdown} Here we show a detailed error breakdown of the $D=6$ evaluation tasks in the DoorKey environment. \begin{table}[H] \caption{Error breakdown of the $D=6$ task in the DoorKey environment, reported in percentage.} \begin{tabular}{r|c|ccc|ccc} \hline \multicolumn{1}{l|}{} & \multicolumn{1}{l|}{} & \multicolumn{3}{c|}{Network} & \multicolumn{3}{c}{Environment} \\ Error Type & Success & All Sat & No Prec & Max Iter & Controller & Bad Goal & \multicolumn{1}{l}{Max Step} \\ \hline E2E & 0.0 & / & / & / & 0.3 & 92.6 & 7.1 \\ RP-only & 0.0 & 0.0 & 91.8 & 0.0 & 0.2 & 8.0 & 0.0 \\ SS-only & 21.1 & 0.0 & / & / & 0.0 & 78.9 & 0.0 \\ Ours & 64.3 & 0.9 & 21.3 & 8.7 & 0.1 & 1.4 & 3.3 \\ \hline \end{tabular} \end{table} We analyze two categories of errors: Environment and Network. Environment errors are errors that occur when the agent is interacting with the environment: \textbf{Controller} means that the low-level controller cannot find a valid path to reach a particular goal, e.g., all paths to reach a key are blocked. \textbf{Bad Goal} means that the goals predicted by the network are invalid, e.g., picking up a key that has already been used. \textbf{Max Step} means that the maximum number of steps allowed by the environment is reached. Network errors are internal errors from components of our RPN network: \textbf{All Sat} means that the network incorrectly predicts that all subgoals are satisfied and exits prematurely. \textbf{No Prec} means that the network cannot predict any preconditions while none of the subgoals is reachable. \textbf{Max Iter} means that the regression planning process has reached the maximum number of steps ($M$ in Algorithm~\ref{alg:plan}).
We see that the major source of error for E2E and SS-only is Bad Goal, i.e., the predicted goal is invalid to execute. We are able to catch these types of errors due to the simplicity of the grid world environment. However, this type of error may cause a low-level controller to behave unexpectedly in real-world tasks, raising safety concerns. In contrast, RP-only and RPN both have very few such errors thanks to the robustness of our precondition networks. However, due to its inability to break a task goal into simpler parts, RP-only can easily make mistakes in the regression planning process, causing No Prec errors. Finally, RPN is able to achieve a 64\% success rate while minimizing the errors that occur while interacting with the environment, highlighting a potential benefit of our system when deployed to real-world agents. \subsubsection{Kitchen 3D: Average Subgoal Completion Rate} In the main paper, we report results on the Kitchen 3D tasks in terms of task success rate. However, this metric is not informative in the case where an agent can complete most of a task but not the entire task, e.g., an agent can prepare 5 out of 6 ingredients in an $I=6$ task. Here we include results in a different metric: the average fraction of subgoals completed. In the case of an episode where 5 out of 6 ingredients are successfully prepared in an $I=6$ task, the metric value would be $\frac{5}{6}$. We include results using both metrics in Table~\ref{table:kitchen} for reference. \begin{table}[H] \caption{Evaluation results on Kitchen 3D, reported as: average task success rate / average subgoal completion rate.
All standard errors for the average subgoal completion rate are less than or equal to 0.1 and are thus omitted.} \label{table:kitchen} \begin{tabular}{r|c|ccccc} \hline & Train & \multicolumn{5}{c}{Evaluation} \\ Task & I=3, D=2 & I=2, D=1 & I=4, D=1 & I=4, D=3 & I=6, D=1 & I=6, D=3 \\ \hline E2E & 5.0 / 8.3 & 16.4 / 21.2 & 2.3 / 3.7 & 0.7 / 3.0 & 0.0 / \textless{}0.1 & 0.0 / \textless{}0.1 \\ RP-only & 70.3 / 83.4 & 67.1 / 77.4 & 47.0 / 71.7 & 27.9 / 64.1 & \textless{}0.1 / 23.9 & 0.0 / 22.9 \\ SS-only & 49.1 / 59.7 & 59.3 / 61.9 & 56.6 / 66.2 & 43.4 / 60.0 & 42.8 / 69.3 & 32.7 / 59.7 \\ RPN & \textbf{98.5 / 98.8} & \textbf{98.6 / 98.7} & \textbf{98.2 / 99.2} & \textbf{98.4 / 99.2} & \textbf{95.3 / 98.9} & \textbf{97.2 / 99.4} \\ \hline \end{tabular} \end{table} We observe that for the $I=6$ tasks, although RP-only has close to a 0\% task success rate, it can on average complete 23\% of the subgoals. On the other hand, the performance degradation of SS-only is less pronounced under the new metric: it maintains a 60\% average subgoal completion rate across all evaluation tasks, showing the power of goal decomposition. \section{Acknowledgement} This work has been partially supported by JD.com American Technologies Corporation (``JD'') under the SAIL-JD AI Research Initiative. This article solely reflects the opinions and conclusions of its authors and not JD or any entity associated with JD.com. { \small \bibliographystyle{ieeetr}
\section{Introduction} The study of the dynamics of self-gravitating systems began with the pioneering work by Jeans \cite{Jeans1929}, who showed that, within the framework of the Euler equations with a gravitational potential obeying the Poisson equation, perturbations with wavelength $\lambda$ greater than the Jeans wavelength $\lambda_{J}$ are unstable. It is believed that the Jeans instability is the source of the emergence of structures in the Universe, and this problem remains one of the most important in astrophysics. Jeans himself put forward the hypothesis that stars, star clusters, and galaxies arose as a result of this instability -- a process resembling condensation in an ordinary imperfect gas. Subsequently, linear instability in a self-gravitating system and the Jeans criterion were investigated by many authors, starting with Fermi and Chandrasekhar \cite{Fermi1953,Chandrasekhar_book}, for cases of system rotation, inhomogeneity of the equilibrium density, the presence of magnetic fields and dust, accounting for kinetic effects, dissipation, etc. \cite{KineticJeans2004,Ehsan2007,Ehsan2008,Maklund2008,Kremer2018}. The linear instability theory is only valid when the amplitude of the perturbations is so small that the nonlinearity can be neglected. A direct demonstration of the possibility of the emergence of structures from spontaneously arising fluctuations at the nonlinear stage of the instability (i.e., during the nonlinear evolution) is an extremely difficult task. In turn, coherent nonlinear structures themselves, such as solitons and vortices, have been studied for a long time. In a broad sense, a soliton is a localized structure (not necessarily one-dimensional) resulting from the balance between the effects of dispersion and nonlinearity. Two- or three-dimensional solitons with embedded vorticity are usually called vortex solitons.
Note that below we sometimes refer to vortex solitons simply as vortices, although vortices are usually understood as structures (for example, vortices in hydrodynamics) in media without dispersion or where the role of dispersion does not matter. One-dimensional solitons are usually stable, while multidimensional solitons often turn out to be unstable, and the most well-known phenomena in this case are wave collapse and wave breaking \cite{Berge1998,Zakharov_UFN2012}. Nevertheless, there are many examples of stable multidimensional solitons. The reason is usually the specific nature of the nonlinearity (nonlocal \cite{Lashkin2006,Lashkin2007PLA}, saturable \cite{Laedke1984,Lashkin2020}, or additional higher-order nonlinearity) and of the dispersion \cite{Kivshar_book2003}. In some cases, a specific form of nonlinearity (the Poisson-bracket nonlinearity) leads to the presence of an infinite number of integrals of motion (Casimir invariants), which ensures the stability of the corresponding multidimensional solitons \cite{Makino1981,Williams1982,Lashkin2017,Petviashvili_book1992}. Multidimensional solitons have also been intensively studied in scalar models of quantum field theory \cite{Friedberg1976,Lee1992} (see, e.g., the recent paper \cite{Morris2021}). The stability of such solitons follows from the well-known Derrick criterion \cite{Derrick1964}. Solitons in self-gravitating systems were apparently first considered in Ref.~\cite{Mikhailovskii1977}, where Jeans perturbations of finite amplitude were studied and it was shown that they can propagate in the form of envelope solitons. Later on, solitons in self-gravitating systems were studied in a number of works. For example, nonlinear waves in a self-gravitating isothermal fluid were considered in Ref.~\cite{Yueh1981}.
Within the framework of the same model, the nonlinear Schr\"{o}dinger (NLS) equation \cite{Ono1994, Zhang1995,Zhang1998} and the sine-Gordon equation \cite{Gotz1988} were derived and their soliton solutions were presented. One-dimensional nonlinear waves and solitons in self-gravitating fluid systems, with a particular emphasis on applications to molecular clouds, were studied in Ref.~\cite{Adams1994}. Self-gravitating fluid dynamics and instabilities, along with solitons, were discussed in Ref.~\cite{Semelin2001}. Solitons in self-gravitating dusty plasmas were considered on the basis of the extended Korteweg-de Vries equation in Ref.~\cite{Verheest1997}, and ordinary and cusp Alfv\'{e}n solitons and modulational instability in a self-gravitating magneto-radiative plasma were studied in Ref.~\cite{Masood2010}. Solitary waves in self-gravitating molecular clouds were investigated in Ref.~\cite{Verheest2005}. In the above works, only one-dimensional solitons were considered. Here we would like to note that in this paper we are interested in nonlinear structures in self-gravitating systems exclusively within the framework of the classical fluid model, since in recent years solitons and vortex solitons in self-gravitating Bose-Einstein condensates (nonlinear matter waves), based on the Gross-Pitaevskii equation for the quantum-mechanical wave function, have been intensively studied \cite{Yakimenko2021}. We are interested in rotating self-gravitating systems and in nonlinear perturbations with characteristic frequencies that are much lower than the rotational frequency of the system.
Under such assumptions, two-dimensional (2D) nonlinear structures in self-gravitating systems were first considered in Ref.~\cite{Fridman1991}, where the corresponding nonlinear equation was derived; it coincides with the well-known Charney equation in geophysics \cite{Charney1948}, which describes nonlinear Rossby waves in the atmospheres of rotating planets and in oceans (in plasma physics, this equation is known as the Hasegawa-Mima equation \cite{Hasegawa1978}, with the rotation frequency replaced by the gyrofrequency in an external magnetic field). In what follows, we refer to this equation as the 2D Charney-Hasegawa-Mima (CHM) equation. An analytical solution of this equation is the 2D solitary dipole vortex obtained for the first time in Refs.~\cite{Larichev1976a,Larichev1976b} and known as the Larichev-Reznik dipole vortex (sometimes called a modon). Subsequently, this solution and some of its generalizations in the form of dipole vortices were used in many areas of nonlinear geophysics, as well as for describing nonlinear drift waves in plasmas \cite{Flierl1987,Petviashvili_book1992,Stenflo2009}. In rotating self-gravitating systems, dipole vortex solutions were obtained for magnetized plasmas \cite{Jovanovich1990} and bounded systems (so-called global vortices) \cite{Shukla1993}. Regular structures in a rotating dusty self-gravitating fluid system were also studied in Ref.~\cite{Zinzadze2000}. Some generalizations and monopole vortices were studied in Refs.~\cite{Abrahamyan2016,Abrahamyan2020}. Nonlinearly coupled Rossby-type and inertio-gravity waves in self-gravitating systems were considered in Ref.~\cite{Pokhotelov1998}, and nonlinear vortex chains in Ref.~\cite{Shukla1995}. The emergence of vortices in self-gravitating gaseous discs has been demonstrated by numerical simulation in Ref.~\cite{Rice2009}. To avoid misunderstandings with terminology, it should be noted that, generally speaking, there are two types of vortex solitons.
The Larichev-Reznik soliton (as well as the solitons considered in the present paper) arises in models with a linear dispersion $\omega_{\mathbf{k}}$ ($\omega$ and $\mathbf{k}$ are the frequency and wave vector, respectively) of the acoustic type ($\omega_{\mathbf{k}} \rightarrow 0$ as $\mathbf{k}\rightarrow 0$) and is a dipole vortex soliton representing a cyclone-anticyclone pair whose two vortices rotate in opposite directions. In models with a linear dispersion of the optical type ($\omega_{\mathbf{k}}\rightarrow \omega_{c}$ as $\mathbf{k}\rightarrow 0$, where $\omega_{c}$ is the cutoff frequency), such as, for example, the multidimensional NLS equation and its generalizations, there is a completely different type of vortex solitons (sometimes called spinning solitons) having an intensity distribution in the form of a ring or, in the 3D case, a torus, and these solitons can only be found numerically (see, e.g., the recent review \cite{Malomed2019} and references therein). A remarkable property of the Larichev-Reznik soliton is its stability under head-on and overtaking collisions with zero impact parameter between the solitons \cite{Makino1981,Williams1982}. In these cases the solitons preserve their form after the collisions, and they behave just like the one-dimensional solitons of the NLS equation and the Korteweg-de Vries (KdV) equation \cite{Ablowitz1981}. The 3D generalization of the Larichev-Reznik dipole solution was first obtained in Refs.~\cite{Berestov1979,Berestov1981}. Recently, in the framework of the 3D generalization of the Hasegawa-Mima equation, its 3D analytic soliton solutions were obtained and, as for the 2D Larichev-Reznik solution, a remarkable elastic character of collisions between the 3D solitons was demonstrated \cite{Lashkin2017}.
The aim of this paper is to obtain a set of three-dimensional nonlinear equations describing the dynamics of disturbances in a self-gravitating rotating weakly inhomogeneous fluid system with characteristic frequencies much lower than the rotation frequency, that is, in the so-called geostrophic approximation. In a particular long-wavelength case, when the characteristic size of disturbances is large compared to the Jeans length, we find the 3D analytical solutions of the corresponding equations in the form of vortex dipole solitons. Through numerical simulations, we show that some of these 3D soliton solutions turn out to be extremely stable. The paper is organized as follows. In Sec. II, we present the derivation of a set of nonlinear equations from the fluid equations. In Sec. III, we consider the short-wavelength case and present the 2D (pseudo 3D) soliton solutions. Sec. IV deals with the long-wavelength case, where an analogue of the 3D CHM equation is obtained. In Sec. V, we obtain analytical solutions in the form of three-dimensional vortex dipole solitons of various types. Sec. VI is devoted to the study of the stability of the found analytical solutions. Finally, Sec. VII concludes the paper. \section{ Model Equations} Let us consider a gravitating system rotating with constant angular velocity $\mathbf{\Omega}_{0}=\Omega_{0}\mathbf{\hat{z}}$ and with an equilibrium density $\rho_{0}$ in the plane perpendicular to the $\mathbf{\hat{z}}$-axis.
The momentum and continuity fluid equations governing the dynamics of a self-gravitating rotating isothermal gas are \begin{equation} \label{momentum} \frac{\partial \mathbf{v}}{\partial t}+(\mathbf{v}\cdot\nabla)\mathbf{v}=-\nabla\chi+2\Omega_{0}[\mathbf{v}\times\hat{\mathbf{z}}] +\Omega_{0}^{2}[\hat{\mathbf{z}}\times[\mathbf{r}\times \hat{\mathbf{z}}]], \end{equation} where the function $\chi$ is defined as \cite{Fridman1991,Pokhotelov1998} \begin{equation} \label{chi} \nabla\chi=\nabla\psi+\frac{c_{s}^{2}}{\rho}\nabla\rho, \end{equation} and \begin{equation} \label{continuity} \frac{\partial \rho}{\partial t}+\nabla\cdot (\rho \mathbf{v})=0. \end{equation} The first term on the right-hand side of Eq. (\ref{momentum}) includes the pressure-gradient and self-gravity forces, while the second and third terms account for the Coriolis and centrifugal forces, respectively. Here, $\rho$ is the total mass density, $\mathbf{v}$ is the fluid velocity, $\psi$ is the gravitational potential, and $c_{s}$ is the isothermal speed of sound. Equations (\ref{momentum}) and (\ref{continuity}) are supplemented by the Poisson equation for the gravitational potential $\psi$, \begin{equation} \Delta\psi=4\pi G\rho \label{Poisson}, \end{equation} where $G$ is the gravitational constant. We present the potential and density as sums of equilibrium and perturbed quantities, \begin{equation} \label{perturbed} \psi=\psi_{0}+\tilde{\psi},\quad \rho=\rho_{0}+\tilde{\rho}. \end{equation} At equilibrium we have \begin{equation} \label{equilib} \frac{\partial\chi_{0}}{\partial r}=\Omega_{0}^{2}r , \end{equation} whereas for the perturbations \begin{equation} \label{momentum1} \frac{\partial \mathbf{v}}{\partial t}+(\mathbf{v}\cdot\nabla)\mathbf{v}=-\nabla\tilde{\chi}+2\Omega_{0}[\mathbf{v}\times\hat{\mathbf{z}}]. \end{equation} From Eq.
(\ref{Poisson}) we have \begin{equation} \label{Jeans} \Delta\psi_{0}=4\pi G\rho_{0}\equiv \omega_{0}^{2}, \quad \Delta\tilde{\psi}=4\pi G\tilde{\rho}, \end{equation} where we have introduced the notation for the Jeans frequency $\omega_{0}$. We assume a weak inhomogeneity of the equilibrium density $\rho_{0}$ in the radial direction with a characteristic inhomogeneity length $L$, so that all characteristic scales of perturbations are much smaller than $L$, and use the local Cartesian coordinate system ($x$ corresponds to the radial coordinate $r$ and $y$ corresponds to the polar angle $\varphi$), \begin{equation} \label{equilib-density} \omega_{0}^{2}(\mathbf{r})=\omega_{0}^{2}\left(1+\frac{x}{L}\right), \end{equation} where $x\ll L$. Substituting Eq. (\ref{perturbed}) into the continuity equation (\ref{continuity}) and using Eq. (\ref{Jeans}), one can obtain \begin{equation} \frac{\partial \Delta\tilde{\psi}}{\partial t}+(\omega_{0}^{2}+\Delta\tilde{\psi})\nabla\cdot\mathbf{v}+\mathbf{v}\cdot\nabla (\omega_{0}^{2}+\Delta\tilde{\psi})=0. \label{continuityPoisson} \end{equation} We assume that the temporal variation of perturbations is slow compared to the rotation frequency $\Omega_{0}$ and introduce the ordering \begin{equation} \label{ordering} \epsilon\equiv \frac{\partial/\partial t}{\Omega_{0}}\sim \frac{(\mathbf{v}\cdot\nabla)}{\Omega_{0}}\sim \frac{\partial v_{z}/\partial z}{\Omega_{0}}\sim \frac{x}{L}. \end{equation} In the following we omit the tilde for the perturbed quantities. Then, from the momentum equation (\ref{momentum1}), taking into account Eqs. (\ref{chi}) and (\ref{Jeans}), one can obtain, to lowest order in $\epsilon$, the velocity $\mathbf{v}_{\perp}$ perpendicular to the rotation axis, \begin{equation} \label{v_0} \mathbf{v} _{\perp}^{(0)} =\frac{1}{2\Omega_{0}}[\hat{\mathbf{z}}\times\nabla_{\perp}\Pi] , \end{equation} where \begin{equation} \label{Pi} \Pi=\tilde{\psi}+\frac{c_{s}^{2}} {\omega_{0}^{2}}\Delta\tilde{\psi}.
\end{equation} To the next order we have \begin{equation} \label{v_1} \mathbf{v} _{\perp}^{(1)} =\mathbf{v} _{\perp}^{(0)}-\frac{1}{4\Omega_{0}^{2}}\frac{d}{dt}\nabla_{\perp}\Pi , \end{equation} where $d/dt=\partial/\partial t +(\mathbf{v}_{\perp}^{(0)}\cdot\nabla)$. With this ordering, and taking into account that $\nabla_{\perp}\cdot\mathbf{v} _{\perp}^{(0)}=0$, we have from Eq. (\ref{continuityPoisson}) \begin{gather} \frac{\partial \Delta\tilde{\psi}}{\partial t}+\omega_{0}^{2}\left(\nabla_{\perp}\cdot\mathbf{v} _{\perp}^{(1)}+\frac{\partial v_{z}}{\partial z}\right)+\mathbf{v} _{\perp}^{(0)}\cdot\nabla_{\perp} \omega_{0}^{2} \nonumber \\ +\mathbf{v} _{\perp}^{(0)}\cdot\nabla_{\perp}\Delta\tilde{\psi}=0. \label{main11} \end{gather} The expression $[\hat{\mathbf{z}}\times\nabla_{\perp}f]\cdot\nabla_{\perp}g=\{f,g\}$, where $f$ and $g$ are arbitrary functions, is the Poisson bracket, defined by \begin{equation} \{f,g\}=\frac{\partial f}{\partial x}\frac{\partial g}{\partial y} -\frac{\partial f}{\partial y}\frac{\partial g}{\partial x}. \end{equation} Then, substituting Eqs. (\ref{v_0}) and (\ref{v_1}) into Eq. (\ref{main11}), one can obtain \begin{equation} \label{main2} \frac{\partial \Phi}{\partial t}-\frac{\omega_{0}^{2}}{2\Omega_{0}L} \frac{\partial \Pi}{\partial y} +\frac{1}{2\Omega_{0}}\{\Pi,\Phi\}+\omega_{0}^{2}\frac{\partial v_{z}}{\partial z}=0, \end{equation} where \begin{equation} \label{Phi} \Phi=\Delta\psi-\frac{\omega_{0}^{2}}{4\Omega_{0}^{2}}\Delta_{\perp}\Pi . \end{equation} The equation for the velocity $v_{z}$ along the rotation axis, which follows from Eq. (\ref{momentum}) taking into account Eqs. (\ref{v_0}) and (\ref{v_1}) with the ordering of Eq. (\ref{ordering}), has the form \begin{equation} \label{main_z} \frac{\partial v_{z}}{\partial t}+\frac{1}{2\Omega_{0}}\left\{\Pi,v_{z}\right\}+\frac{\partial \Pi}{\partial z}=0.
\end{equation} Equations (\ref{main2}) and (\ref{main_z}) form a complete set describing the dynamics of nonlinear perturbations. In the linear approximation, taking $\psi (\mathbf{r},t)\sim \exp (i\mathbf{k}\cdot\mathbf{r}-i\omega t)$ and $v_{z}(\mathbf{r},t)\sim \exp (i\mathbf{k}\cdot\mathbf{r}-i\omega t)$, Eqs. (\ref{main2}) and (\ref{main_z}) yield the dispersion relation \begin{equation} \label{linear-general} \omega^{2}\left[\frac{k^{2}}{(k^{2}-k_{J}^{2})}+\frac{k_{\perp}^{2}c_{s}^{2}}{4\Omega_{0}^{2}}\right] +\omega\frac{k_{y}c_{s}^{2}}{2\Omega_{0}L} -k_{z}^{2}c_{s}^{2}=0, \end{equation} where $k^{2}=k^{2}_{\perp}+k^{2}_{z}$ with $k_{\perp}^{2}=k_{x}^{2}+k_{y}^{2}$, $\omega$ is the frequency, $k_{J}=1/\lambda_{J}$, and $\lambda_{J}=c_{s}/\omega_{0}$ is the Jeans length. Then Eq. (\ref{linear-general}) predicts an instability if \begin{equation} \label{stability_condition} \frac{\omega_{0}^{2}}{4\Omega_{0}^{2}}\left(\frac{k_{y}^{2}}{L^{2}}+4k_{z}^{2}k_{\perp}^{2}\right)< \frac{4k_{z}^{2}k^{2}}{1-k^{2}\lambda_{J}^{2}}. \end{equation} In particular, from Eq. (\ref{linear-general}) it follows that in the stability region there are two branches of oscillations: the wave due to the density inhomogeneity (if $\omega k_{y}c_{s}^{2}/(2\Omega_{0}L)\gg k_{z}^{2}c_{s}^{2}$), \begin{equation} \label{branch1} \omega=\frac{k_{y}c_{s}^{2}}{2\Omega_{0}L[k^{2}/(k^{2}-k^{2}_{J}) +k^{2}_{\perp}c_{s}^{2}/(4\Omega_{0}^{2})]}, \end{equation} and the acoustic wave (if $\omega k_{y}c_{s}^{2}/(2\Omega_{0}L)\ll k_{z}^{2}c_{s}^{2}$), \begin{equation} \label{branch2} \omega=\frac{k_{z}c_{s}}{\sqrt{k^{2}/(k^{2}-k^{2}_{J}) +k^{2}_{\perp}c_{s}^{2}/(4\Omega_{0}^{2})}}. \end{equation} The classical Jeans instability condition in a homogeneous non-rotating self-gravitating system, as is well known, has the form $k\lambda_{J}<1$; therefore, under the considered conditions, the region of instability in terms of wave numbers decreases significantly.
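The dispersion relation (\ref{linear-general}) is a quadratic in $\omega$, so instability corresponds to complex roots, i.e., to a negative discriminant, and the criterion (\ref{stability_condition}) is an algebraic rearrangement of this discriminant. The following minimal Python sketch (with illustrative dimensionless parameter values that are our own assumption, not taken from the paper) cross-checks the two formulations.

```python
import math

# Assumed illustrative parameters (arbitrary consistent units, not the paper's):
Omega0, omega0, cs, L = 1.0, 1.5, 1.0, 10.0
lamJ = cs / omega0            # Jeans length, lambda_J = c_s / omega_0
kJ2 = 1.0 / lamJ**2           # k_J^2

def disp_coeffs(kx, ky, kz):
    """Coefficients of A*w^2 + B*w + C = 0, Eq. (linear-general)."""
    kp2 = kx**2 + ky**2
    k2 = kp2 + kz**2
    A = k2 / (k2 - kJ2) + kp2 * cs**2 / (4 * Omega0**2)
    B = ky * cs**2 / (2 * Omega0 * L)
    C = -kz**2 * cs**2
    return A, B, C

def unstable(kx, ky, kz):
    """Instability <=> complex roots <=> negative discriminant."""
    A, B, C = disp_coeffs(kx, ky, kz)
    return B**2 - 4 * A * C < 0

def condition(kx, ky, kz):
    """Instability condition, Eq. (stability_condition)."""
    kp2 = kx**2 + ky**2
    k2 = kp2 + kz**2
    lhs = omega0**2 / (4 * Omega0**2) * (ky**2 / L**2 + 4 * kz**2 * kp2)
    rhs = 4 * kz**2 * k2 / (1 - k2 * lamJ**2)
    return lhs < rhs

# The two criteria agree for any wave vector with k != k_J:
for kx, ky, kz in [(0.3, 0.2, 0.4), (2.0, 1.0, 1.0), (0.5, 0.5, 0.1)]:
    assert unstable(kx, ky, kz) == condition(kx, ky, kz)
```

Since (\ref{stability_condition}) is an exact rearrangement of the discriminant, the agreement is exact rather than approximate.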
\section{Short-wavelength case and vortex tubes} First we consider the short-wavelength perturbations with $k\lambda_{J}\gg 1$. We introduce dimensionless variables as \begin{equation} \mathbf{r}\rightarrow \frac{\lambda_{J}\omega_{0}}{2\Omega_{0}}\mathbf{r},\, t\rightarrow \frac{t}{2\Omega_{0}}, \, \tilde{\rho}=\frac{\rho_{0}\Delta\psi}{\omega_{0}^{2}}\rightarrow \rho_{0}n, \, v_{z}\rightarrow c_{s}v_{z}, \end{equation} where the variables on the left-hand side are physical variables and those on the right-hand side are used subsequently. Then, from Eqs. (\ref{Pi}) and (\ref{main2})--(\ref{main_z}), we have \begin{equation} \label{main4} \frac{\partial}{\partial t}(n-\Delta_{\perp}n)-v_{\ast}\frac{\partial n}{\partial y}-\{n,\Delta_{\perp}n\}+\frac{\partial v_{z}}{\partial z}=0, \end{equation} \begin{equation} \label{main5} \frac{\partial v_{z}}{\partial t}+\nu\{n,v_{z}\}+\frac{\partial n}{\partial z}=0, \end{equation} where $v_{\ast}=c_{s}/(2\Omega_{0}L)$ and $\nu=\omega_{0}^{2}/(4\Omega_{0}^{2})$. The system of equations (\ref{main4}) and (\ref{main5}) is similar to the system of equations obtained in Ref.~\cite{Horton1983} to describe nonlinear drift waves in a plasma, except for the sign of the term with $v_{\ast}$. Following Ref.~\cite{Horton1983}, we look for stationary traveling solutions to Eqs. (\ref{main4}) and (\ref{main5}) of the form \begin{gather} n (x,y,z,t)=n (x,\xi), \\ v_{z} (x,y,z,t)=v_{z} (x,\xi), \end{gather} where $\xi=y-ut+\alpha z$ and $u$ is the velocity of propagation in the $y$ direction. Using Eq. (\ref{main5}), one can get \begin{equation} v_{z}(x,\xi)=\frac{\alpha }{u}n(x,\xi), \end{equation} and then from Eq. (\ref{main4}) it follows that \begin{equation} \{n-ux,\Delta_{\perp}n-n+(\alpha^{2}/u-v_{\ast})x\}=0. \end{equation} Obtaining a localized solution of the resulting 2D nonlinear equation using the Larichev-Reznik procedure reduces to solving two independent linear equations for the inner and outer (with a circular cut in the plane) spatial regions, respectively.
The corresponding two solutions are matched at the cut boundary in such a way that not only the solution itself of the original nonlinear equation, but also all derivatives up to the second order inclusive, are continuous. The Larichev-Reznik method (in addition to the original works \cite{Larichev1976a,Larichev1976b}) is described in detail in many works, see Refs.~\cite{Hasegawa1978,Flierl1980,Makino1981,Williams1982,Horton1983,Petviashvili_book1992}. In the polar coordinates \begin{equation} x=r\cos\varphi, \,\, \xi=r\sin\varphi, \end{equation} the solution obtained in \cite{Horton1983} has the form \begin{equation} \label{Horton} n(r,\varphi)=ua\cos\varphi\left\{ \begin{array}{lc} \displaystyle \left(1+\frac{\beta^{2}}{\gamma^{2}}\right)\frac{r}{a}-\frac{\beta^{2}}{\gamma^{2}}\frac{J_{1}(\gamma r/a)}{J_{1}(\gamma)} ,& r\leqslant a , \\ \displaystyle \,\frac{K_{1}(\beta r/a)}{K_{1}(\beta)},& r\geqslant a , \end{array} \right. \end{equation} where \begin{equation} \label{beta} \beta=\sqrt{1+\frac{v_{\ast}}{u}-\frac{\alpha^{2}}{u^{2}}}, \end{equation} and $\gamma$ is determined by \begin{equation} \frac{K_{2}(\beta)}{\beta K_{1}(\beta)}=-\frac{J_{2}(\gamma)}{\gamma J_{1}(\gamma)}. \end{equation} In Eq. (\ref{Horton}), $J_{n}$ and $K_{n}$ are the Bessel and Macdonald functions of order $n$. The solution is bounded at $r=0$ and decreases exponentially at infinity, being an essentially nonlinear solution in the form of a two-dimensional (pseudo three-dimensional) soliton with embedded vorticity $(\nabla\times \mathbf{v})_{z}\neq 0$ (a modon). It is a pair of vortices rotating in opposite directions, that is, a cyclone-anticyclone pair. The modon solution (\ref{Horton}) has three independent free parameters: the velocity $u$, the modon radius $a$ (characteristic size), and the angle $\alpha$ of inclination of the vortex front with respect to the plane perpendicular to the $z$-axis. As follows from Eq.
(\ref{beta}), the soliton velocity is limited by the condition \begin{equation} \label{u-condition} u^{2}+v_{\ast} u-\alpha^{2}> 0. \end{equation} Within the interior region $r < a$, the fluid particles are trapped and are thus transported along the direction of modon movement. The density perturbation $n$ is continuous at the boundary $r=a$ together with the first and second derivatives. For $\alpha=0$, the solution (\ref{Horton}) reduces to the Larichev-Reznik solution. \section{Long-wavelength case} Next, we consider the long-wavelength case, when the characteristic lengths of perturbations are much greater than the Jeans length, i.e., $k\lambda_{J}\ll 1$. In this case, we obtain a three-dimensional nonlinear equation, which, under certain conditions, admits analytical 3D soliton solutions. Then, from Eqs. (\ref{Pi}) and (\ref{main2})--(\ref{main_z}) we have \begin{gather} \frac{\partial}{\partial t}\left(\Delta\psi-\nu\Delta_{\perp}\psi\right) -\frac{2\Omega_{0}\nu}{L}\frac{\partial\psi}{\partial y}+\frac{1}{2\Omega_{0}}\left\{\psi,\Delta\psi-\nu\Delta_{\perp}\psi\right\} \nonumber \\ +\omega_{0}^{2}\frac{\partial v_{z}}{\partial z}=0, \label{basic5} \end{gather} and \begin{equation} \label{basic6} \frac{\partial v_{z}}{\partial t}+\frac{1}{2\Omega_{0}}\left\{\psi,v_{z}\right\}+\frac{\partial\psi}{\partial z}=0, \end{equation} where $\nu=\omega_{0}^{2}/(4\Omega_{0}^{2})$. In the linear approximation, the dispersion relation is \begin{equation} \omega^{2}(k^{2}-\nu k_{\perp}^{2})-2\omega\frac{k_{y}\Omega_{0}\nu}{L}+k_{z}^{2}\omega_{0}^{2}=0. \end{equation} Neglecting parallel motion, i.e., the interaction with the acoustic branch of oscillations, from Eqs. (\ref{basic5}) and (\ref{basic6}) one can obtain \begin{equation} \label{without-parallel} \frac{\partial}{\partial t}(\Delta\psi-\nu\Delta_{\perp}\psi)-\frac{2\Omega_{0}\nu}{L}\frac{\partial\psi}{\partial y}+\frac{1}{2\Omega_{0}}\left\{\psi,\Delta\psi-\nu\Delta_{\perp}\psi\right\}=0.
\end{equation} The existence of stationary solutions of Eq. (\ref{without-parallel}) requires that the operator $\Delta-\nu\Delta_{\perp}$, depending on the spatial derivatives, be elliptic. One can easily see that this leads to the condition $\nu < 1$. In this paper, we restrict ourselves to just this case. Next, we introduce dimensionless variables $\mathbf{r}_{\perp}^{\prime}$, $z^{\prime}$, $t^{\prime}$, and $\psi^{\prime}$ by \begin{equation} \label{dimensionless} \mathbf{r}_{\perp}= \frac{L\sqrt{1-\nu}}{2\nu}\mathbf{r}_{\perp}^{\prime},\,\, z= \frac{L}{2\nu}z^{\prime},\,\, t\rightarrow \frac{t^{\prime}}{\Omega_{0}}, \,\, \psi=2\Omega_{0}^{2} \psi^{\prime}, \end{equation} and further the primes are omitted. Substituting Eq. (\ref{dimensionless}) into Eq. (\ref{without-parallel}) we have \begin{equation} \label{HM3D} \frac{\partial \Delta\psi}{\partial t}-w\frac{\partial\psi}{\partial y}+\left\{\psi,\Delta\psi\right\}=0, \end{equation} where $w=1/\sqrt{1-\nu}$, and Eq. (\ref{HM3D}) can be rewritten as \begin{equation} \label{HM3D1} \frac{\partial \Gamma}{\partial t}+\left\{\psi,\Gamma\right\}=0, \end{equation} where $\Gamma=\Delta\psi +wx$ is the generalized vorticity, or, equivalently, as \begin{equation} \label{G_t1} \frac{\partial \Gamma}{\partial t}+\mathbf{v}_{D}\cdot\nabla\Gamma=0, \end{equation} where $\mathbf{v}_{D}=[\hat{\mathbf{z}}\times\nabla_{\perp}\psi]$. Note that a three-dimensional generalization of the Charney equation in geophysics was first obtained in Refs.~\cite{Berestov1979,Berestov1981}, and of the Hasegawa-Mima equation for plasmas in Ref.~\cite{Lashkin2017}. Equation (\ref{HM3D}) differs from the 3D Charney-Hasegawa-Mima equation obtained earlier in \cite{Berestov1979,Berestov1981,Lashkin2017} by the absence of an additional term $-\partial\psi/\partial t$.
Equation (\ref{G_t1}) describes the generalized vorticity convection in an incompressible velocity field $\mathbf{v}_{D}$ with $d\Gamma/dt=0$, where $d/dt=\partial/\partial t+\mathbf{v}_{D}\cdot\nabla_{\perp}$. Like equation (19) in Ref.~\cite{Lashkin2017}, equation (\ref{HM3D}) has an infinite set of integrals of motion (Casimir invariants), \begin{equation} \label{integral1} \int f(\Gamma,z)\,d^{3}\mathbf{r}, \end{equation} where $f$ is an arbitrary function of its arguments. Other integrals of motion are \begin{equation} \label{integral2} \int \psi\Gamma\,d^{3}\mathbf{r}, \quad \int x\Gamma\,d^{3}\mathbf{r}, \quad \int (y+v_{\ast}t)\Gamma\,d^{3}\mathbf{r}. \end{equation} The energy $E$ and enstrophy $K$, which are quadratic invariants, coincide with the energy and enstrophy of the equation obtained in Ref.~\cite{Lashkin2017}, \begin{gather} \label{integrals_E_K} E=\int \left[\psi^{2}+(\nabla\psi)^{2}\right]\,d^{3}\mathbf{r} , \\ K=\int \left[(\nabla\psi)^{2}+(\Delta\psi)^{2}\right]\,d^{3}\mathbf{r}. \end{gather} As was pointed out in Ref.~\cite{Lashkin2017}, the presence of such an infinite set of integrals of motion does not imply complete integrability of Eq. (\ref{HM3D}), just as for the two-dimensional CHM equation \cite{Shulman1988}. \section{Three-dimensional vortex solitons} We look for stationary traveling wave solutions of Eq. (\ref{HM3D}) of the form \begin{equation} \label{mov_reference} \psi(x,y,z,t)=\psi(x,y',z), \, \, \, y'=y-ut, \end{equation} where $u$ is the velocity of propagation in the $y$ direction (in the following we omit the prime). Substituting Eq. (\ref{mov_reference}) into Eq. (\ref{HM3D}), we have the relation \begin{equation} \label{stat_forma} \left\{\Gamma,\psi-ux\right\}=0, \end{equation} from which we can conclude that \begin{equation} \label{gen_solution} \Gamma=F(\psi-ux,z), \end{equation} where $F$ is an arbitrary function of both arguments.
Following the known procedure for finding modon solutions \cite{Larichev1976a,Larichev1976b,Hasegawa1978,Flierl1980,Makino1981,Williams1982}, we assume that the generalized vorticity $\Gamma$ and the stream function $\psi$ satisfy one linear relation inside a region of trapped fluid, and a different one outside, that is, $F$ is a piecewise linear function. For the 3D modon solutions \cite{Berestov1979,Berestov1981,Flierl1987,Lashkin2017} the trapped region is a sphere of radius $a$, \begin{equation} \label{new-brace} F=\Delta\psi+wx= \begin{cases} \displaystyle c_{4}(\psi-ux)+c_{5}+c_{6}z ,& r < a , \\ \displaystyle \,c_{1}(\psi-ux)+c_{2}+c_{3}z,& r > a . \end{cases} \end{equation} Note that the linearity of the function $F$ in the exterior region $r>a$ follows from the requirement that the solution be localized at infinity. Then it is easy to see that $c_{1}=-w/u$, $c_{2}=0$, $c_{3}=0$, and one must have $u<0$. The boundedness requirement at $r=0$ implies $c_{4}<0$. In the following we introduce the notations \begin{equation} \label{beta_gamma} \varkappa=a\sqrt{-w/u}, \, k=a\sqrt{-c_{4}}. \end{equation} Then, in the exterior and inner regions, Eq. (\ref{new-brace}) becomes \begin{equation} \label{equ-ext} \Delta\psi-\frac{\varkappa^{2}}{a^{2}}\psi=0, \end{equation} and \begin{equation} \label{equ-in} \Delta\psi+\frac{k^{2}}{a^{2}}\psi=\frac{(\varkappa^{2}+k^{2})ux}{a^{2}}+c_{5}+c_{6}z, \end{equation} respectively. Equation (\ref{equ-ext}) has a general solution \begin{equation} \psi=\sum_{n,l,m}A_{nlm}\frac{K_{n+1/2}(\varkappa r/a)}{\sqrt{ r}}Y_{l}^{m}(\theta,\varphi), \end{equation} while a general solution of Eq.
(\ref{equ-in}) with zero right-hand side is \begin{equation} \psi=\sum_{n,l,m}B_{nlm}\frac{J_{n+1/2}(k r/a)}{\sqrt{ r}}Y_{l}^{m}(\theta,\varphi), \end{equation} where we use spherical coordinates $(r,\theta,\varphi)$, \begin{equation} x=r\sin\theta\cos\varphi,\, y=r\sin\theta\sin\varphi,\, z=r\cos\theta, \end{equation} and $n,m,l$ are integers, $J_{\nu}(\xi)$ is the Bessel function of the first kind, $K_{\nu}(\xi)$ is the modified Bessel function of the second kind, $Y_{l}^{m}$ are the spherical harmonics, and $A_{nlm}$ and $B_{nlm}$ are arbitrary constants. Here we consider only the lowest radial modes $n=0,1$, and the lowest spherical harmonics consistent with the terms $wx$ and $c_{6}z$, $l=0,1$, and $m=0,\pm 1$. Then the real solution of Eq. (\ref{equ-ext}) for the exterior region $r > a$ can be written as \begin{gather} \label{ps1} \psi=A_{000}\frac{K_{1/2}(\varkappa r/a)}{\sqrt{ r}}Y_{0}^{0}+A_{110}\frac{K_{3/2}(\varkappa r/a)}{\sqrt{ r}}Y_{1}^{0} \\ \nonumber +A_{111}\frac{K_{3/2}(\varkappa r/a)}{\sqrt{ r}}(Y_{1}^{1}+Y_{1}^{-1}), \end{gather} where the unnormalized spherical harmonics have the form $Y_{0}^{0}=1$, $Y_{1}^{0}=\cos\theta$ and $Y_{1}^{\pm 1}=\exp (\pm i\varphi)\sin\theta$. A general solution of Eq. (\ref{equ-in}) for the inner region is the sum of the general solution of the corresponding homogeneous equation and a particular solution of the inhomogeneous equation. As a particular solution, as is easy to see, we can take \begin{equation} \psi_{par}=\left(1+\frac{\varkappa^{2}}{k^{2}}\right)ur\sin\theta\cos\varphi+\frac{c_{5}a^{2}}{k^{2}} +\frac{c_{6}a^{2}}{k^{2}}\cos\theta . \end{equation} Then, a general solution of Eq. (\ref{equ-in}) for the inner region $r < a$ has the form \begin{gather} \label{ps2} \psi=B_{000}\frac{J_{1/2}(k r/a)}{\sqrt{ r}}Y_{0}^{0}+B_{110}\frac{J_{3/2}(k r/a)}{\sqrt{ r}}Y_{1}^{0} \\ \nonumber +B_{111}\frac{J_{3/2}(k r/a)}{\sqrt{ r}}(Y_{1}^{1}+Y_{1}^{-1})+\psi_{par}.
\end{gather} We require that $\psi$ and $\nabla\psi$ be continuous at $r=a$, \begin{equation} \label{cond1} \psi\mid_{r=a-0}=\psi\mid_{r=a+0}, \, \, \nabla\psi\mid_{r=a-0}=\nabla\psi\mid_{r=a+0}, \end{equation} and that $\Delta\psi$ (or, equivalently, $\Gamma$) have a constant jump $p$ (including the case $p=0$) at $r=a$, \begin{equation} \label{cond2} \Delta\psi\mid_{r=a-0}=\Delta\psi\mid_{r=a+0}+p. \end{equation} The presence of such a jump leads, just as for the modon in Ref.~\cite{Lashkin2017}, to the appearance of a radially symmetric part in the solution. By substituting Eqs. (\ref{ps1}) and (\ref{ps2}) into Eqs. (\ref{cond1}) and (\ref{cond2}), we can find the desired solution. In this case, for a given value of $\varkappa$, the value of $k$ is determined by the relation \begin{equation} \label{transzent} (k^{2}\delta+3-k^{2})\tan k=k(k^{2}\delta+3) \, , \end{equation} where \begin{equation} \label{delta} \delta=\frac{(\varkappa^{2}+3\varkappa+3)}{\varkappa^{2}(\varkappa+1)}. \end{equation} Given that the functions $J_{n+1/2}$ and $K_{n+1/2}$ for integer values of the index $n$ can be expressed in terms of trigonometric functions and the exponential function, respectively (together with rational ones), the final solution can be written as \begin{equation} \label{final_solution} \psi(r,\theta,\varphi)=\Psi_{0}(r)+\Psi(r)(\sin\theta\cos\varphi+\mu\cos\theta), \end{equation} where $\mu$ is an arbitrary constant, and $\Psi_{0}(r)$ and $\Psi(r)$ are determined by \begin{widetext} \begin{equation} \label{psi0} \Psi_{0}(r)=\frac{pa^{2}}{(\varkappa^{2}+k^{2})\delta}\left\{ \begin{array}{lc} \displaystyle \, \frac{a\sin (k r/a)}{r(\sin k-k\cos k)}-\frac{3(\varkappa^{2}+k^{2})}{\varkappa^{2}k^{2}}, & r\leqslant a \\ \displaystyle \,\frac{a}{(1+\varkappa)r}\exp\left[-\varkappa\left(\frac{r}{a}-1\right)\right],& r\geqslant a \end{array} \right.
, \end{equation} \end{widetext} and \begin{widetext} \begin{equation} \label{psi} \Psi(r)=ua\left\{ \begin{array}{lc} \displaystyle \left(1+\frac{\varkappa^{2}}{k^{2}}\right)\frac{r}{a}- \frac{\varkappa^{2}}{k^{2}}\frac{a^{2}[\sin (k r/a)-(kr/a)\cos(kr/a)]}{r^{2}(\sin k-k\cos k)} ,& r\leqslant a \\ \displaystyle \,\frac{a^{2}(1+\varkappa r/a)}{r^{2}(1+\varkappa)}\exp\left[-\varkappa\left(\frac{r}{a}-1\right)\right],& r\geqslant a \end{array} \right. . \end{equation} \end{widetext} As can be seen from Eqs. (\ref{final_solution}), (\ref{psi0}) and (\ref{psi}), the 3D soliton consists of an $x$-antisymmetric dipole part, which plays the role of a carrier, a radially symmetric core of arbitrary amplitude, and a $z$-antisymmetric dipole part of arbitrary amplitude. The carrier amplitude is determined by the velocity $u$ and the localization size $a$ of the soliton. The core and the $z$-antisymmetric parts cannot exist without the carrier. The radially symmetric part vanishes if $p=0$, and, under this condition, $\Delta\psi$ (and the vorticity) is continuous at the boundary $r=a$. The $z$-antisymmetric part vanishes if $\mu=0$. Thus, the 3D soliton solution (\ref{final_solution}) has four independent free parameters: the velocity $u$, the characteristic size $a$ of the soliton, the parameter $\mu$ characterizing the amplitude of the $z$-antisymmetric part, and the vorticity jump $p$, which determines the amplitude of the radially symmetric part. Within the interior region $r<a$, the fluid particles are trapped and are thus transported along the $y$-direction. In the exterior region $r>a$, the solution decays exponentially to zero. Note that Eq. (\ref{transzent}) has an infinite set of roots $k_{n}$, $n=1,2,\dots$ for each $\varkappa$. Therefore, Eqs. (\ref{final_solution}), (\ref{psi0}) and (\ref{psi}) present an infinite set of solutions with $k=k_{n}$. The solution with $n=1$ (the ground state) has no radial nodes. The higher states have $n-1$ nodes (in the interior region).
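The ground-state root of Eq. (\ref{transzent}) and the matching of the profiles (\ref{psi0}) and (\ref{psi}) at $r=a$ can be verified with a short numerical sketch. The parameter values below ($\varkappa=1$, $u=-1$, $a=1$, $p=1$) are our own illustrative assumptions, not values used in the paper.

```python
import math

# Assumed sample parameters (not the paper's): kappa, velocity u, cut radius a,
# vorticity jump p.
kappa, u, a, p = 1.0, -1.0, 1.0, 1.0
delta = (kappa**2 + 3*kappa + 3) / (kappa**2 * (kappa + 1))

def f(k):
    # Transcendental equation for k, multiplied by cos k to avoid tan poles
    return (k*k*delta + 3 - k*k)*math.sin(k) - k*(k*k*delta + 3)*math.cos(k)

k_lo = 0.5                           # scan for the first sign change
while f(k_lo)*f(k_lo + 0.01) > 0:
    k_lo += 0.01
k_hi = k_lo + 0.01
for _ in range(100):                 # refine by bisection
    k_mid = 0.5*(k_lo + k_hi)
    if f(k_lo)*f(k_mid) <= 0:
        k_hi = k_mid
    else:
        k_lo = k_mid
k = 0.5*(k_lo + k_hi)                # ground-state root

S, C = math.sin(k), math.cos(k)

def Psi(r):
    # dipole ("carrier") amplitude of the final solution
    if r <= a:
        x = k*r/a
        return u*a*((1 + kappa**2/k**2)*r/a
                    - (kappa**2/k**2)*a**2*(math.sin(x) - x*math.cos(x))
                    / (r**2*(S - k*C)))
    return u*a*a**2*(1 + kappa*r/a)/(r**2*(1 + kappa))*math.exp(-kappa*(r/a - 1))

def Psi0(r):
    # radially symmetric ("core") amplitude of the final solution
    pref = p*a**2/((kappa**2 + k**2)*delta)
    if r <= a:
        return pref*(a*math.sin(k*r/a)/(r*(S - k*C))
                     - 3*(kappa**2 + k**2)/(kappa**2*k**2))
    return pref*a/((1 + kappa)*r)*math.exp(-kappa*(r/a - 1))

# Psi, Psi_0 and the first derivative of Psi are continuous at r = a:
h = 1e-7
assert abs(Psi(a) - u*a) < 1e-12
assert abs(Psi(a - h) - Psi(a + h)) < 1e-5
assert abs(Psi0(a - h) - Psi0(a + h)) < 1e-5
assert abs((Psi(a) - Psi(a - h))/h - (Psi(a + h) - Psi(a))/h) < 1e-3
```

Since the transcendental equation is equivalent to $\sin k/(\sin k-k\cos k)=\delta+3/k^{2}$, the continuity of $\Psi_{0}$ at $r=a$ holds automatically once $k$ is a root.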
In what follows, we consider only the ground state $n=1$. Note that, as follows from Eq. (\ref{dimensionless}), the values $\nu\lesssim 1$ in physical variables correspond to an oblateness of the soliton along the axis of rotation. In the limiting case $\varkappa\rightarrow 0$, one can obtain \begin{equation} \label{psi0_bet0} \Psi_{0}(r)=\frac{pa^{2}}{k^{2}}\left\{ \begin{array}{lc} \displaystyle \, \frac{a\sin (kr/a)}{r\sin k}-1, & r\leqslant a \\ \displaystyle \,0,& r\geqslant a \end{array} \right. , \end{equation} and \begin{widetext} \begin{equation} \label{psi_bet0} \Psi(r)=wa\left\{ \begin{array}{lc} \displaystyle \frac{r}{a}- \frac{3a^{2}[\sin (k r/a)-(k r/a)\cos(k r/a)]}{r^{2}k^{2}\sin k} ,& r\leqslant a \\ \displaystyle \,\frac{a^{2}}{r^{2}},& r\geqslant a \end{array} \right. . \end{equation} \end{widetext} Note that in this limiting case, the solution has a long tail, that is, it decreases at infinity in a power-law manner, rather than exponentially (which can be seen immediately from Eq. (\ref{equ-ext})). In the other limiting case, $\varkappa\rightarrow \infty$, that is, $u\rightarrow 0$, which corresponds to a motionless soliton, the radially symmetric component disappears completely, $\Psi_{0}(r)=0$, and \begin{widetext} \begin{equation} \label{psi_betinft} \Psi(r)=\frac{a^{3}w}{k^{2}}\left\{ \begin{array}{lc} \displaystyle \frac{a^{2}[\sin (k r/a)-(k r/a)\cos(k r/a)]}{r^{2}(\sin k-k\cos k)}-\frac{r}{a} ,& r\leqslant a \\ \displaystyle \,0,& r\geqslant a \end{array} \right. , \end{equation} \end{widetext} so that the solution is completely screened in the outer region $r>a$, although it remains continuous, as can be seen, up to the second derivatives.
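The $\varkappa\rightarrow 0$ limit can also be checked numerically: since $\delta\approx 3/\varkappa^{2}\rightarrow\infty$, the transcendental equation for $k$ reduces to $\tan k=k$, whose smallest positive root is $k\approx 4.4934$. The following sketch (our own illustration, not the paper's numerics) confirms that the ground-state root approaches this value.

```python
import math

# Ground-state root of the transcendental equation for a given kappa,
# found by scanning for the first sign change and bisection.
def ground_state_k(kappa):
    delta = (kappa**2 + 3*kappa + 3) / (kappa**2 * (kappa + 1))
    def f(k):
        return ((k*k*delta + 3 - k*k)*math.sin(k)
                - k*(k*k*delta + 3)*math.cos(k))
    k = 0.5
    while f(k)*f(k + 0.01) > 0:    # scan for the first sign change
        k += 0.01
    lo, hi = k, k + 0.01
    for _ in range(80):            # refine by bisection
        mid = 0.5*(lo + hi)
        if f(lo)*f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

k_limit = 4.493409457909064        # smallest positive root of tan k = k
assert abs(ground_state_k(0.01) - k_limit) < 1e-3   # kappa -> 0 limit
assert ground_state_k(1.0) - k_limit > 0.01         # finite-kappa root is larger
```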
\begin{figure} \includegraphics[width=3.4in]{fig1.eps} \caption{\label{fig1} The 3D vortex soliton (\ref{final_solution}) with the parameters $u=-1$ (velocity), $a=1$ (cut radius), $p=0$ (no monopole part) and $\mu=0$ (no $z$-antisymmetric part): (a) streamlines of $|\psi|$ in the $x$-$y$-plane section; (b) the field $\psi$ in the $x$-$y$-plane section; (c) isosurface $|\psi (x,y,z)|=0.8$. } \end{figure} \begin{figure} \includegraphics[width=3.4in]{fig2.eps} \caption{\label{fig2} The 3D vortex soliton (\ref{final_solution}) with the parameters $u=-1$ (velocity), $a=1$ (cut radius), $p=-15$ (monopole part) and $\mu=0$ (no $z$-antisymmetric part): (a) streamlines of $|\psi|$ in the $x$-$y$-plane section; (b) the field $\psi$ in the $x$-$y$-plane section; (c) isosurface $|\psi (x,y,z)|=1.3$. One can see how the monopole part masks the dipole part.} \end{figure} \begin{figure} \includegraphics[width=3.2in]{fig3.eps} \caption{\label{fig3} Evolution of the 3D soliton with the parameters $u=-0.2$ (velocity), $a=1$ (cut radius), $p=0$ (no monopole part) and $\mu=0$ (no $z$-antisymmetric part) in the presence of a strong initial random perturbation; the shape (isosurface $|\psi (x,y,z)|=0.15$) of the perturbed soliton at the initial moment, $t=0$, at $t=30$ and at $t=50$.} \end{figure} \begin{figure} \includegraphics[width=3.2in]{fig4.eps} \caption{\label{fig4} Fast destruction of the 3D soliton with a large amplitude of the monopole part and the parameters $u=-0.2$ (velocity), $a=1$ (cut radius), and $p=-10$; the isosurface $|\psi (x,y,z)|=0.35$ is shown. } \end{figure} \begin{figure} \includegraphics[width=3.2in]{fig5.eps} \caption{\label{fig5} Almost stable 3D soliton dynamics with a sufficiently small monopole part and the parameters $u=-0.2$ (velocity), $a=1$ (cut radius), and $p=-1$; the isosurface $|\psi (x,y,z)|=0.15$ is shown.
} \end{figure} \begin{figure} \includegraphics[width=3.2in]{fig6.eps} \caption{\label{fig6} Stable 3D soliton dynamics with the parameters $u=-0.2$ (velocity), $a=1$ (cut radius), $p=0$ and $\mu=10$; the isosurface $|\psi (x,y,z)|=0.3$ is shown. } \end{figure} We emphasize that, as noted above, the amplitude of the radially symmetric part, determined by the parameter $p$, can be arbitrary and can significantly exceed the amplitude of the basic part (carrier). In this case, the monopole part masks the basic part and the soliton looks like a monopole vortex soliton. The 3D vortex soliton solution (\ref{final_solution}) without a radially symmetric part ($p=0$) and without a $z$-antisymmetric part ($\mu=0$), moving with the velocity $u=-1$ and cut radius $a=1$, is shown in Fig.~\ref{fig1}. For the given values of $u$ and $a$, the value of $k$ in Eqs. (\ref{final_solution}) and (\ref{psi}) is determined by numerically solving the transcendental equation (\ref{transzent}), and the smallest value of $k$, corresponding to the ground state, is taken. For definiteness, we take $\nu=0.2$ here and in all subsequent numerical simulations. Other values of $\nu$ do not qualitatively change the results. The soliton with the same parameters, but with a radially symmetric part of a sufficiently large amplitude with $p=-15$, is presented in Fig.~\ref{fig2}. In this case, one can see a distinct monopole part. \section{Stability of the 3D solitons} To study the stability of the found exact 3D solutions, we numerically solved the dynamical equation (\ref{HM3D}) with the initial conditions corresponding to the analytical solution (\ref{final_solution}). The time integration is performed by an implicit Adams-Moulton method with variable time step, variable order, and local error control (we used the corresponding NAG routine \cite{NAG18}). Periodic boundary conditions are assumed. The linear terms are computed in spectral space.
The Poisson bracket nonlinearity is evaluated in physical space by a finite difference method, using the energy- and enstrophy-conserving Arakawa scheme \cite{Arakawa1966} modified for the 3D case (see Appendix). As a first (and principal) example, we consider the stability of the 3D vortex soliton without superimposed parts, that is, without an additional radially symmetric part, $\Psi_{0}=0$, and without an additional part antisymmetric along the $z$-axis, $\mu=0$, in Eq. (\ref{final_solution}). The initial condition at the time $t=0$ was taken in the form $\psi(\mathbf{r},0)=\psi_{s}(\mathbf{r},0)[1+\epsilon\xi (\mathbf{r})]$, where $\psi_{s}=\Psi (r)\sin\theta\cos\varphi$, $\Psi (r)$ is determined by Eq. (\ref{psi}), $\xi (\mathbf{r})$ is white Gaussian noise with variance $\sigma^{2}=1$, and the perturbation parameter is $\epsilon=0.01-0.1$. The stable dynamics of such a vortex soliton with $\epsilon=0.05$ is shown in Fig.~\ref{fig3}. It can be seen that at the initial time $t=0$ the soliton is perturbed by sufficiently strong noise; however, the soliton does not undergo any significant shape distortions at times $t=30$ and $t=50$. In particular, there is no fast development of symmetry breaking between the cyclonic and anticyclonic parts. Note that there are two characteristic times of the processes in the model -- the dispersive time $\sim 1/\omega_{\mathbf{k}}$, where $\omega_{\mathbf{k}}$ is the linear dispersion law, at which a packet of linear waves spreads out due to dispersion, and the nonlinear time $\sim 1/\omega_{NL}$, where $\omega_{NL}$ is the characteristic nonlinear frequency (vortex rotation frequency) defined as $\omega_{NL}=\mathbf{k}\cdot\mathbf{v}$, where $k\sim 1/a$ and $\mathbf{v}_{\mathbf{k}}\sim [\hat{\mathbf{z}}\times\mathbf{k}]\psi_{\mathbf{k}}$ is the fluid velocity in the vortex.
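For concreteness, the perturbed initial condition $\psi(\mathbf{r},0)=\psi_{s}(\mathbf{r},0)[1+\epsilon\xi(\mathbf{r})]$ can be sketched as follows. As a stand-in for $\Psi(r)$ we use the closed-form $\varkappa\rightarrow 0$ profile of Eq. (\ref{psi_bet0}), with $k=4.49341$, the first nonzero root of $\tan k=k$, which makes that particular profile continuous at $r=a$; the grid size and all parameter values are our illustrative choices, not the ones used in the actual simulations.

```python
import numpy as np

def Psi(r, a=1.0, w=1.0, k=4.49341):
    # kappa -> 0 radial profile of Eq. (psi_bet0); k = 4.49341 is the first
    # nonzero root of tan k = k, which makes this profile continuous at r = a.
    inner = w * a * (r / a - 3 * a**2 * (np.sin(k * r / a)
                     - (k * r / a) * np.cos(k * r / a)) / (r**2 * k**2 * np.sin(k)))
    return np.where(r <= a, inner, w * a**3 / r**2)

# psi(r,0) = psi_s(r,0) [1 + eps*xi(r)], with white Gaussian noise xi.
rng = np.random.default_rng(0)
N, box, eps = 64, 8.0, 0.05                     # illustrative grid and noise level
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
r = np.sqrt(X**2 + Y**2 + Z**2) + 1e-12         # avoid division by zero at r = 0
psi_s = Psi(r) * X / r                          # Psi(r) sin(theta) cos(phi)
psi0 = psi_s * (1.0 + eps * rng.standard_normal(psi_s.shape))
```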
It can be seen from Fig.~\ref{fig3} that for the given parameters of the vortex soliton, it evolves without significant distortion of its shape over many periods of rotation $T=2\pi /\omega_{NL}$. In the absence of initial noise ($\epsilon=0$), the soliton evolves for an arbitrarily long time without any shape distortion (the simulation was carried out up to times $t=400$); in fact, we did not observe any distortions at these times even for, e.g., $\epsilon=0.01$. Such behavior was observed for various values of the soliton velocity $u$ and the parameter $a$. Then we studied the stability of solution (\ref{final_solution}) with a radially symmetric (monopole) part at different values of the amplitude of this part. Recall that the magnitude of this amplitude determines the magnitude of the vorticity jump $p$ at the cut boundary $a$. We also assume that there is no additional $z$-antisymmetric part. Here we assume that there is no additional initial noise disturbance, and then the initial condition for Eq. (\ref{HM3D}) is $\psi(\mathbf{r},0)=\Psi_{0}(r)+\Psi(r)\sin\theta\cos\varphi$, where $\Psi_{0}(r)$ and $\Psi(r)$ are determined by Eqs. (\ref{psi0}) and (\ref{psi}), respectively. Numerical simulation shows that this solution is unstable. With a sufficiently large amplitude of the radially symmetric part (and, accordingly, of the vorticity jump $p$), such a soliton is destroyed almost immediately. The destruction of the soliton with $p=-10$ is shown in Fig.~\ref{fig4}. On the other hand, as seen in Fig.~\ref{fig5}, if the amplitude of the monopole part is not too large and $p=-1$, the soliton retains its original shape for quite a long time, although the appearance of an insignificant radiated wave wake is already noticeable at the time $t=50$. Finally, Fig.~\ref{fig6} shows the dynamics of a soliton without a radially symmetric part, but with an additional $z$-antisymmetric part of a sufficiently large amplitude with $\mu=10$.
Once again we emphasize that for such solutions all second derivatives and the vorticity are continuous. It can be seen that the soliton moves without any distortion of its shape at times $t=20$ and $t=60$. Nevertheless, such solitons, in contrast to the solitons with $p=0$ and $\mu=0$, turn out to be unstable. For the parameter value $\mu=40$, the soliton is destroyed (the corresponding figure is not shown). Note that the behavior of all three types of solitons considered is consistent with the results of Ref.~\cite{Lashkin2017}, where head-on and overtaking collisions between three-dimensional vortex solitons were studied in a similar model. As noted above, in contrast to Ref.~\cite{Lashkin2017}, the 3D vortex solitons of Eq. (\ref{HM3D}) can move only in one direction. \section{Conclusion} In this paper, we have derived the system of 3D nonlinear equations describing the dynamics of disturbances in a weakly nonuniform rotating self-gravitating fluid under the assumption that the characteristic frequencies of disturbances are small compared to the rotation frequency. The nonlinear terms in this system have the form of a Poisson bracket, which is quite common in problems of nonlinear geophysics. The linear dispersion is due to the weak inhomogeneity. In the linear approximation, we obtained an instability criterion and showed that the region of instability in terms of wave numbers expands significantly in comparison with the classical Jeans criterion for a homogeneous nonrotating system. In the case when the characteristic perturbation lengths are much larger than the Jeans length, the resulting system of nonlinear equations can be reduced to the system previously obtained in \cite{Horton1983}.
The analytical solution of this system is obtained by the well-known Larichev-Reznik method for finding 2D nonlinear dipole solutions in the physics of atmospheres of rotating planets and is actually the 2D Larichev-Reznik vortex dipole soliton in the form of a cyclone-anticyclone pair (in fact, the solution is a pseudo-three-dimensional vortex tube). In the opposite short-wavelength case of perturbation lengths small compared to the Jeans length, we obtained the original 3D nonlinear equation resembling a 3D analogue of the 2D CHM equation. We have obtained analytical 3D soliton solutions of this equation by generalizing the Larichev-Reznik procedure to the 3D case. The solution is a vortex soliton moving with a constant velocity in the direction perpendicular to the direction of the inhomogeneity and to the axis of rotation. The main part of the solution is $x$-antisymmetric, that is, antisymmetric along the direction of the inhomogeneity, and is a three-dimensional dipole in the form of a cyclone-anticyclone pair. In addition to the basic 3D $x$-antisymmetric part, the solution may also contain radially symmetric (monopole) and/or antisymmetric along the rotation axis ($z$-axis) parts with arbitrary amplitudes. It is important to note that these superimposed parts cannot exist without the main part (carrier), although the amplitudes of these superimposed parts can significantly exceed the amplitude of the carrier. For example, if the amplitude of the radially symmetric part is much greater than the amplitude of the antisymmetric parts, then the solution looks like a three-dimensional monopole soliton. We have studied the stability of the obtained 3D analytical solutions by numerically simulating the evolution of these soliton solutions in the framework of the original dynamic equation. The 3D vortex soliton without the superimposed parts turns out to be extremely stable.
It moves without distortion and retains its shape even in the presence of a sufficiently strong initial noise disturbance. The solitons with radially symmetric and/or $z$-antisymmetric parts are unstable, although at sufficiently small amplitudes of these superimposed parts, the soliton retains its shape for a very long time. \section{ACKNOWLEDGMENTS} V.M.L. and O.K.C. were supported by the Targeted Complex Program of the National Academy of Sciences of Ukraine in Plasma Physics. O.K.C. was also supported by the Thematic Program of the Wolfgang Pauli Institute 'Models in Plasma, Earth and Space Sciences'. \section{Appendix} In problems of nonlinear geophysics, where the nonlinearity is present in the form of the Jacobian (Poisson bracket), one of the most reliable numerical methods for representing such a nonlinearity is the Arakawa scheme \cite{Arakawa1966}. It is based on the finite difference method. Arbitrary functions $p(x,y,z)$ and $q(x,y,z)$ are represented by their values at the discrete set of points $x_{i}$, $y_{j}$ and $z_{k}$; we write $p_{i,j,k}$ for $p(x_{i},y_{j},z_{k})$, and the same for $q$. Since the nonlinearity in the form of the Poisson bracket $\{p,q\}$ does not contain derivatives with respect to $z$, the generalization of the well-known 2D Arakawa scheme to the 3D case is almost trivial. The corresponding nonlinearity is written as \begin{equation} \{p,q\}=\frac{1}{3}(J^{++}+J^{\times +}+J^{+\times}), \end{equation} where \begin{gather} J^{++}=\frac{1}{4h_{x}h_{y}}\left[(p_{i+1,j,k}-p_{i-1,j,k})(q_{i,j+1,k}-q_{i,j-1,k})\right. \nonumber \\ \left. -(p_{i,j+1,k}-p_{i,j-1,k})(q_{i+1,j,k}-q_{i-1,j,k})\right], \\ J^{\times +}=\frac{1}{4h_{x}h_{y}}\left[q_{i,j+1,k}(p_{i+1,j+1,k}-p_{i-1,j+1,k}) \right. \nonumber \\ \left. -q_{i,j-1,k}(p_{i+1,j-1,k}-p_{i-1,j-1,k}) \right. \nonumber \\ \left. -q_{i+1,j,k}(p_{i+1,j+1,k}-p_{i+1,j-1,k}) \right. \nonumber \\ \left.
+q_{i-1,j,k}(p_{i-1,j+1,k}-p_{i-1,j-1,k}) \right], \\ J^{+\times}=\frac{1}{4h_{x}h_{y}}\left[p_{i+1,j,k}(q_{i+1,j+1,k}-q_{i+1,j-1,k}) \right. \nonumber \\ \left. -p_{i-1,j,k}(q_{i-1,j+1,k}-q_{i-1,j-1,k}) \right. \nonumber \\ \left. -p_{i,j+1,k}(q_{i+1,j+1,k}-q_{i-1,j+1,k}) \right. \nonumber \\ \left. +p_{i,j-1,k}(q_{i+1,j-1,k}-q_{i-1,j-1,k}) \right], \end{gather} where $h_{x}$ and $h_{y}$ are the corresponding grid spacings in the $x$ and $y$ directions.
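The scheme above can be transcribed directly with periodic boundary conditions; the sketch below is our own illustration (array names and grid sizes are illustrative), using `np.roll` for the index shifts, which makes the same code act on every $z$-slice at once. The averaged bracket $(J^{++}+J^{\times+}+J^{+\times})/3$ then satisfies the discrete conservation laws $\sum\{p,q\}=\sum p\{p,q\}=\sum q\{p,q\}=0$ to machine precision, which is the point of the Arakawa construction.

```python
import numpy as np

# Direct transcription of (J++ + Jx+ + J+x)/3 with periodic boundaries;
# np.roll implements the index shifts, acting identically on each z-slice.
def arakawa(p, q, hx, hy):
    def s(f, i, j):   # s(f,i,j)[a,b,...] = f[a+i, b+j, ...] (periodic)
        return np.roll(f, shift=(-i, -j), axis=(0, 1))
    Jpp = ((s(p, 1, 0) - s(p, -1, 0)) * (s(q, 0, 1) - s(q, 0, -1))
           - (s(p, 0, 1) - s(p, 0, -1)) * (s(q, 1, 0) - s(q, -1, 0)))
    Jxp = (s(q, 0, 1) * (s(p, 1, 1) - s(p, -1, 1))
           - s(q, 0, -1) * (s(p, 1, -1) - s(p, -1, -1))
           - s(q, 1, 0) * (s(p, 1, 1) - s(p, 1, -1))
           + s(q, -1, 0) * (s(p, -1, 1) - s(p, -1, -1)))
    Jpx = (s(p, 1, 0) * (s(q, 1, 1) - s(q, 1, -1))
           - s(p, -1, 0) * (s(q, -1, 1) - s(q, -1, -1))
           - s(p, 0, 1) * (s(q, 1, 1) - s(q, -1, 1))
           + s(p, 0, -1) * (s(q, 1, -1) - s(q, -1, -1)))
    return (Jpp + Jxp + Jpx) / (12.0 * hx * hy)

# Smooth periodic test fields; the exact bracket is cos(x) cos(y) cos(z).
N = 64
h = 2 * np.pi / N
x = np.arange(N) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
p, q = np.sin(X) * np.cos(Z), np.sin(Y)
J = arakawa(p, q, h, h)
err = np.abs(J - np.cos(X) * np.cos(Y) * np.cos(Z)).max()   # O(h^2) accuracy
```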
\section{Introduction} Bilayer graphene\cite{novoselov2005,novoselov2006,mccann2006,guinea2006,latil2006,partoens2006,partoens2007,koshino2009} has been intensely studied in the last few years, mainly due to the tunability of its band gap by an electric field. The two layers can be oriented differently, but what is most commonly studied is the AB or Bernal stacking, in which the top layer is shifted so that half of its atoms lie directly over the centers of the hexagons of the lower graphene sheet. The band structure no longer has Dirac cones and the charge carriers in bilayer graphene are expected to be massive Dirac fermions. On the other hand, for spin-orbit coupled materials like silicene (and other materials like germanene and stanene)\cite{ezawa2011,drummond2012,liu12011,liu22011,ezawa2015}, which have a buckled structure, tunability is already present in a monolayer, and it is well known that the band structure can be tuned between band insulators and topological insulators through a metallic phase by applying an electric field. However, bilayer silicene\cite{ezawa2012,fu2014,padilha2015,zhang2015,do2017} (and bilayers of other spin-orbit coupled materials) have even richer physical properties due to the interplay of buckling and stacking, and are beginning to be studied in detail. In the presence of interlayer spin-orbit coupling and Rashba spin-orbit coupling, bilayer silicene lacks topologically protected edge states and is thus, unlike monolayer silicene, a topologically trivial insulator. However, it has been argued that many of its physical properties are similar to those of a topological insulator and that it should be considered a quasi-topological insulator\cite{ezawa2012}.
At a critical electric field, the quasi-topological phase of bilayer silicene makes a transition to a band insulator, but due to trigonal warping, there are several such critical electric fields at which the gap closes, and the band structure is thus controllable by the electric field to an even larger extent than in bilayer graphene. The electronic structure and properties of bilayer silicene are also strongly stacking dependent\cite{fu2014,padilha2015}, and recent work has also investigated the effect of perpendicular electric and magnetic fields on bilayer silicene. The study of light-matter interaction has become increasingly popular in the last few years, and new developments in laser studies have led to the possibility of Floquet engineering\cite{goldman2014,bukov2015}, which is the generation of new Hamiltonians that do not exist in static systems. In particular, besides experiments in cold atom\cite{jotzu2014} and photonic systems\cite{rechtsman2013}, with the emergence of new topological phases in condensed matter systems\cite{wang2013}, the field of Floquet topological insulators\cite{oka2009,kitagawa2011,lindner2011,dora2012,rudner2013,kundu2014,usaj2014,titum2015,farrell2015,kundu2016,titum2016,mikami2016,mohan2016,klinovaja2016} has shot into prominence in recent times. The possibility of tuning the band gap in graphene and bilayer graphene using light greatly increases potential applications. It has also been shown\cite{varlet2015,shtyk2016} that bilayer graphene is an ideal system for studying Lifshitz transitions, because the parabolic dispersion in Bernal stacked bilayer graphene is not protected by the crystal symmetry and is trigonally deformed at the lowest energies due to next-nearest-neighbour interlayer hopping, giving rise to four Dirac cones. Under an inter-layer voltage bias, the Dirac points can move and cause a Lifshitz transition, i.e., a change in the topology of the Fermi surface.
In this paper, we study the effect of shining light on bilayer graphene as well as on bilayer silicene and other spin-orbit coupled Dirac materials, in the high frequency limit, using the Brillouin-Wigner perturbation approach. It was shown recently\cite{shtyk2016} that three van Hove saddles merge at a multi-critical Lifshitz point in bilayer graphene, and that four different phases with different Fermi surface topologies occur. One of the main results that we obtain in this paper is the effect of driving on the phase diagram with these four different Fermi surface topologies. The driving also induces new topological phases with different Chern numbers and we obtain those phases as well. Although the effect of irradiation on bilayer graphene\cite{morell2012,dallago2017} has been studied recently, here we focus specifically on the Lifshitz transitions and how they can be affected by driving. Moreover, we include the effects of buckling as well as spin-orbit coupling, so that other materials such as silicene, germanene, etc., for which far fewer studies exist, can also be incorporated. The plan of our paper is as follows. In Sec. II, we briefly review the Lifshitz transition in bilayer graphene to set the notation. In Sec. III, we include the intrinsic spin-orbit coupling and the buckling, which are the hallmarks of silicene and other spin-orbit coupled materials, and emphasize how the Lifshitz transition in these materials differs from that in graphene. In Sec. IV, we include periodic driving and study the Brillouin-Wigner (B-W) perturbation expansion, without and with spin-orbit coupling, in bilayer systems, generalizing earlier results \cite{mikami2016,mohan2016} on single layers, and obtain the effective static Hamiltonian. In Sec. V, we obtain the phase diagrams and show how the Lifshitz transition gets modified in the presence of light for both bilayer graphene and bilayer spin-orbit coupled materials. We end with a small discussion in Sec.
VI, where we briefly discuss how the Lifshitz transitions may be observed in a real system. \section{Bilayer graphene and the Lifshitz transition} \begin{figure}[!ht] \centering \includegraphics[width=8cm]{bilayer1.pdf} \caption{Bilayer model viewed from above. The small dots (red and blue) denote the carbon atoms (on A and B sub-lattices) in one layer (U or up) whereas the big dots denote the same in the other layer (D or down).} \label{Fig:bilayer} \end{figure} The Hamiltonian of bilayer graphene can be written as the sum of the Hamiltonians of the individual layers and the inter-layer couplings between them. There are several different possible stackings including layers which are twisted with respect to one another. However, here we shall work with the most common Bernal or AB stacking. Although some details of the phase diagram may change in the other cases, we expect that the general features of the phase diagram will essentially remain the same. The Hamiltonian for Bernal stacking is given by \begin{equation} H_{BL}=H^U_{SL}+H^D_{SL}+H_{\text{inter}} \label{Eqn:Si_biH} \end{equation} where the $H^U_{SL}$, $H^D_{SL}$ are the two single layer Hamiltonians for the up and down layers given by \begin{eqnarray} H_{\text{SLG}}=-t \sum_{\langle i j\rangle,\sigma} a^\dagger_{i,\sigma} b_{j,\sigma} + h.c \label{Eqn:SGH} \end{eqnarray} and $H_{\text{inter}}$ is the interlayer coupling. The index $\sigma$ refers to real spin and is usually dropped in the graphene context, since the Hamiltonian can be written independently for each spin. 
The sub-lattices of the two layers are denoted as $A,B$ for the up layer and $\tilde{A}, \tilde{B}$ for the down layer, and the Hamiltonian for the inter-layer hopping $H_{\text{inter}}$ is given by \begin{eqnarray} H_{\text{inter}}&=&t_\perp \sum_{i\in \tilde{A}, j\in B} \left( \tilde{a}^\dagger_{i\sigma}b_{j\sigma} +b^\dagger_{j\sigma}\tilde{a}_{i\sigma}\right)\nonumber\\ &+& t_3 \sum_{i\in A, j\in \tilde{B}} \left(a^\dagger_{i\sigma} \tilde{b}_{j\sigma} + \tilde{b}^\dagger_{j\sigma} a_{i\sigma}\right)~. \label{Eqn:Hinter} \end{eqnarray} As shown in Fig.\ref{Fig:bilayer}, there are two distinct types of hopping terms between the layers. The hopping term $t_\perp$ is between sub-lattices $B$ and $\tilde{A}$, which are separated by $2L$ in the direction perpendicular to the planes, whereas the interlayer hopping term $t_3$ is between sub-lattices $A$ and $\tilde{B}$, separated by $2L$ in the perpendicular direction and also by one bond length along the planes. The magnitudes of both interlayer hopping terms are $t_3, t_\perp \sim 0.1 t$, where $t_\perp\gtrsim t_3$. Here, we have not considered the interlayer spin-orbit coupling. The total low-energy Hamiltonian in the vicinity of the Dirac points (for each spin) can be obtained in momentum space and is given by \begin{widetext} \begin{align} \psi^\dagger_{q} H_\eta \psi_q =\psi^\dagger_{q}\begin{pmatrix} LE_z &\frac{3a t}{2} \left(\eta q_x-iq_y\right)&0&-\frac{3a t_3}{2} \left(\eta q_x+iq_y\right) \\ \frac{3a t}{2} \left(\eta q_x+iq_y\right) & LE_z&t_\perp&0 \\ 0&t_\perp & - LE_z &\frac{3a t}{2} \left(\eta q_x-iq_y\right) \\ -\frac{3a t_3}{2} \left(\eta q_x-iq_y\right) &0&\frac{3a t}{2} \left(\eta q_x+iq_y\right) & - LE_z \\ \end{pmatrix} \psi_q \label{Eqn:H_matrix} \end{align} \end{widetext} where $\psi_{q}=\begin{pmatrix} a_{q}&b_{q} &\tilde{a}_{q}& \tilde{b}_{q} \\ \end{pmatrix}^T$, $\eta=\pm1$ for $K,K'$ is the pseudo-spin index and $a$ is the nearest-neighbour distance.
An electric field applied perpendicular to the layers adds a potential difference between the two layers. The four eigenvalues of the above Hamiltonian are given by \begin{widetext} \begin{eqnarray} \epsilon_{q,\eta}&=&\pm \frac{1}{\sqrt{2}}\left(2L^2E^2_z+\frac{9a^2q^2}{4}(2t^2+t^2_3)+t^2_\perp \right.\nonumber\\ &\pm&\left.\left(t^4_\perp+9a^2 q^2 t^2(4L^2E^2_z+t^2_\perp)+\frac{9a^2q^2t^2_3}{4}(9a^2q^2(t^2+\frac{t^2_3}{4})-2t^2_\perp) -27\eta a^3t^2t_3 t_\perp q_x(q^2_x-3q^2_y) \right)^{1/2}\right)^{1/2} \label{Eqn:bigev1} \end{eqnarray} \end{widetext} where $q^2=q^2_x+q^2_y$. Precisely at the $K$ and $K^\prime$ points, the four eigenvalues reduce to \begin{eqnarray} &&\epsilon_{\eta}=\pm LE_z, ~ \pm \sqrt{L^2E^2_z+t^2_\perp}. \label{Eqn:epsiloneta} \end{eqnarray} The linear dispersion of the two graphene layers becomes quadratic on the addition of the coupling $t_\perp$. Switching on the $t_3$ term adds trigonal warping to the bands near the $K(K')$ points. This results in the quadratic band dividing into four Dirac cones: one at the $K(K')$ point and three satellite cones around it. The three satellite Dirac points are situated at $\left(\frac{\eta t_\perp t_3}{3a t^2},\pm\frac{t_\perp t_3}{\sqrt{3}at^2}\right)$ and $\left(-\frac{2t_3t_\perp}{3a\eta t^2},0\right)$. These points can be easily computed by checking for nodes at non-zero momenta along the $q_y=0$ axis for $E_z=0$ and using the trigonal symmetry. Each of the satellite Dirac points is separated from the $K$ ($K^\prime$) point by a van-Hove singularity, which here is a maximum. At the satellite Dirac points, Eq.\ref{Eqn:bigev1} takes the form, \begin{eqnarray} \epsilon_{\text{satellite}}&=&\pm\left(2L^2 E^2_z t^4+(t^2+t^2_3)^2 t^2_\perp \right. \nonumber\\ &\pm& \left. t_\perp \left(16L^2 E^2_z t^6 t^2_3+(t^2+t^2_3)^4 t^2_\perp \right)^{1/2} \right)^{1/2}.
\label{Eqn:epsilonsatellite} \end{eqnarray} Note that two of the values in both Eq.\ref{Eqn:epsiloneta} and Eq.\ref{Eqn:epsilonsatellite} are zero when $E_z=0$. As we increase the value of $E_z$, the system becomes gapped and the van-Hove singularity moves towards the $K$ ($K^\prime$) points. The four Dirac cones are replaced by electron-like pockets here. At a critical $E_z$ given by $LE^\text{critical}_z=t_\perp t_3/ 2t$, the van-Hove singularities merge, causing a Lifshitz multi-critical point\cite{shtyk2016}. For $E_z>E^\text{critical}_z$, the central electron-like pocket becomes a hole-like one while the other three remain electron-like. In Fig.\ref{Fig:bare_lifshitz}, we show the phase diagram with phases of different Fermi surface topologies, similar to the one shown in Ref.\onlinecite{shtyk2016}. However, we choose to show it as a function of the external perpendicular electric field $E_z$ and the Fermi energy. \begin{figure}[!ht] \includegraphics[width=9cm]{ev_barebig.pdf} \caption{The Lifshitz phase diagram of bilayer graphene. The different phases are plotted with an increasing electric field. The shape of the Fermi surface in each phase is shown in grey color. The black dot at $E^c_z$ is the Lifshitz multi-critical point. The values of inter-layer couplings are $t_\perp=0.5t$ and $t_3=0.45t$ in this phase diagram for illustrative purposes.} \label{Fig:bare_lifshitz} \end{figure} To see the different Fermi surface topologies with an increasing electric field, we have to include the chemical potential $\mu\,(\varepsilon_F)$ term in Eq.\ref{Eqn:bigev1}, where it has been set to zero. The wavy striped region in Fig.\ref{Fig:bare_lifshitz} corresponds to the Fermi energy or chemical potential lying in the gap between the conduction and valence bands. An increase in the electric field pushes the valence and conduction bands away from each other, increasing the striped region.
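The closed forms above are easy to cross-check numerically. The sketch below is our own illustration (parameter values follow the illustrative couplings $t_\perp=0.5t$, $t_3=0.45t$ of Fig.~\ref{Fig:bare_lifshitz}, with $a=L=t=1$): it diagonalizes the matrix of Eq.~\ref{Eqn:H_matrix} for $\eta=+1$, verifies Eq.~\ref{Eqn:epsiloneta} at the $K$ point, and confirms that for $E_z=0$ the gap closes at the satellite Dirac point $\left(-2t_3t_\perp/3at^2,0\right)$.

```python
import numpy as np

# Eq. (Eqn:H_matrix) for eta = +1, in units a = L = t = 1, with the
# illustrative couplings t_perp = 0.5, t3 = 0.45 used in the figure.
def H_bilayer(qx, qy, Ez, tp=0.5, t3=0.45):
    v, v3 = 1.5, 1.5 * t3
    qm, qp = qx - 1j * qy, qx + 1j * qy
    return np.array([[ Ez,       v * qm,  0.0,     -v3 * qp],
                     [ v * qp,   Ez,      tp,       0.0    ],
                     [ 0.0,      tp,     -Ez,       v * qm ],
                     [-v3 * qm,  0.0,     v * qp,  -Ez     ]])

# Eq. (Eqn:epsiloneta): at q = 0 the spectrum is +/-L Ez, +/-sqrt(L^2 Ez^2 + t_perp^2).
Ez = 0.3
ev = np.sort(np.linalg.eigvalsh(H_bilayer(0.0, 0.0, Ez)))
assert np.allclose(ev, np.sort([Ez, -Ez, np.hypot(Ez, 0.5), -np.hypot(Ez, 0.5)]))

# For E_z = 0 the gap closes at the satellite Dirac point (-2 t3 t_perp / 3at^2, 0).
qs = 2 * 0.45 * 0.5 / 3.0
assert np.abs(np.linalg.eigvalsh(H_bilayer(-qs, 0.0, 0.0))).min() < 1e-12
```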
For $E_z=0$, shifting the Fermi level upwards will result in four disconnected Fermi surfaces, one from the Dirac cone and one from each of the three satellite cones. This phase forms the red region in the phase diagram. As we increase the external electric field from zero, the distance between the valence and conduction bands at the $K/K'$ points increases more than at the satellite points. When the Fermi level is between the Dirac cone and the satellite cones, the Fermi surface consists of three disconnected regions. This area of the phase diagram is shown in green. The height of the line (measured from zero) separating the blue and red regions in the phase diagram is the height of the van-Hove singularity measured from the point half-way between the valence and conduction bands. Above this phase boundary, the Fermi energy has increased to the level at which the Dirac cone and the satellite cones merge into a single Fermi surface with the topology of a single region. As we increase the external electric field, the three van-Hove singularities move closer to the $K/K'$ point and merge at a critical electric field $E^c_z$. A multi-critical Lifshitz point occurs in the phase diagram at $E^c_z$, where all the different Lifshitz phases meet. A further increase in electric field causes the electron-like pocket at the $K/K'$ point to become hole-like. This phase is the mustard region with the topology of an annulus. The Lifshitz phase transition between the blue and the red phases in the phase diagram is of the 'neck-narrowing type'. As the name indicates, here the Fermi surface gets a new topology by pinching off a region. The transition between the green and mustard region is also of the same type. The phase transitions between the red and green phases and the one between the blue and mustard phases are of the 'pocket-vanishing type'. Here the topology of the Fermi surface changes when an electron-like pocket becomes hole-like or vice versa.
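The pocket counting described above can also be reproduced directly: labeling the connected components of the occupied region $\epsilon(\mathbf{q})<\mu$ on a momentum grid around $K$ gives four pockets for $\mu$ below the van Hove energy (of order $t_\perp t_3^2/4t^2$ in a two-band estimate) and a single region above it. The chemical potentials, couplings, and grid below are our illustrative choices, not values taken from the paper.

```python
import numpy as np
from scipy import ndimage

# Same matrix as Eq. (Eqn:H_matrix) (eta = +1, a = L = t = 1, t_perp = 0.5,
# t3 = 0.45); the mu values and momentum window are our illustrative choices.
def H(qx, qy, Ez, tp=0.5, t3=0.45):
    v, v3 = 1.5, 1.5 * t3
    qm, qp = qx - 1j * qy, qx + 1j * qy
    return np.array([[ Ez,       v * qm,  0.0,     -v3 * qp],
                     [ v * qp,   Ez,      tp,       0.0    ],
                     [ 0.0,      tp,     -Ez,       v * qm ],
                     [-v3 * qm,  0.0,     v * qp,  -Ez     ]])

qs = np.linspace(-0.3, 0.3, 241)
band = np.array([[np.linalg.eigvalsh(H(qx, qy, 0.0))[2]   # lowest conduction band
                  for qy in qs] for qx in qs])

def n_pockets(mu):
    # number of connected components of the occupied region eps(q) < mu
    return ndimage.label(band < mu)[1]
```

For $E_z=0$, `n_pockets` returns 4 well below the van Hove energy and 1 well above it, reproducing the red and blue Fermi-surface topologies.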
\section{Extension to spin-orbit coupled bilayer materials} The difference between graphene and other two-dimensional (2D) materials like silicene or germanene is essentially due to the larger size of their atoms, which causes buckling of the lattice as well as the presence of an intrinsic spin-orbit coupling. In this paper, we study a general material with arbitrary values for spin-orbit coupling and buckling distance, so that the results can be applied to any of these materials. In the rest of the work, the term silicene is used to represent such a general material. \begin{figure} \centering \vspace{0.3cm} \includegraphics[width=7cm]{bilayerd.pdf} \caption{The different sub-lattices in bilayer silicene viewed horizontally layer by layer. The small and big dots denote the atoms in the upper (U) and lower (D) layers respectively.} \label{Fig:buck} \end{figure} For a single layer, the Hamiltonian of these materials has additional terms beyond the terms in Eq.\ref{Eqn:SGH}, given by \begin{eqnarray} H_{\text{SLS-extra}} &=& \frac{i\lambda\sigma}{3\sqrt{3}}\sum_{\langle\langle i j \rangle \rangle,\sigma} \nu_{ij} (a^\dagger_{i,\sigma} a_{j,\sigma}+b^\dagger_{i,\sigma} b_{j,\sigma})\nonumber\\ &+& \sum_{i,\sigma} lE_z (a^\dagger_{i,\sigma} a_{i,\sigma}-b^\dagger_{i,\sigma} b_{i,\sigma}). \label{Eqn:SSH} \end{eqnarray} Here, $\lambda$ is the spin-orbit coupling between the sites of the same sub-lattice and $2l$ is the buckling distance as shown in Fig.\ref{Fig:buck}. The spin index $\sigma=\pm1$ corresponds to $\uparrow/\downarrow$, and $\nu_{ij}=\pm1$ depending on whether the path taken from $j$ to $i$ is clockwise or counter-clockwise. An external electric field $E_z$ is applied perpendicular to the layers, which creates a potential difference between the $A$ and $B$ sub-lattices and also between the layers. The Hamiltonian of bilayer silicene has earlier been studied in Ref.\onlinecite{ezawa2012} and its band structure and edge modes were obtained.
Here, we review this for two reasons: first, to understand Lifshitz transitions in this model, which have not been studied earlier, and second, to set our notation for the coupling to photons in the next section. We have two copies of the single layer Hamiltonian given by \beq H_{SLS} = H_{SLG} + H_{SLS-extra} \eeq and the inter-layer coupling as given in Eq.\ref{Eqn:Hinter}. As in the case of bilayer graphene, there are two hopping terms between the layers, which are separated by a distance $2L$ (see Fig.\ref{Fig:buck}). The hopping term $t_\perp$ is between sub-lattices $B$ and $\tilde{A}$, which are separated by $2(L-l)$ in the direction perpendicular to the planes. The other interlayer hopping term $t_3$ is between sub-lattices $A$ and $\tilde{B}$, separated by $2(L+l)$ in the perpendicular direction. Due to the individual buckling of both layers, all the four sub-lattices in bilayer silicene are on four different planes and therefore at four different potentials when $E_z$ is applied. The magnitudes of both interlayer hopping terms $t_3,t_\perp$ are $\sim 0.1 t$, and furthermore, $t_\perp\gtrsim t_3$. Note that the interlayer spin-orbit coupling has been set to zero, so the model still has spin conservation. However, this conservation is not protected by a symmetry and so bilayer silicene is a `quasi-topological insulator'\cite{ezawa2012}. \begin{figure}[!ht] \includegraphics[width=9cm]{bisicherninset.pdf} \caption{The Chern number $C_\uparrow$, plotted as a function of $(L+l)E_z$ for $\uparrow$ spin $(\sigma=1)$ in bilayer silicene. Due to the time-reversal symmetry of the model, $C_\downarrow=- C_\uparrow$.
The values of inter-layer couplings are $t_\perp=0.12t$, $t_3=0.1t$ and $l=L/4$.} \label{Fig:bilayersi} \end{figure} \begin{figure*} \centering \hspace{-1cm} \includegraphics[height=5cm,width=18cm]{abcd.pdf} \caption{The plots from left to right corresponds to panels (a)-(d) in Fig.\ref{figbisi} and shows the evolution from a single Dirac cone at $K$ point to three Dirac cones on tuning the external electric field. This region is marked by the Chern number $-1$ in the inset of Fig.\ref{Fig:bilayersi} } \label{bisiFST} \end{figure*} The low-energy Hamiltonian expanded around the $K$ and $K^\prime$ points in the Brillouin zone then reads, \begin{widetext} \begin{align} H_\eta =\psi^\dagger_{q}\begin{pmatrix} \eta\sigma\lambda+(L+l)E_z &\frac{3a t}{2} \left(\eta q_x-iq_y\right)&0&-\frac{3a t_3}{2} \left(\eta q_x+iq_y\right) \\ \frac{3a t}{2} \left(\eta q_x+iq_y\right) & -\eta\sigma\lambda+ (L-l)E_z&t_\perp&0 \\ 0&t_\perp & \eta\sigma\lambda -(L-l)E_z &\frac{3a t}{2} \left(\eta q_x-iq_y\right) \\ -\frac{3a t_3}{2} \left(\eta q_x-iq_y\right) &0&\frac{3a t}{2} \left(\eta q_x+iq_y\right) & -\eta\sigma\lambda -(L+l)E_z \\ \end{pmatrix} \psi_q \label{Eqn:Hbisi_matrix} \end{align} \end{widetext} with the same $\psi_q,\psi_q^\dagger$ defined earlier. Note that because of the spin-orbit coupling, this Hamiltonian has an explicit dependence on the spin index $\sigma$. So for each value of $\sigma=\uparrow, \downarrow$, we get a specific $H_\eta$. Note also that this reduces to the Hamiltonian in Ref.\onlinecite{ezawa2012} in the appropriate limit. 
We can now obtain the eigenvalues of this Hamiltonian as \begin{widetext} \begin{eqnarray} \epsilon_{q,\eta,\sigma}&=&\pm \frac{1}{\sqrt{2}}\left(2(L^2+l^2)^2E^2_z+\frac{9a^2q^2}{4}(2t^2+t^2_3)+t^2_\perp+2\lambda^2 +4lE_z\eta \sigma \lambda \right.\nonumber\\ &\pm& \left.\left(t^4_\perp+9a^2 q^2 t^2(4L^2E^2_z+t^2_\perp)+\frac{9a^2q^2t^2_3}{4}(9a^2q^2(t^2+\frac{t^2_3}{4})-2t^2_\perp) -27\eta a^3t^2t_3 t_\perp q_x(q^2_x-3q^2_y) \right.\right.\nonumber\\ &+&\left.\left.4LE_z (\eta\sigma\lambda+lE_z) \left(\frac{9}{2}a^2q^2t^2_3-2t^2_\perp\right)+16L^2E^2_z(\eta\sigma\lambda+lE_z)^2\right)^{1/2}\right)^{1/2} \label{Eqn:bisiev1} \end{eqnarray} \end{widetext} \begin{figure}[!ht] \includegraphics[width=9cm,height=6cm]{bisi_lifshitz1.pdf} \caption{ The Lifshitz phase diagram of bilayer silicene. The four panels show the evolution of the band structure on the $q_y$ axis (for $q_x=0$) near $K$ as the electric field $E_z$ is varied. Panels (a) and (d) have the band crossings at $K$ and satellite Dirac point respectively, while panels (b)-(c) are band structure evolutions between them. The different Lifshitz phases for varying chemical potential values are marked by different colored regions in the phase diagram. } \label{figbisi} \end{figure} Comparing Eqs. \ref{Eqn:bigev1} and \ref{Eqn:bisiev1}, we note that they differ by terms proportional to the buckling as well as by the spin-orbit coupling terms. Moreover, the eigenvalues are explicitly spin-dependent. At the $K/K'$ points, the eigenvalues have the form, \begin{eqnarray} \epsilon_{\eta,\sigma}&=&\pm (\eta\sigma\lambda+(L+l)E_z),\nonumber\\ &&\pm \sqrt{(\eta\sigma\lambda-(L-l)E_z)^2+t^2_\perp}. \label{Eqn:bisiev2} \end{eqnarray} The most important difference between bilayer graphene and silicene is the gap due to the spin-orbit coupling. Like in the case of monolayer silicene, this gap can be closed by tuning the external electric field and leading to Chern number changes which are depicted in Fig.\ref{Fig:bilayersi}. 
As the strength of the external electric field increases from zero, the Chern number changes four times, corresponding to four gap closings in the Brillouin zone. The first change in the Chern number occurs when the gap closes at the $K/K^\prime$ points when $(L+l)E_z=-\eta\sigma\lambda$, evident from Eq.\ref{Eqn:bisiev2}. The Chern number changes from $-2$ to $-1$ here. The second one occurs when the gap closes at the three (expected from trigonal symmetry) satellite Dirac points around the $K/K^\prime$ points, as $E_z$ is increased slightly. The Chern number goes from $-1$ to $0$ as shown in the inset of Fig.\ref{Fig:bilayersi}. The evolution of the bands from when the gap closes at the $K$ point (for the $\uparrow$ spin sector) to when the gap closes at the satellite Dirac points as a function of increasing $E_z$ is depicted in Fig.\ref{bisiFST}. This is also the region where we choose to study the Lifshitz transition in this model. The third and fourth gap closings happen for much larger values of $E_z$ elsewhere in the Brillouin zone, which are not accessible in the low energy Hamiltonian in Eq.\ref{Eqn:Hbisi_matrix}. In Fig.\ref{figbisi}, we numerically show the evolution of the gaps at the $K$ point and at the associated satellite point situated along the $q_x=0$ axis. The two other satellite Dirac points are found using the trigonal symmetry. We now try to understand the Lifshitz transition in bilayer silicene and the changes that occur in the Fermi surface topology. We have chosen to show the results for the spin $\uparrow$ sector. The results for the spin $\downarrow$ sector are qualitatively similar. When the chemical potential and the applied electric field are both set to zero, the system has a gap of $O(\lambda)$. On increasing the strength of the applied electric field, the gap closes at the Dirac point. This gapless band is plotted in Fig.\ref{figbisi}(a).
By increasing the Fermi level from zero, we obtain a Fermi surface with the topology of a disc, i.e., a single connected region. This Lifshitz phase is marked by purple colored regions in Fig.\ref{figbisi}(a)-(c). Increasing the electric field further gaps out the Dirac cone at the $K$ point (for $\mu=0$) while simultaneously creating electron pockets at the three satellite Dirac points. Fig.\ref{figbisi}(b)-(c) depicts two such cases. Depending on the value of the Fermi energy, we can get two additional Lifshitz phases along with an insulating phase here. The striped regions in Fig.\ref{figbisi}(b)-(c) are insulating phases where the Fermi energy lies below both types of electron pockets. The brown regions are the phases with four disconnected Fermi surfaces and the green ones are the phases with three disconnected Fermi surfaces. Further increase of $E_z$ results in the gap closing at the satellite Dirac points as shown in Fig.\ref{figbisi}(d). A non-zero chemical potential here will give a Fermi surface with three disconnected regions. \section{Periodic driving and the effective Hamiltonian for bilayer materials} \label{Sec.BW} In this section, we include the effects of periodic driving in the high frequency limit by obtaining an effective static Hamiltonian using the Brillouin-Wigner (B-W) perturbation theory. Generalizing earlier work on single layer systems like graphene\cite{mikami2016} and silicene\cite{mohan2016}, we find that for the bilayer Hamiltonian described in Eq. \ref{Eqn:Si_biH}, up to $O(1/\omega)$, the Hamiltonian is given by \beq H^{\text{B-W}} = H^{(0)} + H^{(1)} \eeq where $H^{(0)}$ and $H^{(1)}$ are the zeroth and first order terms in the B-W expansion. The effect of radiation is taken into account by the vector potential ${\bf A}({\tau}) = A_0(\cos\omega\tau, \sin\omega\tau)$, where $\omega$ is the frequency and we work in the high frequency limit.
The hopping term is then modified by the Peierls substitution and changes to $-t \sum_{\langle i,j\rangle} e^{-i\alpha\sin(\omega\tau-2\pi l/3)} a_i^\dagger b_j$, where $l=0,1,2$ for the three nearest neighbours in the honeycomb lattice. We generically include both the buckling and the spin-orbit terms, setting them to zero when considering bilayer graphene and using the appropriate values for the different spin-orbit coupled materials. The expansion has terms renormalizing the intra-layer couplings of both the layers and the inter-layer terms between them. The renormalization of the in-plane terms is given by\cite{} \begin{widetext} \begin{align} \mathcal{J}_{\sigma} =& -t J_0(\alpha)+\frac{4t\sigma \lambda }{3\omega}\sum_{n\ne0}\beta_n\sin{\frac{\pi n}{6}} +\frac{t^3}{\omega^2} \left[\sum_{n\ne0} \gamma_n\left(2\cos{\frac{2\pi n}{3}}+3\right) +\sum_{m,n \ne 0}\chi_{nm}\left( 4\cos{\frac{2\pi n}{3}}+1 \right)\right], \nonumber \\ \Lambda^0_{\sigma} =& \frac{\sigma \lambda J_0(\alpha\sqrt{3})}{3\sqrt{3}}-\sum_{n\ne0} \frac{t^2J^2_n(\alpha)}{\omega n}\sin{\frac{2\pi n}{3}} , \nonumber \\ L_{\sigma} =&-\frac{4t\sigma\lambda}{3\omega}\sum_{n\ne0} \beta_n\sin{\frac{\pi n}{2}} +\frac{2t^3}{\omega^2} \left(\sum_{n\ne0} \gamma_n\cos{\frac{2\pi n}{3}} +\sum_{m,n \ne 0}\chi_{nm} \cos{\frac{2\pi(m-n)}{3}} \right), \label{eq:Lcoup} \nonumber \\ M_{\sigma}=& -\frac{2t\sigma\lambda}{3\omega}\sum_{n\ne0} \beta_n\cos{\pi n}\,\sin{\frac{\pi n}{6}} +\frac{t^3}{\omega^2} \left(\sum_{n\ne0}\gamma_n\cos{\frac{2\pi n}{3}} + \sum_{m,n \ne 0}\chi_{nm} \cos{\frac{2\pi(m+n)}{3}} \right) , \end{align} \end{widetext} where the original parameters $t$ and $\lambda$ are defined in Eqs. \ref{Eqn:SGH} and \ref{Eqn:SSH} and $\alpha=aA_0$. $J_n$ is the Bessel function of order $n$ and $\beta_n = J_n(\alpha)J_n(\alpha\sqrt{3})/\sqrt{3}n$, $\gamma_n = J^2_n(\alpha)J_0(\alpha)/n^2$ and $\chi_{nm}=J_m(\alpha)J_n(\alpha)J_{m+n}(\alpha)/mn$.
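The Bessel-function sums in Eq. (\ref{eq:Lcoup}) converge rapidly and can be evaluated by truncating at moderate $|n|$. As a hedged illustration (the truncation $N$ and all parameter values below are our own choices), $\Lambda^0_\sigma$ reduces to the undriven value $\sigma\lambda/(3\sqrt{3})$ as $\alpha\rightarrow0$, since $J_0(0)=1$ and $J_n(0)=0$ for $n\ne0$:

```python
import numpy as np
from scipy.special import jv  # Bessel function J_n of integer order

def Lambda0(sigma, lam, t, omega, alpha, N=40):
    """Truncated evaluation of Lambda^0_sigma from Eq. (eq:Lcoup)."""
    total = sigma * lam * jv(0, alpha * np.sqrt(3)) / (3 * np.sqrt(3))
    for n in range(-N, N + 1):
        if n == 0:
            continue
        total -= t**2 * jv(n, alpha)**2 * np.sin(2 * np.pi * n / 3) / (omega * n)
    return total

# undriven limit alpha -> 0 recovers the bare spin-orbit coupling
assert np.isclose(Lambda0(1, 0.05, 1.0, 10.0, 0.0), 0.05 / (3 * np.sqrt(3)))
# truncation check at a finite (illustrative) drive amplitude
assert np.isclose(Lambda0(1, 0.05, 1.0, 10.0, 1.0, N=20),
                  Lambda0(1, 0.05, 1.0, 10.0, 1.0, N=60))
```

The other couplings of Eq. (\ref{eq:Lcoup}) can be evaluated in the same way by truncating the $n$ and $(m,n)$ sums.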
\begin{figure}[!ht] \centering \includegraphics[width=8cm]{bilayer_hopping1.pdf} \caption{Different hopping paths between layers in bilayer silicene.} \label{Fig:bilayerh} \end{figure} The inter-layer Hamiltonian $H_\text{inter}$ can also be expanded order by order, leading to longer ranged hoppings as shown in Fig.\ref{Fig:bilayerh}. The zeroth order term is given by \begin{eqnarray} H^{(0)}&=& t_\perp \sum_{i\in \tilde{A}, j\in B} \left( \tilde{a}^\dagger_{i\sigma}b_{j\sigma} +b^\dagger_{j\sigma}\tilde{a}_{i\sigma}\right) \nonumber\\ &+& t_3 J_0(\alpha)\left(\sum_{i\in A} a^\dagger_{i\sigma} \tilde{b}_{i-\delta_l\,\sigma} + \sum_{j\in \tilde{B}} \tilde{b}^\dagger_{j\sigma} a_{j+\delta_l\,\sigma}\right).\qquad \label{Eqn:bwzero} \end{eqnarray} The $t_\perp$ term exists only in the zeroth order term and does not occur at $O(1/\omega)$. The first order term, at $O(1/\omega)$, involving $t_3$ is given by \begin{eqnarray} H^{(1)}_{BW}&=&\sum_{\{n_i\}\ne0} \frac{H_{0,n_1}H_{n_1,0}}{n_1 \omega} \nonumber \\ &=&\sum_{\{n\}\ne0} \frac{1}{n\omega}\left[T^{3}_{-n}T^{3}_{n} +T^{}_{-n}\, T^{3}_n+ \tilde{T}_{-n}\, T^{3}_n+ T^{3}_{-n}\, \tilde{T}^{}_n \right.\nonumber\\ &&\left.+T^{3}_{-n} \, T^{}_n \right]. \label{Eqn:bw1big} \end{eqnarray} Here, the terms of $O(tt_3)$ cancel and the terms of $O(t^2_3)$ renormalize the spin-orbit coupling terms in the $A$ sublattice of the top layer and the $\tilde{B}$ sublattice of the bottom layer as follows: \begin{eqnarray} \Lambda^1=-i\frac{t^2_3 J^2_n(\alpha)}{n\omega}\sin{\frac{2n\pi}{3}} \sum_{\langle \langle i,j \rangle \rangle} \nu_{ij} (a^\dagger_i a_j +\tilde{b}^\dagger_i \tilde{b}_j ).
\end{eqnarray} The terms of $O(\lambda t_3)$ contribute the following three terms to the effective Hamiltonian: \begin{eqnarray} T^\prime_3&=&\frac{4 t_3 \lambda \sigma}{3\sqrt{3}\omega n} J_n(\alpha\sqrt{3})J_n(\alpha)\sin{\frac{n\pi}{6}} \sum^{T_3-\text{path}}_{\substack{i\in A\, j \in \tilde{B}\\ \langle \langle \langle i,j \rangle \rangle \rangle}} a^\dagger_i \, \tilde{b}_{j}\nonumber\\ T_4&=&\frac{-2 t_3 \lambda \sigma}{3\sqrt{3}\omega n} J_n(\alpha\sqrt{3})J_n(\alpha)\sin{\frac{n\pi}{6}} \cos{n\pi} \sum^{T_4-\text{path}}_{\substack{i\in A\, j \in \tilde{B}\\ \langle \langle \langle i,j \rangle \rangle \rangle}} a^\dagger_i \, \tilde{b}_{j}\nonumber\\ T_5&=&\frac{-4 t_3 \lambda \sigma}{3\sqrt{3}\omega n} J_n(\alpha\sqrt{3})J_n(\alpha)\sin{\frac{n\pi}{2}} \sum^{T_5-\text{path}}_{\substack{i\in A\, j \in \tilde{B}\\ \langle \langle \langle i,j \rangle \rangle \rangle}} a^\dagger_i \, \tilde{b}_{j} \end{eqnarray} The $T^\prime_3$ term renormalizes the $t_3$ term in the bare Hamiltonian, whereas the $T_4$ and $T_5$ couplings are longer ranged inter-layer hopping terms depicted in Fig.\ref{Fig:bilayerh}. \begin{figure}[!ht] \includegraphics[width=9cm,height=7cm]{drawingv3.pdf} \caption{The Floquet topological phase diagram of bilayer graphene as a function of the drive amplitude $\alpha$. } \label{Fig:big_pd} \end{figure} \begin{figure*} \centering \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=6cm]{lEz0p01mu0p02ink.pdf} \caption{$LE_z=0.01t$, $\mu=0.02t$ } \label{Fig:big_lif1} \end{subfigure}~ \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=6cm]{lEz0p03mu0p05ink.pdf} \caption{$LE_z=0.03t$, $\mu=0.05t$} \label{Fig:big_lif2} \end{subfigure}% \begin{subfigure}[b]{0.33\textwidth} \includegraphics[width=6cm]{lEz0p06mu0p05ink.pdf} \caption{$LE_z=0.06t$, $\mu=0.05t$} \label{Fig:big_lif3} \end{subfigure} \caption{ The Floquet Lifshitz phase diagram of bilayer graphene.
The change in phases with increasing $\alpha$ is plotted for different initial values of $LE_z$ and $\mu$. The colors of the different phases are the same as those in Fig.\ref{Fig:bare_lifshitz}.} \label{Fig:big_lif} \end{figure*} The effective Hamiltonian for bilayer materials in momentum space with renormalized interactions is given by \begin{widetext} \begin{align} H_{\rm eff} = \begin{pmatrix} \Lambda^0_{q\sigma}+\Lambda^a_{q\sigma}+\Lambda^1_{q\sigma}+\mu^a_{q\sigma} &\mathcal{J}_{q\sigma}+L_{q\sigma}+M_{q\sigma}&0& \tilde{T}_{3q\sigma} \\ \mathcal{J}^\ast_{q\sigma}+L^\ast_{q\sigma}+M^\ast_{q\sigma} & -\Lambda^0_{q\sigma}+\Lambda^b_{q\sigma}+\mu^b_{q\sigma} &t_\perp&0 \\ 0&t_\perp & \Lambda^0_{q\sigma}+\Lambda^{\tilde{a}}_{q\sigma}+\mu^{\tilde{a}}_{q\sigma} &\mathcal{J}_{q\sigma}+L_{q\sigma}+M_{q\sigma} \\ \tilde{T}^\ast_{3q\sigma} &0&\mathcal{J}^\ast_{q\sigma}+L^\ast_{q\sigma}+M^\ast_{q\sigma} &-\Lambda^0_{q\sigma}-\Lambda^1_{q\sigma}+\Lambda^{\tilde{b}}_{q\sigma}+\mu^{\tilde{b}}_{q\sigma} \\ \end{pmatrix} \label{Eqn:H_matrix2} \end{align} \end{widetext} where $\tilde{T}_{3q\sigma}=T_3+T^\prime_3+T_4+T_5$ and \begin{eqnarray} \Lambda^{a/b}_{\sigma}&=&-\frac{t^2 \left( (L\pm l)E_z-\mu \right)}{\omega^2} \left(\sum_{n\ne 0} \frac{J^2_n(\alpha)}{n^2} \cos{\frac{2\pi n}{3}} \right) \nonumber\\ \Lambda^{\tilde{a}/\tilde{b}}_{\sigma}&=&-\frac{t^2 \left( (-L\pm l)E_z-\mu \right)}{\omega^2} \left(\sum_{n\ne 0} \frac{J^2_n(\alpha)}{n^2} \cos{\frac{2\pi n}{3}} \right) \nonumber\\ \mu^{a/b}_{\sigma}&=&\left(1-\frac{3t^2}{\omega^2} \sum_{n\ne 0} \frac{J^2_n(\alpha )}{n^2} \right)\left((L\pm l)E_z-\mu \right)\nonumber\\ \mu^{\tilde{a}/\tilde{b}}_{\sigma}&=&\left(1-\frac{3t^2}{\omega^2} \sum_{n\ne 0} \frac{J^2_n(\alpha)}{n^2} \right)\left((-L\pm l)E_z-\mu \right) \label{Eqn:lEzmu} \end{eqnarray} Using this effective Hamiltonian, we now study the effects of high-frequency driving on both bilayer graphene and bilayer silicene in the next section.
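The diagonal renormalizations of Eq. (\ref{Eqn:lEzmu}) are controlled by a single Bessel sum. A short sketch (the truncation and parameter values are illustrative assumptions) shows that the overall factor multiplying $(L\pm l)E_z-\mu$ equals one in the undriven limit and is suppressed by the drive:

```python
import numpy as np
from scipy.special import jv  # Bessel function J_n

def mu_factor(t, omega, alpha, N=60):
    """Overall factor (1 - 3 t^2/omega^2 * sum_{n!=0} J_n(alpha)^2 / n^2)
    multiplying (L +/- l)E_z - mu in Eq. (lEzmu); sum truncated at |n| <= N."""
    s = sum(jv(n, alpha)**2 / n**2 for n in range(1, N + 1))
    return 1.0 - (3 * t**2 / omega**2) * 2 * s   # n and -n contribute equally

assert np.isclose(mu_factor(1.0, 10.0, 0.0), 1.0)  # no drive: no renormalization
assert mu_factor(1.0, 10.0, 1.5) < 1.0             # drive suppresses the diagonal terms
```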
\section{The phase diagrams} In this section, we show how high frequency light can be used to obtain new phases and control changes in the Fermi surface topology, in both bilayer graphene and bilayer silicene. For single layer materials, we already know that shining light leads to changes in the Chern number and hence leads to several new phases. Here, we wish to study changes in the Chern number as well as changes in the Fermi surface topology as a function of the amplitude of light. \subsection{Bilayer graphene} The Hamiltonian in Eq.(\ref{Eqn:H_matrix2}) has terms involving spin-orbit coupling and a buckled lattice structure. These are set to zero for bilayer graphene, i.e., we set $\tilde{T}_{3q\sigma}=T_3$, since there is no spin-orbit coupling and hence no spin dependence, and $l=0$, since there is no buckling, along with $\mu=0$ in Eq.\ref{Eqn:lEzmu}. This means that $\Lambda^a = \Lambda^b$ and $\Lambda^{\tilde{a}} = \Lambda^{\tilde{b}} $ with $\Lambda^a=-\Lambda^{\tilde{a}}$. Similarly $\mu^a =\mu^b$ and $\mu^{\tilde{a}} = \mu^{\tilde{b}} $ with $\mu^a=-\mu^{\tilde{a}}$. The values of the inter-layer couplings are taken as $t_\perp=0.12t$ and $t_3=0.1t$. We then expand the Hamiltonian around the $K$ and $K^\prime$ points and calculate the energy eigenvalues to study whether the trigonal warping is modified in the effective Hamiltonian. For simplicity, we set $L E_z=0$ in the original undriven Hamiltonian. The positions of the four gapless Dirac cones then reduce to those given in Eq. \ref{Eqn:epsiloneta} with $LE_z=0$, which simplifies the effective B-W Hamiltonian in Eq. \ref{Eqn:H_matrix2}. The only new terms on the diagonal, as compared to the static case, are $\Lambda^0$ and $\Lambda^1$; these are `effective spin-orbit couplings' and are, therefore, momentum independent in the low energy limit. Thus, even in the presence of driving, the functional form of the eigenvalues in Eq.\ref{Eqn:bigev1} does not change.
However, the `effective spin-orbit coupling terms' introduced by driving lead to a mass gap, given by $2|\Lambda^0+\Lambda^1|$, just as for genuinely spin-orbit coupled materials like silicene. Moreover, since this gap is introduced by the driving, it depends on the amplitude $\alpha$ of the light and oscillates with it. These oscillations lead to gap closures and Chern number changes as shown in Fig.\ref{Fig:big_pd}. We then compute the Chern numbers for a range of values of $E_z$ and $\alpha$ for $\omega=10t$ to construct a phase diagram describing the various topological phases in this model, shown in Fig.\ref{Fig:big_pd}. The phase diagram is symmetric under $LE_z \rightarrow -LE_z$ in most parts of the phase diagram other than the regions near $\alpha=2.4$ and $5.5$, although the terms $\Lambda^a$ and $\mu^a$ in the Hamiltonian change their sign under this symmetry. However, since both these terms are very small compared to the other terms, this symmetry breaking is only visible when all the other terms are also small. Other than that, the phase diagram shows the expected repetition of phases (with smaller areas) as we increase the value of the amplitude $\alpha$ of light\cite{mohan2016}. To study the Lifshitz transition using the B-W effective Hamiltonian, we consider three different initial points in the Lifshitz phase diagram of bilayer graphene in Fig.\ref{Fig:bare_lifshitz} and study the effect of shining light on these phases. The inter-layer coupling values are taken as $t_\perp=0.5t$, $t_3=0.45t$, which are the same as in Fig.\ref{Fig:bare_lifshitz}. The resultant phase diagrams as a function of the amplitude of light are depicted in Fig.\ref{Fig:big_lif}. The (a) magenta, (b) dark blue and (c) light blue curves are the heights (measured from $\varepsilon_F=0$ to the bottom of the cone) of (a) the Dirac cone at the $K$ point, (b) the satellite Dirac cone and (c) the van-Hove singularity, respectively.
The colors in this phase diagram are chosen to match the ones in the phase diagram without light, Fig.\ref{Fig:bare_lifshitz}. In Fig.\ref{Fig:big_lif1}, in the red region, both the Dirac point and the satellite Dirac points are below the Fermi level. The Fermi surface topology is that of four disconnected regions. As the value of $\alpha$ increases from zero, close to $\alpha=0.2$, the Dirac cone rises above the Fermi level, changing the topology to that of three disconnected regions. This is the green region in this phase diagram and in Fig.\ref{Fig:bare_lifshitz}. As $\alpha$ is further increased, the satellite cones rise above the Fermi level and we enter the insulating striped phase. Thus, as a function of driving, we are able to tune a Lifshitz (topology-changing) transition. In Figs.~\ref{Fig:big_lif2} and \ref{Fig:big_lif3} we start from initial points in the blue and green phases in Fig.\ref{Fig:bare_lifshitz} and show in Fig.\ref{Fig:big_lif} the changes that occur under driving. In Fig.\ref{Fig:big_lif2}, we start from the phase where even the van-Hove singularities are below the Fermi level. By shining light, we first move to a phase with four disconnected regions (red phase) where the van-Hove singularities go above the Fermi level. Further driving moves the Dirac point above the Fermi level and we reach the green phase with three disconnected regions and finally, when even the satellite Dirac points go above the Fermi level, we reach the insulating phase. In Fig.\ref{Fig:big_lif3}, we start from the green phase where only the three satellite Dirac points are below the Fermi level and directly transition into the insulating phase. \subsection{Bilayer spin-orbit coupled materials} In this section, we study the Floquet phase diagram and the Lifshitz transition for spin-orbit coupled materials.
\begin{figure}[!ht] \includegraphics[width=9cm,height=6cm]{phase_biSiupfinalv2.pdf} \caption{Floquet topological phase diagram of bilayer materials with spin-orbit coupling. The phase diagram is plotted for the up spin. } \label{Fig:bisi_pd} \end{figure} The Floquet topological phase diagram for $\uparrow$ spin is computed in Fig.\ref{Fig:bisi_pd} for $\lambda=0.05t$, $\omega=10t$, $t_\perp=0.12t$, $t_3=0.1t$ and $L=4l$ as a function of $E_z$ and $\alpha$. The phase diagram can be divided into three regions with topological phases separated by trivial ones as we increase the perpendicular electric field. The top and bottom regions correspond to the $C_\uparrow=-3$ phase in the static phase diagram of silicene in Fig.\ref{Fig:bilayersi} and exist for both positive and negative values of $E_z$. This phase continues to have a non-zero Chern number $(3,-3)$ until $\alpha$ is increased to $\approx1.25$ before disappearing as shown in the inset. The middle region is similar to that of bilayer graphene in Fig.\ref{Fig:big_pd} and to that of the topological phase diagram of irradiated single layer silicene studied using B-W theory\cite{mohan2016}. The $C_\uparrow=-2$ phase from Fig.\ref{Fig:bilayersi} extends for a small region in $\alpha$, while the $C_\uparrow=-1$ region is too narrow to be visible in a large phase diagram. \begin{figure}[ht!] \centering \includegraphics[width=8cm, height=8cm]{lif_bwbisi1.pdf} \caption{Floquet Lifshitz phase diagram for bilayer silicene with varying $\alpha$ for $(L+l)E_z= -0.01t$ and $\mu= 0.00025 t$. } \label{fig:bisi_lifshitz} \end{figure} In the rest of this section, we try to understand the effect of an additional tuning parameter $\alpha$ on the Lifshitz phase transition of bilayer silicene. To compare with the bare case described in Fig.\ref{figbisi}, we fix the values of $\mu$ and $E_z$ and tune the value of $\alpha$ as shown in Fig.\ref{fig:bisi_lifshitz}.
For $\mu/t= 0.00025$ and $(L+l)E_z/t = -0.01$, we have considered four values of $\alpha$ in Fig.\ref{fig:bisi_lifshitz}. In the four different panels, the color of the curve is the same as the color of the Lifshitz phase in Fig.\ref{figbisi} for that particular value of $\alpha$. The top left panel where $\alpha=0.3990$ corresponds to the striped phase in Fig.\ref{figbisi}. This is an insulating phase where the Fermi energy is in the gap. The top right panel with $\alpha=0.4000$ and the purple curve corresponds to the phase with the same color in Fig.\ref{figbisi}. Here the electron pocket at the Dirac $K(K')$ point is below the Fermi level and the topology is that of a disc, corresponding to a single region in the Fermi surface. The bottom left and the bottom right panels with $\alpha=0.4011$ and $\alpha=0.4025$ correspond to the brown and green colored phases, respectively, in Fig.\ref{figbisi}. The brown colored phase has the electron pockets at the Dirac point and at the three satellite points below the Fermi level. This implies that the Fermi surface is made up of four disconnected regions. The green colored region consists of three disconnected regions since the electron pockets at the satellite points are below the Fermi level. Hence, as a function of the drive, we can tune bilayer silicene through topology-changing Lifshitz transitions: from an insulating phase in (a) to a single region Fermi surface in (b) to a four region Fermi surface in (c) and finally to a three region Fermi surface in (d). Note also that these transitions are extremely sensitive to the value of the amplitude of the drive and occur for very small changes in $\alpha$.
This is not too surprising because these changes are correlated with changes in the Chern number at the same value of $\alpha$ in Fig.\ref{Fig:bisi_pd}, and in that figure, the region of the phase change from $-2$ to $-1$ to $0$ is so small that it cannot be shown in the figure, where it looks like the phase change is directly from $-2$ to $0$. \section{Discussion and conclusions} In this paper, we have studied the effect of light on bilayer graphene and silicene and seen how it affects the Fermi-surface-topology-changing Lifshitz transitions. Physically, it is the magnetotransport properties, such as the Shubnikov-de Haas effect, and thermodynamic properties, such as the de Haas-van Alphen effect, which are affected by the changes in Fermi surface topology. Essentially, when additional Fermi surface pockets are created or existing pockets merge, the Landau level degeneracy changes. For instance, when the Fermi level decreases from the blue region in Fig.\ref{Fig:bare_lifshitz} with a single Fermi surface to the green region, with three Fermi surface pockets, the period of the oscillations triples. So changes in the degeneracy caused by the Lifshitz transitions can be easily measured through the period of the Shubnikov-de Haas oscillations. Moreover, precisely at the multicritical point (black dot) in Fig.\ref{Fig:bare_lifshitz} where the critical points merge, the density of states diverges. This is also something which can be seen experimentally. Finally, in this paper, we have also shown how these phases in bilayer graphene and in spin-orbit coupled materials like bilayer silicene can be controlled by shining light on these systems, paving the way for opto-electronic devices in these materials. During the preparation of this paper for publication, we became aware of the work of Ref.\onlinecite{iorsh2017}, which also studies the effect of light on the Lifshitz transition in bilayer graphene. Our results agree where there is overlap.
\acknowledgements We would like to thank Abhishek Joshi, Arijit Kundu and Ganpathy Murthy for useful discussions. PM would like to thank HRI for hospitality during the completion of this work. \bibliographystyle{apsrev}
\subsection{Introduction} In the past decade, precision experiments of ultracold quantum gases at unitarity have enabled the study of transport phenomena in strongly coupled systems \cite{2004PhRvA..70e1401K,2007PhRvL..98q0401J,2008PhRvA..78e3609R,Cao:2010wa,Vogt:2011np,2012Sci...335..563K}. Usually, transport phenomena are encoded in hydrodynamic transport coefficients, such as the speed of sound, shear and bulk viscosities, heat conductivities, spin diffusion coefficients, etc. Presently, the speed of sound has been measured in a three dimensional unitary Fermi gas \cite{2007PhRvL..98q0401J,2012Sci...335..563K}, and the shear viscosity has been constrained for both three dimensional and two dimensional Fermi gases \cite{Cao:2010wa,Vogt:2011np}. The extremely low values of shear viscosity (when expressed in units of entropy) in particular suggest similarities between cold Fermi gases close to unitarity and very different systems such as hot quark gluon plasmas \cite{Luzum:2008cw,Heinz:2013th}, high-temperature superconductors \cite{Rameau:2014gma} and strongly coupled fluids described by black holes via the AdS/CFT conjecture \cite{Policastro:2001yc}, all of which share similar transport behavior. This apparent similarity in otherwise completely different physical systems suggests that these systems could be part of a broader class of so-called strongly interacting quantum fluids (SIQFs). It is conceivable that SIQFs share other properties besides their similar (hydrodynamic) transport behavior. This could be important because it could imply that it is possible to learn about one example of SIQFs (say high-temperature superconductors) through studying a different SIQF for which a particular trait is more easily accessible. In the present study we will investigate ultracold Fermi gases close to unitarity and argue that they exhibit properties similar to black hole SIQFs. 
One property that is quite remarkable about black hole SIQFs is that they do not seem to possess a description in terms of weakly coupled quasiparticles \cite{CasalderreySolana:2011us}. Instead, black holes can be characterized in terms of their ring-down spectrum, similar to a glass struck (lightly) with a fork \cite{Kokkotas:1999bd}. Some of these quasinormal modes can be recognized to be the well-known hydrodynamic modes, i.e. sound and shear excitations. Others do not have an equivalent in (Navier-Stokes) hydrodynamics, and are thus non-hydrodynamic, but nevertheless affect transport properties (particularly on short time scales). If properties of SIQFs are universal, one would expect the presence of non-hydrodynamic modes in cold Fermi gases close to unitarity. This provides the motivation for searching for non-hydrodynamic modes in cold Fermi gases, both theoretically and experimentally. \paragraph{\bf Transport in hydrodynamics and beyond:} In order to understand the properties of non-hydrodynamic transport, let us briefly recall the properties of transport within hydrodynamics. To be specific, let us treat the case of a single component uncharged fluid in $D$ spatial dimensions described by the Navier-Stokes equation (which clearly will not be applicable to Fermi gases below the superfluid phase transition temperature). The fluid is then characterized by the mass density $\rho$, the fluid velocity ${\bf u}$, the pressure $P$ and the temperature $T$. Assuming hydrodynamic transport to be dominated by the shear viscosity coefficient $\eta$, we may set the bulk viscosity and heat conductivity to zero (this is a good assumption for Fermi gases close to unitarity \cite{Schafer:2007pr}). Finally, allowing for a force term ${\bf F}$ (e.g.
through a trapping potential) the fluid must obey the equations of motion \begin{eqnarray} \label{eq:pmot} \partial_t \rho+\partial_i \left(\rho u_i\right)&=&0\,,\nonumber\\ \partial_t \left(\rho u_i\right)+\partial_j \left(\rho u_i u_j+P \delta_{ij}+\pi_{ij}\right)&=&\rho F_i\,,\nonumber\\ \partial_t {\cal E}+\partial_j\left[u_j \left({\cal E}+P\right)+\pi_{ij}u_i\right]&=&\rho F\cdot u\,,\nonumber\\ -\eta \sigma_{ij} &=&\pi_{ij}\,, \end{eqnarray} with ${\cal E}=\frac{\rho {\bf u}^2}{2}+\frac{D}{2} P$ and $\sigma_{ij}=\left(\partial_i u_j+\partial_j u_i-\frac{2}{D}\delta_{ij}\partial \cdot u\right)$. To close the system of equations, we adopt an ideal equation of state implying $P=c_s^2(T) \rho$, with $c_s^2(T)$ the temperature-dependent speed of sound squared (we assume an isothermal system). Additionally, we assume $\frac{\eta}{P}={\rm const}$ for simplicity. Considering small perturbations around some (possibly space-dependent) equilibrium configuration, we have $\rho=\rho_0({\bf x})+\delta \rho(t,{\bf x})$, ${\bf u}=\delta {\bf u}(t,{\bf x})$, $c_s^2=c_0^2+\delta c^2(t)$. Hydrostatic equilibrium requires $\rho_0 {\bf F}=\nabla P_0$. For perturbations around a constant density $\rho_0$, there are two solutions for the perturbations, namely the familiar hydrodynamic sound and shear modes. It is useful to characterize these modes in Fourier space, e.g. $\rho\simeq e^{-i\omega t+i{\bf k}\cdot {\bf x}}$. For the sound mode (coupling perturbations $\delta \rho, \partial \cdot {\bf u}$) one finds the dispersion relation $\omega=\pm c_0 |{\bf k}|-i \frac{\eta{\bf k}^2 c_0^2}{P_0} \left(1-\frac{1}{D}\right)$, whereas for the shear mode (perturbations ${\bf u}$ transverse to ${\bf k}$) one finds $\omega=-i\frac{\eta {\bf k}^2c_0^2}{P_0}$.
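The quoted sound-mode dispersion relation can be checked numerically. Linearizing Eqs. (\ref{eq:pmot}) for a single longitudinal Fourier mode of the isothermal gas gives, under the stated assumptions, the quadratic $\omega^2+i\nu_l{\bf k}^2\omega-c_0^2{\bf k}^2=0$ with $\nu_l=2(\eta c_0^2/P_0)(1-1/D)$, whose small-${\bf k}$ roots reproduce the dispersion relation just quoted. A sketch (all parameter values are illustrative):

```python
import numpy as np

def sound_modes(k, c0, eta_over_P0, D):
    """Roots of the linearized longitudinal dispersion relation
    w^2 + i*nu_l*k^2*w - c0^2*k^2 = 0 obtained from Eqs. (pmot)
    for an isothermal gas, with nu_l = 2*(eta*c0^2/P0)*(1 - 1/D)."""
    nu_l = 2 * eta_over_P0 * c0**2 * (1 - 1 / D)
    return np.roots([1.0, 1j * nu_l * k**2, -c0**2 * k**2])

# small-k check against w = +/- c0|k| - i*(eta*c0^2/P0)*(1 - 1/D)*k^2
c0, eta_over_P0, D, k = 1.0, 0.5, 3, 0.05
Gamma0 = eta_over_P0 * c0**2 * (1 - 1 / D)
for w in sound_modes(k, c0, eta_over_P0, D):
    assert np.isclose(abs(w.real), c0 * k, rtol=1e-3)   # propagating part
    assert np.isclose(w.imag, -Gamma0 * k**2, rtol=1e-3)  # damping part
```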
Thus, one expects to find density perturbations which for fixed ${\bf k}$ behave as \begin{equation} \label{eq:hydrosound} \delta \rho_{\rm hydro}(t,{\bf x})\propto e^{\pm i c_0 |{\bf k}| t+ i{\bf k}\cdot x -\Gamma_0 t {\bf k}^2}\,, \end{equation} where $\Gamma_0=\frac{\eta c_0^2}{P_0}\left(1-\frac{1}{D}\right)$. Eq.~(\ref{eq:hydrosound}) was derived using hydrodynamics, so its regime of validity is that of low ${\bf k}$. The result for the sound mode perturbation in hydrodynamics is to be contrasted with the result found for black hole SIQFs. Ref.~\cite{Kovtun:2005ev} calculated the poles of the energy-momentum tensor correlator in the sound mode channel, finding the complete set of quasinormal modes for $D=3$. For small ${\bf k}$, the spectrum can be approximated as \begin{equation} \label{eq:bhexi} \omega_{h}=\pm c_0 |{\bf k}|-i \frac{\eta {\bf k}^2}{4 P_0}\frac{2}{3}\,,\quad \omega_{nh,1}\simeq 2 \pi T (\pm 1.73-1.34 i)\,, \end{equation} and $\omega_{nh,n\gg 1}=2 \pi T n (\pm 1- i)$. The result for $\omega_h$ is just that of a hydrodynamic dispersion relation for relativistic fluids in $D=3$ (note that for Ref.~\cite{Kovtun:2005ev} $\frac{\eta}{4 P_0}=\frac{1}{4\pi T}$ and hence $\omega_{nh,n\gg 1}= \frac{2 P_0 n}{\eta}(\pm 1-i)$ is also a consistent interpretation). The presence of the non-hydrodynamic modes $\omega_{nh}$, however, implies that density perturbations for fixed $|{\bf k}|\ll 1$ behave as \begin{equation} \label{eq:qnm} \delta \rho_{BH}(t,{\bf x})\propto e^{\pm i c_0 |{\bf k}|t +i {\bf k}\cdot {\bf x}-\Gamma_{h} t }+\sum_{n=1}^\infty a_{nh,n} e^{\pm i {\rm Re} \omega_{n} t -\Gamma_{n} t}\,, \end{equation} with $\Gamma=-{\rm Im} \omega$ and amplitudes $a_{nh,n>0}$ for the non-hydrodynamic modes. Note that an infinite tower of quasinormal modes has also been found in non-relativistic systems for black hole SIQFs, cf.~\cite{Schaefer:2014aia}. 
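To make Eq. (\ref{eq:qnm}) concrete, the following sketch superposes one hydrodynamic sound mode with the first non-hydrodynamic mode of Eq. (\ref{eq:bhexi}); the amplitude $a_{nh}$ and all parameter values are illustrative assumptions, not fits:

```python
import numpy as np

# Illustrative quasinormal-mode sum for a density perturbation, Eq. (qnm):
# one hydrodynamic sound mode plus the first non-hydrodynamic mode of
# Eq. (bhexi).  All parameter values below are illustrative.
c0, k, Gamma_h = 1.0, 0.1, 0.01        # sound speed, wavenumber, hydro damping
T = 1.0                                # temperature sets the non-hydro scale
w_nh = 2 * np.pi * T * (1.73 - 1.34j)  # omega_{nh,1} from Eq. (bhexi)
a_nh = 0.5                             # assumed non-hydrodynamic amplitude

def delta_rho(t):
    hydro = np.exp(-1j * c0 * k * t - Gamma_h * t * k**2)
    non_hydro = a_nh * np.exp(-1j * w_nh.real * t + w_nh.imag * t)
    return (hydro + non_hydro).real

# the non-hydrodynamic mode decays on the fast scale 1/(2*pi*T*1.34),
# so at late times only the hydrodynamic sound mode survives
t_late = 5.0
hydro_only = np.exp(-Gamma_h * t_late * k**2) * np.cos(c0 * k * t_late)
assert abs(delta_rho(t_late) - hydro_only) < 1e-10
```

Whether such a non-hydrodynamic component is observable depends, as stated above, on its amplitude and damping rate.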
Also, non-hydrodynamic modes have been discussed before in the context of condensed matter systems in Ref.~\cite{cmnonhy}. Unless the amplitudes $a_n$ are extremely small or the damping rates $\Gamma_n$ are very large, one could expect the non-hydrodynamic contributions to the evolution of density perturbations to be experimentally observable. In the following, we will study the case of cold Fermi gases close to unitarity and investigate whether they exhibit non-hydrodynamic behavior similar to Eq.~(\ref{eq:qnm}). \paragraph{\bf Non-hydrodynamic collective modes for a trapped Fermi gas:} The analysis of collective modes changes if the equilibrium background density $\rho_0({\bf x})$ is space-dependent, as is the case for experiments on Fermi gases in a trap. For an idealized harmonic trapping potential with base frequency $\omega_\perp$, the background density may then be written as \begin{equation} \label{eq:potential} \log \rho_0({\bf x})=-\frac{\omega_\perp^2 (x^2+y^2+\lambda^2 z^2)}{2 c_0^2}\,, \end{equation} where $\lambda=0$ for $D=2$ and $\lambda\ll 1$ for $D=3$ (elongated trap). It is possible to classify all collective modes in this case by realizing that in the isothermal limit, $\delta c^2(t)$ is only a function of time, and hence the equations of motion place constraints on the space-dependence of $\nabla\cdot \delta {\bf u}$. Thus, rather than analyzing perturbations in Fourier space, it is useful to expand perturbations in terms of a power series of $x,y,$ and $z$. Restricting to inequivalent polynomials under rotations in the $x,y$ plane, this leads to the ansatz \begin{equation} \frac{\delta \rho}{\rho_0}=c_{00}(t)+c_{11}(t) r \cos{\phi}+c_{20}(t)r^2+c_{22}(t) r^2 \cos 2\phi+\ldots\,, \end{equation} with $x=r \cos\phi$, $y=r \sin\phi$. Solving the equations of motion (\ref{eq:pmot}) then leads to the identification of three collective modes, corresponding to the time evolution of $c_{11}$, $c_{00}$ coupled with $c_{20}$, and $c_{22}$, respectively. 
These modes may be recognized to be the usual sloshing (``S''), breathing (``B'') and quadrupole mode (``Q''), respectively, and one finds the following dispersion relations in these channels: \begin{eqnarray} \label{eq:hymodes} \frac{\omega_{S,h}}{\omega_\perp}&=&\pm1\nonumber\\ \frac{\omega_{B,h}}{\omega_\perp}&=&\pm\sqrt{2+\frac{4}{D}}-\frac{i \eta \omega_\perp}{P_0}\left(1-\frac{2}{D}\right)\,,\nonumber\\ \frac{\omega_{Q,h}}{\omega_\perp}&=&\pm\sqrt{2}-\frac{i \eta \omega_\perp}{P_0}\,. \end{eqnarray} All of these modes are driven versions of the sound mode excitations. One can explicitly verify that for an isothermal gas with trapping potential (\ref{eq:potential}) no shear mode perturbations are excited. Unlike the non-hydrodynamic quasinormal modes encountered in Eq.~(\ref{eq:bhexi}), the hydrodynamic damping rates in Eq.~(\ref{eq:hymodes}) monotonically decrease in the ideal hydrodynamic limit $\eta\rightarrow 0$. It is possible to try to extend the collective mode analysis for a trapped Fermi gas beyond Navier-Stokes hydrodynamics. One option to do this is within the framework of second-order hydrodynamics \cite{Baier:2007ix,Chao:2011cy}. While second-order hydrodynamics is able to correctly capture higher order gradient corrections (e.g. ${\cal O}({\bf k}^3)$ corrections in Eq.~(\ref{eq:bhexi})), its regime of validity is still $\omega \tau_\pi\ll 1$. Thus one cannot expect second-order hydrodynamics to correctly capture non-hydrodynamic modes. Nevertheless, one may try to analyze collective modes for a Fermi gas in a trap using second-order hydrodynamics to see if non-hydrodynamic modes emerge at least on a qualitative level. This can be done easily since for linear response, second-order hydrodynamics basically replaces $\pi_{ij}=-\eta \sigma_{ij}\rightarrow\pi_{ij}+\tau_\pi \partial_t \pi_{ij}=-\eta \sigma_{ij}$, with $\tau_\pi$ the relaxation time for shear viscous stresses \cite{Baier:2007ix,Chao:2011cy}.
Thus, one finds that in addition to the hydrodynamic modes in Eq.~(\ref{eq:hymodes}), there is a non-hydrodynamic mode for the quadrupole mode oscillation in $D=2,3$ and for the breathing mode oscillation in $D=3$, given by \begin{eqnarray} \label{eq:Bmodes} \frac{\omega_{B,nh}}{\omega_\perp}=-\frac{i}{\tau_\pi \omega_\perp}\,,\quad \frac{\omega_{Q,nh}}{\omega_\perp}=-\frac{i}{\tau_\pi \omega_\perp}\,. \end{eqnarray} Another option, seemingly distinct from second-order hydrodynamics, is to consider kinetic theory in the form of the Boltzmann equation. For kinetic theory with a simple collision term in the form of a relaxation time approximation, the collective modes of a Fermi gas in a harmonic trap have been analyzed before \cite{Brewer2015,PhysRevA.76.033610,2008PhRvA..78e3609R,Vogtthesis}. For a relaxation time $\tau_R$ in the collision term of the Boltzmann equation, one finds dispersion relations for the breathing and quadrupole modes given by \begin{eqnarray} \label{eq:bdp} i \omega_Q (\omega_Q^2-4 \omega_\perp^2)\tau_R + (2 \omega_\perp^2-\omega_Q^2)&=&0\,,\quad D=2,3\nonumber\\ i \omega_B (\omega_B^2-4 \omega_\perp^2)\tau_R+\left(\frac{10}{3}\omega_\perp^2-\omega_B^2\right)&=&0\,,\quad D=3\,,\hspace*{0.5cm} \end{eqnarray} and $\omega_B^2=4 \omega_\perp^2$ for $D=2$. Eqns.~(\ref{eq:bdp}) have three solutions for $\omega_Q,\omega_B$ each, two of which agree with the hydrodynamic modes Eq.~(\ref{eq:hymodes}) if $\omega_\perp \tau_R=\frac{\eta}{P_0}\ll 1$. However, in addition to the hydrodynamic modes, Eqns.~(\ref{eq:bdp}) contain one non-hydrodynamic mode for both the quadrupole mode and breathing mode in $D=3$ (cf.~\cite{PhysRevA.82.023609,Brewer2015}). In the strong coupling limit $\tau_R\rightarrow 0$, these non-hydrodynamic modes obey the dispersion relations (\ref{eq:Bmodes}) upon setting $\tau_\pi=\tau_R$.
(Note that in second-order hydrodynamics $\eta,\tau_\pi$ are two different parameters, while in kinetic theory $\tau_R$ controls both the viscosity and the viscous stress relaxation time). The fact that the kinetic theory result matches that from second-order viscous hydrodynamics is expected since kinetic theory is known to be a special case of second-order hydrodynamics in the strongly interacting $\tau_R\rightarrow 0$ limit. However, one furthermore finds that in kinetic theory, non-hydrodynamic modes are present for all values of $\tau_R$, describing a purely damped response in the quadrupole and $D=3$ breathing mode channel. In particular, one finds from Eqns.~(\ref{eq:bdp}) \begin{equation} \label{eq:kBmodes} \frac{\omega_{B,nh}}{\omega_\perp}=-\frac{5 i}{6 \tau_R \omega_\perp}\,(D=3)\,, \ \frac{\omega_{Q,nh}}{\omega_\perp}=-\frac{i}{2 \tau_R \omega_\perp}\,, \end{equation} in the limit of weak interactions $\tau_R\rightarrow \infty$. Thus, while (\ref{eq:Bmodes}) is beyond the regime of validity in second-order hydrodynamics, Eq.~(\ref{eq:kBmodes}) is well within the regime of validity for kinetic theory (weak interactions, well defined quasiparticles). \paragraph{\bf Experimental evidence for non-hydrodynamic collective modes:} Summarizing the theoretical status of non-hydrodynamic transport, it is known that non-hydrodynamic modes exist for some SIQFs such as black hole duals \cite{Policastro:2001yc}. Furthermore, for trapped Fermi gases it is known that non-hydrodynamic collective modes are featured in descriptions beyond Navier-Stokes, but lie outside the regime of validity of these descriptions for strong interactions \cite{Schaefer:2014awa,Brewer2015}. If the interactions are weak, and quasiparticle degrees of freedom are well defined, kinetic theory also predicts the presence of non-hydrodynamic modes, which are purely damped excitations (\ref{eq:kBmodes}). 
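The two limits discussed above can be cross-checked numerically by solving the cubic dispersion relation for the quadrupole mode directly. The sketch below (not part of the original analysis; units $\omega_\perp=1$) verifies that the purely damped mode approaches $-i/\tau_R$ at strong coupling and $-i/(2\tau_R)$ at weak coupling:

```python
import numpy as np

def quadrupole_modes(tau_R):
    # Quadrupole dispersion relation from Eq. (eq:bdp), in units omega_perp = 1:
    #   i*omega*(omega^2 - 4)*tau_R + (2 - omega^2) = 0,
    # written as a cubic polynomial in omega (highest power first).
    # Roots sorted by imaginary part: the most strongly damped root comes first.
    return sorted(np.roots([1j * tau_R, -1.0, -4j * tau_R, 2.0]),
                  key=lambda w: w.imag)

# Strong coupling (tau_R -> 0): two sound-like hydrodynamic roots near
# +/- sqrt(2) and one purely damped non-hydrodynamic root near -i/tau_R.
tau = 1e-3
w_nh, w1, w2 = quadrupole_modes(tau)
assert abs(w_nh + 1j / tau) < 0.05 / tau
assert abs(max(w1.real, w2.real) - np.sqrt(2)) < 1e-2

# Weak coupling (tau_R -> infinity): the purely damped root approaches
# -i/(2*tau_R), as in Eq. (eq:kBmodes).
tau = 100.0
w_nh = quadrupole_modes(tau)[0]
assert abs(w_nh + 1j / (2 * tau)) < 0.1 / (2 * tau)
```

The same check applies to the $D=3$ breathing mode after replacing the constant term $2$ by $10/3$, for which the weak-coupling damped root tends to $-5i/(6\tau_R)$.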
In view of this, experimental input is crucial for understanding non-hydrodynamic transport for SIQFs. For this reason, we reanalyze the raw data for the collective oscillations observed in a $D=2$ gas of $^{40}K$ atoms from Refs.~\cite{Vogt:2011np,2013PhRvA..87d3612B} as well as raw data for the radial breathing mode oscillations observed in a $D=3$ gas of $^6{\rm Li}$ atoms close to unitarity from Refs.~\cite{2005PhRvL..94q0404K,2011Sci...331...58C}. To analyze the data for the time-evolution of the quadrupole (``Q'') and breathing (``B'') mode amplitudes, we use a fitting function of the form \begin{equation} \label{fittingf} F(t)=a_{h} \cos (w_h t)e^{- \Gamma_h t}+a_{nh} \cos(w_{nh} t) e^{-\Gamma_{nh} t}\,, \end{equation} where irrelevant phase shifts and offsets have been suppressed. Minimum $\chi^2$ fits to the data of Refs.~\cite{Vogt:2011np,2013PhRvA..87d3612B,2005PhRvL..94q0404K,2011Sci...331...58C} are performed to extract frequencies $w\equiv {\rm Re}\,\omega$, damping rates $\Gamma \equiv -{\rm Im}\,\omega$ as well as the amplitudes $a$ for the hydrodynamic (``h'') and non-hydrodynamic (``nh'') modes. \begin{figure*}[t] \centering \includegraphics[width=0.45\textwidth]{ratios.eps}\hfill \includegraphics[width=0.45\textwidth]{cmp_exp_g1.eps} \includegraphics[width=0.45\textwidth]{ratios_3D.eps}\hfill \includegraphics[width=0.45\textwidth]{cmp_exp_g1_3D.eps} \caption{Non-hydrodynamic amplitudes and damping rates for the Fermi gas in a trap in two dimensions (upper panels) and three dimensions (lower panels). Shown are results from our reanalysis of the raw time-evolution data from the $D=2$ experiment by Vogt et al. in Ref.~\cite{Vogt:2011np} (upper panels), from the $D=3$ experiment by Kinast et al. in Ref.~\cite{2005PhRvL..94q0404K} (lower panels), as well as the analytic results from kinetic theory with ideal gas equation of state in a harmonic trap (\ref{eq:bdp}).
\label{fig:two}} \end{figure*} We employed the Akaike information criterion (AIC) \cite{1974ITAC...19..716A} to avoid overfitting the raw data with too many parameters. The AIC values extracted for both the $D=2$ and $D=3$ data suggest that the currently available data is not sufficient to extract information about non-hydrodynamic frequencies. For this reason, we assume $\omega_{nh}=0$ in the following. For the case of $D=2$, we analyze raw data by Vogt et al. in Ref.~\cite{Vogt:2011np} given at $\frac{T}{T_F}=0.45$ with $T_F=\sqrt{2 N} \omega_\perp$, $\omega_\perp\sim 2\pi\times 125$ Hz for the case of $N\simeq 2000$ $^{40}K$ atoms for four different values of the interaction strength parameter $\ln(k_F a)$. At each value of $\ln(k_F a)$, six different measurements of the time evolution are available and we obtain mean values and error estimates for the fit parameters in Eq.~(\ref{fittingf}) from averaging over these six sets. For each of the $\ln(k_F a)$ values, we extract hydrodynamic mode frequencies and damping rates that are consistent with published values \cite{Vogt:2011np,2013PhRvA..87d3612B}. In Fig.~\ref{fig:two}, amplitudes of putative non-hydrodynamic modes extracted from the raw time-evolution data in Ref.~\cite{Vogt:2011np} are shown, indicating a sizeable non-hydrodynamic mode component amplitude in the quadrupole excitations. We find that for a total of 240 data points, the AIC for the quadrupole mode decreases by 80 units when allowing for a non-hydrodynamic mode with $a_{nh}\neq 0$ to be present in Eq.~(\ref{fittingf}). The reduction in the AIC value for the $D=2$ breathing mode is similar, but the extracted non-hydrodynamic mode amplitude is consistent with zero (see Fig.~\ref{fig:two}). Our interpretation of this finding is that there is evidence for the presence of a non-hydrodynamic mode in the $D=2$ quadrupole mode data, while we find no evidence for such a mode in the $D=2$ breathing mode data.
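The fit of Eq.~(\ref{fittingf}) (with $\omega_{nh}=0$) and the AIC model comparison can be mimicked on synthetic data. The signal parameters below are illustrative placeholders, not the experimental values:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
t = np.linspace(0.0, 20.0, 240)  # 240 samples, as for the D=2 quadrupole data

def one_mode(t, a_h, w_h, G_h):
    # Single damped oscillation (hydrodynamic mode only)
    return a_h * np.cos(w_h * t) * np.exp(-G_h * t)

def two_mode(t, a_h, w_h, G_h, a_nh, G_nh):
    # Eq. (fittingf) with omega_nh = 0: add a purely damped component
    return one_mode(t, a_h, w_h, G_h) + a_nh * np.exp(-G_nh * t)

# Synthetic signal containing a small purely damped admixture (made-up values)
y = two_mode(t, 1.0, 1.4, 0.10, 0.3, 0.5) + rng.normal(0.0, 0.01, t.size)

def aic(residuals, n_par):
    # Gaussian-likelihood AIC up to an additive constant: n*log(RSS/n) + 2k
    n = residuals.size
    return n * np.log(np.sum(residuals**2) / n) + 2 * n_par

p1, _ = curve_fit(one_mode, t, y, p0=[1.0, 1.5, 0.1])
p2, _ = curve_fit(two_mode, t, y, p0=[1.0, 1.5, 0.1, 0.2, 0.4])
aic1 = aic(y - one_mode(t, *p1), 3)
aic2 = aic(y - two_mode(t, *p2), 5)

# The two-mode fit is decisively preferred and recovers the injected a_nh
assert aic2 < aic1 - 10
assert abs(p2[3] - 0.3) < 0.05
```

A lower AIC indicates the preferred model; a decrease of order ten units or more is conventionally taken as decisive, which is the logic behind the 80-unit reductions quoted above.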
The results for the extracted non-hydrodynamic mode damping rates are also shown in Fig.~\ref{fig:two}. Curiously, the extracted damping rate $\Gamma_{Q,nh}$ for $\ln(k_F a)\simeq 1.84,2.8$ is consistent with the kinetic theory analytic result (\ref{eq:bdp}) for an ideal gas in a harmonic trap using the relation $\omega_\perp \tau_R=0.12 \left(1+\frac{4}{\pi^2}\ln^2(k_F a)\right)$ (cf.~Refs.~\cite{Brewer2015,2012PhRvA..85a3636B}). For the case of $D=3$, we analyze raw data by Kinast et al. in Ref.~\cite{2005PhRvL..94q0404K} for the breathing mode oscillations of a gas of $N\simeq2\times 10^{5}$ $^6{\rm Li}$ atoms at a magnetic field of $B=840$ G (close to unitarity) for several different temperatures $T/T_F$ where $T_F=(3 N \lambda)^{1/3}\omega_\perp$, $\lambda\simeq 0.045$, $\omega_\perp\simeq 2\pi\times 1700$ Hz. For each of the $T/T_F$ values, we extract hydrodynamic mode frequencies and damping rates that are consistent with published values \cite{2005PhRvL..94q0404K,2011Sci...331...58C}. In Fig.~\ref{fig:two}, amplitudes and extracted damping rates of a putative non-hydrodynamic mode are shown. For a total of 1100 data points, the AIC decreases by 80 units when allowing for a non-hydrodynamic mode in Eq.~(\ref{fittingf}). Extracted amplitudes and damping rates seem for the most part uncorrelated as a function of $T/T_F$, but it is curious to note that for several values of $T/T_F$, the extracted damping rate is consistent with the kinetic theory result (\ref{eq:bdp}) using the relation $\omega_\perp \tau_R=\frac{45 \pi}{4} \frac{\omega_\perp}{T_F}\frac{T^2}{T_F^2}$ \cite{2008PhRvA..78e3609R}. Our interpretation of these results is that the fits for the $D=3$ breathing mode hint at, but do not provide statistically significant evidence for, the presence of a non-hydrodynamic mode.
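The two relaxation-time relations quoted above are simple to evaluate; the helper below just restates them (the trap parameters are those given in the text):

```python
import math

def tau_R_2d(ln_kF_a):
    # D=2 relation quoted above: omega_perp*tau_R = 0.12*(1 + (4/pi^2) ln^2(kF a))
    return 0.12 * (1.0 + (4.0 / math.pi**2) * ln_kF_a**2)

def tau_R_3d(T_over_TF, omega_perp_over_TF):
    # D=3 relation quoted above:
    #   omega_perp*tau_R = (45 pi / 4) * (omega_perp / T_F) * (T / T_F)^2
    return (45.0 * math.pi / 4.0) * omega_perp_over_TF * T_over_TF**2

# D=2 example at ln(kF a) = 1.84
print(round(tau_R_2d(1.84), 4))  # -> 0.2847

# D=3: T_F = (3 N lambda)^(1/3) omega_perp with N = 2e5, lambda = 0.045,
# so omega_perp / T_F = 1/30 exactly
omega_perp_over_TF = 1.0 / (3 * 2e5 * 0.045) ** (1.0 / 3.0)
print(round(tau_R_3d(0.5, omega_perp_over_TF), 4))  # -> 0.2945
```

Both dimensionless values are of order $0.3$, i.e.\ neither deep in the hydrodynamic nor in the collisionless regime, which is why a kinetic description of the damped mode is plausible here.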
\paragraph{\bf Conclusions:} We have argued that there could be a new class of strongly interacting systems (``SIQFs'') sharing similar transport properties beyond the now well-established value of shear viscosity over entropy density. If this is the case, then non-hydrodynamic quasinormal modes in SIQFs, theoretically well-established using the AdS/CFT framework, would imply the presence of non-hydrodynamic modes in all other SIQFs. We have studied collective modes in trapped cold Fermi gases close to unitarity in $D=2,3$ both theoretically and by reanalyzing experimental data. Our analysis hints at the presence of non-hydrodynamic modes in cold Fermi gases, but further experimental work would be needed to corroborate our findings. We believe that a study of non-hydrodynamic modes in ultracold quantum gases would open a new window into understanding transport properties of these systems and other SIQFs. This could help establish a new theory of ``strongly interacting quantum matter'' of importance in many subfields of physics. \begin{acknowledgments} \paragraph{\bf Acknowledgements:} This work was supported in part by the Department of Energy, DOE award No. DE-SC0008132. We would like to thank M.~Koschorreck and C.~Cao for providing us with the raw time evolution data for the experiments in Refs.~\cite{Vogt:2011np,2013PhRvA..87d3612B,2005PhRvL..94q0404K,2011Sci...331...58C} and R.~Grimm, M.~K\"ohl, K.~Rajagopal and J.~Thomas for useful discussions. \end{acknowledgments} \bibliographystyle{apsrev}
\section{Introduction} The two prevalent themes in auction theory are revenue maximization for the seller - referred to as \textit{optimality}, and social welfare maximization - referred to as \textit{efficiency}. For example, VCG \cite{Clarke71,Groves73,Vickrey61} is the most widely studied efficient auction, while a Bayesian optimal single item auction for the independent private value model was first characterized by Myerson in his seminal work~\cite{Myerson81}. VCG has been generalized for combinatorial auctions (see \cite{AusubelMilgrom06} for a description); Myerson's optimal auction framework has been extended to a more general \textit{single-parameter} setting (see \cite{Hartline&Karlin07} for a description), and to auctions with single-minded buyers \cite{Abhishek10A}. An allocation of items among buyers generates value for the items. The realized social welfare is defined as the total generated value. This is an upper bound on the revenue that a seller can extract\footnote{This follows from the individual rationality assumption defined later in this paper.}. Thus, an allocation that creates a large social welfare might appear as a precursor to extracting large revenue; the seller can extract more revenue by first creating a large total value for the items and then collecting a part of it as payments from the buyers. However, in general, an optimal auction is not efficient and vice versa. As presented in \cite{Myerson81}, in optimal single item auctions where buyers' private valuations are drawn independently from the same distribution (referred to as \textit{same priors} from here on), the seller sets a common reserve price and does not sell the item if the values reported by all buyers are below the reserve price.
When buyers' private values are realized from different distributions (referred to as \textit{different priors} from here on), then not only can the reserve prices be different for different buyers, the seller need not always sell the item to the buyer with the highest reported value. However, an efficient auction like VCG will award the item to the buyer who values it the most in all such scenarios. Moreover, as we show later in this paper, in multiple item auctions with single-parameter buyers, an optimal auction need not be efficient, even if the buyers have the same priors and there are no reserve prices. We study how much an optimal auction loses in efficiency when compared with an efficient auction. Our metric is the worst case normalized difference in the realized social welfares by an efficient auction and an optimal auction, where the normalization is with respect to the social welfare realized by an efficient auction. The worst case is taken over all possible probability distributions on buyers' valuations. We refer to this as the worst case efficiency loss ratio (henceforth ELR). This ratio quantifies how much the goal of revenue maximization can be in conflict with the social goal of welfare maximization. Two previous works that also study the trade-off between optimality and efficiency are \cite{Bulow&Klemperer96} and \cite{Aggarwal09}. However, the metrics used by \cite{Bulow&Klemperer96} and \cite{Aggarwal09} are the number of extra buyers required by an efficient auction to match an optimal auction in revenue, and the number of extra buyers required by an optimal auction to match an efficient auction in the realized social welfare, respectively. This is fundamentally different from the problem we study here. 
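The ELR just defined can be made concrete in the simplest setting: a single item, a single buyer, and a binary valuation (value $x^1$ with probability $p$, else $x^2=1$), where the optimal auction reduces to a posted price. The grid search below is purely illustrative (the values are hypothetical, not from the analysis in this paper):

```python
import numpy as np

# Buyer value: x1 w.p. p, x2 = 1 w.p. (1-p). For one buyer, the optimal
# mechanism is a posted price: either x1 (always sells) or x2 (sells only
# to the high type). The efficient auction always sells.
x1, p = np.meshgrid(np.linspace(0.001, 0.999, 999),
                    np.linspace(0.001, 0.999, 999))
msw = p * x1 + (1 - p)               # maximum (efficient) social welfare
sell_always = x1 >= (1 - p)          # posted price x1 yields revenue >= (1-p)*x2
welfare_opt = np.where(sell_always, msw, 1 - p)
elr = (msw - welfare_opt) / msw      # efficiency loss ratio at each grid point

print(elr.max())  # approaches 1/2 from below as the grid is refined
```

The grid maximum stays strictly below $1/2$, consistent with the $1/2$ bound for binary valuations stated in this paper.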
In \cite{Correa2009}, the authors find bounds on the informational cost introduced by the presence of private information (see Section \ref{sec:discussion} for its relationship with the ELR) for a class of resource allocation problems, but for continuous probability distributions on the cost of resources, and under some restrictive assumptions on the probability distributions. The main contributions of this paper are the following. We first focus on optimal auctions with binary valued single-parameter buyers with different priors. We show that the worst case ELR is no worse than it is with only one buyer; moreover, it is at most $1/2$. A tighter bound is obtained for auctions with identical items and buyers with the same priors. Moving beyond the case of binary valuations, we focus on single item optimal auctions where buyers have the same priors. We reduce the problem of finding the worst case ELR to a relatively simple optimization problem involving only the common probability vector of buyers. For the single buyer case, we find a closed form expression for the worst case ELR as a function of $r$ - the ratio of the maximum to the minimum possible value of the item for the buyer, and $K$ - the number of discrete values that the buyer can have for the item. For multiple buyers, we provide lower and upper bounds on the worst case ELR as a function of $r$, $K$, and $N$ - the number of buyers. These bounds are tight asymptotically as~$K$ goes to infinity. We also show that when buyers have different priors, the lower bound on the worst case ELR is almost the same as the worst case ELR with only one buyer. The rest of the paper is organized as follows. Section \ref{sec:model} outlines our model, notation, and definitions. Section \ref{sec:opt-auction} summarizes the structure of optimal auctions. Section \ref{sec:elr} formally defines the problem under investigation and presents the bounds on the ELR.
Section \ref{sec:discussion} provides some comments and Section \ref{sec:conclusion} gives a summary of results. \section{Model, Definitions, and Notation} \label{sec:model} Consider $N$ buyers competing for a set of items that a seller wants to sell. The set of buyers is denoted by $\setN \triangleq \{1,2,\ldots,N\}$. A buyer is said to be a \textit{winner} if he gets any one of his desired bundles of items. We restrict to single-parameter buyers - a buyer $n$ gets a positive value $v_n^{*}$ if he is a winner, irrespective of the bundle he gets; otherwise, he gets zero value. The bundles desired by the buyers are publicly known. The value $v_n^{*}$ is referred to as the \textit{type} of buyer $n$. The type of a buyer is known only to him and constitutes his private information. For each buyer $n$, the seller and the other buyers have imperfect information about his true type~$v_n^*$; they model it by a discrete random variable $X_n$. The probability distribution of $X_n$ is common knowledge. $X_n$ is assumed to take values from a set $\setX_n \triangleq \{x_n^1, x_n^2,\ldots,x_n^{K_n}\}$ of cardinality~$K_n$, where $0 \leq x_n^1 < x_n^2 < \ldots < x_n^{K_n}$. The probability that $X_n$ is equal to~$x_n^i$ is denoted by~$p_n^i$; i.e., $p_n^i \triangleq \Prob{X_n = x_n^i}$. We assume that $p_n^i > 0$ for all $n \in \setN$ and $1 \leq i \leq K_n$. Thus, the type~$v_n^*$ can be interpreted as a specific realization of the random variable $X_n$, known only to buyer $n$. Random variables $[X_n]_{n \in \setN}$ are assumed to be independent\footnote{This is referred to as the independent private value model, a fairly standard model in auction theory.}. In general, the structure of the problem restricts the possible sets of winners. Such constraints are captured by defining a set $\setA$ to be the collection of all possible sets of winners; i.e., $A \in \mc{A}$ if $A \subseteq \setN$ and all buyers in $A$ can win simultaneously. 
We assume that $\emptyset \in \setA$, and $\setA$ is \textit{downward closed}; i.e., if $A \in \setA$ and $B \subseteq A$, then $B \in \setA$. Also, assume that for each buyer~$n$, there is a set $A \in \setA$ such that $n \in A$. The single-parameter model is rich enough to capture many scenarios of interest. In single item auctions, a buyer gets a certain positive value if he wins the item and zero otherwise. Here, $\setA$ consists of all singletons $\{n\}$, $n \in \setN$ (and the empty set $\emptyset$). In an auction of $S$ identical items, each buyer wants any one of the $S$ items and has the same value for any one of them. Here, $\setA$ consists of all subsets of buyers of size at most $S$. Similarly, in auctions with single-minded buyers with known bundles\footnote{For single-minded buyers, both the desired bundle of items and its value for a buyer are his private information. However, if the bundles are known then this reduces to the single-parameter model.}, each buyer~$n$ is interested only in a specific (known) bundle~$b_n^{*}$ of items and has a value~$v_n^{*}$ for any bundle $b_n$ such that $b_n$ contains the bundle $b_n^{*}$, while he has zero value for any other bundle. Here,~$\setA$ consists of all sets of buyers with pairwise disjoint bundles. Denote a typical \textit{reported type} (henceforth, referred to as a \textit{bid}) of a buyer $n$ by $v_n$, where $v_n \in \mc{X}_n$, and let $\mb{v} \triangleq (v_1, v_2, \ldots, v_N)$ be the vector of bids of everyone. Define $\mb{X} \triangleq (X_1, X_2, \ldots, X_N)$ and $\bs{\setX} \triangleq \setX_1 \times \setX_2 \times \ldots \times \setX_N$. We use the standard notation of $\mb{v}_{-n} \triangleq (v_1, \ldots, v_{n-1}, v_{n+1}, \ldots, v_N)$ and $\mb{v} \triangleq (v_n, \mb{v}_{-n})$. Similar interpretations are used for $\mb{X}_{-n}$ and $\bs{\setX}_{-n}$. Henceforth, in any further usage, $v_n$, $\mb{v}_{-n}$, and $\mb{v}$ are always in the sets $\setX_n$, $\bs{\setX}_{-n}$, and $\bs{\setX}$ respectively.
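The collections $\setA$ described above are easy to materialize and check for the downward-closed property. The sketch below uses hypothetical buyer indices and set sizes chosen purely for illustration:

```python
from itertools import chain, combinations

def downward_closed(A):
    # Every subset of a feasible winner set must itself be feasible
    return all(frozenset(B) in A
               for W in A
               for B in chain.from_iterable(
                   combinations(W, r) for r in range(len(W) + 1)))

N = 4
buyers = range(N)

# Single item auction: only the empty set and singletons are feasible
single_item = {frozenset()} | {frozenset({n}) for n in buyers}

# S identical items: any set of at most S buyers can win simultaneously
S = 2
identical = {frozenset(c) for r in range(S + 1)
             for c in combinations(buyers, r)}

assert downward_closed(single_item)
assert downward_closed(identical)
assert len(identical) == 1 + 4 + 6  # empty set, 4 singletons, 6 pairs
```

For single-minded buyers one would instead enumerate the sets of buyers whose known bundles are pairwise disjoint; the downward-closed check applies unchanged.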
Let $\mb{x}_n \triangleq (x_n^1, x_n^2, \ldots, x_n^{K_n})$, $\mb{x}_{1:N} \triangleq (\mb{x}_1, \mb{x}_2, \ldots, \mb{x}_N)$, and define $\mb{p}_n$ and $\mb{p}_{1:N}$ similarly. \section{Preliminaries on Finite Support Bayesian Optimal Auction} \label{sec:opt-auction} In this section we summarize the structure of an optimal auction for single-parameter buyers. The presentation here is based on \cite{Abhishek10A}, adapted for single-parameter buyers. Readers are referred to \cite{Abhishek10A} and references therein (in particular \cite{Elkind07}) for further details. We will be focusing only on the auction mechanisms where buyers are asked to report their types directly (referred to as \textit{direct mechanism}). In light of the \textit{revelation principle}\footnote{Revelation principle says that, given a mechanism and a Bayes-Nash equilibrium (BNE) for that mechanism, there exists a direct mechanism in which truth-telling is a BNE, and allocation and payment outcomes are same as in the given BNE of the original mechanism.} \cite{Myerson81}, the restriction to direct mechanisms is without any loss of optimality. A direct auction mechanism is specified by an allocation rule $\bs{\pi}: \bs{\setX} \mapsto [0,1]^{|\setA|}$, and a payment rule $\mb{M}:\bs{\setX} \mapsto \R^N$. Given a bid vector $\mb{v}$, the allocation rule $\bs{\pi}(\mb{v}) \triangleq [\pi_A(\mb{v})]_{A \in \setA}$ is a probability distribution over the collection ${\cal A}$ of possible sets of winners. For each $A \in \setA$, $\pi_A(\mb{v})$ is the probability that the set of buyers $A$ win simultaneously. The payment rule is defined as $\mb{M} \triangleq (M_1,M_2,\ldots,M_N)$, where $M_n(\mb{v})$ is the payment (expected payment in case of random allocation) that buyer~$n$ makes to the seller when the bid vector is $\mb{v}$. 
Let $Q_n(\mb{v})$ be the probability that buyer $n$ wins in the auction when the bid vector is $\mb{v}$; i.e., \beq{winning-prob} Q_n(\mb{v}) \triangleq \sum_{A \in \setA : n \in A} \pi_A(\mb{v}). \eeq Given that the value of buyer $n$ is $v_n^{*}$, and the bid vector is $\mb{v}$, the payoff (expected payoff in case of random allocation) of buyer $n$ is: \beq{payoff} \sigma_n(\mb{v}; v_n^{*}) \triangleq Q_n(\mb{v})v_n^{*} - M_n(\mb{v}). \eeq Here, buyers are assumed to be risk neutral and have quasilinear payoffs (a standard assumption in auction theory). The mechanism $(\bs{\pi},\mb{M})$ and the payoff functions $[\sigma_n]_{n \in \setN}$ induce a game of incomplete information among the buyers. The seller's goal is to design an auction mechanism $(\bs{\pi},\mb{M})$ to maximize his expected revenue at a Bayes-Nash equilibrium (BNE) of the induced game. Again, using the revelation principle, the seller can restrict only to the auctions where truth-telling is a BNE (referred to as \textit{incentive compatibility}) without any loss of optimality. For the above revenue maximization problem to be well defined, assume that the seller cannot force the buyers to participate in an auction and impose arbitrarily high payments on them. Thus, a buyer will voluntarily participate in an auction only if his payoff from participation is nonnegative (referred to as \textit{individual rationality}). The seller is assumed to have free disposal of items and may decide not to sell some or all items for certain bid vectors. The idea now, as in \cite{Myerson81}, is to express incentive compatibility, individual rationality, and feasible allocations as mathematical constraints, and formulate the revenue maximization objective as an optimization problem under these constraints.
To this end, for each $n \in \setN$, define the following functions: \beq{q} q_n(v_n) \triangleq \E{Q_n(v_n,\mb{X}_{-n})}, \eeq \beq{m} m_n(v_n) \triangleq \E{M_n(v_n,\mb{X}_{-n})}. \eeq Here, $q_n(v_n)$ is the expected probability that buyer $n$ wins given that he reports his type as $v_n$ while everyone else is truthful. The expectation is over the type of everyone else; i.e., over $\mb{X}_{-n}$. Similarly, $m_n(v_n)$ is the expected payment that buyer $n$ makes to the seller. The incentive compatibility and individual rationality constraints can be expressed mathematically as follows: \begin{enumerate} \item \textit{Incentive compatibility (IC)}: For any $n \in \setN$, and $1 \leq i, j \leq K_n$, \beq{ic} q_n(x_n^i)x_n^i - m_n(x_n^i) \geq q_n(x_n^j)x_n^i - m_n(x_n^j). \eeq Notice that, given $X_n = x_n^i$, the left side of \eqref{eq:ic} is the payoff of buyer $n$ from reporting his type truthfully (assuming everyone else is also truthful), while the right side is the payoff from misreporting his type to $x_n^j$. \item \textit{Individual rationality (IR)}: For any $n \in \setN$, and $1 \leq i \leq K_n$, \beq{ir} q_n(x_n^i)x_n^i - m_n(x_n^i) \geq 0. \eeq \end{enumerate} Under IC, all buyers report their true types. Hence, the expected revenue that the seller gets is $\mathbb{E}\big[\sum_{n=1}^{N}M_n(\mb{X})\big]$. The expectation here is over the random vector~$\mb{X}$. Thus, the optimal auction problem is to maximize the seller's expected revenue, $\mathbb{E}\big[\sum_{n=1}^{N}M_n(\mb{X})\big]$, subject to the IC and IR constraints. Define the \textit{virtual-valuation} function, $w_n$, of a buyer $n$ as: \beq{virtual-bid} w_n(x_n^i) = \left\{ \begin{array}{l l} \displaystyle x_n^i - (x_n^{i+1} - x_n^i)\frac{(\sum_{j=i+1}^{K_n}p_n^j)}{p_n^i} & \quad \text{if $1 \leq i\leq K_n -1$,}\\ x_n^{K_n} & \quad \text{if $i = K_n$.}\\ \end{array} \right.
\end{equation} The virtual-valuation function $w_n$ is said to be \textit{regular} if $w_n(x_n^i) \leq w_n(x_n^{i+1})$ for $1 \leq i \leq K_n -1$. The proposition below identifies the maximum expected revenue for a given allocation rule, over all payment rules that meet the IC and IR constraints. In particular, it identifies whether such payment rules exist. \begin{proposition}[from \cite{Abhishek10A}] \label{proposition:opt-revenue} Let $\bs{\pi}$ be an allocation rule and let $[Q_n]_{n \in \setN}$ and $[q_n]_{n \in \setN}$ be obtained from $\bs{\pi}$ by \eqref{eq:winning-prob} and \eqref{eq:q}. A payment rule satisfying the IC and IR constraints exists for $\bs{\pi}$ if and only if $q_n(x_n^i) \leq q_n(x_n^{i+1})$ for all $n \in \setN$ and $1\leq i \leq K_n-1$. Given such $\bs{\pi}$ and a payment rule $\mb{M}$ satisfying the IC and IR constraints, the seller's revenue satisfies: \beqn \E{\sum_{n=1}^{N}M_n(\mb{X})} \leq \E{\sum_{n=1}^{N}Q_n(\mb{X})w_n(X_n)}. \eeqn Moreover, a payment rule $\mb{M}$ achieving this bound exists, and any such $\mb{M}$ satisfies: \beqn m_n(x_n^i) = \sum_{j = 1}^{i}(q_n(x_n^j) - q_n(x_n^{j-1}))x_n^j, \eeqn for all $n \in \setN$ and $1\leq i \leq K_n$, where we use the notational convention $q_n(x_n^{0}) \triangleq 0$. \end{proposition} Given $\bs{\pi}$ satisfying the conditions of Proposition \ref{proposition:opt-revenue}, let $R(\bs{\pi})$ denote the maximum revenue to the seller under the IC and IR constraints. From Proposition \ref{proposition:opt-revenue} and \eqref{eq:winning-prob}, \beq{revenue} R(\bs{\pi}) = \E{\sum_{n=1}^{N}Q_n(\mb{X})w_n(X_n)} = \E{\sum_{A \in \setA}\pi_A(\mb{X})\big(\sum_{n \in A}w_n(X_n)\big)}. \eeq The above suggests that an optimal auction can be found by selecting the allocation rule $\bs{\pi}$ (and in turn $[Q_n]_{n \in \setN}$ and $[q_n]_{n \in \setN}$) that assigns nonzero probabilities only to the feasible sets of winners with the maximum total virtual valuations for each bid vector $\mb{v}$. 
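The virtual-valuation formula in \eqref{eq:virtual-bid} is straightforward to evaluate. The sketch below uses a hypothetical four-point uniform distribution (which happens to be regular); it is an illustration, not an example taken from this paper:

```python
def virtual_valuations(x, p):
    # Eq. (virtual-bid): w(x^i) = x^i - (x^{i+1} - x^i) * P(X > x^i) / p^i
    # for i < K, and w(x^K) = x^K.
    K = len(x)
    w = []
    for i in range(K):
        if i < K - 1:
            tail = sum(p[i + 1:])  # P(X > x^i)
            w.append(x[i] - (x[i + 1] - x[i]) * tail / p[i])
        else:
            w.append(x[i])
    return w

def is_regular(w):
    # Regularity: virtual valuations are nondecreasing in the type index
    return all(w[i] <= w[i + 1] for i in range(len(w) - 1))

w = virtual_valuations([1.0, 2.0, 3.0, 4.0], [0.25, 0.25, 0.25, 0.25])
print(w)  # -> [-2.0, 0.0, 2.0, 4.0]
assert is_regular(w)
```

Note that the lowest types can have negative virtual valuations even though all true values are positive; this is exactly what gives rise to reserve prices in the optimal auction.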
If all $w_n$'s are regular, then it can be verified that such an allocation rule satisfies the monotonicity condition on the $q_n$'s needed by Proposition~\ref{proposition:opt-revenue}. However, if the $w_n$'s are not regular, the resulting allocation rule would not necessarily satisfy the required monotonicity condition on the $q_n$'s. This problem can be remedied by using another function, $\overline{w}_n$, called the \textit{monotone virtual valuation} (henceforth MVV), constructed graphically as follows. Let $(g_n^0,h_n^0) \triangleq (0,-x_n^1)$, $(g_n^i,h_n^i) \triangleq \big(\sum_{j = 1}^{i} p_n^j, -x_n^{i+1}(\sum_{j = i+1}^{K_n} p_n^j)\big)$ for $1 \leq i \leq K_n-1$, and $(g_n^{K_n},h_n^{K_n}) \triangleq (1,0)$. Then, $w_n(x_n^i)$ is the slope of the line joining the point $(g_n^{i-1},h_n^{i-1})$ to the point $(g_n^i,h_n^i)$; i.e., $w_n(x_n^i) = (h_n^i-h_n^{i-1})/(g_n^i-g_n^{i-1})$. Find the lower convex hull of points $[(g_n^i,h_n^i)]_{0 \leq i \leq K_n}$, and let $\overline{h}_n^i$ be the point on this convex hull corresponding to $g_n^i$. Then, $\overline{w}_n(x_n^i)$ is the slope of the line joining the point $(g_n^{i-1},\overline{h}_n^{i-1})$ to the point $(g_n^i,\overline{h}_n^i)$; i.e., $\overline{w}_n(x_n^i) = (\overline{h}_n^i-\overline{h}_n^{i-1})/(g_n^i-g_n^{i-1})$. Notice that $\overline{w}_n(x_n^i) \leq \overline{w}_n(x_n^{i+1})$ for all $n \in \setN$ and $1 \leq i \leq K_n-1$. Also, if $w_n$ is regular, $\overline{w}_n$ is equal to~$w_n$. The process of finding virtual valuations and monotone virtual valuations can be explained using Figure \ref{fig:vv-mvv}. Since the virtual-valuation function of a buyer depends only on the probability distribution of his type, we describe the scheme for a typical random variable $X$, where we have dropped the subscript. Suppose that $X$ takes four different values $\{x^1,x^2,x^3,x^4\}$ with corresponding probabilities $\{p^1,p^2,p^3,p^4\}$.
Draw vertical lines separated from each other by distances $p^1$, $p^2$, $p^3$, and $p^4$ as shown in the figure. For each $1 \leq i \leq 4$, join the point $-x^i$ on the y-axis to the x-axis at $1$ (sum of probabilities). Call this line $i$. Then, $(g^0,h^0)=(0,-x^1)$ and $(g^4,h^4)=(1,0)$. The intersection of line $2$ with the first vertical line is the point $(g^1,h^1)$. Similarly, the intersection of line $3$ with the second vertical line is the point $(g^2,h^2)$ and so on. The virtual-valuation function $w$ is given by the slopes of the lines connecting these points. For the case shown in the figure, $w(x^1) > w(x^2)$ and hence the virtual-valuation function is not regular. Here, the lower convex hull of the points $(g^i,h^i)$ is taken. The slopes of individual segments of this convex hull give the monotone virtual valuation $\overline{w}(x^i)$. This is equivalent to replacing $w(x^1)$ and $w(x^2)$ by their weighted mean, i.e., $\overline{w}(x^1) = \overline{w}(x^2) = (p^1w(x^1) + p^2w(x^2))/(p^1 + p^2)$. \begin{figure}[h] \begin{center} \includegraphics[trim=0.9in 0.9in 0.9in 0.9in, clip=true, height=5.5in,angle=270]{fig1} \caption{\small \sl Virtual valuations and monotone virtual valuations as the slopes of the graph.\label{fig:vv-mvv}} \end{center} \end{figure} The following proposition establishes the optimality of the allocation rule obtained by using~$\overline{w}_n$. \begin{proposition}[from \cite{Abhishek10A}] \label{proposition:monotone-bid-opt} Let $\bs{\pi}$ be any allocation rule satisfying the conditions of Proposition \ref{proposition:opt-revenue} and let $[Q_n]_{n \in \setN}$ be obtained from $\bs{\pi}$ by \eqref{eq:winning-prob}. Then, \beq{monotone-bid-ineq} \E{\sum_{n=1}^{N}Q_n(\mb{X})w_n(X_n)} \leq \E{\sum_{n=1}^{N}Q_n(\mb{X})\overline{w}_n(X_n)}, \eeq where equality is achieved by any allocation rule $\bs{\pi}$ that maximizes $\sum_{n=1}^{N}Q_n(\mb{v})\overline{w}_n(v_n)$ for each bid vector $\mb{v}$.
\end{proposition} An optimal auction, which uses the MVVs just defined, is the maximum weight algorithm shown as Algorithm \ref{alg:mwa}. The set~$\mc{W}(\mb{v})$ is the collection of all feasible subsets of buyers with maximum total MVVs for the given bid vector $\mb{v}$. Since $\setA$ is downward closed and $\emptyset \in \setA$, no buyer $n$ with $\overline{w}_n(v_n) < 0$ is included in the set of winners~$W(\mb{v})$. Depending on the tie-breaking rule, a buyer $n$ with $\overline{w}_n(v_n) = 0$ may or may not be included in the set of winners. Assume that only buyers with $\overline{w}_n(v_n) > 0$ are considered. Since $\overline{w}_n(x_n^i) \leq \overline{w}_n(x_n^{i+1})$, the seller equivalently sets a reserve price for each buyer $n$. A buyer whose bid is below his reserve price never wins. From \cite{Abhishek10A}, the reserve price $x_n^*$ for buyer $n$ is: \beq{reserve-price} x_n^* = \max \bigg\{v_n: ~ v_n \in \argmax_{\widehat{v}_n \in \setX_n} ~\widehat{v}_n\Prob{X_n \geq \widehat{v}_n}\bigg\}. \eeq In the example given in Figure \ref{fig:vv-mvv}, this corresponds to the y-intercept of the line through the lowermost point of the graph and the point $(1,0)$, which is $x^3$. \begin{algorithm}[h] \floatname{algorithm}{Algorithm} \caption{Maximum weight algorithm} \label{alg:mwa} Given a bid vector $\mb{v}$: \begin{enumerate} \item Compute $\overline{w}_n(v_n)$ for each $n \in \setN$. \item Take $\bs{\pi}(\mb{v})$ to be any probability distribution on the collection $\mc{W}(\mb{v})$ defined as: \beqn \mc{W}(\mb{v}) \triangleq \argmax_{A \in \setA} \sum_{n \in A}\overline{w}_n(v_n). \eeqn Obtain the set of winners $W(\mb{v})$ by sampling from $\mc{W}(\mb{v})$ according to $\bs{\pi}(\mb{v})$. \item Collect payments given by: \beqn M_n(\mb{v}) = \sum_{i:x_n^i \leq v_n}\big(Q_n(x_n^i,\mb{v}_{-n}) - Q_n(x_n^{i-1},\mb{v}_{-n})\big)x_n^i, \eeqn where $Q_n$ is given by \eqref{eq:winning-prob}, and $Q_n(x_n^0,\mb{v}_{-n}) \triangleq 0$.
\end{enumerate} \end{algorithm} \section{Efficiency Loss in Optimal Auctions} \label{sec:elr} Given any incentive compatible auction mechanism $(\bs{\pi},\mb{M})$, the social welfare realized by the allocation rule $\bs{\pi}$ is $\mathbb{E}\big[\sum_{n=1}^N Q_n(\mb{X})X_n \big]$. From the IR constraint, this is at least $R(\bs{\pi})$. An efficient auction maximizes the realized social welfare. Since \beq{welfare-eq} \sum_{n=1}^N Q_n(\mb{X})X_n = \sum_{A \in \setA}\pi_A(\mb{X})\big(\sum_{n \in A}X_n\big), \eeq an efficient allocation rule $\bs{\pi}^e(\mb{v})$ is any probability distribution over the set $\argmax_{A \in \setA} \big(\sum_{n \in A}v_n\big)$. It is easy to verify that $\bs{\pi}^e$ satisfies the monotonicity condition needed by Proposition~\ref{proposition:opt-revenue}. The corresponding maximum social welfare (henceforth MSW) is given by: \beq{msw} \text{MSW}(\mb{x}_{1:N},\mb{p}_{1:N};\setA) = \mathbb{E}\bigg[\max_{A \in \setA} \big(\sum_{n \in A}X_n\big)\bigg], \eeq where $\mb{x}_{1:N}$ and $\mb{p}_{1:N}$ are as defined in Section \ref{sec:model}. By contrast, an optimal auction, described in Section \ref{sec:opt-auction}, involves maximizing the sum of MVVs instead of the sum of true valuations. Consequently, it differs from an efficient auction in three ways. First, the buyers with negative MVVs (equivalently, their bids are below their respective reserve prices) do not win. Second, even if the bid of one buyer is higher than that of another, their corresponding MVVs might be in a different order. Hence, in single item optimal auctions, the winner is not necessarily the buyer with the highest valuation for the item. Finally, for a multiple item auction with single-parameter buyers, the allocation that maximizes the sum of the MVVs might be different from the one that maximizes the sum of the true valuations. These three differences are highlighted by the following examples: \begin{example} \label{example:eg1} Consider two i.i.d. 
buyers competing for one item. Their possible values for the item are $\{ 1,2 \}$ with probabilities $\{ 1/3,2/3 \}$ respectively. An efficient auction, like VCG, will award the item to the highest bidder and charge him a price equal to the second highest bid. Hence, the revenue generated by VCG is $2\cdot(2/3)^2 + 1\cdot\big(1-(2/3)^2\big) = 13/9 < 1.45$. However, the optimal auction sets the reserve price equal to $2$ (since $w_n(1) < 0$), and awards the item to any buyer with value $2$. The revenue collected by the optimal auction is $2\cdot\big(1-(1/3)^2\big) = 16/9 > 1.77$. Notice that unlike VCG, the item is not sold when both the buyers have their values equal to $1$. Hence, the optimal auction loses in efficiency. \end{example} \begin{example} \label{example:eg2} Consider two buyers competing for one item. Buyer $1$ takes values $\{5,10\}$ each with probability $0.5$. Buyer $2$ takes values $\{1,2\}$, independent of buyer $1$, each with probability~$0.5$. An efficient auction will always award the item to buyer $1$. Any auction that always awards the item to buyer $1$ cannot charge him more than $5$, else buyer $1$ will misreport his value. Now consider another auction that gives the item to buyer $1$ only if he bids $10$ and charges him $10$; otherwise, the item is given to buyer $2$ at the price $1$. It is easy to see that this auction is incentive compatible. The revenue that this auction generates is $0.5\cdot10 + 0.5\cdot1 = 5.5$. Since the optimal auction must extract at least this much revenue, it cannot always award the item to buyer $1$. In fact, it can be verified that the second auction is indeed optimal. By not awarding the item to the buyer with the highest value for it, the optimal auction again loses in efficiency. \end{example} \begin{example} \label{example:eg3} Consider $3$ single-minded buyers with known bundles competing for $4$ items. Buyer~$1$ wants the items $(A, B)$, buyer $2$ wants the items $(B, C)$, and buyer $3$ wants the items $(C, D)$.
Thus, buyers $(1,3)$ and buyer $2$ cannot get their respective bundles simultaneously. Buyers are i.i.d. with values $\{1,8/5\}$, each with probability $0.5$. Suppose that their true values are $(1,8/5,1)$ respectively. An efficient auction, like VCG, will select buyers $1$ and $3$ as winners since this maximizes the total value of the allocation. However, since $2\cdot w_n(1) = 4/5 < w_n(8/5) = 8/5$, the optimal auction will select buyer $2$ as the winner. Again, there is a loss in efficiency because the optimal allocation is not necessarily the efficient one. \end{example} The social welfare realized by an optimal allocation cannot be more than the MSW. We quantify how much an optimal allocation loses in the realized social welfare when compared with the MSW. We normalize this loss in the realized welfare by the MSW. Let $\bs{\pi}^o$ be an optimal allocation rule given by Algorithm \ref{alg:mwa}, and let $[Q^o_n]_{n \in \setN}$ be obtained from $\bs{\pi}^o$ by \eqref{eq:winning-prob}. Given a random vector $\mb{X}$ denoting the valuations of the buyers, we define the efficiency loss ratio (ELR) as: \beq{elr} \text{ELR}(\bs{\pi}^o,\mb{x}_{1:N},\mb{p}_{1:N};\setA) \triangleq \frac{\text{MSW}(\mb{x}_{1:N},\mb{p}_{1:N};\setA) - \E{\sum_{n=1}^N Q^o_n(\mb{X})X_n}}{\text{MSW}(\mb{x}_{1:N},\mb{p}_{1:N};\setA)}. \eeq Recalling step $2$ of Algorithm \ref{alg:mwa}, any optimal allocation rule~$\bs{\pi}^o$ is a probability distribution on~$\mc{W}(\mb{v})$. Different probability distributions on $\mc{W}(\mb{v})$ correspond to different tie-breaking rules for selecting a set of winners $W(\mb{v}) \in \mc{W}(\mb{v})$\footnote{The tie-breaking rule must be consistent in the following sense: let $v_n$ and $\widehat{v}_n$ be such that $v_n < \widehat{v}_n$, but $\overline{w}_n(v_n) = \overline{w}_n(\widehat{v}_n)$; then $\Prob{n \in W(v_n,\mb{v}_{-n})} \leq \Prob{n \in W(\widehat{v}_n,\mb{v}_{-n})}$ for any $\mb{v}_{-n}$.}.
They result in the same expected revenue but different realized social welfare. Since the tie-breaking rule is determined by the auction designer (or the seller), we break ties in favor of the allocation rule that maximizes the social welfare realized within the set of optimal allocations (see Section \ref{sec:discussion} for a related discussion). Call the resulting allocation rule $\widetilde{\bs{\pi}}^o$. Given $r > 1$ and a positive integer $K$, define $\mc{D}_{r,K}$ as the set of $(\mb{x}_{1:N},\mb{p}_{1:N})$ satisfying the following properties: \begin{enumerate} \item For each $n \in \setN$, $0 < x_n^1 < x_n^2 < \ldots < x_n^{K_n}$, and $\mb{p}_n$ is a valid probability vector of dimension~$K_n$, where $K_n \leq K$, \item $(\max_{n\in\setN}~x_n^{K_n})/(\min_{n\in\setN}~x_n^1) \leq r$, \item For all $n \in \setN$ and $1 \leq i \leq K_n$, $p_n^i > 0$. \end{enumerate} The worst case ELR, denoted by $\eta(r,K;\setA)$, is defined as: \beq{worst-elr} \eta(r,K;\setA) \triangleq \sup_{(\mb{x}_{1:N},\mb{p}_{1:N}) \in \mc{D}_{r,K}} \text{ELR}(\widetilde{\bs{\pi}}^o,\mb{x}_{1:N},\mb{p}_{1:N};\setA). \eeq The MSW is continuous in $\mb{x}_{1:N}$ and $\mb{p}_{1:N}$. A slight perturbation of the $x_n^i$'s or the $p_n^i$'s can make the MVVs that are zero slightly negative, while causing only a very small change in the MSW. Hence, even if we restrict to optimal allocations that only include the buyers with positive MVVs, the supremum in \eqref{eq:worst-elr} remains unchanged. Consequently, in the subsequent treatment, for ease of analysis, we will confine attention to an efficient allocation within the set of optimal allocations that includes only the buyers with positive MVVs. For notational convenience, we drop $\mb{x}_{1:N}$, $\mb{p}_{1:N}$, and $\setA$ from the arguments of the MSW and ELR functions defined by \eqref{eq:msw} and \eqref{eq:elr} whenever the underlying $\mb{x}_{1:N}$, $\mb{p}_{1:N}$, and $\setA$ are clear from the context.
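To make these quantities concrete, the following is a small numerical sketch (ours, not part of the paper; the function names and the NumPy implementation are our own, and it covers only the discrete single-item i.i.d. case). It builds the MVVs as the slopes of the lower convex hull of the points $(g^i,h^i)$, computes the reserve price as the largest maximizer of $\widehat{v}\,\Prob{X \geq \widehat{v}}$ as in \eqref{eq:reserve-price}, and evaluates the ELR under the convention just adopted that only buyers with positive MVVs win:

```python
import numpy as np

def monotone_virtual_valuations(x, p):
    """MVVs: slopes of the lower convex hull of the points (g^i, h^i)."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    K = len(x)
    g = np.concatenate(([0.0], np.cumsum(p)))                      # g^0, ..., g^K
    h = np.array([-x[i] * p[i:].sum() for i in range(K)] + [0.0])  # h^0, ..., h^K
    hull = [0]                                                     # lower-hull scan
    for i in range(1, K + 1):
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # drop b if slope(a,b) >= slope(b,i): b is not on the lower hull
            if (h[b] - h[a]) * (g[i] - g[b]) >= (h[i] - h[b]) * (g[b] - g[a]):
                hull.pop()
            else:
                break
        hull.append(i)
    hbar = np.interp(g, g[hull], h[hull])                          # hull values at each g^i
    return np.diff(hbar) / np.diff(g)                              # MVV of x^1, ..., x^K

def reserve_price(x, p):
    """Largest maximizer of v * P(X >= v)."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    rev = np.array([x[i] * p[i:].sum() for i in range(len(x))])
    return x[max(np.flatnonzero(rev == rev.max()))]

def elr_single_item_iid(x, p, N):
    """ELR of the optimal rule for one item and N i.i.d. buyers: the only
    welfare loss is not selling when every buyer's MVV is non-positive."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    wbar = monotone_virtual_valuations(x, p)
    cums = np.cumsum(p)
    z = cums**N - np.concatenate(([0.0], cums[:-1]))**N            # P(max of N draws = x^i)
    msw = (z * x).sum()                                            # maximum social welfare
    realized = (z * x)[wbar > 0].sum()                             # sold iff MVV positive
    return (msw - realized) / msw
```

On Example \ref{example:eg1} (two i.i.d. buyers, values $\{1,2\}$ with probabilities $\{1/3,2/3\}$), this yields $\overline{w}(1) = -1$, a reserve price of $2$, and an ELR of $1/17$: the only welfare lost is the low value $1$ in the probability-$1/9$ event that both buyers draw it.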
\subsection{Auctions with binary valued single-parameter buyers} \label{sec:elr-mi-bin-val} We first bound the worst case ELR for optimal auctions with binary valued single-parameter buyers. Assume that each random variable $X_n$ takes only two values, $H_n$ and $L_n$, with probabilities $p_n$ and $1 - p_n$ respectively. Here, $H_n > L_n > 0$. The virtual-valuation function $w_n$ is given by $w_n(H_n) = H_n$ and $w_n(L_n) = (L_n - p_nH_n)/(1-p_n)$. Clearly, $w_n(L_n) < w_n(H_n)$, and hence, $w_n = \overline{w}_n$. The reserve price $x_n^*$ for buyer $n$ is $H_n$ if $p_nH_n \geq L_n$, otherwise it is equal to $L_n$. \begin{example} \label{example:single_buyer} Suppose there is only one buyer. We drop the subscript $n$ because $n\equiv 1$. The buyer's value for winning, $X$, is $H$ with probability $p$ and $L$ otherwise, where $0 < L < H$. Here, $\setA = \{ \emptyset , \{ 1 \} \}$. If $pH < L$, then buyer $1$ always wins under the optimal allocation, irrespective of the value of $X$. This is also the efficient allocation. However, if $pH \geq L$, then $w(L) \leq 0$ and buyer $1$ wins only if he bids $H$. This is not efficient because the buyer is not a winner if $X = L$. The social welfare realized is $pH$ while the $\text{MSW} = pH+(1-p)L$. Therefore $\text{ELR}(\bs{\pi}^o) = (1-p)L/(pH+(1-p)L) = (1-p)/(pr + 1 - p)$, where $H/L = r$. Since $pr \geq 1$, $\text{ELR}$ is maximized by setting $p = 1/r$. For this choice of $p$, we get $\text{ELR}(\bs{\pi}^o) = (r-1)/(2r-1)$. As $r\rightarrow \infty$ (so $p = 1/r \rightarrow 0$), $\text{ELR}(\bs{\pi}^o) \rightarrow 1/2$. \end{example} The following proposition shows that the worst case ELR for multiple binary valued single-parameter buyers is no worse than it is for the one buyer example given above. \begin{proposition} \label{proposition:elr-mi-bin-main} Given any $r > 1$, the worst case ELR for binary valued single-parameter buyers, denoted by $\eta(r,2;\setA)$, satisfies: $\eta(r,2;\setA) \leq (r-1)/(2r-1) \leq 1/2$. 
\end{proposition} \begin{proof} The proof is given in Appendix \ref{sec:appendix1}. \end{proof} Notice that the upper bound in Proposition \ref{proposition:elr-mi-bin-main} holds for any tie-breaking rule, any values of $r$ and $N$, and any $\setA$. The bound $\eta(r,2;\setA) \leq 1/2$, which holds uniformly over all $r$, can be improved further by putting some constraints on the structure of $\setA$ and on the $X_n$'s. The following result on the auction of $S$ identical items, where the buyers are binary valued and have the same priors, describes one such case. \begin{proposition} \label{proposition:elr-mi-sym-thm} Let $X_n$'s be i.i.d. random variables taking values $H$ and $L$, where $H > L > 0$, with probabilities $p$ and $1-p$ respectively. Let $\setA = \{A: A \subseteq \setN, |A| \leq S \}$, where $S \leq N$. Then, the worst case ELR satisfies: $\eta(r,2;\setA) \leq S/(S + N)$, and in particular, $\eta(\infty,2;\setA) = S/(S + N)$. \end{proposition} \begin{proof} The proof is given in Appendix \ref{sec:appendix2}. \end{proof} We conjecture that the bound $S/(S + N)$ on the worst case ELR holds uniformly over all $r$ even for any collection of binary valued single-parameter i.i.d. buyers such that the cardinality of any possible set of winners is at most $S$ (but not necessarily containing all subsets of $\setN$ of cardinality at most $S$); i.e., if $A \in \setA$ then $|A| \leq S$. We have the following preliminary result\footnote{Proposition \ref{proposition:elr-mi-conj} restricts to probability distributions such that the reserve price is $H$. It is expected that the loss in efficiency when the reserve price equals $H$ is higher than when it equals $L$. Simulations on randomly generated $\setA$ are consistent with the conjecture.}: \begin{proposition} \label{proposition:elr-mi-conj} Let $X_n$'s be i.i.d. random variables taking values $H$ and $L$, where $H > L > 0$, with probabilities $p$ and $1-p$ respectively.
Let $\setA$ be such that if $A \in \setA$ then $|A| \leq S$, and let $\bs{\pi}^o$ be any optimal allocation rule. Then, \beqn \lim_{p\rightarrow 0}\left(\sup_{H,L:~pH \geq L}\text{ELR}(\bs{\pi}^o)\right) = \frac{S}{S + N}. \eeqn \end{proposition} \begin{proof} The proof is given in Appendix \ref{sec:appendix3}. \end{proof} \subsection{Single item auctions with i.i.d. buyers} \label{sec:elr-single-item} We now consider single item auctions where the $X_n$'s are i.i.d. Each $X_n$ takes $K$ discrete values $\{x^1,x^2,\ldots,x^K\}$, where $0 < x^1 < x^2 < \ldots < x^K$ and $\mathbb{P}\big[X_n = x^i\big] = p^i > 0$ for all $1 \leq i \leq K$. Here,~$\setA$ consists of singletons (and the empty set $\emptyset$) only. An efficient allocation simply awards the item to a buyer with the highest valuation for it. The corresponding MSW is $\E{\max_{n \in \setN} X_n}$. Define $z^i$ as: \beq{max-prob} z^i \triangleq \Prob{\max_{n \in \setN} X_n = x^i} = \bigg(\sum_{j=1}^i p^j\bigg)^N - \bigg(\sum_{j=1}^{i-1}p^j\bigg)^N, \eeq with the notational convention of $\sum_{j=1}^{j=0}(.) = 0$. With this, the MSW is equal to $\sum_{i=1}^K z^ix^i$. An optimal allocation awards the item to a buyer with the highest positive MVV. The MVVs are nondecreasing in the true values but need not be strictly increasing. Ties are broken in favor of a buyer with the highest value for the item. This maximizes the social welfare realized within the set of optimal allocations. Since the $X_n$'s are i.i.d., the reserve prices are the same for every buyer. Hence, the optimal allocation rule $\widetilde{\bs{\pi}}^o$ sets a common reserve price and awards the item to the buyer with the highest valuation for it. The loss in efficiency arises only from not selling the item when the maximum bid of all the buyers is below the common reserve price. Let $\mb{p} \triangleq (p^1,p^2,\ldots, p^K)$ and $\mb{x} \triangleq (x^1,x^2,\ldots,x^K)$.
Let the reserve price be $x^{\thld}$, where $\thld$ is the index corresponding to the reserve price. From \eqref{eq:reserve-price}, \beq{reserve-price-idx} \thld = \bigg\{\text{max} ~ i: ~ i \in \argmax_{k: 1\leq k \leq K} ~ x^k(\sum_{j = k}^K p^j)\bigg\}. \eeq The social welfare realized by the optimal allocation is $\sum_{i=\thld}^K z^ix^i$. Hence, the ELR for single item auctions as a function of $\mb{x}$ and $\mb{p}$ is given by: \beq{elr-si} \elr{N} = \frac{\sum_{i=1}^{\thld-1} z^ix^i}{\sum_{i=1}^K z^ix^i}, \eeq where we use $\elr{N}$ to denote the ELR function defined by \eqref{eq:elr}. This is because $\mb{x}_n$ and~$\mb{p}_n$ are the same for all $n \in \setN$, $\setA$ contains only singletons, and $\widetilde{\bs{\pi}}^o$ is kept fixed in the subsequent discussion. The worst case ELR is given by the following optimization problem: \beq{elr-si-opt-obj} \displaystyle \maximize_{\mb{x},\mb{p}} \quad \elr{N}, \\ \eeq \beq{elr-si-opt-cons} \begin{array}{c} \text{subject to:} \quad p^i > 0 ~\text{for}~ 1\leq i\leq K, ~~ \sum_{i=1}^K p^i = 1, \\ \quad\quad\quad\quad \quad 0 < x^1 < x^2 < \ldots < x^K, ~~ x^K \leq r x^1. \end{array} \eeq The optimum value of the above problem is denoted by $\eta(r,K,N)$. The following proposition shows that the optimization problem defined by \eqref{eq:elr-si-opt-obj} and \eqref{eq:elr-si-opt-cons} can be reduced to a relatively simple optimization problem involving only the common probability vector of the buyers. \begin{proposition} \label{proposition:elr-si-main-thm} Let $\gamma^{*}(r,K,N)$ be the value of the optimization problem given below. \begin{equation*} \maximize_{\mb{p}} \sum_{i=1}^{K-1}\left(\frac{z^i}{\sum_{j=i}^K p^j}\right), \end{equation*} \beq{elr_si_constraint_new} \text{subject to:} ~ p^K = \frac{1}{r}, ~ \sum_{i=1}^K p^i = 1, ~ p^i > 0, ~ 1\leq i\leq K. \eeq Then the worst case ELR for single item auctions is: \beqn \eta(r,K,N) = \frac{\gamma^{*}(r,K,N)}{\frac{r^N - (r-1)^N}{r^{N-1}}+\gamma^{*}(r,K,N)}.
\eeqn \end{proposition} \begin{proof} The proof is given in Appendix~\ref{sec:appendix4}. \end{proof} \begin{corollary} \label{corollary:elr-si-corr-bin} For single item auctions with binary valued i.i.d. buyers, the worst case ELR, denoted by $\eta(r,2,N)$, is given by: \beq{elr-si-binary} \eta(r,2,N) = \frac{1}{\sum_{i=0}^N\left(\frac{r}{r-1}\right)^i} < \frac{1}{N+1}. \eeq Moreover, equality is achieved by letting $r\rightarrow\infty$. \end{corollary} \begin{proof} From Proposition \ref{proposition:elr-si-main-thm} and \eqref{eq:max-prob}, $\gamma^{*}(r,2,N) = (1-1/r)^N$. Thus, \begin{align*} \eta(r,2,N) & = \frac{\gamma^{*}(r,2,N)}{\frac{r^N - (r-1)^N}{r^{N-1}}+\gamma^{*}(r,2,N)} = \frac{(r-1)^N}{r\left(r^N-(r-1)^N\right)+(r-1)^N}, \\ & = \frac{(r-1)^N}{r^{N+1}-(r-1)^{N+1}} = \frac{1}{\sum_{i=0}^N\left(\frac{r}{r-1}\right)^i}. \end{align*} Since $r/(r-1) > 1$, $\eta(r,2,N) < 1/(N+1)$. The second part of the corollary is easy to verify. \end{proof} As a consequence of Proposition \ref{proposition:elr-si-main-thm}, we obtain a closed form expression for the worst case ELR for single buyer case, and lower and upper bounds on the worst case ELR for multiple buyers. \begin{proposition} \label{proposition:elr-si-thm-one-buyer} For the case of only one buyer, the solution to the optimization problem defined in Proposition \ref{proposition:elr-si-main-thm} is given by: \beqn \gamma^{*}(r,K,1) = (K-1)\left(1-r^{\frac{-1}{K-1}}\right). \eeqn Consequently, the worst case ELR, denoted by $\eta(r,K,1)$, is $\eta(r,K,1) = \gamma^{*}(r,K,1)/(1+\gamma^{*}(r,K,1))$. \end{proposition} \begin{proof} The proof is given in Appendix~\ref{sec:appendix5}. \end{proof} \begin{corollary} \label{corollary:elr-si-corr-one-buyer} For a fixed $K$, $\eta(r,K,1) < 1- 1/K$, uniformly over all $r > 1$, and $\lim_{r \rightarrow \infty}\eta(r,K,1) = 1- 1/K$. 
For a fixed $r$, $\eta(r,K,1) \leq \ln(r)/(1+\ln(r))$, uniformly over all positive integers $K$, and $\lim_{K \rightarrow \infty}\eta(r,K,1) = \ln(r)/(1+\ln(r))$. \end{corollary} \begin{proof} The first part follows easily by observing that $\eta(r,K,1)$ is an increasing function of $r$ and by letting $r \rightarrow \infty$. For the second part, notice that for any $a \geq 0$, \beqn \left(\frac{1-r^{-a}}{a}\right) = \left(\frac{1-e^{-a\ln(r)}}{a}\right) \leq \left(\frac{a\ln(r)}{a}\right) = \ln(r). \eeqn Also, notice that $\lim_{a \rightarrow 0} (1-r^{-a})/a = \ln(r)$. Taking $a = 1/(K-1)$ gives the result. \end{proof} \begin{proposition} \label{proposition:elr-si-thm-many-buyers} Define $\gamma^{*}_1(r,K,N)$ and $\gamma^{*}_2(r,K,N)$ as follows: \beqn \gamma^{*}_1(r,K,N) \triangleq N\left[\sum_{i=N}^{\infty}\frac{1}{i}\left(1-\frac{1}{r}\right)^i\left(1-\frac{1}{K-1}\right)^i\right], \eeqn \beqn \gamma^{*}_2(r,K,N) \triangleq N\left[\sum_{i=N}^{\infty}\frac{1}{i}\left(1-\frac{1}{r}\right)^i\right]. \eeqn Then, for auctions with $N$ i.i.d. buyers and $K > 2$, the worst case ELR, denoted by $\eta(r,K,N)$, satisfies: \beqn \frac{\gamma^{*}_1(r,K,N)}{\frac{r^N - (r-1)^N}{r^{N-1}}+\gamma^{*}_1(r,K,N)} \leq \eta(r,K,N) \leq \frac{\gamma^{*}_2(r,K,N)}{\frac{r^N - (r-1)^N}{r^{N-1}}+\gamma^{*}_2(r,K,N)}. \eeqn \end{proposition} \begin{proof} The proof is given in Appendix \ref{sec:appendix6}. \end{proof} \begin{corollary} \label{corollary:elr-si-corr-many_buyers} Bounds given in Proposition \ref{proposition:elr-si-thm-many-buyers} are tight asymptotically as $K\rightarrow \infty$. Moreover, $\lim_{N\rightarrow\infty} \eta(r,K,N) = 0$ and $\lim_{r\rightarrow\infty}\left(\lim_{K\rightarrow\infty} \eta(r,K,N)\right) = 1$. Also, keeping $r$ and $K$ fixed, the worst case ELR goes to zero as $N$ goes to infinity at the rate $O\big((1-1/r)^N\big)$.
\end{corollary} \begin{proof} The first part follows easily from the observation that $\lim_{K\rightarrow \infty}\gamma^{*}_1(r,K,N) = \gamma^{*}_2(r,K,N)$. The third part follows since $\lim_{r\rightarrow\infty}\gamma^{*}_2(r,K,N) = \infty$. For the second part, notice that for any $0 < a < 1$, \beqn N\sum_{i=N}^{\infty}\frac{a^i}{i} < \sum_{i=N}^{\infty}a^i = \frac{a^N}{1-a} \rightarrow 0 ~\mbox{as}~ N\rightarrow \infty. \eeqn From above, $\gamma^{*}_2(r,K,N) \leq (r-1)^N/r^{N-1}$. Hence, $\eta(r,K,N) \leq (1-1/r)^N$. \end{proof} \subsection{Single item auctions with different priors} As described earlier, if the priors are different, the seller might set different reserve prices for different buyers. In addition, he need not always award the item to the buyer with the highest reported value for the item. We first obtain a lower bound on the worst case ELR that is almost the same as the worst case ELR with only one buyer. \begin{proposition} \label{proposition:si-asy-elr} For single item auctions with multiple buyers with different priors, the worst case ELR, denoted by $\eta(r,K,N)$ in \eqref{eq:worst-elr}, satisfies: \beqn \eta(r,K,N) \geq \frac{\gamma^{*}(r,K,1)-\left(1-\frac{1}{r}\right)}{1+\gamma^{*}(r,K,1)}, \eeqn where $\gamma^{*}$ is as defined in Proposition \ref{proposition:elr-si-thm-one-buyer}. \end{proposition} \begin{proof} The proof is given in Appendix \ref{sec:appendix7}. \end{proof} For large values of $r$ and $K$, the lower bound of Proposition \ref{proposition:si-asy-elr} is close to the worst case ELR for the single buyer case given by Proposition \ref{proposition:elr-si-thm-one-buyer}. Moreover, it is independent of $N$ and shows that the ELR does not go below this lower bound even if there is a large number of buyers. The following example shows the worst case ELR computation for a special case of single item auctions with binary valued buyers. \begin{example} \label{example:asy_elr_eg2} Consider $N$ binary valued buyers competing for one item.
Let the value of the item for buyer $n$ be denoted by the random variable $X_n$, taking values $(L, H_n)$ with probabilities $(1-p_n, p_n)$ respectively. Buyers are numbered such that $L < H_1 < H_2 < \ldots < H_N$. For any buyer $n$, the virtual-valuation function satisfies $w_n(H_n) = H_n$ and $w_n(L) < L$. Hence, if there is at least one buyer with value greater than $L$, the optimal auction will allocate the item to the buyer with the highest value. If all buyers have their values equal to $L$, but there is at least one buyer $m$ such that his virtual valuation at $L$ is positive, i.e., $w_m(L) > 0$, then the welfare generated by an optimal auction will be $L$ (irrespective of who gets the item). Thus, a loss in efficiency occurs only when $X_n = L$ and $w_n(L) \leq 0$ for all $n$. Notice that $w_n(L) \leq 0$ is equivalent to $p_n H_n \geq L$. The MSW in this case is: \beqn \mbox{MSW} = \E{\max_{1\leq n \leq N}X_n} = p_N H_N + \sum_{n=1}^{N-1}\left(\prod_{m=n+1}^{N}(1-p_m)p_nH_n\right) + \prod_{n=1}^{N}(1-p_n)L, \eeqn while the loss in the realized social welfare is $\prod_{n=1}^{N}(1-p_n)L$. Hence, the ELR is given by: \begin{align*} \mbox{ELR} & = \frac{\prod_{n=1}^{N}(1-p_n)L}{p_N H_N + \sum_{n=1}^{N-1}\left(\prod_{m=n+1}^{N}(1-p_m)p_nH_n\right) + \prod_{n=1}^{N}(1-p_n)L}, \\ & \leq \frac{\prod_{n=1}^{N}(1-p_n)L}{L + \sum_{n=1}^{N-1}\left(\prod_{m=n+1}^{N}(1-p_m)L\right) + \prod_{n=1}^{N}(1-p_n)L}, \\ & = \frac{1}{1+ \sum_{n=1}^N\left(\prod_{m=1}^n(1-p_m)^{-1}\right)} \leq \frac{1}{N+1}, \end{align*} where the first inequality follows from $p_n H_n \geq L$ for all $n$.
\end{example} \section{Discussion} \label{sec:discussion} \begin{enumerate}[(a)] \item \textit{On tie breaking}: Although we have set up the worst case ELR problem with ties broken in favor of the most efficient allocation among the set of optimal allocations, we can also define the worst case ELR where ties are broken in favor of the least efficient allocation among the set of optimal allocations. The results of Section \ref{sec:elr-mi-bin-val} still hold true. Also, the lower bound on the ELR of Section \ref{sec:elr-single-item} for single item auctions with multiple buyers is still a valid lower bound under this tie breaking. \item \textit{Bounds on information rent}: The expected difference between the revenue that the seller could have extracted if he knew the buyers' types exactly (the same as the MSW) and the revenue collected by an optimal auction under private types is called the \textit{information rent}. Because of the IR constraint, the optimal revenue cannot be larger than the realized social welfare. Hence, the ELR is less than or equal to the ratio of the information rent and the MSW. Also, notice that the proof of Proposition \ref{proposition:elr-mi-bin-main} bounds the worst case ELR by finding an upper bound on the ratio of the information rent and the MSW. \end{enumerate} \section{Conclusions} \label{sec:conclusion} In this work, we highlighted the differences between the objectives of revenue maximization and social welfare maximization. We quantified this as the loss in efficiency in optimal auctions and obtained bounds on it for various cases. A summary of the results is presented in Table~\ref{tab:results-summary}. \begin{table}[ht] \centering \begin{tabular}{ | c | c |} \hline \multicolumn{1}{|c|}{\textbf{Case}} & \multicolumn{1}{|c|}{\textbf{ELR bounds}} \\ \hline $N$ binary valued single-parameter buyers & ELR $\displaystyle \leq \frac{r-1}{2r-1} \leq \frac{1}{2}$ \\[7pt] \hline $N$ binary valued single-parameter i.i.d.
buyers, & \multirow{2}{*}{ELR $\displaystyle \leq \text{min}\left\{\frac{S}{S + N}, \frac{r-1}{2r-1}\right\}$} \\ auction of $S$ identical items, $S \leq N$ & \\ [2pt] \hline Single item auction, $1$ buyer, $K$ discrete values & ELR = $\displaystyle \frac{\gamma^{*}}{1+\gamma^{*}}, ~~ \gamma^{*} = (K-1)\left(1-r^{\frac{-1}{K-1}}\right)$ \\[8pt] \cline{1-2} \multirow{2}{*}{Single item auction, $N$ i.i.d. buyers, $K = 2$} & \multirow{2}{*}{ELR $= \left[\sum_{i=0}^N\left(\frac{r}{r-1}\right)^i\right]^{-1} \leq \frac{1}{N+1}$} \\ & \\[4pt] \cline{1-2} & \\ \multirow{3}{*}{Single item auction, $N$ i.i.d. buyers, $K > 2$} & $\displaystyle \frac{\gamma_1}{1+\gamma_1} \leq \mbox{ELR} \leq \frac{\gamma_2}{1+\gamma_2}$, \\[7pt] & $\gamma_1 = N\left[\sum_{i=N}^{\infty}\frac{1}{i}\left(1-\frac{1}{r}\right)^i\left(1-\frac{1}{K-1}\right)^i\right]$, \\[7pt] & $\gamma_2 = N\left[\sum_{i=N}^{\infty}\frac{1}{i}\left(1-\frac{1}{r}\right)^i\right]$ \\[4pt] \hline \end{tabular} \caption{Efficiency loss in revenue optimal auctions - summary of the results.} \label{tab:results-summary} \end{table} An interesting extension would be to show that even if the private valuations (or types) of buyers can take more than two values, for optimal auctions with single-parameter buyers with independent (not necessarily identically distributed) private values, the worst case loss in efficiency is no worse than that with only one buyer. Another possible extension would be to establish the conjecture that the ELR bound $S/(S + N)$ holds for optimal auctions with binary valued single-parameter i.i.d. buyers, where every possible set of winners has cardinality at most $S$, even though not every set of at most $S$ buyers need be feasible.
\section{Introduction} Visual object tracking, which tracks a specified target in a changing video sequence automatically, is a fundamental problem in many applications such as visual analytics \cite{c5}, automatic driving \cite{c6}, pose estimation \cite{c8}, and so on. On the one hand, a core problem of tracking is how to detect and locate the object accurately in changing scenarios such as illumination variations, scale variations, occlusions, shape deformation, and camera motion \cite{c9, c12}. On the other hand, tracking is a time-critical problem because it is always performed in each frame of a sequence. Therefore, accuracy, robustness and efficiency are the main development directions of recent tracking approaches. \begin{figure}[!tp] \centering \includegraphics[width=1\linewidth]{figure1.pdf} \caption{Comparisons of our approach with three state-of-the-art trackers in changing scenarios. The compared trackers are two recent real-time trackers, SiamFC \cite{c33} and Staple \cite{c26}, and another fully convolutional tracker, FCNT \cite{c36}.} \label{figure1} \end{figure} As a core component of trackers, the appearance model can be divided into generative methods and discriminative methods. In generative models, candidates are searched to minimize reconstruction errors. Representative sparse coding methods \cite{c4,c7} have been exploited for visual tracking. In discriminative models, tracking is regarded as a classification problem that separates foreground from background. Numerous classifiers have been adapted for object tracking, such as structured support vector machines (SVM) \cite{c2}, boosting \cite{c3} and online multiple instance learning \cite{c1}. Recently, significant attention has been paid to discriminative correlation filter (DCF) based methods \cite{c15, c16, c17, c35} for real-time visual tracking. DCF trackers can efficiently train a regressor by exploiting the properties of circular correlation and performing the operations in the Fourier domain.
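To make this concrete, here is a minimal single-channel, MOSSE-style sketch of such Fourier-domain training in NumPy (our own simplification for illustration, not the exact formulation of any of the cited trackers; the function names are ours). The filter is obtained in closed form against a desired response map, and detection reduces to element-wise products of FFTs:

```python
import numpy as np

def train_filter(patches, g, lam=1e-4):
    """Closed-form correlation-filter training in the Fourier domain.

    patches: iterable of HxW training patches; g: HxW desired response
    (e.g. a Gaussian peaked at the target center); lam: regularizer.
    Returns H* = sum_i G conj(F_i) / (sum_i F_i conj(F_i) + lam).
    """
    G = np.fft.fft2(g)
    num = np.zeros_like(G)
    den = np.zeros_like(G)
    for f in patches:
        F = np.fft.fft2(f)
        num += G * np.conj(F)   # conjugate: correlation, not convolution
        den += F * np.conj(F)
    return num / (den + lam)

def detect(H, patch):
    """Response map for a new patch; its peak gives the target location."""
    resp = np.real(np.fft.ifft2(np.fft.fft2(patch) * H))
    return np.unravel_index(np.argmax(resp), resp.shape), resp
```

Since every step is an FFT plus element-wise operations, training and detection cost $O(HW \log HW)$ per frame, which is the source of the $100+$ FPS speeds cited above.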
Thus conventional DCF trackers can perform at more than 100 FPS \cite{c15, c25}, which is significant for real-time applications. Many improvements to DCF tracking approaches have also been proposed, such as SAMF \cite{c35} for scale changes, LCT \cite{c17} for long-term tracking, and SRDCF \cite{c16} to mitigate boundary effects. Better performance is obtained, but the high-speed property of DCF is lost. What is more, all these methods use handcrafted features, which hinder their accuracy and robustness. In recent years, inspired by the success of CNNs in object classification \cite{c18, c19}, detection \cite{c20} and segmentation \cite{c21}, the visual tracking community has started to focus on deep trackers that exploit the strength of CNNs. These deep trackers come from two directions: one is the DCF framework with deep features, which means replacing the handcrafted features with CNN features in DCF trackers \cite{c27, c28}. The other is to design tracking networks and pre-train them, aiming to learn target-specific features for each new video \cite{c29}. Despite their notable performance, all these approaches separate the tracking system into individual components. What is more, most trackers are not designed for real-time applications because of their time-consuming feature extraction and complex optimization details. For example, the speed of the winners of VOT2015 \cite{c10} and VOT2016 \cite{c11} is less than 1 FPS on a GPU. We address these two problems by introducing unified convolutional networks (UCT) to learn the features and perform the tracking process simultaneously. This is an end-to-end and extensible framework for tracking. Specifically, the proposed UCT treats the feature extractor and the tracking process both as convolution operations, resulting in a fully convolutional network architecture. In online tracking, the whole patch can be predicted using the foreground response map by one-pass forward propagation.
Moreover, efficient model updating and scale handling are proposed to ensure real-time tracking speed. \subsection{Contributions} The contributions of this paper can be summarized as follows: 1) We propose unified convolutional networks to learn the convolutional features and perform the tracking process simultaneously. The feature extractor and the tracking process are both treated as convolution operations that can be trained simultaneously. End-to-end training ensures that the learned CNN features are tightly coupled to the tracking process. 2) In online tracking, efficient updating and scale handling strategies are incorporated into the tracking framework. The proposed standard UCT (with ResNet-101) and UCT-Lite (with ZF-Net) can track generic objects at 41 FPS and 154 FPS, respectively, which is of significance for real-time computer vision systems. 3) Extensive experiments are carried out on tracking benchmarks and demonstrate that the proposed tracking algorithm performs favorably against existing state-of-the-art methods in terms of accuracy and speed. Figure~\ref{figure1} shows a comparison to state-of-the-art trackers on three benchmark sequences. \section{Related works} Visual tracking is a significant problem in computer vision systems, and a series of approaches have been successfully proposed for tracking. Since our main contribution is a UCT framework for real-time visual tracking, we give a brief review of three directions closely related to this work: CNN-based trackers, real-time trackers, and fully convolutional networks (FCN). \subsection{On CNN-based trackers} Inspired by the success of CNNs in object recognition \cite{c18, c19, c20}, researchers in the tracking community have started to focus on deep trackers that exploit the strength of CNNs. Since DCF provides an excellent framework for recent tracking research, the first trend is the combination of the DCF framework and CNN features.
In HCF \cite{c27} and HDT \cite{c28}, CNNs are employed to extract features instead of handcrafted features, and final tracking results are obtained by combining hierarchical responses and hedging weak trackers, respectively. DeepSRDCF \cite{c32} exploits shallow CNN features in a spatially regularized DCF framework. Another trend in deep trackers is to design tracking networks and pre-train them, aiming to learn target-specific features and handle the challenges of each new video. MDNet \cite{c29} trains a small-scale network by a multi-domain method, thus separating domain-independent information from domain-specific layers. C-COT \cite{c30} and ECO \cite{c31} employ an implicit interpolation method to solve the learning problem in the continuous spatial domain, where ECO is an improved version of C-COT in performance and speed. These trackers have two major drawbacks: firstly, they can only tune the hyper-parameters heuristically, since feature extraction and the tracking process are separate and the tracking system cannot be trained and performed end-to-end. Secondly, none of these trackers is designed for real-time applications. \subsection{On real-time trackers} Other than accuracy and robustness, the speed of a visual tracker is a crucial factor in many real-world applications. Therefore, a practical tracking approach should be accurate and robust while operating in real-time. Classical real-time trackers, such as NCC \cite{c22} and Mean-shift \cite{c23}, perform tracking using matching. Recently, discriminative correlation filter (DCF) based methods, which efficiently train a regressor by exploiting the properties of circular correlation and performing the operations in the Fourier domain, have drawn attention for real-time visual tracking. Conventional DCF trackers such as MOSSE, CSK and KCF can perform at more than 100 FPS \cite{c24, c25, c15}. Subsequently, a series of trackers that follow the DCF method have been proposed.
In the DSST algorithm, the tracker searches over the scale space for correlation filters to handle the variation of object size. The Staple \cite{c26} tracker combines complementary template and color cues in a ridge regression framework. CFLB \cite{c48} and BACF \cite{c49} mitigate the boundary effects of DCF in the Fourier domain. Nevertheless, all these DCF-based trackers employ handcrafted features, which limits their performance. The recent years have witnessed significant advances in CNN-based real-time tracking approaches. Bertinetto et al. \cite{c33} propose a fully convolutional siamese network (SiamFC) to predict the motion between two frames. The network is trained off-line and evaluated without any fine-tuning. Similarly to SiamFC, in the GOTURN tracker \cite{c34} the motion between successive frames is predicted using a deep regression network. These two trackers are able to perform at 86 FPS and 100 FPS respectively on a GPU because no fine-tuning is performed. On the one hand, their simplicity and fixed-model nature lead to high speed. On the other hand, this also loses the ability to update the appearance model online, which is often critical to account for drastic appearance changes in tracking scenarios. Therefore, there is still room for performance improvement in real-time deep trackers. \subsection{On Fully Convolutional trackers} Fully convolutional networks can efficiently learn to make dense predictions for visual tasks like semantic segmentation and detection as well as tracking. Long et al. \cite{c21} transform fully connected layers into convolutional layers to output a heat map for semantic segmentation. The region proposal network (RPN) in Faster R-CNN \cite{c20} is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. DenseBox \cite{c37} is an end-to-end FCN detection framework that directly predicts bounding boxes and object class confidences over the whole image.
The most related work in the tracking literature is FCNT \cite{c36}, which proposes a two-stream fully convolutional network to capture both general and specific object information for visual tracking. However, its tracking components are still designed independently, so the performance may be impaired. Moreover, FCNT can only perform at 3 FPS on a GPU because of its layer switch mechanism and feature map selection method, which hinders it from real-time applications. Compared with FCNT, our UCT treats the feature extractor and the tracking process in a unified architecture and trains them end-to-end, resulting in a more compact and much faster tracking approach. \section{Unified Convolutional networks for tracking} In this section, the overall architecture of the proposed UCT is introduced first. Afterwards, we detail the formulation of the convolution operations in both training and test stages. \begin{figure*}[thpb] \centering \includegraphics[width=0.8\linewidth]{figure2.pdf} \caption{The overall UCT architecture. The solid lines indicate the online tracking process, while the dashed box and dashed lines indicate off-line training and training on the first frame.} \label{figure2} \end{figure*} \subsection{UCT Architecture} The overall framework of our tracking approach is a unified convolutional architecture (see Figure~\ref{figure2}), which consists of a feature extractor and convolutional layers that perform the tracking process. We adopt two groups of convolutional filters to perform the tracking process, which are trained end-to-end with the feature extractor. Compared to the two-stage approaches adopted in the DCF framework with CNN features \cite{c27, c28, c32}, our end-to-end training pipeline is generally preferable, because the parameters in all components can cooperate to achieve the tracking objective. In Figure~\ref{figure2}, the search window of the current frame is cropped and sent to the unified convolutional networks. The estimated new target position is obtained by finding the maximum value of the response map.
Another separate 1-dimensional convolutional branch is used to estimate the target scale, and model updating is performed if necessary. The solid lines indicate the online tracking process, while the dashed box and dashed lines are involved in off-line training and training on the first frame. Each feature channel in the extracted sample is always multiplied by a Hann window, as described in \cite{c15}. \subsection{Formulation} In the UCT formulation, the aim is to learn a series of convolution filters $f$ from training samples $\{(x_k, y_k)\}_{k=1:t}$. Each sample is extracted by another CNN from an image region. Assuming the sample has spatial size $M \times N$, the output has spatial size $m \times n$ ($m=M / stride_M, n=N / stride_N$). The desired output $y_k$ is a response map which includes a target score for each location in the sample $x_k$. The convolutional response of the filter on sample $x$ is given by \begin{equation} \label{eq1} R(x) = \sum_{l=1}^dx^l*f^l \end{equation} where $x^l$ and $f^l$ are the $l$-th channels of the extracted CNN features and the desired filters, respectively, and $*$ denotes the convolution operation. The filters can be trained by minimizing the $L_2$ loss between the response $R(x_k)$ on sample $x_k$ and the corresponding Gaussian label $y_k$: \begin{equation} \label{eq2} L = {||R(x_k) - y_k||}^2 + \lambda\sum_{l=1}^d{||f^l||}^2 \end{equation} The second term in~(\ref{eq2}) is a regularization with a weight parameter $\lambda$. In the test stage, the trained filters are used to evaluate an image patch centered around the predicted target location. The evaluation is applied in a sliding-window manner and thus can be implemented as a convolution: \begin{equation} \label{eq3} R(z) = \sum_{l=1}^dz^l*f^l \end{equation} where $z$ denotes the feature map extracted around the last target position, including context.
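As a concrete illustration, the multi-channel response of equations~(\ref{eq1}) and (\ref{eq3}) and the loss of equation~(\ref{eq2}) can be sketched in NumPy. The naive convolution loop and all array shapes below are illustrative only, not the paper's actual Caffe implementation:

```python
import numpy as np

def conv2d_same(x, f):
    """Naive 'same' 2-D cross-correlation with zero padding (as in CNN layers).
    x: (M, N) feature channel; f: (h, w) filter with odd h, w."""
    h, w = f.shape
    ph, pw = h // 2, w // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))  # zero padding by default
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + h, j:j + w] * f)
    return out

def response_map(x, f):
    """Eqs. (1)/(3): R = sum over the d channels of x^l * f^l.
    x: (d, M, N) features; f: (d, h, w) filter bank."""
    return sum(conv2d_same(x[l], f[l]) for l in range(x.shape[0]))

def l2_loss(R, y, f, lam):
    """Eq. (2): squared error against the Gaussian label plus L2 regularization."""
    return np.sum((R - y) ** 2) + lam * np.sum(f ** 2)
```

Here a single gradient-descent step on `l2_loss` with respect to `f` corresponds to one training iteration of the tracking convolutions.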
It should be noted that the formulation in our framework is similar to DCF, which solves this ridge regression problem in the frequency domain by circularly shifting the samples. Different from DCF, we adopt gradient descent to solve equation~(\ref{eq2}), resulting in convolution operations. Since the sample $x_k$ is also extracted by a CNN, these convolution operations can naturally be unified in a fully convolutional network. Compared to the DCF framework, our approach has three advantages: firstly, both the feature extraction and the tracking convolutions can be pre-trained simultaneously, while DCF-based trackers can only tune the hyper-parameters heuristically. Secondly, model updating can be performed by SGD, which maintains the long-term memory of the target appearance. Lastly, our framework is much faster than the DCF framework with CNN features. \subsection{Training} Since the objective function defined in equation~(\ref{eq2}) is convex, it is possible to obtain an approximate global optimum via gradient descent with an appropriate learning rate in a limited number of steps. We divide the training process into two periods: off-line training, which encodes prior tracking knowledge, and training on the first frame to adapt to the specific target. In off-line training, the goal is to minimize the loss function in equation~(\ref{eq2}). In tracking, the target position in the last frame is usually not centered in the current cropped patch. So for each image, the training patch centered at the given object is cropped with jittering. The jittering consists of translation and scale jittering, which approximates the variation between adjacent frames during tracking. The cropped patch also includes background information as context. In training, the final response map is obtained from the last convolution layer with a single output channel. The label is generated using a Gaussian function with variances proportional to the width and height of the object.
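Such a Gaussian label map can be sketched as follows; the proportionality constant `sigma_factor` is a hypothetical value chosen for illustration, since the paper does not specify it:

```python
import numpy as np

def gaussian_label(m, n, cy, cx, obj_h, obj_w, sigma_factor=0.1):
    """Gaussian response label on an m x n grid, peaked at the target center
    (cy, cx); the standard deviations are proportional to the object height
    and width. sigma_factor is an assumed constant, not taken from the paper."""
    ys = np.arange(m)[:, None] - cy
    xs = np.arange(n)[None, :] - cx
    sy, sx = sigma_factor * obj_h, sigma_factor * obj_w
    return np.exp(-0.5 * ((ys / sy) ** 2 + (xs / sx) ** 2))
```

The resulting map equals 1 at the target center and decays smoothly toward 0, serving as $y_k$ in equation~(\ref{eq2}).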
Then the $L_2$ loss can be computed and gradient descent can be performed to minimize equation~(\ref{eq2}). In this stage, the overall network consists of a network pre-trained on ImageNet (ResNet-101 in UCT and ZF-Net in UCT-Lite) and the following convolutional filters. The last part of the ResNet or ZF-Net is trained together with the following convolutional filters to encode the prior tracking knowledge, making the extracted features more suitable for tracking. The goal of training on the first frame is to adapt to a specific target. The network architecture follows that in off-line training, while the later convolutional filters are randomly initialized from a zero-mean Gaussian distribution. Only these randomly initialized layers are trained using SGD in the first frame. Off-line training encodes prior tracking knowledge and constitutes a tailored feature extractor. We perform online tracking with and without off-line training to illustrate this effect. In Figure~\ref{figure3}, we show tracking results and the corresponding response maps without and with off-line training. In the left part of Figure~\ref{figure3}, the target singer is moving to the right; the response map with off-line training effectively reflects this translation, while the response map without off-line training is not capable of doing so. Thus the tracker without off-line training misses this critical frame. In the right part of Figure~\ref{figure3}, the target player is occluded by another player; the response map without off-line training becomes fluctuating and the tracking result is affected by the distractor, while the response map with off-line training remains discriminative. The results are somewhat unsurprising, since CNN features trained on ImageNet classification data are expected to have greater invariance to position and to instances of the same class. In contrast, we can obtain features more suitable for tracking by end-to-end off-line training.
\begin{figure}[thpb] \centering \includegraphics[width=1\linewidth]{figure3.pdf} \caption{From left to right: images, response maps without off-line training and response maps with off-line training. Green and red boxes in the images indicate tracking results without and with off-line training, respectively.} \label{figure3} \end{figure} \section{Online tracking} After off-line training and training on the first frame, the learned network is used to perform online tracking by equation~(\ref{eq3}). The estimate of the current target state is obtained by finding the maximum response score. Since we use a fully convolutional network architecture to perform tracking, the whole patch can be predicted using the foreground heat map by one-pass forward propagation, so redundant computation is avoided. In contrast, in \cite{c29} and \cite{c38} the network has to be evaluated $N$ times given $N$ samples cropped from the frame, and the overlap between patches leads to a lot of redundant computation. \subsection{Model update} Most tracking approaches update their model in each frame or at a fixed interval \cite{c15, c25, c27, c30, c31}. However, this strategy may introduce false background information when the tracking is inaccurate or the target is occluded or out of view. In the proposed method, model update is decided by evaluating the tracking results. Specifically, we consider the maximum value of the response map and the distribution of the other response values simultaneously. An ideal response map should have a single peak at the actual target position while the other values are small. On the contrary, the response map fluctuates intensely and includes more peak values in unreliable frames, as shown in Figure~\ref{figure4}. We introduce a novel criterion called the peak-versus-noise ratio (\emph{PNR}) to reveal the distribution of the response map.
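As a preview of the update rule developed in the rest of this subsection, a minimal NumPy sketch of such a gating scheme is given below: the response map is scored, and the model is updated only when both the peak value and the peak-versus-noise ratio exceed their historical averages. The helper names are our own, not the paper's:

```python
import numpy as np

def pnr(R):
    """Peak-versus-noise ratio of a response map: the peak-to-minimum gap
    divided by the mean response excluding the maximum (approximate noise)."""
    R = np.asarray(R, dtype=float)
    r_max, r_min = R.max(), R.min()
    noise = (R.sum() - r_max) / (R.size - 1)
    return (r_max - r_min) / noise

def should_update(R, pnr_history, rmax_history):
    """Gate the model update: require both criteria to exceed their
    historical averages, then record the current values."""
    p, r_max = pnr(R), float(np.max(R))
    ok = p > np.mean(pnr_history) and r_max > np.mean(rmax_history)
    pnr_history.append(p)
    rmax_history.append(r_max)
    return bool(ok)
```

A sharp, low-noise response map yields a large `pnr` score, whereas a fluctuating map produced under occlusion yields a small one, so unreliable frames are skipped.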
The \emph{PNR} is defined as \begin{equation} \label{eq4} \emph{PNR} = \frac{R_{max}-R_{min}}{mean(R{\backslash}R_{max})} \end{equation} where \begin{equation} R_{max} = \max{R(z)} \end{equation} and $R_{min}$ is the corresponding minimum value of the response map. The denominator in equation~(\ref{eq4}) is the mean value of the response map excluding the maximum and approximately measures the noise. The \emph{PNR} criterion becomes larger when the response map has less noise and a sharper peak. Otherwise, the \emph{PNR} criterion falls to a smaller value. We save the \emph{PNR} and $R_{max}$ and calculate their historical average values as thresholds: \begin{equation} \label{eq6} \left\{ \begin{array}{rl} \emph{PNR}_{threshold} =& \frac{\sum_{t=1}^T{PNR}_t}{T} \\ R_{threshold} =& \frac{\sum_{t=1}^T{R}_{max}^t}{T} \end{array} \right. \end{equation} Model update is performed only when both criteria in equation~(\ref{eq6}) are satisfied. The update is one step of SGD with a smaller learning rate than that used in the first frame. Figure~\ref{figure4} illustrates the necessity of the proposed \emph{PNR} criterion by showing tracking results under occlusion. As shown in Figure~\ref{figure4}, updating is still performed if only the $R_{max}$ criterion is used when the target is occluded. The introduced noise results in inaccurate tracking or even failure. The \emph{PNR} criterion significantly decreases in these unreliable frames and thus avoids unwanted updating. \begin{figure}[thpb] \centering \includegraphics[width=1\linewidth]{figure4.pdf} \caption{Updating results of UCT and UCT$\_$No$\_$\emph{PNR} (UCT without the \emph{PNR} criterion). The first row shows frames in which the target is occluded by a distractor. The second row shows the corresponding response maps. $R_{max}$ remains large under occlusion while \emph{PNR} significantly decreases, so unwanted updating is avoided by considering the \emph{PNR} constraint simultaneously.
The red and blue boxes in the last image are the tracking results of UCT and UCT$\_$No$\_$\emph{PNR}, respectively.} \label{figure4} \end{figure} \subsection{Scale estimation} A conventional approach to incorporating scale estimation is to evaluate the appearance model at multiple resolutions by performing an exhaustive scale search \cite{c35}. However, this search strategy is computationally demanding and not suitable for real-time tracking. Inspired by \cite{c45}, we introduce a 1-dimensional convolutional filter branch to estimate the target size, as shown in Figure~\ref{figure2}. This scale filter is applied at an image location to compute response scores in the scale dimension, whose maximum value can be used to estimate the target scale. Learning separate convolutional filters to explicitly handle scale changes in this way is more efficient for real-time tracking. In training and updating of the scale convolutional filters, the sample $x$ is extracted from variable patch sizes centered around the target: \begin{equation} \label{eq7} size(P^n) = a^nW \times a^nH~~~n \in \{-\lfloor\frac{S-1}{2}\rfloor,...,\lfloor\frac{S-1}{2}\rfloor\} \end{equation} where $S$ is the size of the scale convolutional filters, $W$ and $H$ are the current target width and height, and $a$ is the scale factor. At test time, the sample is extracted in the same way after the translation filters have been applied. Then the scale change relative to the previous frame can be obtained by maximizing the response score. Note that scale estimation is performed only when the model updating condition is satisfied. \section{Experiments} Experiments are performed on four challenging tracking datasets: OTB2013 with 50 videos, OTB2015 with 100 videos, VOT2014 with 25 videos and VOT2015 with 60 videos. All trackers are compared using their reported results to ensure a fair comparison. \subsection{Implementation details} We adopt ResNet-101 in the standard UCT and ZF-Net in UCT-Lite as feature extractors, respectively.
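With the values $S=33$ and $a=1.02$ used in our implementation, the scale sampling of equation~(\ref{eq7}) can be sketched as:

```python
import numpy as np

def scale_patch_sizes(W, H, S=33, a=1.02):
    """Eq. (7): candidate patch sizes a^n * (W, H) for
    n in {-(S-1)//2, ..., (S-1)//2} around the current target size W x H."""
    ns = np.arange(S) - (S - 1) // 2
    return [(a ** n * W, a ** n * H) for n in ns]
```

Each of the $S$ patches is resized to a common input size before being fed to the 1-dimensional scale branch; the index of the maximum response gives the estimated scale change.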
In off-line training, the last four layers of the ResNet and the last two layers of the ZF-Net are trained. Our training data comes from UAV123 \cite{c47} and TC128 \cite{c13}, excluding the videos that overlap with the test set. In each frame, a patch is cropped around the ground truth and resized to $224\times224$. The translation and scale jittering are 0.05 and 0.03, respectively. We apply stochastic gradient descent (SGD) with a momentum of 0.9 to train the network and set the weight decay $\lambda$ to 0.005. The model is trained for 30 epochs with a learning rate of $10^{-5}$. In online training on the first frame, SGD is performed for 50 steps with a learning rate of $5\times10^{-7}$ and $\lambda$ is set to 0.01. In online tracking, the model update is performed by one step of SGD with a learning rate of $1\times10^{-7}$. $S$ and $a$ in equation~(\ref{eq7}) are set to 33 and 1.02, respectively. The proposed UCT is implemented in Caffe \cite{c39} with a Matlab wrapper on a PC with an Intel i7 6700 CPU, 48 GB RAM and an Nvidia GTX TITAN X GPU. The code and results will be made publicly available. \subsection{Results on OTB2013} \label{resultsonOTB2013} OTB2013 \cite{c14} contains 50 fully annotated sequences that are collected from commonly used tracking sequences. The evaluation is based on two metrics: the precision plot and the success plot. The precision plot shows the percentage of frames in which the tracking results are within a certain threshold distance of the ground truth. The value at a threshold of 20 pixels is taken as the representative precision score. The success plot shows the ratio of successful frames as the threshold varies from 0 to 1, where a successful frame is one whose overlap is larger than the given threshold. The area under the curve (AUC) of each success plot is used to rank the tracking algorithms. In this experiment, ablation analyses are performed first to illustrate the effectiveness of each proposed component.
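For reference, the OTB evaluation metrics described above can be sketched as follows; the (x, y, w, h) box convention and helper names are our own:

```python
import numpy as np

def center_error(pred, gt):
    """Euclidean distance between the centers of two (x, y, w, h) boxes."""
    return np.hypot(pred[0] + pred[2] / 2 - gt[0] - gt[2] / 2,
                    pred[1] + pred[3] / 2 - gt[1] - gt[3] / 2)

def overlap(pred, gt):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    x1, y1 = max(pred[0], gt[0]), max(pred[1], gt[1])
    x2 = min(pred[0] + pred[2], gt[0] + gt[2])
    y2 = min(pred[1] + pred[3], gt[1] + gt[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    return inter / (pred[2] * pred[3] + gt[2] * gt[3] - inter)

def precision_score(center_errors, thresh=20):
    """Precision plot value: fraction of frames with center error <= thresh."""
    return float(np.mean(np.asarray(center_errors) <= thresh))

def success_auc(overlaps, thresholds=np.linspace(0, 1, 101)):
    """Area under the success plot: mean success rate over overlap thresholds."""
    ov = np.asarray(overlaps)
    return float(np.mean([(ov > t).mean() for t in thresholds]))
```

Applying `precision_score` at the 20-pixel threshold and `success_auc` over per-frame overlaps reproduces the two numbers reported in the plot legends.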
Then we compare our method against the three best trackers presented in OTB2013: Struck \cite{c2}, SCM \cite{c42} and TLD \cite{c43}. We also include recent real-time trackers presented at top conferences and in journals, namely KCF (T-PAMI 2015) \cite{c15}, Siamese-FC (ECCV 2016) \cite{c33}, Staple (CVPR 2016) \cite{c26} and SCT (CVPR 2016) \cite{c34}. Moreover, other recent trackers, HDT (CVPR 2016) \cite{c28}, FCNT (ICCV 2015) \cite{c36}, CNN-SVM (ICML 2015) \cite{c40}, DLSSVM (CVPR 2016) \cite{c41} and HCF (ICCV 2015) \cite{c27}, are also compared; these approaches are not real-time, but most of them run at more than 10 FPS. There are five deep trackers and seven shallow trackers in total. The one-pass evaluation (OPE) is employed to compare these trackers. \begin{figure}[thpb] \centering \includegraphics[width=1.0\linewidth]{figure5.pdf} \caption{ Precision and success plots on OTB2013 \cite{c14}. The numbers in the legend indicate the representative precisions at 20 pixels for the precision plots, and the area-under-curve scores for the success plots.} \label{figure5} \end{figure} \subsubsection{Ablation analyses} To verify the contribution of each component in our algorithm, we implement and evaluate four variations of our approach. Firstly, the effectiveness of our off-line training is tested by comparison without this procedure (UCT$\_$No$\_$Off-line), where the network is only trained on the first frame of a specific sequence. Secondly, the tracking algorithm that updates the model without the \emph{PNR} constraint (UCT$\_$No$\_$\emph{PNR}, depending only on $R_{max}$) is compared with the proposed efficient updating method. The last two additional versions are UCT with a multi-resolution scale search (UCT$\_$MulRes$\_$Scale) and without scale handling (UCT$\_$No$\_$Scale). As shown in Table~\ref{table1}, none of the variations performs as well as our full algorithm (UCT), and each component in our tracking algorithm helps to improve performance.
Specifically, off-line training encodes prior tracking knowledge and constitutes a tailored feature extractor, so UCT outperforms UCT$\_$No$\_$Off-line by a large margin. The proposed \emph{PNR} constraint for model update improves performance as well as speed, since it avoids updating in unreliable frames. Although the exhaustive multi-resolution scale search improves the performance of the tracker, it brings higher computational cost. By contrast, learning separate filters for scale in our approach achieves better performance while being computationally efficient. \begin{table}[t] \centering \begin{tabular}{cccc} \hline \bf Approaches & \bf AUC & \bf Precision20 & \bf Speed (FPS) \\ \hline UCT$\_$No$\_$Off-line & 0.601 & 0.863 & 41 \\ UCT$\_$No$\_$\emph{PNR} & 0.624 & 0.880 & 33 \\ UCT$\_$No$\_$Scale & 0.613 & 0.871 & 51 \\ UCT$\_$MulRes$\_$Scale& 0.629 & 0.893 & 22 \\ UCT & 0.641 & 0.904 & 41 \\ \hline \end{tabular} \centerline {\caption{ Performance on OTB2013 of UCT and its variations}} \label{table1} \end{table} \subsubsection{Comparison with state-of-the-art trackers} We compare our method against the state-of-the-art trackers listed in Section~\ref{resultsonOTB2013}: five deep trackers and seven shallow trackers in total. Figure~\ref{figure5} illustrates the precision and success plots based on center location error and bounding box overlap ratio, respectively. It clearly shows that our algorithm, denoted by UCT, outperforms the state-of-the-art trackers significantly in both measures. In the success plot, our approach obtains an AUC score of 0.641, significantly outperforming SiamFC and HCF by 3.3\% and 3.6\%, respectively. In the precision plot, our approach obtains a score of 0.904, outperforming HCF and HDT by 1.3\% and 1.5\%, respectively. It is worth mentioning that our UCT provides significantly better performance while being 13 times faster than the FCNT tracker.
The top performance can be attributed to the fact that our method encodes prior tracking knowledge through off-line training, so the extracted features are more suitable for the subsequent tracking convolution operations. By contrast, the CNN features in other trackers are pre-trained on a different task and are independent of the tracking process, so the achieved tracking performance may not be optimal. Moreover, the efficient updating and scale handling strategies ensure the robustness and speed of the tracker. Besides the standard UCT, we also implement a lite version of UCT (UCT-Lite) which adopts ZF-Net \cite{c46} and ignores scale changes. As shown in Figure~\ref{figure5}, UCT-Lite obtains a precision score of 0.856 while operating at 154 FPS. Our UCT-Lite approach is much faster than the recent real-time trackers SiamFC and Staple, while significantly outperforming them in precision. \begin{figure}[thpb] \centering \includegraphics[width=1.0\linewidth]{figure6.pdf} \caption{ Precision and success plots on OTB2015 \cite{c9}. The numbers in the legend indicate the representative precisions at 20 pixels for the precision plots, and the area-under-curve scores for the success plots.} \label{figure6} \end{figure} \begin{figure*}[thpb] \centering \includegraphics[width=1\linewidth]{figure7.pdf} \caption{The success plots on OTB2015 \cite{c9} for five challenge attributes: illumination variation, out-of-plane rotation, scale variation, occlusion deformation and background clutter. In the caption of each sub-figure, the number in parentheses denotes the number of video sequences in the corresponding situation.} \label{figure7} \end{figure*} \subsection{Results on OTB2015} OTB2015 \cite{c9} is the extension of OTB2013 and contains 100 video sequences, some of which are more difficult to track. In this experiment, we compare our method against the best tracker presented in OTB2015, Struck \cite{c2}.
Moreover, some recent trackers are also compared, namely KCF (T-PAMI 2015) \cite{c15}, DSST (T-PAMI 2017) \cite{c45}, SiamFC (ECCV 2016) \cite{c33}, Staple (CVPR 2016) \cite{c26}, HDT (CVPR 2016) \cite{c28}, HCF (ICCV 2015) \cite{c27}, FCNT (ICCV 2015) \cite{c36}, DLSSVM (CVPR 2016) \cite{c41} and CNN-SVM (ICML 2015) \cite{c40}. There are five deep trackers and four shallow trackers in total. The one-pass evaluation (OPE) is employed to compare these trackers. Figure~\ref{figure6} illustrates the precision and success plots of the compared trackers. The proposed UCT approach outperforms all the other trackers in terms of both precision score and success score. Specifically, our method achieves a success score of 0.611, which outperforms the SiamFC (0.582) and Staple (0.581) methods by a large margin. Since the proposed tracker adopts a unified convolutional architecture and efficient online tracking strategies, it achieves superior tracking performance at real-time speed. For a detailed performance analysis, we also report the results on various challenge attributes in OTB2015, such as illumination variation, scale changes, occlusion, etc. Figure~\ref{figure7} demonstrates that our tracker effectively handles these challenging situations while other trackers obtain lower scores. Comparisons of our approach with three state-of-the-art trackers in changing scenarios are shown in Figure~\ref{figure1}. \subsection{Results on VOT} The Visual Object Tracking (VOT) challenges are well-known competitions in the tracking community. The VOT challenges have been held several times since 2013 and their results are reported at ICCV or ECCV. In this subsection, we compare our method, UCT, with the entries in VOT2014 \cite{c44} and VOT2015 \cite{c10}. VOT2014 contains 25 sequences with substantial variations.
A tracker is re-initialized whenever tracking fails, and the evaluation module reports both accuracy and robustness, which correspond to the bounding box overlap ratio and the number of failures, respectively. There are two sets of experiments: trackers are initialized with either ground-truth bounding boxes (baseline) or randomly perturbed ones (region noise). The VOT evaluation then provides a ranking analysis based on both the statistical and the practical significance of the performance gap between trackers. We compare our algorithm with the top 7 trackers in the VOT2014 challenge \cite{c44}. Moreover, we add three additional state-of-the-art real-time trackers: GOTURN (ECCV 2016) \cite{c34}, SiamFC (ECCV 2016 Workshop) \cite{c33} and Staple (CVPR 2016) \cite{c26}. \begin{figure}[thpb] \centering \includegraphics[width=0.9\linewidth]{figure8.pdf} \caption{Accuracy and robustness rank plot on VOT2014. The better trackers are located at the upper-right corner.} \label{figure8} \end{figure} As shown in Figure~\ref{figure8}, the proposed UCT is ranked at the top in both accuracy and robustness. With precise re-initializations (baseline), UCT ranks second in both accuracy and robustness, while its overall performance is the best. It is worth mentioning that UCT significantly outperforms the three state-of-the-art real-time trackers in the robustness rank. Similar performance is obtained with imprecise re-initializations, as shown in the region noise experiment results, which implies that our UCT could achieve long-term tracking with a re-detection module. VOT2015 \cite{c10} consists of 60 challenging videos that are automatically selected from a pool of 356 sequences. The trackers in VOT2015 are evaluated by the expected average overlap (EAO) measure, which is the inner product of the empirically estimated average overlap and the typical-sequence-length distribution. The EAO measures the expected no-reset overlap of a tracker run on a short-term sequence.
\begin{figure}[thpb] \centering \includegraphics[width=1\linewidth]{figure9.pdf} \caption{EAO rank plot on VOT2015. The better trackers are located at the right. The ranking of other trackers is consistent with VOT2015.} \label{figure9} \end{figure} Figure~\ref{figure9} illustrates that the proposed UCT ranks seventh in the EAO measure. None of the top six trackers runs in real-time (their speeds are less than 5 EFO). Since UCT employs end-to-end training and efficient updating and scale handling strategies, it achieves a good balance between performance and speed. \section{Conclusions} In this work, we proposed a unified convolutional tracker (UCT) that learns the convolutional features and performs the tracking process simultaneously. In online tracking, efficient updating and scale handling strategies are incorporated into the network. It is worth emphasizing that our proposed algorithm not only performs superiorly, but also runs at a very fast speed, which is significant for real-time applications. Experiments are performed on OTB2013, OTB2015, VOT2014 and VOT2015, and our method achieves state-of-the-art results on these benchmarks compared with other real-time trackers. \section*{Acknowledgment} This work was done when Zheng Zhu was an intern at Horizon Robotics, Inc. This work is supported in part by the National Natural Science Foundation of China under Grant No. 61403378 and 51405484, and in part by the National High Technology Research and Development Program of China under Grant No. 2015AA042307. {\small \bibliographystyle{ieee}
\section{Introduction} The single-particle momentum distribution plays an important role in our understanding of the ground-state properties of quantum many-particle systems~\cite{si.so.89,west.75}. It is defined as the average number of particles with momentum {\bf k} in an N-particle system, $n({\bf k})=\langle \Psi|\sum_\sigma a^\dagger_{{\bf k}\sigma} a^{}_{{\bf k}\sigma} |\Psi \rangle$. Here the normalized $N$-particle state of the system is represented by $|\Psi \rangle$ and $a^\dagger_{{\bf k}\sigma} (a^{}_{{\bf k}\sigma})$ are the creation (annihilation) operators for particles with momentum {\bf k} and spin projection $\sigma$. In real space the one-particle density matrix \begin{equation} \rho({\bf x}_1, {\bf x}^\prime_1) = \int \prod_{i=2}^N d{\bf x}_i \psi^\dagger ({\bf x}_1,{\bf x}_2, ..., {\bf x}_N) \psi({\bf x}^\prime_1,{\bf x}_2, ..., {\bf x}_N) \end{equation} measures the change of the $N$-particle wave function when a particle is moved from ${\bf x}^\prime_1$ to ${\bf x}_1$ while all other particles are fixed. In homogeneous systems this two-point function depends only upon the separation: $ \rho({\bf x}_1, {\bf x}^\prime_1) =\rho(|{\bf x}_1-{\bf x}^\prime_1|)$. Accordingly, the momentum distribution and the one-particle density matrix are related by Fourier transformation: \begin{eqnarray} n({\bf k}) = \int d{\bf x}_1 \int d{\bf x}^\prime_1 e^{i{\bf k}\cdot ({\bf x}_1-{\bf x}^\prime_1)} \rho({\bf x}_1-{\bf x}^\prime_1). \label{nofp} \end{eqnarray} The momentum distribution, eq.~(\ref{nofp}), is determined by a product of two field operators whose short-distance behavior can be calculated exactly using renormalization group methods~\cite{wi.zi.72,al.ci.12,va.ry.12,an.bo.10}. The latter techniques when applied in nuclear physics decouple the low- from the high-momentum degrees of freedom and leave the scattering cross section invariant~\cite{west.75,re.si.85,bjor.69,bjor.67,poli.73,poli.74}. 
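As a concrete illustration of eq.~(\ref{nofp}): for a homogeneous system the double integral reduces to a single Fourier transform of $\rho(r)$ with $r = x_1 - x_1^\prime$. The one-dimensional Gaussian test density matrix in the sketch below is purely our assumption, used only to check the quadrature; it is not part of the paper's calculation.

```python
import numpy as np

# Sketch: n(k) as the Fourier transform of a toy one-particle density
# matrix rho(r) in one dimension (the Gaussian form is an assumption).
r = np.linspace(-20.0, 20.0, 4001)
dr = r[1] - r[0]
rho = np.exp(-r**2 / 2)          # toy density matrix, rho(r) even in r

def n_of_k(k):
    """n(k) ~ int dr e^{ikr} rho(r); the sine part vanishes since rho is even."""
    return np.sum(np.cos(k * r) * rho) * dr
```

For this Gaussian the transform is known in closed form, $\sqrt{2\pi}\,e^{-k^2/2}$, which the quadrature reproduces to high accuracy.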
Furthermore, the nuclear momentum distributions calculated within the impulse approximation~\cite{ch.wi.52,as.wi.52} provide universal scaling laws for the high-momentum tails~\cite{west.75} of one- and two-particle momentum distributions~\cite{al.ci.12,va.co.11}. These tails are the consequence of short-range correlations in the nuclear wave functions~\cite{pi.wi.92}. Renormalization group arguments~\cite{bo.ro.12,ho.ba.13} have also shown that high-momentum tails of momentum distributions factorize into a product between a universal function of momentum, which is determined by two-particle physics, and a factor depending on the low-momentum structure of the many-body state~\cite{an.bo.10}. This observation goes back to Kimball~\cite{kimb.73,kimb.75} who pointed out that when two particles are sufficiently close their interaction dominates, and the two-particle Schr\"odinger equation provides a reasonable starting point to compute quantum mechanical observables from the knowledge of the pair-wave function. Experimental measurements of $n({\bf k})$ involve inelastic scattering processes with energy and momentum transfers larger than the characteristic length scale of the scatterer. They determine the double differential scattering cross section $d^2\sigma/ d\Omega d\omega$ for an infinitesimal solid angle $d\Omega$ and energy $d\omega$ of the scattered particle. The incident energy and the scattering angle are fixed during the experiment, and the scattering cross section is measured as a function of the transferred momentum and energy. The data analysis of the measured scattering cross section generally employs the impulse approximation~\cite{ch.wi.52,as.wi.52}, which assumes that a single particle is struck by the scattering probe, and that the particle recoils freely from the collision. Within the impulse approximation the scattering cross section is proportional to the Compton profile $d^2\sigma/d\Omega d\omega \propto J(k_z)$.
The latter can be calculated directly by integrating the momentum distribution $n({\bf k})$ in a plane perpendicular to the scattering vector $k_z$: $J(k_z) = \int \int n({\bf k}) dk_x \ dk_y$. The proportionality implies that, whenever the measured scattering cross section is modelled within the impulse approximation~\cite{si.so.89,west.75} and is found to be invariant under some scaling transformation, the Compton profile will show the same scaling behavior. In this paper we investigate whether the momentum distribution of a Coulomb system, which yields the Compton profile by integration, also shows scaling behavior. In high energy physics it is well known that the scaling of the scattering cross section is a consequence of confinement (``Bjorken scaling''). Indeed, by assuming the existence of a simple confining potential for two point particles, Elitzur and Susskind~\cite{el.su.72} derived the scaling behavior of the resonance excitations found experimentally in deep inelastic reactions~\cite{bl.gi.70,bjor.69}. Making use of Kimball's observation~\cite{kimb.73,kimb.75} we will therefore compute the momentum distribution of two interacting electrons by numerically solving the two-particle Schr\"odinger equation for a repulsive Coulomb interaction in the presence of a confining potential. In the following we work in atomic units, where the unit of length is $a_0=1 \ \textrm{Bohr} \ (0.529167 \times 10^{-10} \ \textrm{m})$, the unit of mass is the electron mass $m$, and the unit of energy is 1 Hartree (1 $\textrm{Ha} = 27.2113$ eV). To keep the investigation general we consider an algebraic confining potential of the model form $V = \alpha \left|x/a_0\right|^\eta$. The condition $\eta >0$ ensures that the potential produces bound states, and we chose $\alpha=1\, \textrm{Ha}$.
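For an isotropic momentum distribution the double integral defining the Compton profile reduces to a radial one, $J(k_z) = 2\pi \int_{|k_z|}^{\infty} n(k)\, k\, dk$. The quadrature sketch below uses a Gaussian toy distribution, which is our assumption and serves only to check this reduction; it is not the $n(k)$ computed in the paper.

```python
import numpy as np

# Sketch: Compton profile J(k_z) = 2*pi * int_{|k_z|}^inf n(k) k dk
# for an isotropic n(k), evaluated by the composite trapezoidal rule.

def compton_profile(n_of_k, kz, kmax=30.0, npts=20000):
    k = np.linspace(abs(kz), kmax, npts)
    dk = k[1] - k[0]
    f = n_of_k(k) * k
    trapz = (np.sum(f) - 0.5 * (f[0] + f[-1])) * dk
    return 2.0 * np.pi * trapz
```

For the toy choice $n(k) = e^{-k^2}$ the integral is analytic, $J(k_z) = \pi e^{-k_z^2}$, which the routine reproduces.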
We will show that the momentum distribution of a quantum many-particle system interacting via the Coulomb interaction in the presence of a confining potential can be parametrized by a $q$-Gaussian distribution whose parameters are determined by the confining potential. In Sec.~\ref{sec:kimbal} we compute the high-momentum tails of the momentum distribution, eq.~(\ref{nofp}), in the ground state and show that they obey scaling relations. A crossover in the momentum density from an ordinary Gaussian distribution at small momenta to a power-law behavior at larger momenta occurs when the Coulomb potential dominates the confinement. In the crossover region of size $\approx 5\,/a_0$ we find that the shape of the momentum distribution is described by a $q$-Gaussian with $k$-dependent parameters. At large momenta we recover the exact results obtained by renormalization group methods~\cite{al.ci.12,va.ry.12,an.bo.10,bo.ro.12,ho.ba.13}. Using the solutions of the two-particle Schr\"odinger equation of Sec.~\ref{sec:kimbal} we show in Sec.~\ref{sec:susskind} that when the confinement dominates the Coulomb interaction, the transition matrix elements into excited states ($n^{th}$ bound level) due to momentum absorption also obey $q$-Gaussian distributions, and we connect the $q$-parameter to the shape of the confining potential. The $q$-Gaussian used to parametrize the momentum distribution is characterized by parameters which are different from those used to fit the transition probabilities between the bound states. Finally, in Sec.~\ref{sec:Discussion} we relate these results to the recent observation~\cite{se.ap.18} that the Compton profiles of all alkali elements can be collapsed onto a single curve which is described by a $q$-Gaussian.
\section{Ground state: Kimball's approach to the momentum distribution} \label{sec:kimbal} We consider non-relativistic electrons with Coulomb interaction whose Hamiltonian reads~\cite{kimb.73,kimb.75}: \begin{equation}\label{ham} H=-\frac{\hbar^2}{2m}\sum_{i=1}^{N}\nabla_i^2 - \sum_{i=1}^{N} \sum_{I=1}^{N_I} \frac{e^2Z_I}{|{\bf x}_i-{\bf R}_I|} + \sum_{i < j} \frac{e^2}{|{\bf x}_i-{\bf x}_j|} \ . \end{equation} Here ${\bf x}_i$ and ${\bf R}_I$ are the electronic and nuclear coordinates, respectively, $eZ_I$ is the charge of the $I^{th}$ nucleus, and $m$ is the mass of an electron. The first term is the kinetic energy of the electrons, and the remaining two terms represent the Coulomb attraction between the electrons and nuclei, and the Coulomb repulsion between the electrons themselves. The eigenstates of the Hamiltonian (\ref{ham}) are time-independent wave functions $\psi({\bf x}_1,{\bf x}_2, ..., {\bf x}_N)$ which are normalized to the volume of the system. Here periodic boundary conditions are assumed and spin indices are suppressed. Following the idea of Kimball \cite{kimb.73,kimb.75}, we note that the behavior of the wave function at large momenta is determined by its dependence at small distances between two particles. As the distance approaches zero, the dynamics of adjacent particles is dominated by the Coulomb force. Instead of the explicit electron-nucleus attraction we consider an effective confining potential $V(r)$ whose nature or origin we do not further specify, since we merely wish to understand the consequences of the confining potentials, e.g., possible scaling properties. Introducing relative coordinates ${\bf x}={\bf x}_1-{\bf x}_2$ and center-of-mass coordinates ${\bf X}=({\bf x}_1+{\bf x}_2)/2$ for the two particles, as well as the reduced mass $\mu=1/2$ (in atomic units), the Schr\"odinger equation for the relative motion becomes \begin{equation} \left( -\frac{\partial^2 }{\partial x^2} + V(x)+ \frac{1}{x} \right)\psi_n(x) = E_n \psi_n(x) .
\label{eq:model1} \end{equation} \begin{figure}[h] \includegraphics[width =0.45\textwidth]{fig1.PNG} \caption{Probability density $|\psi_n(x)|^2$ of the pair wave functions for $n=0,1,2$ obtained from the solutions of eq.~(\ref{eq:model1}). Red solid line: Confining potential $V(x)$ together with the repulsive Coulomb term $1/x$, which is singular at $x = 0$. The corresponding energies $E_n$ of the ground state and the first excited states are shown on the right. For better visibility the probability densities are separated along the vertical axis.} \label{Fig:psi_n} \end{figure} The solutions to eq.~(\ref{eq:model1}), denoted by $\psi_n(x)$ with corresponding eigenenergies $E_n$, are shown in Fig.~\ref{Fig:psi_n}. The ansatz for the total wave function introduced by Kimball~\cite{kimb.73,kimb.75} separates the dependence on the relative coordinates from that of the center of mass motion \begin{eqnarray}\label{separation} \Psi_n &\equiv& \Psi_n ({\bf x}_1,{\bf x}_2,{\bf x}_3, ..., {\bf x}_N ) \nonumber \\ & = & \Psi_n ({\bf x},{\bf X},{\bf x}_3, ..., {\bf x}_N ) \nonumber \\ &\simeq& \Phi({\bf X},{\bf x}_3, ..., {\bf x}_N )\, \psi_n({\bf x}). \end{eqnarray} The one-particle density matrix for the relative coordinates is then given by \begin{eqnarray} \rho({\bf x}, {\bf x}^\prime) & \simeq& \int d{\bf X} \prod_{i=3}^N d{\bf x}_i \ \Phi^\dagger ({\bf X},{\bf x}_3, ..., {\bf x}_N) \psi^\dagger_n ({\bf x}) \nonumber \\ & & \Phi^{} ({\bf X},{\bf x}_3, ..., {\bf x}_N) \psi^{}_n({\bf x}^\prime) \nonumber \\ & = & \overline{\rho} \ \psi^\dagger_n ({\bf x}) \psi^{}_n ({\bf x}^\prime), \end{eqnarray} where $\overline{\rho}$ represents the integral of the $N$-particle wave function over all coordinates ${\bf x}_i$ and ${\bf X}$.
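The pair wave functions $\psi_n$ and energies $E_n$ of eq.~(\ref{eq:model1}) can be obtained, for instance, with a second-order finite-difference discretization. The grid, box size and Dirichlet boundaries in this sketch are our assumptions; the paper does not state its numerical scheme.

```python
import numpy as np

# Finite-difference sketch of eq. (model1) on x in (0, L]:
#   (-d^2/dx^2 + |x/a0|^eta + 1/x) psi_n = E_n psi_n   (atomic units, alpha = 1 Ha)
# Grid size, box length and Dirichlet boundaries are assumptions of this sketch.

def solve_pair_equation(eta=2.0, L=12.0, n=1000):
    x = np.linspace(L / n, L, n)       # start away from the 1/x singularity
    h = x[1] - x[0]
    V = np.abs(x)**eta + 1.0 / x
    main = 2.0 / h**2 + V              # diagonal of -d^2/dx^2 + V
    off = -np.ones(n - 1) / h**2       # off-diagonals of -d^2/dx^2
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    energies, states = np.linalg.eigh(H)
    return x, energies, states

x, E, psi = solve_pair_equation()
```

`np.linalg.eigh` returns the eigenvalues in ascending order, so `E[0]` is the ground-state energy and the columns of `psi` are the (grid-normalized) eigenfunctions.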
From $\rho({\bf x}, {\bf x}^\prime)$ we obtain the two-particle correlation function $g({\bf x})$, defined as $ g({\bf x}) = \rho({\bf x},{\bf x}) = \overline{\rho} \ \psi^\dagger_n ({\bf x}) \psi^{}_n ({\bf x})$, which is proportional to the probability of finding two particles at a distance ${\bf x}$. The momentum distribution is computed according to eq.~(\ref{nofp}). \begin{figure}[h] \includegraphics[width =0.45\textwidth ]{fig2a.png} \\ \includegraphics[width =0.45\textwidth ]{fig2b.png} \caption{(a) Computed momentum distribution $n(k)$ of electrons in the presence of a confining potential ($\alpha=1\, \textrm{Ha}$, $\eta = 2$). The momentum distribution is fitted by a $q$-Gaussian in the entire momentum region. (b) Dependence of the $q, \beta$-parameters on the average momentum $\bar{k}$. The low momentum region shows an ordinary Gaussian behavior ($q=1$). In the intermediate region ($k_{min}\approx 1.0\,/a_0 < k< k_{max}\approx5.0\,/a_0$) $q$-Gaussian type of fits are possible with momentum dependent parameters. At large momenta the power-law dependence is recovered. \label{nk-k-plot}} \end{figure} Fig.~\ref{nk-k-plot}~(a) shows the momentum dependence of $n(k)$ for the confining potential $V(x)=\alpha \left|x/a_0 \right|^\eta$ with $\eta =2$, where $k$ is the magnitude of $\bf k$. One clearly sees a Gaussian regime at small $k$, which is followed by a crossover into the asymptotic region at large $k$. In order to specify the momentum dependence by a unique functional form we fit the momentum distribution with a $q$-Gaussian~\cite{tsal.88}. To determine the $(q,\beta)$ parameters we collect $k$ values into ``bins'' that are characterized by an average value $\bar{k}$. The latter quantity is computed from the interval in which the fit to the $q$-Gaussian form is performed and which contains at least five points in the low-momentum region and an order of magnitude more (fifty) points in the asymptotic region.
The initial fitting parameters of the $i^{th}$ bin are the final parameters of the $(i-1)^{th}$ $\bar{k}$-bin. For a given bin the same values $q(\bar{k}),\beta(\bar{k})$ parametrise the momentum dependence as: \footnote{In this context we note that the ground-state wave function of a quantum particle in a Coulomb potential has the form of a q-Gaussian in momentum space \cite{vi.pl.12}. Furthermore, the distribution of the energies of all elements of the periodic table was also observed to follow a $q$-Gaussian~\cite{am.za.10}. } \begin{equation} n_{q(\bar{k}),\beta(\bar{k})}(k) = \frac{1}{\sqrt{2}\beta(\bar{k}) C_{q(\bar{k})}} \exp_{q(\bar{k})}\left(-\beta(\bar{k}) k^2\right). \label{ftsal} \end{equation} For arbitrary values of $q$, the $q$-exponential is defined as $\exp_q(x) = \left[1+\left(1-q\right)x\right]^{1/(1-q)}$. In eq.~(\ref{ftsal}) $C_{q(\bar{k})}$ is a normalization constant and $\beta(\bar{k})$ controls the width of the distribution. In Fig.~\ref{nk-k-plot}~(b) we plot the momentum dependence of the ($q,\beta$)-parameters. We see that for low momenta $q(\bar{k})=1$ while $\beta(\bar{k}) = 0.49$. In fact, the $\exp_q$-function becomes the exponential function in the limit of $q \rightarrow 1$, whereby the Gaussian distribution is recovered. The low momentum region corresponds to large distances. In this case the Coulomb interaction is negligible and the solutions become plane waves~\cite{kimb.73,kimb.75}, which form Gaussian wave packets leading to a Gaussian momentum dependence. In the crossover region, i.e., in the range $\bar{k}_{min}\approx 1.0\,/a_0$ to $\bar{k}_{max}\approx 5.0\,/a_0$, a smooth transition between Gaussian and power law behavior is observed in the momentum dependence of the ($q,\beta$) parameters.
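The $q$-exponential entering eq.~(\ref{ftsal}) is straightforward to implement. The sketch below omits the normalization constant $C_q$ and only illustrates the functional form and its two limits: Gaussian behavior for $q\to 1$ and a power-law tail for $q>1$.

```python
import numpy as np

# Sketch of the q-exponential and the (unnormalized) q-Gaussian shape of
# eq. (ftsal); the normalization constant C_q is omitted.

def exp_q(x, q):
    """q-exponential: [1 + (1-q) x]^{1/(1-q)}, with the usual cutoff at 0."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = np.maximum(1.0 + (1.0 - q) * x, 0.0)
    return base ** (1.0 / (1.0 - q))

def q_gaussian_shape(k, q, beta):
    """Unnormalized q-Gaussian exp_q(-beta k^2)."""
    return exp_q(-beta * np.asarray(k, dtype=float)**2, q)
```

For $q>1$ the large-$k$ tail behaves as $k^{2/(1-q)}$, i.e. a power law, matching the asymptotic analysis in the text.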
For large values of $k>k_{max}$ the $q$-Gaussian distribution has a power law dependence $f_{q,\beta}(x)|_{x\rightarrow \infty} \propto x^{2/(1-q)}$, which in our case amounts to constant values of the parameters $q(\bar{k}) = 1.50$ and $\beta(\bar{k}) = 0.06$. Thus, at large momenta the power-law behavior is recovered~\cite{kimb.73,kimb.75}, and the asymptotic behavior of the momentum density agrees with the result obtained by the renormalization group approach~\cite{bo.ro.12,ho.ba.13}. We note that the $q$-Gaussian fits and the corresponding $q(\bar{k}),\beta(\bar{k})$-parameters differ for different values of the confining potentials (values of the parameter $\eta$). Nonetheless, constant values of $q(\bar{k}),\beta(\bar{k})$ are obtained for the low momentum region as well as the asymptotic region. In the following section we will show that the transition matrix elements between the ground state and the $n^{th}$-energy level caused by absorption of a momentum, calculated from the solutions of eq.~\ref{eq:model1}, also obey scaling. This result was previously found by Elitzur and Susskind within a simplified parton model~\cite{el.su.72}. Here we compute the full transition probability and demonstrate that the scaling functions for the maxima of the transition probabilities can also be expressed by $q$-Gaussians. Since transition probabilities are connected to the scattering cross section from which the Compton profiles follow, our calculation proves that the Compton profile also scales, provided that the potential energy due to the confinement dominates the Coulomb interaction. \section{Excited states: Elitzur-Susskind bound state resonances} \label{sec:susskind} Elitzur and Susskind employed a simple confining potential~\cite{el.su.72} to explain the scaling of the resonance excitations in deep inelastic reactions~\cite{bl.gi.70,bjor.69}. 
In their simplified model the probability for transitions between the bound states in the confining potential was computed using the dipole approximation and was shown to be compatible with the scaling of resonant excitations~\cite{el.su.72,elit.71,kogu.73} in deep inelastic reactions. In contrast to Ref.~\cite{el.su.72}, in which the WKB approximation was used, we solve eq.~(\ref{eq:model1}) numerically for a pair of point particles with masses $m_1$, $m_2$ in a potential which is sufficiently deep to bind states. We assume that the momentum $Q$ is absorbed by one of the particles, and the bound pair is lifted to the $n^{th}$-level, at an energy $\nu_n = E_n - E_0$. Using the solutions of eq.~(\ref{eq:model1}) we evaluate the matrix elements $F\left(\nu_n,Q^2\right)$ for the transition into the $n^{th}$ bound level due to the absorption of a momentum ${\bf Q}$. We note that the solutions of eq.~(\ref{eq:model1}) produce bound states~\cite{Maha-2006} irrespective of the sign of the confining potential. The transition probability is the square of the transition matrix element: \begin{equation} T\left(\nu_n,Q^2\right) = | \braket{\psi_{n}|e^{i{\bf Q}\cdot{\bf x}_1}|\psi_0} |^2 =| \braket{\psi_{n}|e^{i{\bf Q}_1{\bf x}}|\psi_0} |^2 \ . \end{equation} Here $\psi_0$ and $\psi_{n}$ describe the ground state and the $n^{th}$ bound state of the confining potential, respectively, with ${\bf Q}_i={\bf Q}\, m_i/(m_1+m_2)$. For electrons $m_1=m_2$, so that ${\bf Q}_1={\bf Q}/2$ is the scaled momentum. For finite $Q^2$ the transition probability $T(\nu_n,Q^2)$ leads to $n$ discrete resonances. The numerical results are presented in Fig.~\ref{graph2}. \begin{figure}[h] \includegraphics[width = .49\textwidth]{fig3.PNG} \caption{Transition probability $T(\nu_n,Q^2)$ computed for the confining potential with $\eta=2$ in the presence of the Coulomb interaction. The scaling direction is seen by projection into the $(\nu_n,Q^2)$ plane.
\label{graph2}} \end{figure} In Ref.~\cite{el.su.72} it was noted that the matching between the phase of $\psi_{n}$ and the exponential factor $e^{i{\bf Q}_1{\bf x}}$ implies a linear relation (``scaling direction'') between the square of the transferred momentum, $Q^2$, and the excitation energy $\nu_n$. The scaling direction therefore represents a line in the $(\nu_n,Q^2)$-plane which we illustrate in the upper part of Fig.~\ref{graph2}. For any other direction in the $(\nu_n,Q^2)$-plane the transition probability decays exponentially (no phase matching). This result was already derived in Ref.~\cite{el.su.72} within the WKB approximation, where the line has slope one. It is interesting to note that this result is not exactly reproduced in the present calculations, where the Coulomb interaction is taken into account. \begin{figure}[htbp] \includegraphics[width = .45\textwidth]{fig4a.PNG} \includegraphics[width = .45\textwidth]{fig4b.png} \caption{(a)~The maxima of the transition probabilities corresponding to each transferred momentum Q are plotted for different confining potential strengths $\eta$. Dots are numerical data, while the red lines show the fit with a $q$-Gaussian distribution. (b) By scaling all maxima the transition probabilities collapse onto a single curve. \label{graph4} } \end{figure} From the fact that the scaling direction is essentially a line in the $(\nu_n,Q^2)$-plane along which the transition probability is maximal, we can identify the ratio $Q^2/{\nu_n} \equiv \gamma$ as a scaling variable. We fitted the maxima along the scaling direction using a $q$-Gaussian form: \begin{equation} T(\nu_n,Q^2):= T_{q,\beta}(Q^2/\nu_n)= \frac{1}{\sqrt{2}\beta C_{q}} \exp_{q}\left(-\beta Q^2/\nu_n \right). \label{Ttsal} \end{equation} In the limit of large momenta we find: \begin{equation} \lim_{Q^2\rightarrow \infty} T_{q,\beta}(Q^2/\nu_n) \bigg|_{\nu_n=Q^2/\gamma} \propto \gamma^{2/(1-q)} \ , \end{equation} i.e. 
the $q$-Gaussian takes the form of a power law, which is characterized by scale invariance. In Fig.~\ref{graph4}(a) we show the $q$-Gaussian fits to the maxima of the transition probability for different confining potentials, i.e., different values of $\eta$. In the limit of large momentum transfer, $Q^2\rightarrow \infty$, we obtain, for all $\eta$ values, power laws along the ``scaling direction" with a particular scaling exponent. Due to the scaling property they are all equivalent up to constant factors. This behavior is presented in Fig.~\ref{graph4}(b) where the $q$-logarithm of the transition probability is plotted against the scaling variable $Q^2/\nu_n$. Here $\ln_q$ is the $q$-analog of the logarithm defined by: $\ln_q x := (x^{1-q}-1)/(1-q)$. This plot is seen to produce an essentially linear relation between the scaled transition probabilities, which collapse onto a single curve. Deviations are due to finite-size effects and numerical precision in the high $Q$-regime. \begin{figure}[htbp] \includegraphics[width = .45\textwidth]{fig5.PNG} \caption{(a)/(b) The dependence of $q$/$\beta$ for different $\eta$ values. The red line is a guide to the eye. \label{graph5}} \end{figure} The $q$ and $\beta$ values for the fits in Fig.~\ref{graph4} are shown in Figs.~\ref{graph5}(a) and (b). We see that $q$ and $\beta$ increase linearly with growing $\eta$ values. With increasing $\eta$ the confining potential becomes steeper as the interparticle distance increases. Therefore one expects that the wave function (eigenfunction of $H$) is more localized, which leads to a slower decay of the transition probabilities at high momenta. \section{Discussion} \label{sec:Discussion} The results presented in this paper were initiated by the question whether, and under what conditions, the momentum distribution of a Coulomb system shows scaling behavior.
Following Kimball's approach~\cite{kimb.73,kimb.75}, we computed the momentum distribution of two interacting electrons by numerically solving the two-particle Schr\"odinger equation for a repulsive Coulomb interaction in the presence of a confining potential. We found that $n({\bf k})$ can be parametrized by a $q$-Gaussian in the entire momentum range. A crossover region connects the low-momentum region, described by an ordinary Gaussian momentum dependence, with the power-law behavior at large momenta. In the confinement dominated (intermediate) momentum region we used the method of Elitzur and Susskind~\cite{el.su.72} and demonstrated that bound-state resonances also show scaling behavior. In particular, we demonstrated that $q$-Gaussians are suitable scaling functions for the maxima of the transition probability. Indeed, the $q$-Gaussian behavior is expected to enter in this investigation since it is the natural mathematical function that can describe fat-tail distributions, whose asymptotic momentum dependence is not exponential but is described, for example, by a power law. Whenever the Coulomb interaction dominates the confining potential (in the large-momentum region) our results recover the exact analytic results obtained by renormalization group techniques~\cite{an.bo.10,bo.ro.12,ho.ba.13}. It would be desirable to gain insight into the numerically derived scaling properties also within an analytic approach. Using density functional theory (DFT)~\cite{ho.ko.64,kohn.99,jo.gu.89,jone.15} in combination with the impulse approximation we recently showed that the Compton profiles of the first-column elements of the periodic table can be collapsed onto a single curve~\cite{se.ap.18} which can be fitted by a $q$-Gaussian with element-specific $(q,\beta)$-parameters. In that study we did not address the questions of why there should be scaling behavior at all, and why the $q$-Gaussian was found to be a suitable scaling function.
In view of the fact that in the electronic band theory of solids the periodic ionic potential provides a natural confining potential, the results of the present paper may provide an explanation of the unexpected scaling behavior of the Compton profiles of the alkali elements~\cite{se.ap.18}. For the application of DFT~\cite{ho.ko.64,kohn.99,jo.gu.89,jone.15} in the framework of the Kohn-Sham ansatz the knowledge of the exchange-correlation functional is crucial. A central quantity is the so-called coupling constant integrated pair-correlation function. It accounts for the contribution of electronic correlations to the kinetic energy~\cite{jo.gu.89} and is the input to the derivation of the gradient corrected functionals~\cite{pe.bu.96a,pe.bu.96b}. The contribution of electronic correlations to the kinetic energy has also been analyzed in momentum space~\cite{go.pe.02}, and the limits of large and small momenta were discussed and found to be in agreement with results of Kimball~\cite{kimb.73,kimb.75}. In fact, according to our investigation, which extends the work of Kimball by including confining potentials, a $q$-Gaussian fit for the momentum density is possible for every value of the momentum, and the somewhat arbitrary decomposition into short- and long-range parts can be avoided. We expect that the concept of scaling of the momentum distribution in terms of a $q$-Gaussian can be further developed such that it actually provides new exchange-correlation functionals. \section{Acknowledgments} Financial support by the Deutsche Forschungsgemeinschaft through TRR80 (project F6), Project number 107745057, and useful discussions with M. Sekania and M. Kollar are gratefully acknowledged.
\section{Historical review and methodology} One of the questions which naturally appears in all algebraic studies is that of the classification (up to isomorphism) of the algebraic objects from some class. One classical example of this kind of result is the classification of the semisimple finite dimensional associative algebras over fields. We also have the well-known classification of the finitely generated abelian groups. Both of these classifications were achieved many years ago. More than a century ago, the classification of the simple Lie algebras over $\mathbb{C}$ was achieved. The classification of the finite simple groups is a newer result of this kind; it required a huge effort by many mathematicians. The natural next step after the classification of the finitely generated abelian groups is the classification of the finitely generated nilpotent class $2$ groups, in particular the classification of the finitely generated torsion free nilpotent class $2$ groups. In the 1970s the research in this area was very active; we can mention \cite{GrSch}, \cite{GrSeSt}, the auxiliary technical work \cite{Sch}, and others. The achievements were summarized in \cite{GrSe}. A full classification up to isomorphism was established only for finitely generated torsion free nilpotent class $2$ groups of Hirsch length $6$. Such modest results indicate the difficulty of the problem. This difficulty stems, as will be seen from our survey, first of all from the difficulty of the wild matrix problem. Also in the 1970s, the problem of classification of the nilpotent class $2$ $p$-groups up to isomorphism was considered in \cite{Serg1}. It was proved that this problem can be reduced to the wild matrix problem even when the rank of the center of the groups is equal to $2$.
It is known that for every nilpotent torsion free group $G$ there is a Maltsev completion $\sqrt{G}$: the minimal group such that $G\subset \sqrt{G}$ and for every $x\in \sqrt{G}$ and every $n\in \mathbb{N}$ there exists $x^{\frac{1}{n}}\in \sqrt{G}$ such that $\left( x^{\frac{1}{n}}\right) ^{n}=x$. The element $x^{\frac{1}{n}}\in \sqrt{G}$ is uniquely determined by $x\in \sqrt{G}$ and $n\in \mathbb{N}$. The group $\sqrt{G}$ is nilpotent of the same nilpotency class as $G$. It is clear that if two nilpotent torsion free groups are isomorphic, then their Maltsev completions are isomorphic too. The converse, of course, is false. So the classification of the complete (in the Maltsev sense) nilpotent torsion free groups of finite rank up to isomorphism is a simpler problem than the classification up to isomorphism of arbitrary finitely generated nilpotent class $2$ groups. But even for this simpler problem a solution is known (see \cite{GrSe}) only in the case when the rank of the center of the groups is not greater than $2$. Since the problem of classification up to isomorphism is so complicated, it is natural to consider a less delicate classification. The notion of geometric equivalence of universal algebras (see \cite{Pl1}) was introduced in 1995 by B. Plotkin. By \cite{Pl2}, two finitely generated universal algebras $A_{1}$ and $A_{2}$ from some variety $\Theta $ are geometrically equivalent if and only if the first one can be embedded into some direct power of the second and vice versa (we denote $A_{1}\sim A_{2}$). So the classification up to geometric equivalence is less delicate than the classification up to isomorphism.
Classification of nilpotent groups up to geometric equivalence is especially interesting because, in the case of nilpotent groups, geometric equivalence is closely connected with logical properties of groups: two nilpotent groups are geometrically equivalent if and only if they have the same quasi-identities (see \cite{Ts}). So the classification of nilpotent groups up to geometric equivalence is equivalent to the classification of the quasi-varieties generated by a single nilpotent group. The classification of the finitely generated abelian groups up to geometric equivalence was achieved in \cite{Be}. It was proved that two abelian groups are geometrically equivalent if and only if for every prime number $p$ the exponents of their corresponding $p$-Sylow subgroups coincide, and if one of these groups is not periodic, then the second group is not periodic either. So the classification of finitely generated abelian groups up to geometric equivalence is, in principle, simpler than the classification of these groups up to isomorphism. The classification of the torsion free abelian groups up to geometric equivalence is trivial: all these groups are geometrically equivalent. In the case of the nilpotent class $2$ groups we have a different situation. Even the classification of the finitely generated torsion free nilpotent class $2$ groups up to geometric equivalence is a very complicated problem. By \cite[Theorem 1]{Ts} every finitely generated torsion free nilpotent class $2$ group is geometrically equivalent to its Maltsev completion. So for resolving our problem it is enough to classify up to geometric equivalence the complete torsion free nilpotent class $2$ groups of finite rank. It is well known (see for example \cite[Chapter 8]{Ba}) that in every nilpotent Lie $\mathbb{Q}$-algebra $L$ we can define a multiplication by the Campbell-Hausdorff formula. With this multiplication $L$ becomes a group, which we denote $L^{\circ }$.
The group $L^{\circ }$ is torsion free and complete, and it has the same nilpotency class as the algebra $L$. Conversely, for every complete nilpotent torsion free group $A$ there is a nilpotent Lie $\mathbb{Q}$-algebra $L$ such that $A\cong L^{\circ }$; the algebra $L$ has the same nilpotency class as the group $A$. The homomorphisms (epimorphisms, monomorphisms, isomorphisms) of the nilpotent Lie $\mathbb{Q}$-algebras coincide with the homomorphisms (epimorphisms, monomorphisms, isomorphisms) of the corresponding groups and vice versa. In other words, the functor $\Gamma :L\rightarrow L^{\circ },\left( \lambda :L_{1}\rightarrow L_{2}\right) \rightarrow \left( \lambda :L_{1}^{\circ }\rightarrow L_{2}^{\circ }\right) $ provides an isomorphism from the category of the nilpotent class $s$ Lie $\mathbb{Q}$-algebras to the category of the nilpotent class $s$ torsion free complete groups. Under this isomorphism, finite dimensional Lie $\mathbb{Q}$-algebras correspond to nilpotent groups of finite rank and vice versa. So two complete nilpotent torsion free finite rank groups $A_{1}=L_{1}^{\circ }$ and $A_{2}=L_{2}^{\circ }$ are isomorphic if and only if the Lie $\mathbb{Q}$-algebras $L_{1}$ and $L_{2}$ are isomorphic. And two complete nilpotent torsion free finite rank groups $A_{1}=L_{1}^{\circ }$ and $A_{2}=L_{2}^{\circ }$ are geometrically equivalent if and only if the Lie $\mathbb{Q}$-algebras $L_{1}$ and $L_{2}$ are geometrically equivalent, i.e., as stated above, if and only if the algebra $L_{1}$ can be embedded into some direct power of the algebra $L_{2}$ and vice versa. So for studying our problem we can concentrate on the geometric equivalence of the finite dimensional nilpotent class $2$ Lie $\mathbb{Q}$-algebras.
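For class $2$ the Campbell-Hausdorff formula truncates to $x\circ y = x + y + \frac{1}{2}[x,y]$. The following toy computation over $\mathbb{Q}$ (exact rational arithmetic) on the three-dimensional Heisenberg algebra, with basis $e_1,e_2,e_3$ and $[e_1,e_2]=e_3$, is our own illustration, not part of the paper; it checks that the truncated product indeed defines a complete group.

```python
from fractions import Fraction

# Class-2 Campbell-Hausdorff product x ∘ y = x + y + (1/2)[x, y] on the
# 3-dimensional Heisenberg Lie Q-algebra: coordinates (a, b, c) in the
# basis e1, e2, e3 with [e1, e2] = e3 and center Q·e3.  Toy illustration.

def bracket(x, y):
    a1, b1, _ = x
    a2, b2, _ = y
    return (Fraction(0), Fraction(0), a1 * b2 - a2 * b1)

def bch(x, y):
    """Truncated Campbell-Hausdorff multiplication of L°."""
    z = bracket(x, y)
    return tuple(xi + yi + Fraction(1, 2) * zi for xi, yi, zi in zip(x, y, z))

def inv(x):
    """Group inverse: the bracket of x with -x vanishes, so inv(x) = -x."""
    return tuple(-xi for xi in x)

def root(x, n):
    """n-th root x^{1/n} in the Maltsev sense: simply (1/n)·x here."""
    return tuple(xi / n for xi in x)
```

The existence of `root` for every `n` illustrates why groups of the form $L^{\circ}$ are complete in the Maltsev sense.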
It will be proved in this paper that the problem of classification of finite dimensional nilpotent class $2$ Lie $\mathbb{Q}$-algebras up to geometric equivalence is equivalent to the problem of classification of these algebras up to isomorphism. It means that the problem of classification of nilpotent class $2$ finite rank torsion free complete groups up to geometric equivalence is equivalent to the problem of classification of these groups up to isomorphism. The problem of classification of finite dimensional nilpotent class $2$ Lie algebras over an algebraically closed field up to isomorphism was considered in \cite{Serg2} and \cite{BLS}. In \cite{Serg2} this problem was resolved when the dimension of the center of the algebra is not greater than $2$. In \cite{BLS} it was proved that the problem of classification of finite dimensional nilpotent class $2$ Lie algebras over an algebraically closed field up to isomorphism, when the dimension of the center of the algebra is greater than $2$, is equivalent to a wild problem. In \cite{Be} it was proved that if $A_{1}\sim A_{2}$ and $B_{1}\sim B_{2}$ (where $A_{1},A_{2},B_{1},B_{2}$ are arbitrary universal algebras from some variety $\Theta $) then $A_{1}\oplus B_{1}\sim A_{2}\oplus B_{2}$. So for the classification up to geometric equivalence it is enough to consider the algebras that cannot be decomposed into a direct sum. In our situation we can consider only Lie $\mathbb{Q}$-algebras $L$ which fulfill \begin{equation} \left[ L,L\right] =Z\left( L\right) , \label{com_cond} \end{equation} where $Z\left( L\right) $ is the center of the algebra $L$.
A Lie $\mathbb{Q}$-algebra $L$ which fulfills this condition can be considered as the direct sum of the $\mathbb{Q}$-linear spaces $L=V\oplus W$ (from this point on we consider direct sums only in the category of $\mathbb{Q}$-linear spaces), where $W=Z\left( L\right) $ and $V\cong L/Z\left( L\right) $. The Lie brackets in $L$ define the skew symmetric nonsingular bilinear mapping $\omega _{L}:V\times V\ni \left( v_{1},v_{2}\right) \rightarrow \left[ v_{1},v_{2}\right] \in W$. (For an arbitrary skew symmetric bilinear mapping $\omega :V\times V\rightarrow W$ we denote $\ker \omega =\left\{ x\in V\mid \forall v\in V\left( \omega \left( x,v\right) =0\right) \right\} $ and we say that $\omega $ is singular if $\ker \omega \neq \left\{ 0\right\} $; the other skew symmetric bilinear mappings we call nonsingular.) Conversely, if we have two $\mathbb{Q}$-linear spaces $V$ and $W$ and a skew symmetric bilinear mapping $\omega :V\times V\rightarrow W$, then on the direct sum $L=V\oplus W$ we can define the Lie brackets $\left[ v_{1}+w_{1},v_{2}+w_{2}\right] =\omega \left( v_{1},v_{2}\right) $ ($v_{1},v_{2}\in V$, $w_{1},w_{2}\in W$). If $\omega $ is nonsingular, then $Z\left( L\right) =W$ and condition (\ref{com_cond}) is equivalent to the condition \begin{equation} \omega \left( V,V\right) =W. \label{im_cond} \end{equation} If $L=V\oplus W$ and $\dim V=n$, $\dim W=m$, $\left\{ v_{1},\ldots ,v_{n}\right\} $ is a basis of $V$ and $\left\{ w_{1},\ldots ,w_{m}\right\} $ is a basis of $W$, then the skew symmetric bilinear mapping $\omega $ is defined by $m$ skew symmetric matrices of size $n\times n$: $A^{\left( 1\right) },\ldots ,A^{\left( m\right) }$, such that $\left[ v_{i},v_{j}\right] =\sum\limits_{k=1}^{m}a_{i,j}^{\left( k\right) }w_{k}$ ($1\leq i,j\leq n$).
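To make the matrix description concrete, here is a small sketch in Python (the names and the choice $n=2$, $m=1$ with the matrix $A^{(1)}$ are ours, purely for illustration). It realizes the bracket $[v_i,v_j]=\sum_k a_{ij}^{(k)}w_k$ on $L=V\oplus W$ and checks skew symmetry together with the Jacobi identity, which holds automatically in class $2$ because every bracket lands in the center $W$.

```python
# Toy realization of a class 2 bracket on L = V (+) W from skew symmetric
# matrices A^{(k)}; n = dim V, m = dim W are illustrative choices.
n, m = 2, 1
# One skew symmetric 2x2 matrix A^{(1)}: the 1-dimensional symplectic form.
A = [[[0, 1], [-1, 0]]]  # A[k][i][j] = a_{ij}^{(k)}

def bracket(x, y):
    """Bracket of x = (v, w), y = (v', w') in L = V (+) W; the W parts
    are central, so only the V parts contribute."""
    v, _ = x
    vp, _ = y
    w = [sum(A[k][i][j] * v[i] * vp[j] for i in range(n) for j in range(n))
         for k in range(m)]
    return ([0.0] * n, w)  # the result lies in the center W

def add(a, b):
    return ([p + q for p, q in zip(a[0], b[0])],
            [p + q for p, q in zip(a[1], b[1])])

x = ([1.0, 2.0], [0.0])
y = ([3.0, 5.0], [0.0])
z = ([7.0, 1.0], [0.0])

# Skew symmetry: [x, y] = -[y, x].
assert bracket(x, y)[1][0] == -bracket(y, x)[1][0]
# Jacobi: [[x,y],z] + [[y,z],x] + [[z,x],y] = 0; each summand already
# vanishes, because [x, y] lies in W = Z(L).
j = add(add(bracket(bracket(x, y), z), bracket(bracket(y, z), x)),
        bracket(bracket(z, x), y))
assert all(abs(c) < 1e-12 for c in j[0] + j[1])
print(bracket(x, y)[1][0])  # prints -1.0, i.e. omega(x, y) = 1*5 - 2*3
```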
There is a homomorphism of Lie algebras with condition (\ref{com_cond}), $\lambda :L=V_{L}\oplus W_{L}\rightarrow S=V_{S}\oplus W_{S}$, if and only if there is a pair of linear mappings $\left( \varphi ,\psi \right) $ such that $\varphi :V_{L}\rightarrow V_{S}$, $\psi :W_{L}\rightarrow W_{S}$ and $\omega _{S}\left( \varphi \left( v_{1}\right) ,\varphi \left( v_{2}\right) \right) =\psi \omega _{L}\left( v_{1},v_{2}\right) $ for every $v_{1},v_{2}\in V_{L}$; in this case $\lambda =\varphi \oplus \psi $ holds. By using this approach in \cite{Ts} (and by using the isomorphism from the category of nilpotent class $2$ Lie $\mathbb{Q}$-algebras to the category of nilpotent class $2$ torsion free complete groups) the problem of classification of nilpotent class $2$ finite rank torsion free groups whose centers have rank no more than $2$, up to geometric equivalence, was thoroughly studied. Two theorems were proved: \begin{enumerate} \item Theorem 3. Two nilpotent torsion free class $2$ finitely generated groups $G_{1}$ and $G_{2}$ with cyclic centers are geometrically equivalent ($G_{1}\sim G_{2}$) if and only if their Maltsev completions are isomorphic: $\sqrt{G_{1}}\cong \sqrt{G_{2}}$. \item Theorem 4. Let $G_{1},G_{2}$ be two nilpotent torsion free class $2$ finitely generated groups whose centers have rank $2$. Then $G_{1}\sim G_{2}$ if and only if either there is a nilpotent torsion free class $2$ finitely generated group $N$ with cyclic center such that $G_{1}\sim N\sim G_{2}$, or $\sqrt{G_{1}}\cong \sqrt{G_{2}}$. \end{enumerate} Also Proposition 1 and Proposition 2 from \cite{Ts}, which are formulated in the language of properties of skew symmetric bilinear forms, provide us with tools for deciding when, for two nilpotent torsion free class $2$ finitely generated groups $G_{1}$ and $G_{2}$ whose centers have rank $2$, the first or the second condition of Theorem 4 is fulfilled.
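The compatibility condition $\omega _{S}\left( \varphi \left( v_{1}\right) ,\varphi \left( v_{2}\right) \right) =\psi \omega _{L}\left( v_{1},v_{2}\right) $ can be checked numerically in a toy case. The following sketch (our own illustration, not taken from \cite{Ts}) takes $\dim V=2$, $\dim W=1$ with $\omega (v,v')=v_{1}v'_{2}-v_{2}v'_{1}$; then for any linear $\varphi $ on $V$ the mapping $\psi $ acting on $W=\mathbb{Q}$ as multiplication by $\det \varphi $ makes the pair $\left( \varphi ,\psi \right) $ compatible, so $\lambda =\varphi \oplus \psi $ is an endomorphism.

```python
# Toy check of omega(phi(v1), phi(v2)) = psi(omega(v1, v2)) for the
# 1-dimensional symplectic form on V = Q^2 with W = Q.
def omega(v, vp):
    return v[0] * vp[1] - v[1] * vp[0]

phi = [[2, 1], [1, 1]]          # an invertible 2x2 matrix over Q
det_phi = phi[0][0] * phi[1][1] - phi[0][1] * phi[1][0]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v1, v2 = [3, -1], [4, 7]
lhs = omega(apply(phi, v1), apply(phi, v2))
rhs = det_phi * omega(v1, v2)   # psi acts on W = Q as det(phi)
assert lhs == rhs
print(lhs, rhs)  # prints 25 25
```

The design choice here reflects the well-known fact that a $2\times 2$ determinant is exactly the scaling factor of the standard symplectic form.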
\section{New results.} Below the word ``algebra'' means: nilpotent class $2$ finite dimensional Lie $\mathbb{Q}$-algebra. \begin{definition} \label{decomp}We say that an algebra is \textit{geometrically decomposable} if it is geometrically equivalent to the direct product of some of its nontrivial subalgebras. The other algebras we call \textit{geometrically indecomposable}. \end{definition} \begin{proposition} \label{equivalent_isom}\textit{If two geometrically indecomposable algebras are geometrically equivalent, then they are isomorphic.} \end{proposition} \begin{proof} We assume that $L$ and $S$ are geometrically indecomposable and $L$ is geometrically equivalent to $S$. There is a family of homomorphisms $\left\{ \lambda _{i}:L\rightarrow S\mid i\in I\right\} $ such that $\bigcap\limits_{i\in I}\ker \lambda _{i}=\left\{ 0\right\} $. There also exists a family of homomorphisms $\left\{ \sigma _{j}:S\rightarrow L\mid j\in J\right\} $ such that $\bigcap\limits_{j\in J}\ker \sigma _{j}=\left\{ 0\right\} $. If $L=\left\{ 0\right\} $ then $S=\left\{ 0\right\} $ and vice versa, so we can assume that $L,S\neq \left\{ 0\right\} $. Suppose that $\ker \lambda _{i}\neq \left\{ 0\right\} $ for every $i\in I$. We consider the family of endomorphisms $\left\{ \sigma _{j}\lambda _{i}:L\rightarrow L\mid j\in J,i\in I\right\} $. Since $\bigcap\limits_{\substack{ j\in J \\ i\in I}}\ker \sigma _{j}\lambda _{i}=\left\{ 0\right\} $, there exists an embedding $L\hookrightarrow \prod\limits_{\substack{ j\in J \\ i\in I}}\mathrm{im}\,\sigma _{j}\lambda _{i}$. As $\ker \sigma _{j}\lambda _{i}\supset \ker \lambda _{i}\neq \left\{ 0\right\} $, by reason of dimensions $\mathrm{im}\,\sigma _{j}\lambda _{i}$ is not equal to $L$. We have that $L$ is geometrically equivalent to $\prod\limits_{\substack{ j\in J \\ i\in I}}\mathrm{im}\,\sigma _{j}\lambda _{i}$. If $L\neq \left\{ 0\right\} $ then there is a nonzero factor in this product, so $L$ is geometrically decomposable. This is a contradiction.
By symmetry we reach a contradiction when we assume that $\ker \sigma _{j}\neq \left\{ 0\right\} $ for every $j\in J$. So there exist $i\in I$ such that $\ker \lambda _{i}=\left\{ 0\right\} $ and $j\in J$ such that $\ker \sigma _{j}=\left\{ 0\right\} $. By reason of dimensions, $L$ and $S$ are isomorphic. \end{proof} \begin{proposition} \label{decompos_cond}\textit{If the algebra }$L=V\oplus W$\textit{\ is geometrically decomposable, then there exists a family of linear mappings }$\left\{ \psi _{i}:W\rightarrow W\mid i\in I\right\} $\textit{\ such that all skew symmetric bilinear mappings }$\psi _{i}\omega $\textit{\ are singular and }$\bigcap\limits_{i\in I}\ker \psi _{i}=\left\{ 0\right\} $ (\textit{here} $\omega :V\times V\rightarrow W$ \textit{is the skew symmetric bilinear mapping defined by the Lie brackets of the algebra} $L$)\textit{.} \end{proposition} \begin{proof} Let the algebra $L$ be geometrically decomposable. Then $L$ is geometrically equivalent to $\prod\limits_{i\in I_{0}}L_{i}$, where the $L_{i}$ ($i\in I_{0}$) are nontrivial subalgebras of $L$. So there exists an embedding $L\hookrightarrow \left( \prod\limits_{i\in I_{0}}L_{i}\right) ^{J}$. But we can write $\left( \prod\limits_{i\in I_{0}}L_{i}\right) ^{J}=\prod\limits_{i\in I}L_{i}$, where $I=I_{0}\times J$ and the $L_{i}$ ($i\in I$) are also nontrivial subalgebras of $L$. By the Remak theorem there exists a family of endomorphisms $\left\{ \widetilde{\lambda }_{i}:L\rightarrow L_{i}\mid i\in I\right\} $ such that $\bigcap\limits_{i\in I}\ker \widetilde{\lambda }_{i}=\left\{ 0\right\} $. Denote by $\iota _{i}:L_{i}\hookrightarrow L$ the embedding and set $\lambda _{i}=\iota _{i}\widetilde{\lambda }_{i}$. We have $\lambda _{i}:L\rightarrow L$ and $\bigcap\limits_{i\in I}\ker \lambda _{i}=\left\{ 0\right\} $. Since $\mathrm{im}\,\lambda _{i}\leq L_{i}$ and $L_{i}$ is a nontrivial subalgebra of $L$, we have $\dim \mathrm{im}\,\lambda _{i}<\dim L$ for every $i\in I$.
By reason of dimensions, $\ker \lambda _{i}\neq \left\{ 0\right\} $ for every $i\in I$. Let $\lambda _{i}=\varphi _{i}\oplus \psi _{i}$, where $\varphi _{i}:V\rightarrow V$, $\psi _{i}:W\rightarrow W$. If $\ker \psi _{i}=\left\{ 0\right\} $ then $\ker \lambda _{i}=\left\{ 0\right\} $ (the intersection of a nontrivial normal subgroup of a nilpotent group with the center of the group is nontrivial, see \cite[16.2.5]{KM}; the similar statement for nilpotent Lie algebras is easily proved). If $\ker \varphi _{i}=\left\{ 0\right\} $ then $\dim \psi _{i}\left( W\right) =\dim \psi _{i}\omega \left( V,V\right) =\dim \omega \left( \varphi _{i}\left( V\right) ,\varphi _{i}\left( V\right) \right) =\dim \omega \left( V,V\right) =\dim W$. Hence $\ker \psi _{i}=\left\{ 0\right\} $, and therefore $\ker \lambda _{i}=\left\{ 0\right\} $. This is a contradiction. So, if $\ker \lambda _{i}\neq \left\{ 0\right\} $, then $\ker \psi _{i}\neq \left\{ 0\right\} $ and $\ker \varphi _{i}\neq \left\{ 0\right\} $. If $x\in \ker \varphi _{i}$, then for every $v\in V$ we have $\psi _{i}\omega \left( x,v\right) =\omega \left( \varphi _{i}\left( x\right) ,\varphi _{i}\left( v\right) \right) =0$. So $x\in \ker \psi _{i}\omega $, i.e., the skew symmetric bilinear mapping $\psi _{i}\omega $ is singular. Finally, $\bigcap\limits_{i\in I}\ker \psi _{i}\subset \bigcap\limits_{i\in I}\ker \lambda _{i}=\left\{ 0\right\} $. \end{proof} Now for every algebra $L$ we will construct a specific geometrically indecomposable algebra $E\left( L\right) $ such that $L\subset E\left( L\right) $. Let $L=V\oplus W$, let $\left\{ v_{1},\ldots ,v_{n}\right\} $ be a basis of $V$, and let $\left\{ w_{1},\ldots ,w_{m}\right\} $ be a basis of $W$.
Then we construct $E\left( L\right) $ in the following way: $Z\left( E\left( L\right) \right) =W\oplus T$, where $T=\mathrm{Sp}\left\{ t\right\} $ is a $1$-dimensional $\mathbb{Q}$-linear space, and $E\left( L\right) /Z\left( E\left( L\right) \right) \cong U\oplus V$, where $U$ is an $n$-dimensional $\mathbb{Q}$-linear space with the basis $\left\{ u_{1},\ldots ,u_{n}\right\} $. If the skew symmetric bilinear mapping $\omega _{L}$ is defined by the $m$ skew symmetric matrices of size $n\times n$ \begin{equation*} A^{\left( 1\right) },\ldots ,A^{\left( m\right) }, \end{equation*} then the skew symmetric bilinear mapping $\omega _{E\left( L\right) }$ is defined by $m+1$ skew symmetric matrices of size $2n\times 2n$: \begin{equation*} \left( \begin{array}{cc} 0 & I_{n} \\ -I_{n} & 0 \end{array} \right) ,\left( \begin{array}{cc} 0 & 0 \\ 0 & A^{\left( 1\right) } \end{array} \right) ,\ldots ,\left( \begin{array}{cc} 0 & 0 \\ 0 & A^{\left( m\right) } \end{array} \right) , \end{equation*} i.e., \begin{equation} \left[ u_{i},u_{j}\right] =0,\left[ u_{i},v_{j}\right] =-\left[ v_{j},u_{i}\right] =\delta _{ij}t\text{ (}1\leq i,j\leq n\text{).} \label{basis_calc} \end{equation} \begin{proposition} \label{extension1}For every algebra $L$ the algebra $E\left( L\right) $ is \textit{geometrically indecomposable.} \end{proposition} \begin{proof} Let $\omega =\omega _{E\left( L\right) }:\left( U\oplus V\right) \times \left( U\oplus V\right) \rightarrow W\oplus T$. We assume that $\psi :W\oplus T\rightarrow W\oplus T$ is a linear mapping such that $\psi \left( t\right) =z\neq 0$ and $x\in \ker \psi \omega $. Denote $x=\sum\limits_{i=1}^{n}x_{i}u_{i}+\sum\limits_{i=1}^{n}x_{n+i}v_{i}$. Then $\psi \omega \left( x,u_{j}\right) =\psi \left( -x_{n+j}t\right) =-x_{n+j}z=0$, so $x_{n+j}=0$ for every $j\in \left\{ 1,\ldots ,n\right\} $.
Hence $x=\sum\limits_{i=1}^{n}x_{i}u_{i}$ and $\psi \omega \left( x,v_{j}\right) =x_{j}z=0$, so $x_{j}=0$ for every $j\in \left\{ 1,\ldots ,n\right\} $. Therefore $x=0$ and $\ker \psi \omega =0$. So for every linear mapping $\psi :W\oplus T\rightarrow W\oplus T$ for which $\ker \psi \omega \neq 0$, we have $\psi \left( t\right) =0$. By Proposition \ref{decompos_cond}, $E\left( L\right) $ is geometrically indecomposable. \end{proof} $U\oplus T=H$ is an ideal of the algebra $E\left( L\right) $, $\dim \left( H\cap Z\left( E\left( L\right) \right) \right) =1$, $\dim H/\left( H\cap Z\left( E\left( L\right) \right) \right) =n$, and $E\left( L\right) /H\cong V\oplus W\cong L$. \begin{theorem} \label{main}Let $L_{1}=V_{1}\oplus W_{1}$\textit{\ and }$L_{2}=V_{2}\oplus W_{2}$ be \textit{algebras with} $\dim V_{1}=\dim V_{2}=n$ \textit{and} $\dim W_{1}=\dim W_{2}=m$.\textit{\ Then }$E\left( L_{1}\right) \cong E\left( L_{2}\right) $\textit{\ if and only if }$L_{1}\cong L_{2}$. \end{theorem} \begin{proof} We denote $E\left( L_{i}\right) =H_{i}\oplus L_{i}$, $H_{i}=U_{i}\oplus T_{i}$, $T_{i}=\mathrm{Sp}\left\{ t^{\left( i\right) }\right\} $; $\left\{ v_{1}^{\left( i\right) },\ldots ,v_{n}^{\left( i\right) }\right\} $ is a basis of $V_{i}$ and $\left\{ u_{1}^{\left( i\right) },\ldots ,u_{n}^{\left( i\right) }\right\} $ is a basis of $U_{i}$ ($i=1,2$). We assume that there is an isomorphism of Lie algebras $\alpha :E\left( L_{1}\right) \rightarrow E\left( L_{2}\right) $. Then $\alpha \left( H_{1}\right) $ is an ideal of the algebra $E\left( L_{2}\right) $. From $\dim \left( H_{1}\cap Z\left( E\left( L_{1}\right) \right) \right) =1$ we get $\dim \left( \alpha \left( H_{1}\right) \cap Z\left( E\left( L_{2}\right) \right) \right) =1$; from $H_{1}\nsubseteq Z\left( E\left( L_{1}\right) \right) $ we get $\alpha \left( H_{1}\right) \nsubseteq Z\left( E\left( L_{2}\right) \right) $. First of all, we shall prove that $\alpha \left( H_{1}\right) \subset U_{2}\oplus Z\left( E\left( L_{2}\right) \right) $.
Let $l=u+v+z\in \alpha \left( H_{1}\right) $ ($u\in U_{2}$, $v\in V_{2}$, $z\in Z\left( E\left( L_{2}\right) \right) $). If $v\neq 0$, then $v=\sum\limits_{i=1}^{n}b_{i}v_{i}^{\left( 2\right) }$, where $b_{1},\ldots ,b_{n}\in \mathbb{Q}$, and there exists $j\in \left\{ 1,\ldots ,n\right\} $ such that $b_{j}\neq 0$. Then $\left[ l,u_{j}^{\left( 2\right) }\right] =-b_{j}t^{\left( 2\right) }$ by (\ref{basis_calc}) and $T_{2}\subset \alpha \left( H_{1}\right) $. Also there exists $v_{0}\in V_{2}$ such that $\left[ v,v_{0}\right] \in W_{2}\smallsetminus \left\{ 0\right\} $, because the skew symmetric bilinear mapping $\omega _{L_{2}}$ is nonsingular. Therefore $\left[ l,v_{0}\right] =\left[ u,v_{0}\right] +\left[ v,v_{0}\right] \notin T_{2}$ and $\dim \left( \alpha \left( H_{1}\right) \cap Z\left( E\left( L_{2}\right) \right) \right) >1$. By this contradiction we have that $v=0$ and $\alpha \left( H_{1}\right) \subset U_{2}\oplus Z\left( E\left( L_{2}\right) \right) $. Since $\alpha \left( H_{1}\right) \nsubseteq Z\left( E\left( L_{2}\right) \right) $, there exists $l=u+z\in \alpha \left( H_{1}\right) $ ($u\in U_{2}\smallsetminus \left\{ 0\right\} $, $z\in Z\left( E\left( L_{2}\right) \right) $). Because $u\neq 0$, we have as above that there is $j\in \left\{ 1,\ldots ,n\right\} $ such that $\left[ l,v_{j}^{\left( 2\right) }\right] \in T_{2}\smallsetminus \left\{ 0\right\} $. But $\dim \left( \alpha \left( H_{1}\right) \cap Z\left( E\left( L_{2}\right) \right) \right) =1$, so $\alpha \left( H_{1}\right) \cap Z\left( E\left( L_{2}\right) \right) =T_{2}$.
It is clear that $\left( U_{2}\oplus Z\left( E\left( L_{2}\right) \right) \right) \cap L_{2}=\left( U_{2}\oplus W_{2}\oplus T_{2}\right) \cap \left( V_{2}\oplus W_{2}\right) =W_{2}$, so $\alpha \left( H_{1}\right) \cap L_{2}\subset W_{2}\subset Z\left( E\left( L_{2}\right) \right) $ and $\alpha \left( H_{1}\right) \cap L_{2}\subset W_{2}\cap Z\left( E\left( L_{2}\right) \right) \cap \alpha \left( H_{1}\right) =W_{2}\cap T_{2}=\left\{ 0\right\} $. By arguments of dimensions we have $E\left( L_{2}\right) =\alpha \left( H_{1}\right) \oplus L_{2}$ and, because $\alpha \left( H_{1}\right) $ is an ideal, the linear mapping $E\left( L_{2}\right) /\alpha \left( H_{1}\right) \rightarrow L_{2}$ is an isomorphism of algebras. So we have $L_{1}\cong E\left( L_{1}\right) /H_{1}\cong \alpha \left( E\left( L_{1}\right) \right) /\alpha \left( H_{1}\right) \cong E\left( L_{2}\right) /\alpha \left( H_{1}\right) \cong L_{2}$. Conversely, let $\lambda =\varphi \oplus \psi :L_{1}=V_{1}\oplus W_{1}\rightarrow L_{2}=V_{2}\oplus W_{2}$ be an isomorphism of algebras. The linear mapping $\varphi $ is a bijection, so in the bases of $V_{1}$ and $V_{2}$ referred to above it is presented by an invertible matrix $F=\left( f_{ij}\right) _{i,j=1}^{n}\in GL_{n}\left( \mathbb{Q}\right) $. We take the matrix $G=\left( g_{ij}\right) _{i,j=1}^{n}=\left( F^{-1}\right) ^{t}$ and by this matrix and the bases of $U_{1}$ and $U_{2}$ referred to above we define the linear mapping $\gamma :U_{1}\rightarrow U_{2}$. Also, we define the linear mapping $\tau :T_{1}\ni t^{\left( 1\right) }\rightarrow t^{\left( 2\right) }\in T_{2}$. We will prove that the linear mapping $\gamma \oplus \varphi \oplus \tau \oplus \psi :E\left( L_{1}\right) =U_{1}\oplus V_{1}\oplus T_{1}\oplus W_{1}\rightarrow U_{2}\oplus V_{2}\oplus T_{2}\oplus W_{2}=E\left( L_{2}\right) $ is an isomorphism of algebras.
This mapping is a bijection, so it is necessary to prove that for every $u^{\prime },u^{\prime \prime }\in U_{1}$ and every $v^{\prime },v^{\prime \prime }\in V_{1}$ the equality $\left( \tau \oplus \psi \right) \left[ u^{\prime }+v^{\prime },u^{\prime \prime }+v^{\prime \prime }\right] =\left[ \left( \gamma \oplus \varphi \right) \left( u^{\prime }+v^{\prime }\right) ,\left( \gamma \oplus \varphi \right) \left( u^{\prime \prime }+v^{\prime \prime }\right) \right] $ holds. But it is enough to prove this for the basis elements of $U_{1}\oplus V_{1}$. For $1\leq i,j\leq n$ we have $\left[ \varphi \left( v_{i}^{\left( 1\right) }\right) ,\varphi \left( v_{j}^{\left( 1\right) }\right) \right] =\psi \left[ v_{i}^{\left( 1\right) },v_{j}^{\left( 1\right) }\right] $, because $\varphi \oplus \psi $ is an isomorphism of algebras; $\left[ \gamma \left( u_{i}^{\left( 1\right) }\right) ,\varphi \left( v_{j}^{\left( 1\right) }\right) \right] =\left[ \sum\limits_{k=1}^{n}g_{ki}u_{k}^{\left( 2\right) },\sum\limits_{s=1}^{n}f_{sj}v_{s}^{\left( 2\right) }\right] =\sum\limits_{s=1}^{n}\sum\limits_{k=1}^{n}g_{ki}f_{sj}\delta _{ks}t^{\left( 2\right) }=\delta _{ij}t^{\left( 2\right) }=\tau \left[ u_{i}^{\left( 1\right) },v_{j}^{\left( 1\right) }\right] $ by (\ref{basis_calc}); and $\left[ \gamma \left( u_{i}^{\left( 1\right) }\right) ,\gamma \left( u_{j}^{\left( 1\right) }\right) \right] =0=\tau \left[ u_{i}^{\left( 1\right) },u_{j}^{\left( 1\right) }\right] $, because $\gamma \left( u_{i}^{\left( 1\right) }\right) ,\gamma \left( u_{j}^{\left( 1\right) }\right) \in U_{2}$. So $E\left( L_{1}\right) \cong E\left( L_{2}\right) $.
\end{proof} So if one could resolve the problem of classification of finite dimensional nilpotent class $2$ Lie $\mathbb{Q}$-algebras up to geometric equivalence, then (by Proposition \ref{equivalent_isom} and Proposition \ref{extension1}) one could classify up to isomorphism the algebras $E\left( L\right) $, where $L$ is an arbitrary finite dimensional nilpotent class $2$ Lie $\mathbb{Q}$-algebra ($\dim Z\left( E\left( L\right) \right) =\dim Z\left( L\right) +1$), and by Theorem \ref{main} one could then classify up to isomorphism all finite dimensional nilpotent class $2$ Lie $\mathbb{Q}$-algebras and all nilpotent class $2$ finite rank torsion free complete groups. \section{Acknowledgements.} This research was motivated by Prof. B. Plotkin. I would like to express my gratitude to him and to Prof. S. Margolis for their constant attention to this work. Conversations with Prof. E. Rips, Prof. Z. Sela, Prof. E. Hrushovski, Prof. A. Mann and Dr. E. Plotkin were very useful. After discussions with Prof. D. Kazhdan and Prof. Yu. Drozd, I turned my attention to the research of Prof. V. Sergeichuk and his collaborators (\cite{Serg1}, \cite{Serg2}, \cite{BLS}). The debates about this problem with Dr. R. Lipyanski led to the major breakthrough in this research, and I would like to express my sincere gratitude. I appreciate all the authors of the paper \cite{BLS}, which contributed greatly to this research.
\section{Introduction} \label{sec:Introd} The associated Legendre functions are an important class of special functions that appear in a wide range of problems of mathematical physics. The physical importance of these functions is related to the fact that they appear as solutions of the field theory equations in various situations. In particular, the radial parts of the solutions of the scalar, fermionic and electromagnetic wave equations on backgrounds of constant curvature spacetimes are expressed in terms of the associated Legendre functions (see, for instance, \cite{Grib94,Most97,Birr82}). The eigenfunctions in braneworld models with de Sitter and anti-de Sitter branes are also expressed in terms of these functions (see \cite{Noji00}). Motivated by this, in \cite{Saha08}, by making use of the generalized Abel-Plana formula, we have derived a summation formula for the series over the zeros of the associated Legendre function of the first kind with respect to the degree (for the generalized Abel-Plana formula and its applications to physical problems see \cite{Sah1,Saha00Rev,Saha07Rev}). This type of series is contained in the mode-sum for two-point functions of a quantum scalar field on the background of a constant curvature space with a spherical boundary, on which the field obeys the Dirichlet boundary condition. The application of the summation formula allowed us to extract from the vacuum expectation values the part corresponding to the situation without a boundary and to present the boundary-induced part in terms of a rapidly convergent integral. In the corresponding problem with two concentric spherical boundaries, in the region between the two spheres the eigenfunctions are combinations of the associated Legendre functions of the first and second kinds. The eigenfrequencies are determined by the locations of the zeros of this combination with respect to the degree.
In the present paper, by specifying the functions in the generalized Abel-Plana formula, we obtain a summation formula for the series over these zeros. As in the case of the other Abel--Plana-type formulae previously considered in the literature, this formula presents the sum of the series over the zeros of the combination of the associated Legendre functions in the form of the sum of two integrals. In boundary-value problems with two boundaries the first integral corresponds to the situation when one of the boundaries is absent and the second one presents the part induced by the second boundary. For a large class of functions the latter is rapidly convergent and, in particular, is useful for numerical evaluations of the corresponding physical characteristics. The paper is organized as follows. In section \ref{sec:SumForm}, by specifying the functions in the generalized Abel-Plana formula, we derive a formula for the summation of series over zeros of the combination of the associated Legendre functions with respect to the degree. In section \ref{sec:Special}, special cases of this summation formula are considered. First, as a partial check, we show that the standard Abel-Plana formula is obtained as a special case. Then we show that from the summation formula discussed in section \ref{sec:SumForm}, as a limiting case, the formula is obtained for the summation of the series over the zeros of combinations of the Bessel functions, previously derived in \cite{Sah1}. A physical application is given in section \ref{sec:Phys}, where the positive frequency Wightman function for a scalar field is evaluated in the region between two spherical boundaries on the background of a negative constant curvature space. It is assumed that the field obeys the Dirichlet boundary condition on the spherical shells.
The use of the summation formula from section \ref{sec:SumForm} allows us to extract from the vacuum expectation value the part corresponding to the geometry where the outer sphere is absent. The part induced by the latter is presented in terms of an integral which is rapidly convergent in the coincidence limit for points away from the sphere. The main results of the paper are summarized in section \ref{sec:Conclus}. In appendix \ref{sec:Zeros} the formula for the normalization integral is derived and we show that the zeros of the combination of the associated Legendre functions with respect to the degree are simple. \section{Summation formula} \label{sec:SumForm} Let $z=z_{k}$, $k=1,2,\ldots $, be the zeros of the function \begin{equation} X_{iz}^{\mu }(u,v)=\frac{P_{iz-1/2}^{\mu }(u)P_{iz-1/2}^{-\mu }(v)-P_{iz-1/2}^{-\mu }(u)P_{iz-1/2}^{\mu }(v)}{\sin (\mu \pi )}, \label{LegComb} \end{equation} in the right half-plane of the complex variable $z$: \begin{equation} X_{iz_{k}}^{\mu }(u,v)=0. \label{Pzk0} \end{equation} In (\ref{LegComb}), $P_{iz-1/2}^{\mu }(u)$ is the associated Legendre function of the first kind (in this paper the definition of the associated Legendre functions follows that given in \cite{Abra72}). In the discussion below we will assume that $u,v>1$. The expression in the numerator of (\ref{LegComb}) has simple zeros for integer values of $\mu $ and the function $X_{iz}^{\mu }(u,v)$ is regular at these points. Since one has the property $X_{\nu }^{-\mu }(u,v)=X_{\nu }^{\mu }(u,v)$, without loss of generality we consider the parameter $\mu $ to be non-negative, $\mu \geqslant 0$. For given values of $u$, $v$, and $\mu $ the function $X_{iz}^{\mu }(u,v)$ has an infinity of real zeros.
From the asymptotic formula for the associated Legendre functions we can see that for $z\rightarrow +\infty $ one has \begin{equation} X_{iz}^{\mu }(u,v)\approx \frac{2\sin [\left( \eta _{v}-\eta _{u}\right) z]}{\pi z\sqrt{\sinh \eta _{u}\sinh \eta _{v}}}, \label{Xlargez} \end{equation} where $\eta _{u}$ and $\eta _{v}$ are defined by \begin{equation} u=\cosh \eta _{u},\;v=\cosh \eta _{v}. \label{ueta} \end{equation} From here we obtain the asymptotic expression for large zeros: \begin{equation} z_{k}\approx \pi k/\left( \eta _{v}-\eta _{u}\right) . \label{zkAsymp} \end{equation} In general, the zeros $z_{k}$ are functions of the parameters $u$, $v$, and $\mu $: $z_{k}=z_{k}(u,v,\mu )$. By taking into account that for the associated Legendre function one has $P_{-\nu -1/2}^{\mu }(u)=P_{\nu -1/2}^{\mu }(u)$, we see that $X_{-\nu }^{\mu }(u,v)=X_{\nu }^{\mu }(u,v)$. Hence, the points $z=-z_{k}$ are zeros of the function $X_{iz}^{\mu }(u,v)$ as well. In Appendix \ref{sec:Zeros} we show that the zeros $z=z_{k}$ are simple and that under the conditions specified above the function $X_{iz}^{\mu }(u,v)$ has no non-real zeros. We will assume that the $z_{k}$ are arranged in ascending order of magnitude. Note that the function $X_{\nu }^{\mu }(u,v)$ can also be expressed in terms of the combination \begin{equation} Y_{\nu }^{\mu }(u,v)=Q_{\nu -1/2}^{\mu }(u)P_{\nu -1/2}^{\mu }(v)-P_{\nu -1/2}^{\mu }(u)Q_{\nu -1/2}^{\mu }(v), \label{Ynu} \end{equation} as \begin{equation} X_{\nu }^{\mu }(u,v)=\frac{2}{\pi e^{i\mu \pi }}\frac{\Gamma ( \nu -\mu +1/2) }{\Gamma ( \nu +\mu +1/2) }Y_{\nu }^{\mu }(u,v), \label{XY} \end{equation} where $Q_{\nu -1/2}^{\mu }(u)$ is the associated Legendre function of the second kind and $\Gamma (x)$ is the gamma function. A summation formula for the series over $z_{k}$ can be derived by using the generalized Abel-Plana formula \cite{Sah1} (see also \cite{Saha00Rev,Saha07Rev}).
For functions $f(z)$ and $g(z)$ meromorphic in the strip $a\leqslant x\leqslant b$ of the complex plane $z=x+iy$ this formula has the form \begin{eqnarray} &&\lim_{b\rightarrow \infty }\bigg[{\mathrm{p.v.}}\!\int_{a}^{b}dx\,f(x)-\pi i\sum_{k}\underset{z=z_{g,k}}{\mathrm{Res}}g(z)-\pi i\sum_{k,{{\mathrm{Im\,}}}z_{f,k}\neq 0}\sigma (z_{f,k})\underset{z={\mathrm{\,}}z_{f,k}}{\mathrm{Res}}f(z)\bigg] \notag \\ &&\quad =\frac{1}{2}\int_{a-i\infty }^{a+i\infty }dz\,\left[ g(z)+{\sigma (z)}f(z)\right] , \label{GAPF} \end{eqnarray} where ${\sigma (z)\equiv \mathrm{sgn}}({{\mathrm{Im\,}}}z)$ and p.v. means the principal value of the integral. In this formula, $z_{f,k}$ and $z_{g,k}$ are the positions of the poles of the functions $f(z)$ and $g(z)$ in the strip $a<x<b$. As the functions $f(z)$ and $g(z)$ in formula (\ref{GAPF}) we choose \begin{eqnarray} f(z) &=&\frac{h(z)}{4Q_{iz-1/2}^{\mu }(u)Q_{-iz-1/2}^{\mu }(u)}\frac{\Gamma \left( iz+\mu +1/2\right) \pi ^{2}e^{2i\mu \pi }i\sinh (z\pi )}{\Gamma \left( iz-\mu +1/2\right) \cos [(iz-\mu )\pi ]}, \notag \\ g(z) &=&\left[ \frac{Q_{-iz-1/2}^{\mu }(v)}{Q_{-iz-1/2}^{\mu }(u)}+\frac{Q_{iz-1/2}^{\mu }(v)}{Q_{iz-1/2}^{\mu }(u)}\right] \frac{h(z)}{2X_{iz}^{\mu }(u,v)}, \label{gz} \end{eqnarray} where $h(z)$ is a meromorphic function for $a\leqslant {{\mathrm{Re}}}\,z\leqslant b$. The combinations appearing on the left-hand side of formula (\ref{GAPF}) are presented in the form \begin{equation} g(z)\pm f(z)=\frac{Q_{\mp iz-1/2}^{\mu }(v)}{Q_{\mp iz-1/2}^{\mu }(u)}\frac{h(z)}{X_{iz}^{\mu }(u,v)}. \label{gzplmin} \end{equation} Note that the function $g(z)$ has simple poles at the zeros $z_{k}$ of the function (\ref{LegComb}).
The conditions for the generalized Abel-Plana formula (\ref{GAPF}), formulated in terms of the function $h(z)$, take the form \begin{eqnarray} \lim_{w\rightarrow \infty }\int_{a\pm iw}^{b\pm iw}dz\frac{Q_{\mp iz-1/2}^{\mu }(v)}{Q_{\mp iz-1/2}^{\mu }(u)}\frac{h(z)}{X_{iz}^{\mu }(u,v)} &=&0, \notag \\ \lim_{b\rightarrow \infty }\int_{b}^{b\pm i\infty }dz\frac{Q_{\mp iz-1/2}^{\mu }(v)}{Q_{\mp iz-1/2}^{\mu }(u)}\frac{h(z)}{X_{iz}^{\mu }(u,v)} &=&0. \label{Cond1} \end{eqnarray} With the help of the asymptotic formulae for the associated Legendre functions, we can see that these conditions are satisfied if the function $h(z)$ is restricted by the constraint \begin{equation} |h(z)|<x^{-2\mu }\varepsilon (x)e^{c(\eta _{v}-\eta _{u})y},\;z=x+iy,\;|z|\rightarrow \infty , \label{Cond2} \end{equation} uniformly in any finite interval of $x$, where $c<2$ and $\varepsilon (x)\rightarrow 0$ for $x\rightarrow +\infty $. Now, after the substitution of the functions (\ref{gz}) into formula (\ref{GAPF}), we see that for a function $h(z)$ meromorphic in the half-plane ${{\mathrm{Re}}}\,z\geqslant a$ and satisfying condition (\ref{Cond2}) the following formula holds: \begin{eqnarray} &&\lim_{b\rightarrow \infty }\bigg\{\sum_{k=m}^{n}\frac{h(z)}{\partial _{z}X_{iz}^{\mu }(u,v)}\frac{Q_{iz-1/2}^{\mu }(v)}{Q_{iz-1/2}^{\mu }(u)}\bigg|_{z=z_{k}}+\frac{i}{\pi }{\mathrm{p.v.}}\!\int_{a}^{b}dx\,f(x)+r[h(z)]\bigg\} \notag \\ &&\qquad =\frac{i}{2\pi }\int_{a-i\infty }^{a+i\infty }dz\,\frac{Q_{-\sigma (z)iz-1/2}^{\mu }(v)}{Q_{-\sigma (z)iz-1/2}^{\mu }(u)}\frac{h(z)}{X_{iz}^{\mu }(u,v)}, \label{SumForm0} \end{eqnarray} where the function $f(z)$ is defined by relation (\ref{gz}).
In this formula we have introduced the notation% \begin{eqnarray} r[h(z)] &=&\sum_{k,{{\mathrm{Im\,}}}z_{h,k}\neq 0}\underset{z=z_{h,k}}{% \mathrm{Res}}\bigg[\frac{Q_{-\sigma (z_{k})iz-1/2}^{\mu }(v)}{Q_{-\sigma (z_{k})iz-1/2}^{\mu }(u)}\frac{h(z)}{X_{iz}^{\mu }(u,v)}\bigg] \notag \\ &&+\frac{1}{2}\sum_{k,{{\mathrm{Im\,}}}z_{h,k}=0}\underset{z=z_{h,k}}{% \mathrm{Res}}\bigg[\frac{h(z)}{X_{iz}^{\mu }(u,v)}\sum_{l=\pm }\frac{% Q_{liz-1/2}^{\mu }(v)}{Q_{liz-1/2}^{\mu }(u)}\bigg], \label{rhz} \end{eqnarray}% with $z_{h,k}$ being the positions of the poles of the function $h(z)$. On the left-hand side of (\ref{SumForm0}), one has $z_{m-1}<a<z_{m}$, $% z_{n}<b<z_{n+1}$ and in (\ref{rhz}) the summation goes over the poles $% z_{h,k}$ in the strip $a<{{\mathrm{Re}}}\,z<b$. Note that one has the relations% \begin{equation} \frac{Q_{iz-1/2}^{\mu }(v)}{Q_{iz-1/2}^{\mu }(u)}=\frac{P_{iz-1/2}^{\mu }(v)% }{P_{iz-1/2}^{\mu }(u)}=\frac{P_{iz-1/2}^{-\mu }(v)}{P_{iz-1/2}^{-\mu }(u)}% ,\;z=z_{k}, \label{QWrons} \end{equation}% and in the summation of the first term in the curly braces of (\ref{SumForm0}) we can replace the ratio of the associated Legendre functions of the second kind by the ratio of the functions of the first kind. A useful form of the summation formula (\ref{SumForm0}) is obtained in the limit $a\rightarrow 0$.
In this limit, we see that for a function $h(z)$ meromorphic in the half-plane ${{\mathrm{Re}}}\,z\geqslant 0$ and satisfying the condition (\ref{Cond2}) the following formula holds% \begin{eqnarray} &&\sum_{k=1}^{\infty }\frac{h(z)}{\partial _{z}X_{iz}^{\mu }(u,v)}\frac{% Q_{iz-1/2}^{\mu }(v)}{Q_{iz-1/2}^{\mu }(u)}\bigg|_{z=z_{k}}=\frac{\pi e^{2i\mu \pi }}{4}{\mathrm{p.v.}}\!\int_{0}^{\infty }dx\,\frac{\Gamma \left( ix+\mu +1/2\right) \sinh (x\pi )}{\Gamma \left( ix-\mu +1/2\right) \cos [(ix-\mu )\pi ]} \notag \\ && \times \frac{h(x)}{Q_{ix-1/2}^{\mu }(u)Q_{-ix-1/2}^{\mu }(u)}% -r[h(z)]-\frac{1}{2\pi }\int_{0}^{\infty }dx\,\frac{Q_{x-1/2}^{\mu }(v)}{% Q_{x-1/2}^{\mu }(u)}\frac{h(xe^{\pi i/2})+h(xe^{-\pi i/2})}{X_{x}^{\mu }(u,v)% }. \label{SumFormula} \end{eqnarray}% For large values $x\gg 1$, the associated Legendre functions behave as \begin{equation} P_{x-1/2}^{\mu }(\cosh \eta )\approx \frac{x^{\mu -1/2}e^{\eta x}}{\sqrt{% 2\pi \sinh \eta }},\;Q_{x-1/2}^{\mu }(\cosh \eta )\approx \sqrt{\frac{\pi }{2% }}e^{i\mu \pi }\frac{x^{\mu -1/2}e^{-\eta x}}{\sqrt{\sinh \eta }}. \label{PQasimp} \end{equation}% By using these formulae and the relation (\ref{XY}), for the corresponding asymptotic behavior of the function $X_{x}^{\mu }(u,v)$ one finds \begin{equation} X_{x}^{\mu }(u,v)\approx \frac{e^{(\eta _{v}-\eta _{u})x}}{\pi x\sqrt{\sinh \eta _{u}\sinh \eta _{v}}}. \label{Xxlarge} \end{equation}% From these asymptotic formulae it follows that under the condition (\ref% {Cond2}) for the function $h(z)$, the second integral on the right-hand side of formula (\ref{SumFormula}) converges exponentially at the upper limit. If the function $h(z)$ has poles on the positive real axis, it is assumed that the first integral on the right-hand side converges in the sense of the principal value.
From the derivation of (\ref{SumFormula}) it follows that this formula may be extended to the case of some functions $h(z)$ having branch-points on the imaginary axis, for example, having the form $% h(z)=h_{1}(z)/(z^{2}+c^{2})^{1/2}$, where $h_{1}(z)$ is a meromorphic function. This type of function appears in the physical example discussed in section \ref{sec:Phys}. Special cases of formula (\ref{SumFormula}) are considered in the next section. Another generalization of formula (\ref{SumFormula}) can be given for a class of functions $h(z)$ having purely imaginary poles at the points $z=\pm iy_{k}$, $y_{k}>0$, $k=1,2,\ldots $, and at the origin $z=y_{0}=0$. We assume that the function $h(z)$ satisfies the condition% \begin{equation} h(z)=-h(ze^{-\pi i})+o((z-\sigma _{k})^{-1}),\;z\rightarrow \sigma _{k},\;\sigma _{k}=0,iy_{k}. \label{Impolecond} \end{equation}% Let us denote by $C_{\rho }(\sigma _{k})$ the right half of the circle with radius $\rho $ and with the center at the point $\sigma _{k}$, described in the positive direction. Similarly, we denote by $\gamma _{\rho }^{+}$ and $% \gamma _{\rho }^{-}$ the upper and lower halves of the semicircle in the right half-plane with radius $\rho $ and with the center at the point $z=0$, described in the positive direction with respect to this point. Now, in the limit $a\rightarrow 0$ the right-hand side of (\ref{SumForm0}) can be presented in the form% \begin{equation} \frac{i}{2\pi }\sum_{\alpha =+,-}\bigg(\int_{\gamma _{\rho }^{\alpha }}dz+\sum_{\sigma _{k}=\alpha iy_{k}}\int_{C_{\rho }(\sigma _{k})}dz\bigg)\,% \frac{Q_{-\alpha iz-1/2}^{\mu }(v)}{Q_{-\alpha iz-1/2}^{\mu }(u)}\frac{h(z)}{% X_{iz}^{\mu }(u,v)}, \label{Impoles1} \end{equation}% plus the sum of the integrals along the straight segments $(\pm i(y_{k-1}+\rho ),\pm i(y_{k}-\rho ))$\ of the imaginary axis between the poles. 
In the limit $\rho \rightarrow 0$ the sum of the integrals along the straight segments of the imaginary axis gives the principal value of the last integral on the right-hand side of (\ref{SumFormula}). In the terms of (% \ref{Impoles1}) with $\alpha =-$ we introduce a new integration variable $% z^{\prime }=ze^{\pi i}$. By using the relation (\ref{Impolecond}), the expression (\ref{Impoles1}) is presented in the form% \begin{equation} -\sum_{\sigma _{k}=0,iy_{k}}(1-\delta _{0\sigma _{k}}/2)\underset{z=\sigma _{k}}{\mathrm{Res}}\bigg[\frac{Q_{-iz-1/2}^{\mu }(v)}{Q_{-iz-1/2}^{\mu }(u)}% \frac{h(z)}{X_{iz}^{\mu }(u,v)}\bigg] \label{Impoles2} \end{equation} plus a part which vanishes in the limit $\rho \rightarrow 0$. As a result, formula (\ref{SumFormula}) is extended to functions having purely imaginary poles and satisfying condition (\ref{Impolecond}). For this, on the right-hand side of (\ref{SumFormula}) we have to add the sum of residues (% \ref{Impoles2}) at these poles and take the principal value of the second integral on the right-hand side. The latter exists due to condition (\ref% {Impolecond}). \section{Special cases} \label{sec:Special} First we consider the case $\mu =1/2$. For the corresponding associated Legendre functions one has% \begin{equation} P_{z-1/2}^{-1/2}(\cosh \eta )=\sqrt{\frac{2}{\pi }}\frac{\sinh (z\eta )}{z% \sqrt{\sinh \eta }},\;P_{z-1/2}^{1/2}(\cosh \eta )=\sqrt{\frac{2}{\pi }}% \frac{\cosh (z\eta )}{\sqrt{\sinh \eta }}. \label{SpCase1} \end{equation}% By making use of these formulae we find \begin{equation} X_{iz}^{1/2}(u,v)=\frac{2}{\pi }\frac{\sin [z(\eta _{v}-\eta _{u})]}{z\sqrt{% \sinh \eta _{u}\sinh \eta _{v}}}. \label{X1/2} \end{equation}% Hence, in this case for the zeros $z_{k}$ one has $z_{k}=\pi k/(\eta _{v}-\eta _{u})$.
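The closed forms (\ref{SpCase1}) and (\ref{X1/2}) also give a quick numerical check of the asymptotics (\ref{PQasimp}) and (\ref{Xxlarge}): for $\mu =1/2$, continuation of (\ref{X1/2}) to real degree gives $X_{x}^{1/2}(u,v)=(2/\pi )\sinh [x(\eta _{v}-\eta _{u})]/(x\sqrt{\sinh \eta _{u}\sinh \eta _{v}})$, and both ratios to the asymptotic expressions tend to unity up to exponentially small corrections. The following sketch illustrates this; the sample values of $\eta _{u}$, $\eta _{v}$ and $x$ are our own choices, not from the text.

```python
import numpy as np

eta_u, eta_v = 0.5, 1.5   # sample values with eta_v > eta_u (an assumption)
x = 30.0                  # large degree parameter

# mu = 1/2 closed form, cf. (SpCase1):
# P_{x-1/2}^{1/2}(cosh eta) = sqrt(2/pi) cosh(x eta)/sqrt(sinh eta)
P_half = np.sqrt(2/np.pi) * np.cosh(x*eta_u) / np.sqrt(np.sinh(eta_u))
# asymptotic (PQasimp) with mu = 1/2: e^{eta x}/sqrt(2 pi sinh eta)
P_asym = np.exp(eta_u*x) / np.sqrt(2*np.pi*np.sinh(eta_u))

# X_x^{1/2}(u,v) from (X1/2) continued to real degree
X_half = (2/np.pi) * np.sinh(x*(eta_v - eta_u)) / (x*np.sqrt(np.sinh(eta_u)*np.sinh(eta_v)))
# asymptotic (Xxlarge)
X_asym = np.exp((eta_v - eta_u)*x) / (np.pi*x*np.sqrt(np.sinh(eta_u)*np.sinh(eta_v)))

# both ratios differ from 1 only by terms of order e^{-2x eta}
print(P_half/P_asym, X_half/X_asym)
```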
Introducing a new function $F(z)$ in accordance with the relation $zh(z)=F(z(\eta _{v}-\eta _{u})/\pi )$, from the formula (\ref% {SumFormula}) we obtain the Abel-Plana summation formula in its standard form:% \begin{equation} \sum_{k=1}^{\infty }F(k)=-\frac{1}{2}F(0)+\int_{0}^{\infty }dx\,F(x)+i\int_{0}^{\infty }dx\frac{F(ix)-F(-ix)}{e^{2\pi x}-1}, \label{AP1} \end{equation}% where the first term on the right-hand side comes from the residue term at $% \sigma _{k}=0$ in (\ref{Impoles2}). Now let us show that from formula (\ref{SumFormula}), as a special case, a summation formula is obtained for the series over zeros of the combination of cylinder functions. First of all, by making use of formulae \begin{equation} \lim_{s\rightarrow +\infty }(sz)^{\pm \mu }P_{isz-1/2}^{\mp \mu }(\cosh (\lambda /s))=J_{\pm \mu }(\lambda z), \label{Plim} \end{equation}% with $J_{\mu }(\eta )$ being the Bessel function of the first kind, we can see that the following relation holds:% \begin{equation} \lim_{s\rightarrow +\infty }X_{isz}^{\mu }(\cosh (\lambda _{u}/s),\cosh (\lambda _{v}/s))=C_{\mu }(\lambda _{u}z,\lambda _{v}z), \label{XBess} \end{equation}% where% \begin{equation} C_{\mu }(\lambda _{u}z,\lambda _{v}z)=J_{\mu }(\lambda _{u}z)Y_{\mu }(\lambda _{v}z)-Y_{\mu }(\lambda _{u}z)J_{\mu }(\lambda _{v}z). \label{Cmu} \end{equation}% Note that, instead of the function $J_{-\mu }(z)$ we have introduced the Neumann function $Y_{\mu }(z)$. Hence, in the limit $s\rightarrow \infty $ from (\ref{SumFormula}) we obtain the summation formula for the series over zeros $z=\lambda _{\mu ,k}$, $k=1,2,\ldots $, of the function $C_{\mu }(\lambda _{u}z,\lambda _{v}z)$. For this, first we rewrite formula (\ref% {SumFormula}) making the replacements $z\rightarrow sz$, $x\rightarrow sx$, in both sides of this formula including the terms in $r[h(z)]$, and we take $% u=\cosh (\lambda _{u}/s)$, $v=\cosh (\lambda _{v}/s)$. 
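Before taking this limit, we note that the standard Abel-Plana formula (\ref{AP1}) is easily verified numerically. A sketch with the test function $F(x)=e^{-x}$ (our own choice, analytic and decaying in the right half-plane, so the formula applies); here $i[F(ix)-F(-ix)]=2\sin x$ and $\sum_{k\geqslant 1}e^{-k}=1/(e-1)$:

```python
import numpy as np
from scipy.integrate import quad

# Left-hand side of (AP1): sum_{k>=1} e^{-k} = 1/(e-1)
lhs = sum(np.exp(-k) for k in range(1, 200))

int1, _ = quad(lambda x: np.exp(-x), 0, np.inf)
# i[F(ix) - F(-ix)] = i(e^{-ix} - e^{ix}) = 2 sin x for F(x) = e^{-x}
int2, _ = quad(lambda x: 2*np.sin(x)/np.expm1(2*np.pi*x), 0, np.inf)
rhs = -0.5 + int1 + int2      # -F(0)/2 plus the two integrals in (AP1)

print(lhs, rhs, 1/(np.e - 1))   # all three agree
```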
Introducing a new function $F(z)=h(sz)$, in the limit $s\rightarrow +\infty $ we find the formula \begin{eqnarray} &&\sum_{k=1}^{\infty }\frac{F(z)}{\partial _{z}C_{\mu }(\lambda _{u}z,\lambda _{v}z)}\frac{J_{\mu }(\lambda _{v}z)}{J_{\mu }(\lambda _{u}z)}% \bigg|_{z=\lambda _{\mu ,k}}=\frac{1}{\pi }{\mathrm{p.v.}}\!\int_{0}^{\infty }dx\,\frac{F(x)}{J_{\mu }^{2}(\lambda _{u}x)+Y_{\mu }^{2}(\lambda _{u}x)} \notag \\ &&\qquad -r_{C}[F(z)]-\frac{1}{4}\int_{0}^{\infty }dx\,\frac{K_{\mu }(\lambda _{v}x)}{K_{\mu }(\lambda _{u}x)}\frac{F(xe^{\pi i/2})+F(xe^{-\pi i/2})}{K_{\mu }(\lambda _{u}x)I_{\mu }(\lambda _{v}x)-I_{\mu }(\lambda _{u}x)K_{\mu }(\lambda _{v}x)}, \label{SumFormBess} \end{eqnarray}% where $I_{\mu }(x)$ and $K_{\mu }(x)$\ are the modified Bessel functions and% \begin{eqnarray} r_{C}[F(z)] &=&\pi \sum_{k}\underset{{{\mathrm{Im\,}}}z_{F,k}=0}{\mathrm{Res}% }\bigg[\frac{J_{\mu }(\lambda _{u}z)J_{\mu }(\lambda _{v}z)+Y_{\mu }(\lambda _{u}z)Y_{\mu }(\lambda _{v}z)}{J_{\mu }^{2}(\lambda _{u}x)+Y_{\mu }^{2}(\lambda _{u}x)}\frac{F(z)}{C_{\mu }(\lambda _{u}z,\lambda _{v}z)}\bigg] \notag \\ &&+\pi \sum_{l=1,2}\sum_{k}\underset{(-1)^{l}{{\mathrm{Im\,}}}z_{F,k}<0}{% \mathrm{Res}}\bigg[\frac{H_{\mu }^{(l)}(\lambda _{v}z)}{H_{\mu }^{(l)}(\lambda _{u}z)}\frac{F(z)}{C_{\mu }(\lambda _{u}z,\lambda _{v}z)}% \bigg]. 
\label{rCF} \end{eqnarray}% In deriving (\ref{SumFormBess}) we have also used the formulae \begin{eqnarray} \lim_{\nu \rightarrow +\infty }\nu ^{-\mu }Q_{i\nu -1/2}^{\mu }(\cosh (\eta /\nu )) &=&-\frac{\pi i}{2}e^{i\mu \pi }H_{\mu }^{(2)}(\eta ), \notag \\ \lim_{\nu \rightarrow \infty }\nu ^{\pm \mu }P_{\nu }^{\mp \mu }(\cosh (x/\nu )) &=&I_{\pm \mu }(x), \label{LegLimit} \\ \lim_{\nu \rightarrow \infty }\nu ^{-\mu }Q_{\nu }^{\mu }(\cosh (x/\nu )) &=&e^{i\mu \pi }K_{\mu }(x), \notag \end{eqnarray}% and the relation \begin{equation} \frac{H_{\mu }^{(2)}(\lambda _{v}z)}{H_{\mu }^{(2)}(\lambda _{u}z)}=\frac{% J_{\mu }(\lambda _{v}z)}{J_{\mu }(\lambda _{u}z)},\;z=\lambda _{\mu ,k}. \end{equation}% Note that from (\ref{LegLimit}) it follows that \begin{equation} \lim_{s\rightarrow +\infty }X_{sx}^{\mu }(\cosh (\lambda _{u}/s),\cosh (\lambda _{v}/s))=\frac{2}{\pi }\left[ K_{\mu }(\lambda _{u}x)I_{\mu }(\lambda _{v}x)-I_{\mu }(\lambda _{u}x)K_{\mu }(\lambda _{v}x)\right] . \label{rel5} \end{equation}% Formula (\ref{SumFormBess}) is a special case of the result derived in \cite% {Sah1} (see also \cite{Saha07Rev}). Physical applications of this formula are given in \cite{Saha01,Saha04}. \section{Vacuum polarization by concentric spherical boundaries in a constant curvature space} \label{sec:Phys} In this section we give a physical application of the summation formula (\ref% {SumFormula}). Consider a scalar field $\varphi (x)$ on the background of the space with constant negative curvature described by the line element \begin{equation} ds^{2}=dt^{2}-a^{2}\left[ dr^{2}+\sinh ^{2}r(d\theta ^{2}+\sin ^{2}\theta d\phi ^{2})\right] , \label{metric} \end{equation}% where $a$ is a constant. The field equation has the form% \begin{equation} \left( \nabla _{l}\nabla ^{l}+M^{2}+\xi R\right) \varphi (x)=0, \label{FieldEq} \end{equation}% where $M$ is the mass of the field quanta, $\xi $ is the curvature coupling parameter and for the Ricci scalar one has $R=-6a^{-2}$.
We will assume that the field operator satisfies Dirichlet boundary conditions on two concentric spherical shells with radii $r=r_{1}$ and $r=r_{2}$, $r_{1}<r_{2}$, \begin{equation} \varphi (x)|_{r=r_{1,2}}=0. \label{BoundCond} \end{equation} The boundary conditions modify the spectrum of the zero-point fluctuations and, as a result of this modification, the physical properties of the vacuum are changed. Among the most important characteristics of these properties are the expectation values of quantities bilinear in the field operator such as the field squared and the energy-momentum tensor. These expectation values are obtained from two-point functions in the coincidence limit of the arguments. As a two-point function here we will consider the positive frequency Wightman function. Other two-point functions are evaluated in a similar way. Expanding the field operator over the complete set $\{\varphi _{\alpha }(x),\varphi _{\alpha }^{\ast }(x)\}$ of classical solutions to the field equation satisfying the boundary conditions (\ref{BoundCond}), the Wightman function is presented in the form of the following mode-sum% \begin{equation} W(x,x^{\prime })=\langle 0|\varphi (x)\varphi (x^{\prime })|0\rangle =\sum_{\alpha }\varphi _{\alpha }(x)\varphi _{\alpha }^{\ast }(x^{\prime }), \label{WFsum} \end{equation}% where $|0\rangle $ is the amplitude of the vacuum state and $\alpha $ is a set of quantum numbers specifying the solution. In accordance with the spherical symmetry of the problem under consideration, the eigenfunctions for the scalar field can be presented in the factorized form \begin{equation} \varphi _{\alpha }(x)=Z(r)Y_{lm}(\theta ,\phi )e^{-i\omega t}, \label{eigfunc1} \end{equation}% where $Y_{lm}(\theta ,\phi )$ are the spherical harmonics with $% l=0,1,2,\ldots $, $-l\leqslant m\leqslant l$. 
The equation for the radial function is obtained from the field equation (\ref{FieldEq}) and has the form% \begin{equation} \frac{1}{\sinh ^{2}r}\frac{d}{dr}\left( \sinh ^{2}r\frac{dZ}{dr}\right) +% \left[ (\omega ^{2}-M^{2})a^{2}+6\xi -\frac{l(l+1)}{\sinh ^{2}r}\right] Z=0. \label{Zeq} \end{equation}% In the region between the spherical shells the solution of equation (\ref% {Zeq}) is expressed in terms of the associated Legendre function as% \begin{equation*} Z(r)=\frac{c_{1}P_{iz-1/2}^{-l-1/2}(u)+c_{2}P_{iz-1/2}^{l+1/2}(u)}{\sqrt{\sinh r% }}, \end{equation*}% with integration constants $c_{1}$ and $c_{2}$ and the notations% \begin{equation} z^{2}=(\omega ^{2}-M^{2})a^{2}+6\xi -1,\;u=\cosh r. \label{lambda} \end{equation} From the boundary condition on the inner sphere we find% \begin{equation} \frac{c_{2}}{c_{1}}=-\frac{P_{iz-1/2}^{-l-1/2}(u_{1})}{% P_{iz-1/2}^{l+1/2}(u_{1})},\;u_{i}\equiv \cosh r_{i},\;i=1,2, \label{Pu1Ratio} \end{equation}% and, hence,% \begin{equation} Z(r)=C_{\alpha }\frac{X_{iz}^{l+1/2}(u_{1},u)}{\sqrt{\sinh r}}, \label{ZX} \end{equation}% where $C_{\alpha }$ is the normalization constant and the function $% X_{iz}^{l+1/2}(u_{1},u)$ is defined by (\ref{LegComb}). From the boundary condition on the outer sphere we see that the eigenvalues for $z$ are solutions of the equation% \begin{equation} X_{iz}^{l+1/2}(u_{1},u_{2})=0. \label{ejgmodes} \end{equation} As a result, the eigenfunctions have the form% \begin{equation} \varphi _{\alpha }(x)=\frac{C_{\alpha }}{\sqrt{\sinh r}}% X_{iz}^{l+1/2}(u_{1},u)Y_{lm}(\theta ,\phi )e^{-i\omega t}, \label{eigfunc2} \end{equation}% and, hence, $z=z_{k}$, $k=1,2,\ldots $, in the notations of section \ref% {sec:SumForm}. The corresponding eigenfrequencies are related to these zeros by the formula% \begin{equation} \omega _{k}^{2}=\omega ^{2}(z_{k})=(z_{k}^{2}+1-6\xi )/a^{2}+M^{2}. \label{eigfreq} \end{equation}% Hence, the set $\alpha $ of the quantum numbers is specified to $\alpha =(l,m,k)$. 
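For $l=0$ the eigenvalues can be written in closed form: with $\mu =l+1/2=1/2$, equation (\ref{ejgmodes}) reduces, by (\ref{X1/2}) with $\eta _{i}=r_{i}$, to $\sin [z(r_{2}-r_{1})]=0$, so that $z_{k}=\pi k/(r_{2}-r_{1})$. A small numerical sketch of the corresponding eigenfrequencies (\ref{eigfreq}); the parameter values below are illustrative assumptions only:

```python
import numpy as np

# Sample (assumed) parameters: curvature radius a, shell radii, mass, coupling
a, r1, r2 = 1.0, 1.0, 2.0
M, xi = 0.5, 0.0          # minimally coupled field

k = np.arange(1, 6)
z_k = np.pi*k/(r2 - r1)   # l = 0 roots of (ejgmodes), via (X1/2)
omega_k = np.sqrt((z_k**2 + 1 - 6*xi)/a**2 + M**2)   # eq. (eigfreq)
print(omega_k)            # monotonically increasing with k
```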
The coefficient $C_{\alpha }$ in (\ref{eigfunc2}) is determined from the orthonormalization condition for the eigenfunctions:% \begin{equation} \int d^{3}x\,\sqrt{|g|}\varphi _{\alpha }(x)\varphi _{\alpha ^{\prime }}^{\ast }(x)=\frac{\delta _{\alpha \alpha ^{\prime }}}{2\omega }, \label{normcond} \end{equation}% where the integration goes over the region between the spherical shells. Making use of the integration formula given in Appendix \ref{sec:Zeros} and the boundary conditions, for this coefficient we find% \begin{equation} C_{\alpha }^{-2}=a^{3}\frac{\omega (z)}{z}(u_{2}^{2}-1)[\partial _{z}X_{iz}^{l+1/2}(u_{1},u_{2})]\partial _{u}X_{iz}^{l+1/2}(u_{1},u), \label{Cnorm} \end{equation}% with $z=z_{k}$, $u=u_{2}$. By using the Wronskian relation for the associated Legendre functions,% \begin{equation} W\{P_{i\nu -1/2}^{\mu }(u),Q_{i\nu -1/2}^{\mu }(u)\}=\frac{e^{i\mu \pi }\Gamma (i\nu +\mu +1/2)}{(1-u^{2})\Gamma (i\nu -\mu +1/2)}, \label{WrPQ} \end{equation}% it can be seen that \begin{equation} \lbrack \partial _{u}X_{iz_{k}}^{l+1/2}(u_{1},u)]_{u=u_{2}}=\frac{2}{\pi }% \frac{1}{u_{2}^{2}-1}\frac{P_{iz_{k}-1/2}^{l+1/2}(u_{1})}{% P_{iz_{k}-1/2}^{l+1/2}(u_{2})}. \label{dXrel1} \end{equation}% Upon substituting this into (\ref{Cnorm}), the normalization coefficient is written in the equivalent form% \begin{equation} C_{\alpha }^{-2}=a^{3}\frac{2\omega (z)}{\pi z}\partial _{z}X_{iz}^{l+1/2}(u_{1},u_{2})\frac{P_{iz-1/2}^{l+1/2}(u_{1})}{% P_{iz-1/2}^{l+1/2}(u_{2})}\bigg|_{z=z_{k}}. \label{Cnorm2} \end{equation}% Note that the ratio of the gamma functions in this formula can also be presented in the form% \begin{equation} \frac{\Gamma (iz_{k}+l+1)}{\Gamma (iz_{k}-l)}=\frac{1}{\pi }\cos [\pi (iz_{k}-l-1/2)]|\Gamma (iz_{k}+l+1)|^{2}.
\label{GamRatio} \end{equation} Substituting the eigenfunctions into the mode-sum formula (\ref{WFsum}) and using the addition theorem for the spherical harmonics, for the Wightman function one finds% \begin{eqnarray} W(x,x^{\prime }) &=&\frac{1}{8a^{3}}\sum_{l=0}^{\infty }\frac{% (2l+1)P_{l}(\cos \gamma )}{\sqrt{\sinh r\sinh r^{\prime }}} \notag \\ &&\times \sum_{k=1}^{\infty }z\frac{% X_{iz}^{l+1/2}(u_{1},u)X_{iz}^{l+1/2}(u_{1},u^{\prime })}{\partial _{z}X_{iz}^{l+1/2}(u_{1},u_{2})}\frac{P_{iz-1/2}^{l+1/2}(u_{2})}{% P_{iz-1/2}^{l+1/2}(u_{1})}\frac{e^{-i\omega (z)\Delta t}}{\omega (z)}\bigg|% _{z=z_{k}} \label{WF1} \end{eqnarray}% where $\Delta t=t-t^{\prime }$ and $u^{\prime }=\cosh r^{\prime }$. In (\ref% {WF1}), $P_{l}(\cos \gamma )$ is the Legendre polynomial and \begin{equation} \cos \gamma =\cos \theta \cos \theta ^{\prime }+\sin \theta \sin \theta ^{\prime }\cos (\phi -\phi ^{\prime }). \label{cosgam} \end{equation}% As the expressions for the zeros $z_{k}$ are not explicitly known, formula (% \ref{WF1}) for the Wightman function is not convenient. In addition, the terms in the sum are highly oscillatory for large values of quantum numbers. For the further evaluation of the Wightman function we apply to the series over $k$ the summation formula (\ref{SumFormula}) with $u=u_{1}$ and $% v=u_{2} $, taking in this formula% \begin{equation} h(z)=zX_{iz}^{l+1/2}(u_{1},u)X_{iz}^{l+1/2}(u_{1},u^{\prime })\frac{% e^{-i\omega (z)\Delta t}}{\omega (z)}, \label{hz} \end{equation}% where the function $\omega (z)$ is defined by (\ref{eigfreq}). The function (% \ref{hz}) has no poles in the right-half plane and, hence, $r[h(z)]=0$. The corresponding conditions are satisfied if $r+r^{\prime }+\Delta t/a<2r_{2}$. In particular, this is the case in the coincidence limit $t=t^{\prime }$ for the region under consideration. 
For the function (\ref{hz}) the part of the integral on the right-hand side of formula (\ref{SumFormula}) over the region $(0,x_{M})$ vanishes, and for the Wightman function one finds% \begin{eqnarray} W(x,x^{\prime }) &=&W_{1}(x,x^{\prime })-\frac{1}{8\pi a^{2}}% \sum_{l=0}^{\infty }\frac{(2l+1)P_{l}(\cos \gamma )}{\sqrt{\sinh r\sinh r^{\prime }}}\int_{x_{M}}^{\infty }dx\,x \notag \\ &&\times \frac{Q_{x-1/2}^{l+1/2}(u_{2})}{Q_{x-1/2}^{l+1/2}(u_{1})}\frac{% X_{x}^{l+1/2}(u_{1},u)X_{x}^{l+1/2}(u_{1},u^{\prime })}{% X_{x}^{l+1/2}(u_{1},u_{2})}\frac{\cosh (\sqrt{x^{2}-x_{M}^{2}}\Delta t/a)}{% \sqrt{x^{2}-x_{M}^{2}}}, \label{WF2} \end{eqnarray}% where we have defined% \begin{equation} x_{M}=\sqrt{M^{2}a^{2}+1-6\xi }. \label{xM} \end{equation}% In formula (\ref{WF2}), the first term on the right-hand side is given by% \begin{eqnarray} W_{1}(x,x^{\prime }) &=&-\frac{1}{32a^{3}}\sum_{l=0}^{\infty }\frac{% (2l+1)P_{l}(\cos \gamma )}{\sqrt{\sinh r\sinh r^{\prime }}}\int_{0}^{\infty }dx\,x\sinh (x\pi ) \notag \\ &&\times |\Gamma (ix+l+1)|^{2}\frac{% X_{ix}^{l+1/2}(u_{1},u)X_{ix}^{l+1/2}(u_{1},u^{\prime })}{% Q_{ix-1/2}^{l+1/2}(u_{1})Q_{-ix-1/2}^{l+1/2}(u_{1})}\frac{e^{-i\omega (x)\Delta t}}{\omega (x)}. \label{WF0} \end{eqnarray}% This function does not depend on the outer sphere radius whereas the second term in (\ref{WF2}) vanishes in the limit $r_{2}\rightarrow \infty $. Hence, the two-point function given by (\ref{WF0}) is the Wightman function for a scalar field in background spacetime described by the line element (\ref% {metric}) outside a single sphere with radius $r_{1}$ on which the field obeys Dirichlet boundary condition. This can also be seen by the direct evaluation using the corresponding eigenfunctions. Thus, we can interpret the second term on the right-hand side of (\ref{WF2}) as the part in the Wightman function induced by the presence of the outer sphere. 
An alternative form for the function (\ref{WF0}) is obtained by making use of the identity% \begin{eqnarray} \frac{X_{ix}^{l+1/2}(u_{1},u)X_{ix}^{l+1/2}(u_{1},u^{\prime })}{% Q_{ix-1/2}^{l+1/2}(u_{1})Q_{-ix-1/2}^{l+1/2}(u_{1})} &=&-\frac{4}{\pi ^{2}}% P_{ix-1/2}^{-l-1/2}(u)P_{ix-1/2}^{-l-1/2}(u^{\prime }) \notag \\ &&-\frac{4i}{\pi ^{3}}P_{ix-1/2}^{-l-1/2}(u_{1})\sum_{\sigma =\pm 1}\frac{% Q_{\sigma ix-1/2}^{-l-1/2}(u)Q_{\sigma ix-1/2}^{-l-1/2}(u^{\prime })}{% Q_{\sigma ix-1/2}^{-l-1/2}(u_{1})}. \label{Ident1} \end{eqnarray}% Substituting (\ref{Ident1}) into (\ref{WF0}), we can see that the part with the first term on the right of formula~(\ref{Ident1}),% \begin{eqnarray} W_{0}(x,x^{\prime }) &=&\frac{1}{8\pi ^{2}a^{3}}\sum_{l=0}^{\infty }\frac{% (2l+1)P_{l}(\cos \gamma )}{\sqrt{\sinh r\sinh r^{\prime }}}\int_{0}^{\infty }dx\,x\sinh (\pi x) \notag \\ &&\times |\Gamma (ix+l+1)|^{2}P_{ix-1/2}^{-l-1/2}(\cosh r)P_{ix-1/2}^{-l-1/2}(\cosh r^{\prime })\frac{e^{-i\omega (x)\Delta t}}{% \omega (x)}, \label{WF00} \end{eqnarray}% is the Wightman function for a scalar field on the background of the constant curvature space without boundaries (see \cite{Saha08}). In the part with the second term on the right-hand side of formula (\ref{Ident1}) we rotate the contour of integration over $x$ by the angle $\pi /2$ for the term with $% \sigma =-1$ and by the angle $-\pi /2$ for the term with $\sigma =1$.
As a result, the exterior Wightman function for a single spherical boundary is presented in the decomposed form \begin{eqnarray} W_{1}(x,x^{\prime }) &=&W_{0}(x,x^{\prime })-\frac{i}{4\pi ^{2}a^{2}}% \sum_{l=0}^{\infty }(-1)^{l}\frac{(2l+1)P_{l}(\cos \gamma )}{\sqrt{\sinh r\sinh r^{\prime }}}\int_{x_{M}}^{\infty }dx\,x\frac{\Gamma (x+l+1)}{\Gamma (x-l)% } \notag \\ &&\times \frac{P_{x-1/2}^{-l-1/2}(u_{1})}{Q_{x-1/2}^{-l-1/2}(u_{1})}% Q_{x-1/2}^{-l-1/2}(\cosh r)Q_{x-1/2}^{-l-1/2}(\cosh r^{\prime })\frac{\cosh (% \sqrt{x^{2}-x_{M}^{2}}\Delta t/a)}{\sqrt{x^{2}-x_{M}^{2}}}, \label{WF1ext} \end{eqnarray}% where the second term on the right-hand side is induced by the spherical boundary. The Wightman function for the region inside a single spherical shell is investigated in \cite{Saha08}. The corresponding expression is obtained from (\ref{WF1ext}) by the replacements $P_{x-1/2}^{-l-1/2}% \rightleftarrows Q_{x-1/2}^{-l-1/2}$ in the second term on the right of this formula. Taking the limit $a\rightarrow \infty $ with fixed $ar=R$, from the formulae given above we obtain the corresponding results for spherical boundaries in the Minkowski spacetime with radii $R_{1}=ar_{1}$ and $R_{2}=ar_{2}$. Note that in this limit one has $x_{M}=aM$ and the result does not depend on the curvature coupling parameter. 
Introducing a new integration variable $y=x/a$ and using the asymptotic formula for the gamma function for large values of the argument, from (\ref{WF2}) we find% \begin{eqnarray} W^{\mathrm{(M)}}(x,x^{\prime }) &=&W_{1}^{\mathrm{(M)}}(x,x^{\prime })-\sum_{l=0}^{\infty }\frac{(2l+1)P_{l}(\cos \gamma )}{4\pi ^{2}\sqrt{% RR^{\prime }}}\int_{M}^{\infty }dy\,y\frac{\cosh (\sqrt{y^{2}-M^{2}}\Delta t)% }{\sqrt{y^{2}-M^{2}}} \notag \\ &&\times \frac{K_{l+1/2}(R_{2}y)}{K_{l+1/2}(R_{1}y)}\frac{% G_{l+1/2}(R_{1}y,Ry)G_{l+1/2}(R_{1}y,R^{\prime }y)}{G_{l+1/2}(R_{1}y,R_{2}y)}% , \label{WFM} \end{eqnarray}% where we have introduced the notation $G_{\nu }(x,y)=K_{\nu }(x)I_{\nu }(y)-K_{\nu }(y)I_{\nu }(x)$. The first term on the right-hand side of formula (\ref{WFM}) is the Wightman function in the region outside a single spherical boundary with radius $R_{1}$ in the Minkowski bulk. This function is given by the expression% \begin{eqnarray} W_{1}^{\mathrm{(M)}}(x,x^{\prime }) &=&W_{0}^{\mathrm{(M)}}(x,x^{\prime })-\sum_{l=0}^{\infty }\frac{(2l+1)P_{l}(\cos \gamma )}{4\pi ^{2}\sqrt{% RR^{\prime }}}\int_{M}^{\infty }dy\,y \notag \\ &&\times K_{l+1/2}(Ry)K_{l+1/2}(R^{\prime }y)\frac{I_{l+1/2}(R_{1}y)}{% K_{l+1/2}(R_{1}y)}\frac{\cosh (\sqrt{y^{2}-M^{2}}\Delta t)}{\sqrt{y^{2}-M^{2}% }}. \label{WMink} \end{eqnarray}% Expressions (\ref{WFM}) and (\ref{WMink}) are special cases of the general formulae given in \cite{Saha01} for a scalar field with Robin boundary conditions in an arbitrary number of spatial dimensions. The vacuum expectation value of the field squared is obtained from the Wightman function by taking the coincidence limit of the arguments. This limit is divergent and some renormalization procedure is necessary. Here the important point is that for points outside the spherical shells the local geometry is the same as in the case without boundaries and, hence, the structure of the divergences is the same as well.
This is also directly seen from formulae (\ref{WF2}) and (\ref{WF1ext}), where the second terms on the right-hand sides are finite in the coincidence limit. Since in these formulae we have already explicitly subtracted the boundary-free part, the renormalization is reduced to that for the geometry without boundaries. In this way for the renormalized vacuum expectation value of the field squared one has% \begin{eqnarray} \langle \varphi ^{2}\rangle _{\mathrm{ren}} &=&\langle \varphi ^{2}\rangle _{1,\mathrm{ren}}+\frac{\pi }{16a^{2}}\sum_{l=0}^{\infty }\frac{2l+1}{\sinh r% }\int_{x_{M}}^{\infty }dx\,\frac{x}{\sqrt{x^{2}-x_{M}^{2}}} \notag \\ &&\times \frac{\Gamma (x+l+1)}{\Gamma (x-l)}\frac{Q_{x-1/2}^{l+1/2}(u_{2})}{% Q_{x-1/2}^{l+1/2}(u_{1})}\frac{[X_{x}^{l+1/2}(u_{1},u)]^{2}}{% X_{x}^{l+1/2}(u_{1},u_{2})}, \label{phi2} \end{eqnarray}% where the first term on the right-hand side is the corresponding quantity outside a spherical boundary with radius $r_{1}$ in the constant negative curvature space without boundaries and the second one is induced by the presence of the second spherical shell with the radius $r_{2}$. For the first term one has% \begin{eqnarray} \langle \varphi ^{2}\rangle _{1,\mathrm{ren}} &=&\langle \varphi ^{2}\rangle _{0,\mathrm{ren}}-\sum_{l=0}^{\infty }\frac{e^{i(l+1/2)\pi }}{4\pi ^{2}a^{2}}% \frac{(2l+1)}{\sinh r}\int_{x_{M}}^{\infty }dx\,x \notag \\ &&\times \frac{\Gamma (x+l+1)}{\Gamma (x-l)}\frac{P_{x-1/2}^{-l-1/2}(u_{1})}{% Q_{x-1/2}^{-l-1/2}(u_{1})}\frac{\left[ Q_{x-1/2}^{-l-1/2}(u)\right] ^{2}}{% \sqrt{x^{2}-x_{M}^{2}}}, \label{phi2single} \end{eqnarray}% where $\langle \varphi ^{2}\rangle _{0,\mathrm{ren}}$ is the vacuum expectation value\ for the field squared in the constant negative curvature space without boundaries and the second one is induced by the presence of a single spherical shell with radius $r_{1}$. 
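The exponential suppression of the boundary-induced integrands is easily illustrated in the Minkowski limit: by the large-argument asymptotics of the modified Bessel functions, the Bessel-function part of the second term in (\ref{WMink}) with $R=R^{\prime }$ decays as $e^{-2(R-R_{1})y}$. A numerical sketch for $l=0$; the radii below are sample assumptions:

```python
import numpy as np
from scipy.special import iv, kv

# Sample radii (assumptions): inner shell R1, observation point R = R' > R1
R1, R = 1.0, 2.0
nu = 0.5                      # order l + 1/2 with l = 0

def integrand(y):
    # Bessel-function part of the boundary-induced term in (WMink), R = R'
    return kv(nu, R*y)**2 * iv(nu, R1*y) / kv(nu, R1*y)

# K_nu(x) ~ e^{-x}, I_nu(x) ~ e^{x} at large argument, so the logarithmic
# slope of the integrand approaches -2(R - R1) for large y
y, h = 20.0, 1e-3
slope = (np.log(integrand(y + h)) - np.log(integrand(y))) / h
print(slope)   # close to -2(R - R1) = -2
```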
Note that the corresponding formula for the vacuum expectation value inside a spherical shell (see \cite{% Saha08}) is obtained from (\ref{phi2single}) by the replacements $% P_{x-1/2}^{-l-1/2}\rightleftarrows Q_{x-1/2}^{-l-1/2}$ in the second term on the right-hand side. The physical example discussed in this section demonstrates the advantages of applying Abel-Plana-type formulae in the evaluation of the expectation values of local physical observables in the presence of boundaries. For the summation of the corresponding mode-sums the explicit form of the eigenfrequencies is not necessary and the part corresponding to the boundary-free space is explicitly extracted. Further, the boundary-induced parts are presented in the form of integrals which rapidly converge and are finite in the coincidence limit for points away from the boundaries. In this way the renormalization procedure for local physical observables is reduced to that in quantum field theory without boundaries. Methods for the evaluation of global characteristics of the vacuum, such as the total Casimir energy, in problems where the eigenmodes are given implicitly as zeros of a given function, are described in references~\cite{Eliz94}. \section{Conclusion} \label{sec:Conclus} The associated Legendre functions arise in many problems of mathematical physics. By making use of the generalized Abel-Plana formula, we have derived the summation formula (\ref{SumFormula}) for the series over the zeros of the combination (\ref{LegComb}) of the associated Legendre functions with respect to the degree. This formula is valid for functions $h(z)$ meromorphic in the right half-plane and obeying condition (\ref{Cond2}). The summation formula may be extended to a class of functions having purely imaginary poles and satisfying the condition (\ref{Impolecond}).
For this, on the right-hand side of (\ref{SumFormula}) we have to add the sum of residues (\ref{Impoles2}) and take the principal value of the second integral on the right-hand side. Using formula (\ref{SumFormula}), the difference between the sum over the zeros of the combination of the associated Legendre functions and the corresponding integral is presented in terms of an integral involving the associated Legendre functions with real values of the degree plus residue terms. For a large class of functions $h(z)$ this integral converges exponentially fast and, in particular, is useful for numerical calculations. The Abel-Plana summation formula is obtained as a special case of formula (\ref{SumFormula}) with $\mu =1/2$ and for an analytic function $h(z)$. Applying the summation formula for the series over the zeros of the function $X_{iz}^{\mu }(\cosh (\lambda _{u}/s),\cosh (\lambda _{v}/s))$ and taking the limit $s\rightarrow \infty $, we have obtained formula (\ref{SumFormBess}) for the summation of the series over zeros of the combination of the Bessel functions. The latter is a special case of the formula previously derived in \cite{Sah1}. A physical application of the summation formula is given in section \ref% {sec:Phys}. For a quantum scalar field with a general curvature coupling parameter we have evaluated the positive frequency Wightman function and the vacuum expectation value of the field squared for the geometry of concentric spherical shells in a constant negative curvature space. Dirichlet boundary conditions are assumed on both shells. In the region between the shells the eigenfunctions have the form (\ref{eigfunc2}) and the corresponding eigenfrequencies are related to the zeros of the function $X_{iz}^{l+1/2}(u_{1},u_{2})$ by the formula (\ref{eigfreq}). For the evaluation of the corresponding series in the mode-sum (\ref{WF1}) for the Wightman function we apply the summation formula (\ref{SumFormula}) with the function $h(z)$ given by (\ref{hz}).
As a result this function is presented in the decomposed form (\ref{WF2}), where the first term on the right is the Wightman function for the region outside a single spherical boundary and the second one is induced by the presence of the outer sphere. By making use of the identity (\ref{Ident1}), we have presented the single shell Wightman function as a sum of two terms, formula (\ref{WF1ext}). The first one is the corresponding function in the constant curvature space without boundaries and the second one is induced by the shell. For points away from the shell the latter is finite in the coincidence limit and can be directly used for the evaluation of the boundary-induced part in the vacuum expectation value of the field squared. The renormalization is necessary for the boundary-free part only and this procedure is the same as that in quantum field theory without boundaries. In the region between the spherical shells the vacuum expectation value of the field squared is presented in the form (\ref{phi2}% ), where the first term on the right-hand side is the corresponding quantity outside a spherical boundary and is given by the expression (\ref{phi2single}% ). \section*{Acknowledgements} The work was supported by the Armenian Ministry of Education and Science Grant No. 119.
\section{Introduction} The subject of open quantum systems has undergone substantial growth in the last three decades, starting with contributions to the field of fundamental quantum physics with the aim of understanding the process of decoherence. Based on the von Neumann approach to the reduction of the state vector \cite{Neumann}, these contributions were mainly driven by the pioneering work of Zurek \cite{Zurek}, Caldeira and Leggett \cite{CL}, and Joos and Zeh \cite{JZ}. The repercussions of their work, together with the advent of the field of quantum information theory, led to renewed interest in open quantum systems, the focus now shifting from fundamental issues to practical applications in circuits to implement quantum logic operations. The master equation approach has long been used to derive system-reservoir dynamics, to account for energy loss under a weak coupling regime \cite{Walls}. Its effectiveness comes from the fact that the energy loss of most quantum mechanical systems, especially within quantum and atomic optics, can be handled by the single-pole Wigner-Weisskopf approximation \cite{WW}, where a perturbative expansion is performed in the system-reservoir coupling. Following developments by Caldeira and Leggett \cite{CL}, more sophisticated methods to deal with the system-reservoir strong coupling regime have been advanced, such as the Hu-Paz-Zhang \cite{HPZ} master equation, with time-dependent coefficients, which allows for non-Markovian dynamics. Halliwell and Yu \cite{HY} have published an alternative derivation of the Hu-Paz-Zhang equation, in which the dynamics is represented by the Wigner function, and an exact solution of this equation was given by Ford and O'Connell \cite{FO}. Recently, the non-Markovian dynamics of open quantum systems has been studied with renewed interest, especially in connection with quantum information theory, as in Refs. \cite{Nori,Wu}. 
However, in these studies, as well as in most derivations of master equations with time-dependent coefficients, the authors assume either the rotating-wave approximation (RWA) or the secular approximation (SA) for the system-reservoir coupling \cite{Makela}. Since non-Markovian behavior is sensitive to the counter-rotating terms in the interaction Hamiltonian, important features of the dynamics are missing under the RWA in the strong-coupling regime. It is worth mentioning that a study of the effect of the RWA and the SA on the non-Markovian behavior in the spin-boson model at zero temperature has already been advanced \cite{Makela}, without, however, deriving a master equation. Our goal in this work is to derive {and investigate the consequences of} a master equation within the strong-coupling regime, which prevents us from resorting to either the RWA or the SA in the system-reservoir coupling. Moreover, instead of the path-integral approach \cite{FH}, we use the formalism of quasi-probability distributions, thus enabling us to cast the problem as the solution of a linear system of equations. Our results follow from the general treatment of a bosonic dissipative network we have previously presented in Ref. \cite{MickelGeral}, where the network dynamics were investigated, and further used for quantum information purposes \cite{MickelBunch}. However, differently from our previous developments, we first consider the general model for a network of bosonic non-dissipative oscillators and, subsequently, we focus on some of these oscillators (or just one of them) as our system of interest, and treat all the others as a (structured) reservoir. The exact dynamics of the network allows us to obtain an exact dynamics of the system-reservoir interaction. Moreover, we present a simple inequality to distinguish between Markovian and non-Markovian dynamics.
Finally, this development enables us to generalize an earlier result by Glauber \cite{GlauberBook}. Under the RWA and for a zero-temperature reservoir, it was shown there that the quasi-probability functions maintain their shape while being displaced in phase space; in particular, coherent states remain coherent states. We find that, for a general Gaussian state, the center of its phase-space distribution follows classical dynamics (as in Ref. \cite{GlauberBook}), but its shape is changed. Furthermore, this change can be derived from the evolution of the vacuum state, which is no longer stationary, because of the counter-rotating terms. The change in shape is affected by both quantum and thermal fluctuations, and these contributions can be distinguished, at least in theory. Our developments can be straightforwardly translated to the derivation of an exact master equation for fermionic systems, using the reasoning in Ref. \cite{Glauber}. \section{Unitary dynamics of the universe} \label{sec:model} The universe considered here consists of a set of $M+N$ harmonic oscillators, which are linearly coupled to each other in an arbitrary network. We consider $M$ of them to be part of our system of interest, and the remaining $N$ to be part of a reservoir. However, at this stage, we are concerned with the full dynamics of the universe, and there is actually no difference between system and reservoir modes. The oscillators are described by mass $m_{k}$ and natural, isolated frequencies $\varpi_{k}$; the coupling between modes $k$ and $j$, which occurs via their position coordinates, has strength $\lambda_{kj}$ (which, without loss of generality, is symmetric in its indices). Before we write the Hamiltonian that describes such a universe, we note that it must be positive-definite, in order to be bounded from below and have a well-defined ground state.
Then, the Hamiltonian which is compatible with this model is \begin{equation} H=\frac{1}{2}\sum_{k=1}^{M+N}\left( \frac{1}{m_{k}}\hat{p}_{k}^{2}+m_{k}\varpi_{k}^{2}\hat{q}_{k}^{2}\right) +\frac{1}{4}\sum_{kj=1}^{M+N}\lambda_{kj}\left( \hat{q}_{k}-\hat{q}_{j}\right) ^{2}, \label{eq:hamiltonqp} \end{equation} where the coefficients $\lambda_{kj}$ form a real, symmetric matrix. We do not assume any particular form for them, so as to generate an arbitrary network, as depicted in Fig. \ref{fig:fig1}. The coupling term induces a change in the natural frequency of each mode, which is now represented by \begin{equation} \omega_{k}=\sqrt{\varpi_{k}^{2}+\frac{1}{m_{k}}\sum_{j=1}^{M+N}\lambda_{kj}}. \end{equation} \begin{figure}[ptb] \begin{center} \includegraphics[height=8.0cm, width=10.0cm]{fig1.eps} \caption{Network of coupled quantum harmonic oscillators in a general topology.} \end{center} \label{fig:fig1} \end{figure} Using this renormalized frequency, we can define annihilation operators $a_{k}$ and rewrite the Hamiltonian as \begin{equation} H=\sum_{k=1}^{M+N}\omega_{k}a_{k}^{\dagger}a_{k}+\frac{1}{2}\sum_{kj=1}^{M+N}g_{kj}\left( a_{k}+a_{k}^{\dagger}\right) \left( a_{j}+a_{j}^{\dagger}\right) , \label{eq:hamiltona} \end{equation} the coupling in this picture being given by \begin{equation} g_{kj}=\frac{\lambda_{kj}}{2\sqrt{m_{k}m_{j}\omega_{k}\omega_{j}}}. \label{eq:grenorm} \end{equation} From here on, we will focus on $\omega_{k}$ and $g_{kj}$, the latter forming a real, symmetric matrix. \subsection{Characteristic function} The dynamics given by the Hamiltonian of Eq.
(\ref{eq:hamiltona}) is best understood in terms of the characteristic function of a state, which is just the expected value of the multimode displacement operator in the symmetric ordering, \begin{equation} \chi\left( \left\{ \beta_{k}\right\} \right) =\left\langle \prod_{k=1}^{M+N}\exp\left( \beta_{k}a_{k}^{\dagger}-\beta_{k}^{\ast}a_{k}\right) \right\rangle \;, \end{equation} where $\left\{ \beta_{k}\right\} $ represents all coordinates $\beta_{k}$ with $k=1,\dots,M+N$, as well as their complex conjugates. The characteristic function carries the complete information about the state, and in particular information about moments of all orders; this is one of the reasons it is a better approach than using the Heisenberg equations of motion directly. The von Neumann equation in Hilbert space is mapped to a differential equation in dual phase space (where the characteristic function is defined), \begin{equation} \frac{\partial\chi}{\partial t}=i\sum_{k=1}^{M+N}\left( \omega_{k}\beta_{k}-\sum_{j=1}^{M+N}g_{kj}\left( \beta_{j}+\beta_{j}^{\ast}\right) \right) \frac{\partial\chi}{\partial\beta_{k}}+\text{H.c.} \end{equation} Being linear and of first order, this equation admits a simple ansatz, \begin{equation} \chi\left( \left\{ \beta_{k}\right\} ,t\right) =\chi\left( \left\{ \beta_{k}\left( t\right) \right\} ,0\right) , \label{eq:ansatz} \end{equation} which implies that the characteristic function maintains its shape, but the underlying (dual) phase space undergoes a linear transformation, given by \begin{equation} \beta_{k}\left( t\right) =\sum_{j=1}^{M+N}\left( U_{j,k}\left( t\right) \beta_{j}-V_{j,k}\left( t\right) \beta_{j}^{\ast}\right) .
\label{eq:linear} \end{equation} This transformation is defined by the solution to a system of differential equations, \begin{subequations} \begin{align} \frac{dU_{kj}}{dt} & =i\omega_{j}U_{kj}-i\sum_{n=1}^{M+N}\left( U_{k,n}-V_{k,n}\right) g_{n,j},\label{s1}\\ \frac{dV_{kj}}{dt} & =-i\omega_{j}V_{kj}-i\sum_{n=1}^{M+N}\left( U_{k,n}-V_{k,n}\right) g_{n,j}. \label{s2} \end{align} \end{subequations} The Heisenberg equations of motion for the first moments have a similar structure. However, since they refer only to first moments, they do not represent a complete solution of the problem, which can be obtained from the characteristic function with the same computational effort. \section{Reduced dynamics of the system} From this point on, we shall be interested only in the behavior of a subset of $M$ oscillators (the ones labeled $1$ to $M$), which form our system of interest, while the oscillators labeled $M+1$ to $M+N$ play the role of a (structured) reservoir. The complete solution to the dynamics is given by Eq. (\ref{eq:ansatz}); in order to eliminate the reservoir degrees of freedom, all we need to do is set $\beta_{k}=0$ if $k>M$ (i.e., evaluate the characteristic function at the origin of the phase space of the modes we want to eliminate from the description). Before continuing, we observe that, although not strictly necessary in our method, for the sake of simplicity we assume the usual sudden-coupling hypothesis, i.e., that the states of system and reservoir are initially uncorrelated: \begin{equation} \chi_{SR}\left( \left\{ \beta_{k}\right\} ,0\right) =\chi_{S}\left( \left\{ \beta_{k}\right\} _{k\leq M},0\right) \chi_{R}\left( \left\{ \beta_{m}\right\} _{m>M}\right) .
\label{eq:initial} \end{equation} Tracing out the reservoir degrees of freedom, following the procedure above, leads to \begin{equation} \chi_{S}\left( \left\{ \beta_{k}\right\} ,t\right) =\chi_{S}\left( \left\{ \beta_{k}\left( t\right) \right\} ,0\right) \chi_{\text{in}}\left( \left\{ \beta_{k}\right\} ,t\right) \;, \label{eq:reducedsolution} \end{equation} where the indices run only through the degrees of freedom of the system (i.e., $k$ runs from $1$ to $M$). Therefore, we must use Eq. (\ref{eq:linear}) with $\beta_{k}=0$ for $k>M$, and it follows that we only need $U_{kj}$ and $V_{kj}$ for $k\leq M$. Although Eqs. (\ref{s1},\ref{s2}) are written as a matrix equation, they are actually a set of $M+N$ independent vector equations (one for each row index $k$), and we conclude that only a few of these need to be solved. In fact, if our system of interest were a single oscillator, we would reduce the problem of finding its exact dynamics to a single vector equation of dimension $2(N+1)$. The two terms of Eq. (\ref{eq:reducedsolution}) are called the homogeneous term (because it depends on the initial state of the system) and the inhomogeneous term (because it is independent of that state, depending only on the initial state of the reservoir). The homogeneous part of the solution is just the linear transformation of phase space induced only by the elements $U_{kj}$ and $V_{kj}$ for which both $k,j\leq M$. These elements can be arranged in two general complex $M\times M$ matrices, resulting in $4M^{2}$ real parameters. At this point, we make an additional assumption that the initial state of the reservoir is Gaussian \cite{Gaussian}, i.e., its characteristic function has the Gaussian form. Moreover, the reservoir is unbiased (i.e., $\left\langle a_{m}\right\rangle =0$ for $m>M$). These are reasonable hypotheses, since the Gaussian states include the thermal states of quadratic Hamiltonians.
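To make the structure of Eqs. (\ref{s1},\ref{s2}) concrete, the following Python sketch (an illustration with arbitrary test frequencies and couplings, not taken from the paper) integrates them for a small network with a standard fourth-order Runge-Kutta step, starting from the initial conditions $U(0)=\mathbb{1}$, $V(0)=0$ implied by Eq. (\ref{eq:linear}), and verifies that the Bogoliubov-type identity $UU^{\dagger}-VV^{\dagger}=\mathbb{1}$, which encodes the unitarity of the underlying network dynamics, is preserved.

```python
import numpy as np

def make_network(n_modes, seed=0):
    """Random symmetric coupling matrix g and frequencies omega
    for a small test network (arbitrary illustrative values)."""
    rng = np.random.default_rng(seed)
    omega = rng.uniform(1.0, 2.0, n_modes)
    g = 0.1 * rng.standard_normal((n_modes, n_modes))
    g = 0.5 * (g + g.T)            # g must be real and symmetric
    np.fill_diagonal(g, 0.0)
    return omega, g

def rhs(U, V, omega, g):
    """Right-hand sides of Eqs. (s1), (s2):
    dU/dt = i U Omega - i (U - V) g,  dV/dt = -i V Omega - i (U - V) g."""
    W = (U - V) @ g
    return 1j * U * omega - 1j * W, -1j * V * omega - 1j * W

def evolve(omega, g, t_final=1.0, dt=1e-3):
    n = len(omega)
    U = np.eye(n, dtype=complex)   # beta_k(0) = beta_k
    V = np.zeros((n, n), dtype=complex)
    for _ in range(int(round(t_final / dt))):
        # classical fourth-order Runge-Kutta step
        k1u, k1v = rhs(U, V, omega, g)
        k2u, k2v = rhs(U + dt/2*k1u, V + dt/2*k1v, omega, g)
        k3u, k3v = rhs(U + dt/2*k2u, V + dt/2*k2v, omega, g)
        k4u, k4v = rhs(U + dt*k3u, V + dt*k3v, omega, g)
        U = U + dt/6*(k1u + 2*k2u + 2*k3u + k4u)
        V = V + dt/6*(k1v + 2*k2v + 2*k3v + k4v)
    return U, V

omega, g = make_network(4)
U, V = evolve(omega, g)
# Unitarity of the full evolution appears as a Bogoliubov identity.
assert np.allclose(U @ U.conj().T - V @ V.conj().T, np.eye(4), atol=1e-6)
```

Note that $V$ is driven away from zero by the counter-rotating terms; in the RWA it would vanish identically.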
The inhomogeneous characteristic function is then also a Gaussian function: \begin{align} \chi_{in}\left( \left\{ \beta_{k}\right\} ,t\right) & =\exp\left( -\frac{1}{2}\sum_{kj=1}^{M}A_{kj}\left( t\right) \beta_{k}\beta_{j}^{\ast}\right) \nonumber\\ & \times\exp\left( \sum_{kj=1}^{M}B_{kj}\left( t\right) \beta_{k}\beta_{j}+\text{c.c.}\right) . \end{align} The time-dependent functions $A_{kj}$ and $B_{kj}$ may be divided into two terms, in the form $A_{kj}=A_{kj}^{\left( 0\right) }+A_{kj}^{\left( th\right) }$ (and similarly for $B$), the first of which is the solution for a zero-temperature reservoir, \begin{subequations} \label{eq:pqzero} \begin{align} A_{kj}^{\left( 0\right) } & =\frac{1}{2}\sum_{m=M+1}^{M+N}\left( U_{km}U_{jm}^{\ast}+V_{km}V_{jm}^{\ast}\right) ,\\ B_{kj}^{\left( 0\right) } & =\frac{1}{2}\sum_{m=M+1}^{M+N}\left( U_{km}V_{jm}+V_{km}U_{jm}\right) \;, \end{align} \end{subequations} while the second incorporates the effects of the reservoir initial state, which is completely characterized by the second-order moments $\left\langle a_{m}^{\dagger}a_{n}\right\rangle _{0}$ and $\left\langle a_{m}a_{n}\right\rangle _{0}$, \begin{subequations} \label{eq:pqtemp} \begin{align} A_{kj}^{\left( th\right) } & =\sum_{m,n=M+1}^{M+N}\left\langle a_{m}^{\dagger}a_{n}\right\rangle _{0}\left( U_{km}U_{jn}^{\ast}+V_{kn}V_{jm}^{\ast}\right) \nonumber\\ & +\sum_{m,n=M+1}^{M+N}\left( \left\langle a_{m}a_{n}\right\rangle _{0}V_{km}U_{jn}^{\ast}+\text{c.c.}\right) ,\\ B_{kj}^{\left( th\right) } & =\sum_{m,n=M+1}^{M+N}\left\langle a_{m}^{\dagger}a_{n}\right\rangle _{0}\left( U_{kn}V_{jm}+V_{km}U_{jn}^{\ast}\right) \nonumber\\ & +\sum_{m,n=M+1}^{M+N}\left( \left\langle a_{m}a_{n}\right\rangle _{0}V_{km}V_{jn}+\text{c.c.}\right) \;. \end{align} Both $A$ and $B$ form complex $M\times M$ matrices; however, $A$ must be Hermitian, while $B$ is not.
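A numerical sketch can illustrate these formulas. Below (with arbitrary test values for the network; nothing here is taken from the paper beyond the equations themselves), the linear system of Eqs. (\ref{s1},\ref{s2}) is written as a single matrix exponential, the blocks of $U$ and $V$ with system rows and reservoir columns are extracted, and the zero-temperature functions $A^{(0)}$ and $B^{(0)}$ of Eq. (\ref{eq:pqzero}) are assembled. The checks confirm that $A^{(0)}$ is Hermitian, $B^{(0)}$ is symmetric, and that $B^{(0)}$ is nonzero: even at zero temperature, the counter-rotating terms distort the vacuum.

```python
import numpy as np

def expm(A):
    """Matrix exponential via eigendecomposition (generic random
    test matrices are diagonalizable)."""
    w, P = np.linalg.eig(A)
    return P @ np.diag(np.exp(w)) @ np.linalg.inv(P)

def propagate(omega, g, t):
    """U(t), V(t) of Eqs. (s1), (s2), written as d(U, V)/dt = (U, V) K."""
    n = len(omega)
    W = np.diag(omega).astype(complex)
    K = np.block([[1j * (W - g), -1j * g],
                  [1j * g, -1j * (W - g)]])
    E = expm(K * t)
    return E[:n, :n], E[:n, n:]        # U(0) = identity, V(0) = 0

M, N = 2, 3                            # system and reservoir modes (test sizes)
rng = np.random.default_rng(1)
omega = rng.uniform(1.0, 2.0, M + N)
g = 0.1 * rng.standard_normal((M + N, M + N))
g = 0.5 * (g + g.T)
np.fill_diagonal(g, 0.0)

U, V = propagate(omega, g, t=1.0)
Ur, Vr = U[:M, M:], V[:M, M:]          # system rows, reservoir columns
A0 = 0.5 * (Ur @ Ur.conj().T + Vr @ Vr.conj().T)
B0 = 0.5 * (Ur @ Vr.T + Vr @ Ur.T)

assert np.allclose(A0, A0.conj().T)    # A is Hermitian
assert np.allclose(B0, B0.T)           # B is symmetric
assert np.linalg.norm(B0) > 1e-8       # vacuum distortion at zero temperature
```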
This represents an additional $3M^{2}$ real parameters, giving a total of $7M^{2}$ real parameters that completely specify a given Gaussian evolution map (so called because, if the initial state of the system is Gaussian, it will remain Gaussian). The functions $A_{kj}^{\left( 0\right) }$ and $B_{kj}^{\left( 0\right) }$ represent the solution for a zero-temperature reservoir; therefore, they represent the quantum, or zero-point, fluctuations. The functions $A_{kj}^{\left( th\right) }$ and $B_{kj}^{\left( th\right) }$ represent the thermal fluctuations (when the reservoir is assumed to be in a thermal state), as well as other effects that may arise due to, e.g., squeezing in the reservoir modes. \section{Single-mode Dynamics} The above result may be written in a simpler fashion for the case of a single oscillator taken as the system of interest: \end{subequations} \begin{align} \chi\left( \beta,t\right) = & \chi\left( U\beta-V\beta^{\ast},0\right) \nonumber\\ & \times\exp\left( -A\left\vert \beta\right\vert ^{2}+\frac{1}{2}B\beta^{2}+\frac{1}{2}B^{\ast}\beta^{\ast2}\right) \;, \label{eq:solution} \end{align} where the indices $1,1$ are dropped. The single-mode Gaussian map is completely characterized by $7$ real parameters (since $A$ is real, and $U$, $V$ and $B$ are complex). When a single mode is considered as the system of interest, we can perform a diagonalization of the reservoir part of the Hamiltonian, and consider the interaction of the system with each of the reservoir normal modes, as depicted in Fig. \ref{fig:fig2} (normal modes of the reservoir do not interact with each other, but interact with the system). \begin{figure}[ptb] \begin{center} \includegraphics[height=8.0cm, width=10.0cm]{fig2.eps} \caption{The system of interest (represented by a single harmonic oscillator of the original network) interacting with the normal modes of the diagonalized reservoir (represented by the remaining oscillators of the network).}
\label{fig:fig2} \end{center} \end{figure} In order to get physical results in the limit $N\rightarrow\infty$, it is essential to keep track of the oscillator masses ($m_{k}$ in Eq. (\ref{eq:hamiltonqp})). Essentially, the central oscillator must be much more massive than the reservoir modes. This is the case in Brownian motion, where the observed particle, though mesoscopic, is still much larger than the molecules of the fluid bath it interacts with. It is also the case in quantum optics, where the mode inside a cavity has a much smaller mode volume (i.e., it is concentrated in a small region) than the vacuum modes outside the cavity. We shall consider then that the central oscillator has mass $M$ and the reservoir modes have mass $\mu$, with $M\gg\mu$, so that the renormalized frequencies and couplings are \begin{subequations} \begin{align} \omega_{1} & =\sqrt{\varpi_{1}^{2}+\frac{1}{M}\sum_{j=2}^{N+1}\lambda_{1j}},\\ \omega_{j} & =\sqrt{\varpi_{j}^{2}+\frac{1}{\mu}\lambda_{1j}}\quad\left( 2\leq j\leq N+1\right) ,\\ g_{j} & =\frac{1}{2\sqrt{\mu M}}\frac{\lambda_{1j}}{\sqrt{\omega_{1}\omega_{j}}}\quad\left( 2\leq j\leq N+1\right) . \end{align} \end{subequations} Dropping the first index, Eqs. (\ref{s1},\ref{s2}) become \begin{subequations} \begin{align} \frac{dU_{1}}{dt} & =i\omega_{1}U_{1}-i\sum_{j=2}^{N+1}g_{j}\left( U_{j}-V_{j}\right) ,\\ \frac{dV_{1}}{dt} & =-i\omega_{1}V_{1}-i\sum_{j=2}^{N+1}g_{j}\left( U_{j}-V_{j}\right) ,\\ \frac{dU_{j}}{dt} & =i\omega_{j}U_{j}-ig_{j}\left( U_{1}-V_{1}\right) \quad\quad\left( j\neq1\right) ,\\ \frac{dV_{j}}{dt} & =-i\omega_{j}V_{j}-ig_{j}\left( U_{1}-V_{1}\right) \quad\quad\left( j\neq1\right) \;. \end{align} The bottom two equations can be solved by considering $U_{1}$ and $V_{1}$ as external parameters.
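Explicitly, since $U_{j}(0)=V_{j}(0)=0$ for $j\geq2$, the method of variation of constants gives
\begin{align*}
U_{j}\left( t\right)  & =-ig_{j}\int_{0}^{t}d\tau\,e^{i\omega_{j}\left( t-\tau\right) }\left( U_{1}\left( \tau\right) -V_{1}\left( \tau\right) \right) ,\\
V_{j}\left( t\right)  & =-ig_{j}\int_{0}^{t}d\tau\,e^{-i\omega_{j}\left( t-\tau\right) }\left( U_{1}\left( \tau\right) -V_{1}\left( \tau\right) \right) ,
\end{align*}
so that the reservoir amplitudes are driven solely by the combination $U_{1}-V_{1}$ of the system amplitudes.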
Then, by substituting them into the top two equations, we get a pair of coupled integro-differential equations, \end{subequations} \begin{subequations} \begin{align} \frac{dU_{1}}{dt} & =i\omega_{1}U_{1}+i\int_{0}^{t}d\tau\,h\left( t-\tau\right) \left( U_{1}\left( \tau\right) -V_{1}\left( \tau\right) \right) ,\label{eq:u1integro}\\ \frac{dV_{1}}{dt} & =-i\omega_{1}V_{1}+i\int_{0}^{t}d\tau\,h\left( t-\tau\right) \left( U_{1}\left( \tau\right) -V_{1}\left( \tau\right) \right) \;, \label{eq:v1integro} \end{align} \end{subequations} which depend on the reservoir topology only through the function \begin{equation} h\left( t\right) =\sum_{j=2}^{N+1}g_{j}^{2}\sin\left( \omega_{j}t\right) =\frac{1}{4\mu M\omega_{1}}\sum_{j=2}^{N+1}\frac{\lambda_{j}^{2}}{\omega_{j}}\sin\left( \omega_{j}t\right) \;, \end{equation} which in turn is related to the Fourier transform of the reservoir spectral density \begin{equation} J\left( \omega\right) =\sum_{j=2}^{N+1}g_{j}^{2}\delta\left( \omega-\omega_{j}\right) =\frac{1}{4\mu M\omega_{1}}\sum_{j=2}^{N+1}\frac{\lambda_{j}^{2}}{\omega_{j}}\delta\left( \omega-\omega_{j}\right) . \end{equation} This is the homogeneous part of the solution. To obtain the inhomogeneous one, we need to use the solution found previously for $U_{k}$ and $V_{k}$ in terms of the now known $U_{1}$ and $V_{1}$, and then use Eqs. (\ref{eq:pqzero}) and (\ref{eq:pqtemp}). \section{Master Equation} The complete solution for the single-mode dynamics is Eq. (\ref{eq:solution}), with time-dependent functions $U$, $V$, $A$ and $B$. It was derived by assuming an explicit microscopic model for the reservoir as a set of other modes, which are coupled to the mode of interest, but over which the experimenter has little control (except for macroscopic parameters such as temperature). In this section, our goal is to find a dynamical equation (in fact, a master equation) whose solution is precisely Eq.
(\ref{eq:solution}), but which does not need to involve any other degrees of freedom besides those of the system. We start by differentiating Eq. (\ref{eq:solution}) with respect to time, and then mapping it from phase space back to Hilbert space, \begin{equation} \frac{d\rho}{dt}=-i\left[ H_{S}\left( t\right) ,\rho\left( t\right) \right] +\mathcal{D}_{t}\left( \rho\left( t\right) \right) , \label{eq:master} \end{equation} where we have a time-dependent effective Hamiltonian \begin{equation} H_{S}\left( t\right) =\omega\left( t\right) a^{\dagger}a+\xi\left( t\right) a^{\dagger2}+\xi^{\ast}\left( t\right) a^{2}\;, \label{eq:masterham} \end{equation} and a time-dependent dissipation super-operator, \begin{align} \mathcal{D}_{t}\left( \rho\right) = & \frac{\gamma_{1}\left( t\right) +\gamma_{2}\left( t\right) }{2}\left( \left[ a\rho,a^{\dagger}\right] +\left[ a,\rho a^{\dagger}\right] \right) \nonumber\\ & +\frac{\gamma_{2}\left( t\right) }{2}\left( \left[ a^{\dagger}\rho,a\right] +\left[ a^{\dagger},\rho a\right] \right) \nonumber\\ & -\frac{1}{2}\left( \eta\left( t\right) \left( \left[ a^{\dagger}\rho,a^{\dagger}\right] +\left[ a^{\dagger},\rho a^{\dagger}\right] \right) +\text{H.c.}\right) \;.
\label{eq:masterdiss} \end{align} This master equation depends on $7$ real time-dependent parameters, which in turn depend on the $7$ real parameters that define the solution, Eq. (\ref{eq:solution}); the three real parameters are \begin{subequations} \begin{equation} \omega\left( t\right) =\frac{1}{\left\vert U\right\vert ^{2}-\left\vert V\right\vert ^{2}}\Im\left( U^{\ast}\frac{dU}{dt}-V^{\ast}\frac{dV}{dt}\right) \;, \end{equation} \begin{align} \gamma_{1}\left( t\right) = & \frac{-2}{\left\vert U\right\vert ^{2}-\left\vert V\right\vert ^{2}}\Re\left( U^{\ast}\frac{dU}{dt}-V^{\ast}\frac{dV}{dt}\right) \nonumber\\ = & -\frac{d}{dt}\log\left( \left\vert U\right\vert ^{2}-\left\vert V\right\vert ^{2}\right) \;, \label{eq:gammafrommapa} \end{align} \begin{equation} \gamma_{2}\left( t\right) =\frac{dA}{dt}+\gamma_{1}\left( A-\frac{1}{2}\right) +2\Im\left( \xi^{\ast}B\right) \;, \label{eq:gamma2} \end{equation} and the two complex parameters \begin{equation} \xi\left( t\right) =\frac{-i}{\left\vert U\right\vert ^{2}-\left\vert V\right\vert ^{2}}\left( U\frac{dV}{dt}-V\frac{dU}{dt}\right) , \end{equation} \begin{equation} \eta\left( t\right) =\frac{dB}{dt}+\left( \gamma_{1}+2i\omega\right) B+2i\xi A. \label{eq:eta} \end{equation} The time-dependent functions $\omega\left( t\right) $, $\gamma_{1}\left( t\right) $ and $\xi\left( t\right) $ are independent of the initial state of the reservoir, while $\gamma_{2}\left( t\right) $ and $\eta\left( t\right) $ depend on it. The dissipator, Eq.
(\ref{eq:masterdiss}), is not explicitly in Lindblad-like form, but can be put into it, \end{subequations} \begin{equation} \mathcal{D}_{t}\left( \rho\right) =\sum_{n=1}^{2}\frac{\lambda_{n}\left( t\right) }{2}\left( \left[ L_{n}\left( t\right) \rho,L_{n}^{\dagger}\left( t\right) \right] +\left[ L_{n}\left( t\right) ,\rho L_{n}^{\dagger}\left( t\right) \right] \right) , \label{eq:masterdisslind} \end{equation} by defining the Lindblad operators \begin{subequations} \begin{align} L_{1}\left( t\right) & =\cos\left( \frac{\theta}{2}\right) a-\sin\left( \frac{\theta}{2}\right) \frac{\eta}{\left\vert \eta\right\vert }a^{\dagger},\label{1}\\ L_{2}\left( t\right) & =\cos\left( \frac{\theta}{2}\right) a^{\dagger}+\sin\left( \frac{\theta}{2}\right) \frac{\eta^{\ast}}{\left\vert \eta\right\vert }a\;, \label{2} \end{align} \end{subequations} and Lindblad rates \begin{subequations} \begin{align} \lambda_{1}\left( t\right) & =\frac{\gamma_{1}}{2}+\frac{\gamma_{1}}{\left\vert \gamma_{1}\right\vert }\sqrt{\frac{\gamma_{1}^{2}}{4}+\left\vert \eta\right\vert ^{2}}+\gamma_{2},\\ \lambda_{2}\left( t\right) & =\frac{\gamma_{1}}{2}-\frac{\gamma_{1}}{\left\vert \gamma_{1}\right\vert }\sqrt{\frac{\gamma_{1}^{2}}{4}+\left\vert \eta\right\vert ^{2}}+\gamma_{2}\;, \end{align} \end{subequations} with the auxiliary definition \begin{equation} \theta=\arctan\left( \frac{2\left\vert \eta\right\vert }{\gamma_{1}}\right) \quad\left( -\frac{\pi}{2}\leq\theta\leq\frac{\pi}{2}\right) . \end{equation} The standard master equation derived with the Born-Markov approximation has the same form as Eqs. (\ref{eq:master})-(\ref{eq:masterdiss}), but with constant-in-time parameters. In it, each term has a physical meaning: \begin{itemize} \item The first term in Eq. (\ref{eq:masterham}), with $\omega\left( t\right) =\omega_{1}+\Delta\omega\left( t\right) $, accounts for the free dynamics of the system, modified by a frequency shift due to its interaction with the reservoir. \item The second term in Eq.
(\ref{eq:masterham}) is a squeezing term, arising from an asymmetry between position and momentum variables in the coupling Hamiltonian. However, in the weak-coupling regime, this term is small (being exactly zero in the RWA), leading to a negligible squeezing effect. \item $\gamma_{1}\left( t\right) $ is a decay rate, that drives the center of the system wave-packet towards its equilibrium at the origin of phase space. \item $\gamma_{2}\left( t\right) $ is a diffusion coefficient, related to injection of extra noise into the system due to non-zero reservoir temperature and counter-rotating terms, which only spreads the wave-packet without affecting the trajectory of its center. \item $\eta\left( t\right) $ is a coefficient of anomalous diffusion, which injects different levels of noise in position and momentum. From Eqs. (\ref{1},\ref{2}), we see that, when $\eta\neq0$, the Lindblad operators are not given by $a$ and $a^{\dagger}$, but by linear combinations of the two, giving rise to anomalous diffusion. \end{itemize} \subsection{Markovian and non-Markovian behavior} An interesting discussion in the current literature (see Ref. \cite{NonMarkovian} and references therein) concerns non-Markovian behavior. The Born-Markov approximation always leads to a Lindblad equation with a dissipator written in the form of Eq.(\ref{eq:masterdisslind}), with rates $\lambda_{n}\left( t\right) $, which are positive but may vary in time (in which case it can be called a \emph{time-dependent Markovian process}). If, at any given time, one of these rates assumes a negative value, then it is said to be a \emph{non-Markovian process}, according to the divisibility criterion of Rivas-Huelga-Plenio \cite{NonMarkovian,RHP}. The model we have developed allows us to compute these rates exactly from the solution, obtained through the system-reservoir interaction Hamiltonian. 
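As an illustration of how such a rate is extracted from the exact solution, the sketch below (with arbitrary test values for the reservoir frequencies and couplings) integrates the single-mode equations for $U_{1}$, $V_{1}$ and the reservoir amplitudes, and then evaluates the decay rate $\gamma_{1}(t)=-\frac{d}{dt}\log(|U_{1}|^{2}-|V_{1}|^{2})$ of Eq. (\ref{eq:gammafrommapa}) by numerical differentiation; for a structured (few-mode) reservoir, $\gamma_{1}$ may turn negative at intermediate times, signalling information backflow.

```python
import numpy as np

# Illustrative single-mode parameters: one oscillator weakly coupled to a
# few reservoir modes (all values are arbitrary test choices).
omega1 = 1.0
omega_r = np.array([0.8, 1.1, 1.4])
g_r = np.array([0.06, 0.04, 0.05])

def rhs(y):
    """Single-mode equations; y = (U1, V1, U_j..., V_j...)."""
    n = len(omega_r)
    U1, V1 = y[0], y[1]
    Uj, Vj = y[2:2 + n], y[2 + n:]
    drive = 1j * np.sum(g_r * (Uj - Vj))
    dU1 = 1j * omega1 * U1 - drive
    dV1 = -1j * omega1 * V1 - drive
    dUj = 1j * omega_r * Uj - 1j * g_r * (U1 - V1)
    dVj = -1j * omega_r * Vj - 1j * g_r * (U1 - V1)
    return np.concatenate(([dU1, dV1], dUj, dVj))

dt, steps = 1e-3, 20000
y = np.zeros(2 + 2 * len(omega_r), dtype=complex)
y[0] = 1.0                                  # U1(0) = 1, everything else 0
norm2 = np.empty(steps + 1)
norm2[0] = 1.0
for i in range(steps):                      # fourth-order Runge-Kutta
    k1 = rhs(y); k2 = rhs(y + dt/2*k1)
    k3 = rhs(y + dt/2*k2); k4 = rhs(y + dt*k3)
    y = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    norm2[i + 1] = abs(y[0])**2 - abs(y[1])**2

assert np.all(norm2 > 0)                    # weak coupling: no divergence
gamma1 = -np.gradient(np.log(norm2), dt)    # Eq. (eq:gammafrommapa)
# Intervals with gamma1 < 0 flag a candidate non-Markovian episode.
non_markovian = bool(np.any(gamma1 < -1e-6))
```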
We can thus describe the system as \emph{Markovian} if the following conditions hold for all times $t$: \begin{subequations} \begin{align} \gamma_{1}\left( t\right) +2\gamma_{2}\left( t\right) & \geq0,\\ \gamma_{1}\left( t\right) \gamma_{2}\left( t\right) +\gamma_{2}^{2}\left( t\right) -\left\vert \eta\left( t\right) \right\vert ^{2} & \geq0\;, \end{align} \end{subequations} where the functions are defined in Eqs. (\ref{eq:gammafrommapa}), (\ref{eq:gamma2}) and (\ref{eq:eta}). \section{Rotating Wave Approximation} In many physical systems described by the Hamiltonian of Eq. (\ref{eq:hamiltona}), the typical coupling intensity, $\left\vert g_{kj}\right\vert $, is many orders of magnitude smaller than the frequencies $\omega_{k}$, characterizing the \emph{weak-coupling regime}. It is then a good approximation to drop the counter-rotating terms ($a_{k}a_{j}$ and $a_{k}^{\dagger}a_{j}^{\dagger}$), a procedure which is known as the \emph{rotating-wave approximation} (\emph{RWA}). Eqs. (\ref{s1},\ref{s2}) are greatly simplified, with $V_{kj}=0$ and $U_{kj}$ obeying \begin{equation} \frac{dU_{kj}}{dt}=i\omega_{j}U_{kj}-i\sum_{n=1}^{M+N}U_{kn}g_{nj}\;. \end{equation} The condition $V_{kj}=0$ (for all $k,j$) implies both $\xi\left( t\right) =0$ (no squeezing term in the effective system Hamiltonian) and $B^{\left( 0\right) }=0$; unless the reservoir initial state has some degree of squeezing (i.e., $\left\langle a_{m}a_{n}\right\rangle _{0}\neq0$ for some $m,n$), we also have $B^{\left( th\right) }=0$. Together, this implies that $\eta\left( t\right) =0$. The condition $\xi\left( t\right) =\eta\left( t\right) =0$ is required to maintain the symmetry between position and momentum variables (the exchange $\left( \hat{q},\hat{p}\right) \leftrightarrow\left( \hat{p},-\hat{q}\right) $ leaves the RWA Hamiltonian unchanged, while it changes the one in Eq. (\ref{eq:hamiltonqp})). Therefore, in the RWA, the squeezing term in Eq. (\ref{eq:masterham}) and the last term in Eq.
(\ref{eq:masterdiss}) both vanish at all times, leading to the usual three terms (frequency shift, dissipation and diffusion) in the expression. The Markovianity condition is then simplified to \begin{subequations} \begin{align} \gamma_{1}\left( t\right) +2\gamma_{2}\left( t\right) & \geq0,\\ \gamma_{2}\left( t\right) & \geq0. \end{align} \end{subequations} \section{Natural Basis For System Evolution} It is a well-known result \cite{GlauberBook} that a coherent state remains coherent when in contact with a reservoir at absolute zero, if one assumes the RWA. This makes coherent states a natural basis to analyze the system dynamics, ultimately motivating Glauber and Sudarshan to define the normal-order quasi-probability $P$ function: \begin{equation} \rho\left( t\right) =\int d^{2M}\left\{ \alpha\right\} P\left( \left\{ \alpha\right\} ,t\right) \left\vert \left\{ \alpha\right\} \right\rangle \left\langle \left\{ \alpha\right\} \right\vert . \end{equation} We have returned to the general case, where the system is composed of $M$ modes. The coherent state follows a dynamics in phase space that can be written $\left\vert \left\{ \alpha\right\} \right\rangle \rightarrow \left\vert \left\{ \alpha\left( t\right) \right\} \right\rangle $, where $\left\{ \alpha\left( t\right) \right\} $ is given by (compare with Eq. (\ref{eq:linear})) \begin{equation} \alpha_{k}\left( t\right) =\sum_{j=1}^{M}\left( U_{kj}\alpha_{j}+V_{kj}\alpha_{j}^{\ast}\right) \quad\left( 1\leq k\leq M\right) \;. \label{eq:lineardirect} \end{equation} Combining these two equations, we have the familiar result \begin{equation} \rho\left( t\right) =\int d^{2M}\left\{ \alpha\right\} P\left( \left\{ \alpha\right\} ,0\right) \left\vert \left\{ \alpha\left( t\right) \right\} \right\rangle \left\langle \left\{ \alpha\left( t\right) \right\} \right\vert .
\label{eq:glauberevolution} \end{equation} The fact that coherent states remain coherent is intimately connected with the fact that the vacuum is a stationary state of this non-unitary evolution. However, for non-zero temperature, or when one includes the counter-rotating terms, this is no longer true: coherent states do not maintain their coherence, and we must resort to another basis, formed by Gaussian states. In the same way that the coherent states are generated by displacing the vacuum, the time-dependent Gaussian basis states are generated by displacing a squeezed thermal state: \begin{equation} \rho_{B}\left( \left\{ \alpha\right\} ,t\right) =D\left( \left\{ \alpha\right\} \right) \rho_{o}\left( t\right) D^{\dagger}\left( \left\{ \alpha\right\} \right) , \end{equation} where $\rho_{o}\left( t\right) $ is obtained by allowing an initial vacuum state to evolve in accordance with the solution presented in Eq. (\ref{eq:solution}): \begin{equation} \left\vert 0\right\rangle \left\langle 0\right\vert \rightarrow\rho_{o}\left( t\right) =\int d^{2M}\left\{ \alpha\right\} P_{o}\left( \left\{ \alpha\right\} ,t\right) \left\vert \left\{ \alpha\right\} \right\rangle \left\langle \left\{ \alpha\right\} \right\vert . \label{eq:evolvacuum} \end{equation} Adopting this natural Gaussian basis, we can write the evolution of any initial state as \begin{equation} \rho\left( t\right) =\int d^{2M}\left\{ \alpha\right\} P\left( \left\{ \alpha\right\} ,0\right) \rho_{B}\left( \left\{ \alpha\left( t\right) \right\} ,t\right) . \label{eq:evolany} \end{equation} Combining Eq. (\ref{eq:evolvacuum}) and Eq.
(\ref{eq:evolany}), we can rewrite the evolution of an arbitrary initial state (albeit one with a reasonably well-defined $P$ function) as \begin{align} \rho\left( t\right) = & \int d^{2M}\left\{ \alpha\right\} \int d^{2M}\left\{ \eta\right\} P\left( \left\{ \alpha\right\} ,0\right) P_{o}\left( \left\{ \eta\right\} ,t\right) \nonumber\\ & \times\left\vert \left\{ \eta+\alpha\left( t\right) \right\} \right\rangle \left\langle \left\{ \eta+\alpha\left( t\right) \right\} \right\vert , \label{eq:naturalbasis} \end{align} where $\left\{ \alpha\left( t\right) \right\} $ describes the evolution of the \emph{center} of the wavepacket (which obeys a classical equation of motion, as required by the Ehrenfest theorem, and is independent of the state of the reservoir) and $P_{o}\left( \left\{ \eta\right\} ,t\right) $ describes the evolution of the \emph{shape} of the wavepacket. When the RWA and an absolute-zero reservoir are assumed, the wavepacket is not distorted, and $P_{o}\left( \left\{ \eta\right\} ,t\right) $ reduces to a delta function at the origin, making Eq. (\ref{eq:naturalbasis}) identical to Eq. (\ref{eq:glauberevolution}). Therefore, Eq. (\ref{eq:naturalbasis}) is a generalization of Eq. (\ref{eq:glauberevolution}), and we have obtained a generalization of the dynamics described in Ref. \cite{GlauberBook}. Another way to look at this result is that the displaced phase-space quasi-probability function is convolved with another function, which accounts for the change in shape: \begin{equation} P\left( \left\{ \alpha\right\} ,t\right) =\int d^{2M}\left\{ \gamma\right\} P\left( \left\{ \gamma\right\} ,0\right) P_{o}\left( \left\{ \alpha-\gamma\left( t\right) \right\} ,t\right) . \end{equation} For a single mode, the center path follows $\alpha\left( t\right) =U_{1}\alpha+V_{1}\alpha^{\ast}$, $U_{1}$ and $V_{1}$ being given by the solutions to Eqs. (\ref{eq:u1integro}) and (\ref{eq:v1integro}).
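As an illustrative numerical check (ours, not part of the original derivation), the convolution structure above can be verified in a one-dimensional toy model with Gaussian profiles: the convolved distribution is again Gaussian, centred on the displaced mean, with the two variances added. All grid sizes, centres and widths below are arbitrary illustration values.

```python
import numpy as np

# 1-D toy check of P(alpha, t) = \int d(gamma) P(gamma, 0) P_o(alpha - gamma(t), t):
# a Gaussian convolved with a Gaussian is a Gaussian with added variances.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

alpha0, var_init = 2.0, 0.30    # centre/variance of the initial P (arbitrary)
drift, var_vac = -1.5, 0.45     # displacement of the centre, and variance of P_o (arbitrary)

P_init = gauss(x, alpha0, var_init)   # P(gamma, 0)
P_vac = gauss(x, drift, var_vac)      # P_o(eta, t), carrying the drift for simplicity

# Discrete convolution approximates the integral; 'same' keeps the original grid.
P_t = np.convolve(P_init, P_vac, mode="same") * dx

mean_t = np.sum(x * P_t) * dx
var_t = np.sum((x - mean_t) ** 2 * P_t) * dx
# mean_t ~ alpha0 + drift = 0.5, var_t ~ var_init + var_vac = 0.75
```

The numerical mean and variance reproduce the displaced centre and the summed widths, which is the content of the convolution formula for Gaussian wavepackets.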
The function $P_{o}\left( \left\{ \alpha\right\} ,t\right) $ is just the solution when the initial state is the vacuum, i.e., it satisfies the initial condition $P_{o}\left( \left\{ \alpha\right\} ,0\right) = \delta^{\left( 2\right) }\left( \alpha\right) $. Under the RWA, this continues to be true at all times, $P_{o}^{\text{RWA}}\left( \left\{ \alpha\right\} ,t\right) = \delta^{\left( 2\right) }\left( \alpha\right) $. \section{Conclusions} We have presented a technique to derive an exact master equation for the system-reservoir dynamics in the strong-coupling regime, where neither the rotating-wave approximation nor the secular approximation applies. To this end, we adopted the strategy of considering a network of bosonic systems coupled to each other, picking out one of them as the system of interest and leaving the rest to play the role of the reservoir. Working with phase-space distribution functions and Gaussian states, we generalized an earlier result by Glauber, namely that a coherent state remains coherent despite dissipation when coupled to a zero-temperature reservoir. We demonstrated that there is a class of Gaussian states which serves as a generalization of the coherent-state basis of the Glauber-Sudarshan $P$ representation. This class of Gaussian states follows from the distortion of the vacuum state which, in the strong-coupling regime, is no longer a stationary state, even for a zero-temperature reservoir. We have also presented an investigation of the conditions that lead to a non-completely-divisible map, and thus non-Markovian dynamics. So far, conditions for non-Markovianity have been studied for finite Hilbert spaces under the rotating-wave and/or secular approximations. We remark that a master equation similar to the one derived here has been obtained using the path-integral approach \cite{HPZ}.
The simplicity of our development, using phase-space distribution functions, offers the significant advantage of enabling us to cast the problem as the solution of a linear system of equations. \begin{acknowledgments} The authors acknowledge financial support from PRP/USP within the Research Support Center Initiative (NAP Q-NANO) and FAPESP, CNPQ and CAPES, Brazilian agencies. \end{acknowledgments}
\section{Introduction} \label{S_introduction} \noindent Outflows, produced by Active Galactic Nuclei (AGNs) and intense episodes of star formation, have been proposed to play a crucial role in regulating the build-up of stellar mass and black-hole mass growth through negative and positive feedback (see e.g. \citealt{Kormendy2013} and references therein). Recently, it has been shown that outflows can also be driven by radio jets (e.g. \citealt{Morganti2005,Harrison2014,Morganti2018, Jarvis2019,Molyneux2019, Venturi2021}). Outflows might act as an important source of feedback as they evolve and heat the interstellar medium (ISM), preventing the cooling of the gas on possibly large scales.\\ The different gas phases of outflows have been widely studied in different galaxy populations (\citealt{Veilleux2005, Veilleux2020}, for reviews), mostly via long-slit spectroscopy (e.g. \citealt{Heckman2000, Rupke2002, Arribas2014, VillarMartin2018,Rose2018, LHG2019, Saturni2021}) and Integral Field Spectroscopy observations (IFS, e.g. \citealt{Cazzoli2014, Cresci2015, CRA2017, Maiolino2017, Bosch2019, Perna2020, Perna2021, Comeron2021}). \\ \noindent To date, the vast majority of studies of multi-phase outflows and feedback have focussed on local luminous and ultra-luminous infrared galaxies (U/LIRGs, e.g. \citealt{Rupke2013,Cazzoli2014,Cazzoli2016, PereiraSantaella2016, PereiraSantaella2020, Fluetsch2021}) and luminous AGNs (e.g. quasars or Seyfert galaxies; \citealt{Feruglio2010, MullerSanchez2011, Fiore2017, Brusa2018, Venturi2018, Cazzoli2020}). These works demonstrated the power of 3D IFS in studies of this kind. For example, the wealth of optical and infrared (IR) IFS data enables the exploration of possible scaling relations between AGN properties, host-galaxy properties, and outflows (e.g. \citealt{Kang2018}, \citealt{Fluetsch2019, Kakkad2020, RuschelDutra2021, Avery2021, Luo2021}, \citealt{Singha2021}, and references therein).
\\ For low-luminosity AGNs, such as low-ionisation nuclear emission line regions (LINERs), no systematic search for outflows has been done yet. Except for individual discoveries (e.g. \citealt{Dopita2015, Raimundo2021}), the only systematic studies are those by \citet{Cazzoli2018} and \citet{LHM2020} on ionised gas outflows in type-1 and type-2 LINERs, respectively. These two works, in which 30 LINERs were studied on the basis of optical long-slit spectroscopy, showed that multi-phase outflows are common in LINERs (detection rate: 60$\%$, \citealt{Cazzoli2018}) and revealed an intriguing ionisation structure in which low-ionisation lines (e.g. [O\,I]$\lambda\lambda$6300,6364) behave differently from high-ionisation lines (e.g. [O\,III]$\lambda\lambda$4959,5007). Most of these spectroscopically-identified outflows show in their \textit{HST}-H$\alpha$ $\lambda$6563 images \citep{Pogge2000, Masegosa2011, LHM2022} a large-scale biconical or bubble-like shape along with evident spatially resolved sub-structures, such as gas clumps $\sim$\,20-70 pc wide.\\ A 3D description of multi-phase outflows and the quantification of their feedback (mass, energy and their rates) in low-luminosity AGNs, such as LINERs, is lacking. The exploration of outflows and feedback for this AGN family is crucial to improve our understanding of galaxy evolution, as these sources are thought to bridge the gap between normal and luminous AGNs, and belong to the most numerous AGN population in the local Universe (\citealt{Ho2008}, for a review). \\ \noindent NGC\,1052 (MCG-01-07-034, PKS\,0238-084) is considered the prototypical LINER in the local Universe (z\,$\sim$\,0.005). Table\,\ref{T_properties} summarises the basic properties of this object.\\ \noindent There are four previous IFS studies focussing on NGC\,1052: \citet{Sugai2005}, \citet{Dopita2015} and \citet{Dahmer2019a,Dahmer2019b}.
\citet{Sugai2005} probed the bulk of the outflow with channel maps of the [O\,III] emission line thanks to Kyoto3DII/Subaru data over the innermost 3$\arcsec$\,$\times$\,3$\arcsec$. \citet{Dopita2015} (D15, hereafter) analysed the stellar and gas kinematics within the inner 25$\arcsec$\,$\times$\,38$\arcsec$ using WiFeS/ANU data. The authors mapped the emission line properties on scales of hundreds of parsecs (spatial sampling $\sim$\,1$\farcs$3), mainly studying shocks, with no detailed information on the properties of the different kinematic components. \citet{Dahmer2019a,Dahmer2019b} (DH19a,b, hereafter) mapped optical and NIR lines in the inner 3$\farcs$5\,$\times$\,5\arcsec (similar to the work by \citealt{Sugai2005}) exploiting GMOS/GEMINI data. The richness of tracers provided by the combination of multi-wavelength data offers a more detailed view of the complex kinematics in NGC\,1052 than the previous works. Nevertheless, the large-scale (i.e. kpc-scale) emission is not covered by the GEMINI data set. Summarising, all these IFS-based works support the presence in NGC\,1052 of an emission-line outflow, possibly extended on kpc scales.\\ \noindent In this paper, we use the spectral and spatial capabilities of MUSE/VLT and MEGARA/GTC optical IFS observations to build up, for the first time, a comprehensive picture of both the stellar and ISM components of the outflow in NGC\,1052, with tens-of-pc resolution.\\ \noindent This paper is organised as follows. In Section\,\ref{S_datared}, the data and observations are presented, as well as the data reduction. In Section\,\ref{S_data_analysis}, we present the spectroscopic analysis: stellar subtraction, line modelling and map generation. Section\,4 highlights the main observational results. In Section\,5, we discuss the stellar kinematics and dynamics, and the ionised and neutral gas properties, with special emphasis on the outflow properties and their possible connection with the radio jet.
In this Section we also estimate the black hole mass, and compare the full width at half maximum (FWHM) of the unresolved BLR component with previous estimates. The main conclusions are presented in Section\,\ref{S_conclusions}. In Appendix\,\ref{Appendix_A}, we summarise the procedure used to account for background sources. In Appendix\,\ref{Appendix_B} we present the kinematic and flux-intensity maps and the flux ratios from our IFS data set. Appendix\,\ref{Appendix_C} presents the 1D position-velocity and position-dispersion diagrams aimed at comparing gas and stellar motions along three axes, i.e. the major and minor axes of the host galaxy, and the radio jet.\\ \noindent All images and spectral maps are oriented following the standard criterion, so that north is up and east is to the left. \\ Throughout the whole paper, angular dimensions will be converted into physical distances using the scale distance from the Local Group, i.e. 110 pc/$\arcsec$ (see Table\,\ref{T_properties}). \begin{table} \caption[Subsample]{General properties of NGC\,1052.} \begin{center} \tiny{\begin{tabular}{l c c} \hline \hline Property & Value & Reference\\ \hline R.A. (J\,2000) & 02$^{h}$41$^{m}$04$^{s}$.799 & NED \\ Decl. (J\,2000) & -08$^{d}$15$^{m}$20$^{s}$.751 & NED \\ z & 0.00504 & NED \\ V$_{\rm sys}$ [$\mbox{km\,s}^{-1}$] & 1532 $\pm$ 6 & \citet{Karachentsev1996} \\ D [Mpc] & 22.6 $\pm$ 1.6 & NED \\ Scale [pc/$\arcsec$] & 110 & NED \\ Nuclear Spectral Class. & LINER (1.9) & \citet{OGM2009}\\ Morphology & E3-4/S0 & \citet{Bellstedt2018} \\ \textit{i}\,[$^{\circ}$] & 70.1 & Hyperleda \\ PA$_{\rm phot}$ & 112.7 & Hyperleda\\ R$_{eff}$ [$\arcsec$] & 21.9 & \citet{Forbes2017} \\ M$_{\rm BH}$\,[M$_{\sun}$] & 3.4\,(0.9)\,$\times$\,10$^{8}$& \citet{Beifiori2012}\\ PA$_{\rm jet}$\,[$^{\circ}$] & 70 & \citet{Kadler2004a} \\ SFR [M$_{\odot}$/yr] & 0.09 & \citet{Falocco2020}\\ \hline \end{tabular} \label{T_properties}} \end{center} \tiny{Notes.
--- \lq V$_{\rm sys}$\rq, \lq D\rq \ and \lq Scale\rq: systemic velocity, distance, and scale, respectively, from the Local Group. \lq Morphology\rq: Hubble classification. \lq \textit{i} \rq \ is the inclination angle, defined as the angle between the line of sight and the polar axis of the galaxy. It is determined from the axis ratio of the isophote in the B-band using a correction for intrinsic thickness based on the morphological type. \lq PA$_{\rm phot}$\rq \ is the position angle of the major axis of the isophote at 25 mag/arcsec$^{2}$ in the B-band, measured north-eastwards (see \citealt{Paturel1997} and references therein). \lq R$_{eff}$\rq \ is the effective radius from Spitzer data. The black hole mass (M$_{\rm BH}$) is derived from a Keplerian disc model assuming an inclination of 33$^{\circ}$(81$^{\circ}$) and a distance of 18.11\,Mpc. \lq PA$_{\rm jet}$\rq \ is the position angle of the jet from VLBI data (covering only the central region of NGC\,1052). As the PA depends on the different components of the jet, varying from 60$^{\circ}$ to 80$^{\circ}$, in this work we consider the average value of 70$^{\circ}$. See \citet{Kadler2004a} and references therein for further details. \lq SFR\rq : upper limit to the star formation rate from the FIR luminosity (2\,$\times$\,10$^{42}$ erg\,s$^{-1}$) as measured by \citet{Falocco2020}.} \end{table} \begin{figure*} \centering \includegraphics[width=.95\textwidth]{figures/Fig1_v3.pdf} \caption{Optical continuum images computed from MUSE (left) and MEGARA (right) in units of erg\,s$^{-1}$\,cm$^{-2}$ (logarithmic scale). To obtain these images we considered a 60\,\AA-wide continuum band (i.e. from 6105\,$-$\,6165\,\AA). The cross marks the photometric center, and the sizes of the different PSFs for MEGARA and MUSE data are indicated in the bottom left part of the figure (see also Sect.\,\ref{S_datared}).
As a reference, we show the field of view (dashed rectangle) and average seeing (1$\farcs$4, bottom circle) for the WiFeS datacube analysed in D15. The black bar at the upper right represents 1\,kpc ($\sim$\,9$\arcsec$) at the redshift of NGC\,1052 (see Table\,\ref{T_properties}). Similarly, the white bar at the upper right of the right panel represents 400\,pc ($\sim$\,3$\farcs$6).} \label{Fig_MM_continuum} \end{figure*} \section{Observations and data reduction} \label{S_datared} In this section we describe the MUSE and MEGARA data and their data reduction process; see Sections \ref{S_datared_MUSE} and \ref{S_datared_MEGARA}, respectively. \subsection{MUSE observations and data reduction} \label{S_datared_MUSE} \noindent The data were gathered on September 5$^{\rm th}$ 2019 with the Multi-Unit Spectroscopic Explorer (MUSE, \citealt{Bacon2010, Bacon2014}), mounted at the UT4 of the Very Large Telescope at the Paranal Observatory in Chile, as part of programme 0103.B-0837(B) (PI: L.\,Hern{\'a}ndez-Garc{\'i}a). \\ \noindent They were acquired in the wide-field-mode configuration with the nominal setting (i.e. no extended wavelength coverage), covering a spatial extent of 1\,arcmin$^{2}$ with 0.2$\arcsec$\,pix$^{-1}$ sampling. The MUSE data cover the wavelength range 4800\,-\,9300\,\AA, with a mean spectral resolution of R\,$\sim$\,3000 at 1.25\,\AA \ spectral sampling. During the observations the average DIMM seeing was 0$\farcs$62 (varying between 0$\farcs$48 and 0$\farcs$85), and the mean airmass was 1.06. \\ In total, we obtained eight exposures with a total integration time of 93\,min. Including overheads, the observations took two hours, i.e. two observing blocks. Each one consists of four dithered exposures of 697\,s. The relative offsets in RA(DEC) were 10$\arcsec$, 0$\farcs$5, -21$\farcs$5 and 0$\farcs$5 (11$\arcsec$, 0$\farcs$5, -21$\farcs$5 and 0$\farcs$5) with respect to the position of NGC\,1052 (Table\,\ref{T_properties}).
The dither pattern also involves a 90$^{\circ}$ rotation for a better reconstruction of the final cube, i.e. a homogeneous quality across the field of view.\\ \noindent The eight pointings constitute a mosaic covering a contiguous area of 80$\arcsec$\,$\times$\,80$\arcsec$, i.e. 8.8\,kpc\,$\times$\,8.8\,kpc at the adopted spatial scale (110 pc/$\arcsec$, Table\,\ref{T_properties}). The radius of the covered area is about 3.5 times the effective radius of NGC\,1052 (i.e. 21$\farcs$9, Table\,\ref{T_properties}). \noindent The data reduction was performed with the MUSE pipeline (version 2.8.1) via \texttt{EsoRex} (version 3.13.2). It performs the basic reduction steps, that is, bias subtraction, flat fielding, wavelength calibration and illumination correction, as well as the combination of the individual exposures in order to create the final mosaic. For flux calibration we used the spectrophotometric standard star Feige\,110 (spectral type: DOp), observed before the science frames. Since we did not apply any telluric correction, some residuals remain in the region between $7110-7310$\,\AA. In this spectral window only the HeI$\lambda$7065.3, [Ar\,III]$\lambda$7135.80 and [Fe\,II]$\lambda$7155 lines are detected, but they are not crucial for our analysis. The sky subtraction was performed in the last step of the processing of the MUSE observations, using the sky background obtained from the outermost spaxels in each science exposure (no dedicated on-sky exposures were gathered). We performed the astrometric calibration using the astrometric catalogue distributed with the pipeline. \\ The final cube has dimensions of 418\,$\times$\,422\,$\times$\,3682. The total number of spectra is 176\,396, of which 28\,508 (16\,$\%$) are not useful, as they correspond to artefacts from the creation of the mosaic, i.e.
empty spaxels located at the bottom-left and top-right corners, and at the edges of the field of view.\\ \noindent The radius of the point spread function (PSF) of the MUSE observations (0$\farcs$4, see Fig.\,\ref{Fig_MM_continuum}) was estimated from the full width at half-maximum (FWHM) of the 2D brightness profile of the standard star used for flux calibration. Throughout the paper, in order to avoid any possible PSF contamination in the kinematic measurements, we will conservatively consider as the \lq nuclear region\rq \ a circular area of radius equal to the width at 5 per cent intensity of the PSF radial profile, i.e. 0$\farcs$8. This area does not coincide with any peculiar feature (e.g. dust lanes) visible in the MUSE continuum image shown in the left panel of Fig.\,\ref{Fig_MM_continuum}. The \lq nuclear region\rq \ is marked (with a circle) in the spectral maps computed from the MUSE datacubes (see Fig.\,\ref{Fig_MM_continuum}, but also Sect.\,\ref{S_data_analysis} and Appendix\,\ref{Appendix_B}).\\ \noindent We obtained the instrumental profile by measuring the single (not blended) OH$\lambda$\,7993.332 sky line \citep{Osterbrock1996, Bai2017}. We measured it in the fully reduced data cube of the standard star Feige\,110 (see above) by selecting a region of 50\,$\times$\,50 spaxels free from stellar emission. On average, the central wavelength and the width of the OH sky line are 7993.335\,$\pm$\,0.114\,\AA \ and 1.19\,$\pm$\,0.13\,\AA, respectively. This instrumental profile estimate was further checked with the 5577\,\AA \ sky line. In this case, the value of the average instrumental resolution is consistent with that from the OH line (i.e. 1.2\,\AA).
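For reference, the conversion of such an instrumental width into a velocity dispersion, and its removal in quadrature from measured line widths, can be sketched as below. This is our own minimal illustration, not the authors' pipeline, and it assumes the quoted 1.19 \AA\ width is a FWHM (the text does not state this explicitly).

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def sigma_inst_kms(fwhm_AA, lam_AA):
    """Convert an instrumental FWHM in Angstrom to a velocity dispersion in km/s."""
    sigma_AA = fwhm_AA / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return C_KMS * sigma_AA / lam_AA

def correct_width(sigma_obs_kms, sigma_i_kms):
    """Remove the instrumental broadening in quadrature (0 if unresolved)."""
    diff = sigma_obs_kms ** 2 - sigma_i_kms ** 2
    return np.sqrt(diff) if diff > 0.0 else 0.0

# Values quoted in the text: OH sky line at 7993.335 A, width 1.19 A (assumed FWHM).
sig_i = sigma_inst_kms(1.19, 7993.335)   # ~19 km/s
sig_corr = correct_width(100.0, sig_i)   # intrinsic width of a 100 km/s measured line
```

Under this assumption the instrumental dispersion near the OH line is about 19 $\mbox{km\,s}^{-1}$, a small quadrature correction for the line widths discussed later in the paper.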
\subsection{MEGARA observations and data reduction} \label{S_datared_MEGARA} \noindent The data were taken on December 28$^{\rm th}$ 2019 with the MEGARA instrument (see \citealt{GdP2016, Carrasco2018}), located at the Cassegrain focus of the GTC, using the Large Compact Bundle IFU mode (GTC94-19B, PI: S.\,Cazzoli). The 567 fibres that constitute the MEGARA IFU (100 $\mu$m in core size) are arranged on a square microlens array that projects on the sky a field of 12$\farcs$5\,$\times$\,11$\farcs$3. Each microlens is a hexagon inscribed in a circle with a diameter of 0$\farcs$62 projected on the sky. A total of 56 ancillary fibres (organised in eight fibre bundles), located at a distance of 1.75–2.0 arcmin from the centre of the IFU field of view, deliver simultaneous sky observations.\\ We made use of two low-resolution Volume Phase Holographic gratings (LR-VPHs) that provide R\,$\sim$\,6000 at the central wavelengths of the selected bands: LR-V has a wavelength coverage of $5140-6170$\,\AA\,and LR-R of $6100-7300$\,\AA. \\ \noindent We obtained six exposures with an integration time of 900\,s per VPH in two observing blocks, leading to a total observing time of four hours. The mean signal-to-noise ratio (S/N) in the continuum of the spectra was 25 for the LR-R and 30 for the LR-V datacube. \noindent The data reduction was done using the MEGARA Data Reduction Pipeline \citep{MegaraDRP2020, MegaraDRP2020ACM}, available as a \textsc{Python} package (version 0.9.3). We performed the standard procedures: bias subtraction, flat-field correction, wavelength calibration and flux calibration using the star HR\,4963. Each fibre was traced individually at the beginning of the data reduction and, within the pipeline, we applied additional corrections for the possible differences of each fibre with respect to the whole image, including an illumination correction based on individual fibre flats.
For this correction, we used \textsc{iraf} to smooth the sensitivity curve as, in the case of the LR-R VPH, some structures due to the lamp emission are present (see the MEGARA cookbook). The pipeline also combines the individual exposures to generate the final cube (one per VPH), which can be transformed from the raw stacked-spectra format into a standard IFS cube by means of a regularisation grid, yielding 0$\farcs$4 square spaxels \citep[see][]{Cazzoli2020}. The PSF of the MEGARA data was measured as in Sect.\,\ref{S_datared_MUSE} with the star HR\,4963, giving a FWHM of 1$\farcs$2 (see Fig.\,\ref{Fig_MM_continuum}, left).\\ \noindent Considering the wavelength ranges of the VPHs and the emission lines present in the NGC\,1052 spectra, we decided to combine the two cubes into a single datacube to optimise the stellar modelling and subtraction (increasing the range of line-free continuum, see Sect.\,\ref{S_stellar_cont}). More specifically, the need to combine the two MEGARA cubes to reliably model the stellar continuum is twofold. First, the spectral range of the MEGARA LR-V cube covers only the MgI stellar feature, whereas the LR-R cube contains none. Second, for the latter (red) cube, the stellar continuum emission is limited by the presence of the broad emission features and a telluric band (see Sect.\,\ref{S_datared_MEGARA}). For the cube combination, we scaled the fluxes of every spaxel to bring the continuum to the same level in the common wavelength range of both VPHs ($6100-6170$\,\AA). The combined datacube was used in the whole analysis. \section{Data analysis} \label{S_data_analysis} In this section we summarise the identification and subtraction of background sources in the MUSE field of view (Sect.\,\ref{S_bkg_sources}), and describe the stellar continuum modelling (Sect.\,\ref{S_stellar_cont}) and the line fitting for the MUSE and MEGARA cubes (Sect.\,\ref{S_line_mod}).
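The continuum-matching step used to combine the LR-V and LR-R cubes (Sect.\,\ref{S_datared_MEGARA}) can be sketched per spaxel as follows. The helper below is hypothetical (the paper does not give its implementation); we reduce the scaling to a median flux ratio in the $6100-6170$\,\AA\ overlap.

```python
import numpy as np

def scale_to_overlap(wave_v, flux_v, wave_r, flux_r, lo=6100.0, hi=6170.0):
    """Rescale a red (LR-R) spectrum so its median continuum matches the blue
    (LR-V) one in the common wavelength window. Hypothetical per-spaxel helper;
    a sketch of the scaling described in the text, not the actual code."""
    med_v = np.median(flux_v[(wave_v >= lo) & (wave_v <= hi)])
    med_r = np.median(flux_r[(wave_r >= lo) & (wave_r <= hi)])
    return flux_r * (med_v / med_r)

# Toy spaxel: flat continua at different levels; after scaling they agree.
wave_v = np.linspace(5140.0, 6170.0, 500)
wave_r = np.linspace(6100.0, 7300.0, 500)
flux_v = np.full(wave_v.size, 1.0)
flux_r = np.full(wave_r.size, 0.8)
flux_r_scaled = scale_to_overlap(wave_v, flux_v, wave_r, flux_r)
```

A median (rather than a mean) makes the scale factor robust against any emission line or telluric residual falling inside the overlap window.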
\subsection{Background sources in the MUSE field of view} \label{S_bkg_sources} \noindent We visually inspected the white-light image generated in the last step of the data reduction of the MUSE data, i.e. the mosaic creation (see Sect.\,\ref{S_datared}), and the continuum image in Fig.\,\ref{Fig_MM_continuum}. We note that there are a number of sources (both point-like and extended), some of which may not be part of the NGC\,1052 galaxy. In Appendix\,\ref{Appendix_A}, we summarise the procedure used to identify putative background sources. \\ We found two background galaxies, at redshifts $\sim$\,0.03 and $\sim$\,0.022. Only the former is identified in NED, as SDSSCGB$\_$67616.02. Both galaxies were masked out from the final MUSE datacube used for the analysis. \subsection{Stellar continuum modelling} \label{S_stellar_cont} \begin{figure*} \centering \includegraphics[width=1.\textwidth]{figures/Vp_ppxf_N1052_v3.pdf} \caption{Example of the stellar continuum modelling and its subtraction for high-S/N nuclear spectra from both MUSE (top panel) and MEGARA (bottom panel) data. The red line indicates the modelled stellar spectrum that matches the observed continuum, obtained by applying \texttt{pPXF} (Sect.\,\ref{S_stellar_cont}). The wavelength regions blocked for the modelling are shown in grey. Spectral features are labelled at the top; Balmer lines, forbidden lines and absorption lines are marked in green, blue and pink, respectively. Note that, in the case of MEGARA, we combined the cubes in the LR-V and LR-R bands, which have a 70\,\AA \ overlap around 6130\,\AA \ (see Sect.\,\ref{S_datared_MEGARA}).} \label{Fig_ppxf_model} \end{figure*} \noindent For the stellar continuum modelling we used the penalised PiXel-Fitting code (\texttt{pPXF}) by \citet{Cappellari2003} (see also \citealt{Cappellari2017} and references therein) for both MEGARA and MUSE, in different coding environments.
We used the \texttt{pPXF} code within the GIST pipeline (see below) for MUSE and within \textsc{python} for MEGARA. \\ \noindent For MUSE, we used the GIST pipeline (v.\,3) by \citet{Bittner2019}\footnote{\url{http://ascl.net/1907.025}} as a comprehensive tool both to spatially bin the spectra, in order to increase the S/N in the continuum, and to model the stellar contribution to the observed spectra. The MUSE spectra were shifted to the rest frame based on the initial guess of the systemic redshift from NED, i.e. z\,$=$\,0.005 (Table\,\ref{T_properties}). Then the data were spatially binned using the 2D Voronoi binning technique by \citet{Cappellari2003}, which creates bins in low-S/N regions while preserving the spatial resolution of those above a minimum S/N threshold. The S/N was calculated in the line-free wavelength band between 5350 and 5800\,\AA. All spaxels with a continuum S/N\,$<$\,3 were discarded to avoid noisy spectra in the Voronoi bins. We found that a minimum S/N threshold of 30 results in reliable measurements of the stellar kinematics in NGC\,1052 as well as an optimum spatial resolution. Specifically, cells are in general not larger than 60 spaxels (2.4\,arcsec$^{2}$ in area), hence stellar properties are likely to be homogeneous within a Voronoi cell. \\ For the MEGARA data, Voronoi binning was not necessary to achieve a proper stellar continuum modelling: even in the spaxels with the lowest S/N ($<$\,15), which constitute $\sim$12$\%$ of the total, the resulting velocity and velocity dispersion are consistent with those from the rest of the cube at higher S/N.\\ \noindent To accurately measure the spectral line properties (wavelength, width and flux), it is necessary to account for stellar absorption, which primarily affects the Balmer emission lines and the NaD absorption doublet.
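The S/N preselection that precedes the Voronoi binning (median continuum S/N in the $5350-5800$\,\AA\ line-free band; spaxels below S/N\,=\,3 discarded) can be sketched as follows. This toy example is ours and does not reproduce the actual Voronoi tessellation performed by GIST; cube shapes and values are illustrative.

```python
import numpy as np

def continuum_snr(wave, signal_cube, noise_cube, lo=5350.0, hi=5800.0):
    """Median continuum S/N per spaxel in a line-free band.
    Cubes are assumed shaped (n_wave, ny, nx); simplified stand-in
    for the computation done inside the GIST pipeline."""
    band = (wave >= lo) & (wave <= hi)
    return np.median(signal_cube[band] / noise_cube[band], axis=0)

# Toy 2x2 cube: three spaxels at S/N = 30, one noisy spaxel at S/N = 1.
wave = np.linspace(4800.0, 9300.0, 900)
signal = np.ones((wave.size, 2, 2))
noise = np.full((wave.size, 2, 2), 1.0 / 30.0)
noise[:, 0, 0] = 1.0                       # a low-quality spaxel
snr = continuum_snr(wave, signal, noise)
good = snr >= 3.0                          # spaxels passed on to the Voronoi binning
```

Only the `good` spaxels would then be fed to the Voronoi binning with the target S/N of 30.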
For MUSE we limited the wavelength range used for the fit to $4800-9000$\,\AA , which contains spectral features from H$\beta$ to CaT, and excluded the region of the auroral [S\,III]$\lambda$9069 line\footnote{This line is noisy and only barely detected in a region of radius $\sim$\,1$\arcsec$, hence no spatially resolved analysis will be done.}. For MEGARA the total wavelength range was $5150-7000$\,\AA, covering the main spectral features in both the LR-V and LR-R bands. For both data sets, we masked the spectral regions (emission lines and atmospheric/telluric absorptions) affected by emission from the interstellar medium (ISM). Additionally, we excluded the NaD absorption, which is not properly matched by the stellar templates owing to the impact of interstellar absorption. \noindent For MUSE, we used the Indo-U.S. stellar library \citep{Valdes2004}, as in \citet{Cazzoli2014, Cazzoli2016, Cazzoli2018}. Briefly, this library contains 885 stars selected to provide a broad coverage of the atmospheric parameters (effective temperature, surface gravity and metallicity). The stellar spectra have a continuous spectral coverage from 3460 to 9464 \AA, at a resolution of $\sim$\,1\,\AA \ FWHM \citep{Valdes2004}. For MEGARA we used the RGD synthetic stellar library \citep{GonzalezDelgado2005,Martins2005}, since it covers the whole spectral range of the combined datacubes and its spectral resolution is consistent with that of our spectra. This library consists of 413 stars with a metallicity of Z\,=\,0.02, ranging from 4000 to 7000 \AA\,, and covering a wide range of surface gravities and temperatures \citep[see][and references therein]{GonzalezDelgado2005}.\\ \noindent Finally, we set up \texttt{pPXF} using four moments of the line-of-sight velocity distribution (LOSVD), i.e. V, $\sigma$, h3 and h4, for both MUSE and MEGARA.
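To illustrate the core of what \texttt{pPXF} does, a minimal two-moment (V, $\sigma$) fit of a template convolved with a Gaussian LOSVD can be written in pure NumPy/SciPy as below. This is our own pedagogical sketch, not the actual \texttt{pPXF} code, which additionally fits h3, h4 and the polynomial corrections; the template, grid and noise level are all synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

# Uniform velocity grid [km/s], equivalent to log-lambda rebinned spectra.
DV = 25.0
v = np.arange(-4000.0, 4000.0 + DV, DV)

# Toy stellar template: unit continuum with one Gaussian absorption feature.
template = 1.0 - 0.5 * np.exp(-v ** 2 / (2.0 * 60.0 ** 2))

def losvd_model(_, vel, sig):
    """Template convolved with a Gaussian LOSVD(V, sigma): the two-moment
    core of the pPXF approach (no h3/h4, no polynomials in this sketch)."""
    kernel = np.exp(-((v - vel) ** 2) / (2.0 * sig ** 2))
    kernel /= kernel.sum()
    return 1.0 + np.convolve(template - 1.0, kernel, mode="same")

# Mock 'galaxy' spectrum: template shifted by 120 km/s, broadened to 180 km/s.
rng = np.random.default_rng(0)
galaxy = losvd_model(None, 120.0, 180.0) + rng.normal(0.0, 0.002, v.size)

popt, _ = curve_fit(losvd_model, v, galaxy, p0=[0.0, 100.0])
vel_fit, sig_fit = popt   # recovers ~120 km/s and ~180 km/s
```

The least-squares fit recovers the input velocity and dispersion; the real analysis does the same minimisation with a library of templates, Gauss-Hermite moments and the additive/multiplicative polynomials quoted in the text.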
The additive and multiplicative polynomials were set to 4-4 (0-12) for MUSE (MEGARA) in order to, respectively, minimise template mismatch and match the overall spectral shape of the data, so that the fit is insensitive to reddening by dust \citep[see][and references therein]{Westfall2019,Perna2020}.\\ An example of the \texttt{pPXF} modelling is shown in Fig.\,\ref{Fig_ppxf_model} for both MUSE (top panel) and MEGARA (bottom panel) data. \noindent The results of the \texttt{pPXF} fits, i.e. the stellar kinematic maps of the first two moments of the LOSVD, are shown in Fig.\,\ref{Fig_stellar_kin} and discussed in Sect.\,\ref{S_stellar_kin}. A detailed study of the higher-order moments of the stellar LOSVD (h3 and h4) is beyond the aim of this paper, hence the corresponding maps are not displayed.\\ \noindent Throughout the analysis we will consider the formal uncertainties provided by the \texttt{pPXF} tool. These are in good agreement with those from Monte Carlo simulations performed on the MUSE data. Specifically, the differences are generally lower than 5\,$\mbox{km\,s}^{-1}$\, and 7\,$\mbox{km\,s}^{-1}$\, for the velocity and velocity dispersion, respectively.\\ \noindent Motivated by the typically small sizes of the Voronoi cells in the MUSE data, we made the simplifying assumption that the stellar populations and kinematics do not change radically within one Voronoi bin. For each spaxel, the stellar spectrum of the corresponding bin is normalised and then subtracted from the observed one, to obtain a datacube consisting exclusively of ISM absorption and emission features. For the MEGARA data the stellar subtraction is performed on a spaxel-by-spaxel basis.\\ In what follows, we will refer to this data cube (data\,$-$\,stellar model) as the \lq ISM-cube\rq. \begin{figure*} \centering \includegraphics[width=1\textwidth]{figures/Vp_stellar_kin_v2.pdf} \caption{NGC\,1052 stellar kinematic maps from our \texttt{pPXF} analysis (Sect.\,\ref{S_stellar_cont}). These maps, i.e.
velocity (left) and velocity dispersion (right), are displayed in units of $\mbox{km\,s}^{-1}$. In both panels, the large-scale kinematics is obtained from the MUSE data, whereas the insets show the smoothed \texttt{pPXF} maps from the MEGARA datacube. The cross marks the photometric center, as in Fig.\,\ref{Fig_MM_continuum}.} \label{Fig_stellar_kin} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.975\textwidth]{figures/Vp_ajustes_v2.pdf} \caption{Examples of emission line spectra (black) after stellar subtraction (Sect.\,\ref{S_stellar_cont}) and their modelling, from the central region of both the MUSE (R\,=\,0$\farcs$7, i.e. 77\,pc) and MEGARA data (R\,=\,0$\farcs$9, i.e. 100\,pc); see the top-left labels. As a reference, orange vertical lines mark the systemic wavelengths of the emission lines, which are labelled at the top right. For each panel the modelled line profile (red line) and its components (in different colours) are shown. Specifically, the green, blue and pink Gaussian curves indicate the primary, secondary and third components used to model the profiles. The broad H$\alpha$ component from the BLR is marked in cyan. Residuals from the fit are shown in the small lower panels, in which the yellow horizontal lines indicate $\pm$\,2.5\,$\varepsilon_{\rm c}$ (Sect.\,\ref{S_em_lin_mod}). Vertical yellow lines mark the wavelength range considered for calculating $\varepsilon_{\rm fit}$ for each line (Sect.\,\ref{S_em_lin_mod}). Note that the high residuals redward of H$\beta$ cannot be fitted with a BLR component (the velocities and widths would be inconsistent with those of the broad H$\alpha$ component), and are likely due to some residuals from the stellar subtraction (Sect.\,\ref{S_stellar_cont}).} \label{Fig_spectral_model} \end{figure*} \subsection{Line modelling} \label{S_line_mod} \noindent From the ISM-cube, we produce line maps by modelling the spectral lines with multiple Gaussian functions.
To achieve that, we applied a Levenberg-Marquardt least-squares fitting routine under both Interactive Data Language (IDL) and \textsc{Python} environments, using \textsc{mpfitexpr} by \citet{Markwardt2009} and \textsc{lmfit}, respectively (see Sections \ref{S_em_lin_mod} and \ref{S_abs_lin_mod}). We imposed the intensity ratios of the [O\,III]$\lambda$4959,5007 (only for MUSE), [O\,I]$\lambda$6300,6363 and [N\,II]$\lambda$6548,6584 doublets to be 2.99, 3.13 and 2.99, respectively \citep{Osterbrock2006}. The ratio between the equivalent widths (EW) of the two lines of the NaD$\lambda\lambda$5890,5896 absorption, R$_{\rm NaD}$\,=\,EW$_{5890}$/EW$_{5896}$, is restricted to vary from 1 (optically thick limit) to 2 (optically thin absorbing gas), according to \citet{Spitzer1978}. \subsubsection{Emission Line modelling} \label{S_em_lin_mod} \begin{figure*} \centering \includegraphics[width=1\textwidth]{maps_em/combo_OIII_v2.pdf} \caption{Example of emission line maps produced from the fitting of the [O\,III]$\lambda$5007 line using the MUSE ISM-cube (Sect.\,\ref{S_em_lin_mod}). In both panels from left to right: velocity field ($\mbox{km\,s}^{-1}$), velocity dispersion ($\mbox{km\,s}^{-1}$) and flux intensity (erg\,s$^{-1}$\,cm$^{-2}$, log scale) maps for the primary component. The black solid line indicates the major axis of the stellar rotation (Table\,\ref{T_kinematics}). The dot-dashed square indicates the MEGARA field of view. The contours indicate the central region at high velocity dispersion (see text for details, e.g. Sect.\,\ref{S_result_primary_MUSE_butterfly}). \textit{Top Panel}: The maps cover a smaller field of view with respect to the original MUSE mosaic (80$\arcsec$\,$\times$\,80$\arcsec$, Sect.\,\ref{S_datared_MUSE}) to highlight weak features. The dashed square indicates the selected zoomed view displayed in the bottom panels of this figure and for Figures from \ref{M_OIII_primary_zoom} to \ref{M_BPT_primary_zoom} of the Appendix\,\ref{Appendix_B}.
\textit{Bottom Panel}: zoomed area. The dashed line indicates the orientation of the radio-jet (Table\,\ref{T_properties}).} \label{Fig_combo_OIII} \end{figure*} \noindent We derive the kinematics of the ISM properties by modelling all the spectral lines available in the cubes. To perform the fitting, and hence discriminate between line models and number of components, we followed the approach proposed by \citet{Cazzoli2018}. Specifically, for both MUSE and MEGARA data, we tested the \lq [S\,II]-model\rq \ and \lq [O\,I]-model\rq, for which we first fitted only the [S\,II] or [O\,I] lines in the spectrum (depending on the model) and then used them as a reference to tie all the other narrow lines, so that they share the same width and velocity shift. Additionally, we tested the \lq mix-models\rq, which consist of using [S\,II] and [O\,I] simultaneously as references for [N\,II] and narrow H$\alpha$ respectively or, alternatively, using [O\,I] for narrow H$\alpha$ and [N\,II], with [S\,II] lines behaving otherwise. For MUSE data only (see Sect.\,\ref{S_datared_MEGARA}), the best fit to the H$\alpha$ ([S\,II]) line is applied to the H$\beta$ ([O\,III]) line.\\ \noindent However, none of these models provided a good fit for the whole set of lines. In the MEGARA field of view the independent fitting of the [O\,I] and [S\,II] lines produced differences of $\sim$\,100\,$\mbox{km\,s}^{-1}$\, in the velocity measurements, although the line widths were similar, i.e. differences $\leq$\,50\,$\mbox{km\,s}^{-1}$. For MUSE data, on the one hand, we found that at large spatial scales (R\,$>$\,10$\arcsec$) the kinematics of these lines are similar (mostly within 75\,$\mbox{km\,s}^{-1}$) when they are fitted independently. However, large discrepancies ($>$\,100\,$\mbox{km\,s}^{-1}$) arise in the central region (inside the MEGARA field of view; R\,$<$\,10$\arcsec$, oriented in the E-W direction), with a peculiar \lq butterfly\rq\ shape (see Sect.\,\ref{S_main_results}).
A similar behaviour was found when comparing [O\,III] and [S\,II] kinematics. Moreover, the S/N of the [O\,I] (H$\beta$) line drops steeply in the NW-SE direction, complicating the tying to H$\alpha$-[N\,II] (H$\alpha$) in both MUSE and MEGARA data. Taking all this into account, we decided to fit H$\beta$, [O\,III], [O\,I] and [S\,II] independently and use the latter as a template for the H$\alpha$-[N\,II] blend. Finally, as NGC\,1052 is a type 1.9 LINER (Table\,\ref{T_properties}), we added a broad AGN component (from the unresolved BLR) with width \,$>$\,600\,$\mbox{km\,s}^{-1}$\, (1400\,$\mbox{km\,s}^{-1}$\, in FWHM) only in H$\alpha$, forcing its spatial distribution to be the same as that of the PSF. Figure\,\ref{Fig_spectral_model} shows examples of the Gaussian fits of the whole set of emission lines for both MUSE (four upper panels) and MEGARA data (two lower panels). \\ \noindent The emission lines present complex profiles with broad wings and double peaks\footnote{Note that double peaks were already detected by DH19a for NGC\,1052 (their Fig.\,3) and in other LINERs, e.g. NGC\,5077 \citep{Raimundo2021}.} (Fig.\,\ref{Fig_spectral_model}), suggesting the presence of more than one kinematic component, especially within the innermost 10$\arcsec$ of radius. In order to prevent overfitting, we first fitted all emission lines with one Gaussian component, and then more components were added based on the parameter $\varepsilon_{\rm line}$. This parameter is defined as the standard deviation of the residuals under the emission lines, after a component is added. In the cases in which $\varepsilon_{\rm line}$\,$>$\,2.5\,$\times\,\varepsilon_{\rm cont}$ (the standard deviation of the line-free continuum), another Gaussian component is added.
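This stopping criterion can be sketched as follows (a minimal illustration; the variable names and noise levels are invented for the example):

```python
import numpy as np

def needs_extra_component(residuals_under_line, line_free_continuum, factor=2.5):
    """Return True if the scatter of the fit residuals under the line
    (eps_line) exceeds `factor` times the scatter of the line-free
    continuum (eps_cont), i.e. another Gaussian component is needed."""
    eps_line = np.std(residuals_under_line)
    eps_cont = np.std(line_free_continuum)
    return bool(eps_line > factor * eps_cont)

rng = np.random.default_rng(1)
continuum = rng.normal(0.0, 0.01, 500)   # line-free continuum window
structured = rng.normal(0.0, 0.05, 80)   # residuals left by a too-simple model
clean = rng.normal(0.0, 0.01, 80)        # residuals consistent with the noise
```

Residuals with excess scatter trigger an extra component, while residuals consistent with the continuum noise do not.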
This criterion has already been successfully applied to optical spectra of active galaxies both from long-slit \citep{Cazzoli2018, LHG2019, LHM2020} and IFS \citep{Cazzoli2020} observations.\\ Overall, we allowed for a maximum of three Gaussians per line plus the BLR-component in H$\alpha$ (Fig.\,\ref{Fig_spectral_model}). This provides a good trade-off between obtaining a statistically good fit to the spectra (i.e. residuals are of the same order as the noise, without any peculiar structures such as spikes or bumps) and the number of components used having a reasonable physical explanation.\\ \noindent For each emission line and component found, we ended up with the following information: central wavelength, width, and flux intensity along with their respective fitting uncertainties. These are the formal 1-sigma uncertainties weighted with the square root of $\chi^{2}$, as in \citet{Cazzoli2020}.\\ \noindent Taking into account both their central velocities and line widths, we identify a \lq primary\rq, a \lq secondary\rq \ and a \lq third\rq \ component. More specifically, the primary component can be mapped over the whole galaxy line-emitting region ($\sim$\,39$\arcsec$, i.e. 4.3\,kpc), with clear blue/red velocities and generally the lowest widths (it is also clearly detected by D15b). The third component is not spatially resolved (it is confined within a radius of $\leq$\,2$\arcsec$, i.e. the PSF size) and is generally the broadest. The secondary component has intermediate properties: it is spatially resolved, being mapped up to R\,$<$\,5$\arcsec$ (i.e. 550\,pc), with extreme velocities (up to $\sim$\,660\,$\mbox{km\,s}^{-1}$). Additionally, in order to discriminate between components (especially the primary and secondary ones) we considered the spatial continuity of both flux and kinematic values. For the former, a visual inspection was already satisfactory to prevent wild variations; for the latter, we avoided sharp variations of the kinematics between adjacent spaxels.
Specifically, we imposed that the values of the velocity fields vary smoothly (differences are less than 200\,$\mbox{km\,s}^{-1}$) and that the secondary component is broader than the narrow one. Differences in line widths are of $\sim$\,160\,-\,180\,$\mbox{km\,s}^{-1}$\, on average, for the brightest lines such as [O\,III] and H$\alpha$-[N\,II]. A small number of spaxels ($<$\,40) constitutes an exception to this general behaviour of the velocity dispersion, but these are mainly located either within the PSF or at the largest radii where the secondary component is detected.\\ \noindent For each of these components we created velocity, velocity dispersion and flux maps. These are shown in the figures in Appendix\,\ref{Appendix_B} (from Fig.\,\ref{M_OIII_primary_zoom} to Fig.\,\ref{M_SII_second_zoom} and Fig.\,\ref{M_SII_primary_megara} to Fig.\,\ref{M_SII_second_megara} for MUSE and MEGARA, respectively). An example of these maps is shown in Fig.\,\ref{Fig_combo_OIII} for the [O\,III] line for MUSE data. In this figure we display both the large and small scales mapped by our IFS data. As the large-scale emission is similar among emission lines, the maps in Appendix\,\ref{Appendix_B} show only the central region (R\,$\sim$\,10$\arcsec$) where the largest differences are observed. We refer to Sect.\,\ref{S_main_results} for details.\\ To obtain the velocity dispersion, for each spectrum (i.e. on a spaxel-by-spaxel basis), the effect of instrumental dispersion (i.e. $\sigma_{\rm\,INS}$, see Sect.\,\ref{S_datared}) was corrected for by subtracting it in quadrature from the observed line dispersion ($\sigma_{\rm\,obs}$), i.e. $\sigma_{\rm\,line}$\,=\,$\sqrt{\sigma_{\rm\,obs}^{2}\,-\,\sigma_{\rm\,INS}^{2}}$. \\ We use the [S\,II] ratio (i.e. [S\,II]$\lambda$6716/[S\,II]$\lambda$6731, e.g. Fig.\,\ref{M_SII_primary_zoom}, right panel) to estimate the electron density (n$_{e}$) in accordance with the relation of \citet{Sanders2016}.
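Both steps can be sketched as follows; the calibration coefficients below are the commonly quoted \citet{Sanders2016} values for the [S\,II] doublet and should be checked against that paper, while the input widths and ratio are illustrative:

```python
import numpy as np

def correct_sigma(sigma_obs, sigma_ins):
    """Remove instrumental broadening in quadrature:
    sigma_line = sqrt(sigma_obs**2 - sigma_ins**2).
    Unresolved lines (sigma_obs <= sigma_ins) are returned as NaN."""
    sigma_obs = np.asarray(sigma_obs, dtype=float)
    diff = sigma_obs ** 2 - sigma_ins ** 2
    return np.where(diff > 0.0, np.sqrt(np.abs(diff)), np.nan)

def ne_from_sii(ratio, a=0.4315, b=2107.0, c=627.1):
    """Electron density (cm^-3) from R = [S II]6716/6731,
    n_e = (c*R - a*b) / (a - R); coefficients as commonly quoted
    from Sanders et al. (2016) -- an assumption to verify."""
    return (c * ratio - a * b) / (a - ratio)

# e.g. sigma_obs = 100 km/s with sigma_INS = 60 km/s -> sigma_line = 80 km/s
sigma_line = correct_sigma([100.0, 50.0], 60.0)

# a doublet ratio of 1.2 corresponds to a moderate density of roughly 200 cm^-3
ne = ne_from_sii(1.2)
```

Ratios near the low-density limit of the calibration ($\sim$1.45) yield only upper limits on n$_{e}$, consistent with the loose bound quoted later for the primary component.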
\noindent To investigate the ionising mechanisms across the field of view, for each component used to model the emission features (forbidden and narrow Balmer lines), the maps of the four line ratios used in the standard BPT diagnostic diagrams \citep{Baldwin1981} were also generated. The maps are presented in Appendix\,\ref{Appendix_B} (Figures from \ref{M_BPT_primary_zoom} to \ref{M_BPT_second_megara}) and the diagnostic diagrams in Figures \ref{Fig_BPT_primary} and \ref{Fig_BPT_secondary}. For the two spatially resolved components (namely primary and secondary), the typical values of the kinematics and line ratios are summarised in Tables \ref{T_ism_result_primary} and \ref{Table_ism_S_result_secondary}. \begin{figure*} \centering \includegraphics[trim = 1.8cm 15.5cm 1.8cm 5.55cm, clip=true, width=1.\textwidth]{figures/Vp_BPTs_primary_polar_low_ne.pdf} \includegraphics[trim = 1.8cm 15.5cm 1.8cm 5.55cm, clip=true, width=1.\textwidth]{figures/Vp_BPTs_primary_butterfly_low_ne.pdf} \caption{Optical standard BPT diagrams for the primary component for the gas distributed in the polar direction and that in the central region at high-$\sigma$ (top and bottom panels, respectively) obtained from MUSE data. Grey circles mark the data points presented in this paper. Black lines in all diagrams represent the dividing curves between H\,II star-forming regions, Seyferts, and LINERs from \citet{Kewley2006} and \citet{Kauffmann2003}. Pink boxes show the predictions of photoionisation models by pAGB stars for Z\,=\,Z$_{\sun}$, a burst age of 13 Gyr \citep{Binette1994} and ionisation parameter values (log\,U) between -3 and -4. Log\,U is typically -3.5 in LINERs \citep{Netzer2015}. The predictions of shock-ionisation models are overlaid in each diagram. Specifically, following \citet{Cazzoli2018} we considered shock+precursor grids from \citet{Groves2004} with Z\,=\,Z$_{\odot}$ and for different n$_{e}$.
Blue and red curves correspond to models with n$_{e}$\,=\,1\,cm$^{-3}$ and n$_{e}$\,=\,100\,cm$^{-3}$, respectively (see also Sect.\,\ref{S_result_primary_MUSE}). We plotted the values corresponding to the minimum and maximum preshock magnetic field allowed in each model. Also, we consider only shock velocities from 100 to 500 km\,s$^{-1}$ (yellow dashed lines), as larger $\sigma$ are not observed for the primary component (Sect.\,\ref{S_result_primary_MUSE}). The dividing line between weak-[O\,I] and strong-[O\,I] LINERs \citep{Filippenko1992} is marked in black with a dashed line (right panels). In all diagrams, green symbols indicate the average values calculated in the polar (cross) and central (square) regions; as a reference, the cyan star is the typical value in the nucleus (average within the PSF region). In the top panels, the pink diamond, black triangle and black upside-down triangle are the average BPT-values for the faint features: the arm, the east and south-east clumps, respectively (see Sect.\,\ref{S_result_lowSB}). These features are not detected in [O\,I], hence no symbols are displayed in the corresponding diagnostic diagrams.} \label{Fig_BPT_primary} \end{figure*} \begin{figure*} \centering \includegraphics[trim = 1.8cm 15.5cm 1.8cm 5.55cm, clip=true, width=1.\textwidth]{figures/Vp_BPTs_secondary_noprec.pdf} \caption{The same as Fig.\,\ref{Fig_BPT_primary} but for the secondary component. Here, we considered shock models (no precursor); the blue and red curves correspond to models with n$_{e}$\,=\,100\,cm$^{-3}$ and n$_{e}$\,=\,1000\,cm$^{-3}$, respectively (see also Sect.\,\ref{S_result_secondary}). The green triangle indicates the average value of the line-ratio distribution.} \label{Fig_BPT_secondary} \end{figure*} \begin{figure} \centering \includegraphics[trim = 0.cm 0cm 8.5cm 0cm, width=.495\textwidth, clip=true]{maps_abs/EW_NaD_OIII_cont_p2.pdf} \caption{NaD EW map in \AA-units from the MUSE cube.
The dot-dashed square indicates the MEGARA field of view. The black solid line indicates the major axis of the stellar rotation (Table\,\ref{T_kinematics}). The contours indicate the region with enhanced velocity dispersion of the emission lines (see Sect.\,\ref{S_result_primary_MUSE_butterfly} and e.g. Fig.\,\ref{Fig_combo_OIII}). } \label{Fig_EW_NaD_abs} \end{figure} \begin{figure} \centering \includegraphics[width=0.495\textwidth]{maps_abs/Vp_ajustes_NaD_18_10_crop.pdf} \caption{Example of absorption line spectra (black) after stellar subtraction (Sect.\,\ref{S_stellar_cont}) and their modelling from the central region of MEGARA data (R\,=\,1$\farcs$45, i.e. 160\,pc). The grey band indicates the spectral range masked during the fitting due to residuals from stellar subtraction (see Sect.\,\ref{S_abs_lin_mod}). Orange vertical lines and red and green curves are as in Fig.\,\ref{Fig_spectral_model}, as well as both vertical and horizontal yellow lines.} \label{Fig_spectral_model_NaD} \end{figure} \begin{figure*} \centering \includegraphics[width=1.\textwidth]{maps_abs/Vp_MEGARA_NaD_cont_v3.pdf} \caption{The neutral gas velocity field ($\mbox{km\,s}^{-1}$), velocity dispersion ($\mbox{km\,s}^{-1}$) and flux intensity (mJy) maps for the single kinematic component used to model NaD. Black lines are as in Fig.\,\ref{M_SII_second_megara}. Specifically, the black solid line indicates the major axis of the stellar rotation (Table\,\ref{T_kinematics}). The dashed line indicates the orientation of the radio-jet (Table\,\ref{T_properties}). The contours indicate the region at high velocity dispersion (see Sect.\,\ref{S_outflow_kin_neutral} for details). } \label{M_NaD_megara} \end{figure*} \subsubsection{Sodium doublet modelling} \label{S_abs_lin_mod} The wavelength coverage of our MUSE and MEGARA data sets allows us to probe the NaD absorption doublet. This feature originates both in the cold-neutral ISM of galaxies and in the atmospheres of old stars (e.g. K-type giants).
We modelled the doublet in the ISM-cubes (after the stellar subtraction, Sect.\,\ref{S_stellar_cont}) to obtain the neutral gas kinematics and hence to infer whether the cold neutral gas is either participating in the ordinary disc rotation or entrained in non-rotational motions such as outflows (see e.g. \citealt{Cazzoli2014,Cazzoli2016}).\\ For MUSE data the NaD is detected at S/N\,$>$\,3 up to R\,$\sim$\,25$\farcs$7 (2.8 kpc); however, most of the absorption (95 per cent of the spaxels at S/N\,$>$\,3) is concentrated within the inner $\sim$\,16$\arcsec$ (1.8 kpc). The NaD equivalent width (EW) map is presented in Fig.\,\ref{Fig_EW_NaD_abs}. The values range from 0.4 to 3.3 \AA \ (1.1\,\AA, on average).\\ \noindent We prefer to model the NaD doublet on a spaxel-by-spaxel basis in MEGARA data, as it generally has a higher S/N with respect to that of MUSE data. We consider one kinematic component (a Gaussian function for each line), and we masked the wavelength range between 5900 and 5920 \AA \ due to some residuals from the stellar subtraction. \\ To infer the presence of a second component we inspect the map of the residuals (i.e. $\varepsilon$$_{\rm line}$/$\varepsilon$$_{\rm cont}$), as done for the emission lines (Sect.\,\ref{S_em_lin_mod}). However, the values are in the range 0.7\,-\,2 (1.2 on average), hence there is no strong indication of the need for multiple components to fit the doublet.\\ Figure\,\ref{Fig_spectral_model_NaD} shows an example of the modelling of the NaD doublet absorption, and in Fig.\,\ref{M_NaD_megara} we present the corresponding kinematic and absorbed-flux maps. The results for the NaD absorption doublet are presented in Sect.\,\ref{S_results_NaD} and discussed in Sect.\,\ref{S_outflow_kin_neutral}.
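A single-component NaD model, with the two doublet lines sharing velocity and width and the line ratio bounded between 1 and 2, can be sketched as follows (a minimal illustration with \texttt{scipy}; the depths, widths and noise are invented, and the rest wavelengths are the standard air values):

```python
import numpy as np
from scipy.optimize import curve_fit

# Rest-frame NaD doublet separation (Angstrom, air wavelengths)
SEP = 5895.92 - 5889.95

def nad_doublet(wave, depth_red, ratio, cen_red, sigma):
    """One kinematic component: two Gaussian absorption lines sharing
    velocity and width; `ratio` (blue/red, i.e. R_NaD) is kept in
    [1, 2] through the fit bounds below."""
    blue = ratio * depth_red * np.exp(-0.5 * ((wave - (cen_red - SEP)) / sigma) ** 2)
    red = depth_red * np.exp(-0.5 * ((wave - cen_red) / sigma) ** 2)
    return 1.0 - blue - red

rng = np.random.default_rng(2)
wave = np.linspace(5880.0, 5910.0, 300)
spec = nad_doublet(wave, 0.2, 1.5, 5896.0, 1.2) + rng.normal(0.0, 0.005, wave.size)

popt, pcov = curve_fit(
    nad_doublet, wave, spec,
    p0=[0.15, 1.5, 5896.0, 1.5],
    bounds=([0.0, 1.0, 5890.0, 0.3], [1.0, 2.0, 5902.0, 5.0]),
)
depth_red, r_nad, cen_red, sigma = popt
```

The bounds enforce the optically thick/thin limits on R$_{\rm NaD}$ quoted above, while the shared centroid and width implement the single kinematic component.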
\section{Main observational results} \label{S_main_results} In Sect.\,\ref{S_stellar_kin} we present the results from the \texttt{pPXF} stellar kinematics analysis of both MUSE and MEGARA data.\\ \noindent The emission lines detected in both MUSE and MEGARA ISM-cubes are [S\,II], H$\alpha$-[N\,II] and [O\,I], whereas H$\beta$ and [O\,III] are covered by MUSE data only (see Sect.\,\ref{S_datared}). In both data sets, the maximum number of kinematic components used to model the forbidden lines and narrow H$\alpha$ is three (Sect.\,\ref{S_em_lin_mod}). These components have different kinematics and spatial distributions, supporting that they are distinct components. In Sect.\,\ref{ISM_kinematics_MUSE} we present the spatial distributions of the kinematics and ISM properties (e.g. line ratios and electron density), measured for each of the three components in MUSE data. The comparison between MUSE and MEGARA results is presented in Sect.\,\ref{S_MUSE_vs_MEGARA}. An additional broad H$\alpha$ component originating in the BLR of the AGN has been used to model the spectra within the nuclear region (Sect.\,\ref{S_em_lin_mod}). Its properties are presented in Sect.\,\ref{S_BLR_detection} for both data sets.\\ Finally, Sect.\,\ref{S_results_NaD} summarises the main results from the modelling of the NaD absorption (Sect.\,\ref{S_abs_lin_mod}). \subsection{Stellar Kinematics} \label{S_stellar_kin} \begin{figure} \centering \includegraphics[width=.495\textwidth]{figures/Vp_PVPS_v2.pdf} \caption{Position-Velocity (P-V, top) and Position-Velocity Dispersion (P-$\sigma$, bottom) curves of the stellar component of NGC\,1052 from MUSE data (Sect.\,\ref{S_stellar_kin}). Both curves were obtained considering a pseudo-slit of 1$\arcsec$-width aligned with the major axis of the rotation (i.e. 112$^{\circ}$, Table\,\ref{T_kinematics}). Velocities are centred on the kinematic center, and the radius is calculated as the distance from the photometric center.
In the top panel, blue and red symbols indicate the approaching (negative velocities) and receding sides (positive velocities) of the rotation, respectively. Green lines mark the R$_{\rm eff}$ (21$\farcs$9, i.e. 2.4\,kpc, Table\,\ref{T_properties}) and R$_{\rm eff}$/8 (2$\farcs$75, i.e. 303\,pc, Sect.\,\ref{S_stellar_kin}), as labelled on the top. Grey dashed lines show zero-points for position and velocity, as reference. The field of view of MEGARA observations is marked with orange dotted lines. Note that the typical uncertainties (extracted from the uncertainties estimated with \texttt{pPXF}) on the velocity and velocity dispersion measurements are generally $\leq$\,12\,$\mbox{km\,s}^{-1}$\, and $\leq$\,14\,$\mbox{km\,s}^{-1}$, respectively.} \label{Fig_PVPS} \end{figure} \begin{table} \caption[kinsummary]{Stellar kinematic properties of NGC\,1052 from MUSE and MEGARA.} \begin{center} \begin{tabular}{l c c c c } \hline \hline IFU$_{\rm FoV}$ & $\Delta$V & PA & $\sigma_{\rm c}$ & $\sigma$ \\ & $\mbox{km\,s}^{-1}$ & \degr & $\mbox{km\,s}^{-1}$ & $\mbox{km\,s}^{-1}$ \\ \hline MEGARA & 78\,$\pm$\,3 & 112\,$\pm$\,6 & 215\,$\pm$\,13 & 201\,$\pm$\,16 \\ MUSE$_{\rm MEGARA}$ & 75\,$\pm$\,9 & 122\,$\pm$\,5 & --- & 180 $\pm$ 6 \\ MUSE & 167\,$\pm$\,19 & 122\,$\pm$\,10 & 201\,$\pm$\,10 & 145\,$\pm$\,22 \\ \hline \end{tabular} \label{T_kinematics} \end{center} \tiny{Notes. --- \lq $\Delta$V\rq: observed velocity amplitude; PA: position angle of the major kinematic axis; \lq $\sigma_{c}$\rq \ and \lq $\sigma$\rq \ are the central velocity dispersion (at R\,$<$\,R$_{eff}$/8, i.e. 303\,pc) and the mean velocity dispersion, respectively (see Sect.\,\ref{S_stellar_kin}). For velocity dispersion measurements the quoted uncertainties are 1 standard deviation.
The \lq MUSE$_{\rm MEGARA}$\rq \ line indicates that the values are measured using MUSE data but over the field of view (FoV) of MEGARA.} \end{table} \noindent As explained in Sect.\,\ref{S_stellar_cont}, we used \texttt{pPXF} to fit the stellar continuum of the spectra for both MEGARA and MUSE datacubes. The maps of the stellar kinematics (velocity and velocity dispersion) for both data sets are shown in Fig.\,\ref{Fig_stellar_kin} and the main properties are summarised in Table\,\ref{T_kinematics}.\\ \noindent The stellar velocity field (Fig.\,\ref{Fig_stellar_kin}, left panels) shows the typical spider-pattern consistent with a rotating disc, at both the large and small spatial scales mapped by our IFS data. The peak-to-peak velocity ($\Delta$V, Table\,\ref{T_kinematics}) from MUSE (MEGARA) data is 167\,$\pm$\,19\,$\mbox{km\,s}^{-1}$\, (78\,$\pm$\,3\,$\mbox{km\,s}^{-1}$) at a galactocentric distance of 40$\arcsec$ (4$\arcsec$), which corresponds to 4.4 kpc (0.4 kpc). The $\Delta$V from the MUSE map within the MEGARA footprint, 75\,$\pm$\,9\,$\mbox{km\,s}^{-1}$\, (Table\,\ref{T_kinematics}), is consistent with that from the MEGARA cube.\\ \noindent The position angles of the stellar major kinematic axis, estimated at the largest scales for MUSE and MEGARA data, are (122\,$\pm$\,10)$^{\circ}$ and (112\,$\pm$\,6)$^{\circ}$ measured north-eastwards, respectively (Table\,\ref{T_kinematics}). Both measurements indicate that this axis is aligned with the photometric major axis (112.7$^{\circ}$, Table\,\ref{T_properties}). \\ \noindent Overall, the stellar velocity dispersion varies from 75 to 235\,$\mbox{km\,s}^{-1}$\, for MUSE and from 100 to 250\,$\mbox{km\,s}^{-1}$\, for MEGARA (Fig.\,\ref{Fig_stellar_kin}, right panels).
As expected in the case of a rotating disc, the stars exhibit a centrally peaked velocity dispersion map, with a maximum value of 233\,$\pm$\,6\,$\mbox{km\,s}^{-1}$\, and 241\,$\pm$\,4 $\mbox{km\,s}^{-1}$\, as measured from the MUSE and MEGARA maps, respectively, in positional agreement within uncertainties with the nucleus (considered as the photometric center, i.e. the cross in all maps). \\ \noindent Following \citet{Cappellari2013} for the ATLAS$^{\rm 3D}$ legacy project, the central velocity dispersion ($\sigma_{\rm c}$) is calculated at a distance corresponding to R$_{eff}$/8, which is R\,$<$\,2$\farcs$75 (303 pc) for NGC\,1052. The value for the central velocity dispersion is 201\,$\pm$\,10\,$\mbox{km\,s}^{-1}$\, (215\,$\pm$\,13\,$\mbox{km\,s}^{-1}$) whereas the extra-nuclear mean velocity dispersion is 145\,$\pm$\,22\,$\mbox{km\,s}^{-1}$\, (201\,$\pm$\,16\,$\mbox{km\,s}^{-1}$) for MUSE (MEGARA) data (see Table\,\ref{T_kinematics}). The mean velocity dispersion from MUSE data within the MEGARA footprint is 180\,$\pm$\,6 $\mbox{km\,s}^{-1}$\, (Table\,\ref{T_kinematics}), hence consistent within uncertainties with that measured directly from the MEGARA velocity dispersion map.\\ Besides the main point-symmetric disc-like pattern, in MUSE data towards the north-east and south-west and up to R\,$\sim$\,30$\arcsec$ (i.e. $\sim$\,3.3\,kpc) we observe a smooth local enhancement of the velocity dispersion values. This enhancement is of about $150-180$\,$\mbox{km\,s}^{-1}$\, (hence above the average, Table\,\ref{T_kinematics}) but does not match features in either the continuum or ISM maps (Fig.\,\ref{Fig_MM_continuum} and Appendix\,\ref{Appendix_B}), and it is not an artefact from cross-talk effects.\\ \noindent A velocity dispersion higher ($\sim$\,220\,$\mbox{km\,s}^{-1}$) than the mean values seems to be present only in MEGARA data at R\,$\sim$\,5$\arcsec$, prominent only to the east and to the west.
Given its position, this feature is likely caused by the lower S/N of the spaxels near the edges (see Sect.\,\ref{S_stellar_cont}). \\ \noindent We obtained position-velocity and position-dispersion diagrams, i.e. the \lq P-V\rq \ and \lq P-$\sigma$\rq \ diagrams in Fig.\,\ref{Fig_PVPS}, in a 1$\arcsec$-width pseudo-slit along the major axis of rotation listed in Table\,\ref{T_kinematics}. We checked that in the (central) region mapped by both data sets the kinematics and curves are in agreement within uncertainties (Table\,\ref{T_kinematics}). However, as MEGARA observations cover only the innermost region (see Fig.\,\ref{Fig_MM_continuum} and Sect.\,\ref{S_datared}), in this work we will consider the kinematics from the MUSE cube as the reference for the stellar component.\\ The large-scale rotation curve (Fig.\,\ref{Fig_PVPS}, top) is characterised by two plateaus. The first flattening is at a galactocentric distance of $\sim$\,2$\arcsec$, i.e. 220\,pc, with velocities of $\sim$\,70\,$\mbox{km\,s}^{-1}$. At larger distances, between 10$\arcsec$ and 20$\arcsec$, the curve rises slowly, reaching values up to 140\,$\mbox{km\,s}^{-1}$, and then finally flattens at 30$\arcsec$.\\ \noindent The velocity dispersion profile shows a sharp peak within the innermost 3$\arcsec$ (i.e. 330 pc) without an exponential decline up to the largest distances mapped by MUSE (Fig.\,\ref{Fig_PVPS}, bottom).
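The pseudo-slit extraction can be sketched as follows (a minimal illustration on a simplified x\,=\,E, y\,=\,N spaxel grid; the function name and the toy coordinates are invented for the example):

```python
import numpy as np

def pseudo_slit(x, y, values, pa_deg, width=1.0):
    """Select spaxels inside a pseudo-slit of `width` (same units as
    x and y) through the origin at position angle `pa_deg` (from north
    towards east, on a simplified x = E, y = N grid). Returns the signed
    position along the slit and the corresponding values."""
    pa = np.deg2rad(pa_deg)
    along = x * np.sin(pa) + y * np.cos(pa)    # signed distance along the slit
    across = x * np.cos(pa) - y * np.sin(pa)   # perpendicular offset
    keep = np.abs(across) <= width / 2.0
    return along[keep], values[keep]

# e.g. an east-west slit (PA = 90 deg) keeps spaxels with |y| <= width/2
x = np.array([2.0, 0.0])
y = np.array([0.0, 3.0])
vel = np.array([10.0, 20.0])
pos, v = pseudo_slit(x, y, vel, 90.0, width=1.0)
```

Binning `pos` and plotting the kept velocities (or dispersions) against it yields the P-V and P-$\sigma$ curves.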
\subsection{Kinematics and fluxes of the different ISM components detected by MUSE} \label{ISM_kinematics_MUSE} \begin{sidewaystable} \caption{Summary of measurements for the primary component from MUSE and MEGARA.} \label{T_ism_result_primary} \centering \begin{tabular}{l c c c c c c c c } \hline\hline & \multicolumn{2}{c}{whole FoV} & \multicolumn{3}{c}{Polar Emission} & \multicolumn{3}{c}{Central region (high-$\sigma$)}\\ Line & $\sigma$ & BPT & $\sigma$ & $\Delta$V & BPT & $\sigma$ & $\Delta$V & BPT \\ \hline & $\mbox{km\,s}^{-1}$ & & $\mbox{km\,s}^{-1}$ & $\mbox{km\,s}^{-1}$ & & $\mbox{km\,s}^{-1}$ & $\mbox{km\,s}^{-1}$ & \\ \hline H$\beta$ & 60\,(52)\,$\pm$\,51 & -- & 47\,(47)\,$\pm$\,25 &247\,$\pm$\,13 & -- & 128\,(118)\,$\pm$\,34 & 358\,$\pm$\,51 & -- \\ $[$O\,III$]$ & 66\,(62)\,$\pm$\,39 & 0.47\,(0.46)\,$\pm$\,0.16 & 54\,(57)\,$\pm$\,21 & 251\,$\pm$\,3 & 0.46\,(0.45)\,$\pm$\,0.16 & 121\,(114)\,$\pm$\,27 & 215\,$\pm$\,6 & 0.48\,(0.48)\,$\pm$\,0.15 \\ $[$O\,I$]$ & 204\,(142)\,$\pm$\,151 & -0.36\,(-0.44)\,$\pm$\,0.21 & 115\,(110)\,$\pm$\,32 & 207\,$\pm$\,11 & -0.48\,(-0.48)\,$\pm$\,0.07 & 351\,(358)\,$\pm$\,106 & 231\,$\pm$\,34 & -0.25\,(-0.30)\,$\pm$\,0.22\\ H$\alpha$-$[$N\,II$]$ & 66\,(54)\,$\pm$\,47 & -0.02\,(-0.03)\,$\pm$\,0.07 & 50\,(49)\,$\pm$\,17 & 190\,$\pm$\,3 & -0.03\,(-0.03)\,$\pm$\,0.06 & 149\,(134)\,$\pm$\,52 & 295\,$\pm$\,6 & 0.06\,(0.05)\,$\pm$\,0.05 \\ $[$S\,II$]$ & 58\,(48)\,$\pm$\,46 & 0.08\,(-0.08)\,$\pm$\,0.06 & 44\,(44)\,$\pm$\,21 & 200\,$\pm$\,16 & 0.07\,(0.07)\,$\pm$\,0.06 & 143\,(130)\,$\pm$\,44 & 260\,$\pm$\,15 & 0.12\,(0.12)\,$\pm$\,0.04 \\ \hline $[$O\,I$]$ & 157\,(121)\,$\pm$\,115 & -0.84\,(-0.81)\,$\pm$\,0.34 & 101(94)\,$\pm$\,33 &520\,$\pm$\,117 & -0.63(-0.66)\,$\pm$\,0.19 & 282(276)\,$\pm$\,66 & 175\,$\pm$\,92 & -0.53(-0.59)\,$\pm$\,0.23 \\ H$\alpha$-$[$N\,II$]^{\dagger}$ & -- & 0.02(0.03)\,$\pm$\,0.04 & -- & -- & 0.03(0.03)\,$\pm$\,0.04 & -- & -- & 0.01(0.01)\,$\pm$\,0.03 \\ $[$S\,II$]$ & 154\,(138)\,$\pm$\,69 &
0.17(0.17)\,$\pm$\,0.06 & 78(78)\,$\pm$\,7 &192\,$\pm$\,80 & 0.14(0.15)\,$\pm$\,0.06 & 170(155)\,$\pm$\,65 & 259\,$\pm$\,97 & 0.17(0.18)\,$\pm$\,0.06 \\ \hline \end{tabular} \tiny{Notes. --- \lq $\Delta$V\rq: observed velocity amplitude; average velocity dispersion and value of the average line ratio used for the standard BPTs in Fig.\,\ref{Fig_BPT_primary} in log units. The latter are reported in correspondence with the numerator of the standard line ratios. The values are reported for the different spatial scales labelled on the top, except for the \lq whole field of view (FoV)\rq \ for which we did not report $\Delta$V as it coincides with that of the polar emission (indeed the most extreme velocity values are seen at large galactocentric distances). For velocity dispersion and line-ratio measurements the quoted uncertainties are 1 standard deviation. $^{\dagger}$ [S\,II] and H$\alpha$-[N\,II] lines were fixed to have the same kinematics; only the line ratios differ.} \end{sidewaystable} \noindent As mentioned at the end of Sect.\,\ref{S_line_mod}, Tables \ref{T_ism_result_primary} and \ref{Table_ism_S_result_secondary} summarise the most important properties of the two spatially resolved components (primary and secondary). Figures \ref{Fig_BPT_primary} and \ref{Fig_BPT_secondary} show the location of the line ratios for the narrow and secondary emission line components onto standard \lq BPT diagrams\rq \ \citep{Baldwin1981}. Note that a direct comparison of gas and stellar motions for the primary component is presented in Fig.\,\ref{P_kin}, which includes the P-V and P-$\sigma$ along three axes, i.e. the major and minor axes of the host galaxy, and the radio jet.\\ In the following we describe the overall results for each component.
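As an illustration of how spaxels are placed on the [N\,II]-based BPT diagram, a minimal classifier using the widely quoted dividing curves of \citet{Kauffmann2003} and the Kewley et al. (2001) maximum-starburst line is sketched below (a simplification: the actual diagrams also use the \citet{Kewley2006} Seyfert/LINER divisions, which are not reproduced here, and the coefficients should be checked against those papers):

```python
def kewley01_nii(x):
    """Kewley et al. (2001) maximum-starburst line, [N II]-based BPT:
    log([O III]/Hb) = 0.61 / (log([N II]/Ha) - 0.47) + 1.19."""
    return 0.61 / (x - 0.47) + 1.19

def kauffmann03_nii(x):
    """Kauffmann et al. (2003) empirical star-forming line:
    log([O III]/Hb) = 0.61 / (log([N II]/Ha) - 0.05) + 1.30."""
    return 0.61 / (x - 0.05) + 1.30

def classify_nii(log_nii_ha, log_oiii_hb):
    """Coarse [N II]-BPT class for one spaxel:
    'HII', 'composite' or 'AGN/LINER'."""
    if log_nii_ha < 0.05 and log_oiii_hb < kauffmann03_nii(log_nii_ha):
        return "HII"
    if log_nii_ha < 0.47 and log_oiii_hb < kewley01_nii(log_nii_ha):
        return "composite"
    return "AGN/LINER"
```

Applying such a classifier spaxel by spaxel, with the analogous [S\,II]- and [O\,I]-based curves, produces the resolved ionisation maps discussed below.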
\subsubsection{Overall properties of the primary component} \label{S_result_primary_MUSE} The primary component is the narrowest among the three detected (Sect.\,\ref{S_line_mod}), with $\sigma$\,$\leq$\,66\,$\mbox{km\,s}^{-1}$\, on average (except for [O\,I], for which it is 204\,$\mbox{km\,s}^{-1}$). Exceptions to this general behaviour are a few spaxels ($<$\,65) mostly within the PSF area (circle in all maps in Appendix\,\ref{Appendix_B}, see also Sect.\,\ref{S_data_analysis}). The velocities are generally $\mid$V$\mid$\,$<$\,350\,$\mbox{km\,s}^{-1}$, except for H$\beta$, for which values up to 450\,$\mbox{km\,s}^{-1}$\, are found (these extreme values are observed only towards the north-west). \\ \noindent The kinematic maps (both velocity and velocity dispersion) lack any symmetry typical of a rotation-dominated system (e.g. left and central panels of Fig.\,\ref{Fig_combo_OIII}). A clearly distinguishable feature in the velocity dispersion map is the $\sigma$-enhancement crossing the galaxy from east to west (along the major axis of rotation) with a \lq butterfly-shape\rq \ (contours in Fig.\,\ref{Fig_combo_OIII} and Figures\,\ref{M_OIII_primary_zoom} to \ref{M_SII_primary_zoom}). The gas here presents complex motions that differ markedly from the gas elsewhere. \\ \noindent For the identification of this region with high-$\sigma$, we consider as a reference the average velocity dispersion in two square regions of side 15$\arcsec$ (1.65\,kpc) in the outer part of the maps, lacking any peculiar $\sigma$ feature. Specifically, these are at a distance of 15$\arcsec$ from the photometric center towards the north-east and south-west. In the case of [O\,I], the box size and distance are 5$\arcsec$ and 8$\arcsec$ (550 and 880 pc), respectively, due to the decrease in S/N already visible at a radius of 10$\arcsec$ (1.1\,kpc).\\ The final threshold (i.e.
2$\sigma$ above the average velocity dispersion) is 90\,$\mbox{km\,s}^{-1}$\, for all the emission lines but [O\,I], for which it is 180\,$\mbox{km\,s}^{-1}$. Hereafter, we consider as polar\footnote{Throughout this paper, the polar direction (NE-SW) corresponds to that of the minor photometric kinematic axis. It is not related to the direction of the AGN ionisation cones.} emission all the spaxels with velocity dispersion below those thresholds (Sect.\,\ref{S_result_primary_MUSE_polar}). These are mostly distributed along the minor axis of rotation, i.e. the NE-SW direction. The properties of the intriguing feature with high-$\sigma$ in the central region of NGC\,1052 will be described separately from those of the emitting gas organised along the polar direction (Sect.\,\ref{S_result_primary_MUSE_butterfly}). \\ \noindent Maps of line fluxes (Fig.\,\ref{Fig_combo_OIII} and Figures\,\ref{M_OIII_primary_zoom} to \ref{M_SII_primary_zoom}, right panels) show a similar general morphology, which is very different from the smooth continuum flux (Fig.\,\ref{Fig_MM_continuum}). More specifically, the gas emission within the inner 3$\arcsec$ resembles a mini-spiral, while it appears extended along the NE-SW direction with some filaments and irregularities, especially relevant up to R\,$\sim$\,10$\arcsec$ (mostly within the central region at high-$\sigma$). However, flux maps do not show any \lq butterfly-morphology\rq \ matching that of the innermost region at high velocity dispersion. Outside the inner 10$\arcsec$\,$\times$\,9$\arcsec$ (i.e. 1.1\,kpc\,$\times$\,1.0\,kpc, see Sect.\,\ref{S_result_primary_MUSE_butterfly}), the flux maps do not reveal any peculiar morphology (e.g. filaments or clumps). Taking all this into account, we prefer to describe the morphology of the line fluxes only in this section and not separately for the polar and central regions (Sections \ref{S_result_primary_MUSE_polar} and \ref{S_result_primary_MUSE_butterfly}).
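The threshold construction can be sketched as follows (a minimal illustration; the toy arrays stand in for the $\sigma$-map and the two quiescent reference boxes):

```python
import numpy as np

def high_sigma_threshold(ref_boxes, nsig=2.0):
    """Threshold for flagging high-dispersion spaxels: `nsig` standard
    deviations above the mean dispersion of quiescent reference boxes."""
    ref = np.concatenate([np.ravel(b) for b in ref_boxes])
    return ref.mean() + nsig * ref.std()

# Two toy reference boxes (km/s) with a combined mean of 50 km/s
box_ne = np.full((2, 2), 50.0)
box_sw = np.array([[40.0, 60.0], [40.0, 60.0]])
thr = high_sigma_threshold([box_ne, box_sw])

sigma_map = np.array([[60.0, 90.0]])
high_sigma = sigma_map > thr   # flags the 90 km/s spaxel only
```

Spaxels above the threshold define the central high-$\sigma$ region, while those below it are assigned to the polar emission.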
\\ \noindent At all scales, line ratios from the standard BPT diagnostics indicate LINER-like ionisation (see Table\,\ref{T_ism_result_primary} for typical values and Fig.\,\ref{M_BPT_primary_zoom}). These line ratios will be discussed in Sect.\,\ref{S_ionisation_structure} together with the weak-[O\,I] and strong-[O\,I] LINER classification by \citet{Filippenko1992}, as in \citet{Cazzoli2018}.\\ The [S\,II] line ratio varies from 1.2 to 1.7 (Fig.\,\ref{M_SII_primary_zoom}) excluding extreme values (i.e. the 5 per\,cent at each end of the line-ratio distribution). This ratio is 1.47\,$\pm$\,0.2 on average, indicating a gas with relatively low density (n$_{e}$\,$<$\,100\,cm$^{-3}$). \subsubsection{Polar emission on kpc scale} \label{S_result_primary_MUSE_polar} \noindent The velocity field of the primary component for all the lines shows a similar overall pattern (see Fig.\,\ref{Fig_combo_OIII} for [O\,III]), with well defined blue and red sides oriented along the minor axis of rotation (polar direction, i.e. NE-SW). Despite that, the velocities do not show rotating-disc features (a spider diagram) in any emission line (Fig.\,\ref{Fig_combo_OIII} and Appendix\,\ref{Appendix_B}).\\ The region with negative velocities extends from the photometric center towards the north-east up to 30$\arcsec$, i.e. 3.3\,kpc (12$\arcsec$, i.e. 1.3\,kpc for [O\,I], Figures \ref{Fig_combo_OIII} and \ref{M_OI_primary_zoom}, left) with an opening angle of $105^{\circ}$ as measured from the velocity maps of [O\,III]. \noindent The most blueshifted value of the observed velocity field is $\sim$\,$-$250\,$\mbox{km\,s}^{-1}$, located at a distance of $\sim$\,11$\farcs$5, i.e. 1.3\,kpc, as measured from the [O\,III] line (Fig.\,\ref{Fig_combo_OIII}, top-left). Similar negative velocities (within uncertainties) are seen for all the other emission lines.
The unique exception is [O\,I], for which the maximum blueshifted velocity is of about $-$250\,$\mbox{km\,s}^{-1}$\, at a radius of 7$\farcs$5 (825\,pc) in the NE-direction (e.g. Fig.\,\ref{Fig_combo_OIII}, top-left). \\ It is worth noting that these blueshifted velocities do not decrease smoothly to their minimum. Instead, the maps show three concentric arcs which do not cross each other (see e.g. Fig.\,\ref{Fig_combo_OIII}). These arcs are not symmetric, since they are absent where positive velocities are observed (see e.g. Figures \ref{Fig_combo_OIII} and \ref{P_kin}), i.e. towards the south-west and up to 25$\arcsec$, corresponding to 2.75\,kpc (15$\arcsec$, i.e. 1.65\,kpc for [O\,I]). We checked the possibility that extinction due to the galaxy dusty stellar disc might cause this asymmetry. By comparing the velocity maps of the ionised gas and that of the ratio between H$\alpha$ and H$\beta$ fluxes, we did not find evident dusty structures at the location of the arcs. Hence, we excluded this possibility. \\ \noindent The average velocity dispersion is typically of about 50\,$\mbox{km\,s}^{-1}$, varying between 44\,$\pm$\,21 and 54\,$\pm$\,21 $\mbox{km\,s}^{-1}$\, for [S\,II] and [O\,III], respectively (Table\,\ref{T_ism_result_primary}). The [O\,I] emission represents the exception, with a velocity dispersion of 115\,$\pm$\,32\,$\mbox{km\,s}^{-1}$\, on average (Table\,\ref{T_ism_result_primary} and Fig.\,\ref{M_OI_second_zoom}).\\ \noindent The [N\,II]/H$\alpha$, [S\,II]/H$\alpha$, [O\,I]/H$\alpha$ line ratios for the large scale gas distribution are rather homogeneous (Fig.\,\ref{M_BPT_primary_zoom}); see values in Table\,\ref{T_ism_result_primary} and the discussion in Sect.\,\ref{S_ionisation_structure}. The typical standard deviation of the values in the maps is 0.08 in log units; the scatter for the [O\,III]/H$\beta$ ratio is larger, about 0.2 (Fig.\,\ref{M_BPT_primary_zoom}, left). Note that low log\,[O\,III]/H$\beta$ ratio values (i.e.
$<$\,0.1), corresponding to both log\,[N\,II]/H$\alpha$ and log\,[S\,II]/H$\alpha$ values of about $-$0.1 to 0.0, are sparsely observed at large distances from the nucleus (R\,$>$\,10$\arcsec$) and towards the north-east and the south, where faint clumpy features are detected (see Sect.\,\ref{S_result_lowSB}). \subsubsection{High-$\sigma$ feature in the central region of NGC\,1052} \label{S_result_primary_MUSE_butterfly} \noindent For all emission lines, the region of higher velocity dispersion, with $\sigma$\,$>$\,90\,$\mbox{km\,s}^{-1}$\, ($\sigma$\,$>$\,180\,$\mbox{km\,s}^{-1}$\, for [O\,I], see e.g. Figures \ref{M_OIII_primary_zoom} and \ref{M_OI_primary_zoom} and Sect.\,\ref{S_result_primary_MUSE}), is located in the innermost parts of the maps, 10$\arcsec$\,$\times$\,9$\arcsec$ (i.e. 1.1\,kpc\,$\times$\,1.0\,kpc, Table\,\ref{T_ism_result_primary}, contours in Fig.\,\ref{Fig_combo_OIII} and Figures \ref{M_OIII_primary_zoom} to \ref{M_SII_primary_zoom}). It is mostly aligned with the major axis of the stellar rotation, with a PA of $\sim$\,124$^{\circ}$ and an opening angle of $\sim$\,70$^{\circ}$, measured from the [O\,III] line (Fig.\,\ref{Fig_combo_OIII}). This region is also partially mapped with MEGARA data (see Section\,\ref{S_MUSE_vs_MEGARA} and Figures \ref{M_SII_primary_megara} and \ref{M_OI_primary_megara}). \\ \noindent The line-emitting gas is spatially resolved with MUSE into streams of filamentary strands with a tail (clearly visible especially in the [O\,III] line-maps, Fig.\,\ref{M_OIII_primary_zoom}) departing from the photometric center towards the south, with velocities up to 150\,$\mbox{km\,s}^{-1}$. \noindent In this central region, the velocity of the narrow component does not closely match the motion of the large scale gas in the polar direction.
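The angular-to-physical conversions quoted throughout (e.g. 10$\arcsec$\,$\times$\,9$\arcsec$\,$\simeq$\,1.1\,kpc\,$\times$\,1.0\,kpc) are consistent with a linear scale of $\simeq$\,110\,pc per arcsec. A minimal sketch of the conversion follows; the scale value (and the distance it implies) is inferred from the quoted numbers rather than stated explicitly in the text:

```python
ARCSEC_TO_RAD = 1.0 / 206265.0   # small-angle conversion factor

def arcsec_to_pc(theta_arcsec, scale_pc_per_arcsec=110.0):
    """Projected physical size, using the ~110 pc/arcsec scale
    implied by the conversions quoted in the text."""
    return theta_arcsec * scale_pc_per_arcsec

def implied_distance_mpc(scale_pc_per_arcsec=110.0):
    """Angular-diameter distance implied by that scale."""
    return scale_pc_per_arcsec / ARCSEC_TO_RAD / 1e6

size_pc = arcsec_to_pc(15.0)     # ~1650 pc = 1.65 kpc, as quoted
dist_mpc = implied_distance_mpc()
```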
\\ Similar patterns in the kinematic maps are seen for all emission lines (Figures \,\ref{M_OIII_primary_zoom}, \ref{M_Ha_primary_zoom}, \ref{M_SII_primary_zoom} and \ref{P_kin}) except [O\,I] (Figures \ref{M_OI_primary_zoom} and \ref{P_kin}), for which we summarise the main results separately. \\ \noindent For the Balmer lines, [O\,III], [N\,II] and [S\,II], large blueshifted (redshifted) velocities, up to $-$290 (260) $\mbox{km\,s}^{-1}$, are detected towards the east and west of the center of the \lq butterfly\rq \ region. The southern tail generally has redshifted velocities from 100 to 180 $\mbox{km\,s}^{-1}$, with a typical velocity dispersion that varies from 90 to 110 $\mbox{km\,s}^{-1}$. The $\sigma$-map shows non-symmetric, clumpy structures in the west strands. Such clumpiness is particularly evident in the H$\alpha$-[N\,II] velocity dispersion map (Fig.\,\ref{M_Ha_primary_zoom}, central panel). \\ \noindent For the [O\,I] line, the morphology of the high-$\sigma$ region is characterised by two well defined regions with a triangular projected area (contours in Fig.\,\ref{M_OI_primary_zoom}). The apex of the east projected triangle is at 2\arcsec \ from the photometric center, whereas that of the west one is at the photometric center.\\ The velocity distribution is skewed to negative (blueshifted) velocities (60\% of the spaxels in this region). The main difference of the [O\,I] kinematics with respect to the common patterns of all other lines is seen to the east. Specifically, at this location in the velocity map, two thick strands are clearly visible, both at negative velocities ($\sim$\,$-$200 and $\sim$\,$-$160 $\mbox{km\,s}^{-1}$\, in the northern and southern directions, respectively; Fig.\,\ref{M_OI_primary_zoom}, left panel).
For other emission lines, at the same spatial location, the velocities are both negative and positive, hence partially kinematically distinct from what is found for [O\,I].\\ The values of the [O\,I] $\sigma$-map increase gradually from the photometric center, both to the east and to the west, from $\sim$\,200\,$\mbox{km\,s}^{-1}$\, up to $\sim$\,500\,$\mbox{km\,s}^{-1}$\, (Fig.\,\ref{M_OI_primary_zoom}, central panel). The highest values are seen in correspondence with the most extreme velocities (e.g. the two strands towards the east). \\ \noindent Apart from the flux features summarised in Sect.\,\ref{S_result_primary_MUSE}, in the innermost 10$\arcsec$ the maps do not reveal any peculiar morphology (e.g. clumps or filaments) but only a gradual decrease towards the external part of this region.\\ \noindent At the location of enhanced $\sigma$, line ratios indicate LINER-like emission (Fig.\,\ref{M_BPT_primary_zoom}). More specifically, the [O\,III]/H$\beta$ line ratio is typically $>$\,0.1 in log units (on average 0.46\,$\pm$\,0.16, Table\,\ref{T_ism_result_primary}), except for an elongated region from the east to the south-west crossing the photometric center. At this location, log\,[O\,III]/H$\beta$ varies between 0.005 and 0.3. This peculiar structure does not match any feature of any other map for the narrow component. However, it overlaps with the location of the secondary component. Any putative link between the properties of these two components will be discussed in Sect.\,\ref{S_outflow_jet}.\\ \noindent The main feature of the [N\,II]/H$\alpha$ ratio map (Fig.\,\ref{M_BPT_primary_zoom}, second panel) is the presence of two clumps of similar size (diameter 1$\farcs$2, i.e. 130\,pc). On the one hand, one clump is located within the PSF region (Sect.\,\ref{S_datared}) with log\,[N\,II]/H$\alpha$\,$\sim$\,0.2. On the other hand, the other clump, with log\,[N\,II]/H$\alpha$\,$\sim$\,$-$0.3, is located 2$\farcs$6 (290\,pc) westward of the photometric center.
This clump is embedded in an area with a local enhancement of the [N\,II]/H$\alpha$ ratio. Specifically, this region emerges from the photometric center and extends for 8$\arcsec$ towards the west, and partially matches the region where the velocity dispersion is higher (about 250\,-\,350\,$\mbox{km\,s}^{-1}$) than the \lq butterfly\rq \ average, i.e. 149\,$\pm$\,52\,$\mbox{km\,s}^{-1}$\, (Table\,\ref{T_ism_result_primary}). Local [N\,II]/H$\alpha$ ratios are also enhanced at a distance of 7$\arcsec$ to the north and to the west.\\ Similarly, two clumps with log\,[S\,II]/H$\alpha$\,$\sim$\,0.03 (hence lower than the average, i.e. 0.07\,$\pm$\,0.06, Table\,\ref{T_ism_result_primary}) are detected to the north of the photometric center at R\,$\sim$\,1$\farcs$5 (Fig.\,\ref{M_BPT_primary_zoom}, right).\\ \noindent The observed values of log\,[O\,I]/H$\alpha$ vary between $-$0.69 and 0.25 ($-$0.48\,$\pm$\,0.07 on average, Table\,\ref{T_ism_result_primary}). The morphology of this line ratio closely matches that seen in the [O\,I] kinematic maps (with well defined strands) at the same position (Fig.\,\ref{M_BPT_primary_zoom}, third panel).\\ For a detailed discussion of the ionisation mechanisms from the BPTs we refer to Sect.\,\ref{S_ionisation_structure}.
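The mapping from the [S\,II] $\lambda$6716/$\lambda$6731 ratio to electron density used above (e.g. a mean ratio of 1.47 implying n$_{e}$\,$<$\,100\,cm$^{-3}$) can be illustrated with a crude interpolation of the standard diagnostic curve. The node values below are approximate readings of the textbook curve at T$_{e}$\,$\sim$\,10$^{4}$\,K, for illustration only; a proper calculation (and the one any published value should rely on) would use a dedicated code such as PyNeb:

```python
import numpy as np

# Approximate nodes of the [S II] 6716/6731 density diagnostic at
# T_e ~ 1e4 K (illustrative values, not the calibration used here).
_RATIO = np.array([0.45, 0.60, 0.80, 1.00, 1.20, 1.40, 1.45])
_LOG_NE = np.array([4.0, 3.6, 3.1, 2.75, 2.4, 1.7, 1.0])

def electron_density(ratio):
    """Electron density (cm^-3) from the [S II] doublet ratio,
    by interpolating the (approximate) diagnostic curve."""
    r = np.clip(ratio, _RATIO[0], _RATIO[-1])
    return 10 ** np.interp(r, _RATIO, _LOG_NE)

electron_density(1.47)   # low-density regime, n_e < 100 cm^-3
electron_density(1.2)    # intermediate, 100 < n_e < 1000 cm^-3
```

The two example calls reproduce the density regimes quoted in the text for the primary (ratio $\sim$\,1.47) and secondary (ratio $\sim$\,1.2) components.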
\subsubsection{Properties of the secondary component} \label{S_result_secondary} \begin{table} \caption{Summary of measurements for the second component from MUSE and MEGARA.} \centering \begin{tabular}{l c c c} \hline\hline Line & $\sigma$ & $\Delta$V & BPT \\ \hline & $\mbox{km\,s}^{-1}$ & $\mbox{km\,s}^{-1}$ & \\ \hline H$\beta$ & 313\,(316)\,$\pm$\,128 & 637\,$\pm$\,59 & -- \\ $[$O\,III$]$ & 267\,(277)\,$\pm$\,44 & 582\,$\pm$\,12 & 0.52\,(0.53)\,$\pm$\,0.14\\ $[$O\,I$]$ & 637\,(704)\,$\pm$\,167 & 371\,$\pm$\,51 & -0.07\,(-0.07)\,$\pm$\,0.18\\ H$\alpha$-$[$N\,II$]$ & 281\,(277)\,$\pm$\,105 & 569\,$\pm$\,12 & 0.14\,(0.14)\,$\pm$\,0.08\\ $[$S\,II$]$ & 260\,(256)\,$\pm$\,96 & 571\,$\pm$\,14 & 0.16\,(0.11)\,$\pm$\,0.14\\ \hline H$\alpha$-$[$N\,II$]$ & -- & -- & 0.05\,(0.05)\,$\pm$\,0.09 \\ $[$S\,II$]$ & 445\,(434)\,$\pm$\,106 & 430\,$\pm$\,175 & 0.28\,(0.28)\,$\pm$\,0.09\\ \hline \end{tabular} \label{Table_ism_S_result_secondary} \tiny{Notes. -- The same as Table\,\ref{T_ism_result_primary} but for the secondary component. The rows above (below) the second horizontal line refer to MUSE (MEGARA) data. For MEGARA data: [S\,II] and H$\alpha$-[N\,II] lines were fixed to have the same kinematics (Sect.\,\ref{S_em_lin_mod}); we do not report measurements for [O\,I] due to its low S/N.} \end{table} For MUSE data, the spatial distribution of the secondary component has a bipolar shape extended up to 7\farcs2, which corresponds to 790\,pc (Figures from \ref{M_OIII_second_zoom} to \ref{M_SII_second_zoom}); its properties are summarised in Table\,\ref{Table_ism_S_result_secondary}. This emission is aligned with the radio jet (PA\,=\,70$^{\circ}$, Table\,\ref{T_properties}) with a PA of $\sim$\,75$^{\circ}$, though not centred, being slightly more extended to the south of the photometric center. The morphology is almost symmetric with respect to the photometric center, with a redshifted region towards the west of the nucleus and a blueshifted region towards the east.
Overall, the velocity distribution is large, with velocities ranging from $-$680 to 730 $\mbox{km\,s}^{-1}$\, (Table\,\ref{Table_ism_S_result_secondary}). The line profile is broad, generally with $\sigma$\,$>$\,150\,$\mbox{km\,s}^{-1}$. The average values of the $\sigma$-maps are within 260 and 320 $\mbox{km\,s}^{-1}$\, for all emission lines, except for [O\,I], for which the average is 637\,$\pm$\,167 $\mbox{km\,s}^{-1}$\, (Table\,\ref{Table_ism_S_result_secondary}, Fig.\,\ref{M_OI_second_zoom}). Despite these high values, there is a $\sigma$-decrement ($\sigma$\,$\sim$\,80\,$\mbox{km\,s}^{-1}$) that mostly corresponds to the PSF region. This feature is more evident in the H$\beta$, [O\,III] and [O\,I] maps than in the same maps for [S\,II] and H$\alpha$-[N\,II]. The unique feature of the flux maps outside the PSF region is a shallow elongation towards the south-west (Figures from \ref{M_OIII_second_zoom} to \ref{M_SII_second_zoom}, right panels).\\ The average value of the [S\,II] line ratio is 1.2\,$\pm$\,0.5 (Fig.\,\ref{M_SII_second_zoom}), indicating a gas with relatively high density (100\,$<$\,n$_{e}$\,$<$\,1000\,cm$^{-3}$). The values of the standard BPT line ratios (see Table\,\ref{Table_ism_S_result_secondary} for average values, and Fig.\,\ref{M_BPT_second_zoom}) indicate LINER-like AGN photoionisation as the dominant mechanism for the gas of this component (see Fig.\,\ref{Fig_BPT_secondary}). We refer to Sect.\,\ref{S_ionisation_structure} for further discussion. \subsubsection{Faint features} \label{S_result_lowSB} \noindent All emission line maps from MUSE (e.g. [O\,III], Fig.\,\ref{Fig_combo_OIII}, top panels), except [O\,I] due to the lower S/N (Fig.\,\ref{M_OI_primary_zoom}), show two peculiar faint features with typical fluxes of about 3\,$\times$\,10$^{-18}$\,erg\,s$^{-1}$\,cm$^{-2}$ and kinematics (velocity and velocity dispersion) consistent with that observed in the polar direction (Sect.\,\ref{S_result_primary_MUSE_polar}).
\\ On the one hand, towards the west, a stream is clearly visible in [O\,III] (Fig.\,\ref{Fig_combo_OIII}, top) and H$\alpha$-[N\,II], whereas it is weakly or barely detected in the [S\,II] and H$\beta$ maps. It is extended for 18$\arcsec$ (2\,kpc), as measured from the H$\alpha$-[N\,II] maps considering only the detached region to the west. The same measurement in the [O\,III] map (Fig.\,\ref{Fig_combo_OIII}) is more difficult, due to the fact that the stream is connected to the main body of NGC\,1052, and no peculiar feature in the kinematic and flux maps allows us to disentangle the stream from the body of the galaxy.\\ This stream is found to have nearly systemic velocities (i.e. $\pm$\,60\,$\mbox{km\,s}^{-1}$) and low velocity dispersion ($<$\,50\,$\mbox{km\,s}^{-1}$, generally). A small clump of radius 0$\farcs$4 (45\,pc) is detected at high $\sigma$ ($>$\,100\,$\mbox{km\,s}^{-1}$) in [O\,III] only. \\ \noindent On the other hand, towards the south and south-east, there are two detached clumps. Both clumps show redshifted velocities, but the one to the south shows the most extreme kinematics. Specifically, at this location the observed velocities vary from 80 to 150 $\mbox{km\,s}^{-1}$\, (130\,$\pm$\,16\,$\mbox{km\,s}^{-1}$, on average) whereas, towards the south-east, the velocity maps show values between 65 and 115 $\mbox{km\,s}^{-1}$\, (95\,$\pm$\,7\,$\mbox{km\,s}^{-1}$, on average). Between these two clumps, the differences in velocity dispersion are mild. The average values are 45\,$\pm$\,13\,$\mbox{km\,s}^{-1}$\, and 28\,$\pm$\,9\,$\mbox{km\,s}^{-1}$\, for the south and south-east clumps, respectively.\\ The locations of the line ratios for all these faint features on the standard BPT diagrams (Fig.\,\ref{Fig_BPT_primary}, top panels, black and pink symbols) are generally consistent with those observed in AGNs (LINER-like), considering the dividing curves proposed by \citet{Kewley2006} and \citet{Kauffmann2003}.
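The dividing curves by \citet{Kauffmann2003} and \citet{Kewley2006} (the latter adopting the Kewley et al. 2001 maximum-starburst line) can be sketched for the [N\,II]-based BPT diagram as follows. This is a simplified classifier for illustration, not the code used to produce the diagrams in this paper:

```python
def bpt_nii_class(log_nii_ha, log_oiii_hb):
    """Classify a point on the [N II]-based BPT diagram using the
    Kauffmann et al. (2003) and Kewley et al. (2001) dividing curves
    (as adopted in the Kewley et al. 2006 scheme)."""
    x, y = log_nii_ha, log_oiii_hb
    # Kewley (2001) maximum-starburst line (only defined for x < 0.47)
    kewley = 0.61 / (x - 0.47) + 1.19 if x < 0.47 else float("-inf")
    # Kauffmann (2003) empirical star-forming line (x < 0.05)
    kauffmann = 0.61 / (x - 0.05) + 1.30 if x < 0.05 else float("-inf")
    if y > kewley:
        return "AGN"          # LINER or Seyfert
    if y > kauffmann:
        return "composite"
    return "star-forming"

bpt_nii_class(0.14, 0.52)    # typical NGC 1052 ratios -> "AGN"
bpt_nii_class(-0.5, -0.3)    # -> "star-forming"
```

Typical NGC\,1052 ratios (log\,[N\,II]/H$\alpha$\,$\sim$\,0.14, log\,[O\,III]/H$\beta$\,$\sim$\,0.52, Table\,\ref{T_ism_result_primary}) fall well above the maximum-starburst line, consistent with the AGN (LINER-like) classification.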
This result excludes star formation as the dominant ionisation mechanism in these clumps. \subsection{Main kinematic properties of the third spatially unresolved component} \label{S_result_third} For MUSE data, this component is generally the broadest one ($\sigma$\,$>$\,400\,$\mbox{km\,s}^{-1}$) for H$\beta$ and the oxygen lines. For [S\,II] and H$\alpha$-[N\,II], the average line widths are 134\,$\pm$\,45 and 217\,$\pm$\,104 $\mbox{km\,s}^{-1}$, respectively. Its velocity distribution is skewed to blueshifted velocities (typically within $-$600 and 200 $\mbox{km\,s}^{-1}$). \\ Among earlier works, only D19b reported the detection of a broad (FWHM\,$\sim$\,1380\,$\mbox{km\,s}^{-1}$) and blueshifted (V\,$\sim$\,490\,$\mbox{km\,s}^{-1}$) unresolved component in the narrow lines. D19b found such a broad component in [O\,III] only, whereas with our current MUSE data we detect it in all emission lines. \\ The FWHM of the [O\,III] line is 1053\,$\pm$\,84\,$\mbox{km\,s}^{-1}$, on average, hence lower than the measurements by D19b. Despite this discrepancy, considering the large FWHM of the [O\,III] line and the AGN-like BPT ratios measured for this third component, it could trace either an unresolved AGN component, as proposed by D19b, or a more recent AGN-driven outflow, which is very central and therefore unresolved. \\ However, as mentioned in Sect.\,\ref{S_em_lin_mod}, this component is found only in the central region affected by the PSF (Sect.\,\ref{S_datared}), hence no spatially resolved analysis can be done. \subsection{Comparison between MUSE and MEGARA results} \label{S_MUSE_vs_MEGARA} \noindent Similarly to the case of MUSE data, with MEGARA we map three different kinematic components in the narrow lines and the BLR emission in H$\alpha$. Among the detected emission lines in the MEGARA ISM cube (Sect.\,\ref{S_main_results}), [O\,I] has the lowest S/N. Hence, we focus on the results from the modelling of [S\,II] and H$\alpha$-[N\,II].
These lines were tied to share the same kinematics (Sect.\,\ref{S_em_lin_mod}).\\ The field of view of MEGARA data is almost completely coincident with the high-$\sigma$ region, with only a minor fraction of spaxels ($\sim$14\%) corresponding to the polar emission. Hence, we focus the comparison between the results from the MUSE and MEGARA ISM cubes on the \lq butterfly region\rq. However, we summarise the properties of the polar emission from MEGARA in Table\,\ref{T_ism_result_primary} for the sake of completeness.\\ \noindent For the primary component, the velocity maps for both the [S\,II] and [O\,I] lines from the MEGARA data set (Figures \ref{M_SII_primary_megara} and \ref{M_OI_primary_megara}, left panels) show a rotation pattern, with larger redshifted velocities in [O\,I] (systematically $\sim$\,100\,$\mbox{km\,s}^{-1}$\, larger). For both lines, there is a velocity decrement at R\,$\sim$\,5$\arcsec$ north-westward from the photometric center which, as seen from MUSE maps (e.g. Fig.\,\ref{M_OIII_primary_zoom}), continues spatially up to $\sim$\,770\,pc at larger distances. This decrement is spatially coincident with the high-$\sigma$ region, and divides the two strands seen in the \lq butterfly\rq \ region defined by the MUSE maps (see Sect.\,\ref{S_result_primary_MUSE_butterfly}). Additionally, the velocity map of the [S\,II] line (Fig.\,\ref{M_SII_primary_megara}, left) clearly shows an arc at almost rest frame velocities approximately 3$\arcsec$ northward of the photometric center, which is also seen in MUSE maps (see Sect.\,\ref{S_result_primary_MUSE_polar}; Fig.\,\ref{M_SII_primary_zoom}).\\ \noindent The velocity dispersion of the [S\,II] lines shows an average value of 154\,$\pm$\,38\,$\mbox{km\,s}^{-1}$, broadly consistent within uncertainties with that of MUSE in the same innermost region (Table\,\ref{T_ism_result_primary}).
The [S\,II] and [O\,I] lines share the same structure (Figures \ref{M_SII_primary_megara} and \ref{M_OI_primary_megara}), with increasing values to the west and east of the photometric center (also mentioned above for MUSE; see Sect.\,\ref{S_result_primary_MUSE_butterfly}). The photometric center has lower values ($\sim$\,100\,$\mbox{km\,s}^{-1}$) than the east and west parts of the map (generally $>$\,200\,$\mbox{km\,s}^{-1}$), which emerge in a biconical shape (defining the \lq wings\rq \ of the \lq butterfly\rq) from the center in a similar way as in MUSE maps (e.g. Figures \ref{M_OI_primary_zoom} and \ref{M_SII_primary_zoom}).\\ \noindent The flux maps for the narrow component of all the emission lines in MEGARA data are not centrally peaked, but show instead a spiral-like shape with high fluxes (right panels in Figures \ref{M_OI_primary_megara} and \ref{M_SII_primary_megara}). It does not correspond to any peculiar feature in the kinematic maps (velocity or velocity dispersion). This structure is also present in MUSE maps limited to the region of the MEGARA field of view, being the only noticeable feature in the maps (as mentioned in Sect.\,\ref{S_result_primary_MUSE}).\\ \noindent The limited spectral coverage of MEGARA data allows us to estimate only the [S\,II]/H$\alpha$, [N\,II]/H$\alpha$ and [O\,I]/H$\alpha$ line ratios (see Sect.\,\ref{S_main_results}). The [S\,II]/H$\alpha$ ([N\,II]/H$\alpha$) ratio in log for the primary component ranges from $-$0.17 to 0.44 ($-$0.19 to 0.18), with an average value of 0.17\,$\pm$\,0.06 (0.02\,$\pm$\,0.04). As for [O\,I]/H$\alpha$, the values range from $-$1.6 to 0.3, on average $-$0.84\,$\pm$\,0.34, over the complete MEGARA field of view (see Table\,\ref{T_ism_result_primary} and Fig.\,\ref{M_BPT_primary_megara}).
In the maps of this latter ratio (Fig.\,\ref{M_BPT_primary_megara}, center), a clump is present near the photometric center, within the PSF, that is spatially coincident with an enhanced region of this ratio also in MUSE maps. Table\,\ref{T_ism_result_primary} shows that the ratios are consistent within the uncertainties independently of the high-$\sigma$\,/\,polar emission splitting. We have also estimated the electron density using the [S\,II] line ratio (Fig.\,\ref{M_SII_primary_megara}, right), which indicates a low density regime, as for MUSE data (see Sect.\,\ref{S_result_primary_MUSE}). The density maps of this component are homogeneous, with small deviations only in the outer parts of the field of view (with lower S/N).\\ \noindent The second component detected in MEGARA data (Fig.\,\ref{M_SII_second_megara}) has the same spatial extent as in MUSE, accounting for the differences in the spatial resolution of the two data sets. For both the [S\,II] and [O\,I] velocity maps, the same structure is seen, with a clear velocity distribution ranging up to an absolute value of $\sim$\,400\,$\mbox{km\,s}^{-1}$\, for both lines. For this component, the velocities of both lines are in good agreement, also with MUSE data (see Table\,\ref{Table_ism_S_result_secondary}). As for the velocity dispersion, this component is the broadest of all the components detected in MEGARA data (excluding the broad H$\alpha$ in Sect.\,\ref{S_BLR_detection}). The values are consistent for all lines, although the [O\,I] width measured in MEGARA differs considerably from that from MUSE (average of 359\,$\pm$\,64 vs. 637\,$\pm$\,167 $\mbox{km\,s}^{-1}$), probably due to the lower S/N of this line in the MEGARA data. Therefore, we cannot ensure a proper determination of the properties of the secondary component with the [O\,I] lines.\\ The flux maps of all the lines show a centrally-peaked distribution, with no peculiar features.
However, as in MUSE, the line ratios present elongated substructures both east and south-west of the photometric center in both [S\,II]/H$\alpha$ and [N\,II]/H$\alpha$, which do not correspond to any kinematic feature (Fig.\,\ref{M_BPT_second_megara}). The mean values of these ratios are summarised in Table\,\ref{Table_ism_S_result_secondary}. For both the MUSE and MEGARA data sets, the [S\,II] flux ratio of the second component (Fig.\,\ref{M_SII_second_megara}) indicates a gas with high density, i.e. n$_{e}$\,$\sim$\,1000\,cm$^{-3}$.\\ \noindent As already mentioned, MEGARA also identified a third spatially unresolved kinematic component in the emission lines. However, unlike in the MUSE data, this component is detected only in [S\,II]. Its main kinematic properties are velocities ranging between $-$365 and 221 $\mbox{km\,s}^{-1}$\, (mean error 72\,$\mbox{km\,s}^{-1}$), and an average velocity dispersion of 127\,$\pm$\,47 $\mbox{km\,s}^{-1}$. These results are in broad agreement, within uncertainties, with those obtained with MUSE data for the [S\,II] lines (see Sect.\,\ref{S_result_third}). \subsection{BLR component} \label{S_BLR_detection} The broad H$\alpha$ component from the spatially unresolved BLR of NGC\,1052 is observed only within the PSF radius (i.e. 0$\farcs$8 and 1$\farcs$2 for MUSE and MEGARA, respectively, Sect.\,\ref{S_datared}) in both data sets. For this component we obtained, on average, velocities near the rest frame, i.e. $-$38 $\mbox{km\,s}^{-1}$\, ($-$60\,$\mbox{km\,s}^{-1}$) as measured from MUSE (MEGARA) data.
Overall, the average velocity dispersion is 1031\,$\pm$\,141 $\mbox{km\,s}^{-1}$\, and 998\,$\pm$\,200 $\mbox{km\,s}^{-1}$\, (2427 and 2350 $\mbox{km\,s}^{-1}$\, in FWHM) for MUSE and MEGARA data, respectively.\\ Finally, note that our final modelling of the H$\beta$ line does not require a broad component, confirming the type 1.9 AGN classification of the active nucleus in NGC\,1052 (see Table\,\ref{T_properties}).\\ The FWHM of this AGN component is compared to that of previous works in Sect.\,\ref{S_BLR_properties}. \subsection{NaD Absorption} \label{S_results_NaD} Figure\,\ref{Fig_EW_NaD_abs} shows the equivalent width map of the NaD absorption corresponding to spaxels with S/N\,$\geq$\,5 in the MUSE ISM-cube. Its overall spatial distribution has an intriguing morphology, similar to that of the central, butterfly-like region at high $\sigma$ described in Sect.\,\ref{S_result_primary_MUSE_butterfly}. It is oriented in the SE-NW direction, with the north-west side more prominent (EWs generally $>$\,1.5\,\AA).\\ Our kinematic maps obtained from MEGARA data indicate a complex neutral gas kinematics (Fig.\,\ref{M_NaD_megara}). Specifically, on the one hand, the velocity map shows the blue/red pattern of a rotating disc (velocities from $-$96 to 57 $\mbox{km\,s}^{-1}$) but with a flat gradient ($\Delta$V is 77\,$\pm$\,12\,$\mbox{km\,s}^{-1}$, Fig.\,\ref{M_NaD_megara}, left). On the other hand, the peak of the velocity dispersion map is off-centred (Fig.\,\ref{M_NaD_megara}, center). It peaks at 2$\farcs$5 (277 pc) eastwards, with a value of 263\,$\pm$\,10\,$\mbox{km\,s}^{-1}$. Moreover, large velocity dispersion values (i.e. $>$\,220\,$\mbox{km\,s}^{-1}$, larger than the central velocity dispersion of the stars, $\sigma_{\rm c}$ in Table\,\ref{T_kinematics}) are observed up to 4$\farcs$8 (530\,pc) towards the north-east.
These large values do not have any counterparts in either the velocity or the flux maps (Fig.\,\ref{M_NaD_megara}, left and right).\\ The maps of the ratio between the NaD fluxes indicate that the gas is optically thick (R$_{\rm NaD}$\,=\,1.3\,$\pm$\,0.1, on average), similarly to what was estimated for the nuclear spectrum analysed in \citet{Cazzoli2018} (R$_{\rm NaD}$\,=\,1.0), so far the only study of the NaD absorption in NGC\,1052. \section{Discussion} \label{S_discussion} The results obtained with MUSE data are in general agreement with those from the MEGARA cube at higher spectral resolution (Sections \ref{S_MUSE_vs_MEGARA} and \ref{S_BLR_detection}). \\ In Sect.\,\ref{S_disc_kin}, we discuss the stellar kinematics and dynamics using the full data set, whereas the discussion in Sect.\,\ref{S_ISM_prop} is mostly based on the results from MUSE data only, in order to exploit its capabilities (spectral range, spatial sampling and field of view, Sect.\,\ref{S_datared}). Sections \ref{S_outflow_kin_ion} and \ref{S_outflow_kin_neutral} are dedicated to exploring the kinematics and energetics of the multi-phase (ionised and neutral gas) outflow. Finally, in Sect.\,\ref{S_BLR_properties} we compare the FWHM of the unresolved BLR component with previous measurements.\\ Note that the estimation of the black hole mass based on the stellar kinematics and on the broad H$\alpha$ component is discussed in Sect.\,\ref{S_disc_kin} and Sect.\,\ref{S_BLR_properties}, respectively. \subsection{Kinematics and dynamics of the stellar disc} \label{S_disc_kin} \noindent As mentioned in Sect.\,\ref{S_stellar_kin}, the stellar component of NGC\,1052 shows features of rotational motions on both small (MEGARA) and large (MUSE) scales. These include a spider pattern in the velocity field and a centrally peaked velocity dispersion map (Fig.\,\ref{Fig_stellar_kin}).
Besides, the fact that the kinematic major axis coincides with the photometric major axis further confirms the presence of rotation-dominated kinematics.\\ \noindent NGC\,1052 is classified as an oblate galaxy of E3-4/S0-type (\citealt{Bellstedt2018}, Table\,\ref{T_properties}). Its stellar kinematic properties (e.g. large velocity amplitude, Table\,\ref{T_kinematics} and Fig.\,\ref{Fig_PVPS}, bottom) suggest that NGC\,1052 is more likely a lenticular-S0 galaxy (see \citealt{Cappellari2016} for a review). The motivation is twofold. First, the P-$\sigma$ curve lacks an exponential decline (Fig.\,\ref{Fig_PVPS}, bottom), indicating the presence of significant random motions. Second, the combination of a large velocity amplitude and a symmetric velocity field (Table\,\ref{T_kinematics}, Fig.\,\ref{Fig_PVPS}, top) suggests that NGC\,1052 has a prominent rotating disc. \\ \noindent The rotational support of the stellar disc can be drawn from the observed (i.e. not inclination-corrected) velocity-to-velocity dispersion (V/$\sigma$) ratio\footnote{Some authors (e.g. \citealt{Perna2022} and references therein) use the inclination-corrected velocity to calculate the dynamical ratio. For NGC\,1052, such a correction does not strongly affect the V/$\sigma$ ratio, i.e. it would be 1.23 instead of 1.16, hence $\sim$\,1.2 in both cases.}, calculated as the ratio between the velocity amplitude and the mean velocity dispersion across the disc. For MUSE (MEGARA) the dynamical ratio is $\sim$\,1.2 (0.8), indicating a strong random motion component, hence a dynamically hot disc. \\ \noindent The results from the analysis of the stellar kinematics from the present IFS data are generally in agreement with those from previous works by D15 and DH19a, with optical IFS from WiFeS and GMOS/GEMINI, respectively, although these are limited in either spectral range or field of view, and in spatial sampling (see Sect.\,\ref{S_introduction}).
In both these past works, the stellar velocity field clearly shows a smooth rotation, although a one-to-one comparison is not possible as no velocity amplitude measurements are given by the authors. The velocity dispersion shows a central cusp ($\sim$\,200\,$\mbox{km\,s}^{-1}$\, and $\sim$\,250\,$\mbox{km\,s}^{-1}$\, as measured by D15 and DH19a, respectively). This is qualitatively consistent with the shape of the P-$\sigma$ curve (Fig.\,\ref{Fig_PVPS}, bottom). Finally, our results are broadly consistent with those by \citet{Bellstedt2018} obtained with DEIMOS/Keck, i.e. a rotational velocity and central velocity dispersion of $\sim$\,120\,$\mbox{km\,s}^{-1}$\, and $\sim$\,200\,$\mbox{km\,s}^{-1}$, respectively.\\ \noindent The large scale rotation curve (Fig.\,\ref{Fig_PVPS}, top) is characterised by two plateaus. The first flattening is at a galactocentric distance of $\sim$\,2$\arcsec$ (i.e. 220\,pc), with velocities of $\sim$\,70\,$\mbox{km\,s}^{-1}$. At larger distances, between 10$\arcsec$ and 20$\arcsec$, the curve rises slowly, reaching values up to 140\,$\mbox{km\,s}^{-1}$, and then finally flattens at 30$\arcsec$. \\ Thanks to our measurement of the stellar dynamics, we can provide an estimate of the black hole mass (M$_{\rm BH}$). From the central velocity dispersion of the stars measured in MUSE data (201\,$\pm$\,10\,$\mbox{km\,s}^{-1}$, Table\,\ref{T_kinematics}), Eq.\,8 of \citet{Bluck2020} (see also \citealt{Saglia2016}) yields M$_{\rm BH}$\,=\,(2.0\,$\pm$\,0.5)\,$\times$\,10$^{8}$ M$_{\sun}$. This value is in good agreement with the previous estimates by \citet{Beifiori2012} listed in Table\,\ref{T_properties}. Note that the use of other prescriptions can return different black hole masses (see e.g. \citealt{Ho2008} and references therein), as briefly discussed in Sect.\,\ref{S_BLR_properties}.
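As an illustration of how the inferred black hole mass depends on the adopted prescription, the sketch below evaluates a generic M-$\sigma$ relation of power-law form; the default coefficients are those of Kormendy \& Ho (2013), not the Eq.\,8 of \citet{Bluck2020} actually used above, which is why the result differs from the quoted (2.0\,$\pm$\,0.5)\,$\times$\,10$^{8}$ M$_{\sun}$:

```python
import math

def mbh_from_sigma(sigma_kms, alpha=8.49, beta=4.38):
    """Black-hole mass (in solar masses) from an M-sigma relation of
    the form log10(M_BH) = alpha + beta * log10(sigma / 200 km/s).
    Defaults: Kormendy & Ho (2013) coefficients; Bluck et al. (2020)
    Eq. 8, used in the text, has different coefficients."""
    return 10 ** (alpha + beta * math.log10(sigma_kms / 200.0))

mbh_from_sigma(201.0)   # ~3e8 M_sun with these coefficients
```

The factor of $\sim$\,1.5 between this estimate and the value quoted in the text illustrates the prescription-dependent scatter mentioned above.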
\subsection{Multi-phase ISM properties} \label{S_ISM_prop} \noindent Early-type galaxies were traditionally thought to be uniform stellar systems with little or no gas and dust \citep{FalconBarroso2005}. The spatial distribution and kinematics of the ionised gas in NGC\,1052 challenge this view, as the ISM and the stars seem completely decoupled, indicating a complex interplay between the two galaxy components. The proposed scenario is summarised in the cartoon shown in Fig.\,\ref{Fig_cartoon}.\\ In what follows we mostly focus on the spatially resolved components (i.e. the primary and secondary components for the emission lines, and the single component for the NaD absorption). Note that a third component is needed to reproduce the line profiles of all forbidden lines and narrow H$\alpha$ (see Sect.\,\ref{S_em_lin_mod}). The presence of this component has been previously reported by \citet{Dahmer2019b}, but only in [O\,III], with FWHM\,$\sim$\,1380\,$\mbox{km\,s}^{-1}$. These authors propose that it traces the interaction between the jet and the ISM-environment. Although we were able to map such a component in all emission lines (from H$\beta$ to [S\,II]), it is spatially unresolved (see Sect.\,\ref{ISM_kinematics_MUSE}). Due to this limitation we do not investigate this component further. However, its general properties are summarised in Sect.\,\ref{S_result_third}.
\begin{figure*} \centering \includegraphics[width=.995\textwidth]{figures/cartoon_v3.pdf} \caption{Cartoon illustrating the proposed scenario for the stellar component and the ionised ISM for NGC\,1052 (see text for details).} \label{Fig_cartoon} \end{figure*} \subsubsection{The intriguing ISM kinematics in NGC\,1052} \label{S_kinematics} For NGC\,1052, the presence of non-rotational motions, such as an AGN-driven outflow, has been suggested in many previous works on the basis of \textit{HST} imaging \citep{Pogge2000, Walsh2008} as well as 1D and IFS spectroscopy \citep{Sugai2005, Dopita2015, Cazzoli2018, Dahmer2019b}, mostly in the optical band.\\ Generally, the detection of outflows is widely based on the comparison between the observed velocity field and line-width distribution and what is expected for a rotating disc (see e.g. \citealt{Veilleux2020} and references therein). However, for NGC\,1052 the deviations from disc-like behaviour (i.e. outflow signatures) in the kinematic maps of the two spatially resolved components are ambiguous. \\ On the one hand, for the primary ISM-component a clear velocity gradient is observed in the direction perpendicular to that of the stars (SE-NW direction, e.g. Figures \ref{Fig_combo_OIII} and \ref{P_kin}). This feature can be explained in terms of either large buoyant bubbles or a polar disc\footnote{We discard the scenario in which the polar gas arises from the AGN's narrow line region (NLR). Indeed, by means of the relation between the X-ray luminosity and the size of the NLR for LINERs by \citet{Masegosa2011} (see their Fig.\,4), for NGC\,1052 the NLR physical size would be $\sim$\,600\,pc. Hence, the NLR is much less extended than the polar emission detected at a distance $>$\,3\,kpc.}.
Such a bipolar velocity field is not perfectly symmetrical (see Sect.\,\ref{S_result_primary_MUSE_polar} and Figures \ref{Fig_combo_OIII} and \ref{P_kin}), indicating that the putative disc would be either a perturbed rotator or a complex kinematic object according to the classification by \citet{Flores2006}. \\ On the other hand, the velocity dispersion map is not centrally peaked as expected for rotating discs (e.g. Figures \ref{Fig_combo_OIII} and \ref{P_kin}). Instead, a $\sigma$-enhancement\footnote{Such a line-width enhancement cannot be explained in terms of beam smearing, because the scale on which we observe it is much larger than the spatial resolution of the observations.} $>$\,90\,$\mbox{km\,s}^{-1}$\, is present at galactocentric distances smaller than 10$\arcsec$, with a peculiar \lq butterfly\rq \ shape (Sect.\,\ref{S_result_primary_MUSE_butterfly} and contours in Fig.\,\ref{Fig_combo_OIII} and those in the Appendix, from \ref{M_OIII_primary_zoom} to \ref{M_SII_primary_zoom}, and Fig.\,\ref{P_kin}). At this location, the maximum velocity gradient is oriented nearly along the stellar major axis of rotation (black solid line in the Figures in Appendix\,\ref{Appendix_B}). The morphology and kinematics of this \lq butterfly\rq \ feature are suggestive of the presence of two bubbles outside the plane of the galaxy, similarly to the well-known superwind in NGC\,3079 (an optically thin bubble with blue and red sides from the front and back volumes, e.g. \citealt{Veilleux1994}). Indeed, if two bubbles (or biconical outflows) are moving away in the polar direction, a high velocity dispersion is expected along the major axis of rotation due to the overlap of the blue and red clouds along the line of sight. The observed \lq butterfly\rq \ feature may represent this effect.
\\ \noindent This twofold behaviour of the ionised gas on different spatial scales might indicate that the gas probed by the primary ISM-component traces two different, possibly related, substructures. Neither of them is likely to probe a rotating disc, given the irregularities in the kinematics and the significance of shocks in ionising the gas (as discussed in Sect.\,\ref{S_ionisation_structure}).\\ \noindent For the secondary ISM-component, the blue-to-red velocity gradient is mostly aligned with the radio jet (70$^{\circ}$, Table\,\ref{T_properties}), with large widths (Table\,\ref{Table_ism_S_result_secondary}) extended mostly within 5$\arcsec$ from the photometric centre (see e.g. Fig.\,\ref{M_OIII_second_zoom}).\\ As mentioned in Sect.\,\ref{S_results_NaD}, the spatial distribution of the NaD absorption has a morphology similar to that of the central high-$\sigma$ region, with a prominent north-west side (Fig.\,\ref{Fig_EW_NaD_abs}). However, the kinematic maps do not show clear evidence of a neutral gas outflow (Fig.\,\ref{M_NaD_megara}; Sections \ref{S_results_NaD} and \ref{S_outflow_kin_neutral}). \\ Hence, from the kinematics alone we cannot claim a robust detection of a multi-phase outflow. In the next section we explore the ionisation structure and the possible connection with the radio jet in order to pinpoint the location of the outflow and hence study its kinematics, energetics, and power source. \subsubsection{Line Ratios and Ionisation structure} \label{S_ionisation_structure} \noindent We use the observed spatially resolved narrow emission-line fluxes and line ratios to investigate the excitation mechanisms at work in NGC\,1052 by means of the standard diagnostic diagrams by \citet{Baldwin1981}, also known as BPT diagrams (Figures\,\ref{Fig_BPT_primary} and \ref{Fig_BPT_secondary}). \\ For MUSE data, H$\beta$ and [O\,I] are the weakest among the detected lines.
Therefore, they constrain the spatial regions where the BPT analysis can be carried out (see maps in Appendix\,\ref{Appendix_B}). For MEGARA data the main limitation is that the observed spectra lack both H$\beta$ and [O\,III], preventing the use of BPT diagnostics.\\ \noindent In this section we mainly compare our results on the ionisation mechanism with those in D15, due to the similarities in spatial and wavelength coverage. Such a comparison would be more difficult with the results by DH19b, as these authors present the analysis of the [N\,II]/H$\alpha$ ratio, complemented by NIR BPT-diagrams, in the central region of NGC\,1052 (i.e. 3$\farcs$5\,$\times$\,5$\farcs$0). However, their general findings, i.e. LINER-like line ratios throughout the whole GMOS/GEMINI field of view and a combination of shock and photoionisation mechanisms at work in NGC\,1052, are in broad agreement with our results (see Sect.\,\ref{S_main_results} and below). \\ \noindent For the primary (narrowest) component, we exclude the pAGB or H\,II-ionisation scenarios in favour of a mixture of AGN photoionisation and shock excitation as the dominant mechanisms of ionisation. \\ On the one hand, the large majority of the line ratios lie above the empirical dividing curves between H\,II- and AGN-like ionisation by \citet{Kewley2006} and \citet{Kauffmann2003}. These line ratios are not fully reproduced by the pAGB models by \citet{Binette1994}. Furthermore, the observed [O\,I]/H$\alpha$ ratios indicate that NGC\,1052 is a strong-[O\,I] object (i.e. a genuine AGN) according to the criterion for dividing weak-[O\,I] and strong-[O\,I] LINERs proposed by \citet{Filippenko1992}, that is, [O\,I]/H$\alpha$\,$>$\,0.16. Hence, these findings indicate the need for an ionisation mechanism more energetic than star formation or pAGB stars, such as AGN photoionisation. Note that only for a small number of spaxels (50, i.e.
$<$\,1\,per\,cent of the map) is the AGN scenario disfavoured, as the log\,([O\,III]/H$\beta$) ratio is $<$\,0.3 and log\,([N\,II]/H$\alpha$) is $<$\,0.2. However, these spaxels are sparsely distributed at large distances (R\,$>$\,20$\arcsec$, i.e. 2.2\,kpc), where faint gas clumps are detected (see Sect.\,\ref{S_result_lowSB}). \\ On the other hand, shock models with a photoionising precursor (grids in Fig.\,\ref{Fig_BPT_primary}) are able to reproduce the large majority of the observed line ratios in the [N\,II]/H$\alpha$ and [O\,I]/H$\alpha$ diagrams, and only partially in the [S\,II]/H$\alpha$ diagram. \\ The match between data points and shock models is more accurate for the gas distributed along the polar direction (Fig.\,\ref{Fig_BPT_primary}, top) than for that within the central high-$\sigma$ region (Fig.\,\ref{Fig_BPT_primary}, bottom).\\ \noindent The same two dominant sources of ionisation (AGN and shocks) acting in NGC\,1052 were identified by D15. These authors propose that part of the ionised line-emitting gas is photoionised by the AGN, with a central region (R\,$<$\,1$\arcsec$) that appears shock-excited. They modelled the emission lines with a dusty plasma having three times the solar abundance and a double-shock model. The latter combines an accretion shock with velocities of about 150\,$\mbox{km\,s}^{-1}$\, and a cocoon shock at higher velocities, i.e. 200\,$-$\,300\,$\mbox{km\,s}^{-1}$. Such a model explains the high densities observed ($\sim$\,10$^{4}$\,$-$\,10$^{6}$\,cm$^{-3}$) in WiFeS data and provides a good fit to the observed emission-line spectrum. The proposed physical scenario establishes the existence of a higher-ionisation cone, a large-scale bipolar outflow (energised by the jet), and a turbulent flow along the major axis of the galaxy.
However, the model by D15 only marginally fits our measurements, as explained below.\\ On the one hand, our 2D mapping of the primary component reveals a central region at high velocity dispersion consistent with the accretion shocks proposed by D15, i.e. $\sigma$\,$\sim$\,120\,$-$\,150\,$\mbox{km\,s}^{-1}$\, (except for [O\,I], for which $\sigma$\,$\sim$\,350\,$\mbox{km\,s}^{-1}$; Table\,\ref{T_ism_result_primary}). Generally, the high velocity dispersion region seen in WiFeS data matches that in our IFS data. However, thanks to the high sensitivity and spatial resolution of MUSE, we can map this region over a larger area and spatially resolve substructures in flux and kinematics. \\ \noindent On the other hand, the discrepancy is threefold. \noindent First, we do not find any indication of such extremely high densities. We rather measure two regimes of gas densities, both at lower values (i.e. n$_{e}$\,$<$\,10$^{4}$\,cm$^{-3}$), for the primary and secondary components, as mentioned in Sections \ref{S_result_secondary} and \ref{S_result_primary_MUSE}. Second, by using the line-flux maps from MUSE, in the central region we measured a metallicity of 8.18\,$\pm$\,2.06 (the solar value is 8.69; \citealt{Asplund2009}) following \citet{PerezDiaz2021} (i.e. using the HII-CHI-MISTRY tool by \citealt{PerezMontero2014}). Hence there are no hints of the extreme metallicities adopted by D15. Third, velocities consistent with the cocoon shock velocities (200$-$300\,$\mbox{km\,s}^{-1}$) in the model by D15 are observed for the secondary component (Table\,\ref{Table_ism_S_result_secondary}, except for [O\,I]).
This is a separate component with respect to that distributed on kpc-scales, being extended up to 7\farcs2 (corresponding to 790\,pc, hence not only in the central R\,$<$\,1$\arcsec$; Sect.\,\ref{S_result_secondary}) and oriented similarly to the radio jet.\\ \noindent For the secondary component, shock models (without precursor) are able to reproduce satisfactorily the observed [N\,II]/H$\alpha$ and [O\,I]/H$\alpha$ ratios. Nearly half of the data points in the [S\,II]/H$\alpha$ diagram are too high to be modelled either with shock or pAGB models (Fig.\,\ref{Fig_BPT_secondary}, grids and pink boxes, respectively).\\ \noindent Hence, taking all this into account, we conclude that the emission-line ionisation in NGC\,1052 cannot be explained by one mechanism alone, as proposed by D15 and DH19b. The ionisation in the central region (R\,$<$\,10$\arcsec$) is a mixture of AGN photoionisation and shock ionisation, while at larger galactocentric distances the shock mechanism dominates. Finally, note that we used different shock models (with and without precursor) to reproduce the line ratios of the two spatially resolved components. The gas probed by the primary component is \lq self-ionising\rq, including both shocks and precursor. For the secondary component, the gas is collisionally ionised by the shock (i.e. no precursor), likely as a consequence of the passage of the radio jet, given the alignment between the axis of the radio jet and the secondary component.\\ Note that, although the [O\,I] kinematic properties are different from those of the other lines (e.g. H$\alpha$), they are consistent with the current scenario (see e.g. Fig.\,\ref{Fig_cartoon}). Indeed, [O\,I] is highly sensitive to shocks (especially shock heating), and it is enhanced in the region where the butterfly feature is observed.
\subsubsection{Connection between ISM and radio jet} \label{S_outflow_jet} \noindent NGC\,1052 is a radio-loud AGN (L$_{\rm 1-100\,GHz}$\,$\sim$\,4.4\,$\times$\,10$^{40}$\,erg\,s$^{-1}$, \citealt{Wrobel1984}) with a twin radio jet strongly interacting with its environment \citep{Kadler2004a,Mukherjee2018}. The jet has been detected in numerous observational studies (\citealt{Falocco2020} and references therein) and is associated with an X-ray emitting region (spatially coincident with the radio emission at 1.5\,GHz, \citealt{Kadler2004b}).\\ Radio jets can produce gaseous outflows impacting the host galaxy on sub-kpc and kpc scales (e.g. \citealt{Harrison2014}, \citealt{LHG2018}, \citealt{Jarvis2019}, \citealt{Molyneux2019} and \citealt{Venturi2021}) with different observational features. On the one hand, powerful AGNs show high velocity dispersion along the full extent of the radio emission (e.g. \citealt{Oosterloo2019}). On the other hand, an enhancement of the emission-line velocity width is found perpendicular to the direction of the AGN ionisation cones and jets (e.g. \citealt{Venturi2021}).\\ In NGC\,1052 the footprints of the interaction between the jet and the gas in the galaxy disc are probed by both the primary and secondary components. Specifically, the alignment between the radio emission and the ionised gas in the inner $\sim$\,1-1.2\,kpc is indicative of the radio jet interacting with the ISM (PAs are about 70$^{\circ}$, Table\,\ref{T_properties} and Sect.\,\ref{S_result_secondary}). Such an interaction can trigger an outflow as well as induce large turbulence and kpc-scale bow shocks, i.e. a jet-induced outflow acting on both sub-kpc and kpc scales. In this scenario, the outflow is probed by the secondary component, whereas the primary component traces both the cocoon of gas surrounding the expanding jet-induced outflow (with enhanced turbulence) and the large-scale gas (extended up to $\sim$\,2-3\,kpc) expanding perpendicular to the axis of the jet.
The shells seen in the blue part of the velocity field of the primary component along the polar direction could indicate shock waves propagating in a smooth medium. These could be absent in the red side of the polar emission, at positive velocities, due to ISM anisotropy.\\ The proposed scenario for NGC\,1052 is similar to that presented by \citet{Morganti2021} for PKS\,0023--26 (a far-IR bright source hosting a young powerful radio source) on the basis of the results from ALMA CO\,(2$-$1) and 1.7\,mm continuum data. In PKS\,0023--26, the highly perturbed gas tends to follow the edge of the radio emission on sub-kpc scales, whereas the relatively mild expansion of the cocoon, created by the interaction between jet and ISM, is pushing the gas aside. For NGC\,1052 the strong coupling between the radio jet and the ISM is limited to the innermost 7\farcs2, corresponding to 790\,pc (Sect.\,\ref{S_result_secondary}), with large buoyant bubbles extended up to 30$\arcsec$, corresponding to 3.3\,kpc (Sect.\,\ref{S_result_primary_MUSE_polar}). As for PKS\,0023--26, the cocoon is not reaching extreme velocities, but it is injecting turbulence into the ISM, triggering the creation of the bubbles along the polar direction. With the present data set we cannot infer the presence of cavities devoid of dense gas at larger radii due to the maintenance phase of the outflow, nor any relation between radio lobes and the ISM, as the former are absent for the jet in NGC\,1052.\\ Although the comparison between PKS\,0023--26 and our results for NGC\,1052 is illustrative, it has to be taken with caution, as we are tracing different gas phases within the jet-ISM interaction. \subsection{Ionised gas outflow kinematics and energetics} \label{S_outflow_kin_ion} On the basis of the morphology and kinematics of the different components, and taking into account that shocks are a crucial mechanism of ionisation in NGC\,1052, we claim the detection of an ionised gas outflow.
It is probed by the secondary component, with a bipolar morphology and velocity dispersion $>$\,150\,$\mbox{km\,s}^{-1}$. The outflow is strongly interacting with the surrounding ISM mapped by the primary component. Such an interplay is suggested by both the high-$\sigma$ ($>$\,90\,$\mbox{km\,s}^{-1}$) region with a peculiar \lq butterfly-like\rq \ morphology and the presence of two kpc-scale buoyant bubbles. \\ In this section, we summarise the main properties (kinematics and energetics) of the outflow as well as its power source (i.e. we verify the jet-driven scenario proposed in Sect.\,\ref{S_outflow_jet}).\\ We assume a simple outflow model, with inclination-corrected velocities and distances, that considers the outflow oriented perpendicular to the plane of the disc.\\ We estimated the total mass of the emitting ionised hydrogen gas following \citet{Venturi2021} (see also \citealt{Carniani2015}, \citealt{Cresci2017}). We calculated the H$\alpha$ luminosity corrected for extinction (L$_{\rm H\alpha}$), considering the corresponding distance (i.e. 22.6\,Mpc, Table\,\ref{T_properties}) and using the attenuation law by \citet{Calzetti2000} for galactic diffuse ISM (R$_{V}$\,=\,3.12) and an intrinsic ratio (H$\alpha$/H$\beta$)\,=\,2.86 (for an electron temperature of 10$^{4}$\,K, \citealt{Osterbrock2006}). The intrinsic H$\alpha$ luminosity is converted into the mass of the ionised gas with Eq.\,1 in \citet{Venturi2021}, using the median value of the electron density (i.e. 360\,cm$^{-3}$). We obtain a total L$_{\rm H\alpha}$ of (1.8\,$\pm$\,0.7)\,$\times$\,10$^{40}$ erg\,s$^{-1}$ and a total mass of ionised gas in the outflow of M$_{\rm OF, ion}$\,=\,(1.6\,$\pm$\,0.6)\,$\times$\,10$^{5}$ M$_{\sun}$ with our data. \\ \noindent The mass outflow rate is $\dot{\rm M}_{\rm OF, ion}$\,=\,(0.4\,$\pm$\,0.2) M$_{\sun}$yr$^{-1}$, estimated as 3\,$\times$\,V$_{\rm OF, ion}$/R$_{\rm OF, ion}$\,$\times$\,M$_{\rm OF, ion}$ as in \citet{Cresci2015}.
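The ionised-gas budget quoted in this section can be reproduced to first order with a short numerical sketch. Two caveats: the mass scaling used below, M$_{\rm ion}$\,$\approx$\,3.2\,$\times$\,10$^{5}$\,(L$_{\rm H\alpha}$/10$^{40}$ erg\,s$^{-1}$)(100\,cm$^{-3}$/n$_{e}$) M$_{\sun}$, is the commonly used form of this conversion (e.g. \citealt{Cresci2017}) and should be checked against Eq.\,1 of \citet{Venturi2021}; and the outflow radius (790\,pc, the quoted extent of the secondary component) and the use of projected velocities with no inclination correction are assumptions of this sketch, so the figures are approximate.

```python
# Order-of-magnitude check of the ionised outflow budget quoted in the text.
# Assumptions (not the paper's exact pipeline): the common mass scaling
# M_ion ~ 3.2e5 (L_Ha/1e40)(100/n_e) M_sun, an outflow radius of 790 pc
# (the quoted extent of the secondary component), and projected velocities
# (no inclination correction applied).

M_SUN_G = 1.989e33         # solar mass in g
YR_S = 3.156e7             # year in s
KM_S_TO_PC_YR = 1.0227e-6  # 1 km/s expressed in pc/yr

L_Ha = 1.8e40    # erg/s, extinction-corrected H-alpha luminosity
n_e = 360.0      # cm^-3, median electron density
V = 655.0        # km/s, maximum velocity of the secondary component
sigma = 280.0    # km/s, mean H-alpha velocity dispersion
R = 790.0        # pc, assumed outflow radius

# Ionised gas mass and mass outflow rate (Mdot = 3 V M / R, as in Cresci+15)
M_ion = 3.2e5 * (L_Ha / 1e40) * (100.0 / n_e)  # M_sun
Mdot = 3.0 * (V * KM_S_TO_PC_YR) / R * M_ion   # M_sun/yr

# Kinetic energy and power, with the formulas used in this section:
# E = 0.5 sigma^2 M and Edot = 0.5 Mdot (V^2 + 3 sigma^2), in cgs units.
E = 0.5 * (sigma * 1e5) ** 2 * M_ion * M_SUN_G                                    # erg
Edot = 0.5 * (Mdot * M_SUN_G / YR_S) * ((V * 1e5) ** 2 + 3 * (sigma * 1e5) ** 2)  # erg/s

print(f"M_OF,ion ~ {M_ion:.1e} M_sun")    # ~1.6e5, cf. (1.6 +/- 0.6)e5
print(f"Mdot     ~ {Mdot:.2f} M_sun/yr")  # ~0.4, cf. 0.4 +/- 0.2
print(f"E        ~ {E:.1e} erg")          # ~1.2e53, cf. (1.3 +/- 0.9)e53
print(f"Edot     ~ {Edot:.1e} erg/s")     # ~8.5e40, cf. (8.8 +/- 3.5)e40
```

Even without the inclination corrections applied in the text, the sketch lands within the quoted uncertainties, confirming the internal consistency of the numbers.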
Note that, in this estimation, we assumed V$_{\rm OF, ion}$ to be the maximum of the velocity field from the map of the secondary component ($\sim$\,655\,$\mbox{km\,s}^{-1}$). \\ \noindent We also estimated the kinetic energy and power of the outflowing ionised gas, i.e. E$_{\rm OF, ion}$\,=\,0.5\,$\times$\,$\sigma_{\rm OF, ion}^{2}$\,M$_{\rm OF, ion}$ and $\dot{\rm E}_{\rm OF, ion}$\,=\,0.5\,$\times$\,$\dot{\rm M}_{\rm OF, ion}$\,$\times$\,(V$_{\rm OF, ion}^{2}$\,+\,3$\sigma_{\rm OF, ion}^{2}$), using an average H$\alpha$ velocity dispersion of $\sim$\,280\,$\mbox{km\,s}^{-1}$. We obtained E$_{\rm OF, ion}$\,=\,(1.3\,$\pm$\,0.9)\,$\times$\,10$^{53}$ erg and $\dot{\rm E}_{\rm OF, ion}$\,=\,(8.8\,$\pm$\,3.5)\,$\times$\,10$^{40}$ erg\,s$^{-1}$.\\ \noindent An upper limit on the outflow mass and energy can be estimated by considering the whole \lq outflow phenomenon\rq, i.e. the \lq outflow core\rq \ (secondary component) plus the buoyant bubbles (primary component, excluding the faint features described in Sect.\,\ref{S_result_lowSB}). Hence, the mass of the ionised gas associated with the outflow phenomenon (bubbles) is (1.8\,$\pm$\,1.1)\,$\times$\,10$^{6}$ M$_{\sun}$ ((1.7\,$\pm$\,1.1)\,$\times$\,10$^{6}$ M$_{\sun}$), and the corresponding energy is (1.9\,$\pm$\,1.5)\,$\times$\,10$^{53}$ erg ((8.1\,$\pm$\,1.1)\,$\times$\,10$^{52}$ erg).\\ \noindent We exclude star formation as a power source of the outflow, since the kinetic power of the starburst associated with supernovae is low ($\sim$\,6.3\,$\times$\,10$^{40}$ erg\,s$^{-1}$, as calculated following \citealt{Veilleux2005} from the total SFR, i.e. 0.09 M$_{\odot}$yr$^{-1}$, Table\,\ref{T_properties}). In what follows we focus on disentangling the two most likely scenarios: AGN- vs. jet-driven outflow.\\ \noindent The energy rate is of the order of 0.01 of the bolometric luminosity of NGC\,1052 (L$_{\rm bol}$\,=\,10$^{42.91}$ erg\,s$^{-1}$; \citealt{Onori2017b}).
This is in broad agreement with the results of \citet{Fiore2017} (see their Fig.\,1, right), who showed that the average ratio $\dot{\rm E}_{\rm OF, ion}$/L$_{\rm bol}$ for AGN-driven ionised outflows is generally below 0.1. \noindent As in \citet{Venturi2021}, in order to infer whether the jet is energetic enough to power the observed features, we compared the total kinetic energy of the jet (E$_{\rm jet}$) with the kinetic energy of the outflow. By assuming the jet power and travelling time (i.e. 10$^{45}$\,erg\,s$^{-1}$ and 0.7\,Myr, respectively) used in \citet{Mukherjee2018} to simulate the observed kinematics and morphology of the ionised gas in NGC\,1052, we obtain a total jet energy of E$_{\rm jet}$\,=\,2.2\,$\times$\,10$^{58}$\,erg.\\ \noindent The comparisons of $\dot{\rm E}_{\rm OF, ion}$ vs. L$_{\rm bol}$ and E$_{\rm jet}$ vs. E$_{\rm OF, ion}$ indicate that both the AGN and the jet in NGC\,1052 are capable of injecting the required energy into the ISM to power the outflow. However, taking into account the alignment between the radio jet, the secondary component, and the cocoon with enhanced turbulence, we consider that the most likely power source of the outflow is the jet, although some contribution from the AGN is possible. \subsection{Neutral gas outflow detection} \label{S_outflow_kin_neutral} \noindent As mentioned in Sect.\,\ref{S_kinematics}, the mapping of the neutral gas properties does not show evident outflow features (e.g. a broad kinematic component with significant blueshifted velocities). Hence, the identification of the neutral gas outflow and the corresponding estimates provided in this section are exploratory and must be taken with caution. \\ To identify the putative neutral gas outflow we used the velocity dispersion map (Fig.\,\ref{M_NaD_megara}, centre), which shows the clearest deviations from rotating-disc behaviour among the maps obtained from the NaD modelling (Sect.\,\ref{S_results_NaD}).
\\ As a threshold to identify the outflowing neutral gas, we consider the 75th percentile of the velocity dispersion distribution, that is, $\sigma_{\rm thr}$\,$>$\,245\,$\mbox{km\,s}^{-1}$. The selected region (with $\sigma$\,$>$\,$\sigma_{\rm thr}$) is marked with contours in the maps shown in Fig.\,\ref{M_NaD_megara}. It is extended up to a galactocentric distance of 4$\farcs$8 (530\,pc), with an elongated morphology (oriented north-south), and a projected area of 3.8\,arcsec$^{2}$. The region is characterised by mild kinematics, with velocity and velocity dispersion of 63\,$\pm$\,21\,$\mbox{km\,s}^{-1}$\, and 251\,$\pm$\,5\,$\mbox{km\,s}^{-1}$, respectively, on average. The EW is on average 1.2\,$\pm$\,0.3\,\AA. This value is converted into the column density of the wind (N$_{\rm H}$) via reddening (E$_{\rm B-V}$), following the approach by \citet{Cazzoli2014, Cazzoli2016}, already used for MEGARA data by \citet{CatalanTorrrecilla2020}. On average, the column density of the outflow is (2.8\,$\pm$\,0.7)\,$\times$\,10$^{21}$ cm$^{-2}$. As in \citet{CatalanTorrrecilla2020}, we assumed that the outflow is organised in a series of thin shells and, to obtain the deprojected velocities, distances and solid angle, we used a simple geometrical model of a conical outflow that emerges perpendicular to the disc.\\ Following these prescriptions, the total mass of neutral gas contained in the outflowing region is (7.1\,$\pm$\,2.8)\,$\times$\,10$^{6}$ M$_{\sun}$ and the outflow rate is (0.86\,$\pm$\,0.30) M$_{\sun}$yr$^{-1}$. We also derived the total energy of the neutral outflow, which is (1.1\,$\pm$\,0.4)\,$\times$\,10$^{55}$\,erg.\\ \noindent Cold neutral gas outflows in LINERs and early-type galaxies (ETGs), probed by the NaD absorption, have been less studied compared to e.g. ionised/molecular outflows in Seyferts or U/LIRGs (e.g. \citealt{Arribas2014, PereiraSantaella2016, Venturi2018, PereiraSantaella2020, Wylezalek2020, Perna2021, Comeron2021, Riffel2021}).
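For orientation only, the column density and projected area quoted above can be turned into a gas mass with a simplified slab estimate, M\,$\approx$\,$\mu$\,m$_{\rm H}$\,N$_{\rm H}$\,A. This is not the deprojected thin-shell conical model of \citet{CatalanTorrrecilla2020} used for the quoted values, whose geometry and projection corrections raise the result by a factor of a few; the mean atomic weight $\mu$\,=\,1.4 (helium correction) and the angular scale at 22.6\,Mpc are assumptions of this sketch.

```python
# Simplified slab estimate of the neutral outflow mass: M = mu * m_H * N_H * A.
# NOT the deprojected thin-shell conical model used in the text -- geometry
# and deprojection corrections raise the quoted value by a factor of a few.

MU = 1.4                # mean atomic weight per H atom (helium correction, assumed)
M_H = 1.673e-24         # hydrogen atom mass, g
M_SUN = 1.989e33        # solar mass, g
PC_CM = 3.086e18        # parsec in cm

D_MPC = 22.6                               # distance to NGC 1052, Mpc
pc_per_arcsec = D_MPC * 1e6 * 4.848e-6     # ~110 pc per arcsec at this distance

N_H = 2.8e21            # cm^-2, mean column density of the outflowing NaD gas
area_arcsec2 = 3.8      # projected area of the selected high-sigma region

area_cm2 = area_arcsec2 * (pc_per_arcsec * PC_CM) ** 2
M_slab = MU * M_H * N_H * area_cm2 / M_SUN

# ~1.4e6 M_sun: a projected-slab figure, below the deprojected
# (7.1 +/- 2.8)e6 M_sun quoted in the text, as expected.
print(f"M_slab ~ {M_slab:.1e} M_sun")
```

The slab value sets the right order of magnitude; the quoted mass additionally folds in the thin-shell conical geometry and deprojection, which this sketch deliberately omits.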
However, there are two systematic studies of neutral gas in large samples of LINERs, by \citet{Lehnert2011} and \citet{Sarzi2016}. \citet{Lehnert2011} detected neutral ISM gas in about one-third of their sample of 691 radio-loud ETGs on the basis of SDSS data. The detected NaD profiles suggest the presence of outflows with low velocities ($\sim$\,50\,$\mbox{km\,s}^{-1}$) and broad profiles ($\sim$\,500\,$\mbox{km\,s}^{-1}$). On the contrary, \citet{Sarzi2016} found that only a dozen radio-AGNs (out of 103 objects) show NaD absorption from the ISM, but the neutral gas never appears to be outflowing. \\ The only study of the NaD absorption in NGC\,1052 is by \citet{Cazzoli2018}, on the basis of slit spectroscopy. In that work, the neutral gas kinematics was interpreted as due to rotation. However, slit observations give only a partial description of the outflow phenomenon; hence, in the case of NGC\,1052, IFS observations could have been the key to our (tentative) detection of the neutral gas outflow. \subsection{Comparison with current and previous H$\alpha$ broad component measurements} \label{S_BLR_properties} The BLR component in NGC\,1052 has been observed in polarised light \citep{Barth1999} and at different wavelengths (\citealt{Onori2017a}, \citealt{Cazzoli2018}, \citealt{Dahmer2019a} and references therein).\\ \citet{Onori2017a} modelled the BLR component in both the optical and near-IR bands with \textit{HST}/FOS (R\,$\sim$\,2800) and ISAAC (R\,$\sim$\,730) spectra. The near-IR He\,I$\lambda$1.083$\mu$m line has been modelled with a broad Gaussian curve with a width of 2455\,$\mbox{km\,s}^{-1}$; a slightly smaller value (i.e. 2193\,$\mbox{km\,s}^{-1}$) has been used for H$\alpha$.
The broad H$\alpha$ emission was also measured by \citet{Balmaverde2014} (FWHM\,$\sim$\,2240\,$\mbox{km\,s}^{-1}$), \citet{Constantin2015} (FWHM\,$\sim$\,2800\,$\mbox{km\,s}^{-1}$) and \citet{Cazzoli2018}\footnote{They found evidence for the BLR only in \textit{HST}/STIS and not in ground-based CAHA/CAFOS data, due to a less reliable fit to the H$\alpha$ emission line in ground- and space-based data sets.} (FWHM\,$\sim$\,2915\,$\mbox{km\,s}^{-1}$) with \textit{HST}/STIS slit spectra, all obtaining values that are in fair agreement.\\ There exist three measurements of the width of the broad H$\alpha$ component with optical IFS. Two of them are from the present MEGARA and MUSE IFS data, namely 2427\,$\pm$\,332 and 2350\,$\pm$\,470\,$\mbox{km\,s}^{-1}$, respectively. These values are consistent within uncertainties, but smaller than the value reported by DH19a from their GMOS/GEMINI cube (R\,$\sim$\,1700 and final angular resolution 0$\farcs$7), that is, $\sim$\,3200\,$\mbox{km\,s}^{-1}$.\\ We consider the main sources of these discrepancies to be the number of components used to model the emission lines and the different spectral/spatial resolutions of the data sets (see e.g. \citealt{Cazzoli2020}). Another possibility for explaining the differences in the FWHM of the broad component is AGN variability (see e.g. \citealt{LHG2014}), whose study is beyond the aim of this paper.\\ \noindent The FWHM and luminosity of the broad H$\alpha$ component determined from the best-fitting model can be converted into a black hole mass using the virial relation. For NGC\,1052, we found that M$_{\rm BH}$ is $\sim$\,3\,$\times$\,10$^{5}$ M$_{\sun}$ from Eq.\,3 in \citet{Koss2017}. Considering that the assumed value of the luminosity is a lower limit, as we did not apply any correction for reddening, the estimate of the black hole mass from H$\alpha$ is in broad agreement with that by \citet{Onori2017b} using the virial relation, i.e.
$\sim$\,4\,$\times$\,10$^{6}$ M$_{\sun}$ (Table\,\ref{T_properties}). \\ \noindent However, the determination based on the broad H$\alpha$ has been explored mostly for luminous type 1 AGNs (see e.g. \citealt{Greene2005} for details); hence, for type 1.9 LINERs such as NGC\,1052 (Table\,\ref{T_properties}), it could be uncertain (e.g. it is challenging to isolate the AGN contribution unambiguously). Therefore, we consider the M$_{\rm BH}$ estimate from the stellar velocity dispersion more reliable (it is the result of the coevolution between host galaxies and their supermassive black holes). \section{Conclusions} \label{S_conclusions} \noindent On the basis of optical MUSE and MEGARA IFS data, we have studied the properties of the stellar and ionised/neutral gas components in the LINER\,1.9 NGC\,1052, using as tracers both emission lines (from H$\beta$ to [S\,II]) and the NaD absorption doublet.\\ The conclusions of this study can be summarised as follows: \begin{enumerate} \item \textit{Kinematics and dynamical support for the stellar component.} The stellar velocity field is characterised by ordered large-scale rotational motions ($\Delta$V\,=\,167\,$\pm$\,19\,$\mbox{km\,s}^{-1}$), although the velocity dispersion is generally high, as measured from MUSE (145\,$\pm$\,22\,$\mbox{km\,s}^{-1}$) and MEGARA data (201\,$\pm$\,16\,$\mbox{km\,s}^{-1}$). The rotational support is, however, low. The dynamical ratio, V/$\sigma$\,=\,1.2 (0.8) from MUSE (MEGARA) data, is indicative of a dynamically hot disc with a significant random motion component. In both data sets, the stellar major axis is well aligned with the photometric one. The kinematics and dynamics of the stellar disc of NGC\,1052 favour its classification as an S0-type galaxy. The black hole mass estimated from stellar dynamics is (2\,$\pm$\,0.5)\,$\times$\,10$^{8}$ M$_{\sun}$.\\ \item \textit{Ionisation mechanisms}.
By combining the location of the line ratios in the BPT diagrams, theoretical models of shock and pAGB ionisation, and the weak/strong-[O\,I] classification, we exclude the star formation and pAGB scenarios in favour of a mixture of shock excitation and AGN activity as the main mechanisms of ionisation in NGC\,1052. The general behaviour is that the ionisation in the central region (R\,$<$\,10$\arcsec$) is a mixture of AGN photoionisation and shocks, while at larger galactocentric distances shock excitation dominates.\\ \item \textit{The intriguing properties of the ionised gas probed by the primary component}. The velocity field shows a large-scale structure extended in the polar direction (NE-SW) up to $\sim$\,30$\arcsec$ ($\sim$\,3.3\,kpc) with blue and red velocities (typically $<$\,$\mid$\,250\,$\mid$\,$\mbox{km\,s}^{-1}$\,). The velocity dispersion map lacks the symmetry typical of a rotation-dominated system, with a notable enhancement ($\sigma$\,$>$\,90\,$\mbox{km\,s}^{-1}$) crossing the galaxy along the major axis of rotation in the central $\sim$\,10$\arcsec$ (also called the \lq butterfly\rq \ region within the main text). We consider that both features are likely related to the presence of an ionised gas outflow instead of, for example, a polar disc.\\ \item \textit{Ionised gas outflow}. It is probed by the secondary component, with a bipolar morphology, velocity dispersions $>$\,150\,$\mbox{km\,s}^{-1}$\, and velocities up to 660\,$\mbox{km\,s}^{-1}$. The outflow (with a mass of (1.6\,$\pm$\,0.6)\,$\times$\,10$^{5}$ M$_{\sun}$ and a mass rate of 0.4\,$\pm$\,0.2 M$_{\sun}$yr$^{-1}$) is propagating in a cocoon of gas with enhanced turbulence (the \lq butterfly\rq\ region) and triggering the onset of kpc-scale buoyant bubbles (polar emission). Considering the energy ((1.3\,$\pm$\,0.9)\,$\times$\,10$^{53}$ erg) and energy rate ((8.8\,$\pm$\,3.5)\,$\times$\,10$^{40}$ erg\,s$^{-1}$) of the outflow, both the AGN and the radio jet are able to launch the outflow.
However, taking into account its alignment with both the jet and the cocoon, and that the gas is collisionally ionised, we consider that the most likely power source of the outflow is the jet, although some contribution from the AGN is possible. \\ \item \textit{Neutral gas content}. The kinematics maps of the NaD absorption obtained with MEGARA data indicate optically thick neutral gas with complex kinematics. The velocity field is consistent with a slowly rotating disc ($\Delta$V\,=\,77\,$\pm$\,12\,$\mbox{km\,s}^{-1}$), but the velocity dispersion map is off-centred, with a peak value of 263\,$\pm$\,10 $\mbox{km\,s}^{-1}$\, observed at 2$\farcs$5 (277 pc) eastwards of the photometric centre without any counterpart in the (centrally peaked) flux map. The hints of the presence of a neutral gas outflow are weak and our identification is tentative. The putative neutral gas outflow is extended to the west with a projected area of 3.8 arcsec$^{2}$ and mild kinematics, i.e. with velocity and velocity dispersion of 63\,$\pm$\,21\,$\mbox{km\,s}^{-1}$\, and 251\,$\pm$\,5\,$\mbox{km\,s}^{-1}$, respectively. The mass, mass rate and energy of the neutral gas outflow would be (7.1\,$\pm$\,2.8)\,$\times$\,10$^{6}$ M$_{\sun}$, (0.86\,$\pm$\,0.30)\,M$_{\sun}$yr$^{-1}$, and (1.1\,$\pm$\,0.4)\,$\times$\,10$^{55}$\,erg, respectively. \\ \item \textit{BLR properties}. In the nuclear region of NGC\,1052 ($\leq$\,1$\arcsec$) the broad H$\alpha$ component originating in the (unresolved) BLR of the AGN is modelled with a Gaussian component with FWHM of 2427\,$\pm$\,332 and 2350\,$\pm$\,470\,$\mbox{km\,s}^{-1}$\, for MUSE and MEGARA data, respectively.\\ \item \textit{Unresolved component}. It has been detected with MUSE data (barely with MEGARA) in all emission lines. This component is observed in the central region, with a spatial extension matching that of the PSF, an average FWHM\,$\sim$\,1380\,$\mbox{km\,s}^{-1}$\, and line ratios indicating AGN ionisation.
It could probe either an unresolved AGN component as proposed by DH119b or a more recent AGN-driven outflow. However, with the current data set it is not possible to disentangle between the two scenarios. \end{enumerate} \begin{acknowledgements} The authors thank the anonymous referee for her/his instructive comments that helped to improve the presentation of this paper.\\ SC, IM, JM and LHM acknowledge financial support from the State Agency for Research of the Spanish MCIU through the \lq Center of Excellence Severo Ochoa\rq \ award to the Instituto de Astrof{\'i}sica de Andaluc{\'i}a (SEV-2017-0709). These authors are also supported by the Spanish Ministry of Economy and Competitiveness under grants no. AYA2016-76682-C3 and PID2019-106027GB-C41. LHM acknowledges financial support under the FPI grant BES-2017-082471. AGdP and ACM acknowledge the grant RTI-2018-096188-B-I00. LHG acknowledges funds by ANID – Millennium Science Initiative Program – ICN12$\_$009 awarded to the Millennium Institute of Astrophysics (MAS). FLF acknowledges support from PRIN MIUR project \lq Black Hole winds and the Baryon Life Cycle of Galaxies: the stone-guest at the galaxy evolution supper\rq, contract no. 2017PH3WAT. CRA acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under Marie Sk\l odowska-Curie grant agreement No 860744 (BiD4BESt) and from the State Research Agency (AEI-MCINN) and the Spanish MCINN under grant \lq Feeding and feedback in active galaxies\rq, with reference PID2019-106027GB-C42. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
We acknowledge the usage of the HyperLeda database (\url{http://leda.univ-lyon1.fr}).\\ This work has made extensive use of \texttt{IRAF} and \textsc{Python}, particularly with \textsc{astropy} \url{http://www.astropy.org} \citep{astropy:2013, astropy:2018}, \textsc{matplotlib} \citep{Hunter:2007}, \textsc{numpy} and \textsc{lmfit}.\\ This paper made use of the plotting package \textsc{jmaplot}, developed by Jes{\'u}s Ma{\'i}z-Apell{\'a}niz, available at: \url{http://jmaiz.iaa.es/software/jmaplot/current/html/jmaplot_overview.html}. \\ This research has made use of the \texttt{Skycat} tool that combines visualization of images and access to catalogues and archive data for astronomy. In particular, we used \texttt{EXTRACTOR}, part of the \texttt{GAIA} (Graphical Astronomy and Image Analysis Tool) package.\\ We thank J.\,Perea Duarte for the technical support and B.\,Perez for the calculation of gas metallicity. \end{acknowledgements} \bibliographystyle{aa}
\def\beginrefer{\section*{References}%
\begin{quotation}\mbox{}\par}
\def\refer#1\par{{\setlength{\parindent}{-\leftmargin}\indent#1\par}}
\def\endrefer{\end{quotation}}
{\noindent\small{\bf Abstract:} The complexity of composite spectra of close binaries makes the study of the individual stellar spectra extremely difficult. For this reason there exists very little information on the chemical composition of high-mass stars in close binaries, despite its importance for understanding the evolution of massive stars and close binary systems. A way around this problem exists: spectral disentangling allows a time-series of composite spectra to be decomposed into their individual components whilst preserving the total signal-to-noise ratio in the input spectra. Here we present the results of our ongoing project to obtain the atmospheric parameters of high-mass components in binary and multiple systems using spectral disentangling. So far, we have performed detailed abundance studies for 14 stars in eight eclipsing binary systems. Of these, V380\,Cyg, V621\,Per and V453\,Cyg are the most informative as their primary components are evolved either close to or beyond the TAMS. Contrary to theoretical predictions of rotating single-star evolutionary models, these stars show no abundance changes relative to unevolved main sequence stars of the same mass. It is obvious that other effects are important in the chemical evolution of components in binary stars. Analyses are ongoing for further systems, including AH\,Cep, CW\,Cep and V478\,Cyg.} \section{Introduction} In the last decade theoretical stellar evolutionary models, particularly for higher masses, were improved considerably with the inclusion of additional physical effects beyond the standard ingredients. It was found that rotationally induced mixing and magnetic fields could cause substantial changes in the resulting predictions (Meynet \& Maeder 2000, Heger \& Langer 2000).
Some of these concern evolutionary changes in the chemical composition of stellar atmospheres. Due to the CNO cycle in the core of high-mass stars some elements are enhanced (such as helium and nitrogen), some are depleted (e.g.\ carbon), whilst some (e.g.\ oxygen) are not affected at all. The rotational mixing predicted by stellar models is so efficient that changes in the atmospheric composition should be identifiable whilst the star is still on the main sequence. On the observational side, substantial progress has also been made. The VLT/FLAMES survey (Evans et al.\ 2005) produced CNO abundances for a large sample of B stars in the Milky Way and the Magellanic Clouds. This survey has opened new questions, since a large population of slow rotators was found to show an enhancement of nitrogen (Hunter et al.\ 2009). Also, important empirical constraints on models arose from the observational study performed by Morel et al.\ (2008), who found that magnetic fields have an important effect on the atmospheric composition of these stars. \begin{figure}[h] \centering \includegraphics[width=14.5cm]{K_Pavlovski_fig1.ps} \caption{Time series of observed composite spectra (red lines) of the B0\,V + B1\,V close eclipsing binary system V453\,Cyg (Pavlovski \& Southworth 2009). This is a portion of \'echelle spectra secured with the FIES spectrograph at the Nordic Optical Telescope (La Palma). The individual disentangled spectra of the two stars, which have very similar effective temperatures, are shown at the bottom of the plot (blue lines, secondary offset by $-0.2$) with their correct continuum levels.
The disentangled spectra have been adjusted with the appropriate Doppler shifts and relative intensities to reproduce the observed spectra, and are overlaid on them using blue lines.} \end{figure} Detached eclipsing binaries are fundamental objects for obtaining empirical constraints on the structure and evolution of high-mass stars, and are the primary source of directly measured stellar properties. Accurate physical properties are available for fewer than a dozen high-mass binaries, and most have no observational constraints on their chemical composition (Torres, Andersen \& Gim\'enez 2010). The aim of our project is to obtain a sample of high-mass binaries both with accurate parameters and, for the first time, with detailed abundance studies of the individual stars. We aim to gain insight into the chemical evolution of high-mass stars in close binary systems. The close proximity of the components leads to strong tidal forces, which may be an important additional effect on the internal and chemical structure of the stars, besides rotation and magnetic fields. \section{Sample and Method} The complexity of the composite spectra of close binaries makes studying the spectra of the individual stars extremely difficult. For this reason there exists very little information on the chemical composition of high-mass stars in close binaries, despite its importance for understanding the evolution of both massive stars and close binaries. A way around this problem exists: spectral disentangling. This technique allows a time-series of composite spectra to be decomposed into their individual components whilst preserving the total signal-to-noise ratio in the input spectra, and without the use of template spectra (Simon \& Sturm 1994). An overview of almost a dozen methods for spectral disentangling has been given by Pavlovski \& Hensberge (2010). For our work we use the {\sc fdbinary} Fourier-space code (Iliji\'c et al.\ 2004).
Synthetic spectra are generated using {\sc atlas9} with non-LTE model atoms (see Pavlovski \& Southworth 2009 and Pavlovski et al.\ 2009 for details). \begin{figure}[h] \centering \includegraphics[width=14.2cm]{K_Pavlovski_fig2.ps} \caption{Helium abundances for high-mass stars in close binaries from our sample (red symbols) compared to single sharp-lined B-type main sequence stars and BA supergiants in the Przybilla et al.\ (2010) sample (blue symbols). $\epsilon$(He) is the fractional helium abundance by number of atoms.} \end{figure} A vital step in a spectroscopic abundance study is the precise determination of the stellar atmospheric parameters (effective temperature, surface gravity, microturbulence velocity, etc.). When reconstructing the separate spectra of the components, their individual light contributions have to be obtained either from the disentangled spectra themselves, or from some other source such as a complementary light curve analysis (cf.\ Pavlovski \& Hensberge 2010). So far, we have performed detailed abundance studies for 14 components in eight eclipsing binaries. In many cases we have also reanalysed existing or new light curves. Of the systems studied, V380\,Cyg (Pavlovski et al.\ 2009), V453\,Cyg (Pavlovski \& Southworth 2009, Southworth et al.\ 2004a) and V621\,Per (Southworth et al., 2004b, 2011 in prep.) are the most informative as their primary components are evolved either close to or beyond the terminal-age main sequence (TAMS). Other binaries studied include V578\,Mon (Pavlovski \& Hensberge 2005, see also Hensberge et al.\ 2000), AH\,Cep, CW\,Cep, Y\,Cyg and V478\,Cyg [helium abundances have also been measured from disentangled spectra for DH\,Cep (Sturm \& Simon 1994), Y\,Cyg (Simon, Sturm \& Fiedler 1994) and DW\,Car (Southworth \& Clausen 2007)]. These objects mostly contain stars at the beginning of their main sequence lifetimes, so are important for calibrating theoretical models near their initial conditions.
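To make the idea of disentangling concrete, the following sketch recovers two component spectra from a set of composite exposures by linear least squares. It is not the {\sc fdbinary} algorithm: it is a deliberately simplified wavelength-domain analogue in which integer-pixel circular shifts, equal light fractions, and noiseless toy spectra are assumed purely for illustration.

```python
import numpy as np

# Toy separation of two component spectra from composite exposures, assuming
# known integer-pixel Doppler shifts and equal (0.5/0.5) light contributions.
n = 50
sA = np.ones(n); sA[20] = 0.4            # component A: one absorption line
sB = np.ones(n); sB[30] = 0.6            # component B: another line

def shift_matrix(k, n):
    """Matrix S_k with S_k @ s == np.roll(s, k): a circular Doppler shift."""
    return np.roll(np.eye(n), k, axis=0)

shifts = [(-3, 2), (0, 0), (4, -5), (2, -2)]   # (k_A, k_B) for each exposure
rows, obs = [], []
for kA, kB in shifts:
    rows.append(np.hstack([0.5 * shift_matrix(kA, n),
                           0.5 * shift_matrix(kB, n)]))
    obs.append(0.5 * np.roll(sA, kA) + 0.5 * np.roll(sB, kB))
M = np.vstack(rows)                      # stacked shift operators
y = np.concatenate(obs)                  # stacked composite spectra
sol, *_ = np.linalg.lstsq(M, y, rcond=None)
recA, recB = sol[:n], sol[n:]
# The zero-frequency (continuum) mode is degenerate, so each component is
# recovered only up to a constant offset; the line profiles are recovered.
print(int(np.argmin(recA)), int(np.argmin(recB)))
```

As in real disentangling, the continuum levels of the two components are not constrained by the relative shifts alone, which is why the light ratios (or a light curve analysis, as in the text) are needed to place the disentangled spectra on their correct continua.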
\section{The quest for surface helium enrichment} Theoretical stellar evolutionary models which include rotational mixing predict an enrichment of helium at the stellar surface, even during a star's main sequence lifetime. Extensive observational studies of B-type stars in the field (Lyubimkov et al.\ 2004) and in stellar clusters (Huang \& Gies 2006) yield evidence for this enrichment, but with a very large scatter in the individual measurements. An unexpectedly large fraction of both helium-rich and helium-weak stars was detected by Huang \& Gies (2006), who included only three helium lines in their analysis. The results of our detailed abundance determinations in the sample of 14 components of close binary stars are shown in Fig.\ 2 (red symbols). The results of a recent study of helium abundances in a sample of sharp-lined main sequence stars and BA supergiants (Przybilla et al.\ 2010) are also plotted (blue symbols). It is interesting that no helium abundance enrichment has been detected in these studies, either for single stars (Przybilla et al.\ 2010) or for the components of close binaries (this work). The studies therefore do not support a large spread in helium abundance, as found by other authors, with the caveat that the sample of main sequence stars studied is limited. \begin{figure}[h] \centering \includegraphics[width=15cm]{K_Pavlovski_fig3.ps} \caption{Evolution of nitrogen in high-mass MS stars to supergiants. The close binary systems in our sample are represented by red symbols.
Other symbols represent single stars as follows: blue symbols the VLT/FLAMES survey of B stars in the Milky Way (Hunter et al.\ 2009); green symbols the results of an abundance study for a sample of B stars with detected magnetic fields (Morel et al.\ 2008); and open symbols a study of sharp-lined stars (Przybilla et al.\ 2010).} \end{figure} \section{The evolution of nitrogen in high-mass binaries} In close binary stars, both fast rotation and tidal forces due to the proximity of the components play an important role in stellar evolution. Tides spin up (or spin down) the stars until their rotation period synchronises with the orbital period. The effects of tides, rotational mixing and magnetic fields were studied by de Mink et al.\ (2009). Their model calculations indicate significant changes in the surface helium and nitrogen abundances for short-period systems ($P < 2$ days) over a considerable fraction of their MS lifetime. The best candidates for testing these concepts contain more massive components, in advanced stages of the core hydrogen-burning phase, with significantly less massive and less evolved companions. V380\,Cyg, V621\,Per and V453\,Cyg fit this bill well, but have longer orbital periods (3.9\,d to 25.5\,d) so are not predicted to show significant abundance enhancements. This is illustrated in Fig.\,3 in which the abundance ratio N/O is plotted against $\log g$, which is a good indicator of evolutionary stage. The N/O ratios for the evolved stars in our sample are consistent with ZAMS values, like many of the stars in the VLT/FLAMES sample of Hunter et al.\ (2009). The evolutionary enhancement of nitrogen is only clearly present in the sample of supergiants observed by Przybilla et al.\ (2010). On average the magnetic B-type stars (Morel et al.\ 2008) have large nitrogen abundances, but definitive conclusions on the role of magnetic fields in nitrogen enrichment are still not possible (Morel 2010).
The large spread in nitrogen abundances for MS stars is obvious. Since the enhancements of helium and nitrogen are larger at lower metallicity, the best candidates for detailed study would be close binaries in the Magellanic Clouds. However, these are challenging objects for accurate abundance determination due to their high rotational velocities (resulting in line blending) and relative faintness. \section*{Acknowledgements} KP acknowledges receipt of the Leverhulme Trust Visiting Professorship which enables him to work at Keele University, UK, where this work was performed. This research is supported in part by a grant to KP from the Croatian Ministry of Science and Education. \footnotesize \beginrefer \refer De Mink, S., Cantiello, M., Langer, N., Pols, O.R., Brott, I., Yoon, S.-Ch., 2009, A\&A, 497, 243 \refer Evans, C.J., Smartt, S.J., Lee, J.-K., et al., 2005, A\&A, 437, 467 \refer Heger, A., Langer, N., 2000, ApJ, 544, 1016 \refer Hensberge, H., Pavlovski, K., Verschueren, W., 2000, A\&A, 358, 553 \refer Huang, W., Gies, D.R., 2006, ApJ, 648, 591 \refer Hunter, I., Brott, I., Langer, N., et al., 2009, A\&A, 496, 841 \refer Iliji\'c S., Hensberge H., Pavlovski K., Freyhammer L.M., 2004, in Hilditch R.W., Hensberge H., Pavlovski K., eds, ASP Conf. Ser. Vol. 318, {\it Spectroscopically and Spatially Resolving the Components of Close Binary Systems}. Astron. Soc. 
Pac., San Francisco, p.\ 111 \refer Lyubimkov, L.S., Rostopchin, S.I., Lambert, D.L., 2004, MNRAS, 351, 745 \refer Meynet, G., Maeder, A., 2000, A\&A, 361, 101 \refer Morel, T., 2010, these proceedings ({\sf arXiv:1009.3433}) \refer Morel, T., Hubrig, S., Briquet, M., 2008, A\&A, 481, 453 \refer Pavlovski, K., Hensberge, H., 2005, A\&A, 439, 309 \refer Pavlovski, K., Hensberge, H., 2010, in {\it Binaries - Key to Comprehension of the Universe}, eds.\ A.\ Pr\v{s}a and M.\ Zejda, ASP Conference Series (in press); arXiv:0909.3246 \refer Pavlovski, K., Southworth, J., 2009, MNRAS, 394, 1519 \refer Pavlovski, K., Tamajo, E., Koubsky, P., Southworth, J., Yang, S., Kolbas, V., 2009, MNRAS, 400, 791 \refer Przybilla, N., Firnstein, M., Nieva, M.F., Meynet, G., Maeder, A., 2010, A\&A, 517, 38 \refer Simon, K.P., Sturm, E., 1994, A\&A, 281, 286 \refer Simon, K.P., Sturm, E., Fiedler, A., 1994, A\&A, 292, 507 \refer Southworth, J., Clausen, J.V., 2007, A\&A, 461, 1077 \refer Southworth, J., Maxted, P.F.L., Smalley, B., 2004a, MNRAS, 351, 1277 \refer Southworth, J., Zucker, S., Maxted, P.F.L., Smalley, B., 2004b, MNRAS, 355, 986 \refer Sturm, E., Simon, K.P., 1994, A\&A, 282, 93 \refer Torres, G., Andersen, J., Gim\'{e}nez, A., 2010, A\&ARv, 18, 67 \end{quotation} \end{document}
\section{Introduction} Linear quadratic (LQ) stochastic games have attracted a great deal of attention in the control and related communities due to their wide applicability in stochastic control, minimax control, multi-agent systems and economics \cite{basar1995dynamic}, \cite{engwerda2005lq}, \cite{foley1971class}, \cite{weeren1999asymptotic}, \cite{basar1976uniqueness}, \cite{cruz1971series}, \cite{isaacs1999differential}. There is a well established notion of (Nash) equilibrium (NE) strategies for static games, and in dynamic games there are refinements of NE known as subgame perfect equilibria (SPE). Closed-form solutions for these NE (or SPE) may not exist in general, or may be hard to compute when they do. Among the various classes of dynamic games, LQ games admit a closed-form expression for SPE, characterized by Riccati equations. Necessary and sufficient conditions for the NE strategies of LQ games have been studied in \cite{foley1971class}, \cite{bernhard1979linear}, \cite{basar1976uniqueness}. Contrary to prior belief, \cite{basar1974counterexample} shows the existence of nonlinear equilibrium control strategies for LQ games. In the vast majority of prior works, the underlying assumption is the availability of free observations. Dynamic games are studied with either open-loop strategies (i.e.\ the only measurement is the initial state) or feedback strategies, where observations are freely available at every time. Challenges emerge when the measurements are on demand, but costly. This adds an extra layer of decision making for the players, who now have to both control the system and ask for measurements. In this work, we consider a class of two-player linear quadratic stochastic games of finite horizon. The game dynamics are partially observable. Contrary to the existing literature, the observations are not freely available. Each observation requires a finite cost for establishing a link for communication.
The link through which the observations are communicated to the players (their controllers) is noiseless but operated by two switches (Figure \ref{F:schematic}), one for each player. The link is established only when both players are willing, and in that case they both receive the actual state measurement at that time. Consequently, there is an apparent trade-off between the cost of obtaining state measurements and the estimation quality. \begin{figure} \begin{center} \includegraphics[width=0.75 \textwidth]{Untitled2} \caption{Schematic of the system. Each player has to select their controller strategy $g^i$ and switching strategy $s^i$. All the links are noiseless and delay-free.} \label{F:schematic} \end{center} \end{figure} In this game, the players can make a precise estimate of the state if they establish the link at every time instance. However, since the link establishment is costly, they may compromise the estimation accuracy in exchange for avoiding the cost of accessing the measurement. Therefore, the problem is to optimally decide when to establish the link and how to use the acquired measurements in order to minimize their individual costs. Since, in general, the players will have different preferences over the time instances at which they want to acquire the measurement, they have to come to an agreement on when to actually establish the link. The closest work on a similar game framework is \cite{maity2016optimal}, where the authors studied zero-sum stochastic differential LQ games. However, the switching times were selected in a collaborative way rather than being the outcome of a strategic interaction. The major departure of this work from \cite{maity2016optimal} is that we consider an explicit game for the switching strategy as well. We express the switch as a Boolean control action and seek SPE for both control and switching strategies.
Our contributions are as follows:\\ (a) We study the SPE of this dynamic game and show that they can be found through a two-step process. Specifically, in the first step we fix the switching strategy and study the SPE for the control strategies. The study shows that the control strategy is linear in the estimated state, where the gain is characterized by two backward Riccati equations which can be computed offline. Moreover, the Riccati equations do not depend on the switching strategy. \\ (b) Regarding the equilibrium switching strategy, we provide a backward recursive algorithm to find all SPE, where value functions need only be computed over a finite, quadratically-sized (in the duration of the game) set.\\ (c) Regarding the equilibrium switching strategy, we show that there are many equilibria, among which there is one that is strictly preferred by both players and has a Markovian structure. Our study finds that a strictly preferable switching strategy for a player depends not only on their own cost-to-go, but also on the cost-to-go of the opponent. The remainder of the paper is organized as follows: The problem formulation is provided in Section \ref{Problem Formulation}, Section \ref{Subgame Perfect Control Strategy} contains the results on the SPE of the control strategy, and the SPE for the switching strategy and its offline computation are analyzed in Section \ref{Subgame Perfect Switching Strategy}. Finally, we conclude our work in Section \ref{Conclusion}. \section{Problem Formulation} \label{Problem Formulation} In the discrete time Gauss-Markov setting, we consider the following linear dynamics of the state $X_t$: \begin{equation} X_{t+1}=A X_t+B^1 U^1_t+B^2 U^2_t+W_t \end{equation} where $X_t \in \mathcal{X} = \mathbb{R}^n$, and $U^i_t \in \mathcal U^i = \mathbb{R}^m$ denotes the action of player $i$.
$W_t\in \mathbb{R}^n$ is a Gaussian noise with $\mathbb{E}[W_t]=0$ and $\mathbb{E}[W_t W_s^{'}]=S \delta_{t-s}$ ($\delta_{t}$ is the Kronecker delta), and $X_0 \sim \mathcal{N}(0,\Sigma_0)$. There are two additional actions (switching actions) $V^1_t\in\{0,1\}$ and $V^2_t\in\{0,1\}$. These switching actions control a switch (the switch closes if both are equal to 1), and the observation available to both players is $Y_t\in \mathcal{X} \cup \{e\}$ with \begin{align} Y_t = \left\{ \begin{array}{ll} X_t &, V^1_t=V^2_t=1\\ e & , \text{else}, \end{array} \right. \end{align} where ``$e$'' denotes an erasure. The evolution of random variables in period $t$ is assumed to be $... X_t \rightarrow (V^1_t,V^2_t) \rightarrow Y_t \rightarrow (U^1_t,U^2_t) ...$ The information available at time $t$ to player $i$ before she takes the switching action $V^i_t$ is \begin{align} I_t = ( Y^{t-1}, U^{1,t-1}, U^{2,t-1}, V^{1,t-1}, V^{2,t-1} ), \end{align} and the information available at time $t$ to player $i$ before she takes the control action $U^i_t$ is \begin{align} \bar I_t = ( I_t, V^1_t,V^2_t,Y_t ). \end{align} As a result, the actions have the functional form \begin{subequations} \begin{align} V^i_t &= s^i_t(I_t), \qquad i=1,2, \\ U^i_t &= g^i_t(\bar I_t), \qquad i=1,2, \end{align} \end{subequations} where by $g^i=(g^i_t)_{t=0}^{T-1}$, $s^i=(s^i_t)_{t=0}^{T-1}$, we denote the control and switching strategies of player $i$. For all $t \in \{0,\cdots,T-1\}$, let us denote by $\Delta_t=V^1_t\cdot V^2_t$ an $I_t$-measurable random variable. The individual cost that each player needs to minimize is quadratic in the state and actions, and it also depends on the switching actions $V^1_t$ and $V^2_t$.
We consider a game over a finite horizon ($\{0,\cdots,T\}$) and the per-stage costs are explicitly written as: \begin{align} \label{E:reward} {C}^i_t(x_t,u^1_t,u^2_t,v^1_t,v^2_t)=&\|x_t\|^2_{Q^i}+\|u^i_t\|^2_{Q^{ii}}+\|u^j_t\|^2_{Q^{ij}}+ \lambda_i(v^i_t\cdot v^j_t) \end{align} for all $t\in \{0,\cdots,T-1\}$ and \begin{equation} {C}_T^i(x_T,u^1_T,u^2_T,v^1_T,v^2_T)=\|x_T\|^2_{Q^i}. \end{equation} The quantity $\lambda_i>0$ is the cost paid by player $i$ when both players attempt to close the switch and they observe the state information $X_t$. Therefore the expected total cost over the time horizon $\{0,\cdots,T\}$ is \begin{equation} J^i(\sigma^1,\sigma^2)= \sum_{t=0}^T\mathbb{E}[{C}_t^i(X_t,U^1_t,U^2_t,V^1_t,V^2_t)] \end{equation} where $\sigma^i=(g^i,s^i)$ denotes the strategy of player $i$ that comprises the control strategy $g^i$ and the switching strategy $s^i$. The objective of player $i$ is: \begin{equation} \min_{\sigma^i}J^i(\sigma^1,\sigma^2)=\min_{s^i}\{\min_{g^i}J^i(\sigma^1,\sigma^2)\} \end{equation} \section{Subgame Perfect Control Strategy} \label{Subgame Perfect Control Strategy} For dynamic games with complete information the appropriate equilibrium concept is a refinement of Nash equilibrium (NE) called the subgame perfect equilibrium (SPE). A strategy profile $(\sigma^1,\sigma^2)$ is a SPE if the restriction of $(\sigma^1,\sigma^2)$ to any proper subgame of the original game constitutes a NE \cite[p.\ 94]{fudenberg1991game}. We seek to characterize the SPE $(\sigma^{1*},\sigma^{2*})$ of this switched LQG game. Moreover, we will show that among the multiple SPE, there exists one that simultaneously minimizes the cost for both players among all SPE, and thus it will be the preferable SPE solution of this game. In this section, we study the SPE control strategy for both players.
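To fix ideas, the following sketch simulates one sample path of the dynamics, the switched observation $Y_t$, and the per-stage costs above. All matrices, weights and the periodic switching rule are illustrative assumptions, not part of the model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-D system; A, B1, B2, S, the weights, and the periodic
# switching rule below are placeholders, not the paper's values.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B1 = np.array([[0.0], [0.5]])
B2 = np.array([[0.0], [0.3]])
S = 0.01 * np.eye(2)                 # process-noise covariance
Q = np.eye(2)                        # state weight Q^i (same for both here)
Qu = (np.eye(1), 0.1 * np.eye(1))    # control weights Q^{ii}, Q^{ij}
lam = (0.5, 0.7)                     # observation costs lambda_1, lambda_2
T = 20

x = rng.multivariate_normal(np.zeros(2), np.eye(2))   # X_0 ~ N(0, Sigma_0)
cost = [0.0, 0.0]
for t in range(T):
    # Switching actions are taken first; the link closes only if v1 = v2 = 1.
    v1 = v2 = int(t % 4 == 0)
    y = x.copy() if v1 * v2 == 1 else None            # Y_t = X_t or erasure
    # Control actions follow (zero here, just to exercise the dynamics).
    u1 = np.zeros(1)
    u2 = np.zeros(1)
    for i, (ui, uj) in enumerate(((u1, u2), (u2, u1))):
        cost[i] += x @ Q @ x + ui @ Qu[0] @ ui + uj @ Qu[1] @ uj \
                   + lam[i] * v1 * v2
    x = A @ x + B1 @ u1 + B2 @ u2 \
        + rng.multivariate_normal(np.zeros(2), S)
cost = [c + x @ Q @ x for c in cost]                  # terminal cost ||X_T||^2
print(cost)
```

With identical state and control weights for the two players, the realized costs differ only through the observation charges: here each of the 5 closed-link instants adds $\lambda_i$ to player $i$'s total.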
\begin{thm} \label{T:NashControl} \it{For any switching profile $(s^{1}, s^{2})$ of the players, the SPE control strategy $g^{i*}$ has the following structure: \begin{align} U^{i}_t &= g^{i*}_t(\bar I_t)= -L^i_t\hat X_t, \end{align} where \begin{align} \label{E:hatx} \hat X_{t}= \left\{ \begin{array}{ll} A\hat X_{t-1}+B^1 U^{1}_{t-1}+B^2 U^{2}_{t-1}, & V^1_{t}\cdot V^2_{t}=0\\ X_{t}, & V^1_{t}\cdot V^2_{t}=1. \end{array} \right. \end{align} Furthermore, the cost-to-go incurred by player $i$ under the SPE control strategy at any time step $k$ is given by \begin{align} \label{E:T1} \mathbb{J}^{i*}_k(\bar I_k)=\mathbb{E}\Big[\sum_{t=k}^{T-1}&(\|E_t\|^2_{Q^i}+\lambda_i\Delta_t)+\sum_{t=k}^{T-2}\Delta_{t+1} \|AE_{t}+W_{t}\|^2_{P^i_{t+1}} +\|E_T\|^2_{Q^i}|~\bar I_k\Big]+\|\hat X_{k}\|_{P^i_k}^2 \end{align} where $E_t=X_t-\hat X_t$. The matrices $L^i_t$ and $P^i_t$ depend only on the game parameters $A,B^i,Q^i, Q^{ii}$ and $Q^{ij}$ (detailed expressions are in the proof of the theorem) and thus can be calculated offline without knowledge of the switching strategy profile.} \end{thm} \textit{Proof:} The proof of this theorem is provided in Appendix \ref{A:1}. To maintain brevity, $\mathbb{J}^{i*}_k(\bar I_k)$ will be denoted as $\mathbb{J}^{i*}_k$. From this point onward we will set $\Delta_T=0$ and write (\ref{E:T1}) in the compact form \begin{align} \label{E:T2} \mathbb{J}^{i*}_k(\bar I_k)=\mathbb{E}\Big[\sum_{t=k}^{T-1}&(\|E_t\|^2_{Q^i}+\Delta_{t+1} \|AE_{t}+W_{t}\|^2_{P^i_{t+1}} +\lambda_i\Delta_t)+ \|E_T\|^2_{Q^i}|~\bar I_k\Big]+\|\hat X_{k}\|_{P^i_k}^2 \end{align} It should be noted that in Theorem \ref{T:NashControl}, the control strategy $g^{i*}$ depends on the given switching strategy $(s^1, s^2)$ through $\hat X_t$. There are several remarks to be made at this point. The stochastic control version of the same problem (i.e.
single-player, single-objective) is a modified Kalman filtering problem where the observations are available on demand after paying a certain cost $\lambda$ per observation. Therefore, the switching decision will depend solely on the influence of switching on the error covariance matrix. This is a side result of our work and details will appear elsewhere. { From Theorem \ref{T:NashControl}, $\min_{\sigma^i} J^i(\sigma^1,\sigma^2)=\min_{s^i}\mathbb{E}[\mathbb{J}^{i*}_0]$. Therefore, the total cost incurred by player $i$ with the control strategy profile ($g^{1*},g^{2*}$) is $\mathbb{E}[\mathbb{J}^{i*}_0]$. Hence, the total cost incurred with switching is: \begin{align} \label{AE:cost} &\mathbb{E}[\mathbb{J}^{i*}_0]=\mathbb{E}[\|\hat X_0\|_{P^i_0}^2]+\mathbb{E}\Big[\sum_{t=0}^{T-1}\big(\|E_t\|^2_{Q^i}+\Delta_{t+1} \|AE_{t}+W_{t}\|^2_{P^i_{t+1}}+\lambda_i\Delta_t\big)+\|E_T\|^2_{Q^i}\Big]. \end{align} } Another remark that is apparent from our result is that the SPE control strategy is completely characterized by the pair of matrices $(P^1_t, P^2_t)$, which are uniquely determined by backward dynamic equations. \section{Subgame Perfect Switching Strategy} \label{Subgame Perfect Switching Strategy} In this section we complete the procedure for finding the SPE of this game by focusing on the switching strategies. We will do that by considering the backward induction process for finding SPE and reducing the cost-to-go functions to a simpler and more tractable form (compared to the one in (\ref{E:T2})). In this problem the switching action is taken first at time $k$ based on the knowledge $I_k$, and then the augmented knowledge $\bar I_k=$ ($ I_k$, $V^1_k$, $V^2_k$, $Y_k$) is used to select the control actions $U^i_k$. In order to visualize this, one might break the time period $[k,k+1]$ into two halves, where in the first half the switching action is taken and in the second half the control action is taken.
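Within each period, the estimator $\hat X_t$ of Theorem \ref{T:NashControl} is updated in the first half (once the switching outcome is known) and used by the controllers in the second half. A minimal sketch of one such update, with illustrative placeholder matrices, is:

```python
import numpy as np

# One step of the estimator \hat X_t: propagate the model with the previous
# controls when the link stays open (Delta_t = 0), and reset to the exact
# state when both switches close (Delta_t = 1).  A, B1, B2 are illustrative
# placeholders, not the paper's matrices.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B1 = np.array([[0.0], [0.5]])
B2 = np.array([[0.0], [0.3]])

def estimator_step(x_hat_prev, u1_prev, u2_prev, delta, x_true=None):
    if delta == 1:
        return np.array(x_true, dtype=float)   # link closed: X_t is observed
    return A @ x_hat_prev + B1 @ u1_prev + B2 @ u2_prev

x_hat = estimator_step(np.zeros(2), np.array([1.0]), np.zeros(1), delta=0)
print(x_hat)                                   # pure model prediction
x_hat = estimator_step(x_hat, np.zeros(1), np.zeros(1), delta=1,
                       x_true=[2.0, -1.0])
print(x_hat)                                   # reset to the observed state
```

Because both players see the same link outcomes and controls, they maintain the same $\hat X_t$, which is what makes the common estimate a valid basis for both equilibrium controllers.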
In Theorem \ref{T:NashControl}, $\mathbb{J}^{i*}_k$ is the optimal cost-to-go after the switching decision has been taken at time $k$. The actual (before the switching action is taken) cost-to-go at stage $k$ is: \begin{align} &\mathbb{V}^{i}_k(I_k)= \mathbb{E}\Big[\sum_{t=k}^{T} C^i_t(X_t,U^1_t,U^2_t,V^1_t,V^2_t)|~ I_k\Big] \end{align} and the optimization (game) variables are the controls $U^i_t$ and switches $V^i_t$ for all $t\ge k$. Due to the fact that $\bar I_t \supseteq I_t $ for all $t$, we can write \begin{align} \mathbb{V}^{i}_k(I_k)= &\mathbb{E}\Big[\mathbb{E}\Big[\sum_{t=k}^{T} C^i_t(X_t,U^1_t,U^2_t,V^1_t,V^2_t)|~\bar I_k\Big]|~ I_k\Big]\\ =&\mathbb{E}\big[ \mathbb{J}^i_k(g^1,g^2)|~I_k\big] \nonumber \end{align} where $\mathbb{J}^i_k(g^1,g^2)=\mathbb{E}\Big[\sum_{t=k}^{T} C^i_t(X_t,U^1_t,U^2_t,V^1_t,V^2_t)|~\bar I_k\Big]$. Since each player is interested in minimizing their cost, they are interested in $\min_{s^i,g^i}\mathbb{V}^i_k(I_k)$ at every stage $k$ (ultimately they want to minimize $\mathbb{V}^i_0(I_0)$). We can write \begin{align} \label{E:V} \min_{s^i,g^i}\mathbb{V}^i_k(I_k)=&\min_{s^i}\{\min_{g^i}\mathbb{V}^i_k(I_k)\}=\min_{s^i}\mathbb{E}[\mathbb{J}^{i*}_k|~I_k]. \end{align} We substitute the expression of $\mathbb{J}^{i*}_k$ from Theorem \ref{T:NashControl} into (\ref{E:V}), but before that, let us define \begin{align} M_t =\mathbb{E}[E_tE_t'|~\bar I_t]=(1-\Delta_t)(AM_{t-1}A'+S) \end{align} where $AM_{-1}A'+S=\Sigma_0$ (since $X_0 \sim \mathcal{N}(0,\Sigma_0)$). We also define $M_{t|t-1}=AM_{t-1}A'+S$. Note that $M_{t|t-1}$ is $I_t$ measurable whereas $M_t$ is $\bar I_t$ measurable. $M_t$ and $M_{t|t-1}$ are related as follows: \begin{align} \label{AE:Mfil} M_t=(1-\Delta_t)M_{t|t-1} \end{align} Now let us consider the $k$-th stage cost $\mathbb{J}^{i*}_k$.
\begin{align} \mathbb{J}^{i*}_k=&\mathbb{E}\Big[\sum_{t=k}^{T-1}\big(\|E_t\|^2_{Q^i}+\Delta_{t+1} \|AE_{t}+W_{t}\|^2_{P^i_{t+1}} \nonumber +\lambda_i\Delta_t\big)+ \|E_T\|^2_{Q^i}|~\bar I_k\Big]+\|\hat X_{k}\|_{P^i_k}^2\nonumber \\ =&\mathbb{E}\Big[\sum_{t=k}^{T-1}\big(tr(Q^iM_{t}+\Delta_{t+1}(AM_{t}A'+S)P^i_{t+1}) +\lambda_i\Delta_t\big)+tr(Q^iM_{T})|~\bar I_k\Big] +\|\hat X_k\|_{P_k^i}^2\nonumber \\ =&\mathbb{E}\Big[\sum_{t=k}^{T-1}\big(tr((1-\Delta_t)Q^iM_{t|t-1}+ \Delta_{t+1}M_{t+1|t}P^i_{t+1})+\lambda_i\Delta_t\big)+tr(Q^iM_{T})|~\bar I_k\Big]\nonumber\\ &~~~~~~~+\|\hat X_k\|_{P_k^i}^2 \end{align} Let us define $\mathcal{V}^i_k(I_k)=\mathbb E \Big[\mathbb{J}^{i*}_k|~I_k\Big]$. Therefore, \begin{align} \mathcal{V}^i_k(I_k)=& \mathbb{E}\Big[\sum_{t=k}^{T-1}\big(tr((1-\Delta_t)Q^iM_{t|t-1})+tr(\Delta_{t+1}M_{t+1|t}P^i_{t+1})+\lambda_i\Delta_t\big)+tr(Q^iM_{T})|~ I_k\Big]\nonumber\\ &+\mathbb{E}\Big[\|\hat X_k\|_{P_k^i}^2|~ I_k\Big] \end{align} Using (\ref{AE:filpred}), we get \begin{align} &\mathcal{V}^i_k(I_k)= \mathbb{E}\Big[\sum_{t=k}^{T-1}\big(tr((1-\Delta_t)Q^iM_{t|t-1})+tr(\Delta_{t}M_{t|t-1}P^i_{t})+\lambda_i\Delta_t\big)+tr(Q^iM_{T})|~ I_k\Big]\nonumber\\ &~~~~~~~~~~~~+\|\hat X_{k|k-1}\|_{P_k^i}^2 \end{align} The selection of the switching strategy $s^i_k(I_k)$ has no effect on $\hat X_{k|k-1}$, and hence this term does not play any role in the game at stage $k$. Let us define an instantaneous cost: \begin{align} \bar C^i_t(M_{t|t-1},\Delta_t)=&(1-\Delta_t)tr(Q^iM_{t|t-1})+\Delta_{t}tr(M_{t|t-1}P^i_{t})+\lambda_i\Delta_t. \end{align} With a slight abuse of notation, after neglecting the $\hat X_{k|k-1}$ term, we obtain \begin{align} &\mathcal{V}^i_k(I_k)=\mathbb{E}\Big[\sum_{t=k}^{T}\bar C^i_t(M_{t|t-1},\Delta_t)|~I_k\Big]. \end{align} Therefore, \begin{align} \min_{s^i,g^i}\mathbb{V}^i_k(I_k)&=\min_{s^i}\mathcal{V}^i_k(I_k) \nonumber \\ &=\min_{s^i}\mathbb{E}\Big[\sum_{t=k}^{T}\bar C^i_t(M_{t|t-1},\Delta_t)|~I_k\Big].
\end{align} Let us denote: \begin{equation} \mathcal{V}^{i*}_k=\min_{s^i}\mathcal{V}^i_k(I_k). \end{equation} Let us perform a similar backward induction to find the SPE for the switching strategies. Note that at time $T$ there is no action to optimize, and \begin{align} &\mathcal{V}^i_T(I_T)=\mathbb E \Big[\bar C^i_T(M_{T|T-1},\Delta_T)|~I_T\Big] =\mathbb{E}\Big[tr(Q^iM_{T})~|~I_T\Big]. \end{align} Let us define \begin{align} \mathcal{V}^{i*}_T=\mathcal{V}^i_T(I_T)=\mathbb{E}\Big[tr(Q^iM_{T})~|~I_T\Big]. \end{align} Similarly, at $T-1$, \begin{align} \mathcal{V}^i_{T-1}(I_{T-1})=\mathbb E \Big[&\bar C^i_{T-1}(M_{T-1|T-2},\Delta_{T-1})+\bar C^i_T(M_{T},0)|~I_{T-1}\Big]. \end{align} \begin{align} \mathcal{V}^i_{T-1}(I_{T-1})=\mathbb E \Big[&\bar C^i_{T-1}(M_{T-1|T-2},\Delta_{T-1})+tr(Q^iM_{T})|~I_{T-1}\Big]. \end{align} Using $M_{T-1}=(1-\Delta_{T-1})M_{T-1|T-2}$ and $M_T=AM_{T-1}A'+S$, \begin{align} \mathcal{V}^i_{T-1}(I_{T-1})=&\mathbb E \Big[\bar C^i_{T-1}(M_{T-1|T-2},\Delta_{T-1})+\nonumber \\&~~~(1-\Delta_{T-1})tr(Q^i(AM_{T-1|T-2}A'))+tr(Q^iS)|~I_{T-1}\Big]. \end{align} If $(s^{1*}_{T-1}, s^{2*}_{T-1})$ is an SPE strategy at time $T-1$, then \begin{align} &\mathcal{V}^i_{T-1}(I_{T-1})\big|_{(s^{i*}_{T-1}, s^{j*}_{T-1})} \le \mathcal{V}^i_{T-1}(I_{T-1})\big|_{( s^i_{T-1}, s^{j*}_{T-1})} \end{align} $\forall s^i_{T-1}$ and for both $i=1,2$; $j=1,2$ and $i\ne j$. Using the above definition of SPE, $s^i_{T-1}(I_{T-1})=0$ for $i=1,2$ is an equilibrium strategy, since a unilateral change from $0$ to $1$ does not change the cost for either player. However, there might be other equilibria (in this case only $(1,1)$) which produce a lower cost for the above cost function.
It is straightforward to show that the equilibrium strategy at $T-1$ is \begin{equation} \label{E:switch} s^{i*}_{T-1}(I_{T-1})= \begin{cases} 1 ~~\text{ if } ~~~\bar C^i_{T-1}(M_{T-1|T-2},1) -\bar C^i_{T-1}(M_{T-1|T-2},0)\le tr(Q^i(AM_{T-1|T-2}A'))\\ 0 \text{~~~otherwise} \end{cases} \end{equation} From (\ref{E:switch}) we notice that $(1,0)$ and $(0,1)$ can also be equilibrium strategies. However, those equilibria are equivalent to $(0,0)$ in the sense that they produce the same cost-to-go $\mathcal{V}^{i*}_{T-1}$ for both $i=1,2$. Therefore, we will restrict our attention to the two equilibria $(0,0)$ and $(1,1)$. As a remark, we point out that adding an infinitesimal switching cost $\epsilon_i$ every time player $i$ requests a switching (irrespective of whether the switch was closed or not) will ensure that $(0,1)$ and $(1,0)$ are never an SPE. Let us note that when $\bar C^i_{T-1}(M_{T-1|T-2},1) -\bar C^i_{T-1}(M_{T-1|T-2},0)= tr(Q^i(AM_{T-1|T-2}A'))$, then $s^{i*}_{T-1}=0$ and $1$ both produce the same cost-to-go value. Under such situations, all possible switching actions are equivalent. In order to rule out such instances, we make the following assumption: \begin{asm} \label{As:1} If $\bar C^i_{T-1}(M_{T-1|T-2},1) -\bar C^i_{T-1}(M_{T-1|T-2},0)= tr(Q^i(AM_{T-1|T-2}A'))$, then $s^{i*}(I_{T-1})=0$ for all possible histories $I_{T-1}$. Accordingly, (\ref{E:switch}) is modified as follows: \begin{equation} \label{E:switch1} s^{i*}_{T-1}(I_{T-1})= \begin{cases} 1 ~~\text{ if } ~~~\bar C^i_{T-1}(M_{T-1|T-2},1) -\bar C^i_{T-1}(M_{T-1|T-2},0)< tr(Q^i(AM_{T-1|T-2}A'))\\ 0 \text{~~~otherwise} \end{cases} \end{equation} \end{asm} Irrespective of whether the SPE $s^i_{T-1}(I_{T-1})$ is $0$ or $1$, the optimal cost-to-go $\mathcal{V}^{i*}_{T-1}$ depends only on $M_{T-2}$, and the best SPE strategy (the one that produces the least cost among all SPE) $s^{i*}_{T-1}(I_{T-1})$ also depends only on $M_{T-2}$ (or $M_{T-1|T-2}$).
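The stage-$(T-1)$ rule (\ref{E:switch1}) is easy to implement. The following scalar sketch (hypothetical parameters; matrix traces reduce to products in one dimension) evaluates one player's side of the test:

```python
# Sketch of the stage-(T-1) switching rule (E:switch1) in a scalar setting.
# All numerical parameters are hypothetical. In one dimension,
#   Cbar(M, 0) = Q * M    and    Cbar(M, 1) = M * P + lam,
# so player i agrees to switch iff
#   Cbar_i(M, 1) - Cbar_i(M, 0) < Q_i * A * M * A.

def cbar(M, delta, Q, P, lam):
    """Instantaneous cost (1-Delta) tr(Q M) + Delta tr(M P) + lam Delta."""
    return (1 - delta) * Q * M + delta * M * P + lam * delta

def switch_at_T_minus_1(M, A, Q, P, lam):
    """Player i's part of the SPE test; the joint switch is Delta = s1 * s2."""
    return cbar(M, 1, Q, P, lam) - cbar(M, 0, Q, P, lam) < Q * A * M * A
```

Note that the rule depends only on the predicted variance $M_{T-1|T-2}$, consistent with the observation above.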
Therefore, we hypothesize the following: \begin{clm} \label{C:markov} For any $k$, there exists a $s^{i*}_k(I_k)$ that depends only on $M_{k-1}$ and produces the least cost-to-go among all SPE. Hence $\mathcal V^{i*}_k\equiv \mathcal V^{i*}_k(M_{k-1})$ (i.e. $\mathcal V^{i*}_k$ only depends on $M_{k-1}$). \end{clm} \textit{Proof:} The hypothesis is true for $k=T,T-1$. Let us assume it is true for some $k+1\le T$, i.e. $\mathcal V^{i*}_{k+1}\equiv \mathcal V^{i*}_{k+1}(M_{k})$. Therefore, $$\mathcal{V}^{i*}_k=\min_{s^i}\sum_{t=k}^{T}\mathbb E \Big[\bar C^i_t(M_{t|t-1},\Delta_t)|~I_k\Big].$$ Then, using a dynamic programming argument, \begin{align} \label{AE:VJ3} \mathcal{V}^{i*}_k=&\min_{s^i_k}\mathbb E \Big[\bar C^i_k(M_{k|k-1},\Delta_k)+\mathcal V^{i*}_{k+1}(M_k)|~I_k\Big] \nonumber \\ =&\min_{s^i_k}\mathbb E \Big[\bar C^i_k(M_{k|k-1},\Delta_k)+\mathcal V^{i*}_{k+1}\big((1-\Delta_k)M_{k|k-1}\big)|~I_k\Big]. \end{align} From (\ref{AE:VJ3}), the best equilibrium strategy is $s^{i*}_k(I_k)=1$ if $$\bar C^i_k(M_{k|k-1},1)+\mathcal{V}^{i*}_{k+1}(\textbf{0}) < \bar C^i_k(M_{k|k-1},0)+\mathcal{V}^{i*}_{k+1}(M_{k|k-1})$$ (similar to Assumption \ref{As:1}, we only consider the strict inequality); otherwise $s^{i*}_k(I_k)=0$. Therefore $s^{i*}_k(I_k)$ requires only the knowledge of $M_{k-1}$, and hence, from (\ref{AE:VJ3}), $\mathcal{V}^i_k(I_k)\equiv \mathcal{V}^{i*}_k(M_{k-1})$. For this class of games, there always exists a Markovian SPE switching strategy and a Markovian SPE control strategy which produce the least cost-to-go among all SPE. There might be other non-Markovian SPE strategies which produce the same cost; however, due to Claim \ref{C:markov}, it is sufficient to consider only the Markovian strategies to find the best SPE corresponding to the least cost-to-go. \subsection{Offline Calculation of $\mathcal{V}^{i*}_k(M_{k-1})$ } In the following we describe how the players can take the decision online by using some stored offline functions (value functions).
Let us define $\mathcal{V}^{i*}_k(M)$ in the following manner: \begin{align} \mathcal{V}^{i*}_T(M)=\bar C^i_T(M,0) ~~~~~~~~\forall M \text{ and } i=1,2, \end{align} and \begin{align} \label{E:opt_cost2go} \mathcal{V}^{i*}_k(M)= \begin{cases} \bar C^i_k(M,1)+\mathcal{V}^{i*}_{k+1}(\textbf{0})~~~~~~~~~~~ \text{ if } \vartheta(k,M) >1,\\ \bar C^i_k(M,0)+\mathcal{V}^{i*}_{k+1}(AMA'+S)~~~~~\text{ otherwise, } \end{cases} \end{align} where $\vartheta(k,M)=\min\{\vartheta^1(k,M),\vartheta^2(k,M) \}$ and \begin{align} \vartheta^i(k,M)=\frac{\bar C^i_k(M,0)+\mathcal{V}^{i*}_{k+1}(AMA'+S)}{\bar C^i_k(M,1)+\mathcal{V}^{i*}_{k+1}(\textbf{0})}. \end{align} By construction, if $\mathcal{V}^{i*}_{k+1}(\cdot)$ denotes the minimum cost-to-go (for the subgame starting at $k+1$) among the SPE, then $\mathcal{V}^{i*}_{k}(\cdot)$ defined in (\ref{E:opt_cost2go}) provides the minimum cost-to-go at stage $k$ for player $i$. Therefore, by backward induction, $\mathcal V^{i*}_\cdot(\cdot)$ denotes the cost-to-go function along an SPE that simultaneously minimizes the cost-to-go for both players. \begin{clm} For any $k, M$ and history $I_k$, the best switching strategy (SPE) is given by $s^{i*}_k(I_k)=1$ for $i=1,2$ if and only if \begin{align} \label{E:condition} \bar C^i_k(M,1)+\mathcal{V}^{i*}_{k+1}(\textbf{0}) < \bar C^i_k(M,0)+\mathcal{V}^{i*}_{k+1}(AMA'+S). \end{align} Otherwise $s^{i*}_k(I_k)=0$ for $i=1,2$. \end{clm} \textit{Proof:} ($\Rightarrow$) is trivially true. ($\Leftarrow$): First, notice that we have established that $s^i_k(I_k)=0$ is an SPE strategy for all $k,I_k$.
Now let us assume that at some $k,M$, (\ref{E:condition}) holds. Then, if player $i$ selects a strategy such that $s^i_k(I_k)=0$, the cost-to-go for player $i$ with any strategy profile $((s^1)_{k+1}^T,(s^2)_{k+1}^T)$ from time $k+1$ onward is \begin{align} \label{AE:thrs} &\bar C^i_k(M,0)+\mathcal{V}^{i}_{k+1}(AMA'+S,(s^1)_{k+1}^T,(s^2)_{k+1}^T) \nonumber\\ &\ge \bar C^i_k(M,0)+\mathcal{V}^{i*}_{k+1}(AMA'+S)\nonumber\\ &>\bar C^i_k(M,1)+\mathcal{V}^{i*}_{k+1}(\textbf{0}). \end{align} Therefore, unilateral deviation is harmful (strictly non-profitable) for player $i$, which allows us to conclude that $s^i_k(I_k)=1$ for $i=1,2$ is an equilibrium at $(k,M)$. Therefore $(s^1_k(I_k),s^2_k(I_k))=(0,0)$ and $(1,1)$ are both equilibria. However, the cost-to-go from selecting $(1,1)$ is strictly less than that from selecting $(0,0)$, and it is therefore preferred by the players. Note that (\ref{E:opt_cost2go}) can be calculated and stored offline, and (\ref{E:condition}) can be evaluated online using the stored values. Equation (\ref{E:condition}) is equivalent to: \begin{align} \lambda_i < &\mathcal{V}^{i*}_{k+1}(AMA'+S)-\mathcal{V}^{i*}_{k+1}(\textbf{0})-tr\big( (P^i_k-Q^i)(AMA'+S)\big) \end{align} which shows a threshold policy for SPE switching. We note that $M_{0|-1}=\Sigma_0$; therefore, at time $0$ we only need the value $\mathcal{V}^{i*}_0(\Sigma_0)$, not the function $\mathcal{V}^{i*}_0(\cdot)$ on the entire space of symmetric positive semidefinite matrices. In order to decide $(s^{1*}_0(I_0),s^{2*}_0(I_0))$ we need to know only four values: $\mathcal{V}^{i*}_1(\textbf{0})$ and $\mathcal{V}^{i*}_1(A\Sigma_0A'+S)$ for $i=1,2$. Therefore, given the variance of $X_0$, we need to store only a finite number of values to characterize all the value functions for a finite duration game. \begin{clm} The maximum number of values (value function evaluations) needed to be stored to calculate the switching strategies for the entire game of duration $[0,T]$ is ${T(T+3)}$.
\end{clm} \textit{Proof:} Suppose that at stage $k$, $M_{k|k-1}$ (or $M_{k-1}$) takes $n_k$ possible distinct values based on all possible previous histories $I_k$. Therefore, to determine the switching at time $k$, we need to make $n_k$ comparison tests (\ref{E:condition}), and for each test the $\mathcal{V}^{i*}_{k+1}(\textbf{0})$ term is common. Therefore we need to evaluate the value function at only $n_k+1$ points at time $k$. For the switching pair $(1,1)$, $M_k=0$ (or $M_{k+1|k}=S$), and for any other possible switching profile at stage $k$, $M_k=M_{k|k-1}$. Therefore, at stage $k+1$, $n_{k+1}$ will be at most $n_k+1$ (there are at most $n_k+1$ possible values of $M_{k}$). Therefore, \begin{equation} n_{k+1} \le n_k+1 \end{equation} and with $n_0=1$, we get $n_k\le k+1$. \textit{Total value function evaluations to be stored} = $$2\sum_{k=0}^{T-1}(n_{k}+1)\le {T(T+3)}.$$ The factor $2$ in the above equation is due to the fact that we have to evaluate the value functions for both players. \begin{rem} A switching is performed only when it strictly reduces the cost-to-go for both players. Therefore, each switching reduces the welfare cost-to-go. However, the converse is not necessarily true, i.e. a switching with a potential to reduce the welfare cost-to-go may not always be performed. \end{rem} \subsection{Centralized Optimization vs. Game Setup} The problem we consider here is a game theoretic setup between two players, each with their own optimization criterion and two actions (control and switch). While they can select their controllers independently, their individual switching actions do not affect the system (and cost) unless they switch synchronously. A valid question to ask is how a centralized agent would select its action strategies in order to optimize the welfare cost (i.e. the sum of the two individual players' costs). We have shown in Theorem \ref{T:NashControl} that the control strategy is totally characterized by Riccati equations for the two-player setup.
A similar analysis would show that the same characteristics of the control strategy hold for the centralized agent. However, it will have a single Riccati equation as opposed to the two equations that we have. Similarly, the gain of the controller might change. Considering the symmetric case, i.e. $B^1=B^2$, $Q^1=Q^2$, $Q^{12}=Q^{22}=Q^{11}=Q^{21}$, we can show that the control strategy for the centralized agent will be equivalent to the strategies of the two agents (i.e. $L_t= \begin{bmatrix} L^1_t \\ L^2_t \end{bmatrix}$). Therefore, for a fixed switching strategy, the optimal welfare cost is the same for both the game setup and the centralized structure. However, the centralized switching strategy will be different from the game switching if $\lambda_1\ne \lambda_2$. This discrepancy arises because, in our model, the (selfish) players will not switch unless the switching strictly reduces their own cost, even though the switch might reduce the social welfare cost. However, the social welfare cost will always be minimized when we give the switching control to a centralized entity with the cost-to-go at stage $k$ being the social welfare $\mathcal{V}_k=\mathcal{V}^1_k+\mathcal{V}^2_k$. It is straightforward to notice that $\mathcal V^*_k\le \mathcal{V}^{1*}_k+\mathcal{V}^{2*}_k$. The centralized strategy is to switch whenever \begin{align} \label{AE:thrs2} &\bar C_k(M,0)+\mathcal{V}^*_{k+1}(AMA'+S) >\bar C_k(M,1)+\mathcal{V}^*_{k+1}(\textbf{0}) \end{align} where $\bar C_k(\cdot,\cdot)=\bar C^1_k(\cdot,\cdot)+\bar C^2_k(\cdot,\cdot)$. An interesting study will be to characterize the social loss $l_k= \mathcal{V}^{1*}_k+\mathcal{V}^{2*}_k-\mathcal V^*_k$. This is also known as the price of anarchy, and it will be studied elsewhere. { \section{Simulation Results} We consider the following two-dimensional system to illustrate the analysis that has been carried out in the preceding sections.
\begin{align*} X_{k+1} = \begin{bmatrix} 0.4 & 0.8 \\ -0.8 & 1 \end{bmatrix} X_k + U^1_k-U^2_k+W_k \end{align*} where $X_k, U^1_k, U^2_k,W_k \in \mathbb{R}^2$ for all $k$, and $W_k \sim \mathcal{N}(0,0.25\textbf{I})$. The observation cost parameters are $\lambda_1=1$ and $\lambda_2=1.5$. For the cost (\ref{E:reward}), the following parameters are taken: \\ $Q^1=\begin{bmatrix} 0.3 &0 \\ 0 &0.7 \end{bmatrix} $, $Q^2=\begin{bmatrix} 0.8 &0 \\ 0 &0.2 \end{bmatrix} $, $Q^{12}=Q^{21}=\textbf{0}$ and $Q^{11}=Q^{22}=\textbf{I}$. One can show that $L^i_{t-1}=(-1)^{i-1}P^i_t(\textbf{I}+P^1_t+P^2_t)^{-1}A$. By denoting $P_t=\textbf{I}+P^1_t+P^2_t$, one can verify: \begin{align*} &P^i_t=Q^i+A'P^{-1}_{t+1}P^i_{t+1}(P^i_{t+1}+\textbf{I})P^{-1}_{t+1}A\\ &P^i_T=Q^i \end{align*} We set the horizon of the game to be $T=15$ and assume that $X_0$ is known to the players, i.e. $M_0=\textbf{0}$. \begin{figure}[h] \begin{center} \includegraphics[width=0.7 \textwidth]{plot1} \caption{The red and blue lines plot $\mathcal V^{i*}_k(M^*)$ w.r.t.\ $k$ for $i=1$ and $2$ respectively. $M^*$ is the optimal trajectory of $M_k$ for the optimal switching strategy $(s^{1*},s^{2*})$. The black dots show the behavior of the optimal switching signal.} \label{F:1} \end{center} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=0.6 \textwidth]{plot2} \caption{A comparison among the costs for the cases when costly measurements are available ($\lambda_1=1, \lambda_2=1.5$) and no measurements are available ($\lambda_1=\lambda_2=\infty$).} \label{F:2} \end{center} \end{figure} In Figure \ref{F:1}, we show the optimal switching strategy $\Delta_k^*(\equiv V^{1*}_k \cdot V^{2*}_k)$ in black dots. In a game with horizon 15, the switch was closed $5$ times. In the red line, we plot the value function $\mathcal V^{1*}_k(M^*_k)$ along the optimal trajectory of $M_k$ determined by (\ref{AE:Mfil}) and the optimal $\Delta_k^*$. Similarly, in the blue line we plot $\mathcal V^{2*}_k(M_k^*)$.
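The value functions plotted in Figure \ref{F:1} are generated offline by the backward recursion (\ref{E:opt_cost2go}). The sketch below is a minimal scalar implementation (hypothetical parameters; the $P^i_k$ weights are assumed precomputed from the Riccati recursion, and we take the argument of $\mathcal V^{i*}_k$ to be the predicted variance $M_{k|k-1}$, so a joint switch at $k$ leads to $\mathcal V^{i*}_{k+1}(S)$):

```python
from functools import lru_cache

# Scalar sketch of the offline backward recursion for (V^{1*}_k, V^{2*}_k).
# A, S: scalar dynamics and noise variance; Q[i], lam[i]: player parameters;
# P[i][k] plays the role of P^i_k (assumed precomputed from the Riccati
# recursion). All numbers here are hypothetical.

def value_functions(T, A, S, Q, P, lam):
    def cbar(i, k, M, delta):
        # instantaneous cost (1 - Delta) Q M + Delta M P^i_k + lam Delta
        return (1 - delta) * Q[i] * M + delta * M * P[i][k] + lam[i] * delta

    @lru_cache(maxsize=None)
    def V(k, M):
        """Returns (V^{1*}_k(M), V^{2*}_k(M)) with M the predicted variance."""
        if k == T:
            return (Q[0] * M, Q[1] * M)        # terminal cost, no action left
        V_open = V(k + 1, A * M * A + S)       # no switch: M_{k+1|k} = A M A' + S
        V_closed = V(k + 1, S)                 # joint switch: M_k = 0, M_{k+1|k} = S
        both_prefer = all(
            cbar(i, k, M, 1) + V_closed[i] < cbar(i, k, M, 0) + V_open[i]
            for i in range(2)
        )
        if both_prefer:
            return tuple(cbar(i, k, M, 1) + V_closed[i] for i in range(2))
        return tuple(cbar(i, k, M, 0) + V_open[i] for i in range(2))

    return V
```

The memoization over reachable $M$ values mirrors the $n_k \le k+1$ counting argument given above.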
In Figure \ref{F:2}, we illustrate a comparative result for the cases when observation costs are finite ($\lambda_1=1, \lambda_2=1.5$) and when observation costs are infinite (so that no observation is practically acquired). In this figure we see that even with $5$ observations (out of $15$ possible), there are cost reductions of more than $50\%$. The dotted curves in this figure also indicate the envelope of $\mathcal V^{i*}_k$. In other words, all the graphs of $\mathcal V^{i*}_k(M^*_k)$ obtained by varying the pair $(\lambda_1, \lambda_2)$ will remain below the dotted lines shown in Figure \ref{F:2}. } \section{ Conclusion} \label{Conclusion} In this work, we have considered a switched stochastic LQ game where the switching carries a finite cost. We have characterized the SPE control and switching strategies for both players. The SPE control strategy turns out to be a linear strategy characterized by Riccati equations which do not depend on the switching strategy. The quality of state estimation depends on the switching strategy, and hence the switching cost-to-go function depends on the estimation error variance. We have shown that no-switch (open switch) is always an SPE. However, at certain time instances coordinated switching is also an SPE. Moreover, when both no-switch and switch are SPE, the cost-to-go with switching is lower than that with no-switching for both players. We studied a two-player game; however, a similar analysis is easily carried out for a general $n$-player game. \section{Appendix} \subsection{Proof of Theorem \ref{T:NashControl}} \label{A:1} The idea of the proof is based on backward induction. It should be noted that $\hat{X}_t$ satisfies Kalman-filter-like equations, except that the measurements are available only through a switching, and we always get noise-free measurements whenever a switching is done.
We define the filtered variable as $\hat{X}_t= \mathbb{E}[X_t~|~\bar I_t]$ and the prediction variable as $\hat X_{t+1|t}=\mathbb{E}[X_{t+1}|~I_t]$. \begin{align} \hat X_{t|t-1}=A\hat X_{t-1}+B^1 U^{1}_{t-1}+B^2 U^{2}_{t-1} \end{align} Therefore, $$\hat{X}_t=(1-\Delta_t)\hat X_{t|t-1}+\Delta_t X_t$$ where $\Delta_t=V^1_t\cdot V^2_t$. $\hat X_{t|t-1}$ satisfies the dynamics (\ref{E:hatx}). In a compact form, one can check \begin{align} \hat{X}_{t}=A\hat X_{t-1}+B^1 U^{1}_{t-1}+B^2 U^{2}_{t-1} +\Delta_t(AE_{t-1}+W_{t-1}) \end{align} where $E_t=X_t-\hat X_t$. Thus it satisfies the difference equation: \begin{align}\label{E:e} E_{t}= (1-\Delta_t)(A E_{t-1}+W_{t-1}) \end{align} Therefore, we can write \begin{align} \label{AE:filpred} \hat X_t=\hat X_{t|t-1}+\Delta_t(AE_{t-1}+W_{t-1}) \end{align} Let $P^i_t$ satisfy the following backward equation for $i=1,2$: \begin{align} \label{E:Pi} P^i_t=&Q^i+L_t^i{'}Q^{ii} L^i_t+L_t^j{'}Q^{ij} L^j_t+(A-B^i L^i_t-B^j L^j_t)'P^i_{t+1}(A-B^i L^i_t-B^j L^j_t) \nonumber\\ P^i_T=&Q^i, \end{align} and $L^i_t$ satisfies the relation: \begin{align} \label{E:L} &\Big(Q^{ii}+B^i{'}P^i_{t}\big(I-B^j(Q^{jj}+B^j{'}P^j_{t}B^j)^{-1}B^j{'}P^j_{t}\big)B^i \Big)L^i_{t-1} \nonumber \\ &=B^i{'}P^i_{t}(I-B^j(Q^{jj}+B^j{'}P^j_{t}B^j)^{-1}B^j{'}P^j_{t})A \end{align} Let us consider the cost segment for player $i$ for the given switching strategy profile $(s^{1*},s^{2*})$: \begin{align} \label{E:parcost} \mathbb{J}^{i}_k(g^1,g^2)=& \mathbb{E}\Big[\sum_{t=k}^{T} C^i_t(X_t,U^1_t,U^2_t,V^1_t,V^2_t)|~\bar I_k\Big] \nonumber \\ \nonumber =&\mathbb{E}\Big[\sum_{t=k}^{T}C^i_t(\hat X_t,U^1_t,U^2_t,V^1_t,V^2_t)+\|E_t\|^2_{Q^i}|~\bar I_k\Big] \\ =&\mathbb{E}\Big[\sum_{t=k}^{T-1}(\|\hat X_t\|^2_{Q^i}+\|U^i_t\|^2_{Q^{ii}}+\|U^{j}_t\|^2_{Q^{ij}})+\| \hat X_{T}\|^2_{Q^i_{}}|~\bar I_k\Big] +\nonumber \\ &\mathbb{E}\Big[\sum_{t=k}^{T-1}(\|E_t\|^2_{Q^i}+\lambda_i\Delta_t)+\|E_T\|^2_{Q^i}|~\bar I_k\Big] \end{align} Let us denote $\mathbb{J}^{1*}_k=\min_{g^1} 
\mathbb{J}^1_k(g^1,g^{2*})$ and $g^{1*}=\arg\min_{g^1} \mathbb{J}^1_k(g^1,g^{2*})$. Similarly we define $\mathbb{J}^{2*}_k$ and $g^{2*}$. \begin{clm} For all $k$, \begin{align} \mathbb{J}^{i*}_k=\mathbb{E}\Big[\sum_{t=k}^{T-1}&(\|E_t\|^2_{Q^i}+\Delta_{t+1} \|AE_{t}+W_{t}\|^2_{P^i_{t+1}} +\lambda_i\Delta_t)+\|E_T\|^2_{Q^i}|~\bar I_k\Big]+\|\hat X_{k}\|_{P^i_k}^2 \end{align} where $\Delta_{T}=0$. \end{clm} \textit{Proof:} This is proven by induction. It is easy to check that, conditioned on $\bar I_t$, $E_t$ and $\hat X_t$ are uncorrelated. Hence $\mathbb{E}[\|X_t\|^2_{Q^i}|~\bar I_t]=\mathbb{E}[\|\hat X_t\|^2_{Q^i}~|~\bar I_t]+\mathbb{E}[\|E_t\|^2_{Q^i}|~\bar I_t]$. Therefore, at $k=T$, the above claim is true. At $k=T-1$, \begin{align} \mathbb{J}^{i*}_{T-1}=\min_{g^i_{T-1}}\mathbb{E}\Big[&\|\hat X_{T-1}\|^2_{Q^i}+\|U^i_{T-1}\|^2_{Q^{ii}}+\|U^{j}_{T-1}\|^2_{Q^{ij}}+ \nonumber \\ &\| \hat X_{T}\|^2_{Q^i_{}}|~\bar I_{T-1}\Big] +\mathbb{E}\Big[\|E_{T-1}\|^2_{Q^i}+\lambda_i\Delta_{T-1}+\|E_T\|^2_{Q^i}|~\bar I_{T-1}\Big] \end{align} Using $\hat X_T=A\hat X_{T-1}+B^1U^1_{T-1}+B^2U^2_{T-1}$, we obtain \begin{equation*} g^{i*}_{T-1}=-L^i_{T-1}\hat X_{T-1} \end{equation*} Consequently, the claim holds for $k=T-1$. Let us assume that the claim holds for some $k+1\le T$.
Then, \begin{align} \label{AE:VJ} \mathbb{J}^{i*}_k=&\mathbb{E}\Big[\sum_{t=k}^{T-1}(\|E_t\|^2_{Q^i}+\lambda_i\Delta_t)+\|E_T\|^2_{Q^i}|~\bar I_k\Big] + \nonumber\\ &\min_{g^{i}}\mathbb{E}\Big[\sum_{t=k}^{T-1}(\|\hat X_t\|^2_{Q^i}+\|U^i_t\|^2_{Q^{ii}}+\|U^{j}_t\|^2_{Q^{ij}})+\| \hat X_{T}\|^2_{Q^i_{}}|~\bar I_k\Big] \nonumber \\ =&\min_{g^i_k}\mathbb{E}\Big[ \|\hat X_k\|^2_{Q^i}+\|U^i_k\|^2_{Q^{ii}}+\|U^{j}_k\|^2_{Q^{ij}}+\mathbb{J}^{i*}_{k+1}|~\bar I_k\Big] + \mathbb{E}\Big[\|E_k\|^2_{Q^i}+\lambda_i\Delta_k|~\bar I_k\Big] \end{align} We used the fact that $\bar I_k \subset \bar I_{k+1}$ and hence $$\mathbb{E}[\mathbb{E}[X|\bar I_{k+1}]|\bar I_k]=\mathbb{E}[X|\bar I_k].$$ Therefore, using the hypothesis that the claim holds for $k+1$, we can write (\ref{AE:VJ}) as: \begin{align} \mathbb{J}^{i*}_k =&\min_{g^i_k}\mathbb{E}\Big[ \|\hat X_k\|^2_{Q^i}+\|U^i_k\|^2_{Q^{ii}}+\|U^{j}_k\|^2_{Q^{ij}}+\|\hat X_{k+1}\|^2_{P^i_{k+1}}|~\bar I_k\Big] \nonumber \\ &+ \mathbb{E}\Big[\sum_{t=k}^{T-1}(\|E_t\|^2_{Q^i}+\lambda_i\Delta_t)+\|E_T\|^2_{Q^i}|~\bar I_k\Big]+\mathbb{E}\Big[\sum_{t=k+1}^{T-1}\Delta_{t+1} \|AE_{t}+W_{t}\|^2_{P^i_{t+1}}|~\bar I_k\Big] \end{align} Note that $$\hat X_{k+1}=A\hat X_k+B^1U^1_k+B^2U^2_k +\Delta_{k+1}(AE_k+W_k)$$ and therefore $$\mathbb{E}[\|\hat X_{k+1}\|^2_{P^i_{k+1}}|~\bar I_k]=\mathbb{E}[\|A\hat X_k+B^1U^1_k+B^2U^2_k\|^2_{P^i_{k+1}}+\Delta_{k+1}\|(AE_k+W_k)\|^2_{P^i_{k+1}}|~\bar I_k].$$ As a result, we obtain \begin{align} \label{AE:VJ2} \mathbb{J}^{i*}_k =&\min_{g^i_k}\mathbb{E}\Big[ \|\hat X_k\|^2_{Q^i}+\|U^i_k\|^2_{Q^{ii}}+\|U^{j}_k\|^2_{Q^{ij}}+ \|A\hat X_k+B^1U^1_k+B^2U^2_k\|^2_{P^i_{k+1}}|~\bar I_k\Big] \nonumber\\ &+ \mathbb{E}\Big[\sum_{t=k}^{T-1}(\|E_t\|^2_{Q^i}+\lambda_i\Delta_t)+\|E_T\|^2_{Q^i}|~\bar I_k\Big] +\mathbb{E}\Big[\sum_{t=k}^{T-1}\Delta_{t+1} \|AE_{t}+W_{t}\|^2_{P^i_{t+1}}|~\bar I_k\Big] \end{align} Note that $\hat X_t$ is $\bar I_t$ measurable for all $t$.
Thus, we can say from (\ref{AE:VJ2}) that the optimal $U^1_{k}$ for player 1 should be given by: \begin{align} U^1_{k}=-(Q^{11}+B^1{'}P^1_{k+1}B^1)^{-1}B^1{'}P^1_{k+1}(A\hat X_{k}+B^2U^{2}_{k}) \end{align} Similarly, for player $2$, it can be shown that the optimal $U^2_{k}$ will be: \begin{align} U^2_{k}=-(Q^{22}+B^2{'}P^2_{k+1}B^2)^{-1}B^2{'}P^2_{k+1}(A\hat X_{k}+B^1 U^{1}_{k}) \end{align} Comparing the expressions for the optimal $U^i_{k}$ along with the definition of the $L^i_k$ matrices (essentially solving the two linear equations in $U^i_{k}$), we obtain: \begin{align} U^i_{k}=g^{i*}_k(\bar I_k)=-L^i_{k}\hat X_{k} \end{align} Now, substituting the optimal $U^i_{k}$ in (\ref{AE:VJ2}) and using the definition of $P_k^i$ from (\ref{E:Pi}), we get: \begin{align} \mathbb{J}^{i*}_k=\mathbb{E}\Big[\sum_{t=k}^{T-1}&(\|E_t\|^2_{Q^i}+\Delta_{t+1} \|AE_{t}+W_{t}\|^2_{P^i_{t+1}} \nonumber +\lambda_i\Delta_t)+ \|E_T\|^2_{Q^i}|~\bar I_k\Big]+\|\hat X_{k}\|_{P^i_k}^2 \end{align} \bibliographystyle{IEEEtran}
\section{Introduction} The structure of nuclei far from stability is of primary interest within low-energy nuclear physics. The Nuclear Shell Model has long been established as the backbone of isotopic structure, correctly reproducing the canonical magic numbers ($N,Z = 2,8,20,28,50,82$) for protons and neutrons for spherical-like nuclei. Outside of these established points of strongest binding, magic behavior has appeared and disappeared at many asymmetric proton-to-neutron ratios \cite{Tanihata1985,Kanungo2013}. The establishment of shell closures across the chart of nuclides is critical in part for the benchmarking of nuclear theories \cite{Georgieva2015,Simonis2016}. Previous experimental and theoretical studies have confirmed the presence of a closed shell at $N = 32$ near the proton $Z = 20$ shell. In particular, large 2$^+$ excitation energies \cite{Huck1985, Cortes2020}, small $B(E2; 0^+ \to 2^+)$ values \cite{Seidlitz2011,Goldkuhle2019} and nuclear mass trends \cite{Wienholtz2013,Reiter2018,Leistenschneider2021} have all indicated the presence of a subshell at $N = 32$. Nuclear theories have corroborated experimental findings of such a subshell as well \cite{Otsuka2001,Leistenschneider2018,Li2020}. The existence of a similar subshell at $N = 34$ remains an open question. Experimental evidence from excitation energies \cite{Steppenbeck2013} supports the presence of a subshell. However, trends in two-neutron separation energies from mass measurements, most recently of Sc isotopes in \cite{Leistenschneider2021}, generally do not support the existence of such a subshell. Data from \cite{Michimasa2018} suggest the presence of the $N = 34$ subshell in the Ca isotopes, whereas current Ti and V data do not suggest such a closure in their isotope chains. More precise experimental data are required in the region to establish the existence or non-existence of an $N = 34$ subshell closure.
In this article, we present high-precision nuclear mass measurements of neutron-rich Ca, Ti and V isotopes completed in collaboration at TRIUMF’s Ion Trap for Atomic and Nuclear science (TITAN) and the Low Energy Beam and Ion Trap (LEBIT) facility at the National Superconducting Cyclotron Laboratory. \section{Experiment} At TRIUMF, mass measurements were performed at TITAN \cite{Dilling2006} using the Multiple-Reflection Time-of-Flight Mass Spectrometer (MR-ToF-MS) \cite{Jesch2015}. Isotopes of interest were produced at TRIUMF's Isotope Separator and Accelerator (ISAC) \cite{Ball2016}, where a 480 MeV, 50 $\mu$A proton beam was impinged on a $22.7$~g/cm$^2$ thick Ta target. Stopped spallation and fragmentation products diffused out of the target towards a hot Re ion source, where they were surface ionized. Further ionization for Ti isotopes was achieved via TRIUMF's resonant ionization laser ion source (TRILIS), using a two-step resonant laser excitation scheme \cite{Lassen2017}. All ionized beams were sent to a mass separator which removed non-isobaric products. The isobaric beam was then transported to the TITAN facility, where it was cooled and bunched via the TITAN radio-frequency quadrupole (RFQ) cooler and buncher, a linear RFQ filled with inert He gas at 10$^{-2}$ mbar \cite{Brunner2012}. Cooled ion bunches were sent to the TITAN MR-ToF-MS for mass measurement. The TITAN MR-ToF-MS determines the masses of ions via their time-of-flight over a given path and kinetic energy \cite{Wollnik1990,Plab2013}. Since the mass resolution is proportional to the time-of-flight ($R = t/2\Delta t$), a long flight path is desired and achieved via two electrostatic mirrors. Isochronous reflection of ions by these mirrors for a sufficient number of turns achieves the desired resolution \cite{Reiter2021}. The TITAN MR-ToF-MS consists of two primary sections: a preparation section and an analyzer section.
Cooled ion bunches were injected into the preparation section, which consists of a series of RFQs which further cool ion bunches for 3 ms before delivery to the analyzer section. Once inside the mass analyzer, bunches underwent between 350 and 520 isochronous turns between the mirrors for a total time-of-flight of $\sim 10$ ms before ejection onto a MagneToF detector \cite{Stresau2006} for time-of-flight detection. To minimize contaminant species in our spectra, a mass range selector consisting of two electrodes inside the mass analyzer deflected away any remaining non-isobaric beam products \cite{Dickel2015}. To measure and detect masses at very low signal-to-background ratios ($< 1{:}10^{4}$), mass-selective re-trapping was employed, where ions of interest are dynamically recaptured in the injection trap after a defined flight time in the mass analyzer \cite{Dickel2017}. Recaptured ions are subsequently re-injected into the mass analyzer for a time-of-flight measurement. This process is highly mass-selective, and allows for the separation of isobaric contaminants from ions of interest during the mass measurement process \cite{Dickel2017,Reiter2021,Beck2021,Izzo_In2021}. Delivered beams contained many atomic and molecular contaminant species alongside the Ca, Ti and V species of interest. An example of a mass spectrum taken during the campaign is shown in Fig.~\ref{fig:58u_spectra}. An initial beam assessment was done during the on-line experiment to identify and assign species to all peaks present in the spectrum. A species' identity was confirmed by its occurrence across multiple mass units. The identity of Ti in a spectrum was determined via subsequent measurements with the TRILIS lasers off and on. In a lasers-off measurement, one of the TRILIS lasers was blocked, removing one of the resonant excitation steps and preventing ionization of Ti.
This resulted in a decrease in the overall Ti yield, and subsequently a decrease in time-of-flight counts of Ti in a spectrum. A large count increase ($\sim$ a factor of 4) during a lasers-on measurement unambiguously confirmed the identification of Ti. More details on the experimental campaign of Ca, Ti and V masses at TITAN can be found in \cite{Dunling2021}. At the NSCL, isotopes of interest were produced via the in-flight method \cite{Aysto2001}, where a 130 MeV/u $^{76}$Ge primary ion beam was impinged on a natural Be target $\sim 0.4$~g/cm$^2$ thick. Desired fragments were separated from contaminants via the A1900 fragment separator \cite{Morrissey2003}, and progressed towards a gas catcher \cite{Sumithrarachchi2020}, where they were stopped as ions in a high-purity He gas. Ions were extracted from the gas catcher as a low-energy, continuous beam and transported to a dipole magnet mass separator for separation of non-isobaric products. Ions of interest, which were primarily singly-charged oxides formed from interactions inside the gas catcher, were sent to the LEBIT facility \cite{Ringle2013}. Continuous beams entering the LEBIT facility were sent to a cooler and buncher for cooling, accumulation, and bunching, and released as ion bunches \cite{Schwarz2016}. Bunches were delivered to the LEBIT Penning trap mass spectrometer, where isobaric contaminants were cleaned away via application of a dipolar RF field \cite{Kwiatkowski2015} before the mass measurement. Mass measurements of ions within the LEBIT Penning trap were completed via determination of the cyclotron frequency ($\omega_c$) of the ion in a 9.4 T magnetic field using the time-of-flight ion-cyclotron-resonance (ToF-ICR) technique \cite{Dilling2018}. In ToF-ICR, an ion's slow magnetron drift is converted into a fast modified cyclotron motion through the application of a quadrupole radio-frequency (RF) pulse at $\omega_{rf}$ near the cyclotron frequency.
At $\omega_{rf} = \omega_c$, and with a well-chosen duration of the RF pulse, full conversion to the fast modified cyclotron motion can be achieved, resulting in a maximum in radial kinetic energy and a minimum in time-of-flight to a downstream detector. Scanning $\omega_{rf}$ over a series of frequencies around $\omega_c$ results in a time-of-flight spectrum with a minimum at $\omega_c$, as seen in Figure \ref{fig:tof_icr_curve}. Standard excitation schemes with RF pulse durations between 50 ms and 500 ms, described in more detail in \cite{Konig1995}, were used. \begin{figure}[tb] \begin{center} \includegraphics[width=0.9\columnwidth]{58u_spectra.png} \caption{A sample mass spectrum taken at $A = 58$ with the TITAN MR-ToF-MS. The identified ion species are labeled.} \label{fig:58u_spectra} \includegraphics[width=0.8\columnwidth]{tof_resonance.png} \caption{A ToF-ICR spectrum taken for $^{55}$Ti$^{16}$O$^+$ with the LEBIT Penning trap mass spectrometer. The fit result of an analytical function described in \cite{Konig1995}, shown in red, is used to extract the minimum in time-of-flight that occurs at $\omega_{rf} = \omega_c$.} \label{fig:tof_icr_curve} \end{center} \end{figure} \section{Analysis} Time-of-flight spectra taken with the TITAN MR-ToF-MS are converted to mass spectra via the relationship: \begin{equation} \frac{m_{\text{ion}}}{q} = C(t_{\text{ion}}-t_0)^2 \end{equation} where $t_0$ is 0.18 $\mu$s as determined offline and $C$ is a calibration factor determined from a high-statistics, well-known reference ion in each spectrum as listed in Table \ref{tab:mass_table}. To account for time-dependent drifts of times-of-flight, a time-resolved calibration (TRC) was performed \cite{Ayet2019} using the mass data acquisition software package \cite{Dickel2019}. Peak centroids were determined by fitting hyper-EMG functions \cite{Purushothaman2017} using the {\sc emgfit} Python library \cite{emgfit}. 
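As an illustrative sketch of the conversion above: the calibration factor $C$ is fixed by a reference ion of known mass-to-charge ratio in the same spectrum, after which any measured flight time maps to $m/q$. The flight times and reference mass used below are hypothetical placeholders, not values from this work.

```python
# Sketch of the MR-ToF-MS calibration, m/q = C * (t_ion - t0)^2.
# t0 is the offline-determined delay offset; C comes from a well-known
# reference ion in the same spectrum.  The numbers are illustrative only.

T0 = 0.18e-6  # delay offset in seconds (0.18 us, determined offline)

def calibrate(t_ref, m_ref_over_q):
    """Determine the calibration factor C from a reference ion."""
    return m_ref_over_q / (t_ref - T0) ** 2

def mass_over_charge(t_ion, C):
    """Convert a measured time-of-flight (s) into m/q."""
    return C * (t_ion - T0) ** 2

# Hypothetical reference ion with m/q = 54 u arriving after 10 ms:
C = calibrate(10e-3, 54.0)
m_over_q = mass_over_charge(10.001e-3, C)  # slightly later peak -> heavier ion
```

Since $m/q$ grows quadratically with flight time, small drifts in $t$ translate directly into mass shifts, which is why a time-resolved calibration is applied.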
Statistical uncertainties are generated based on techniques described in \cite{Ayet2019,emgfit}. Systematic uncertainties of the MR-ToF-MS system are described in detail in \cite{Reiter2021,Ayet2019}, and total to a value of $\sim 2.0 \times10^{-7}$. This uncertainty is dominated by the uncertainties due to ion-ion interactions ($3.3 \times10^{-8}$ per detected ion), the non-ideal switching of mirrors ($ 7.0 \times10^{-8}$), and a further unknown systematic error ($\sim 1.9 \times10^{-7}$). \begin{figure}[tb] \begin{center} \includegraphics[width=1\columnwidth]{AMEcomp.png} \caption{A plot comparing mass values from \cite{Wang2021} to experimental mass results from this work. Grey bands represent error bars given on values in \cite{Wang2021}.} \label{fig:mass_compAME} \end{center} \end{figure} \begin{table*}[ht] \centering \caption{Results of the mass measurements performed, compared to the values recommended by AME2020 \cite{Wang2021}. Included is the mass ratio ($m_{\text{ionic,IOI}}/m_{\text{ionic,ref}}$) between the ionic masses of the ion of interest (IOI) and the reference ion for all measurements. Differences are $m_{\text{new}} - m_{\text{lit}}$. All mass values are in keV. All TITAN and LEBIT measurements were measured as singly-charged ions. 
} \begin{tabular}{c c c c c c c} \toprule Facility & Nuclide & Mass Excess & Literature & Difference & $\quad$ Reference Ion & Mass Ratio \\ \hline TITAN & $^{54}$Ca & -25119\,\,\,\,\,\,(12) & -25160\,\,\,\,\,\,(50) & 41(51) & $\quad$ $^{54}$Cr\,$^{+}$ & 1.000\,633\,250\,\,\,\,(239) \\ LEBIT & $^{52}$Ti & -49478.6(2.2) & -49477.7(2.7) & -0.9(3.5) & $\quad$ $^{48}$Ti\,$^{16}$O\,$^{+}$ & 0.999\,979\,3643\,\,(887) \\ TITAN & $^{54}$Ti & -45738\,\,\,\,\,\,(27) & -45744\,\,\,\,\,\,(16) & 6(31) & $\quad$ $^{54}$Cr\,$^{+}$ & 1.000\,222\,872\,\,\,\,(546) \\ LEBIT & $^{55}$Ti & -41827.0(5.7) & -41832\,\,\,\,\,\,(29) & 5(30) & $\quad$ $^{46}$Ti\,$^{19}$F\,$^{+}$ & 0.999\,922\,402\,\,\,\,(133) \\ TITAN & $^{55}$Ti & -41815\,\,\,\,\,\,(22) & -41832\,\,\,\,\,\,(29) & 17(36) & $\quad$ $^{55}$Cr\,$^{+}$ & 1.000\,259\,788\,\,\,\,(428) \\ TITAN & $^{56}$Ti & -39390\,\,\,\,\,\,(21) & -39420\,\,\,\,(100) & 30(102) & $\quad$ $^{56}$Cr\,$^{+}$ & 1.000\,305\,043\,\,\,\,(401) \\ TITAN & $^{54}$V & -49899\,\,\,\,\,\,(10) & -49898\,\,\,\,\,\,(11) & -1(15) & $\quad$ $^{54}$Cr\,$^{+}$ & 1.000\,140\,049\,\,\,\,(195) \\ TITAN & $^{55}$V & -49138.2(6.6) & -49125\,\,\,\,\,\,(27) & -13(28) & $\quad$ $^{55}$Cr\,$^{+}$ & 1.000\,116\,697\,\,\,\,(129) \\ TITAN & $^{56}$V & -46268\,\,\,\,\,\,(14) & -46180\,\,\,\,(180) & -88(181) & $\quad$ $^{56}$Cr\,$^{+}$ & 1.000\,173\,049\,\,\,\,(274) \\ LEBIT & $^{56}$V & -46198\,\,\,\,\,\,(36) & -46180\,\,\,\,(180) & -18(184) & $\quad$ $^{34}$S\,$^{19}$F$_2$\,$^{+}$ & 0.999\,731\,060\,\,\,\,(540) \\ TITAN & $^{57}$V & -44382.4(8.3) & -44440\,\,\,\,\,\,(80) & 58(80) & $\quad$ $^{57}$Cr\,$^{+}$ & 1.000\,153\,511\,\,\,\,(161) \\ LEBIT & $^{57}$V & -44371\,\,\,\,\,\,(15) & -44440\,\,\,\,\,\,(80) & 69(81) & $\quad$ $^{12}$C$_3$\,$^{1}$H$_5$\,$^{16}$O$_2$$^{+}$ & 0.998\,881\,613\,\,\,\,(220) \\ TITAN & $^{58}$V & -40361\,\,\,\,\,\,(44) & -40430\,\,\,\,\,\,(100) & 69(109) & $\quad$ $^{58}$Fe\,$^{+}$ & 1.000\,403\,867\,\,\,\,(820) \\ \hline \end{tabular}% 
\label{tab:mass_table}% \end{table*} ToF-ICR spectra taken with the LEBIT Penning trap, e.g. Fig.~\ref{fig:tof_icr_curve}, were fit with the analytical function described in \cite{Konig1995} to extract the cyclotron frequency. The atomic mass is determined via the ratio: \begin{equation} R = \frac{\omega_{\text{c,ref}}}{\omega_c} = \frac{m - m_e}{m_{\text{ref}} - m_e} \end{equation} where $m_e$ is the mass of the electron, $\omega_c$ and $m$ the cyclotron frequency and mass of the ion of interest, and $\omega_{\text{c,ref}}$ and $m_{\text{ref}}$ the cyclotron frequency and mass of a reference ion. Reference ions were required to have well-known literature masses (with values taken from \cite{Wang2021}) and the same $A/q$ as the ion of interest, and were measured either before or after ion-of-interest measurements to account for potential fluctuations of the magnetic field. Ionization potentials and molecular binding energies are not considered in any of our mass determinations, as they are smaller than 20 eV and do not contribute at the level of precision we attain. Systematic uncertainties, arising for example from a small misalignment of the trapping axis with the magnetic field or from small deviations from a perfect quadrupole electric potential, were determined to contribute at the $\sim 10^{-10}$ level, and thus were negligible \cite{Gulyuz2015}. \section{Results} \begin{figure}[tb] \begin{center} \includegraphics[width=1\columnwidth]{CaTiV_s2n_new_v3.png} \caption{A graph of the $S_{2n}$ in the $Z = 19-24$ isotope chains based on mass values from \cite{Wang2021} (black symbols), and Ca, Ti and V mass measurements from this work (red symbols). 
For the cases of $^{55}$Ti, $^{56}$V and $^{57}$V where both TITAN and LEBIT measured a mass value, the more precise mass value was used in the determination of the plotted $S_{2n}$ value.} \label{fig:s2n} \end{center} \end{figure} Table \ref{tab:mass_table} reports the masses of all isotopes measured in this work, as well as their mass excesses as found in the literature. Figure \ref{fig:mass_compAME} compares our mass results to data presented in \cite{Wang2021}. All masses had been previously measured in some capacity to varying precisions; our measurements are in agreement with the earlier results within $0.9\sigma$. Our measurements of $^{54}$Ca, $^{52}$Ti, $^{55}$Ti, $^{56}$Ti, $^{55}$V, $^{56}$V, $^{57}$V and $^{58}$V represent an increase in precision over previous values, with all but two of these representing a precision increase of a factor of two or more. Mass measurements of $^{55}$Ti, $^{56}$V and $^{57}$V were completed at both facilities, with results from both campaigns reported in Table \ref{tab:mass_table}. The reported measurements for $^{55}$Ti, $^{56}$V and $^{57}$V agree within 0.5$\sigma$, 1.8$\sigma$ and 0.7$\sigma$, respectively. \begin{figure}[tb] \begin{center} \includegraphics[width=1\columnwidth]{CaTiV_delta2n_offsetv3.png} \caption{A graph of the $\delta_{2n}$ for the $Z = 19-24$ isotope chains. Values are offset by amounts as presented in the legend. Unfilled symbols represent values from this work. The disappearance of a peak at $N = 32$, signifying the disappearance of a shell closure at $N = 32$, can be seen as proton number increases.} \label{fig:delta2n} \end{center} \end{figure} In order to probe the shell structure in the region encompassed by our measurements, we consider the two-neutron separation energies ($S_{2n}$), defined as: \begin{equation} S_{2n}(N,Z) = m(N-2,Z) + 2m_n - m(N,Z) \end{equation} with $m(N,Z)$ the mass excess and $m_n$ the mass of the neutron. Figure \ref{fig:s2n} shows $S_{2n}$ values for the $Z = 19-24$ isotopes. 
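The $S_{2n}$ definition above can be evaluated directly from tabulated mass excesses; the sketch below uses the approximate neutron mass excess and, as an example, the $^{54}$Ti and $^{56}$Ti mass excesses from Table \ref{tab:mass_table}.

```python
# Sketch of the two-neutron separation energy,
#   S_2n(N, Z) = m(N-2, Z) + 2*m_n - m(N, Z),
# with all quantities expressed as mass excesses in keV.

M_N = 8071.3  # neutron mass excess in keV (approximate literature value)

def s2n(me_n_minus_2, me_n):
    """S_2n (keV) from the mass excesses of the (N-2, Z) and (N, Z) isotopes."""
    return me_n_minus_2 + 2 * M_N - me_n

# Example with the 54Ti and 56Ti mass excesses (keV) measured in this work:
s2n_56ti = s2n(-45738.0, -39390.0)  # roughly 9.8 MeV
```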
Our results show that the steep drop-off at $N = 32$ seen in the Ca and Sc isotope chains is not present in the Ti and V chains. This confirms the disappearance of the $N = 32$ shell closure at and above $Z = 22$ as presented in \cite{Leistenschneider2018}. No drop-off is seen at $N = 34$ for any of the presented isotope chains. To further probe the structures seen in separation energy trends, we consider the empirical shell gap parameter ($\delta_{2n}$), given as: \begin{equation} \delta_{2n} = S_{2n}(N,Z) - S_{2n}(N+2,Z) \end{equation} Figure \ref{fig:delta2n} shows $\delta_{2n}$ for the $Z = 19-24$ isotopes. As a pseudo-derivative of $S_{2n}$, low $\delta_{2n}$ values decreasing towards zero indicate a flattening in $S_{2n}$, whereas a sharp peak indicates a steep drop-off in $S_{2n}$ at $N$. Such sharp peaks, when at even neutron numbers, are typically indicative of shell closures \cite{Leistenschneider2018}. A decrease in $\delta_{2n}$ from Ca towards V is seen at $N = 32$, strongly signaling the disappearance of the $N = 32$ shell closure. Additionally, our data do not show any peak-like behavior at $N = 34$, and thus do not support the presence of a shell closure at $N = 34$. However, mass measurements of more neutron-rich isotopes are needed to extend the mass surface and fully characterize the region surrounding $N = 34$. \begin{figure}[tb] \begin{center} \includegraphics[width=0.84\columnwidth]{CaTiV_model_compv3.png} \caption{A graph comparing $\delta_{2n}$ experimental values (solid lines) to theoretical calculations (dashed lines) from \cite{Stroberg2021}. Values are offset by amounts as presented in the legend. Unfilled symbols represent values from this work.} \label{fig:model_comp} \end{center} \end{figure} A comparison of our results to recent calculations using the valence-space in-medium similarity renormalization group (VS-IMSRG) \cite{Stroberg2021} is presented in Figure \ref{fig:model_comp}. 
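Since $\delta_{2n}$ is a finite difference of neighbouring $S_{2n}$ values, it is trivial to compute once the separation energies are in hand; the numbers below are purely illustrative, not data from this work.

```python
# Sketch of the empirical shell gap, delta_2n = S_2n(N, Z) - S_2n(N+2, Z).
# A sharp peak at an even N signals a shell closure; values flattening
# towards zero signal its absence.

def delta2n(s2n_at_n, s2n_at_n_plus_2):
    """Empirical shell gap (keV) from two-neutron separation energies."""
    return s2n_at_n - s2n_at_n_plus_2

# Illustrative numbers only: a steep S_2n drop-off versus a flat trend.
gap_closed = delta2n(12000.0, 8000.0)  # large gap, consistent with a closure
gap_open = delta2n(9000.0, 8800.0)     # small gap, no closure
```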
The VS-IMSRG calculations generally overshoot the experimental results, but follow the overall trends seen in the mass surface, including the disappearance of the $N = 32$ shell closure at higher proton numbers. Additionally, no evidence for a $N = 34$ shell closure is seen in the calculations. More refinement of ab initio calculations is ultimately needed for a complete picture of the $N = 32$ and $N = 34$ region. \section{Summary} Measurements of neutron-rich Ca, Ti and V isotopes were performed at the TITAN facility in Canada using its MR-ToF-MS and the LEBIT facility in the U.S. using its Penning trap mass spectrometer. These results, totaling 13 mass measurements, include eight masses whose precision is improved with respect to previous measurements. These measurements refine the nuclear mass surface around $N = 32$ and $N = 34$, and confirm the waning of the $N = 32$ shell closure as $Z = 20$ is exceeded. The refined mass surface also does not support the presence of an $N = 34$ shell closure. Mass data on more neutron-rich nuclides is ultimately needed to further understand the nuclear structure of the region. \begin{acknowledgments} We would like to thank J. Lassen and the laser ion source group at TRIUMF for their development of the relevant laser scheme as well as the NSCL staff, the ISAC Beam Delivery group, and M. Good for their technical support. This work was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada under Grants No. SAPIN-2018-00027, No. RGPAS-2018-522453, and No. SAPPJ-2018-00028, the National Research Council (NRC) of Canada through TRIUMF, the U.S. National Science Foundation through Grants No. PHY-1565546, No. PHY-2111185, and No. PHY-1811855, the U.S. Department of Energy, Office of Science under Grants No. DE-FG02-93ER40789 and No. DE-SC0015927, the German Research Foundation (DFG), Grant No. SCHE 1969/2-1, the German Federal Ministry for Education and Research (BMBF), Grants No. 05P19RGFN1 and No. 
05P21RGFN1, and the Hessian Ministry for Science and Art through the LOEWE Center HICforFAIR, by the JLU and GSI under the JLU-GSI strategic Helmholtz partnership agreement. E.D. acknowledges financial support from the U.K.-Canada Foundation. \end{acknowledgments}
\section{Introduction} Toxic comment detection on social media has proven to be essential for content moderation. According to the French Minister of Education, 18\% of French students were victims of harassment on social networks in 2021. At the same time, the number of posts on these platforms has been increasing. In 12 years, the number of tweets per day has increased tenfold to reach 500 million today\footnote{\href{https://www.internetlivestats.com/twitter-statistics/}{Twitter usage statistics - Internet Live Stats}}. This shows that the rapid and targeted detection of toxic comments on social networks has become a crucial issue for ensuring social cohesion, which, at this scale, can only be achieved by automating online moderation. Nowadays, the models performing best on text classification, representing the state of the art, are transformer-based models \citep{32} such as BERT \citep{33}. \citet{34} refined a pre-trained BERT model for identifying offensive language, automatically categorizing hate types, and identifying the target of the comment. \citet{35} used BERT for training and testing across datasets, and \citet{36} separately refined BERT on several datasets for hate speech and offensive language detection. In this study, we compare state-of-the-art models in natural language processing, such as BERT, and in vision applied to text, such as ResNet and Vision Transformers. To the best of our knowledge, no detailed comparison of all these models on a wide range of metrics, under the same training conditions and with the same training and testing datasets, exists in the literature. Unlike previous work, the same methodology and dataset are used throughout our analysis to focus on performance, bias measurement, and inference time. We tuned each of our models to achieve its best performance. The results of this work should help determine which model can be used in practice. 
Our comparison was performed using the same training and testing datasets extracted from the Civil Comments 2019\footnote{\href{https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data}{Jigsaw unintended bias in toxicity classification Kaggle}\label{civil-comments-data}}. This dataset is a multi-label dataset with imbalanced classes provided by Jigsaw/Conversation AI. For this dataset, we know the targeted identity for some comments, so that we can evaluate the biases during classification. The rest of the paper is organized as follows: Section \ref{sec:methodology} describes the dataset and the models used in the comparison. The experiment protocol and the results' analysis are presented in sections~\ref{sec:experiments} and~\ref{sec:result}. Finally, section \ref{sec:conclusion} concludes the paper. \section{Methodology} \label{sec:methodology} \subsection{Dataset} In 2017, the comment-hosting platform Civil Comments closed. The company made its 1.8 million comments public to support research to understand and improve civility detection in online conversations. The Jigsaw team supports this action; each comment was shown to 10 annotators, who were asked to “Rate the toxicity of the comment”. To ensure the accuracy of the ratings, some comments were seen by more than 100 annotators. For all comments, the value obtained at the end for each class is the fraction of positive annotations over the number of annotators. All comments were classified into seven categories: \texttt{toxicity}, \texttt{severe\_toxicity}, \texttt{obscene}, \texttt{threat}, \texttt{insult}, \texttt{identity\_attack}, and \texttt{sexual\_explicit}. 
\begin{table}[ht] \centering { \footnotesize \begin{tabular}{lp{4cm}} \hline {\bf Category} & {\bf Identity Options}\\\hline Gender & Male, Female, Transgender, Other gender \\\hline Sexual Orientation & Heterosexual, Homosexual, Bisexual, Other sexual orientation \\\hline Religion & Christian, Jewish, Muslim, Hindu, Buddhist, Atheist, Other religion \\\hline Race or ethnicity & Black, White, Latino, Other race or ethnicity\\\hline Disability & Physical disability, Intellectual or learning disability, Psychiatric disability or mental illness, Other disability\\\hline \end{tabular} } \caption{List of identity options presented to the annotators.} \label{tab:listidentity} \end{table} \begin{table}[ht] \centering \begin{tabular}{lcc} \hline {\bf Subgroup} & {\bf Count} & {\bf Percent Toxic} \\\hline all comments & 1 999 516 & 7.99\% \\\hline male & 48 870 & 15.05\% \\ female & 58 584 & 13.66\% \\ transgender & 2 759 & 21.13\% \\ heterosexual & 1 432 & 22.56\% \\ homosexual & 12 062 & 28.28\% \\\hline \end{tabular} \caption{Percentage of comments labeled toxic for a selection of identities.} \label{tab:peridentity} \end{table} In addition, a subset of 450,000 examples from this dataset was tagged with identity (Table~\ref{tab:listidentity}) using a list of questions, such as “What genders are referenced in the comment?” or “What races or ethnicities are referenced in the comment?”. Again, the score obtained for each identity class is the fraction of annotators who mentioned the identity out of the total number of annotators. We can see in Table~\ref{tab:peridentity} that there is an imbalance in the percentage of toxic annotations across identities. \subsection{Preprocessing} The comments are labeled with the probabilities of belonging to a class (for all toxicity and identity classes). 
To determine whether a comment is considered positive or negative for a class, we applied a threshold: if the probability is greater than 0.5, we assume that the comment is positive for that class, otherwise negative. We notice that the classes are highly unbalanced. The label \texttt{severe\_toxicity} is rarely activated on the whole dataset, as shown in Table \ref{tab:counthatetype}. For this reason, this class has been removed from the classes to be predicted to limit the number of classes to six. \begin{table}[ht] \centering \begin{tabular}{lr} \hline {\bf Hate subtype} & {\bf Count} \\\hline toxicity & 159 782 \\ severe\_toxicity & 13 \\ obscene & 10 671 \\ sexual\_explicit & 5 127 \\ identity\_attack & 14 761 \\ insult & 118 079 \\ threat & 4 725 \\ \hline \end{tabular} \caption{Count of comments for each subtype of hate speech.} \label{tab:counthatetype} \end{table} The following transformations are applied to each comment: \begin{itemize} \item Remove HTML tags \item Remove URLs \item Remove diacritics \item Transform to lowercase \item Remove white space \item Remove NA or empty \end{itemize} The dataset available on Kaggle\foottotoref{civil-comments-data} already provided a split into train and test subsets. It is assumed that the distributions of labels and subgroups between the two subsets are similar but not exact. To deal with the problem of the unbalanced dataset, we re-balance the toxicity classes during the training step. To do this, we apply negative down-sampling: we keep only 10\% of the randomly chosen examples without toxicity (all 6 toxicity classes not enabled), and we keep all the examples with at least one of the 6 classes enabled. In the end, there are as many examples with all the negative classes as there are examples with at least one positive class. In total, the size of the training set is 310 000 examples. It is important to note that no re-balancing is done on the test subset. 
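The thresholding and negative down-sampling steps described above can be sketched as follows. The class list matches the six retained labels and the keep fraction of 10\% follows the text, while the example records are stand-ins for the real Civil Comments rows.

```python
import random

# Sketch of label binarization (threshold 0.5) and negative down-sampling
# (keep ~10% of fully negative examples, all positives) for the training set.
CLASSES = ["toxicity", "obscene", "threat", "insult",
           "identity_attack", "sexual_explicit"]

def binarize(example, threshold=0.5):
    """Turn fractional annotator scores into 0/1 labels per class."""
    return {c: int(example[c] > threshold) for c in CLASSES}

def downsample_negatives(examples, keep_fraction=0.10, seed=0):
    """Keep every example with at least one positive class and roughly
    keep_fraction of the fully negative ones."""
    rng = random.Random(seed)
    kept = []
    for ex in examples:
        if any(binarize(ex).values()) or rng.random() < keep_fraction:
            kept.append(ex)
    return kept
```

As stated above, this re-balancing is applied to the training split only, never to the test subset.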
\subsection{Models} Most of the trained transformers are based on BERT, which stands for Bidirectional Encoder Representations from Transformers and was developed by Google in 2018. It is a Transformer-based model that only uses the encoder part of the Transformer. The BERT model can also be used for classification. It uses a specific token \texttt{<CLS>} at the beginning of each sequence for classification purposes. In our comparison, we take BERT as our baseline model. Despite its excellent results on different benchmarks, the model has some limitations. Since the release of BERT, different models were proposed to address some of these limitations. For this reason, we investigate the performance of recent transformer language models: DistilBERT \citep{39}, AlBERT \citep{38}, RoBERTa \citep{37}, XLM RoBERTa \citep{41}, BERTweet \citep{42}, HateBERT \citep{40}, XLNet \citep{43} and Compact Convolutional Transformer (CCT) \citep{44}. \subparagraph{DistilBERT} was proposed by \citet{39}. It is a distilled \citep{distill} version of the BERT model. The new model has 40\% fewer parameters and runs 60\% faster while preserving over 95\% of BERT’s performance. \subparagraph{AlBERT} \citep{38}, which stands for “A Lite BERT”, was made available in an open source version by Google in 2019. The model was built with the original BERT structure, but designed to drastically reduce the number of parameters (by 89\%) by sharing parameters across the hidden layers of the network and factorizing the embedding layer. All of this was accomplished with an accuracy reduction from 82.3\% to 80.1\% on average over a list of datasets. \subparagraph{RoBERTa} \citep{37} is a modification of the BERT model proposed by Facebook AI in 2019. To improve end-task performance, RoBERTa uses a byte-level Byte-Pair-Encoding \citep{bytepair_encoding} as a tokenizer. Hence, the tokenizer contains more than 50k words, which increases the embedding layer size and the number of learning parameters. 
Regarding pre-training, RoBERTa has been trained on Masked Language Modeling (MLM) and Causal Language Modeling (CLM). In causal language modeling, the model tries to predict the masked token using only the tokens to its left or right in the sentence, which makes the prediction unidirectional. \subparagraph{XLM RoBERTa} \citep{41} is a multilingual version of RoBERTa. It is pre-trained on 2.5 TB of filtered CommonCrawl data containing 100 languages. \subparagraph{BERTweet} \citep{42} is a BERT-based model trained on a huge English tweet corpus. It was trained using the RoBERTa procedure on a language modeling task. The corpus used for the training contains about 820 million English tweets (80 GB). Training the model required substantial computing resources: 8 V100 GPUs with 32 GB each. BERTweet has been shown to outperform the RoBERTa base model on the following tweet tasks: part-of-speech tagging, named-entity recognition and text classification. \subparagraph{HateBERT} \citep{40} is a model published at an Association for Computational Linguistics conference. It uses a pre-trained BERT base model. This model has been further trained on a language modeling task on the specific social-network dataset RAL-E (Reddit Abusive Language English dataset). The dataset consists of 1 492 740 different sentences from Reddit and contains hate speech and offensive and abusive phrases. The model has also been fine-tuned on 3 different datasets: OffensEval, AbusEval and HatEval, beating the state of the art on these 3 datasets. \subparagraph{XLNet} \citep{43} is a large bidirectional transformer that uses improved training methodology, larger training dataset, and more computational power. 
XLNet outperformed BERT on 20 tasks, such as question answering, natural language inference, sentiment analysis, etc.\\ For all these models, we concatenate the output of the last 4 layers into one large feature vector and stack two dense layers to get a vector of size 6, which corresponds to the 6 toxicity classes to be predicted. Pre-trained models were used, and the model weights were unfrozen during training. Multiple studies have investigated feature extraction in Transformers. Results presented in \citep{33} inspired our study and comparison: the paper shows that the concatenation of the four last layers from the encoder gives better results than using only the last layer. \subparagraph{Compact Convolutional Transformer (CCT)} \citep{44} is a Transformer-based architecture for vision. The original paper shows that CCT can lead to good results on image and on text datasets with fewer parameters compared to Transformer-based models. Previous research has applied Transformers to images, e.g., the Vision Transformer (ViT) \citep{dosovitskiy2020vit}. The main idea of these models is to use the advantages of Transformers on images to extract information that cannot be brought out by convolutions. Unlike ViT, CCT combines convolutions and Transformer attention layers. CCT first uses a convolution tokenization on the image, while Vision Transformer uses patch-based tokenization. This layer applies a certain number of convolutions that produce a set of maps that are reshaped (flattened) and directly used by an optional positional embedding layer. The embedding is then fed to a series of Transformer encoder layers and pooled before being used by dense layers for classification. We adapted this type of Transformer to our study in order to use it on text. Since CCT works only on images, we used GloVe pre-trained embeddings to represent the sentences from the dataset as images. 
We padded the sentences to a fixed length and concatenated each embedded word to form a matrix representing a one-channel image. The training was done from scratch, and we used a pre-trained GloVe embedding updated during the training. Global Vectors for Word Representation (GloVe) \citep{pennington-etal-2014-glove} is a model used to find word vectors. It uses a co-occurrence matrix to consider the global context of the words in the sentence. Semantic relationships between words can be extracted from the co-occurrence matrix. To compare BERT-based models with other more traditional models, we trained a Bidirectional GRU and a Bidirectional LSTM from scratch. For each one, three RNN layers and one unfrozen GloVe embedding layer \citep{pennington-etal-2014-glove} were used. Several ResNets \citep{45} with depths of 44 and 56 were also trained from scratch. We used pre-trained GloVe embeddings. In some sessions, we froze the embedding. \section{Experiments} \label{sec:experiments} \subsection{Training} All models are trained over three epochs, with a batch size of 32 examples, except for CCT, where we limit ourselves to 8 per batch due to the lack of VRAM. We use the AdamW optimizer. We use the positive weighted Binary Cross Entropy (pwBCE) as a loss function. This loss function adds weights on the positive samples so that they count as much as the negatives. For the RoBERTa model, we use three other loss functions: Binary Cross Entropy (BCE), Focal Loss (FL) \citep{46}, and positive weighted Focal Loss (pwFL). FL reduces the loss attributed to well-classified examples and focuses on examples with poorly classified classes, usually due to class imbalance. pwFL applies the same positive-weighting trick as pwBCE to FL. To measure the models' performance, we use metrics similar to those used during the Kaggle competition\foottotoref{civil-comments-data}: Macro AUROC, Macro F1 and Micro F1 with a threshold of 0.5, Precision and Recall. 
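The weighted losses described above can be sketched numerically as below. The positive weight and the focusing parameter $\gamma$ are illustrative defaults, not the exact values used in training.

```python
import numpy as np

# Sketch of the positive-weighted BCE (pwBCE) and focal loss (FL).
# pos_weight scales the positive term so that rare positive labels count
# as much as negatives; gamma down-weights well-classified examples
# (gamma = 2 is a common default, chosen here for illustration).

def pw_bce(y_true, y_pred, pos_weight, eps=1e-7):
    """Positive-weighted binary cross-entropy averaged over all labels."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    loss = -(pos_weight * y_true * np.log(y_pred)
             + (1 - y_true) * np.log(1 - y_pred))
    return loss.mean()

def focal_loss(y_true, y_pred, gamma=2.0, eps=1e-7):
    """Focal loss: scales the cross-entropy term by (1 - p_t)^gamma."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    p_t = y_true * y_pred + (1 - y_true) * (1 - y_pred)
    return (-((1 - p_t) ** gamma) * np.log(p_t)).mean()
```

pwFL combines both ideas by inserting the same positive weight into the focal term.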
To understand the models' complexity, we also measure the inference time per batch calculated on the test set. The average inference time per batch is computed from 6,000 batches during the inference phase on the test set. As we can see in Table~\ref{tab:perfbias}, the hate speech detection models could make biased predictions for particular identities that are already the target of such abuse. To measure such unintended model bias, we rely on the AUC-based metrics developed by \citet{23}. These include Subgroup AUC (Sub. AUC), Background Positive Subgroup Negative (BPSN) AUC, and Background Negative Subgroup Positive (BNSP) AUC. \subparagraph{The Sub. AUC} measures the AUROC for each identity using toxic and normal posts from the test set that mention the identity under consideration. A higher value means that the model is better at separating toxic from normal posts that mention the identity. \subparagraph{The BPSN AUC} measures the AUROC for each identity, using normal posts that mention the identity and toxic posts that do not mention the identity under consideration. A higher value means that a model is less likely to confuse the normal post that mentions the community with a toxic post that does not. \subparagraph{The BNSP AUC} measures the AUROC for each identity using toxic posts that mention the identity and normal posts that do not mention the identity under consideration from the test set. A higher value means that the model is less likely to confuse a toxic post that mentions the community with a normal post without one.\\ To combine these metrics across identities, we used the generalized mean (GM) or power mean with exponent $p$, which was already used by the Jigsaw/Conversation AI Team during a Kaggle competition\foottotoref{civil-comments-data}. 
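The generalized mean used to aggregate the per-identity AUCs can be sketched as follows; the Kaggle metric used a negative exponent ($p = -5$), which pulls the aggregate towards the worst-performing identity subgroup.

```python
import numpy as np

# Sketch of the generalized (power) mean used for the GMB bias metrics:
#   M_p(m) = ((1/N) * sum_i m_i^p) ** (1/p)
# With a negative exponent (the Kaggle competition metric used p = -5),
# a single poorly served identity subgroup drags the aggregate score down.

def generalized_mean(aucs, p=-5.0):
    aucs = np.asarray(aucs, dtype=float)
    return np.mean(aucs ** p) ** (1.0 / p)

balanced = generalized_mean([0.90, 0.90, 0.90])  # equals 0.90
skewed = generalized_mean([0.95, 0.95, 0.60])    # well below the arithmetic mean
```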
So, we report the following three bias metrics for our comparison: \begin{itemize} \item \textbf{GMB-Subgroup-AUC} is the GM for the Subgroup AUC \item \textbf{GMB-BPSN-AUC} is the GM of the BPSN AUC \item \textbf{GMB-BNSP-AUC} is the GM of the BNSP AUC \end{itemize} We restrict the evaluation to the test set only. By having this restriction, we can evaluate models in terms of bias reduction. Only identities with more than 500 examples in the test dataset will be included in the evaluation calculation. \section{Results} \label{sec:result} \begin{table*}[htb] \resizebox{1.0\linewidth}{!}{ \begin{tabular}{lllllllllll} \toprule & & & \multicolumn{5}{c}{Performance} & \multicolumn{3}{c}{Bias} \\ \cmidrule(lr){4-8} \cmidrule(lr){9-11} & & & AUROC & Macro F1 & Micro F1 & Precision & Recall & GMB Sub. & GMB BPSN & GMB BNSP \\ Model type & Id & Model name & & & & & & & & \\ \midrule \multirow[c]{9}{*}{BERT} & 0 & AlBERT & {\cellcolor[HTML]{F1EBF5}} \color[HTML]{000000} 0.9790 & {\cellcolor[HTML]{03517E}} \color[HTML]{F1F1F1} 0.3463 & {\cellcolor[HTML]{045E93}} \color[HTML]{F1F1F1} 0.4786 & {\cellcolor[HTML]{045687}} \color[HTML]{F1F1F1} 0.3247 & {\cellcolor[HTML]{F2ECF5}} \color[HTML]{000000} 0.9104 & {\cellcolor[HTML]{DCDAEB}} \color[HTML]{000000} 0.8674 & {\cellcolor[HTML]{FEF6FA}} \color[HTML]{000000} 0.8998 & {\cellcolor[HTML]{2685BB}} \color[HTML]{F1F1F1} 0.9513 \\ & 1 & BERTweet & {\cellcolor[HTML]{FEF6FB}} \color[HTML]{000000} 0.9816 & {\cellcolor[HTML]{0567A1}} \color[HTML]{F1F1F1} 0.3616 & {\cellcolor[HTML]{056FAF}} \color[HTML]{F1F1F1} 0.4928 & {\cellcolor[HTML]{04639B}} \color[HTML]{F1F1F1} 0.3363 & {\cellcolor[HTML]{FAF3F9}} \color[HTML]{000000} 0.9216 & {\cellcolor[HTML]{F9F2F8}} \color[HTML]{000000} 0.8780 & {\cellcolor[HTML]{F7F0F7}} \color[HTML]{000000} 0.8945 & {\cellcolor[HTML]{D9D8EA}} \color[HTML]{000000} 0.9603 \\ & 2 & DistilBERT & {\cellcolor[HTML]{F8F1F8}} \color[HTML]{000000} 0.9804 & {\cellcolor[HTML]{3B92C1}} \color[HTML]{F1F1F1} 0.3879 & 
{\cellcolor[HTML]{3B92C1}} \color[HTML]{F1F1F1} 0.5115 & {\cellcolor[HTML]{167BB6}} \color[HTML]{F1F1F1} 0.3572 & {\cellcolor[HTML]{ECE7F2}} \color[HTML]{000000} 0.9001 & {\cellcolor[HTML]{F5EEF6}} \color[HTML]{000000} 0.8762 & {\cellcolor[HTML]{D3D4E7}} \color[HTML]{000000} 0.8740 & {\cellcolor[HTML]{FFF7FB}} \color[HTML]{000000} \bfseries 0.9644 \\ & 3 & HateBERT & {\cellcolor[HTML]{F2ECF5}} \color[HTML]{000000} 0.9791 & {\cellcolor[HTML]{056FAE}} \color[HTML]{F1F1F1} 0.3679 & {\cellcolor[HTML]{05659F}} \color[HTML]{F1F1F1} 0.4844 & {\cellcolor[HTML]{045C90}} \color[HTML]{F1F1F1} 0.3292 & {\cellcolor[HTML]{F7F0F7}} \color[HTML]{000000} 0.9165 & {\cellcolor[HTML]{F1EBF4}} \color[HTML]{000000} 0.8744 & {\cellcolor[HTML]{F2ECF5}} \color[HTML]{000000} 0.8915 & {\cellcolor[HTML]{C6CCE3}} \color[HTML]{000000} 0.9589 \\ & 4 & RoBERTa BCE & {\cellcolor[HTML]{FDF5FA}} \color[HTML]{000000} 0.9813 & {\cellcolor[HTML]{FFF7FB}} \color[HTML]{000000} \bfseries 0.4749 & {\cellcolor[HTML]{8EB3D5}} \color[HTML]{000000} 0.5359 & {\cellcolor[HTML]{589EC8}} \color[HTML]{F1F1F1} 0.3836 & {\cellcolor[HTML]{E0DEED}} \color[HTML]{000000} 0.8891 & {\cellcolor[HTML]{FEF6FA}} \color[HTML]{000000} 0.8800 & {\cellcolor[HTML]{F1EBF4}} \color[HTML]{000000} 0.8901 & {\cellcolor[HTML]{E8E4F0}} \color[HTML]{000000} 0.9616 \\ & 5 & RoBERTa FL & {\cellcolor[HTML]{FFF7FB}} \color[HTML]{000000} \bfseries 0.9818 & {\cellcolor[HTML]{F4EEF6}} \color[HTML]{000000} 0.4648 & {\cellcolor[HTML]{BBC7E0}} \color[HTML]{000000} 0.5524 & {\cellcolor[HTML]{86B0D3}} \color[HTML]{000000} 0.4017 & {\cellcolor[HTML]{DBDAEB}} \color[HTML]{000000} 0.8839 & {\cellcolor[HTML]{FFF7FB}} \color[HTML]{000000} \bfseries 0.8807 & {\cellcolor[HTML]{FFF7FB}} \color[HTML]{000000} \bfseries 0.9010 & {\cellcolor[HTML]{D2D3E7}} \color[HTML]{000000} 0.9597 \\ & 6 & RoBERTa pwBCE & {\cellcolor[HTML]{FBF3F9}} \color[HTML]{000000} 0.9809 & {\cellcolor[HTML]{045E93}} \color[HTML]{F1F1F1} 0.3541 & {\cellcolor[HTML]{05659F}} 
\color[HTML]{F1F1F1} 0.4845 & {\cellcolor[HTML]{045B8F}} \color[HTML]{F1F1F1} 0.3284 & {\cellcolor[HTML]{FBF4F9}} \color[HTML]{000000} 0.9232 & {\cellcolor[HTML]{F0EAF4}} \color[HTML]{000000} 0.8741 & {\cellcolor[HTML]{FBF4F9}} \color[HTML]{000000} 0.8982 & {\cellcolor[HTML]{ADC1DD}} \color[HTML]{000000} 0.9575 \\ & 7 & RoBERTa pwFL & {\cellcolor[HTML]{FBF4F9}} \color[HTML]{000000} 0.9809 & {\cellcolor[HTML]{0566A0}} \color[HTML]{F1F1F1} 0.3612 & {\cellcolor[HTML]{0567A2}} \color[HTML]{F1F1F1} 0.4861 & {\cellcolor[HTML]{045D92}} \color[HTML]{F1F1F1} 0.3297 & {\cellcolor[HTML]{FDF5FA}} \color[HTML]{000000} 0.9254 & {\cellcolor[HTML]{EFE9F3}} \color[HTML]{000000} 0.8734 & {\cellcolor[HTML]{F3EDF5}} \color[HTML]{000000} 0.8920 & {\cellcolor[HTML]{D2D3E7}} \color[HTML]{000000} 0.9597 \\ & 8 & XLM RoBERTa & {\cellcolor[HTML]{F2ECF5}} \color[HTML]{000000} 0.9790 & {\cellcolor[HTML]{023D60}} \color[HTML]{F1F1F1} 0.3368 & {\cellcolor[HTML]{034A74}} \color[HTML]{F1F1F1} 0.4680 & {\cellcolor[HTML]{03456C}} \color[HTML]{F1F1F1} 0.3135 & {\cellcolor[HTML]{FBF4F9}} \color[HTML]{000000} 0.9230 & {\cellcolor[HTML]{E1DFED}} \color[HTML]{000000} 0.8689 & {\cellcolor[HTML]{EBE6F2}} \color[HTML]{000000} 0.8859 & {\cellcolor[HTML]{B9C6E0}} \color[HTML]{000000} 0.9581 \\ \cline{1-11} CCT & 9 & CCT & {\cellcolor[HTML]{023858}} \color[HTML]{F1F1F1} 0.9505 & {\cellcolor[HTML]{034973}} \color[HTML]{F1F1F1} 0.3428 & {\cellcolor[HTML]{0569A4}} \color[HTML]{F1F1F1} 0.4874 & {\cellcolor[HTML]{0872B1}} \color[HTML]{F1F1F1} 0.3507 & {\cellcolor[HTML]{4A98C5}} \color[HTML]{F1F1F1} 0.7983 & {\cellcolor[HTML]{023858}} \color[HTML]{F1F1F1} 0.8133 & {\cellcolor[HTML]{3991C1}} \color[HTML]{F1F1F1} 0.8307 & {\cellcolor[HTML]{023858}} \color[HTML]{F1F1F1} 0.9447 \\ \cline{1-11} \multirow[c]{3}{*}{CNN} & 10 & Freeze GloVe ResNet44 & {\cellcolor[HTML]{034A74}} \color[HTML]{F1F1F1} 0.9526 & {\cellcolor[HTML]{9EBAD9}} \color[HTML]{000000} 0.4189 & {\cellcolor[HTML]{CACEE5}} \color[HTML]{000000} 0.5591 & 
{\cellcolor[HTML]{EEE8F3}} \color[HTML]{000000} 0.4631 & {\cellcolor[HTML]{023858}} \color[HTML]{F1F1F1} 0.7053 & {\cellcolor[HTML]{045A8D}} \color[HTML]{F1F1F1} 0.8219 & {\cellcolor[HTML]{023858}} \color[HTML]{F1F1F1} 0.7876 & {\cellcolor[HTML]{0A73B2}} \color[HTML]{F1F1F1} 0.9499 \\ & 11 & Unfreeze GloVe ResNet44 & {\cellcolor[HTML]{71A8CE}} \color[HTML]{F1F1F1} 0.9660 & {\cellcolor[HTML]{EBE6F2}} \color[HTML]{000000} 0.4566 & {\cellcolor[HTML]{FFF7FB}} \color[HTML]{000000} \bfseries 0.5958 & {\cellcolor[HTML]{FFF7FB}} \color[HTML]{000000} \bfseries 0.4835 & {\cellcolor[HTML]{1E80B8}} \color[HTML]{F1F1F1} 0.7759 & {\cellcolor[HTML]{509AC6}} \color[HTML]{F1F1F1} 0.8421 & {\cellcolor[HTML]{86B0D3}} \color[HTML]{000000} 0.8493 & {\cellcolor[HTML]{65A3CB}} \color[HTML]{F1F1F1} 0.9540 \\ & 12 & Unfreeze GloVe ResNet56 & {\cellcolor[HTML]{509AC6}} \color[HTML]{F1F1F1} 0.9639 & {\cellcolor[HTML]{1E80B8}} \color[HTML]{F1F1F1} 0.3778 & {\cellcolor[HTML]{358FC0}} \color[HTML]{F1F1F1} 0.5098 & {\cellcolor[HTML]{1C7FB8}} \color[HTML]{F1F1F1} 0.3604 & {\cellcolor[HTML]{CDD0E5}} \color[HTML]{000000} 0.8707 & {\cellcolor[HTML]{7EADD1}} \color[HTML]{F1F1F1} 0.8487 & {\cellcolor[HTML]{75A9CF}} \color[HTML]{F1F1F1} 0.8445 & {\cellcolor[HTML]{B4C4DF}} \color[HTML]{000000} 0.9579 \\ \cline{1-11} \multirow[c]{2}{*}{RNN} & 13 & BiGRU & {\cellcolor[HTML]{D7D6E9}} \color[HTML]{000000} 0.9748 & {\cellcolor[HTML]{045687}} \color[HTML]{F1F1F1} 0.3492 & {\cellcolor[HTML]{045A8D}} \color[HTML]{F1F1F1} 0.4762 & {\cellcolor[HTML]{045483}} \color[HTML]{F1F1F1} 0.3232 & {\cellcolor[HTML]{EEE9F3}} \color[HTML]{000000} 0.9036 & {\cellcolor[HTML]{AFC1DD}} \color[HTML]{000000} 0.8573 & {\cellcolor[HTML]{B0C2DE}} \color[HTML]{000000} 0.8616 & {\cellcolor[HTML]{D6D6E9}} \color[HTML]{000000} 0.9600 \\ & 14 & BiLSTM & {\cellcolor[HTML]{DAD9EA}} \color[HTML]{000000} 0.9754 & {\cellcolor[HTML]{0569A5}} \color[HTML]{F1F1F1} 0.3638 & {\cellcolor[HTML]{328DBF}} \color[HTML]{F1F1F1} 0.5089 & 
{\cellcolor[HTML]{197DB7}} \color[HTML]{F1F1F1} 0.3586 & {\cellcolor[HTML]{D3D4E7}} \color[HTML]{000000} 0.8761 & {\cellcolor[HTML]{CED0E6}} \color[HTML]{000000} 0.8636 & {\cellcolor[HTML]{D7D6E9}} \color[HTML]{000000} 0.8758 & {\cellcolor[HTML]{A4BCDA}} \color[HTML]{000000} 0.9569 \\ \cline{1-11} XLNet & 15 & XLNet & {\cellcolor[HTML]{F7F0F7}} \color[HTML]{000000} 0.9800 & {\cellcolor[HTML]{023858}} \color[HTML]{F1F1F1} 0.3336 & {\cellcolor[HTML]{023858}} \color[HTML]{F1F1F1} 0.4586 & {\cellcolor[HTML]{023858}} \color[HTML]{F1F1F1} 0.3045 & {\cellcolor[HTML]{FFF7FB}} \color[HTML]{000000} \bfseries 0.9287 & {\cellcolor[HTML]{F0EAF4}} \color[HTML]{000000} 0.8738 & {\cellcolor[HTML]{E6E2EF}} \color[HTML]{000000} 0.8834 & {\cellcolor[HTML]{D2D3E7}} \color[HTML]{000000} 0.9597 \\ \bottomrule \end{tabular} } \caption{Model performance results.} \label{tab:perfbias} \end{table*} \begin{figure}[ht] \centering \includegraphics[width=.6\linewidth]{images/perf_and_params_per_model_df.pdf} \caption{Model performance, depending on the inference time per batch and the number of trainable parameters. The number associated with each point corresponds to the model id in Table \ref{tab:perfbias}. All models have a batch of 32 samples, except CCT, which uses a batch of 8.} \label{graph:perftime} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{images/identity_aur_per_sub_selected_model_df.pdf} \caption{Community-wise results for each bias-metrics on the toxicity class. Only the most relevant models are shown here for the sake of readability. Thus, we have kept only the BERT, RNN, and CNN models with the best performance on AUROC or on the AUC bias metrics.} \label{graph:perfidentity} \end{figure} \subsection{Performances} According to Figure \ref{graph:perftime} and Table \ref{tab:perfbias}, in general on the AUROC metric, BERT, RNN and XLNet have better scores than the others. 
Since the comments are short on average (27 tokens), RNNs keep a large part of the message in memory and can make a good prediction. We would probably have seen a larger performance gap between BERT and RNN models if the comments had been longer. The RoBERTa with Focal Loss model offers the best performance on both the bias metrics and AUROC. All BERT variants, regardless of size and optimizations, have similar performance: a DistilBERT or an AlBERT is as good as a HateBERT, although RoBERTa with the Focal Loss gives less biased results on the identity groups than DistilBERT. XLM RoBERTa does not stand out from the others: the model has been pre-trained on other languages, but this gives no advantage in detecting hateful comments. Across the models, Recall is often very high while Precision remains low. In other words, the models are sensitive to hateful comments and detect many more true positives, but they also generate more false positives. Within the BERT family, if we compare RoBERTa trained with the different losses (BCE, pwBCE, FL, pwFL), the final scores are relatively close. None of the tested training losses improves learning compared to a simple BCE. We even note that the positive-weight variants (pwBCE and pwFL) obtain worse F1 scores than BCE or FL: their Recall is 0.04 higher, but their Precision is about 0.1 smaller. The Bi-GRU and Bi-LSTM have equivalent performance in terms of AUROC and F1 scores. Finally, all models have slightly more difficulty classifying toxic comments and insults than explicit sexual comments. \subsection{Bias} Overall, the results in Table \ref{tab:perfbias} show that the models have a GMB BNSP greater than 0.95. In other words, the models have no difficulty differentiating between hateful comments targeting a community and generic comments (without a particular identity target).
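The GM-based aggregation behind the GMB metrics reported in Table \ref{tab:perfbias} can be sketched as follows (a minimal sketch; the function name is ours, and we assume a plain geometric mean over the per-identity bias AUCs):

```python
import numpy as np

def gmb(identity_aucs):
    """Geometric mean of per-identity bias AUCs (GMB).

    identity_aucs: one AUC value per identity group, e.g. the
    Subgroup, BPSN, or BNSP AUC of each community.
    """
    aucs = np.asarray(identity_aucs, dtype=float)
    # Geometric mean computed in log space for numerical stability
    return float(np.exp(np.log(aucs).mean()))
```

A single low per-identity AUC drags the GMB down much more strongly than an arithmetic mean would, which is why it is used as a bias summary.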
On the contrary, we observe that the scores for GMB BPSN and GMB Sub are lower than those for GMB BNSP, often below 0.90. Thus, all the models present an association bias between identities and insults: they tend to flag positive comments about a community as insults. This bias, however, depends on the model type. From Table \ref{tab:perfbias}, we see that BERT and RNN models are generally less sensitive to it, with slightly higher GMB BPSN and GMB Sub scores. In contrast, convolution-based models such as CNN and CCT, which seek to capture patterns with convolutions, tend to be more sensitive to this association bias. From Figure \ref{graph:perfidentity}, all models score worse on average on the BPSN and Sub. AUC for the \texttt{black}, \texttt{homosexual}, \texttt{muslim}, and \texttt{white} communities than for the other communities. For BPSN, this means that the models have difficulty differentiating between insults that do not target an identity and healthy comments about a community. These models will therefore tend to show more association bias and detect healthy comments about these communities as toxic. For the Sub. AUC, this means that when a comment targets an identity such as \texttt{black}, \texttt{gay}, \texttt{muslim}, or \texttt{white}, the models have more difficulty distinguishing between hateful and non-hateful comments. Looking in more detail at each model and each identity, we again notice that \textit{RoBERTa with FL}, \textit{BiLSTM}, and \textit{XLNet} are less affected by this bias than \textit{Unfreeze GloVe ResNet56} and \textit{CCT}. There is even a difference of 0.05 on the Sub. AUC for comments targeting communities such as \texttt{jewish} or \texttt{muslim} between \textit{RoBERTa with FL} and \textit{Unfreeze GloVe ResNet56}. Similarly, there is a difference of up to 0.1 on the BPSN AUC for the \texttt{black}, \texttt{homosexual}, and \texttt{muslim} communities.
This shows that on these identities, which are particularly affected by hateful comments, the BERT, RNN, and XLNet models are less subject to association bias than the CNN and CCT. \subsection{Inference time} From Figure \ref{graph:perftime}, with performances quite close to the BERT-type models, RNNs have an inference time per batch 5 to 8 times smaller than BERT or XLNet. DistilBERT has the smallest inference time among the BERT models tested in our study, yet it is still 2 times that of Bi-GRU and Bi-LSTM, even though the AUROC difference is only 0.005. The CNN ends up with an inference time shorter than most BERT models and longer than the slowest RNN tested, but with much lower performance than the RNNs or BERT; for the same inference time per batch, DistilBERT does better. We also notice that freezing the embedding does not decrease the inference time, but does decrease the model's performance. Finally, the CCT offers disappointing performance with a very long inference time per batch, especially considering that we reduced the batch size from 32 to 8 for this particular model. \section{Conclusion} \label{sec:conclusion} All BERT models have similar performance regardless of the size, optimizations, or language used to pre-train them. More broadly, BERT, RNN, and XLNet have almost similar performance, and RNNs are much faster at inference than any of the BERT models tested. RNNs therefore remain a good compromise between performance and inference time for multi-label detection of hateful comments. The RoBERTa with Focal Loss model offers the best performance on the bias metrics and AUROC, while DistilBERT combines good classification performance with a low inference time per batch. Although all models are affected by the bias of associating identities with toxicity, BERT, RNN, and XLNet are less sensitive to it than CNN and CCT. \bibliographystyle{rnti}
\section{Experimental} Polycrystalline samples of (Fe$_{1-x}$Co$_{x}$)$_{3}$Mo$_{3}$N, where $0 \le x \le 1$, were synthesized by a solid-state reaction of a mixture of transition metal oxides in a H$_{2}$-N$_{2}$ mixed gas stream \cite{Prior}. Fe$_{2}$O$_{3}$, Co$_{3}$O$_{4}$, and MoO$_{3}$ were mixed in a molar ratio of $(1-x)/2:x/3:1$ and placed in a silica tube; the mixture was then fired in a gas stream of N$_{2}$ containing 10\% H$_{2}$ at 700$^{\circ}$C for 48 h, followed by heat treatment at 1000$^{\circ}$C for 48 h. In order to homogenize the samples, the heat treatment, interspersed with intermediate grinding, was repeated at least four times, resulting in a systematic variation in the magnetism against $x$, which is not in accordance with the literature data \cite{Prior2}. The magnetization $M$ at low fields was measured using a SQUID magnetometer, MPMS (Quantum Design), installed at the LTM center, Kyoto University. The high-field magnetization for field strengths up to 54 T was measured for pure Fe$_{3}$Mo$_{3}$N at 4.2--100 K using a pulse magnet installed at ISSP. Polycrystalline powder was filled into a cylindrical polyethylene tube that measured 6 mm in length and 2.5 mm in diameter. For the neutron scattering experiments, approximately 20 g of the powder was packed into a vanadium tube with a diameter of 20 mm. A triple-axis spectrometer, ISSP-HER, installed at the C1-1 cold neutron guide of the research reactor JRR-3M at the Japan Atomic Energy Agency (JAEA), Tokai, was employed. All measurements were performed at a fixed final wave vector $k_\textrm{f} = 1.45$ \AA$^{-1}$, with a collimation sequence of guide-$40^\prime$-open-open, and with a horizontal focusing analyzer. The energy resolution was estimated to be 0.22 meV at zero energy transfer. Figure \ref{fig2} shows the results of the high-field magnetization measurements.
The signal from the pulse magnet corresponding to $dM/dH$ (at 4.2 K) is plotted against the external magnetic field $H$ in the inset of Fig.\ \ref{fig2}(a). $dM/dH$ shows diverging behavior at $\sim$14 T, corresponding to a sharp jump in the $M$-$H$ curve, which was obtained by integrating $dM/dH$. The metamagnetic fields $H_\mathrm{C}$ are different for the field-increasing and field-decreasing processes because of the first-order transition, although the difference (hysteresis width) $\Delta H$ is much smaller (0.56 T at 4.2 K) than that for a typical IEM \cite{IEMreview}. $\Delta H$ decreases with $T$ and vanishes at $T_\mathrm{CM} \simeq 40$ K (Fig.\ \ref{fig2}(c)), which is the temperature at which the first-order IEM disappears, and corresponds to the substantial magnetic interaction in this compound. The increase in $H_\mathrm{C}$ is roughly proportional to $T^2$ up to $T_\mathrm{CM}$ (Fig.\ \ref{fig2}(b), closed circles), as commonly observed in IEMs \cite{IEMreview}. In the field-increasing process, a small step was observed at a field (defined as $H_\mathrm{E}$) slightly greater than $H_\mathrm{C}$. $H_\mathrm{E}$ decreases gradually with $T$ and disappears at $\sim$20 K (Fig.\ \ref{fig2}(b), open circles). The magnetization jump is sharp, unlike those observed in other IEMs \cite{IEMreview}, and is rather similar to that of a spin flip in a localized spin system. Successive magnetic transitions in a magnetic field are frequently observed in local spin systems; however, they are unlikely to be part of a conventional IEM with uniform spin polarization, thus suggesting the presence of competing interactions or multiple order parameters. The effect of impurity doping was investigated by replacing Fe with Co. We have successfully synthesized a solid solution between Fe$_3$Mo$_3$N and the Pauli paramagnetic Co$_3$Mo$_3$N; the lattice parameter was found to vary linearly with the Co fraction $x$ (Vegard's law), in accordance with the literature data \cite{Prior2}.
As a typical example, the magnetic data of (Fe$_{0.8}$Co$_{0.2}$)$_3$Mo$_3$N are presented in Figs.\ \ref{fig3}(a) and (b). At high $T$, $\chi$ follows the CW law well ($p_\mathrm{eff} = 2.55$ $\mu _\textrm{B}$/Fe, $\theta = 17.1$ K). The Arrott plot ($M^2$ against $H/M$ plot) shows good linearity and intersects the vertical axis below 22 K, indicating the occurrence of spontaneous magnetization $p_\mathrm{s}$ (=$0.28\:\mu _\textrm{B}$ at 5 K) below the Curie temperature $T_\mathrm{C} = 22$ K, despite the fact that both end compounds have nonmagnetic ground states. The value of $p_\mathrm{eff}/p_\mathrm{s}$ is much larger than unity, as is characteristic of a typical weak itinerant electron ferromagnet. Similar analyses were carried out for other specimens to enable the construction of the magnetic phase diagram shown in Fig.\ \ref{fig3}(c) (detailed magnetic data will be published elsewhere). The characteristics of this phase diagram are as follows. A ferromagnetic phase appears upon slight doping ($x < 0.05$). With an increase in $x$, $T_\mathrm{C}$ increases rapidly up to a maximum at $x \sim 0.2$ and decreases almost linearly after the maximum up to $x \sim 0.7$. It should be noted that the $T_\mathrm{CM}$ of pure Fe$_3$Mo$_3$N is much higher than the maximum value of $T_\mathrm{C}$ and is smoothly connected to the variation in $T_\mathrm{C}$ for $x \ge 0.2$. In other words, long-range ferromagnetism appears to be suppressed particularly at $x \sim 0$, in spite of strong magnetic correlation. To obtain information on the magnetic interactions in Fe$_3$Mo$_3$N, inelastic neutron scattering experiments were performed with the powder; unfortunately, single crystals have not been obtained. Figure \ref{fig4} shows the wave number ($K$) dependences of quasi-elastic neutron scattering measured at 5.5 K.
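The Arrott analysis used above can be sketched as follows on synthetic data (a minimal sketch; the function names and the synthetic isotherm are ours). A linear fit of $M^2$ against $H/M$ yields an intercept $a$; below $T_\mathrm{C}$, $a > 0$ and the spontaneous magnetization is $\sqrt{a}$:

```python
import numpy as np

def arrott_fit(H, M):
    """Linear fit of the Arrott plot M^2 = a + b*(H/M).

    Returns (a, b); a > 0 signals a spontaneous magnetization
    sqrt(a), i.e. the isotherm lies below T_C.
    """
    b, a = np.polyfit(H / M, M ** 2, 1)
    return a, b

def spontaneous_moment(H, M):
    a, _ = arrott_fit(H, M)
    return np.sqrt(a) if a > 0 else 0.0

# Synthetic isotherm below T_C: M^2 = 0.09 + 2.0 * (H/M)
M = np.linspace(0.35, 1.0, 20)
H = M * (M ** 2 - 0.09) / 2.0
print(spontaneous_moment(H, M))  # recovers ~0.3
```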
Relatively strong scattering was observed, centered at finite but relatively small $K$, which roughly corresponds to the mean distances between Fe atoms, suggesting the presence of AF spin correlation together with the F correlation that was detected from macroscopic measurements. The $E$-scan spectra, shown in the inset, were fitted with a Lorentzian quasi-elastic scattering function. The spectral widths, i.e., characteristic energies of spin fluctuations, are obtained as 0.4, 0.3 and 0.9 meV for $K = 1.22$, 1.78, and 2.83 in the reciprocal lattice unit, respectively. Let us understand the geometrical characteristics of the SQ lattice on the basis of the Heisenberg model. Although Fe$_3$Mo$_3$N appears to behave like an itinerant electron magnet, we believe that its spin density is relatively localized at the atomic site, as deduced from the strong electron correlation. The SQ lattice consists of $16d$ and $32e$ sites arranged in the $Fd\bar{3}m$ space group. In actual $\eta$-carbide-type compounds, the nearest-neighbor (nn) and next-nearest-neighbor (nnn) distances are close and become exactly the same when $z = 0.3$, where $z$ is the coordinate of the $32e$ site $(z, z, z)$. Experimentally, $z = 0.2937$ \cite{Bem} and similar values \cite{Alconchel} have been reported for Fe$_3$Mo$_3$N, where the nn $16d$-$32e$ distance is 2.387 \AA\ and the nnn $32e$-$32e$ distance is 2.549 \AA\ at room temperature. Here, we consider only these nn and nnn interactions, namely, the $J_1$ and $J_2$ interactions (see Fig.\ \ref{fig1}), respectively, and neglect the interactions with neighbours located farther away. The signs of $J_1$ and $J_2$ are likely to vary because, empirically, both the F and the AF states appear in doped Fe-based $\eta$-carbide-type compounds \cite{Sviridov}. First, let us understand the nature of an isolated SQ. In an extreme case, with $J_1 < 0$ and $J_2 = 0$, the SQ is not frustrated because the AF spins can be alternately assigned to the eight atoms.
Needless to say, the same is the case for $J_1 >0$ and $J_2 = 0$. On the other hand, when $J_1 = 0$ and $J_2 < 0$, the isolated $32e$ tetrahedron is a typical frustrated unit. These facts suggest that the SQ is frustrated only when $J_2$ is negative and dominant. It should be noted that this is true not only when $J_1 < 0$ but also when $J_1 > 0$. To verify whether this is true for an infinite SQ lattice, we calculated the dispersions of the Fourier transform of the exchange integral matrix $J(q)$ among the 12 Fe atoms in a unit cell by assuming various $J_1/J_2$ ratios. Two typical results are shown in Fig.\ \ref{Jq}. A flat dispersion is found along the highest branch when $J_1$ is not dominant (Fig.\ \ref{Jq}(a)), suggesting degeneracy of the ground state, i.e., the presence of geometric frustration \cite{Harris}. On the other hand, $J(q)$ takes a maximum at the $\Gamma$ point when the magnitude of $J_1$ is sufficiently larger than that of $J_2$ (Fig.\ \ref{Jq}(b)), indicating that $J_1$ tends to release the frustration. We confirmed that similar dispersions were obtained independent of the sign of $J_1$. Thus, the SQ lattice is a unique frustrated system in which frustration is controlled by the $|J_1|/|J_2|$ ratio but not affected by the sign of $J_1$. The anomalous magnetism of Fe$_3$Mo$_3$N can be easily understood on the basis of these characteristics. The experimental results indicate the coexistence of the F and the AF interactions, which are formally denoted as $J_1$ and $J_2$, respectively, as one of the simplest possibilities. The suppression of long-range order in pure Fe$_3$Mo$_3$N is due to the frustration in the SQ lattice (the $J_2$ dominant case). In other words, the frustrated AF interaction suppresses the F correlation to its marginal point. Metamagnetism can be interpreted as the transition to the $J_1$ dominant state, assisted by the external field. The onset of static magnetism by slight doping is one of the common features of a frustrated system.
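The $J(q)$ computation described above can be sketched generically (a minimal sketch; the actual bond list of the 12 Fe atoms per unit cell is not reproduced here, and the function names are ours). The exchange matrix is assembled from the couplings and diagonalized at each $q$; a flat highest branch signals geometric frustration:

```python
import numpy as np

def J_of_q(q, n_sites, bonds):
    """Fourier transform J(q) of the exchange couplings.

    bonds: list of (i, j, d, J) -- coupling J between sublattice
    site i in the home cell and sublattice site j displaced by the
    vector d (basis offset plus lattice translation).
    """
    Jq = np.zeros((n_sites, n_sites), dtype=complex)
    for i, j, d, J in bonds:
        phase = J * np.exp(1j * np.dot(q, d))
        Jq[i, j] += phase
        Jq[j, i] += np.conj(phase)  # keep J(q) Hermitian
    return Jq

def branches(q, n_sites, bonds):
    # Eigenvalues of J(q); the branch maximal over q selects the ordering
    return np.linalg.eigvalsh(J_of_q(q, n_sites, bonds))
```

For a single-site chain with one bond per cell this reduces to $J(q) = 2J\cos(qa)$, which is maximal at the $\Gamma$ point for $J > 0$, i.e., an unfrustrated F coupling.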
It is not surprising that the F order is stabilized when the F interaction coexists with the AF interaction. We have tentatively calculated the spin correlation function $S(K, E)$ for the SQ lattice, assuming (1) $J_1 > 0$ and dominant $J_2 < 0$, (2) two up and two down spins in the $32e$ tetrahedron, (3) an up spin at a $16d$ site when the neighboring $32e$ spins are two up and one down, and vice versa, (4) no correlation between different SQs, and (5) the magnetic form factor of Fe$^{3+}$. Some of these assumptions may not be valid for the actual case, but have been made for the sake of simplicity. $S(K, E)$ thus calculated and, furthermore, averaged over equal $K$ is illustrated by the solid curve in Fig.\ \ref{fig4}. Although $S(K, E)$ should ideally be compared with the $E$-averaged values, there is reasonable agreement between the experimental results and the calculations carried out using our model. Although we have applied the localized moment scheme here, we have to reconcile the Heisenberg frustration and the electron itinerancy in further studies. In summary, we found a steep metamagnetic transition in the $\eta$-carbide-type Fe-based compound Fe$_3$Mo$_3$N from the nonmagnetic NFL near the F-QCP to a field-induced F state, and we established a magnetic phase diagram of the (Fe$_{1-x}$Co$_{x}$)$_{3}$Mo$_{3}$N system, where the F order is stabilized by a slight doping of the nonmagnetic element. Neutron scattering measurements suggested that the AF and the F correlations coexist in Fe$_3$Mo$_3$N. We proposed that the SQ lattice is a new geometrically frustrated system, in which the F and the AF interactions compete to select frustrated and non-frustrated states. We applied this model to explain the suppression of long-range order in pure Fe$_3$Mo$_3$N. In other words, the emergence of the F-QCP is interpreted as being the result of frustration.
Although a number of $\eta$-carbide-type transition-metal compounds are known to exist, their quantum physical properties have been less studied. We believe that these compounds are good candidates for testing geometric frustration in metallic magnets and searching for new and exotic quantum phenomena. \section*{Acknowledgement} This study was supported by a Grant-in-Aid for Scientific Research on Priority Areas ``Novel States of Matter Induced by Frustration," a Grant-in-Aid for the Global COE Program, ``International Center for Integrated Research and Advanced Education in Materials Science," and a Grant-in-Aid for Young Scientists (B) 21760531 from the Ministry of Education, Culture, Sports, Science and Technology of Japan. \section*{Author Contributions} T.W., S.T., and Y. Tabata conceived the experiments and analyzed the data. K.S., A.K., and K.K. provided valuable help with the pulsed high-field experiments, and T.Y. and M.Y. with the neutron scattering experiments. Y. Takahashi performed the theoretical calculations and contributed to the discussion. W.T. and H.N. co-wrote the paper. H.N. led and promoted all the work. All authors discussed the results and commented on the manuscript. \section*{Competing financial interests} The authors declare no competing financial interests.
\section{Introduction} Selection of a suitable loss function is crucial in order to train a neural network for a desired task. For a given neural network architecture \cite{resnet,vgg} and optimization procedure \cite{kingma2014adam}, the profile of the loss function largely governs what is learnt by the neural network and how it generalizes on unseen data. The above statement applies well to the choice of a similarity metric in the learning process. Deep learning based approaches involve minimizing, in one way or another, a similarity metric between the ground truth and the predictions, although the way this is done may vary from task to task. For example, in visual perception tasks such as image classification \cite{deng2009imagenet,vgg} and image segmentation \cite{kumar2019semi, pspnet}, it is performed using cross entropy, whereas for other tasks such as forecasting \cite{oord2016wavenet}, image generation \cite{gan}, and depth estimation \cite{godard2017unsupervised}, it is performed using the $\mathcal{L}_2$ and $\mathcal{L}_1$ metrics. Learning algorithms based on generative adversaries \cite{gan} also depend largely on the choice of similarity metric. \par In the area of machine vision, there are certain tasks for which the ground truth cannot be obtained easily, primarily due to the very high cost of measurement devices or the unavailability of a labelling process. These tasks include depth estimation from images \cite{godard2017unsupervised} and visual / LiDAR odometry \cite{kumar2018real}. From the perspective of autonomous vehicles and robotics, the above tasks are undeniably important. For this reason, the development of unsupervised learning techniques for these tasks has recently gained attention \cite{godard2017unsupervised, zhan2018unsupervised, li2018undeepvo, almalioglu2019ganvo}. The methods proposed in this direction extensively use similarity metric minimization between various information sources.
\par From the above discussion, it is quite evident that the similarity metric plays an important role in the learning process. In general, $\mathcal{L}_1$ and $\mathcal{L}_2$ are the most preferred choices for this purpose. Despite their popularity, these losses do not provide the desired results in many cases, mainly because they are pointwise operators and do not account for statistical information while matching. For example, in images, an $\mathcal{L}_1/\mathcal{L}_2$ loss penalizes the neural network on a per-pixel basis; the statistical properties are thus left unaccounted for. To address this issue, the Structural Similarity (SSIM) index \cite{ssim} has recently become popular and is being used as an alternative to $\mathcal{L}_1$ and $\mathcal{L}_2$. The SSIM index is computed over a window instead of a pixel and is based on local statistics. In practice, the aforementioned losses are used in conjunction with each other, which increases the number of individual loss functions and the complexity of tuning the loss weights \cite{janai2018unsupervised}. \par Keeping in mind the above observations, in this work we explore the potential of $\mathcal{MI}$ \cite{kl,mi,entropy} for training deep neural networks in supervised or unsupervised settings. $\mathcal{MI}$ is essentially an information theoretic measure of the statistical independence of two random variables. An interesting property of $\mathcal{MI}$ is that it operates on probability distributions instead of the data directly. Therefore, $\mathcal{MI}$ does not depend on the signal type, i.e., images or time series, and has proved to be a powerful measure in many areas. For this reason, we consider $\mathcal{MI}$ as a potential alternative measure of similarity. Despite its diverse applications, the expression of $\mathcal{MI}$ is infeasible to use directly for training a neural network (Sec.~\ref{sec:mi}).
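The combined use of these losses, e.g., the SSIM--$\mathcal{L}_1$ mix popularized by \cite{godard2017unsupervised}, can be sketched as follows (a minimal sketch; the windowed SSIM is simplified to whole-image statistics, the weight $\alpha$ is commonly set to 0.85, and the function names are ours):

```python
import numpy as np

def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    # SSIM with global statistics instead of local windows (simplification)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def photometric_loss(pred, target, alpha=0.85):
    # Weighted mix of a structural (SSIM) and a pointwise (L1) term
    return alpha * (1.0 - ssim_global(pred, target)) / 2.0 \
        + (1.0 - alpha) * np.abs(pred - target).mean()
```

Note that the two terms must share a common weight $\alpha$, which is exactly the kind of hyperparameter tuning that the discussion above refers to.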
However, the interesting properties of $\mathcal{MI}$ encourage us to dive deep into the problem and lead us to the following contributions: \begin{itemize} \item Feasibility of an $\mathcal{MI}$ formulation for deep learning tasks. \item An $\mathcal{MI}$ inspired novel loss function $\mathcal{L}_{LMI}$. \item DeepMI: a fuzzy logic based framework to train DNNs using $\mathcal{L}_{LMI}$ or $\mathcal{L}_{MI}$. \end{itemize} In the next section, we discuss the related work. In Sec.~\ref{sec:mi}, we briefly review $\mathcal{MI}$, and in Sec.~\ref{sec:deepmi}, we discuss the limitations of the regular $\mathcal{MI}$ expression and develop $\mathcal{L}_{LMI}$ along with the gradient calculations required for backpropagation. In Sec.~\ref{sec:exp}, we experimentally verify the importance of DeepMI on a number of unsupervised learning tasks. Finally, Sec.~\ref{sec:conc} concludes the paper. \section{Related Work} The literature on Mutual Information is diverse and vast; we therefore limit our discussion to the most relevant works in this area. Mutual Information \cite{entropy,mi,kl} is a fundamental measure of information theory which quantifies the independence between random variables, and it has been widely used in a variety of applications. The works \cite{viola1997alignment,maes1997multimodality} are typical examples which exploit $\mathcal{MI}$ in order to align medical images. $\mathcal{MI}$ has also been successfully used in speech recognition \cite{bahl1986maximum} and in machine vision and robotic applications; \cite{pandey2012toward} is a typical example in the area of autonomous vehicles, registering 3D point clouds obtained by LiDARs. Apart from that, $\mathcal{MI}$ has widely been used in independent component analysis \cite{hyvarinen2000independent} and key feature selection \cite{kwak2002input,peng2005feature}. Given these applications in diverse areas, $\mathcal{MI}$ can be regarded as a pivotal measure.
\par The works \cite{viola1997alignment, maes1997multimodality, pandey2012toward} are non-parametric approaches which maximize $\mathcal{MI}$ to achieve the desired purpose. $\mathcal{MI}$ is computed on the distributions of the raw signals or of the extracted features; for example, \cite{viola1997alignment} uses image histograms, whereas \cite{pandey2012toward} uses the distributions of 3D points in a voxel. In these techniques, the feature extraction, which is handcrafted, is quite important. In the past decade, deep neural network architectures \cite{resnet, vgg} have proved to be excellent at learning high quality embeddings / features from the input data in an entirely unsupervised manner, which in turn are used for various tasks \cite{rcnn, fastrcnn, fasterrcnn, maskrcnn,ssd, pspnet,gan}. Therefore, we believe that bringing the deep learning framework together with $\mathcal{MI}$ can be extremely useful. However, so far there does not exist any unified standard framework for this purpose, mainly due to issues related to $\mathcal{MI}$ itself. For example, the distributions required for mutual information are not exact; instead, they are only approximations of the true distributions \cite{darbellay1999estimation}. Also, these approximations are not differentiable, which makes it difficult to include $\mathcal{MI}$ in deep learning methods \cite{belghazi2018mine}. Since affordable deep learning methods have emerged only recently, the learning process is mostly based on the traditional losses \cite{janai2018unsupervised,godard2017unsupervised, zhan2018unsupervised, li2018undeepvo, almalioglu2019ganvo}. A very recent work \cite{belghazi2018mine} proposes to use $\mathcal{MI}$ with neural networks.
However, that work mainly addresses estimating the distributions using neural networks and does not address the per-sample $\mathcal{MI}$ required for tasks such as \cite{viola1997alignment, maes1997multimodality, pandey2012toward}. \par The works \cite{godard2017unsupervised, zhan2018unsupervised, li2018undeepvo, almalioglu2019ganvo,bian2019unsupervised} in the area of depth estimation and visual odometry using deep neural networks in an unsupervised fashion are typical examples where $\mathcal{MI}$ can be employed. These works only utilize losses such as $\mathcal{L}_1, \mathcal{L}_2, \mathcal{L}_{SSIM}$. We believe that since $\mathcal{MI}$ has successfully been employed in diverse applications, it is worth developing a well defined and benchmarked $\mathcal{MI}$ based framework for deep learning. Based on this motivation, in this paper we explore the feasibility of using $\mathcal{MI}$ for robotics applications. Our intention is not to discard the existing losses, but to bring $\mathcal{MI}$ into deep learning and to establish a baseline that opens the doors to a new research direction. \section{Mutual Information ($\mathcal{MI}$)} \label{sec:mi} For any two random variables $X$ and $Y$, the measure $\mathcal{MI}$ is defined as% \begin{equation} \mathcal{I}(X;Y) = \mathcal{H}(X) + \mathcal{H}(Y) - \mathcal{H}(X,Y),~~~~~ \mathcal{I}(X;Y) \geq 0 \label{eq_mi} \end{equation} \begin{equation} \begin{aligned} \mathcal{H}(X) &= -\sum_{x\in X }p_X^x \log(p_X^x), \\ \mathcal{H}(Y) &= -\sum_{y\in Y }p_Y^y \log(p_Y^y), \\ \mathcal{H}(X,Y) &= -\sum_{x\in X }\sum_{y\in Y }p_{XY}^{xy} \log(p_{XY}^{xy}) \end{aligned} \label{eq_hxhyhxy} \end{equation} \par where $\mathcal{H}(X), \mathcal{H}(Y)$ represent the entropy \cite{entropy} of $X$ and the entropy of $Y$, whereas $\mathcal{H}(X,Y)$ represents the joint entropy of $X,Y$ when both variables are co-observed.
The symbols $p_X, p_Y$ and $p_{XY}$ represent the marginal of $X$, the marginal of $Y$, and the joint probability density function (pdf) of $X,Y$, respectively. \par Mutual information is an important quantity in information theory as it provides a measure of statistical independence between two random variables based on their distributions. In other words, $\mathcal{MI}$ quantifies how well one can explain a random variable $X$ after observing another random variable $Y$, or vice-versa. The expression of $\mathcal{MI}$ in Eq.~\ref{eq_mi} is defined in terms of entropies. For any random variable $X$, its entropy quantifies the uncertainty associated with its occurrence. \subsection{$\mathcal{MI}$ as a similarity metric} $\mathcal{MI}$ attains its global minimum when the two random variables under consideration are independent. Mathematically, ${\mathcal{MI}} \to 0$ when the variables are independent, whereas ${\mathcal{MI}\to \{\mathcal{H}(X) =\mathcal{H}(Y)\}}$ when both variables are statistically identical. This property of $\mathcal{MI}$ can readily be employed to quantify the similarity between two signals. However, while doing so, the definition of $\mathcal{MI}$ has to be interpreted in a quite different manner. \par To better understand this, consider the example of matching two images $X$ and $Y$. In order to measure the similarity between the images using $\mathcal{MI}$, the image itself cannot be treated as a random variable, because in that case $p_X, p_Y$ and $p_{XY}$ would be meaningless; in other words, per-sample $\mathcal{MI}$ is not defined. Hence, instead of an image, its pixel values are considered as a random variable over which the relevant distributions can be defined. The pixel values may refer to intensity, color, gradients, etc.
In order to compute the similarity score, the marginal and joint pdfs over the selected variable have to be computed first, and the similarity can then be obtained using Eq.~\ref{eq_mi}. While doing so, Eq.~\ref{eq_hxhyhxy} needs to be rewritten as given below. \begin{equation} \begin{aligned} \mathcal{H}(X) &= -\sum_{i=1}^{N}p_X^i \log(p_X^i), \\ \mathcal{H}(Y) &= -\sum_{i=1}^{N}p_Y^i \log(p_Y^i),\\ \mathcal{H}(X,Y) &= -\sum_{i=1}^{N}\sum_{j=1}^{N}p_{XY}^{ij} \log(p_{XY}^{ij}) \end{aligned} \label{eq_hxhyhxyps} \end{equation} where $N$ is the number of bins in the pdf. \par As another example, consider matching two time series signals using $\mathcal{MI}$. Following the above discussion, the two signal instances under consideration cannot themselves be treated as random variables; instead, their instantaneous values are considered as the random variable. It must be noticed that the choice of random variable depends on the application. \section{DeepMI Framework} \label{sec:deepmi} To understand the concept of DeepMI, consider the task of image reconstruction using autoencoders. In order to minimize the gap between an input image and the reconstructed image, the $\mathcal{MI}$ has to be maximized. The regular $\mathcal{MI}$ expression, however, cannot be used directly for this purpose. It is primarily because $\mathcal{MI}$ attains its global minimum when the two random variables are dissimilar, and our optimal point, which is $\sup~{\mathcal{MI}}$, is not well defined since $\mathcal{MI}$ is unbounded above. Although various normalized versions of $\mathcal{MI}$ have been proposed \cite{nmi} in the literature, the previously discussed issues remain intact. Hence, normalized $\mathcal{MI}$ ($\mathcal{NMI}$) also cannot serve our purpose. \par The above challenges encourage us to develop linearized mutual information ($\mathcal{LMI}$), which attains its global minimum when the two images are exactly the same.
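To make the per-sample computation concrete, the following is a minimal Python sketch of Eqs.~\ref{eq_mi} and \ref{eq_hxhyhxyps}: $N$-bin histograms of the pixel values are normalized into pdfs and combined into the entropy-based $\mathcal{MI}$ score. The helper names, default bin count, and dynamic range are illustrative assumptions, not part of our C++ implementation.

```python
import math

def histograms(xs, ys, n_bins=16, lo=0.0, hi=255.0):
    # Hard-binned marginal and joint histograms, normalized to pdfs.
    px = [0.0] * n_bins
    py = [0.0] * n_bins
    pxy = [[0.0] * n_bins for _ in range(n_bins)]
    for x, y in zip(xs, ys):
        i = min(int((x - lo) / (hi - lo) * n_bins), n_bins - 1)
        j = min(int((y - lo) / (hi - lo) * n_bins), n_bins - 1)
        px[i] += 1.0
        py[j] += 1.0
        pxy[i][j] += 1.0
    m = float(len(xs))
    px = [v / m for v in px]
    py = [v / m for v in py]
    pxy = [[v / m for v in row] for row in pxy]
    return px, py, pxy

def entropy(p):
    # H(p) = -sum p log p, skipping empty bins (0 log 0 := 0).
    return -sum(v * math.log(v) for v in p if v > 0.0)

def mutual_information(xs, ys, n_bins=16):
    # Per-sample I(X;Y) = H(X) + H(Y) - H(X,Y), from pixel-value pdfs.
    px, py, pxy = histograms(xs, ys, n_bins)
    return entropy(px) + entropy(py) - entropy([v for row in pxy for v in row])
```

For identical signals the score reaches $\mathcal{H}(X)$, while for a constant second signal it drops to zero, matching the behavior described above.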
In order to achieve this, we turn towards the working of $\mathcal{MI}$ and make the following important observation. \subsection{A key insight to MI} Consider two images $X$ and $Y$, with $p_X \in \mathbb{R}^{N}, p_Y \in \mathbb{R}^{N}$ and $p_{XY} \in \mathbb{R}^{N \times N}$ as their marginal and joint pdfs, respectively. The dimension of $p_X, p_Y$ is $N\times 1$, whereas it is $N\times N$ for $p_{XY}$. From Eq.~\ref{eq_mi},~\ref{eq_hxhyhxy}, we can immediately say that $\mathcal{MI}\to 0$ when two signals are dissimilar, while ${\mathcal{MI}\to \{\mathcal{H}(X) =\mathcal{H}(Y)\}}$ when the signals are exactly the same. Hence, for the images $X$ and $Y$ to be identical, a necessary but not sufficient condition is that $p_X$ and $p_Y$ should be the same. In order to guarantee it, the following has to be satisfied. \begin{equation} p_X^i = p_Y^i = p_{XY}^{ii},~~~ p_{XY_{\vert i \neq j}}^{ij} = 0,~~~~~ i,j=1,2,...,N \end{equation} In other words, when $X \equiv Y$, the off-diagonal elements of $p_{XY}$ are zero while the diagonal elements are non-zero (depending on the distribution) and equal $p_{X}$ and $p_Y$ simultaneously. The above insight leads us to derive an expression for the $\mathcal{LMI}$ function to train deep networks. \subsection{$\mathcal{LMI}$ Derivation} We know that for any probability density function, Eq.~\ref{eq_pdfsum} and \ref{eq_pdfsum1} hold. These equations represent a $1$D and a $2$D probability density function, respectively.
\begin{align} \footnotesize \sum_{i=1}^{N}p^i_X = \sum_{i=1}^{N}p^i_Y =1, \label{eq_pdfsum} \\ \sum_{i=1}^{N}\sum_{j=1}^{N}p^{ij}_{XY}=1 \label{eq_pdfsum1} \end{align} Rewriting Eq.~\ref{eq_pdfsum1} as a combination of its diagonal $(i=j)$ and off-diagonal $(i\neq j)$ elements, we get \begin{align} & \sum_{i=1}^{N}\sum_{\substack{j=1\\i \neq j}}^{N}p_{XY}^{ij} + \sum_{i=1}^{N}p_{XY}^{ii} =1 \label{eq:diagoffdiag} \\ & \sum_{i=1}^{N}\sum_{\substack{j=1\\i \neq j}}^{N}p_{XY}^{ij} + \sum_{i=1}^{N}p_{XY}^{ii} ~+ \sum_{i=1}^{N}p_{XY}^{ii} - \sum_{i=1}^{N}p_{XY}^{ii} =1 \end{align} \begin{equation} \begin{aligned} &\sum_{i=1}^{N}\sum_{\substack{j=1\\i \neq j}}^{N}p_{XY}^{ij} + \sum_{i=1}^{N}p_{XY}^{ii} ~+ \sum_{i=1}^{N}p_{XY}^{ii} - \sum_{i=1}^{N}p_{XY}^{ii} + \\ & \sum_{i=1}^{N}p_{X}^{i} - \sum_{i=1}^{N}p_{X}^{i} + \sum_{i=1}^{N}p_{Y}^{i} - \sum_{i=1}^{N}p_{Y}^{i} = 1 \end{aligned} \end{equation} Since $\sum_{i=1}^{N}p_{X}^{i} = \sum_{i=1}^{N}p_{Y}^{i} = 1$ and each difference is bounded above by its absolute value, the equality relaxes to \begin{equation} \begin{aligned} & \sum_{i=1}^{N}\sum_{\substack{j=1\\i \neq j}}^{N}p_{XY}^{ij} + \sum_{i=1}^{N}\vert p_{XY}^{ii} -p_{X}^{i}\vert ~+ \\ & \sum_{i=1}^{N}\vert p_{XY}^{ii} -p_{Y}^{i}\vert - \sum_{i=1}^{N}p_{XY}^{ii} + 1+ 1 \geq 1 \end{aligned} \end{equation} \begin{equation} \begin{aligned} & \sum_{i=1}^{N}\sum_{\substack{j=1\\i \neq j}}^{N}p_{XY}^{ij} + \sum_{i=1}^{N}\vert p_{XY}^{ii} -p_{X}^{i}\vert ~+ \\ & \sum_{i=1}^{N}\vert p_{XY}^{ii} -p_{Y}^{i}\vert - \sum_{i=1}^{N}p_{XY}^{ii} + 1 \geq 0 \label{eq:deepmiderivation} \end{aligned} \end{equation} Now, referring to Eq.~\ref{eq:diagoffdiag}, we can write \begin{equation} \sum_{i=1}^{N}p_{XY}^{ii} \leq 1 ~~~\Rightarrow~~~ 1- \sum_{i=1}^{N}p_{XY}^{ii} \geq 0 \label{eq:pxybound} \end{equation} Using the above in Eq.~\ref{eq:deepmiderivation}, we get \begin{equation} \begin{aligned} & \mathcal{L}_{LMI} = \frac{1}{3} \Big( \sum_{i=1}^{N}\sum_{\substack{j=1\\i \neq j}}^{N}p_{XY}^{ij} + \\ & \sum_{i=1}^{N}\vert p_{XY}^{ii} -p_{X}^{i}\vert + \sum_{i=1}^{N}\vert p_{XY}^{ii} -p_{Y}^{i}\vert \Big) & \geq 0 \end{aligned}
\label{eq:deepmi} \end{equation} where the factor $\nicefrac{1}{3}$ is included to ensure $\mathcal{LMI} \leq 1$; it is obtained by replacing each of the three terms by its maximum value. Equality to $0$ holds \textit{iff} $p_X^i = p_Y^i = p_{XY}^{ii},~~p_{XY_{\vert i \neq j}}^{ij}=0~~\forall~i,j \in 1,2,..,N$, i.e., the two images match perfectly. Hence, the L.H.S. of Eq.~\ref{eq:deepmi} is treated as the objective function, which we call the $\mathcal{LMI}$ function. The ``$\mathcal{L}$'' stands for ``linearized'', which arises because the $\mathcal{LMI}$ formulation is linear in the elements of the pdfs, whereas the ``$\mathcal{MI}$'' term arises because at the equality, the regular $\mathcal{MI}$ expression is also maximized. The $\mathcal{LMI}$ formulation is quite interesting because it is essentially a combination of three different losses weighted equally. The $\mathcal{L}_{LMI}$ formulation is quite intuitive and its gradients are straightforward to compute. \subsection{Fuzzy Probability Density Function} The $\mathcal{LMI}$ function utilizes the pdfs $p_X,p_Y$ and $p_{XY}$, which are discrete in nature. These are typically obtained by computing an $N$ bin histogram followed by a normalization step with $||.||_1=1$. As per the standard procedure to compute a regular histogram, first a bin-id corresponding to an observation of the random variable is computed, and then the count of the respective bin is incremented by unity. The computation of the bin-id is carried out by a $ceil$ or $floor$ operation, which is not differentiable. Moreover, during the rounding step, the actual contribution of the observation is lost. Thus, both the incremental and the rounding procedures prevent the gradient flow which is needed during the training process. \par To better understand the above, let $h_X$ be an $N$ bin histogram of the random variate $X$ and $x\in X$ be an observation.
In the case of a regular histogram, the bin-id $x_b$ corresponding to an observation $x$ is computed as follows: \begin{equation} \begin{aligned} x_b = \frac{x - min_X}{bin\_res} = \hat{x}\,N,~~~& ~~~ \hat{x} = \frac{x-min_X}{max_X-min_X}, \\ bin\_res &= \frac{max_X-min_X}{N} \end{aligned} \label{eq:bin} \end{equation} where $max_X, min_X$ are the maximum and minimum values which the variable $X$ can attain at any instant. Typically, for $8$-bit images, $[min_X,max_X]=[0,255]$. From Eq.~\ref{eq:bin}, it can be noticed that the value of $x_b$ is not necessarily an integer. In this case, $x_b$ is rounded to the nearest integer by using $ceil\equiv \lceil x_b \rceil$ or $floor \equiv \lfloor x_b \rfloor$, depending upon one's convention, and the count of bin $x_b$ is then incremented by one. Therefore, it becomes evident that the rounding procedure and the unit incremental procedure do not allow gradient computation w.r.t. the observations. To cope with this, we employ a fuzzification strategy in order to ensure valid gradients during back-propagation. \subsubsection{Fuzzification of $p_X$, $p_Y$} In order to fuzzify $h_X$, instead of one bin as in Eq.~\ref{eq:bin}, we compute two bins corresponding to $x\in X$, i.e., $x_0 = \lfloor x_b \rfloor$ and $x_1 = \lceil x_b \rceil$. We define a membership for each of the two bins as follows. \begin{equation} m_{x_0} = 1-(x_b-x_0),~~~m_{x_1} = (x_b-x_0) \label{eq:mempxpy} \end{equation} where $m_{x_0}, m_{x_1}$ are the memberships of $x_0$ and $x_1$; essentially, $m_{x_0} + m_{x_1} = 1$. While performing the incremental step, the counts of the bins $x_0$ and $x_1$ are incremented by $m_{x_0}$ and $m_{x_1}$, respectively, instead of by one. With the help of fuzzification, the gradients of $m_{x_0}$ and $m_{x_1}$ w.r.t. the observations are now fully defined (Sec.~\ref{sec:derivatives}).
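As an illustration of this soft-binning step, the sketch below builds a fuzzy $1$D histogram in the spirit of Eqs.~\ref{eq:bin} and \ref{eq:mempxpy}. Mapping the dynamic range onto the bin indices $0,\dots,N-1$ and the function name are our own illustrative assumptions.

```python
import math

def fuzzy_histogram(xs, n_bins, lo=0.0, hi=255.0):
    # Each observation spreads unit mass over its two nearest bins with
    # weights m_x0 = 1 - (x_b - x0) and m_x1 = x_b - x0, so the bin
    # contents vary smoothly with the observations (no hard rounding).
    h = [0.0] * n_bins
    for x in xs:
        xb = (x - lo) / (hi - lo) * (n_bins - 1)  # continuous bin-id
        x0 = int(math.floor(xb))
        x1 = min(x0 + 1, n_bins - 1)
        m1 = xb - x0       # membership of the ceil bin
        m0 = 1.0 - m1      # membership of the floor bin
        h[x0] += m0
        h[x1] += m1
    total = sum(h)
    return [v / total for v in h]  # normalize: ||p||_1 = 1
```

An observation exactly halfway between two bin centers contributes half of its mass to each, which is the behavior the membership functions are designed to produce.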
The above steps are followed in order to compute $p_X$ and $p_Y$, with the normalization step performed at the end. As a matter of convention, the memberships corresponding to $y_b$ for a $y\in Y$ are denoted by $m_{y_0}$ and $m_{y_1}$. \subsubsection{Fuzzification of $p_{XY}$} The fuzzification of the joint pdf $p_{XY}$ is simply an extension of the previous steps. For a regular $2$D histogram, the unit incremental procedure is applied to the bin location defined by the coordinate $(x_b,y_b)$ (Eq.~\ref{eq:bin}), which in this case as well need not be an integral value. Therefore, to ensure valid gradient flow, four memberships are defined which correspond to the four coordinates top-left, top-right, bottom-left, and bottom-right w.r.t. $(x_b,y_b)$. Mathematically, these four coordinates are given by $(x_0,y_0)$, $(x_1,y_0)$, $(x_0,y_1)$, $(x_1,y_1)$, and their respective memberships can be written as: \begin{equation} \begin{aligned} &m_{x_0 y_0} = m_{x_0}m_{y_0},~~~ m_{x_1 y_0} = m_{x_1}m_{y_0},\\ &m_{x_0 y_1} = m_{x_0}m_{y_1},~~~ m_{x_1 y_1} = m_{x_1}m_{y_1} \end{aligned} \label{eq:mempxy} \end{equation} While performing the incremental step, the counts of the four bins mentioned above are incremented by their respective membership values. \subsection{Back-propagation through DeepMI framework} \label{sec:derivatives} From Eq.~\ref{eq:mempxy}, it can be inferred that the gradient of $\mathcal{L}_{LMI}$ w.r.t. $x$ depends on $p_X^{x_0},p_X^{x_1},p_{XY}^{x_0y_0},p_{XY}^{x_0y_1}, p_{XY}^{x_1y_0},p_{XY}^{x_1y_1}$.
Therefore, we can write: \begin{equation} \frac{\partial \mathcal{L}}{\partial x} = \sum_{i=0}^{1} \frac{\partial \mathcal{L}}{\partial p_{X}^{x_i}}~ \frac{\partial p_{X}^{x_i}}{\partial x} + \sum_{i=0}^{1}\sum_{j=0}^{1}\frac{\partial \mathcal{L}}{\partial p_{XY}^{x_iy_j}}~ \frac{\partial p_{XY}^{x_iy_j}}{\partial x} \end{equation} Using the chain rule: \begin{equation} \begin{aligned} \frac{\partial p_{X}^{x_i}}{\partial x} = \frac{\partial p_{X}^{x_i}}{\partial \hat{p}_{X}^{x_i}} \times \frac{\partial \hat{p}_{X}^{x_i}}{\partial {m_{x_i}}} \times \frac{\partial m_{x_i}}{\partial x_b} \times \\ \frac{\partial x_b}{\partial \hat{x}} \times \frac{\partial \hat{x}}{\partial x},~~p_{X}^{x_i} = \frac{\hat{p}_{X}^{x_i}}{\sum_{k=1}^{N}\hat{p}_{X}^{k}} \end{aligned} \end{equation} and similarly, \begin{equation} \begin{aligned} \frac{\partial p_{XY}^{x_iy_j}}{\partial x} = \frac{\partial p_{XY}^{x_iy_j}}{\partial \hat{p}_{XY}^{x_iy_j}} & \times \frac{\partial \hat{p}_{XY}^{x_iy_j}}{\partial m_{x_iy_j}} \times \frac{\partial m_{x_iy_j}}{\partial m_{x_i}} \times \frac{\partial m_{x_i}}{\partial x_b} \\ & \times \frac{\partial x_b}{\partial \hat{x}} \times \frac{\partial \hat{x}}{\partial x}, ~~~ p_{XY}^{x_iy_j} = \frac{\hat{p}_{XY}^{x_iy_j}}{\sum_{k=1}^{N}\sum_{l=1}^{N}\hat{p}_{XY}^{kl}} \end{aligned} \end{equation} Each of the above partial derivatives can easily be computed from Eqs.~\ref{eq:deepmi}~-~\ref{eq:mempxy} via the chain rule. It must be noticed that the integral bin-ids $x_0,x_1$ do not occur in the gradient calculations, whereas they would in the case of regular histograms, leading to undefined derivatives. Similarly, the gradients can also be computed for an observation $y$. \subsection{$\mathcal{L}_{LMI}$ Implementation} The formulation of $\mathcal{L}_{LMI}$ seems quite intuitive.
There is, however, a consideration which must be accounted for while using it for signal matching, especially in scenarios where one of the signals is the groundtruth and the other is an estimate of it produced by a neural network. \par To understand this, consider two images $X$ and $Y$, where $X$ is the image to be reconstructed and $Y$ is the reconstruction, or simply the output of a neural network. The marginals $p_X,p_Y$ and the joint $p_{XY}$ of $X$ and $Y$ are computed using the fuzzification procedure described previously. Since these distributions are approximated versions of the underlying distribution, undersampling or oversampling of the underlying distribution is possible; in such scenarios, the distributions $p_X$, $p_Y$ and $p_{XY}$ do not have well defined mathematical expressions. Moreover, both $p_X$ and $p_Y$ can be obtained from $p_{XY}$; therefore, during back-propagation, only the gradients w.r.t. $p_{XY}^{ij}$ are back-propagated. Mathematically, the gradients w.r.t. $p_Y$ should also be back-propagated because the calculation of $p_Y$ depends on $Y$. However, doing so disturbs the training process and affects the neural network performance. After our experimental study, we attribute this observation to the distribution approximation procedure. \subsection{Hyperparameters} \label{sec:hyper} The DeepMI framework has $N$ as its only hyperparameter. The number of bins $N$ mainly controls how precisely $\mathcal{L}_{LMI}$ penalizes the network while matching. For example, consider an $8$-bit image with dynamic range $[0,255]$. If we set $N=255$, the network will be penalized very strongly while matching, whereas if $N$ is set to a small number, the network will be forced to focus only on the important details. This property can be quite useful in cases where two images from different cameras need to be matched while both images have different brightness levels.
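Putting the pieces together, $\mathcal{L}_{LMI}$ of Eq.~\ref{eq:deepmi} reduces to a few sums over the fuzzified pdfs. The sketch below assumes `p_x`, `p_y` are $N$-vector marginals and `p_xy` is an $N\times N$ joint, all normalized; it is an illustrative reading of the loss, not our C++ layer implementation.

```python
def lmi_loss(p_x, p_y, p_xy):
    # L_LMI = ( off-diagonal joint mass
    #         + |joint diagonal - p_X| + |joint diagonal - p_Y| ) / 3
    n = len(p_x)
    off_diag = sum(p_xy[i][j] for i in range(n) for j in range(n) if i != j)
    gap_x = sum(abs(p_xy[i][i] - p_x[i]) for i in range(n))
    gap_y = sum(abs(p_xy[i][i] - p_y[i]) for i in range(n))
    return (off_diag + gap_x + gap_y) / 3.0
```

The loss is $0$ for a perfect match (a diagonal joint pdf equal to both marginals) and grows toward $1$ as probability mass leaves the diagonal, consistent with the bounds derived above.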
\section{Experiments} \label{sec:exp} In this section, we benchmark the effectiveness and the applicability of the DeepMI framework. Due to the unavailability of a standard evaluation procedure, we define three tasks on which a deep neural network is trained in an unsupervised manner. The three tasks vary in difficulty from baseline, to moderate, to extremely difficult from the learning perspective. As an experimental study, the training of each task is performed using the loss functions $\mathcal{L}_1, \mathcal{L}_2, \mathcal{L}_{SSIM}$ along with $\mathcal{L}_{LMI}$. For $\mathcal{L}_{SSIM}$, we use a $3 \times 3$ block filter following \cite{godard2017unsupervised}. All the performance metrics are provided in Table-\ref{tab:unified} and Table-\ref{tab:effectN} for quantitative analysis, with the best scores highlighted in \textcolor{blue}{blue}. For training, we use \texttt{base\_lr} = $0.001$, \texttt{lr\_policy}=$poly$, and the \texttt{ADAM} optimizer with $\beta_1=0.9$ and $\beta_2=0.99$, unless otherwise stated. The encoder-decoders \cite{unet} have a filter size of $3\times3$ and the number of filters equal to $16,32,64,128,256$ for the five stages of each task. The whole framework has been implemented as a layer-wise architecture in C++, and the code will be available at the link provided in the beginning. \subsection{Unsupervised Bar Alignment (\texttt{Exp$_1$})} This experiment consists of two binary images, where the second image is a spatially transformed version of the first, i.e., a $2$D rigid body transform is defined between the two images. Both images consist of a black background and a white rectangular bar. The bar, sized $50\times 125$ in a $192\times 640$ image, has two degrees-of-freedom (DoF): $tx$ and $\theta$, which correspond to horizontal motion and in-plane rotation. The goal of the experiment is to learn the $2$DoF parameters in an unsupervised manner.
For the training purpose, we generate a dataset of $1500$ images with a $1000+500$ train-test split. The dataset is generated by transforming the bar in the image with randomly generated $tx \in [-100,100]$ pixels and $\theta \in [-40,40]$ degrees. The training for this experiment is performed using \texttt{SGD} with Nesterov momentum $=0.95$ for $5$ epochs. \par The unsupervised training pipeline for the task is depicted in Fig.~\ref{fig:exp1}. While training, the neural network takes two images, source ($I_s$) and target ($I_t$), as input and predicts $tx, \theta$. The image $I_s$ is then warped using a fully differentiable spatial transformer network (STN) \cite{jaderberg2015spatial} to obtain $\hat{I}_s$. The neural network is then penalized using back-propagation to force $\hat{I}_s \to I_t$. In the testing phase, the neural network predicts $tx, \theta$ on the test data, and the Mean-Absolute-Error (MAE) is reported between the predictions and the groundtruth $tx, \theta$ stored during the data generation process. From Table-\ref{tab:unified}, under the column \texttt{Exp$_1$}, it can be noticed that the network trained using the $\mathcal{L}_{LMI}$ formulation exhibits better performance as compared to the rest of the loss functions. Fig.~\ref{exp1} shows a few qualitative results of this experiment. In this task, almost all of the losses perform comparably well, which can be verified visually as well as from the quantitative results provided in Table~\ref{tab:unified}.
\begin{figure}[t] \centering \colorlet{conv}{white!60!cyan} \colorlet{concat}{white!60!orange} \begin{tikzpicture} \node (outer) [draw=white!80!black,rounded corners=0.25mm,scale = 0.75]{ \tikz{ \node (s1) [draw=cyan, fill=conv, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = -4 ex, yshift = 0 ex] {}; \node (s2) [draw=cyan, fill=conv,right of = s1,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 5ex, xshift = -4ex] {}; \node (s3) [draw=cyan, fill=conv,right of = s2,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 4ex, xshift = -4ex] {}; \node (s4) [draw=cyan, fill=conv,right of = s3,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 3ex, xshift = -4ex] {}; \node (s5) [draw=cyan, fill=conv,right of = s4,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 2ex, xshift = -4ex] {}; \draw [->,very thin] (s1) -- (s2); \draw [->,very thin] (s2) -- (s3); \draw [->,very thin] (s3) -- (s4); \draw [->,very thin] (s4) -- (s5); \node (s12) [draw=cyan, fill=conv, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = -4 ex, yshift = -7 ex] {}; \node (s22) [draw=cyan, fill=conv,right of = s1,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 5ex, xshift = -4ex, yshift = -7 ex] {}; \node (s32) [draw=cyan, fill=conv,right of = s2,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 4ex, xshift = -4ex, yshift = -7 ex] {}; \node (s42) [draw=cyan, fill=conv,right of = s3,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 3ex, xshift = -4ex, yshift = -7 ex] {}; \node (s52) [draw=cyan, fill=conv,right of = s4,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 2ex, xshift = -4ex, yshift = -7 ex] {}; \draw [->,very thin] (s12) -- (s22); \draw [->,very thin] (s22) -- (s32); \draw [->,very thin] (s32) -- (s42); \draw [->,very thin] (s42) -- (s52); \node (stn) [draw=green!70!black,right of = s5,minimum width=2.5ex, minimum height = 2ex, xshift = -3 ex, 
yshift=-3.5ex]{\scriptsize STN}; \draw [->,very thin] (s5) -| (stn) node[xshift=-1ex, yshift=4.5ex]{\scriptsize $tx$}; \draw [->,very thin] (s52) -| (stn) node[xshift=-1ex, yshift=-4.5ex]{\scriptsize $\theta$}; \node (c1) [draw=orange,fill=concat,left of=s1, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 5ex, xshift = 4 ex, yshift = -3.5 ex] {}; \node (i1) [draw=none,left of=c1,minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = 3 ex, yshift = 1.5 ex] {\scriptsize $I_s$}; \node (i2) [draw=none,left of=c1,minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = 3 ex, yshift = -1.5 ex] {\scriptsize $I_t$}; \draw [->,very thin] (i1) -- ($(c1.west) + (0ex,1.5ex)$); \draw [->,very thin] (i2) -- ($(c1.west) - (0ex,1.5ex)$); \draw [->,very thin] (c1) |- (s1); \draw [->,very thin] (c1) |- (s12); \draw [->,very thin] ($(i1.north) - (0ex,1.5ex)$) -- ($(i1.north) + (0ex,2.5ex)$) -| ($(stn.north) + (1ex,0ex)$); \node (loss) [draw=red!70!black,right of = stn,minimum width=2.5ex, minimum height = 2ex, xshift = -1 ex, yshift=0ex]{\scriptsize $\mathcal{L}$}; \draw [->,very thin] (stn) |- (loss); \draw [->,very thin] ($(i2.south) + (0ex,1.5ex)$) -- ($(i2.south) - (0ex,2.5ex)$) -| (loss); \node (img1) [draw=none,minimum width=1ex, minimum height = 2ex, xshift = -4 ex, yshift=-13ex]{\includegraphics[scale=0.08]{bar0.png}}; \node (img2) [draw=none, right of = img1, minimum width=1ex, minimum height = 2ex, xshift = 6 ex, yshift=0ex]{\includegraphics[scale=0.08]{bar1.png}}; }}; \end{tikzpicture} \caption{Unsupervised learning framework for the \texttt{Exp$_1$}. The blue box is a convolutional block, orange box is a concatenation block. 
STN~\cite{jaderberg2015spatial}, and $\mathcal{L}$ is a loss function.} \label{fig:exp1} \end{figure} \begin{figure}[t] \centering \begin{tikzpicture} \FPeval{xoffset}{8.9} \FPeval{yoffset}{3.1} \FPeval{imscale}{0.059} \foreach \i in {0,1,...,4} { \node (bai\i) [draw=none,rounded corners=0.6mm, xshift=0 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{baralign_i_l1_0.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=1 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{baralign_t_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=2 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{baralign_a_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=3 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{baralign_a_l2_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=4 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{baralign_a_ssim_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=5 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{baralign_a_mi_\i.png}}; } \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=0 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{Input}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=1 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{Target}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=2 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_1$}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=3 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_2$}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=4 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_{SSIM}$}; 
\FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=5 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_{LMI}$}; \end{tikzpicture} \caption{Qualitative results of \texttt{Exp$_1$}} \label{exp1} \end{figure} \begin{figure*}[t] \centering \begin{tikzpicture} \FPeval{xoffset}{15.9} \FPeval{yoffset}{5.1} \FPeval{imscale}{0.109} \foreach \i in {0,1,...,4} { \node (bai\i) [draw=none,rounded corners=0.6mm, xshift=0 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{mp_i_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=1 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{mp_t_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=2 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{mp_m_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=3 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{mp_p_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=4 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{mp_p_l2_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=5 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{mp_p_ssim_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=6 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{mp_p_mi_\i.png}}; } \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=0 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{Input}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=1 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{Target}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=2 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{GT 
Mask}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=3 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_1$}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=4 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_2$}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=5 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_{SSIM}$}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=6 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_{LMI}$}; \end{tikzpicture} \caption{Qualitative results of \texttt{Exp$_2$}} \label{exp2} \end{figure*} \begin{figure}[t] \centering \colorlet{conv}{white!60!cyan} \colorlet{concat}{white!60!orange} \begin{tikzpicture} \node (outer) [draw=white!80!black,rounded corners=0.25mm,scale = 0.75]{ \tikz{ \node (s1) [draw=cyan, fill=conv, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = -4 ex, yshift = 0 ex] {}; \node (s2) [draw=cyan, fill=conv,right of = s1,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 5ex, xshift = -4ex] {}; \node (s3) [draw=cyan, fill=conv,right of = s2,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 4ex, xshift = -4ex] {}; \node (s4) [draw=cyan, fill=conv,right of = s3,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 3ex, xshift = -4ex] {}; \node (s5) [draw=cyan, fill=conv,right of = s4,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 2ex, xshift = -4ex] {}; \node (s5d) [draw=cyan, fill=conv,right of = s5,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 2ex, xshift = -4ex] {}; \node (s4d) [draw=cyan, fill=conv,right of = s5d,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 3ex, xshift = -4ex] {}; \node (s3d) [draw=cyan, fill=conv,right of = s4d,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 4ex, xshift = -4ex] {}; \node (s2d) [draw=cyan, fill=conv,right of = s3d,rounded corners=0.25mm,minimum width=1.5ex, minimum 
height = 5ex, xshift = -4ex] {}; \node (s1d) [draw=cyan, fill=conv,right of=s2d, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = -4 ex, yshift = 0 ex] {}; \draw [->,very thin] (s1) -- (s2); \draw [->,very thin] (s2) -- (s3); \draw [->,very thin] (s3) -- (s4); \draw [->,very thin] (s4) -- (s5); \draw [->,very thin] (s5) -- (s5d); \draw [->,very thin] (s5d) -- (s4d); \draw [->,very thin] (s4d) -- (s3d); \draw [->,very thin] (s3d) -- (s2d); \draw [->,very thin] (s2d) -- (s1d); \node (dot) [draw=black,right of = s1d,minimum width=2.5ex, minimum height = 2ex, xshift = -1.5 ex, yshift=0ex,circle,scale=0.6]{}; \node (dot1) [draw=black,fill=black,right of = s1d,minimum width=2.5ex, minimum height = 2ex, xshift = -1.5 ex, yshift=0ex,circle,scale=0.2]{}; \draw [->,very thin] (s1d) -- (dot) node[xshift=-2.5ex, yshift=1ex]{\scriptsize $M_s$}; \node (c1) [draw=orange,fill=concat,left of=s1, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 5ex, xshift = 4 ex, yshift = 0 ex] {}; \node (i1) [draw=none,left of=c1,minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = 3 ex, yshift = 1.5 ex] {\scriptsize $I_s$}; \node (i2) [draw=none,left of=c1,minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = 3 ex, yshift = -1.5 ex] {\scriptsize $I_t$}; \draw [->,very thin] (i1) -- ($(c1.west) + (0ex,1.5ex)$); \draw [->,very thin] (i2) -- ($(c1.west) - (0ex,1.5ex)$); \draw [->,very thin] (c1) |- (s1); \draw [->,very thin] ($(i1.north) - (0ex,1.5ex)$) -| ($(i1.north) + (0ex,2.5ex)$) -| (dot); \node (loss) [draw=red!70!black,right of = dot,minimum width=2.5ex, minimum height = 2ex, xshift = -3 ex, yshift=0ex]{\scriptsize $\mathcal{L}$}; \draw [->,very thin] (dot) -- (loss); \draw [->,very thin] ($(i2.south) + (0ex,1.5ex)$) -- ($(i2.south) - (0ex,2.5ex)$) -| (loss); \node (img1) [draw=none,minimum width=1ex, minimum height = 2ex, xshift = 2 ex, yshift=-9.6ex]{\includegraphics[scale=0.08]{mask0.png}}; \node (img2) 
[draw=none, right of = img1, minimum width=1ex, minimum height = 2ex, xshift = 6 ex, yshift=0ex]{\includegraphics[scale=0.08]{mask1.png}}; }}; \end{tikzpicture} \caption{Unsupervised learning framework for \texttt{Exp$_2$}.} \label{fig:exp2} \end{figure} \subsection{Unsupervised Mask Prediction (\texttt{Exp$_2$})} This experiment involves two ordinary grayscale images: a source ($I_s$) and a target ($I_t$), where $I_t$ is obtained from $I_s$ by cropping out a significantly large rectangular region and filling it with zeros. The goal of the experiment is to predict a mask $M_s$ which depicts the similar and dissimilar regions between $I_s$ and $I_t$. To achieve this, a neural network is trained in an unsupervised manner. For the learning process, we generate a dataset of $1500$ images with a $1000+500$ train-test split: each sample is created by randomly cropping a rectangular region of size $40\times 200$ from $I_s$ and filling the cropped region with zeros; the resulting image is referred to as $I_t$. \par Fig.~\ref{fig:exp2} shows the unsupervised training framework for this task. We use the UNet \cite{unet} network architecture. The network is trained to predict $M_s$ such that $I_s * M_s \to I_t$. This experiment is essentially a binary segmentation task; we therefore adopt the intersection-over-union (IoU) metric, which is widely used to quantify segmentation performance. In order to best evaluate the different training losses, we report IoU scores obtained by thresholding the predicted mask at several confidence levels. This is done in order to examine the network's capability to push the feature embeddings of similar and dissimilar regions significantly apart; in other words, a perfectly trained network will exhibit the same IoU score at every threshold level.
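The multi-threshold IoU evaluation described above can be sketched as follows (a minimal NumPy sketch; the mask size and the confident toy prediction are illustrative assumptions, while the thresholds match those reported in Table~\ref{tab:unified}):

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """Intersection-over-union of two binary masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / union if union > 0 else 1.0

def iou_at_thresholds(pred, gt, thresholds=(0.05, 0.10, 0.20, 0.40, 0.50)):
    """Binarize the soft prediction at several confidence levels.

    A well-trained network pushes the embeddings of similar and dissimilar
    regions far apart, so the scores should barely change across thresholds.
    """
    return {t: iou(pred >= t, gt >= 0.5) for t in thresholds}

# toy check: a confident prediction yields identical IoU at every threshold
gt = np.zeros((8, 8)); gt[2:6, 2:6] = 1.0
pred = np.where(gt > 0, 0.99, 0.01)
scores = iou_at_thresholds(pred, gt)
```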
From Table~\ref{tab:unified}, under \texttt{Exp$_2$}, it is clear that the proposed $\mathcal{L}_{LMI}$ formulation shows consistently better performance than the other loss functions across the various thresholding levels. \par Fig.~\ref{exp2} shows a few qualitative results corresponding to this experiment. The results visualized are thresholded at $0.10$ (IoU$_{.10}$). It can be seen that $\mathcal{L}_1$ and $\mathcal{L}_{SSIM}$ perform worst, whereas $\mathcal{L}_2$ and $\mathcal{L}_{LMI}$ perform almost equally well. The visual observation can also be verified quantitatively from Table~\ref{tab:unified}, under \texttt{Exp$_2$}: $\mathcal{L}_1$ and $\mathcal{L}_{SSIM}$ again perform worst, as observed visually. On the other hand, $\mathcal{L}_2$ and $\mathcal{L}_{LMI}$ differ only marginally, which can also be verified visually by zooming into the results and examining the boundary of the white region. \subsection{Unsupervised Depth Estimation (\texttt{Exp$_3$})} Depth estimation has been a long-standing task in computer vision. It is of worldwide industrial importance, because depth perception is essential for autonomous robotics and vehicles. Owing to the advancements in supervised learning techniques, researchers have developed several methods to predict depth using neural networks trained in a supervised manner. However, obtaining accurate ground truth for supervised learning of this task is extremely challenging, because it requires very expensive measurement instruments such as LIDARs. Hence, the task has gained a considerable amount of attention in recent years, with the aim of developing unsupervised learning frameworks. The existing methods make use of several loss functions in order to learn depth effectively.
\par In this experiment, we demonstrate the unsupervised learning of a neural network for the task of depth estimation from stereo images. We take the seminal work on this task \cite{godard2017unsupervised} as our basis. Our aim is not to show improvements on the benchmark datasets; instead, we emphasize how the DeepMI framework can be easily integrated into such real-world applications. Hence, instead of a bigger dataset, we use $87$ grayscale rectified stereo images from KITTI \cite{kitti} sequence-$113$. We select this sequence because it is quite a difficult one from the perspective of this task. \par The framework used to carry out the experiment is shown in Fig.~\ref{fig:exp3}. Note that, instead of the three different losses used in \cite{godard2017unsupervised}, we use only one loss for the evaluation. This is done in order to demonstrate that $\mathcal{L}_{LMI}$ provides stronger gradients and can alone lead to improved results. We report the MAE between the predicted depth and the ground-truth measurements. From Table~\ref{tab:unified}, under \texttt{Exp$_3$}, one can notice that the neural network trained using $\mathcal{L}_{LMI}$ outperforms the other variants by a large margin. For $\mathcal{L}_1$ and $\mathcal{L}_2$, the \texttt{base\_lr} is lowered to $0.00001$ to prevent gradient explosion.
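The single-loss stereo supervision used here (warp one view with the predicted disparity and compare it photometrically to the other view, following \cite{godard2017unsupervised}) can be sketched as below. The nearest-neighbour sampler and the toy images are simplifying, illustrative assumptions; the actual framework requires a differentiable (bilinear) sampler and uses $\mathcal{L}_{LMI}$ as the comparison term:

```python
import numpy as np

def warp_horizontal(img, disparity):
    """Resample each row of `img` at column x - disparity(x).

    Nearest-neighbour for brevity; a real implementation needs a
    differentiable (bilinear) sampler for gradients to flow.
    """
    h, w = img.shape
    xs = np.arange(w)[None, :] - disparity           # source column per pixel
    xs = np.clip(np.rint(xs).astype(int), 0, w - 1)
    return np.take_along_axis(img, xs, axis=1)

def photometric_loss(left, right, disp):
    """Mean absolute error between the left view and the right view
    warped into the left frame by the predicted disparity."""
    return float(np.abs(left - warp_horizontal(right, disp)).mean())

# toy check: the true disparity (3 px) explains the pair far better than 0 px
left = np.tile(np.arange(16.0), (4, 1))
right = warp_horizontal(left, -3.0 * np.ones_like(left))
loss_true = photometric_loss(left, right, 3.0 * np.ones_like(left))
loss_zero = photometric_loss(left, right, np.zeros_like(left))
```

With the correct disparity only the few occluded border columns mismatch, so the loss provides a clear training signal towards the true geometry.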
\begin{table}[!t] \centering \caption{Quantitative analysis} \label{tab:unified} \arrayrulecolor{white!0!black} \setlength{\arrayrulewidth}{0.15ex} \setlength{\tabcolsep}{0.4pt} \begin{tabular}{c c c c c c c c c} \hline \multirow{2}{*}{\makecell{$\mathcal{L}$}} & \multicolumn{2}{c}{\makecell{\texttt{Exp$_1$}}} & \multicolumn{5}{c}{\makecell{\texttt{Exp$_2$}}} & \multicolumn{1}{c}{\makecell{\texttt{Exp$_3$}}} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-8} \cmidrule(lr){9-9} & \makecell{MAE$_{tx}$} & \makecell{MAE$_{\theta}$} & \makecell{IoU$_{.05}$} & \makecell{IoU$_{.10}$} & \makecell{IoU$_{.20}$} & \makecell{IoU$_{.40}$} & \makecell{IoU$_{.50}$} & \makecell{MAE($m$)} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-8} \cmidrule(lr){9-9} $\mathcal{L}_1$ & $1.52$ & $3.44$ & $.48$ & $.61$& $.65$ & $.68$ & $.68$ & $105.46$ \\ $\mathcal{L}_2$ & $1.93$ & $3.65$ & $.66$ & $.68$& $.68$ & $.68$ & $.68$ & $39.78$ \\ $\mathcal{L}_{SSIM}$ & $1.25$ & $3.95$ & $.50$ & $.52$& $.68$ & $.69$ & $\textcolor{blue}{.81}$ & $.46$ \\ $\mathcal{L}_{SSIM} + \mathcal{L}_{2}$ & $-$ & $-$ & $-$ & $-$& $-$ & $-$ & $-$ & $.64$ \\ \cmidrule(lr){1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-8} \cmidrule(lr){9-9} $\mathcal{L}_{LMI}$ & $\textcolor{blue}{1.04}$ & $\textcolor{blue}{3.18}$ & $\textcolor{blue}{.70}$& $\textcolor{blue}{.70}$ & $\textcolor{blue}{.71}$ & $\textcolor{blue}{.80}$ & $\textcolor{blue}{.81}$ & $\textcolor{blue}{.27}$ \\ \hline \end{tabular} \end{table} \par Fig.~\ref{exp3} shows a few qualitative results for this experiment. From the figure, it can be noticed that both $\mathcal{L}_1$ and $\mathcal{L}_{2}$ perform poorly, whereas $\mathcal{L}_{SSIM}$ performs considerably better than them. This indicates the reason behind the recent adoption of $\mathcal{L}_{SSIM}$ over $\mathcal{L}_1$ and $\mathcal{L}_{2}$ for image matching purposes.
\begin{figure*}[t] \centering \begin{tikzpicture} \FPeval{xoffset}{18.6} \FPeval{yoffset}{5.8} \FPeval{imscale}{0.128} \foreach \i in {0,1,...,4} { \node (bai\i) [draw=none,rounded corners=0.6mm, xshift=0 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{de_i_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=1 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{de_t_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=1 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{de_d_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=2 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{de_p_l1_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=3 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{de_p_l2_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=4 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{de_p_ssim_\i.png}}; } \foreach \i in {0,1,...,4} { \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=5 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{\includegraphics[scale= \imscale]{de_p_mi_\i.png}}; } \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=0 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{Input}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=1 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{Target}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=2 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_1$}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=3 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_2$}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=4 * \xoffset ex, 
yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_{SSIM}$}; \FPeval{i}{5} \node (bat\i) [draw=none,rounded corners=0.6mm, xshift=5 * \xoffset ex, yshift=-(\i-1)*\yoffset ex]{$\mathcal{L}_{LMI}$}; \end{tikzpicture} \caption{Qualitative results of \texttt{Exp$_3$}. The brighter the depth map, the closer the object.} \label{exp3} \end{figure*} \begin{figure}[!t] \centering \colorlet{conv}{white!60!cyan} \colorlet{concat}{white!60!orange} \begin{tikzpicture} \node (outer) [draw=white!80!black,rounded corners=0.25mm,scale = 0.75]{ \tikz{ \node (s1) [draw=cyan, fill=conv, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = -4 ex, yshift = 0 ex] {}; \node (s2) [draw=cyan, fill=conv,right of = s1,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 5ex, xshift = -4ex] {}; \node (s3) [draw=cyan, fill=conv,right of = s2,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 4ex, xshift = -4ex] {}; \node (s4) [draw=cyan, fill=conv,right of = s3,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 3ex, xshift = -4ex] {}; \node (s5) [draw=cyan, fill=conv,right of = s4,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 2ex, xshift = -4ex] {}; \node (s5d) [draw=cyan, fill=conv,right of = s5,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 2ex, xshift = -4ex] {}; \node (s4d) [draw=cyan, fill=conv,right of = s5d,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 3ex, xshift = -4ex] {}; \node (s3d) [draw=cyan, fill=conv,right of = s4d,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 4ex, xshift = -4ex] {}; \node (s2d) [draw=cyan, fill=conv,right of = s3d,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 5ex, xshift = -4ex] {}; \node (s1d) [draw=cyan, fill=conv,right of=s2d, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = -4 ex, yshift = 0 ex] {}; \draw [->,very thin] (s1) -- (s2); \draw [->,very thin] (s2) -- (s3); \draw [->,very thin] (s3) -- (s4); \draw
[->,very thin] (s4) -- (s5); \draw [->,very thin] (s5) -- (s5d); \draw [->,very thin] (s5d) -- (s4d); \draw [->,very thin] (s4d) -- (s3d); \draw [->,very thin] (s3d) -- (s2d); \draw [->,very thin] (s2d) -- (s1d); \node (warp) [draw=green!70!black,right of = s1d,minimum width=2.5ex, minimum height = 2ex, xshift = 1 ex, yshift=0ex]{\scriptsize Warp}; \draw [->,very thin] (s1d) -- (warp) node[xshift=-5ex, yshift=1ex]{\scriptsize $D_0$}; \node (loss) [draw=red!70!black,right of = warp,minimum width=2.5ex, minimum height = 2ex, xshift = 0 ex, yshift=0ex]{\scriptsize $\mathcal{L}$}; \draw [->,very thin] (warp) |- (loss); \node (s12) [draw=cyan, fill=conv, minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = -4 ex, yshift = -7ex] {}; \node (s22) [draw=cyan, fill=conv,right of = s1,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 5ex, xshift = -4ex, yshift = -7ex] {}; \node (s32) [draw=cyan, fill=conv,right of = s2,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 4ex, xshift = -4ex, yshift = -7ex] {}; \node (s42) [draw=cyan, fill=conv,right of = s3,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 3ex, xshift = -4ex, yshift = -7ex] {}; \node (s52) [draw=cyan, fill=conv,right of = s4,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 2ex, xshift = -4ex, yshift = -7ex] {}; \node (s52d) [draw=cyan, fill=conv,right of = s52,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 2ex, xshift = -4ex, yshift = 0ex] {}; \node (s42d) [draw=cyan, fill=conv,right of = s52d,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 3ex, xshift = -4ex, yshift = 0ex] {}; \node (s32d) [draw=cyan, fill=conv,right of = s42d,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 4ex, xshift = -4ex, yshift = 0ex] {}; \node (s22d) [draw=cyan, fill=conv,right of = s32d,rounded corners=0.25mm,minimum width=1.5ex, minimum height = 5ex, xshift = -4ex, yshift = 0ex] {}; \node (s12d) [draw=cyan, fill=conv,right of=s22d, 
minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = -4 ex, yshift = 0ex] {}; \draw [->,very thin] (s12) -- (s22); \draw [->,very thin] (s22) -- (s32); \draw [->,very thin] (s32) -- (s42); \draw [->,very thin] (s42) -- (s52); \draw [->,very thin] (s52) -- (s52d); \draw [->,very thin] (s52d) -- (s42d); \draw [->,very thin] (s42d) -- (s32d); \draw [->,very thin] (s32d) -- (s22d); \draw [->,very thin] (s22d) -- (s12d); \node (warp2) [draw=green!70!black,right of = s12d,minimum width=2.5ex, minimum height = 2ex, xshift = 1 ex, yshift=0ex]{\scriptsize Warp}; \draw [->,very thin] (s12d) -- (warp2) node[xshift=-5ex, yshift=1ex]{\scriptsize $D_1$}; \node (loss2) [draw=red!70!black,right of = warp2,minimum width=2.5ex, minimum height = 2ex, xshift = 0 ex, yshift=0ex]{\scriptsize $\mathcal{L}$}; \draw [->,very thin] (warp2) |- (loss2); \node (i1) [draw=none,left of=s1,minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = 3 ex, yshift = 0 ex] {\scriptsize $I_0$}; \node (i2) [draw=none,left of=s12,minimum width=1.5ex,rounded corners=0.25mm, minimum height = 6ex, xshift = 3 ex, yshift = 0 ex] {\scriptsize $I_1$}; \draw [->,very thin] (i1) |- (s1); \draw [->,very thin] (i2) |- (s12); \draw [->,very thin] ($(i1.south) + (0ex,1.5ex)$) -| ($(i1.south) - (0ex,0.3ex)$) -| ($(warp2.north) + (1ex,0ex)$); \draw [->,very thin] ($(i2.north) - (0ex,1.5ex)$) -| ($(i2.north) + (0ex,0.3ex)$) -| ($(warp.south) - (1ex,0ex)$); \draw [->,very thin] ($(i1.north) - (0ex,1.5ex)$) -| ($(i1.north) + (0ex,0.5ex)$) -| (loss); \draw [->,very thin] ($(i2.south) + (0ex,1.5ex)$) -- ($(i2.south) - (0ex,0.5ex)$) -| (loss2); \node (img1) [draw=none,minimum width=1ex, minimum height = 2ex, xshift = 5 ex, yshift=-13.1ex]{\includegraphics[scale=0.045]{d0.png}}; \node (img2) [draw=none, right of = img1, minimum width=1ex, minimum height = 2ex, xshift = 7 ex, yshift=0ex]{\includegraphics[scale=0.045]{d1.png}}; }}; \end{tikzpicture} \caption{Unsupervised learning framework 
for \texttt{Exp$_3$}. Warp~\cite{godard2017unsupervised}.} \label{fig:exp3} \end{figure} \par Further, it can be noticed that, visually, the results of $\mathcal{L}_{LMI}$ are the most pleasing as well as the most consistent of all. Comparing $\mathcal{L}_{SSIM}$ and $\mathcal{L}_{LMI}$, we can see that the depth estimations of the former contain texture-copy \cite{godard2017unsupervised} artifacts, whereas these are absent in the latter. This can also be verified by taking a closer look at the car in the bottom right of the image: for $\mathcal{L}_{SSIM}$, the depth map of the car has holes near the edges and contains severe texture-copy artifacts near the number plate and other body areas of the car. These artifacts, on the other hand, are not present in the case of $\mathcal{L}_{LMI}$. This shows the clear effectiveness of the $\mathcal{L}_{LMI}$ loss and the DeepMI framework. \subsection{Effect of number of bins $N$} \label{sec:exphparams} Table~\ref{tab:effectN} shows the effect of the hyperparameter $N$ on each of the three tasks. For \texttt{Exp$_1$}, it can be noticed that the performance for all four values of $N$ is the same. This has to be the case because the images in this task are binary. For \texttt{Exp$_2$}, there is a considerable drop in performance for $N=3$ at IoU$_{.05}$ (highlighted in \textcolor{red}{red}). This is because the images in this case have multiple grayscale levels, whose details cannot be efficiently captured with so few bins. The same holds for \texttt{Exp$_3$} when $N=3$. Overall, we can see that $\mathcal{L}_{LMI}$ has significant advantages over the other loss functions. Through our experimentation, we find it sufficient to keep $N \leq 25$ for a signal with dynamic range $\in[0,255]$.
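The role of the bin count $N$ can be made concrete with a minimal sketch of a soft ($N$-bin, fuzzified) histogram and the mutual information computed from it. The triangular (linear-interpolation) kernel below is an illustrative assumption, not necessarily the exact fuzzification used by DeepMI:

```python
import numpy as np

def _bin_weights(x, centers, width):
    """Each sample spreads linearly over its two nearest bin centers,
    so the histogram is piecewise-linear (hence differentiable) in x."""
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - centers[None, :]) / width)

def soft_mutual_information(x, y, n_bins=11, lo=0.0, hi=255.0):
    """MI (in nats) from soft joint/marginal histograms of two 1-D signals."""
    centers = np.linspace(lo, hi, n_bins)
    width = centers[1] - centers[0]
    pxy = _bin_weights(x, centers, width).T @ _bin_weights(y, centers, width)
    pxy /= pxy.sum()                       # normalized joint density
    px = pxy.sum(axis=1, keepdims=True)    # marginals
    py = pxy.sum(axis=0, keepdims=True)
    m = pxy > 0
    return float((pxy[m] * np.log(pxy[m] / (px @ py)[m])).sum())

# identical signals share maximal information; independent noise close to none
sig = np.random.default_rng(0).uniform(0, 255, 2000)
noise = np.random.default_rng(1).uniform(0, 255, 2000)
mi_same = soft_mutual_information(sig, sig)
mi_noise = soft_mutual_information(sig, noise)
```

With very few bins (e.g. $N=3$) the joint histogram has too few cells to resolve multiple gray levels, which is consistent with the performance drop reported in Table~\ref{tab:effectN}.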
\subsection{A unified discussion on the experiments} \label{sec:compminmi} From Table~\ref{tab:unified}, it can be noticed that the information-theory-based measures are much more consistent than the other losses. It can also be noticed that, for \texttt{Exp$_3$}, $\mathcal{L}_{LMI}$ proves better even than the case in which $\mathcal{L}_{SSIM}$ and $\mathcal{L}_2$ are used together. Moreover, $\mathcal{L}_{LMI}$ shows the most consistent and the best scores among all the loss variants. In these experiments, our intention has not been to outweigh the existing losses, but rather to highlight the potential of information-theory-based methods in deep learning for real-world applications. \begin{table}[t] \centering \caption{Effect of bin size $N$ on $\mathcal{L}_{LMI}$} \label{tab:effectN} \arrayrulecolor{white!0!black} \setlength{\arrayrulewidth}{0.15ex} \setlength{\tabcolsep}{2.5pt} \begin{tabular}{c c c c c c c c c} \hline \multirow{2}{*}{\makecell{$N$}} & \multicolumn{2}{c}{\makecell{\texttt{Exp$_1$}}} & \multicolumn{5}{c}{\makecell{\texttt{Exp$_2$}}} & \multicolumn{1}{c}{\makecell{\texttt{Exp$_3$}}} \\ \cmidrule(lr){2-3} \cmidrule(lr){4-8} \cmidrule(lr){9-9} & \makecell{MAE$_{tx}$} & \makecell{MAE$_{\theta}$} & \makecell{IoU$_{.05}$} & \makecell{IoU$_{.10}$} & \makecell{IoU$_{.20}$} & \makecell{IoU$_{.40}$} & \makecell{IoU$_{.50}$} & \makecell{MAE($m$)} \\ \cmidrule(lr){1-1} \cmidrule(lr){2-3} \cmidrule(lr){4-8} \cmidrule(lr){9-9} $3$ & $1.04$ & $3.18$ & \textcolor{red}{$.41$} & $.51$& $.52$ & $.50$ & \textcolor{blue}{$.81$} & $.41$\\ $11$ & $0.89$ & \textcolor{blue}{$2.69$} & $.50$ & $.59$& $.61$ & $.66$ & \textcolor{blue}{$.81$} & \textcolor{blue}{$.27$}\\ $15$ & \textcolor{blue}{$0.78$} & $2.89$ & \textcolor{blue}{$.66$} & \textcolor{blue}{$.67$}& \textcolor{blue}{$.68$} & $.69$ & \textcolor{blue}{$.81$} & $.28$\\ $25$ & $0.99$ & $4.36$ & $.47$ & $.48$& $.58$ & \textcolor{blue}{$.79$} & $.79$ & $.29$\\ \hline \end{tabular} \end{table}
\section{Conclusion} \label{sec:conc} In this paper, we proposed an end-to-end differentiable framework, ``DeepMI'', and a novel similarity metric, $\mathcal{LMI}$, to train deep neural networks. The metric is inspired by mutual information ($\mathcal{MI}$) and copes with the difficulty of training a deep neural network directly with the $\mathcal{MI}$ expression. The metric is based on probability density functions, which makes it signal agnostic. The density functions are discrete in nature for real-world signals (images, time series) and cannot support backpropagation. Therefore, a fuzzification strategy to support smooth backward gradient flow through the density functions is also developed. We show that neural networks trained using the $\mathcal{L}_{LMI}$ metric outperform their counterparts trained using $\mathcal{L}_1, \mathcal{L}_2, \mathcal{L}_{SSIM}$. Additionally, we also show that it can be easily integrated into real-world applications. \par The DeepMI framework can be thought of as an effort to bring deep learning and mutual information together for real-world applications. Through this work, we believe that learning-based methods in several areas, such as autonomous vehicles, robotic vision, and speech/audio, can benefit greatly in terms of performance. The extensive experimental study in this work can be used as a basis to develop extensions to the DeepMI framework in order to further improve overall algorithmic performance. We also believe that the inclusion of DeepMI into deep learning frameworks shall open the door to new applications. \medskip \small \bibliographystyle{ieeetr}
\section{Introduction} \label{sec:introduction} In the standard $\Lambda$CDM cosmological model, the formation of the first stars occurred in the redshift range $30 \gtrsim z \gtrsim 20$ (\citealt{BL01, Miralda_E_2003, Bromm_Yoshida_2011, Stacy_2014, Hummel_2014}, for a recent and extensive review see also \citealt{Bromm_2013}) in mini-halos, i.e. dark matter structures with virial temperatures $T_{\rm vir} \simlt 10^4 \, \mathrm{K}$ and total masses $M_h \simlt 10^8 \, \mathrm{M}_{\odot}$. The concurrent formation of the first black holes (\citealt{Bellovary_2011, Volonteri_Bellovary_2012, Agarwal_2013, Agarwal_2014}, see \citealt{Haiman_2013} for an updated review), as a final product of the evolution of the first massive stars, represents a second key event during the same cosmic epoch. The appearance of these classes of objects likely had a very strong impact on both the interstellar and the intergalactic medium, due to their radiative and mechanical feedback \citep{Ricotti_2011_parametric, Petri_2012, Ricotti_2012_DC, Jeon_2012, Tanaka_2012, Maiolino_2012, Ricotti_2013_luminosity, Nakauchi_2014}. The process of cooling and fragmenting the gas plays a role of key importance in this cosmic epoch. Mini-halos primarily cool their metal-free gas through molecular hydrogen line emission. Under intense irradiation in the Lyman-Werner band (LW, $E_{\gamma}=11.2-13.6 \, \mathrm{eV}$), \HH is photo-dissociated via the two-step Solomon process (see the original study by \citealt{Draine_1996}), so that cooling (i.e. the formation of stars and stellar mass black holes) is quenched \citep{Visbal_2014}. The UV specific intensity in the LW band, $J_\nu= J_{21}^* \times 10^{-21}$ erg s$^{-1}$cm$^{-2}$Hz$^{-1}$\,sr$^{-1}$, required for quenching is somewhat uncertain, but lies in the range $J_{21}^*=0.1-1$ (see, e.g., \citealt{Machacek_2001} and \citealt{Fialkov_2013}). 
When instead primordial, atomic-cooling halos ($T_{\rm vir}>10^4 \, \mathrm{K}$) are exposed to a LW flux of even higher intensity, $J_{\nu} > J_{\nu}^{\bullet}$ (\citealt{Loeb_Rasio_1994,Eisenstein_Loeb_1995,Begelman_2006,Lodato_Natarajan_2006, Regan_2009, Shang_2010,Johnson_2012, Agarwal_2012, Latif_2013}) the destruction of \HH molecules allows a rapid, isothermal collapse, initially driven by \ion{H}{1} Ly$\alpha$ line cooling, later replaced by two-photon emission. The precise value of $J_{\nu}^{\bullet}$ depends on several factors, but there is a general consensus that it should fall in the range $30 < J_{21}^{\bullet} < 1000$, depending on the spectrum of the sources \citep{Sugimura_2014}. Several theoretical works (\citealt{Bromm_Loeb_2003, Begelman_2006, Volonteri_2008, Shang_2010, Johnson_2012}) have shown that the result of this collapse is the formation of a Direct Collapse Black Hole (DCBH) of mass $M_\bullet \approx 10^{4-5} \, \mathrm{M_\odot}$, likely passing through the intermediate super-massive protostar phase, either directly collapsing into a compact object due to general relativistic instability (\citealt{Shibata_Shapiro_2002,Montero_2012}) or at the end of a very rapid evolution if the super-massive star reaches the Zero-Age Main Sequence (ZAMS, for a more detailed discussion see \citealt{Ferrara_2014}). Once formed, the subsequent accretion of the remaining gas from the parent halo leads to a further growth of the DCBH into a fully-fledged Intermediate Mass Black Hole (IMBH) of mass $M_\bullet \approx 10^{5-6} \, \mathrm{M_\odot}$. This scenario is confirmed by cosmological simulations, as those presented by \cite{Latif_2013}, who have shown that under the previous conditions (atomic-cooling halos irradiated by $J_{\nu}>J_{\nu}^{\bullet}$) very strong accretion flows up to $\approx 1 \, \mathrm{M_{\odot}} \, \mathrm{yr^{-1}}$ may take place. 
As calculations by \cite{Hosokawa_2013} and \cite{Schleicher_2013} suggest that the effect of radiative feedback from the accreting protostar is weak, due to its cool ($\approx 6000$ K) photosphere, it has been possible to safely extend the simulations to longer time scales \citep{Latif_2013b}, up to the formation of a $\sim 10^5 \, \mathrm{M_{\odot}}$ central condensation\footnote{Even higher masses can result if magnetic fields, suppressing fragmentation, are included \citep{Latif_2014}.}. In our work, we follow the accretion history onto a newly formed DCBH, with special emphasis on the dynamical and radiative properties of the inner parts of the parent halo, which provide most of the accretion material to the central black hole. In particular, we aim at clarifying a number of aspects, including (i) the accretion time scale; (ii) the time-dependent emitted luminosity; (iii) the final outcome of the accretion phase, with a progressive depletion of the gas or a final outburst; (iv) the fraction of the halo gas accreted by the black hole and that ejected in the outskirts by radiative feedback. Previous works have already attempted to describe the accretion process onto black holes of different sizes. \cite{Sakashita_1974} employed the method of similarity solutions to describe the time evolution of the accretion process in the optically thin regime (i.e. $\tau \lsim 1$, where $\tau$ is the optical depth). \cite{Tamazawa_1974}, instead, used a full radiative transfer approach to describe the process in the optically thick regime (i.e. $\tau \gsim 1$) assuming steady state, i.e. without any explicit time dependence in the accretion flow.
In contrast with these previous works, we aim at a full time-dependent description in the optically thick regime, similarly to the somewhat idealized approach used in \cite{Ricotti_2011_parametric, Ricotti_2012_DC, Ricotti_2013_luminosity, Johnson_2013}, but extending these studies in several aspects and including a more complete physical description of the accretion process. Recent works (\citealt{Alexander_2014, Volonteri_2014,Madau_2014}) have proposed the occurrence of brief, recurring but strongly super-critical accretion episodes (with rates even $50-100$ times larger than the Eddington limit) to explain the rapid black hole mass build-up at high redshifts. An early phase of stable super-critical quasi-spherical accretion onto the BHs was also proposed in \cite{Volonteri_2005}. Such large accretion rates may be sustainable in the so-called ``slim disk'' solution (\citealt{Begelman_1982,Paczynski_1982,Mineshige_2000,Sadowski_2009,Sadowski_2011}), an energy-advecting, optically thick accretion flow that generalizes the standard thin disk model \citep{Shakura_Sunyaev_1976}. In these radiatively inefficient models the radiation is trapped in the high-density gas and is advected inward by the accretion flow: as a consequence, the luminosity depends only mildly on the accretion rate and very large flows are sustainable. These works, while investigating the accretion flow at much smaller spatial scales, offer an interesting perspective on the implications of our efforts, as detailed in Sec. \ref{sec:disc_concl}. The present work is the first necessary step towards a precise prediction of the observable properties of IMBHs, whose existence has so far remained in the realm of theoretical, albeit physically plausible, speculation.
This effort seems particularly timely given the advent of powerful telescopes such as the James Webb Space Telescope (JWST), becoming available in the next few years, and high-$z$ ultra-deep surveys like the HST Frontier Fields \citep{Coe_2014}. In practice, we aim at determining the Spectral Energy Distribution of IMBHs in order to build diagnostic tools able to uniquely identify these sources among the other high-$z$ ones. If successful, this strategy would not only represent a breakthrough in the study of the first luminous objects in the Universe, but it would also shed some light on the puzzles posed by the formation of SMBHs (see e.g. \citealt{Fan_2001, Mortlock_2011,Petri_2012}) and on the excess recently found in the power spectrum of the cosmic Near Infrared Background fluctuations (for an overview, see \citealt{Yue_2013}). These issues will be discussed in more detail in Sec. \ref{sec:disc_concl}. The outline of this paper is as follows. In $\S 2$ we describe the physics and equations of the radiation-hydrodynamic problem we aim at solving, along with the numerical implementation and initial conditions. In $\S 3$ we present the results of our simulations, providing a full picture of the accretion and feedback processes. Finally, in $\S 4$ we provide some discussion and the conclusions. The Appendix contains some more technical aspects of the simulations. Throughout, we adopt recent Planck cosmological parameters \citep{Planck_Parameters_2013}: $(\Omega_m, \Omega_{\Lambda}, \Omega_b, h, n_s, \sigma_8 )= (0.32, 0.68, 0.05, 0.67, 0.96, 0.83)$. \section{Physical and numerical implementation} \label{sec:methods} The present study is based on a series of radiation-hydrodynamic simulations. Our code is designed to execute a fully consistent treatment of the uni-dimensional spherically-symmetric hydrodynamic (HD) equations and a simplified version of the Radiative Transfer (RT) equations.
The simulated spatial region is large enough to allow us to neglect deviations from spherical symmetry, e.g. the presence of an accretion disk which may form at much smaller scales. As detailed in \cite{deSouza_2013}, the primordial halos which generated the DCBHs rotate very slowly, with a mean value of the spin parameter $\lambda= Jc/(GM_h^2)=0.0184$, where $J$ is the angular momentum of the halo with mass $M_h$ and $c$ is the speed of light. Under these conditions, deviations from the spherical symmetry become important at the centrifugal radius $R_c =\lambda^2 G M_h^2/(c^2M_{\bullet}) \sim 10^{-6} \, \mathrm{pc} \sim 100 \, R_{\rm S}$, with $R_{\rm S}$ denoting the Schwarzschild radius: this value is $\sim 10^5$ times smaller than the internal boundary of our simulations. Our resolution is designed to resolve the Bondi radius (or gravitational radius, see \citealt{Bondi_1952}): \begin{equation} R_B = \frac{2GM_{\bullet}}{c_{s(\infty)}^2} = 1.5 \, \mathrm{pc} = 5\times10^{-4} \, R_{\rm vir} = 10^{8} \, R_{\rm S} \end{equation} where $R_{vir}$ is the virial radius of the halo and $c_{s(\infty)}$ is the sound speed at large distances from the accretion boundary, defined as: \begin{equation} c_{s(\infty)} = \sqrt{\gamma \frac{p(R_B)}{\rho(R_B)}} \sim 15 \, \mathrm{km \, s^{-1}} \end{equation} Here, $\gamma = 5/3$ is the ratio of specific heats, $p(R_B)$ and $\rho(R_B)$ are the gas pressure and mass density at the Bondi radius, respectively. The Bondi radius largely varies during our simulations, but, for clarity reasons, all distances in the plots are expressed in terms of its initial value $R_B(t=t_0)$. Interestingly, even Adaptive Mesh Refinement cosmological simulations cannot resolve this spatial radius (see, e.g. \citealt{Pallottini_2013} where the maximum resolution is $\sim 5 \, \mathrm{kpc}$) and therefore they have to resort to some kind of sub-grid prescriptions for the black hole growth. 
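As a quick, order-of-magnitude check of the characteristic scales quoted above (a back-of-the-envelope sketch in CGS units; the inputs $M_\bullet = 10^5\,\mathrm{M_\odot}$, $c_{s(\infty)} \simeq 15\,\mathrm{km\,s^{-1}}$ and $\rho(R_B) \sim 3\times 10^{-19}\,\mathrm{g\,cm^{-3}}$ are the values used in the text):

```python
import math

G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33    # solar mass, g
PC = 3.086e18       # parsec, cm
C = 2.998e10        # speed of light, cm s^-1
YR = 3.156e7        # year, s

M_bh = 1e5 * M_SUN            # initial DCBH mass
c_s = 15e5                    # sound speed at the Bondi radius, cm s^-1
rho_B = 3e-19                 # gas density at the Bondi radius, g cm^-3

R_B = 2 * G * M_bh / c_s**2   # Bondi (gravitational) radius: pc-scale
R_S = 2 * G * M_bh / C**2     # Schwarzschild radius
t_ff = 1 / math.sqrt(G * rho_B)   # free-fall time at R_B: ~ 10^5 yr

R_B_pc = R_B / PC
R_ratio = R_B / R_S           # = (C / c_s)^2, i.e. ~ 10^8 as in the text
t_ff_yr = t_ff / YR
```

The small discrepancies with respect to the quoted values (e.g. $R_B = 1.5$ pc) reflect the rounding of the inputs; the orders of magnitude agree.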
In this context, the usual methodology to deal with black holes is to suppose that they irradiate with luminosities $L \approx L_{Edd}$ where: \begin{equation} L_{Edd} = 3.2 \times 10^4 \, \left(\frac{M_{\bullet}}{\rm M_{\odot}}\right) \, {\rm L_{\odot}} \label{eq:L_edd} \end{equation} is the Eddington luminosity. The domain of our simulations spans approximately from $0.1 \, R_B$ to $2 \, R_B$. This spatial dynamic range is designed to encompass with the highest possible resolution the spatial region of interest for the full simulation, i.e. from the radius of gravitational influence ($\sim R_B$) down to the smallest radius ($\sim 0.2 \, R_B$) reached by the propagating density wave (see Sec. \ref{sec:results}). This spatial range has been successfully tested to verify the convergence of the main quantities derived in this work. The natural time scale of the problem is given by the free-fall time at the Bondi radius: \begin{equation} t_{ff} \simeq \frac{1}{\sqrt{G\rho(R_B)}} \simeq 10^5 \, \mathrm{yr}; \label{tff} \end{equation} Here $\rho(R_B) \sim 3 \times 10^{-19} \, \mathrm{g \, cm^{-3}}$ is the mass density at the Bondi radius at the beginning of the simulations. The time scale for the full RT simulation is about $\sim 10^3 \, t_{ff}$, due to the presence of radiation pressure which slows down the collapse. In the following subsections we describe the physics included in the simulations, separating the HD and the RT parts for clarity reasons. Some more technical aspects (boundary conditions, two-stream approximation, heating and cooling terms and photon diffusion) are deferred to the Appendix. \subsection{Hydrodynamics} We solve the standard system of ideal, non-relativistic Euler's equations, i.e. neglecting viscosity, thermal conduction and magnetic fields, for a primordial (H-He) composition gas, spherically accreting onto a central DCBH, supposed at rest; the angular momentum of the gas with respect to the central object is zero. 
The code evolves in time the following system of conservation laws for mass, momentum and energy, solving for the radial component: \begin{equation} \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0 \label{mass} \end{equation} \begin{equation} \frac{\partial \mathbf{q}}{\partial t} + \nabla \cdot (\mathbf{q} \otimes \mathbf{v})= -(\gamma -1)\nabla E -\nabla p_{rad} +\mathbf{g}\rho \label{momentum} \end{equation} \begin{equation} \frac{\partial E}{\partial t} + \nabla \cdot (E \mathbf{v})= -(\gamma - 1) E \nabla \cdot \mathbf{v} + (H - C) \label{energy} \end{equation} where $\rho$ is the gas mass density, $\mathbf{v}$ is the gas velocity (taken to be positive in the outward direction), $\mathbf{q}=\rho \mathbf{v}$ is the momentum density and $E$ is the energy density. Moreover, $p_{rad}$ is the additional radiation pressure, $\mathbf{g}(r)$ is the gravitational field generated by the central black hole and $(H-C)$ are the heating and cooling terms. In the following, we drop the vector notation, since we only consider the radial components of the previous quantities. The total energy density $E$ is given by the relation: \begin{equation} E = \rho \epsilon + \frac{\rho v^2}{2} \end{equation} where $\epsilon$ is the specific gas thermal energy: \begin{equation} \epsilon = \frac{1}{\gamma - 1} \frac{1}{\mu} RT \end{equation} Here, $T$ is the gas temperature, $R$ is the gas constant and $\mu$ is the mean molecular weight which, for a primordial gas with helium fraction $Y_p = 0.2477$ \citep{Peimbert_2007} and no metals ($Z_p=0$), equals $\mu=1.15$.
The gas thermal pressure is given by the usual equation for ideal gases: \begin{equation} P_g = \frac{1}{\mu} \rho R T, \label{EOS} \end{equation} while the gravitational acceleration is \begin{equation} g(r,t) = \frac{GM_{\bullet}(t)}{r^2} \end{equation} The value of the black hole mass $M_{\bullet}(t)$ changes with time, due to the accretion, according to the following set of rules, where $\dot{M}_{\bullet}=4 \pi r^2 \rho |v|$: \begin{equation} \begin{cases} \dot{M}_{\bullet}(t) \neq 0 \, \, \, \Leftrightarrow \, \,\, v(r=r_{min},t)<0 \\ M_{\bullet}(t) = M_0 + (1-\eta)\int_0^t \! \dot{M}_{\bullet} \, \mathrm{d}t \end{cases} \end{equation} where $M_0 = 10^5 \, {\rm M_{\odot}}$ is the initial mass we adopt for the DCBH, $\dot{M}_{\bullet}$ denotes the time derivative of $M_{\bullet}$ and $\eta$ is the efficiency factor for mass-energy conversion (see the RT subsection below for more details). The system of Eqs. \ref{mass}-\ref{energy} is solved with a linearized Roe Riemann solver \citep{Roe_1981}, a method based on Roe's matrix characteristic decomposition, which offers superior accuracy in limiting the numerical diffusion of hydrodynamic quantities. The time-stepping algorithm is a classical third-order Runge-Kutta scheme. The time step is computed from the well-known CFL condition: \begin{equation} v \, \frac{\Delta t}{\Delta r} = C< C_{max} \end{equation} where $C$ is the dimensionless Courant number and $C_{max} =0.8$. \subsection{Radiative Transfer} In our simulations, all radiation-related quantities are integrated over frequencies: this treatment serves as a good approximation for the main radiative features. Our RT modeling builds upon the work by \cite{Tamazawa_1974} for the general theoretical framework; we also exploit the computationally effective scheme by \cite{Novak_2012}. In the following we first introduce the relevant RT equations and then discuss their numerical implementation.
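Before moving to the RT equations, the CFL time-step restriction quoted in the Hydrodynamics part above can be sketched as follows (the cell spacing and velocities below are illustrative placeholders, not values from our code; in practice the fastest signal speed $|v|+c_s$ is used rather than the flow speed alone):

```python
# Minimal sketch of the CFL time-step restriction: the step is
# limited so that no signal crosses more than C_max of a cell per
# step. Grid spacing and speeds are illustrative placeholders.
def cfl_timestep(dr, speeds, c_max=0.8):
    """Return the largest dt satisfying |v| dt / dr <= C_max in every cell."""
    v_max = max(abs(v) for v in speeds)
    return c_max * dr / v_max

# Cell width 1e15 cm, fastest (inflowing) signal speed 100 km/s:
dt = cfl_timestep(dr=1e15, speeds=[-1.0e7, 5.0e6, -2.0e6])
print(dt)   # -> 8e7 s
```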
In the usual notation, $J$ is the intensity of the radiation field, $S$ is the source function, $H$ ($K$) is the first (second) moment of intensity. All these quantities are functions of time and position. The closure relation between the second moment of intensity $K$ and the intensity $J$ is given by the so-called Eddington factor ${\cal F}$ (\citealt{Hummer_1971, Tamazawa_1974}), here defined as: \begin{equation} {\cal F} = \frac{K}{J} = \frac{1 + \tau / \tau_0}{1 + 3 \tau / \tau_0} \end{equation} where $\tau_0$ is the optical depth at which the Eddington factor becomes equal to $1/2$. The total source luminosity $L$ and $H$ are related at any point of the spatial domain by $L = 16 \pi^2 r^2 H$. In turn, $L$ depends on the gas accretion rate $\dot{M}_{\bullet}$ onto the IMBH (for $v<0$): \begin{equation} L_{\bullet} = \eta c^2 \dot{M}_{\bullet} = \eta c^2 \left( 4 \pi r^2 \rho |v| \right) \end{equation} where we employ an efficiency factor $\eta = 0.1$ (see e.g. \citealt{Yu_2002, Madau_2014}). Generally speaking, the efficiency factor ranges from $\eta = 0.057$ for a Schwarzschild (i.e. non-rotating) black hole to $\eta=0.32$ for a maximally rotating object (see \citealt{Thorne_1974}). We then define $L_{\bullet}$ as the luminosity evaluated at the innermost grid cell, located at $r=r_{min}$: \begin{equation} \begin{cases} L_{\bullet} \equiv \eta c^2 \left[ 4 \pi r_{min}^2 \rho(r_{min}) |v(r_{min})| \right] & \text{if } v(r_{min}) <0 \\ L_{\bullet} \equiv 0 & \text{if } v(r_{min}) \geq 0 \end{cases} \label{L_BH_definition} \end{equation} In addition, we define $f_{Edd} \equiv \dot{M}_{\bullet}/\dot{M}_{Edd}$: the accretion is super-critical (super-Eddington) if $f_{Edd}>1$. Note that $f_{Edd} \equiv \dot{M}_{\bullet}/\dot{M}_{Edd} = L/L_{Edd}$ holds only for a fixed value of the efficiency factor $\eta$.
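The piecewise luminosity prescription of Eq. \ref{L_BH_definition} can be sketched as follows (a minimal illustration in cgs units; the input density and velocity values are arbitrary):

```python
# Sketch of Eq. (L_BH_definition): the innermost cell radiates the
# accretion luminosity only while gas flows inward (v < 0).
import math

C_LIGHT = 2.998e10  # speed of light [cm s^-1]

def l_bh(r_min, rho, v, eta=0.1):
    """L = eta c^2 (4 pi r_min^2 rho |v|) for v < 0, else 0 [erg/s]."""
    if v >= 0.0:
        return 0.0
    mdot = 4.0 * math.pi * r_min**2 * rho * abs(v)   # [g/s]
    return eta * C_LIGHT**2 * mdot

# Outflowing gas at the inner boundary -> no emitted luminosity:
print(l_bh(5e17, 1e-19, +1e6))          # -> 0.0
# Inflowing gas -> positive accretion luminosity:
print(l_bh(5e17, 1e-19, -1e6) > 0.0)    # -> True
```

This on/off behavior is what makes the accretion intermittent once radiation pressure is strong enough to reverse the flow at $r_{min}$.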
The acceleration caused by radiation pressure is then: \begin{equation} a_{rad} = \frac{\kappa(\rho,T) L}{4 \pi r^2 c} \label{a_rad} \end{equation} where $\kappa(\rho,T)$ is the total opacity of the gas: in our simulations the Thomson electron scattering, bound-free and $H^-$ opacity terms are included. The Thomson opacity is calculated following the temperature-dependent prescription given in \cite{Begelman_2008}, namely: \begin{equation} \kappa_T(T) = 0.2 (1+\mu) \frac{1}{1+(T/T_{*})^{-\beta}} \, \mathrm{cm^2 \, g^{-1}}, \end{equation} with $\beta=13$. Below $T\sim T_{*} = 8000 \, \mathrm{K}$, the opacity rapidly falls due to the decrease of the ionized fraction: as a consequence, the effectiveness of radiation pressure is also quenched. The bound-free opacity is computed through the Kramers approximation for a metal-free gas ($Z=0$) (see e.g. \citealt{Introduction_Modern_Astrophysics}): \begin{equation} \kappa_{bf}(\rho,T) = 4\times10^{22} (1+X_p) \rho T^{-3.5} (1-\chi_{ion}) \end{equation} where $X_p$ is the primordial hydrogen fraction. The opacity due to the negative hydrogen ion $H^-$ is computed as (see again e.g. \citealt{Introduction_Modern_Astrophysics}): \begin{equation} \kappa_{H^-}(\rho,T) = 3.5\times10^{-27} \rho^{0.5} T^{7.7} (1-\chi_{ion}) \end{equation} For a metal-free gas ($Z=0$), within the ranges of mass densities, temperatures and ionized fractions in our spatial domain, the dominant source of opacity is the Thomson one. We define the optical depth $\tau$, as seen by an external observer, as: \begin{equation} \tau(R) \equiv \int_{R}^{r_{max}} \kappa \rho \, \mathrm{d}r \end{equation} This is the quantity reported in the graphs. As we will see, due to feedback effects, accretion onto the central object occurs in an intermittent manner.
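The sharp quenching of the Thomson opacity below $T_{*}$ can be illustrated with a minimal sketch of the prescription above ($\mu = 1.15$, $T_{*} = 8000\,$K and $\beta = 13$ as in the text; this is an illustration, not the simulation code):

```python
# Sketch of the temperature-dependent Thomson opacity prescription
# quoted above: kappa falls off sharply below T_* = 8000 K as the
# gas recombines, quenching the effectiveness of radiation pressure.
def kappa_thomson(T, mu=1.15, T_star=8000.0, beta=13.0):
    """kappa_T = 0.2 (1+mu) / (1 + (T/T_star)^(-beta)), in cm^2/g."""
    return 0.2 * (1.0 + mu) / (1.0 + (T / T_star) ** (-beta))

print(kappa_thomson(1e5))   # hot, ionized gas: ~0.43 cm^2/g
print(kappa_thomson(4e3))   # cool gas: opacity suppressed by ~1e4
```

With $\beta = 13$ the transition is very steep: halving the temperature below $T_{*}$ suppresses the opacity by a factor $2^{13} \approx 8000$.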
It is then useful to introduce the duty-cycle, defined as the fraction of time spent accreting within a given time frame of duration $T_{tot}$: \begin{equation} {\cal D} \equiv \frac{\Delta t_{accr}}{T_{tot}} \end{equation} The instantaneous value of the duty-cycle is computed by dividing the total integration time $T_{tot}$ into slices and computing the duty-cycle with respect to each slice. The equations for steady, spherically-symmetric transfer of radiation have been derived, e.g., by \cite{Castor_1972} and \cite{Cassinelli_Castor_1973}. For our problem, it is appropriate to assume steady-state RT equations, since the light-crossing time at the Bondi radius, $R_B/c \approx 5 \, \mathrm{yr}$, is negligible with respect to the free-fall time, see Eq. \ref{tff}. The full RT equations are: \begin{align} \nonumber &\frac{v}{c} \frac{d}{dr} \left( \frac{4 \pi J}{\rho} \right) + 4 \pi K \frac{v}{c} \frac{d}{dr} \left( \frac{1}{\rho} \right) - \frac{4 \pi}{\rho} \frac{v}{c} \left(\frac{3K - J}{r}\right) = \\ &=-\frac{1}{4 \pi \rho r^2} \frac{dL}{dr} -4 \pi \kappa(J-S) \label{eq:full_RT_1} \end{align} \begin{align} \frac{dK}{dr} + \left(\frac{3K - J}{r}\right) + \frac{v}{c}\frac{dH}{dr} -\frac{2}{r} \frac{v}{c} H -\frac{2}{\rho} \frac{v}{c} \frac{d\rho}{dr} H= -\rho \kappa H \label{eq:full_RT_2} \end{align} The term $\kappa(J-S)$ handles the gas heating and cooling, as do the $(H-C)$ terms in the energy equation, see Eq. \ref{energy}. These equations are correct up to first order in $\beta = v/c$ and are suitable for high-speed accretion flows, where $\beta$ is not negligible. In the previous equations, the transition from the optically thin to the optically thick regime is described by the density-dependent Eddington factor ${\cal F}$ (decreasing from $1$ to $1/3$ with increasing optical depth) and by the term $\sim \rho \kappa(\rho, T)$.
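The duty-cycle estimator introduced above can be sketched as follows (a toy illustration on a piecewise-constant accretion history; the actual simulation uses the time slices described in the text):

```python
# Sketch of the duty-cycle: the fraction of a time frame of duration
# T_tot spent actively accreting. The time series below is a toy
# example, not simulation output.
def duty_cycle(times, accreting, t_start, t_end):
    """Fraction of [t_start, t_end] spent accreting.

    times: sorted sample times delimiting intervals;
    accreting: one boolean per interval (len(times) - 1 entries)."""
    total = t_end - t_start
    active = sum(t1 - t0 for t0, t1, a in
                 zip(times[:-1], times[1:], accreting) if a)
    return active / total

# Toy history: accreting during the 1st and 3rd of four equal intervals.
t = [0.0, 1.0, 2.0, 3.0, 4.0]
flags = [True, False, True, False]
print(duty_cycle(t, flags, 0.0, 4.0))   # -> 0.5
```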
\cite{Novak_2012} presented several computationally-effective non-relativistic RT formulations which yield the correct behavior both in the optically thin and in the optically thick regime. These formulations are convenient because they allow one to obtain accurate results with a lower computational cost. We refer the interested reader to the full derivation in \cite{Novak_2012} and write down only the final formulation. \begin{equation} \frac{dL}{dr} = 4 \pi r^2 (\dot{E} - 4 \pi \rho \kappa J) \label{N1} \end{equation} \begin{equation} \frac{dJ}{dr} = -\frac{2 J w}{r} - \frac{(3-2w) \rho \kappa L}{16 \pi^2 r^2} \label{N2} \end{equation} In these expressions, $w$ is an interpolation parameter which controls the transition from the optically thin to the optically thick regime and is a function of position: it ranges from $w=0$ (for an isotropic radiation field, i.e. in the optically thick regime) to $w=1$ in the optically thin regime. Furthermore, $\dot{E}$ is the source term, playing the same role as $S$ in the relativistic treatment (see the Appendix for a more detailed description). Eq. \ref{N1} is derived from the full RT Eqs. \ref{eq:full_RT_1}-\ref{eq:full_RT_2} by neglecting the $v/c$ terms and using $\dot{E}$ instead of $S$. Eq. \ref{N2}, instead, is derived by using the specific values of the Eddington factor ${\cal F} \equiv K/J$ and the interpolation factor $w$ in each regime: (${\cal F}=1$, $w=1$) for the optically thin and (${\cal F}=1/3$, $w=0$) for the optically thick one. Finally, we need to specify the boundary conditions. As in the study of stellar interiors, the second-order differential equation for $L(r)$, or the two first-order ODEs in $L(r)$ and $J(r)$ (Eqs. \ref{N1}-\ref{N2}), requires two boundary conditions, at the inner and outer boundaries. The innermost cell of the grid radiates the luminosity produced by the accretion flow onto the black hole (see Eq. \ref{L_BH_definition}), so that $L(r_{min}) \equiv L_{\bullet}$.
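The right-hand sides of Eqs. \ref{N1}-\ref{N2} can be sketched as below; $\rho$, $\kappa$, $\dot{E}$ and the interpolation parameter $w$ would be supplied by the hydro solver at each radius, and are placeholder arguments here. In the optically thin limit ($w=1$, no absorption or emission) the sketch correctly conserves $L$ and makes $J$ fall off as the free-streaming $1/r^2$ law.

```python
# Sketch of the right-hand sides of Eqs. N1-N2 (the Novak et al.
# 2012 formulation). All arguments are local values that a solver
# would supply at radius r; here they are placeholders.
import math

def rt_rhs(r, L, J, rho, kappa, Edot, w):
    """Return (dL/dr, dJ/dr) following Eqs. N1-N2."""
    dL_dr = 4.0 * math.pi * r**2 * (Edot - 4.0 * math.pi * rho * kappa * J)
    dJ_dr = (-2.0 * J * w / r
             - (3.0 - 2.0 * w) * rho * kappa * L
               / (16.0 * math.pi**2 * r**2))
    return dL_dr, dJ_dr

# Optically thin limit (w=1) with kappa = Edot = 0:
dL, dJ = rt_rhs(r=1e18, L=1e43, J=1.0, rho=1e-19, kappa=0.0, Edot=0.0, w=1.0)
print(dL)            # -> 0.0 : luminosity is conserved
print(dJ * 1e18)     # -> -2.0 : dJ/J = -2 dr/r, i.e. J ~ r^-2
```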
Far from the black hole, the radiation field is expected to resemble that of a point source, because scattering becomes negligible, so that: \begin{equation} J(r_{max}) \equiv \frac{L(r_{max})}{16 \pi^2 r_{max}^2} \end{equation} In order to solve the set of boundary-value differential Eqs. \ref{N1}-\ref{N2}, the so-called shooting method with Newton-Raphson iteration \citep{Numerical_Recipes} works well up to optical depths of a few. Beyond this limit, the shooting method becomes unstable and it is necessary to switch to a more powerful relaxation method \citep{Numerical_Recipes}. However, following the evolution of the system for physically relevant time scales ($\sim 100 \, \mathrm{Myr}$) requires such a large number of steps that even the relaxation method becomes computationally unviable. To overcome this problem, we follow the two-stream approximation method outlined in Appendix B of \cite{Novak_2012} and sketched in our Appendix. \begin{table*} \begin{minipage}{170mm} \begin{center} \caption{Simulation parameters for the two test simulations (T1 and T2) and for the full one (FS), which includes a complete solution of the radiative transfer.} \label{tab:outline} \begin{tabular}{|c|c|c|c|} \hline\hline Parameter & T1 & T2 & FS\\ \hline $r_{min}$ [pc] & $0.16$ & $0.16$ & $0.16$ \\ $r_{max}$ [pc] & $3.2$ & $3.2$ & $3.2$ \\ Integration time [yr] & $3.2 \times 10^{4}$ & $3.2 \times 10^{4}$ & $1.4 \times 10^{8}$ \\ Radiation pressure & no & yes & yes \\ Energy equation & adiabatic & adiabatic & non-adiabatic \\ \hline \end{tabular} \end{center} \end{minipage} \end{table*} \subsection{Initial Conditions} We model the spherically-symmetric gas accretion onto a seed black hole of initial mass $M_0=10^5 \, \mathrm{M}_{\odot}$, assumed at rest at the center of a dark matter halo of virial temperature $T_{vir} \sim 10^4 \, \mathrm{K}$ and total mass (dark matter and baryons) $M_h = 6.2 \times 10^7 \, \mathrm{M}_{\odot}$ at redshift $z=10$.
For $r \ll R_{\rm vir}$ most of the mass is baryonic: for this reason we assume that the gas mass contained in our computational domain ($r<2R_B \ll R_{\rm vir}$) is $M_{gas} = (\Omega_b/\Omega_m) \, M_h = 9.6 \times 10^6 \, \mathrm{M_{\odot}}$ (similarly to what has been done in \citealt{Latif_2013}, see their Fig. 5). This assumption is reasonable since, during the formation process of the DCBH, most of the baryonic mass of the halo is expected to fall within its gravitational influence (the halo has negligible spin, so no centrifugal barrier forms). We assume that the gas follows the density profile derived from the simulations in \cite{Latif_2013}, which is well approximated by the following functional form: \begin{equation} \rho(r) = \frac{\rho_0}{1+(r/a)^2} \end{equation} where $a$ is the core radius, estimated from Fig. 1 of \cite{Latif_2013} as $a \sim 270 \, \mathrm{AU} \sim 1.3 \times 10^{-3} \, \mathrm{pc}$: this scale is too small to be resolved by our simulations. Normalization to the gas mass content, $M_{gas}$, gives the central density value $\rho_{0} \sim 10^{-11} \, \mathrm{g\,cm^{-3}}$. From the initial prescription for the density field, and assuming an isothermal profile with $T= 10^4 \, \mathrm{K}$ (as in \citealt{Latif_2013}, leading to a very small value, $\sim 10^{-3}$, of the ionized fraction throughout the domain), the initial conditions for the pressure field are derived directly from the equation of state, Eq. \ref{EOS}. The initial conditions for the radial velocity are set to a very small, spatially constant value $v_0 <0$. After a very brief ($ \ll t_{ff}$) transient, the system adjusts to a velocity profile consistent with rapid accretion across the inner boundary of the grid. In addition, the initial density profile is also rapidly modified within the Bondi radius, while outside the gravitational influence of the black hole it remains almost unaltered.
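The normalization quoted above can be checked numerically: requiring that the cored profile contains $M_{gas} = 9.6\times10^6\,\mathrm{M_{\odot}}$ within $r < 2R_B \approx 3\,$pc indeed gives $\rho_0 \sim 10^{-11}\,\mathrm{g\,cm^{-3}}$ (the sketch below uses a simple trapezoidal quadrature; the outer radius is an assumption based on the domain size quoted in the text).

```python
# Numerical check of the central density rho0: normalize the cored
# profile rho(r) = rho0 / (1 + (r/a)^2) to a total gas mass M_gas
# within r < r_out, via trapezoidal integration of 4 pi r^2 rho(r).
import math

M_SUN = 1.989e33   # solar mass [g]
PC = 3.086e18      # parsec [cm]

def rho0_from_mass(m_gas_g, a_cm, r_out_cm, n=100000):
    """rho0 such that the enclosed mass at r_out equals m_gas_g."""
    dr = r_out_cm / n
    integral = 0.0
    for i in range(n):
        r0, r1 = i * dr, (i + 1) * dr
        f0 = 4.0 * math.pi * r0**2 / (1.0 + (r0 / a_cm) ** 2)
        f1 = 4.0 * math.pi * r1**2 / (1.0 + (r1 / a_cm) ** 2)
        integral += 0.5 * (f0 + f1) * dr
    return m_gas_g / integral

rho0 = rho0_from_mass(9.6e6 * M_SUN, 1.3e-3 * PC, 3.0 * PC)
print(rho0)   # ~1e-11 g/cm^3, as quoted in the text
```

Since $a \ll r_{out}$, the integrand is nearly constant ($\approx 4\pi a^2 \rho_0$) over most of the domain, so $\rho_0 \approx M_{gas}/(4\pi a^2 r_{out})$ gives the same answer analytically.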
The spatial grid for all simulations extends from $r_{min} = 5.0\times 10^{17} \, \mathrm{cm} \approx 0.16 \, \mathrm{pc}$ to $r_{max} = 1.0\times 10^{19} \, \mathrm{cm} \approx 3.2 \, \mathrm{pc}$, with $3000$ logarithmically spaced bins. This range is optimized for our computational target, allowing us to follow the entire displacement of the density wave while extending out to the Bondi radius. In our work we do not have sufficient spatial resolution to resolve the $\mathrm{H \, II}$ region produced by the accretion-generated radiation (but see e.g. \citealt{Milosavljevic_2009}, where the $\mathrm{H \, II}$ region is resolved). A simple estimate of the Str\"{o}mgren radius in our case yields a dimension of $\sim 0.03 \, \mathrm{pc}$, in very good agreement with the radius reported in \cite{Ricotti_2011_parametric}. For further details about the simulations, e.g. the boundary conditions, see the Appendix. \section{Results} \label{sec:results} We present our results starting from the two test simulations T1 (pure hydro) and T2 (hydro + radiation pressure), in order to highlight some key physical features of the problem. Next, we turn to the analysis of the full simulation, FS, which in addition includes the complete solution of the radiative transfer; the FS simulation contains our main findings. The simulation parameters used in the three runs are reported in Table \ref{tab:outline}. The total integration time $T_{tot}$ for runs T1 and T2 has been chosen in order to show the most important features of their radiation-hydrodynamic evolution, while for the FS simulation it corresponds to the total history of the system. \subsection{Test simulations} \subsubsection{T1: Pure hydro} Simulation T1 is a purely adiabatic hydrodynamic simulation.
The only two forces acting on the gas are produced by the gravitational field of the black hole and by the pressure gradient; therefore, it corresponds to the ``classical" Bondi solution \citep{Bondi_1952}, with the only exception that the limited gas reservoir prevents a genuine steady state from being established. The spatial profiles predicted by T1 are shown in Fig. \ref{fig:HD_spatial}. \begin{figure*} \includegraphics[angle=0,width=0.45\textwidth]{HYDRO_Density.pdf} \includegraphics[angle=0,width=0.45\textwidth]{HYDRO_Velocity.pdf}\\ \vspace{-0.1cm} \includegraphics[angle=0,width=0.45\textwidth]{HYDRO_Temperature.pdf} \includegraphics[angle=0,width=0.45\textwidth]{HYDRO_ionized_fraction.pdf}\\ \vspace{-0.1cm} \includegraphics[angle=0,width=0.45\textwidth]{HYDRO_Accretion_Rate.pdf} \includegraphics[angle=0,width=0.45\textwidth]{HYDRO_Luminosity_out.pdf}\\ \caption{Spatial profiles for the T1 simulation: the total integration time is $T_{tot} \approx 3.2 \times 10^{4}\, \mathrm{yr}$. The purple line labeled ``IC" represents the initial conditions; where this line is not present, the corresponding initial condition cannot be shown on the plot. The colored lines correspond to different times of the simulation, equally spaced as $t_i = i \, \Delta t$ with $\Delta t = 3200 \, \mathrm{yr}$ and $i=1, \, ... \, 10$ (for clarity, only the labels $i=1$ and $i=10$ are shown on the plot). All horizontal axes are in units of $R_B$. The density panel reports the classical Bondi solution with a black dot-dashed line. The accretion rate is the mass flux at a given radius and it is plotted with the velocity sign, i.e.
a positive (negative) value corresponds to an outflow (inflow).} \label{fig:HD_spatial} \end{figure*} The evolution occurs over a free-fall time scale, $t_{ff} \sim 3.2 \times 10^4 \, \mathrm{yr}$, during which the system progressively approaches the Bondi solution, reported in the density panel as a black dot-dashed line, in which the gas density scales as: \begin{equation} \rho(r) \propto \left( \frac{r}{R_B} \right) ^{-3/2} \end{equation} This scaling cannot be sustained for $t \gg t_{ff}$, because the gas reservoir is limited (this is not the case in the classical Bondi solution), and it ceases to be valid near the Bondi radius, where the gravitational influence of the black hole terminates. Due to the absence of radiation pressure, the inner region is progressively emptied: the density near the inner boundary drops by a factor $\sim 5$ during the simulation time and the effect propagates outward, up to the Bondi radius. The velocity of the gas increases with time as well, stabilizing at a value of order $\sim -100 \, \mathrm{km \, s^{-1}}$ at the inner boundary, the result of an equilibrium between the gravitational acceleration and the decreasing thermal pressure of the gas. The temperature of the infalling gas increases, reaching peaks of $4 \times 10^4 \, \mathrm{K}$ in the inner regions. The temperature profile is reflected in the behavior of the ionized fraction (see the Appendix for the relevant equations), which is smaller than $1$ only where the temperature drops below the $\sim 2 \times 10^4 \, \mathrm{K}$ level. The small jumps visible at the outer boundary of the temperature spatial profile are due to the different rates of change of density and pressure (recall that $T \propto p/\rho$, as in Eq. \ref{EOS}).
The gas outside the sphere of gravitational influence of the black hole is swept away by the thermal pressure ($dp/dr < 0$ in the entire spatial domain, so that the related force always points outward), causing an abrupt change in the spatial scaling of pressure and density. This effect is reflected most dramatically in the ionized fraction, since this quantity is very sensitive to modifications of the temperature around the $\sim 10^4 \, \mathrm{K}$ level. These effects are unimportant for the overall evolution of the system and are visible also in the T2 simulation. The flow accretes at strongly super-Eddington rates at all times, reaching peaks of $f_{Edd} \equiv \dot{M}_{\bullet}/\dot{M}_{Edd} \sim 2000$: the Eddington limit does not apply in this case, due to the absence of radiation pressure. The accretion rate progressively decreases due to the density drop: this causes the slow reduction of the luminosity at the innermost cell, clearly visible in the bottom-right panel of Fig. \ref{fig:HD_spatial}. The emitted luminosity is obscured by high column densities at the beginning of the simulation, while the density drop decreases the optical depth with time, down to a value $\sim 6$. The spatial profile of the emitted luminosity is described by Eq. \ref{N1} (and, more specifically, by Eq. \ref{N3} in the Appendix). The IMBH mass at the end of the T1 simulation is $M_{\bullet} \approx 2.0\times10^5 \, \mathrm{M}_{\odot}$. \subsubsection{T2: Adding radiation pressure} In the T2 simulation an outward radiative force, corresponding to a fixed (i.e. not tied to $\dot{M}_{\bullet}$) luminosity $\hat{L} = 2 \times 10^{43} \, \mathrm{erg \, s}^{-1}$, is added to the gravitational force and to the pressure gradient, while the energy equation is still adiabatic. The radiative force is active only when the black hole is accreting, i.e. when $v(r_{min})<0$.
The aim of the T2 simulation is to show, in a simplified way, the effect of radiation pressure on the gas. Fig. \ref{fig:luminosity_fix} compares the fixed luminosity employed in T2 with the luminosity resulting from the T1 simulation, computed through the accretion rate $\dot{M}_{\bullet}$ at $r=r_{min}$ with Eq. \ref{L_BH_definition}, but not included in T1. The latter is $\sim 3$ orders of magnitude larger, due to the absence of radiation pressure quenching the accretion flow. The Eddington luminosity is also shown for comparison, its progressive increase being due to the change in $M_{\bullet}$. The value of $\hat{L}$ is set so as to be larger than the corresponding $L_{Edd}$ at all times. The radiation pressure can modify the accretion flow in two ways. If $f_{Edd} \equiv \dot{M}_{\bullet}/\dot{M}_{Edd}<1$, the effect is a decrease of the accretion rate $\dot{M}_{\bullet}$. Denoting by $\dot{M}_{T1}$ the accretion rate without any radiative force (i.e. the one in the T1 simulation) and by $\dot{M}_R$ the accretion rate with the addition of a sub-Eddington radiation pressure, it is easy to show that the following relation holds: \begin{equation} \dot{M}_R = \dot{M}_{T1} \sqrt{1-f_{Edd}} \end{equation} If, instead, $f_{Edd} \geq 1$, the accretion is intermittent (i.e. ${\cal D}<1$): the infalling gas is swept away by the radiation pressure and some time is needed to re-establish the accretion. Under the simplifying assumption that the initial infall velocity of the gas is small, it is possible to show that, if $f_{Edd} \geq 1$, the value of the duty-cycle can be estimated as: \begin{equation} {\cal D} = \left( 2f_{Edd}-1\right)^{-1} \label{DC_estimate} \end{equation} Under these assumptions, ${\cal D} \equiv 1$ for $f_{Edd}=1$, while for $f_{Edd}>1$ it steadily decreases. The T2 simulation is an example of the latter case, with $f_{Edd} \sim 1.5$.
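The two regimes just described can be condensed into a short sketch (illustrative only): sub-Eddington radiation pressure reduces the accretion rate, while super-Eddington feedback makes accretion intermittent, with the duty-cycle of Eq. \ref{DC_estimate}.

```python
# Sketch of the two radiation-pressure regimes discussed above.
import math

def mdot_suppression(f_edd):
    """Mdot_R / Mdot_T1 = sqrt(1 - f_edd), valid for f_edd < 1."""
    return math.sqrt(1.0 - f_edd)

def duty_cycle_estimate(f_edd):
    """D = 1 / (2 f_edd - 1), valid for f_edd >= 1 (Eq. DC_estimate)."""
    return 1.0 / (2.0 * f_edd - 1.0)

print(duty_cycle_estimate(1.0))   # -> 1.0 : continuous accretion
print(duty_cycle_estimate(1.5))   # -> 0.5 : the T2 case, half the time idle
print(mdot_suppression(0.75))     # -> 0.5
```

For the T2 value $f_{Edd} \sim 1.5$, the estimate gives ${\cal D} \sim 0.5$, which is the origin of the expectation, verified below, that the IMBH accretes roughly half the mass of the T1 run.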
From the previous analysis we expect two major differences with respect to the T1 simulation, namely: (i) the IMBH accretes $\sim 50\%$ less mass (if the total integration times are equal), because ${\cal D}$ is smaller by a factor $\sim 2$, and (ii) the IMBH produces some feedback effect on the surrounding gas. This is exactly what we observe in the T2 simulation. The final mass is $M_{\bullet} = 1.4\times10^5 \, \mathrm{M}_{\odot}$, i.e. the black hole accreted $\sim 60\%$ less mass than in the T1 simulation, in reasonable agreement with our rough estimate in Eq. \ref{DC_estimate}. In addition, the spatial profiles for this simulation, shown in Fig. \ref{fig:RAD_FORCE_spatial}, clearly show the effect that the radiation pressure exerts on the gas. A density wave, produced by the radiative feedback, propagates outward with velocities up to $\sim 20 \, \mathrm{km\, s}^{-1}$. It is interesting to note that this wave is mildly supersonic, since the sound speed in a gas at $T \sim 2.6 \times 10^4 \, \mathrm{K}$ is $\sim 19 \, \mathrm{km \, s^{-1}}$. In addition, the positive values of the accretion flow measured in a large part of the spatial domain indicate the occurrence of a gas outflow from the external boundary. The high-density wave, with temperatures as high as $\sim 2.8 \times 10^4 \, \mathrm{K}$, is followed by a rarefaction zone, where the temperature drops to $\sim 7000 \, \mathrm{K}$ (the temperature profile follows the pressure, i.e. the cooling is adiabatic), decreasing the ionized fraction to very small ($\sim 10^{-3}$) values as well.
The accretion flow promptly (after $\sim 6000 \, \mathrm{yr}$) stabilizes at a value $f_{Edd} \sim -500$ at the innermost cell (negative values denote inflow, following the sign convention of the accretion-rate panels): this value is set by the fixed radiation pressure of the T2 simulation, which indirectly sets also the velocity at which the accretion flow is re-established at the end of each idle phase (the larger the radiation pressure, the longer the time needed for accretion to re-start, and hence the larger the resulting mass inflow). This general framework is explained by the following scenario: when the black hole accretes, the fixed super-critical emitted luminosity sweeps away the surrounding gas, affecting a spherical region of radius $r_{\tau}$, defined such that $\tau(r_{\tau}) = 1$. The gas located at $r \gg r_{\tau}$ is also accelerated outward, in this case not by the radiation pressure but by the thermal pressure, and acquires a positive velocity. When the irradiation is temporarily shut down, the gas located at $r \ll r_{\tau}$ is affected by the strong gravitational field of the black hole and falls back in due course. \begin{figure} \vspace{-1\baselineskip} \hspace{-0.5cm} \begin{center} \includegraphics[angle=0,width=0.45\textwidth]{RAD_FORCE_luminosity_trend.pdf} \caption{Comparison between the emitted luminosities for different simulations (note that the vertical axis is broken). The red line is the luminosity of the T1 simulation (computed from $\dot{M}_{\bullet}$ at $r=r_{min}$ with Eq. \ref{L_BH_definition}, but not included in the physics). In green, the fixed luminosity value used for the T2 simulation, $\hat{L}=2\times10^{43} \, \mathrm{erg \, s}^{-1}$. In blue, for comparison, the Eddington luminosity for the T2 simulation, whose value increases along with $M_{\bullet}$.
The value of $\hat{L}$ is always above the Eddington limit.} \label{fig:luminosity_fix} \end{center} \end{figure} \begin{figure*} \includegraphics[angle=0,width=0.45\textwidth]{RAD_FORCE_Density.pdf} \includegraphics[angle=0,width=0.45\textwidth]{RAD_FORCE_Velocity.pdf}\\ \vspace{-0.1cm} \includegraphics[angle=0,width=0.45\textwidth]{RAD_FORCE_Temperature.pdf} \includegraphics[angle=0,width=0.45\textwidth]{RAD_FORCE_ionized_fraction.pdf}\\ \vspace{-0.1cm} \includegraphics[angle=0,width=0.45\textwidth]{RAD_FORCE_Accretion_Rate.pdf} \includegraphics[angle=0,width=0.45\textwidth]{RAD_FORCE_Tau.pdf}\\ \caption{Spatial profiles for the T2 simulation: total integration time is $T_{tot} \approx 3.2 \times 10^{4}\, \mathrm{yr}$. A fixed radiative force, determined by the luminosity $\hat{L}=2\times 10^{43} \, \mathrm{erg \, s}^{-1}$ and active only when $v(r_{min})<0$, is added to the gravity and to the pressure gradient, as explained in the text. The colored lines correspond to different times of the simulation, equally spaced such that $T_{tot} = i \, \Delta t$ with $\Delta t = 3200 \, \mathrm{yr}$ and $i=1, \, ... \, 10$ (for clarity reasons, only the labels $i=1$ and $i=10$ are shown on the plot).} \label{fig:RAD_FORCE_spatial} \end{figure*} \subsection{The Full Simulation} The aim of the FS simulation is to describe the accretion flow onto a DCBH with initial mass $M_0 = 10^5 \, \mathrm{M_{\odot}}$. The final integration time for this simulation, when all the gas contained in the halo is expelled by radiation pressure, is $\sim 142 \, \mathrm{Myr}$. The forces acting on the gas are the gravity of the black hole, the pressure gradient and the radiation pressure. The differences with respect to the previous T1 and T2 simulations are: (i) the radiation pressure is computed self-consistently, i.e. with Eq. \ref{L_BH_definition} and (ii) the energy equation is non-adiabatic, with the inclusion of bremsstrahlung cooling and atomic cooling (see the Appendix). 
\subsubsection{Spatial structure and time evolution} Broadly speaking, the simulation allows the identification of three distinct evolutionary phases of the system, described in turn below. \begin{enumerate} \item{\textbf{Initial Transient Phase}: the gas, initially almost at rest (as detailed in Sec. \ref{sec:methods}), is accelerated inward, progressively increasing $\dot{M}_{\bullet}$ and, as a consequence, the emitted luminosity $L_{\bullet}$, as shown in Fig. \ref{fig:luminosity_trend_start}. This plot shows that the emitted luminosity increases by $\sim 3$ orders of magnitude in only $\sim 200 \, \mathrm{yr}$, a fraction $\sim 10^{-6}$ of the full evolution of the system. This process is self-regulated, due to the interconnection between gravity, accretion rate and radiation pressure: gravity tends to increase the accretion rate by accelerating the gas inward, while the emitted luminosity acts against the infall by providing an outward acceleration. This evolutionary phase lasts until the emitted luminosity becomes comparable to the Eddington limit (see Eq. \ref{eq:L_edd}), approximately $10^{43} \, \mathrm{erg \, s}^{-1}$ at the beginning of the simulation. Above this threshold, the radiation pressure is able to sweep the gas away from the inner boundary and the accretion process becomes intermittent (${\cal D}< 1$). This initial phase lasts $\sim 200-300 \, \mathrm{yr}$: the emitted luminosity and the fractional mass accreted ($\Delta M_t/M_0 = (M_t-M_0)/M_0$) are shown in Fig. \ref{fig:luminosity_trend_start}. An estimate of the duration $T_t$ of this initial phase is easily derived from the following argument.
We require that, during $T_t$, the luminosity $L_{\bullet}=\eta c^2 \dot{M}_{\bullet}$ becomes equal to the Eddington luminosity: \begin{equation} L=\eta c^2 \frac{dM}{dt} \equiv L_{Edd} = \frac{4 \pi G m_p c}{\sigma_T} M \end{equation} Integrating between $t=0$, when $M(t) = M_0$, and $T_t$, when $M(t) = M_t$, and defining $\Delta M_t = M_t-M_0$, we obtain: \begin{equation} T_t = \frac{\eta c \sigma_T}{4 \pi G m_p} \ln \left( 1 + \frac{\Delta M_t}{M_0} \right) \approx \frac{\eta c \sigma_T}{4 \pi G m_p} \frac{\Delta M_t}{M_0} \end{equation} where the last approximation is valid for $\Delta M_t/M_0 \ll 1$. This equation, with the value $\Delta M_t/M_0 \approx 7 \times 10^{-6}$ taken from Fig. \ref{fig:luminosity_trend_start}, gives the expected time scale $T_t \sim 300 \, \mathrm{yr}$. Defining the Eddington time scale $t_{Edd} \equiv (c \sigma_T)/(4 \pi G m_p)$, the previous formula becomes: \begin{equation} T_t \approx \eta \, t_{Edd} \frac{\Delta M_t}{M_0} \end{equation} Interestingly, if we suppose that $M(t) \propto M_0$ (see e.g. \citealt{Volonteri_2014, Madau_2014}), as in: \begin{equation} M(t) = M_0 \, \exp{ \left[ \left( \frac{1-\eta}{\eta} \right) \frac{t}{t_{Edd}} \right] } \end{equation} the time scale $T_t$ is independent of the initial black hole mass $M_0$. It is relevant to investigate in detail the mechanism which leads the system from stable accretion at $L \sim L_{Edd}$ to the unstable and intermittent phase shown in Fig. \ref{fig:luminosity_trend_start} for $t \gtrsim 270 \, \mathrm{yr}$. When the black hole starts to accrete at $\dot{M}_{\bullet} \sim \dot{M}_{Edd}$, the gas near the inner boundary is swept away, so that the accretion is temporarily interrupted and the radiation pressure is turned off. The emitted luminosity does not affect the outer parts of the domain, at $r \gg r_{\tau}$; see the similar discussion for the T2 simulation.
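The transient time scale just derived can be verified numerically (an illustrative check in cgs units, with $\eta = 0.1$ and $\Delta M_t/M_0 \approx 7\times10^{-6}$ as quoted in the text):

```python
# Numerical check of the transient time scale derived above:
# T_t = eta t_Edd ln(1 + dM/M0), with t_Edd = c sigma_T / (4 pi G m_p).
import math

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
C = 2.998e10          # speed of light [cm s^-1]
SIGMA_T = 6.652e-25   # Thomson cross section [cm^2]
M_P = 1.673e-24       # proton mass [g]
YR = 3.156e7          # year [s]

t_edd = C * SIGMA_T / (4.0 * math.pi * G * M_P)   # Eddington time, ~450 Myr

def transient_time(eta, dm_over_m0):
    """Duration T_t of the initial transient, in years."""
    return eta * t_edd * math.log(1.0 + dm_over_m0) / YR

print(transient_time(0.1, 7e-6))   # ~300 yr, as estimated in the text
```

Since $t_{Edd}$ contains only fundamental constants, $T_t$ depends only on $\eta$ and on the fractional accreted mass, and not on $M_0$, as noted above.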
The gas located in this region continues its infall, counteracted only by the thermal pressure exerted by the internal layers, and eventually feeds the black hole with increasingly larger mass inflows. The difference with respect to the T2 simulation is due to the direct dependence of the emitted luminosity on the mass inflow in the FS case. More specifically, this mechanism is schematically explained in Fig. \ref{fig:mechanism}. In the first panel, the black hole is accreting mass from the innermost shell, which is collapsing just as the outer one. When the black hole irradiates with $L \gtrsim L_{Edd}$, the radiation pressure acts only on the innermost shell (supposing for simplicity that the outer shell is located at $r \gg r_{\tau}$), which is swept outward, while the outer shell continues the infall. During this period, the black hole is not irradiating. The innermost shell eventually terminates its outward movement, due to the gravitational pull of the black hole. The innermost shell then merges with the outer one, so that its overall density is increased. Eventually, the merged shells approach the accretion boundary and restart the cycle with a larger accretion flow, i.e. with the emission of a higher luminosity. \begin{figure} \vspace{-1\baselineskip} \hspace{-0.5cm} \begin{center} \includegraphics[angle=0,width=0.5\textwidth]{FULL_luminosity_trend_start.pdf} \caption{Luminosity emitted and fractional mass accreted ($\Delta M_t /M_0 =(M_t-M_0)/M_0$) during the initial phase, lasting $\sim 200-300 \, \mathrm{yr}$. The corresponding Eddington luminosity is also shown, for comparison.} \label{fig:luminosity_trend_start} \end{center} \end{figure} \begin{figure} \vspace{-0.5\baselineskip} \hspace{-0.5cm} \begin{center} \includegraphics[angle=0,width=0.45\textwidth]{FULL_mechanism.pdf} \caption{Schematic description of the mechanism which progressively increases the emitted luminosity. 
Here we consider only two mass shells, but the system must be thought of as composed of an infinite number of them. The black hole is at the center of each of the four panels, in black. Brown circles are mass shells: the darker the shell, the higher the density. The smallest circle is the accretion boundary: it becomes orange when the black hole irradiates. Green arrows indicate accretion through the inner boundary, red arrows indicate a movement of the mass shell. The black dot-dashed line indicates the radius $r_{\tau}$. See the text for a detailed description of the panels.} \label{fig:mechanism} \end{center} \end{figure} } \item{\textbf{Main Accretion Phase}: this phase lasts $\sim 142 \, \mathrm{Myr}$ and is characterized by a duty-cycle ${\cal D} \sim 0.48$ and an average accretion rate $f_{Edd} \simeq 1.35$: the accretion is super-critical on average. This value is computed as a global average of $f_{Edd}$ over the entire integration time, including the idle phases (when the accretion does not take place), and is in substantial agreement with the approximate relation (valid for small inflow velocities) given in Eq. \ref{DC_estimate}. \begin{figure*} \includegraphics[angle=0,width=0.45\textwidth]{FULL_Density.pdf} \includegraphics[angle=0,width=0.45\textwidth]{FULL_Velocity.pdf}\\ \vspace{-0.1cm} \includegraphics[angle=0,width=0.45\textwidth]{FULL_Temperature.pdf} \includegraphics[angle=0,width=0.45\textwidth]{FULL_ionized_fraction.pdf}\\ \vspace{-0.1cm} \includegraphics[angle=0,width=0.45\textwidth]{FULL_Accretion_Rate.pdf} \includegraphics[angle=0,width=0.45\textwidth]{FULL_Luminosity_out.pdf}\\ \vspace{-0.1cm} \includegraphics[angle=0,width=0.45\textwidth]{FULL_Pressure.pdf} \includegraphics[angle=0,width=0.45\textwidth]{FULL_Tau.pdf}\\ \caption{Spatial profiles for the FS simulation: total integration time is $T_{tot} \approx 1.4 \times 10^{8}\, \mathrm{yr}$ and $T_{tot} = i \, \Delta t$ with $\Delta t = 35 \, \mathrm{Myr}$ and $i=1, \, ... 
\, 4$ (only the labels $i=1$ and $i=4$ are shown). The luminosity lines are present only at data dumps when the IMBH is emitting.} \label{fig:FULL_spatial} \end{figure*} Fig. \ref{fig:FULL_spatial} shows the spatial profiles for the FS simulation. From their analysis, three main features are evident: (i) a density wave approaching the center, (ii) strong waves moving towards larger radii, visible from the velocity profile, and (iii) the progressive emptying of the outer regions (discussed below). The density wave, with over-densities as high as $\sim 1$ order of magnitude with respect to the surrounding volume, moves inward, contrary to the T2 simulation. This difference is due to the fact that in the FS simulation the radiation pressure and the accretion rate are interconnected, while in the T2 run the former is fixed: this joint evolution leads to a smooth increase in the emitted luminosity and to a different response of the system. When the IMBH accretes at super-critical rates, the emitted radiation promptly interrupts the mass inflow and consequently the radiation pressure. The accelerated gas moves in the outward direction, creating a pressure jump of a factor $\sim 5$ between the two sides of the discontinuity front (see the pressure spatial profile in Fig. \ref{fig:FULL_spatial}). Eventually, the gravitational acceleration of the IMBH inverts the velocity and the accretion starts again. The radiation pressure affects only a small volume in the inner region of the accretion flow, where $\tau \sim 10-100$, as the optical depth spatial profile shows: the internal layers ($r \ll r_{\tau}$) of the gas distribution are intermittently reached by the radiation pressure, while the external layers ($r \gg r_{\tau}$) are in a quasi free-fall state. For this reason, similarly to the mechanism detailed in Fig. 
\ref{fig:mechanism}, the density wave progressively moves towards the center, leading to an increase of $\sim 1$ order of magnitude in the density measured at the innermost cell, as Fig. \ref{fig:Density_velocity_trend} shows. The top of the density wave $R_{dw}$ moves inward with the time scaling $R_{dw} \propto t^{-0.7}$. The optical depth of the inner regions is increased along with the density: this additional effect progressively decreases the volume where the radiation pressure is effective. The luminosity panel is described in a separate subsection below. The velocity spatial profile shows that the outer regions ($R \gsim 0.5 R_B$) are swept by waves of high-speed ($10-20 \, \mathrm{km \, s}^{-1}$) gas. Note that in some regions of the spatial domain, where the temperature drops near $\sim 10^4 \, \mathrm{K}$, weak shocks are produced in the flow, with Mach numbers ${\cal M} \sim 1.3-2.0$. This volume, while not affected by radiation pressure, is strongly affected by the thermal pressure exerted by the internal layers: the pressure spatial profile shows, in the external regions, pressure jumps as high as $\sim 6-7$ orders of magnitude. As the radiation pressure is intermittent, the net result is the formation of waves in the surrounding gas, whose magnitude and frequency increase with time: an ever increasing amount of energy is transported outward by this mechanism. Finally, the temperature spatial profile shows values as high as $\sim 10^7 \, \mathrm{K}$ in the proximity of the inner boundary, at late stages of accretion, caused by the very large pressure. The temperature spatial profile is reflected in the ionized fraction: the ionized volume expands outward with time. The complicated behavior of the mass flux spatial profile is a symptom of chaotic motions occurring in the environment, caused mainly by the intermittent irradiation. 
The last data dump of the FS simulation shows a very small mass flux in the external layers, due to the fact that $\dot{M}_{\bullet}(r) \propto \rho (r)$. \begin{figure} \vspace{-1\baselineskip} \hspace{-0.5cm} \begin{center} \includegraphics[angle=0,width=0.5\textwidth]{FULL_density_velocity_trend.pdf} \caption{Time evolution of density and velocity computed at the innermost cell of the spatial grid. It most clearly shows, along with the following Fig. \ref{fig:Luminosity_accretion_trend}, the final evolutionary phase. At the time $\sim 142 \, \mathrm{Myr}$, a jump of a factor $\sim 3$ in the velocity marks the final act in the evolution of the system: the remaining gas is eventually ejected outward.} \label{fig:Density_velocity_trend} \end{center} \end{figure} \item{\textbf{Final Burst}: this event marks the end of the accretion flow onto the IMBH, $\sim 142 \, \mathrm{Myr}$ in the simulation. The gas is swept away and the accretion rate goes to zero: the black hole becomes a dark relic, having exhaled its ``last gasp''. As the central density increases, the emitted luminosity rises as well, as shown in Fig. \ref{fig:Luminosity_accretion_trend}, always remaining a factor $\sim 2-3$ higher than the Eddington luminosity. This trend is highlighted by the shaded region in the same plot, which shows the value of $f_{Edd}$: after an initial transient period lasting $\sim 20 \, \mathrm{Myr}$ (caused by the necessity to stabilize the accretion flow), the gap with respect to the Eddington luminosity increases with time: the emission is progressively more super-critical. The value of $f_{Edd}$ here is computed including the idle phases: since the duty-cycle is $\sim 0.48$, the value of $f_{Edd}$ computed considering only the active phases would be roughly doubled. 
The radiated luminosity reaches the value $\sim 3\times 10^{45} \, \mathrm{erg \, s^{-1}}$ ($f_{Edd} \equiv \dot{M}_{\bullet}/\dot{M}_{Edd} = L/L_{Edd} \sim 3$) and the density in the external layers drops: the remaining mass inside the computational domain is a factor $\sim 25$ lower than the initial one, due to both the accretion onto the compact object and the outflow (see below). Unlike the T2 simulation, in which the radiation pressure was fixed to a super-critical value, here the interconnection between radiation pressure and mass inflow progressively empties the halo from the outside in. The external layers are not able to exert a sufficient pressure to contain the expansion of the radiation-driven shell and, as a consequence, the remaining gas is ejected from the system. This effect is most clearly visible in Fig. \ref{fig:Density_velocity_trend}, which shows the velocity evolution measured at the innermost cell. The general increasing trend ($\sim 10^{-3} \, {\rm km \, s^{-1} \, Myr^{-1}}$) of the central velocity is abruptly changed by a jump of a factor $\sim 3$. The internal layer starts to move outward with velocities of $\sim 0.5 \, \mathrm{km \, s^{-1}}$. The same effect is hinted at in Fig. \ref{fig:Luminosity_accretion_trend}, where the accretion (and consequently the emitted luminosity) is abruptly interrupted. After a transient period, the velocity should be re-inverted, but the very low value of the gas mass still inside $R_B$ strongly suggests that the evolution time scale of this system with $M_0 = 10^5 \, \mathrm{M_\odot}$ is indeed of order $150 \, \mathrm{Myr}$.} \begin{figure} \vspace{-1\baselineskip} \hspace{-0.5cm} \begin{center} \includegraphics[angle=0,width=0.48\textwidth]{FULL_Luminosity_accretion_trend.pdf} \caption{Time evolution of the emitted luminosity and of $f_{Edd}$ computed at the innermost cell. The smoothness of the lines is due to an averaging process over a time much longer than the typical idle periods of the accretion. 
The value of $f_{Edd}$ is computed as a running average over a window period much longer than the typical duration of the duty-cycle and includes the idle phases (while the average values of the luminosity do not). The blue dashed line shows the corresponding time evolution of the Eddington luminosity, which increases as the black hole mass grows. At the time $\sim 142 \, \mathrm{Myr}$ the accretion is abruptly terminated by the final burst.} \label{fig:Luminosity_accretion_trend} \end{center} \end{figure} } \end{enumerate} \subsubsection{Black hole growth} Having investigated the space and time evolution of the accretion flow, we devote some further analysis to the black hole growth, more specifically to the accretion time scale and to the final mass balance. An important diagnostic quantity for the accretion process is the duty-cycle, defined in Sec. \ref{sec:methods}. A direct comparison between the three simulations shows that the evolutionary time scales are quite different. As a simple estimator, Table \ref{tab:time_doubles} reports for each simulation the time $t_{2M_0}$ needed for the black hole to double its initial mass, along with the average accretion rate $\langle \dot{M} \rangle$. For instance, it is instructive to compare this time scale for the T1 simulation ($\sim 10^4 \, \mathrm{yr}$) and for the FS simulation ($\sim 5 \times 10^6 \, \mathrm{yr}$): the average duty-cycles of these simulations are different, i.e. the FS system spends a smaller fraction of time accreting. The duty-cycle is strictly dependent on the magnitude of the radiative force, as we have shown with simple analytic arguments in the subsection dedicated to the T2 simulation. In the FS simulation the duty-cycle stabilizes to an average value $\sim 0.48$, in agreement with the prediction of the approximated Eq. \ref{DC_estimate} (see also \citealt{Ricotti_2012_DC}). 
\begin{table*} \begin{minipage}{170mm} \begin{center} \caption{Diagnostic quantities for the two test (T1 and T2) simulations and for the full one (FS), specifically the time $t_{2M_0}$ needed to double the initial mass of the black hole and the average accretion rate $\langle \dot{M} \rangle$. The final mass for the FS simulation is $\sim 7 \times 10^6 \, \mathrm{M_\odot}$.} \label{tab:time_doubles} \begin{tabular}{|c|c|c|c|} \hline\hline Parameter & T1 & T2 & FS\\ \hline $t_{2M_0}$ [yr] & $\sim 3\times 10^4$ & $\sim 4 \times10^4$ & $\sim 5\times10^6$ \\ $\langle \dot{M} \rangle$ [$\mathrm{M_{\odot} \, yr^{-1}}$] & $\sim 3.1$ & $\sim 1.3$ & $\sim 0.1$ \\ \hline \end{tabular} \end{center} \end{minipage} \end{table*} Furthermore, we have calculated the quantity of gas accreted by the IMBH and the amount ejected from the system before the final burst. The density spatial profile in Fig. \ref{fig:FULL_spatial} shows that during the late stages of the simulation the outer layers of the volume are almost empty: in $\sim 120 \, \mathrm{Myr}$ (the time of the last complete data dump) the density drops by $\sim 7$ orders of magnitude. The matter is partly accreted by the central object, partly ejected from the outer boundary of the system by high-speed waves, as described above. Our final results for the mass balance are summarized in Fig. \ref{fig:mass_trend}. The baryonic mass of the halo is reduced by a factor $\sim 25$ from the beginning of the simulation. Most of this mass ($\sim 90\%$) is accreted onto the black hole, in spite of an average super-Eddington emission, while $\sim 10\%$ is ejected by outflows. Starting from a DCBH of mass $M_0 = 10^5 \, \mathrm{M_{\odot}}$, the final mass of the black hole is $M_{\bullet} \sim 7 \times 10^6 \, \mathrm{M}_{\odot}$, a fully fledged SMBH. \begin{figure} \vspace{-1\baselineskip} \hspace{-0.5cm} \begin{center} \includegraphics[angle=0,width=0.5\textwidth]{FULL_mass_trend.pdf} \caption{Final mass balance for the FS simulation. 
All masses are normalized to the initial value of the gas mass, $M_{gas} \sim 9.6 \times 10^6 \, \mathrm{M_{\odot}}$. The dashed purple line is only a reference for the unitary value. The IMBH mass line is smoother and extends to a longer time than the others because the corresponding output value is saved at a very high frequency, while other quantities are more discretized in time.} \label{fig:mass_trend} \end{center} \end{figure} \subsubsection{Radiation emission} The study of the luminosity emitted by the IMBH is one of the main objectives of this work. Due to the high values of the optical depth (see its spatial profile in Fig. \ref{fig:FULL_spatial}), the luminosity is almost completely obscured for an external observer. The column density reaches a value of $1.3 \times 10^{25} \, \mathrm{cm^{-2}}$ in the late stages of the FS simulation, as can be roughly estimated from the line labeled 4 in the density panel of Fig. \ref{fig:FULL_spatial}, considering a mean value $10^{-17} \, \mathrm{g \, cm^{-3}}$ for the mass density ($10^{7} \, \mathrm{cm^{-3}}$ for the number density) between $0.1 R_B$ and $0.5 R_B$. The spatial profile of the emitted luminosity is shown in Fig. \ref{fig:FULL_spatial} for the two data dumps when the black hole is irradiating. The luminosity emitted via bremsstrahlung and atomic cooling is, in this spatial range, completely negligible. The lines show two important facts: (i) the luminosity emitted near the accretion boundary slowly increases, due to the mechanism already detailed in this Section, and (ii) the radiation which escapes from the outer boundary decreases and is blocked at progressively smaller radii. The latter effect is due to the accumulation of matter at smaller radial distances, recalling that $dL/dr \propto - \rho \kappa L$, see Eq. \ref{N3} in the Appendix. In fact, the density measured at the innermost cell increases by a factor $\sim 10$ during the time evolution of the system, as Fig. 
\ref{fig:Density_velocity_trend} demonstrates. To conclude, the luminosity emitted during the accretion process onto an IMBH at high-z is obscured for most of the system's evolution. We predict that it might be observable during the final burst of radiation. In order to obtain a rough estimate of the observability of this final phase, we suppose that the peak luminosity $L_{peak} \sim 3 \times 10^{45} \, \mathrm{erg \, s^{-1}}$ is emitted in a small ($\Delta \lambda / \lambda \ll 1$) range of near-IR wavelengths centered at $\lambda = 2 \, \mathrm{\mu m}$ with a flat spectrum (for a detailed study of the contribution of DCBH sources to the cosmic infrared background, see \citealt{Yue_2013}). From the present study it is hard to estimate the duration of the final burst, due to the lack of the necessary time resolution; however, we can put a solid lower limit of about $5 \, \mathrm{yr}$. This aspect needs to be investigated more thoroughly in future work. For a source located at $z=10$, the radiation intensity would be ${\cal I} \approx 10^{-6} \, \mathrm{Jy}$, which is observable by the future JWST with only $\sim 100 \, \mathrm{s}$ of integration, yielding a Signal-to-Noise ratio\footnote{Estimate performed with the JWST prototype Exposure Time Calculator (ETC): http://jwstetc.stsci.edu/etc.} of $\sim 250$. Of course, in order to produce more accurate predictions for the observability, it is necessary to take into account the frequency of these events at high-z and the exact emission spectrum of the source: we defer this task to future work. \section{Discussion and Conclusions} \label{sec:disc_concl} In this work we have investigated the radiation-hydrodynamic evolution of the spherical accretion flow onto a DCBH with initial mass $M_0=10^5 \, \mathrm{M_{\odot}}$ and gas mass in the parent halo $M_{gas} = 9.6 \times 10^6 \, \mathrm{M_{\odot}}$. 
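The ${\cal I} \approx 10^{-6} \, \mathrm{Jy}$ estimate for the final burst can be reproduced with a short order-of-magnitude calculation. The sketch below is not taken from the paper's code; the cosmological parameters ($H_0 = 70 \, \mathrm{km \, s^{-1} \, Mpc^{-1}}$, $\Omega_m = 0.3$, flat) and the choice of spreading the luminosity over a bandwidth $\Delta\nu \sim \nu$ at the observed $2 \, \mu\mathrm{m}$ are our own assumptions:

```python
# Order-of-magnitude cross-check of the ~1e-6 Jy flux density quoted for the
# final burst (L_peak ~ 3e45 erg/s at z = 10). ASSUMPTIONS: flat LCDM with
# H0 = 70 km/s/Mpc, Omega_m = 0.3; bandwidth Delta nu ~ nu at 2 micron.
import math

c = 2.998e10                      # speed of light [cm s^-1]
H0 = 70.0 * 1e5 / 3.086e24        # Hubble constant [s^-1]
Om, OL = 0.3, 0.7
z = 10.0

# Comoving distance: midpoint-rule integration of (c/H0) * dz / E(z)
N = 100000
dz = z / N
integral = sum(dz / math.sqrt(Om * (1.0 + (i + 0.5) * dz)**3 + OL)
               for i in range(N))
d_L = (1.0 + z) * (c / H0) * integral   # luminosity distance [cm], ~100 Gpc

L_peak = 3e45                            # [erg s^-1]
F = L_peak / (4.0 * math.pi * d_L**2)    # bolometric flux [erg s^-1 cm^-2]
nu_obs = c / 2e-4                        # observed frequency at 2 micron [Hz]
F_nu_Jy = (F / nu_obs) / 1e-23           # flux density [Jy]
print(F_nu_Jy)  # ~1.5e-6 Jy, consistent with the quoted ~1e-6 Jy
```

Under these assumptions the flux density comes out at $\sim 1.5 \times 10^{-6}$ Jy, consistent with the value quoted in the text.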
The IMBH accretes for $\sim 142 \, \mathrm{Myr}$ with an average duty-cycle $\sim 0.48$ and on average super-critical accretion rates, with $f_{Edd} \simeq 1.35$, i.e. $\dot{M}_{\bullet} \simeq 0.1 \, \mathrm{M_\odot \, yr^{-1}}$. The emitted luminosity increases with time, as a consequence of the progressive rise in the mass inflow. The radiation pressure creates strong waves moving, with velocities as high as $\sim 20 \, \mathrm{km \, s^{-1}}$, in the outer ($r \gsim 0.5 \, R_B$) region of the inflow. These waves produce shocks in some regions of the flow. At the end of the main evolutionary phase $\sim 90\%$ of the gas mass has been accreted onto the compact object, in spite of an average super-Eddington emission, while $\sim 10\%$ has been ejected in outflows. The accretion is terminated when the emitted luminosity reaches the value $\sim 3\times 10^{45} \, \mathrm{erg \, s^{-1}} \sim 3 \, L_{Edd}$ and the related radiation pressure ejects all the remaining gas mass (which, at the final time, is a factor of $\sim 25$ lower than the initial one). We estimate that this final burst of radiation, lasting at least $5 \, \mathrm{yr}$ in the rest-frame, should be observable by the future JWST. We have identified three different phases of the accretion (the initial phase, the main accretion phase and the final burst), detailing their main characteristics in turn. We predict that the accretion flow, on average, occurs at mildly super-critical rates throughout the entire evolution of the system (except for the very initial transient phase). Recently, \cite{Alexander_2014}, \cite{Volonteri_2014} and \cite{Madau_2014} (see also \citealt{Volonteri_2005}) have suggested that brief but strongly super-critical accretion episodes (with rates as large as $f_{Edd} \gsim 50$) might explain the rapid black hole mass build-up at $20\gsim z \gsim 7$. 
Very large and prolonged accretion rates may be sustainable in the so-called ``slim disk'' solution (\citealt{Begelman_1982,Paczynski_1982,Mineshige_2000,Sadowski_2009, Sadowski_2011}), an energy-advecting, optically thick flow that generalizes the standard thin disk model \citep{Shakura_Sunyaev_1976}. In these radiatively inefficient models the radiation is trapped (see also the diffusion condition in the Appendix) in the high-density gas and is advected inward by the accretion flow: this happens when the photon diffusion time exceeds the time scale for accretion. This would allow a very mild dependence of the emitted luminosity $L$ on the normalized accretion rate $f_{Edd}$, which is usually described by a logarithmic relation (\citealt{Mineshige_2000, Wang_2003}): $L/L_{Edd} \sim 2[1+\ln(f_{Edd}/50)]$, valid for $f_{Edd} \geq 50$. In our work, we analyze the accretion flow on very large scales ($R \sim R_B$, i.e. the accretion disk is beyond our resolution limit) and photons are never trapped (see the Appendix). For this reason, the accretion flow is radiatively efficient ($L/L_{Edd} \propto f_{Edd}$) and the accretion rates are only mildly super-critical: $f_{Edd} \simeq 1.35$ on average. In our case, the idle phases are caused by the necessity to re-establish the downward accretion flow after the radiation burst, while in the strongly super-critical models they are caused by the need to replenish the gas reservoirs (e.g. by galaxy mergers, see e.g. \citealt{Volonteri_2014}). In our simulation, at smaller radial distances ($R \ll R_B$) we expect that the radiation eventually reaches the trapping condition. This is neglected in the present implementation of the simulation, but could critically modify the radiative properties of the source, especially in the light of the aforementioned recent studies. Some aspects of the simulation may be improved, namely: \begin{enumerate} \item The accreting gas, at smaller radii, should form an accretion disk. 
If the accretion flow has a non-zero angular momentum with respect to the central body, the gas will reach a centrifugal barrier (caused by the steeper radial scaling of the centrifugal acceleration, $\sim r^{-3}$, with respect to the gravitational acceleration, $\sim r^{-2}$) from which it can accrete further inward only if its angular momentum is transported away. This would at least partly modify the irradiation mechanism. \item A full spectral analysis of the source needs a more accurate description of the interaction between radiation and matter. \item At smaller radial distances the photons should be trapped and the accretion flow should become energy-advective, i.e. radiatively inefficient, as described above. An appropriate modeling of the inefficient accretion flow would then be required. \item The magnetic field may also significantly affect the accretion flow structure and behavior, as already pointed out in e.g. \cite{Sadowski_2014} and \cite{McKinney_2014}. The inclusion of an appropriate modeling of the magnetized plasma would then be required. \end{enumerate} This work is the basis upon which a full spectral analysis of these sources is to be constructed: this would be the key to unveil the possible existence of IMBHs during the Cosmic Dawn era. The existence of IMBHs at high redshifts, although not yet confirmed by observations, would represent a breakthrough in our knowledge of the primordial Universe. Their presence would also be relevant for at least two reasons. First, the formation of IMBHs in the early Universe would ease the problem of the presence of SMBHs with masses $M_{\bullet} \sim 10^9 \, \mathrm{M_{\odot}}$ at redshifts as high as $z\approx 7.085$ \citep{Mortlock_2011}. These massive seeds would play a role of paramount importance in giving a jump start to the accretion process. 
Second, the formation of IMBHs at high redshifts could provide a possible interpretation of the near-infrared cosmic background fluctuations \citep{Yue_2013} and its recently detected cross-correlation with the X-ray background \citep{Cappelluti_2013}. This interpretation would be even more plausible if the primordial population of IMBHs is proved to be highly obscured. The observation of IMBHs could then provide the missing pieces for the solution of these intriguing puzzles. \vspace{+0.5cm} We thank L. Ciotti, G. S. Novak and M. Volonteri for useful comments and suggestions. \section{Appendix} \label{sec:appendix} This Appendix contains some more technical details about the modules HD and RT, such as the boundary conditions for velocity and density, the two-stream approximation, the heating and cooling terms and the photon diffusion condition. \subsection{Additional Boundary Conditions} In addition to the boundary conditions for the luminosity, already detailed in Sec. \ref{sec:methods}, we need to specify the behavior of velocity and density at the innermost and outermost cells of the spatial grid. The spatial boundary conditions are: (i) outflow for the inner boundary (with restrictions on velocities, see below) and (ii) void for the outer boundary. An outflow boundary condition forces the derivatives of the quantities of interest to be zero, i.e. it artificially extends the spatial domain outside the boundary. The restriction on the boundary velocity $v_{b}$ is the following: \begin{equation} v_{b} = \begin{cases} v(r_{min}) & \text{if } v(r_{min})<0 \\ 0 & \text{if } v(r_{min})>0 \end{cases} \end{equation} and is meant to prevent the replenishment of the computational domain by gas coming from unresolved spatial scales. A void boundary condition, on the contrary, constrains the quantity of interest to be zero outside the computational domain: the system composed of the IMBH and its parent halo (up to $\sim 2 R_B$) is isolated in space. 
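A minimal sketch of how these boundary conditions can be imposed on a 1D grid with ghost cells; this is an illustrative reimplementation, not the actual simulation code (array layout and function name are our own):

```python
# Illustrative sketch (NOT the simulation code) of the inner "outflow with
# velocity restriction" and outer "void" boundary conditions described above,
# for 1D arrays with one ghost cell on each side of the active grid.

def apply_boundaries(rho, v):
    """rho, v: 1D lists/arrays; indices 0 and -1 are ghost cells."""
    # Inner boundary: outflow, i.e. zero-gradient in density ...
    rho[0] = rho[1]
    # ... but only infall (v < 0) is copied into the ghost cell; outward
    # velocities are zeroed, so that gas coming from unresolved spatial
    # scales cannot replenish the computational domain.
    v[0] = v[1] if v[1] < 0.0 else 0.0
    # Outer boundary: void -- the IMBH + halo system is isolated in space.
    rho[-1] = 0.0
    v[-1] = 0.0

rho = [0.0, 1.0, 0.8, 0.5, 0.3]
v = [0.0, -2.0, -1.0, 0.5, 0.2]
apply_boundaries(rho, v)
print(v[0], rho[-1])  # -2.0 0.0: infall is copied through, outer ghost is void
```

Note that the asymmetry between the two velocity branches is exactly the $v_b$ prescription above: the inner ghost cell never injects outward-moving gas into the domain.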
\subsection{The two-stream approximation} The RT method we use is based on \cite{Novak_2012} and relies on the two-stream approximation, i.e. the luminosity is expressed as the sum of an ingoing and an outgoing radiation stream. When the optical depth is low, photons of the ingoing stream at any given radius $r_0$ are likely to successfully traverse the inner parts of the halo ($r<r_0$) and emerge as outgoing photons at the same radius, but with azimuthal angle $\varphi + \pi$. In this case, all the radiation emitted by a source term is to be added to the outgoing stream. If, instead, the optical depth is large, the ingoing photons are likely to be absorbed at $r<r_0$. Then, only half of the emitted photons should be added to the outgoing stream. The other half should be added to the ingoing stream, where they will in due course be absorbed. The resulting equations for the two radiation streams are: \begin{equation} \frac{dL_{out}}{dr} = 4 \pi r^2 \psi \dot{E} - \rho \kappa L_{out} \label{N3} \end{equation} \begin{equation} \frac{dL_{in}}{dr} = 4 \pi r^2(1- \psi) \dot{E} - \rho \kappa L_{in} \label{N4} \end{equation} where $\psi(r)$ is the fraction of photons emitted at a given radius that are likely to belong to the outgoing stream. The simplest estimate of the quantity $\psi(r)$ is given in \cite{Novak_2012}: \begin{equation} \psi(r) = 1 - \frac{1}{2} \left[\frac{1}{1+e^{-\tau}}\right]\left[\frac{r_1^2}{\mathrm{max}(r_1^2,r^2)}\right] \end{equation} where $\tau$ is the optical depth from $r$ to infinity and $r_1$ is the radial distance from the center where $\tau$ reaches unity. These equations must be complemented with the boundary conditions: \begin{equation} \begin{cases} L_{in}(r_{max}) = 0 \\ L_{out}(r_{min}) = L_{\bullet} \end{cases} \end{equation} and the expression for the radiative acceleration, Eq. 
\ref{a_rad}, must be changed into the following one: \begin{equation} a_{rad} = \frac{\kappa(\rho, T) (L_{out} - L_{in})}{4 \pi r^2 c} \end{equation} \subsection{Heating and Cooling} This paragraph deals with the non-adiabatic regime, i.e. the source/sink term $\dot{E}$ (Eqs. \ref{N1}-\ref{N2} and \ref{N3}-\ref{N4}). Unlike the implementation in \cite{Novak_2012}, where $\dot{E}$ accounts for the energy transfer among different frequency bands, in our case the interaction between matter and radiation is purely elastic (i.e. the frequency of the interacting photon is unchanged) and this term expresses the energy emitted or absorbed by matter per unit time and per unit volume. For a gas at $T \lesssim 10^4\, \mathrm{K}$ (i.e. below the atomic hydrogen cooling threshold) the energy equation is purely adiabatic. Instead, for a gas at $T \gsim 10^4\, \mathrm{K}$, the term $\dot{E}$ takes into account the bremsstrahlung cooling and the atomic cooling (recombination and collisional excitation, for H and He): \begin{equation} \dot{E} = 2 \left(\psi - \frac{1}{2}\right) \chi_{ion} n^2 {\cal S} \end{equation} Here, $n$ is the number density and $\psi$ is the fraction of photons emitted at a given radius that are likely to belong to the outgoing stream (see above in this Appendix). The term $2 \left(\psi - \frac{1}{2}\right)$ is the fraction of transmitted (i.e. not absorbed) photons, since $\psi$ is defined as $\psi = (1+p_{trans})/2$, where $p_{trans}$ is the transmission probability. The term ${\cal S}$ includes all the coefficients for bremsstrahlung and atomic cooling (in units of $\mathrm{erg \, cm^{3} \, s^{-1}}$, as reported in \citealt{Maselli_03} and \citealt{Cen_1992}). Moreover, the term $\chi_{ion}$ is the ionized fraction, which accounts for the fraction of atoms that can contribute to the bremsstrahlung. 
This last term is calculated as $\chi_{ion} = \left(\alpha_b/\gamma + 1\right)^{-1}$, where $\alpha_b$ is the recombination rate and $\gamma$ is the collisional ionization rate. Both are expressed in $\mathrm{cm}^3 \, \mathrm{s}^{-1}$ and are defined as (see e.g. \citealt{Maselli_03}): \begin{equation} \alpha_b = \frac{8.4\times 10^{-11}}{\sqrt{T}} \left(\frac{T}{1000}\right)^{-0.2} \left[1.0+\left(\frac{T}{10^6}\right)^{0.7}\right]^{-1} \end{equation} \begin{equation} \gamma = 1.27\times10^{-11} \sqrt{T} \, e^{-157809.1/T} \left[1.0+\left(\frac{T}{10^5}\right)^{0.5}\right]^{-1} \end{equation} \subsection{Photon Diffusion Condition} Given the very high values of the optical depth (as high as $\sim 100$ for the FS simulation), it is important to check whether a proper treatment of photon diffusion is required. Photons in the diffusive regime are advected inward with the gas, rather than diffusing out of the accretion flow. In this condition, the trapped infalling material should be considered a ``quasi-star'', similar to the phase advocated by \cite{Begelman_2008}, with an atmosphere in Local Thermodynamic Equilibrium: the emission from the inner region of the accreting flow would then be thermal. \cite{Begelman_1978} gives a very practical way to assess the occurrence of the diffusive behavior of photons. Photons located at some radius $r$ are trapped if: \begin{equation} \tau(r) \, \frac{v(r)}{c} > 1 \end{equation} Throughout the paper, we refer to this formula as the diffusion condition. It is never met in our grid, because the maximum values reached are of order $\tau(r) \, v(r)/c \approx 10^{-3}$. This proves that the photons inside the simulated spherical volume in our work are never in the diffusive regime. It is likely that this condition is actually met at smaller radii, where the gas may form a structure quite similar to a stellar atmosphere. We defer the investigation of this possibility to future work.
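For reference, the rates above and the diffusion condition translate directly into code. The sketch below is a plain transcription (temperatures in K, rates in $\mathrm{cm^3 \, s^{-1}}$), illustrative only and not part of the simulation code; the example values fed to the diffusion check are our own:

```python
# Transcription of the recombination/collisional-ionization rates above and
# of the Begelman (1978) diffusion condition; illustrative only.
import math

def alpha_b(T):
    """Recombination rate [cm^3 s^-1]."""
    return (8.4e-11 / math.sqrt(T)) * (T / 1000.0)**-0.2 \
           / (1.0 + (T / 1e6)**0.7)

def gamma_coll(T):
    """Collisional ionization rate [cm^3 s^-1]."""
    return 1.27e-11 * math.sqrt(T) * math.exp(-157809.1 / T) \
           / (1.0 + (T / 1e5)**0.5)

def chi_ion(T):
    """Ionized fraction chi_ion = (alpha_b/gamma + 1)^-1."""
    return 1.0 / (alpha_b(T) / gamma_coll(T) + 1.0)

def photons_trapped(tau, v):
    """Diffusion condition: photons at radius r are trapped if tau*v/c > 1."""
    return tau * (v / 2.998e10) > 1.0

# Nearly neutral at the ~1e4 K cooling threshold, fully ionized at ~1e7 K:
print(chi_ion(1e4), chi_ion(1e7))
# Even combining tau ~ 100 with v ~ 20 km/s (the separate maxima reached in
# the FS run), tau*v/c stays well below 1: the flow is never diffusive here.
print(photons_trapped(100.0, 2e6))
```

In collisional equilibrium this reproduces the behavior described in the text: $\chi_{ion} \ll 1$ near $10^4$ K, $\chi_{ion} \approx 1$ at the $\sim 10^7$ K reached close to the inner boundary, and the diffusion condition is never satisfied on the simulated grid.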
\section{Introduction} \label{Introduction} The regulation of star formation in galaxies and the build-up of their stellar mass are major unresolved issues in the study of galaxy formation and evolution. Galactic-scale winds, together with gas feeding and star formation efficiency, are believed to be key processes for understanding how galaxies evolve. About twenty years ago, two seminal papers \citep{1998AJ....115.2285M, 1998A&A...331L...1S} proposed the idea that the energy released by an active galactic nucleus (AGN) can heat up and blow out the cold gas, as well as effectively quench the formation of new stars, provided it couples efficiently with the interstellar medium (ISM) of its host galaxy. Over the last two decades, many efforts have been made to search for evidence of AGN feedback \citep{2017ApJ...834...30F, 2019ApJ...870...37D} and to quantify the importance of the feedback process \citep{2007ApJ...656..699D, 2017ApJ...844...37D, 2018MNRAS.473.4077P} in galaxy evolution. Nevertheless, no consensus has been reached on the effect of AGN feedback, which is associated with both suppressing and triggering star formation. In theory, massive and fast outflows can suppress star formation in the host galaxy by removing and heating the ISM. Near-IR IFU observations of $z \sim$ 1 $-$ 3 quasars have revealed a spatial anti-correlation between the location of the fast outflowing gas and the star formation in the host galaxy \citep{2012A&A...537L...8C, 2015ApJ...799...82C, 2016A&A...591A..28C}. However, only a small fraction of the outflowing gas may escape the host halo, while a large fraction may fall back onto the galaxy at later times \citep{2014A&A...568A..14A}. The energy and angular momentum carried by this gas may be injected into the halo, which can heat the halo gas and prevent it from cooling, as well as halt the accretion of fresh gas.
Moreover, not all forms of feedback suppress star formation. Fast outflows can also induce star formation in the galactic disk \citep{2013ApJ...772..112S} or in the galactic winds \citep{2012MNRAS.427.2998I} through the compression of molecular clouds. The wealth of observational data from the most advanced facilities points towards the complexity of the physical conditions in galactic winds. First of all, most of the AGNs in the local universe are hosted by spiral galaxies with non-negligible star formation. It is unclear whether AGN or star formation activity primarily drives the winds \citep{2005ARA&A..43..769V}. Secondly, different physical mechanisms, i.e. the radiative energy and/or the mechanical effects \citep{2008MNRAS.383..119D}, can also drive galactic outflows. It is often unclear which mechanism dominates the winds, and whether the different mechanisms can be disentangled. Identifying the primary driver and physical mechanism of galactic-scale gaseous winds requires progress in the quality of available data, which should include multi-wavelength information as well as spatially resolved spectra. In the case of spatially resolved spectroscopic observations, the geometry of the outflows may be helpful for assessing the dominant driving process. For example, \cite{2015ApJ...806...84L} showed that the base of the outflow in a low-luminosity Seyfert galaxy is coincident with the unresolved nucleus, supporting the hypothesis that the AGN is indeed the predominant ionizing source of the outflowing gas. \cite{2010ApJ...721..505R} found that gas in NGC 839 is concentrated in a bi-conical polar funnel, which can be interpreted as a shock-excited superwind. An integral field spectroscopic (IFS) study of two nearby luminous infrared galaxies (LIRGs), IC 1623 and NGC 3256, exhibited evidence of widespread shock excitation induced by ongoing merger activity \citep{2011ApJ...734...87R}.
Optical IFS observations of the Mice, a major merger between two massive gas-rich spirals, NGC 4676A and B, uncovered that both galaxies show bi-cones of highly ionized gas extending along their minor axes \citep{2014A&A...567A.132W}. As in the cases above, AGN, shock or merger activity can work as the driving mechanism for outflows. The common characteristic of these outflows is the presence of ionized gas in the bi-cones. Using long-slit spectra from the Hubble Space Telescope (HST), \cite{2005AJ....130..945D} presented detailed information on a face-on galaxy with an ionized bi-cone, NGC 4151, and proposed a model with a bi-conical NLR (see Figure 12 in \cite{2005AJ....130..945D}). The inclinations of the galaxy and the bi-cone are 20$^{\circ}$ and 45$^{\circ}$, respectively, so the intersection angle is only 25$^{\circ}$. \cite{2010AJ....140..577F} and \cite{2010AJ....139..871C} developed geometric models of the NLRs and the inner disks of MRK 573 and MRK 3. Both models uncovered the possibility that the inner disk can spatially traverse the cones on both sides. It can be inferred that a certain number of galaxies with bi-conical NLRs differ from the classical model of M82, where the bi-cone is exactly perpendicular to the disk. More than 50 years ago, \citet{1963ApJ...137.1005L} discovered evidence of an explosion at the center of M82. Prior to this, few works had discussed galactic winds; progress was restricted by the lack of comprehensive data over the full electromagnetic spectrum at comparable sensitivity and spatial resolution. Observations across the entire electromagnetic spectrum have become possible over the last fifty years, and the spatially resolved spectral maps provided by IFS enable us to study galactic winds in detail. Recently, based on the MaNGA survey, we caught an edge-on galaxy, SDSS J171359.00+333625.5, with AGN-driven winds.
The discovery of galactic winds with a small fibre bundle is great news for multi-bundle instruments like SAMI \citep{2012ApJ...761..169F}, MaNGA, as well as the next-generation instrument Hector. We discover clear bi-conical outflows in this case, as well as detailed gas and stellar kinematics, which offers an excellent opportunity to study fueling and feedback processes. The sample selection and the methodology of data processing are presented in Section \ref{sec:data}. We show the kinematics of gas and stars, as well as the ionization status in the bi-cone, in Section \ref{sec:properties}. Our interpretation of these results and two unusual properties of this galaxy are discussed in Section \ref{sec:discussion}. Finally, a summary is given in Section \ref{sec:summary}. Throughout this paper, we adopt a set of cosmological parameters as follows: $H_0=70\,{\rm km~s}^{-1}\,{\rm Mpc}^{-1}$, $\Omega_m=0.30$, $\Omega_{\Lambda}=0.70$. \section{Data Analysis} \label{sec:data} MaNGA is one of three core programs in the fourth-generation Sloan Digital Sky Survey (SDSS-IV) \citep{2015ApJ...798....7B, 2016AJ....152...83L}. MaNGA employs the Baryon Oscillation Spectroscopic Survey (BOSS) spectrographs \citep{2013AJ....146...32S} on the 2.5m Sloan Foundation Telescope \citep{2006AJ....131.2332G}. This survey aims to conduct IFU observations of a representative sample of about 10,000 nearby galaxies with stellar mass $\log (M_{\ast}/M_{\odot}$) $\geq$ 9 and redshift $0.01 < z < 0.15$ \citep{2017AJ....154...28B}. The two dual-channel BOSS spectrographs \citep{2013AJ....146...32S} provide simultaneous wavelength coverage from 3600 to 10,000$\rm \AA$. The spectral resolution is $\sim$2000, allowing measurements of all strong emission line species from {\hbox{[O\,{\sc ii}]}}$\lambda$3727 to {\hbox{[S\,{\sc ii}]}}$\lambda$6731.
The MaNGA sample and data products used here were drawn from the SDSS Data Release 15 (DR15) \citep{2019ApJS..240...23A}, which includes $\sim$4633 galaxies observed through July 2016 (the first three years of the survey). Using the demarcation\\ $\log$({\hbox{[O\,{\sc iii}]}}$\lambda5007/\rm H\beta$)$=$0.61$/$($\log$({\hbox{[N\,{\sc ii}]}}$\lambda6583/\rm H\alpha$)$-$0.47)$+$1.19 \citep{2001ApJ...556..121K}, we first select the AGN galaxies from the MaNGA sample. After that we apply the criterion\\ $\log$({\hbox{[O\,{\sc iii}]}}$\lambda5007/\rm H\beta$)$=$1.05$\log$({\hbox{[N\,{\sc ii}]}}$\lambda6583/\rm H\alpha$)$+$0.45 \citep{2007MNRAS.382.1415S} to further select 113 Seyfert galaxies. We then inspect the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and H$\alpha$ equivalent width (EQW) maps of these Seyfert galaxies by eye and find that 13 of them have bi-conical features in their {\hbox{[O\,{\sc iii}]}}$\lambda$5007 EQW maps. Among these 13 galaxies, SDSS J171359.00+333625.5 is a Seyfert 2 galaxy at a spectroscopic redshift of $z \sim$ 0.039, and it has the most prominent bi-cone feature as well as interesting gas and stellar kinematics. In this paper we focus on this object and explore its unusual properties. The MaNGA data reduction pipeline (DRP) \citep{2016AJ....152...83L} provides spectral datacubes. We reanalyze the DRP spectra of this object using the principal component analysis (PCA) method described in \cite{2016MNRAS.463..913J}. We directly use the principal components (PCs) and the library of model spectra given by \cite{2012MNRAS.421..314C} to derive the velocity field of stars. We shift the best-fitting model from $-$1000 km s$^{-1}$ to 1000 km s$^{-1}$ in steps of 2 km s$^{-1}$. For each step, we calculate the reduced $\chi^{2}$ between the best-fitting model and the observed spectrum. The stellar velocity at a given spaxel is determined by the shift with the lowest $\chi^{2}$ value.
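The brute-force template-shifting step described above can be sketched as follows (a simplified stand-in for the PCA-based fit; the function name and synthetic setup are ours, not from the MaNGA pipeline):

```python
import numpy as np

def velocity_by_chi2_scan(wave, spec, err, model,
                          v_min=-1000.0, v_max=1000.0, dv=2.0,
                          c=2.998e5):
    """Stellar velocity via a brute-force chi^2 scan of a Doppler-shifted
    template over [v_min, v_max] km/s in steps of dv, as in the text."""
    velocities = np.arange(v_min, v_max + dv, dv)
    chi2 = np.empty_like(velocities)
    for k, v in enumerate(velocities):
        # Doppler-shift the template and resample onto the observed grid
        shifted = np.interp(wave, wave * (1.0 + v / c), model)
        chi2[k] = np.sum(((spec - shifted) / err) ** 2)
    return velocities[np.argmin(chi2)]
```

With a noiseless synthetic absorption spectrum, the scan recovers the input velocity to within the 2 km s$^{-1}$ step size.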
After that, we model the stellar continuum of each spectrum using the BC03 \citep{2003MNRAS.344.1000B} stellar population synthesis models and separate the stellar continuum and absorption lines from the nebular emission lines. The best-fitting continuum model is then subtracted from the observed spectrum to give the pure emission line spectrum. Each emission line is then fitted with a single Gaussian component using MPFIT \citep{2009ASPC..411..251M}. We include the {\hbox{[O\,{\sc ii}]}}$\lambda$3727, {\hbox{H$\beta$}}, {\hbox{[O\,{\sc iii}]}}$\lambda\lambda$4959,5007, {\hbox{H$\alpha$}}, {\hbox{[N\,{\sc ii}]}}$\lambda\lambda$6548,6584, and {\hbox{[S\,{\sc ii}]}}$\lambda\lambda$6717,6731 emission lines in our line fitting. The centers and widths of these lines are not tied to each other. We limit the shift of the line center to the range [$-$300, 300] km s$^{-1}$ after redshift correction, and the line widths to the range [0, 500] km s$^{-1}$. \section{Properties of SDSS J171359.00+333625.5} \label{sec:properties} \subsection{Kinematics} \subsubsection{\rm Counter-rotating Gas and Stellar Components} \label{Subsec:counter-rotator} In Figure \ref{fig1}, we show the gas and stellar kinematics of SDSS J171359.00+333625.5. Figure \ref{fig1}a is the SDSS $g,r,i$ color image. Figure \ref{fig1}b shows the stellar velocity field for the spaxels with spectral signal-to-noise ratio (S/N) larger than 2 per pixel. Figure \ref{fig1}c shows the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 velocity field for spaxels with {\hbox{[O\,{\sc iii}]}}$\lambda$5007 S/N larger than 3. Figure \ref{fig1}d shows the {\hbox{H$\alpha$}} velocity field for spaxels with H$\alpha$ emission line S/N larger than 3. The position angles of the gas and stellar velocity fields are measured using the IDL package \textsc{\small KINEMETRY} \citep{2006MNRAS.366..787K}.
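The bounded single-Gaussian emission-line fit described in the data analysis can be sketched as below (an illustrative sketch using scipy's `curve_fit` in place of MPFIT; the parameterization is ours):

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 2.998e5  # speed of light in km/s

def gaussian_line(wave, amp, v_cen, sigma_v, lam0):
    """Single Gaussian emission line, parameterized by a velocity shift
    v_cen [km/s] and velocity width sigma_v [km/s] about rest wavelength lam0."""
    mu = lam0 * (1.0 + v_cen / C_KMS)
    sig = lam0 * sigma_v / C_KMS
    return amp * np.exp(-0.5 * ((wave - mu) / sig) ** 2)

def fit_line(wave, flux, lam0):
    """One-Gaussian fit with the bounds used in the text:
    line-center shift in [-300, 300] km/s, width in (0, 500] km/s."""
    f = lambda w, amp, v, s: gaussian_line(w, amp, v, s, lam0)
    p0 = [flux.max(), 0.0, 100.0]
    popt, _ = curve_fit(f, wave, flux, p0=p0,
                        bounds=([0.0, -300.0, 1.0],
                                [np.inf, 300.0, 500.0]))
    return popt  # (amp, v_cen, sigma_v)
```

Fitting in velocity space makes the [$-$300, 300] km s$^{-1}$ and [0, 500] km s$^{-1}$ limits directly enforceable as box bounds.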
The position angle (PA) is defined as the counter-clockwise angle between north and a line that bisects the velocity field of gas or stars on the receding side. The black solid lines in Figure \ref{fig1}b and \ref{fig1}c mark the PAs of gas and stellar velocity fields. The kinematic misalignment between gas and stars is defined as $\Delta$PA = $|$PA$_{\rm gas}$ - PA$_{\ast}|$, where PA$_{\rm gas}$ is the PA of ionized gas and PA$_{\ast}$ is the PA of stars. It turns out that SDSS J171359.00+333625.5 is a counter-rotator with $\Delta$PA $\sim$ 179.8$^{\circ}$. Comparing the rotation in the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and H$\alpha$ velocity fields, we find that the H$\alpha$ tends to trace the gaseous disk, while the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 extends to the bi-cone region. We then build a pure circular disk rotation field expressed as \begin{equation} V(R,\psi) = V_{0} + V_{c}(R)\sin i\cos \psi, \label{eqvr} \end{equation} where $R$ is the radius of a circular ring in the galactic plane, $V_{0}$ is the systemic velocity, $V_{c}$ is the circular velocity at radius $R$ and $\psi$ is the azimuthal angle measured from the major-axis in the galactic plane \citep{2006MNRAS.366..787K} with a coverage of [-$\pi/2$, $\pi/2$]. $V_{0}$ approaches zero after the redshift correction, and $V_{c}(R)$ is defined as the H$\alpha$ rotation velocity along the major-axis. Figure \ref{fig1}e shows the circular disk model. We subtract the disk model from {\hbox{[O\,{\sc iii}]}}$\lambda$5007 velocity field. The residuals are shown in Figure \ref{fig1}f. The inclination angle ($i$) is calculated using the algorithm given by \cite{1926ApJ....64..321H}, in which the observed axis ratio is compared to the intrinsic value $q_{0}$. We use \textsc{\small KINEMETRY} to fit the galaxy axis ratio ($q$), and the result is $q = 0.28$. 
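As a concrete illustration, the projected circular-disk model of Equation (\ref{eqvr}) and the kinematic misalignment can be written as (a minimal sketch; the rotation-curve callable is a user-supplied assumption, e.g. taken from the H$\alpha$ major-axis rotation curve):

```python
import math

def disk_velocity(R, psi, v0, vc, inc_deg):
    """Projected line-of-sight velocity of a pure circular disk,
    V(R, psi) = V0 + Vc(R) * sin(i) * cos(psi).  `vc` is any callable
    returning the circular velocity at radius R."""
    return v0 + vc(R) * math.sin(math.radians(inc_deg)) * math.cos(psi)

def kinematic_misalignment(pa_gas_deg, pa_stars_deg):
    """Delta PA = |PA_gas - PA_stars|, folded into [0, 180] degrees."""
    dpa = abs(pa_gas_deg - pa_stars_deg) % 360.0
    return 360.0 - dpa if dpa > 180.0 else dpa
```

Along the major axis ($\psi = 0$) the model reduces to $V_0 + V_c \sin i$, while on the minor axis ($\psi = \pi/2$) it returns the systemic velocity, and a misalignment near 180$^{\circ}$ flags a counter-rotator.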
The relation between the inclination angle and the axial ratio is given by \begin{equation} \cos^{2}i = \frac{q^{2} - q_{0}^{2}}{1 - q_{0}^{2}} \label{eqinc} \end{equation} \citep{1926ApJ....64..321H}, where $q_{0}$ = 0.2. From this equation we estimate that SDSS J171359.00+333625.5 is a nearly edge-on galaxy with $i = 78^{\circ}$. \cite{2016NatCo...713269C} identified nine blue star-forming counter-rotators from MaNGA, whose gas and stars have $\Delta$PA $>$ 150$^{\circ}$. The central regions of blue counter-rotators have younger stellar populations and more intense, ongoing star formation than their outskirts \citep{2019ApJ...882..145B}. All these phenomena suggest a picture in which the progenitor accretes counter-rotating gas from the cosmic web or from gas-rich dwarf galaxies, followed by a redistribution of angular momentum through gas-gas collisions between the external and pre-existing gas. Such collisions greatly accelerate gas inflows, leading to fast, centrally concentrated star formation. As the gas flows into the central regions and feeds the central black hole, we expect to see activation of the AGN. This is the case for SDSS J171359.00+333625.5, a Seyfert 2 galaxy with a bolometric AGN luminosity of 1.19$\times$10$^{44}$ erg s$^{-1}$. \subsubsection{\rm Different {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and H$\alpha$ Kinematics} \label{Different [OIII] and Ha kinematics} Figures \ref{fig1}c and \ref{fig1}d show evidence that H$\alpha$ rotates in the same direction as, but faster than, {\hbox{[O\,{\sc iii}]}}$\lambda$5007. To quantify the difference in rotation velocity between {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and H$\alpha$, we show the residual velocity (V$_{\rm [OIII]}$ $-$ V$_{\rm H\alpha}$) in Figure \ref{fig2}a. It is clear that this residual map is not a random fluctuation, but rather resembles a regular rotation.
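A quick numerical check of the inclination estimate for the measured axis ratio $q = 0.28$ can be sketched as:

```python
import math

def inclination_deg(q, q0=0.2):
    """Disk inclination from the observed axis ratio q (Hubble 1926):
    cos^2 i = (q^2 - q0^2) / (1 - q0^2), with intrinsic thickness q0."""
    cos2 = (q * q - q0 * q0) / (1.0 - q0 * q0)
    return math.degrees(math.acos(math.sqrt(max(cos2, 0.0))))
```

For $q = 0.28$ and $q_0 = 0.2$ this gives $\cos^2 i = 0.04$, i.e. $i \approx 78^{\circ}$, the nearly edge-on value quoted above; $q = q_0$ corresponds to exactly edge-on and $q = 1$ to face-on.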
The rotation evidence can also be found in Figure \ref{fig1}f (at least along the major-axis), which shows the residuals between the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 rotation and the circular disk model. The first impression is that the pattern of the residuals is consistent with the kinematics of the stars. To minimize the influence of the gas kinematics in the bi-conical outflows, we focus on the spaxels located along the major axis. Figure \ref{fig2}b shows the relation between V$_{\rm [OIII]}$ and V$_{\rm H\alpha}$ along the major-axis, and it is obvious from this panel that the velocity of H$\alpha$ is higher than that of {\hbox{[O\,{\sc iii}]}}$\lambda$5007 in the same spaxels. We show the correlation between V$_{\rm Stars}$ and (V$_{\rm [OIII]}$ $-$ V$_{\rm H\alpha}$) along the major-axis in Figure \ref{fig2}c. The blue dots (with Y-position in the range of $-$3 $\sim$ 4 arcsec) roughly follow the one-to-one correlation. Beyond this range (black dots), the difference between {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and H$\alpha$ decreases as the radius increases. In Figure \ref{fig2}d, we show (V$_{\rm [OIII]}$ $-$ V$_{\rm H\alpha}$) (black dots) and V$_{\rm Stars}$ (red dots) as a function of Y-position for clarity. In Figure \ref{fig2}d, we can see that the stellar velocity does not change smoothly with radius: it has a turnover at $|\rm Y| \sim 5$ arcsec, which is roughly consistent with the turnover position of the difference (V$_{\rm [OIII]}$ $-$ V$_{\rm H\alpha}$). In Figures \ref{fig3}a and \ref{fig3}b, we show the velocity dispersion fields of {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and H$\alpha$, respectively. We find that the H$\alpha$ velocity dispersion peaks at the galaxy center while that of {\hbox{[O\,{\sc iii}]}}$\lambda$5007 peaks in the bi-cone region. Figure \ref{fig3}c shows the gas velocity dispersion as a function of the Y-position along the major-axis.
Again, we find different behaviors for {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and H$\alpha$: the H$\alpha$ velocity dispersion peaks at the center and decreases as the radius increases, while the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 velocity dispersion remains roughly constant at 100 km s$^{-1}$ along the major-axis. We understand that H$\alpha$ traces the galaxy potential well (with a bulge in the center and a disk in the outskirts), but the behavior of {\hbox{[O\,{\sc iii}]}}$\lambda$5007 is really weird in both velocity and velocity dispersion. To verify our conclusions, we check the raw spectra from each MaNGA fiber. This verification confirms that interpolation had a negligible effect on both the centers and the widths of the emission lines, so the difference between {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and {\hbox{H$\alpha$}} is credible. Besides, we fit {\hbox{[O\,{\sc iii}]}}$\lambda$5007 with three schemes, and the fitting results extracted from four pixels are plotted in Figure \ref{fig4} with the reduced chi-squared values labelled at the top of each panel. The specific positions of the four pixels are marked by red squares in the first column of Figure \ref{fig4}. In the second column, we fit a Gaussian (blue) to {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and fix the line-center velocity to that of {\hbox{H$\alpha$}}. From these four panels we find that the difference between {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and {\hbox{H$\alpha$}} in rotation velocity is apparent, and the chi-squared values ($\chi_{1}^{2}$) are much larger than in the other two schemes ($\chi_{2}^{2}$ \& $\chi_{3}^{2}$). In the third column, we fit {\hbox{[O\,{\sc iii}]}}$\lambda$5007 with double Gaussians and force the two line centers to share the rotation velocities of {\hbox{H$\alpha$}} (blue) and of the stars (green), respectively.
If the superposed profiles (orange) optimally depicted the data, then the difference between {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and {\hbox{H$\alpha$}} could be the result of a collision between original and accreted clumps of gas. The last column shows a Gaussian fit to the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 line (red), with the only requirement that the line-center displacement ranges between $-$300 and 300 km s$^{-1}$. Comparing $\chi_{2}^{2}$ and $\chi_{3}^{2}$, we can tell that the best scheme is the untied single-Gaussian fit (in the last column), which means that the difference between {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and {\hbox{H$\alpha$}} is caused by other kinematic processes rather than by gas collision. We further check the behavior of {\hbox{[O\,{\sc ii}]}}$\lambda\lambda$3726,3729, {\hbox{H$\beta$}}, {\hbox{[O\,{\sc iii}]}}$\lambda$4959, {\hbox{[N\,{\sc ii}]}}$\lambda\lambda$6548,6584 and {\hbox{[S\,{\sc ii}]}}$\lambda\lambda$6716,6731, and find that {\hbox{[O\,{\sc ii}]}}$\lambda\lambda$3726,3729 and {\hbox{[O\,{\sc iii}]}}$\lambda$4959 have velocities and velocity dispersions similar to {\hbox{[O\,{\sc iii}]}}$\lambda$5007, while {\hbox{H$\beta$}}, {\hbox{[N\,{\sc ii}]}}$\lambda\lambda$6548,6584 and {\hbox{[S\,{\sc ii}]}}$\lambda\lambda$6716,6731 closely follow H$\alpha$. In summary, the kinematics of oxygen, including the rotation velocity and velocity dispersion, is totally different from that of the other elements like hydrogen, nitrogen and sulfur. On the one hand, oxygen rotates more slowly than the other elements, although they share the same rotation direction; on the other hand, hydrogen, nitrogen and sulfur follow the galaxy potential well, with a central bulge plus an outer disk, while oxygen does not.
\subsection{The Bi-conical, Narrow-line Region} \label{subsec:bi-cone} In Figure \ref{fig1}c, we find bi-cone-like redshifted regions (marked by the grey dashed hyperbola) in the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 velocity field. In this section, we further confirm the bi-cone features from both the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 EQW map and the BPT \citep{1981PASP...93....5B} diagram. Figure \ref{fig5}b shows the BPT diagram for the spaxels whose S/Ns in {\hbox{H$\beta$}}, {\hbox{[O\,{\sc iii}]}}$\lambda$5007, {\hbox{H$\alpha$}} and {\hbox{[N\,{\sc ii}]}}$\lambda$6583 are all larger than 2. \cite{1981PASP...93....5B} demonstrated that it is possible to distinguish AGNs from normal star-forming galaxies by considering the intensity ratios of two pairs of relatively strong emission lines. The spaxels located in the AGN region are color-coded by their perpendicular distance from the \cite{2001ApJ...556..121K} line (black solid line in Figure \ref{fig5}b). Alternatively, we can separate the shock from the AGN emission using the demarcation line defined by \cite{2010ApJ...711..818S}. Figure \ref{fig5}d shows the resolved BPT diagram for SDSS J171359.00+333625.5. The blue pixels represent the star-forming region, the green pixels mark the composite region with contributions from both AGN/shock and star formation, the yellow pixels trace the shock signal, and the orange pixels represent the AGN region. It is clear from this resolved BPT diagram that the disk of this galaxy is star-forming (blue). The bi-cone region, dominated by the AGN radiation (orange) with weak contamination from shocks (yellow), is located perpendicular to the disk. The interface between the galactic disk and the bi-cone falls in the composite region (green).
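The spaxel classification behind the resolved BPT map can be sketched as follows (the Kewley 2001 and Schawinski 2007 lines are the ones quoted in the data-analysis section; the Kauffmann 2003 line used for the star-forming/composite boundary is our assumption, since the text does not specify it):

```python
import math

def bpt_class(n2ha, o3hb):
    """Classify a spaxel on the [NII] BPT plane.
    n2ha = [NII]6583/Halpha flux ratio, o3hb = [OIII]5007/Hbeta flux ratio."""
    x, y = math.log10(n2ha), math.log10(o3hb)
    # Demarcation curves; points right of each vertical asymptote
    # automatically lie above the corresponding curve.
    kewley = 0.61 / (x - 0.47) + 1.19 if x < 0.47 else -math.inf
    kauffmann = 0.61 / (x - 0.05) + 1.3 if x < 0.05 else -math.inf
    if y < kauffmann:
        return "star-forming"
    if y < kewley:
        return "composite"
    # Above the Kewley line: Seyfert (AGN) vs LINER/shock split,
    # using the Schawinski et al. (2007) line quoted in the text.
    return "AGN" if y > 1.05 * x + 0.45 else "LINER/shock"
```

Applied pixel by pixel, this reproduces the four regions of the resolved diagram (star-forming disk, composite interface, shock, AGN bi-cone).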
However, we keep in mind that {\hbox{[O\,{\sc iii}]}}$\lambda$5007 has totally different kinematics (in both velocity and velocity dispersion) from the other three BPT lines (H$\alpha$, {\hbox{[N\,{\sc ii}]}} and {\hbox{[S\,{\sc ii}]}}), which may indicate that {\hbox{[O\,{\sc iii}]}} originates from different regions. In Figure \ref{fig5}c, we show the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 EQW map. The bi-cone region (marked by the black dashed hyperbola) with higher {\hbox{[O\,{\sc iii}]}}$\lambda$5007 EQW is obvious in this panel, and this bi-cone region is larger than the kinematic bi-cone outflows (grey dashed hyperbola from Figure \ref{fig1}c). We overplot the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 EQW map on the SDSS $g,r,i$ color image to give an idea of the scale of the bi-cone, see Figure \ref{fig5}a. The estimated scale of the bi-cone region is about 8.26 kpc, while the effective radius of SDSS J171359.00+333625.5 is only 3.99 kpc. This means that the bi-cone extends far beyond the galaxy disk, which also indicates that the contribution of star formation to the ionization of the bi-cone is negligible. Combining the strong {\hbox{[O\,{\sc iii}]}}$\lambda$5007 emission region in Figure \ref{fig5}c and the AGN region (orange) in Figure \ref{fig5}d (see the black hyperbolas as guide lines), we conclude that the gas in the bi-cone outflows is primarily ionized by the central active black hole with negligible contributions from star formation or shocks. Taking {\hbox{[O\,{\sc iii}]}}$\lambda$5007 as a probe of the NLR, we estimate that the opening angle of the cones is close to 80$^{\circ}$. Figure \ref{fig5}c also shows that the two cones in this galaxy are not symmetric with respect to the galactic mid-plane: the {\hbox{[O\,{\sc iii}]}} EQW in the eastern cone is stronger. Considering that this is not a totally edge-on galaxy, with an inclination angle of 78$^{\circ}$, we tend to suggest that the eastern cone is on the front side of the disk.
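For reference, physical sizes such as the bi-cone extent follow from the adopted flat $\Lambda$CDM cosmology ($H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m = 0.3$, $\Omega_\Lambda = 0.7$). A self-contained sketch of the arcsec-to-kpc conversion (our own simple numerical integration, not the code used in the paper) is:

```python
import math

def kpc_per_arcsec(z, H0=70.0, Om=0.30, OL=0.70, n=10000):
    """Physical scale at redshift z for the flat LCDM cosmology
    adopted in the paper."""
    c = 2.998e5  # km/s
    # comoving distance: trapezoidal integration of c dz' / H(z')
    dz = z / n
    E = lambda zz: math.sqrt(Om * (1.0 + zz) ** 3 + OL)
    dc = sum(0.5 * dz * (1.0 / E(i * dz) + 1.0 / E((i + 1) * dz))
             for i in range(n)) * c / H0            # Mpc
    da = dc / (1.0 + z)                             # angular-diameter distance
    return da * 1000.0 * math.pi / (180.0 * 3600.0)  # kpc per arcsec
```

At $z \approx 0.039$ this gives roughly 0.77 kpc per arcsec, so an 8.26 kpc bi-cone corresponds to about 10--11 arcsec on the sky.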
In this case, we observe the radiation from the eastern cone directly, while the emission from the western cone is weakened after travelling through the clumps of gas and dust in the galaxy disk. The difference in strength between {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and the permitted emission lines, such as {\hbox{H$\beta$}} and {\hbox{H$\alpha$}}, is primarily contributed by the disk. The extinction of the disk can be calculated using the flux ratio {\hbox{H$\alpha$}}$/${\hbox{H$\beta$}}. It turns out that the {\hbox{H$\alpha$}}$/${\hbox{H$\beta$}} ratio in the eastern part of the disk is higher than in the west, which supports the assumption that the front cone is in the east. \section{Discussion} \label{sec:discussion} Figure \ref{fig6}a shows a deep image (about two magnitudes deeper than the SDSS $g$-band observation) of SDSS J171359.00+333625.5 obtained with the 2.3-m Bok Telescope. In this panel, we find five blobs (marked by red squares) around our target. Two of these blobs are located within the MaNGA bundle (labelled by $'$2 4$'$), and their spectra show that their redshifts are similar to that of SDSS J171359.00+333625.5. Figure \ref{fig6}b shows the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 velocity field, in which some blueshifted {\hbox{[O\,{\sc iii}]}}$\lambda$5007 features overlap with two of the blobs (marked by red stars). There is also a blueshifted region without any obvious counterpart blob outside the eastern cone, at X-positions smaller than $-$7 arcsec. The {\hbox{[O\,{\sc iii}]}}$\lambda$5007 emission along the interface between this blueshifted region and the redshifted eastern cone shows double-peaked structures. Galaxies with counter-rotating gas and stars are key manifestations of regulation by external processes, e.g. major mergers, minor mergers or gas accretion, which could bring counter-rotating gas into the galaxies.
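The disk-extinction estimate from the H$\alpha$/H$\beta$ flux ratio mentioned above can be sketched as follows (assuming the Case-B intrinsic ratio of 2.86 and illustrative Calzetti-like $k$-values; neither is specified in the text):

```python
import math

def ebv_from_balmer(ha_hb, intrinsic=2.86, k_ha=2.53, k_hb=3.61):
    """Color excess E(B-V) from the Balmer decrement:
    E(B-V) = 2.5 / (k_Hb - k_Ha) * log10[(Ha/Hb) / (Ha/Hb)_intrinsic].
    The k-values here are illustrative assumptions, not from the paper."""
    if ha_hb <= intrinsic:
        return 0.0  # decrement consistent with no measurable extinction
    return 2.5 / (k_hb - k_ha) * math.log10(ha_hb / intrinsic)
```

A higher observed decrement in the eastern disk then directly translates into a higher $E(B-V)$ there, consistent with the front-cone argument above.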
The existence of a large number of blobs around SDSS J171359.00+333625.5 should be the result of interaction, which also leads to the complicated kinematics of this galaxy. SDSS J171359.00+333625.5 hosts abundant ionized gas accompanied by the cycle of feeding and feedback, and is thus a great laboratory for understanding these physical processes in detail. However, with the MaNGA data alone, we do not plan to over-interpret the observational results. Further observations, such as higher-S/N, spatially resolved IFU data from VLT/MUSE and cold gas detections from ALMA, are required to uncover the mysteries of this galaxy. In this section, we summarise the two most unusual properties of this galaxy and discuss the possible explanations. \subsection{Why is the Gas in the Bi-cone Region Redshifted?} Unexpectedly, the gas (traced by {\hbox{[O\,{\sc iii}]}}$\lambda$5007, marked by the grey dashed hyperbola in Figure \ref{fig1}c) in both the eastern and western cones is redshifted. In the typical AGN-driven wind picture, we would expect a blueshifted cone in which the gas is moving towards the Earth and a redshifted cone in which the gas is moving away. The two redshifted cones are also shown in Figure \ref{fig1}f (marked by the black dashed hyperbola), where the disk model is subtracted from the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 velocity field. In order to understand the origin of the redshifted kinematics in the two cones, we go through the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 structures spaxel by spaxel, finding evidence that the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 emission lines have double-peaked structures in 10 $\sim$ 20 spaxels between the grey and black dashed hyperbolas in Figure \ref{fig1}c. However, neither the resolution nor the S/N of these spectra is high enough for us to separate the two kinematic components robustly. Thus, we can only list the possible explanations in this section.
One possible explanation is that the projected velocity field of the gas is a combination of both galactic winds in the bi-cone and the rotation of the disk. Figure \ref{fig7}a shows a schematic drawing of winds along the bi-conical surface centered on the nucleus of a disk-shaped galaxy \citep{1990ApJS...74..833H}, whose spatial structure is similar to M82 \citep{1988Natur.334...43B}. The cone's axis (dashed horizontal line) extends along the minor-axis of the disk. In the first quadrant (the top half of the western cone), the projected velocity of the winds along the line-of-sight is redshifted, while the disk rotation leads to a blueshifted velocity along the line-of-sight. We find redshifted kinematics because this region is wind-dominated, while the blueshifted disk hides behind the cone. The redshifted cone in the second quadrant (top-east) can be explained in a similar way. In the third and fourth quadrants (the bottom-east and bottom-west cones), the projected velocity is a combination of redshifted disk rotation and blueshifted winds, and it is disk-dominated in these two quadrants since the disk is not blocked by the winds. \cite{2015ApJ...799...83C} studied the outflows in NGC 4151 (quite similar to SDSS J171359.00+333625.5 in the spatial structure of its bi-cone outflows and disk), and concluded that the peak mass outflow rate in the NLR is 3.0 M$_{\sun}$ yr$^{-1}$. The gas inflow rate should be much larger than this value, since most of the gas has formed (or is forming) stars in the disk, while another part has flowed into the central black hole. Thus, we expect strong gas inflows in SDSS J171359.00+333625.5, just as in NGC 4151. The counter-rotating gas and stellar kinematics also support this expectation, since the collision between the accreted counter-rotating gas and the pre-existing gas will lead to a re-distribution of angular momentum and trigger strong gas inflows \citep{2016NatCo...713269C, 2016MNRAS.463..913J}.
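The qualitative wind-plus-disk picture above can be made explicit with a toy line-of-sight decomposition (the geometry and angle conventions here are our illustrative assumptions, not a fit to the data):

```python
import math

def los_velocity(v_wind, theta_wind_deg, v_rot, psi, inc_deg):
    """Toy LOS velocity where a radial wind on the bi-cone surface
    overlaps the rotating disk in projection.  theta_wind_deg is the
    angle between the local wind direction and the line of sight;
    positive returned values are redshifted."""
    v_wind_los = v_wind * math.cos(math.radians(theta_wind_deg))
    v_disk_los = v_rot * math.sin(math.radians(inc_deg)) * math.cos(psi)
    return v_wind_los + v_disk_los
```

Depending on the quadrant, the two terms add or partially cancel, so whichever component dominates a given sightline sets the observed sign of the velocity, as in the discussion above.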
Figure \ref{fig6}a gives evidence that SDSS J171359.00+333625.5 is a complicated system with not only galactic wind and disk components, but also surrounding blobs that could be the source of the accreted gas. Considering the existence of strong inflows in this galaxy, another possibility is that a great amount of gas in the disk is being efficiently accreted onto the central black hole, while the central AGN feedback is blowing the ionized gas out along the bi-cone. A geometric model and its plan view are given in Figures \ref{fig7}b and \ref{fig7}c. An important point of this model is that AGN-driven outflows are not necessarily perpendicular to the disk (as illustrated in the latter part of the Introduction). Generally speaking, the {\hbox{[O\,{\sc iii}]}}$\lambda$5007 emission in the east is mainly contributed by the front cone (dark-red cone in Figure \ref{fig7}b), and all the winds in this region are redshifted. The blueshifted inflows along the disk are too weak to affect the line center of {\hbox{[O\,{\sc iii}]}}$\lambda$5007, but they can produce the double-peaked structures in the emission lines. The emission in the west is mainly contributed by the inflowing gas along the disk, which is close to the line-of-sight (lower grey arrow in Figure \ref{fig7}c). Another question we need to figure out is why the blueshifted winds seem to have a negligible effect on the projected velocity field in the west. The $V$-band extinction reveals that the disk is optically thick in this region, so the radiation from the western cone is greatly weakened. \subsection{Is the Special Kinematics of the Oxygen Element a Mystery?} As discussed in Section \ref{Different [OIII] and Ha kinematics}, the behavior of {\hbox{[O\,{\sc iii}]}}$\lambda$5007 is really weird in both velocity and velocity dispersion.
Figures \ref{fig1}c and \ref{fig1}d show that {\hbox{[O\,{\sc iii}]}}$\lambda$5007 and {\hbox{H$\alpha$}} share the same rotation direction, but {\hbox{H$\alpha$}} rotates faster than {\hbox{[O\,{\sc iii}]}}$\lambda$5007. As we know, {\hbox{[O\,{\sc iii}]}}$\lambda$5007 is a forbidden line which is collisionally excited and can only exist in low-density regions, while the {\hbox{H$\alpha$}} emission is photo-ionization excited. If this were the origin of the special kinematics of oxygen, we would expect to find similar behavior in the other forbidden lines, i.e. {\hbox{[N\,{\sc ii}]}}$\lambda\lambda$6548,6584 and {\hbox{[S\,{\sc ii}]}}$\lambda\lambda$6716,6731. However, only {\hbox{[O\,{\sc ii}]}}$\lambda\lambda$3726,3729 and {\hbox{[O\,{\sc iii}]}}$\lambda$4959 share similar velocities and velocity dispersions with {\hbox{[O\,{\sc iii}]}}$\lambda$5007, while {\hbox{H$\beta$}}, {\hbox{[N\,{\sc ii}]}}$\lambda\lambda$6548,6584 and {\hbox{[S\,{\sc ii}]}}$\lambda\lambda$6716,6731 have velocities and velocity dispersions similar to {\hbox{H$\alpha$}}. The second possibility is that {\hbox{[O\,{\sc iii}]}}$\lambda$5007 is more sensitive to shocks than {\hbox{H$\alpha$}}. However, we keep in mind that we are presenting the velocity and velocity dispersion along the major-axis, where we do not expect strong shocks generated by the interaction between the winds and the interstellar medium. We have to admit that we do not quite understand the origin of the weird kinematic behavior of {\hbox{[O\,{\sc iii}]}}$\lambda$5007. \section{Summary} \label{sec:summary} In this paper, we study the kinematic properties of a local edge-on Seyfert galaxy, SDSS J171359.00+333625.5, with an X-shaped bi-cone ionized by the AGN. The spatially resolved data from the MaNGA survey provide us with an opportunity to uncover the specific kinematics of feeding and feedback. $\bullet$ The gas and stellar components are counter-rotating with respect to each other.
Collisions between the pre-existing and accreted gas can redistribute the angular momentum and lead to the gas inflow. $\bullet$ The kinematics of oxygen traced by the {\hbox{[O\,{\sc ii}]}}$\lambda\lambda$3726,3729 and {\hbox{[O\,{\sc iii}]}}$\lambda\lambda$4959,5007 lines, including the gas rotation velocity and velocity dispersion, is totally different from that of the other elements, such as hydrogen (as traced by {\hbox{H$\beta$}} \& {\hbox{H$\alpha$}}), nitrogen (as traced by {\hbox{[N\,{\sc ii}]}}$\lambda\lambda$6548,6584) and sulfur (as traced by {\hbox{[S\,{\sc ii}]}}$\lambda\lambda$6716,6731). We notice that oxygen rotates more slowly than the other elements. On the other hand, nitrogen, sulfur and hydrogen follow the galaxy potential, with a velocity dispersion that is highest in the central bulge region and decreases with radius towards the disk region, whereas the velocity dispersion of oxygen remains roughly constant along the major axis. $\bullet$ The distribution of the {\hbox{[O\,{\sc iii}]}} EQW is completely consistent with the X-shaped bi-cone, suggesting that the gas is primarily ionized by the central active black hole. The ionized gas in the bi-cone region is receding with respect to the center. $\bullet$ Two possible models can explain the redshift in the bi-cone region. One is that the projected velocity field of the gas is a combination of the winds in the bi-cone and the rotation of the disk. The other is that a great amount of gas in the disk is being efficiently accreted onto the central black hole, while the blueshifted winds in the west are totally obscured by the disk.\\ {\noindent \bf Acknowledgements} Y.~C. acknowledges support from the National Key R\&D Program of China (No. 2017YFA0402700) and the National Natural Science Foundation of China (NSFC grants 11573013, 11733002, 11922302). This work is also supported by the National Natural Science Foundation of China (Nos.
11873032, 11433005, 11173016) and by the Research Fund for the Doctoral Program of Higher Education of China (No. 20133207110006). DB is partly supported by grant RSCF 19-12-00145. Funding for the Sloan Digital Sky Survey IV has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions. SDSS-IV acknowledges support and resources from the Center for High-Performance Computing at the University of Utah. The SDSS web site is www.sdss.org. SDSS-IV is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrof\'{i}sica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut f\"{u}r Astrophysik Potsdam (AIP), Max-Planck-Institut f\"{u}r Astronomie (MPIA Heidelberg), Max-Planck-Institut f\"{u}r Astrophysik (MPA Garching), Max-Planck-Institut f\"{u}r Extraterrestrische Physik (MPE), National Astronomical Observatory of China, New Mexico State University, New York University, University of Notre Dame, Observat\'{o}rio Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Aut\'{o}noma de M\'{e}xico, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
\section{Introduction} Much effort has been devoted to the study of high Reynolds number turbulence, assuming the properties of local homogeneity and isotropy. Under these assumptions it has been shown~\cite{mon71} that, in between the integral scale at which energy is fed into the flow and the dissipative scale at which viscosity smoothes out the velocity gradients, there exists an ``inertial range'' where: {\it (i)} the velocity spatial power spectrum follows a power law $E(k) \sim \epsilon^{2/3} k^{-5/3}$, {\it (ii)} the energy transfer rate $\epsilon$ is related to the third order structure function by $S_{3}(r) = \langle \delta u^{3}(r) \rangle = - \frac{4}{5} \epsilon r$. The first point has been verified in numerous experiments and constitutes the main success of Kolmogorov's K41 mean field theory~\cite{fri95}. The second point is generally admitted, although experimental verifications are rarer~\cite{gagne}. In addition, it has been observed that the probability distribution of the amplitudes of velocity increments depends on the width of the increment: it varies from a Gaussian PDF at the integral scale to the development of stretched exponential tails at the smallest scales. This feature defines intermittency, and traces back to the non-uniformity in space of the energy transfer rate. These three observations are the building blocks of the studies of homogeneous, isotropic turbulence (HIT). It is the aim of our work to study experimentally how these observations are modified in a situation of inhomogeneous turbulence. Partial studies have been made near weak vortices~\cite{Chilla}, in inhomogeneous wakes~\cite{Wesfreid,Camussi} or near a boundary layer~\cite{Leveque,Toschi}. Here we study turbulence in the vicinity of a very intense large scale vortex. Our motivations are twofold: first, most high Reynolds number flows are inhomogeneous and the question of the ``universality'' of the features of HIT should be addressed.
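For reference, the $k^{-5/3}$ law in point {\it (i)} is fixed by dimensional analysis alone; a brief sketch, with $C$ an undetermined dimensionless constant:

```latex
% Dimensions: [E(k)] = L^3 T^{-2}, [\epsilon] = L^2 T^{-3}, [k] = L^{-1}.
% Posit E(k) = C \epsilon^a k^b and match exponents:
%   time:   -2 = -3a        =>  a = 2/3
%   length:  3 = 2a - b     =>  b = 2a - 3 = -5/3
E(k) \;=\; C\,\epsilon^{2/3} k^{-5/3}
```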
A second motivation is that it has been proposed that such coherent structures play a significant role in the statistical properties of turbulence~\cite{tsinober,andreotti,jimenez,malecot}. \section{Experimental set-up and measurements} The experimental apparatus and measurement techniques are those described in~\cite{pin98}. A strong axial vortex is formed in the gap between two corotating disks -- fig.~1. The control parameters of the flow are the disks' angular velocities $\Omega_{1}$, $\Omega_{2}$, and the mean integral Reynolds number: $Re = {R^{2} \sqrt{\Omega_{1}^{2} + \Omega_{2}^{2}} }/{\nu}.$ Essentially two regimes are observed. The first one, hereafter labeled (GR), occurs when $\Omega_{1} \sim \Omega_{2}$: the core of the flow has a strong global rotation and a stable large scale axial vortex is present. On the other hand, when one disk rotates at a much faster rate than the other, the flow is dominated by differential rotation; a strong axial flow is produced and one observes intermittent sequences of formation and bursting of large scale axial vortices~\cite{pin98} -- in the following, we label this regime (DR). In each case, the integral Reynolds number of the flow exceeds $5\times 10^{5}$, so that the flow is `turbulent'. \begin{figure} \begin{center} \epsfysize=3cm \epsfbox{lettInTurb_fig01.eps} \end{center} \caption{Experimental setup. The fluid is confined in a cylindrical vessel of height 40~cm and radius 11.7~cm. The disks, of radius $R=9.8$~cm, are set a distance $H=30$~cm apart. They are driven by two d.c. motors whose rotation rates $\Omega_1, \Omega_2$ are adjustable in the range $[15, 45]$~Hz. The hot-film probe is located just above the midplane, at variable distance $r$ from the axis of rotation. } \label{fig1} \end{figure} Local velocity measurements are performed as in \cite{pin98}.
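As a rough order-of-magnitude check of the quoted Reynolds numbers, the definition above can be evaluated for the stated geometry. This is only a sketch: the working fluid (water, $\nu \approx 10^{-6}$~m$^2$/s) and the interpretation of $\Omega$ in Hz are assumptions, not values taken from the text.

```python
import math

R = 0.098    # disk radius in m (9.8 cm, from the setup caption)
NU = 1.0e-6  # kinematic viscosity of water in m^2/s (assumed working fluid)

def integral_reynolds(omega1_hz, omega2_hz):
    """Re = R^2 sqrt(Omega1^2 + Omega2^2) / nu, with Omega taken in Hz."""
    return R**2 * math.hypot(omega1_hz, omega2_hz) / NU

re_gr = integral_reynolds(30.0, 30.0)  # (GR) regime: Omega1 = Omega2 = 30 Hz
re_dr = integral_reynolds(40.0, 12.0)  # (DR) regime: Omega1 = 40, Omega2 = 12 Hz
```

With these assumptions both regimes land around $4\times 10^{5}$; taking $\Omega$ in rad\,s$^{-1}$ instead would raise the estimate by a factor $2\pi$, so the quoted $5\times 10^{5}$ threshold is within the uncertainty of these unit conventions.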
The hot-film probe is set parallel to the axis of rotation, so that it measures $u_{r\theta} = \sqrt{u_{r}^{2} + u_{\theta}^{2}}$ (here, $(r,\theta, z)$ are the conventional cylindrical coordinates with $z$ parallel to the rotation axis). The sampling frequency $Fs$ is 78125~Hz for measurements in the (DR) regime and $39062.5$~Hz in (GR) -- in each case, at least $10^7$ data points are collected. We stress that although the flow has in most locations a well defined mean velocity $\overline{u}$ of $u_{r\theta}$ and fluctuation levels $u_{\rm rms}$ comparable to or less than those reported in jet turbulence, we do not attempt to use and justify a `Taylor hypothesis' here, and we analyse our results as time series. \section{Spectra} A first question regarding such a turbulence concerns the range of scales of motion and the energy distribution among them. In fig.~2 we show the power spectra of velocity time series recorded at different distances from the rotation axis. It can be observed, as a first sign of the flow inhomogeneity, that the spectra depend very much on the location of the probe and on the flow configuration. \begin{figure}[h] \begin{center} \epsfysize=5cm \epsfbox{lettInTurb_fig02.eps} \end{center} \caption{Power spectra of velocity time series recorded at distances $(0$, $0.5$, $1$, $1.5$, $2$, $2.5$, $3$, $3.5$, $4$, $4.5$, $5.5)$~cm from the rotation axis. (a): $\Omega_{1} = \Omega_{2} = 30$~Hz; (b): $\Omega_{1} = 40$~Hz, $\Omega_{2} = 12$~Hz. } \label{fig2} \end{figure} In all cases, we observe the existence of an `inertial range' $\omega \in [\omega_{I}, \omega_{T}]$ where the spectrum follows a power law $E(\omega) \propto \omega^{-\alpha}$. However, both the range $\omega_{I}/\omega_{T}$ and the exponent $\alpha$ vary. \\ - In (GR), $\alpha$ increases from about 1.65 in the outer region to 2.50 at the rotation axis; meanwhile the scaling domain is also reduced.
A more detailed analysis of the spectra shows that the energy in the large scales remains approximately constant, while both the frequency and the energy at the end of the `inertial range' decrease with $r$. As a result, the proximity of the stable vortex structure reduces the range of scales of the turbulent motion and their energy content. This is consistent with earlier findings~\cite{mel93,hos94} which have shown that global rotation tends to decrease the intensity of small-scale motion. \\ - In (DR), $\alpha$ decreases from, again, about 1.65 in the outer region to $1.1$ at the rotation axis. The scaling range does not vary. Here, one observes that the energy in the dissipative region is almost independent of the position $r$ of measurement. It is the energy measured in the large scales that decreases with $r$. We thus find that the dynamics of the large scales influences the entire range of scales. It is surprising to observe that some self-similarity is retained, since the spectra follow a power law (most curves exhibit almost a decade of scaling behaviour): there is no scale separation where one would observe the pathologies of the injection at large scales and where the turbulence would recover the characteristics of HIT at small scales. \section{Third order structure function} The energy transfer across the scales is usually illustrated by the behaviour of the third order structure function. In HIT, this is justified by the K\'arm\'an-Howarth relationship, but an anisotropic equivalent can be established~\cite{fri95}. We show in fig.~3 the evolution of the third order moment of the velocity increments $S_{3}(\tau) = \langle \left( u(t) - u(t+\tau)\right)^{3} \rangle_{t}$, computed at different distances from the rotation axis. \\ - In (GR), we observe that in the outer region, $S_{3}(\tau)$ is a negative, bell-shaped curve.
Negative values of the skewness indicate that the cascade proceeds in the forward direction, from the large scales to the dissipative ones, as in HIT. At such limited Reynolds numbers, one cannot expect a plateau in plots of $S_{3}(\tau)/\tau$ (even less with the value $-4/5$), but the maximum of $|S_{3}(\tau)/\tau|$ is consistent with the value of the power consumption per unit mass $\epsilon$, measured from the electric consumption of the motors~\cite{lab96}. On the contrary, going towards the axis, the amplitude of $S_{3}(\tau)$ diminishes rapidly. Near the vortex core, the skewness eventually becomes positive in an intermediate range of increments. This indicates that the cascade to small scales is first reduced, and then reversed by the influence of the coherent rotation of the vortex. Energy is accumulated in the large scales, where it stabilizes the dynamics of the large scale vortex.\\ \begin{figure}[h] \begin{center} \epsfysize=4cm \epsfbox{lettInTurb_fig03.eps} \end{center} \caption{Evolution with increment length of the third order moment of the velocity. (a): (GR) regime; (b): (DR) regime. $S_{3}(\tau)$ is computed at distances $r = 0.5(\circ), \; 1.5(\Diamond), \; 2.5(\Box), \; 3.5(\triangle), \; 4.5(\star)$~cm from the axis. The horizontal axis is in units of the sampling frequency.} \label{fig3} \end{figure} - This picture is quite different in the irregular (DR) regime. There, except at the rotation axis, the skewness has the behaviour traditionally observed in HIT. The cascade proceeds forward; the maximum shows little variation with the location of the probe and has a value which is, again, consistent with the motors' power consumption. At the axis, $S_{3}(\tau)$ drops suddenly; its sign becomes unclear and it may even be slightly positive in a range of scales. Altogether, the analysis of the third order moment shows that the structure and dynamics of the flow at large scale strongly influence the entire energy transfer across the scales.
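The structure-function estimators used in this and the following analysis are straightforward to implement. The sketch below applies them to a synthetic monofractal signal (ordinary Brownian motion, chosen only because its exact scaling $S_p(\tau)\propto\tau^{p/2}$ is known; a hot-film record would take its place in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.cumsum(rng.standard_normal(200_000))  # Brownian path: S_p(tau) ~ tau^{p/2}

def structure_function(u, p, taus):
    """Estimate S_p(tau) = < |u(t+tau) - u(t)|^p >_t at each lag tau."""
    return np.array([np.mean(np.abs(u[t:] - u[:-t]) ** p) for t in taus])

taus = 2 ** np.arange(7)  # lags of 1 .. 64 samples
s2 = structure_function(u, 2, taus)
s4 = structure_function(u, 4, taus)

# ESS-style fit: the slope of log S_4 against log S_2 estimates the exponent
# ratio H(4)/H(2), equal to 4/2 = 2 for a signal with no intermittency.
ess_slope = np.polyfit(np.log(s2), np.log(s4), 1)[0]
```

An intermittent signal would give a ratio below 2; recovering 2 on this synthetic signal is simply a sanity check of the estimator before it is applied to real velocity records.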
\section{Intermittency} A first observation is that intermittency is always present, regardless of the flow regime and probe location. Indeed, for every measurement, the PDFs of the velocity increments change from a quasi-Gaussian shape at the integral scale to the development of stretched exponential tails at small scales, see fig.~4 (the PDFs shown in this figure are for signed quantities). \begin{figure}[h] \begin{center} \epsfysize=4cm \epsfbox{lettInTurb_fig04.eps} \end{center} \caption{Evolution with increment length of the PDFs of the velocity increments. (a): (GR) regime; (b): (DR) regime. The PDFs are computed at distance $r = 2.5$~cm from the axis and for increments $\tau=(1,5,21,49,116,271,635,1485)/Fs$. Values of the velocity have been normalized by the $rms$ fluctuation level. The curves are translated for clarity.} \label{fig4} \end{figure} However, as has now become customary, we analyse intermittency here through the behaviour of the moments of the absolute values of the velocity increments:\\ $ S_{p}(\tau) = \langle |\delta u (\tau) |^{p} \rangle = \langle |u(t+\tau) -u(t) |^{p}\rangle_{t} $. A feature common to all flow configurations and every distance to the axis is that the evolution of $S_{p}(\tau)$ is well modeled by a self-similar multiplicative cascade, in the sense that one can write: $$ \log( S_{p}(\tau) ) = H(p) n(\tau) \; \; \; , $$ in a large interval of increments~\cite{castaing,pinton}. Here, $H(p)$ is the Laplace transform of the cascade propagator~\cite{casdub}, which reflects the hierarchy of amplitudes of the velocity increments, and $n(\tau)$ describes the speed at which the cascade proceeds along the scales. Experimentally, we have checked the above relation in the following manner: \\ {\it (i)} the plot of one moment against another yields an extended scaling region, with exponent $H(p)/H(q)$ if one plots $S_p$ vs. $S_q$ -- this is the ESS property~\cite{ben93}.
Since the decomposition of a moment $S_{p}(\tau)$ into the two functions $H$ and $n$ is defined up to an arbitrary multiplicative constant, we set $H(3) = 1$. Unlike in the context of HIT, where this choice relies on the expected scaling of the third order structure function, it is here {\it a priori} arbitrary.\\ {\it (ii)} once that first step is done, and the values $H(p)$ obtained, one computes for every order the function $n_{p}(\tau) \equiv \log(S_{p}(\tau))/H(p)$. We then verify that $n_{p}(\tau)$ is independent of $p$, and obtain a unique function $n(\tau)$ for every measurement. \begin{figure}[h] \begin{center} \epsfysize=4cm \epsfbox{lettInTurb_fig05.eps} \end{center} \caption{Evolution of the cascade depth. (a): (GR) regime; (b): (DR) regime. $n(\tau)$ is computed at distances $r = 0.5(\circ), \; 1.5(\Diamond), \; 2.5(\Box), \; 3.5(\triangle), \; 4.5(\star)$~cm from the axis.} \label{fig5} \end{figure} Following this procedure, the results are:\\ - Up to experimental errors, the functions $H(p)$ depend neither on the measurement location nor on the flow regime. The values ($H(p)=0.42,0.70,1.27,1.51,1.73 \; {\rm for} \; p=1,2,4,5,6$) are those reported for HIT in numerous experiments~\cite{fri95,arn95}.\\ - The functions $n(\tau)$ do depend on the measurement location and flow regime. They are shown in figure~5 (the curves are normalized so that $n = 0$ at integral time delays). We do not discuss hereafter the functional form of the $n(\tau)$ curves, but rather the relative behaviour of the curves in each regime, when $r$ is varied. In (GR) one observes that in the outer region a larger range of scales is covered with a given number of cascade steps $n$ than in the vicinity of the vortex core. The slopes $dn(\tau)/d\tau$ increase as $r$ decreases, meaning that an increasing number of steps is needed to cascade between any two given scales.
Again, this is consistent with the idea that rotation prevents the 3D cascade, and with our previous observation that it reduces the energy transfer to smaller scales. An inverse effect is observed in the (DR) regime: as measurements are performed closer to the rotation axis, $n(\tau)$ increases and is less steep, meaning that a wider range of scales is reached in a given number of cascade steps. This is consistent with a reduced slope of the velocity power spectra, since such an efficient cascade provides an enhanced distribution of energy in the smaller scales. \section{Concluding remarks} Our foremost original observation is that some major characteristics of HIT are retained in the vicinity of large scale coherent vortex structures: {\it (i)} the spectra develop an ``inertial range'' in the sense that power law scaling regions exist, {\it (ii)} the 3rd order structure functions yield consistent descriptions of the energy transfers, and {\it (iii)} intermittency is always present and can be adequately described by a self-similar multiplicative cascade model. Second is that the structure and dynamics of the large scales influence the entire range of scales of motion. The combined study of the power spectra, third order structure function and cascade parameters $H$ and $n$ gives valuable information showing that the turbulent cascade is reduced in the presence of global rotation and enhanced in the vicinity of very unstable vortex structures. Note that in the (GR) regime, even though the energy cascade is reversed and the spectra evolve towards a ``$-3$'' slope in the inertial zone, intermittency is observed: this is a sharp difference with 2D turbulence~\cite{paret}, where no intermittency is detected although deviations from Gaussianity may be present~\cite{boff99}. \bigskip\centerline{***}\medskip It is a pleasure to acknowledge many useful discussions with B. Andreotti and B. Castaing. \bigskip\centerline{***}\medskip \vskip-12pt
\section{Commutative origins} Henry Helson is known for his work in harmonic analysis, function theory, invariant subspaces and related areas of commutative functional analysis. I don't know the extent to which Henry realized, however, that some of his early work inspired significant developments in noncommutative directions - in the area of non-self-adjoint operator algebras. Some of the most definitive results were obtained quite recently. I think he would have been pleased by that - while vigorously disclaiming any credit. But surely credit is due; and in this note I will discuss how his ideas contributed to the noncommutative world of operator algebras. It was my good fortune to be a graduate student at UCLA in the early 1960s, when the place was buzzing with exciting new ideas that had grown out of the merger of classical function theory and the more abstract theory of commutative Banach algebras as developed by Gelfand, Naimark, Raikov, Silov and others. At the same time, the emerging theory of von Neumann algebras and $C^*$-algebras was undergoing rapid and exciting development of its own. One of the directions of that noncommutative development - though it went unrecognized for many years - was the role of ergodic theory in the structure of von Neumann algebras that was pioneered by Henry Dye \cite{DyeGpI}, \cite{DyeGpII}. {\em That} Henry would become my thesis advisor. I won't say more about the remarkable development of noncommutative ergodic theory that is evolving even today, since it is peripheral to what I want to say here. I do want to describe the development of a class of non-self-adjoint operator algebras that relates to analytic function theory, prediction theory and invariant subspaces: subdiagonal operator algebras. It is rare to run across a reference to Norbert Wiener's book on prediction theory \cite{wiePred} in the mathematical literature.
That may be partly because the book is directed toward an engineering audience, and partly because it was buried as a classified document during the war years. Like all of Wiener's books, it is remarkable and fascinating, but not an easy read for students. It was inspirational for me, and was the source from which I had learned the rudiments of prediction theory that I brought with me to UCLA as a graduate student. Wiener was my first mathematical hero. Dirichlet algebras are a broad class of function algebras that originated in efforts to understand the {\em disk algebra} $A\subseteq C(\mathbb T)$ of continuous complex-valued functions on the unit circle whose negative Fourier coefficients vanish. Several paths through harmonic analysis or complex function theory or prediction theory lead naturally to this function algebra. I remind the reader that a {\em Dirichlet algebra} is a unital subalgebra $A\subseteq C(X)$ ($X$ being a compact Hausdorff space) with the property that $A + A^* = \{f + \bar g: f,g\in A\}$ is sup-norm-dense in $C(X)$; equivalently, the real parts of the functions in $A$ are dense in the space of real valued continuous functions. One cannot overestimate the influence of the two papers of Helson and Lowdenslager (\cite{hl1}, \cite{hl2}) in abstract function theory and especially Dirichlet algebras. Their main results are beautifully summarized in Chapter 4 of Ken Hoffman's book \cite{hofBanSp}. Along with a given Dirichlet algebra $A\subseteq C(X)$, one is frequently presented with a distinguished complex homomorphism $$ \phi: A\to \mathbb C $$ and because $A+ A^*$ is dense in $C(X)$, one finds that there is a unique probability measure $\mu$ on $X$ (of course I really mean unique {\em regular Borel} probability measure) that represents $\phi$ in the sense that \begin{equation}\label{eq1} \phi(f)= \int_X f\,d\mu, \qquad f\in A. 
\end{equation} Here we are more concerned with the closely related notion of a weak$^*$-Dirichlet algebra $A\subseteq L^\infty(X,\mu)$, in which uniform density of $A+A^*$ in $C(X)$ is weakened to the requirement that $A+A^*$ be dense in $L^\infty(X,\mu)$ relative to the weak$^*$-topology of $L^\infty$. Of course we continue to require that the linear functional (\ref{eq1}) should be multiplicative on $A$. \section{going noncommutative} von Neumann algebras and $C^*$-algebras of operators on a Hilbert space $H$ are self-adjoint -- closed under the $*$-operation of $\mathcal B(H)$. But most operator algebras do not have that symmetry; and for non-self-adjoint algebras, there was little theory and few general principles in the early 1960s beyond the Kadison-Singer paper \cite{ksTriang} on triangular operator algebras (Ringrose's work on nest algebras was not to appear until several years later). While trolling the waters for a thesis topic, I was struck by the fact that so much of prediction theory and analytic function theory had been captured by Helson and Lowdenslager, while at the same time I could see diverse examples of operator algebras that seemed to satisfy noncommutative variations of the axioms for weak$^*$-Dirichlet algebras. There had to be a way to put it all together in an appropriate noncommutative context that would retain the essence of prediction theory and contain important examples of operator algebras. I worked on that idea for a year or two and produced a Ph.D. thesis in 1964 -- which evolved into a more definitive paper \cite{arvAnal}. At the time I wanted to call these algebras {\em triangular}; but Kadison and Singer had already taken that term for their algebras \cite{ksTriang}. Instead, these algebras later became known as {\em subdiagonal} operator algebras. Here are the axioms for a (concretely acting) subdiagonal algebra of operators in $\mathcal B(H)$.
It is a pair $(\mathcal A, \phi)$ consisting of a subalgebra $\mathcal A$ of $\mathcal B(H)$ that contains the identity operator and is closed in the weak$^*$-topology of $\mathcal B(H)$, together with a map $\phi$, satisfying \begin{enumerate} \item[SD1:] $\mathcal A+\mathcal A^*$ is weak$^*$-dense in the von Neumann algebra $\mathcal M$ it generates. \item[SD2:] $\phi$ is a conditional expectation, mapping $\mathcal M$ onto the von Neumann subalgebra $\mathcal A\cap\mathcal A^*$. \item[SD3:] $\phi(AB)=\phi(A)\phi(B)$, for all $A,B\in\mathcal A$. \end{enumerate} What [SD2] means is that $\phi$ should be an idempotent linear map from $\mathcal M$ onto $\mathcal A\cap\mathcal A^*$ that carries positive operators to positive operators, is continuous with respect to the weak$^*$-topology, and is faithful in the sense that for every positive operator $X\in \mathcal M$, $\phi(X)=0\implies X=0$. We also point out that these axioms differ slightly from the original axioms of \cite{arvAnal}, but are equivalent when the algebras are weak$^*$-closed. Examples of subdiagonal algebras: \begin{enumerate} \item The pair $(\mathcal A,\phi)$, where $\mathcal A$ is the algebra of all lower triangular $n\times n$ matrices, $\mathcal A\cap \mathcal A^*$ is the algebra of diagonal matrices, and $\phi: M_n\to \mathcal A\cap \mathcal A^*$ is the map that replaces a matrix with its diagonal part. \item Let $G$ be a countable discrete group which can be totally ordered by a relation $\leq$ satisfying $a\leq b\implies xa\leq xb$ for all $x\in G$. There are many such groups, including finitely generated free groups (commutative or noncommutative).
Fix such an order $\leq$ on $G$ and let $x\mapsto \ell_x$ be the natural (left regular) unitary representation of $G$ on its intrinsic Hilbert space $\ell^2(G)$, let $\mathcal M$ be the weak$^*$-closed linear span of all operators of the form $\ell_x$, $x\in G$, and let $\mathcal A$ be the weak$^*$-closed linear span of operators of the form $\ell_x$, $x\geq e$, $e$ denoting the identity element of $G$. Finally, let $\phi$ be the state of $\mathcal M$ defined by $$ \phi(X) = \langle X\xi,\xi\rangle, \qquad X\in \mathcal M, \quad \xi=\chi_{e}. $$ If we view $\phi$ as a conditional expectation from $\mathcal M$ to the algebra of scalar multiples of the identity operator by way of $X\mapsto \phi(X)\mathbf 1$, then we obtain a subdiagonal algebra of operators $(\mathcal A,\phi)$. \item There are natural examples of subdiagonal algebras in $II_1$ factors $\mathcal M$ that are based on ergodic measure preserving transformations that will be familiar to operator algebraists (see \cite{arvAnal}). \end{enumerate} In order to formulate the most important connections with function theory and prediction theory, one requires an additional property called {\em finiteness} in \cite{arvAnal}: there should be a distinguished tracial state $\tau$ on the von Neumann algebra $\mathcal M$ generated by $\mathcal A$ that preserves $\phi$ in the sense that $\tau\circ\phi=\tau$. Perhaps we should indicate the choice of $\tau$ by writing $(\mathcal A, \phi,\tau)$ rather than $(\mathcal A,\phi)$, but we shall economize on notation by not doing so. Recall that the simplest form of {\em Jensen's inequality} makes the following assertion about functions $f\neq 0$ in the disk algebra: {\em $\log|f|$ is integrable around the unit circle, and the geometric mean of $|f|$ satisfies \begin{equation}\label{je} |\frac{1}{2\pi}\int_\mathbb T f(e^{i\theta})\,d\theta| \leq \exp \frac{1}{2\pi}\int_\mathbb T \log|f(e^{i\theta})|\,d\theta.
\end{equation} } In order to formulate this property for subdiagonal operator algebras we require the determinant function of Fuglede and Kadison \cite{fkDet} - defined as follows for invertible operators $X$ in $\mathcal M$: \begin{equation*} \Delta(X) = \exp \tau(\log|X|), \end{equation*} $|X|$ denoting the positive square root of $X^*X$. There is a natural way to extend the definition of $\Delta$ to arbitrary (noninvertible) operators in $\mathcal M$. For example, when $\mathcal M$ is the algebra of $n\times n$ complex matrices and $\tau$ is the tracial state, $\Delta(X)$ turns out to be the positive $n$th root of $|\det X|$. Corresponding to (\ref{je}), we will say that a finite subdiagonal algebra $(\mathcal A,\phi)$ with tracial state $\tau$ satisfies {\em Jensen's inequality} if \begin{equation}\label{Je} \Delta(\phi(A))\leq \Delta(A), \qquad A\in\mathcal A, \end{equation} and we say that $(\mathcal A,\phi)$ satisfies {\em Jensen's formula} if \begin{equation}\label{Jf} \Delta(\phi(A)) = \Delta (A),\qquad A\in\mathcal A\cap\mathcal A^{-1}. \end{equation} It is not hard to show that (\ref{Je})$\implies$(\ref{Jf}). Finally, the connection with prediction theory is made by reformulating a classical theorem of Szeg\"o, one version of which can be stated as follows: For every positive function $w\in L^1(\mathbb T,d\theta)$ one has \begin{equation*} \inf_f\int_\mathbb T|1+f|^2 w\,d\theta= \exp\int_\mathbb T \log w \,d\theta, \end{equation*} $f$ ranging over trigonometric polynomials of the form $a_1e^{i\theta}+\cdots+a_ne^{in\theta}$. 
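For the matrix example above, these notions can be checked directly with a small numerical sketch (numpy; the size $n=6$ and the random lower triangular matrices, shifted to be invertible, are arbitrary choices):

```python
import numpy as np

def fk_det(X):
    """Fuglede-Kadison determinant w.r.t. the tracial state tr/n:
    Delta(X) = exp(tau(log |X|)), where |X| = (X* X)^(1/2)."""
    s = np.linalg.svd(X, compute_uv=False)  # eigenvalues of |X|
    return float(np.exp(np.mean(np.log(s))))

def phi(X):
    """Conditional expectation onto the diagonal subalgebra."""
    return np.diag(np.diag(X))

rng = np.random.default_rng(1)
n = 6
A = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)  # invertible, lower triangular
B = np.tril(rng.standard_normal((n, n))) + n * np.eye(n)

sd3_holds = np.allclose(phi(A @ B), phi(A) @ phi(B))  # SD3 for triangular matrices
delta_A, delta_phi_A = fk_det(A), fk_det(phi(A))
# In M_n, Delta(X) = |det X|^{1/n}; for triangular A the determinant is the
# product of the diagonal entries, so Jensen's formula holds here with equality.
```

The check that `delta_phi_A` equals `delta_A` is exactly Jensen's formula (\ref{Jf}) in this finite-dimensional example.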
In the noncommutative setting, there is a natural way to extend the definition of determinant to weak$^*$-continuous positive linear functionals $\rho$ on $\mathcal M$, and the proper replacement for Szeg\"o's theorem turns out to be the following somewhat peculiar statement: For every weak$^*$-continuous state $\rho$ on $\mathcal M$, \begin{equation}\label{St} \inf\rho(|D+A|^2) = \Delta(\rho), \end{equation} the infimum taken over $D\in\mathcal A\cap\mathcal A^*$ and $A\in\mathcal A$ with $\phi(A)=0$ and $\Delta(D)\geq 1$. In the 1960s, there were several important examples for which I could prove properties (\ref{Je}), (\ref{Jf}) and (\ref{St}); but I was unable to establish them in general. The paper \cite{arvAnal} contains the results of that effort. Among other things, it was shown that every subdiagonal algebra is contained in a unique {\em maximal} one, and that maximal subdiagonal algebras admit {\em factorization}: {\em Every invertible positive operator in $\mathcal M$ has the form $X=A^*A$ for some $A\in\mathcal A\cap\mathcal A^{-1}$.} Factorization was then used to show the equivalence of these three properties for arbitrary {\em maximal} subdiagonal algebras. \section{Resurrection and Resurgence} I don't have to say precisely what {\em maximality} means because, in an important development twenty years later, Ruy Exel \cite{exl} showed that the concept is unnecessary by proving the following theorem: {\em Every (necessarily weak$^*$-closed) subdiagonal algebra is maximal.} Thus, factorization holds {\em in general} and the three properties (\ref{Je}), (\ref{Jf}), (\ref{St}) are {\em always} equivalent. Encouraging as Exel's result was, the theory remained unfinished because no proof existed that Jensen's inequality, for example, was true in general. Twenty more years were to pass before the mystery was lifted. 
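The classical Szeg\H{o} statement also lends itself to a direct numerical check. A sketch, assuming the concrete weight $w(\theta)=e^{\cos\theta}$ (so that $\exp\int\log w\,d\theta/2\pi = e^{0} = 1$) and truncating the minimizing polynomial at a finite degree:

```python
import numpy as np

M, n = 512, 20  # grid size and polynomial degree (arbitrary choices)
theta = 2 * np.pi * np.arange(M) / M
w = np.exp(np.cos(theta))  # log w = cos(theta) integrates to zero

# Minimize (1/M) sum_j w_j |1 + sum_k a_k e^{ik theta_j}|^2 over a_1..a_n:
# a weighted linear least-squares problem in the coefficients a_k.
E = np.exp(1j * np.outer(theta, np.arange(1, n + 1)))  # columns e^{ik theta}
B = E * np.sqrt(w)[:, None]
y = -np.sqrt(w).astype(complex)
a, *_ = np.linalg.lstsq(B, y, rcond=None)

szego_inf = float(np.mean(w * np.abs(1 + E @ a) ** 2))
geo_mean = float(np.exp(np.mean(np.log(w))))  # the predicted infimum (= 1 here)
```

For this smooth positive weight, the truncated infimum converges very quickly to the geometric mean, in line with the theorem.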
In penetrating work of Louis Labuschagne and David Blecher \cite{labSz}, \cite{bl2}, \cite{bl3}, \cite{bl5} it was shown that, not only are the three desired properties true in general, but virtually all of the classical theory of weak$^*$-Dirichlet function algebras generalizes appropriately to subdiagonal operator algebras. I hope I have persuaded the reader that there is an evolutionary path from the original ideas of Helson and Lowdenslager, through 40 years of sporadic progress, to a finished and elegant theory of noncommutative operator algebras that embodies a remarkable blend of complex function theory, prediction theory, and invariant subspaces. \bibliographystyle{alpha} \newcommand{\noopsort}[1]{} \newcommand{\printfirst}[2]{#1} \newcommand{\singleletter}[1]{#1} \newcommand{\switchargs}[2]{#2#1}
\section{Introduction}\label{intro} Asteroids are small rocky bodies left over from the planet formation era. Although their internal structure is not well constrained, the images of a handful of asteroids have provided clues to the origins of their surface features \cite[e.g.,][]{Asphaug2002ASTE,Chapman2004AREPS}. Perhaps the most prominent features are craters, which record the history of impacts by other objects. However, there are other surface features that are more subtle and need explanation. For example, \shortcite{Asphaug2001LPI} pointed out that the blocks on (433) Eros do not obey any obvious dynamical distribution, such as association with potential source craters or sweep-up correlated with the asteroid rotation. These blocks are likely ejecta fragments, but most show no pits that would indicate collisions at meters per second into the low-gravity regolith. They proposed that size sorting in asteroid regolith could explain such a distribution of blocks. Another potential example of this size sorting is (25143) Itokawa, where smooth and rugged regions are observed. \shortcite{Miyamoto2007Sci} proposed that substantial vibrations and the resulting granular convection could explain such a difference in particle distributions. Besides the abundance of boulders, Itokawa is also characterized by a lack of major craters \cite[][]{Fujiwara2006Sci}. Since the total volume of boulders on Itokawa is larger than the available volume in the identified craters, the boulders cannot all have come from these craters. This may further support the size-sorting scenario on Itokawa. Alternatively, the absence of large fresh craters might be explained if the most recent impact erased its own crater \cite[e.g.,][]{Michel2009Icar}. Furthermore, the boulders on Itokawa could also be explained by assuming that Itokawa formed through the reaccumulation of fragments from a catastrophically disrupted parent body \cite[][]{Michel2013AA}.
In this paper, we explore whether the size sorting due to global oscillations is possible on asteroids such as Eros and Itokawa. \shortcite{Asphaug2001LPI} identified two phenomena in granular systems that might have direct relevance to asteroids. One of them is the inelastic segregation that could form regions of granular solid via the aggregation of inelastically colliding objects. The other is the size segregation via the so-called Brazil Nut Effect (BNE) that describes the rise of a particle embedded in an oscillating system of smaller particles in a confined environment \cite[e.g.,][]{Rosato1987PhRvL}. We focus on this latter effect in this paper. The BNE has been studied in many different setups, but rarely in the low-gravity environments that are relevant to asteroids. There are some recent exceptions, such as \shortcite{Tancredi2012MNRAS} and \shortcite{Guttler2013PhRvE}, and we compare our results with theirs in Sections~\ref{results_Tancredi12} and \ref{results_lowg}. There are two distinct models of the BNE --- one of them is the intruder model, where the number of large particles is small compared to the number of smaller ones, and the other is the binary mixture model, where both small and large particles occupy comparable volumes. These models behave differently under vibrations, partly because interactions between large particles become significant for the latter \cite[e.g.,][]{Sanders2004PhRvL}. The internal structure of asteroids is not well known, and has so far been only guessed at through theoretical and numerical works \cite[e.g.,][]{Asphaug2002ASTE,Richardson2002ASTE}. Due to this lack of knowledge, we assume in this paper that the intruder model is appropriate for asteroids. The behavior of the intruder model differs depending on the oscillation speeds \cite[e.g.,][]{Kudrolli2004RPPh}. When the oscillation is weak, the particles are in the dense limit, where the contacts between particles last for a long time. 
As the oscillation speed increases, the system can become vibro-fluidized, where the particles behave like a fluid and their interactions can be treated as binary collisions. In the vibro-fluidized limit, the intruder's behavior depends on the size ratio and the density ratio of the constituent particles. The intruder rises to the surface when its density is lower than that of the surrounding smaller particles, and sinks when its density is larger \cite[e.g.,][]{Ohtsuki1995JPSJ,Hsiau1997PT,Shishodia2001PhRvL,Breu2003PhRvL,Huerta2004PhRvL}. In the dense limit, the intruder rises independent of the density ratio \cite[e.g.,][]{Liffman2001GM,Huerta2004PhRvL}. All of our simulations are likely to be in the dense limit most of the time, because most particles are in lasting contact with a number of neighbors. As we describe in the next section, we use a version of the {\it N}-body code PKDGRAV that can handle systems with long-lasting contacts. In this paper, we investigate the Brazil Nut Effect as a potential mechanism to sort particles in asteroids and to explain the distributions of boulders on asteroid surfaces. The efficiency of the BNE depends on many different parameters, and the corresponding parameters for asteroids are unknown. Therefore, we first investigate the efficiency of the BNE over a wide range of initial conditions and then apply the model to the low-gravity environments that are suitable for asteroids. In Section~\ref{method}, we introduce our numerical methods and the choice of initial conditions. In Section~\ref{results}, we study the dependence of the BNE on various parameters, including the coefficients of restitution, the friction constants, the oscillation speeds, and the depth of particle beds (Sections~\ref{results_eps}--\ref{results_vel}). We also compare our models with previous studies and find good agreement (Sections~\ref{results_vel} and \ref{results_Tancredi12}).
The BNE in low-gravity environments is investigated in Section~\ref{results_lowg}, and we apply the critical conditions to observed asteroids. Finally, in Section~\ref{summary}, we discuss and summarize our study. \section{Method}\label{method} In this section, we introduce our numerical code PKDGRAV (Section~\ref{method_pkdgrav}) as well as the initial conditions used for our simulations (Section~\ref{method_ic}). \subsection{Numerical Method: PKDGRAV}\label{method_pkdgrav} PKDGRAV is a parallel $N$-body gravity tree code \cite[][]{Stadel2001PhDT} adapted for particle collisions \cite[][]{Richardson2000Icar,Richardson2009PSS,Richardson2011Icar}. Originally, collisions in PKDGRAV were treated as idealized single-point-of-contact impacts between rigid spheres. However, such an instantaneous-collision assumption is not appropriate for a dense system like a particle bed, where particle contacts can last many timesteps. Recently, \shortcite{Schwartz2012GM} added a soft-sphere option to PKDGRAV that handles long-lasting contacts with reaction forces dependent on the degree of overlap (a proxy for surface deformation) and contact history. We use this version of the code for our study of the BNE. The code uses a second-order leapfrog integrator, with accelerations due to gravity and contact forces recomputed each step. Various types of user-definable confining walls are available that can be combined to provide complex boundary conditions for the simulations. For example, we use an infinitely tall cylinder and box as our container, as described in the following subsection. The code also includes an optional variable gravity field based on a user-specified set of rules. This allows us to change the magnitude and direction of gravity in the simulation (see Section~\ref{results_lowg} for details). The spring/dash-pot model used in PKDGRAV's soft-sphere implementation is described fully in \shortcite{Schwartz2012GM}.
Briefly, a (spherical) particle overlapping with a neighbor or confining wall feels a reaction force in the normal and tangential directions determined by spring constants ($k_n$, $k_t$), with optional damping and effects that impose static, rolling, and/or twisting friction. The damping parameters ($C_n$, $C_t$) are related to the conventional normal and tangential coefficients of restitution used in hard-sphere implementations, $\epsilon_n$ and $\epsilon_t$. The static, rolling, and twisting friction components are parameterized by dimensionless coefficients $\mu_s$, $\mu_r$, and $\mu_t$, respectively. Plausible values for these various parameters are obtained through comparison with laboratory experiments. Careful consideration of the soft-sphere parameters is needed to ensure internal consistency, particularly with the choice of $k_n$, $k_t$, and timestep --- a separate code is provided to assist the user with configuring these parameters correctly. For most of our simulations, $k_n=2.261947\times10^9\,{\rm g \, s^{-2}}$ and a timestep of $1.851201\times10^{-6}\,$s are adopted, which permits a maximum particle speed of $200\,{\rm cm \, s^{-1}}$ with no more than 1\% overlap between spheres. This is a conservative choice, since the design maximum speed is well above the maximum oscillation speed of our default case, $93.9\,{\rm cm \, s^{-1}}$. Of the remaining parameters, $\mu_t$ is always set to zero for simplicity, while $\epsilon_n$, $\epsilon_t$, $\mu_s$ and $\mu_r$ are varied over wide ranges. We will investigate the effect of $\mu_t$ in future work.
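The quoted stiffness and timestep are mutually consistent under simple assumptions. The sketch below is one plausible reconstruction, assuming the overlap limit is 1\% of the small-particle radius and the timestep is $1/30$ of an undamped contact half-period evaluated with the reduced mass of an equal-mass pair; the helper code distributed with PKDGRAV may use different conventions.

```python
import math

# Plausible reconstruction of the soft-sphere parameter choice quoted in the
# text; the 1%-of-radius overlap and 30-steps-per-contact conventions are
# assumptions, not necessarily those of PKDGRAV's configuration helper.
rho = 2.7                     # g cm^-3, particle density (aluminium)
d_s = 1.0                     # cm, small-particle diameter
v_max = 200.0                 # cm s^-1, design maximum particle speed
x_max = 0.01 * (d_s / 2.0)    # cm, allow at most 1% overlap of the radius

m = rho * math.pi * d_s**3 / 6.0   # g, mass of a small particle
# Energy balance (1/2) m v^2 = (1/2) k_n x^2 gives the stiffness needed to
# stop a particle moving at v_max within the overlap x_max:
k_n = m * v_max**2 / x_max**2      # ~2.261947e9 g s^-2

mu = m / 2.0                               # g, reduced mass of an equal pair
t_contact = math.pi * math.sqrt(mu / k_n)  # s, undamped contact half-period
dt = t_contact / 30.0                      # ~1.851201e-6 s
```

Under these assumptions the script reproduces both quoted values, which suggests this is how the stiffness and timestep were chosen from the $200\,{\rm cm \, s^{-1}}$ speed limit.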
The numerical approach has been validated through comparison with laboratory experiments; e.g., \shortcite{Schwartz2012GM} demonstrated that PKDGRAV correctly reproduces experiments of granular flow through cylindrical hoppers, specifically the flow rate as a function of aperture size, and \shortcite{Schwartz2013Icar} demonstrated successful simulation of laboratory impact experiments into sintered glass beads using a cohesion model coupled with the soft-sphere code in PKDGRAV. Most recently, \shortcite{Schwartz2014inprep} applied the code to low-speed impacts into regolith in order to test asteroid sampling mechanism design. \subsection{Initial Conditions}\label{method_ic} Unless noted otherwise, we use the same particle distributions in an infinitely tall cylinder with a diameter of $10\,$cm as the initial condition for all our simulations. In our intruder model, there are 1800 small particles and 1 larger particle, where the diameters of small and large particles are $d_s=1\,$cm and $d_l=3\,$cm, respectively. Both types of particles are assumed to have the same mass density of $2.7\,{\rm g \, cm^{-3}}$, which corresponds to the density of aluminium. For our default simulation, we adopt the coefficients of restitution $\epsilon_n=\epsilon_t=0.5$, static friction $\mu_s=0.7$, and rolling friction $\mu_r=0.1$. \footnote{We note that $\epsilon_t$ used here is not the true tangential coefficient of restitution which is difficult to specify in soft-sphere simulations \cite[see][]{Schwartz2012GM}. Still, $\epsilon_t$ has a one-to-one mapping to a dimensionless quantity $C_t$ as mentioned in Section~\ref{method_pkdgrav} and is defined as $C_t \equiv -2\ln \epsilon_t\sqrt{k_t\mu/(\pi^2+(\ln\epsilon_t)^2)}$.} The choices of parameters are rather arbitrary. However, as we see below, the BNE is relatively insensitive to the exact choice of the coefficients of restitution. 
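The footnoted mapping from $\epsilon_t$ to the damping coefficient $C_t$ can be sketched as follows; the tangential spring constant $k_t$ and the reduced mass $\mu$ of the colliding pair are inputs, and the unit values used in the checks are illustrative assumptions rather than the code's defaults.

```python
import math

def tangential_damping(eps_t, k_t, mu):
    """Footnoted mapping C_t = -2 ln(eps_t) sqrt(k_t mu / (pi^2 + ln(eps_t)^2)).

    eps_t : hard-sphere-style tangential restitution coefficient (0 < eps_t <= 1)
    k_t   : tangential spring constant (g s^-2)
    mu    : reduced mass of the colliding pair (g)
    """
    ln_e = math.log(eps_t)
    return -2.0 * ln_e * math.sqrt(k_t * mu / (math.pi**2 + ln_e**2))

# A perfectly elastic tangential contact (eps_t = 1) gives zero damping,
# and smaller eps_t gives stronger damping.
```

For fixed $k_t$ and $\mu$, the mapping is one-to-one on $0<\epsilon_t\le1$, vanishing at $\epsilon_t=1$ and increasing monotonically as $\epsilon_t$ decreases, which is why $\epsilon_t$ can stand in for $C_t$ throughout the text.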
The BNE is sensitive to the choice of the friction constants, so we choose values that result in the occurrence of the BNE as default values. In terms of real materials, these constants nearly correspond to the oxidized Al that was used in the experiments of \shortcite{Clement1992PhRvL}, where they studied a vibrationally excited pile of beads and observed spontaneous heaping and convection. Besides the particle-particle interactions described above, PKDGRAV also handles particle-wall interactions. We use the same coefficients of restitution and friction constants as the particles for all of the cylinder walls. The simulations are divided into two stages --- the filling of the cylinder with particles, and the oscillation of the filled cylinder. In the first stage, the large particle is initially placed at the floor of the cylinder (whose center is the origin of the coordinate system), while small particles are suspended in the air, $10-130\,$cm above the bottom panel. The free-fall simulation is done under Earth gravity $g=980\,{\rm cm \ s^{-2}}$, and the cylinder is filled with small particles in $\sim0.5\,$s. For the low-gravity simulations in Section~\ref{results_lowg}, this stage is done under the corresponding gravitational environment. Under the influence of Earth gravity, particles fill to a height of just under $22\,$cm. A schematic of our system at the end of the first stage is shown in Figure~\ref{fig_ron}. In the second stage, the entire cylinder is vertically shaken in a sinusoidal manner with a given amplitude $A$ and frequency $\omega$ as $z=A\sin(\omega t)$, where $z$ is the height of the base of the cylinder relative to an arbitrary zero point. The default amplitude and frequency are $A=d_s=1\,$cm and $\omega=3\sqrt{a_g/A}=93.9\,{\rm rad \, s^{-1}}$. Here, $a_g$ is the gravitational acceleration, and $a_g=g$ is assumed for most cases except in Section~\ref{results_lowg}.
Thus, the maximum speed for the default shake is $v_{max}=\omega A=93.9\,{\rm cm \, s^{-1}}$. Most of our simulations are done on a single core of a modern CPU such as an Intel Xeon or AMD Opteron. For the simulations in this paper, the runtime varied from $\sim 10\,$hours to $\sim10\,$days for 150 cycles, depending on the number of particles as well as the choice of parameters such as oscillation frequencies. In general, low oscillation frequency simulations take longer than high frequency ones, because a longer time is required to complete 150 cycles of oscillation. \begin{figure} \includegraphics[width=84mm]{figs/bne_diagram.eps} \caption{Schematic diagram of a cross-section of the experiment after an infinitely long cylindrical container (with a diameter of $10\,$cm) is filled up with small particles (up to $\sim 22\,$cm from the origin), but before shaking begins. The large cyan particle (the intruder) is initially located at the floor. \label{fig_ron}} \end{figure} \section{Results}\label{results} In this section, we present the results of our simulations. First, we vary several parameters and investigate how they affect the BNE. \subsection{Dependence on Coefficients of Restitution}\label{results_eps} In this subsection, we study the dependence of the BNE on the normal and tangential coefficients of restitution ($\epsilon_n$ and $\epsilon_t$, respectively). For all the simulations in this subsection, we assume that the static and rolling friction constants are $\mu_s=0.7$ and $\mu_r=0.1$, respectively, and that the oscillation amplitude and frequency are $A=d_s=1\,$cm and $\omega=3\sqrt{g/d_s}=93.9\,{\rm rad \, s^{-1}}$, respectively. The height evolution during the oscillation simulation for our default case is shown in Figure~\ref{fig_epsn0.5_epst0.5}. The intruder's height is compared to the median height of small particles $\bar{z}_s$. After $\sim90$ cycles of the oscillations, the intruder rises to the top of the particle bed.
Since the height of the particle bed is constantly changing due to oscillations, we define the intruder's arrival at the surface as the time when $z_l>2 \bar{z}_s$. \begin{figure} \includegraphics[width=84mm]{figs/evol_height_cycles_omega3_A1cm_zcomparison_ver2.eps} \caption{Height evolution of the intruder (solid line) compared with the evolution of the median height of the small particles (dashed line). Here, $\epsilon_n=\epsilon_t=0.5$, $\mu_s=0.7$ and $\mu_r=0.1$ are assumed. The intruder rises from the bottom of the cylinder to the surface of the particle bed in $\sim90$ cycles. \label{fig_epsn0.5_epst0.5}} \end{figure} To estimate the variation in our oscillation simulations, we generate 10 independent particle distributions with 1800 small particles and 1 intruder, and perform oscillation simulations using the same parameters for all of them. For the default parameter set, we find that the average rise cycle (the number of oscillations needed for the intruder to rise to the top) is $\bar{\tau}_{cyc}=92.0$ and the standard deviation is $\sigma=6.7$. Now we explore different sets of restitution coefficients. We change the normal and tangential coefficients of restitution from $0.1$ to $0.9$ in steps of $\Delta\epsilon=0.1$, and find that the BNE occurs in all of the 81 cases. Moreover, we find that the rise time of the intruder seems relatively independent of the choice of coefficients of restitution. In Figure~\ref{fig_epsn0.5_epst0.1-0.9}, we compare simulations with $\epsilon_n=0.5$ and $\epsilon_t=0.1-0.9$ as an example. \begin{figure} \includegraphics[width=84mm]{figs/evol_height_cycles_all_zcomparison_epsn05_median.eps} \caption{Height evolution of the intruder (solid line) for $\epsilon_n=0.5$ and $\epsilon_t=0.1-0.9$. All of the rise cycles are consistent with the mean rise cycle determined from the default case within $\sim3\sigma$, where $\sigma$ is the default case's standard deviation as defined in the text.
The line colors of red, magenta, orange, yellow, green, cyan, blue, navy, and purple correspond to $\epsilon_t=0.1-0.9$ in increasing order. \label{fig_epsn0.5_epst0.1-0.9}} \end{figure} To understand this similarity further, we estimate the variations in the oscillation simulations by using 10 different particle distributions for 9 different combinations of restitution coefficients, $\epsilon_n=\epsilon_t=0.1-0.9$. The mean rise cycle and the standard deviation for each set are compared in the left panel of Figure~\ref{fig_sigma_epsn_epst}. The mean rise cycle decreases from $\epsilon_n=\epsilon_t=0.1$ to $\epsilon_n=\epsilon_t=0.6$ and then slightly increases from $\epsilon_n=\epsilon_t=0.6$ to $\epsilon_n=\epsilon_t=0.8$. The rise cycle for $\epsilon_n=\epsilon_t=0.9$ increases sharply relative to that for $\epsilon_n=\epsilon_t=0.8$. In fact, the variations of the rise cycles are consistent within $2\sigma$ for all the cases except $\epsilon_n=\epsilon_t=0.9$. The right panel of Figure~\ref{fig_sigma_epsn_epst} shows the rise cycle estimated for each combination of $\epsilon_n$ and $\epsilon_t$ by using the same particle distribution. We also plot the mean cycle and the variations estimated from 80 simulations with $\epsilon_n=\epsilon_t=0.1-0.8$. The rise cycles of many combinations appear within $1\sigma$ of the mean cycle, and most appear within $2\sigma$. A clear exception is the $\epsilon_n=0.9$ cases, which tend to have shorter rise cycles than the others for small values of $\epsilon_t$. There is an indication that the rise time might become shorter for $\epsilon_n=0.9$ and longer for $\epsilon_t=0.9$. It is unclear whether this represents poor sampling or a true trend. However, we note that the BNE does not occur when the collisions are perfectly elastic in either the normal or the tangential direction (i.e., when either $\epsilon_n$ or $\epsilon_t$ is 1.0).
It is possible that systems with bouncier particles behave differently from those with less elastic ones, because such systems could transition from the dense regime to the vibro-fluidized regime at a lower oscillation frequency. Since our goal here is to understand the overall trend of the BNE, we defer a more detailed investigation of high values of the coefficients of restitution to future work. The coefficients of restitution for asteroid constituents are not well constrained. Recently, \shortcite{Durda2011Icar} performed head-on collision experiments between two granite spheres with diameters of 1 m, and estimated that the coefficient of restitution is $\epsilon_n=0.83\pm0.06$ for collision speeds up to $\sim1.5\,{\rm m \, s^{-1}}$. This normal coefficient of restitution is relatively large, but is still expected to lead to general BNE behavior, independent of the value of $\epsilon_t$. In summary, our results indicate that the BNE is largely independent of the exact choices of coefficients of restitution, except in the most elastic cases. Thus, for the rest of the paper, we assume $\epsilon_n=\epsilon_t=0.5\,$ unless noted otherwise. \begin{figure*} \includegraphics[width=0.48\textwidth]{figs/sigma_epsn_epst_ver2.eps} \includegraphics[width=0.48\textwidth]{figs/sigma_epsn_epst.eps} \caption{Left: the mean rise cycle (circles) and the standard deviation $\sigma$ estimated from 10 simulations each for $\epsilon_n=\epsilon_t=0.1-0.9$. The blue and orange regions correspond to $1\sigma$ and $2\sigma$, respectively. Except for $\epsilon_n=\epsilon_t=0.9$, all of these sets have variations in the rise cycles that are consistent within $2\sigma$. Right: the rise cycles for simulations with various $\epsilon_n$ and $\epsilon_t$. The red, magenta, orange, yellow, green, cyan, blue, navy, and purple lines correspond to $\epsilon_n=0.1-0.9$ in increasing order.
The black solid, dashed, and dotted lines indicate the mean rise cycle, $1\sigma$, and $2\sigma$ estimated from 80 simulations with $\epsilon_n=\epsilon_t=0.1-0.8$. \label{fig_sigma_epsn_epst}} \end{figure*} \subsection{Dependence on Friction}\label{results_mu} In this subsection, we study the dependence of the BNE on the static and rolling friction constants ($\mu_s$ and $\mu_r$, respectively). For all the simulations in this subsection, we assume that the normal and tangential coefficients of restitution are $\epsilon_n=\epsilon_t=0.5$, and that the oscillation amplitude and frequency are $A=d_s=1\,$cm and $\omega=3\sqrt{g/d_s}=93.9\,{\rm rad \, s^{-1}}$, respectively. We change the static friction over $\mu_s=0.0-1.0$ and the rolling friction over $\mu_r=0.0-0.2$. We note that, for cohesionless materials, the static friction coefficient is related to the angle of repose $\phi$ of the loose material by $\tan\phi = \mu_s$. Thus, $\mu_s=1.0$ corresponds to material with a relatively high (45-degree) angle of repose, and sampling from $\mu_s=0.0$ to $1.0$ covers a good range of plausible material properties. The rolling friction does not have as direct a physical correspondence as $\mu_s$, but it enters as a torque due to the normal force acting at the contact point of two particles \cite[][]{Schwartz2012GM}. In Figure~\ref{fig_mus_mur}, we plot the instantaneous heights of intruders after 150 oscillations for each simulation. We find that the efficiency of the BNE depends steeply on the friction constants. The figure indicates that the BNE requires a high enough $\mu_s\gtrsim 0.5$ and a low enough $\mu_r\lesssim0.2$. \begin{figure} \includegraphics[width=84mm]{figs/id1800_height_epsn05_epst05_rho27.eps} \caption{Final intruder heights after 150 oscillations as a function of $\mu_s$ and $\mu_r$. Each rectangular region represents a specific choice of $\mu_s$ and $\mu_r$ and is color coded by the final intruder height (see color legend to the right of the plot).
For all the simulations, coefficients of restitution are set to $\epsilon_n=\epsilon_t=0.5\,$. There is a sharp transition from no-BNE to BNE regions. A threshold static friction is necessary for the BNE to occur, but high static or rolling friction diminishes the BNE. \label{fig_mus_mur}} \end{figure} The difference between non-BNE and BNE regions is illustrated in Figure~\ref{fig_mus0.3_mus0.7}. By dividing the initial distributions of particles into 11 layers (i.e., each layer is $\sim2\,$cm thick), we plot the height evolution of particles which are initially in the uppermost and lowermost layers, along with that of the intruder particle. There is little vertical mixing of particles when there is no BNE (left panel), while particles are well-mixed when the BNE is observed (right panel). Convection of particles is observed in all the simulations with the BNE, where particles descend along the wall and ascend in the central region. More precisely, small particles follow gradual rise and fall cycles throughout the entire particle distribution while the intruder rises but does not fall. Our results agree with many previous works that have observed convection along with the BNE \cite[e.g.,][]{Knight1993PhRvL,Poschel1995EL}. Furthermore, the trend seen in Figure~\ref{fig_mus_mur} agrees with the implications of \shortcite{Clement1992PhRvL}. They experimentally investigated a two-dimensional pile of equal-sized beads under a vertical sinusoidal oscillation, and found that the convection is not observed for polished aluminum beads with $\mu_s=0.2$, but is observed for oxidized ones with $\mu_s=0.8$. In their experiments, both kinds of beads have a normal restitution coefficient of 0.5, which is comparable to our default value. In our simulations, for $\mu_s=0.2$, the BNE was not observed for any value of $\mu_r$, while for $\mu_s=0.8$, the BNE was seen for all the values of $\mu_r$ we tested. 
\begin{figure*} \includegraphics[width=0.48\textwidth]{figs/evol_height_cycles_allparticles_epsn05_epst05_mus03_mur01.eps} \includegraphics[width=0.48\textwidth]{figs/evol_height_cycles_allparticles_epsn05_epst05_mus07_mur01.eps} \caption{Height evolution of small particles in the uppermost layer (red) and the lowermost layer (blue) compared with the corresponding evolution of the intruder (black). All the parameters are the same except that $\mu_s=0.3$ and $\mu_s=0.7$ in the left and right panels, respectively. When the BNE does not occur (left), particle layers are well-separated throughout the simulation. On the other hand, when the BNE occurs (right), particle layers are well-mixed due to convection. \label{fig_mus0.3_mus0.7}} \end{figure*} Convection depends on both particle-particle and particle-wall friction. Figure~\ref{fig_mu0} shows cases where the particle-particle (left) and particle-wall (right) friction constants are set to zero for our default case. In both cases, convection does not occur and the BNE is severely suppressed compared to the default case in Figure~\ref{fig_epsn0.5_epst0.5}, where both of these friction constants have the default values. These figures indicate that kinetic friction (which is related to $\epsilon_n$ and $\epsilon_t$) alone is not enough to initiate the BNE. \begin{figure*} \includegraphics[width=0.48\textwidth]{figs/evol_height_cycles_zcomparison_mu0particles_median.eps} \includegraphics[width=0.48\textwidth]{figs/evol_height_cycles_zcomparison_mu0walls_median.eps} \caption{Comparison of the height evolution of the intruder with the median height of the small particles. The left and right panels show the cases where particle-particle friction (left) and particle-wall friction (right) are set to zero, respectively. The other parameters are set to the usual default values. \label{fig_mu0}} \end{figure*} Figure~\ref{fig_mus_mur} also indicates that high rolling friction diminishes the BNE.
Higher rolling friction means that it is more difficult for particles to rotate with respect to each other. Let us consider three particles lined up side by side. If we move the middle particle upward, then the two neighboring particles start rotating. For higher rolling friction, the energy loss in this process is greater and thus particles tend to lock to each other, which in turn slows down the convection. The slower rise of the intruder in the high-static-friction region is likewise due to the difficulty particles have in moving relative to one another. The friction constants of asteroids are even less constrained than the coefficients of restitution. Recently, \shortcite{Yu2014inprep} performed avalanche experiments with similar-size gravel collected from a stream bed, and estimated the restitution coefficients and friction constants by using numerical simulations with PKDGRAV. They found that $\epsilon_n=\epsilon_t=0.55$, $\mu_s=1.31$ and $\mu_r=3.0$ reproduced their experiments well. These values are not necessarily unique for gravel, but the restitution coefficients are comparable to our default values, while the static and rolling frictions are beyond the values we have investigated in this study (see Figure~\ref{fig_mus_mur}). If such gravel represents small particles in the asteroids of interest, it would be very difficult to have convection and thus the BNE, because the particles would simply be locked to each other. However, their study approximates gravel with spheres, and therefore could overestimate these values. More realistic modeling of particle shapes will be necessary in the future. In the rest of the paper, we assume $\mu_s=0.7$ and $\mu_r=0.1$ as our default values. Again, $\mu_s=0.7$ is comparable to the friction constant estimated by \shortcite{Clement1992PhRvL} for oxidized Al. \shortcite{Yu2014inprep} also considered two other types of material: smooth ($\mu_s=\mu_r=0.0$) and glass ($\mu_s=0.43$ and $\mu_r=0.1$).
Our default case has higher static friction than glass, but not as high as that of gravel. \subsection{Dependence on Oscillation Speeds and Bed Depths}\label{results_vel} In this subsection, we study the dependence of the BNE on the oscillation amplitude and frequency. For all the simulations in this subsection, we assume $\epsilon_n=\epsilon_t=0.5$, $\mu_s=0.7$, and $\mu_r=0.1$. The oscillation speeds are varied for three different bed depths and two different cylinder widths. The default case of 1800 + 1 particles in a cylinder with a diameter of $10\,$cm has a bed depth of $\sim22\,$cm. The shallower case of 900 + 1 particles has a depth of $\sim13\,$cm, and the deeper case of 3600 + 1 particles has a depth of $\sim47\,$cm in the same cylinder. We also performed one set of simulations in a wider cylinder with a diameter of $20\,$cm. That case comprises 3600 + 1 particles with a depth of $\sim13\,$cm; it can be compared with the shallow-bed case described above. \begin{figure*} \includegraphics[width=0.48\textwidth]{figs/omega_A_height_ver5_median.eps} \includegraphics[width=0.48\textwidth]{figs/omega_A_height_ver5_N901_median.eps} \caption{ Final intruder heights after 150 oscillations as a function of $\tilde{\omega}$ and $\tilde{A}$. Each rectangular region represents a specific choice of $\tilde{\omega}$ and $\tilde{A}$ and is color coded by the final intruder height (see color legend to the right of the plot). For all the simulations, $\epsilon_n=\epsilon_t=0.5$, $\mu_s=0.7$, and $\mu_r=0.1$ are assumed. The left and right panels are the cases with $N_s=1800$ and $900$, respectively. Interior to the black curve is the BNE region estimated from the 2D simulations of \shortcite{Godoy2008PhRvE}. The dashed part corresponds to the region beyond their investigation. Open and filled symbols are taken from Figure~6 of \shortcite{Godoy2008PhRvE} and represent BNE and no-BNE cases, respectively.
The circles correspond to the pseudo-2D experiments by \shortcite{Duran1994PhRvE}, while the downward triangles correspond to the 3D experiments by \shortcite{Knight1993PhRvL}. The upward triangles and the diamonds are both 2D simulations by \shortcite{Saez2005PhRvE} and \shortcite{Poschel1995EL}, respectively. Orange and red curves are $\tilde{\Gamma}=1$ and $\tilde{v} = \tilde{v}_c$, respectively, and the convection is expected above the solid portions of these curves. \label{fig_omega_A}} \end{figure*} The final heights of the intruders are plotted for the dimensionless oscillation amplitude $\tilde{A}=0.5-3.0$ and the dimensionless oscillation frequency $\tilde{\omega}=0.5-4.0$ in Figure~\ref{fig_omega_A}, where \begin{equation} \tilde{A}\equiv \frac{A}{d_s},\quad \tilde{\omega}\equiv \frac{\omega}{\sqrt{g/d_s}} \ . \end{equation} Since the diameter of a small particle is $d_s=1\,$cm in our simulations, our default case corresponds to $\tilde{A}=1$ and $\tilde{\omega}=3$. The left panel of Figure~\ref{fig_omega_A} shows our default case with 1800 + 1 particles and the right panel shows the shallower bed case with 900 + 1 particles. The result for the deepest bed with 3600 + 1 particles (not shown here) looks similar to the default case. All of these simulations indicate that the BNE occurs for cases with sufficiently large amplitudes and frequencies. 
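As a concrete check of this non-dimensionalization, the default shake maps onto these parameters as in the short illustrative script below; $\tilde{\Gamma}=\tilde{A}\,\tilde{\omega}^2$ is the dimensionless acceleration used in the comparison with the convection criterion of \shortcite{Duran1994PhRvE}.

```python
import math

def dimensionless_shake(A, omega, d_s, g):
    """Return (A~, w~, Gamma~) for an oscillation with amplitude A (cm) and
    angular frequency omega (rad/s), given the small-particle diameter
    d_s (cm) and the gravitational acceleration g (cm/s^2)."""
    A_t = A / d_s                      # \tilde{A} = A / d_s
    w_t = omega / math.sqrt(g / d_s)   # \tilde{\omega} = \omega / \sqrt{g/d_s}
    return A_t, w_t, A_t * w_t**2      # \tilde{\Gamma} = \tilde{A}\tilde{\omega}^2

# Default case: A = d_s = 1 cm and omega = 3 sqrt(g/d_s) under Earth gravity.
g = 980.0                                # cm s^-2
omega = 3.0 * math.sqrt(g / 1.0)         # ~93.9 rad s^-1
A_t, w_t, Gamma = dimensionless_shake(1.0, omega, 1.0, g)
# -> A_t = 1.0, w_t = 3.0, Gamma = 9.0 (well above the Gamma~ = 1 line)
```

Because the low-gravity runs rescale $\omega$ by $\sqrt{a_g/A}$, the default case keeps $\tilde{A}=1$ and $\tilde{\omega}=3$ at every gravity level, which is what makes the dimensionless comparison across environments meaningful.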
\begin{table*} \footnotesize \begin{minipage}{180mm} \caption{Model Parameters and Setups \label{tab_init}} \begin{tabular}{lccccccc}\hline \hline Reference & $N_s$ & $\rho_l/\rho_s$ & $d_l/d_s$ & Restitution coefficients & Friction constants & Type & Shake \\ \hline Default case of present work & 1800 & 1 & 3 & $\epsilon_n=\epsilon_t=0.5$ & $\mu_s=0.7$, $\mu_r=0.1$ & 3DS & sinusoidal \\ \shortcite{Hejmady2012PhRvE} & & $0.30-2.34$ & $8.5-11.3$ & & & 2DE & sinusoidal \\ \shortcite{Godoy2008PhRvE} & 1200 & 1 & 8 & $\epsilon_n=\epsilon_t=0.98$ & $\mu_s=\mu_d=0.7$ & 2DS & parabolic \\ \shortcite{Duran1994PhRvE} & & 1 & 12.9 & 0.5 & 0.8 & 2DE & sinusoidal \\ \shortcite{Knight1993PhRvL} & & 1 & $3-12.5$ & & & 3DE & sinusoidal taps \\ \shortcite{Saez2005PhRvE} & 3300 & & 13 & 0.6 & 0.97 & 2DS & sinusoidal \\ \shortcite{Poschel1995EL} & 950 & 1 & $3.5-4.7$ & & 0.5 & 2DS & sinusoidal \\ \shortcite{Tancredi2012MNRAS} & 1000 & 1 & 3 & $\epsilon_n=0.8-0.9$, $\epsilon_t=0.95^*$ & $\mu_d=0.6$ & 3DS & displacements \\ \hline \hline \end{tabular} \begin{tablenotes} \footnotesize \item{Column 2: number of small particles, Column 3: density ratio of large to small particles, Column 4: diameter ratio of large to small particles, Column 7: dimension of simulations (S) or experiments (E), Column 8: oscillation type. The value with * is estimated from Figure 4 in \shortcite{Tancredi2012MNRAS}.} \end{tablenotes} \end{minipage} \end{table*} The figure also compares our results with the previous works listed in Table~\ref{tab_init}. Since every work uses different setups, it is difficult to make a direct comparison. One of the difficulties lies in a density dependence of the BNE. 
In the vibro-fluidized regime, a high-enough density ratio $\rho_l/\rho_s$ could lead to the reverse BNE, where an intruder sinks to the bottom \cite[e.g.,][]{Ohtsuki1995JPSJ,Shishodia2001PhRvL}, while in the dense limit, when particles experience enduring contacts, an intruder appears to rise independent of the density ratio \cite[e.g.,][]{Liffman2001GM,Huerta2004PhRvL}. Since we are interested in the standard BNE, we compare our models with previous works that assume comparable densities for small and large particles so that an intruder rises for both the vibro-fluidized regime and the dense limit. \shortcite{Godoy2008PhRvE} selected several investigations that assume the same density for large and small particles, and showed that they all follow similar transition lines that separate BNE and no-BNE regions. These works are also plotted using different symbols in Figure~\ref{fig_omega_A}. The distribution of open (BNE) and filled (no BNE) symbols agrees well with the general trend of our simulations. \shortcite{Duran1994PhRvE} experimentally studied the BNE in a quasi-two-dimensional bed of aluminum beads, and identified two segregation mechanisms depending on accelerations: arching ($1.0 \lesssim \tilde{\Gamma} \lesssim 1.5$) and convection ($\tilde{\Gamma} \gtrsim 1.5$), where $\tilde{\Gamma} = \tilde{A}\,\tilde{\omega}^2$ is the dimensionless acceleration. The orange line in Figure~\ref{fig_omega_A} corresponds to $\tilde{\Gamma} = 1$; the BNE is expected to take place to the right of this line. The agreement is particularly good for a shallower bed case. The default bed case, however, indicates that $\tilde{\Gamma} \gtrsim 1$ is not a sufficient condition for the BNE. \shortcite{Hejmady2012PhRvE} also experimentally studied the BNE by using a large acrylic disk embedded in a quasi-two-dimensional bed of mustard seeds. 
They showed that $\tilde{\Gamma} > 1$ is not a sufficient condition for bulk convection to occur, and proposed that the oscillation speed also needs to exceed some critical value $v_{osc} > v_{c}$. We estimate the critical oscillation speed for our simulations in Figure~\ref{fig_vosc_vrise}, where the rise speed of an intruder is plotted as a function of the scaled maximum oscillation speed $\tilde{v}_{osc}=\tilde{A}\,\tilde{\omega}$ for three different bed depths and two different cylinder widths. Here, the rise speed is defined as the bed depth (defined as $2\bar{z}_s$) divided by the rise time (determined from the rise cycle defined in Section~\ref{results_eps}). Unlike \shortcite{Hejmady2012PhRvE}, we plot the rise speed instead of the rise time so that $\tilde{v}_c$ is clearly determined from the oscillation speed that corresponds to the zero rise speed. We find that all of these cases have comparable critical speeds. For our default bed depth, we find $\tilde{v}_c\sim0.97$, while for shallower and deeper bed cases, we find $\tilde{v}_c\sim0.84$ and $0.91$, respectively. For the shallow bed in a wide cylinder, we find $\tilde{v}_c\sim1.04$. These are comparable to the critical value found in \shortcite{Hejmady2012PhRvE} ($\tilde{v}_c\sim1.26$ for $v_c=16.5\,{\rm cm \, s^{-1}}$). The estimated critical speeds are plotted using red lines in Figure~\ref{fig_omega_A}. From these figures, we conclude that both $\tilde{\Gamma}\gtrsim1$ and $\tilde{v}_{osc}\gtrsim\tilde{v}_c$ need to be satisfied for the BNE to take place. \begin{figure} \includegraphics[width=84mm]{figs/vosc_vave_ver4.eps} \caption{Rise speed compared with the maximum oscillation speed. Blue circles, orange down-pointing triangles, and green up-pointing triangles correspond to the default case ($N=1800+1$), a shallow-bed case ($N=900+1$), and a deep-bed case ($N=3600+1$), respectively. Red squares represent a shallow bed in a wider cylinder with $N=3600+1$.
All of these cases have a similar critical oscillation speed of $\tilde{v}_c\sim1$. From the best-fit lines, the critical oscillation speeds are $\tilde{v}_c\sim0.97$, $0.84$, $0.91$, and $1.04$ for each case, respectively. See discussion in Section~\ref{results_vel}. \label{fig_vosc_vrise}} \end{figure} An important indication from Figure~\ref{fig_vosc_vrise} is that the rise speed is proportional to the oscillation speed. We will come back to this point in Section~\ref{results_lowg}. Furthermore, Figure~\ref{fig_vosc_vrise} indicates that there is an optimal bed depth for the BNE, since the rise speed increases from $N_s=900$ to $1800$, and then decreases from $N_s=1800$ to $3600$. Such a depth-dependence of the rise time agrees with the recent experimental result by \shortcite{Guttler2013PhRvE}. Comparison of the shallow bed case in the default cylinder to that in the wide cylinder (i.e., orange and red lines, respectively) indicates that the BNE may take place more slowly in a wider cylinder. For the wide-cylinder cases, we find that the convection direction is the same as in the default cylinder cases --- the particles descend along the wall and ascend near the central region. At the high end of oscillation speeds, however, we find that the intruder also shows a ``whale'' effect, where the intruder does not necessarily stay at the top of the cylinder but keeps moving up and down with the convective current. This indicates that the region of the convection roll along the wall is thick enough to pass the intruder downward in the wide-cylinder cases. We would like to pay particular attention to the comparison of our work with \shortcite{Godoy2008PhRvE}. They numerically studied the BNE by using the molecular dynamics code developed by \shortcite{Risso2005PhRvE}. In their simulations, the size ratio of the intruder and surrounding disks is $d_l/d_s=8$, restitution coefficients $\epsilon_n=\epsilon_t=0.98$, and static and dynamic friction coefficients $\mu_s=\mu_d=0.7$.
Here, $\mu_d$ follows Coulomb's law of friction, in which the magnitude of kinetic friction is independent of the speed of slippage. There are 1200 small particles with one intruder, and the 2D box has a width of $40\,d_s$. Their oscillations are not sinusoidal, but are given as a periodic vertical parabolic movement with the base acceleration of $\pm \frac{8}{\pi^2}A\,\omega^2$. Since their acceleration is proportional to the maximum acceleration of the sinusoidal oscillation $A\,\omega^2$, we expect that our results can be compared to theirs reasonably well. Instead of changing the oscillation amplitude and frequency as we have done in our work, \shortcite{Godoy2008PhRvE} varied the dimensionless acceleration and speed, which they defined as $\Gamma'=\frac{8}{\pi^2}\,\tilde{A}\,\tilde{\omega}^2$ and $\zeta'=\sqrt{2}\,\tilde{A}\,\tilde{\omega}$, respectively, and found the transition line above which the BNE is observed. We also plot their transition line as the black line in Figure~\ref{fig_omega_A}. The agreement between their simulations and ours is good in the low-frequency region ($\tilde{\omega}\lesssim2.5$), but our results disagree in the higher frequency region. There are several possibilities that lead to the difference between our results and those of \shortcite{Godoy2008PhRvE}. First, our simulations are three-dimensional (3D), while theirs are two-dimensional (2D). However, this is unlikely to be the critical difference, especially since our 3D results agree well with the quasi-2D experimental results by \shortcite{Hejmady2012PhRvE}. Second, we adopt $\epsilon_n=\epsilon_t=0.5$, while they use $\epsilon_n=\epsilon_t=0.98$. \shortcite{Kudrolli2004RPPh} proposed a condition that separates the vibro-fluidized regime from the dense regime: $n_{\rm layer}(1-\epsilon) < 1$, where $n_{\rm layer}$ is the number of particle layers.
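This regime criterion can be evaluated directly. In the minimal sketch below, the helper name is ours, and the layer counts are the approximate values quoted for the two sets of simulations being compared:

```python
def is_vibrofluidized(n_layer, eps):
    """Kudrolli (2004) criterion: the bed is vibro-fluidized when
    n_layer * (1 - eps) < 1; otherwise contacts endure (dense regime)."""
    return n_layer * (1.0 - eps) < 1.0

# Our simulations: ~22 layers, eps = 0.5  -> dense regime.
# Godoy et al.:    ~30 layers, eps = 0.98 -> vibro-fluidized regime.
ours = is_vibrofluidized(22, 0.5)
theirs = is_vibrofluidized(30, 0.98)
```

The two cases fall on opposite sides of the criterion, which is the basis of the comparison that follows.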
According to this condition, our simulations have $n_{\rm layer} \sim 22$ and $\epsilon=0.5$ and thus are likely to be in the dense regime of lasting contacts between particles, while their simulations have $n_{\rm layer} \sim 30$ and $\epsilon=0.98$ and may be in the vibro-fluidized regime, especially for high accelerations. To address this, we repeated our simulations for $\epsilon_n=\epsilon_t=0.98$. However, the results are still consistent with $\tilde{\Gamma}\gtrsim1$ and $\tilde{v}_{osc}\gtrsim\tilde{v}_c$, rather than with the relation proposed by \shortcite{Godoy2008PhRvE}. Third, it is possible that our oscillation speeds are too low to turn the BNE off. Thus, we extended our default simulations up to $\tilde{\omega}=8$ and $\tilde{A}=4$. However, the BNE is observed for all oscillation speeds we tested. The disagreement may also be due to differences between our codes, or other differences in our initial setups. Future studies would need to investigate, both numerically and experimentally, whether or not the BNE turns off at high oscillation frequencies. In summary, we have found that the BNE occurs for oscillation frequencies and amplitudes above certain values, and that the critical conditions are well approximated by $\tilde{\Gamma}\gtrsim1$ and $\tilde{v}_{osc}\gtrsim1$ for the parameters we tested. Our results show the same general trend as other works that use comparable densities for small and large particles but assume different initial conditions otherwise. In Section~\ref{results_lowg}, we investigate the effects of oscillations under low-gravity environments, and discuss the possibility of having such oscillations due to impact-generated seismic waves. \subsection{Comparison with Tancredi et al. (2012)}\label{results_Tancredi12} We compared our simulations with the previous works listed in Table~\ref{tab_init} in the last subsection.
We left out the results of \shortcite{Tancredi2012MNRAS} from this discussion due to the very different oscillation style they used. In this subsection, we use initial conditions as close to theirs as we could construct, and then compare the results. \shortcite{Tancredi2012MNRAS} studied granular processes under various gravitational environments, and observed size segregation in response to shaking. Instead of the sinusoidal oscillation, they applied multiple vertical displacements of the floor of the simulation box at a constant speed. The duration of a displacement is $dt=0.1\,$s, and the time between displacements varies from $2-15\,$s, depending on the gravitational environments. The floor's speed ($v_f=0.3-10\,{\rm m \, s^{-1}}$) is linearly increased from 0 to the final value in 20 displacements. To mimic their oscillations, we do not use the vertical displacements, but instead increase the oscillation amplitudes linearly so that the maximum speeds reach their final floor speeds in 20 cycles. The oscillation amplitude is chosen to be equal to one displacement $A=v_f\,dt$. This makes the oscillation frequency $\omega=10\,{\rm rad \, s^{-1}}$ for all the simulations. Their simulation box has a size of $12\,$m$\times12\,$m and a height of $150\,$m. Due to the lack of viscoelastic and frictional interactions between particles and walls in their ESyS-PARTICLE code, they glued a layer of small particles on the floor. Since our code handles the wall-particle interactions, we do not have the glued layer. Following their setup, we create an infinitely-tall box with the floor area of $12\,$m$\times12\,$m and fill that box with 1000 small spheres with mean radius $0.25\,$m and standard deviation $0.01\,$m as well as one large sphere with radius $0.75\,$m. The restitution coefficients are estimated from their sections 2.3.1--2.3.3 as follows. 
In the head-on collision simulation of two equal spheres (see their section~2.3.1), they obtained restitution coefficients of $0.8-0.9$ for the impact speeds over $1-2\,{\rm m \, s^{-1}}$. Thus, we adopt $\epsilon_n=0.85$ for the normal coefficient of restitution between particles. In the grazing collision simulation of two equal spheres (see their section~2.3.2), they found that the ratio of final to initial speeds is $\sim0.95$ for the low-speed collisions with friction. Thus, we adopt $\epsilon_t=0.95$ for the tangential coefficient of restitution between particles. In the bouncing ball simulation against the floor with a layer of glued spheres (see their section~2.3.3), they found a coefficient of restitution of 0.593. We adopt $\epsilon_n=\epsilon_t=0.593$ not only for the interactions between particles and the floor, but also for those between particles and all the walls of the box. The treatment of friction is also different between the codes. We chose $\mu_s=0.6$ so that our friction constant becomes comparable to theirs at the threshold between static and dynamic friction. Since their code does not account for rolling friction, we set $\mu_r=0.0$. With this setup, we performed oscillation simulations for the maximum speeds of 0.3, 1, 3, 5, and $10\,$m/s and show the results in Figure~\ref{fig_Tancredi}. The figure is meant to be compared with Figure~9 in \shortcite{Tancredi2012MNRAS}. Despite the differences between our codes and shaking styles, our results qualitatively agree with theirs, apart from a difference in rise time. For the low oscillation speeds (0.3 and $1\,{\rm m \, s^{-1}}$), the intruder stays at the bottom of the box. As the maximum oscillation speed increases, the intruder starts rising, showing the familiar BNE. The duration of the maximum speed in our oscillation simulation (a fraction of the oscillation period of $0.63\,$s) is shorter than theirs.
Since the rise speed is proportional to the oscillation speed (see Section~\ref{results_vel}), it is understandable that our results show consistently longer rise times for the intruders than those in \shortcite{Tancredi2012MNRAS}. \begin{figure} \includegraphics[width=84mm]{figs/evol_height_cycles_all_Tancredi12_omega10.eps} \caption{Evolution of the height of the intruder for the maximum oscillation speeds of 0.3, 1, 3, 5, and $10\,{\rm m \, s^{-1}}$. A thick black line is drawn at a height of $2.84\,$m to represent the height of 1000 small particles with a random close packing and maximum bulk porosity of 0.64. The figure qualitatively agrees with Figure 9 in Tancredi et al. (2012). \label{fig_Tancredi}} \end{figure} \subsection{Scaling in Low-gravity Environments}\label{results_lowg} In this subsection, we apply our model to the low-gravity environments characteristic of asteroids. The goal here is to check whether the BNE occurs in such environments, and to understand the gravity dependence of the rise time of an intruder. We expect that the BNE is scalable by adjusting the oscillation frequency according to the gravity field. For example, our default case has an oscillation frequency of $\omega=3\sqrt{a_g/d_s}=93.9\,$rad/s under the gravitational acceleration of Earth $a_g=g=980\,{\rm cm \, s^{-2}}$. We change the oscillation frequencies for the gravity fields comparable to the Moon, (1) Ceres, (87) Sylvia, Eros, and Itokawa accordingly. These bodies are chosen to compare our results with those in \shortcite{Tancredi2012MNRAS}. We tested four different parameter sets --- $(\tilde{\omega},\, \tilde{A}) = (1,\, 0.5)$, $(1,\, 1)$, $(3,\, 1)$, and $(3,\, 3)$. As expected, we find that the results look similar for different gravity environments with the same scaled oscillation speeds.
For $(\tilde{\omega},\, \tilde{A}) = (1,\, 0.5)$ and $(1,\, 1)$, the BNE does not occur for any gravity fields, while for $(\tilde{\omega},\, \tilde{A}) = (3,\, 1)$ and $(3,\, 3)$, the BNE occurs for all the cases. The comparison of the evolution of an intruder's height for $(\tilde{\omega},\, \tilde{A}) = (3,\, 1)$ is plotted in Figure~\ref{fig_lowg}. We find that the results all agree within the uncertainties typical of these oscillation simulations. The figure confirms the expectation that the BNE simulations are scalable over a wide range of gravity fields. \begin{figure} \includegraphics[width=84mm]{figs/evol_height_cycles_all_Earth-Itokawa.eps} \caption{Evolution of the height of the intruder as a function of oscillation cycles for the gravities of Earth, the Moon, Ceres, Sylvia, Eros, and Itokawa. \label{fig_lowg}} \end{figure} In Figure~\ref{fig_grav_vel}, we compare our simulations under different gravity fields and the maximum oscillation speeds. The open and filled circles correspond to the runs with and without the BNE, respectively. The orange line is the critical speed we estimated in Section~\ref{results_vel}. As expected, BNEs are observed above $\tilde{v}_c$, while BNEs do not occur below the critical value. For comparison, we also plot BNE (open diamonds) and no-BNE cases (filled diamonds) from the low-gravity simulations in \shortcite{Tancredi2012MNRAS}. Since their results also show a similar transition from no BNE to BNE around a constant $\tilde{v}_c$, we would expect that there is a critical floor speed necessary for the BNE in their simulations as well. \shortcite{Tancredi2012MNRAS} estimated the floor speed threshold as $v_{\rm thre}=1.12 a_g^{0.42}$, which has a slightly different dependency on the gravitational acceleration from our relation of $v_{c}\sim \sqrt{d_s a_g}$. This difference may not be surprising since the threshold speeds in \shortcite{Tancredi2012MNRAS} are determined by relatively sparse data.
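The two threshold scalings can be contrasted numerically. In the sketch below the helper functions are our own; our criterion is written in cgs units, while for the \shortcite{Tancredi2012MNRAS} fit we assume SI units (${\rm m\,s^{-1}}$ and ${\rm m\,s^{-2}}$), which is our reading of their relation:

```python
import math

def v_crit_ours(d_s, a_g):
    """Critical oscillation speed from v_tilde ~ 1: v_c = sqrt(d_s * a_g),
    with d_s in cm and a_g in cm/s^2 (returns cm/s)."""
    return math.sqrt(d_s * a_g)

def v_thre_tancredi(a_g):
    """Tancredi et al. (2012) floor-speed threshold, v_thre = 1.12 * a_g^0.42
    (SI units assumed here)."""
    return 1.12 * a_g ** 0.42

# Both thresholds decrease with weaker gravity, but with different exponents:
# ours scales as a_g^0.5, theirs as a_g^0.42.
ratio_ours = v_crit_ours(1.0, 4.0 * 980.0) / v_crit_ours(1.0, 980.0)
ratio_theirs = v_thre_tancredi(4.0 * 9.8) / v_thre_tancredi(9.8)
```

Quadrupling the gravity doubles our threshold (exponent 0.5), while their fit increases by the slightly smaller factor $4^{0.42}$.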
\begin{figure} \includegraphics[width=84mm]{figs/grav_vel_ver2.eps} \caption{The scaled oscillation speeds compared with the scaled gravitational acceleration. The solid orange line corresponds to the critical speed $\tilde{v}_c=1.0$ estimated in Section~\ref{results_vel}. Open and filled symbols correspond to BNE and no-BNE cases in our simulations (circles) and in \shortcite{Tancredi2012MNRAS} (diamonds). For data from \shortcite{Tancredi2012MNRAS}, we simply follow their Figure~11 to distinguish BNE and no-BNE cases. \label{fig_grav_vel}} \end{figure} It is also informative to plot the rise speed of an intruder as a function of the gravity field. Figure~\ref{fig_grav_vrise} shows that, for both $(\tilde{\omega},\, \tilde{A}) = (3,\, 1)$ and $(3,\, 3)$, the rise speed is proportional to $\sqrt{a_g}$, rather than $a_g$. This is understandable in our case, since the driving frequency of our oscillation simulations is $\omega \propto \sqrt{a_g}$. The result is consistent with our finding in Section~\ref{results_vel} that the rise speed is proportional to the maximum oscillation speed. Recently, \shortcite{Guttler2013PhRvE} experimentally studied the BNE both in the laboratory and in a parabolic flight to mimic the reduced-gravity conditions comparable to Mars and the Moon. They found that the rise speed was not proportional to $\sqrt{a_g}$, but closer to $a_g$. In fact, their best fit was obtained for an exponential function. The difference between their rise-speed dependence on gravity and ours may be partly due to the difference in our shaking profiles. While we use a sinusoidal oscillation, their oscillation acceleration is approximated by a square-wave function. Moreover, due to the nature of a parabolic flight, they need to stop the oscillations every time the hyper-gravity phase kicks in, which could have compacted the particle distributions and slowed the rise of an intruder.
Future studies need to investigate the dependence of rise speed (or equivalently rise time) on the gravity field further. \begin{figure*} \includegraphics[width=0.48\textwidth]{figs/grav_vrise_omega3_A1cm_ver2_median.eps} \includegraphics[width=0.48\textwidth]{figs/grav_vrise_omega3_A3cm_ver2_median.eps} \caption{Dependence of the rise speed of an intruder on the gravity field. Left and right panels correspond to $(\tilde{\omega},\, \tilde{A}) = (3,\, 1)$ and $(3,\, 3)$, respectively. Solid and dashed lines are proportional to $\sqrt{a_g}$ and $a_g$, respectively. The rise speeds of both sets of our simulations have a $\sqrt{a_g}$ dependence. \label{fig_grav_vrise}} \end{figure*} Finally, we estimate the critical oscillation speeds of the BNE for observed asteroids. From the critical conditions confirmed in Section~\ref{results_vel}, we can derive two conditions for critical oscillation speeds, \begin{eqnarray} \tilde{v}_c\gtrsim 1.0 & \rightarrow & v \gtrsim \sqrt{d_s a_g} \ {\rm, \, and} \\ \tilde{\Gamma}_c\gtrsim 1.0 & \rightarrow & v \gtrsim \sqrt{A a_g} \ . \end{eqnarray} In Figure~\ref{fig_vcrit}, we plot these conditions for Itokawa (left panel) and Eros (right panel) by assuming three different diameters for a typical small particle (1, 10, and 100\,cm). When the particle size is $d_s=10\,$cm, the oscillation speeds need to be larger than $v_c\sim1\,{\rm cm \, s^{-1}}$ for the BNE to take place on Itokawa, and larger than $v_c\sim2.5\,{\rm cm \, s^{-1}}$ on Eros. Thus, as expected from the relation $v_c \propto \sqrt{a_g}$, size segregation occurs for weaker oscillations on smaller asteroids. The upper limit of the BNE oscillation speeds can be set by the escape speed. The escape speeds are $\sim16.5\,{\rm cm \, s^{-1}}$ for Itokawa and $\sim9.71\,{\rm m \, s^{-1}}$ for Eros, and are plotted as black solid lines in Figure~\ref{fig_vcrit} (note that the latter line overlaps with the top border of the panel).
For a small asteroid like Itokawa that has a diameter $< 500\,$m, the BNE is allowed only for small oscillation amplitudes $\lesssim 10\,$m if a typical small particle size is 100\,cm. For smaller particles, a wider range of conditions is possible. For a larger asteroid like Eros that has a diameter of $\sim20\,$km, a typical small particle size can be larger than 100\,cm. How do these oscillation speeds compare to the characteristic seismic speeds in asteroids? According to \shortcite{Asphaug1996Icar}, the estimated critical oscillation speeds are consistent with the speeds of seismic motions that could create bright annuli around craters on small asteroids such as Ida. We estimate the critical seismic speeds below and plot them in Figure~\ref{fig_vcrit} as solid and dashed green lines. The impact energy of a projectile can be written as \begin{equation} E_i = \frac{2\pi}{3}\rho_p R_p^3 v_p^2 \ , \end{equation} where $p$ denotes the projectile, $\rho$ is the density, $R$ is the spherical radius, and $v_p$ is the impact speed. Following \shortcite{Richardson2005Icar}, we also write the total seismic energy of an asteroid due to a seismic wave speed $v_s$ as \begin{equation} E_s = \frac{2\pi}{3}\rho_t R_t^3 v_s^2 \ . \end{equation} Here, $t$ denotes the target asteroid and $v_s=2\pi f A$, where $f$ and $A$ are the seismic frequency and maximum displacement, respectively, from \shortcite{Richardson2005Icar}. More complex representations may be appropriate for purely agglomerate bodies, but since there is no good understanding of the actual structure of agglomerate asteroids or the transmission of seismic waves within such a medium, we have assumed that the waves are transmitted in a simplified fashion (i.e., a sinusoidal oscillation).
When a fraction $\eta$ of the kinetic energy of an impactor is converted into seismic energy (i.e., $E_s=\eta E_i$), the seismic speed $v_s$ on an asteroid is estimated as \begin{equation} v_s = \sqrt{\eta \frac{\rho_p}{\rho_t}\left(\frac{R_p}{R_t}\right)^3} v_p \ . \end{equation} By defining a specific impact energy as \begin{equation} Q_{s} = \frac{1}{2}\frac{\rho_p}{\rho_t}\left(\frac{R_p}{R_t}\right)^3 v_p^2 \ , \end{equation} we can rewrite the seismic speed as $v_s = \sqrt{2\eta Q_{s}}$. To estimate a seismic speed for a certain impact, we consider an extreme case where an impact leads to a catastrophic disruption (i.e., an impact that results in the largest remnant having half the mass of the original target) and define the specific impact energy as $Q_s=Q_{s,D}$. \shortcite{Jutzi2010Icar} performed numerical simulations of asteroid break-ups, and estimated the specific energy threshold for disruption $Q_{s,D}$. They provided the following convenient power-law scaling: \begin{equation} Q_{s,D} = Q_0 \left(\frac{R_t}{\rm cm}\right)^\alpha + B\rho_t\left(\frac{R_t}{\rm cm}\right)^\beta \ , \end{equation} where $Q_0$, $B$, $\alpha$, and $\beta$ are fitting constants. The first term represents the strength regime where fragments are held together by material strength alone, and the second term represents the gravity regime where fragments are gravitationally bound. The transition between these two regimes is estimated to occur for target diameters between 100 and 200\,m \cite[][]{Benz1999Icar}. The fitting constants depend on a variety of parameters such as internal structures, tensile and shear strengths of constituents, and impact speeds. \shortcite{Jutzi2010Icar} considered two models for the target's internal structure --- a purely monolithic non-porous target and a porous target that consists of a body with pores that are smaller than the thickness of the shock front --- and determined the fitting constants for impact speeds of $3$ and $5\, {\rm km \, s^{-1}}$.
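The two expressions for the seismic speed given above are algebraically equivalent, which is easy to verify numerically. The minimal sketch below uses arbitrary test values (all helper names and the chosen numbers are our own) and computes $v_s$ both directly from $E_s=\eta E_i$ and via $Q_s$:

```python
import math

def seismic_speed_direct(eta, rho_p, rho_t, R_p, R_t, v_p):
    """v_s from E_s = eta * E_i, with E = (2*pi/3) * rho * R^3 * v^2."""
    return math.sqrt(eta * (rho_p / rho_t) * (R_p / R_t) ** 3) * v_p

def specific_impact_energy(rho_p, rho_t, R_p, R_t, v_p):
    """Q_s = (1/2) * (rho_p / rho_t) * (R_p / R_t)^3 * v_p^2."""
    return 0.5 * (rho_p / rho_t) * (R_p / R_t) ** 3 * v_p ** 2

def seismic_speed_from_Q(eta, Q_s):
    """Equivalent form v_s = sqrt(2 * eta * Q_s)."""
    return math.sqrt(2.0 * eta * Q_s)

# Arbitrary test values: eta = 1e-4, equal densities, R_p/R_t = 0.01,
# v_p = 5e5 cm/s (5 km/s); both routes must give the same v_s.
eta, v_p = 1.0e-4, 5.0e5
Q_s = specific_impact_energy(1.0, 1.0, 1.0, 100.0, v_p)
v1 = seismic_speed_direct(eta, 1.0, 1.0, 1.0, 100.0, v_p)
v2 = seismic_speed_from_Q(eta, Q_s)
```

Both routes return the same seismic speed, as required by the definitions.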
Since both Itokawa and Eros have low bulk densities compared to ordinary chondrites \cite[][]{Wilkison2002Icar,Fujiwara2006Sci}, we adopt the fitting parameters for a porous target. By assuming impact seismic efficiency of $\eta = 10^{-4}$ \cite[][and references therein]{Richardson2005Icar}, we plot critical seismic speeds $v_s$ for impact speeds of $3$ and $5\, {\rm km \, s^{-1}}$ as dashed and solid green lines, respectively, in Figure~\ref{fig_vcrit}. Regions below these lines correspond to seismic speeds expected for smaller impactors that will not destroy the asteroids. For Itokawa, we find that critical seismic speeds would be comparable to, but slightly larger than, the escape speed, and larger than the minimum oscillation speeds required for the BNE. For Eros, critical seismic speeds would be smaller than the escape speed, but they are still larger than the estimated minimum oscillation speeds. Thus, in both cases, we expect that the critical BNE oscillation speeds are comparable to the seismic speeds that can create craters. It is also informative to speculate how long it might take for a large block to rise to the surface of an asteroid. From Figure~\ref{fig_vosc_vrise}, we find that the rise speed of an intruder is more than an order of magnitude slower than the maximum oscillation speed for these three bed depths. Assuming an oscillation speed of $1\,{\rm cm \, s^{-1}}$ and a depth of 100\,m from the shortest axis of Itokawa of $\sim200\,$m, we can estimate that the rise time would be a few hours if the rise speed is comparable to the oscillation speed, and more than a day if the rise speed is one tenth of the oscillation speed. Similarly, by assuming an oscillation speed of $100\,{\rm cm \, s^{-1}}$ and a depth of 5.5\,km from the shortest axis of Eros of $\sim11\,$km, the rise time would be about 90\,minutes for the oscillation speed, and about 15\,hours for one tenth of that speed. 
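The rise-time estimates just quoted are straightforward arithmetic. A minimal sketch (the helper name and unit choices are our own) reproduces the numbers for Itokawa and Eros:

```python
def rise_time_hours(depth_cm, v_rise_cm_s):
    """Time for an intruder to rise through depth_cm at a constant
    rise speed v_rise_cm_s, in hours."""
    return depth_cm / v_rise_cm_s / 3600.0

# Itokawa: depth 100 m, oscillation speed 1 cm/s.
t_itokawa_fast = rise_time_hours(1.0e4, 1.0)    # a few hours if v_rise ~ v_osc
t_itokawa_slow = rise_time_hours(1.0e4, 0.1)    # > 1 day if v_rise ~ v_osc / 10
# Eros: depth 5.5 km, oscillation speed 100 cm/s.
t_eros_fast = rise_time_hours(5.5e5, 100.0)     # about 90 minutes
t_eros_slow = rise_time_hours(5.5e5, 10.0)      # about 15 hours
```

These values match the estimates in the text: roughly 2.8 and 28 hours for Itokawa, and roughly 1.5 and 15 hours for Eros.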
This implies that unless the seismic shaking lasts for more than a couple of hours, one impact might not be enough for a large block to rise to the surface, and that multiple impacts would be necessary to change the surface significantly. \begin{figure*} \includegraphics[width=0.48\textwidth]{figs/A_omega_vseismic_ver3_Itokawa.eps} \includegraphics[width=0.48\textwidth]{figs/A_omega_vseismic_ver3_Eros.eps} \caption{The critical oscillation speeds necessary for the BNE, estimated from $\tilde{\Gamma}_c=1.0$ and $\tilde{v}_c=1.0$, for Itokawa (left panel) and Eros (right panel). Blue, orange, and red lines correspond to a typical small-particle size of 1, 10, and 100\,cm, respectively. Green solid and dashed lines are the seismic speeds estimated from an impact that leads to a catastrophic disruption, and correspond to impact speeds of $5\, {\rm km \, s^{-1}}$ and $3\, {\rm km \, s^{-1}}$, respectively. Larger projectiles would destroy asteroids (above these lines), while smaller ones would generate seismic speeds (below these lines) that are comparable to the critical oscillation speeds for the BNE. The solid black line is the escape speed. \label{fig_vcrit}} \end{figure*} A potential problem for size sorting by multiple impacts is that the oscillation directions will likely vary for different impacts. To check this point, we performed simulations where we changed the direction and magnitude of {\it effective} gravity instead of oscillating cylinders sinusoidally. First, we consider the most ideal case where the impact is always applied in the vertical direction of the cylinder. We use initial conditions identical to the default case, and change only the direction of the gravity as follows. Initially, the gravity is in the $-z$ direction with the magnitude of the Earth's gravity.
Once the simulation starts, we apply a ``jolt'' to the cylinder by changing the gravity vector first to the $+z$ direction and then back to the $-z$ direction within a short period of time that is randomly chosen from $0.1-0.3\,$s. We then keep applying Earth's gravity in the $-z$ direction for a randomly chosen period of time from $0.3-0.4\,$s. After this break, we again apply a jolt as described above, and repeat these steps a number of times. Here, the choice of the duration of the jolts is arbitrary. In this setup, we find that the intruder comes to the surface within $\sim20\,$s, which is comparable to the rise time of the low-speed oscillation (e.g., $\tilde{\omega}=1.0$ and $\tilde{A}=2.0$). Next, we consider a less ideal case where the impact is applied in a random direction relative to the cylinder. Similar to the above case, the cylinder is initially under Earth's gravity in the $-z$ direction. To mimic a jolt, we randomly choose the gravity magnitude from $-2g$ to $2g$ in each direction of $x$, $y$, and $z$, and smoothly change the magnitude and direction of the gravity vector from and then back to the initial one within $0.1-0.3\,$s. After the break of $0.3-0.4\,$s, we apply another random jolt and repeat these steps a number of times. We find that the rise of the intruder is much slower in this setup. After $\sim180\,$s, the intruder is about one quarter of the bed depth from the bottom of the cylinder. Therefore, we expect that the rise time would be much longer than the ones estimated from our sinusoidal oscillations if the oscillations are applied from random directions. If, on the other hand, the oscillations are applied approximately perpendicular to a particle bed, the rise time would be less affected. \section{Discussion and Conclusions}\label{summary} We have studied the BNE in an intruder model by using the $N$-body code PKDGRAV with a soft-sphere collision model, and explored its effect on asteroids.
We have also investigated the dependence of the BNE on the particle properties, oscillation conditions, and gravitational environments. Similar to previous studies, we have found that convection is the major driving source for the BNE. Our main conclusions are the following. \begin{enumerate} \item{The occurrence of the BNE as well as the rise time of an intruder is largely independent of the choice of coefficients of restitution (see Section~\ref{results_eps}). For highly elastic particles ($\epsilon \gtrsim 0.9$), however, the rise time might differ significantly from less elastic cases.} \item{The occurrence of the BNE depends on the values of friction constants (see Section~\ref{results_mu}). Both too high and too low friction constants diminish convection and thus the BNE.} \item{The critical condition for the BNE agrees well with the limits of $\tilde{\Gamma}\gtrsim1$ and $\tilde{v}_{osc}\gtrsim\tilde{v}_c\sim1$ (see Section~\ref{results_vel}). These conditions also agree well with previous simulations and experiments that have comparable densities for small and large particles.} \item{The BNE is scalable for different gravitational environments by choosing the oscillation frequency that corresponds to the gravitational acceleration (i.e., $\omega\propto \sqrt{a_g}$, see Section~\ref{results_lowg}). Thus, the same level of size sorting is expected for a smaller oscillation speed on a small asteroid compared to a big one.} \item{The rise speed of an intruder is proportional to the oscillation speed (see Sections~\ref{results_vel} and \ref{results_lowg}). 
Also, there might be an optimal bed depth for a certain oscillation speed to achieve a fast rise of the intruder.} \item{For a wide cylinder, the convection roll along the wall may be thick enough to pass the intruder downward, leading to a ``whale'' effect (see Section~\ref{results_vel}).} \item{The BNE is expected to occur on an asteroid for seismic speeds comparable to those generated by non-destructive impacts (see Section~\ref{results_lowg}). We have compared the critical oscillation speed for the BNE with the critical seismic speed, and found that the region for the BNE overlaps with that of the seismic shaking.} \item{We also estimate the rise time of a large block from the central region to the surface of an asteroid, and predict that multiple impacts or long-lasting seismic shaking might be required for the BNE to significantly change the asteroid surface. A potential problem which could affect such multiple impacts is that the BNE might slow down significantly for randomly oriented oscillations.} \end{enumerate} The efficiency of the BNE depends on many properties. In this paper, we have explored the dependence on the coefficients of restitution, the friction constants, the oscillation frequency and amplitude, the particle bed depth, and the gravity. One of the potential directions for future study would be to investigate how the BNE changes from the dense limit to the vibro-fluidized regime. In particular, it would be interesting to perform BNE experiments in the high-speed regime to understand whether the BNE keeps occurring as our code predicts, or turns off sharply as suggested by \shortcite{Godoy2008PhRvE}. The rise time of an intruder is important to estimate the efficiency of the BNE on asteroids. Our work shows that the rise speed linearly scales with the oscillation speed and is proportional to $\sqrt{a_g}$.
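The gravity scaling in conclusions (iv) and (v) can be sketched numerically. The following Python snippet is our illustration, not part of the simulation code; it assumes the conventional nondimensionalizations $\tilde{\Gamma}=A\omega^2/a_g$ and $\tilde{v}_{osc}=A\omega/\sqrt{a_g d_s}$, so that the threshold $\tilde{v}_{osc}\gtrsim1$ gives a critical oscillation speed of order $\sqrt{a_g d_s}$, and the gravity values used are hypothetical:

```python
import math

G_EARTH = 9.81  # m s^-2

def critical_speed(a_g, d_s):
    # Critical oscillation speed A*omega implied by the threshold
    # v~_osc = A*omega / sqrt(a_g * d_s) >~ 1 (assumed form of the
    # nondimensionalization; illustration only)
    return math.sqrt(a_g * d_s)

# hypothetical illustrative numbers, not values from the paper
a_g_asteroid = 1.0e-4  # m s^-2, small-asteroid surface gravity
d_s = 0.1              # m, typical small-particle size

v_earth = critical_speed(G_EARTH, d_s)
v_ast = critical_speed(a_g_asteroid, d_s)

# the required speed scales as sqrt(a_g): the same degree of size
# sorting needs far gentler shaking in milligravity
print(f"Earth:    {v_earth:.3f} m/s")
print(f"asteroid: {v_ast:.2e} m/s")
print(f"ratio = sqrt(g ratio) = {v_earth / v_ast:.1f}")
```

The point of the sketch is only the square-root scaling itself: halving the particle size or reducing gravity by four orders of magnitude each lowers the required shaking speed by a fixed factor, independent of the other parameters.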
Although this result is intuitive, we should check whether such a trend would hold for different shaking models as well since most of our simulations assume sinusoidal oscillations. Interestingly, the analytical model developed by \shortcite{Jiongming1998NCimD} also estimates that the rise speed is proportional to $\sqrt{a_g}$. In this paper, we did not mention the effects of the particle's size distribution, the container width, or the container shape. We assume the idealized system where all of the small particles have the same size (except for Section~\ref{results_Tancredi12}) and the size ratio of large to small particles is $d_l/d_s=3$. However, particles in actual asteroids will generally follow some size distribution. Previous studies have shown that the BNE occurs only for a radius ratio of large and small particles greater than some threshold \cite[e.g.,][]{Duran1994PhRvE,Vanel1997PhRvL}. The size ratio we chose for our simulations ($d_l/d_s=3$) is near the threshold according to these experiments. Thus, we expect that the size ratio might need to be near this value or larger for the BNE to take place. Also, a preliminary study we performed indicates that the rise speed of an intruder slows down when a size distribution of small particles is introduced. Such effects on the rise speed should be studied more carefully in future work. Differences due to the container could also be significant. The experimental studies by \shortcite{Hejmady2012PhRvE} suggest that both bed heights and widths affect the rise time of an intruder. Furthermore, previous studies have shown that ``reverse'' convection (i.e., in which the particle flow ascends along the wall and descends around the center) is seen for granular materials in a container with outwardly slanting walls \cite[e.g.,][]{Grossman1997PhRvE} and thus the intruder is trapped at the bottom rather than at the bed surface in such a container \cite[e.g.,][]{Knight1993PhRvL}. 
Indeed, when we use a bowl instead of a cylinder as a container, we observe that the intruder goes up and down in the granular bed, but never comes to the surface for a density ratio of $\rho_l/\rho_s=1$. Our further tests with $\rho_l/\rho_s=0.5$ and $2$ show that the BNE occurs for the former, but not for the latter. The rise of the intruder in the former case is consistent with the expected behavior of the normal fluid. There is a related issue when we consider the BNE in an asteroid. The BNE requires the container wall to create the convective flow, but there is no actual wall such as the ones we considered here in an asteroid. However, it is plausible that a very large body could act as a wall for the smaller particles. For future work, we intend to investigate the effect of having no lateral walls by adopting periodic boundary conditions, and also to improve our current study by modelling self-gravitating rubble piles. One of the most important fundamental problems is the absence of knowledge of the coefficients of restitution and friction constants of particles in asteroids. Currently, our knowledge of these values relies on experiments, for example, the collision experiment done by \shortcite{Durda2011Icar} on 1\,m-size granite spheres or the avalanche experiment done by \shortcite{Yu2014inprep} with gravels. However, it is not clear whether such objects have the same mechanical behavior as materials composing asteroids of interest. There are some future missions that are expected to return asteroid samples, such as Hayabusa~2, OSIRIS-REx, and potentially MarcoPolo-R. These efforts will lead to a better understanding of surface features and will provide invaluable knowledge of asteroid compositions as well as their response to external solicitations, such as that of the sampling tool. \section*{Acknowledgements} We thank Carsten G\"{u}ttler for useful discussions, and the referee, Gonzalo Tancredi, for careful reading and detailed comments. 
SM is supported by an Astronomy Center for Theory and Computation Prize Fellowship at the University of Maryland, and also by the Dundee Fellowship at the University of Dundee. PM and SRS acknowledge support from the French space agency CNES. All of the simulations in this paper were done on the YORP cluster in the Department of Astronomy at UMD and on the Deepthought High-Performance Computing Cluster at UMD. This material is based partly on work supported by the US National Aeronautics and Space Administration under Grant Nos. NNX08AM39G, NNX10AQ01G, and NNX12AG29G issued through the Office of Space Science and by the National Science Foundation under Grant No. AST1009579. \bibliographystyle{mn2e}
\section{Introduction} \label{sec:intro} The \emph{(directed) Bruhat graph} $\widehat{\Gamma}$ of a Coxeter group $W$ is the directed graph with vertex set $W$ and directed edges $w \to wt$ whenever $\ell(wt)>\ell(w)$; we write $\widehat{\Gamma}(u,v)$ for its restriction to a Bruhat interval $[u,v] \subset W$. These graphs appear ubiquitously in the combinatorics of Coxeter groups and Bruhat order \cite{dyer-bruhat-graph}, the topology of flag, Schubert, and Richardson varieties as the GKM-graph for the natural torus action \cite{GKM, guillemin-holm-zara}, and in the geometry of these varieties and related algebra, for example in the context of Kazhdan--Lusztig polynomials \cite{blundell2021towards, Brenti-combinatorial-formula, davies2021advancing, Dyer-hecke-algebras}. In all of these contexts, the directions of the edges, and sometimes additional edge labels, are centrally important. In this work, however, we study the associated \emph{undirected} graphs $\Gamma(u,v)$. In particular, from the perspective of the undirected graph, it is very natural to study graph automorphisms (in contrast, the directed Bruhat graph $\widehat{\Gamma}$ has very few automorphisms \cite{Waterhouse}), and these automorphisms end up having close connections to previous work on smooth Schubert varieties \cite{lakshmibai-sandhya, Carrell-smoothness}, self-dual Bruhat intervals \cite{self-dual}, Billey--Postnikov decompositions \cite{billey-postnikov, richmond-slofstra-fiber-bundle}, and special matchings \cite{SM-advances}. \subsection{Regular, vertex-transitive, and self-dual Bruhat graphs} The following well-known theorem, combining results of Lakshmibai--Sandhya \cite{lakshmibai-sandhya} and Carrell--Peterson \cite{Carrell-smoothness}, helped establish the fundamentality of both the Bruhat graph and pattern avoidance conditions in the combinatorial and geometric study of Schubert varieties. 
\begin{theorem}[Lakshmibai--Sandhya \cite{lakshmibai-sandhya}, Carrell--Peterson \cite{Carrell-smoothness}] \label{thm:smoothness} The following are equivalent for a permutation $w$ in the symmetric group $\mathfrak{S}_n$: \begin{enumerate} \item[\normalfont(S1)] the Bruhat graph $\widehat{\Gamma}(w)$ is a regular graph, \item[\normalfont(S2)] the permutation $w$ avoids the patterns $3412$ and $4231$, \item[\normalfont(S3)] the poset $[e,w]$ is rank-symmetric, and \item[\normalfont(S4)] the Schubert variety $X_w$ is smooth. \end{enumerate} \end{theorem} In light of (S3), it is natural to ask whether $[e,w]$ is in fact self-dual as a poset when $X_w$ is smooth. This turns out to not always be the case, but the smaller class of self-dual intervals also admits a nice characterization by pattern avoidance: \begin{theorem}[G.--G. \cite{self-dual}] \label{thm:self-dual} The following are equivalent for a permutation $w \in \mathfrak{S}_n$: \begin{enumerate} \item[\normalfont(SD1)] the Bruhat interval $[e,w]$ is self-dual as a poset, and \item[\normalfont(SD2)] the permutation $w$ avoids the patterns $3412$ and $4231$ as well as $34521, 54123, 45321,$ and $54312$. \end{enumerate} \end{theorem} In our first main theorem here, we characterize by pattern avoidance those permutations $w$ such that $\Gamma(e,w)$ is \emph{vertex-transitive}; this characterization implies that this class of permutations sits nicely between the classes of self-dual permutations (Theorem~\ref{thm:self-dual}) and smooth permutations (Theorem~\ref{thm:smoothness}). \begin{theorem} \label{thm:vertex-transitive} The following are equivalent for a permutation $w \in \mathfrak{S}_n$: \begin{enumerate} \item[\normalfont(VT1)] the undirected Bruhat graph $\Gamma(e,w)$ is a vertex-transitive graph, \item[\normalfont(VT2)] the permutation $w$ avoids the patterns $3412$ and $4231$ as well as $34521$ and $54123$. 
\end{enumerate} \end{theorem} Since vertex-transitive graphs are necessarily regular, it is clear that the permutations from Theorem~\ref{thm:vertex-transitive} are a subset of those from Theorem~\ref{thm:smoothness}, and this is borne out by comparing conditions (S2) and (VT2). It is not at all conceptually clear, however, why the self-dual permutations of Theorem~\ref{thm:self-dual} should in turn be a subset of those from Theorem~\ref{thm:vertex-transitive}, even though this fact is easily seen by comparing conditions (VT2) and (SD2). A conceptual bridge between these two classes of permutations is provided by Conjecture~\ref{conj:orbit-is-interval}. \begin{conj} \label{conj:orbit-is-interval} Let $w \in \mathfrak{S}_n$ and let $\mathcal{O}=\{\varphi(e) \mid \varphi \in \aut(\Gamma(e,w))\}$ be the orbit of the identity under graph automorphisms of $\Gamma(e,w)$, then \[ \mathcal{O}=[e,v] \] for some $v \leq w$. \end{conj} Indeed, if $[e,w]$ is self-dual, then $w \in \mathcal{O}$, and so if Conjecture~\ref{conj:orbit-is-interval} holds we must have $\mathcal{O}=[e,w]$. That is, $\Gamma(w)$ must be vertex-transitive. In the course of the proof of Theorem~\ref{thm:vertex-transitive} (Section~\ref{sec:vertex-transitive}) and the refinement of Conjecture~\ref{conj:orbit-is-interval} in Section~\ref{sec:orbits}, we are led to consider certain automorphisms of $\Gamma(u,v)$ arising from perfect matchings on the Hasse diagram of $[u,v]$. That these automorphisms are the same thing as the previously well-studied \emph{special matchings} on $[u,v]$ is the subject of our second main theorem. 
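For readers who want to experiment with condition (VT2), the avoidance of the four patterns can be checked by brute force for small $n$. The following Python sketch is our illustration, not the authors' code:

```python
from itertools import combinations

def contains_pattern(w, p):
    # brute-force classical pattern containment; fine for small n
    n, k = len(w), len(p)
    for idx in combinations(range(n), k):
        vals = [w[i] for i in idx]
        if all((vals[a] < vals[b]) == (p[a] < p[b])
               for a in range(k) for b in range(a + 1, k)):
            return True
    return False

# condition (VT2): avoid 3412, 4231, 34521, and 54123
VT2_PATTERNS = [(3, 4, 1, 2), (4, 2, 3, 1), (3, 4, 5, 2, 1), (5, 4, 1, 2, 3)]

def satisfies_vt2(w):
    return not any(contains_pattern(w, p) for p in VT2_PATTERNS)

# 34521 is smooth (it avoids 3412 and 4231) yet fails (VT2):
print(satisfies_vt2((3, 4, 5, 2, 1)))  # False
print(satisfies_vt2((5, 4, 3, 2, 1)))  # True: the longest element
```

The last two lines illustrate the strict inclusion of classes: $34521$ indexes a smooth Schubert variety but not a vertex-transitive Bruhat graph, while the longest element $w_0$, being a decreasing sequence, contains only decreasing patterns and so satisfies (VT2).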
\subsection{Special matchings and Bruhat automorphisms} \emph{Special matchings} (see the definition in Section~\ref{sec:intro-SM}) on Bruhat intervals were introduced \cite{SM-original, SM-advances} because they can be used to define a recurrence for \emph{Kazhdan--Lusztig $R$-polynomials} \cite{kazhdan-lusztig-polynomials}, which allows for the resolution of the \emph{Combinatorial Invariance Conjecture} in the case of lower intervals $[e,w]$. These matchings are intended to generalize many of the combinatorial properties of the matching on $W$ induced by multiplication by a simple reflection $s$. Special matchings on Bruhat intervals and related posets have since found several other combinatorial and topological applications and been generalized in several ways \cite{SM-topology, SM-diamonds, SM-zircon}, and special matchings on lower Bruhat intervals have been completely classified \cite{SM-lower-classification}. In Theorem~\ref{thm:SM-equals-automorphism-classical-type} and Conjecture~\ref{conj:SM-equals-automorphism-general} below we give a new characterization of special matchings of Bruhat intervals $[u,v]$ in terms of automorphisms of $\Gamma(u,v)$. This characterization is notable because it expresses the special matching condition, originally formulated as a condition only on Bruhat covers, as a condition on the global structure of the undirected Bruhat graph. A Coxeter group $W$ is called \emph{right-angled} if every pair of simple generators either commutes or generates an infinite dihedral group. \begin{theorem} \label{thm:SM-equals-automorphism-classical-type} Let $W$ be a right-angled Coxeter group or the symmetric group and let $u \leq v$ be elements of $W$. Then a perfect matching of the Hasse diagram of $[u,v]$ is a special matching if and only if it is an automorphism of $\Gamma(u,v)$.
\end{theorem} \begin{conj} \label{conj:SM-equals-automorphism-general} Theorem~\ref{thm:SM-equals-automorphism-classical-type} holds for arbitrary Coxeter groups $W$. \end{conj} \subsection{Outline} In Section~\ref{sec:background}, we cover background and definitions relating to Coxeter groups, Bruhat order and Bruhat graphs, Billey--Postnikov decompositions, and special matchings. In Section~\ref{sec:automorphisms} we describe several sources of automorphisms of $\Gamma(u,v)$: left and right multiplication by a simple generator $s$ via the Lifting Property, and \emph{middle multiplication}, which we introduce, when the interval admits a particularly nice Billey--Postnikov decomposition. In Section~\ref{sec:vertex-transitive} we prove Theorem~\ref{thm:vertex-transitive}, classifying vertex-transitive intervals $[e,w]$. In Section~\ref{sec:orbits} we give a more precise version of Conjecture~\ref{conj:orbit-is-interval} in terms of \emph{almost reducible decompositions} and some partial results towards resolving the conjecture. Section~\ref{sec:automorphisms} proves Theorem~\ref{thm:SM-equals-automorphism-classical-type} and one direction of Conjecture~\ref{conj:SM-equals-automorphism-general}, establishing a close connection between automorphisms of $\Gamma(u,v)$ and special matchings on $[u,v]$. The proof of Theorem~\ref{thm:SM-equals-automorphism-classical-type} relies on a structural property of Bruhat order, the existence of upper bounds of \emph{butterflies}, which may be of independent interest. This property is discussed and proven for the symmetric group and right-angled Coxeter groups in Section~\ref{sec:butterflies}. \section{Background and definitions} \label{sec:background} \subsection{Coxeter groups and reflections} We refer the reader to \cite{bjorner-brenti} for basic definitions and background for Coxeter groups. 
For a Coxeter group $W$ with simple generators $S=\{s_1,\ldots,s_r\}$ and an element $w \in W$, an expression $w=s_{i_1}\cdots s_{i_{\ell}}$ is a \emph{reduced word} of $w$ if it is of minimal length, and in this case $\ell=\ell(w)$ is the \emph{length} of $w$. The \emph{reflections} $T$ are the $W$-conjugates of the simple reflections. The \emph{(left) inversion set} of $w$ is \[ T_L(w) \coloneqq \{ t \in T \mid \ell(tw)<\ell(w)\}, \] and the \emph{(left) descent set} of $w$ is $D_L(w)\coloneqq S \cap T_L(w)$. \emph{Right} inversion and descent sets $T_R(w),D_R(w)$ are defined analogously, using instead right multiplication by $t$. It is not hard to see that $\ell(w)=|T_L(w)|=|T_R(w)|$. Given $J \subseteq S$, the \emph{parabolic subgroup} $W_J$ is the subgroup of $W$ generated by $J$, viewed as a Coxeter group with simple generators $J$. Each coset $wW_J$ for $W_J$ in $W$ contains a unique element $w^J$ of minimal length, and this determines a decomposition $w=w^Jw_J$ with $w_J \in W_J$ and $\ell(w)=\ell(w^J)+\ell(w_J)$. The set $W^J \coloneqq \{w^J \mid w \in W\}$ is the \emph{parabolic quotient} of $W$ with respect to $J$, and has the following alternative description: \[ W^J = \{ u \in W \mid D_R(u) \cap J = \emptyset \}. \] If $W$ is finite, it contains a unique element $w_0$ of maximum length, and the image $w_0^J$ of $w_0$ in any parabolic quotient is the unique longest element of $W^J$. We write $w_0(J)$ for the longest element of the parabolic subgroup $W_J$. \subsection{Bruhat graphs and Bruhat order} The \emph{directed Bruhat graph} $\widehat{\Gamma}$ of $W$ is the directed graph with vertex set $W$ and directed edges $w \to wt$ whenever $t$ is a reflection with $\ell(wt)>\ell(w)$. Note that, since $T$ is closed under conjugation, the ``left" and ``right" versions of $\widehat{\Gamma}$ in fact coincide. The \emph{(undirected) Bruhat graph} $\Gamma$ is the associated simple undirected graph. 
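To make the definitions concrete, the directed Bruhat graph of a small symmetric group can be generated directly from them. The following Python sketch is our illustration (not from the paper), using the standard fact that Coxeter length in $\mathfrak{S}_n$ equals the number of inversions:

```python
from itertools import combinations, permutations

def length(w):
    # Coxeter length in the symmetric group = number of inversions
    return sum(1 for i, j in combinations(range(len(w)), 2) if w[i] > w[j])

def directed_bruhat_edges(n):
    # edges w -> w*t whenever the reflection t = t_{ij}, which swaps
    # positions i and j under right multiplication, increases length
    edges = set()
    for w in permutations(range(1, n + 1)):
        for i, j in combinations(range(n), 2):
            wt = list(w)
            wt[i], wt[j] = wt[j], wt[i]
            wt = tuple(wt)
            if length(wt) > length(w):
                edges.add((w, wt))
    return edges

# each unordered pair {w, wt} contributes exactly one directed edge,
# so the edge count is n! * C(n,2) / 2
print(len(directed_bruhat_edges(3)))  # 9 = 3! * 3 / 2
```

In particular, the full undirected graph $\Gamma$ on $\mathfrak{S}_n$ is regular of degree $|T|=\binom{n}{2}$; regularity becomes a nontrivial condition only after restricting to an interval $[u,v]$, as in Theorem~\ref{thm:smoothness}.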
The directed graph $\widehat{\Gamma}$ is much more commonly considered in the literature, and often called ``the Bruhat graph" but, since our focus in this work is on the undirected graph $\Gamma$, when directedness is not specified we mean the undirected graph. The \emph{(strong) Bruhat order} $(W,\leq)$ is the partial order on $W$ obtained by taking the transitive closure of the relation determined by $\widehat{\Gamma}$. We write $[u,v]$ for the interval $\{w \in W \mid u \leq w \leq v\}$ in Bruhat order. For $u \leq v$, we write $\widehat{\Gamma}(u,v)$ and $\Gamma(u,v)$ for the restrictions of $\widehat{\Gamma}, \Gamma$ to the vertex set $[u,v]$; when $u$ is the identity element $e$, we sometimes write simply $\widehat{\Gamma}(v)$ and $\Gamma(v)$. The following fundamental properties of Bruhat order will be of use throughout the paper. \begin{prop}[Exchange Property] \label{prop:exchange-property} Let $w \in W$ and $t \in T$ be such that $\ell(wt)<\ell(w)$, and let $s_{i_1}\cdots s_{i_k}$ be any (not-necessarily-reduced) expression for $w$, then for some $j$ we have \[ wt=s_{i_1}\cdots \widehat{s_{i_j}} \cdots s_{i_k}. \] \end{prop} \begin{prop}[Subword Property] \label{prop:subword-property} Let $u,v \in W$, then $u \leq v$ if and only if some (equivalently, every) reduced word for $v$ contains a reduced word for $u$ as a subword. \end{prop} \begin{prop}[Lifting Property] \label{prop:lifting-property} Let $u \leq v$. If $s \in D_L(v) \setminus D_L(u)$, then $su<v$ and $u<sv$; analogously, if $s \in D_R(v) \setminus D_R(u)$, then $us<v$ and $u<vs$. \end{prop} \begin{prop} \label{prop:monotonicity-of-projection} Let $u \leq v$, then for any $J \subseteq S$ we have $u^J \leq v^J$. \end{prop} \subsection{Billey--Postnikov decompositions} For $w \in W$, we write $\supp(w)$ for the \emph{support} of $w$: the set of simple reflections appearing in some (equivalently, every) reduced word for $w$. 
\begin{defin}[Billey--Postnikov \cite{billey-postnikov}, Richmond--Slofstra \cite{richmond-slofstra-fiber-bundle}] \label{def:BP-decomposition} Let $W$ be a Coxeter group and $J \subseteq S$, the parabolic decomposition $w=w^Jw_J$ of $w$ is a \emph{Billey--Postnikov decomposition} or \emph{BP-decomposition} if \[ \supp(w^J) \cap J \subseteq D_L(w_J). \] \end{defin} BP-decompositions were introduced by Billey and Postnikov in \cite{billey-postnikov} in the course of their study of pattern avoidance criteria for smoothness of Schubert varieties in all finite types. The following characterizations of BP-decompositions will be useful to us. \begin{prop}[Richmond--Slofstra \cite{richmond-slofstra-fiber-bundle}] \label{prop:BP-equivalences} For $w \in W$ and $J \subseteq S$, the following are equivalent: \begin{enumerate} \item $w=w^Jw_J$ is a BP-decomposition, \item the multiplication map $\left( [e,w^J] \cap W^J \right) \times [e,w_J] \to [e,w]$ is a bijection, \item $w_J$ is the maximal element of $W_J \cap [e,w]$. \end{enumerate} \end{prop} \subsection{Special matchings and Kazhdan--Lusztig polynomials} \label{sec:intro-SM} The \emph{Hasse diagram}, denoted $H(P)$, of a poset $P$ is the undirected graph with vertex set $P$ and edges $(x,y)$ whenever $x \lessdot_P y$ is a cover relation in $P$. Note that the Hasse diagram $H(W)$ of Bruhat order on $W$ is a (non-induced) subgraph of $\Gamma$, as the two graphs share the vertex set $W$, but $H(W)$ contains only those edges $(x,y)$ of $\Gamma$ such that $| \ell(x) - \ell(y) | =1$. A \emph{perfect matching} of a graph $G$ is a fixed-point-free involution $M:G \to G$ such that $(x,M(x))$ is an edge of $G$ for all $x \in G$. \begin{defin}[Brenti \cite{SM-original}, Brenti--Caselli--Marietti \cite{SM-advances}] \label{def:special-matching} A perfect matching $M$ on the Hasse diagram of a poset $P$ is a \emph{special matching} if, for every cover relation $x \lessdot_P y$, either $M(x)=y$ or $M(x) <_P M(y)$. 
\end{defin} The following general property of special matchings will be useful. \begin{prop}[Brenti--Caselli--Marietti \cite{SM-advances}] \label{prop:sm-restricts-to-interval} Let $x<_P y$ and let $M$ be a special matching on a poset $P$. If $x<M(x)$ and $M(y)<y$, then $M$ restricts to a special matching on the poset $[x,y]$. \end{prop} \section{Automorphisms and middle multiplication} \label{sec:automorphisms} \subsection{Elementary automorphisms}\label{sub:elementary-automorphisms} The following result of Waterhouse shows that $\widehat{\Gamma}$ has no nontrivial automorphisms as a directed graph. \begin{theorem}[Waterhouse \cite{Waterhouse}] Let $W$ be an irreducible Coxeter group which is not dihedral, then $\aut((W,\leq))$ (equivalently, $\aut(\widehat{\Gamma})$) is generated by the graph automorphisms of the Dynkin diagram of $W$ and the group inversion map on $W$. \end{theorem} In this paper we study the much richer sets of automorphisms of $\Gamma$ and particularly of its subgraphs $\Gamma(u,v)$. The simplest automorphisms of $\Gamma$ not coming from automorphisms of $\widehat{\Gamma}$ follow from the Lifting Property: \begin{prop} \label{prop:right-left-multiplication-give-autos} Let $u,v \in W$ with $u \leq v$ and suppose $s \in D_L(v) \setminus D_L(u)$, then left multiplication $L_s: W \to W$ by $s$ restricts to a graph automorphism of $\Gamma(u,v)$. Similarly, if $s \in D_R(v) \setminus D_R(u)$, then right multiplication $R_s: W \to W$ by $s$ restricts to a graph automorphism of $\Gamma(u,v)$. \end{prop} \begin{proof} As left and right multiplication commute, and as $\Gamma$ may equivalently be defined either in terms of left or right multiplication by reflections, it is clear that $L_s,R_s$ define automorphisms of $\Gamma$. We just need to check that, under the hypotheses, they preserve the Bruhat interval $[u,v]$. Suppose $s \in D_L(v) \setminus D_L(u)$ and $x \in [u,v]$. 
If $s \in D_L(x)$, then by the Lifting Property (Proposition~\ref{prop:lifting-property}) we have $u \leq sx$, thus, since $sx<x\leq v$ we have $sx \in [u,v]$. Similarly, if $s \not \in D_L(x)$, then the Lifting Property implies that $u\leq x<sx \leq v$, so again $sx \in [u,v]$. The case of right multiplication is exactly analogous. \end{proof} We say the element $w\in W$ has a \emph{disjoint support decomposition} if it may be expressed as a nontrivial product $w=w'w''$ with $\supp(w') \cap \supp(w'') = \emptyset$ (note that, in this case, we have $w'=w^J$ and $w''=w_J$ with $J=\supp(w'')$). \begin{prop} \label{prop:disjoint-support} Let $w=w'w''$ be a disjoint support decomposition, then: \begin{align*} \widehat{\Gamma}(w) &\cong \widehat{\Gamma}(w') \times \widehat{\Gamma}(w''), \\ \Gamma(w) &\cong \Gamma(w') \times \Gamma(w''),\\ [e,w] &\cong [e,w'] \times [e,w'']. \end{align*} In each case, the isomorphism is given by group multiplication. \end{prop} \begin{proof} The latter two assertions follow from the first. The first assertion can be easily seen by choosing a reduced word $w=s_{i_1}\cdots s_{i_k} s_{i_{k+1}} \cdots s_{i_{\ell}}$ such that $s_{i_1}\cdots s_{i_k}=w'$ and $s_{i_{k+1}} \cdots s_{i_{\ell}}=w''$ and applying the Subword Property and Exchange Property. \end{proof} Proposition~\ref{prop:disjoint-support} implies in particular that when $w=w'w''$ is a disjoint support decomposition, $\Gamma(w),\widehat{\Gamma}(w),$ and $[e,w]$ have automorphisms induced by automorphisms for $w',w''$. \subsection{Middle multiplication} A key observation of this work is that, when $w$ admits a particularly nice parabolic decomposition which is sufficiently close to a disjoint support decomposition, $\Gamma(w)$ has additional automorphisms coming from \emph{middle multiplication}. These automorphisms will be key to the classification in Section~\ref{sec:vertex-transitive} of vertex-transitive Bruhat graphs $\Gamma(w)$ in the symmetric group. 
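As a concrete aid (our illustration, not from the paper), the factorization $x=x^Jx_J$ used throughout can be computed explicitly in $\mathfrak{S}_n$: for $J$ a set of simple transpositions $s_i=(i,\ i+1)$, right multiplication by $W_J$ permutes the corresponding consecutive blocks of positions, so the minimal coset representative $x^J$ is obtained by sorting the values of $x$ increasingly on each block.

```python
def j_blocks(J, n):
    # maximal runs of positions joined by s_i = (i, i+1) for i in J
    blocks, block = [], [1]
    for i in range(1, n):
        if i in J:
            block.append(i + 1)
        else:
            blocks.append(block)
            block = [i + 1]
    blocks.append(block)
    return blocks

def parabolic_factor(x, J):
    # return (x^J, x_J): x^J sorts x's values increasingly on each
    # J-block of positions (hence has no right descent in J), and
    # x_J = (x^J)^{-1} x only permutes positions within the blocks
    n = len(x)
    rep = list(x)
    for block in j_blocks(J, n):
        for pos, val in zip(block, sorted(x[p - 1] for p in block)):
            rep[pos - 1] = val
    rep = tuple(rep)
    inv = {v: i + 1 for i, v in enumerate(rep)}
    x_J = tuple(inv[x[i]] for i in range(n))
    return rep, x_J

# x = 3142 in S_4 with J = {s_3}:
print(parabolic_factor((3, 1, 4, 2), {3}))  # ((3, 1, 2, 4), (1, 2, 4, 3))
```

Here permutations multiply by $(uv)(i)=u(v(i))$, so one can verify directly that $x^J x_J = x$ and that $x_J$ lies in $W_J$.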
\begin{prop} \label{prop:middle-mult-is-automorphism} Suppose $w=w^Jw_J$ is a BP-decomposition of $w$ and in addition we have $\supp(w^J) \cap \supp(w_J) =\{s\}$, then the middle multiplication map \[ \phi: x \mapsto x^J s x_J \] is an automorphism of the Bruhat graph $\Gamma(w)$. \end{prop} \begin{proof} Suppose without loss of generality that $J=\supp(w_J)$. By Proposition~\ref{prop:BP-equivalences}, we have that $y \mapsto (y^J,y_J)$ is a bijection between $[e,w]$ and $([e,w^J]\cap W^J) \times [e,w_J]$. We need to verify that if $x=yt$ for $x,y \in [e,w]$ and $t \in T$ then $\phi(x)=\phi(y)t'$ for some $t' \in T$. If $x_J=y_J$, then \begin{align*} \phi(x)&=x^Jsy_J=xy_J^{-1}sy_J, \\ \phi(y)&=y^Jsy_J=yy_J^{-1}sy_J. \end{align*} Thus, since $\phi$ acts on both $x$ and $y$ by right multiplication by the reflection $y_J^{-1}sy_J$, we are done. The case $x^J=y^J$ is similar. Thus we are left with the case where $x^J \neq y^J$ and $x_J \neq y_J$. Suppose without loss of generality that $\ell(y)>\ell(x)$. Fix some reduced word $y=s_{i_1} \cdots s_{i_{\ell}}$ such that $s_{i_1} \cdots s_{i_k}=y^J$ and $s_{i_{k+1}} \cdots s_{i_{\ell}}=y_J$. By the Exchange Property (Proposition~\ref{prop:exchange-property}), we have a not-necessarily-reduced word $x=s_{i_1} \cdots \widehat{s_{i_j}} \cdots s_{i_{\ell}}$ for some $1 \leq j \leq \ell$. If $j > k$, so that we are deleting a letter from $y_J$, then $z=s_{i_{k+1}} \cdots \widehat{s_{i_j}} \cdots s_{i_{\ell}}$ still lies in $W_J$, and so uniqueness of parabolic decompositions implies that $z=x_J$ and $x^J=y^J$, a contradiction. Thus it must be that $j \leq k$. Furthermore, we must have $z'=s_{i_1} \cdots \widehat{s_{i_j}} \cdots s_{i_{k}} \not \in W^J$, since otherwise $x=z'y_J$ would be a parabolic decomposition and uniqueness would force $x_J=y_J$, a contradiction. By definition, this means that $z'$ has some right descent in $J$, and since $\supp(z') \subseteq \supp(y^J) \subseteq \supp(w^J)$ and $\supp(w^J) \cap J = \{s\}$, it must be that $s \in D_R(z')$. This implies that $(x^J,x_J)=(z's,sy_J)$. Now, \begin{align*} \phi(x)&=x^Jsx_J=z'sy_J \\ \phi(y)&=y^Jsy_J.
\end{align*} So $\phi(x)\phi(y)^{-1}=z'(y^J)^{-1}$. We know $z'=y^Jt''$ for some $t'' \in T$ by the Exchange Property, so this expression becomes \[ \phi(x)\phi(y)^{-1}=y^Jt''(y^J)^{-1} \in T, \] as desired. \end{proof} \section{Vertex transitive Bruhat graphs} \label{sec:vertex-transitive} To prove Theorem~\ref{thm:vertex-transitive}, we introduce another condition: \begin{itemize} \item[(VT3)] \textit{the element $w$ is almost-polished}, \end{itemize} and show its equivalence to (VT1) and (VT2). \begin{defin}\label{def:almost-polished} Let $(W,S)$ be a finite Coxeter system. An element $w\in W$ is \emph{almost-polished} if there exist pairwise disjoint subsets $S_1,\ldots,S_k\subset S$, each of which is a connected subset of the Dynkin diagram, together with coverings $S_i=J_i\cup J_i'$ for $i=1,\ldots,k$, such that \[w=\prod_{i=1}^k w_0(J_i)w_0(J_i\cap J_i')w_0(J_i').\] \end{defin} Note that if we reorder the $S_i$'s, a possibly different almost-polished element can be obtained. Unlike \emph{polished elements} \cite{self-dual}, for almost-polished elements, we do not require that $J_i\cap J_i'$ is totally disconnected. We then prove Theorem~\ref{thm:vertex-transitive} via (VT1)$\Rightarrow$(VT2)$\Rightarrow$(VT3)$\Rightarrow$(VT1). \subsection{(VT1)$\Rightarrow$(VT2)} For a simple graph $G$ and $v\in V(G)$, let $N_d(v)$ be the set of vertices with distance exactly $d$ from $v$. In particular, $N_0(v)=\{v\}$ and for the Bruhat graph $\Gamma(w)$, $N_1(w)=\{wt_{ij}\:|\: w(i)>w(j)\}$ where $t_{ij}$ is the transposition $(i\ j)$ with $i<j$. For $\varphi\in\aut(G)$, it is clear that if $\varphi(u)=v$, then $\varphi(N_d(u))=N_d(v)$ for all $d=1,2,\ldots$. We start with a simple lemma concerning $\aut(\Gamma(w))$, which intuitively says that $\varphi\in\aut(\Gamma(w))$ ``preserves triangles'' in $N_1(w)$ and $N_1(e)$. \begin{lemma}\label{lem:preserve-triangle} Let $\varphi\in\aut(\Gamma(w))$ be such that $\varphi(w)=e$.
\begin{enumerate} \item If $i<j<k$ and $w(i)>w(j)>w(k)$, then for some $a<b<c$ with $w\geq t_{ac}$, \[\varphi(\{wt_{ij},wt_{ik},wt_{jk}\})=\{t_{ab},t_{ac},t_{bc}\}.\] \item If $a<b<c$ and $w\geq t_{ac}$, then for some $i<j<k$ and $w(i)>w(j)>w(k)$, \[\varphi^{-1}\big(\{t_{ab},t_{ac},t_{bc}\}\big)=\{wt_{ij},wt_{ik},wt_{jk}\}.\] \end{enumerate} \end{lemma} \begin{proof} Recall that for $a<b<c$, $t_{ac}\geq t_{ab}$ and $t_{ac}\geq t_{bc}$. For (1), we notice that the three elements $wt_{ij},wt_{ik},wt_{jk}\in N_1(w)$ have two common neighbors in $wt_{ij}t_{jk}=wt_{ik}t_{ij}, wt_{jk}t_{ij}=wt_{ik}t_{jk}\in N_2(w)$. In order for $\varphi(wt_{ij})=t_{x_1y_1}$, $\varphi(wt_{ik})=t_{x_2y_2}$ and $\varphi(wt_{jk})=t_{x_3y_3}$ to have two common neighbors in $N_2(e)$, $\{x_1,y_1\}$, $\{x_2,y_2\}$, $\{x_3,y_3\}$ must pairwise intersect, so they must be $\{a,b\}$, $\{a,c\}$, $\{b,c\}$ in some order, for some $a<b<c$. For (2), the exact same reasoning works by analysing $N_2(w)$. \end{proof} \begin{proof}[Proof of implication (VT1)$\Rightarrow$(VT2)] Assume $\Gamma(w)$ is vertex-transitive. In particular, it is a regular graph so Theorem~\ref{thm:smoothness} implies that $w$ avoids $3412$ and $4231$. The other two patterns $34521$ and $54123$ are inverse to each other. Since $\Gamma(w)$ and $\Gamma(w^{-1})$ are isomorphic, it suffices to show that $w$ avoids $34521$. Now for the sake of contradiction, let $w$ contain the pattern $34521$ at indices $a_1<\cdots<a_5$, and let $\varphi\in\aut(\Gamma(w))$ such that $\varphi(w)=e$. Let $\varphi(wt_{a_4a_5})=t_{xy}$ where $x<y$. Note that $w$ contains the pattern $321$ at indices $a_i<a_4<a_5$ for $i=1,2,3$. By Lemma~\ref{lem:preserve-triangle}, let $\varphi(\{wt_{a_ia_4},wt_{a_ia_5},wt_{a_4a_5}\})$ be the three transpositions with indices in $x,y$ and $c_i$. We now view the indices $a_1,a_2,a_3$ and $c_1,c_2,c_3$ with symmetric roles and divide into the following cases. Case 1: $x<c_1<c_2<y$. 
Since $w\geq t_{xy}>t_{xc_2}$, by Lemma~\ref{lem:preserve-triangle}(2), $\varphi^{-1}(\{t_{xc_1},t_{xc_2},t_{c_1c_2}\})=\{wt_{ij},wt_{ik},wt_{jk}\}$ for some $i<j<k$. We know that $\varphi^{-1}(t_{xc_1})=wt_{a_1a_4}$ or $wt_{a_1a_5}$ and $\varphi^{-1}(t_{xc_2})=wt_{a_2a_4}$ or $wt_{a_2a_5}$. In order for them to have common indices, we must have either $(i,j,k)=(a_1,a_2,a_4)$ or $(i,j,k)=(a_1,a_2,a_5)$. But $wt_{a_1a_2}>w$ is not a vertex in $\Gamma(w)$, resulting in a contradiction. Case 2: $x<c_1<y<c_2$. The positioning of $c_2$ implies $w\geq t_{xc_2}\geq t_{c_1c_2}$. By Lemma~\ref{lem:preserve-triangle}(2), $\varphi^{-1}(\{t_{c_1y},t_{c_1c_2},t_{yc_2}\})=\{wt_{ij},wt_{ik},wt_{jk}\}$ for some $i<j<k$. We know that $\varphi^{-1}(t_{c_1y})=wt_{a_1a_4}$ or $wt_{a_1a_5}$ and $\varphi^{-1}(t_{yc_2})=wt_{a_2a_4}$ or $wt_{a_2a_5}$. This leads to the same contradiction as above due to $wt_{a_1a_2}>w$. By symmetry, Case 1 and Case 2 together cover all the situations where one of $\{c_1,c_2,c_3\}$ lies in the open interval $(x,y)$. We can then assume that $c_i<x$ or $c_i>y$ for $i=1,2,3$. By symmetry and the pigeonhole principle, we can assume that at least two of $\{c_1,c_2,c_3\}$ are greater than $y$. Case 3: $x<y<c_1<c_2$. Again, we have $w>t_{yc_2}$. By Lemma~\ref{lem:preserve-triangle}(2), we consider the indices $y<c_1<c_2$ and apply the exact same arguments as in Case 1 and Case 2 to $\varphi^{-1}(\{t_{c_1y},t_{yc_2},t_{c_1c_2}\})=\{wt_{ij},wt_{ik},wt_{jk}\}$ to derive a contradiction. \end{proof} \subsection{(VT2)$\Rightarrow$(VT3)} Throughout this section, assume $w\in \mathfrak{S}_n$ avoids the four permutations $3412$, $4231$, $34521$ and $54123$ in (VT2). We view permutations as their permutation matrices, with indices increasing from left to right and from top to bottom. We apply the standard decomposition techniques of smooth permutations, as in \cite{Gasharov,Oh-Postnikov-Yoo} and especially Section 3.2 of \cite{self-dual}, whose notation we follow.
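Before setting up the decomposition, we record a quick sanity check on the patterns in (VT2). Since $34521$ and $54123$ have length five, they cannot occur in any $w\in\mathfrak{S}_n$ with $n\leq 4$; for such $n$, condition (VT2) therefore amounts to smoothness, that is, to the avoidance of \[ 3412 \quad\text{and}\quad 4231. \] For instance, $w=3421\in\mathfrak{S}_4$ satisfies (VT2), whereas $w=4231$ does not.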
Our goal is to write $w$ as a product of $w_0(K)$'s, for subsets $K\subset S=\{s_1,\ldots,s_{n-1}\}$, via induction. Consider the top-left corner region \[C=\{(a,w(a))\:|\: 1\leq a\leq w^{-1}(1),\ 1\leq w(a)\leq w(1)\}\] which is the rectangle formed by $(1,w(1))$ and $(w^{-1}(1),1)$. Let $C=\{(c_1,w(c_1)),\ldots,(c_t,w(c_t))\}$ where $c_1<\cdots<c_t$. Let $K_1=\{s_1,\ldots,s_{t-1}\}$. Since $w$ avoids $4231$, $w(c_1)>\cdots>w(c_t)$. Also consider the rectangles to the right of $C$ and below $C$: \begin{align*} R=&\{(a,w(a))\:|\: 1<a<w^{-1}(1),\ w(a)>w(1)\}, \\ L=&\{(a,w(a))\:|\: 1<w(a)<w(1),\ a>w^{-1}(1)\}. \end{align*} The positioning of these regions can be seen in Figure~\ref{fig:smooth-permutation}. \begin{figure}[h!] \centering \begin{tikzpicture}[scale=0.4] \draw(0,0)--(7,0); \draw(0,0)--(0,-7); \draw(0,-4)--(7,-4); \draw(4,0)--(4,-7); \node at (4,0) {$\bullet$}; \node at (0,-4) {$\bullet$}; \node at (2,-5.5) {$L$}; \node at (5.5,-2) {$R$}; \node at (1,-1) {$C$}; \node at (2.4,-2.4) {$\bullet$}; \node at (3.2,-1.2) {$\bullet$}; \node at (1.2,-3.2) {$\bullet$}; \node[left] at (0,0) {$c_1$}; \node[left] at (0,-1.2) {$c_2$}; \node[left] at (0,-2.4) {$\vdots$}; \node[left] at (0,-4) {$c_t$}; \end{tikzpicture} \caption{Analyzing the structure of smooth permutations.} \label{fig:smooth-permutation} \end{figure} Since $w$ avoids $3412$, at least one of $R$ and $L$ is empty. If both are empty, we say that $w$ is of type n with parameter $K_1$. If $R$ is nonempty, we say that $w$ is of type r; and if $L$ is nonempty, we say that $w$ is of type l. Notice that if $w$ is of type n, $w':=w\cdot w_0(K_1)=w_0(K_1)\cdot w$ also avoids the four patterns in (VT2), with $\supp(w')\subset S\setminus K_1$. We call $w'$ the \emph{one-step reduction} of $w$ and we can then straightforwardly deduce that $w$ is almost-polished from the fact that $w'$ is almost-polished. We will come back to this later. The cases of type l and type r are dual to each other by taking inverses.
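To illustrate these regions, consider $w=3421\in\mathfrak{S}_4$, for which $w(1)=3$ and $w^{-1}(1)=4$. Here \[ C=\{(1,3),(3,2),(4,1)\},\qquad R=\{(2,4)\},\qquad L=\emptyset, \] so that $t=3$, $K_1=\{s_1,s_2\}$, and $w(c_1)>w(c_2)>w(c_3)$ as guaranteed by $4231$-avoidance; since $R\neq\emptyset$, this $w$ is of type r.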
So for now, we assume that $w$ is of type r, i.e. $L=\emptyset$, and further study its structure. Divide $R=R_1\sqcup\cdots\sqcup R_{t-1}$ where $R_i=\{(a,w(a))\:|\: c_i<a<c_{i+1}\}$. If $R_1=\cdots=R_{t-2}=\emptyset$, then $R_{t-1}\neq\emptyset$ and we say that $w$ is of type r0 with parameter $K_1$ and let $w'=w_0(K_1)\cdot w$ be the \emph{one-step reduction} of $w$. Here we use the condition that $w$ avoids $34521$ and $54123$. Since $w$ avoids $34521$, the coordinates in $R_1\cup\cdots\cup R_{t-2}$ must be decreasing, i.e. from top right to bottom left. At the same time, since $w$ avoids $4231$, we have that at most one of $R_1,\ldots,R_{t-2}$ is nonempty. If $R_p\neq\emptyset$, let $I_1=\{p+1,p+2,\ldots,t-1\}\subset K_1$ and say that $w$ is of type r1 with parameter $(K_1,I_1)$. Then let $w'=w_0(I_1)w_0(K_1)w$ be the \emph{one-step reduction} of $w$. Also note that within $R_{t-1}$ (which can be empty or nonempty), there are no coordinates to the left of any coordinates in $R_p$ since $w$ avoids $4231$. A visualization is shown in Figure~\ref{fig:one-step-reduction}. \begin{figure}[h!] 
\centering \begin{tikzpicture}[scale=0.4] \node at (4,0) {$\bullet$}; \node at (3.2,-1) {$\bullet$}; \node at (2.4,-2) {$\bullet$}; \node at (1.6,-4) {$\bullet$}; \node at (0.8,-5) {$\bullet$}; \node at (0,-7) {$\bullet$}; \draw(0,0)--(0,-10); \draw(0,0)--(12,0); \draw(4,0)--(4,-10); \draw(0,-7)--(12,-7); \draw[dashed](3.2,-1)--(12,-1); \draw[dashed](2.4,-2)--(12,-2); \draw[dashed](1.6,-4)--(12,-4); \draw[dashed](0.8,-5)--(12,-5); \node at (10.5,-0.5) {$R_1{=}\emptyset$}; \node at (10.5,-1.5) {$R_2{=}\emptyset$}; \node at (10.5,-3) {$R_3{\neq}\emptyset$}; \node at (10.5,-4.5) {$R_4{=}\emptyset$}; \node at (10.5,-6) {$R_{t-1}$}; \node at (5,-3.6) {$\bullet$}; \node at (6,-3.2) {$\bullet$}; \node at (7,-2.8) {$\bullet$}; \node at (8,-2.4) {$\bullet$}; \node at (2,-8.5) {$L=\emptyset$}; \draw[dashed](8,-2.4)--(8,-7); \node at (6,-6) {$\emptyset$}; \end{tikzpicture} \qquad\qquad\qquad \begin{tikzpicture}[scale=0.4] \node at (0,0) {$\bullet$}; \node at (0.8,-1) {$\bullet$}; \node at (1.6,-2) {$\bullet$}; \node at (4.0,-4) {$\bullet$}; \node at (3.2,-5) {$\bullet$}; \node at (2.4,-7) {$\bullet$}; \draw(0,0)--(0,-10); \draw(0,0)--(12,0); \draw(4,0)--(4,-10); \draw(0,-7)--(12,-7); \draw[dashed](1.6,-2)--(12,-2); \draw[dashed](1.6,-2)--(1.6,-10); \node at (5,-3.6) {$\bullet$}; \node at (6,-3.2) {$\bullet$}; \node at (7,-2.8) {$\bullet$}; \node at (8,-2.4) {$\bullet$}; \draw[dashed](3.2,-5)--(12,-5); \draw[dashed](8,-2.4)--(8,-10); \node at (2.8,-8.5) {$\emptyset$}; \node at (10,-3.5) {$\emptyset$}; \end{tikzpicture} \caption{The permutation $w$ on the left and its one-step reduction $w'$ on the right, with type r1 and parameters $K_1=\{s_1,\ldots,s_5\}$, $I_1=\{s_4,s_5\}$.} \label{fig:one-step-reduction} \end{figure} In summary, the types of $w$ can be n, l0, l1, r0 or r1. \begin{lemma}\label{lem:one-step-reduction-avoid-patterns} Let $w$ avoid $3412$, $4231$, $34521$ and $54123$ as above. Then the one-step reduction $w'$ of $w$ also avoids these four patterns.
\end{lemma} \begin{proof} We first deal with the critical case where $w$ is of type r1 (or l1) with parameters $K_1=\{s_1,\ldots,s_{t-1}\}$, $I_1=\{p+1,\ldots,t-1\}$ as above. Since $w'(1)=1,\ldots,w'(p)=p$, these indices cannot be involved in any of the four patterns of interest. At the same time, $w'$ restricted to the last $n-p$ indices equals $w$ restricted to the same indices, which avoids these four patterns. As a result, $w'$ avoids these patterns as well. The cases n, r0 and l0 follow from the same arguments. \end{proof} The one-step reduction $w'$ lives in a strictly smaller parabolic subgroup of $\mathfrak{S}_n$, and Lemma~\ref{lem:one-step-reduction-avoid-patterns} allows us to continue the reduction. In particular, if $w$ is of type n, l0 or r0, then $\supp(w')\subset S\setminus K_1$, and the next step of reduction can then be analyzed from scratch. \begin{lemma}\label{lem:one-step-after-r1} Let $w$ be of type r1 with one-step reduction $w'$ and parameters $K_1$ and $I_1$. Then $w'$ can be of type n, l0, r0 with parameter $K_2$, or l1 with parameters $K_2$ and $I_2$, where $I_1=K_1\cap K_2$ and there are no edges between $I_1$ and $I_2$ in the Dynkin diagram of $\mathfrak{S}_n$. \end{lemma} \begin{proof} Keep the notation from above and let $K_1=\{s_1,\ldots,s_{t-1}\}$ and $I_1=\{s_{p+1},\ldots,s_{t-1}\}$. Let $|R_p|=q>0$; then, by construction, $K_2=\{s_{p+1},\ldots,s_{t-1+q}\}$ so we immediately have $K_1\cap K_2=I_1$. For the new permutation $w'$, if it is of type r, then it is of type r0 since the new regions $R_1',\ldots,R_{t+q-p-2}'$, which are subsets of $R_1,\ldots,R_{p-1},R_{p+1},\ldots,R_{t-2}$, must be empty. See Figure~\ref{fig:one-step-reduction}. The permutation $w'$ can be of type n or l. If it is of type l, by dividing $L'$ into $L_1'\sqcup\cdots\sqcup L_{t+q-p-1}'$ analogously as before, we see that $L_1'=\cdots=L_{t-p-1}'=\emptyset$ because these regions belong to $L=\emptyset$.
By construction, this means $s_t\notin I_2$, so the consecutive intervals $I_1$ and $I_2$ do not have edges between them. \end{proof} We are now ready to fully decompose $w$ and show that it is almost-polished. \begin{proof}[Proof of implication (VT2)$\Rightarrow$(VT3)] Let $w\in \mathfrak{S}_n$ avoid the four patterns of interest and keep the notations in this section. We use induction where the base cases $n=1,2$ are vacuously true. Let $w^{(1)}=w$ and continue to do one-step reduction of $w^{(i)}$ to obtain $w^{(i+1)}$, until $w^{(m)}$ is of type n, r0 or l0 whose one-step reduction equals $w^{(m+1)}$. Note that this is possible because the one-step reduction of type r1 or l1 never equals the identity. By Lemma~\ref{lem:one-step-after-r1}, as $i$ increases from $1$ to $m-1$, $w^{(i)}$ alternates between type r1 and type l1. Let $w^{(i)}$ have parameters $K_i$ and $I_i$ as above and let $S_1=K_1\cup K_2\cup\cdots\cup K_{m-1}\cup K_m$. Here, the $K_i$'s and $I_i$'s are consecutive intervals ordered from left to right, respectively. Moreover, by Lemma~\ref{lem:one-step-after-r1}, $K_i\cap K_{i+1}=I_i$ and the smallest index in $I_{i+1}$ is at least $2$ bigger than the largest index in $I_i$. See Figure~\ref{fig:KiIis} for an example of these intervals. \begin{figure}[h!]
\centering \begin{tikzpicture}[scale=1.0] \draw(0,0)--(13,0); \draw[dashed](13,0)--(14,0); \node at (0,0) {$\bullet$}; \node at (1,0) {$\bullet$}; \node at (2,0) {$\bullet$}; \node at (3,0) {$\bullet$}; \node at (4,0) {$\bullet$}; \node at (5,0) {$\bullet$}; \node at (6,0) {$\bullet$}; \node at (7,0) {$\bullet$}; \node at (8,0) {$\bullet$}; \node at (9,0) {$\bullet$}; \node at (10,0) {$\bullet$}; \node at (11,0) {$\bullet$}; \node at (12,0) {$\bullet$}; \node at (13,0) {$\bullet$}; \draw(1,0.3)--(3,0.3); \draw(1,-0.3)--(3,-0.3); \draw (1,0.3) arc (90:270:0.3); \draw (3,-0.3) arc (-90:90:0.3); \draw(3,0.4)--(6,0.4); \draw(3,-0.4)--(6,-0.4); \draw (3,0.4) arc (90:270:0.4); \draw (6,-0.4) arc (-90:90:0.4); \draw(5,0.3)--(8,0.3); \draw(5,-0.3)--(8,-0.3); \draw (5,0.3) arc (90:270:0.3); \draw (8,-0.3) arc (-90:90:0.3); \draw(8,0.4)--(9,0.4); \draw(8,-0.4)--(9,-0.4); \draw (8,0.4) arc (90:270:0.4); \draw (9,-0.4) arc (-90:90:0.4); \draw(10,0.3)--(12,0.3); \draw(10,-0.3)--(12,-0.3); \draw (10,0.3) arc (90:270:0.3); \draw (12,-0.3) arc (-90:90:0.3); \node[above] at (2,0.3) {$K_1$}; \node[above] at (4.5,0.3) {$K_2$}; \node[above] at (6.5,0.3) {$K_3$}; \node[above] at (8.5,0.3) {$K_4$}; \node[below] at (3,-0.35) {$I_1$}; \node[below] at (5.5,-0.35) {$I_2$}; \node[below] at (8,-0.35) {$I_3$}; \end{tikzpicture} \caption{The intervals in the reduction process.} \label{fig:KiIis} \end{figure} Since $w^{(m)}$ is of type n, r0 or l0, $w^{(m+1)}=w^{(m)}w_0(K_m)$ or $w_0(K_m)w^{(m)}$ and also $\supp(w^{(m+1)})\subset S\setminus S_1$.
Without loss of generality, assume $w=w^{(1)}$ is of type r1; then we have \[w^{(m+1)}=\big(\cdots w_0(I_3)w_0(K_3)w_0(I_1)w_0(K_1)w w_0(K_2)w_0(I_2)w_0(K_4)w_0(I_4)\cdots\big)w_0(K_m) \] or \[w^{(m+1)}=w_0(K_m)\big(\cdots w_0(I_3)w_0(K_3)w_0(I_1)w_0(K_1)w w_0(K_2)w_0(I_2)w_0(K_4)w_0(I_4)\cdots\big).\] Unpacking, we have the following possibilities: \[w=\begin{cases} \cdots w_0(K_{m-2})w_0(I_{m-2})(w^{(m+1)}w_0(K_m))w_0(I_{m-1})w_0(K_{m-1})\cdots\\ \cdots w_0(K_{m-2})w_0(I_{m-2})(w_0(K_m)w^{(m+1)})w_0(I_{m-1})w_0(K_{m-1})\cdots\\ \cdots w_0(K_{m-1})w_0(I_{m-1})(w^{(m+1)}w_0(K_m))w_0(I_{m-2})w_0(K_{m-2})\cdots\\ \cdots w_0(K_{m-1})w_0(I_{m-1})(w_0(K_m)w^{(m+1)})w_0(I_{m-2})w_0(K_{m-2})\cdots\\ \end{cases}.\] As $w^{(m+1)}$ commutes with $w_0(K_i)$ and $w_0(I_i)$ where $i\leq m-1$, in all the possibilities above, we can move $w^{(m+1)}$ all the way to the left or all the way to the right. Moreover, $w_0(I_i)$ commutes with $w_0(K_j)$ if $j\neq i,i+1$. Thus, in all the possibilities above, we can move $w_0(I_i)$'s towards the middle, forming $w_0(I_1\cup I_2\cup\cdots \cup I_{m-1})$. Let $J_1=K_1\cup K_3\cup\cdots$ and $J_1'=K_2\cup K_4\cup\cdots$. We have $J_1\cap J_1'=I_1\cup I_2\cup\cdots\cup I_{m-1}$, $w_0(J_1)=w_0(K_1)w_0(K_3)\cdots$, $w_0(J_1')=w_0(K_2)w_0(K_4)\cdots$. The above four possibilities can then be written as \[w=\begin{cases} w_0(J_1)w_0(J_1\cap J_1')w_0(J_1')w^{(m+1)}\\ w^{(m+1)}w_0(J_1)w_0(J_1\cap J_1')w_0(J_1')\\ w_0(J_1')w_0(J_1\cap J_1')w_0(J_1)w^{(m+1)}\\ w^{(m+1)}w_0(J_1')w_0(J_1\cap J_1')w_0(J_1)\\ \end{cases}.\] As $\supp(w^{(m+1)})\subset S\setminus S_1$ where $S_1=J_1\cup J_1'$, by induction on the almost-polished element $w^{(m+1)}$, or continuing such decomposition into factors of the form $w_0(J_i)w_0(J_i\cap J_i')w_0(J_i')$, we exactly recover the definition of almost-polished elements (Definition~\ref{def:almost-polished}). \end{proof} \subsection{(VT3)$\Rightarrow$(VT1)} The following lemma is straightforward.
\begin{lemma}\label{lem:VT-preserved-under-product} If $G_1$ and $G_2$ are two simple graphs that are vertex-transitive, then $G_1\times G_2$ is vertex-transitive. \end{lemma} \begin{proof} For any two vertices $(u_1,u_2),(v_1,v_2)\in G_1\times G_2$, we want to show that there exists $f\in\aut(G_1\times G_2)$ that sends $(u_1,u_2)$ to $(v_1,v_2)$. Since $G_1$ is vertex-transitive, there exists $f_1\in \aut(G_1)$ such that $f_1(u_1)=v_1$. It is clear that $f_1\times \mathrm{id}\in\aut(G_1\times G_2)$. Then since $G_2$ is vertex-transitive, there exists $f_2\in\aut(G_2)$ such that $f_2(u_2)=v_2$. Now $(\mathrm{id}\times f_2)\circ (f_1\times\mathrm{id})(u_1,u_2)=(v_1,v_2)$. \end{proof} \begin{proof}[Proof of implication (VT3)$\Rightarrow$(VT1)] Let $w$ be almost-polished (Definition~\ref{def:almost-polished}). To show that $\Gamma(w)$ is vertex-transitive, by Proposition~\ref{prop:disjoint-support} and Lemma~\ref{lem:VT-preserved-under-product}, we can reduce to the case where $w=w_0(J)w_0(J\cap J')w_0(J')$. In fact, the elementary automorphisms (Section~\ref{sub:elementary-automorphisms}) are enough in this case. We first show that $\big(w_0(J)w_0(J\cap J')\big)\cdot w_0(J')$ is length-additive. It suffices to show that $w_0(J)w_0(J\cap J')$ does not contain any right-descent in $J'$. This is because the simple generators in $J\cap J'$ cannot be in $D_R(w_0(J)w_0(J\cap J'))$ as they get canceled out after multiplying $w_0(J)$ by $w_0(J\cap J')$; and the simple generators in $J'\setminus J$ are not even in the support of $w_0(J)w_0(J\cap J')$. As a result, $D_R(w)\supset J'$, $w^{J'}=w_0(J)w_0(J\cap J')$ and $w_{J'}=w_0(J')$. For any $u\in\Gamma(w)$, $u\leq w$ so $u^{J'}\leq w^{J'}\leq w_0(J)$ and $u_{J'}\leq w_0(J')$. Analogously, $D_L(w)\supset J$. By Proposition~\ref{prop:right-left-multiplication-give-autos}, left multiplying by any element in $W(J)$ and right multiplying by any element in $W(J')$ give automorphisms. 
In particular, $u^{J'}\in W(J)$ and $u_{J'}\in W(J')$ so $u=u^{J'}u_{J'}$ is in the same orbit as the identity element under $\aut(\Gamma(w))$. This precisely means that $\Gamma(w)$ is vertex-transitive. \end{proof} This completes the proof of Theorem~\ref{thm:vertex-transitive}. \section{Identity orbits in Bruhat graphs} \label{sec:orbits} In this section we describe a more precise version of Conjecture~\ref{conj:orbit-is-interval}, taking into account the automorphisms described in Section~\ref{sec:automorphisms} and the classification of vertex-transitive Bruhat graphs given in Section~\ref{sec:vertex-transitive}. In light of Proposition~\ref{prop:disjoint-support}, it is sufficient to consider permutations $w \in \mathfrak{S}_n$ which have full support and do not admit a disjoint support decomposition; we call such permutations \emph{Bruhat irreducible}. \begin{defin} \label{def:almost-reducible} A Bruhat irreducible permutation $w\in \mathfrak{S}_n$ is \emph{almost reducible} at $(J,i)$ if $w=w^Jw_J$ is a BP-decomposition with $\supp(w^J) \cap J = \{s_i\}$ and $s_i\notin D_L(w)\cup D_R(w)$. \end{defin} \begin{prop} If a Bruhat irreducible $w\in \mathfrak{S}_n$ is almost reducible at $(J,i)$, then $J=\{s_1,\ldots,s_i\}$ or $\{s_i,\ldots,s_{n-1}\}$. \end{prop} \begin{proof} Let $J=J^{(1)}\sqcup\cdots\sqcup J^{(k)}$ be a decomposition of $J$ into connected components of the Dynkin diagram such that $s_i\in J^{(1)}$. If $k\geq2$, then the parabolic decomposition of $w$ with respect to $J^{(k)}$ contradicts $w$ being Bruhat irreducible, so $k=1$ and $J$ is a connected interval. Similarly, if $\supp(w^J)$ has a connected component not adjacent to $s_i$, $w$ cannot be Bruhat irreducible. Moreover, $J\neq\{s_i\}$ since $s_i\notin D_R(w)$. \end{proof} Note that a Bruhat irreducible $w\in \mathfrak{S}_n$ is almost reducible at $(\{s_i,\ldots,s_{n-1}\},i)$ if and only if $w^{-1}$ is almost reducible at $(\{s_1,\ldots,s_i\},i)$.
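As a small example of the reduction afforded by Proposition~\ref{prop:disjoint-support}, the permutation $w=2143=s_1s_3\in\mathfrak{S}_4$ is not Bruhat irreducible: its support decomposes as $\{s_1\}\sqcup\{s_3\}$. Correspondingly, \[ [e,w]=\{e,\,s_1,\,s_3,\,s_1s_3\}, \] and $\Gamma(w)$ is a $4$-cycle with edges $\overline{e\,s_1}$, $\overline{e\,s_3}$, $\overline{s_1\,(s_1s_3)}$ and $\overline{s_3\,(s_1s_3)}$, namely the product of the two single-edge graphs $\Gamma(s_1)$ and $\Gamma(s_3)$.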
\begin{defin} A Bruhat irreducible permutation $w\in \mathfrak{S}_n$ is \emph{right-almost-reducible} at $i$ if it is almost reducible at $(\{s_i,\ldots,s_{n-1}\},i)$ and is \emph{left-almost-reducible} at $i$ if it is almost reducible at $(\{s_1,\ldots,s_{i}\},i)$. \end{defin} \begin{prop}\label{prop:almost-sep-supp} If $w\in \mathfrak{S}_n$ is right-almost-reducible at $i$, then \begin{enumerate} \item $\max\{w(1),\ldots,w(i-1)\}=i+1$, \item $w(i)>i+1$, and \item the elements of $\{1,\ldots,i+1\}\setminus\{w(1),\ldots,w(i-1)\}$ appear out of order in $w$. \end{enumerate} \end{prop} \begin{proof} Consider the permutation $w_J$, which fixes $1,\ldots,i-1$, in one-line notation. By definition, $s_i\in D_L(w_J)$, meaning that $i+1$ appears before $i$ in $w_J$. Since $w$ is Bruhat irreducible, $i\in\supp(s_iw_J)$ so $s_iw_J(i)\neq i$ and thus $w_J(i)>i+1$. Now $w^J$ permutes the values $1,2,\ldots,i+1$ of $w_J$. This means that $w(i)=w_J(i)>i+1$. We clearly have $w(1),\ldots,w(i-1)\in\{1,\ldots,i+1\}$. Since $i\in\supp(w^J)$, we necessarily have $i+1$ among $w(1),\ldots,w(i-1)$. The last item follows because $i+1$ appears before $i$ in $w_J$. \end{proof} \begin{cor} \label{cor:i-commute-with-descent} If $w\in \mathfrak{S}_n$ is almost reducible at $(J,i)$, then $s_i$ commutes with the elements of $D_L(w)\cap D_R(w)$. \end{cor} \begin{proof} Assume without loss of generality that $J=\{s_i,\ldots,s_{n-1}\}$. By Proposition~\ref{prop:almost-sep-supp}(1), $i+1$ appears before $i+2$ in $w$, so $s_{i+1}\notin D_L(w)$. By Proposition~\ref{prop:almost-sep-supp} (1) and (2), $w(i-1)\leq i+1<w(i)$ so $s_{i-1}\notin D_R(w)$. \end{proof} \begin{cor} \label{cor:i-commute-with-j} If $w\in \mathfrak{S}_n$ is right-almost-reducible at $i$ and left-almost-reducible at $j$, then $i\neq j$ and $s_is_j=s_js_i$. \end{cor} \begin{proof} We first show that $i\neq j$. Assume to the contrary that both $w$ and $w^{-1}$ are right-almost-reducible at $i$.
By condition (3) of Proposition~\ref{prop:almost-sep-supp}, together with condition (1) and the fact that $w(i)>i+1$, there are positions $i<a<b$ with $i+1>w(a)>w(b)$. By condition (1) on $w^{-1}$, we know that $w(i+1)\leq i-1$ so $i+1$ is one of $a,b$ and it has to be $a$ because $a<b$. However, $w(b)<w(a)\leq i-1$ but $w^{-1}(w(b))=b>a=i+1$, contradicting condition (1) on $w^{-1}$. To show that $s_i$ and $s_j$ commute, we can assume to the contrary that $j=i-1$, since if $j=i+1$, we may consider the same problem on $w^{-1}$. Now $w$ is right-almost-reducible at $i$ and left-almost-reducible at $i-1$ so $w^{-1}$ is right-almost-reducible at $i-1$. By condition (2) of $w$, $w(i)>i+1$ but by condition (1) of $w^{-1}$, $w(i)<i$, a contradiction. \end{proof} \begin{defin} For a Bruhat irreducible permutation $w \in \mathfrak{S}_n$, let \[ \{i_1<\cdots<i_k\} = \{i \mid \text{$w$ is right-almost-reducible at $i$}\} \] and define $A_R(w):=s_{i_1}\cdots s_{i_k}$. Similarly, let $\{j_1<\cdots<j_t\}$ be the set of $j$ at which $w$ is left-almost-reducible and define $A_L(w):=s_{j_t}\cdots s_{j_1}.$ \end{defin} \begin{cor} Let $w$ be Bruhat irreducible. Then the following three elements commute pairwise: \[A_R(w),A_L(w),w_0(D_L(w)\cap D_R(w)).\] \end{cor} The following is a strengthened version of Conjecture~\ref{conj:orbit-is-interval}. \begin{conj} \label{conj:strengthened-orbit-conjecture} Let $w \in \mathfrak{S}_n$ be Bruhat irreducible and let $\mathcal{O}$ denote the orbit of $e$ under graph automorphisms of $\Gamma(w)$. Define \[ v(w) \coloneqq w_0(D_L(w))\cdot A_R(w) \cdot w_0(D_L(w)\cap D_R(w)) \cdot A_L(w)\cdot w_0(D_R(w)); \] then $\mathcal{O}=[e,v(w)]$. \end{conj} \begin{prop} \label{prop:confirm-conj-for-vt} Let $w \in \mathfrak{S}_n$ be Bruhat irreducible and such that $\Gamma(w)$ is vertex-transitive. Then $v(w)=w$, so Conjecture~\ref{conj:strengthened-orbit-conjecture} holds in this case.
\end{prop} \begin{proof} If $w$ is right-almost-reducible at $i$, then Proposition~\ref{prop:almost-sep-supp} and Definition~\ref{def:almost-reducible} imply that the values $i, i+1, w(i), a, b$ appear from left to right in the one-line notation for $w$ and form an occurrence of the pattern $34521$, where $a,b$ are the smallest two elements of $\{1,\ldots,i+1\} \setminus \{w(1),\ldots,w(i-1)\}$. This is impossible by Theorem~\ref{thm:vertex-transitive} since $\Gamma(w)$ is assumed to be vertex-transitive. Similarly, if $w$ were left-almost-reducible at $j$, then $w$ would contain an occurrence of the pattern $54123$, again violating Theorem~\ref{thm:vertex-transitive}. Thus $A_R(w)=A_L(w)=e$, and $v(w)=w_0(D_L(w))\cdot w_0(D_L(w)\cap D_R(w))\cdot w_0(D_R(w))$, which is the expression for $w$ as a Bruhat irreducible almost-polished element. Since $\Gamma(w)$ is vertex-transitive, we have $\mathcal{O}=[e,w]=[e,v(w)]$. \end{proof} The following proposition shows that the element $v(w)$ is indeed in the identity orbit of $\Gamma(w)$. An automorphism of $\Gamma(w)$ sending $e$ to $v(w)$ may be obtained by composing various left, right, and middle multiplication automorphisms (see Section~\ref{sec:automorphisms}). \begin{prop} \label{prop:v-is-in-orbit} Let $w \in \mathfrak{S}_n$ be Bruhat irreducible and let $\mathcal{O}$ be the orbit of $e$ under graph automorphisms of $\Gamma(w)$; then $v(w) \in \mathcal{O}$. \end{prop} \begin{proof} By Proposition~\ref{prop:right-left-multiplication-give-autos} we may compose left (or right) multiplication automorphisms to send $e$ to $w_0(D_L(w) \cap D_R(w))$. Then, for each $i$ such that $w$ is right-almost-reducible at $i$, by Proposition~\ref{prop:middle-mult-is-automorphism} we may apply the automorphism of middle multiplication by $s_i$, doing so in the order $i_k > i_{k-1} > \cdots > i_1$, and similarly for indices $j_1 < \cdots < j_t$ at which $w$ is left-almost-reducible.
Since all of these $s_i$ and $s_j$ commute with each other and with the simple generators in $D_L(w) \cap D_R(w)$ by Corollaries~\ref{cor:i-commute-with-descent} and \ref{cor:i-commute-with-j}, the middle multiplication is equivalent to left multiplication at each stage, and the resulting product is equal to $A_R(w)w_0(D_L(w) \cap D_R(w))A_L(w)$. Finally, applying left and right multiplication by $w_0(D_L(w))$ and $w_0(D_R(w))$ respectively, we obtain an automorphism sending $e$ to $v(w)$. \end{proof} \section{Special matchings and Bruhat automorphisms} \label{sec:special-matchings} \subsection{The connection to special matchings} Proposition~\ref{prop:v-is-in-orbit} and Conjecture~\ref{conj:strengthened-orbit-conjecture} would together imply that the left, right, and middle multiplication automorphisms suffice to determine the identity orbit of $\Gamma(w)$ under graph automorphisms. Left or right multiplication by a descent of $w$ determines a special matching of the Hasse diagram $H([e,w])$ of $[e,w]$, essentially by construction; in Proposition~\ref{prop:middle-mult-is-special-matching} below, we observe that the same is true for middle multiplication. \begin{prop} \label{prop:middle-mult-is-special-matching} Suppose $w=w^Jw_J$ is a BP-decomposition of $w$ and in addition we have $\supp(w^J) \cap \supp(w_J) =\{s\}$; then the middle multiplication map \[ \phi: x \mapsto x^J s x_J, \] is a special matching of $[e,w]$. \end{prop} \begin{proof} Clearly $\phi$ gives a perfect matching on $H([e,w])$, so it suffices to check that for $x \lessdot y \in [e,w]$ we have $\phi(x)=y$ or $\phi(x)<\phi(y)$. By Proposition~\ref{prop:middle-mult-is-automorphism}, $\phi$ is an automorphism of $\Gamma(w)$, so $\phi(x),\phi(y)$ differ by multiplication by a reflection, and in particular either $\phi(x)<\phi(y)$ or $\phi(y)<\phi(x)$. In the first case we are done, so suppose $\phi(y)<\phi(x)$.
By Proposition~\ref{prop:monotonicity-of-projection}, and by the construction of middle multiplication, we have: \[ y^J = \phi(y)^J \leq \phi(x)^J = x^J. \] Since $x^J \leq y^J$, we must in fact have $x^J=y^J$. Now, $\phi(y)_J=sy_J$ and $\phi(x)_J=sx_J$; since $\phi(y)<\phi(x)$ and $\phi(y)^J=\phi(x)^J$, it must be that $sy_J<sx_J$. We also know $x_J < y_J$, so by the Lifting Property we must have $sy_J=x_J$ and thus $\phi(y)=x$. \end{proof} The fact that left, right, and middle multiplication determine special matchings, and the conjectural fact that they determine at least the identity orbit structure of Bruhat graphs, suggest a close connection between special matchings and automorphisms of Bruhat graphs. This connection is made explicit in Theorem~\ref{thm:SM-equals-automorphism-classical-type}, whose proof occupies the remainder of this section, and in Conjecture~\ref{conj:SM-equals-automorphism-general}. \subsection{Special matchings are Bruhat automorphisms} In Theorem~\ref{thm:SM-implies-auto} we prove one direction of Theorem~\ref{thm:SM-equals-automorphism-classical-type} and Conjecture~\ref{conj:SM-equals-automorphism-general} for arbitrary Coxeter groups. This implies that special matchings on Bruhat intervals, although defined by a local condition (that is, a condition on cover relations), respect the global structure of Bruhat graphs. We first prove Lemma~\ref{lem:undirected-hexagon}, an extension to $\Gamma$ of a property of $\widehat{\Gamma}$ given in the proof of Proposition 3.3 of \cite{dyer-bruhat-graph}. \begin{lemma} \label{lem:undirected-hexagon} Let $u \leq v$ be elements of a Coxeter group $W$ and suppose that there exist elements $x_1,\ldots,x_6 \in [u,v]$ such that $\overline{x_1x_2}, \overline{x_1x_3}, \overline{x_2x_4}, \overline{x_2x_5}, \overline{x_3x_4}, \overline{x_3x_5}, \overline{x_4x_6}, \overline{x_5x_6}$ are edges in $\Gamma(u,v)$. Then $\overline{x_1x_6}$ is an edge in $\Gamma(u,v)$.
\end{lemma} \begin{proof} Let $t_1,\ldots,t_8$ be the reflections corresponding to the known edges given in the statement of the lemma. The same argument as in Proposition 3.3 of \cite{dyer-bruhat-graph} implies that $W'=\langle t_1, \ldots, t_8 \rangle$ is a dihedral reflection subgroup of $W$. By Theorem 1.4 of \cite{dyer-bruhat-graph}, the Bruhat graph of $W'$ agrees with the induced subgraph $\Gamma|_{W'}$, so it suffices to check the lemma in the case $W$ is dihedral. In this case $\Gamma$ is easy to describe: we have $\overline{xy}$ if and only if $\ell(y)-\ell(x)$ is odd. By assumption, each of $\ell(x_2)-\ell(x_1), \ell(x_4)-\ell(x_2),$ and $\ell(x_6)-\ell(x_4)$ is odd, thus $\ell(x_6)-\ell(x_1)$ is also odd, and $\overline{x_1x_6}$ is an edge. \end{proof} \begin{theorem} \label{thm:SM-implies-auto} Let $u \leq v$ be elements of a Coxeter group $W$. Any special matching $M$ of the Hasse diagram $H([u,v])$ is an automorphism of $\Gamma(u,v)$. \end{theorem} \begin{proof} Let $M$ be a special matching of $H([u,v])$. We will prove by induction on $k$ that if $\overline{xy}$ is an edge of $\Gamma(u,v)$ with $\ell(y)-\ell(x)=k$, then $\overline{M(x)M(y)}$ is also an edge. Consider first the case $k=1$, so $x \lessdot y$. If $M(x)=y$ or $|\ell(M(y))-\ell(M(x))|=1$, we are done by the defining property of special matchings, so assume that $M(x)\lessdot x$ and $y \lessdot M(y)$. Since all height-two intervals in Bruhat order are diamonds, there exist elements $M(x) \lessdot x' \neq x$ and $y \neq y' \lessdot M(y)$. Since $M$ is a special matching, we must have $M(x) \lessdot M(y') \lessdot y$. Again applying the diamond property to $[M(x),y]$, we conclude that $M(y')=x'$, so in particular $x' \lessdot y'$. Now, the elements $M(x),x,x',y,y',M(y)$ form a subgraph of $\Gamma$ of the type described in Lemma~\ref{lem:undirected-hexagon}, and so $\overline{M(x)M(y)}$ is an edge by the lemma. Suppose now that $k>1$. 
By the proof of Proposition 3.3 from \cite{dyer-bruhat-graph}, we know that there exist elements $x_2,x_3,x_4,x_5$ with directed edges $x=x_1 \to x_2,x_3; x_2\to x_4,x_5; x_3 \to x_4,x_5; x_4,x_5 \to x_6=y$ in $\widehat{\Gamma}(u,v)$. By induction, we know that $\overline{M(x_i)M(x_j)}$ is an edge in $\Gamma(u,v)$ for each of these edges $x_i \to x_j$. Thus $M(x_1),\ldots,M(x_6)$ form a subgraph of $\Gamma$ of the type described in Lemma~\ref{lem:undirected-hexagon}, and so again we conclude that $\overline{M(x)M(y)}$ is an edge. \end{proof} \subsection{Bruhat automorphisms are special matchings} In Proposition~\ref{prop:auto-implies-SM} below we give a converse to Theorem~\ref{thm:SM-implies-auto} for certain Coxeter groups. Theorem~\ref{thm:SM-implies-auto} and Proposition~\ref{prop:auto-implies-SM} together imply Theorem~\ref{thm:SM-equals-automorphism-classical-type}. \begin{prop} \label{prop:auto-implies-SM} Let $u \leq v$ be elements of a Coxeter group $W$ which is right-angled or a symmetric group, then any perfect matching of $H([u,v])$ which is an automorphism of $\Gamma(u,v)$ is a special matching. \end{prop} The proof of Proposition~\ref{prop:auto-implies-SM} relies on the following structural property of Bruhat order, Lemma~\ref{lem:butterfly}, whose proof is contained in Section~\ref{sec:butterflies}. \begin{defin}\label{def:butterfly} We say that elements $x_1,x_2,y_1,y_2$ of a Coxeter group $W$ form a \emph{butterfly} if $x_1 \lessdot y_1,y_2$ and $x_2 \lessdot y_1,y_2$. \end{defin} The butterfly structures are essential to the analysis of Bruhat automorphisms and special matchings, and are of interest on their own. We will explore more about butterflies in Section~\ref{sec:butterflies}. \begin{lemma} \label{lem:butterfly} Let $W$ be a Coxeter group which is right-angled or the symmetric group, let $u \leq v$, and suppose that $x_1,x_2,y_1,y_2 \in [u,v]$ form a butterfly. Then there is an element $z \in [u,v]$ with $y_1,y_2 \lessdot z$. 
\end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:auto-implies-SM}] Let $u \leq v$ be elements of a Coxeter group $W$ which is right-angled or the symmetric group, and let $M$ be a perfect matching of $H([u,v])$ which is an automorphism of $\Gamma(u,v)$. Suppose that $M$ is not a special matching; since $M$ is a $\Gamma(u,v)$-automorphism, the violation of the special matching property must consist of elements $x \lessdot y$ with $M(y) \lessdot M(x)$. Choose $x,y$ so that $y$ has maximal length among all such violations in $[u,v]$. Now, note that $x,M(y),y,M(x)$ form a butterfly, so by Lemma~\ref{lem:butterfly} there exists an element $z \in [u,v]$ with $y,M(x) \lessdot z$. We must have $M(z)>z$, for otherwise $y$, $M(x)$, and $M(z)$ would each cover both $x$ and $M(y)$, but this substructure cannot occur in the Bruhat order of a Coxeter group (see Theorem 3.2 of \cite{SM-advances}). Since height-two intervals in Bruhat order are diamonds (see Chapter 2 of \cite{bjorner-brenti}), there exists an element $w\neq z$ with $y \lessdot w \lessdot M(z)$. Suppose that $M(w)<w$. Then, since $M$ is an automorphism of the Bruhat graph, we must have $M(w) \lessdot z$ and $M(y) \lessdot M(w)$. Now, since $y \lessdot z$, we know $M(y) \to M(z)$ in $\widehat{\Gamma}(u,v)$, but the height-three interval $[M(y),M(z)]$ contains at least three elements at height one---$y$, $M(w)$, and $M(x)$---contradicting Proposition 3.3 of \cite{dyer-bruhat-graph}. We conclude that $w\lessdot M(w)$. However, this too is a contradiction, since $w \lessdot M(z)$ is a violation of the special matching condition with $\ell(M(z))>\ell(y)$. Thus $M$ must be a special matching. \end{proof} We conjecture that a slight weakening of Lemma~\ref{lem:butterfly} holds for arbitrary Coxeter groups, and this would imply the same for Proposition~\ref{prop:auto-implies-SM}, and thus resolve Conjecture~\ref{conj:SM-equals-automorphism-general}.
\begin{conj} \label{conj:general-butterfly} Let $W$ be any Coxeter group, let $u \leq v \in W$, and suppose that the elements $x_1,x_2,y_1,y_2 \in [u,v]$ form a butterfly. Then there is an element $z \in [u,v]$ with $y_1,y_2 \lessdot z$ or with $z \lessdot x_1,x_2$. \end{conj} \begin{remark} The weakening of Lemma~\ref{lem:butterfly} conjectured for general Coxeter groups in Conjecture~\ref{conj:general-butterfly} is necessary even for finite Coxeter groups. For example, there exists a butterfly in the finite Coxeter group of type $F_4$ which has a lower bound $z \lessdot x_1,x_2$ but no upper bound $y_1,y_2 \lessdot z'$. \end{remark} \section{Covers of butterflies in Bruhat order}\label{sec:butterflies} \subsection{Butterflies in finite Weyl groups} Recall that a butterfly consists of four elements with $x_1,x_2\lessdot y_1,y_2$ in Bruhat order. We first carry out some general analysis of butterflies in finite Weyl groups. Consider the transpositions $t_{ab}=x_a^{-1}y_b$ where $a,b\in\{1,2\}$. Then $t_{11}t_{21}=t_{12}t_{22}$. By Lemma 3.1 of \cite{dyer-bruhat-graph}, $W'=\langle t_{11},t_{12},t_{21},t_{22}\rangle$ is a reflection subgroup of $W$. We say that this butterfly $x_1,x_2,y_1,y_2$ is \emph{of type} $A_2$ if $W'$ is isomorphic to the Coxeter group of type $A_2$, and similarly for types $B_2$ and $G_2$. By Theorem 1.4 of \cite{dyer-bruhat-graph}, the subposet (and the directed subgraph) on $x_1W'$ of $W$ is isomorphic to that of a rank $2$ Coxeter group of this type. This means that $W'$ cannot be of type $A_1\times A_1$, because a butterfly cannot be embedded in the Bruhat order of the type $A_1\times A_1$ Coxeter group, which looks like a diamond \begin{tikzpicture}[scale=0.15] \draw(-1,0)--(0,1)--(1,0)--(0,-1)--(-1,0); \end{tikzpicture}. In finite classical types, only type $A_2$ and type $B_2$ butterflies exist.
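These structural facts can be checked by brute force in a small symmetric group. The sketch below is not part of the proofs; it assumes only that, in $\mathfrak{S}_n$, $y$ covers $x$ in Bruhat order exactly when $y$ is obtained from $x$ by a transposition and $\ell(y)=\ell(x)+1$. It enumerates all butterflies in $\mathfrak{S}_4$ and confirms that each admits a common lower cover and a common upper cover, consistent with the simply-laced analysis later in this section.

```python
from itertools import permutations, combinations

def length(w):
    # Coxeter length of a permutation = number of inversions
    return sum(1 for a, b in combinations(range(len(w)), 2) if w[a] > w[b])

def upper_covers(n):
    # y covers x in Bruhat order iff y = x * (position transposition)
    # and the length goes up by exactly 1
    elems = list(permutations(range(1, n + 1)))
    up = {w: set() for w in elems}
    for w in elems:
        for a, b in combinations(range(n), 2):
            y = list(w)
            y[a], y[b] = y[b], y[a]
            y = tuple(y)
            if length(y) == length(w) + 1:
                up[w].add(y)
    return up

up = upper_covers(4)
down = {w: set() for w in up}
for w, ys in up.items():
    for y in ys:
        down[y].add(w)

# A butterfly: distinct x1, x2 both covered by distinct y1, y2.
butterflies = [(x1, x2, y1, y2)
               for x1, x2 in combinations(up, 2)
               for y1, y2 in combinations(sorted(up[x1] & up[x2]), 2)]

# In the simply-laced group S_4, every butterfly should admit both a
# common lower cover u and a common upper cover z.
assert butterflies
assert all(down[x1] & down[x2] and up[y1] & up[y2]
           for x1, x2, y1, y2 in butterflies)
print(len(butterflies), "butterflies in S_4; all have common covers above and below")
```

The same brute-force search in a group containing a reducible rank-$2$ reflection subgroup would find no butterfly generating a type $A_1\times A_1$ subgroup, matching the diamond observation above.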
Moreover, if this butterfly is of type $A_2$, we must have that $x_1W'$ consists of $u<x_1,x_2\lessdot y_1,y_2<z$ with edges from $u$ to $x_1,x_2$ and edges from $y_1,y_2$ to $z$ in the Bruhat graph. Similarly in the case of type $B_2$, we must have that $x_1W'$ consists of $u<a_1,a_2<b_1,b_2<c_1,c_2<z$ with edges in the Bruhat graph $u\rightarrow a_1,a_2\rightarrow b_1,b_2\rightarrow c_1,c_2\rightarrow z$ where $\{x_1,x_2\}=\{a_1,a_2\}$, $\{y_1,y_2\}=\{b_1,b_2\}$ or $\{x_1,x_2\}=\{b_1,b_2\}$, $\{y_1,y_2\}=\{c_1,c_2\}$. Let $\Phi\subset E$ be a root system for the finite Weyl group $W$, where $E$ is the ambient vector space with an inner product $\langle-,-\rangle$, with a chosen set of positive roots $\Phi^+$ and simple roots $\Delta$. For $\alpha\in\Phi^+$, let $s_{\alpha}\in T$ denote the reflection across $\alpha$. Recall that the inversion set is $\mathrm{Inv}_R(w)=\{\alpha\in\Phi^+\:|\: w\alpha\in\Phi^-\}$ so that $T_R(w)=\{s_{\alpha}\:|\: \alpha\in \mathrm{Inv}_R(w)\}$. We say that a butterfly in a finite Weyl group $W$ \emph{is generated by} $\alpha,\beta\in\Phi^+$ if $s_{\alpha}$ and $s_{\beta}$ generate the subgroup $W'$ and $\alpha$ and $\beta$ form a set of simple roots in the root subsystem obtained by restricting $\Phi$ to the $2$-dimensional vector space spanned by $\alpha$ and $\beta$. Note that the set of generators $\{\alpha,\beta\}$ of a butterfly is uniquely determined. To analyze butterflies, we start with a simple lemma on Bruhat covers. \begin{lemma}\label{lem:weyl-cover} In a finite Weyl group, $w\lessdot ws_{\alpha}$ if and only if $\alpha\notin\mathrm{Inv}_R(w)$ and there do not exist $\beta_1,\beta_2\in\Phi^+\setminus\mathrm{Inv}_R(w)$ such that $\beta_2=-s_{\alpha}\beta_1$. Moreover, if $w\lessdot ws_{\alpha}$ and $\beta\in\Phi^+\setminus\{\alpha\}$ satisfies $s_{\alpha}\beta\in\Phi^-$, then $\beta\in\mathrm{Inv}_R(w)$ if and only if $\beta\in\mathrm{Inv}_R(ws_{\alpha})$.
\end{lemma} \begin{proof} Consider the following partition of $\Phi^+$ with respect to $\alpha\in\Phi^+$: \begin{enumerate} \item the root $\alpha$ itself; \item roots $\gamma\in\Phi^+$ such that $s_{\alpha}\gamma=\gamma$; \item roots $\gamma\in\Phi^+$ such that $s_{\alpha}\gamma\in\Phi^+$ but $s_{\alpha}\gamma\neq\gamma$; \item roots $\beta\in\Phi^+\setminus\{\alpha\}$ such that $s_{\alpha}\beta\in\Phi^-$. \end{enumerate} We pair up roots in (3) by $(\gamma,s_{\alpha}\gamma)$ and pair up roots in (4) by $(\beta,-s_{\alpha}\beta)$. Assume $\alpha\notin\mathrm{Inv}_R(w)$ and compare $\mathrm{Inv}_R(w)$ with $\mathrm{Inv}_R(ws_{\alpha})$. First, $\alpha\notin\mathrm{Inv}_R(w)$ and $\alpha\in\mathrm{Inv}_R(ws_{\alpha})$, and for each root $\gamma$ in (2), we have $\gamma\in\mathrm{Inv}_R(w)\Leftrightarrow\gamma\in\mathrm{Inv}_R(ws_{\alpha})$. Similarly, for $\gamma$ in (3), we have $\gamma\in\mathrm{Inv}_R(w)\Leftrightarrow s_{\alpha}\gamma\in\mathrm{Inv}_R(ws_{\alpha})$. Thus, roots in (1), (2) and (3) contribute exactly $1$ in total to the quantity $|\mathrm{Inv}_R(ws_{\alpha})|-|\mathrm{Inv}_R(w)|$. We now examine (4). Let $\beta$ be a root in (4) and also write $\beta_1=\beta$ and $\beta_2=-s_{\alpha}\beta$. We have $s_{\alpha}\beta=\beta-c\alpha$, where $c=2\langle\alpha,\beta\rangle/\langle\alpha,\alpha\rangle\in\mathbb{Q}_{>0}$. As $\alpha\notin\mathrm{Inv}_R(w)$, we know $w\alpha\in\Phi^+$. So $w(\beta_1+\beta_2)=cw\alpha>0$, meaning that at most one of $\beta_1,\beta_2$ belongs to $\mathrm{Inv}_R(w)$. Moreover, $\beta_1\notin\mathrm{Inv}_R(w)$ if and only if $\beta_2\in\mathrm{Inv}_R(ws_{\alpha})$. This means that if none of $\beta_1,\beta_2$ belong to $\mathrm{Inv}_R(w)$, then both belong to $\mathrm{Inv}_R(ws_{\alpha})$, contributing $2$ to $|\mathrm{Inv}_R(ws_{\alpha})|-|\mathrm{Inv}_R(w)|$; and if one of them belongs to $\mathrm{Inv}_R(w)$, then the same one belongs to $\mathrm{Inv}_R(ws_{\alpha})$.
Note that $w\lessdot ws_{\alpha}$ is equivalent to $|\mathrm{Inv}_R(ws_{\alpha})|-|\mathrm{Inv}_R(w)|=1$. Considering the above contributions from each category of roots, we obtain the desired result. \end{proof} Note that Lemma~\ref{lem:weyl-cover} is also equivalent to saying that $ws_{\alpha}\lessdot w$ if and only if $\alpha\in\mathrm{Inv}_R(w)$ and there do not exist $\beta_1,\beta_2\in\mathrm{Inv}_R(w)$ such that $\beta_2=-s_{\alpha}\beta_1$. In simply-laced types, assume $\langle\alpha,\alpha\rangle=2$ for all roots $\alpha\in\Phi$; then all inner products between different positive roots take on values in $\{0,1,-1\}$, and the condition $\beta_2=-s_{\alpha}\beta_1$ is equivalent to $\beta_1+\beta_2=\alpha$. \begin{lemma}\label{lem:simply-laced-join} Let $W$ be a finite Weyl group of simply-laced type, and let $x_1,x_2\lessdot y_1,y_2$ form a butterfly. Then there exist $u\lessdot x_1,x_2$ and $z\gtrdot y_1,y_2$ in $W$. \end{lemma} \begin{proof} Since $W$ is simply-laced, this butterfly can only be of type $A_2$. Let $u<x_1,x_2\lessdot y_1,y_2<z$ be this type $A_2$ subposet. We will show that $u$ is covered by $x_1$ and $x_2$. By taking the dual statement, we will have $y_1,y_2\lessdot z$ as well. Let this butterfly be generated by $\alpha,\beta\in\Phi^+$ and $us_{\alpha}=x_1$, $us_{\beta}=x_2$, $us_{\alpha}s_{\beta}=y_2$, $us_{\beta}s_{\alpha}=y_1$. We have $\langle\alpha,\beta\rangle=-1$, $\alpha+\beta=s_{\alpha}\beta=s_{\beta}\alpha\in\Phi^+$, and we also know that in this $A_2$, $\alpha\in\mathrm{Inv}_R(x_1),\mathrm{Inv}_R(y_2),\mathrm{Inv}_R(z)$, $\beta\in\mathrm{Inv}_R(x_2),\mathrm{Inv}_R(y_1),\mathrm{Inv}_R(z)$, $\alpha+\beta\in\mathrm{Inv}_R(y_1),\mathrm{Inv}_R(y_2),\mathrm{Inv}_R(z)$. If $x_2$ does not cover $u$ (the case of $x_1$ is symmetric), by Lemma~\ref{lem:weyl-cover}, there exist $\gamma_1,\gamma_2\in\mathrm{Inv}_R(x_2)$ such that $\gamma_1+\gamma_2=\beta$, or equivalently, $\gamma_2=-s_{\beta}\gamma_1$.
We have \[\langle\gamma_1,\alpha+\beta\rangle+\langle\gamma_2,\alpha+\beta\rangle=\langle\beta,\alpha+\beta\rangle=1.\] Since all inner products between different positive roots lie in $\{0,1,-1\}$, we can without loss of generality assume that $\langle\gamma_1,\alpha+\beta\rangle=0$ and $\langle\gamma_2,\alpha+\beta\rangle=1$. Now $s_{\alpha+\beta}\gamma_1=\gamma_1$. Since $\gamma_1\in\mathrm{Inv}_R(x_2)$ and $y_1=s_{\alpha+\beta}x_2$, we have $\gamma_1\in\mathrm{Inv}_R(y_1)$. Moreover, since $\langle\gamma_2,\alpha+\beta\rangle=1$, $s_{\alpha+\beta}\gamma_2=\gamma_2-(\alpha+\beta)=-(\alpha+\gamma_1)\in\Phi^-$. By Lemma~\ref{lem:weyl-cover}, as $x_2\lessdot y_1$, $\gamma_2\in\mathrm{Inv}_R(y_1)$. But $\gamma_1,\gamma_2\in\mathrm{Inv}_R(y_1)$ with $s_{\beta}\gamma_2=-\gamma_1$, contradicting $y_1\gtrdot s_{\beta}y_1=x_1$. \end{proof} \subsection{Butterflies in the symmetric group} For $w\in \mathfrak{S}_n$, define its \emph{rank-matrix} by $w[i,j]=|\{a\in[i]\:|\: w(a)\geq j\}|$ for all $i,j\in[n]$, which can be viewed as the number of dots of the permutation matrix of $w$ lying weakly to the bottom-left of position $(i,j)$. See Figure~\ref{fig:rank-matrix-type-A}. \begin{figure}[h!]
\centering \begin{tikzpicture}[scale=0.4] \draw(0,0)--(5,0)--(5,-5)--(0,-5)--(0,0); \draw[dashed](0,-1)--(5,-1); \draw[dashed](0,-2)--(5,-2); \draw[dashed](0,-3)--(5,-3); \draw[dashed](0,-4)--(5,-4); \draw[dashed](1,0)--(1,-5); \draw[dashed](2,0)--(2,-5); \draw[dashed](3,0)--(3,-5); \draw[dashed](4,0)--(4,-5); \node at (0.5,0.5) {$1$}; \node at (1.5,0.5) {$2$}; \node at (2.5,0.5) {$3$}; \node at (3.5,0.5) {$4$}; \node at (4.5,0.5) {$5$}; \node at (-0.5,-0.5) {$1$}; \node at (-0.5,-1.5) {$2$}; \node at (-0.5,-2.5) {$3$}; \node at (-0.5,-3.5) {$4$}; \node at (-0.5,-4.5) {$5$}; \node at (0.5,-2.5) {$\bullet$}; \node at (1.5,-4.5) {$\bullet$}; \node at (2.5,-0.5) {$\bullet$}; \node at (3.5,-3.5) {$\bullet$}; \node at (4.5,-1.5) {$\bullet$}; \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.4] \draw(0,0)--(5,0)--(5,-5)--(0,-5)--(0,0); \draw[dashed](0,-1)--(5,-1); \draw[dashed](0,-2)--(5,-2); \draw[dashed](0,-3)--(5,-3); \draw[dashed](0,-4)--(5,-4); \draw[dashed](1,0)--(1,-5); \draw[dashed](2,0)--(2,-5); \draw[dashed](3,0)--(3,-5); \draw[dashed](4,0)--(4,-5); \node at (0.5,0.5) {$1$}; \node at (1.5,0.5) {$2$}; \node at (2.5,0.5) {$3$}; \node at (3.5,0.5) {$4$}; \node at (4.5,0.5) {$5$}; \node at (-0.5,-0.5) {$1$}; \node at (-0.5,-1.5) {$2$}; \node at (-0.5,-2.5) {$3$}; \node at (-0.5,-3.5) {$4$}; \node at (-0.5,-4.5) {$5$}; \node at (0.5,-0.5) {$1$}; \node at (0.5,-1.5) {$1$}; \node at (0.5,-2.5) {$1$}; \node at (0.5,-3.5) {$0$}; \node at (0.5,-4.5) {$0$}; \node at (1.5,-0.5) {$2$}; \node at (1.5,-1.5) {$2$}; \node at (1.5,-2.5) {$2$}; \node at (1.5,-3.5) {$1$}; \node at (1.5,-4.5) {$1$}; \node at (2.5,-0.5) {$3$}; \node at (2.5,-1.5) {$2$}; \node at (2.5,-2.5) {$2$}; \node at (2.5,-3.5) {$1$}; \node at (2.5,-4.5) {$1$}; \node at (3.5,-0.5) {$4$}; \node at (3.5,-1.5) {$3$}; \node at (3.5,-2.5) {$3$}; \node at (3.5,-3.5) {$2$}; \node at (3.5,-4.5) {$1$}; \node at (4.5,-0.5) {$5$}; \node at (4.5,-1.5) {$4$}; \node at (4.5,-2.5) {$3$}; \node at (4.5,-3.5) {$2$}; \node at 
(4.5,-4.5) {$1$}; \end{tikzpicture} \caption{The rank-matrix (right) for the permutation $w=35142$ (left).} \label{fig:rank-matrix-type-A} \end{figure} The following lemma is immediate from observation. \begin{lemma}\label{lem:rank-change-outside-box} Let $w\in \mathfrak{S}_n$ and $w'=w\cdot (i,j)$. Then $w[a,b]=w'[a,b]$ for every coordinate $(a,b)$ outside, or on the top or right boundary of, the rectangle formed by $(i,w(i))$ and $(j,w(j))$. \end{lemma} The following characterization of the strong order in $\mathfrak{S}_n$ is well-known. \begin{lemma}[Theorem 2.1.5 of \cite{bjorner-brenti}]\label{lem:strong-order-type-A} For $w,v\in \mathfrak{S}_n$, $w\leq v$ if and only if $w[i,j]\leq v[i,j]$ for all $i,j\in[n]$. \end{lemma} We are now ready to prove Lemma~\ref{lem:butterfly} for the symmetric group. \begin{proof}[Proof of Lemma~\ref{lem:butterfly} in the case of type $A_{n-1}$] Let $x_1,x_2\lessdot y_1,y_2$ be a butterfly in $[u,v]$. By Lemma~\ref{lem:simply-laced-join}, there exists $z\gtrdot y_1,y_2$, so it suffices to show that $z\leq v$. Suppose that $y_1=z\cdot (i,j)$, $y_2=z\cdot (j,k)$ with $i<j<k$ and $z(i)>z(j)>z(k)$. There is no dot strictly inside the rectangle formed by $(i,z(i))$ and $(j,z(k))$ in the permutation matrix of $z$, because otherwise $y_2$ would have inversions at some $e_p-e_i$ and $e_j-e_p$, contradicting $y_2\gtrdot y_2\cdot (i,j)$ by Lemma~\ref{lem:weyl-cover}. Likewise, we see that there is only one dot, $(j,z(j))$, in the interior of the rectangle formed by $(i,z(i))$ and $(k,z(k))$ in the permutation matrix of $z$. To show that $z\leq v$, by Lemma~\ref{lem:strong-order-type-A}, it suffices to show that $z[a,b]\leq v[a,b]$ for all $a,b\in[n]$.
By Lemma~\ref{lem:rank-change-outside-box}, $z[a,b]=y_1[a,b]$ if $(a,b)$ is not in the interior or on the left or bottom boundary of the rectangle formed by $(i,z(i))$ and $(j,z(j))$; $z[a,b]=y_2[a,b]$ if $(a,b)$ is not in the interior or on the left or bottom boundary of the rectangle formed by $(j,z(j))$ and $(k,z(k))$. Noticing that these regions are disjoint, we have $z[a,b]=y_1[a,b]$ or $z[a,b]=y_2[a,b]$ for every $(a,b)$. But $v\geq y_1,y_2$ so $v[a,b]\geq \max\{y_1[a,b],y_2[a,b]\}\geq z[a,b]$, which gives $z\leq v$ as desired. \end{proof} \subsection{Butterflies in right-angled Coxeter groups} Throughout this section, let $W$ be a right-angled Coxeter group and let $S$ be its generating set. By definition, any two generators $s_i,s_j\in S$ either commute or satisfy no relation. Recall a well-known result of Tits \cite{Tits-words}, which says that any two reduced expressions of $w$ are connected by moves of the form $s_is_j \cdots = s_js_i \cdots$ with $m_{ij}$ factors on each side; in the right-angled case, only $m_{ij}=2$ needs to be considered. In other words, any two reduced expressions of $w\in W$ are connected by commutation moves. \begin{lemma}\label{lem:descent-if-can-move} Let $w=s_{i_1}\cdots s_{i_{\ell}}$ be any reduced word of $w$. Then $s\in D_L(w)$ if and only if $s$ appears in this word and commutes with $s_{i_1},\ldots,s_{i_{j-1}}$, where $j$ is minimal such that $s_{i_j}=s$. \end{lemma} \begin{proof} First, if $s$ commutes with $s_{i_1},\ldots,s_{i_{j-1}}$, then we can move it all the way to the left via commutation moves to obtain a reduced word of $w$ starting with $s$, which means $s\in D_L(w)$. On the other hand, if $s\in D_L(w)$, then there is another reduced word of $w$ that starts with $s$. Keeping track of this $s$ and applying commutation moves, we see that only $s_i$'s commuting with $s$ can ever appear to the left of this $s$, which always remains the first appearance of $s$ in any reduced word. We are done because any two reduced words of $w$ are connected via commutation moves.
\end{proof} \begin{lemma} If $y$ covers two elements $x_1,x_2$ and $s\in D_L(x_1)\cap D_L(x_2)$, then $s\in D_L(y)$. \end{lemma} \begin{proof} Let $y=s_{i_1}\cdots s_{i_{\ell}}$ be a reduced expression of $y$. By the Subword Property, assume that $x_1=s_{i_1}\cdots \hat{s}_{i_a}\cdots s_{i_{\ell}}$, $x_2=s_{i_1}\cdots \hat{s}_{i_b}\cdots s_{i_{\ell}}$ with $a<b$. Let $s_{i_j}=s$ be the first appearance of $s$ in this reduced word of $y$. The existence of $j$ follows from $s\in D_L(x_1)\cap D_L(x_2)$. Case 1: $j<b$. The prefixes of length $j$ in $y$ and $x_2$ are the same. By Lemma~\ref{lem:descent-if-can-move}, since $s\in D_L(x_2)$, $s$ commutes with $s_{i_1},\ldots,s_{i_{j-1}}$. And by Lemma~\ref{lem:descent-if-can-move} again, $s\in D_L(y)$. Case 2: $j\geq b$. The first appearance of $s$ in $x_1$ must be at index $j$, meaning that $s$ commutes with $s_{i_1},\ldots,s_{i_{a-1}}$ and $s_{i_{a+1}},\ldots,s_{i_{j-1}}$. The first appearance of $s$ in $x_2$ must be after index $a$, meaning that $s$ commutes with $s_{i_a}$ as well. Together, we see that $s$ commutes with all the $s_i$'s before index $j$, so $s\in D_L(y)$. \end{proof} We are now ready to provide a detailed analysis of the structure of butterflies in right-angled Coxeter groups. For $s,s'\in S$ that do not commute, define an element \[A^{(m)}(s,s')=ss'ss'\cdots\in W,\] an alternating product of $m$ letters starting with $s$. \begin{lemma}\label{lem:butterfly-right-angle-structure} Let $x_1,x_2\lessdot y_1,y_2$ form a butterfly in a right-angled Coxeter group $W$. Then we have the length-additive expressions: $x_1=u\cdot A^{(m)}(s,s')\cdot v$, $x_2=u\cdot A^{(m)}(s',s)\cdot v$, $\{y_1,y_2\}=\{u\cdot A^{(m+1)}(s,s')\cdot v,u\cdot A^{(m+1)}(s',s)\cdot v\}$ for some $m\geq1$, $u,v\in W$ and $s,s'\in S$ that do not commute. \end{lemma} \begin{proof} Use induction on $\ell(x_1)$.
If $x_1$ and $x_2$ have a common left descent $s$, then all these four elements have the same left descent $s$ (by the previous lemma), and we can instead consider the butterfly $sx_1,sx_2\lessdot sy_1,sy_2$. Thus, assume that $x_1$ and $x_2$ do not have any common left descents, and similarly do not have any common right descents. Choose a reduced word $y_1=s_{i_1}s_{i_2}\cdots s_{i_k}$ and by the Subword Property, let $x_2$ be obtained from $y_1$ by deleting $s_{i_a}$ and $x_1$ be obtained from $y_1$ by deleting $s_{i_b}$ with $a<b$. We must have $a=1$, since otherwise $s_{i_1}$ is a common left descent of $x_1$ and $x_2$. Similarly $b=k$. Moreover, $s_{i_1}$ must be the unique left descent of $x_1$, since any other potential left descent $s_{i_c}$ would also be a left descent of $x_2$ by Lemma~\ref{lem:descent-if-can-move}, contradicting the absence of common left descents. Similarly, writing $y_2=s_{i_1}'s_{i_2}'\cdots s_{i_k}'$, we analogously have $x_1,x_2\in\{s_{i_1}'\cdots s_{i_{k-1}}', s_{i_2}'\cdots s_{i_{k}}'\}$. If $x_1=s_{i_1}'\cdots s_{i_{k-1}}'$, then $s_{i_1}'=s_{i_1}$ since $x_1$ has a unique left descent, which means $y_2=s_{i_1}'x_2=s_{i_1}x_2=y_1$, a contradiction. Thus, we have \[\begin{cases} x_1=&s_{i_1}s_{i_2}\cdots s_{i_{k-1}}=s_{i_2}'s_{i_3}'\cdots s_{i_k}'\\ x_2=&s_{i_2}s_{i_3}\cdots s_{i_k}=s_{i_{1}}'s_{i_2}'\cdots s_{i_{k-1}}' \end{cases}\] and each one of $x_1$ and $x_2$ has a unique left descent and a unique right descent. Let $s=s_{i_1}$ and $s'=s_{i_1}'$, which are different. We now use induction on $p=1,\ldots,k$ to show that: $s_{i_p}s_{i_{p+1}}\cdots s_{i_k}$ has a single left descent at $s_{i_p}$, $s_{i_p}'s_{i_{p+1}}'\cdots s_{i_k}'$ has a single left descent at $s_{i_p}'$ and that $s_{i_p}=s$, $s_{i_p}'=s'$ if $p$ is odd, $s_{i_p}=s'$, $s_{i_p}'=s$ if $p$ is even. As for the base case $p=1$, we need to show that $y_1$ has a single left descent at $s$.
Note that we already know that $x_1$ has a single left descent at $s$, so the only possibility for $y_1=x_1s_{i_k}$ to have another left descent is that $s_{i_k}$ commutes with $x_1$, which is impossible as we also know that $s_{i_k}$ cannot move past $s_{i_{k-1}}$. As a result, $y_1$ has a single left descent at $s$ and analogously $y_2$ has a single left descent at $s'$. For the inductive step, assume the claims are true for $p-1$. By the induction hypothesis, $s_{i_j}=s_{i_{j+1}}'$ and $s_{i_j}'=s_{i_{j+1}}$ for $j\leq p-2$. This means that we have $s_{i_{p-1}}\cdots s_{i_{k-1}}=s_{i_p}'\cdots s_{i_{k}}'$. By the induction hypothesis, $s_{i_{p-1}}\cdots s_{i_{k}}$ has a single left descent, so $s_{i_{p-1}}\cdots s_{i_{k-1}}$ has a single left descent at $s_{i_{p-1}}$, which must equal $s_{i_p}'$ because of this equality. Similarly, $s_{i_p}\cdots s_{i_{k}}=s_{i_{p-1}}'\cdots s_{i_{k-1}}'$ has a single left descent, which also gives $s_{i_p}=s_{i_{p-1}}'$. Thus, both the single descent statement and the exact values of $s_{i_p},s_{i_p}'$ go through. As a result, we see that $x_1=A^{(k-1)}(s,s')$, $x_2=A^{(k-1)}(s',s)$, while $y_1=A^{(k)}(s,s')$, $y_2=A^{(k)}(s',s)$. So we are done. \end{proof} The following lemma is then straightforward. \begin{lemma}\label{lem:right-angle-descent-down} Let $x_1,x_2\lessdot y_1,y_2$ form a butterfly in a right-angled Coxeter group $W$. If $q\in D_L(y_1)\cap D_L(y_2)$, then $q\in D_L(x_1)\cap D_L(x_2)$. \end{lemma} \begin{proof} Write $x_1,x_2,y_1,y_2$ in the form given in Lemma~\ref{lem:butterfly-right-angle-structure}. Pick any reduced word of $y_1$ compatible with the decomposition $y_1=u\cdot A^{(m+1)}(s,s')\cdot v$. If $q\in D_L(u)$, then we clearly have $q\in D_L(x_1)\cap D_L(x_2)$. Similarly, if the first appearance of $q$ is inside $v$, then $q$ commutes with the simple generators before it in $v$, with $s$ and $s'$, and with $u$; by Lemma~\ref{lem:descent-if-can-move}, we then have $q\in D_L(x_1)$ and $q\in D_L(x_2)$ as well.
Lastly, if the first appearance of $q$ in $y_1$ is inside $A^{(m+1)}(s,s')$, meaning that $q=s$ and $s$ does not appear in $u$, then $q$ cannot be a left descent of $y_2$ by Lemma~\ref{lem:descent-if-can-move}, a contradiction. \end{proof} It is now clear that for a butterfly $x_1,x_2\lessdot y_1,y_2$ in a right-angled Coxeter group, $y_1$ and $y_2$ have at least two common upper covers (in fact, exactly two), namely $u\cdot A^{(m+2)}(s,s')\cdot v$ and $u\cdot A^{(m+2)}(s',s)\cdot v$ with notations as in Lemma~\ref{lem:butterfly-right-angle-structure}. We are now ready to prove the main theorem of this section, which resolves the right-angled case of Lemma~\ref{lem:butterfly}. \begin{lemma}\label{lem:butterfly-right-angle-join} Let $x_1,x_2\lessdot y_1,y_2$ form a butterfly in a right-angled Coxeter group $W$. If $w\geq y_1,y_2$, then there exists some $z$ which covers both $y_1$ and $y_2$ such that $w\geq z$. \end{lemma} \begin{proof} Use induction on $\ell(x_1)$, and on top of that, use induction on $\ell(w)$. Write $x_1=u\cdot A^{(m)}(s,s')\cdot v$, $x_2=u\cdot A^{(m)}(s',s)\cdot v$, $y_1=u\cdot A^{(m+1)}(s,s')\cdot v$, $y_2=u\cdot A^{(m+1)}(s',s)\cdot v$ as in Lemma~\ref{lem:butterfly-right-angle-structure}. Take any left descent $q\in D_L(w)$. If $q\in D_L(y_1)\cap D_L(y_2)$, by Lemma~\ref{lem:right-angle-descent-down}, $q\in D_L(x_1)\cap D_L(x_2)$. This means that we can consider the butterfly $qx_1,qx_2\lessdot qy_1,qy_2$, with $qw\geq qy_1,qy_2$, via either the Subword Property or the Lifting Property of the strong order. By the induction hypothesis, there exists $z'\gtrdot qy_1,qy_2$ such that $qw\geq z'$. Now, $z=qz'$ is what we want. Similarly, if $q\notin D_L(y_1)\cup D_L(y_2)$, then $qw\geq y_1,y_2$, so by the induction hypothesis on $\ell(w)$, we have $w\geq qw\geq z\gtrdot y_1,y_2$ as desired. For the critical case, assume $q\in D_L(y_1)$ and $q\notin D_L(y_2)$.
Pick any reduced word of $y_1$ compatible with the decomposition $y_1=u\cdot A^{(m+1)}(s,s')\cdot v$. If the first (leftmost) appearance of $q$ is in $u$, then $q\in D_L(y_2)$, a contradiction. If the first appearance of $q$ is in $v$, then $q$ can be moved all the way past $s$ and $s'$ and $u$, meaning that $q\in D_L(y_2)$, a contradiction. Thus, the first appearance of $q$ in $y_1$ is in $A^{(m+1)}(s,s')$. This means that $q=s$, and that $s$ commutes with $u$. Since $w\geq y_2$, $q\in D_L(w)$, and $q\notin D_L(y_2)$, the Lifting Property says that $w\geq qy_2$. At the same time, \[qy_2=s\cdot u\cdot A^{(m+1)}(s',s)\cdot v=u\cdot A^{(m+2)}(s,s')\cdot v.\] Let $z=qy_2$; we see that $w\geq z\gtrdot y_1,y_2$ as desired. \end{proof} \section*{Acknowledgements} We are very grateful to Thomas Lam and Grant Barkley for their helpful comments and suggestions. \bibliographystyle{plain}
\section{Introduction} The ``biological impossibility'' of reprogramming adult somatic cells to the pluripotent state had been accepted as a dogma for a long time in biology~\cite{Solter:84}. This view was radically changed by the work of John B. Gurdon in 1962, who showed that a nucleus from a fully differentiated frog intestinal epithelial cell could generate a functioning tadpole upon transplantation into an enucleated egg~\cite{Gurdon:62,Gurdon:06}. In another seminal work, Shinya Yamanaka and co-workers demonstrated for the first time in 2006 that four transcription factors (Sox2, Oct4, Klf-4 and c-Myc) were capable of reprogramming an adult mouse fibroblast cell to pluripotency~\cite{Yamanaka:06}. These induced pluripotent stem cells (iPSCs) were fully germline-competent and were used to clone fully functioning adult mice~\cite{Yamanaka:07, Jaenisch:07, Wernig:07}. The discovery of germline-competent iPSCs has opened up a new avenue for understanding the process of cellular differentiation, besides offering a new source for developing stem cells for tissue regeneration and other biomedical applications, without the ethical concerns of harvesting embryonic stem cells. Transcription factor based somatic cell reprogramming has since been shown to be a robust process, and human pluripotent cells have also been developed from somatic cells using a combination of transcription factors: the SOKM protocol~\cite{Yamanaka:07}, as well as other TFs such as NANOG and Lin28 in place of Klf-4 and c-Myc~\cite{Yu:07, Thomson:09}. While induced pluripotency has been characterised for a number of different cell lines, understanding the key gene regulatory networks and molecular mechanisms that underlie the process remains a key outstanding challenge~\cite{Nagy:10,Rizzino:10,Hochedlinger:10}.
Cell development and differentiation have been interpreted in light of Waddington's epigenetic landscape~\cite{Waddington:57}, visualized as a set of marbles rolling down a hill with the position of the marble indicative of the state of cellular development. Thus undifferentiated cells all start at the same state at the top of the hill and end up in different valleys corresponding to their differentiated states at the bottom of the hill, depending on the surface topography. These differentiated cell states are separated by barriers which prohibit their spontaneous transformation from one state to another. Though visually compelling, a quantification of Waddington's landscape has been attempted only recently~\cite{Wang:11,Ferrell:12}. Cell developmental circuits have been modeled as self-regulatory networks, where a transcription factor promotes its own production~\cite{Ferrell:12,Wang:11} as well as inhibits the production of other TFs (in multi-variable models)~\cite{Wang:11}. Such TF-regulated gene networks are known to accurately represent cell fate decision pathways in biological models. A two-variable self-activating and mutually inhibiting gene network has been found in various tissues where a multipotent cell undergoes a binary decision process~\cite{Huang:10,Enver:07,Wang:11}. One known instance is when the Common Myeloid Progenitor (CMP) differentiates into either the myeloid or the erythroid fates, depending on the expression levels of the PU.1 and the GATA1 transcription factors~\cite{Enver:09,Enver:07,Wang:11}. Such models have been useful in providing a quantitative description of developmental landscapes that correspond to the spirit of Waddington's landscape, with different basins of attraction representing the valleys of the differentiated states.
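The binary decision described above can be illustrated with a minimal numerical sketch of a symmetric self-activating, mutually inhibiting two-gene circuit. The parameter values below (Hill coefficient $n=4$, threshold $S=0.5$, equal activation, inhibition and decay rates) are illustrative assumptions only, and are not fitted to PU.1/GATA1 data:

```python
def rhs(x, y, a=1.0, b=1.0, k=1.0, S=0.5, n=4):
    # Hill-type self-activation plus mutual inhibition plus linear decay
    dx = a * x**n / (S**n + x**n) + b * S**n / (S**n + y**n) - k * x
    dy = a * y**n / (S**n + y**n) + b * S**n / (S**n + x**n) - k * y
    return dx, dy

def settle(x0, y0, dt=0.01, steps=10000):
    # forward-Euler integration to an approximate steady state
    x, y = x0, y0
    for _ in range(steps):
        dx, dy = rhs(x, y)
        x, y = x + dt * dx, y + dt * dy
    return x, y

# Two oppositely biased initial conditions commit to opposite fates.
xa, ya = settle(2.0, 0.1)   # settles to the x-high / y-low attractor
xb, yb = settle(0.1, 2.0)   # settles to the y-high / x-low attractor
print(f"fate A: ({xa:.2f}, {ya:.2f}),  fate B: ({xb:.2f}, {yb:.2f})")
```

Starting from oppositely biased initial conditions, the trajectories settle into two mutually exclusive attractors, mimicking the commitment of a bipotent progenitor to one of its two fates.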
An important aspect of the reprogramming process is identifying the pathways through which a fully differentiated somatic cell is programmed back to pluripotency, and in particular, whether the path a cell takes in going from a somatic state to a pluripotent state is the same as the reverse pathway. Also of interest is characterising the possible intermediate states in the process. Recent experiments by Nagy and Nagy \cite{Nagy:10} have shed some light on the path the cell takes as it is reprogrammed back to a pluripotent state. They studied the reprogramming of differentiated secondary mouse fibroblast cells that were derived from induced pluripotent stem cells, and encoded the four Yamanaka factors under the control of doxycycline promoters. Thus expression of the four factors and induction of pluripotency in entire populations of the fibroblasts could be achieved by treating cultures with the drug doxycycline. They found that there were two distinct timescales in the reprogramming process: a Point of No-Return (PNR) time, below which the cessation of the doxycycline input leaves the cell in the somatic state. The second characteristic time, called the Commitment to Pluripotent State (CPS) time, denotes the time beyond which the application of doxycycline commits the cell to the pluripotent cell fate. In between these two timescales, the PNR and the CPS, they found that the cell reached an undetermined state, which was neither somatic nor pluripotent, signalling the presence of a novel intermediate state in the reprogramming process. Cessation of the doxycycline input during this period results in neither a return to the somatic state nor progress to the pluripotent state. They denoted this novel intermediate state as the ``Area 51'' state. However, the characteristics of this state have not yet been determined. The presence of an intermediate state in the reprogramming pathway promises to be a useful tool in understanding the mechanics of the uphill process.
Further, a full understanding of the Area 51 state could lead to enhanced control over the reprogramming process, such as offering the possibility of creating and maintaining lineage-committed cells that have various applications. In this paper, we propose a theoretical framework that can lead to such intermediate states in the context of a gene-regulatory network. We propose that when modeling a gene network, an important physical factor that has so far not been taken into account is the effect of delays in the self-regulatory feedback mechanisms. The reprogramming of a somatic cell to pluripotency is a complex multistep reaction that involves both structural modifications to the chromatin network and changes in gene expression patterns~\cite{Carvalho:10}. These changes arise in response to the expression levels in the gene regulatory network, and are modeled by a self-regulating feedback loop. However, since these changes occur in a finite time, the feedback loop should in fact depend on the state of the system at a previous instant of time, leading to delays. While delay differential equations have been used to study diverse systems~\cite{Mackey:77} such as modeling disease onset in physiological systems~\cite{Alexander:08}, and discrete time population models~\cite{Kuang:93}, we show that they may be critical in developing a mathematical framework for understanding the nature of the epigenetic landscape. In this paper, we show the importance of time delays in the context of a gene-regulatory network. We model the regulatory network through the dynamics of a single differentiation regulator, denoted by $x$, that promotes its own synthesis through a feedback loop. While real-life regulatory circuits in the cell depend on two or more differentiation regulators, the main aim of this paper is to show the effects of time delays in such circuits, and a single-variable genetic circuit offers a model system in which to study such effects.
Such single variable circuits are similar to the models proposed for progesterone-induced Xenopus oocyte maturation~\cite{Ferrell:03,Ferrell:98,Ferrell:09,Ferrell:12}, and might also be applicable to scenarios where a single transcription factor such as $MyoD$ has been shown to induce a change of cell fate from fibroblast to myoblast~\cite{Lassar:87}. We define the single-variable regulatory model in the next section, and discuss the results as a function of the parameters of the model. The importance and applicability of the resulting phase diagram to systems of differentiating cells, and its extension to more realistic gene regulatory networks, are discussed in the final section. \section{Model and Results} Gene regulatory networks that control cell fate differentiation have been modeled by self-activating genes. While actual gene regulatory networks inside the cell may consist of multiple genes which have a complex interdependence on each other, one- or two-variable gene networks provide a useful model to illustrate some of the basic principles of cell-fate determination. We first introduce a single-variable model for cell differentiation, where a single regulator $x$ regulates its own synthesis, as proposed by Ferrell~\cite{Ferrell:12}. The equation governing the rate of change of expression of a single gene is given by \begin{equation} \frac{d x}{d t} = \alpha_0 + \alpha_1 \frac{x^n}{S^{n} + x^n} - \beta x, \label{eq:ferrell} \end{equation} where the first term represents an external input $\alpha_0$ that is constantly applied. The second term represents feedback-dependent self-regulation, modeled by a Hill function of order $n$. The third term models degradation as a mass-action process with degradation rate $\beta$.
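As a concrete illustration, Eq.~\ref{eq:ferrell} can be integrated forward in time with a simple Euler scheme. The sketch below uses $\alpha_0=0$, $\alpha_1=1$ and $\beta=0.5$ (the values quoted in Fig.~\ref{Figure1}), together with the assumed choices $S=1$ and $n=4$, for which the unstable fixed point sits exactly at $x=1$ and the upper stable state near $x\approx 1.84$:

```python
def hill_rhs(x, alpha0=0.0, alpha1=1.0, beta=0.5, S=1.0, n=4):
    # right-hand side of the one-gene circuit; S and n are assumed values
    return alpha0 + alpha1 * x**n / (S**n + x**n) - beta * x

def integrate(x0, dt=0.01, steps=20000):
    # forward-Euler integration of dx/dt = hill_rhs(x)
    x = x0
    for _ in range(steps):
        x += dt * hill_rhs(x)
    return x

low = integrate(0.9)    # starts below the unstable point and falls to x = 0
high = integrate(1.1)   # starts above it and rises to the upper stable state
print(f"x(0)=0.9 -> {low:.3f},   x(0)=1.1 -> {high:.3f}")
```

Changing only the initial condition across the unstable point switches the selected steady state, which is the bistability discussed in the text.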
The right hand side of Eq.~\ref{eq:ferrell} can be integrated with respect to the variable $x$ to give an ``effective potential'' landscape having two stable minima corresponding to different levels of expression of the gene. This can be seen in Fig.~\ref{Figure1}(a). The two stable fixed points correspond to $x=\tilde{x}_{1}$, and $x=\tilde{x}_{2}$ respectively ($\tilde{x}_{1} = 0$, and $\tilde{x}_{2} \approx 2$ for $\alpha_{0}=0$) with an unstable extremum at $x=x^{*}$ ($x^{*} = 1$, for $\alpha_0 = 0$). In the absence of drive the final gene expression level is crucially dependent on its initial value $x(t=0)$. Therefore if $x(t=0) \in [0, 1-\epsilon]$ the system approaches $x=\tilde{x}_{1}$, while if $x(t=0) \in [1 + \epsilon, \infty)$, the fixed point $x=\tilde{x}_{2}$ is chosen. Furthermore, in this model beyond a critical value of the external input ($\alpha_0 > \alpha_c$), the minimum at $x=\tilde{x}_{1}$ becomes unstable and the long time steady state is always $x=\tilde{x}_{2}$. This is in line with Ferrell's idea that saddle-node bifurcations are inconsistent with Waddington's landscape picture as there are no alternative end point states. In his work Ferrell~\cite{Ferrell:12} further introduces a two variable gene regulatory circuit as a model mimicking lateral inhibition and demonstrates a pitchfork bifurcation commensurate with Waddington's picture. A similar two variable model had been proposed around the same time by Wang \textit{et al.}~\cite{Wang:11}. Motivated by these gene regulatory network models that attempt to develop a quantitative picture of Waddington's landscape, we propose a simple generic single-gene regulatory network model similar to those of Refs.~\cite{Ferrell:12,Wang:11}, incorporating time-dependent drive and delay.
The rate of change of the gene regulator $x$ in this model is described by \begin{equation}\label{eq:one-ge-circuit} \frac{d x}{d t} = \alpha_0 \Theta \left[d - t \right] + \alpha_1 \frac{x^n(t-\tau)}{S^{n} + x^n(t-\tau)} - \beta x(t), \end{equation} where $\alpha_0$, $\alpha_1$ and $\beta$ have the same meanings as in Eq.~\ref{eq:ferrell}. However, unlike that model, both the chemical drive as well as the feedback are functions of time. The Heaviside function multiplying the $\alpha_0$ term represents the fact that the external input is applied for a finite time interval $d$, while the self-regulatory term is dependent on the state of the regulator $x$ at a previous instant of time $t-\tau$. The time delay in the self-regulation term in Eq.~\ref{eq:one-ge-circuit} can have several possible physical origins, including multi-step chemical reactions and cell shape changes. We have assumed no such delay in the degradation term, as it lacks biochemical justification at the level of the self-regulation term and does not affect the general results of our model. \begin{figure} \includegraphics[width=8cm]{Fig1.eps} \caption{Somatic ($x = 0$), induced pluripotent ($x \approx 2$), and Area $51$ cells in a single gene regulatory circuit. Panel (a) shows steady state values for Eq.~\ref{eq:one-ge-circuit} without drive or delay ($\alpha_0 = 0$, $d = 0$). Depending on the initial value $x(t=0)$, the somatic (red solid line) and the iPS cells (blue solid line) are stable. The unstable state $x = 1$ (green line) is also shown. Panel (b) shows corresponding steady states with a non-zero drive ($\alpha_0 = 0.5$), a decay constant $\beta = 0.5$, and the coefficient of self promotion $\alpha_1 = 1.0$. Depending on the duration of the drive, the somatic state ($d = 2$, red solid) or the iPS state ($d = 3$, blue solid) is chosen.
Panel (c) shows $x(t)$ vs.\ $t$ corresponding to Eq.~\ref{eq:one-ge-circuit} for a delay of $\tau = 500$ and for drive $d = 10$ (red line), and $d = 1000$ (blue line) indicating stability of somatic and iPS states. Panel (d) shows $x(t)$ vs.\ $t$ for $d = 500$ with sustained fluctuations between the iPS and somatic states.}\label{Figure1} \end{figure} We numerically integrated Eq.~\ref{eq:one-ge-circuit} for different values of the delay time $\tau$ and drive $d$. Figure \ref{Figure1}(b) represents the results of the single gene regulatory circuit without delay and with a chemical drive acting for a finite interval $d$ on an initial state $x = 0$. The self-promotion rate coefficient is $\alpha_1 = 1$ and the decay constant $\beta = 0.5$. Further, the amplitude of the chemical drive is parameterised by $\alpha_0 = 0.5$. We find that for a value of $\alpha_0 < \alpha_c$ and a duration of the drive $d$ less than a critical value $d_c$ ($\approx 2$), the long time steady state is $x=0$. If however the drive is applied for a duration longer than $d_c$, starting from a state $x(t=0) = 0$ the system transitions to the other minimum $x \approx 2$. Identifying $x=0$ as the somatic and $x \approx 2$ as the pluripotent state, the above process describes inducing pluripotency via a chemical drive. Figure~\ref{Figure1}(c) shows the variation of $x(t)$ vs $t$ starting from the somatic state $x=0$ for $d=10$, and $d=1000$, and a time delay $\tau = 500$ for the same set of parameters $\alpha_0$, $\alpha_1$ and $\beta$. As seen in the figure, for $d=10$ the system relaxes back to the $x=0$ steady state, while for $d=1000$ the pluripotent state $x \approx 2$ is chosen. Sharp spikes showing attempted transitions between the two states are also seen. In the intermediate regime, when the drive $d$ is of the same order of magnitude as the delay $\tau$, the trajectory of $x(t)$ shows sustained oscillations. This is shown in Fig.~\ref{Figure1}(d).
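The trajectories of Fig.~\ref{Figure1}(b)--(d) can be reproduced qualitatively with a minimal forward-Euler integrator for Eq.~\ref{eq:one-ge-circuit}, in which the stored trajectory doubles as the history buffer. The values $\alpha_0 = 0.5$, $\alpha_1 = 1$ and $\beta = 0.5$ follow the text; $S = 1$, the Hill coefficient $n = 4$, and the history $x(t \leq 0) = 0$ are illustrative assumptions.

```python
import numpy as np

def simulate(d, tau, t_max, dt=0.05,
             alpha0=0.5, alpha1=1.0, beta=0.5, S=1.0, n=4):
    """Forward-Euler integration of Eq. (2); the array x doubles as the
    history buffer, with x(t <= 0) = 0 before the drive is switched on."""
    steps = int(t_max / dt)
    lag = max(int(round(tau / dt)), 1)
    x = np.zeros(steps + 1)
    for k in range(steps):
        x_del = x[k - lag] if k >= lag else 0.0   # delayed value x(t - tau)
        drive = alpha0 if k * dt < d else 0.0     # Heaviside drive of duration d
        hill = alpha1 * x_del**n / (S**n + x_del**n)
        x[k + 1] = x[k] + dt * (drive + hill - beta * x[k])
    return x
```

With a negligible delay, a short drive relaxes back to the somatic state while a long drive reaches the pluripotent state; for $d$ comparable to $\tau$ the trajectory oscillates over many delay periods. How long the oscillations persist depends on parameter details that the paper does not fully specify.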
We interpret such sustained oscillations as the cells which are caught in a limbo between the pluripotent and the somatic states and conjecture that these states are possibly the ones seen in the experiments by Nagy et al.~\cite{Nagy:10} termed ``Area 51''. The chemical drive $\alpha_0$ is then interpreted as the doxycycline input to somatic cells having a non-zero value, corresponding to a finite rate of basal synthesis, which is switched off ($\alpha_0 = 0$) beyond the input time. \begin{figure} \includegraphics[width=8cm]{Fig2illustrate.eps} \caption{Fluctuations in the ``Area 51'' region as a combined result of time-dependent drive and delay. The top panel shows sustained oscillations for the parameters of Fig.~\ref{Figure1}(d). The bottom panels indicate the oscillations in the transient ($500 \leq t \leq 540$) and sustained oscillatory ($7500 \leq t \leq 7650$) regions.}\label{Fig2illustrate} \end{figure} The oscillations seen in some solutions of Eq.~\ref{eq:one-ge-circuit} are an inherent feature of delay differential equations~\cite{Mackey:77}. These oscillations, shown in Fig.~\ref{Figure1}(d), are investigated in greater detail in Fig.~\ref{Fig2illustrate}. It is possible to analyse the time of occurrence of these sharp spikes. If the drive duration is smaller than the delay time, \textit{i.e.} $d < \tau$, $x$ initially increases from its zero value as a function of time. Once the drive is withdrawn the dynamics of the system is completely dominated by the degradation term and as a result $x$ decreases. This behavior continues till $t = \tau$, when the self-regulation term promoting gene activity becomes non-zero, and as a result $x$ increases monotonically till a time $d + \tau$. At this time the self-regulatory term picks up the values of $x$ from the earlier cycle which was dominated by degradation kinetics. This can be generalised to state that the downward spikes occur at $t_p = d + p \tau$, while the upturns occur at $t = q \tau$, with $p$ and $q$ positive integers.
The slope of the first downturn is completely dictated by $\beta$, while the upturn slope turns out to be a nonlinear function of $\alpha_1$ and $\beta$. For the situation in which $d > \tau$, the first upward turn occurs at $t = \tau$, followed by a downturn upon reduction of the drive at $t = d + \tau$. Following this, oscillations are repeated at $t=t_p$ as discussed above. The preceding analysis is strictly valid in the initial time regime, where the spikes occur singly, as shown in Fig.~\ref{Fig2illustrate}(b). At later times, the single spikes give way to a double spike, with two spikes occurring in quick succession, as shown in Fig.~\ref{Fig2illustrate}(c). A complete description of the behaviour of the oscillations in this later time regime requires a full non-linear analysis of the original equation. \begin{figure} \includegraphics[width=8cm]{Fig2.eps} \caption{Phase diagram showing regions where somatic and pluripotent states are stable as a function of the delay time $\tau$. The phase boundaries indicating the point of no return (blue open circles), $d_{PNR}$, and commitment to the pluripotent state (red triangles), $d_{CPS}$, are indicated. The region between the two boundaries marks the regime in which the cell fate attains neither fixed point, but oscillates indefinitely, termed ``Area 51''\cite{Nagy:10}.}\label{Figure2} \end{figure} The two critical time scales alluded to earlier, the ``point of no-return'' and the ``commitment to pluripotent state'', are seen in Figures \ref{Figure1}(c) and (d) respectively. These indicate threshold values such that for $d < d_{PNR}$ the system returns to its somatic state, while for $d > d_{PNR}$ the cell fate is changed. The second threshold corresponds to the drive being on for a duration $d > d_{CPS}$, which results in a final pluripotent cellular state. The intermediate region of drives $d_{PNR} < d < d_{CPS}$ defines the ``Area 51'' region.
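Since the outcome is monotone in the drive duration when the delay is negligible, the common threshold $d_{PNR} = d_{CPS} = d_c$ can be located by bisection. The sketch below does this for Eq.~\ref{eq:one-ge-circuit} with $\alpha_0 = 0.5$, $\alpha_1 = 1$ and $\beta = 0.5$ as in the text; $S = 1$ and $n = 4$ are illustrative assumptions.

```python
import numpy as np

def final_state(d, tau=0.05, t_max=200.0, dt=0.05,
                alpha0=0.5, alpha1=1.0, beta=0.5, S=1.0, n=4):
    """Long-time value of x for drive duration d (Euler integration of Eq. (2))."""
    steps = int(t_max / dt)
    lag = max(int(round(tau / dt)), 1)
    x = np.zeros(steps + 1)
    for k in range(steps):
        x_del = x[k - lag] if k >= lag else 0.0
        drive = alpha0 if k * dt < d else 0.0
        x[k + 1] = x[k] + dt * (drive + alpha1 * x_del**n / (S**n + x_del**n)
                                - beta * x[k])
    return x[-1]

def threshold(lo=0.0, hi=20.0, iters=30):
    """Bisection for the critical drive duration separating the somatic
    (x -> 0) and pluripotent (x -> ~2) outcomes; with negligible delay
    the two thresholds d_PNR and d_CPS coincide."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if final_state(mid) > 1.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For delays comparable to the drive duration the two thresholds separate, and the same classification (integrate to long times and test which fixed point, if any, is reached) traces out both boundaries of the phase diagram.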
Taking a cue from our numerical results discussed above, we draw a phase diagram showing the domain of ``Area 51'' as a function of $d$ and $\tau$ in a single gene regulatory circuit incorporating time dependent drive and delay dynamics. Figure~\ref{Figure2} demonstrates the variation of the two thresholds $d_{PNR}$ and $d_{CPS}$ as a function of the delay $\tau$. For $0 \leq \tau \leq 50$, the two threshold values are almost the same, \textit{i.e.} $d_{PNR} \approx d_{CPS}$. In this regime the system transitions from the somatic state to the induced pluripotent state once the duration of the drive is greater than $d_{PNR}$. However, for larger values of $\tau$ the two threshold values are different, exposing an intermediate regime marked by sustained oscillations. As seen from the graph, $d_{CPS}$ monotonically increases with delay $\tau$, while some fluctuations in $d_{PNR}$ are observed. With increasing $\tau$ the ``Area 51'' region widens, as can be seen in Figure~\ref{Figure2}. \section{Discussion} We have illustrated the importance of time delays in feedback circuits in the context of a simple gene regulatory network in which the state of differentiation is regulated by a single differentiation regulator. The energy landscape of the model, in the absence of delays, has two minima, corresponding to the pluripotent and differentiated states. Introducing a delayed self-regulation term changes the landscape such that there is now a region in phase space in which the system has a long-lived oscillatory nature. We propose that such oscillatory states may underlie the existence of novel intermediate states observed in the reprogramming of mouse somatic cells, and denoted by ``Area 51''. Further experiments with fast decaying reporters which are proxies for pluripotency or somatic cell markers would be needed to validate our hypothesis of a fluctuating intermediate state.
In order to model more realistic differentiation events, one would need to study higher dimensional systems where the number of differentiation regulators is more than one. Two variable gene-regulatory models~\cite{Wang:11} offer a straightforward generalisation of these ideas to mimic realistic cell differentiation scenarios. For a full description of the dynamics of the reprogrammed cell due to the four Yamanaka factors, one needs to study the effect of delays in a four variable model, and map out the effect of the interplay of these four variables on the intermediate state. The switch from the somatic state to the pluripotent state is accompanied by various changes inside the cell, including changes in the chromatin structure, loss of somatic cell specific markers, and reactivation of endogenous genes essential for pluripotency and self-renewal, among others. Recent experiments suggest that the various changes associated with pluripotency occur in a well-defined sequential manner. For instance, the pluripotency marker of mouse pluripotent cells, SSEA-1, appears to be expressed in the very early stages of pluripotency~\cite{Jaenisch:08,Stadtfeld:08}, while the reactivation of endogenous genes such as Oct4, Nanog and Sox2 occurs late in the reprogramming process. It is probable that the rapid fluctuations predicted by the delayed-self-regulation model proposed here arise only in the context of one or a few of these pluripotency markers, instead of the full state of the cell switching from somatic to pluripotent. Thus experiments designed to validate this hypothesis of a fluctuating intermediate state need to identify the probable candidates for such switching. Another area of interest in the context of induced pluripotent cells is whether there is an inherent asymmetry to the landscape.
Nagy \textit{et al.} do not comment on whether ``Area 51'' is encountered if we perform the reverse experiment, \textit{i.e.} start from the pluripotent state and induce differentiation by keeping the cells in a chemical environment for different durations. Further experiments are needed to map out the landscape as a pluripotent cell divides under the influence of a time-dependent stimulus. Such experiments would then provide an additional input to the model to facilitate understanding of the full epigenetic landscape. The concept of time delays, possibly induced by remodelling of cellular architecture, is an important one in the differentiation context, as reorganization events inside the cell that accompany a change in cell state take place over a time scale of days~\cite{Foster:11}. Thus when modelling the epigenetic landscape through dynamical equations, one must consider the effect of delays on differentiation pathways. Similar oscillatory behavior has also been observed in other related biological systems, such as the Epithelial to Mesenchymal transition in early embryonic development and cancer metastasis~\cite{ConacciSorrel:03,Vuoriluoto:11}. In both these situations the oscillations arise from time dependent remodelling of the cytoskeleton. Thus the concept of delays may be important in other biological contexts too and should prove a useful tool in the design of predictive experiments. \section{Acknowledgements} MM $\&$ BC acknowledge financial support from the Institute of Advanced Study, Durham University, the Department of Mathematical Sciences, and the Biophysical Sciences Institute, via the IAS-BSI COFUND fellowship. BC acknowledges the hospitality of the Isaac Newton Institute, Cambridge University.
\section{Introduction} One of the essential results in probability theory on groups is Kesten's theorem \cite{Kesten}: the probability of return to the identity of a random walk on a group $\Gamma$ decreases exponentially fast if and only if $\Gamma$ is non amenable. A natural question is to extend this to other subsets: for which subsets does the random walk escape with exponential rate? Many authors have studied the case where the subset is a subgroup of $\Gamma$: see for example \cite{em1}, \cite{bekk} and in particular \cite[Theorem 51]{lin}, where it is shown that the probability that a random walk on $\Gamma$ returns to a subgroup $H$ decreases exponentially fast to zero if and only if the Schreier graph of $\Gamma /H$ is non amenable. \\ In this note we look at random walks on Zariski dense subgroups of algebraic groups (such as $SL_2(\R)$) and we look at the escape from proper algebraic subvarieties. Such questions have an interest in their own right since they allow us to study the delicate behavior of the random walk, but they have also been recently involved in other domains such as the theory of expander graphs. We are referring here among others to the works of Bourgain and Gamburd \cite{bg1},\cite{bg}, Breuillard and Gamburd \cite{gambbreui} and Varju \cite{Varju}. In \cite{gambbreui} for instance it is shown that there is an infinite set of primes $p$ of density one, such that the family of all Cayley graphs of $ SL_2(\Z/p\Z)$ is a family of expanders. A crucial part of the proof is to take a random walk on $ SL_2(\Z/p\Z)$ and to show that the probability of remaining in a subgroup decreases exponentially fast to zero and uniformly.
In \cite[Corollary 1.1.]{bg} the following statement was established: consider the group $SL_d(\Z)$ ($d\geq 2$), the uniform probability measure on a finite symmetric generating set and $(S_n)_{n\in \N}$ the associated random walk; then for every proper algebraic variety $\mathcal{V}$ of $SL_d(\mathbb{C})$, $\p (S_n \in \mathcal{V}) $ decreases exponentially fast to zero. \\ Kowalski \cite{Kowalski} and Rivin \cite{Rivin} were interested in similar questions: for example, they were able to estimate the probability that a random walk in $SL_d(\Z)$ lies in the set of matrices with reducible characteristic polynomial. The techniques used by Kowalski and Rivin rely on arithmetic sieving.\\ In this article, we develop a more probabilistic approach allowing us to deal with random walks on arbitrary Zariski dense subgroups of semi-simple algebraic groups. In the particular case of $SL_2(\R)$, we obtain (see Theorem \ref{ch2tr}) that a random walk whose measure generates a non-elementary subgroup escapes with probability tending to one exponentially fast from every algebraic variety. Our method relies on the theory of random matrix products developed in the 60's by Kesten and Furstenberg and in the 70's-80's by the French school: in particular Bougerol, Guivarc'h, Le Page and Raugi.\\ We also apply our techniques to generic Zariski density. Let $\Gamma_1$ and $\Gamma_2$ be two Zariski dense subgroups of $SL_d(\R)$ ($d\geq 2$). We prove in Theorem \ref{ch2generic11} that one can exhibit a probability measure on each of the subgroups such that two independent random walks will eventually generate a Zariski dense subgroup. We have proved in \cite{aoun} that the latter subgroup is also free. This gives consequently a ``probabilistic'' version of the Tits alternative \cite{tits}.\\ All the random variables will be defined on a probability space $(\Omega, \mathcal{F},\p)$, the symbol $\E$ will refer to the expectation with respect to $\p$ and ``a.s.'' to almost surely.
If $\Gamma$ is a topological group, $\mu$ a probability measure on $\Gamma$, we define a sequence of independent random variables $\{X_n; n\geq 0\}$ with the same law $\mu$. We denote for every $n\in \N^*$ by $S_n=X_n \cdots X_1$ the $n^{th}$ step of the random walk. \\ First let us present the result we obtain for $SL_2(\R)$. We will say that a probability measure $\mu$ on $SL_2(\R)$ is non elementary if the group generated by its support is non elementary, i.e. Zariski dense in $SL_2(\R)$ or equivalently not virtually solvable. \begin{theo} Let $\mu$ be a non elementary probability measure on $SL_2(\R)$ having an exponential moment (see Section \ref{ch2subprelprob} for a definition of this notion). Then for every proper algebraic subvariety $\mathcal{V}$ of $SL_2(\R)$, $$\displaystyle \limsup_{n\rightarrow \infty} \big[\p (S_n \in \mathcal{V})\big]^{\frac{1}{n}} < 1$$ In particular, every proper algebraic subvariety is transient, that is a.s. $S_n$ leaves $\mathcal{V}$ after some time. \\ More precisely, if $P$ is a non constant polynomial equation in the entries of the $2\times 2$ matrices of $SL_2(\R)$, then there exists $\lambda>0$ such that: $$\frac{1}{n} \log |P(S_n)| \underset{n\rightarrow \infty}{\overset{\textrm {a.s.}}{\longrightarrow}} \lambda$$ A large deviation inequality holds as well: for every $\epsilon>0$: \begin{equation}\displaystyle \limsup_{n\rightarrow \infty} \Big[\p \left( \big|\frac{1}{n} \log |P(S_n)| - \lambda\big| > \epsilon \right) \Big]^{\frac{1}{n}}< 1\label{ch2lgdevia}\end{equation} \label{ch2tr}\end{theo} Theorem \ref{ch2tr} is in fact a particular case of a more general statement: Theorem \ref{ch2tr1} below. 
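Before turning to the general statement, the almost sure convergence of $\frac{1}{n} \log |P(S_n)|$ in Theorem \ref{ch2tr} is easy to observe numerically. The sketch below takes for $\mu$ the uniform measure on $\{A^{\pm 1}, B^{\pm 1}\}$, where $A$ (resp. $B$) is the upper (resp. lower) triangular unipotent matrix with off-diagonal entry $2$ (an illustrative choice whose support generates a free, hence non-elementary, subgroup), and $P(g)=g_{1,1}$, a non-constant polynomial in the entries; the running product is renormalized at each step to avoid overflow.

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])     # illustrative generators: <A, B> is
B = np.array([[1.0, 0.0], [2.0, 1.0]])     # free, hence non-elementary
GENS = [A, np.linalg.inv(A), B, np.linalg.inv(B)]

def poly_growth_rate(n, seed):
    """Estimate (1/n) log |P(S_n)| for P(g) = g_{11}.  The running product is
    renormalized at each step, so S_n = exp(log_norm) * s with ||s|| = 1."""
    rng = np.random.default_rng(seed)
    s = np.eye(2)
    log_norm = 0.0
    for _ in range(n):
        s = GENS[rng.integers(4)] @ s
        f = np.linalg.norm(s)
        s /= f
        log_norm += np.log(f)
    # guard against an accidental zero entry of the normalized product
    return (log_norm + np.log(max(abs(s[0, 0]), 1e-12))) / n
```

Different trajectories of a few thousand steps concentrate around a common positive value, in line with the large deviation inequality (\ref{ch2lgdevia}).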
If $G$ is the group of real points of an algebraic group $\mathbf{G}$, $m$ a Cartan projection (see Section \ref{ch2subprel}), $\mu$ a probability measure on $G$, then the Kingman subadditive ergodic theorem allows us to define a vector $Liap(\mu)$ (see Definition / Proposition \ref{ch2liapuvector}) in the Weyl chamber of $G$ which is the almost sure limit of $\frac{1}{n}m(S_n)$. \begin{theo}\label{ch2tr1} Let $\mathbf{G}$ be a semi-simple algebraic group defined and split over $\R$\footnote{For example, $\mathbf{G}=\mathbf{SL}_d$, $d\geq 2$.}, $G=\mathbf{G}(\R)$ its group of real points, $\Gamma$ a Zariski dense subgroup of $G$, $\mathcal{V}$ a proper algebraic subvariety of $\mathbf{G}$ defined over $\R$, $\mu$ a probability on $G$ with an exponential moment (see Section \ref{ch2subprelprob}) such that its support generates $\Gamma$. Then, there exists a finite union of hyperplanes $H_1,\cdots, H_r$ in the Weyl chamber (see Section \ref{ch2subcartan}) depending only on $\mathcal{V}$ such that if $Liap(\mu)\not \in H_1 \cup \cdots \cup H_r$, then \begin{equation}\displaystyle \limsup_{n\rightarrow \infty} \big[\p (S_n \in \mathcal{V})\big]^{\frac{1}{n}} < 1\label{ch2olli1}\end{equation} Probability measures, whose support generates $\Gamma$, satisfying the condition $Liap(\mu) \not\in H_1 \cup \cdots \cup H_r$ exist (see Lemma \ref{ch2lyapunovcone}). A large deviation inequality similar to (\ref{ch2lgdevia}) holds as well. \end{theo} Theorem \ref{ch2tr1} clearly implies Theorem \ref{ch2tr}: indeed, all we need to show is that the Lyapunov exponent associated to $\mu$ (see Definition \ref{ch2liapou}) is non zero (positive). This is ensured by Furstenberg's theorem \cite{Furst}. \\ \begin{remarque} The number $\lambda$ that appears in Theorem \ref{ch2tr} or \ref{ch2tr1} should be seen as a generalization of the classical Lyapunov exponent (see Definition \ref{ch2liapou}).
In fact, it will be the Lyapunov exponent relative to the probability measure $\rho(\mu)$ where $\rho$ is some rational representation of $\mathbf{G}$. \end{remarque} \begin{remarque} Our method doesn't allow us to estimate $\p(S_n\in \mathcal{V})$ when $Liap(\mu)$ belongs to the finite union of hyperplanes $H_i$ defined by the variety $\mathcal{V}$. Example 2 of Section \ref{ch2example} illustrates this. \end{remarque} Let us justify why we will look at the escape from algebraic subvarieties and not from $C^1$ submanifolds for instance. Kac and Vinberg proved in \cite{vinberg} (see also \cite{benoist}) that there exist discrete Zariski dense subgroups of $SL_3(\R)$ preserving a $C^1$ (but not algebraic) manifold on the projective plane (in fact, such manifolds are obtained as the boundary of a divisible convex in $P^2(\R)$). Let $\Gamma$ be such a group, $\mathcal{C}$ such a manifold and $\mathcal{V}=\{x\in \R^3\setminus\{0\}; [x] \in \mathcal{C}\}\cup\{0\}$ where $[x]$ denotes the projection of $x\neq 0$ on $P^2(\R)$. Note that $\mathcal{V}$ is differentiable outside $0$. Then, for every $x\in \mathcal{V}$, every $n\in \N$, $\p (S_n x \in \mathcal{V}) = 1$. By way of contrast, we show in the following statement that for proper algebraic subvarieties the latter quantity decreases exponentially fast to zero.\\ \begin{theo}\label{ch2tr2} Let $\Gamma$ be a Zariski dense subgroup of $SL_d(\R)$ ($d\geq 2$), $\mu$ a probability measure with an exponential moment whose support generates $\Gamma$. Then for every proper algebraic subvariety $\mathcal{V}$ of $\R^d$, every non zero vector $x$ of $\R^d$ we have: $$\displaystyle \limsup_{n\rightarrow \infty} \big[\p (S_n x \in \mathcal{V})\big]^{\frac{1}{n}} < 1$$\end{theo} As discussed at the beginning of the introduction, it is interesting to study the transience of proper subgroups. 
It follows from Varju's paper (see \cite[Propositions 8 and 9]{Varju}) that if $\mathbf{E}$ is a simple algebraic group defined over $\R$, $\mathbf{G}$ the direct product of $r$ copies of $\mathbf{E}$ (with $r\in \N^*$), $\Gamma$ a Zariski dense subgroup of $G=\mathbf{G}(\R)$, then there exists a symmetric probability measure $\mu$ on $\Gamma$ whose support generates $\Gamma$ such that the probability that the associated random walk escapes from a proper algebraic subgroup decreases exponentially fast to zero. We will show that this in fact holds for all probability measures with an exponential moment whose support generates $\Gamma$ and for every semi-simple algebraic group $\mathbf{G}$, namely: \begin{theo}\label{ch2tr3} Let $\mathbf{G}$ be a semi-simple algebraic group defined over $\R$, $G$ its group of real points assumed without compact factors, $\Gamma$ a Zariski dense subgroup of $G$ and $\mu$ a probability measure with an exponential moment whose support generates $\Gamma$. Then for every proper algebraic subgroup $\mathbf{H}$ of $\mathbf{G}$, $$\displaystyle \limsup_{n\rightarrow \infty} \big[\p (S_n \in H)\big]^{\frac{1}{n}} < 1$$ where $H$ is the group of real points of $\mathbf{H}$. \end{theo} The bound obtained by Varju is uniform over the subgroups. Unfortunately our bound in Theorem \ref{ch2tr3} is not.\\ Our estimates will be applied to show that Zariski density in linear groups is generic in the following sense: \begin{theo}\label{ch2generic11} Let $G$ be the group of real points of a semi-simple algebraic group split over $\R$. Let $\Gamma_1,\Gamma_2$ be two Zariski dense subgroups of $G$. 
Then there exist probability measures $\mu_1$ and $\mu_2$ with an exponential moment whose supports generate respectively $\Gamma_1$ and $\Gamma_2$, such that for some $c\in ]0,1[$ and all large $n$, $$\p (\textrm{$\langle S_{1,n},S_{2,n}\rangle$ is Zariski dense and free}) \geq 1-c^n$$ where $\{S_{1,n}; n \geq 0\}$ and $\{S_{2,n}; n \geq 0\}$ are two independent random walks on $\Gamma_1$ and $\Gamma_2$ associated respectively to $\mu_1$ and $\mu_2$. This implies that almost surely, for $n$ big enough, the subgroup $\langle S_{1,n},S_{2,n}\rangle$ is Zariski dense and free. \end{theo} See Section \ref{ch2subzariski} for the comparison of these results with Rivin's in \cite{genericrivin}. \begin{remarque} The fact that $\{w\in \Omega; \langle S_{1,n}(w), S_{2,n}(w) \rangle \textrm{\;is Zariski dense}\}$ is measurable will follow from Lemma \ref{ch2strongtits}. \end{remarque} \subsection{Outline of the paper} In order to prove Theorem \ref{ch2tr1} (or \ref{ch2tr2}, \ref{ch2tr3}), one can clearly suppose that $\mathcal{V}$ is a proper hypersurface (i.e. the zero set of a single polynomial equation). We will do so in all the paper.\\ In Section \ref{ch2example}, we provide two examples to explain the general idea of the proofs.\\ Section \ref{ch2sublinear} is purely algebraic. To every proper algebraic hypersurface $\mathcal{V}$ of $\mathbf{G}$ we associate a rational real representation $\rho$ of $\mathbf{G}$ such that $g\in \mathcal{V}$ is equivalent to: the matrix coefficients of $\rho(g)$ satisfy a linear condition ``$(L)$''. Thus we have ``linearized'' our variety. This can be seen as a generalization of the well-known Chevalley theorem (Theorem \ref{ch2chevalley}) concerning the particular case of subgroups. In Section \ref{ch2subprel} we recall standard facts about semi-simple algebraic groups and their rational representations.\\ In Section \ref{ch2subproba} we give some additional results to the theory of random matrix products.
They will be used in Section \ref{ch2subproof1} in order to show that $\rho(S_n)$ may verify $(L)$ only with a probability decreasing exponentially in $n$.\\ We consider a random walk on a Zariski dense subgroup $\Gamma$ of the real points of a semi-simple algebraic group. First we define the Lyapunov vector, which is the normalized Cartan projection of the random walk. We recall in Theorem \ref{ch2guimo} that it belongs to the interior of the Weyl chamber. In Lemma \ref{ch2lyapunovcone}, we show that for every finite union of hyperplanes in the Weyl chamber, one can always find a probability measure whose support generates $\Gamma$ such that the Lyapunov vector does not belong to this union (this is the condition stated in Theorem \ref{ch2tr1}). Next, we will be interested in the behavior of the components of the random walk in the Cartan decomposition. In Theorems \ref{ch2ratioA} and \ref{ch2conv}, we give a new and shorter proof of the exponential convergence in the $KAK$ decomposition obtained in our previous work \cite{aoun}. Unlike \cite{aoun}, where we were working over an arbitrary local field, we will take advantage during the proofs of the fact that our matrices are real valued. Theorem \ref{ch2ratioA} shows the exponential decay of the ratio between the first two $A$-components of the random walk in the $KAK$ decomposition. This is a version in expectation of the fact that the Lyapunov vector belongs to the interior of the Weyl chamber. The proof will follow easily from a large deviations theorem of Le Page in $GL_d(\R)$. We note that we proved a similar result in \cite{aoun} but with different techniques; the reason is that a large deviation result over an arbitrary local field is not present in the literature. \\ Theorem \ref{ch2conv} establishes the exponential convergence of the $K$-parts.\\ In Section \ref{ch2subproof1}, we prove our main results: Theorems \ref{ch2tr1}, \ref{ch2tr2} and \ref{ch2tr3}.
The key is Theorem \ref{ch2theo1}, which computes the probability that a random walk on a linear algebraic group verifies a linear condition on the matrix coefficients. No irreducibility assumptions are made; a genericity condition on the geometry of the Lyapunov vector is however needed. \\ Finally in Section \ref{ch2subzariski}, we apply Theorem \ref{ch2theo1} to prove Theorem \ref{ch2generic11}. We compare our results with Rivin's in \cite{genericrivin}.\\ \paragraph{Acknowledgments} I sincerely thank Emmanuel Breuillard and Yves Guivarc'h for fruitful discussions, remarks and advice. I also thank Igor Rivin for his interest and his comments. \section{Examples}\label{ch2example} In this section, we give examples to illustrate the ideas and methods we will use in the next section to prove our main results. \subsection{Example 1} This example illustrates Theorem \ref{ch2tr2}.\\ Let $\Gamma$ be a Zariski dense subgroup of $SL_3(\R)$ ($SL_3(\Z)$ for example). Consider a probability measure $\mu$ on $SL_3(\R)$ with an exponential moment (see Section \ref{ch2subprelprob}) whose support generates $\Gamma$. For example, if $\Gamma$ is finitely generated, choose a probability measure whose support is a finite symmetric generating set. Let $S_n=X_n\cdots X_1$ be the associated random walk. We write $S_n$ in the canonical basis of $M_{3,3}(\R)$: $$S_n=\left( \begin{array}{ccc} a_n & b_n & c_n \\ d_n & e_n & f_n \\ g_n & h_n & i_n \\ \end{array} \right)$$ We would like to determine whether the following probability decreases exponentially fast to zero: $$p_n=\p (a_n^2 - a_ne_n +2a_nd_n - a_nb_n -b_nd_n=0)$$ In other words, if $\mathcal{V}$ is the proper algebraic hypersurface of $SL_3(\R)$ defined by $\mathcal{V}=\{\left( \begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \end{array}\right)\in SL_3(\R); a^2-ae+2ad-ab-bd=0\}$, then we are interested in estimating $\p(S_n\in \mathcal{V})$.
\paragraph{Step 1: Linearization of the algebraic hypersurface $\mathcal{V}$.\\} Let $E$ be the vector space of homogeneous polynomials in three variables $X,Y,Z$ of degree $2$. The group $SL_3(\R)$ acts on $E$ by the formula: $g \cdot P(X,Y,Z)=P\left( g^t (X,Y,Z) \right)$ where $g^t$ is the transposed matrix of $g$ when $g$ is expressed in the canonical basis. Let us write down this representation. We will consider the basis $\{X^2, Y^2, Z^2, XY, XZ, YZ\}$ of $E$. \begin{eqnarray} SL_3(\R) & \overset{\rho}{\longrightarrow} & GL(E) \simeq GL_6(\R)\nonumber\\ \left( \begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \\ \end{array} \right) & \mapsto & \left( \begin{array}{cccccc} a^2 & b^2 & c^2 & ab & ac & bc \\ d^2 & e^2 & f^2 & de & df & ef \\ g^2 & h^2 & i^2 & gh & gi & hi \\ 2ad & 2be & 2cf & ae+bd & af+cd & bf+ec \\ 2ag & 2bh & 2ci & ah+gb & ai+cg & bi+ch \\ 2dg & 2eh & 2fi & dh+eg & di+gf & ei+hf \\ \end{array} \right)\nonumber \end{eqnarray} In what follows we identify $E$ with $\R^6$ by sending $\{X^2,Y^2,Z^2,XY,XZ,YZ\}$ to the canonical basis $\{e_i; i=1, \cdots, 6\}$. Then it is clear that $$\mathcal{V}=\{g\in SL_3(\R); \rho(g) (e_1-e_4)\in H\}$$ where $H$ is the hyperplane in $E$ defined by $H=\{x=(x_i)_{i=1}^6\in \R^6; x_1+x_4=0\}$.\\ We say that we have linearized the hypersurface $\mathcal{V}$. This method generalizes easily and yields Lemma \ref{ch2lemma}, which holds for arbitrary hypersurfaces.\\ Note that, for $x=e_1-e_4$, $$p_n=\p \left( \rho(S_n) x \in H \right)$$ \paragraph{Step 2: Random matrix products in $GL_6(\R)$\\} We now have a probability measure $\rho(\mu)$, the image of $\mu$ under $\rho$, on $GL_6(\R)$ with an exponential moment. The smallest closed group $G_{\rho(\mu)}$ containing the support of $\rho(\mu)$ is a Zariski dense subgroup of $\rho(SL_3(\R))$. One can verify that $\rho$ is in fact $SL_3(\R)$-irreducible.
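This linearization is easy to check numerically. The following sketch implements the displayed $6\times 6$ matrix and verifies on random matrices both that $\rho$ is multiplicative (its columns are the coefficient vectors of the images of the monomial basis) and that the sum of the first and fourth coordinates of $\rho(g)(e_1-e_4)$ recovers the polynomial defining $\mathcal{V}$.

```python
import numpy as np

def rho(g):
    """The displayed 6x6 matrix of g acting on degree-2 forms, in the basis
    (X^2, Y^2, Z^2, XY, XZ, YZ)."""
    (a, b, c), (d, e, f), (p, h, i) = g   # p plays the role of the entry g
    return np.array([
        [a*a, b*b, c*c, a*b, a*c, b*c],
        [d*d, e*e, f*f, d*e, d*f, e*f],
        [p*p, h*h, i*i, p*h, p*i, h*i],
        [2*a*d, 2*b*e, 2*c*f, a*e + b*d, a*f + c*d, b*f + e*c],
        [2*a*p, 2*b*h, 2*c*i, a*h + p*b, a*i + c*p, b*i + c*h],
        [2*d*p, 2*e*h, 2*f*i, d*h + e*p, d*i + p*f, e*i + h*f],
    ])

def Q(g):
    """Defining polynomial of the hypersurface V."""
    (a, b, _), (d, e, _), _ = g
    return a*a - a*e + 2*a*d - a*b - b*d

x = np.array([1.0, 0.0, 0.0, -1.0, 0.0, 0.0])   # the vector e_1 - e_4
```

Both identities are polynomial in the matrix entries, so checking them on random real matrices (not necessarily of determinant one) is a meaningful test.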
Since $SL_3(\R)$ is Zariski connected, we deduce that $G_{\rho(\mu)}$ is a strongly irreducible (Definition \ref{defdef}) subgroup of $GL_6(\R)$. Moreover, the group $\rho\left(SL_3(\R)\right)$ clearly contains a proximal element, so by the Goldsheid-Margulis theorem \cite{Margulis} (see Theorem \ref{ch2margulis} for the statement), the same holds for $G_{\rho(\mu)}$.\\ Thus, we can use the theory of random matrix products, which gives (see Lemma \ref{ch2hyperplane}) what we wanted to prove, i.e.: $$\limsup_{n\rightarrow +\infty} \frac{1}{n} \log {\p \left(\rho(S_n)x \in H \right)}< 0$$ A word about the proof: if $[x]$ denotes the projection of $x\in \R^6 \setminus\{0\}$ in the projective space $P(\R^6)$, then $\rho(S_n)[x]$ converges in law towards a random variable $Z$ whose law is the unique $\rho(\mu)$-invariant probability measure $\nu$ on the projective space $P(\R^6)$. It can be shown that the speed of convergence is exponential in a certain sense. Moreover, almost surely, $Z$ cannot belong to the hyperplane $H$ because $\nu$ is proper. More precisely, we can control the probability that the distance between $Z$ and a fixed hyperplane $H$ is small. \begin{remarque} This method does not give an estimate of the growth of $Q(S_n)$ where $Q$ is the polynomial that defines $\mathcal{V}$. We will see in the next section (Theorem \ref{ch2theo1}) how such quantities can be estimated. \end{remarque} \subsection{Example 2} This example illustrates situations in which we are unable to obtain the exponential decrease of the probability of lying in a subvariety for all probability measures (see the statement of Theorem \ref{ch2tr1}).\\ As in Example 1, consider a probability measure on $SL_3(\R)$ with an exponential moment whose support generates a Zariski dense subgroup of $SL_3(\R)$.
Suppose that we would like to estimate the following probability: $$q_n=\p (a_ne_n-b_nd_n+2e_n=0)$$ Let $\mathcal{S}$ be the following hypersurface of $SL_3(\R)$: $\mathcal{S}=\{ae-bd+2e=0\}$, so that $q_n=\p(S_n\in \mathcal{S})$. Consider the natural action of $SL_3(\R)$ on $F=\bigwedge^2 \R^3 \oplus \R^3$. Denote by $\eta$ this representation and write $\eta=\eta_1\oplus \eta_2$. We fix the basis $(e_1\wedge e_2, e_1 \wedge e_3, e_2\wedge e_3, e_1,e_2,e_3)$ of $F$. Formally, we have: \begin{eqnarray} SL_3(\R) & \overset{\eta}{\longrightarrow} & GL(F) \simeq GL_6(\R)\nonumber\\ \left( \begin{array}{ccc} a & b & c \\ d & e & f \\ g & h & i \\ \end{array} \right) & \mapsto & \left( \begin{array}{cccccc} ae-bd & af-cd & bf-ec & 0 & 0 & 0 \\ ah-gb & ai-gc & bi-hc & 0 & 0 & 0 \\ dh-eg & di-gf & ei-hf & 0 & 0 & 0 \\ 0 & 0 & 0 & a & b & c \\ 0 & 0 & 0 & d & e & f \\ 0 & 0 & 0 & g & h & i \\ \end{array} \right)\nonumber \end{eqnarray} Thus $$\mathcal{S}=\{g\in SL_3(\R); \eta(g)x\in H\}$$ where $x=e_1\wedge e_2+e_2$ and $H=\{x\in \R^6; x_1+2x_5=0\}$. Hence, we have linearized our variety $\mathcal{S}$ as in Example 1. The difference between these two examples is that the representation $\eta$ is no longer irreducible ($\eta_1$ and $\eta_2$ are its irreducible sub-representations). Hence we cannot use the same argument as in Example 1.\\ However, we will see in the proof of Theorem \ref{ch2theo1} that we are able to solve the problem if the top Lyapunov exponents of $\eta_1(\mu)$ and $\eta_2(\mu)$ are distinct.\\ Let us calculate them. If $\lambda_1,\lambda_2$ are the top two Lyapunov exponents of $\mu$\footnote{$\lambda_1=\lim_{n\rightarrow +\infty}{\frac{1}{n}\E (\log ||S_n||)}$ and $\lambda_1+\lambda_2=\lim_{n\rightarrow +\infty}{\frac{1}{n}\E (\log ||\bigwedge^2 S_n||)}$}, then the top Lyapunov exponent of $\eta_1(\mu)$ is $\lambda_1+\lambda_2$ and the one corresponding to $\eta_2(\mu)$ is clearly $\lambda_1$. So the problem occurs when $\lambda_2=0$.
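Both the linearization of $\mathcal{S}$ and the identity behind the value $\lambda_1+\lambda_2$ can be checked numerically: $\bigwedge^2 g$ is the matrix of $2\times 2$ minors of $g$, and for the induced (good) norm $||\bigwedge^2 g||$ equals $s_1(g)s_2(g)$, the product of the two top singular values. A sketch (assuming Python with numpy):

```python
import numpy as np
from itertools import combinations

def wedge2(g):
    # matrix of /\^2 g in the basis (e1^e2, e1^e3, e2^e3): the 2x2 minors of g
    idx = list(combinations(range(3), 2))
    return np.array([[np.linalg.det(g[np.ix_(r, c)]) for c in idx] for r in idx])

def eta(g):
    # eta = (/\^2) + (standard), block diagonal as in the displayed 6x6 matrix
    out = np.zeros((6, 6))
    out[:3, :3] = wedge2(g)
    out[3:, 3:] = g
    return out

g = np.array([[1., 2, 3], [0, 1, 4], [0, 0, 1]])
y = eta(g) @ np.array([1., 0, 0, 0, 1, 0])     # image of x = e1^e2 + e2
# y[0] + 2*y[4] equals ae - bd + 2e evaluated at g
s = np.linalg.svd(g, compute_uv=False)
# operator norm of /\^2 g is s[0]*s[1]: the source of the exponent lambda_1 + lambda_2
```

For this $g$ the polynomial $ae-bd+2e$ evaluates to $3$, which the linear form $y_1+2y_5$ reproduces.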
This can happen for example when $\mu$ is a symmetric probability measure (i.e. the law of $X_1$ is the same as that of $X_1^{-1}$).\\ However, we can still find a probability measure whose support generates $\Gamma$ such that $\lambda_2\neq 0$, see Lemma \ref{ch2lyapunovcone}. \section{Linearization of algebraic varieties} \label{ch2sublinear} Let $\mathbf{G}$ be a semi-simple algebraic group defined over $\R$, and $G$ its group of real points.\\ The goal of this section is to linearize every algebraic hypersurface of $\mathbf{G}$. More precisely, to every proper algebraic hypersurface $\mathcal{V}$ defined over $\R$, we associate a finite dimensional rational real representation $(\rho,V)$ of $\mathbf{G}$ and a linear form $L$ on $End(V)$ such that $\mathcal{V}=\{g\in \mathbf{G}; L\left(\rho(g) \right)=0\}$. In fact, we will find a representation $(\rho,V)$ of $\mathbf{G}$, a line $D$ in $V$ and a hyperplane $H$ in $V$ defined over $\R$ such that $\mathcal{V}=\{g\in \mathbf{G}; g\cdot D\subset H\}$ (see Lemma \ref{ch2lemma}). This is to be seen as a generalization of the well-known Chevalley theorem for subgroups (see Theorem \ref{ch2chevalley}). \begin{defi}[Matrix coefficients] If $(V,\rho)$ is a finite dimensional representation of $G$ and $ \langle \cdot, \cdot \rangle$ is a scalar product on $V$, we call $\langle\rho(g)v,w\rangle$, for $v,w \in V$, a matrix coefficient and we denote by $C(\rho)$ the span of the matrix coefficients of the representation $\rho$; thus a function $f\in C(\rho)$ can be written $L \circ \rho$ where $L$ is a linear form on the vector space $End(V)$.\label{ch2matrix}\end{defi} Let $\rho_1, \cdots , \rho_r$ be pairwise non-isomorphic $\R$-rational irreducible representations of $\mathbf{G}$. Then any $f_1 \in C(\rho_1),\cdots ,f_r\in C(\rho_r)$ are linearly independent (see the proof of Lemma \ref{ch2lemma} below).
The set of elements of $G$ where such a linear dependence is realized clearly defines an algebraic hypersurface of $\mathbf{G}$. The following lemma also says that every algebraic hypersurface can be realized in this way. \begin{lemme}\label{ch2lemma} For every algebraic hypersurface $\mathcal{V}$ of $\mathbf{G}$ defined over $\R$, there exist a representation $(\rho,V)$ of $\mathbf{G}$, a line $D$ in $V$ and a hyperplane $H$ of $V$ defined over $\R$ such that $\mathcal{V}= \{g\in \mathbf{G}; g\cdot D \subset H\}$. In particular, there exist a representation $(\rho,V)$ of $\mathbf{G}$ \textbf{whose irreducible sub-representations}, say $\rho_1,\cdots,\rho_r$, \textbf{occur only once}, and $f_1\in C(\rho_1),\cdots,f_r\in C(\rho_r)$ such that: \begin{equation}\mathcal{V}(\R)=\{g\in G; \sum_{i=1}^r {f_i(g)}=0\}\label{ch2expression}\end{equation} $\mathcal{V}$ is proper if and only if at least one of the $f_i$'s is non zero.\\ This is equivalent to saying that there exists $A\in End(V_1)\oplus\cdots \oplus End(V_r)$ such that: $$\mathcal{V}(\R)=\{g\in G;\; Tr\left(\rho(g)A\right)=0\}$$ with $\mathcal{V}$ proper if and only if there exists $i=1,\cdots,r$ such that the restriction of $A$ to $V_i$ is non zero. Here $Tr(M)$ denotes the trace of the endomorphism $M$. \end{lemme} \begin{proof}[Proof] Let $\R[\mathbf{G}]$ be the algebra of functions on $\mathbf{G}$, with $\mathbf{G}$ acting on $\R[\mathbf{G}]$ by right translations: $g\cdot f(x)=f(x g)$ $\forall g,x \in \mathbf{G}$, and let $P$ be the generator of the ideal vanishing on $\mathcal{V}$ (which is principal since $\mathcal{V}$ is a hypersurface). Then\; $g \in \mathcal{V} \Longleftrightarrow g \cdot P(1)=0$.\; Consider the sub-representation $V=Vect(g\cdot P,g \in G)$. By \cite[Chapter 8, Proposition 8.6]{Humphreys}, $V$ is a finite dimensional $\R$-rational representation of $\mathbf{G}$.
When $\mathcal{V}$ is proper, the subspace $H=\{f\in V; f(1)=0\}$ is a hyperplane defined over $\R$ so that $g \in \mathcal{V} \Longleftrightarrow g \cdot P\in H$, and the first part of the lemma is proved. $\mathbf{G}$ being semi-simple, we decompose $(\rho,V)$ into irreducible sub-representations: $V=\oplus_{i=1}^rV_i$. Decomposing $P$ in the $V_i$'s easily gives (\ref{ch2expression}), with the only difference that the $V_i$'s are not necessarily pairwise non isomorphic. Suppose for instance that $V_1\simeq V_2$. In this case, there exists an invertible matrix $M$ such that $\rho_2(g)=M\rho_1(g)M^{-1}$ for every $g\in \mathbf{G}$. Let $f_i=L_i \circ \rho_i$ where $L_i$ is a suitable linear form on $End(V_i)$ for $i=1,2$. Then $f_2=\widetilde{L_2} \circ \rho_1$ where $\widetilde{L_2}$ is the linear form defined on $End(V_1)$ by $\widetilde{L_2}(h)=L_2(M h M^{-1})$, $h\in End(V_1)$. Consequently, $f_2$ can be viewed as an element of $C(\rho_1)$, so that $f_1+f_2\in C(\rho_1)$ and $V_2$ can be dropped. By updating $r$ if necessary, we obtain (\ref{ch2expression}). At least one of the $f_i$'s is non zero, otherwise $\mathcal{V}$ would be $\mathbf{G}$.\\ $\bullet$ For the converse, we will show that if $\rho_1,\cdots,\rho_r$ are pairwise non isomorphic representations of $\mathbf{G}$, then any $(f_1,\cdots ,f_r)\in C(\rho_1)\times\cdots\times C(\rho_r)$ are linearly independent.
A simple argument using the Peter-Weyl theorem will immediately give the result for compact groups, and a unitary trick will allow us to conclude.\\ If $G$ were a compact Lie group, the proof would be a consequence of the Peter-Weyl theorem for representations of compact groups (see for example \cite{weyl}): let $\Sigma$ be the collection of all irreducible representations of $G$, pairwise non isomorphic, and $L^2(G)$ the set of all square integrable functions with respect to the Haar measure on $G$; then $\{\sqrt{dim(\rho)}\rho_{i,j};\;\rho \in \Sigma;\; 1 \leq i,j\leq dim(\rho)\}$ forms an orthonormal basis of $L^2(G)$, where $\rho_{i,j}$ denotes the function on $G$ defined by $\rho_{i,j}(g)=\langle\rho(g)e_i,e_j\rangle$ for a certain basis $\{e_1,\cdots,e_{dim(\rho)}\}$ of the representation. We immediately deduce the linear independence of any $f_1,\cdots,f_r$, where $f_i \in C(\rho_i)$ for each $i$. \\ Now we return to the general case. If $\sum_{i=1}^r{ \lambda_if_i(g)}=0$ for all $g\in G=\mathbf{G}(\R)$, then by Zariski density the same holds for all $g\in \mathbf{G}(\mathbb{C})$. We decompose the $\rho_i$'s into $\mathbf{G}(\mathbb{C})$-irreducible representations. For the sake of simplicity, we keep the notation $f_i$ to denote the new matrix coefficients that follow from this decomposition. The Lie algebra $\mathfrak{g}$ of $\mathbf{G}(\mathbb{C})$ has a compact real form $\mathfrak{g}_0$ (i.e. $\mathfrak{g}_0 \otimes_{\R} \mathbb{C} = \mathfrak{g}$). To $\mathfrak{g}_0$ corresponds a subgroup $G_0$ of $\mathbf{G}(\mathbb{C})$ which is compact and Zariski dense in $\mathbf{G}(\mathbb{C})$. Hence an irreducible representation of $\mathbf{G}(\mathbb{C})$ is $G_0$-irreducible. We conclude using the previous paragraph concerning the Peter-Weyl theorem for compact groups. \end{proof} \subsection{The particular case of subgroups} Let $\mathbf{G}$ be an algebraic group.
The linearization of proper subgroups of $\mathbf{G}$ is Chevalley's theorem: \begin{theo}[Chevalley]\label{ch2chevalley} \cite{Humphreys} Let $\mathbf{H}$ be a proper subgroup of $\mathbf{G}$. Then there exist a rational representation $(\rho,V)$ of $\mathbf{G}$ and a line $D$ in $V$ such that $\mathbf{H}=\{g\in \mathbf{G}; g\cdot D=D \}$. \end{theo} In the particular case where the subgroup $\mathbf{H}$ is reductive, that is, contains no non-trivial connected normal unipotent subgroup, we have the following stronger statement: \begin{prop}\cite{borel} Let $\mathbf{H}$ be a proper reductive subgroup of $\mathbf{G}$. Then there exist a rational real representation $(\rho,V)$ of $\mathbf{G}$ and a non zero vector $x$ of $V$ such that $\mathbf{H}=\{g\in \mathbf{G}; g\cdot x=x \}$.\label{ch2reductive}\end{prop} The converse is true and is a theorem of Matsushima \cite{matsu} (see also \cite{invariant} for a recent proof). \section{Preliminaries on algebraic groups} \label{ch2subprel} \subsection{The Cartan decomposition} \label{ch2subcartan} Let $\mathbf{G}$ be a semi-simple algebraic group defined over $\R$, $G$ its group of real points, $\mathbf{A}$ a maximal $\R$-split torus of $\mathbf{G}$, $\mathbf{X(A)}$ the group of $\R$-rational characters of $\mathbf{A}$, $\Delta$ the system of roots of $\mathbf{G}$ restricted to $\mathbf{A}$, $\Delta^+$ the system of positive roots (for a fixed order) and $\Pi$ the system of simple roots (roots that cannot be written as a product of two positive roots).\\ We consider the natural order on $\mathbf{X(A)}$: $\chi_1>\chi_2$ if and only if there exist non negative integers $\{n_\alpha; \alpha\in \Pi\}$, with at least one non zero $n_\alpha$, such that $\frac{\chi_1}{\chi_2}=\prod_{\alpha\in \Pi} {\alpha^{n_\alpha}}$.\\ Finally define $A^{\circ}=\{a\in A; \chi(a)\in ]0;+\infty[\;\forall \chi \in \mathbf{X(A)}\}$ and set $$A^{+}=\{a\in A^{\circ}\;;\;\alpha(a)\geq 1\;;\;\forall \alpha\in \Pi\}$$ Then there exists a compact subgroup $K$ of $G$ such that
$$G=KA^+K\;\;\;\;\;\textrm{(Cartan or $KAK$ decomposition)}$$ (see \cite[Chapter 9, Theorem 1.1]{helg}). We denote by $\mathfrak{a}$ the Lie algebra of $\mathbf{A}$. The exponential map is a bijection between $\mathfrak{a}$ and $A^{\circ}$, and we denote by $\mathfrak{a}^+$ the Weyl chamber, i.e. the preimage of $A^+$ under the exponential map. We denote by $m$ the corresponding Cartan projection $m: G \longrightarrow \mathfrak{a}^+$. \subsection{Rational representations of algebraic groups} References for this section are \cite{Humphreys} and \cite{Tits1}. If $(\rho,V)$ is an $\R$-rational representation of $\mathbf{G}$, then $\chi\in X(\mathbf{A})$ is called a weight of $\rho$ if it is a common eigenvalue of $\mathbf{A}$ under $\rho$. We denote by $V_\chi$ the weight space associated to $\chi$, which is $V_\chi=\{x\in V; \rho(a)x=\chi(a)x\;\forall\;a\in \mathbf{A}\}$. The following holds: $V=\oplus_{\chi\in X(\mathbf{A})}{V_\chi}$. Irreducible representations $\rho$ are characterized by a particular weight $\chi_\rho$, called the highest weight, which has the following property: every weight $\chi$ of $\rho$ different from $\chi_\rho$ is of the form $\chi=\frac{\chi_\rho}{\prod_{\alpha\in \Pi}{\alpha^{n_\alpha}}}$, where $n_\alpha \in \N$ for every simple root $\alpha$. The $V_{\chi}$'s are not necessarily of dimension $1$. However, when $\mathbf{G}$ is $\R$-split, $V_{\chi_\rho}$ is one dimensional. Recall that an element $\gamma\in GL_d(\R)$ is called proximal if it has a unique eigenvalue of maximal modulus. A representation $\rho$ of a group $\Gamma$ is said to be proximal if the group $\rho(\Gamma)$ has a proximal element. Thus, we obtain: \begin{lemme}\label{ch2split} Every $\R$-rational irreducible representation of an $\R$-split semi-simple algebraic group is proximal. \end{lemme} Let $\Theta_\rho=\{\alpha\in \Pi;\;\chi_\rho/\alpha \;\textrm{is a weight of $\rho$}\}$. \begin{prop}\cite{Tits1} For every $\alpha\in \Pi$, let $w_\alpha$ be the fundamental weight associated to $\alpha$.
Then, there exists an $\R$-rational representation $(\rho_\alpha,V_\alpha)$ of $\mathbf{G}$ whose highest weight is a power of $w_\alpha$ and whose highest weight space $V_{w_\alpha}$ is one-dimensional. Moreover, $\Theta_{\rho_\alpha}=\{\alpha\}$. \label{ch2tits}\end{prop} We record below a basic fact about root systems (\cite{bourbaki}). \begin{prop} Every root $\alpha\in \Delta$ is of the form: $\alpha=\prod_{\beta\in \Pi}{w_\beta^{n_\beta}}$, with $n_\beta\in \Z$ for every $\beta\in \Pi$. \label{ch2funda}\end{prop} \paragraph{Mostow theorem}\cite[\S 2.6]{Mostow1} \label{ch2mostow} Let $G=KAK$ be the Cartan decomposition of $G$ and $(\rho,V)$ an irreducible rational real representation of $\mathbf{G}$. There exists a scalar product on $V$ for which the elements of $\rho(K)$ are orthogonal and those of $\rho(A)$ are symmetric. In particular, the weight spaces are orthogonal with respect to it. The norm on $V$ induced by this scalar product is called a ``good norm''. \subsection{Standard parabolic subgroups and their representations} \label{ch2subparabolic} A reference for this section is \cite[\S 4]{gpesred}.\\ For every subset $\theta\subset \Pi$, denote $\mathbf{A}_\theta=\{a\in \mathbf{A}; \alpha(a)=1 \;\forall \alpha \in \theta\}$ and let $\mathbf{L}_\theta$ be its centralizer in $\mathbf{G}$. Denote by $\mathfrak{g}$ the Lie algebra of $\mathbf{G}$ and for every $\alpha\in \Delta$ denote by $\mathbf{U}_\alpha$ the unique closed unipotent subgroup of $\mathbf{G}$ with Lie algebra $\mathfrak{u}_{\alpha}=\mathfrak{g}_{\alpha}\oplus \mathfrak{g}_{2\alpha}$, where $\mathfrak{g}_{i\alpha} =\{X \in \mathfrak{g}; Ad(a)(X) = \alpha(a)^{i} X \;\forall a\in \mathbf{A}\}$.\\ Let $[\theta]\subset \Delta$ be the set of roots which can be written as an integral combination of roots of $\theta$.
Denote by $\mathbf{U}_{\theta}$ the unipotent closed subgroup of $\mathbf{G}$ whose Lie algebra is $$\mathfrak{u}_{\theta}=\bigoplus_{\alpha \in \Delta^+\setminus([\theta]\cap \Delta^+)}{\mathfrak{u}_\alpha}$$ We set $$\mathbf{P}_\theta=\mathbf{L}_\theta \mathbf{U}_\theta$$ This is the standard parabolic subgroup associated to $\theta$. Its Lie algebra is $$\mathfrak{p}_\theta=\mathfrak{z}\oplus \bigoplus_{\alpha \in \Delta^+\cup[\theta]}{\mathfrak{u}_\alpha}$$ where $\mathfrak{z}$ is the Lie algebra of $\mathbf{Z}$, the centralizer of $\mathbf{A}$ in $\mathbf{G}$. Notice that $\mathbf{P}_{\Pi}=\mathbf{G}$.\\ The following lemma will be useful in the proof of Theorem \ref{ch2tr3}. \begin{lemme} Let $(\rho,V)$ be a rational irreducible representation of $\mathbf{G}$ and consider $\theta \subset \Pi$. The line generated by every non zero vector $x$ in the highest weight space of $V$ is fixed by $\mathbf{P}_\theta$ if $\beta\not\in \Theta_\rho$ for every $\beta\in\theta$. In particular, the line generated by any highest weight vector $x_\alpha$ of the representation $(\rho_\alpha,V_\alpha)$ defined in Proposition \ref{ch2tits} is fixed by the standard parabolic $\mathbf{P}_\theta$ whenever $\alpha\not\in \theta$. \label{ch2lemmepar}\end{lemme} \begin{proof}[Proof] Let $\chi_\rho$ be the highest weight of $\rho$. We look at the action of the Lie algebra $\mathfrak{g}$ on $V$. It is clear that $\mathfrak{g}_{-\beta}\cdot v \in V_{\chi_\rho/\beta}$ for every $v\in V_{\chi_\rho}$ and $\beta\in \Pi$ (weights being written multiplicatively). If $\beta\not\in \Theta_\rho$, then $\chi_\rho/\beta$ is not a weight of $\rho$ and hence $V_{\chi_\rho/\beta}=\{0\}$. The last part of the lemma follows by recalling that the representation $\rho_\alpha$ defined in Proposition \ref{ch2tits} satisfies $\Theta_{\rho_\alpha}=\{\alpha\}$. \end{proof} \section{Random matrix products - convergence in the Cartan decomposition} \label{ch2subproba} We will use in this section standard results from the theory of random matrix products.
A nice reference is the book of Bougerol and Lacroix \cite{bougerol}. \subsection{Preliminaries} \label{ch2subprelprob} In the following, $G=\mathbf{G}(\R)$ is the group of real points of a semi-simple connected algebraic group, $\Gamma$ a Zariski dense subgroup of $G$, $\mu$ a probability measure whose support generates $\Gamma$, $(\rho,V)$ an irreducible $\R$-rational representation of $\mathbf{G}$ and $\chi_\rho$ its highest weight. Let $\{X_n;n\in \N^*\}$ be independent random variables on $\Gamma$ with the same law $\mu$ and $S_n=X_n\cdots X_1$ the associated random walk. Fix a Cartan decomposition of $G$ such that the section $G\rightarrow KAK$ is measurable and denote, for every $n\in \N^*$, by $S_n=K_nA_nU_n$ the corresponding decomposition of $S_n$. If $\theta$ is a probability measure on $GL_d(\R)$, we denote by $G_\theta$ the smallest closed subgroup containing the support of $\theta$. \\ We consider the basis of weights of $V$ and the ``good norm'' given by Mostow theorem (Paragraph \ref{ch2mostow}). It induces a $K$-invariant norm on $\bigwedge^2 V$ and hence a $K$-invariant distance $\delta(\cdot,\cdot)$ on the projective space $P(V)$, called the Fubini-Study distance, defined by: $\delta([x],[y])=\frac{||x \wedge y ||}{||x||\, ||y||}; [x],[y]\in P(V)$.\\ We fix an orthonormal basis on each weight space $V_\chi$, and for an element $g\in End(V)$, $g^t$ will be the transpose matrix of $g$ in this basis.\\ $G$ is isomorphic to a Zariski closed subgroup of $SL_d(\R)$ for some $d\in \N^*$ (see \cite{Humphreys}). Let $i$ be such an isomorphism. We say that $\mu$ has a moment of order one (resp. an exponential moment) if for some (or equivalently any) norm on $End(\R^d)$, $\int{\log||i(g)||d\mu(g)}< \infty$ (resp. for some $\tau>0$, $\int{||i(g)||^\tau d\mu(g)}< \infty$). Lemma \ref{ch2exponential} below shows that this is indeed a well-defined notion, i.e. the existence of a moment of order one or an exponential moment is independent of the embedding.
\begin{lemme}\label{ch2exponential} Let $G\subset SL(V)$ be the $\R$-points of a semi-simple algebraic group $\mathbf{G}$ and $\rho$ a finite dimensional $\R$-algebraic representation of $\mathbf{G}$. If $\mu$ has a moment of order one (resp. an exponential moment), then the image of $\mu$ under $\rho$ also has a moment of order one (resp. an exponential moment). \end{lemme} \begin{proof}[Proof] Each matrix coefficient $(\rho(g))_{i,j}$ of $\rho(g)$, for $g\in G$, is a fixed polynomial in terms of the matrix coefficients of $g$. Since, for the canonical norm, $||g||\geq 1$ for every $g\in G$, we see that there exists $C>0$ such that $||\rho(g)||\leq ||g||^C$ for every $g\in G$. This suffices to show the lemma. \end{proof} Let us recall some definitions and well-known results. \begin{defi} A subgroup $\Gamma$ of $GL_d(\R)$ is called strongly irreducible if and only if the identity component of its Zariski closure does not fix a proper subspace. It is called proximal if it contains a proximal element (see Section \ref{ch2subprel}). \label{defdef}\end{defi} The key result which prevents our results from being generalized to an arbitrary local field is the Goldsheid-Margulis theorem, which we recall here: \begin{theo}\cite{Margulis} Let $d\geq 2$. A subgroup of $GL_d(\R)$ is strongly irreducible and proximal if and only if its Zariski closure is. \label{ch2margulis}\end{theo} \subsection{Geometry of the Lyapunov vector} First, let us recall the definition of the Lyapunov exponent. \begin{dfprop}[Lyapunov exponent] If $\mu$ is a probability measure on $GL_d(\R)$, $||\cdot||$ a matrix norm on $End(\R^d)$ and $S_n=X_n \cdots X_1$ the corresponding random walk, then the Lyapunov exponent $L_{\mu}$ is $L_{\mu}=\lim \frac{1}{n} \E(\log ||S_n||)$, which exists by a simple application of the subadditive lemma.\\ When $\mu$ has a moment of order one, the following a.s. limit holds: $L_{\mu}=\lim \frac{1}{n} \log ||S_n||$. It can be proved via the Kingman subadditive ergodic theorem \cite{Kingman}.
\label{ch2liapou}\end{dfprop} A useful result will be the following. \begin{prop}\cite[Corollary 4 page 53]{bougerol} Let $\theta$ be a probability measure on $GL_d(\R)$ with a moment of order one and such that $G_\theta:=\overline{\langle Supp(\theta) \rangle }$ is strongly irreducible. Then for every sequence $\{x_n;n\geq 0\}$ of vectors in $\R^d$ converging to some non zero vector $x\in \R^d$, $\frac{1}{n} \log ||S_n x_n|| \underset{n \rightarrow \infty}{\overset{\textrm{a.s.}}{\longrightarrow}} L_\theta$. \label{ch2bougerol}\end{prop} \begin{remarque} In \cite{bougerol}, the condition is made on the smallest closed sub-semigroup $\Gamma_\theta$ containing the support of $\theta$. There is no difference between taking $\Gamma_\theta$ or $G_\theta$ because they have the same Zariski closure. Hence if one is strongly irreducible then the other satisfies the same property. This remark also applies to later applications where proximality is involved (see for example the statement of Theorem \ref{ch2hausdorff}). This is due to the Goldsheid-Margulis theorem (Theorem \ref{ch2margulis}), which is special to the field of real numbers.\end{remarque} \begin{dfprop}[Lyapunov vector] Suppose that $\mu$ has a moment of order one. Then the Lyapunov vector is the constant vector in the Weyl chamber $\mathfrak{a}^+$ of $G$ (see Section \ref{ch2subcartan}) defined as the following a.s. limit: $$\frac{1}{n} m(S_n) \underset{n \rightarrow \infty}{\overset{\textrm{a.s}}{\longrightarrow}} Liap(\mu)$$ where $m$ is the Cartan projection (Section \ref{ch2subcartan}). \label{ch2liapuvector}\end{dfprop} \begin{proof}[Proof] Let $\alpha\in \Pi$. Express $\alpha$ in terms of the fundamental weights (Proposition \ref{ch2funda}): $\alpha=\prod_{\beta \in \Pi}{w_\beta^{n_\beta}}$ where $n_\beta\in \Z$ for every $\beta \in \Pi$.
For every $\beta\in \Pi$, consider the rational real irreducible representation $(\rho_\beta, V_\beta)$ given by Proposition \ref{ch2tits} and a good norm on $V_\beta$ (Paragraph \ref{ch2mostow}). By the definition of $\rho_\beta$, there exists an integer $l_\beta$ such that for every $n\in \N^*$, $||\rho_\beta(S_n)||=w_\beta^{l_\beta}(A_n)$. Hence, \begin{equation}\label{ch2nadet}\frac{1}{n}\log \;\alpha(A_n)=\sum_{\beta\in \Pi}{\frac{n_\beta}{l_\beta}\; \frac{1}{n}\log \;||\rho_\beta(S_n)||}\end{equation} By Definition/Proposition \ref{ch2liapou}, $\lim \frac{1}{n}\log \;\alpha(A_n) \overset{a.s.}{=} \sum_{\beta\in \Pi}{\frac{n_\beta}{l_\beta}\; L_{\rho_{\beta}(\mu)}}$. Thus $Liap(\mu)$ is well defined. \end{proof} \begin{theo}\cite{Guivarch} Suppose that $\mu$ has a moment of order one. Then the Lyapunov vector $Liap(\mu)$ belongs to the interior of the Weyl chamber $\mathfrak{a}^+$, i.e. $\alpha\left(Liap(\mu) \right)>0$ $\forall \alpha\in \Pi$. \label{ch2guimo} \end{theo} \begin{remarque} When the local field is not $\R$, the Lyapunov vector does not necessarily belong to the interior of $\mathfrak{a}^+$. The reason is that the Goldsheid-Margulis theorem (Theorem \ref{ch2margulis}) is valid only over the real field. \end{remarque} For the reader's convenience, we include a proof of Theorem \ref{ch2guimo}. \begin{proof}[Proof] Without loss of generality, one can suppose $\Omega=G^{\N^*}=\{w=(w_i)_{i\in \N^*}; w_i\in G\}$, $\p$ the probability measure for which the coordinates $w_i$ are independent with law $\mu$ and $\mathcal{F}$ the $\sigma$-algebra generated by the coordinate maps $w_i$.\\ We want to show that for every $\alpha\in \Pi$, $l:=\lim\frac{1}{n} \log \;\alpha(A_n)>0$. By equation (\ref{ch2nadet}), $l$ is the following constant: $l=\sum_{\beta\in \Pi}{\frac{n_\beta}{l_\beta}\; L_{\rho_\beta(\mu)} }$.
Let $X=\prod_{\beta\in \Pi}{P(V_\beta)}$ and let $s$ be the map on $G\times X$ defined by: $$s\left(g, ([x_\beta])_{\beta\in \Pi}\right) = \sum_{\beta\in \Pi}{\frac{n_\beta}{l_\beta}\; \log \;\frac{||\rho_\beta(g)x_\beta||}{||x_\beta||}}$$ It is immediate that $s$ is an additive cocycle on $G\times X$ for the natural action of $G$ on $X$. Since $X$ is compact, one can choose a $\mu$-invariant measure $\nu$ on $X$.\\ Consider the dynamical system $E=\Omega\times X$, the distribution $\eta=\p \otimes \nu$ on $E$ and the shift $\theta: E \rightarrow E$, $\left((g_0,\cdots),x\right) \longmapsto \left((g_1,\cdots),g_0 \cdot x\right)$. Since $\nu$ is $\mu$-invariant, $\eta$ is $\theta$-invariant. We extend the domain of definition of $s$ from $G\times X$ to $G^\N \times X$ by setting $s(\omega,x):=s(g_0,x)$ if $\omega=(g_0,\cdots)$. Since $\mu$ has a moment of order one, Lemma \ref{ch2exponential} shows that the same holds for the image probability measure $ \rho_\beta( \mu)$ for every $\beta\in \Pi$. Hence $s\in L^1(\eta)$. In consequence, we can apply the ergodic theorem (see \cite[Theorem 6.21]{brei}), which shows that $\frac{1}{n}{\sum_{i=0}^{n-1} {s\circ \theta^i (\omega,x)}}$ converges for $\eta$-almost every $(\omega,x)$ to a random variable $Y$ whose expectation is $\iint{s(g,x)d\mu(g)d\nu(x)}$. Since $s$ is a cocycle,\; $s\left(S_n(\omega),x\right)={\sum_{i=0}^{n-1} {s\circ \theta^i (\omega,x)}}$. Hence, $$ \lim_{n\rightarrow\infty}{\frac{1}{n} s\left(S_n(\omega),x\right)} = Y\;\;\;\;\;;\;\;\;\; \E_\eta(Y)=\iint{s(g,x)d\mu(g)d\nu(x)}$$ But using Proposition \ref{ch2bougerol}, we see that a.s. $Y=l$, so that $$l= \iint{s(g,x)d\mu(g)d\nu(x)}$$ By Lemma \ref{ch2ergogui} below, $l$ is positive if for $\eta$-almost every $(\omega,x)$, $s(S_n(w),x) \underset{n\rightarrow\infty}{\longrightarrow} +\infty$.
Again by Proposition \ref{ch2bougerol}, for $\eta$-almost every $(w,x)$, $s(S_n(w),x)$ has the same behavior at infinity as the $\p$-almost everywhere behavior of $$\sum_{\beta\in \Pi}{\frac{n_\beta}{l_\beta}\; \log \;{||\rho_\beta(S_n)||}}= \log\;\alpha(A_n)$$ In consequence, it suffices to show that $\alpha(A_n)\underset{n\rightarrow\infty}{\overset{\textrm{a.s.}}{\longrightarrow}}+\infty$. Indeed, the representation $\rho_\alpha$ is strongly irreducible because $G$ is Zariski connected. By Zariski density of $\Gamma$, the same holds for $\rho_\alpha(\Gamma)$. Moreover, by the Goldsheid-Margulis theorem (Theorem \ref{ch2margulis}), $\rho_\alpha(\Gamma)$ is also proximal. By \cite[Theorem 3.1 page 50]{bougerol}, a.s. every limit point of $\frac{\rho_\alpha(S_n)}{||\rho_\alpha(S_n)||}$ is a rank one matrix. Hence, if $\rho_\alpha(A_n)=diag\left(a_1(n),\cdots,a_d(n)\right)$, then $a_2(n)/a_1(n)$ converges a.s. to zero. But $\Theta_{\rho_\alpha}=\{\alpha\}$, so that $\alpha(A_n)=a_1(n)/a_2(n)\;\underset{n\rightarrow\infty}{\longrightarrow}+\infty$. \end{proof} \begin{lemme}\cite{dekk} Let $G$ be a group, $X$ a $G$-space, $(X_n)_{n\in \N^*}$ a sequence of independent elements of $G$ with distribution $\mu$ and $s$ an additive cocycle on $G\times X$. Suppose that $\nu$ is a $\mu$-invariant probability measure on $X$ such that:\\ (i) \; $\iint {s^+ (g,x) d\mu(g) d\nu(x)}< \infty$ \\ (ii)\; For $\p \otimes \nu$-almost every $(w,x)$, \; $\lim_{n \rightarrow \infty} {s \left(X_n(w)\cdots X_1(w),x\right ) }=+ \infty $. \\ Then $s$ is in $L^1 (\p \otimes \nu)$ and $\iint{s(g,x) d\mu (g) d\nu(x)}>0$.\label{ch2ergogui}\end{lemme} The following lemma describes the geometry of the Lyapunov vector inside the Weyl chamber. \begin{lemme} Let $\Gamma$ be a Zariski dense subgroup of $G$.
Then for every finite union $F$ of hyperplanes in $\mathfrak{a}$ (see Section \ref{ch2subcartan} for the definition of $\mathfrak{a}$), there exists a probability measure $\mu$ on $\Gamma$ with an exponential moment whose support generates $\Gamma$ and whose Lyapunov vector $Liap(\mu)$ does not lie in $F$. In consequence, if $(V_1,\rho_1),\cdots, (V_r,\rho_r)$ are pairwise non isomorphic irreducible representations of $\mathbf{G}$, then one can exhibit a probability measure $\mu$ whose support generates $\Gamma$ and a permutation $\sigma$ of $\{1,\cdots, r\}$ such that $L_{\rho_{\sigma(1)}(\mu)}>\cdots>L_{\rho_{\sigma(r)}(\mu)}$ (see Definition \ref{ch2liapou}). \label{ch2lyapunovcone}\end{lemme} \begin{proof}[Proof] We recall the definition of the Jordan projection. Every element $g\in G$ has a decomposition $g=g_e g_h g_u$ with $g_e$ elliptic (i.e. contained in a compact subgroup), $g_h$ hyperbolic (i.e. conjugate to an element $a(g)$ in $A^+$) and $g_u$ unipotent commuting with $g_h$. The Jordan projection $j: G \longrightarrow \mathfrak{a}^+$ is defined by $j(g)=\log a(g)$. \\ Y. Benoist proved in \cite{cone1} that the smallest cone ${l}_\Gamma$ in $\mathfrak{a}^+$ containing $j(\Gamma)$ has a non empty interior. Moreover, he showed in \cite{cone2} that $j(\Gamma)$ fills ${l}_\Gamma$ completely, in the sense that every open cone in ${l}_\Gamma$ contains infinitely many elements of $j(\Gamma)$. We deduce that $j(\Gamma)$ cannot be contained in any finite union of hyperplanes in $\mathfrak{a}$. \\ Now let $F$ be such a finite union of hyperplanes and $g\in \Gamma$ such that $j(g)\not\in F$. The spectral radius formula shows that $\frac{1}{n} m(g^n) \underset{n \rightarrow \infty}{\longrightarrow } j(g)\not\in F$, where $m$ is the Cartan projection (Section \ref{ch2subprel}).
This is equivalent to saying that the Dirac probability measure $\mu=\delta_g$ supported on $\{g\}$ satisfies $Liap(\mu)\not\in F$.\\ Let us perturb $\mu$ on $\Gamma$: define a sequence of probability measures $\mu_n$ with an exponential moment, whose supports generate $\Gamma$, converging weakly to $\mu$; for example $\mu_n=(1-1/n)\mu +\eta/n$ where $\eta$ is a probability measure with an exponential moment whose support generates $\Gamma$. It is easy to see (see for example \cite[Corollary 7.3, page 72-73]{bougerol}) that the Lyapunov vector depends continuously on the probability measure, so that $Liap(\mu_n)$ converges to $Liap(\mu)$. Hence, for $n$ large enough, $\mu_n$ is a probability measure on $\Gamma$ with $Liap(\mu_n)\not\in F$.\\ Now we prove the last part of the lemma. Let $\rho_1,\cdots,\rho_r$ be $r$ rational real irreducible representations of $\mathbf{G}$ and denote by $\chi_{\rho_i}$ the highest weight of $\rho_i$. Recall that the set $\Pi$ of simple roots is a basis of the space $X(A)$ of the rational characters of $A$. Hence for every $i=1,\cdots r$, there exist real numbers $\{n_{i,\alpha};\alpha\in \Pi\}$, at least one of which is non zero, such that: $$\log{\chi_{\rho_i}}=\sum_{\alpha\in \Pi}{n_{i,\alpha}{\log\alpha}}$$ For every $i<j$, denote by $H_{i,j}$ the following hyperplane of $\mathfrak{a}$: $$H_{i,j}=\{x\in \mathfrak{a}; \sum_{\alpha\in \Pi}{n_{i,\alpha}\log{\alpha(x)}}=\sum_{\alpha\in \Pi}{n_{j,\alpha}\log{\alpha(x)}}\}$$ Set $F=\cup_{i<j}{H_{i,j}}$. Applying the first part of the lemma shows that there exists a probability measure $\mu$ on $\Gamma$ with an exponential moment such that $Liap(\mu)\not\in F$. This ends the proof because for every $i=1,\cdots,r$, $$L_{\rho_i(\mu)}=\lim \frac{1}{n} \log{ \chi_{\rho_i}(A_n)}$$ \end{proof} \subsection{Estimates in the $A$-part} The following theorem gives an estimate for the $A$-part of the Cartan decomposition of the random walk.
It can be proved by the same techniques as in \cite{aoun}, where the theory of random matrix products is treated over an arbitrary local field. However, since we are working here in $\R$, we will use another route and apply the large deviation theorem of Le Page \cite{Page} in $GL_d(\R)$, which we recall below. First, let us state our result: \begin{theo}\label{ch2ratioA} [Ratio in the $A$-component] Suppose that $\mu$ has an exponential moment. Then for every $\epsilon>0$ and every non-zero weight $\chi$ of $\rho$ distinct from $\chi_\rho$, \begin{equation}\label{ch2mamisallile}\displaystyle \limsup_{n\rightarrow \infty} \big[\E [( \frac{\chi(A_n)}{\chi_{\rho}(A_n)})^{\epsilon} ]\big]^{\frac{1}{n}}< 1\end{equation} Moreover, if $\rho_1$, $\rho_2$ are two irreducible rational real representations of $\mathbf{G}$ such that $L_{\rho_1(\mu)}>L_{\rho_2(\mu)}$ (Definition \ref{ch2liapou}), then for every $\epsilon>0$: \begin{equation} \displaystyle \limsup_{n\rightarrow \infty} \big[\E [( \frac{\chi_{\rho_2}(A_n)}{\chi_{\rho_1}(A_n)})^{\epsilon} ] \big]^{\frac{1}{n}}< 1\label{ch2lkle}\end{equation} \end{theo} Before giving the proof, we recall Le Page's large deviation theorem in $GL_d(\R)$: \begin{theo}\cite{Page}[Large deviations in $GL_d(\R)$] Let $\mu$ be a probability measure on $GL_d(\R)$ having an exponential moment and such that $G_\mu$ is strongly irreducible. Let $S_n=X_n\cdots X_1$ be the corresponding random walk. Then for every $\epsilon>0$, $$\displaystyle \limsup_{n\rightarrow \infty} \big[\p \left(\big|\frac{1}{n} \log ||S_n|| - L_\mu \big| >\epsilon \right) \big]^{\frac{1}{n}}< 1$$ \label{ch2page} A similar estimate holds for $\frac{1}{n}\log ||S_n x ||$ for every non-zero vector $x\in \R^d$.
\end{theo} \begin{proof}[Proof of Theorem \ref{ch2ratioA}] For every $\beta\in \Pi$, a large deviation inequality similar to that of Theorem \ref{ch2page} holds for the quantity $\frac{1}{n} \log ||\rho_\beta(S_n)||$, because $\rho_\beta$ is strongly irreducible and $\rho_\beta(\mu)$ has an exponential moment by Lemma \ref{ch2exponential}. Hence, by equation (\ref{ch2nadet}), a large deviation inequality holds for $\frac{1}{n} \log \alpha(A_n)$ for every $\alpha\in \Theta$. Since $\chi_\rho/\chi= \prod_{\alpha\in \Pi}{\alpha^{n_\alpha}}$ for non-negative integers $\{n_\alpha; \alpha\in \Pi\}$, we get, for $\lambda= - \sum_{\alpha\in \Pi}{n_\alpha\;\lim_{n\rightarrow\infty}\frac{1}{n} \log\;\alpha(A_n)}$ and for every $\epsilon>0$, \begin{equation}\label{ch2ma3ash}\displaystyle \limsup_{n\rightarrow \infty} \big[\p \left( \big|\frac{1}{n} \log\; \frac{\chi(A_n)}{\chi_\rho(A_n)} - \lambda \big| >\epsilon \right) \big]^{\frac{1}{n}}< 1\end{equation} By Theorem \ref{ch2guimo}, $\lambda<0$. Hence, by relation (\ref{ch2ma3ash}), there exist $\rho_1,\rho_2\in ]0,1[$ such that for all large $n$: $\p \left( \frac{\chi(A_n)}{\chi_\rho(A_n)} \geq \rho_1^n \right) \leq \rho_2^n $. Since $\chi(a)\leq \chi_\rho(a)$ for every $a\in A^+$, we get for every $\epsilon>0$, $\E \Big[\left( \frac{\chi(A_n)}{\chi_\rho(A_n)} \right)^\epsilon\Big] \leq \rho_1^{\epsilon n}+\rho_2^n$. This shows (\ref{ch2mamisallile}).\\ By the same large deviation techniques, one can show (\ref{ch2lkle}).\end{proof} \subsection{Estimates in the $K$-parts} Recall that we fix a measurable section of the Cartan decomposition $G \rightarrow KAK$ and that the corresponding decomposition of the random walk $S_n$ is denoted by $S_n=K_nA_nU_n$. Our next task is to prove the following theorem, which gives the convergence of the $K$-parts of the Cartan decomposition of the random walk.\\ This result was proved in our previous work \cite[Theorem 4.33]{aoun} over an arbitrary local field.
We give here another proof special to archimedean fields. \begin{theo}\label{ch2conv}[Exponential convergence of the $K$-components] Suppose that $\mu$ has an exponential moment and that $\rho$ is proximal. Let $v_\rho$ be a highest weight vector. Then there exists a random variable $Z$ on the projective space $P(V)$ such that for every $\epsilon>0$: $$\displaystyle \limsup_{n\rightarrow \infty}\big[\E\left( {\delta (U_n^{-1}\cdot [v_\rho],Z) }^{\epsilon} \right)\big]^{\frac{1}{n}} < 1$$ Here, for $M\in GL(V)$, we denote by $M^t$ the transpose matrix of $M$ with respect to the basis of weights. We recall that $\delta$ is the Fubini-Study distance (see the beginning of Section \ref{ch2subprelprob}). A similar estimate holds if we replace $U_n$ with $k(X_1\cdots X_n)$, where $k(g)$ is the $K$-component of $g\in G$ for the fixed $KAK$ decomposition in $G$. \end{theo} \begin{proof}[Proof] We recall that by Mostow's theorem there exists a scalar product $\langle\cdot,\cdot\rangle$ on $V$ such that the weight spaces are orthogonal and $K$ acts by isometries, and that we choose an orthonormal basis in each weight space, so that $\rho(k)\rho(k)^t$ is the identity for every $k\in K$. \\ For every $n\in \N^*$ and every non-zero weight $\chi$, we denote by $Q_\chi(n)$ the orthogonal projection on the space $U_n^{-1}\cdot V_\chi= \rho(U_n)^tV_\chi$. In particular, $Q_{\chi_\rho}(n)$ is the projection on the line $\R U_n^{-1}\cdot v_{\rho}$, where $v_\rho$ is a highest weight vector (this space is one-dimensional because $\rho$ is proximal).
We will show that for every $\epsilon>0$ small enough: \begin{equation} \displaystyle \limsup_{n\rightarrow \infty}\big[\E(||Q_{\chi_\rho}(n) - Q_{\chi_{\rho}}(n+1)||^{\epsilon})\big]^{\frac{1}{n}}< 1\label{ch2yara}\end{equation} This ends the proof because if $x$ and $y$ are two non-zero vectors of $V$ and $Q_x$ and $Q_y$ are the orthogonal projections on the lines $\R x$ and $\R y$, then $||Q_x - Q_y||\geq \frac{1}{2}\delta([x],[y])$, so that (\ref{ch2yara}) would imply, by the Markov inequality and the Borel-Cantelli lemma, that $\{U_n^{-1}\cdot [v_\rho];\;n\geq 0\}$ is a.s. a Cauchy sequence in the projective space $P(V)$. Hence it converges to some random variable $Z$. By Fatou's lemma and the triangle inequality, we get for some $t=t(\epsilon)\in ]0,1[$ and all large $n$: $\E\left({\delta (Z,U_{n}^{-1}\cdot [v_\rho])}^\epsilon\right)\leq \liminf_{m \rightarrow \infty}\;\E\left({\delta (U_{m}^{-1}\cdot [v_\rho],U_{n}^{-1}\cdot [v_\rho])}^\epsilon\right)\leq {t^n}$. \\ Now we prove (\ref{ch2yara}). For every $n\in \N^*$, $\sum_{\chi}{Q_\chi(n)}$ is the identity operator, where the sum is over all the non-zero weights of $(\rho,V)$. Moreover, orthogonal projections are self-adjoint, so that $||PQ||=||QP||$ for any two orthogonal projections $P$ and $Q$; hence: $||Q_{\chi_\rho}(n) - Q_{\chi_\rho}(n+1)||\leq \sum_{\chi\neq \chi_\rho} { ||Q_{\chi_\rho}(n+1)Q_{\chi}(n)||+ ||Q_{\chi_\rho}(n)Q_{\chi}(n+1)|| }$. \\ Fix a weight $\chi\neq \chi_\rho$. First we show that $\E(||Q_{\chi_\rho}(n+1)Q_{\chi}(n)||^\epsilon)$ is sub-exponential for every $\epsilon>0$ small enough. This amounts to proving that there exists $\eta\in ]0,1[$ such that for all large $n$: $$\E \left( \big[\displaystyle{\sup_{x \in U_n^{-1}\cdot V_\chi; ||x||=1}{|Q_{\chi_\rho}(n+1) (x)|}}\big]^\epsilon \right) \leq \eta^n $$ Let $x\in U_n^{-1}\cdot V_\chi$ be of norm one and let $y_n=Q_{\chi_\rho}(n+1)(x)$ be the orthogonal projection of $x$ on the line $U_{n+1}^{-1}\cdot V_{\chi_\rho}$. Now we evaluate $||S_{n+1}\cdot x||$ in two different ways.
On the one hand, \begin{equation}||S_{n+1}\cdot x||=||X_{n+1}S_n\cdot x||\leq ||\rho(X_{n+1})|| \;||S_n\cdot x||=||\rho(X_{n+1})||\;\chi(A_n)\label{ch21}\end{equation} On the other hand, $\langle S_{n+1}\cdot (x-y_n),S_{n+1}\cdot y_n\rangle=\langle(x-y_n),S_{n+1}^tS_{n+1}\cdot y_n\rangle=0$ because $x-y_n \perp U_{n+1}^{-1}\cdot V_{\chi_\rho}$ and, if $y_n=U_{n+1}^{-1}\cdot z_n$ for some $z_n\in V_{\chi_\rho}$, then $S_{n+1}^tS_{n+1}\cdot y_n = U_{n+1}^{-1}A_{n+1}^2 U_{n+1}U_{n+1}^{-1}\cdot z_n={\chi_\rho} ^2(A_{n+1}) y_n\in U_{n+1}^{-1}\cdot V_{\chi_\rho}$. Hence \begin{equation}\label{ch22}||S_{n+1}\cdot x||=\sqrt{||S_{n+1}\cdot y_n||^2+||S_{n+1}\cdot (x-y_n)||^2} \geq ||S_{n+1}\cdot y_n||={\chi_\rho}(A_{n+1})\;||y_n||\end{equation} Combining (\ref{ch21}) and (\ref{ch22}) gives: $$\displaystyle{\sup_{x \in U_n^{-1}\cdot V_\chi; ||x||=1}{||Q_{\chi_\rho}(n+1) (x)||}} =||y_n|| \leq ||\rho(X_{n+1})|| \;\frac{\chi(A_{n})}{\chi_\rho(A_{n+1})}$$ But for every $p\in \N^*$, $||\rho(S_p)||=\chi_\rho(A_p)$ (because the norm on $V$ is $K$-invariant). Hence, \begin{equation}\label{ch2badama3}\displaystyle{\sup_{x \in U_n^{-1}\cdot V_\chi; ||x||=1}{||Q_{\chi_\rho}(n+1) (x)||}} =||y_n|| \leq ||\rho(X_{n+1})||\cdot ||\rho(X_{n+1}^{-1})|| \;\frac{\chi(A_{n})}{\chi_\rho(A_{n})}\leq ||\rho(X_{n+1})||^d \;\frac{\chi(A_{n})}{\chi_\rho(A_{n})} \end{equation} The last inequality follows from the relation $||g^{-1}||\leq ||g||^{d-1}$, which holds for every $g\in SL_d(k)$. By Lemma \ref{ch2exponential}, the probability measure $\rho ( \mu)$ has an exponential moment, so there exists $C\geq 1$ such that for all $\epsilon>0$ small enough, $\E (||\rho(X_{n+1})||^\epsilon)< C$. By Theorem \ref{ch2ratioA}, for every $\epsilon>0$ small enough there exists $\eta(\epsilon)\in ]0,1[$ such that for all $n$ large enough: $\E \Big[\left(\frac{\chi(A_{n})}{\chi_\rho(A_{n})}\right)^\epsilon\Big] \leq \eta(\epsilon)^n$.
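As a quick numerical sanity check (an illustrative aside, not part of the argument), the relation $\|g^{-1}\|\leq\|g\|^{d-1}$ used above follows from the fact that the singular values $a_1\geq\cdots\geq a_d$ of a matrix with $|\det(g)|=1$ multiply to $1$, whence $\|g^{-1}\|=1/a_d=a_1\cdots a_{d-1}\leq a_1^{d-1}$. A minimal Python sketch verifying it on random matrices normalised to have determinant of modulus one:

```python
import numpy as np

# Illustrative check (not part of the argument) of ||g^{-1}|| <= ||g||^{d-1}
# in operator norm when |det(g)| = 1: the singular values a_1 >= ... >= a_d
# of g multiply to 1, so ||g^{-1}|| = 1/a_d = a_1...a_{d-1} <= a_1^{d-1}.
rng = np.random.default_rng(0)
d = 4
violations = 0
for _ in range(200):
    m = rng.normal(size=(d, d))
    g = m / abs(np.linalg.det(m)) ** (1.0 / d)   # normalise so that |det(g)| = 1
    if np.linalg.norm(np.linalg.inv(g), 2) > np.linalg.norm(g, 2) ** (d - 1) * (1 + 1e-9):
        violations += 1
# violations stays 0 on every draw
```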
It suffices to apply the Cauchy-Schwarz inequality to (\ref{ch2badama3}) to obtain the sub-exponential behavior of $\E \left( ||Q_{\chi_\rho}(n+1)Q_{\chi}(n)||^\epsilon \right)$. To bound $\E(||Q_{\chi_\rho}(n)Q_{\chi}(n+1)||^\epsilon)$ we apply the same reasoning as above: we fix $x\in U_{n+1}^{-1}\cdot V_{\chi}$ of norm one and denote by $y_n$ its projection on $U_n^{-1}\cdot V_{\chi_\rho}$. Then, we evaluate $||S_n\cdot x||$ in two ways: $$||S_n\cdot x||=||X_{n+1}^{-1}S_{n+1}\cdot x||\leq {||\rho(X_{n+1}^{-1})||}\;\chi (A_{n+1})$$ $$||S_n\cdot x||=\sqrt{||S_n\cdot (x-y_n)||^2 + ||S_n \cdot y_n||^2}\geq ||S_n\cdot y_n||= ||y_n||\chi_{\rho}(A_n)$$ The end of the proof is the same as above. For the law of $Z$, see the following remark. \end{proof} \begin{remarque} \label{ch2conkak}[Identification of the limit] By the Markov inequality and the Borel-Cantelli lemma, Theorem \ref{ch2conv} shows that $U_n^{-1}[v_\rho]$ converges towards some random variable $Z$. In fact, the law of $Z$ is the unique $\rho(\mu)^t$-invariant probability measure on $P(V)$ (see for example \cite[Proposition 3.2 page 50]{bougerol}). \end{remarque} We will also use the following lemma: \begin{lemme} Let $\mu$ be a probability measure on $GL_d(\R)$ with an exponential moment such that the smallest closed group $G_\mu$ containing the support of $\mu$ is strongly irreducible and proximal. Then \begin{equation}\limsup_{n\rightarrow +\infty} \frac{1}{n} \log {\p \left(S_n [x] \in H \right)}< 0\label{decr}\end{equation} uniformly in $x\in \R^d \setminus \{0\}$ and in the hyperplanes $H$ of $\R^d$. \label{ch2hyperplane}\end{lemme} \begin{remarque}In \cite[Theorem 4.18]{aoun}, we proved the previous lemma over an arbitrary local field. We will give here a short proof since we are working in the field of real numbers.
\end{remarque} \begin{proof} With the assumptions of the lemma, $S_n[x]$ converges in law towards a random variable $Z$ whose law is the unique $\mu$-invariant probability measure $\nu$ on the projective space $P(\R^d)$. Moreover, the convergence holds with exponential speed in the following sense (see \cite[Chapter V, Theorem 2.5]{bougerol}): there exist $\alpha>0$ and $\rho\in ]0,1[$ such that for every $\alpha$-H\"older function $f$ on $P(\R^d)$, $$\Big|\E \big[f\left(S_n[x] \right)\big] - \int{f d\nu} \Big|\leq ||f||_\alpha \rho^n$$ where $$||f||_\alpha=\underset{[x]\neq [y]\in P(\R^d)}{\sup}{\frac{|f([x])-f([y])|}{\delta^\alpha([x],[y])}}$$ and $\delta(\cdot, \cdot)$ is the Fubini-Study distance on the projective space $P(\R^d)$. But the limiting measure $\nu$ has some regularity: its Hausdorff dimension is positive and it satisfies $$\underset{\textrm{$H$ hyperplane in $\R^d$}}{\sup} \nu \left(\{[x]\in P(\R^d); \delta([x],H)\leq \epsilon\} \right)\leq C \epsilon^\alpha$$ for some $C,\alpha>0$ (see \cite[Chapter VI, Corollary 4.2]{bougerol}). We can now easily conclude.\end{proof} Finally, we quote a useful result from \cite{aoun}. \begin{theo}\cite[Theorem 4.35]{aoun}[Asymptotic independence of the $K$-components] With the same assumptions as in Theorem \ref{ch2conv}, there exist \textbf{independent random variables} $Z$ and $T$ whose respective laws are the unique $\rho(\mu)^t$- (resp. $\rho(\mu)$-) invariant probability measures on $P(V)$, such that for every $\epsilon>0$, every $\epsilon$-H\"older (real) function $\phi$ on $P(V)\times P(V)$, some $\rho(\epsilon)\in ]0,1[$ and all large $n$ we have: $$\big|\E\left(\phi([U_n^{-1}\cdot v_{\rho}],[K_n\cdot v_\rho]) \right) - \E \left( \phi(Z,T) \right) \big|\leq ||\phi||_{\epsilon} \rho(\epsilon)^n $$ where \;\;\;\;\;$||\phi||_\epsilon= \underset{{ [x],[y],[x'],[y']}}{\sup} \;{\frac{\big|\phi([x],[x'])-\phi([y],[y'])\big|}{\delta ([x],[y])^{\epsilon}+\delta([x'],[y'])^{\epsilon}}}$.
\label{ch2independence}\end{theo} \section{Proof of the main theorems} \label{ch2subproof1} The proof of the main theorems presented in the introduction is based on the following \begin{theo} Let $\mathbf{G}$ be a semi-simple algebraic group defined over $\R$, $G$ its group of real points, let $(\rho,V)$ be a rational real representation of $\mathbf{G}$ whose irreducible sub-representations $(\rho_1,V_1),\cdots,(\rho_r,V_r)$ are pairwise non-isomorphic, and finally let $A\in End(V_1)\oplus\cdots\oplus End(V_r)$ be such that its projection on $End(V_1)$ is non-zero. Consider a probability measure $\mu$ on $G$ with an exponential moment and such that $G_\mu:=\overline{\langle Supp(\mu)\rangle}$ is Zariski dense in $G$. Denote by $\{S_n; n\geq 0\}$ the corresponding random walk. Assume that: \begin{enumerate} \item $\rho_1$ is proximal. \item $L_{\rho_1(\mu)}> L_{\rho_i(\mu)}$, $i=2,\cdots, r$ (see Definition \ref{ch2liapou}). \end{enumerate} Then for every $\epsilon>0$ there exists $\rho(\epsilon)\in ]0,1[$ such that for all large $n$: $$\p \Big(\big| \frac{1}{n} \log|Tr\left(\rho(S_n)A\right)| - L_{\rho_1(\mu)}\big| >\epsilon \Big) \leq \rho(\epsilon)^n$$ In particular, $Tr\left(\rho(S_n)A\right)$ vanishes only with a probability decreasing exponentially fast to zero, and $\frac{1}{n} \log \Big|Tr\left(\rho(S_n) A\right)\Big|$ converges a.s. towards $L_{\rho_1(\mu)}$. \label{ch2theo1}\end{theo} Assumption 1 in Theorem \ref{ch2theo1} is fulfilled whenever $\mathbf{G}$ is $\R$-split (see Lemma \ref{ch2split}). We provide two sufficient conditions for assumption 2 to hold: a probabilistic one and a deterministic (algebraic) one. \begin{remarque}[A probabilistic sufficient condition for assumption $2$] Lemma \ref{ch2lyapunovcone} proves that assumption 2 is fulfilled whenever the Lyapunov vector $Liap(\mu)$ does not belong to a suitable finite union of hyperplanes in the Weyl chamber $\mathfrak{a}^+$.
\end{remarque} \begin{remarque}[An algebraic sufficient condition for assumption $2$] Let $\chi_i$ be the highest weight of $V_i$, $i=1,\cdots,r$. A sufficient condition for $2$ to hold is that $\chi_1/\chi_i=\prod_{\alpha \in\Pi}{\alpha^{n_\alpha}}$ for some non-negative integers $\{n_\alpha; \alpha\in \Pi\}$ with at least one non-zero $n_\alpha$. This is easily checked using the fact that the Lyapunov vector is in the interior of the Weyl chamber (Theorem \ref{ch2guimo}).\\ See the applications of this remark in the proof of Theorem \ref{ch2tr2}. \label{ch2nec}\end{remarque} \begin{proof}[Proof] Without loss of generality, we can assume $r=2$. Let $d=dim(V)$, $p=dim(V_1)$, and let $B_1=(v_1,\cdots,v_p)$ (resp. $B_2=(v_{p+1},\cdots, v_d)$) be a basis of $V_1$ (resp. $V_2$) consisting of weight vectors. We impose $v_1$ to be a highest weight vector. This gives a basis $B=(B_1,B_2)$ of $V$. The scalar products on $V_1$ and $V_2$ given by Theorem \ref{ch2mostow} induce naturally a scalar product on $V$ for which $V_1$ and $V_2$ are orthogonal. In the basis $B$, $\rho(A_n)=diag(\rho_1(A_n),\rho_2(A_n))=diag(a_1(n),\cdots,a_d(n))$ with $a_1(n)=\chi_{\rho_1}(A_n)$ and $a_{p+1}(n)=\chi_{\rho_2}(A_n)$ (notations of Section \ref{ch2subprel}). Let $W_{\rho_i}$ be the set of non-zero weights of $(V_i,\rho_i)$, $i=1,2$. A simple computation gives: \begin{eqnarray}Tr(\rho(S_n)A) &=& Tr(\rho(K_n)\rho(A_n)\rho(U_n)A) = Tr(\rho(A_n)\rho(U_n)A \rho(K_n))\nonumber\\ &=&\sum_{i=1}^d {a_i(n) \langle \rho(K_n) v_i,A^t\rho(U_n)^{t}v_i\rangle}\nonumber\end{eqnarray} where $S_n=K_nA_nU_n$ is the Cartan decomposition of $S_n$ (see Section \ref{ch2subcartan}). Since $\rho_1$ is proximal, $a_2(n)=\chi(A_n)$ for some weight $\chi\in W_{\rho_1}$ distinct from $\chi_{\rho_1}$.
Then, $$Tr(\rho(S_n)A)= \chi_{\rho_1}(A_n) \Big[ \langle K_n\cdot v_{\rho_1},A^tU_n^{-1}\cdot v_{\rho_1}\rangle + \sum_{\chi\neq \chi_{\rho_1} \in W_{\rho_1}}\;{O\left(\frac{\chi(A_n)}{\chi_{\rho_1}(A_n)} \right)}+ \sum_{\chi \in {W_{\rho_2}}}\; {O\left(\frac{\chi(A_n)}{\chi_{\rho_1}(A_n)} \right)}\Big]$$ Le Page's large deviation theorem (Theorem \ref{ch2page}) shows that for every $\epsilon>0$, some $\rho\in ]0,1[$ and all large $n$: $$\p \left(exp(nL_{\rho_1(\mu)}-n\epsilon)\leq\chi_{\rho_1}(A_n)\leq exp(nL_{\rho_1(\mu)}+n\epsilon)\right) \geq 1-\rho^n$$ Next we show that for every $\chi\in W_{\rho_1}$ with $\chi\neq \chi_{\rho_1}$, every $\chi\in W_{\rho_2}$ and every $\epsilon>0$: $$\displaystyle \limsup_{n\rightarrow \infty}\big[\E \left(\frac{\chi(A_n)}{\chi_{\rho_1}(A_n)} \right)^\epsilon \big]^{\frac{1}{n}}< 1$$ Indeed, for $\chi\in W_{\rho_1}$ with $\chi\neq \chi_{\rho_1}$, this follows from Theorem \ref{ch2ratioA} and the fact that $\rho_1$ is proximal. For $\chi \in W_{\rho_2}$, this follows also from Theorem \ref{ch2ratioA} and assumption $2$. Hence, by the Markov inequality, there exist $\epsilon_1,\epsilon_2\in ]0,1[$ such that for all $n$ large enough: $\p \left(\frac{\chi(A_n)}{\chi_{\rho_1}(A_n)} \geq \epsilon_1^n \right) \leq \epsilon_2^n $. The following proposition, applied to the (non-trivial) projection of $A$ on $V_1$ and to the representation $(\rho_1,V_1)$, ends the proof. \end{proof} \begin{prop} Let $\mathbf{G}$ be a semi-simple algebraic group defined over $\R$, $G$ its group of real points, $\Gamma$ a Zariski dense subgroup of $G$, $(\rho,V)$ an irreducible rational real representation of $\mathbf{G}$, and $\mu$ a probability measure with an exponential moment and whose support generates $\Gamma$. If $\rho$ is proximal, then for any non-zero endomorphism $A\in End(V)$ and every $t\in ]0,1[$: $$\displaystyle \limsup_{n\rightarrow \infty}\big[\p \left(|\langle K_n\cdot v_{\rho},AU_n^{-1}\cdot v_{\rho}\rangle|\leq t^n \right) \big]^{\frac{1}{n}} < 1$$ where $v_{\rho}$ is a highest weight vector.
\label{ch2propp}\end{prop} Before giving the proof, we recall the following remarkable theorem of Guivarc'h: \begin{theo}\label{ch2hausdorff}\cite{Guivarch3} Let $\mu$ be a probability measure on $GL_d(\R)$ having an exponential moment and such that $G_\mu$ is strongly irreducible and proximal. Denote by $\nu$ the unique $\mu$-invariant probability measure on the projective space $P(\R^d)$. Then there exists $\alpha>0$ (small enough) such that: $$\sup\Big\{\int{\frac{1}{|\langle\frac{x}{||x||},\frac{y}{||y||}\rangle|^\alpha}\,d\nu([x])}\;;\;y\in \R^d \setminus\{0\}\Big\}< \infty $$ In particular, if $Z$ is a random variable with law $\nu$, there exists a constant $C>0$ such that: $$\sup\big\{\p (|\langle Z,\frac{x}{||x||}\rangle|\leq \epsilon )\;;\; x\in \R^d\setminus\{0\}\big\}\leq C\epsilon^\alpha$$\end{theo} \begin{proof}[Proof of Proposition \ref{ch2propp}] \begin{itemize} \item Let $\eta$ be the function defined on $P(V)\times P(V)$ with values in $\R$ by $\eta([x],[y]) =|\langle x,Ay\rangle|$, where $x$ and $y$ are representatives of $[x]$ and $[y]$ in the sphere of radius one. The function $\eta$ is Lipschitz with Lipschitz constant $\leq \max\{1,||A||\}$. \item For every $a>0$, let $\psi_a$ be the function defined on $\R$ by: $\psi_a(x)= 1$ if $x\in[-a;a]$; $\psi_a$ affine on $[-2a;-a[ \cup ]a,2a]$; and zero otherwise. One can easily verify that $\psi_a$ is Lipschitz with constant equal to $\frac{1}{a}$.\\ Note also that \begin{equation}\mathds{1}_{[-a,a]} \leq \psi_a \leq \mathds{1}_{[-2a,2a]}\label{ch2yy}\end{equation} \end{itemize} Define for $a>0$, $\phi_a=\psi_a \circ \eta$.
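As a small numerical aside (illustrative only, not part of the proof), the stated properties of $\psi_a$, its $\frac{1}{a}$-Lipschitz bound and the sandwich (\ref{ch2yy}), can be checked directly on a grid. The closed form used below, $\psi_a(x)=\min\{1,\max\{0,2-|x|/a\}\}$, is an assumption matching the verbal definition:

```python
import numpy as np

# Illustrative check of the cut-off psi_a (assumed closed form matching the
# verbal definition): psi_a(x) = clip(2 - |x|/a, 0, 1) equals 1 on [-a, a],
# is affine on [-2a, -a[ and ]a, 2a], and vanishes outside [-2a, 2a].
def psi(a, x):
    return float(np.clip(2.0 - abs(x) / a, 0.0, 1.0))

a = 0.5
xs = np.linspace(-2.0, 2.0, 4001)
vals = np.array([psi(a, x) for x in xs])
lower = np.array([1.0 if abs(x) <= a else 0.0 for x in xs])       # 1_{[-a,a]}
upper = np.array([1.0 if abs(x) <= 2 * a else 0.0 for x in xs])   # 1_{[-2a,2a]}
sandwich_ok = bool(np.all(lower <= vals) and np.all(vals <= upper))
h = xs[1] - xs[0]
lip = float(np.max(np.abs(np.diff(vals)) / h))   # largest discrete Lipschitz quotient
# lip is close to 1/a = 2, the slope of the affine pieces, and never exceeds it.
```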
By the previous remarks, $\phi_a$ is Lipschitz with Lipschitz constant $||\phi_a|| \leq \frac{\max\{1,||A||\}}{a}$.\\ By Theorem \ref{ch2independence}, there exist independent random variables $Z$ and $T$ in $P(V)$ such that for any $t\in ]0,1[$, we have: \begin{eqnarray}\p (|\langle K_n\cdot v_\rho,AU_n^{-1}\cdot v_\rho\rangle|\leq t^n ) & \leq & \E \left(\phi_{t^n} ([K_n\cdot v_\rho],[{U_n}^{-1}\cdot v_\rho]) \right) \label{ch2awwalmarra}\\ & \leq & \E \left( \phi_{t^n} (Z,T) \right) + ||\phi_{t^n}|| \rho^n \\ & \leq & \p (|\langle Z,AT\rangle| \leq 2t^n ) + \max\{1,||A||\}\frac{\rho^n}{t^n} \label{ch2explique1} \end{eqnarray} In the last line, we identified $Z$ and $T$, which live in $P(V)$, with representatives in the unit sphere. The bounds (\ref{ch2awwalmarra}) and (\ref{ch2explique1}) follow from (\ref{ch2yy}).\\ To prove our proposition, we can clearly suppose $t\in ]\rho,1[$. It suffices then to show that $\p (|\langle Z,AT\rangle| \leq 2t^n )$ is sub-exponential. The law of $T$ is the unique $\rho(\mu)^t$-invariant probability measure $\nu$ on $P(V)$ (Theorem \ref{ch2independence}). Moreover, a general lemma of Furstenberg (see for example \cite[Proposition 2.3 page 49]{bougerol}) shows that $\nu$ is proper. Hence, a.s. $AT\neq 0$. Moreover, we claim that the following stronger statement holds: there exist $C,\alpha>0$ such that for every $t'\in ]0,1[$ and $n\in \N^*$: \begin{equation}\label{ch2tesada2}\p (||AT||\leq t'^n)\leq C t'^{n\alpha}\end{equation} Indeed, $A$ being a non-zero endomorphism, there exists a vector $v_0$ of norm one such that $A^tv_0\neq 0$.
Then by Theorem \ref{ch2hausdorff}, $$\p (||AT||\leq t'^n ) \;\leq\; \p(|\langle AT,v_0\rangle|\leq t'^n)\;\leq\; \p(|\langle T,A^tv_0\rangle|\leq t'^n)\;\leq\frac{C}{||A^tv_0||^\alpha} t'^{n\alpha}$$ Hence for every $t'\in ]t,1[$, \begin{eqnarray}\p (|\langle Z,AT\rangle| \leq 2t^n ) &=& \p (|\langle Z,\frac{AT}{||AT||}\rangle| \leq 2\frac{t^n}{||AT||} ) \nonumber\\ &\leq& \p \left(|\langle Z,\frac{AT}{||AT||}\rangle| \leq 2 (t/t')^n \right)+\frac{C}{||A^tv_0||^\alpha}{t'}^{n\alpha} \nonumber\\ &\leq& \sup\{\p \left(\delta(Z,[H])\leq 2 (t/t')^n \right);\textrm{\;\;$H$ hyperplane of $V$}\} + C{t'}^ {n\alpha}\nonumber\end{eqnarray} The last line holds by independence of $Z$ and $T$. Theorem \ref{ch2hausdorff} shows that this quantity decreases exponentially fast to zero. \end{proof} As an application, we give the \begin{proof}[Proof of Theorem \ref{ch2tr1}] Lemma \ref{ch2lemma} allows us to be in the situation of Theorem \ref{ch2theo1}, i.e., we have a representation $(\rho,V)$ whose irreducible sub-representations $\rho_1,\cdots, \rho_r$ are pairwise non-isomorphic and an endomorphism $A\in End(V_1)\oplus \cdots \oplus End(V_r)$ whose projection on each $End(V_i)$ is non-zero, such that $\mathcal{V}=\{g\in G; Tr(gA)=0\}$. Lemma \ref{ch2lyapunovcone} allows us to distinguish a representation, say $\rho_1$, whose Lyapunov exponent is the biggest. Lemma \ref{ch2split} shows that this representation is proximal. It suffices to apply Theorem \ref{ch2theo1}. \end{proof} \begin{proof}[Proof of Theorem \ref{ch2tr2}] For every $k\in \N$, let $Sym^k(\R^d)$ be the vector space of homogeneous polynomials in $d$ variables of degree $k$. The group $SL_d(\R)$ acts on $Sym^k(\R^d)$ by the formula: $g.P(X_1,\cdots,X_d)=P\left(g^{-1}(X_1,\cdots,X_d) \right)$ for every $g\in SL_d(\R)$, $P\in Sym^k(\R^d)$. A known fact (see for example \cite{fulton}) is that the action of $SL_d(\R)$ on $Sym^k(\R^d)$ is irreducible for every $k\in \N$.
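As an illustrative aside (not part of the proof), the defining property of this action, namely $(gh). P=g.(h. P)$ since $(gh)^{-1}=h^{-1}g^{-1}$, together with the preservation of homogeneity, can be checked numerically for $d=2$ and a degree-$2$ homogeneous $P$; the matrices and the polynomial below are arbitrary choices:

```python
import numpy as np

# Numerical check (illustrative only) that (g.P)(X) := P(g^{-1} X) defines a
# left action of SL_2(R) on homogeneous polynomials: (gh).P = g.(h.P), and the
# degree (here 2) is preserved, i.e. g.P scales by t^2 under v -> t v.
def act(g, P):
    """Return the polynomial function v -> P(g^{-1} v)."""
    g_inv = np.linalg.inv(g)
    return lambda v: P(g_inv @ v)

P = lambda v: v[0] ** 2 + 3.0 * v[0] * v[1] - v[1] ** 2   # homogeneous of degree 2
g = np.array([[2.0, 1.0], [1.0, 1.0]])    # det = 1
h = np.array([[1.0, 3.0], [0.0, 1.0]])    # det = 1

v = np.array([0.7, -1.3])
lhs = act(g @ h, P)(v)            # (gh).P evaluated at v
rhs = act(g, act(h, P))(v)        # g.(h.P) evaluated at v
gap = abs(lhs - rhs)              # vanishes up to rounding
homog = act(g, P)(2.0 * v) - 4.0 * act(g, P)(v)   # degree 2 => scales by 2^2
```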
\\ Consider now a proper algebraic hypersurface $\widetilde{\mathcal{V}}$ of $\R^d$ defined over $\R$, a non-zero vector $x$ of $\R^d$, and denote $\mathcal{V}=\{g\in SL_d(\R); gx\in \widetilde{\mathcal{V}}\}$. Let now $P$ be the polynomial that defines $\widetilde{\mathcal{V}}$ and $k$ its degree. The polynomial $P$ can be seen as a vector in $V=\oplus_{i=0}^k {Sym^i(\R^d)}$. Let $\rho_i$ be the action of $SL_d(\R)$ on $Sym^i(\R^d)$. If $P_i$ denotes the projection of $P$ on $Sym^i(\R^d)$, then ``$g \in \mathcal{V} \;\Leftrightarrow\; P(gx)=0 \; \;\Leftrightarrow\; \sum_{i=0}^k {f_i(g^{-1})}=0$'' where $f_i(g)=\rho_i(g) (P_i) (x) \in C(\rho_i)$ (see Definition \ref{ch2matrix}). Moreover, the highest weight of $Sym^i(\R^d)$ is strictly bigger (for the natural order on $X(\mathbf{A})$ defined in Section \ref{ch2subcartan}) than that of $Sym^{i-1}(\R^d)$, the ratio being the highest weight of the natural representation of $SL_d(\R)$ on $\R^d$. We can then apply Remark \ref{ch2nec} and Theorem \ref{ch2theo1} to the probability measure $\mu^{-1}$. \end{proof} An application of the results of Section \ref{ch2subproba} independent from Theorem \ref{ch2theo1} is the \begin{proof}[Proof of Theorem \ref{ch2tr3}] If the identity component $\mathbf{H}^0$ of $\mathbf{H}$ is reductive, then by Proposition \ref{ch2reductive} there exists a rational representation $(\rho,V)$ of $\mathbf{G}$ such that the reductive group $\mathbf{H}^0$ fixes a non-zero vector $x$ of $V$. By decomposing $\rho$ into irreducible sub-representations, one can assume $(\rho,V)$ to be irreducible. If $h_1,\cdots,h_r$ denote representatives of the cosets of the finite group $H/H^0$, then we can write $$\p (S_n \in H) \leq \sum_{i=1}^r {\p (S_n h_i^{-1} \cdot x = x)} \leq \sum_{i=1}^r {\p \left(||\rho(S_n)\frac{ h_i^{-1} \cdot x}{||x||}|| = 1\right)} $$ Since $G$ has no compact factors, $\rho(G)$ is non-compact.
In particular, $\rho(G_\mu)$ is not contained in a compact subgroup of $SL(V)$, because compact subgroups of $SL(V)$ are algebraic and $\rho(G_\mu)$ is Zariski dense in $\rho(G)$. Hence we can apply Furstenberg's theorem (\cite{Furst}), which shows that $L_{\rho(\mu)}>0$ (see Definition \ref{ch2liapou}). Applying Le Page's large deviation theorem (Theorem \ref{ch2page}) shows that for every $i=1,\cdots, r$, $\p \left(||S_n\cdot (h_i^{-1} \cdot x)|| \leq exp(nL_{\rho(\mu)}/2) \right)$ decreases exponentially fast to zero. If $\mathbf{H}^0$ is not reductive, then it contains a unipotent Zariski connected $\R$-subgroup $\mathbf{U}$ which is normal in $\mathbf{H}^0$. Hence $\mathbf{H}^0\subset N(\mathbf{U})$, where $N(\mathbf{U})$ is the normalizer of $\mathbf{U}$ in $\mathbf{G}$. By \cite[Corollary 3.9]{elements}, there is an $\R$-parabolic subgroup $\mathbf{P}$ of $\mathbf{G}$ such that $N(\mathbf{U}) \subset \mathbf{P}$. By \cite[Proposition 5.14]{gpesred}, $\mathbf{P}$ is conjugate to one of the standard parabolic subgroups $\mathbf{P}_\theta$, $\theta \subset \Pi$, described in Section \ref{ch2subparabolic}. Hence, by Lemma \ref{ch2lemmepar}, $\mathbf{P}_\theta$ fixes the line generated by the highest weight vector $x_\alpha$ of $(\rho_\alpha,V_\alpha)$ for every $\alpha\not\in \theta$. Fix such an $\alpha$. Then, $$\mathbf{H}^0\subset\{g\in \mathbf{G}^0; g\cdot [x_\alpha]=[x_\alpha]\}$$ As in the previous paragraph, denote by $h_1,\cdots, h_r$ representatives of the cosets of the finite group $H/H^0$. Hence, \begin{equation}\p (S_n \in H)\leq \sum_{i=1}^r{\p(\rho_\alpha(S_n)[h_i^{-1}x_\alpha]=[x_\alpha])}\label{ch2intassa}\end{equation} The representation $\rho_\alpha$ is $G$-irreducible, hence, by connectedness, strongly irreducible. Moreover, it is proximal because $\Theta_{\rho_\alpha}=\{\alpha\}$, its highest weight space is a line and $G$ has no compact factors. By the Goldsheid-Margulis theorem (Theorem \ref{ch2margulis}), $\rho_\alpha(\Gamma)$ is proximal.
Hence we can apply Lemma \ref{ch2hyperplane}, which proves the exponential decay of the probability (\ref{ch2intassa}). \end{proof} \section{Application to generic Zariski density and to free subgroups of linear groups} \label{ch2subzariski} \subsection{Statement of the results and commentaries} Let $\mathbf{G}$ be a semi-simple algebraic group defined over $\R$ and $G$ its group of real points. \begin{question} Let $\Gamma$ be a Zariski dense subgroup of $G$. Is it true that two ``random'' elements in $\Gamma$ generate a Zariski dense subgroup of $G$? \end{question} A motivation for this question is the following \begin{question} By the Tits alternative \cite{tits}, any Zariski dense subgroup $\Gamma$ of $G$ contains a Zariski dense free subgroup on two generators. A natural question is to see if this property is generic. In \cite[Theorem 1.1]{aoun}, we proved that two ``random'' elements in $\Gamma$ generate a free subgroup. The question that arises immediately is to see if the latter subgroup is Zariski dense. \label{ch2question2}\end{question} In recent work \cite{genericrivin}, Rivin showed the following: \begin{theo} \cite[Corollary 2.11]{genericrivin} Let $\mathbf{G}=\mathbf{SL_d}$ and $\Gamma=SL_d(\Z)$ for some $d\geq 3$. Consider the uniform probability measure on a finite symmetric generating set and denote by $\{S_n, n\geq 0\}$ the associated random walk. Then, for any $g\in \Gamma$, there exists a constant $c(g)\in ]0,1[$ such that $$\p (\langle g,S_n\rangle\textrm{ is Zariski dense}) \geq 1-c(g)^n $$ Moreover, $c(g)$ is effective. \label{ch2genericrivin}\end{theo} Passing from the ``1.5 random subgroup'' in Theorem \ref{ch2genericrivin} to the subgroup generated by two random elements is delicate, since the constant $c(g)$ depends, among other things, on the norm of $g$.
\\ Using our Theorem \ref{ch2tr1}, we will prove the following \begin{theo}\label{ch2generic1} Let $G$ be the group of real points of a semi-simple algebraic group defined and split over $\R$. Let $\Gamma_1,\Gamma_2$ be two Zariski dense subgroups of $G$. Then there exist probability measures $\mu_1$ and $\mu_2$, respectively on $\Gamma_1$ and $\Gamma_2$, with an exponential moment such that for some $c\in ]0,1[$ and all large $n$, $$\p (\textrm{$\langle S_{1,n},S_{2,n}\rangle$ is Zariski dense and free}) \geq 1-c^n$$ where $\{S_{1,n}; n \geq 0\}$ and $\{S_{2,n}; n \geq 0\}$ are two independent random walks on $\Gamma_1$ and $\Gamma_2$ associated respectively to $\mu_1$ and $\mu_2$. This implies that almost surely, for $n$ large enough, the subgroup $\langle S_{1,n},S_{2,n}\rangle$ is Zariski dense and free. \end{theo} When $\mathbf{G}=\mathbf{SL_2}$, a stronger statement holds. It will follow immediately from our result in \cite{aoun}. \begin{theo}\label{ch2generic2} Let $\Gamma_1,\Gamma_2$ be two Zariski dense subgroups of $SL_2(\R)$. Then for any probability measures $\mu_1$ and $\mu_2$ with an exponential moment whose supports generate respectively $\Gamma_1$ and $\Gamma_2$, there exists $c\in ]0,1[$ such that $$\p (\textrm{$\langle S_{1,n},S_{2,n}\rangle$ is Zariski dense}) \geq 1-c^n$$ \end{theo} \begin{remarque} Let us compare Theorem \ref{ch2generic1} with Rivin's Theorem \ref{ch2genericrivin}. The advantage of our method is that it allows us to consider two elements at random and not a ``1.5 random subgroup'', which is crucial to solve Question \ref{ch2question2}. Furthermore, we do not necessarily consider arithmetic groups, nor finitely generated groups: any Zariski dense subgroup $\Gamma$ works. In addition, the statement shows that Zariski density is generic for a pair of random elements taken in two groups $\Gamma_1$ and $\Gamma_2$ which are not necessarily equal.
\\ However, the main drawback is that our constants, unlike Rivin's, are not effective. Our result can be applied to prove the ``1.5 random subgroup'' statement, but it is less interesting than Rivin's result since we do not know whether the uniform probability measure on a finite symmetric generating set of $SL_d(\Z)$ works. \\ For $d=2$, Theorem \ref{ch2generic2} is more satisfying; there are no restrictions on $\mu_1$ or $\mu_2$. \end{remarque} \subsection{Proofs} \begin{proof}[Proof of Theorem \ref{ch2generic2}] A subgroup of $SL_2(\R)$ is Zariski dense if and only if it is not virtually solvable. In particular, a free subgroup of $SL_2(\R)$ is always Zariski dense. But in \cite[Theorem 2.11]{aoun}, we proved that with the same assumptions as in Theorem \ref{ch2generic2}, $\p(\textrm{$\langle S_{1,n},S_{2,n}\rangle$ is not free} )$ decreases exponentially fast. \end{proof} \begin{proof}[Proof of Theorem \ref{ch2generic1}] The key point is the following \begin{lemme}\cite[Lemma 6.8]{strongtits} Let $k$ be a field of characteristic zero, $\mathbf{G}$ be a semi-simple group defined over $k$, $G=\mathbf{G}(k)$. Then there exists a proper algebraic variety $\mathcal{W}$ of $\mathbf{G} \times \mathbf{G}$ defined over $k$ such that any pair of elements $x,y\in G$ generates a Zariski dense subgroup unless $(x,y)\in \mathcal{W}(k)$. \label{ch2strongtits}\end{lemme} By Lemma \ref{ch2lemma}, there exist a rational real representation $(\rho,V)$ of $\mathbf{G} \times \mathbf{G}$ and an endomorphism $A\in End(V_1) \oplus \cdots \oplus End(V_r)$ such that \begin{equation}\label{ch2lkop}\mathcal{W}=\{(g,h)\in \mathbf{G}\times \mathbf{G};\;Tr\left(\rho(g,h)A\right) =0\}\end{equation} Let $\rho_1, \cdots, \rho_r$ be the irreducible sub-representations of $\rho$.
Since $\Gamma_1 \times \Gamma_2$ is Zariski dense in $\mathbf{G}\times \mathbf{G}$, the proof of Lemma \ref{ch2lyapunovcone} shows that there exist two probability measures $\mu_1$ and $\mu_2$, respectively on $\Gamma_1$ and $\Gamma_2$, and a permutation $\sigma$ of $\{1, \cdots, r\}$ such that $L_{\rho_{\sigma(i)}(\mu_1 \otimes \mu_2)} > L_{\rho_{\sigma(i+1)}(\mu_1 \otimes \mu_2)}$ for $i=1, \cdots, r-1$. Let $T_n$ be the random walk $(S_{1,n},S_{2,n})$ on $\Gamma_1\times \Gamma_2$ (i.e., the one corresponding to the probability measure $\mu_1 \otimes \mu_2$). By Lemma \ref{ch2strongtits} and identity (\ref{ch2lkop}), \begin{equation}\p (\textrm{$\langle S_{1,n},S_{2,n}\rangle$ is not Zariski dense in $G$}) \leq \p \Big(Tr\left( \rho(T_n) A \right)=0 \Big).\end{equation} Theorem \ref{ch2theo1} shows that the latter quantity decreases exponentially fast to zero. \end{proof} \section{Open problems and questions} \begin{itemize} \item It would be interesting to see whether the probabilistic methods we used can generalize Theorem \ref{ch2tr1}. More precisely, if $\mu$ is a probability measure with an exponential moment whose support generates a Zariski dense subgroup of the real points of a semi-simple algebraic group $\mathbf{G}$, is it true that for every proper algebraic subvariety $\mathcal{V}$ of $\mathbf{G}$, $$\limsup\big[ \p (S_n \in \mathcal{V})\big]^{\frac{1}{n}}< 1,$$ where $S_n$ is the random walk associated to $\mu$? \item The same question for Theorem \ref{ch2generic1} (i.e., replace ``there exist'' by ``for all'', and do not assume that the semi-simple algebraic group $\mathbf{G}$ is $\R$-split). \end{itemize}
\section{Introduction} We suppose that the data-generating process $X$ is defined on the stochastic basis $(\Omega, \mathcal{F}, \mathcal{F}_t, P)$ and that it is the solution of the one-dimensional stochastic differential equation \begin{equation}\label{yu:tmodel} dX_t=A(X_t)dt+C(X_{t-})dZ_t, \end{equation} where: \begin{itemize} \item The coefficients $A$ and $C$ are Lipschitz continuous. \item The driving noise $Z$ is a standard Wiener process or a pure-jump L\'{e}vy process satisfying, for any $q>0$, \begin{equation}\label{yu:momcon} E[Z_1]=0, \quad E[Z_1^2]=1, \quad E[|Z_1|^q]<\infty. \end{equation} \item The initial variable $X_0$ is independent of $Z$, and \begin{equation*} \mathcal{F}_t=\sigma(X_0)\vee\sigma(Z_s; s\leq t). \end{equation*} \end{itemize} As the observations from $X$, we consider the discrete but high-frequency sample $(X_{t_j^n})_{j=0}^n$ with \begin{equation*} t_j^n:=jh_n, \quad T_n:=nh_n\to\infty, \quad nh_n^2\to0. \end{equation*} For $(X_{t_j^n})_{j=0}^n$, $M_1\times M_2$ candidate models are supposed to be given. Here, for each $m_{1}\in\{1,\dots, M_{1}\}$ and $m_{2}\in\{1,\dots, M_{2}\}$, the candidate model $\mathcal{M}_{m_1,m_2}$ is expressed as \begin{align}\label{yu:canmodel} dX_t=a_{m_{2}}(X_t,\alpha_{m_{2}})dt+c_{m_{1}}(X_{t-},\gamma_{m_{1}})dZ_t, \end{align} and the functional form of $(c_{m_1}(\cdot,\cdot),a_{m_2}(\cdot,\cdot))$ is known except for the $p_{\gamma_{m_1}}$- and $p_{\alpha_{m_2}}$-dimensional unknown parameters $\gamma_{m_{1}}$ and $\alpha_{m_{2}}$, which are elements of the bounded convex domains $\Theta_{\gamma_{m_{1}}}\subset\mathbb{R}^{p_{\gamma_{m_{1}}}}$ and $\Theta_{\alpha_{m_{2}}}\subset\mathbb{R}^{p_{\alpha_{m_{2}}}}$. The main objective of this paper is to give a model selection procedure for extracting an ``optimal'' model $\mathcal{M}_{m_1^\star,m_2^\star}$ among $\mathcal{M}:=\{\mathcal{M}_{m_1,m_2}| m_{1}\in\{1,\dots, M_{1}\}, m_{2}\in\{1,\dots, M_{2}\}\}$ which best reflects the features of $X$. 
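For concreteness, a high-frequency sample $(X_{t_j^n})_{j=0}^n$ as above can be generated by the Euler--Maruyama scheme. The following Python sketch uses hypothetical coefficients $A(x)=-x/2$ and $C(x)\equiv 1$ (an Ornstein--Uhlenbeck process driven by a Wiener process); it is an illustration only, not part of the paper's experiments:

```python
import numpy as np

def euler_maruyama(A, C, x0, n, h, rng):
    """Simulate (X_{t_j})_{j=0}^n with t_j = j*h for dX = A(X) dt + C(X) dW."""
    X = np.empty(n + 1)
    X[0] = x0
    dW = rng.normal(0.0, np.sqrt(h), size=n)  # Wiener increments over [t_j, t_{j+1}]
    for j in range(n):
        X[j + 1] = X[j] + A(X[j]) * h + C(X[j]) * dW[j]
    return X

# hypothetical example: dX = -(1/2) X dt + dW with h_n = 0.01 and T_n = 10
rng = np.random.default_rng(1)
X = euler_maruyama(lambda x: -0.5 * x, lambda x: 1.0, x0=0.0, n=1000, h=0.01, rng=rng)
```

Shrinking $h_n$ while letting $T_n=nh_n\to\infty$ mimics the high-frequency, long-horizon sampling design above.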
For selecting an appropriate model from the data at hand quantitatively, information criteria are among the most convenient and powerful tools, and they have been widely used in many fields. Their origin dates back to the Akaike information criterion (AIC) introduced in \cite{Aka73,Aka74}, which puts an emphasis on prediction; since then, various kinds of criteria have been developed. For a comprehensive overview, see \cite{BurAnd02}, \cite{ClaHjo08}, and \cite{KonKit08}. Among them, this paper especially sheds light on the Bayesian information criterion (BIC) introduced by \cite{Sch78}. It is based on an approximation, up to an $O_p(1)$-term, of the log-marginal likelihood, and its original form is as follows: \begin{equation}\label{BIC} \text{BIC}_n=-2l_n(\hat{\theta}_{n}^{\text{MLE}})+p\log n, \end{equation} where $l_n$, $\hat{\theta}_{n}^{\text{MLE}}$, and $p$ stand for the log-likelihood function, the maximum likelihood estimator, and the dimension of the parameter included in the subject model, respectively. However, since the closed form of the transition density of $X$ is unavailable in general, we cannot rely on the genuine likelihood for feasible statistical analysis; this implies that the conventional likelihood-based (Bayesian) information criteria are impractical in our setting. Such a problem often occurs when discrete-time observations are obtained from a continuous-time process, and to avoid it, replacing the genuine likelihood by a suitable quasi-likelihood is effective not only for estimating the parameters included in a subject model but also for constructing (quasi-)information criteria; for instance, see \cite{Uch10}, \cite{FujUch14}, \cite{EguMas18a} (ergodic diffusion model), \cite{UchYos16} (stochastic regression model), and \cite{FasKim17} (CARMA process). 
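As a toy illustration of \eqref{BIC} in a setting where the likelihood is explicit (a hypothetical i.i.d. Gaussian model, not taken from this paper), one can compare two nested mean specifications as follows:

```python
import numpy as np

def bic_gaussian(x, fit_mean):
    """BIC = -2 * (maximized log-likelihood) + p * log(n) for an i.i.d. N(mu, s2)
    model; fit_mean=False fixes mu = 0 (p = 1), fit_mean=True estimates mu (p = 2)."""
    n = len(x)
    mu = x.mean() if fit_mean else 0.0
    s2 = np.mean((x - mu) ** 2)                      # MLE of the variance
    loglik = -0.5 * n * (np.log(2.0 * np.pi * s2) + 1.0)
    p = 2 if fit_mean else 1
    return -2.0 * loglik + p * np.log(n)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=500)
bic0, bic1 = bic_gaussian(x, fit_mean=False), bic_gaussian(x, fit_mean=True)
# the specification with the smaller BIC is selected
```

The $p\log n$ penalty is what remains of the log-marginal likelihood after the $O_p(1)$ terms are discarded.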
In particular, \cite{EguMas18a} used the Gaussian quasi-likelihood in place of the genuine likelihood, and derived a quasi-Bayesian information criterion (QBIC) under the following conditions: the driving noise is a standard Wiener process, and for each candidate model there exist $\gamma_{m_1,0}\in\Theta_{m_1}$ and $\alpha_{m_2,0}\in\Theta_{m_2}$ satisfying $c_{m_1}(x,\gamma_{m_1,0})\equiv C(x)$ and $a_{m_2}(x,\alpha_{m_2,0})\equiv A(x)$, respectively. Moreover, by exploiting the different small-time activity of the drift and diffusion terms, the paper also gave a two-step QBIC which selects each term separately and reduces the computational load. In that paper, the model selection consistency of the QBIC is shown only in the nested case. In such a case, by considering the (largest) model which contains all candidate models, regularized methods can also be used for the same purpose; concerning regularized methods for SDE models, see, for example, \cite{GreIac12} and \cite{MasShi17}. On the other hand, when it comes to the estimation of the parameters $\gamma_{m_1}$ and $\alpha_{m_2}$, the Gaussian quasi-maximum likelihood estimator (GQMLE) works well in a much broader situation: the driving noise is a standard Wiener process or a pure-jump L\'{e}vy process with \eqref{yu:momcon}, and either or both of the drift and scale coefficients may be misspecified. For technical accounts of the GQMLE for ergodic SDE models, see \cite{Yos92}, \cite{Kes97}, \cite{UchYos11}, \cite{UchYos12}, \cite{Mas13-1}, and \cite{Ueh18}. These results naturally suggest that the aforementioned QBIC is also theoretically valid in this broader situation, and that it enjoys model selection consistency even if non-nested models are contained among the candidates. In this paper, we show that this insight is correct. More specifically, we derive the QBIC building on a stochastic expansion of the log-marginal Gaussian quasi-likelihood. 
Although the convergence rate of the GQMLE differs in the L\'{e}vy driven and misspecified cases, the resulting criterion has the same form as in the correctly specified diffusion case; that is, a unified model selection criterion for ergodic SDE models is provided. We also show the model selection consistency of the QBIC. The rest of this paper is organized as follows: Section \ref{sec_ass} provides the notations and assumptions used throughout this paper. In Section \ref{sec_res}, the main result of this paper is given. Section \ref{sec_sim} exhibits some numerical experiments. The technical proofs of the main results are collected in the Appendix. \section{Notations and Assumptions}\label{sec_ass} For notational convenience, we first introduce some symbols used in the rest of this paper. \begin{itemize} \item For any vector $x$, $x^{(j)}$ represents the $j$-th element of $x$, and we write $x^{\otimes2}=xx^\top$, where $^\top$ denotes the transpose operator. \item $\partial_x$ denotes the differential operator with respect to any variable $x$. \item $x_n\lesssim y_n$ means that there exists a positive constant $C$, independent of $n$, satisfying $x_n\leq Cy_n$ for all large enough $n$. \item For any set $S$, $\bar{S}$ denotes its closure. \item We write $Y_j=Y_{t_j}$ and $\Delta_j Y:=Y_j-Y_{j-1}$ for any stochastic process $(Y_t)_{t\in\mathbb{R}^+}$. \item For any matrix-valued function $f$ on $\mathbb{R}\times\Theta$, we write $f_s(\theta)=f(X_s,\theta)$; in particular, we write $f_j(\theta)=f(X_j,\theta)$. \item $I_p$ represents the $p$-dimensional identity matrix. \item $\nu_0$ represents the L\'{e}vy measure of $Z$. \item $P_t(x,\cdot)$ denotes the transition probability of $X$. \item Given a function $\rho:\mathbb{R}\to\mathbb{R}^+$ and a signed measure $m$ on a one-dimensional Borel space, we write \begin{equation}\nonumber ||m||_\rho=\sup\left\{|m(f)|:\mbox{$f$ is $\mathbb{R}$-valued, $m$-measurable and satisfies $|f|\leq\rho$}\right\}. 
\end{equation} \item $\mathcal{A}$ and $\tilde{\mathcal{A}}$ stand for the infinitesimal generator and the extended generator of $X$, respectively. \end{itemize} In the next section, we will first give the stochastic expansion of the log-marginal Gaussian quasi-likelihood for the following model: \begin{equation}\label{ten:model} dX_t=a(X_t,\alpha)dt+c(X_{t-},\gamma)dZ_t, \end{equation} where, similarly to \eqref{yu:canmodel}, the coefficients carry an unknown $p_\gamma$-dimensional parameter $\gamma$ and an unknown $p_\alpha$-dimensional parameter $\alpha$. They are supposed to be elements of bounded convex domains $\Theta_\gamma$ and $\Theta_\alpha$, and for convenience we write $\theta=(\gamma,\alpha)$ and $\Theta:=\Theta_\gamma\times \Theta_\alpha$. We also allow either or both of the drift and scale coefficients to be misspecified. In particular, we call the model setting the semi-misspecified diffusion case when the driving noise is a standard Wiener process, the scale coefficient is correctly specified, and the drift coefficient is misspecified. Below, we list the assumptions for our main result. \begin{Assumption}\label{Moments} $Z$ is a standard Wiener process, or a pure-jump L\'{e}vy process satisfying $E[Z_1]=0$, $E[Z_1^2]=1$, and $E[|Z_1|^q]<\infty$ for all $q>0$. Furthermore, the Blumenthal--Getoor index (BG-index) of $Z$ is smaller than 2, that is, \begin{equation*} \beta:=\inf\left\{\kappa\geq0: \int_{|z|\leq1}|z|^\kappa\nu_0(dz)<\infty\right\}<2. \end{equation*} \end{Assumption} \begin{Assumption}\label{Stability} \begin{enumerate} \item There exists a probability measure $\pi_0$ such that for every $q>0$, we can find constants $a>0$ and $C_q>0$ for which \begin{equation}\label{Ergodicity} \sup_{t\in\mathbb{R}_{+}} \exp(at) ||P_t(x,\cdot)-\pi_0(\cdot)||_{h_q} \leq C_qh_q(x), \end{equation} for any $x\in\mathbb{R}$, where $h_q(x):=1+|x|^q$. \item For any $q>0$, we have \begin{equation} \sup_{t\in\mathbb{R}_{+}}E[|X_t|^q]<\infty. 
\end{equation} \end{enumerate} \end{Assumption} Let $\pi_1$ and $\pi_2$ be the prior densities for $\gamma$ and $\alpha$, respectively. \begin{Assumption}\label{Prior} The prior densities $\pi_1$ and $\pi_2$ are continuous and fulfill \begin{equation*} \sup_{\gamma\in\Theta_\gamma} \pi_1(\gamma) \vee \sup_{\alpha\in\Theta_\alpha} \pi_2(\alpha) <\infty. \end{equation*} \end{Assumption} We define an {\it optimal} value $\theta^\star:=(\gamma^\star,\alpha^\star)$ of $\theta$ as an arbitrarily chosen element of the sets $\displaystyle\mathop{\rm argmax}_{\gamma\in\bar{\Theta}_\gamma}\mathbb{G}_1(\gamma)$ and $\displaystyle\mathop{\rm argmax}_{\alpha\in\bar{\Theta}_\alpha}\mathbb{G}_2(\alpha)$, where the $\mathbb{R}$-valued functions $\mathbb{G}_1(\cdot)$ on $\Theta_\gamma$ and $\mathbb{G}_{2}(\cdot)$ on $\Theta_\alpha$ are defined by \begin{align} &\mathbb{G}_1(\gamma):=-\int_\mathbb{R} \left(\log c^2(x,\gamma)+\frac{C^2(x)}{c^2(x,\gamma)}\right)\pi_0(dx), \label{rf:con.scale}\\ &\mathbb{G}_2(\alpha):=-\int_\mathbb{R} c(x,\gamma^\star)^{-2}(A(x)-a(x,\alpha))^2\pi_0(dx). \label{rf:con.drift} \end{align} From the expression of $\mathbb{G}_1$, $\gamma^\star$ can be regarded as an element of $\Theta_\gamma$ minimizing Stein's loss, and $\alpha^\star$ as an element of $\Theta_\alpha$ minimizing the $L_2$-loss. Recall that $\Theta=\Theta_\gamma\times\Theta_\alpha$ is supposed to be a bounded convex domain. Then, we assume the following: \begin{Assumption}\label{Identifiability} \begin{itemize} \item $\theta^\star$ is unique and belongs to $\Theta$. \item There exist positive constants $\chi_\gamma$ and $\chi_\alpha$ such that for all $(\gamma,\alpha)\in\Theta$, \begin{align} &\mathbb{G}_1(\gamma)-\mathbb{G}_1(\gamma^\star)\leq-\chi_\gamma|\gamma-\gamma^\star|^2,\\ &\mathbb{G}_2(\alpha)-\mathbb{G}_2(\alpha^\star)\leq-\chi_\alpha|\alpha-\alpha^\star|^2. 
\end{align} \end{itemize} \end{Assumption} \begin{Assumption}\label{Smoothness} \begin{enumerate} \item The coefficients $A$ and $C$ are Lipschitz continuous and twice differentiable, and their first and second derivatives are of at most polynomial growth. \item The drift coefficient $a(\cdot,\alpha^\star)$ and the scale coefficient $c(\cdot,\gamma^\star)$ are Lipschitz continuous, and $c(x,\gamma)\neq0$ for every $(x,\gamma)$. \item For each $i \in \left\{0,1\right\}$ and $k \in \left\{0,\dots,5\right\}$, the following conditions hold: \begin{itemize} \item The coefficients $a$ and $c$ admit extensions in $\mathcal{C}(\mathbb{R}\times\bar{\Theta})$ and have the partial derivatives $(\partial_x^i \partial_\alpha^k a, \partial_x^i \partial_\gamma^k c)$ possessing extensions in $\mathcal{C}(\mathbb{R}\times\bar{\Theta})$. \item There exists a nonnegative constant $C_{(i,k)}$ satisfying \begin{equation}\label{polynomial} \sup_{(x,\alpha,\gamma) \in \mathbb{R} \times \bar{\Theta}_\alpha \times \bar{\Theta}_\gamma}\frac{1}{1+|x|^{C_{(i,k)}}}\left\{|\partial_x^i\partial_\alpha^ka(x,\alpha)|+|\partial_x^i\partial_\gamma^kc(x,\gamma)|+|c^{-1}(x,\gamma)|\right\}<\infty. \end{equation} \end{itemize} \end{enumerate} \end{Assumption} Define the $p_\gamma\times p_\gamma$-matrix $\mathcal{I}_\gamma$ and the $p_\alpha\times p_\alpha$-matrix $\mathcal{I}_\alpha$ by \begin{align} &\mathcal{I}_\gamma=4\int_\mathbb{R} \frac{(\partial_\gamma c(x,\gamma^\star))^{\otimes2}}{c^4(x,\gamma^\star)}C^2(x)\pi_0(dx)-2\int_\mathbb{R}\frac{\partial_\gamma^{\otimes2}c(x,\gamma^\star)c(x,\gamma^\star)-(\partial_\gamma c(x,\gamma^\star))^{\otimes2}}{c^4(x,\gamma^\star)}(C^2(x)-c^2(x,\gamma^\star))\pi_0(dx),\label{yu:fish1}\\ &\mathcal{I}_\alpha=2\int_\mathbb{R}\frac{(\partial_\alpha a(x,\alpha^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx)-2\int_\mathbb{R}\frac{\partial_\alpha^{\otimes2} a(x,\alpha^\star)}{c^2(x,\gamma^\star)}(A(x)-a(x,\alpha^\star))\pi_0(dx)\label{yu:fish2}. 
\end{align} In the correctly specified case, since under Assumption \ref{Identifiability} we have \begin{equation*} c(x,\gamma^\star)= C(x),\quad a(x,\alpha^\star)=A(x), \qquad \pi_0-a.s., \end{equation*} $\mathcal{I}_\gamma$ and $\mathcal{I}_\alpha$ reduce to \begin{align*} &\mathcal{I}_\gamma=4\int_\mathbb{R} \frac{(\partial_\gamma c(x,\gamma^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx),\\ &\mathcal{I}_\alpha=2\int_\mathbb{R}\frac{(\partial_\alpha a(x,\alpha^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx). \end{align*} \begin{Assumption}\label{Fisher} $\mathcal{I}_\gamma$ and $\mathcal{I}_\alpha$ are positive definite. \end{Assumption} In the rest of this section, we give a brief overview of the stepwise Gaussian quasi-likelihood method for \eqref{ten:model}, and introduce some theoretical results under Assumptions \ref{Moments}-\ref{Fisher}. We consider the following stepwise Gaussian quasi-likelihood (GQL) functions $\mathbb{G}_{1,n}$ and $\mathbb{G}_{2,n}$ on $\Theta_\gamma$ and $\Theta_\alpha$: \begin{align} &\mathbb{G}_{1,n}(\gamma)=-\frac{1}{h_n}\sum_{j=1}^{n} \left\{h_n\log c^2_{j-1}(\gamma)+\frac{(\Delta_j X)^2}{c^2_{j-1}(\gamma)}\right\} \label{rf:scale},\\ &\mathbb{G}_{2,n}(\alpha)=-\sum_{j=1}^{n} \frac{(\Delta_j X-h_na_{j-1}(\alpha))^2}{h_nc^2_{j-1}(\hat{\gamma}_{n})}. \label{rf:drift} \end{align} For these functions, we define the (stepwise) Gaussian quasi-maximum likelihood estimator (GQMLE) $\hat{\theta}_{n}:=(\hat{\gamma}_{n},\hat{\alpha}_{n})$ in the following manner: \begin{align*} &\hat{\gamma}_{n}\in\mathop{\rm argmax}_{\gamma\in\bar{\Theta}_\gamma}\mathbb{G}_{1,n}(\gamma), \nonumber\\ &\hat{\alpha}_{n}\in\mathop{\rm argmax}_{\alpha\in\bar{\Theta}_\alpha}\mathbb{G}_{2,n}(\alpha). 
\nonumber \end{align*} \begin{Rem}\label{jointstepwise} In contrast to the low-frequency observation case, the stepwise estimator exhibits the same asymptotics as the joint-type estimator defined by \begin{equation*} \tilde{\theta}_n\in\mathop{\rm argmax}_{\theta\in\bar{\Theta}} \left[-\frac{1}{h_n}\sum_{j=1}^{n} \left\{h_n\log c^2_{j-1}(\gamma)+\frac{(\Delta_j X-h_na_{j-1}(\alpha))^2}{c^2_{j-1}(\gamma)}\right\}\right], \end{equation*} while enjoying more stable optimization in the computation of the estimator. This is because the small-time behavior of $X$ is dominated by the scale term, so that, theoretically, $h_na_{j-1}(\alpha)$ does not affect the estimation of $\gamma$. \end{Rem} By making use of the estimates in \cite{Yos92}, \cite{Kes97}, \cite{UchYos11}, \cite{UchYos12}, \cite{Mas13-1}, and \cite{Ueh18}, the following asymptotic results on $\hat{\theta}_{n}$ can be obtained (or directly follow) under Assumptions \ref{Moments}-\ref{Fisher}: let $A_n:=\diag\{a_n I_{p_\gamma}, \sqrt{T_n} I_{p_\alpha}\}$, where $a_n=\sqrt{n}$ in the correctly specified or semi-misspecified diffusion case, and $a_n=\sqrt{T_n}$ otherwise. Then we have: \begin{itemize} \item Tail probability estimates: for any $L>0$ and $r>0$, there exists a positive constant $C_L$ such that \begin{equation}\label{eq: TPE} \sup_{n\in\mathbb{N}} P\left(\left|A_n(\hat{\theta}_{n}-\theta^\star)\right|>r\right)\leq \frac{C_L}{r^L}. 
\end{equation} \item Asymptotic normality: \begin{equation*} A_n(\hat{\theta}_{n}-\theta^{\star})\cil N(0, \mathcal{I}^{-1}\Sigma (\mathcal{I}^{-1})^\top), \end{equation*} where $\mathcal{I}=\begin{pmatrix}\mathcal{I}_\gamma & O \\ \mathcal{I}_{\alpha\gamma} & \mathcal{I}_{\alpha}\end{pmatrix}$ with \begin{equation*} \mathcal{I}_{\alpha\gamma}=2\int_\mathbb{R} \partial_\alpha a(x,\alpha^\star)\partial_\gamma^\top c^{-2}(x,\gamma^\star)(a(x,\alpha^\star)-A(x)) \pi_0(dx), \end{equation*} and the form of $\Sigma:=\begin{pmatrix}\Sigma_\gamma&\Sigma_{\alpha\gamma}\\\Sigma_{\alpha\gamma}^\top&\Sigma_{\alpha}\end{pmatrix}$ is given as follows: \begin{enumerate} \item Correctly specified diffusion case: \begin{equation*} \Sigma=2\mathcal{I}=2\diag\{\mathcal{I}_\gamma, \mathcal{I}_\alpha\}=\begin{pmatrix}8\int_\mathbb{R} \frac{(\partial_\gamma c(x,\gamma^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx) & O\\ O &4\int_\mathbb{R}\frac{(\partial_\alpha a(x,\alpha^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx) \end{pmatrix}, \end{equation*} hence the asymptotic variance can be simply written as \begin{equation*} \mathcal{I}^{-1}\Sigma (\mathcal{I}^{-1})^\top=2 \mathcal{I}^{-1}=\begin{pmatrix}\left(2\int_\mathbb{R} \frac{(\partial_\gamma c(x,\gamma^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx)\right)^{-1} & O\\ O &\left(\int_\mathbb{R}\frac{(\partial_\alpha a(x,\alpha^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx)\right)^{-1} \end{pmatrix}. 
\end{equation*} \item Semi-misspecified diffusion case: \begin{align*} &\Sigma_\gamma =8\int_\mathbb{R} \frac{(\partial_\gamma c(x,\gamma^\star))^{\otimes2}}{c^2(x,\gamma^\star)}\pi_0(dx), \\ &\Sigma_{\alpha\gamma}=0,\\ &\Sigma_\alpha=4\int \left[\left(\frac{\partial_\alpha a(x,\alpha^\star)}{c^2(x,\gamma^\star)}-\partial_x f(x)\right)C(x)\right]^{\otimes 2}\pi_0(dx), \end{align*} where the function $f$ is the solution of the following Poisson equations: \begin{align*} \mathcal{A} f^{(j)}(x)&=\frac{\partial_{\alpha^{(j)}} a(x,\alpha^\star)}{c^2(x,\gamma^\star)}(A(x)-a(x,\alpha^\star)), \end{align*} for $j\in\{1,\dots, p_\alpha\}$. \item Misspecified diffusion case: \begin{align*} &\Sigma_\gamma =4\int (\partial_x f_1(x) C(x))^{\otimes 2}\pi_0(dx),\\ &\Sigma_{\alpha\gamma}=4\int \left(\frac{\partial_\alpha a(x,\alpha^\star)}{c^2(x,\gamma^\star)}-\partial_x f_2(x)\right)C^2(x)(\partial_x f_1(x))^\top \pi_0(dx),\\ &\Sigma_\alpha=4\int \left[\left(\frac{\partial_\alpha a(x,\alpha^\star)}{c^2(x,\gamma^\star)}-\partial_x f_2(x)\right)C(x)\right]^{\otimes 2}\pi_0(dx), \end{align*} where the functions $f_1$ and $f_2$ are the solutions of the following Poisson equations: \begin{align*} \mathcal{A} f_1^{(j_1)}(x)&=\frac{\partial_{\gamma^{(j_1)}} c(x,\gamma^\star)}{c^3(x,\gamma^\star)}(c^2(x,\gamma^\star)-C^2(x)), \\ \mathcal{A} f_2^{(j_2)}(x)&=\frac{\partial_{\alpha^{(j_2)}} a(x,\alpha^\star)}{c^2(x,\gamma^\star)}(A(x)-a(x,\alpha^\star)), \end{align*} for $j_1\in\{1,\dots, p_\gamma\}$ and $j_2\in\{1,\dots, p_\alpha\}$. 
\item L\'{e}vy driven case (both correctly specified and misspecified): \begin{align*} &\Sigma_\gamma=4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\gamma c(x,\gamma^\star)}{c^3(x,\gamma^\star)}C^2(x)z^2+g_1(x+C(x)z)-g_1(x)\right)^{\otimes2}\pi_0(dx)\nu_0(dz),\\ &\Sigma_{\alpha\gamma}=-4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\gamma c(x,\gamma^\star)}{c^3(x,\gamma^\star)}C^2(x)z^2+g_1(x+C(x)z)-g_1(x)\right)\\ &\qquad \qquad \quad\left(\frac{\partial_\alpha a(x,\alpha^\star)}{c^2(x,\gamma^\star)}C(x)z+g_2(x+C(x)z)-g_2(x)\right)^\top\pi_0(dx)\nu_0(dz),\\ &\Sigma_\alpha=4\int_\mathbb{R}\int_\mathbb{R}\left(\frac{\partial_\alpha a(x,\alpha^\star)}{c^2(x,\gamma^\star)}C(x)z+g_2(x+C(x)z)-g_2(x)\right)^{\otimes2}\pi_0(dx)\nu_0(dz), \end{align*} where the functions $g_1$ and $g_2$ are the solutions of the following extended Poisson equations: \begin{align*} \tilde{\mathcal{A}} g_1^{(j_1)}(x)&=-\frac{\partial_{\gamma^{(j_1)}} c(x,\gamma^\star)}{c^3(x,\gamma^\star)}(c^2(x,\gamma^\star)-C^2(x)), \\ \tilde{\mathcal{A}} g_2^{(j_2)}(x)&=-\frac{\partial_{\alpha^{(j_2)}} a(x,\alpha^\star)}{c^2(x,\gamma^\star)}(A(x)-a(x,\alpha^\star)), \end{align*} for $j_1\in\{1,\dots, p_\gamma\}$ and $j_2\in\{1,\dots, p_\alpha\}$ (in the correctly specified case, $g_1$ and $g_2$ are identically 0). \end{enumerate} \end{itemize} We note that in the (semi-)misspecified diffusion case, the asymptotic results on the stepwise GQMLE have not been verified in the above references. However, by taking the same route as \cite[Theorem 3.1]{Ueh18}, we can easily show the tail probability estimates; concerning asymptotic normality, it can be derived from the argument in Remark \ref{jointstepwise}. \begin{Rem} The theory of Poisson equations and extended Poisson equations plays an important role in dealing with the misspecification effect. The former corresponds to the generator of diffusion processes, and the latter to the extended generator of Feller Markov processes. 
The existence and regularity conditions for their solutions are discussed in \cite{ParVer01}, \cite{VerKul11}, and \cite{Ueh18}; in the diffusion case, \cite[Remark 2.2]{UchYos11} provides the explicit forms of $g_1$ and $g_2$ when $p_\gamma=p_\alpha=1$. \end{Rem} \section{Main results}\label{sec_res} Building on the stepwise Gaussian quasi-likelihoods $\mathbb{G}_{1,n}$ and $\mathbb{G}_{2,n}$, the next theorem gives the stochastic expansion of the log-marginal quasi-likelihood, which is the main result of this paper: \begin{Thm}\label{YU:se} If Assumptions \ref{Moments}-\ref{Fisher} are satisfied for the statistical model \eqref{ten:model}, we have \begin{align*} &\log\left(\int_{\Theta_\gamma}\exp\left(\mathbb{G}_{1,n}(\gamma)\right)\pi_1(\gamma)d\gamma\right)=\mathbb{G}_{1,n}(\hat{\gamma}_{n})-\frac{1}{2}p_\gamma \log n+\log\pi_1\left(\gamma^\star\right)+\frac{p_\gamma}{2}\log 2\pi-\frac{1}{2}\log \det \mathcal{I}_\gamma+o_p\left(1\right),\\ &\log\left(\int_{\Theta_\alpha}\exp\left(\mathbb{G}_{2,n}(\alpha)\right)\pi_2(\alpha)d\alpha\right)=\mathbb{G}_{2,n}(\hat{\alpha}_{n})-\frac{1}{2}p_\alpha \log T_n+\log \pi_2(\alpha^\star)+\frac{p_\alpha}{2}\log 2\pi-\frac{1}{2}\log\det \mathcal{I}_\alpha+o_p(1). \end{align*} \end{Thm} In the present setting, although the scale estimator has two possible convergence rates depending on the model setup, Theorem \ref{YU:se} holds regardless of which rate applies. By ignoring the $O_p(1)$ terms in each expansion, we define the two-step quasi-Bayesian information criteria (QBIC) by \begin{align*} &\mbox{QBIC}_{1,n}=\mathbb{G}_{1,n}(\hat{\gamma}_{n})-\frac{1}{2}p_\gamma \log n,\\ &\mbox{QBIC}_{2,n}=\mathbb{G}_{2,n}(\hat{\alpha}_{n})-\frac{1}{2}p_\alpha \log T_n. \end{align*} Next, we consider the model selection consistency of the proposed information criteria. 
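For illustration, a minimal numerical sketch of the two-step criteria follows; everything here (scalar parameter families, a grid-search optimizer, and an Euler-simulated Ornstein--Uhlenbeck data-generating process) is hypothetical and only mirrors the definitions above:

```python
import numpy as np

def g1(gamma, X, h):
    """Scale quasi-likelihood G_{1,n} for the hypothetical model c(x, gamma) = gamma."""
    dX = np.diff(X)
    c2 = gamma ** 2
    return -(1.0 / h) * np.sum(h * np.log(c2) + dX ** 2 / c2)

def g2(alpha, X, h, gamma_hat):
    """Drift quasi-likelihood G_{2,n} for the hypothetical model a(x, alpha) = -alpha x."""
    dX = np.diff(X)
    a = -alpha * X[:-1]
    return -np.sum((dX - h * a) ** 2 / (h * gamma_hat ** 2))

# simulated data: dX = -0.5 X dt + dW sampled at t_j = j h (Euler scheme)
rng = np.random.default_rng(2)
n, h = 5000, 0.01
dW = rng.normal(0.0, np.sqrt(h), size=n)
X = np.empty(n + 1); X[0] = 0.0
for j in range(n):
    X[j + 1] = X[j] - 0.5 * X[j] * h + dW[j]

# step 1: maximize G_{1,n} over gamma, then step 2: maximize G_{2,n} over alpha
gam_grid = np.linspace(0.5, 2.0, 301)
gam_hat = gam_grid[np.argmax([g1(g, X, h) for g in gam_grid])]
alp_grid = np.linspace(0.0, 2.0, 401)
alp_hat = alp_grid[np.argmax([g2(a, X, h, gam_hat) for a in alp_grid])]

qbic1 = g1(gam_hat, X, h) - 0.5 * 1 * np.log(n)              # p_gamma = 1
qbic2 = g2(alp_hat, X, h, gam_hat) - 0.5 * 1 * np.log(n * h)  # p_alpha = 1, T_n = n h
```

Note the two different penalty scales, $\log n$ for the scale part and $\log T_n$ for the drift part, matching the two convergence rates.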
Suppose that candidates for the scale and drift coefficients are given as \begin{align} & c_{1}(x,\gamma_{1}),\ldots,c_{M_{1}}(x,\gamma_{M_{1}}), \label{se:ms.c} \\ & a_{1}(x,\alpha_{1}),\dots,a_{M_{2}}(x,\alpha_{M_{2}}), \label{se:ms.a} \end{align} where $\gamma_{m_{1}}\in\Theta_{\gamma_{m_{1}}}\subset\mathbb{R}^{p_{\gamma_{m_{1}}}}$ for any $m_{1}\leq M_{1}$ and $\alpha_{m_{2}}\in\Theta_{\alpha_{m_{2}}}\subset\mathbb{R}^{p_{\alpha_{m_{2}}}}$ for any $m_{2}\leq M_{2}$. Then, each candidate model $\mathcal{M}_{m_{1},m_{2}}$ is given by \begin{align*} dX_t=a_{m_{2}}(X_t,\alpha_{m_{2}})dt+c_{m_{1}}(X_{t-},\gamma_{m_{1}})dZ_t. \end{align*} In each candidate model $\mathcal{M}_{m_{1},m_{2}}$, the functions \eqref{rf:scale} and \eqref{rf:con.scale} are denoted by $G_{1,n}^{(m_{1})}$ and $G_{1}^{(m_{1})}$, respectively; the functions $G_{2,n}^{(m_{2}|m_{1})}$ and $G_{2}^{(m_{2}|m_{1})}$ correspond to \eqref{rf:drift} and \eqref{rf:con.drift} with $\gamma_{m_{1}}$. Using the QBIC, we propose the following stepwise model selection. \begin{itemize} \item[(i)] We select the best scale coefficient $c_{\hat{m}_{1,n}}$ among \eqref{se:ms.c}, where $\hat{m}_{1,n}$ satisfies $\{\hat{m}_{1,n}\}=\mathop{\rm argmax}_{m_{1}}\mathrm{QBIC}_{1,n}^{(m_{1})}$ with \begin{align*} &\mathrm{QBIC}_{1,n}^{(m_{1})}=G_{1,n}^{(m_{1})}(\hat{\gamma}_{m_{1},n})-\frac{1}{2}p_{\gamma_{m_{1}}}\log n,\\ &\hat{\gamma}_{m_{1},n}\in\mathop{\rm argmax}_{\gamma_{m_{1}}\in\bar{\Theta}_{\gamma_{m_{1}}}}G_{1,n}^{(m_{1})}(\gamma_{m_{1}}). 
\end{align*} \item[(ii)] Under $c_{\hat{m}_{1,n}}$ and $\hat{\gamma}_{\hat{m}_{1,n},n}$, we select the best drift coefficient with index $\hat{m}_{2,n}$ such that $\{\hat{m}_{2,n}\}=\mathop{\rm argmax}_{m_{2}}\mathrm{QBIC}_{2,n}^{(m_{2}|\hat{m}_{1,n})}$, where \begin{align*} &\mathrm{QBIC}_{2,n}^{(m_{2}|\hat{m}_{1,n})}=G_{2,n}^{(m_{2}|\hat{m}_{1,n})}(\hat{\alpha}_{m_{2},n})-\frac{1}{2}p_{\alpha_{m_{2}}}\log T_{n},\\ &\hat{\alpha}_{m_{2},n}\in\mathop{\rm argmax}_{\alpha_{m_{2}}\in\bar{\Theta}_{\alpha_{m_{2}}}}G_{2,n}^{(m_{2}|\hat{m}_{1,n})}(\alpha_{m_{2}}). \end{align*} \end{itemize} Through this procedure, we obtain the model $\mathcal{M}_{\hat{m}_{1,n},\hat{m}_{2,n}}$ as the final best model among the candidates described by \eqref{se:ms.c} and \eqref{se:ms.a}. The {\it optimal value} $\theta_{m_{1},m_{2}}^{\star}=(\gamma_{m_{1}}^{\star},\alpha_{m_{2}}^{\star})$ of $\mathcal{M}_{m_{1},m_{2}}$ is defined in a similar manner as in the previous section. We assume that the model indices $m_{1}^{\star}$ and $m_{2}^{\star}$ are uniquely given as follows: \begin{align*} \{m_{1}^{\star}\}&= \operatornamewithlimits {argmin}_{m_{1}\in\mathfrak{M}_{1}}\mathrm{dim}(\Theta_{\gamma_{m_{1}}}),\\ \{m_{2}^{\star}\}&= \operatornamewithlimits {argmin}_{m_{2}\in\mathfrak{M}_{2}}\mathrm{dim}(\Theta_{\alpha_{m_{2}}}), \end{align*} where $\mathfrak{M}_{1}=\mathop{\rm argmax}_{1\leq m_{1}\leq M_{1}}G_{1}^{(m_{1})}(\gamma_{m_{1}}^{\star})$ and $\mathfrak{M}_{2}=\mathop{\rm argmax}_{1\leq m_{2}\leq M_{2}}G_{2}^{(m_{2}|m_{1}^{\star})}(\alpha_{m_{2}}^{\star})$. Then, we say that $\mathcal{M}_{m_{1}^{\star},m_{2}^{\star}}$ is the {\it optimal model}. That is, the optimal model consists of the elements of the optimal model sets $\mathfrak{M}_{1}$ and $\mathfrak{M}_{2}$ which have the smallest dimensions. The following theorem states that the proposed criteria and model selection method enjoy model selection consistency. 
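The stepwise selection (i)-(ii) can be sketched numerically as follows. Everything below is hypothetical (two one-parameter candidate families per coefficient, a grid search in place of a numerical optimizer, Euler-simulated data) and only mirrors the structure of the procedure:

```python
import numpy as np

def fit_scale(X, h, c_fun, grid):
    """Maximize the scale quasi-likelihood G_{1,n} over a one-dimensional grid."""
    dX, x = np.diff(X), X[:-1]
    vals = [-(1.0 / h) * np.sum(h * np.log(c_fun(x, g) ** 2) + dX ** 2 / c_fun(x, g) ** 2)
            for g in grid]
    k = int(np.argmax(vals))
    return vals[k], grid[k]

def fit_drift(X, h, a_fun, grid, c_fun, gam_hat):
    """Maximize the drift quasi-likelihood G_{2,n} given the selected scale."""
    dX, x = np.diff(X), X[:-1]
    c2 = c_fun(x, gam_hat) ** 2
    vals = [-np.sum((dX - h * a_fun(x, a)) ** 2 / (h * c2)) for a in grid]
    k = int(np.argmax(vals))
    return vals[k], grid[k]

# hypothetical data: dX = -0.5 X dt + dW (Euler scheme)
rng = np.random.default_rng(3)
n, h = 40000, 0.005
dW = rng.normal(0.0, np.sqrt(h), size=n)
X = np.empty(n + 1); X[0] = 0.0
for j in range(n):
    X[j + 1] = X[j] - 0.5 * X[j] * h + dW[j]

scales = {"const": (lambda x, g: g + 0.0 * x, np.linspace(0.5, 2.0, 151)),
          "bell":  (lambda x, g: g / (1.0 + x ** 2), np.linspace(0.5, 3.0, 251))}
drifts = {"linear": (lambda x, a: -a * x, np.linspace(0.0, 2.0, 201)),
          "const":  (lambda x, a: -a + 0.0 * x, np.linspace(-1.0, 1.0, 201))}

# step (i): best scale by QBIC_{1,n}; each candidate here has p_gamma = 1
qb1 = {k: fit_scale(X, h, f, gr)[0] - 0.5 * np.log(n) for k, (f, gr) in scales.items()}
m1 = max(qb1, key=qb1.get)
gam_hat = fit_scale(X, h, *scales[m1])[1]
# step (ii): best drift by QBIC_{2,n} under the selected scale; p_alpha = 1
qb2 = {k: fit_drift(X, h, f, gr, scales[m1][0], gam_hat)[0] - 0.5 * np.log(n * h)
       for k, (f, gr) in drifts.items()}
m2 = max(qb2, key=qb2.get)
```

Because the candidates here all have one parameter, the penalties cancel and the comparison is driven by the quasi-likelihood values, as in the definitions of step (i) and step (ii).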
\begin{Thm} \label{thm:mod.cons} Suppose that Assumptions \ref{Moments}-\ref{Fisher} hold for all the candidate models and that the optimal model $\mathcal{M}_{m_{1}^{\star},m_{2}^{\star}}$ exists. Let $m_{1}\in\{1,\ldots,M_{1}\}\backslash\{m_{1}^{\star}\}$ and $m_{2}\in\{1,\ldots,M_{2}\}\backslash\{m_{2}^{\star}\}$. Then the model selection consistency of the proposed QBIC holds in the following sense: \begin{align*} & \lim_{n\to\infty}\mathbb{P}\left(\mathrm{QBIC}_{1,n}^{(m_{1}^{\star})}-\mathrm{QBIC}_{1,n}^{(m_{1})}>0\right) =1, \\ & \lim_{n\to\infty}\mathbb{P}\left(\mathrm{QBIC}_{2,n}^{(m_{2}^{\star}|\hat{m}_{1,n})}-\mathrm{QBIC}_{2,n}^{(m_{2}|\hat{m}_{1,n})}>0\right)=1. \end{align*} \label{se:thm.modcon} \end{Thm} \begin{Rem} We here consider the case where there are several optimal models. Then, we define the optimal model index sets $\mathfrak{M}_{1}^{\star}$ and $\mathfrak{M}_{2}^{\star}$ by \begin{align*} \mathfrak{M}_{1}^{\star}&= \operatornamewithlimits {argmin}_{m_{1}\in\mathfrak{M}_{1}}\mathrm{dim}(\Theta_{\gamma_{m_{1}}}),\\ \mathfrak{M}_{2}^{\star}&= \operatornamewithlimits {argmin}_{m_{2}\in\mathfrak{M}_{2}}\mathrm{dim}(\Theta_{\alpha_{m_{2}}}), \end{align*} respectively. Applying the proof of Theorem \ref{se:thm.modcon} to each element of $\mathfrak{M}_{1}^{\star}$ and $\mathfrak{M}_{2}^{\star}$, we can show the model selection consistency with respect to the optimal model sets. \end{Rem} \section{Numerical experiments}\label{sec_sim} In this section, we present simulation results to observe the finite-sample performance of the proposed QBIC. We use the {\tt yuima} package on R (see \cite{YUIMA14}) for generating data. In the examples below, all the Monte Carlo trials are based on 1000 independent sample paths, and the simulations are done for $(h_{n},T_{n})=(0.01,10), (0.005,10), (0.01,50)$, and $(0.005,50)$ (hence $n=1000, 2000, 5000$, and $10000$, respectively). 
We simulate the model selection frequencies by using the proposed QBIC and compute the model weight $w_{m_{1},m_{2}}$ (\cite[Section 6.4.5]{BurAnd02}) defined by \begin{align} \begin{split} w_{m_{1},m_{2}}&=\frac{\ds{\exp\Big\{-\frac{1}{2}\big(\mathrm{QBIC}_{1,n}^{(\hat{m}_{1,n})}-\mathrm{QBIC}_{1,n}^{(m_{1})}\big)\Big\}}}{\ds{\sum_{k=1}^{M_{1}}\exp\Big\{-\frac{1}{2}\big(\mathrm{QBIC}_{1,n}^{(\hat{m}_{1,n})}-\mathrm{QBIC}_{1,n}^{(k)}\big)\Big\}}} \\ &\quad\times\frac{\ds{\exp\Big\{-\frac{1}{2}\big(\mathrm{QBIC}_{2,n}^{(\hat{m}_{2,n}|m_{1})}-\mathrm{QBIC}_{2,n}^{(m_{2}|m_{1})}\big)\Big\}}}{\ds{\sum_{\ell=1}^{M_{2}}\exp\Big\{-\frac{1}{2}\big(\mathrm{QBIC}_{2,n}^{(\hat{m}_{2,n}|m_{1})}-\mathrm{QBIC}_{2,n}^{(\ell|m_{1})}\big)\Big\}}}\times100, \label{def:weight} \end{split} \end{align} so that candidates with larger QBIC receive larger weight (recall that our QBIC is to be maximized). The model weight can be used to empirically quantify the relative frequency (percentage) of the model selection from a single data set. The model which has the highest $w_{m_{1},m_{2}}$ value is regarded as the most probable model. From its definition \eqref{def:weight}, $w_{m_{1},m_{2}}$ satisfies $\sum_{k=1}^{M_{1}}\sum_{\ell=1}^{M_{2}}w_{k,\ell}=100$. \subsection{Ergodic diffusion model}\label{subsec_sim1} Suppose that we have a sample $\mathbf{X}_{n}=(X_{t_{j}})_{j=0}^{n}$ with $t_{j}=jh_{n}$ from the true model \begin{align*} dX_{t}=-\frac{1}{2}X_{t}dt+dw_{t},\quad t\in[0,T_{n}],\quad X_{0}=0, \end{align*} where $T_{n}=nh_{n}$, and $w$ is a one-dimensional standard Wiener process. 
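Computationally, each factor of the model weight \eqref{def:weight} is a softmax-type normalization of the QBIC values within a candidate family, with larger QBIC (a better candidate under our convention) receiving larger weight. A minimal sketch with hypothetical QBIC values:

```python
import numpy as np

def model_weights(qbic):
    """Normalize QBIC values (larger = better) into percentage weights."""
    q = np.asarray(qbic, dtype=float)
    z = np.exp(0.5 * (q - q.max()))  # subtract the max for numerical stability
    return 100.0 * z / z.sum()

# hypothetical QBIC values for three candidate scale coefficients
w = model_weights([-1520.3, -1524.8, -1560.1])
```

The full weight $w_{m_1,m_2}$ multiplies the scale-family and drift-family factors, so the weights sum to 100 over all candidate pairs.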
We consider the following scale (Scale) and drift (Drift) coefficients: \begin{align*} &\;{\bf Scale}\;{\bf 1:} c_{1}(x,\gamma_{1})=\exp\left\{\frac{\gamma_{1,1}+\gamma_{1,2}x+x^{2}}{1+x^{2}}\right\}; \;{\bf Scale}\;{\bf 2:} c_{2}(x,\gamma_{2})=\exp\left\{\frac{\gamma_{2,1}+x+\gamma_{2,3}x^{2}}{1+x^{2}}\right\}; \\ &\;{\bf Scale}\;{\bf 3:} c_{3}(x,\gamma_{3})=\exp\left\{\frac{1+\gamma_{3,2}x+\gamma_{3,3}x^{2}}{1+x^{2}}\right\}; \;{\bf Scale}\;{\bf 4:} c_{4}(x,\gamma_{4})=\exp\left\{\frac{1+\gamma_{4,2}x}{1+x^{2}}\right\}; \\ &\;{\bf Scale}\;{\bf 5:} c_{5}(x,\gamma_{5})=\exp\left\{\frac{1+\gamma_{5,3}x^{2}}{1+x^{2}}\right\}; \;{\bf Scale}\;{\bf 6:} c_{6}(x,\gamma_{6})=\exp\left\{\frac{\gamma_{6,2}x+x^{2}}{1+x^{2}}\right\}; \\ &\;{\bf Scale}\;{\bf 7:} c_{7}(x,\gamma_{7})=\exp\left\{\frac{x+\gamma_{7,3}x^{2}}{1+x^{2}}\right\}, \end{align*} and \begin{align*} {\bf Drift}\;{\bf 1:}\; a_{1}(x,\alpha_{1})=-\alpha_{1}(x-1); \;{\bf Drift}\;{\bf 2:}\; a_{2}(x,\alpha_{2})=-\alpha_{2}x-1; \;{\bf Drift}\;{\bf 3:}\; a_{3}(x,\alpha_{3})=-\alpha_{3}. \end{align*} Each candidate model is given by a combination of these scale and drift coefficients; for example, in the case of Scale 1 and Drift 1, we consider the statistical model \begin{align*} dX_{t}=-\alpha_{1}(X_{t}-1)dt+\exp\left\{\frac{\gamma_{1,1}+\gamma_{1,2}X_{t}+X_{t}^{2}}{1+X_{t}^{2}}\right\}dw_{t}. 
\end{align*} In this example, although the candidate models do not include the true model, the optimal parameter $(\gamma_{m_{1}}^{\star},\alpha_{m_{2}}^{\star})$ and the optimal model indices $m_{1}^{\star}$ and $m_{2}^{\star}$ can be obtained from the functions \begin{align*} G_{1}^{(m_{1})}(\gamma_{m_{1}})&=-\int_{\mathbb{R}}\left\{\log c_{m_{1}}(x,\gamma_{m_{1}})^{2}+\frac{1}{c_{m_{1}}(x,\gamma_{m_{1}})^{2}}\right\}\pi_{0}(dx), \\ G_{2}^{(m_{2}|m_{1}^{\star})}(\alpha_{m_{2}})&=-\int_{\mathbb{R}}c_{m_{1}^{\star}}(x,\gamma_{m_{1}^{\star}}^{\star})^{-2}\left\{-\frac{x}{2}-a_{m_{2}}(x,\alpha_{m_{2}})\right\}^{2}\pi_{0}(dx), \end{align*} where $\pi_{0}(dx)=\frac{1}{\sqrt{2\pi}}\exp(-x^{2}/2)dx$. The definition of the optimal model together with Tables \ref{simu:tab1} and \ref{simu:tab2} shows that the optimal model consists of Scale 1 and Drift 1. Table \ref{simu:tab3} summarizes the comparison results of the model selection frequencies and the mean of $w_{m_{1},m_{2}}$. The indicator of the optimal model defined by Scale 1 and Drift 1 is given by $w_{1,1}$. In all cases, the optimal model is the most frequently selected, and the value of $w_{1,1}$ is the highest. We also observe that the frequency with which the optimal model is selected and the value of $w_{1,1}$ become higher as $n$ increases. This result demonstrates the model selection consistency of the proposed QBIC. \subsection{Ergodic L\'{e}vy driven SDE model} The sample data $\mathbf{X}_{n}=(X_{t_{j}})_{j=0}^{n}$ with $t_{j}=jh_{n}$ are obtained from \begin{align*} dX_{t}=-\frac{1}{2}X_{t}dt+\frac{1}{1+X_{t-}^{2}}dZ_{t},\quad t\in[0,T_{n}],\quad X_{0}=0, \end{align*} where $T_{n}=nh_{n}$ and the driving noise $Z$ is a normal inverse Gaussian L\'{e}vy process satisfying $\mathcal{L}(Z_{t})=NIG(3,0,3t,0)$.
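The optimal parameters and model indices in these experiments are defined through one-dimensional integrals of the form $G_{1}^{(m_{1})}(\gamma_{m_{1}})$ against $\pi_{0}$, which can be approximated by elementary quadrature. A minimal sketch of ours for the scale part (the choice of coefficient, integration range, and parameter grid are all illustrative):

```python
import math

def c5(x, g):
    # Scale 5 of the diffusion example: c_5(x, gamma) = exp{(1 + gamma x^2)/(1 + x^2)}
    return math.exp((1.0 + g * x * x) / (1.0 + x * x))

def G1(c, gamma, lo=-8.0, hi=8.0, m=2000):
    # Midpoint-rule approximation of
    #   G_1(gamma) = -int { log c(x,gamma)^2 + c(x,gamma)^{-2} } pi_0(dx),
    # with pi_0 the standard normal density (the invariant law of the true model).
    h = (hi - lo) / m
    total = 0.0
    for i in range(m):
        x = lo + (i + 0.5) * h
        phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
        c2 = c(x, gamma) ** 2
        total += (math.log(c2) + 1.0 / c2) * phi * h
    return -total

# crude grid search for the optimal parameter of this coefficient
grid = [i / 100.0 for i in range(-100, 201)]
g_star = max(grid, key=lambda g: G1(c5, g))
```

Since $\log c^{2}+c^{-2}\ge 1$ pointwise, any such $G_{1}$ value is at most $-1$, which gives a quick sanity check; the maximized value can also be compared against the corresponding entry of Table \ref{simu:tab1}.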
In this example, we consider the following candidate scale (Scale) and drift (Drift) coefficients: \begin{align*} &\;{\bf Scale}\;{\bf 1:} c_{1}(x,\gamma_{1})=\gamma_{1}; \;{\bf Scale}\;{\bf 2:} c_{2}(x,\gamma_{2})=\exp\left\{\frac{1}{2}(\gamma_{2,1}\cos x +\gamma_{2,2}\sin x)\right\}; \\ &\;{\bf Scale}\;{\bf 3:} c_{3}(x,\gamma_{3})=\frac{\gamma_{3}}{1+x^{2}}; \;{\bf Scale}\;{\bf 4:} c_{4}(x,\gamma_{4})=\frac{1+\gamma_{4}x^{2}}{1+x^{2}}, \end{align*} and \begin{align*} {\bf Drift}\;{\bf 1:}\; a_{1}(x,\alpha_{1})=-\alpha_{1,1}x-\alpha_{1,2}; \;{\bf Drift}\;{\bf 2:}\; a_{2}(x,\alpha_{2})=-\alpha_{2}x; \;{\bf Drift}\;{\bf 3:}\; a_{3}(x,\alpha_{3})=-\alpha_{3}. \end{align*} Each candidate model is constructed in a similar manner to that in Section \ref{subsec_sim1}. Then, the true model consists of Scale 3 and Drift 2 with $(\gamma_{3},\alpha_{2})=(1,\frac{1}{2})$. Note that Scales 1 and 2 and Drift 3 are misspecified coefficients. Table \ref{simu:tab4} shows that the tendencies of the model selection frequencies and model weights are analogous to those in Section \ref{subsec_sim1}. \section{Appendix} \noindent\textbf{Proof of Theorem \ref{YU:se}} Since Theorem \ref{YU:se} can be shown in a similar way to \cite{EguMas18a} in the correctly specified and semi-misspecified diffusion cases, we here consider the case where the rate of convergence of the scale estimator is $\sqrt{T_{n}}$. To simplify the following discussion, we hereafter deal with the zero-extended versions of $\mathbb{G}_{1,n}(\gamma)$ and $\pi_1(\gamma)$, which vanish outside $\Theta_\gamma$.
Applying the change of variables $\gamma=\hat{\gamma}_{n}+t/\sqrt{n}$, we have \begin{align*} &\log\left(\int_{\Theta_\gamma}\exp\left(\mathbb{G}_{1,n}(\gamma)\right)\pi_1(\gamma)d\gamma\right)\\ &=\mathbb{G}_{1,n}(\hat{\gamma}_{n})-\frac{p_\gamma}{2}\log n+\log\left(\int_{\mathbb{R}^{p_\gamma}}\exp\left\{\mathbb{G}_{1,n}\left(\hat{\gamma}_{n}+\frac{t}{\sqrt{n}}\right)-\mathbb{G}_{1,n}(\hat{\gamma}_{n})\right\}\pi_1\left(\hat{\gamma}_{n}+\frac{t}{\sqrt{n}}\right)dt\right). \end{align*} Below we show that \begin{align} &\nonumber\log\left(\int_{\mathbb{R}^{p_\gamma}}\exp\left\{\mathbb{G}_{1,n}\left(\hat{\gamma}_{n}+\frac{t}{\sqrt{n}}\right)-\mathbb{G}_{1,n}(\hat{\gamma}_{n})\right\}\pi_1\left(\hat{\gamma}_{n}+\frac{t}{\sqrt{n}}\right)dt\right)\\ &=\log\pi_1\left(\gamma^\star\right)+\frac{p_\gamma}{2}\log 2\pi-\frac{1}{2}\log \det \mathcal{I}_\gamma+o_p(1).\label{yu: convp} \end{align} It follows from the continuous mapping theorem, \cite[Theorem 2(d)]{Fer96} and the estimates of the GQL and GQMLE given in the papers \cite{Yos92}, \cite{Kes97}, \cite{UchYos11}, \cite{UchYos12}, \cite{Mas13-1}, and \cite{Ueh18} that \begin{equation} \frac{1}{n}\sup_{\gamma\in\Theta_\gamma}|\partial_\gamma^3 \mathbb{G}_{1,n}(\gamma)|=O_p(1), \label{yu: sb} \end{equation} and that for any subsequence $\{n_j\}\subset \{n\}$, we can pick a subsubsequence $\{n_{k_j}\}\subset\{n_j\}$ such that for all $t\in\mathbb{R}^{p_\gamma}$, \begin{align} &\left|\frac{1}{n_{k_j}}\partial^2_\gamma \mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}\right)+\mathcal{I}_\gamma\right|\asc0,\label{yu:conve1} \\ & \sup_{\gamma\in\Theta_\gamma}\left|\frac{1}{n_{k_j}}\mathbb{G}_{1,n_{k_j}}(\gamma)-\mathbb{G}_1(\gamma)\right|\asc0, \label{yu:conve2} \\ &\left|\pi_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\pi_1\left(\gamma^\star\right)\right|\asc0, \label{yu:conve3} \\
&\left|\mathbb{G}_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_1\left(\gamma^\star+\frac{t}{\sqrt{n_{k_j}}}\right)\right|+\left|\mathbb{G}_1(\hat{\gamma}_{n_{k_j}})-\mathbb{G}_1(\gamma^\star)\right|\asc0. \label{yu:conve4} \end{align} For any $\ep>0$, \eqref{yu: sb} enables us to find large enough $N\in\mathbb{N}$ and $M>0$ such that for all $n\geq N$, \begin{equation*} P\left(\frac{1}{n}\sup_{\gamma\in\Theta_\gamma}|\partial_\gamma^3 \mathbb{G}_{1,n}(\gamma)|>M\right)<\ep. \end{equation*} Since our main focus here is the convergence in probability \eqref{yu: convp}, we can hereafter suppose that \begin{equation*} \frac{1}{n}\sup_{\gamma\in\Theta_\gamma}|\partial_\gamma^3 \mathbb{G}_{1,n}(\gamma)|\leq M \end{equation*} for sufficiently large $n$ and $M$ (and we deal with such $n$ below). We write $E\subset\Omega$ for the set on which \eqref{yu:conve1}--\eqref{yu:conve4} hold. For a positive constant $\delta$, we divide $\mathbb{R}^{p_\gamma}$ into \begin{align*} &D_{1,n}:=\left\{t\in\mathbb{R}^{p_\gamma}: |t|< \delta n^{\frac{1}{2}}\right\}, \\ &D_{2,n}:=\left\{t\in\mathbb{R}^{p_\gamma}: |t|\geq \delta n^{\frac{1}{2}}\right\}. \end{align*} For any set $A$, we define the indicator function $\mbbi_{A}(\cdot)$ by \begin{align*} \mbbi_{A}(t)=\begin{cases}1,&t\in A,\\0,& \text{otherwise}.\end{cases} \end{align*} First we look at the integral over $D_{1,n}$. For $t\in\mathbb{R}^{p_\gamma}$, we write \begin{equation*} R_{n_{k_j}}=\left|\frac{1}{2n_{k_j}}\partial^2_\gamma \mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}\right)+\frac{1}{2}\mathcal{I}_\gamma\right|+\delta\left|\frac{1}{6n_{k_j}}\int_0^1 \partial^3_\gamma \mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}u\right)du\right|.
\end{equation*} For any $\omega\in E$ and $\ep>0$, it follows from \eqref{yu:conve1} and the boundedness of $\frac{1}{n}\sup_{\gamma\in\Theta_\gamma}|\partial_\gamma^3 \mathbb{G}_{1,n}(\gamma)|$ that there exist $N(\omega)\in\mathbb{N}$ and a small enough $\zeta(\omega)>0$, both independent of $t$, such that for all $n_{k_j}\geq N(\omega)$ and $\delta<\zeta(\omega)$, \begin{equation*} R_{n_{k_j}}(\omega)\leq \ep. \end{equation*} Hence, for all $t\in\mathbb{R}^{p_\gamma}$ and $\omega\in E$, by taking $\zeta(\omega)$ sufficiently small and $N(\omega)$ sufficiently large, Taylor's expansion around $\hat{\gamma}_{n}$ and \eqref{yu:conve1} yield that for all $\delta<\zeta(\omega)$ and $n_{k_j}\geq N(\omega)$, \begin{align} &\nonumber\exp\left\{\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}\right)\right\}\pi_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)\mbbi_{D_{1,n}}(t)\\ &\nonumber\leq \sup_{\gamma\in\Theta_\gamma} \pi_1(\gamma) \exp\left(-\frac{1}{2}\mathcal{I}_\gamma[t,t]+|t|^2R_{n_{k_j}}\right)\\ &\leq \sup_{\gamma\in\Theta_\gamma} \pi_1(\gamma) \exp\left\{-\frac{1}{4}\mathcal{I}_\gamma[t,t]\right\} \label{yu:dconve}, \end{align} and the right-hand side is integrable over $\mathbb{R}^{p_\gamma}$.
The above estimates and \eqref{yu:conve3} also imply that for all $\ep'>0$, $t\in\mathbb{R}^{p_\gamma}$, and $\omega\in E$, we can choose $N(\omega)\in\mathbb{N}$ and $\zeta(\omega)>0$ such that for all $n_{k_j}\geq N(\omega)$ and $\delta<\zeta(\omega)$, \begin{align} &\nonumber\left|\exp\left\{\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}\right)\right\}\pi_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\exp\left(-\frac{1}{2}\mathcal{I}_\gamma [t,t]\right)\pi_1\left(\gamma^\star\right)\right|\mbbi_{D_{1,{n_{k_j}}}}(t)\\ &\nonumber\leq \exp\left(-\frac{1}{2}\mathcal{I}_\gamma [t,t]\right)\left(\left|\pi_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\pi_1\left(\gamma^\star\right)\right|+\sup_{\gamma\in\Theta_\gamma}\pi_1\left(\gamma\right)\left|\exp\left(|t|^2R_{n_{k_j}}\right)-1\right| \mbbi_{D_{1,{n_{k_j}}}}(t)\right)\\ &\leq \ep'. \label{yu:pconve} \end{align} From \eqref{yu:dconve}, \eqref{yu:pconve} and the dominated convergence theorem, we finally get \begin{align} &\nonumber\log\left(\int_{\mathbb{R}^{p_\gamma}} \exp\left\{\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}\right)\right\}\pi_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)\mbbi_{D_{1,n_{k_j}}}(t)dt\right)\\ &=\nonumber\log\Bigg(\int_{\mathbb{R}^{p_\gamma}}\Bigg[\exp\left\{\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}\right)\right\}\pi_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)\\ &\nonumber\qquad\qquad\qquad\quad-\exp\Bigg(-\frac{1}{2}\mathcal{I}_\gamma [t,t]\Bigg)\pi_1\left(\gamma^\star\right)\Bigg]\mbbi_{D_{1,n_{k_j}}}(t)dt+\int_{\mathbb{R}^{p_\gamma}}\exp\left(-\frac{1}{2}\mathcal{I}_\gamma [t,t]\right)\pi_1\left(\gamma^\star\right)\mbbi_{D_{1,n_{k_j}}}(t)dt\Bigg)\\
&\nonumber\asc \log\left(\int_{\mathbb{R}^{p_\gamma}}\exp\left(-\frac{1}{2}\mathcal{I}_\gamma [t,t]\right)\pi_1\left(\gamma^\star\right)dt\right)\\ &=\log\pi_1\left(\gamma^\star\right)+\frac{p_\gamma}{2}\log 2\pi-\frac{1}{2}\log \det \mathcal{I}_\gamma.\label{yu:d1} \end{align} Now we move on to the evaluation over $D_{2,n}$. Since \begin{align*} &\frac{1}{n_{k_j}}\left(\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_{1,n_{k_j}}(\hat{\gamma}_{n_{k_j}})\right)\\ &\leq 2\sup_{\gamma\in\Theta_\gamma}\left|\frac{1}{n_{k_j}}\mathbb{G}_{1,n_{k_j}}(\gamma)-\mathbb{G}_1(\gamma)\right|+\mathbb{G}_1\left(\gamma^\star+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_1(\gamma^\star)\\ &\quad+\mathbb{G}_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_1\left(\gamma^\star+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_1(\hat{\gamma}_{n_{k_j}})+\mathbb{G}_1(\gamma^\star), \end{align*} the identifiability condition, \eqref{yu:conve2}, and \eqref{yu:conve4} imply that on $D_{2,n}$ and for all large enough $n_{k_j}$, there exists a positive constant $\ep''$ satisfying \begin{equation*}\label{ep} \frac{1}{n_{k_j}}\left(\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}\right)\right)<-\ep'', \end{equation*} almost surely. Thus we arrive at \begin{align} &\nonumber\int_{\mathbb{R}^{p_\gamma}} \exp\left\{\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)-\mathbb{G}_{1,n_{k_j}}\left(\hat{\gamma}_{n_{k_j}}\right)\right\}\pi_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)\mbbi_{D_{2,n_{k_j}}}(t)dt\\ &\leq \exp\left(-n_{k_j}\ep''\right)\int_{\mathbb{R}^{p_\gamma}}\pi_1\left(\hat{\gamma}_{n_{k_j}}+\frac{t}{\sqrt{n_{k_j}}}\right)dt \asc0.\label{yu:d2} \end{align} Applying the converse part of \cite[Theorem 2(d)]{Fer96} again, we see that \eqref{yu:d1} and \eqref{yu:d2} imply the desired result.
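The limit \eqref{yu: convp} established above is a Laplace-type approximation, and its constant can be checked numerically in a toy case. The sketch below is purely illustrative: we take $p_\gamma=1$, the quadratic quasi-log-likelihood $\mathbb{G}_{1,n}(\gamma)=-(n\mathcal{I}_\gamma/2)(\gamma-\gamma^\star)^{2}$ (so that $\hat{\gamma}_{n}=\gamma^\star$), and a standard normal prior $\pi_1$:

```python
import math

def pi1(g):
    # illustrative prior: standard normal density
    return math.exp(-0.5 * g * g) / math.sqrt(2.0 * math.pi)

def lhs(n, I, g_star, lo=-30.0, hi=30.0, m=20000):
    # log int exp{G_{1,n}(ghat + t/sqrt(n)) - G_{1,n}(ghat)} pi_1(ghat + t/sqrt(n)) dt
    # for G_{1,n}(g) = -(n I / 2)(g - g_star)^2; the maximiser is ghat = g_star,
    # so the exponent reduces to -(I/2) t^2.  Midpoint rule over [lo, hi].
    h = (hi - lo) / m
    s = 0.0
    for i in range(m):
        t = lo + (i + 0.5) * h
        s += math.exp(-0.5 * I * t * t) * pi1(g_star + t / math.sqrt(n)) * h
    return math.log(s)

I_gam, g_star = 2.0, 0.7
# right-hand side of (yu: convp) with p_gamma = 1
rhs = math.log(pi1(g_star)) + 0.5 * math.log(2.0 * math.pi) - 0.5 * math.log(I_gam)
```

For large $n$ the two sides agree numerically, mirroring \eqref{yu: convp} in this one-dimensional toy setting.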
As for $\log\left(\int_{\Theta_\alpha}\exp\left(\mathbb{G}_{2,n}(\alpha)\right)\pi_2(\alpha)d\alpha\right)$, the proof is similar, and thus we omit the details. \medskip \noindent\textbf{Proof of Theorem \ref{se:thm.modcon}} To prove Theorem \ref{se:thm.modcon}, we consider the nested and the non-nested model selection cases; nested (resp. non-nested) model selection means that the candidate models include (resp. do not include) the optimal model. In a similar way to \cite[Theorem 3.3]{EguMas18b} and \cite[Theorem 5.5]{EguMas18a}, we can prove Theorem \ref{se:thm.modcon} in the nested case. We now deal with the non-nested case. In this case, by the definition of the optimal model, we have $G_{1}^{(m_{1}^{\star})}(\gamma_{m_{1}^{\star}}^{\star})>G_{1}^{(m_{1})}(\gamma_{m_{1}}^{\star})$ for every $m_{1}\neq m_{1}^{\star}$. Further, the assumptions give the equations \begin{align*} \frac{1}{n}G_{1,n}^{(m_{1})}(\hat{\gamma}_{m_{1},n})&=\frac{1}{n}G_{1,n}^{(m_{1})}(\gamma_{m_{1}}^{\star})+o_{p}(1)=G_{1}^{(m_{1})}(\gamma_{m_{1}}^{\star})+o_{p}(1), \\ \frac{1}{n}G_{1,n}^{(m_{1}^{\star})}(\hat{\gamma}_{m_{1}^{\star},n})&=\frac{1}{n}G_{1,n}^{(m_{1}^{\star})}(\gamma_{m_{1}^{\star}}^{\star})+o_{p}(1)=G_{1}^{(m_{1}^{\star})}(\gamma_{m_{1}^{\star}}^{\star})+o_{p}(1).
\end{align*} Hence, for any $m_{1}\in\{1,\ldots,M_{1}\}\backslash\{m_{1}^{\star}\}$, \begin{align} \mathbb{P}\left(\mathrm{QBIC}_{1,n}^{(m_{1}^{\star})}-\mathrm{QBIC}_{1,n}^{(m_{1})}>0\right) &=\mathbb{P}\left\{\frac{1}{n}\left(G_{1,n}^{(m_{1}^{\star})}(\hat{\gamma}_{m_{1}^{\star},n})-G_{1,n}^{(m_{1})}(\hat{\gamma}_{m_{1},n})\right)>\left(p_{\gamma_{m_{1}^{\star}}}-p_{\gamma_{m_{1}}}\right)\frac{\log n}{n}\right\} \nonumber\\ &=\mathbb{P}\left\{G_{1}^{(m_{1}^{\star})}(\gamma_{m_{1}^{\star}}^{\star})-G_{1}^{(m_{1})}(\gamma_{m_{1}}^{\star})>o_{p}(1)\right\} \nonumber\\ &=\mathbb{P}\left\{G_{1}^{(m_{1}^{\star})}(\gamma_{m_{1}^{\star}}^{\star})-G_{1}^{(m_{1})}(\gamma_{m_{1}}^{\star})>0\right\}+o(1) \nonumber\\ &\to1 \label{se:prf.thm.modcon1} \end{align} as $n\to\infty$. As with \eqref{se:prf.thm.modcon1}, we can show that for any $m_{2}\in\{1,\ldots,M_{2}\}\backslash\{m_{2}^{\star}\}$, \begin{align} \mathbb{P}\left(\mathrm{QBIC}_{2,n}^{(m_{2}^{\star}|m_{1}^{\star})}-\mathrm{QBIC}_{2,n}^{(m_{2}|m_{1}^{\star})}>0\right) &\to1. 
\label{se:prf.thm.modcon2} \end{align} From \eqref{se:prf.thm.modcon1} and \eqref{se:prf.thm.modcon2}, we have \begin{align*} \mathbb{P}\left(\mathrm{QBIC}_{2,n}^{(m_{2}^{\star}|\hat{m}_{1,n})}-\mathrm{QBIC}_{2,n}^{(m_{2}|\hat{m}_{1,n})}>0\right)&=\mathbb{P}\left(\mathrm{QBIC}_{2,n}^{(m_{2}^{\star}|\hat{m}_{1,n})}-\mathrm{QBIC}_{2,n}^{(m_{2}|\hat{m}_{1,n})}>0, \hat{m}_{1,n}=m_{1}^{\star}\right) \\ &\quad+\mathbb{P}\left(\mathrm{QBIC}_{2,n}^{(m_{2}^{\star}|\hat{m}_{1,n})}-\mathrm{QBIC}_{2,n}^{(m_{2}|\hat{m}_{1,n})}>0, \hat{m}_{1,n}\neq m_{1}^{\star}\right) \\ &\leq\mathbb{P}\left(\mathrm{QBIC}_{2,n}^{(m_{2}^{\star}|m_{1}^{\star})}-\mathrm{QBIC}_{2,n}^{(m_{2}|m_{1}^{\star})}>0\right) \\ &\quad+\mathbb{P}\left(\hat{m}_{1,n}\neq m_{1}^{\star}\right) \\ &\to1+0=1, \end{align*} where $\mathbb{P}\left(\hat{m}_{1,n}\neq m_{1}^{\star}\right)\to0$ follows from \eqref{se:prf.thm.modcon1}. The proof of Theorem \ref{se:thm.modcon} is complete. \subsection*{Acknowledgement} The author wishes to thank the associate editor and the two anonymous referees for careful reading and valuable comments which helped to greatly improve the paper. This work was supported by JST CREST Grant Number JPMJCR14D7, Japan.
\clearpage \begin{table}[t] \begin{center} \caption{The values of $G_{1}^{(m_{1})}(\gamma_{m_{1}}^{\star})$ for each candidate diffusion coefficient.} \begin{tabular}{r r r r r r r r r} \hline & & Scale 1 & Scale 2 & Scale 3 & Scale 4 & Scale 5 & Scale 6 & Scale 7 \\ \hline & & & & & & & & \\[-3mm] $G_{1}^{(m_{1})}(\gamma_{m_{1}}^{\star})$ & & -1.2089 & -1.2822 & -1.4833 & -1.6225 & -1.4833 & -1.2602 & -3.2860 \\[1mm] \hline \end{tabular} \label{simu:tab1} \end{center} \end{table} \begin{table}[t] \begin{center} \caption{The values of $G_{2}^{(m_{2}|m_{1}^{\star})}(\alpha_{m_{2}}^{\star})$ for each candidate drift coefficient.} \begin{tabular}{r r r r r} \hline & & Drift 1 & Drift 2 & Drift 3 \\ \hline & & & & \\[-3mm] $G_{2}^{(m_{2}|m_{1}^{\star})}(\alpha_{m_{2}}^{\star})$ & & -0.0624 & -0.8193 & -0.0979 \\[1mm] \hline \end{tabular} \label{simu:tab2} \end{center} \end{table} \clearpage \begin{table}[t] \begin{center} \caption{The mean of model weight $w_{m_{1},m_{2}}$ and model selection frequencies for various situations. 
The optimal model consists of Scale 1 and Drift 1.} \begin{tabular}{r r l l r r r r r r r} \hline $T_{n}$ & $h_{n}$ & & & Scale $1^{\ast}$ & Scale 2 & Scale 3 & Scale 4 & Scale 5 & Scale 6 & Scale 7 \\ \hline 10 & 0.01 & Drift $1^{\ast}$ & frequency & \bf{409} & 72 & 5 & 1 & 5 & 95 & 70 \\ \multicolumn{2}{r}{$(n=1000)$} & & weight & \bf{30.27} & 7.26 & 0.41 & 0.04 & 0.41 & 7.57 & 5.38 \\ & & Drift 2 & frequency & 60 & 84 & 2 & 0 & 0 & 31 & 22 \\ & & & weight & 5.94 & 6.67 & 0.13 & 0.01 & 0.02 & 2.64 & 1.98 \\ & & Drift 3 & frequency & 125 & 5 & 0 & 0 & 0 & 3 & 11 \\ & & & weight & 22.91 & 2.50 & 0.15 & 0.04 & 0.10 & 3.06 & 2.51 \\ \hline 10& 0.005 & Drift $1^{\ast}$ & frequency & \bf{449} & 86 & 6 & 0 & 4 & 73 & 45 \\ \multicolumn{2}{r}{$(n=2000)$} & & weight & \bf{33.19} & 8.07 & 0.53 & 0.02 & 0.30 & 5.61 & 3.51 \\ & & Drift 2 & frequency & 64 & 96 & 3 & 0 & 0 & 26 & 8 \\ & & & weight & 6.61 & 7.65 & 0.19 & 0.00 & 0.01 & 1.95 & 0.89 \\ & & Drift 3 & frequency & 129 & 4 & 1 & 0 & 0 & 2 & 4 \\ & & & weight & 24.63 & 2.94 & 0.26 & 0.02 & 0.07 & 2.07 & 1.48 \\ \hline 50 & 0.01 &Drift $1^{\ast}$ & frequency & \bf{832} & 58 & 2 & 0 & 1 & 1 & 12 \\ \multicolumn{2}{r}{$(n=5000)$} & & weight & \bf{62.59} & 5.19 & 0.19 & 0.00 & 0.10 & 0.08 & 0.99 \\ & & Drift 2 & frequency & 2 & 13 & 0 & 0 & 0 & 0 & 0 \\ & & & weight & 0.29 & 1.12 & 0.00 & 0.00 & 0.00 & 0.00 & 0.04 \\ & & Drift 3 & frequency & 79 & 0 & 0 & 0 & 0 & 0 & 0 \\ & & & weight & 28.43 & 0.74 & 0.01 & 0.00 & 0.00 & 0.01 & 0.21 \\ \hline 50 & 0.005 & Drift $1^{\ast}$ & frequency & \bf{841} & 59 & 3 & 0 & 2 & 0 & 7 \\ \multicolumn{2}{r}{$(n=10000)$} & & weight & \bf{62.80} & 5.30 & 0.30 & 0.00 & 0.19 & 0.00 & 0.59 \\ & & Drift 2 & frequency & 3 & 13 & 0 & 0 & 0 & 0 & 0 \\ & & & weight & 0.31 & 1.15 & 0.00 & 0.00 & 0.00 & 0.00 & 0.00 \\ & & Drift 3 & frequency & 72 & 0 & 0 & 0 & 0 & 0 & 0 \\ & & & weight & 28.46 & 0.76 & 0.01 & 0.00 & 0.00 & 0.00 & 0.12 \\ \hline \end{tabular} \label{simu:tab3} \end{center} \end{table} 
\clearpage \begin{table}[t] \begin{center} \caption{The mean of model weight $w_{m_{1},m_{2}}$ and model selection frequencies for various situations. The true model consists of Scale 3 and Drift 2.} \begin{tabular}{r r l l r r r r} \hline $T_{n}$ & $h_{n}$ & & & Scale 1 & Scale 2 & Scale $3^{\ast}$ & Scale 4 \\ \hline 10 & 0.01 & Drift 1 & frequency & 3 & 38 & 169 & 27 \\ \multicolumn{2}{r}{$(n=1000)$} & & weight & 0.54 & 5.07 & 25.09 & 7.94 \\ & & Drift $2^{\ast}$ & frequency & 12 & 72 & {\bf 548} & 131 \\ & & & weight & 0.91 & 5.55 & {\bf 39.94} & 13.28 \\ & & Drift 3 & frequency & 0 & 0 & 0 & 0 \\ & & & weight & 0.10 & 0.50 & 0.78 & 0.30 \\ \hline 10& 0.005 & Drift 1 & frequency & 1 & 36 & 174 & 28 \\ \multicolumn{2}{r}{$(n=2000)$} & & weight & 0.38 & 4.84 & 26.64 & 6.94 \\ & & Drift $2^{\ast}$ & frequency & 11 & 70 & {\bf 557} & 123 \\ & & & weight & 0.81 & 5.29 & {\bf 42.01} & 11.51 \\ & & Drift 3 & frequency & 0 & 0 & 0 & 0 \\ & & & weight & 0.07 & 0.45 & 0.82 & 0.24 \\ \hline 50 & 0.01 &Drift 1 & frequency & 0 & 0 & 68 & 24 \\ \multicolumn{2}{r}{$(n=5000)$} & & weight & 0.00 & 0.01 & 14.44 & 6.70 \\ & & Drift $2^{\ast}$ & frequency & 0 & 1 & {\bf 659} & 248 \\ & & & weight & 0.00 & 0.09 & {\bf 54.09} & 24.66 \\ & & Drift 3 & frequency & 0 & 0 & 0 & 0 \\ & & & weight & 0.00 & 0.00 & 0.00 & 0.00 \\ \hline 50 & 0.005 & Drift 1 & frequency & 0 & 0 & 69 & 20 \\ \multicolumn{2}{r}{$(n=10000)$} & & weight & 0.00 & 0.01 & 15.31 & 5.84 \\ & & Drift $2^{\ast}$ & frequency & 0 & 1 & {\bf 684} & 226\\ & & & weight & 0.00 & 0.09 & {\bf 57.41} & 21.35 \\ & & Drift 3 & frequency & 0 & 0 & 0 & 0 \\ & & & weight & 0.00 & 0.00 & 0.00 & 0.00 \\\hline \end{tabular} \label{simu:tab4} \end{center} \end{table} \clearpage \bibliographystyle{abbrv}
\section{Introduction} Community Question Answering (CQA) platforms, such as Quora\footnote{\url{https://www.quora.com}}, Yahoo! Answers\footnote{\url{https://answers.yahoo.com}}, and Stack Overflow\footnote{\url{https://www.stackoverflow.com}}, are Internet-based crowdsourcing services which enable users to post questions and seek answers from other users. The rapid growth of CQA platforms has attracted much research attention in recent years, such as answer ranking \cite{lyu2019we}, expert finding \cite{li2021askme}, and question retrieval \cite{ruckle2019improved}. The content in these platforms is usually organized as a question and a list of answers, associated with meta-data like topic categories of the question and users' information. In this paper, we define a question and all answers corresponding to this question as a CQA text. CQA texts are often ambiguous, especially with respect to the frequent occurrence of named entities. Specifically, a textual name in CQA texts may refer to many different entities in the real world, and a named entity may be expressed as various surface forms in CQA texts. Nevertheless, entity linking (EL), a popular text disambiguation task that aims to map entity mentions in text to their corresponding entities in a knowledge base, is rarely investigated in the CQA environment. To derive a better understanding of CQA texts, we define a new task, Community Question Answering entity linking (CQAEL), which links textual entity mentions detected in CQA texts to their corresponding named entities in a knowledge base (KB). For the example shown in Figure \ref{fig:CQAEL}, in the CQA text $Z_2$, the entity mention ``Roosevelt'' in question $q$ may refer to the $32$nd president of the United States ``Franklin Delano Roosevelt'', the $26$th president of the United States ``Theodore Roosevelt'', or many other named entities which could be referred to as ``Roosevelt''.
The goal of the CQAEL task is to link this ambiguous entity mention with its corresponding named entity ``Franklin Delano Roosevelt''. \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{CQAEL.eps} \caption{An illustration for the task of CQAEL. A CQA text is composed of a question and several parallel answers; A CQA text is accompanied with several topic tags, each of which may be involved in other questions; Each answer is associated with a user, who could ask or answer other questions; Entity mentions detected in CQA texts that need to be linked are \underline{underlined}; Candidate mapping entities in a knowledge base for each entity mention are shown via a dashed arrow line and circle; Correct corresponding entities are in \textbf{boldface}.} \label{fig:CQAEL} \end{figure*} CQAEL bridges CQA with KB, and also contributes to both. On one hand, CQAEL plays a fundamental role in a wide range of CQA tasks, such as expert finding \cite{li2021askme}, answer ranking \cite{lyu2019we}, and answer summarization \cite{song2017summarizing}. For example, finding users with expertise to answer some question is a central issue in the CQA platform. Linking ambiguous mentions in the question to their corresponding entities in KBs can further enhance the understanding of the question by leveraging the background knowledge of mapping entities provided by KBs. In this way, we can recommend the question to the best-matching expert user. On the other hand, KB enrichment \cite{cao2020open} also benefits from CQAEL. To be specific, CQA texts embody invaluable knowledge about named entities. Developing an EL model capable of bridging CQA texts with KBs can help to enrich and update existing KBs significantly. Nevertheless, CQA texts pose several challenges to entity linking. First, CQA texts (especially questions) are often concise and short, which makes it difficult for them to provide ample contextual information for computing context similarity in EL.
Second, CQA texts are usually informal. Fortunately, CQA platforms involve various informative auxiliary data including parallel answers and two types of meta-data (i.e., topic tags and users), which have been proven effective for many CQA tasks \cite{tomasoni2010metadata,zhou2015learning}. We believe these different kinds of auxiliary data could supplement valuable knowledge beneficial for entity linking. Specifically, sometimes an answer may be too short to offer sufficient context for disambiguating the entity mention in it. In that case, other answers under the same question (i.e., parallel answers) can be employed to enrich the context for the entity mention, as answers under the same question are usually relevant. We take the CQA text $Z_1$ in Figure \ref{fig:CQAEL} as an example. Answer $a_2$ in $Z_1$ is too short to supply enough context for linking mention ``AGI'' correctly. Meanwhile, its parallel answer $a_1$ in $Z_1$ is much longer and contains the phrase ``Artificial General Intelligence'', i.e., the full name of the entity mention ``AGI'', which prompts the linking system to map ``AGI'' to its correct entity by taking this parallel answer into account. Moreover, in the CQA platform, topic tags can be added to each CQA text to summarize its basic topics in a few keywords \cite{zhou2015learning}. In Figure \ref{fig:CQAEL}, the CQA text $Z_2$ is associated with topic tag $t_1$ ``World War II''. Since Franklin Delano Roosevelt was the president of the United States during the Second World War, the entity mention ``Roosevelt'' in question $q$ of $Z_2$ probably refers to the entity ``Franklin Delano Roosevelt'' rather than ``Theodore Roosevelt'' given the indication of this topic tag. Additionally, users usually ask or answer questions according to their individual interests or experience \cite{lyu2019we}.
From the questions that user $u_{1}$ (named ``Allen Marian'') in the CQA text $Z_3$ has asked or answered in the past, we can infer that this user is interested in ``Sports''. The entity mention ``Michael Jordan'' in answer $a_{1}$ of $Z_3$, given by user $u_{1}$, is thus likely to refer to the basketball player rather than the scientist ``Michael I. Jordan'' or the actor ``Michael B. Jordan''. However, it is impractical and unreasonable to directly take the auxiliary data as an expansion of the context for the entity mention since the auxiliary data are numerous and noisy. For instance, there are usually thousands of questions under one topic tag. Moreover, a user is often interested in multiple domains, so the questions a user has asked or answered are not all relevant to the entity mention that needs to be linked. Accordingly, the biggest challenge of the CQAEL task is how to effectively exploit the auxiliary data to aid entity linking. Traditional entity linking methods \cite{shen2021entity} primarily focus on linking entities in news documents, and are suboptimal on this new task since they do not consider how to make use of the various informative auxiliary data specially existing in the CQA platform. To deal with the above issue, we propose a novel transformer-based framework to harness the abundant knowledge embedded in different kinds of auxiliary data to improve entity linking. Specifically, a base module is derived to cross-encode the context of the entity mention and the candidate entity description to perform deep cross-attention between each other. To effectively exploit the auxiliary data, we propose an auxiliary data module, which is able to capture semantic relationships between the candidate entity description and different kinds of auxiliary data (i.e., parallel answers, topic tags, and users) in a unified manner.
This auxiliary data module can provide effective and complementary linking evidence mined from auxiliary data and is flexible to be integrated with other EL models, which has been verified by our experiments. The main contributions of this paper are as follows. \begin{itemize} \item We are among the first to explore the task of Community Question Answering entity linking (CQAEL), a new and important problem due to its broad applications. \item We propose a novel transformer-based framework which can leverage different kinds of auxiliary data in the CQA platform effectively to enhance the linking performance. \item We construct a finely-labeled data set named QuoraEL via crawling from Quora for the CQAEL task. Extensive experiments validate the superiority of our proposed framework against state-of-the-art EL methods. We release the data set and codes to facilitate the research towards this new task\footnote{\url{https://github.com/yhLeeee/CQA_EntityLinking}}. \end{itemize} \section{Task and Dataset} \subsection{Task Definition} \label{sec:taskdef} A CQA text denoted by $Z$ includes a question $q$ and a set of its corresponding answers $A = \{a_1, a_2, ..., a_n\}$, where $a_i$ is the $i$-th answer of $q$ (1 $\le $ $i$ $\le $ $n$), i.e., $Z = \{q, A\}$. For each answer $a_i$, the other answers under the same question are considered as its parallel answers $\mathcal{A} = A - \{a_i\}$. Each CQA text is associated with a set of topic tags $T = \{t_1, t_2, ..., t_l\}$, where $t_j$ denotes the $j$-th topic tag (1 $\le j \le l$). We define topic meta-data as $MetaT = \{(t_j, Q_{t_j}) \mid \forall t_j \in T\}$, where $Q_{t_j}$ is a set of questions involving topic tag $t_j$. Additionally, we denote the set of users as $U = \{u_1, u_2, ..., u_n\}$, where $u_i$ is the user who gives answer $a_i$. We define user meta-data as $MetaU = \{(u_i, Q_{u_i}) \mid \forall u_i \in U\}$, where $Q_{u_i}$ represents a set of questions asked or answered by user $u_i$. 
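The notation above can be mirrored directly in code; the following minimal sketch is our own illustration (field, type, and function names are not from any released implementation):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CQAText:
    # Z = {q, A}, together with its topic tags T and one user per answer.
    question: str
    answers: List[str]
    topics: List[str] = field(default_factory=list)
    users: List[str] = field(default_factory=list)

    def parallel_answers(self, i: int) -> List[str]:
        # The parallel answers of a_i are A - {a_i}.
        return [a for j, a in enumerate(self.answers) if j != i]

# Meta-data as defined in the text: a topic tag maps to the questions
# involving it (MetaT), and a user maps to the questions the user has
# asked or answered (MetaU).
MetaT = Dict[str, List[str]]
MetaU = Dict[str, List[str]]
```

For instance, for a CQA text with three answers, `parallel_answers(1)` returns the first and third answers, i.e., the auxiliary context available when linking mentions in the second answer.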
We regard $\mathcal{A}$, $MetaT$, and $MetaU$ as three kinds of auxiliary data involved in the CQA platform used in this paper. Formally, given a CQA text $Z$ in which a set of entity mentions $M = \{m_1, m_2, ..., m_{|M|}\}$ are identified in advance and a KB containing a set of named entities $E = \{e_1, e_2, ..., e_{|E|}\}$, the task of CQAEL is to find a mapping $M \mapsto E$ that links each entity mention to its corresponding entity in the KB, via leveraging different kinds of auxiliary data in the CQA platform. Before linking, for each entity mention, a small set of potential mapping entities are first chosen by candidate generation methods to prune the search space. Following the previous works \cite{phan2017neupl,fang2019joint}, we adopt the dictionary-based method to generate a set of candidate mapping entities $E_{m}$ for each entity mention $m$. \subsection{Dataset Construction} We create a new data set named QuoraEL to support the study of the CQAEL task. We choose Quora as the data source, which is one of the largest CQA platforms. We use a two-step data set collection process. For the first step, we extract CQA texts as well as their associated topic tags and users via crawling question pages of Quora. To collect two types of meta-data for CQA texts, in the second step we crawl topic meta-data (i.e., a set of questions involving each topic tag) and user meta-data (i.e., a set of questions asked or answered by each user) from topic pages and user homepages of Quora, respectively. We use the July 2019 version of the Wikipedia dump as the reference KB. To annotate entity mentions appearing in the CQA text with their corresponding entities, we perform a two-stage labeling process with automatic labeling first and human labeling later. To be specific, we use the Stanford CoreNLP package\footnote{\url{https://stanfordnlp.github.io/CoreNLP/}} to automatically recognize and link entity mentions to their corresponding Wikipedia entities. 
Human annotators are introduced next to correct any false labels. The final QuoraEL data set consists of $504$ CQA texts in total, in which there are $2192$ answers, $8030$ labeled entity mentions, and $1165$ topic tags. On average, each CQA text contains $4.35$ answers, $15.93$ labeled entity mentions, and $2.31$ topic tags. For the meta-data, we keep at most $10$ questions for each topic tag and $20$ questions for each user ($10$ questions asked and $10$ questions answered by the user). \section{The Proposed Framework} Our proposed framework is composed of two modules: a base module that mainly utilizes the context of the entity mention for disambiguation, and an auxiliary data module which employs different kinds of auxiliary data to facilitate linking in a unified manner. The overall architecture of our framework is shown in Figure \ref{fig:framework}. \subsection{Base Module} In the base module, two widely used entity linking features are considered, i.e., context similarity and prior probability. \paragraph{Context Similarity.} Measuring the similarity between the context of an entity mention (i.e., mention context) and the description text in an entity's Wikipedia page (i.e., entity description) is a crucial feature for entity linking. We propose to employ an XLNet-based cross-encoder to jointly encode the mention context and the candidate entity description to perform deep cross-attention between them. XLNet \cite{yang2019xlnet}, a common transformer-based model which integrates the segment recurrence mechanism and relative encoding scheme of Transformer-XL \cite{dai2019transformer} into pretraining, has shown remarkable effectiveness in many NLP tasks. Formally, given an entity mention $m$, for each candidate entity $e \in E_{m}$, we first concatenate the mention context $\mathcal{C}$, the candidate entity description $\mathcal{D}$, and two special tokens from the vocabulary of XLNet as an input sequence.
\texttt{[CLS]} indicates the start of the sequence, and \texttt{[SEP]} is added here to separate inputs from different segments. This concatenated string is encoded using XLNet to obtain $\mathbf{h}_{\mathcal{C}, \mathcal{D}}$, a representation for this context-description pair (from the \texttt{[CLS]} token). The context similarity feature $s_{ctxt}(m, e)$ is generated via a dot product with a learned parameter vector $\mathbf{w}_1$ as follows. \begin{equation} \begin{split} \textbf{h}_{\mathcal{C}, \mathcal{D}} = \textrm{XLNet}(\texttt{[CLS]} & \mathcal{C} \texttt{[SEP]} \mathcal{D} \texttt{[SEP]}), \\ s_{ctxt}(m, e) &= \mathbf{w}_1^{T} \mathbf{h}_{\mathcal{C}, \mathcal{D}} \end{split} \end{equation} \paragraph{Combination with Prior Probability.} Prior probability $p(e|m)$ is the probability of the appearance of a candidate entity $e$ given an entity mention $m$ without considering the context where the mention appears. It is estimated as the proportion of links with the mention form $m$ as anchor text pointing to the candidate entity $e$ in Wikipedia. We combine the context similarity feature with the prior probability feature as follows: \begin{equation} \label{eq:basemodel} s(m, e) = f(s_{ctxt}(m, e), p(e|m)) \end{equation} \noindent where $f(\cdot)$ is a single-layer feed-forward network and $s(m, e)$ is the ranking score for each candidate entity. \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{model.eps} \caption{The overall architecture of our framework.} \label{fig:framework} \end{figure} It is worth mentioning that the functionality of this base module is the same as that of most previous EL methods, all of which output a ranking score for each candidate entity based on the mention context. Many effective EL solutions have been proposed in recent years \cite{shen2021entity}. Therefore, this base module is not the main focus of our paper and its design principle is simple and replaceable.
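As an illustrative sketch of the score combination above, the following dependency-free snippet stubs out the XLNet encoder with fixed vectors; all names and numbers are hypothetical and only the combination logic mirrors the equations.

```python
def dot(u, v):
    """Inner product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

def context_similarity(h_cd, w1):
    """s_ctxt(m, e) = w1^T h_{C,D}; h_{C,D} stands in for the cross-encoder's
    [CLS] representation (replaced by fixed numbers in this sketch)."""
    return dot(w1, h_cd)

def combine(s_ctxt, prior, w, b):
    """s(m, e) = f(s_ctxt(m, e), p(e|m)): the single-layer feed-forward
    network written out as an explicit affine map on the two features."""
    return w[0] * s_ctxt + w[1] * prior + b

# Toy example: two hypothetical candidate entities for one mention.
h = {"e1": [0.2, 0.9], "e2": [0.8, 0.1]}   # stand-ins for [CLS] vectors
prior = {"e1": 0.7, "e2": 0.3}             # p(e|m), e.g. from anchor statistics
w1, w, b = [1.0, 1.0], [0.5, 0.5], 0.0

scores = {e: combine(context_similarity(h[e], w1), prior[e], w, b) for e in h}
best = max(scores, key=scores.get)         # candidate with the highest s(m, e)
```

In the toy numbers, the candidate with both a stronger context score and a higher prior wins, which is the behaviour the learned weights are meant to trade off.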
Our experiments have validated that it is replaceable with other deep learning based EL models. \subsection{Auxiliary Data Module} \label{sec:auxiliarymodule} Besides the mention context utilized by the aforementioned base module, there are various informative auxiliary data in the CQA platform, such as parallel answers and two types of meta-data (i.e., topic tags and users), which contain valuable knowledge beneficial for entity linking. Accordingly, we propose an auxiliary data module to effectively capture the knowledge delivered by different kinds of auxiliary data in a unified manner. This module can provide effective and complementary linking evidence mined from auxiliary data and be flexibly integrated with other EL models, which has been verified by our experiments. In our task setting, each kind of auxiliary data is composed of multiple texts. For instance, parallel answers are a set of answer texts and both types of meta-data are sets of question texts. A na\"ive strategy is to take multiple texts as input directly and apply the XLNet-based cross-encoder used in the base module once per text. However, auxiliary data are numerous. Directly applying this cross-encoder on multiple texts requires encoding each text-description pair separately, which brings high time complexity. Moreover, auxiliary data are noisy. Not all the texts in the auxiliary data are relevant to the given entity mention and helpful for its linking. Treating them all equally may introduce noise. To effectively exploit different kinds of auxiliary data, we start by selecting several useful texts from each kind of auxiliary data to eliminate uninformative texts, and then another cross-encoder is introduced to jointly encode the candidate entity description and these selected useful texts in a single pass to derive the auxiliary data similarity. \paragraph{Useful Text Selection.} The basic idea is to select a subset of texts which are similar to the mention context from each kind of auxiliary data.
The selected ones are regarded as useful texts which may expand the mention context and contribute to the linking process. Three common string similarity measures are adopted here, i.e., the Difflib ratio, Jaro-Winkler similarity, and the Levenshtein ratio. Given an entity mention with its associated mention context and a kind of auxiliary data, each text involved in the auxiliary data is scored by averaging these three string similarity measures between the mention context and the text. Ultimately, we regard the top-$k$ texts with the highest string similarity scores as useful texts for each kind of auxiliary data, where $k$ is a hyperparameter. \paragraph{Auxiliary Data Similarity.} Given the selected useful texts for each kind of auxiliary data, we regard them as a valid expansion of the mention context and calculate an auxiliary data similarity feature to indicate the proximity between this kind of auxiliary data and the candidate entity. Intuitively, the more semantically similar the auxiliary data are to the description text of a candidate entity, the more likely that candidate entity is the correct one. Unfortunately, the concatenation of the candidate entity description and these selected useful texts usually exceeds the token length limit of many transformer-based models such as XLNet. To tackle this issue, we propose to exploit a Longformer-based cross-encoder to jointly encode the candidate entity description and the selected useful texts to perform deep cross-attention between them. Longformer \cite{beltagy2020longformer} is a modified transformer-based model that replaces the quadratic self-attention mechanism with a memory-efficient version, which combines local attention with sparse global attention.
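The useful-text-selection step can be sketched as follows. This illustrative snippet uses Python's standard-library \texttt{difflib} ratio together with a hand-rolled Levenshtein ratio and, for brevity, omits the Jaro-Winkler measure (which would require an external package); it is a sketch of the idea, not our exact implementation.

```python
import difflib

def levenshtein_ratio(a: str, b: str) -> float:
    """Similarity in [0, 1] derived from edit distance (pure-Python stand-in)."""
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))          # DP row for edit distance
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,             # deletion
                           cur[j - 1] + 1,          # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return 1.0 - prev[-1] / max(len(a), len(b))

def select_useful_texts(context: str, texts, k: int):
    """Score each auxiliary text against the mention context and keep top-k."""
    def score(t):
        return (difflib.SequenceMatcher(None, context, t).ratio()
                + levenshtein_ratio(context, t)) / 2
    return sorted(texts, key=score, reverse=True)[:k]

ctx = "best laptop for python programming"
aux = ["best laptop for programming", "how to cook pasta", "python on a laptop"]
top = select_useful_texts(ctx, aux, k=2)
```

With these toy strings, the clearly related texts are kept and the unrelated one is discarded.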
We choose Longformer due to the fact that it performs well in capturing semantic relationships between texts in a one-to-many setting \cite{deyoung-etal-2021-ms}, which is similar to ours, and it also allows us to encode thousands of tokens or more without prohibitive memory and time costs. In our setting, an entity mention $m$ is associated with three kinds of auxiliary data, each corresponding to a list of selected useful texts $q_1, q_2, ..., q_k$. Here, these useful texts are in descending order of their string similarity scores. As different kinds of auxiliary data have the same format (i.e., a list of texts), we process them in the same way. In the following, we take topic meta-data $MetaT$ as an example for illustration. For each candidate entity $e$, we concatenate the candidate entity description $\mathcal{D}$, the selected useful texts of $MetaT$, and some special tokens as an input sequence. In Longformer, \texttt{[CLS]} and \texttt{[SEP]} are replaced by \texttt{<s>} and \texttt{</s>}. Additionally, \texttt{<d>}, \texttt{</d>} and \texttt{<q>}, \texttt{</q>} are special tokens representing description start and end, text start and end, respectively. The new special tokens are added to the model's vocabulary and randomly initialized before task fine-tuning. As suggested by \cite{beltagy2020longformer}, we assign global attention to the \texttt{<s>} token, and a sliding attention window of $64$ tokens allows each token to attend to its neighbors. We encode this sequence using Longformer to get $\mathbf{h}_{\mathcal{D}, MetaT}$, which is the output of the last hidden layer corresponding to \texttt{<s>}. The auxiliary data similarity feature $s_{aux\_t}(m, e)$ for topic meta-data is then obtained via a dot product with a learned parameter vector $\mathbf{w}_2$ as follows. \begin{equation} \begin{split} \textbf{h}_{\mathcal{D}, MetaT} = \textrm{L}&\textrm{ongformer}(\texttt{<s>}\texttt{<d>} \mathcal{D} \texttt{</d>} \texttt{</s>}\\ &\texttt{<q>}q_1 \texttt{</q>} ...
\texttt{<q>} q_k \texttt{</q>} \texttt{</s>}), \\ s_{aux\_t}&(m, e) = \mathbf{w}_2^T \mathbf{h}_{\mathcal{D}, MetaT} \end{split} \end{equation} \noindent In the same unified manner, we derive the other two auxiliary data similarity features $s_{aux\_p}(m, e)$, $s_{aux\_u}(m, e)$ for parallel answers and user meta-data, respectively. \subsection{Learning} At this point, we have obtained three auxiliary data similarity features, one per kind of auxiliary data. Thereafter, we extend Equation \ref{eq:basemodel} to Equation \ref{eq:auxmodel} by concatenating the features involved in the base module with those three auxiliary data similarity features as follows: \begin{equation} \begin{split} \label{eq:auxmodel} s(m, e) = f(&s_{ctxt}(m, e), p(e|m), \\ &s_{aux\_p}(m, e), s_{aux\_t}(m, e), s_{aux\_u}(m, e)) \end{split} \end{equation} \noindent Subsequently, we normalize $s(m, e)$ using a softmax function and choose the candidate entity $e^*$ with the highest ranking score as the corresponding entity for the entity mention $m$ based on the following formulas: \begin{equation} \hat{s}(m, e) = \frac{\exp(s(m, e))}{\sum_{e' \in E_m} \exp(s(m, e'))} \end{equation} \begin{equation} e^* = \mathop{argmax}\limits_{e' \in E_{m}} (\hat{s}(m, e')) \end{equation} \noindent where $E_m$ is the set of candidate entities for the entity mention $m$. We utilize a cross-entropy loss for training, which is defined as follows: \begin{equation} \mathcal{L} = \sum_{Z \in \mathcal{Z}} \sum_{m \in M} \sum_{e \in E_m} -(y \log \hat{s}(m, e)) \end{equation} \noindent where $\mathcal{Z}$ denotes a training set of CQA texts and $y \in \{0, 1\}$ denotes the actual label of the candidate entity. If the candidate entity $e$ is the correct corresponding entity for the entity mention $m$, the value of $y$ is $1$; otherwise $0$. The goal of learning is to minimize the loss $\mathcal{L}$. \section{Experiments} \subsection{Experimental Setup} We perform experiments on the newly created QuoraEL data set.
We use 5-fold cross-validation and split the CQA texts into training ($70$\%), validation ($10$\%), and testing ($20$\%). For training, we adopt the AdamW \cite{loshchilov2018decoupled} optimizer with a warmup rate of $0.1$, an initial learning rate of $1$e-$5$, and a mini-batch size of $2$. Dropout with a probability of $0.1$ is used to alleviate over-fitting. For XLNet and Longformer, their parameters are initialized by the xlnet-base-cased and longformer-base-4096 models, respectively. For the base module, the maximum sequence length is set to $128$. We also experimented with $256$, which resulted in negligible improvement. For the auxiliary data module, the maximum lengths of the candidate entity description and each text are set to $128$ and $64$, respectively. The hyperparameter $k$ is set to $3$; its impact on performance is studied later. Based on our task definition, entity mentions have been recognized in advance and given as the input of the task. Therefore, we adopt accuracy as the evaluation metric, calculated as the number of correctly linked entity mentions divided by the total number of all the input entity mentions, the same as many state-of-the-art EL methods \cite{shen2021entity}. After training for $10$ epochs, we select the model with the best performance on the validation set and evaluate its performance on the test set. All experiments are implemented with the MindSpore framework\footnote{\url{https://www.mindspore.cn/en}} on two NVIDIA GeForce RTX 3090 (24GB) GPUs.
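The evaluation metric reduces to a simple ratio; the following sketch uses hypothetical mentions and entities purely for illustration.

```python
def linking_accuracy(predictions, gold):
    """Accuracy = correctly linked mentions / all input entity mentions."""
    assert predictions.keys() == gold.keys()
    correct = sum(predictions[m] == gold[m] for m in gold)
    return correct / len(gold)

# Hypothetical gold links and system predictions for four mentions.
gold = {"m1": "Quora", "m2": "Wikipedia", "m3": "XLNet", "m4": "BERT"}
pred = {"m1": "Quora", "m2": "Wikipedia", "m3": "XLNet", "m4": "GPT-2"}
acc = linking_accuracy(pred, gold)  # 3 of 4 mentions linked correctly -> 0.75
```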
\begin{table}[t] \centering \begin{tabular}{l|c|c} \toprule \textbf{Models} & Base Setting & Aux Setting \\ \midrule Deep-ED (2017) & 82.56 & 82.97 \\ Ment-Norm (2018) & 82.99 & 83.19 \\ Zeshel (2019) & 88.72 & 88.91 \\ REL (2020) & 80.49 & 81.05 \\ FGS2EE (2020) & 82.59 & 83.07 \\ BLINK (2020) & 87.97 & 87.92 \\ GENRE (2021) & 86.26 & 87.06 \\ \midrule Base Module (ours) & \textbf{89.37} & - \\ Full Module (ours) & - & \textbf{92.02} \\ \bottomrule \end{tabular}% \caption{Effectiveness performance.} \label{tab:effectivenessstudy}% \end{table}% \subsection{Effectiveness Study} We compared our proposed framework with the following state-of-the-art EL models: \begin{itemize} \setlength{\itemsep}{0pt} \item \textbf{Deep-ED} \cite{ganea2017deep} utilizes CRF for joint entity linking and solves the global training problem via truncated fitting LBP. \item \textbf{Ment-Norm} \cite{le2018improving} improves Deep-ED by encoding relations between mentions via latent variables. \item \textbf{Zeshel} \cite{logeswaran2019zero} leverages BERT cross-encoder to address EL in the zero-shot setting. \item \textbf{REL} \cite{van2020rel} is one of the most prominent open source toolkits for EL. \item \textbf{FGS2EE} \cite{hou2020improving} injects fine-grained semantic information into entity embeddings to facilitate EL. \item \textbf{BLINK} \cite{wu2020scalable} utilizes both BERT bi-encoder and cross-encoder for entity linking. \item \textbf{GENRE} \cite{de2021autoregressive} introduces a sequence-to-sequence EL model based on BART \cite{lewis2020bart}. \end{itemize} Table \ref{tab:effectivenessstudy} shows the linking accuracy of all models, in which full module denotes the combination of our base module and our auxiliary data module. The linking performance of all baselines is obtained via running their open-source solutions except REL, which provides a publicly available API. To give a thorough evaluation, we conduct comparison in two settings. 
One is called the Base setting, where only the mention context is accessible, and the other is called the Aux setting, where auxiliary data are accessible as well. The Base setting is the same as the traditional EL setting, while the Aux setting is specific to the CQA environment. For all baselines in the Aux setting, to ensure a fair comparison, we expand the mention context using the three kinds of auxiliary data and keep their model input consistent with our framework. In summary, our transformer-based framework consistently surpasses all the state-of-the-art EL baselines in both settings, demonstrating the superiority of our framework in tackling the CQAEL task. To be specific, our base module achieves better performance than several competitive baselines such as Zeshel and BLINK, which are elaborately designed for modeling context similarity, since these two baselines do not leverage prior probability. In addition, we observe that all the baselines yield only slightly better or even worse results after adding auxiliary data. This may be attributed to the fact that auxiliary data are noisy, and these traditional EL models cannot effectively capture the useful knowledge scattered in the auxiliary data to aid EL. Nevertheless, our full module gains $2.65$ absolute percentage points by integrating our auxiliary data module, exhibiting its effectiveness in leveraging the auxiliary data to enhance the linking performance.
\begin{table}[t] \centering \small \begin{tabular}{llcc} \toprule \multicolumn{2}{l}{\multirow{2}[4]{*}{\textbf{Models}}} & \multicolumn{2}{c}{Accuracy (\%)} \\ \cmidrule{3-4} \multicolumn{2}{l}{} & Total & $\bigtriangleup$ \\ \midrule \multicolumn{2}{l}{Base Module} & 89.37 & - \\ \multicolumn{2}{l}{\quad $+$ \textit{Parallel answers}} & 91.55 & +2.18 \\ \multicolumn{2}{l}{\quad $+$ \textit{User}} & 91.26 & +1.89 \\ \multicolumn{2}{l}{\quad $+$ \textit{Topic}} & 91.77 & +2.40 \\ \multicolumn{2}{l}{\quad $+$ \textit{User, Parallel answers}} & 91.61 & +2.24 \\ \multicolumn{2}{l}{\quad $+$ \textit{Topic, User}} & 91.89 & +2.52 \\ \multicolumn{2}{l}{\quad $+$ \textit{Topic, Parallel answers}} & 91.76 & +2.39 \\ \midrule \multicolumn{2}{l}{Full Module} & 92.02 & +2.65 \\ \midrule \midrule \multicolumn{2}{l}{Deep-ED \cite{ganea2017deep}} & 82.56 & - \\ \multicolumn{2}{l}{\quad $+$ \textit{Our Auxiliary Data Module}} & 88.16 & +5.60 \\ \midrule \midrule \multicolumn{2}{l}{Zeshel \cite{logeswaran2019zero}} & 88.72 & - \\ \multicolumn{2}{l}{\quad $+$ \textit{Our Auxiliary Data Module}} & 91.49 & +2.77 \\ \bottomrule \end{tabular}% \caption{Ablation performance.} \label{tab:ablationstudy}% \end{table}% \subsection{Ablation Study} At the top of Table \ref{tab:ablationstudy}, we report the performance of our framework leveraging different kinds of auxiliary data. From the experimental results, we can see that each kind of auxiliary data has a positive contribution to the linking performance, and when all the three kinds of auxiliary data are consolidated together in our full module, it yields the best performance. To study the effectiveness and flexibility of our auxiliary data module, we replace our base module with two popular traditional EL models (i.e., Deep-ED and Zeshel) and make them work with our auxiliary data module, whose results are shown at the bottom of Table \ref{tab:ablationstudy}. 
Specifically, the ranking score output by the EL model is concatenated with the three auxiliary data similarity features of our auxiliary data module, and then passed through a single-layer feed-forward network, similar to Equation \ref{eq:auxmodel}. It can be seen from Table \ref{tab:ablationstudy} that we obtain significant performance gains when our auxiliary data module is combined with each of the two traditional EL models, which clearly demonstrates the effectiveness and flexibility of our auxiliary data module. This confirms that our auxiliary data module indeed provides effective and complementary linking evidence delivered by different kinds of auxiliary data to help not only our base module, but also other EL models to promote their linking performance. \begin{figure}[t] \centering \includegraphics[width=0.29\textwidth]{k.eps} \caption{Effects of the hyperparameter $k$.} \label{fig:hyper} \end{figure} \subsection{Hyperparameter Study} To investigate the effect of the hyperparameter $k$, i.e., the number of selected useful texts in our auxiliary data module, we conduct experiments with $k$ varied from $0$ to $5$, and plot the results in Figure \ref{fig:hyper}. We can see that our framework generally performs better with a larger $k$, and achieves high and stable accuracy (about $0.92$) when $k \ge 3$. This is because more selected useful texts bring more helpful linking evidence, which leads to better performance. \section{Related Work} Entity linking has gained increasing attention as a fundamental task to bridge unstructured text with knowledge bases, such as Wikipedia and Freebase. It acts as an important pre-processing step for many downstream knowledge-driven applications. A typical EL system often consists of two stages: candidate generation and entity ranking \cite{shen2014entity}. During the entity ranking stage, the key is to measure the similarity between the mention context and the entity description.
Early EL works \cite{ratinov2011local} employ hand-crafted features to model this textual coherence. With the extensive application of deep learning, EL studies have turned to neural network-based methods. Bi-encoders based on CNN \cite{xue2019neural}, RNN \cite{fang2019joint}, and pre-trained language models \cite{fang2020high} are leveraged to encode those two text parts (i.e., mention context and entity description) individually. In this case, the semantic relationships between these two text parts are not fully exploited. Recent EL works \cite{wu2020scalable,tang2021bidirectional} employ BERT to cross-encode these two text parts to perform deep cross-attention between them and achieve advanced performance. Besides common news documents, entity mentions also appear in multi-source heterogeneous data. Recently, entity linking for web tables \cite{bhagavatula2015tabel}, tweets \cite{ran2018attention}, open knowledge bases \cite{liu2021joint}, and multimodal data \cite{gan2021multimodal} has been explored successively. The work of \cite{wang2017named} links only the entity mentions in the question of a CQA text and does not consider the two types of meta-data (i.e., topic tags and users) specific to the CQA platform, which differs from our setting. \section{Conclusion} In this paper, we present the new task of CQAEL and create a data set QuoraEL to foster further study. We propose a novel transformer-based framework which can effectively leverage different kinds of auxiliary data involved in the CQA platform. An auxiliary data module is proposed to provide effective and complementary linking evidence mined from auxiliary data, and it can be flexibly integrated with other EL models. Extensive experiments demonstrate the effectiveness of our framework against state-of-the-art EL methods. \section*{Acknowledgments} This work was supported in part by National Natural Science Foundation of China (No. U1936206), YESS by CAST (No. 2019QNRC001), and CAAI-Huawei MindSpore Open Fund.
{\small \bibliographystyle{named}
\section*{Abstract} We prove several results about the complexity of the role colouring problem. A {\em role colouring} of a graph $G$ is an assignment of colours to the vertices of $G$ such that two vertices of the same colour have identical sets of colours in their neighbourhoods. We show that the problem of finding a role colouring with $1< k <n$ colours is NP-hard for planar graphs. We show that restricting the problem to trees yields a polynomially solvable case, as long as $k$ is either constant or has a constant difference with $n$, the number of vertices in the tree. Finally, we prove that cographs are always $k$-role-colourable for $1<k\leq n$ and construct such a colouring in polynomial time. \section{Introduction} A {\em role colouring} of a graph $G$ is an assignment of colours to the vertices of $G$ such that two vertices of the same colour have identical sets of colours in their neighbourhoods. The concept arises from the study of social networks. Network science is an increasingly important application of graph theory and role colourings are a natural formulation of roles played by nodes in a real-world network~\cite{rossi2014role,sailer1979structural}. This structure was formalised by White and Reitz in terms of graph homomorphisms in \cite{white1983graph}, and developed extensively by Borgatti and Everett~\cite{borgatti1993two,borgatti1992notions,everett1994regular,everett1991role}. A fast, applicable algorithm for finding role colourings is proposed in~\cite{hummell1987strukturbeschreibung,burt1990detecting}. A homomorphism $h$ is said to be {\em locally surjective} if $h$ is surjective when restricted to the neighbourhood set of any vertex. Locally surjective homomorphisms are equivalent to role colourings and they appear in the literature under many other names, \emph{e.g.} role assignment \cite{van2010computing}, role equivalence \cite{burt1990detecting}, regular equivalence \cite{borgatti1993two}. 
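To make the definition concrete, a candidate colouring can be verified mechanically: any two vertices of the same colour must see exactly the same set of colours in their neighbourhoods. The following sketch (graphs as adjacency dictionaries; an illustration, not part of the paper's proofs) checks this condition.

```python
def is_role_colouring(adj, colour):
    """Return True iff `colour` is a role colouring of the graph `adj`:
    vertices of the same colour must have identical sets of colours
    appearing in their neighbourhoods."""
    palette_of = {}  # colour -> neighbourhood colour set of its first vertex
    for v, neighbours in adj.items():
        palette = frozenset(colour[u] for u in neighbours)
        if palette_of.setdefault(colour[v], palette) != palette:
            return False
    return True

# Path a - b - c - d.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}

# 1,2,2,1 is a 2-role-colouring: colour-1 vertices see {2}, colour-2 see {1,2}.
ok = is_role_colouring(path, {"a": 1, "b": 2, "c": 2, "d": 1})
# 1,1,2,2 is not: a (colour 1) sees {1}, but b (colour 1) sees {1, 2}.
bad = is_role_colouring(path, {"a": 1, "b": 1, "c": 2, "d": 2})
```

The monochrome colouring of any graph without isolated vertices also passes this check, matching the observation made below about trivial role colourings.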
Throughout this paper we use the language of graph colourings and we refer to a role colouring using $k$ colours as a $k$-role-colouring. We consider the computational problem associated with role colourings whose input is a graph $G$ and whose output is a partition of the vertices of $G$ into $k$ non-empty subsets satisfying the definition of a role colouring given above. We call this problem $k$-{\sc role-colourability}, or $k$-{\sc rolecol} for short. This problem differs from the more common {\sc colourability} problem in a few important ways. Firstly, every graph with no isolated vertex has a role colouring obtained by giving each vertex the same colour and every graph has a role colouring obtained by giving each vertex its own colour. Secondly, a $k$-role-colouring does not usually imply the existence of a $(k+1)$-role-colouring. This makes role colouring inherently different from the original graph colouring problem. Finding role colourings of a given size is known to be NP-complete in general \cite{roberts2001how, fiala2005complete}. For $k\geq 3$, the $k$-{\sc rolecol} problem is NP-complete when restricted to chordal graphs \cite{van2010computing}. However, 2-role-colouring can be solved in polynomial time for chordal graphs \cite{sheng2003two}. Not many other partial results on the complexity of role colouring are known. In fact, interval graphs and trees are the only non-trivial classes in which a polynomial solution is known to exist, and only for a constant number of colours. The rest of this paper is organised as follows. In Section~\ref{sec:planar}, we prove that $k$-{\sc rolecol} remains NP-complete even when restricted to planar graphs, a class that was suggested for examination in \cite{van2010computing} and is one of the most extensively studied in the literature.
In Section~\ref{sec:trees}, we give an explicit algorithm that computes a $k$-role-colouring of a tree in polynomial time, as long as $k$ is either constant or has a constant difference with $n$, the number of vertices in the tree. Finally, in Section~\ref{sec:cographs}, we show that every cograph (with at least $k$ vertices) has a $k$-role-colouring, and hence that the decision version of the problem is solvable in polynomial time in this class. Our proof is constructive and gives an explicit algorithm to construct such a colouring. \section{Planar Graphs} \label{sec:planar} In order to prove that $k$-{\sc rolecol} is NP-complete when restricted to planar graphs, we introduce the {\sc satisfiability problem}, defined below. A boolean formula $\phi$ (in {\em conjunctive normal form}) is a set of clauses $C_1,C_2,\ldots$, each of which is a set of variables $x_1,x_2,\ldots$. The variables may take values TRUE or FALSE. For a given assignment of these values to the variables, a clause is said to be {\em satisfied} if at least one of its variables is assigned the value TRUE. A formula is satisfied if each of its clauses is satisfied. The {\sc satisfiability} problem takes a boolean formula on $n$ variables as its input and asks if there is an assignment of TRUE and FALSE to the variables that satisfies the formula. The general {\sc satisfiability} problem was the first problem shown to be NP-complete \cite{cook1971complexity}, and remains a central problem in theoretical computer science. We will use a reduction from a certain restricted version of {\sc satisfiability}. In order to describe this restricted problem, we define the following graph-theoretic notion. The {\em formula graph} $G_\phi$ of a given formula $\phi$ is a bipartite graph whose vertices correspond to the clauses and variables of $\phi$ with an edge between $C$ and $x$ if the variable $x$ appears in the clause $C$.
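The formula graph can be built directly from the clause list; the following sketch (clauses given as sets of variable names, with illustrative labels) constructs $G_\phi$ as an adjacency dictionary.

```python
def formula_graph(clauses):
    """Build the bipartite formula graph G_phi as an adjacency dict:
    clause vertex 'Ci' is joined to variable x iff x occurs in clause C_i
    (negations are ignored, since G_phi only records which variable appears)."""
    adj = {}
    for i, clause in enumerate(clauses, 1):
        c = f"C{i}"
        adj.setdefault(c, set())
        for x in clause:
            adj[c].add(x)
            adj.setdefault(x, set()).add(c)
    return adj

# phi = (x1 or x2) and (x2 or x3), with clauses given as sets of variables.
g = formula_graph([{"x1", "x2"}, {"x2", "x3"}])
```

Since every edge joins a clause vertex to a variable vertex, the resulting graph is bipartite by construction.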
Let {\sc $k$-satisfiability} be the {\sc satisfiability} problem with the restriction that each clause contains at most $k$ variables. The {\sc $3$-satisfiability} problem is NP-complete even when restricted to formulas with planar formula graphs~\cite{lichtenstein1982planar}. In \cite{tovey1984simplified}, Tovey showed that the problem is NP-complete under the restriction that each clause has two or three variables and each variable appears at most three times. We call the corresponding problem {\sc $3*,3*$-satisfiability}. We now combine the restrictions imposed by Tovey and planarity to show that {\sc planar $3*,3*$-satisfiability}, which is {\sc $3*,3*$-satisfiability} with planar formula graphs, is also NP-complete. We list a couple of planarity-preserving operations that we will need throughout the coming proofs, in an easy lemma. See also~\cite{gross2005graph}. \begin{lemma}\label{planarop} If $G'$ is a graph created from a planar graph $G$ by any of the following operations, then $G'$ is planar. \begin{description} \item[(a)] Adding a path $x,z_1,\ldots,z_k,y$ where $x,y \in V(G)$, and $x,y$ share a face in some planar drawing of $G$, and $z_1,\ldots,z_k$ are new vertices. \item[(b)] Replacing a vertex $x \in V(G)$ with $d_G(x)=k$ by a cycle $z_1,\ldots,z_k$ with edges $z_i,z_{i+1}$, $1 \leq i \leq k-1$ and $z_k,z_1$, and edges $z_i,y_i$, $1 \leq i \leq k$, where $y_1,\ldots,y_k$ are the neighbours in $G$ of $x$ appearing in clockwise order in a planar drawing of $G$. \item[(c)] Attaching a new planar subgraph $H$ to $G$, such that $V(G')=V(G) \cup V(H)$, $E(G')=E(G) \cup E(H) \cup \{xz\}$, where $x \in V(G)$, $z \in V(H)$. \end{description} \end{lemma} \begin{proof} \begin{description} \item[(a)] Replacing an edge by a multi-edge does not destroy planarity and replacing an edge $xy$ by a path $x,z_1,\ldots,z_k,y$ clearly does not destroy planarity either.
\item[(b)] Cycles are planar and since the neighbours $y_1,\ldots,y_k$ and new vertices $z_1,\ldots,z_k$ are in the same clockwise order, the edges $z_i,y_i$, $1 \leq i \leq k$ do not cross each other or any new cycle-edges. \item[(c)] We take the disjoint union of $G$ and $H$ and draw $G$ such that $x$ is on the outer face and $H$ such that $z$ is on the outer face. Adding an edge between two vertices on the same face does not destroy planarity. \end{description} \end{proof} \begin{lemma} The {\sc planar $3*,3*$-satisfiability} problem is NP-complete. \end{lemma} \begin{proof} We follow the method~\cite{tovey1984simplified} of reducing any {\sc $3$-satisfiability} problem to a {\sc $3*,3*$-satisfiability} problem. The method is as follows. Suppose that variable $x$ appears in $k$ clauses. Create $k$ new variables $x_1,\ldots,x_k$ and replace the $i$th occurrence of $x$ with $x_i$, for $1 \leq i \leq k$. Append the clause $\{ x_i \lor \bar{x}_{i+1} \}$ for $1 \leq i \leq k-1$ and $\{ x_k \lor \bar{x}_1 \}$. This new set of clauses forces the variables $x_i$, $1 \leq i \leq k$, to be either all true or all false. Let $\phi$ be a {\sc planar $3$-satisfiability} problem, with a given planar drawing of $G_\phi$. Suppose that variable $x$ appears in $k$ clauses. Create $k$ new variables $x_1,\ldots,x_k$ and replace vertex $x$ in $G_\phi$ by the cycle on new vertices $x_1,\ldots,x_k$. Replace the $j_i$th occurrence of $x$ with $x_i$, for $1 \leq i \leq k$, such that the clauses $C_{j_1},\ldots,C_{j_k}$ appear in clockwise order in the neighbourhood of $x$ in our given planar drawing of $G_\phi$ (planarity-preserving operation (b) in Lemma \ref{planarop}). Finally, replace each edge $x_ix_{i+1}$ by a path $x_i, C_{x_i}, x_{i+1}$, for $1 \leq i \leq k-1$, and the edge $x_kx_{1}$ by the path $x_k, C_{x_k}, x_{1}$ (planarity-preserving operation (a) in Lemma \ref{planarop}). The clause vertices $C_{x_i}$ represent the new clauses $\{ x_i \lor \bar{x}_{i+1} \}$.
\end{proof} \begin{theorem}\label{th:rc} $k$-{\sc rolecol}, $k \geq 2$, is NP-hard for connected planar graphs. \end{theorem} \begin{proof} Let $\phi$ be a planar boolean formula on $n$ variables $x_1,x_2,\ldots,x_n$ having $m$ clauses $C_1,C_2,\ldots,C_m$, such that each variable appears at most three times and each clause is of size 2 or 3. Let $G_\phi$ be its formula graph. We will construct a related planar graph $G'_\phi$ that has a $k$-role-colouring if and only if $\phi$ is satisfiable. We split the proof into two cases. First suppose $k=2$. We construct $G'_\phi$ from $G_\phi$ as follows. To each clause vertex $C_j$ we add a path $a_j,b_j$ with edge $b_jC_j$ (lemma \ref{planarop} (c)). Since a variable appears in at most three clauses, at most one of $x_i$ and $\bar{x_i}$ appears twice. Furthermore, we can assume that a variable appears both positively and negatively: otherwise we could fix its value so as to satisfy every clause containing it, and remove those clauses. Consequently, one of $x_i$ and $\bar{x_i}$ appears exactly once. If $\bar{x_i}$ appears exactly once, in clause $C_j$, we replace the edge $x_i C_j$ by a path $x_i,\bar{x_i},C_j$. Otherwise, we have that $x_i$ appears exactly once, in which case we relabel the node $x_i$ to $\bar{x_i}$, and we replace the edge $\bar{x_i} C_j$ by a path $\bar{x_i},x_i,C_j$ (lemma \ref{planarop} (a)). Finally, we add a vertex $y_i$ to each pair $x_i$ and $\bar{x_i}$ to form a triangle (lemma \ref{planarop} (a)). See figure \ref{fig:2col}. For each variable $x_i$, the graph $G'_\phi$ contains a triangle. With a slight abuse of notation for the sake of readability, we label these vertices $x_i$,$\bar{x_i}$,$y_i$. For each clause $C_j$, $G'_\phi$ contains a path on three vertices labelled $a_j,b_j,C_j$. Again this slight abuse of notation will aid the reader, and we refer to the vertices $x_1,\bar{x_1},\ldots,x_n,\bar{x_n}$ as {\em literal vertices} and the vertices $C_1,\ldots,C_m$ as {\em clause vertices}.
We add an edge between a literal vertex $x$ and a clause vertex $C$ if the literal $x$ appears in the clause $C$. Suppose that $G'_\phi$ has a 2-role-colouring. Without loss of generality, the vertex $a_1$ is red. It is easy to see that $b_1$ cannot also be red, for otherwise red vertices could only have red neighbours, and every vertex would then be red, which would be a contradiction. Suppose $C_1$ is red. Then, since $b_1$ is blue and has only red neighbours, every blue vertex has only red neighbours; and since $a_1$ is red and its only neighbour $b_1$ is blue, every red vertex has only blue neighbours. Let $x$ be a neighbour of $C_1$ other than $b_1$. Since $x$ is a neighbour of the red vertex $C_1$, it must be coloured blue. But $x$ is contained in a triangle, the other two vertices of which must then be coloured red; these two red vertices are adjacent to each other, a contradiction. This shows that $C_1$ must be coloured blue, and we can deduce that each of the paths $a_j,b_j,C_j$ must be coloured red, blue, blue: the vertex $a_j$ has degree 1, and since a blue vertex must have at least one neighbour of each colour, $a_j$ must be coloured red.
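Arguments like the one above all rest on the defining condition of a role colouring: any two vertices with the same colour must see the same set of colours in their neighbourhoods. A small checker for this condition (our own helper; the graph is given as an adjacency dict) is useful for verifying such case analyses on concrete examples.

```python
def is_role_colouring(adj, colour):
    """Return True iff vertices of equal colour have equal sets of
    neighbourhood colours (the role-colouring condition)."""
    seen = {}  # colour -> frozenset of colours seen in its vertices' neighbourhoods
    for v, nbrs in adj.items():
        nbr_cols = frozenset(colour[u] for u in nbrs)
        if seen.setdefault(colour[v], nbr_cols) != nbr_cols:
            return False
    return True
```

As a side effect, the map `seen` built here records, for each colour, its neighbourhood in the role graph of the colouring.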
\FloatBarrier \begin{figure} \begin{center} \begin{tikzpicture}[thick,scale=0.7, every node/.style={scale=0.8}] \draw[black!50,line width=.8pt] (0,0) -- (1,0); \draw[black!50,line width=.8pt] (0,0) -- (.5,-.8); \draw[black!50,line width=.8pt] (1,0) -- (.5,-.8); \draw[black!50,line width=.8pt] (6,0) -- (7,0); \draw[black!50,line width=.8pt] (6,0) -- (6.5,-.8); \draw[black!50,line width=.8pt] (7,0) -- (6.5,-.8); \draw[black!50,line width=.8pt] (.5,4) -- (0,0); \draw[black!50,line width=.8pt] (.5,4) -- (.5,6); \draw[black!50,line width=.8pt] (.5,4) -- (6,0); \draw[black!50,line width=.8pt] (6.5,4) -- (6.5,6); \draw[black!50,line width=.8pt] (6.5,4) -- (7,0); \filldraw[fill=white, draw=black,line width=1pt] (0,0) circle (.25); \draw[fill=black!100,black!100] (1,0) circle (.25); \draw[fill=black!100,black!100] (.5,-.8) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (7,0) circle (.25); \draw[fill=black!100,black!100] (6,0) circle (.25); \draw[fill=black!100,black!100] (6.5,-.8) circle (.25); \draw[fill=black!100,black!100] (.5,4) circle (.25); \draw[fill=black!100,black!100] (.5,5) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (.5,6) circle (.25); \draw[fill=black!100,black!100] (6.5,4) circle (.25); \draw[fill=black!100,black!100] (6.5,5) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (6.5,6) circle (.25); \draw (-.6,0) node{$x_1$}; \draw (1.6,0) node{$\bar{x_1}$}; \draw (5.4,0) node{$x_2$}; \draw (7.6,0) node{$\bar{x_2}$}; \draw (.5,-1.3) node{$y_1$}; \draw (6.5,-1.3) node{$y_2$}; \draw (-.1,4) node{$C_1$}; \draw (5.9,4) node{$C_2$}; \draw (0,5) node{$b_1$}; \draw (6,5) node{$b_2$}; \draw (0,6) node{$a_1$}; \draw (6,6) node{$a_2$}; \end{tikzpicture} \caption{The graph $G'_\phi$ for the boolean formula $\phi = (x_1 \lor x_2) \land (\bar{x}_2)$ with a 2-role-colouring corresponding to a satisfying assignment where $x_1$ and $\bar{x}_2$ are true.}\label{fig:2col} \end{center} \end{figure} \begin{figure}
\begin{center} \begin{tikzpicture}[thick,scale=0.7, every node/.style={scale=0.8}] \node (x1) at (0,0){}; \draw[black!50,line width=.8pt] (0,0) -- (.45,0); \draw[black!50,line width=.8pt,dashed] (.95,0) -- (3.05,0); \draw[black!50,line width=.8pt] (3.55,0) -- (3.75,0); \draw[black!50,line width=.8pt] (4.25,0) -- (4.7,0); \draw[black!50,line width=.8pt] (0,0) -- (1.7,-1.5); \draw[black!50,line width=.8pt] (4.7,0) -- (1.7,-1.5); \draw[black!50,line width=.8pt] (1.7,-1.5) -- (1.7,-2.9); \draw[black!50,line width=.8pt,dashed] (1.7,-4.2) -- (1.7,-2.9); \draw[fill=black!100,black!100] (4.7,0) circle (.25); \draw[pattern=horizontal lines,line width=1pt] (.7,0) circle (.25); \filldraw[fill=black!20, draw=black,line width=1pt] (2,0) circle (.25); \draw[pattern=vertical lines,line width=1pt] (3.3,0) circle (.25); \draw[fill=black!100,black!100] (4,0) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (1.7,-1.5) circle (.25); \draw[pattern=vertical lines,line width=1pt] (1.7,-1.5) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (1.7,-2.2) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (1.7,-2.9) circle (.25); \draw[pattern=horizontal lines,line width=1pt] (1.7,-2.9) circle (.25); \filldraw[fill=black!20, draw=black,line width=1pt] (1.7,-4.2) circle (.25); \draw (-.6,0) node{$x_1$}; \draw (5.3,0) node{$\bar{x_1}$}; \draw (.7,.5) node{$z_{1,1}$}; \draw (1.8,.5) node{$z_{1,k-3}$}; \draw (3.2,.5) node{$z_{1,2k-5}$}; \draw (4,.8) node{$z_{1,2k-4}$}; \draw (2.6,-1.5) node{$y_{1,k-1}$}; \draw (2.6,-2.2) node{$y_{1,k-2}$}; \draw (2.6,-2.9) node{$y_{1,k-3}$}; \draw (2.4,-4.2) node{$y_{1,1}$}; \pgftransformxshift{300} \node (x2bar) at (1.3,0){}; \node (x2) at (-3.4,0){}; \draw[black!50,line width=.8pt] (1.3,0) -- (.85,0); \draw[black!50,line width=.8pt,dashed] (.35,0) -- (-1.75,0); \draw[black!50,line width=.8pt] (-2.25,0) -- (-2.45,0); \draw[black!50,line width=.8pt] (-2.95,0) -- (-3.4,0); \draw[black!50,line width=.8pt] (1.3,0) -- 
(-1.7,-1.5); \draw[black!50,line width=.8pt] (-3.4,0) -- (-1.7,-1.5); \draw[black!50,line width=.8pt] (-1.7,-1.5) -- (-1.7,-2.9); \draw[black!50,line width=.8pt,dashed] (-1.7,-4.2) -- (-1.7,-2.9); \draw[fill=black!100,black!100] (-3.4,0) circle (.25); \filldraw[pattern=horizontal lines,line width=1pt] (.6,0) circle (.25); \filldraw[fill=black!20, draw=black,line width=1pt] (-.7,0) circle (.25); \draw[pattern=vertical lines,line width=1pt] (-2,0) circle (.25); \draw[fill=black!100,black!100] (-2.7,0) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (-1.7,-1.5) circle (.25); \draw[pattern=vertical lines,line width=1pt] (-1.7,-1.5) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (-1.7,-2.2) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (-1.7,-2.9) circle (.25); \draw[pattern=horizontal lines,line width=1pt] (-1.7,-2.9) circle (.25); \filldraw[fill=black!20, draw=black,line width=1pt] (-1.7,-4.2) circle (.25); \draw (1.9,0) node{$\bar{x_2}$}; \draw (-4,0) node{$x_2$}; \draw (-2.7,.5) node{$z_{2,1}$}; \draw (-2,.5) node{$z_{2,2}$}; \draw (-.8,.5) node{$z_{2,k}$}; \draw (.47,.5) node{$z_{2,2k-4}$}; \draw (-2.6,-1.5) node{$y_{2,k-1}$}; \draw (-2.6,-2.2) node{$y_{2,k-2}$}; \draw (-2.6,-2.9) node{$y_{2,k-3}$}; \draw (-2.4,-4.2) node{$y_{2,1}$}; \pgftransformxshift{-270} \pgftransformyshift{100} \draw[black!50,line width=.8pt,dashed] (1.7,3.6) -- (1.7,4.6); \draw[black!50,line width=.8pt] (1.7,1.5) -- (1.7,3.6); \draw[black!50,line width=.8pt] (1.7,1.5) -- (1.7,0); \draw[black!50,line width=.8pt] (1.7,1.5) -- (1,0); \draw[black!50,line width=.8pt] (1.7,1.5) -- (2.4,0); \draw[black!50,line width=.8pt] (2.4,0) -- (1,0); \draw[black!50,line width=.8pt] (node cs:name=x1) -- (1.7,0); \filldraw[fill=white, draw=black,line width=1pt] (node cs:name=x1) circle (.25); \draw[black!50,line width=.8pt] (node cs:name=x2) -- (1.7,0); \filldraw[fill=white, draw=black,line width=1pt] (1.7,0) circle (.25); \draw[pattern=vertical lines,line
width=1pt] (1.7,0) circle (.25); \draw[fill=black!100,black!100] (1,0) circle (.25); \draw[fill=black!100,black!100] (2.4,0) circle (.25); \draw[fill=black!100,black!100] (1.7,1.5) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (1.7,2.2) circle (.25); \draw[pattern=vertical lines,line width=1pt] (1.7,2.2) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (1.7,2.9) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (1.7,2.9) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (1.7,3.6) circle (.25); \draw[pattern=horizontal lines,line width=1pt] (1.7,3.6) circle (.25); \filldraw[fill=black!20, draw=black,line width=1pt] (1.7,4.6) circle (.25); \draw (1.9,-.6) node{$C_1$}; \draw (.3,0) node{$u_{1,1}$}; \draw (3.1,0) node{$u_{1,2}$}; \draw (2.4,1.5) node{$v_{1,k}$}; \draw (2.6,2.2) node{$v_{1,k-1}$}; \draw (2.6,2.9) node{$v_{1,k-2}$}; \draw (2.6,3.6) node{$v_{1,k-3}$}; \draw (2.4,4.6) node{$v_{1,1}$}; \pgftransformxshift{280} \draw[black!50,line width=.8pt,dashed] (-1.7,3.6) -- (-1.7,4.6); \draw[black!50,line width=.8pt] (-1.7,1.5) -- (-1.7,3.6); \draw[black!50,line width=.8pt] (-1.7,1.5) -- (-1.7,0); \draw[black!50,line width=.8pt] (-1.7,1.5) -- (-1,0); \draw[black!50,line width=.8pt] (-1.7,1.5) -- (-2.4,0); \draw[black!50,line width=.8pt] (-2.4,0) -- (-1,0); \draw[black!50,line width=.8pt] (node cs:name=x2bar) -- (-1.7,0); \filldraw[fill=white, draw=black,line width=1pt] (node cs:name=x2bar) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (-1.7,0) circle (.25); \draw[pattern=vertical lines,line width=1pt] (-1.7,0) circle (.25); \draw[fill=black!100,black!100] (-1,0) circle (.25); \draw[fill=black!100,black!100] (-2.4,0) circle (.25); \draw[fill=black!100,black!100] (-1.7,1.5) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (-1.7,2.2) circle (.25); \draw[pattern=vertical lines,line width=1pt] (-1.7,2.2) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (-1.7,2.9) circle 
(.25); \filldraw[fill=white, draw=black,line width=1pt] (-1.7,2.9) circle (.25); \filldraw[fill=white, draw=black,line width=1pt] (-1.7,3.6) circle (.25); \draw[pattern=horizontal lines,line width=1pt] (-1.7,3.6) circle (.25); \filldraw[fill=black!20, draw=black,line width=1pt] (-1.7,4.6) circle (.25); \draw (-1.9,-.6) node{$C_2$}; \draw (-.3,0) node{$u_{2,2}$}; \draw (-3.1,0) node{$u_{2,1}$}; \draw (-2.4,1.5) node{$v_{2,k}$}; \draw (-2.6,2.2) node{$v_{2,k-1}$}; \draw (-2.6,2.9) node{$v_{2,k-2}$}; \draw (-2.6,3.6) node{$v_{2,k-3}$}; \draw (-2.4,4.6) node{$v_{2,1}$}; \end{tikzpicture} \caption{The graph $G'_\phi$ for the boolean formula $\phi = (x_1 \lor x_2) \land (\bar{x}_2)$ with a $k$-role-colouring corresponding to a satisfying assignment where $x_1$ and $\bar{x}_2$ are true.}\label{fig:kcol} \end{center} \end{figure} \FloatBarrier Now consider the triangle $x_1,\bar{x_1},y_1$. They cannot all be coloured blue because blue vertices must have red neighbours. No two of them can be red, because red vertices cannot have red neighbours. Therefore exactly one of $x_1,\bar{x_1},y_1$ is coloured red. Observe that each clause vertex must have a red neighbour amongst the literal vertices in its neighbourhood. We may therefore construct a truth assignment by assigning TRUE to the variable $x_i$ if and only if the literal vertex $x_i$ is red. In order to prove the result for $k>2$ we use a slightly different construction, and introduce the following notation. An induced path $v_1,v_2,\ldots,v_k$ in a graph $G$ is said to be {\em dangling} if $v_1$ is of degree 1 in $G$ and $v_2,\ldots,v_{k-1}$ are of degree 2 in $G$. \begin{lemma} If a connected graph $G$ has a dangling path $P$ on at most $k$ vertices, then in any $k$-role-colouring of $G$ no two vertices of $P$ can have the same colour. \end{lemma} \begin{proof} Let $P=v_1,v_2,\ldots,v_k$ be a dangling path in $G$ such that $v_1$ has degree 1.
We have observed that the closed neighbourhood of a vertex cannot be monochromatic. Without loss of generality, $v_1$ has colour 1 and $v_2$ has colour 2. Suppose that $v_i$ has colour $i$ for $i<j$. Clearly $v_j$ cannot have colour $c<j-2$. Suppose $v_j$ has colour $j-2$ or $j-1$. Then no vertex of colour at least $j$ can have a neighbour of colour at most $j-1$. This contradicts the connectedness of $G$. Therefore without loss of generality, $v_j$ has colour $j$. \end{proof} We now give a description of $G'_\phi$ in the case that $k$ is at least 3. Let $\phi$ be a {\sc planar $3,3$-satisfiability} problem on $n$ variables $x_1,x_2,\ldots,x_n$ having $m$ clauses $C_1,C_2,\ldots,C_m$, and let $G_\phi$ be its formula graph. We construct a related planar graph $G'_\phi$ that has a $k$-role-colouring if and only if $\phi$ is satisfiable. We construct $G'_\phi$ from $G_\phi$ as follows. To each clause vertex $C_j$ we add a dangling path $v_{j,1},\ldots,v_{j,k}$, with edge $v_{j,k}C_j$ (lemma \ref{planarop} (c)). To each pair $v_{j,k},C_j$ we add two vertices $u_{j,1},u_{j,2}$ that both form a triangle with $v_{j,k},C_j$ (lemma \ref{planarop} (a)). As before, one of $x_i$ and $\bar{x_i}$ appears exactly once. If $\bar{x_i}$ appears exactly once, in clause $C_j$, we replace the edge $x_i C_j$ by a path $x_i,z_{i,1},\ldots ,z_{i,2k-4},\bar{x_i},C_j$. Otherwise we have that $x_i$ appears exactly once, in which case we relabel the node $x_i$ to $\bar{x_i}$, and replace the edge $\bar{x_i} C_j$ by a path $\bar{x_i},z_{i,1},\ldots ,z_{i,2k-4},x_i,C_j$ (lemma \ref{planarop} (a)). We add a vertex $y_{i,k-1}$ and attach it to $x_i$ and $\bar{x_i}$ (lemma \ref{planarop} (a)). Finally, we add a dangling path $y_{i,1},\ldots,y_{i,k-2}$ to the graph through the edge $y_{i,k-2}y_{i,k-1}$ (lemma \ref{planarop} (c)). See figure \ref{fig:kcol}. As in the first part of the proof, suppose $G'_\phi$ has a $k$-role-colouring.
Observe that the path formed by $v_{i,1},\ldots,v_{i,k}$ is dangling, and must therefore be coloured with $k$ distinct colours. Without loss of generality $v_{1,1}$ has colour 1, and indeed $v_{1,j}$ has colour $j$. Therefore the neighbours of any vertex of colour $j$ must have colour $j-1$ or $j+1$ for $1<j<k$. Consider the vertices $u_{1,1},u_{1,2}$. If either of them has colour $k-1$, then $C_1$ must have colour $k-2$. But $C_1$ is adjacent to $v_{1,k}$, which has colour $k$, a contradiction. So $u_{1,1}$ and $u_{1,2}$ must have colour $k$ and therefore $C_1$ has colour $k-1$. Since the role graph of this colouring of $G'_\phi$ is a simple path on the colours $1,\ldots,k$ with $k$ having a self-loop, all vertices of degree 1 must have colour 1. So for each clause $C_i$, we have that $v_{i,1}$ has colour 1, $v_{i,2}$ has colour 2, and so on. This implies that every subgraph induced on vertices $C_i,v_{i,1},\ldots,v_{i,k},u_{i,1},u_{i,2}$ has the same colouring as described above for $i=1$. For each variable $x_i$, we have that $y_{i,1}$ has colour $1$, $y_{i,2}$ has colour $2$, \ldots, $y_{i,k-1}$ has colour $k-1$. This implies that the vertices $x_i$ and $\bar{x}_i$ must have colours $k-2$ and $k$, or vice versa. If $x_i$ receives colour $k-2$ and $\bar{x}_i$ receives colour $k$, then the vertices $z_{i,1}$ through $z_{i,k-3}$ through $z_{i,2k-4}$ must receive colours $k-3$ through $1$ through $k$. Alternatively, if $x_i$ receives colour $k$ and $\bar{x}_i$ receives colour $k-2$, then the vertices $z_{i,1}$ through $z_{i,k-3}$ through $z_{i,2k-4}$ must receive colours $k$ through $1$ through $k-3$. Now we construct a satisfying assignment for $\phi$ from this colouring. If the vertex representing $x_i$ has colour $k-2$ we assign the variable $x_i$ the value TRUE. If the vertex representing $\bar{x_i}$ has colour $k-2$ we assign the variable $x_i$ the value FALSE.
Since each vertex representing a clause must have a neighbour of colour $k-2$, each clause now has a variable or its negation that has been assigned TRUE, and therefore $\phi$ is satisfied. \end{proof} \section{Trees} \label{sec:trees} Let $P_m$ denote a path of length $m$, \emph{i.e.} a graph on vertex set $1,\ldots,m+1$ with edges $(i,i+1)$ for $1 \leq i \leq m$. Let $T$ denote a tree, \emph{i.e.} a connected graph on $n$ vertices and $n-1$ edges with no cycles. For a valid role colouring of the vertices of a graph $G$ using $k$ colours, the \emph{role graph} $G^R$ is defined as follows. $G^R$ has $k$ vertices, each one corresponding to a colour used on $V(G)$. Vertices $i$ and $j$ in $V(G^R)$ are joined by an edge in $G^R$ if and only if every vertex of colour $i$ in $G$ has a neighbour of colour $j$ (and, symmetrically, every vertex of colour $j$ has a neighbour of colour $i$). The graph $G^R$ may have self-loops. It is easy to see that $\Delta (G^R) \leq \Delta(G)$ and $\delta (G^R) \leq \delta(G)$, where $\delta(G)$ and $\Delta(G)$ denote the minimum and maximum vertex degree in $G$, respectively. If $G$ is connected, then $G^R$ must be connected. (The converse is not true.) This holds because a path in $G$ must correspond to a walk in $G^R$. \begin{lemma} A path $P_{n-1}$ can be role coloured using $k$ colours if and only if $n=k+s(k-1)$ or $n=2k+s(2k-1)$, where $s$ is a non-negative integer, and {\sc Path} $k$-{\sc role colourability} is in $P$. \end{lemma} \begin{proof} By the properties of $G^R$, we see that $P_{n-1}^R$ must be a path. It may have one self-loop on a leaf vertex. The path from vertex $1$ to vertex $n$ on $P_{n-1}$ corresponds to a walk on $P_{n-1}^R$ that must start and end at a vertex of degree one. If $P_{n-1}^R$ contains no self-loops then such walks can visit $k,k+(k-1),k+2(k-1),\ldots$ vertices. If $P_{n-1}^R$ contains one self-loop then such walks can visit $2k,2k+(2k-1),2k+2(2k-1),\ldots$ vertices.
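These two membership conditions are simple arithmetic; a minimal sketch (function name ours):

```python
def path_is_k_role_colourable(n, k):
    """Decide whether the path on n vertices admits a k-role colouring,
    i.e. whether n = k + s(k-1) or n = 2k + s(2k-1) for some s >= 0."""
    if k == 1:
        return True  # a monochromatic colouring is valid on any path
    if n >= k and (n - k) % (k - 1) == 0:
        return True
    return n >= 2 * k and (n - 2 * k) % (2 * k - 1) == 0
```

For example, with $k=3$ this accepts $n=3,5,6,7,\ldots$ but rejects $n=4$.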
Checking whether either of these equalities holds is clearly in $P$, and then colouring the path is in $P$, because for a given $P_{n-1}^R$, there are at most two ways of colouring $P_{n-1}$. \end{proof} \begin{lemma} In a role colouring of a tree $T$, $T^R$ must be a tree with at most one self-loop. \end{lemma} \begin{proof} Let $T^R$ be a role graph with a self-loop on vertex $w \in V(T^R)$. Consider an edge $(v_1,v_2)$ in $T$ connecting two vertices of colour $w$. If we cut this edge, we obtain two components that must each contain a subgraph isomorphic to $T^R$ without the self-loop on $w$. If $T^R$ contained another self-loop, this process could be repeated indefinitely, which is impossible in a finite tree. A similar argument, following the colours around a cycle of $T^R$, shows that $T^R$ contains no cycle. \end{proof} \begin{lemma} For trees, $k$-{\sc role colourability} is in $P$, if $k$ is constant or if $n-k$ is constant. \end{lemma} \begin{proof} It is shown in \cite{fiala2008comparing} that, for a tree $T$, and a known role graph $T^R$ without self-loops, checking role colourability can be done in polynomial time\footnote{The algorithm given is only geared towards deciding role colourability, but it is easily transformed into an algorithm that finds an explicit colouring in polynomial time.}. By Cayley's formula, there are $k^{k-2}$ labelled trees on $k$ vertices, and $(k+1)k^{k-2}$ trees with one or no self-loops. Therefore, one can check colourability for all possible role graphs, of which there are a constant number. Let $T^R$ be a role graph with a self-loop on vertex $w \in V(T^R)$. Consider $T_*^R$, which is constructed as follows. Take two copies of $T^R$ without the self-loop, one with vertex set $1,\ldots,w,\ldots,k$ and a copy with vertex set $1',\ldots,w',\ldots,k'$. Let $T_*^R$ be the union of these two trees with an edge added between $w$ and $w'$. A valid $2k$-role colouring of $T$ according to $T_*^R$ now corresponds to a valid $k$-role colouring according to $T^R$ by merging the colour classes $1$ and $1'$, $2$ and $2'$, \emph{etc}.
\begin{claim}\label{claimrep} Let $k'=n-k$. Suppose $T$ has a valid $k$-role colouring. Let $v_1,\ldots,v_t$ be a path where $v_1$ and $v_t$ are vertices of the same colour. Then this path contains vertices of no more than $\lceil t/2 \rceil$ colours. Additionally, if $v_1$ and $v_t$ are removed from $T$, we are left with three components, $T_a,T_b,T_c$, where $T_b$ contains the path $v_2,\ldots,v_{t-1}$. Then $T_a$ and $T_c$ must contain the same colour set. \end{claim} \begin{proof} Vertices $v_2$ and $v_{t-1}$ must have the same colour. If not, then the path would correspond to a cycle in $T^R$, which is a contradiction. This argument can be repeated for the path $v_2,\ldots,v_{t-1}$. The neighbours of $v_1$ and $v_t$ that are not on the path $v_1,\ldots,v_t$ must have the same colour sets. Without loss of generality, suppose there are vertices in $T_a$ of a colour that does not appear in $T_c$. Let $v_a$ be such a vertex that is closest to $v_1$. The second vertex on the path from $v_a$ to $v_1$ is of a colour that appears in $T_c$, but is adjacent to a vertex of a colour that does not. This is a contradiction.\end{proof} \begin{claim}\label{hubunique} If $T$ is $k$-role coloured with $k'=n-k$ a constant, then a vertex which cuts $T$ into more than one component of size $>2k'+1$ must have a unique colour. \end{claim} \begin{proof} This follows directly from claim \ref{claimrep}.\end{proof} We define a \emph{gadget} as follows. A gadget is a maximal subtree of $T$ of size at most $2k'+1$ such that the complement of the gadget in $T$ is connected. The vertices that are adjacent to a gadget but are not in a gadget themselves are called \emph{hubs}. Gadgets are clearly non-overlapping, and a hub may have many gadgets connected to it. These definitions are illustrated in figure \ref{fig:hubs}. \begin{claim} Repeating colours can only appear within gadgets and two vertices of the same colour are either in the same gadget or in gadgets adjacent to the same hub.
\end{claim} \begin{proof}This follows from claims \ref{claimrep} and \ref{hubunique}.\end{proof} Colour each hub with a unique colour. For every hub, go through each gadget and record all possible colourings of the subgraphs induced on the gadget and the hub, together with the corresponding gadget role graph (which includes a node for the unique colour of the hub). For any gadget this can be done in constant time. Additionally, keep track of all combinations of multiple gadgets that can be role coloured using the same gadget role graph. This can all be done by brute force in polynomial time, as there are $O(n)$ gadgets and $O(1)$ different gadget role graphs. Now, record a list of all the possible numbers of duplicate-coloured vertices within the hub and its gadgets. Suppose the gadgets are coloured one by one, such that those with duplicate colours are coloured first. There are $O(n-k)=O(1)$ such gadgets, with a constant number of possible role colourings each. After these gadgets have been role coloured, the other gadgets must be rainbow coloured. Therefore, recording all possible role colourings of the hub and its gadgets that yield no more than $k'$ duplicate-coloured vertices takes $O(n^{k'})$ time. Once all possible colourings for the individual hubs and their gadgets have been recorded, consider all combinations of different hub and gadget colourings. Each hub and its gadgets under one of these colourings contains $\leq k'$ duplicate-coloured vertices, and we have found a successful colouring if there is a set of hub and colouring combinations that add up to exactly $k'$ duplicate-coloured vertices. We need $O(1)$ hubs in such a combination so, again, a brute-force search of all combinations is sufficient. If such a combination does not exist, then a $k$-role colouring does not exist. Otherwise, colour the relevant hubs and their gadgets successfully, and rainbow-colour the remaining uncoloured vertices in $T$. This results in a valid $k$-role colouring of $T$.
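The gadget and hub decomposition can be computed by brute force straight from the definition; a sketch follows (our own naming, the tree given as an adjacency dict). It uses the fact that, in a tree, a connected subgraph with connected complement is joined to the rest by exactly one edge, so gadgets are exactly the maximal small pendant components.

```python
def find_gadgets(adj, kp):
    """Return (hub, gadget) pairs for a tree: maximal subtrees on at most
    2*kp + 1 vertices whose removal leaves the tree connected."""
    limit = 2 * kp + 1

    def pendant(v, u):
        # vertices of the component containing v after deleting the edge uv
        stack, seen = [v], {v}
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y != u and y not in seen:
                    seen.add(y)
                    stack.append(y)
        return frozenset(seen)

    small = []
    for u in adj:
        for v in adj[u]:
            comp = pendant(v, u)
            if len(comp) <= limit:
                small.append((u, comp))
    # keep only pendants not strictly contained in another small pendant
    out, done = [], set()
    for u, c in small:
        if c not in done and not any(c < d for _, d in small):
            done.add(c)
            out.append((u, c))
    return out
```

On a path on seven vertices with $k'=1$, for instance, this returns the two end segments of three vertices each, with the middle vertex as their common hub.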
\begin{figure} \begin{center} \begin{tikzpicture} \draw[black!20,line width=20pt] (0:0) -- (-120:1); \draw[fill=black!20,black!20] (-120:1) circle (10pt); \draw[fill=black!20,black!20] (2,1) circle (10pt); \draw[fill=black!20,black!20] (5.4,1.4) circle (10pt); \draw[fill=black!20,black!20] (4.7,-.7) circle (10pt); \draw[black!20,line width=20pt] (0:0) -- (120:1); \draw[black!20,line width=20pt] (120:1) -- (90:1.73); \draw[black!20,line width=20pt] (120:1) -- (150:1.73); \draw[black!20,line width=20pt] (2,0) -- (2,1); \draw[black!20,line width=20pt] (4,0) -- (5.4,1.4); \draw[black!20,line width=20pt] (4,0) -- (4.7,-.7); \draw[fill=black!20,black!20] (90:1.73) circle (10pt); \draw[fill=black!20,black!20] (150:1.73) circle (10pt); \draw[fill=black!20,black!20] (-120:1) circle (10pt); \draw[fill=white!30,white!30] (0:0) circle (10pt); \draw[fill=white!30,white!30] (0:2) circle (10pt); \draw[fill=white!30,white!30] (4,0) circle (10pt); \draw[black!50,line width=.8pt] (0:0) -- (0:1); \draw[black!50,line width=.8pt] (0:0) -- (-120:1); \draw[black!50,line width=.8pt] (0:0) -- (120:1); \draw[black!50,line width=.8pt] (0:1) -- (0:2); \draw[black!50,line width=.8pt] (120:1) -- (90:1.73); \draw[black!50,line width=.8pt] (120:1) -- (150:1.73); \draw[black!50,line width=.8pt] (2,0) -- (2,1); \draw[black!50,line width=.8pt] (2,0) -- (4,0); \draw[black!50,line width=.8pt] (4,0) -- (5.4,1.4); \draw[black!50,line width=.8pt] (4,0) -- (4.7,-.7); \filldraw[fill=white, draw=black,line width=1pt] (0:0) circle (.1); \draw[fill=black!100,black!100] (-120:1) circle (.1); \draw[fill=black!100,black!100] (0:1) circle (.1); \draw[fill=black!100,black!100] (120:1) circle (.1); \filldraw[fill=white, draw=black,line width=1pt] (0:2) circle (.1); \draw[fill=black!100,black!100] (2,1) circle (.1); \draw[fill=black!100,black!100] (3,0) circle (.1); \filldraw[fill=white, draw=black,line width=1pt] (4,0) circle (.1); \draw[fill=black!100,black!100] (90:1.73) circle (.1); 
\draw[fill=black!100,black!100] (150:1.73) circle (.1); \draw[fill=black!100,black!100] (5.4,1.4) circle (.1); \draw[fill=black!100,black!100] (4.7,-.7) circle (.1); \draw[fill=black!100,black!100] (4.7,.7) circle (.1); \end{tikzpicture} \caption{Hubs are the vertices that separate gadgets from the rest of the tree. Here $k'=1$, hubs are white-filled and gadgets are shaded in grey.}\label{fig:hubs} \end{center} \end{figure} \end{proof} \FloatBarrier \section{Cographs} \label{sec:cographs} \emph{Cographs} are exactly the $P_4$ free graphs~\cite{seinsche1974property}. The join of two graphs $G_1$ and $G_2$ is the graph $G_3=G_1+G_2$ such that $V(G_3)=V(G_1) \cup V(G_2)$ and $E(G_3)=E(G_1) \cup E(G_2) \cup \{ (i,j) | i \in V(G_1), j \in V(G_2) \} $. The disjoint union of two graphs $G_1$ and $G_2$ is the graph $G_3=G_1 \cup G_2$ such that $V(G_3)=V(G_1) \cup V(G_2)$ and $E(G_3)=E(G_1) \cup E(G_2)$. Cographs can be constructed recursively from $K_1$ by disjoint union and join. They are the smallest class of graphs closed under disjoint union and join. Every cograph $G$ has an associated (binary, not necessarily unique) \emph{cotree}, whose leaves correspond to the vertices of $G$, and the non-leaves are labelled ``0" and ``1" to denote disjoint unions and joins, respectively. The cotree describes how $G$ is formed from instances of $K_1$ by successive joins and disjoint unions. Given a cograph, its cotree can be found in linear time~\cite{cournier1994new,mcconnell1994linear}. \begin{theorem} All cographs with $\geq 2$ vertices are 2-role-colourable and 2-{\sc rolecol} for cographs is in P. \end{theorem} \begin{proof} Suppose $G$ is a cograph. If $G$ is not connected, then we can mono-colour each component either red or blue, such that both red and blue are used and such that, if $G$ contains both isolated vertices and components of size $\geq 2$, these two types of components receive different colours. Therefore, suppose $G$ is connected. 
A connected cograph $G$ with $|V(G)|=2$ is isomorphic to $K_2$, and can be 2-role-coloured in the obvious way. Suppose that all cographs $G'$ with $2 \leq |V(G')|<k$ can be 2-role-coloured. Suppose $|V(G)|=k>2$ and the last step in the construction of $G$ was a join of graphs $G_1$ and $G_2$ (which it must be if $G$ is connected). We consider three separate cases. \begin{description} \item[(i)] If $|V(G_1)|,|V(G_2)|>1$, then we can 2-role-colour the vertices of $G_1$ (red and blue) and $G_2$ (red and blue). This extends to a valid 2-role-colouring of $G$, because all vertices have red and blue neighbours. \item[(ii)] If $|V(G_1)|=1$, $|V(G_2)|>1$ (without loss of generality) and $G_2$ is 1-role-colourable, then colour the vertex of $G_1$ red and the vertices of $G_2$ blue. This extends to a valid 2-role-colouring of $G$, because the red vertex has only blue neighbours and the blue vertices have only a red neighbour or red and blue neighbours, depending on whether $G_2$ is edgeless or not. \item[(iii)] If $|V(G_1)|=1$, $|V(G_2)|>1$ (without loss of generality) and $G_2$ is not 1-role-colourable, then colour the vertex of $G_1$ red. The graph $G_2$ is not 1-role-colourable, which means that it is disconnected, with both isolated vertices and components with $\geq 2$ vertices. Colour the isolated vertices of $G_2$ blue. For each component of $G_2$ with $\geq 2$ vertices, colour the vertices of a maximal independent set blue and all other vertices red; since a maximal independent set is dominating, every red vertex in such a component has a blue neighbour. This extends to a valid 2-role colouring of $G$, because all blue vertices have only red neighbours and all red vertices have both red and blue neighbours. \end{description} Therefore, we can always find a valid 2-role-colouring for $G$. It is easy to see that this method can be executed in polynomial time. We can find a cotree in polynomial time, which gives us a $G_1$ and $G_2$. Then we find the connected components of $G_1$ and $G_2$, which can also be done in polynomial time (by a series of at most $n$ breadth first searches).
\end{proof} \begin{theorem} All cographs with $\geq k$ vertices are $k$-role-colourable and $k$-{\sc rolecol} for cographs is in P, where $k>2$. \end{theorem} \begin{proof} We know that all cographs with $\geq 2$ vertices are 2-role-colourable, so suppose that all cographs with at least $k'$ vertices are $k'$-role-colourable, for all $2 \leq k' <k$. Suppose $G$ is a cograph with $|V(G)|=n\geq k$ and the last step in a construction of $G$ was either a join or a disjoint union of $G_1$ and $G_2$, with $|V(G_1)|=n_1$ and $|V(G_2)|=n_2$. Pick $k_1$ and $k_2$ such that $k_1+k_2=k$, $k_1\leq n_1$, $k_2\leq n_2$ and $k_i=1$ only if $n_i=1$, for $i=1,2$. Now, $G_i$ is $k_i$-role-colourable by our inductive assumption, for $i=1,2$. Note that $k_i$ is only equal to 1 if $n_i$ is 1, and $K_1$ is always 1-role-colourable. So, we can colour $G_1$ using $k_1$ colours and $G_2$ using $k_2$ different colours. This extends to a valid $k$-role-colouring of $G$ regardless of whether $G=G_1 \cup G_2$ or $G=G_1 + G_2$. \end{proof} \section{Acknowledgements} Puck Rombach is supported by AFOSR MURI grant FA9550-10-1-0569 and ONR grant N000141210040. Part of this work was undertaken while Puck Rombach was attending the semester program ``Network Science and Graph Algorithms" at the Institute for Computational and Experimental Research in Mathematics (ICERM) at Brown University. \bibliographystyle{plain}
We introduce the spaces of real functions \begin{align*} &\CRper(\R^3\times\R) := \setcl{U\in\CRi(\R^3\times\R)}{\forall t\in\R:\ U(\cdot,t+\per)=U(\cdot,t)},\\ &\CRciper\bp{\R^3\times[0,\per]} := \setcl{u\in\CRci\bp{\R^3\times[0,\per]}}{\exists U\in\CRper(\R^3\times\R):\ u=U_{|\R^3\times[0,\per]}},\\ &\CRcisigmaper\bp{\R^3\times[0,\per]} := \setcl{u\in\CRciper(\R^3\times[0,\per])^3}{\Div_x u = 0}. \end{align*} We introduce Lebesgue and Sobolev spaces as completions of the spaces above in different norms. Lebesgue spaces are defined for $q\in[1,\infty)$ by \begin{align*} &\LRper{q}\bp{\R^3\times(0,\per)}:=\closure{\CRciper(\R^3\times[0,\per])}{\norm{\cdot}_{q}},\quad \norm{u}_{q} := \norm{u}_{\LR{q}\bp{\R^3\times(0,\per)}}. \end{align*} Clearly, $\LRper{q}\bp{\R^3\times(0,\per)}$ coincides with the classical Lebesgue space $\LR{q}\bp{\R^3\times(0,\per)}$, and we will therefore omit the subscript $\textrm{per}$ for Lebesgue spaces in the following. Sobolev spaces of $\per$-time-periodic functions are defined for $k\in\N_0$ by \begin{align}\label{MR_DefOfWSRper} \begin{aligned} &\WSRper{k}{q}\bp{\R^3\times(0,\per)}:=\closure{\CRciper(\R^3\times[0,\per])}{\norm{\cdot}_{k,q}},\\ &\norm{\uvel}_{k,q}:=\Bp{\sum_{(\alpha,\beta)\in\N_0^3\times\N_0,\ \snorm{(\alpha,\beta)}\leq k} \norm{\partial_t^\beta\partial_x^\alpha\uvel}^q_q }^{1/q}. \end{aligned} \end{align} In contrast to the Lebesgue spaces, the Sobolev spaces $\WSRper{k}{q}\bp{\R^3\times(0,\per)}$ do \emph{not} coincide with the classical Sobolev spaces $\WSR{k}{q}\bp{\R^3\times(0,\per)}$. 
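The difference can be seen from the following elementary example, which we include for illustration only. Let $0\neq\psi\in\CRci(\R^3)$ and put $u(x,t):=\psi(x)\,t$. Then $u\in\WSR{1}{q}\bp{\R^3\times(0,\per)}$, but $u\notin\WSRper{1}{q}\bp{\R^3\times(0,\per)}$. Indeed, every $v\in\CRciper(\R^3\times[0,\per])$ satisfies $v(\cdot,0)=v(\cdot,\per)$, and this property persists under convergence in the $\norm{\cdot}_{1,q}$-norm in view of the trace estimate
\begin{align*}
\norm{v(\cdot,0)-v(\cdot,\per)}_{\LR{q}(\R^3)}
=\norm{\int_0^\per \partial_t v(\cdot,s)\,\ds}_{\LR{q}(\R^3)}
\leq \per^{1-\frac{1}{q}}\,\norm{\partial_t v}_{q},
\end{align*}
which follows from Minkowski's integral inequality and H\"older's inequality, whereas $u(\cdot,0)=0\neq\per\psi=u(\cdot,\per)$.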
For $q\in[1,\infty)$ we define the Lebesgue space of solenoidal vector fields \begin{align*} &\LRsigma{q}\bp{\R^3\times(0,\per)}:=\closure{\CRcisigmaper(\R^3\times[0,\per])}{\norm{\cdot}_{q}}, \end{align*} and the anisotropic Sobolev space of $\per$-time-periodic, solenoidal, vector fields \begin{align}\label{MR_DefOfWSRsigmaper} \begin{aligned} &\WSRsigmaper{2,1}{q}\bp{\R^3\times(0,\per)}:= \closure{\CRcisigmaper(\R^3\times[0,\per])}{\norm{\cdot}_{2,1,q}},\\ &\norm{u}_{2,1,q} := \Bp{\sum_{(\alpha,\beta)\in\N_0^3\times\N_0,\ \snorm{\alpha}\leq 2,\snorm{\beta}\leq 1} \norm{\partial_x^\alpha u}^q_{q} + \norm{\partial_t^\beta u}^q_{q}}^{1/q}. \end{aligned} \end{align} In order to incorporate the decomposition described in the introduction on the level of function spaces, we define on functions $u:\R^3\times(0,\per)\ra\R$ the operators \begin{align}\label{intro_defofprojGernericExpression} \proj u(x,t):=\iper\int_0^\per u(x,s)\,\ds\quad\tand\quad\projcompl u(x,t) := u(x,t)-\proj u(x,t) \end{align} whenever these expressions are well-defined. Note that $\proj$ and $\projcompl$ decompose a time-periodic vector field $u$ into a time-independent part $\proj u$ and a time-periodic part $\projcompl u$ with vanishing time-average over the period. Also note that $\proj$ and $\projcompl$ are complementary projections, that is, $\proj^2=\proj$ and $\projcompl=\id-\proj$. As one may easily verify, \begin{align*} &\proj,\projcompl:\CRciper(\R^3\times[0,\per])\ra\CRciper(\R^3\times[0,\per]), \end{align*} and both projections extend by continuity to bounded operators on $\LRsigma{q}\bp{\R^3\times(0,\per)}$ and $\WSRsigmaper{2,1}{q}\bp{\R^3\times(0,\per)}$. We can thus define \begin{align*} &\LRsigmacompl{q}\bp{\R^3\times(0,\per)} := \projcompl\LRsigma{q}\bp{\R^3\times(0,\per)},\\ &\WSRsigmapercompl{2,1}{q}\bp{\R^3\times(0,\per)}:= \projcompl\WSRsigmaper{2,1}{q}\bp{\R^3\times(0,\per)}. 
\end{align*} For convenience, we introduce for intersections of such spaces the notation \begin{align*} &\LRsigmacompl{q,r}\bp{\R^3\times(0,\per)}:=\ \LRsigmacompl{q}\bp{\R^3\times(0,\per)}\ \cap\ \LRsigmacompl{r}\bp{\R^3\times(0,\per)},\\ &\WSRsigmapercompl{2,1}{q,r}\bp{\R^3\times(0,\per)}:=\ \WSRsigmapercompl{2,1}{q}\bp{\R^3\times(0,\per)}\ \cap\ \WSRsigmapercompl{2,1}{r}\bp{\R^3\times(0,\per)}. \end{align*} For $q\in(1,2)$ we let \begin{align}\label{MR_DefOfxoseenq} \begin{aligned} &\xoseen{q}(\R^3):=\setcl{\vvel\in\LRloc{1}(\R^3)^3}{\Div\vvel=0,\ \oseennorm{\vvel}{q}<\infty},\\ &\oseennorm{\vvel}{q} := \snorm{\rey}^{\frac{1}{2}}\norm{\vvel}_{\frac{2q}{2-q}} + \snorm{\rey}^{\frac{1}{4}}\norm{\grad\vvel}_{\frac{4}{4-q}} + \snorm{\rey}\norm{\partial_1\vvel}_q+\norm{\grad^2\vvel}_q, \end{aligned} \end{align} which is a Banach space intrinsically linked with the three-dimensional Oseen operator. For $q\in(1,2)$ and $r\in(1,\infty)$ we put \begin{align}\label{MR_DefOfxoseenqr} \begin{aligned} &\xoseen{q,r}(\R^3):=\setcl{\vvel\in\LRloc{1}(\R^3)^3}{\Div\vvel=0,\ \oseennorm{\vvel}{q,r}<\infty},\\ &\oseennorm{\vvel}{q,r} := \oseennorm{\vvel}{q}+\norm{\grad^2\vvel}_r. \end{aligned} \end{align} For $q\in(1,3)$ and $r\in(1,\infty)$ we further define \begin{align}\label{MR_DefOfXpres} \begin{aligned} &\xpres{q,r}\bp{\R^3\times(0,\per)}:=\setc{\upres\in\LRloc{1}\bp{\R^3\times(0,\per)}}{\norm{\upres}_{\xpres{q,r}}<\infty},\\ &\norm{\upres}_{\xpres{q,r}}:=\Bp{\int_0^\per \norm{\upres(\cdot,t)}_{\frac{3q}{3-q}}^q + \norm{\grad_x\upres(\cdot,t)}_q^q\,\dt }^{1/q}+\norm{\grad_x\upres}_{r}. 
\end{aligned} \end{align} Finally, we let \begin{align*} \DSRNsigma{1}{2}(\R^3):=\overline{\CRcisigma(\R^3)}^{\norm{\grad\cdot}_{2}}=\setc{\uvel\in\LR{6}(\R^3)^3}{\grad\uvel\in\LR{2}(\R^3)^{3\times 3},\ \Div\uvel=0} \end{align*} denote the classical homogeneous Sobolev space of solenoidal vector fields (the latter equality above is due to a standard Sobolev embedding theorem) and \begin{align*} \WSRloc{a,b}{q}(\R^3\times\R):=\setc{\uvel\in\LRloc{q}(\R^3\times\R)}{\partial_x^\alpha\uvel,\partial_t^\beta\uvel\in\LRloc{q}(\R^3\times\R)\text{ for }\snorm{\alpha}\leq a,\snorm{\beta}\leq b} \end{align*} for $q\in[1,\infty)$ and $a,b\in\N_0$. Throughout the paper, we shall frequently consider the restriction of $\per$-time-periodic functions defined on $\R^3\times\R$ to the domain $\R^3\times(0,\per)$. More specifically, without additional notation we implicitly treat $\per$-time-periodic functions $f:\R^3\times\R\ra\R$ as functions $f:\R^3\times(0,\per)\ra\R$. If $f$ is independent of $t$, we may treat it as a function $f:\R^3\ra\R$. We are now in a position to state the main results of the paper. The first theorem establishes existence of a strong solution for sufficiently small data. It is further shown that this solution is unique in a large class of weak solutions that can be considered physically reasonable. We first define this class. \begin{defn}\label{UniquenessClassDef} Let $f\in\LRloc{1}\bp{\R^3\times\R}^3$ satisfy \eqref{intro_timeperiodicdata}. We say that $\weakuvel\in\LRloc{1}\bp{\R^3\times\R}^3$ satisfying \eqref{intro_timeperiodicsolution} is a \emph{physically reasonable weak time-periodic solution} to \eqref{intro_nspastbodywholespace} if\footnote{We can consider the restriction $\weakuvel\in\LRloc{1}\bp{\R^3\times(0,\per)}$ as a vector-valued mapping $t\ra\weakuvel(\cdot,t)$. Moreover, it is easy to see that $\proj\weakuvel$ and thus also $\projcompl\weakuvel$ are well-defined as elements in $\LRloc{1}\bp{\R^3\times(0,\per)}$.
Consequently, we may also consider $\projcompl\weakuvel$ as a vector-valued mapping $t\ra\projcompl\weakuvel(\cdot,t)$.} \begin{enumerate}[1),leftmargin=\parindent, itemindent=0.2cm] \item\label{UniquenessClassDefProp1} $\weakuvel\in\LR{2}\bp{(0,\per);\DSRNsigma{1}{2}(\R^3)}$, \item\label{UniquenessClassDefProp2} $\projcompl\weakuvel\in\LR{\infty}\bp{(0,\per);\LR{2}(\R^3)^3}$, \item\label{UniquenessClassDefProp3} $\weakuvel$ is a generalized $\per$-time-periodic solution to \eqref{intro_nspastbodywholespace} in the sense that for all test functions $\Phi\in\CRcisigmaper\bp{\R^3\times(0,\per)}$ it holds that \begin{align}\label{UniquenessClassDefDefofweaksol} \begin{aligned} \int_0^\per\int_{\R^3} -\weakuvel\cdot\partial_t\Phi +\grad\weakuvel:\grad\Phi -\rey\partial_1\weakuvel\cdot\Phi + (\nsnonlin{\weakuvel})\cdot\Phi\,\dx\dt = \int_0^\per\int_{\R^3} f\cdot\Phi\,\dx\dt, \end{aligned} \end{align} \item\label{UniquenessClassDefProp4} $\weakuvel$ satisfies the energy inequality\footnote{The integral on the right-hand side of \eqref{UniquenessClassDefEnergyIneq} is not necessarily well-defined for $f\in\LRloc{1}\bp{\R^3\times\R}^3$ and $\weakuvel$ satisfying \ref{UniquenessClassDefProp1}--\ref{UniquenessClassDefProp2}. Included in the definition of a physically reasonable weak time-periodic solution is therefore an implicit condition that these vector fields possess enough integrability for this integral to be well-defined.} \begin{align}\label{UniquenessClassDefEnergyIneq} \begin{aligned} \int_0^\per\int_{\R^3} \snorm{\grad\weakuvel}^2\,\dx\dt \leq \int_0^\per\int_{\R^3} f\cdot \weakuvel\,\dx\dt.
\end{aligned} \end{align} \end{enumerate} \end{defn} \begin{rem}\label{justificationOfPRsol} The characterization of a solution satisfying \ref{UniquenessClassDefProp1}--\ref{UniquenessClassDefProp4} in Definition \ref{UniquenessClassDef} as a \emph{physically reasonable} solution is justified by the physical properties that can be derived from properties \ref{UniquenessClassDefProp2} and \ref{UniquenessClassDefProp4}. More precisely, if we consider the fluid flow corresponding to the Eulerian velocity field $\weakuvel$ as the sum of a steady state $\proj\weakuvel$ and a non-steady part $\projcompl\weakuvel$, property \ref{UniquenessClassDefProp2} implies that the kinetic energy of the non-steady part of the flow is bounded. Property \ref{UniquenessClassDefProp4} states that the energy dissipated due to the viscosity of the fluid is less than the input of energy from the external forces. \end{rem} We now state the first main theorem of the paper, which establishes existence of a strong solution unique in the class of \emph{physically reasonable weak solutions}. We shall further show that this solution satisfies an energy equality. The theorem reads: \begin{thm}\label{ExistenceAndUniquenessThm} Let $q\in(1,\frac{6}{5}\big]$, $r\in(4,\infty)$ and $\lambda\neq 0$.
There is a constant $\Cc[ExistenceAndUniquenessThmConst]{eps}>0$ such that for any $f\in\LRloc{1}\bp{\R^3\times\R}^3$ satisfying \eqref{intro_timeperiodicdata} and \begin{align}\label{ExistenceAndUniquenessThmDataCond} \norm{f}_{\LR{q}\bp{\R^3\times(0,\per)}} + \norm{f}_{\LR{r}\bp{\R^3\times(0,\per)}} \leq \const{ExistenceAndUniquenessThmConst} \end{align} there is a solution $(\uvel,\upres)\in\LRloc{1}\bp{\R^3\times\R}^3\times\LRloc{1}\bp{\R^3\times\R}$ to \eqref{intro_nspastbodywholespace}--\eqref{intro_timeperiodicsolution} with $\uvel=\vvel+\wvel$ and \begin{align}\label{ExistenceAndUniquenessThmSolSpace} (\vvel,\wvel,\upres)\in\xoseen{q,r}(\R^3)\times\WSRsigmapercompl{2,1}{q,r}\bp{\R^3\times(0,\per)}\times\xpres{q,r}\bp{\R^3\times(0,\per)}. \end{align} Moreover, $\uvel$ belongs to and is unique in the class of \emph{physically reasonable weak solutions} characterized by Definition \ref{UniquenessClassDef}, and it satisfies the energy equality \begin{align}\label{EnergyEqEE} \int_0^\per\int_{\R^3} \snorm{\grad\uvel}^2\,\dx\dt = \int_0^\per\int_{\R^3} f\cdot\uvel\,\dx\dt. \end{align} \end{thm} The second main theorem of the paper concerns regularity properties of strong solutions. More specifically, it is shown that any additional regularity of the data translates into a similar degree of additional regularity for the solution. \begin{thm}\label{RegularityThm} Let $q\in\big(1,\frac{4}{3}\big]$, $r\in(8,\infty)$, $\lambda\neq0$ and $m\in\N_0$. 
If $f\in\LRloc{1}\bp{\R^3\times\R}^3$ satisfies \eqref{intro_timeperiodicdata} and \begin{align*} f\in\WSRper{m}{q}\bp{\R^3\times(0,\per)}^3\cap\WSRper{m}{r}\bp{\R^3\times(0,\per)}^3, \end{align*} then a solution $(\uvel,\upres)\in\LRloc{1}\bp{\R^3\times\R}^3\times\LRloc{1}\bp{\R^3\times\R}$ to \eqref{intro_nspastbodywholespace}--\eqref{intro_timeperiodicsolution} in the class \eqref{ExistenceAndUniquenessThmSolSpace} (with $\uvel=\vvel+\wvel$) satisfies \begin{align*} \begin{aligned} &\forall(\alpha,\beta,\kappa)\in\N_0^3\times\N_0^3\times\N_0,\ \snorm{\alpha}\leq m,\ \snorm{\beta}+\snorm{\kappa}\leq m:\\ &\qquad(\partial_x^\alpha\vvel,\partial_x^\beta\partial_t^\kappa\wvel,\partial_x^\beta\partial_t^\kappa\upres)\in \WSRloc{2}{r}(\R^3)\times\WSRloc{2,1}{r}\np{\R^3\times\R}\times\WSRloc{1,0}{r}\np{\R^3\times\R}\quad\text{with}\\ &\qquad(\partial_x^\alpha\vvel,\partial_x^\beta\partial_t^\kappa\wvel,\partial_x^\beta\partial_t^\kappa\upres)\in \xoseen{q,r}(\R^3)\times\WSRsigmapercompl{2,1}{q,r}\bp{\R^3\times(0,\per)}\times\xpres{q,r}\bp{\R^3\times(0,\per)}. \end{aligned} \end{align*} \end{thm} As a corollary to Theorem \ref{RegularityThm}, we state that a strong solution is smooth if the data is smooth. \begin{cor}\label{RegularitySmoothnessCor} Let $q\in\big(1,\frac{4}{3}\big]$, $r\in(8,\infty)$ and $\lambda\neq0$. If $f\in\CRper\bp{\R^3\times\R}^3$, then a solution $(\uvel,\upres)\in\LRloc{1}\bp{\R^3\times\R}^3\times\LRloc{1}\bp{\R^3\times\R}$ to \eqref{intro_nspastbodywholespace}--\eqref{intro_timeperiodicsolution} in the class \eqref{ExistenceAndUniquenessThmSolSpace} (with $\uvel=\vvel+\wvel$) satisfies $\uvel\in\CRper\bp{\R^3\times\R}^3$ and $\upres\in\CRper\bp{\R^3\times\R}$. \end{cor} \section{Notation} Points in $\R^3\times\R$ are denoted by $(x,t)$ with $x\in\R^3$ and $t\in\R$. We refer to $x$ as the spatial and to $t$ as the time variable. For a sufficiently regular function $u:\R^3\times\R\ra\R$, we put $\partial_i u:=\partial_{x_i} u$.
For any multiindex $\alpha\in\N_0^3$, we let $\partial_x^\alpha\uvel:= \partial_1^{\alpha_1}\partial_2^{\alpha_2}\partial_3^{\alpha_3}\uvel$ and put $\snorm{\alpha}:=\sum_{j=1}^3 \alpha_j$. Moreover, for $x\in\R^3$ we let $x^\alpha:=x_1^{\alpha_1}x_2^{\alpha_2}x_3^{\alpha_3}$. Differential operators act only in the spatial variable unless otherwise indicated. For example, we denote by $\Delta u$ the Laplacian of $u$ with respect to the spatial variables, that is, $\Delta u:=\sum_{j=1}^3\partial_j^2 u$. For a vector field $u:\R^3\times\R\ra\R^3$, we let $\Div u:=\sum_{j=1}^3\partial_j u_j$ denote the divergence of $u$. For $u:\R^3\times\R\ra\R^3$ and $v:\R^3\times\R\ra\R^3$ we let $(\nsnonlinb{u}{v}):\R^3\times\R\ra\R^3$ denote the vector field $(\nsnonlinb{u}{v})_i:=\sum_{j=1}^3 u_j\partial_j v_i$. For two vectors $a,b\in\R^3$, we let $a \otimes b\in\R^{3\times 3}$ denote the tensor with $(a\otimes b)_{ij}:=a_ib_j$. We denote by $\idmatrix$ the identity tensor $\idmatrix\in\R^{3\times 3}$. We use the symbol $\embeds$ to denote an embedding $X\embeds Y$ of one vector space $X$ into another vector space $Y$. In the case of topological vector spaces, embeddings are always required to be continuous. For a vector space $X$ and $A,B\subset X$, we write $X=A\oplus B$ iff $A$ and $B$ are subspaces of $X$ with $A\cap B=\set{0}$ and $X=A+B$. We also write $a\oplus b$ for elements of $A\oplus B$. Constants in capital letters in the proofs and theorems are global, while constants in lowercase letters are local to the proof in which they appear. \section{Reformulation in a group setting} We let $\grp$ denote the group \begin{align*} \grp:=\R^3\times\R/\per\Z \end{align*} with addition as the group operation. Clearly, there is a natural correspondence between $\per$-time-periodic functions defined on $\R^3\times\R$ and functions defined on $\grp$.
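To illustrate this correspondence, and the action of the projections $\proj$ and $\projcompl$ from \eqref{intro_defofprojGernericExpression}, consider the simple example $u(x,t):=a(x)+b(x)\cos\bp{\perf t}$ with $a,b\in\CRci(\R^3)$. Being $\per$-time-periodic, $u$ descends to a well-defined function on $\grp$, and
\begin{align*}
\proj u(x,t)=\iper\int_0^\per \Bb{a(x)+b(x)\cos\Bp{\perf s}}\,\ds = a(x),\qquad
\projcompl u(x,t)=b(x)\cos\Bp{\perf t},
\end{align*}
in accordance with the interpretation of $\proj u$ as the steady-state part and of $\projcompl u$ as the oscillatory part with vanishing time-average over the period.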
We shall take advantage of this correspondence and reformulate \eqref{intro_nspastbodywholespace}--\eqref{intro_timeperiodicdata} and the main theorems in a setting of vector fields defined on $\grp$. For this purpose, we introduce a differentiable structure on $\grp$ and define appropriate Lebesgue and Sobolev spaces. The group $\grp$, endowed with the canonical topology, is a locally compact abelian group. Consequently, it has a Fourier transform associated to it. The main advantage of working in a setting of $\grp$-defined functions is the ability to employ this Fourier transform and express solutions to linear systems of partial differential equations in terms of Fourier multipliers. \subsection{Differentiable structure, distributions and Fourier transform}\label{lt_differentiablestructuresubsection} The topology and differentiable structure on $\grp$ are inherited from $\R^3\times\R$. More precisely, we equip $\grp$ with the quotient topology induced by the canonical quotient mapping \begin{align*} \quotientmap :\R^3\times\R \ra \R^3\times\R/\per\Z,\quad \quotientmap(x,t):=(x,[t]). \end{align*} Equipped with the quotient topology, $\grp$ becomes a locally compact abelian group. We shall use the restriction \begin{align*} \bijection:\R^3\times[0,\per)\ra\grp,\quad \bijection:=\pi_{|\R^3\times[0,\per)} \end{align*} to identify $\grp$ with the domain $\R^3\times[0,\per)$. $\bijection$ is clearly a (continuous) bijection. Via $\bijection$, one can identify the Haar measure $\dg$ on $\grp$ as the product of the Lebesgue measure on $\R^3$ and the Lebesgue measure on $[0,\per)$. The Haar measure is unique up to a normalization factor, which we choose such that \begin{align*} \forall\uvel\in\CRc{}(\grp):\quad \int_\grp \uvel(g)\,\dg = \iper\int_0^\per\int_{\R^3} \uvel\circ\bijection(x,t)\,\dx\dt, \end{align*} where $\CRc{}(\grp)$ denotes the space of continuous functions of compact support.
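As a simple example of integration with respect to this normalization of the Haar measure (the identity above extends from $\CRc{}(\grp)$ to integrable functions in the usual way), take $u(x,t):=\e^{-\snorm{x}^2}\sin^2\bp{\perf t}$; then
\begin{align*}
\int_\grp u(g)\,\dg
= \Bp{\iper\int_0^\per \sin^2\Bp{\perf t}\,\dt}\int_{\R^3}\e^{-\snorm{x}^2}\,\dx
= \frac{1}{2}\,\pi^{\frac{3}{2}},
\end{align*}
the time-average over one full period contributing the factor $\frac{1}{2}$.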
For the sake of convenience, we will omit the $\bijection$ in integrals with respect to $\dx\dt$ of $\grp$-defined functions, that is, instead of $\iper\int_0^\per\int_{\R^3} \uvel\circ\bijection(x,t)\,\dx\dt$ we simply write $\iper\int_0^\per\int_{\R^3} \uvel(x,t)\,\dx\dt$. Next, we define by \begin{align}\label{lt_smoothfunctionsongrp} \CRi(\grp):=\setc{\uvel:\grp\ra\R}{\uvel\circ\quotientmap \in\CRi(\R^3\times\R)} \end{align} the space of smooth functions on $\grp$. For $\uvel\in\CRi(\grp)$ we define derivatives \begin{align*} \forall(\alpha,\beta)\in\N_0^3\times\N_0:\quad \partial_t^\beta\partial_x^\alpha\uvel := \bb{\partial_t^\beta\partial_x^\alpha (\uvel\circ\quotientmap)}\circ\bijectioninv. \end{align*} It is easy to verify for $\uvel\in\CRi(\grp)$ that also $\partial_t^\beta\partial_x^\alpha\uvel\in\CRi(\grp)$. With a differentiable structure defined on $\grp$ via \eqref{lt_smoothfunctionsongrp}, we can introduce the space of tempered distributions on $\grp$. For this purpose, we first recall the Schwartz-Bruhat space of generalized Schwartz functions; see for example \cite{Bruhat61}. More precisely, we define for $\uvel\in\CRi(\grp)$ the semi-norms \begin{align*} \rho_{\alpha,\beta,\gamma}(\uvel):=\sup_{(x,t)\in\grp} \snorm{x^\gamma\partial_t^\beta\partial_x^\alpha\uvel(x,t)}\quad \text{for }(\alpha,\beta,\gamma)\in\N_0^3\times\N_0\times\N_0^3, \end{align*} and put \begin{align*} \SR(\grp):=\setc{\uvel\in\CRi(\grp)}{\forall(\alpha,\beta,\gamma)\in\N_0^3\times\N_0\times\N_0^3:\ \rho_{\alpha,\beta,\gamma}(\uvel)<\infty}. \end{align*} Clearly, $\SR(\grp)$ is a vector space and $\rho_{\alpha,\beta,\gamma}$ a semi-norm on $\SR(\grp)$. We endow $\SR(\grp)$ with the semi-norm topology induced by the family $\setcl{\rho_{\alpha,\beta,\gamma}}{(\alpha,\beta,\gamma)\in\N_0^3\times\N_0\times\N_0^3}$. The topological dual space $\TDR(\grp)$ of $\SR(\grp)$ is then well-defined.
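To give a concrete example of these definitions: the function $u(x,t):=\e^{-\snorm{x}^2}\cos\bp{\perf t}$ belongs to $\SR(\grp)$, since every derivative $\partial_t^\beta\partial_x^\alpha u$ is a product of a polynomial, the Gaussian $\e^{-\snorm{x}^2}$ and a trigonometric factor, so that all semi-norms $\rho_{\alpha,\beta,\gamma}(u)$ are finite. By contrast, $w(x,t):=\cos\bp{\perf t}$ lies in $\CRi(\grp)$ but not in $\SR(\grp)$, as $\rho_{0,0,\gamma}(w)=\infty$ whenever $\snorm{\gamma}>0$.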
We equip $\TDR(\grp)$ with the weak* topology and refer to it as the space of tempered distributions on $\grp$. Observe that both $\SR(\grp)$ and $\TDR(\grp)$ remain closed under multiplication by smooth functions that have at most polynomial growth with respect to the spatial variables. For a tempered distribution $\uvel\in\TDR(\grp)$, distributional derivatives $\partial_t^\beta\partial_x^\alpha\uvel\in\TDR(\grp)$ are defined by duality in the usual manner: \begin{align*} \forall \psi\in\SR(\grp):\ \linf{\partial_t^\beta\partial_x^\alpha\uvel}{\psi}:=\linf{\uvel}{(-1)^{\snorm{(\alpha,\beta)}}\partial_t^\beta\partial_x^\alpha\psi}. \end{align*} It is easy to verify that $\partial_t^\beta\partial_x^\alpha\uvel$ is well-defined as an element of $\TDR(\grp)$. For tempered distributions on $\grp$, we keep the convention that differential operators like $\Delta$ and $\Div$ act only in the spatial variable $x$ unless otherwise indicated. We shall also introduce tempered distributions on $\grp$'s dual group $\dualgrp$. We associate each $(\xi,k)\in\R^3\times\Z$ with the character $\chi:\grp\ra\CNumbers,\ \chi(x,t):=\e^{ix\cdot\xi+ik\perf t}$ on $\grp$. It is standard to verify that all characters are of this form, and we can thus identify $\dualgrp = \R^3\times\Z$. By default, $\dualgrp$ is equipped with the compact-open topology, which in this case coincides with the product of the Euclidean topology on $\R^3$ and the discrete topology on $\Z$. The Haar measure on $\dualgrp$ is simply the product of the Lebesgue measure on $\R^3$ and the counting measure on $\Z$. A differentiable structure on $\dualgrp$ is obtained by introduction of the space \begin{align*} \CRi(\dualgrp):=\setc{\wvel\in\CR{}(\dualgrp)}{\forall k\in\Z:\ \wvel(\cdot,k)\in\CRi(\R^3)}. 
\end{align*} To define the generalized Schwartz-Bruhat space on the dual group $\dualgrp$, we further introduce for $\wvel\in\CRi(\dualgrp)$ the semi-norms \begin{align*} \dualrho_{\alpha,\beta,\gamma}(\wvel):= \sup_{(\xi,k)\in\dualgrp} \snorm{k^\beta \xi^\alpha \partial_\xi^\gamma \wvel(\xi,k)} \quad\text{for }(\alpha,\beta,\gamma)\in\N_0^3\times\N_0\times\N_0^3. \end{align*} We then put \begin{align*} \begin{aligned} \SR(\dualgrp)&:=\setc{\wvel\in\CRi(\dualgrp)}{\forall (\alpha,\beta,\gamma)\in\N_0^3\times\N_0\times\N_0^3:\ \dualrho_{\alpha,\beta,\gamma}(\wvel)<\infty}. \end{aligned} \end{align*} We endow the vector space $\SR(\dualgrp)$ with the semi-norm topology induced by the family of semi-norms $\setc{\dualrho_{\alpha,\beta,\gamma}}{(\alpha,\beta,\gamma)\in\N_0^3\times\N_0\times\N_0^3}$. The topological dual space of $\SR(\dualgrp)$ is denoted by $\TDR(\dualgrp)$. We equip $\TDR(\dualgrp)$ with the weak* topology and refer to it as the space of tempered distributions on $\dualgrp$. So far, all function spaces have been defined as real vector spaces of real functions. Clearly, we can define them analogously as complex vector spaces of complex functions. When a function space is used in the context of the Fourier transform, which we shall introduce below, we consider it as a complex vector space. The Fourier transform $\FT_\grp$ on $\grp$ is given by \begin{align*} \FT_\grp:\LR{1}(\grp)\ra\CR{}(\dualgrp),\quad \FT_\grp(\uvel)(\xi,k):=\ft{\uvel}(\xi,k):= \iper\int_0^\per\int_{\R^3} \uvel(x,t)\,\e^{-ix\cdot\xi-ik\perf t}\,\dx\dt. \end{align*} If no confusion can arise, we simply write $\FT$ instead of $\FT_\grp$. The inverse Fourier transform is formally defined by \begin{align*} \iFT:\LR{1}(\dualgrp)\ra\CR{}(\grp),\quad \iFT(\wvel)(x,t):=\ift{\wvel}(x,t):= \sum_{k\in\Z}\,\int_{\R^3} \wvel(\xi,k)\,\e^{ix\cdot\xi+ik\perf t}\,\dxi.
\end{align*} It is standard to verify that $\FT:\SR(\grp)\ra\SR(\dualgrp)$ is a homeomorphism with $\iFT$ as the actual inverse, provided the Lebesgue measure $\dxi$ is normalized appropriately. By duality, $\FT$ extends to a mapping $\TDR(\grp)\ra\TDR(\dualgrp)$. More precisely, we define \begin{align*} \FT:\TDR(\grp)\ra\TDR(\dualgrp),\quad \forall\psi\in\SR(\dualgrp):\ \linf{\FT(\uvel)}{\psi}:=\linf{\uvel}{\FT({\psi})}. \end{align*} Similarly, we define \begin{align*} \iFT:\TDR(\dualgrp)\ra\TDR(\grp),\quad \forall\psi\in\SR(\grp):\ \linf{\iFT(\uvel)}{\psi}:=\linf{\uvel}{\iFT({\psi})}. \end{align*} Clearly $\FT:\TDR(\grp)\ra\TDR(\dualgrp)$ is a homeomorphism with $\iFT$ as the actual inverse. The Fourier transform in the setting above provides us with a calculus between the differential operators on $\grp$ and the polynomials on $\dualgrp$. As one easily verifies, for $\uvel\in\TDR(\grp)$ and $\alpha\in\N_0^3$, $l\in\N_0$ we have \begin{align*} \FT\bp{\partial_t^l\partial_x^\alpha\uvel}=i^{l+\snorm{\alpha}}\,\Big(\perf\Big)^l\,k^l\,\xi^\alpha\,\FT(\uvel) \end{align*} as an identity in $\TDR(\dualgrp)$. \subsection{Function spaces}\label{lt_functionspacesSection} Having introduced smooth functions on $\grp$ in the form of the space $\CRi(\grp)$, we define function spaces of $\grp$-defined functions and vector fields corresponding to the Lebesgue and Sobolev spaces of $\per$-time-periodic functions and vector fields introduced in Section \ref{StatementOfMainResultSection}. We start by putting \begin{align*} &\CRci(\grp):=\setc{\uvel\in\CRi(\grp)}{\supp\uvel\text{ is compact}}. \end{align*} We let $\LR{q}(\grp)$ denote the usual Lebesgue space with respect to the Haar measure $\dg$, and let $\norm{\cdot}_q$ denote the norm. It is standard to verify that $\LR{q}(\grp)\subset\TDR(\grp)$.
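As a concrete illustration of the Fourier transform $\FT_\grp$ and of the calculus from the previous subsection, consider the example $u(x,t):=\phi(x)\cos\bp{\perf t}$ with $\phi\in\SR(\R^3)$. Writing $\cos\bp{\perf t}=\frac{1}{2}\bp{\e^{i\perf t}+\e^{-i\perf t}}$, we obtain
\begin{align*}
\ft{u}(\xi,k)=\frac{1}{2}\,\ft{\phi}(\xi)\bp{\delta_{k,1}+\delta_{k,-1}},\qquad
\ft{\phi}(\xi):=\int_{\R^3}\phi(x)\,\e^{-ix\cdot\xi}\,\dx,
\end{align*}
that is, only the discrete frequencies $k=\pm1$ are present. Consistently with the identity above,
\begin{align*}
\FT\bp{\partial_t u}(\xi,k)=i\,\perf\,k\,\ft{u}(\xi,k)
=\frac{i}{2}\,\perf\,\ft{\phi}(\xi)\bp{\delta_{k,1}-\delta_{k,-1}},
\end{align*}
which may also be verified directly from $\partial_t u=-\perf\,\phi(x)\sin\bp{\perf t}$.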
Classical Sobolev spaces are then defined as \begin{align*} \WSR{k}{q}(\grp):=\setc{\uvel\in\LR{q}(\grp)}{\norm{\uvel}_{k,q}<\infty}, \end{align*} where $\norm{\cdot}_{k,q}$ is defined exactly as in \eqref{MR_DefOfWSRper}, and the condition $\norm{\uvel}_{k,q}<\infty$ expresses that the distributional derivatives of $\uvel$ appearing in the norm $\norm{\cdot}_{k,q}$ all belong to $\LR{q}(\grp)$. We note that $\WSR{k}{q}(\grp)=\closure{\CRci(\grp)}{\norm{\cdot}_{k,q}}$ and $\LR{q}(\grp)=\closure{\CRci(\grp)}{\norm{\cdot}_{q}}$, which can be shown by standard arguments. Next, we let \begin{align*} &\CRcisigma(\grp):=\setc{\uvel\in\CRci(\grp)^3}{\Div\uvel=0}, \end{align*} and define the Banach spaces \begin{align*} \LRsigma{q}(\grp):=\closure{\CRcisigma(\grp)}{\norm{\cdot}_{q}},\quad \WSRsigma{2,1}{q}(\grp):= \closure{\CRcisigma(\grp)}{\norm{\cdot}_{2,1,q}} \end{align*} of solenoidal vector fields, where the norm $\norm{\cdot}_{2,1,q}$ is defined as in \eqref{MR_DefOfWSRsigmaper}. It can be shown that \begin{align} &\LRsigma{q}(\grp)=\setc{\uvel\in\LR{q}(\grp)^3}{\Div\uvel=0}.\label{lt_densitylemmaLRsigmaCharacterization} \end{align} This identity is well-known if the underlying domain is $\R^3$; a proof can be found in \cite[Chapter III.4]{galdi:book1}. Simple modifications to this proof (see \cite[Lemma 3.2.1]{habil}) suffice to establish the identity in the case where $\R^3$ is replaced with $\grp$. For convenience, we put \begin{align*} &\LRsigma{q,r}(\grp):=\LRsigma{q}(\grp)\cap\LRsigma{r}(\grp),&& \norm{\cdot}_{\LRsigma{q,r}(\grp)}:=\norm{\cdot}_q+\norm{\cdot}_r,\\ &\WSRsigma{2,1}{q,r}(\grp):=\WSRsigma{2,1}{q}(\grp)\cap\WSRsigma{2,1}{r}(\grp),&& \norm{\cdot}_{2,1,q,r}:=\norm{\cdot}_{2,1,q}+\norm{\cdot}_{2,1,r}, \end{align*} which are obviously Banach spaces in the associated norms. 
Recalling \eqref{intro_defofprojGernericExpression}, we define analogously the projection $\proj$ on $\grp$-defined functions: \begin{align*} \proj:\CRci(\grp)\ra\CRci(\grp),\quad \proj u(x,t):=\iper\int_0^\per u(x,s)\,\ds \end{align*} and put $\projcompl:=\id-\proj$. We make note of the following properties: \begin{lem}\label{ProjInGrpSettingLem} Let $q\in(1,\infty)$. The projection $\proj:\CRcisigma(\grp)\ra\CRcisigma(\grp)$ extends, by continuity, uniquely to a bounded projection $\proj:\LRsigma{q}(\grp)\ra\LRsigma{q}(\grp)$ and to a bounded projection $\proj:\WSRsigma{2,1}{q}(\grp)\ra\WSRsigma{2,1}{q}(\grp)$. The same is true for $\projcompl$. \end{lem} \begin{proof} Boundedness of $\proj$ in the norms of $\LRsigma{q}(\grp)$ and $\WSRsigma{2,1}{q}(\grp)$ can easily be verified by employing H\"older's and Minkowski's integral inequality; see also \cite[Lemma 4.5]{mrtpns}. \end{proof} \begin{lem}\label{lt_projextensionl1loc} $\proj$ extends uniquely to a projection $\proj:\LRloc{1}(\grp)\ra\LRloc{1}(\grp)$. The same is true for $\projcompl$. \end{lem} \begin{proof} For any $R>0$, $\proj$ extends uniquely, by continuity, to a bounded projection on $\LR{1}(\B_R\times\R/\per\Z)$. Thus, for $\uvel\in\LRloc{1}(\grp)$ the element $\proj\uvel$ is naturally defined in $\LR{1}(\B_R\times\R/\per\Z)$ for any $R>0$. Consequently, $\proj\uvel$, and thus also $\projcompl\uvel$, are well-defined as elements in $\LRloc{1}(\grp)$. \end{proof} \begin{lem}\label{lt_projprojcomplinnerprodl1locfunctions} Let $f,g\in\LRloc{1}(\grp)$. Then \begin{align}\label{lt_projprojcomplinnerprodl1locfunctionsinnerprod} \begin{aligned} \iper\int_0^\per \proj f(x,t) \cdot \projcompl g(x,t)\,\dt=0 \quad\text{for a.e. }x\in\R^3. \end{aligned} \end{align} \end{lem} \begin{proof} This is a simple consequence of the fact that $\proj f$ is independent of $t$: for a.e.\ $x\in\R^3$, \begin{align*} \iper\int_0^\per \proj f(x,t) \cdot \projcompl g(x,t)\,\dt = \proj f(x,0)\cdot\iper\int_0^\per \bb{g(x,t)-\proj g(x,t)}\,\dt = \proj f(x,0)\cdot\bb{\proj g(x,0)-\proj g(x,0)}=0. \end{align*}
\end{proof} \begin{lem}\label{lt_projsymbollem} The projections $\proj$ and $\projcompl$ extend uniquely, by continuity, to continuous operators $\proj:\TDR(\grp)\ra\TDR(\grp)$ and $\projcompl:\TDR(\grp)\ra\TDR(\grp)$ with \begin{align} &\proj f = \iFT_\grp\bb{\projsymbol\cdot \ft{f}},\label{lt_projsymbollemproj}\\ &\projcompl f = \iFT_\grp\bb{(1-\projsymbol)\cdot \ft{f}}\label{lt_projsymbollemprojcompl}, \end{align} where \begin{align*} \projsymbol:\dualgrp\ra\CNumbers,\quad \projsymbol(\xi,k):= \begin{pdeq} &1 && \tif k=0,\\ &0 && \tif k\neq0. \end{pdeq} \end{align*} \end{lem} \begin{proof} We simply observe for $f\in\SR(\grp)$ that \begin{align*} \FT_\grp\bb{\proj f}(\xi,k) &= \iper\int_0^\per\int_{\R^3}\iper\int_0^\per f(x,s)\,\ds\,\e^{-ix\cdot\xi-i\perf k t}\,\dx\dt\\ &= \projsymbol(\xi,k) \int_{\R^3}\iper\int_0^\per f(x,s)\,\ds\,\e^{-ix\cdot\xi}\,\dx = \projsymbol(\xi,k)\, \ft{f}(\xi,0) = \projsymbol(\xi,k)\, \ft{f}(\xi,k). \end{align*} The formula extends to $f\in\TDR(\grp)$ by duality. \end{proof} Having introduced the projections $\proj$ and $\projcompl$, we can now define \begin{align*} &\LRsigmacompl{q}(\grp) := \projcompl\LRsigma{q}(\grp),\\ &\LRsigmacompl{q,r}(\grp) := \projcompl\LRsigma{q,r}(\grp)\ = \LRsigmacompl{q}(\grp) \cap\ \LRsigmacompl{r}(\grp),\\ &\WSRsigmacompl{2,1}{q}(\grp):= \projcompl\WSRsigma{2,1}{q}(\grp),\\ &\WSRsigmacompl{2,1}{q,r}(\grp):= \projcompl\WSRsigma{2,1}{q,r}(\grp) = \WSRsigmacompl{2,1}{q}(\grp)\ \cap\ \WSRsigmacompl{2,1}{r}(\grp). \end{align*} Since $\proj u$ is $t$-independent, it is easy to verify that $\proj\LRsigma{q}(\grp)=\LRsigma{q}(\R^3)$. It follows that $\proj$ induces the decomposition \begin{align}\label{lrsigmaDecomp} \LRsigma{q}(\grp)=\LRsigma{q}(\R^3)\oplus\LRsigmacompl{q}(\grp).
\end{align} Next, we introduce the Helmholtz projection on the Lebesgue space $\LR{q}(\grp)^3$ by a classical Fourier-multiplier expression: \begin{lem}\label{lt_HelmholtzProjDefLem} The Helmholtz projection \begin{align}\label{lt_HelmholtzProjDefDef} \hproj: \LR{2}(\grp)^3\ra\LR{2}(\grp)^3,\quad \hproj f := \iFT_\grp\Bb{\Bp{\idmatrix - \frac{\xi\otimes\xi}{\snorm{\xi}^2}} \ft{f}} \end{align} extends for any $q\in(1,\infty)$ uniquely to a continuous projection $\hproj:\LR{q}(\grp)^3\ra\LR{q}(\grp)^3$. Moreover, $\hproj\LR{q}(\grp)^3=\LRsigma{q}(\grp)$. \end{lem} \begin{proof} The Fourier multiplier on the right-hand side in \eqref{lt_HelmholtzProjDefDef} is identical to the multiplier of the classical Helmholtz projection in the Euclidean $\R^3$-setting. Boundedness of $\hproj$ on $\LR{q}(\grp)^3$ can thus be derived from boundedness of the classical Helmholtz projection on $\LR{q}(\R^3)^3$. One readily verifies that $\hproj$ is a projection, and that $\Div\hproj f=0$. By \eqref{lt_densitylemmaLRsigmaCharacterization}, $\hproj\LR{q}(\grp)^3\subset\LRsigma{q}(\grp)$ follows. On the other hand, since $\Div f=0$ implies $\xi_j\ft{f}_j=0$, we have $\hproj f=f$ for all $f\in\LRsigma{q}(\grp)$. We conclude $\hproj\LR{q}(\grp)^3=\LRsigma{q}(\grp)$. \end{proof} Since $\hproj:\LR{q}(\grp)^3\ra\LR{q}(\grp)^3$ is a continuous projection, it decomposes $\LR{q}(\grp)^3$ into a direct sum \begin{align*} \LR{q}(\grp)^3=\LRsigma{q}(\grp)\oplus \gradspace{q}(\grp) \end{align*} of closed subspaces with \begin{align*} \gradspace{q}(\grp) := \bp{\id-\hproj}\LR{q}(\grp)^3. \end{align*} We further define \begin{align*} &\gradspace{q,r}(\grp):=\gradspace{q}(\grp)\cap\gradspace{r}(\grp),\quad \norm{\cdot}_{\gradspace{q,r}(\grp)}:=\norm{\cdot}_q+\norm{\cdot}_r, \end{align*} which is clearly a Banach space with respect to the associated norm.
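We remark that the projection property of $\hproj$ asserted above can be verified directly on the level of the multiplier in \eqref{lt_HelmholtzProjDefDef}: since $(\xi\otimes\xi)(\xi\otimes\xi)=\snorm{\xi}^2\,\xi\otimes\xi$, one computes \begin{align*} \Bp{\idmatrix - \frac{\xi\otimes\xi}{\snorm{\xi}^2}}^2 = \idmatrix - 2\,\frac{\xi\otimes\xi}{\snorm{\xi}^2} + \frac{\snorm{\xi}^2\,\xi\otimes\xi}{\snorm{\xi}^4} = \idmatrix - \frac{\xi\otimes\xi}{\snorm{\xi}^2}, \end{align*} that is, the multiplier is idempotent.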
We introduce the convention that a $\grp$-defined function $\uvel:\grp\ra\R$ can be considered an element of a function space of $\R^3$-defined functions, say $\xspacegeneric(\R^3)$, if and only if $\uvel$ is independent of $t$ and the restriction $\uvel_{|\R^3\times\set{0}}$ belongs to $\xspacegeneric(\R^3)$. In this context, we shall need, in addition to the spaces $\xoseen{q,r}(\R^3)$ defined in \eqref{MR_DefOfxoseenqr}, also the homogeneous Sobolev spaces $\DSR{m}{q}(\R^3)$ and their associated semi-norms: \begin{align*} \begin{aligned} &\DSR{m}{q}(\R^3):=\setc{u\in\LRloc{1}(\R^3)}{\forall\alpha\in\N_0^3\text{ with } \snorm{\alpha}=m:\ \partial^\alpha u \in\LR{q}(\R^3)},\\ &\snorm{u}_{m,q}:=\bigg( \sum_{\snorm{\alpha}=m}\, \int_{\R^3} \snorm{\partial^\alpha u(x)}^q\,\dx \bigg)^\frac{1}{q}. \end{aligned} \end{align*} Moreover, we will employ the space of solenoidal vector fields \begin{align*} \LRsigma{q}(\R^3):=\closure{\CRcisigma(\R^3)}{\norm{\cdot}_q} \end{align*} and \begin{align*} \LRsigma{q,r}(\R^3):=\LRsigma{q}(\R^3)\cap\LRsigma{r}(\R^3),\quad \norm{\cdot}_{\LRsigma{q,r}(\R^3)}:=\norm{\cdot}_q+\norm{\cdot}_r\,, \end{align*} which is obviously a Banach space in the given norm. Finally, we define for $\grp$-defined functions the norm $\norm{\cdot}_{\xpres{q,r}}$ exactly as in \eqref{MR_DefOfXpres} and let \begin{align*} \xpres{q,r}(\grp):=\setc{\upres\in\LRloc{1}(\grp)}{\norm{\upres}_{\xpres{q,r}}<\infty}. \end{align*} \subsection{Reformulation} Since the differentiable structure on $\grp$ is inherited from $\R^3\times\R$, we can formulate \eqref{intro_nspastbodywholespace}--\eqref{intro_timeperiodicdata} as a system of partial differential equations on $\grp$: \begin{align}\label{ss_nsongrp} \begin{pdeq} &\partial_t\uvel -\Delta\uvel -\rey\partial_1\uvel + \grad\upres + \nsnonlin{\uvel}= f && \tin\grp,\\ &\Div\uvel =0 && \tin\grp, \end{pdeq} \end{align} with unknowns $\uvel:\grp\ra\R^3$ and $\upres:\grp\ra\R$, and data $f:\grp\ra\R^3$.
Observe that in this formulation the periodicity conditions are no longer needed. Indeed, all functions defined on $\grp$ are by construction $\per$-time-periodic. Based on this reformulation, we now restate Theorem \ref{ExistenceAndUniquenessThm} and Theorem \ref{RegularityThm} in the setting of $\grp$-defined vector fields. For convenience, in the new formulation we split Theorem \ref{ExistenceAndUniquenessThm} into three parts: the statement of existence, the balance of energy, and the statement of uniqueness. \begin{thm}\label{ss_StrongSolThm} Let $q\in\big(1,\frac{4}{3}\big]$, $r\in(4,\infty)$ and $\rey\neq 0$. There is a constant $\Cc[ss_StrongSolThmEps]{eps}>0$ such that for any $f\in\LR{q}(\grp)^3\cap\LR{r}(\grp)^3$ with \begin{align}\label{ss_StrongSolThmDataCond} \norm{f}_{\LR{q}(\grp)} + \norm{f}_{\LR{r}(\grp)} \leq \const{ss_StrongSolThmEps} \end{align} there is a solution $(\uvel,\upres)$ to \eqref{ss_nsongrp} with $\uvel=\vvel+\wvel$ and \begin{align}\label{ss_StrongSolThmSolSpace} (\vvel,\wvel,\upres)\in\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)\times\xpres{q,r}(\grp). \end{align} \end{thm} \begin{defn}\label{ss_UniquenessClassDef} Let $f\in\LRloc{1}(\grp)^3$. We say that $\weakuvel\in\LRloc{1}(\grp)^3$ is a \emph{physically reasonable weak solution} to \eqref{ss_nsongrp} if, considered as a mapping $t\ra\weakuvel(\cdot,t)$, it satisfies $\weakuvel\in\LR{2}\bp{(0,\per);\DSRNsigma{1}{2}(\R^3)}$ and $\projcompl\weakuvel\in\LR{\infty}\bp{(0,\per);\LR{2}(\R^3)^3}$, if it satisfies \eqref{UniquenessClassDefDefofweaksol} for all $\Phi\in\CRcisigma(\grp)$, and if it satisfies the energy inequality \eqref{UniquenessClassDefEnergyIneq}. \end{defn} \begin{thm}\label{ss_EnergyEqThm} Let $q\in\big(1,\frac{4}{3}\big]$, $r\in(4,\infty)$, $\rey\neq 0$ and $f\in\LR{q}(\grp)^3\cap\LR{r}(\grp)^3$.
A solution $(\uvel,\upres)$ to \eqref{ss_nsongrp} in the class \eqref{ss_StrongSolThmSolSpace} (with $\uvel=\vvel+\wvel$) satisfies the energy equation \eqref{EnergyEqEE}. \end{thm} \begin{thm}\label{ss_UniquenessThm} Let $q\in\big(1,\frac{6}{5}\big]$, $r\in(4,\infty)$, $\rey\neq 0$ and $f\in\LR{q}(\grp)^3\cap\LR{r}(\grp)^3$. There is a constant $\Cc[ss_UniquenessThmConst]{eps}>0$ such that if $\norm{f}_q + \norm{f}_r \leq \const{ss_UniquenessThmConst}$, then a solution $(\uvel,\upres)$ to \eqref{ss_nsongrp} in the class \eqref{ss_StrongSolThmSolSpace} (with $\uvel=\vvel+\wvel$) is unique in the class of \emph{physically reasonable weak solutions} characterized by Definition \ref{ss_UniquenessClassDef}. \end{thm} \begin{thm}\label{ss_RegularityThm} Let $q\in\big(1,\frac{4}{3}\big]$, $r\in(8,\infty)$, $\rey\neq0$ and $m\in\N$. If $f\in\WSR{m}{q}(\grp)^3\cap\WSR{m}{r}(\grp)^3$, then a solution $(\uvel,\upres)$ to \eqref{ss_nsongrp} in the class \eqref{ss_StrongSolThmSolSpace} (with $\uvel=\vvel+\wvel$) satisfies \begin{align*} \begin{aligned} &\forall(\alpha,\beta,\kappa)\in\N_0^3\times\N_0^3\times\N_0,\ \snorm{\alpha}\leq m,\ \snorm{\beta}+\snorm{\kappa}\leq m:\\ &\qquad(\partial_x^\alpha\vvel,\partial_x^\beta\partial_t^\kappa\wvel,\partial_x^\beta\partial_t^\kappa\upres)\in \xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)\times\xpres{q,r}(\grp). \end{aligned} \end{align*} \end{thm} The main challenge is now to prove the theorems above, which will be done in the next section. The advantage gained by reformulating these theorems in the setting of the group $\grp$ is that the Fourier transform $\FT_\grp$ enables us to employ multiplier theory. \section{Proof of main theorems} We will now prove Theorem \ref{ExistenceAndUniquenessThm} and Theorem \ref{RegularityThm}. The proofs reduce to simple verifications once we have established Theorem \ref{ss_StrongSolThm}--Theorem \ref{ss_RegularityThm}.
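To illustrate the role of the Fourier transform $\FT_\grp$, note that the linearization of \eqref{ss_nsongrp} corresponds on the Fourier side to multiplication with a polynomial symbol: \begin{align*} \FT_\grp\bb{\partial_t\uvel-\Delta\uvel-\rey\partial_1\uvel}(\xi,k) = \bp{i\perf k + \snorm{\xi}^2 - i\rey\xi_1}\,\ft{\uvel}(\xi,k), \end{align*} so that solving the linear problem amounts to inverting this symbol within the framework of multiplier theory.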
First, however, we recall the maximal regularity results for the linearization of \eqref{ss_nsongrp} from \cite{mrtpns}. Based on the linear theory, the existence of a strong solution as stated in Theorem \ref{ss_StrongSolThm} will then be shown with the contraction mapping principle. Also the regularity properties in Theorem \ref{ss_RegularityThm} will be established from estimates obtained for the linear problem. We shall repeatedly make use of the following embedding property of homogeneous Sobolev spaces: \begin{lem}\label{HomSobEmbHighq} Let $q\in(1,\infty)$ and $r\in(3,\infty)$. Then \begin{align}\label{HomSobEmbHighqEmb} \forall u\in\DSR{1}{r}(\R^3)\cap\LR{q}(\R^3):\quad \norm{u}_\infty \leq \Cc[HomSobEmbHighqConst]{C} \bp{\snorm{u}_{1,r}+\norm{u}_q} \end{align} with $\Cclast{C}=\Cclast{C}(q,r)$. \end{lem} \begin{proof} See \cite[Remark II.7.2]{galdi:book1}. \end{proof} \begin{lem}\label{ss_StrongSolThmEmbeddingLem} Let $q\in\big(1,\frac{4}{3}\big]$ and $r\in(4,\infty)$. Then every $\vvel\in\xoseen{q,r}(\R^3)$ satisfies \begin{align} &\norm{\grad\vvel}_\infty\leq \Cc[ss_StrongSolThmEmbeddingLemConstGradvinfty]{C}\norm{\vvel}_{\xoseen{q,r}(\R^3)},\label{ss_StrongSolThmEmbeddingLemGradvinfty}\\ &\norm{\grad\vvel}_r\leq \Cc[ss_StrongSolThmEmbeddingLemConstGradvr]{C}\norm{\vvel}_{\xoseen{q,r}(\R^3)},\label{ss_StrongSolThmEmbeddingLemGradvr}\\ &\norm{\vvel}_\infty\leq \Cc[ss_StrongSolThmEmbeddingLemConstvinfty]{C}\norm{\vvel}_{\xoseen{q,r}(\R^3)}, \label{ss_StrongSolThmEmbeddingLemvinfty}\\ &\norm{\grad\vvel}_2\leq \Cc[ss_StrongSolThmEmbeddingLemConstGradv2]{C}\norm{\vvel}_{\xoseen{q,r}(\R^3)}. 
\label{ss_StrongSolThmEmbeddingLemGradv2} \end{align} Moreover, every $\wvel\in\WSRsigmacompl{2,1}{q,r}(\grp)$ satisfies \begin{align} \norm{\wvel}_\infty \leq \Cc[ss_StrongSolThmEmbeddingLemConstwinfty]{C}\norm{\wvel}_{2,1,q,r}.\label{ss_StrongSolThmEmbeddingLemwinfty} \end{align} \end{lem} \begin{proof}\newCCtr[c]{ss_StrongSolThmEmbeddingLem} Recall \eqref{HomSobEmbHighqEmb} and observe that \begin{align*} \norm{\grad\vvel}_\infty\leq \const{HomSobEmbHighqConst}\bp{\snorm{\vvel}_{2,r}+\norm{\grad\vvel}_{\frac{4q}{4-q}}}\leq \const{HomSobEmbHighqConst}\norm{\vvel}_{\xoseen{q,r}(\R^3)}, \end{align*} which implies \eqref{ss_StrongSolThmEmbeddingLemGradvinfty}. It follows that $\grad\vvel\in\LR{\frac{4q}{4-q}}(\R^3)\cap\LR{\infty}(\R^3)$ and consequently, since $\frac{4q}{4-q}<r<\infty$, by interpolation that \begin{align*} \norm{\grad\vvel}_r \leq \Cc{ss_StrongSolThmEmbeddingLem}\bp{\norm{\grad\vvel}_\infty + \norm{\grad\vvel}_\frac{4q}{4-q} } \leq \Cclast{ss_StrongSolThmEmbeddingLem} \norm{\vvel}_{\xoseen{q,r}(\R^3)}. \end{align*} This shows \eqref{ss_StrongSolThmEmbeddingLemGradvr}. With \eqref{ss_StrongSolThmEmbeddingLemGradvr} at our disposal, we again employ \eqref{HomSobEmbHighqEmb} and find that \begin{align*} \norm{\vvel}_\infty \leq \const{HomSobEmbHighqConst}\bp{\norm{\grad\vvel}_r + \norm{\vvel}_\frac{2q}{2-q} } \leq \Cc{ss_StrongSolThmEmbeddingLem} \norm{\vvel}_{\xoseen{q,r}(\R^3)}. \end{align*} Thus \eqref{ss_StrongSolThmEmbeddingLemvinfty} follows. To show \eqref{ss_StrongSolThmEmbeddingLemGradv2}, observe, since $q\leq\frac{4}{3}$ and thus $\frac{4q}{4-q}\leq 2$, that \begin{align*} \norm{\grad\vvel}_2 \leq \Cc{ss_StrongSolThmEmbeddingLem}\bp{\norm{\grad\vvel}_{\frac{4q}{4-q}} + \norm{\grad\vvel}_{\infty} } \leq \Cc{ss_StrongSolThmEmbeddingLem} \norm{\vvel}_{\xoseen{q,r}(\R^3)}.
\end{align*} Finally, the Sobolev embedding $\WSR{1}{r}(\grp)\embeds \LR{\infty}(\grp)$ for $r>4$ implies \eqref{ss_StrongSolThmEmbeddingLemwinfty}. This embedding follows from the classical Sobolev embedding $\WSR{1}{r}\bp{\R^3\times(0,\per)}\embeds \LR{\infty}\bp{\R^3\times(0,\per)}$, since $\bijection$, by lifting, induces an embedding $\WSR{1}{r}(\grp){\embeds}\WSR{1}{r}\bp{\R^3\times(0,\per)}$. \end{proof} \begin{lem}\label{lt_TPOseenUniqueness} If $\uvel\in\TDR(\grp)$ with $\proj\uvel=0$ satisfies \begin{align}\label{lt_TPOseenUniquenessEquation} \partial_t\uvel -\Delta\uvel -\rey\partial_1\uvel = 0\quad \tin\grp, \end{align} then $\uvel=0$. \end{lem} \begin{proof} Applying $\FT_\grp$ on both sides in \eqref{lt_TPOseenUniquenessEquation}, we deduce that $\bp{i\perf k + \snorm{\xi}^2 -\rey i\xi_1} \uvelft = 0$. Since the polynomial $\snorm{\xi}^2 + i\bp{\perf k -\rey \xi_1}$ vanishes only at $(\xi,k)=(0,0)$, we conclude that $\supp\ft{\uvel}\subset\set{(0,0)}$. However, since $\proj\uvel=0$ we have $\projsymbol\ft{\uvel}=0$, whence $(\xi,0)\notin\supp\ft{\uvel}$ for all $\xi\in\R^3$. Consequently, $\supp\ft{\uvel}=\emptyset$. It follows that $\ft{\uvel}=0$ and thus $\uvel=0$. \end{proof} \begin{lem}\label{lt_OseenUniquenessLem} Let $\vvel\in\LR{q}(\R^3)^3$ for some $q\in[1,\infty)$. If \begin{align}\label{lt_OseenUniquenessLemEquation} -\Delta\vvel -\rey\partial_1\vvel = 0\quad \tin\R^3, \end{align} then $\vvel=0$. \end{lem} \begin{proof} Applying the Fourier transform $\FT_{\R^3}$ in \eqref{lt_OseenUniquenessLemEquation}, we see that $\bp{\snorm{\xi}^2 -\rey i\xi_1} \ft{\vvel} = 0$. It follows that $\supp\ft{\vvel}\subset\set{0}$, whence $\vvel$ is a polynomial. Since $\vvel\in\LR{q}(\R^3)^3$, we must have $\vvel=0$. \end{proof} \begin{lem}\label{lt_TPOseenMappingThmLem} Let $q\in(1,\infty)$.
Then \begin{align*} &\ALTP:\WSRsigmacompl{2,1}{q}(\grp)\ra\LRsigmacompl{q}(\grp),\quad \ALTP\wvel:= \partial_t\wvel-\Delta{\wvel} -\rey\partial_1{\wvel} \end{align*} is a homeomorphism. Moreover, $\norm{\ALTPinverse}\leq \Cc{C}\,\polynomial(\rey,\per)$, where $\Cclast{C}=\Cclast{C}(q)$ and $\polynomial(\rey,\per)$ is a polynomial in $\rey$ and $\per$. \end{lem} \begin{proof} See \cite[Theorem 4.8]{mrtpns}. \end{proof} \begin{lem}\label{lt_OseenMappingThmLem} For $q\in (1,2)$, \begin{align*} \ALOseen:\xoseen{q,r}(\R^3)\ra\LRsigma{q,r}(\R^3),\quad\ALOseen\vvel:= -\Delta{\vvel} -\rey\partial_1{\vvel} \end{align*} is a homeomorphism. Moreover, $\norm{\ALOseeninverse}\leq\Cc[OseenAprioriConst]{C}$ with $\const{OseenAprioriConst}$ independent of $\rey$. \end{lem} \begin{proof} See \cite[Theorem VII.4.1]{galdi:book1}. \end{proof} \begin{lem}\label{ss_MaxRegThmLem} If $q\in(1,2)$, $r\in(4,\infty)$ and $\rey\neq 0$, then \begin{align*} \begin{aligned} &\ALTP: \xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)\ra\LRsigma{q,r}(\grp),\\ &\ALTP(\vvel,\wvel):= \partial_t\wvel-\Delta\bp{\vvel+\wvel} -\rey\partial_1\bp{\vvel+\wvel} \end{aligned} \end{align*} is a homeomorphism. Moreover, \begin{align}\label{ss_MaxRegThmOseenInvBound} \norm{\ALTPinverse} \leq \Cc[lt_MaxRegThmOseenMaxRegConst]{C}\, \polynomial(\rey,\per), \end{align} where $\polynomial(\rey,\per)$ is a polynomial in $\rey$ and $\per$, and $\Cclast{C}=\Cclast{C}(q)$. \end{lem} \begin{proof}\newCCtr[c]{ss_MaxRegThm} Recalling $\LRsigma{q,r}(\grp)=\LRsigma{q,r}(\R^3)\oplus\LRsigmacompl{q,r}(\grp)$ from \eqref{lrsigmaDecomp}, Lemma \ref{lt_TPOseenMappingThmLem} and Lemma \ref{lt_OseenMappingThmLem} conclude the proof. \end{proof} \begin{lem}\label{ss_PressureMappingLem} Let $q\in(1,3)$ and $r\in(1,\infty)$. Then \begin{align*} \begin{aligned} \gradmap: \xpres{q,r}(\grp)\ra \gradspace{q,r}(\grp),\quad \gradmap\,\upres:=\grad\upres \end{aligned} \end{align*} is a homeomorphism.
\end{lem} \begin{proof} See \cite[Lemma 5.4]{mrtpns}. \end{proof} \begin{proof}[Proof of Theorem \ref{ss_StrongSolThm}]\newCCtr[c]{ss_StrongSolThm} We can use the Helmholtz projection to eliminate the pressure term $\grad\upres$ in \eqref{ss_nsongrp}. More precisely, we shall first study \begin{align}\label{ss_nssol} \begin{pdeq} &\partial_t\uvel -\Delta\uvel -\rey\partial_1\uvel + \hproj\bb{\nsnonlin{\uvel}}= \hproj f && \tin\grp,\\ &\Div\uvel =0 && \tin\grp. \end{pdeq} \end{align} After solving \eqref{ss_nssol}, a pressure term $\upres$ can be constructed such that $(\uvel,\upres)$ solves \eqref{ss_nsongrp}. We first show that any pair of vector fields $(\vvel,\wvel)\in\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)$ satisfies $\nsnonlin{(\vvel+\wvel)}\in\LR{q}(\grp)^3\cap\LR{r}(\grp)^3$. Recalling \eqref{ss_StrongSolThmEmbeddingLemGradvr} and \eqref{ss_StrongSolThmEmbeddingLemvinfty}, we find that \begin{align}\label{ss_StrongSolThmNonlintermEst1} \norm{\nsnonlin{\vvel}}_r \leq \norm{\vvel}_\infty\,\norm{\grad\vvel}_r \leq \Cc{ss_StrongSolThm} \norm{\vvel}_{\xoseen{q,r}(\R^3)}^2. \end{align} Moreover, employing H\"older's inequality and recalling \eqref{ss_StrongSolThmEmbeddingLemGradv2} we deduce \begin{align} \norm{\nsnonlin{\vvel}}_q \leq \norm{\vvel}_{\frac{2q}{2-q}}\,\norm{\grad\vvel}_2 \leq \Cc{ss_StrongSolThm} \norm{\vvel}_{\xoseen{q,r}(\R^3)}^2. \end{align} We also observe that \begin{align} \norm{\nsnonlinb{\vvel}{\wvel}}_r \leq \norm{\vvel}_\infty\,\norm{\grad\wvel}_r \leq \Cc{ss_StrongSolThm} \norm{\vvel}_{\xoseen{q,r}(\R^3)}\,\norm{\wvel}_{2,1,q,r} \end{align} and \begin{align} \norm{\nsnonlinb{\vvel}{\wvel}}_q \leq \norm{\vvel}_\infty\,\norm{\grad\wvel}_q \leq \Cc{ss_StrongSolThm} \norm{\vvel}_{\xoseen{q,r}(\R^3)}\,\norm{\wvel}_{2,1,q,r}.
\end{align} Similarly, recalling \eqref{ss_StrongSolThmEmbeddingLemGradvinfty} we can estimate \begin{align} \norm{\nsnonlinb{\wvel}{\vvel}}_r \leq \norm{\wvel}_r\,\norm{\grad\vvel}_\infty \leq \Cc{ss_StrongSolThm} \norm{\wvel}_{2,1,q,r}\,\norm{\vvel}_{\xoseen{q,r}(\R^3)} \end{align} and \begin{align} \norm{\nsnonlinb{\wvel}{\vvel}}_q \leq \norm{\wvel}_q\,\norm{\grad\vvel}_\infty \leq \Cc{ss_StrongSolThm} \norm{\wvel}_{2,1,q,r}\,\norm{\vvel}_{\xoseen{q,r}(\R^3)}. \end{align} By \eqref{ss_StrongSolThmEmbeddingLemwinfty} it follows that also \begin{align} \norm{\nsnonlinb{\wvel}{\wvel}}_r \leq \norm{\wvel}_\infty\,\norm{\grad\wvel}_r \leq \Cc{ss_StrongSolThm} \norm{\wvel}_{2,1,q,r}^2 \end{align} and \begin{align}\label{ss_StrongSolThmNonlintermEst8} \norm{\nsnonlinb{\wvel}{\wvel}}_q \leq \norm{\wvel}_\infty\,\norm{\grad\wvel}_q \leq \Cc{ss_StrongSolThm} \norm{\wvel}_{2,1,q,r}^2. \end{align} Combining \eqref{ss_StrongSolThmNonlintermEst1}--\eqref{ss_StrongSolThmNonlintermEst8}, we conclude $\nsnonlin{(\vvel+\wvel)}\in\LR{q}(\grp)^3\cap\LR{r}(\grp)^3$ and \begin{align}\label{ss_StrongSolThmNonlintermEstFinal} \norm{\hproj\bb{\nsnonlin{(\vvel+\wvel)}}}_{\LRsigma{q,r}(\grp)}\leq \Cc[ss_StrongSolThmNonlintermEstFinalConst]{ss_StrongSolThm} \norm{(\vvel,\wvel)}_{\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)}^2. \end{align} Recalling Lemma \ref{ss_MaxRegThmLem}, we can now define the map \begin{align*} \begin{aligned} &\fpmap:\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)\ra\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp),\\ &\fpmap(\vvel,\wvel):=\ALTPinverse\bp{\hproj f-\hproj\bb{\nsnonlin{(\vvel+\wvel)}}}. \end{aligned} \end{align*} By construction, a fixed point $(\vvel,\wvel)$ of $\fpmap$ yields a solution $\uvel:=\vvel+\wvel$ to \eqref{ss_nssol}.
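For later reference in the contraction argument, we note the bilinearity of the nonlinear term: with $\uvel_j:=\vvel_j+\wvel_j$, $j=1,2$, one has the elementary splitting \begin{align*} \nsnonlin{\uvel_1}-\nsnonlin{\uvel_2} = \nsnonlinb{\uvel_1}{(\uvel_1-\uvel_2)} + \nsnonlinb{(\uvel_1-\uvel_2)}{\uvel_2}, \end{align*} so that estimates of the same type as above also yield a Lipschitz-type bound for $\fpmap$ on bounded sets.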
As $\xoseen{q,r}(\R^3)$ and $\WSRsigmacompl{2,1}{q,r}(\grp)$ are Banach spaces, we shall employ Banach's fixed point theorem to show existence of such a fixed point. To this end, we recall \eqref{ss_MaxRegThmOseenInvBound} and estimate \begin{align} \begin{aligned}\label{ss_StrongSolThmEstOfL} \norm{\fpmap(\vvel,\wvel)} &\leq \const{lt_MaxRegThmOseenMaxRegConst} \polynomial(\rey,\per)\,\bp{ \norm{\hproj f}_{\LRsigma{q,r}(\grp)} + \norm{\hproj\bb{\nsnonlin{(\vvel+\wvel)}}}_{\LRsigma{q,r}(\grp)} } \\ &\leq \Cc[ss_StrongSolThmFixedpointconst1]{ss_StrongSolThm}\polynomial(\rey,\per)\,\bp{ \const{ss_StrongSolThmEps} + \norm{(\vvel,\wvel)}_{\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)}^2\, }. \end{aligned} \end{align} Consequently, $\fpmap$ is a self-mapping on the ball $\overline{B_\rho}\subset\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)$ provided $\rho$ and $\const{ss_StrongSolThmEps}$ satisfy \begin{align*} \const{ss_StrongSolThmFixedpointconst1}\polynomial(\rey,\per)\bp{\const{ss_StrongSolThmEps} + \rho^2} \leq \rho. \end{align*} The above inequality is satisfied if we, for example, choose \begin{align}\label{ss_StrongSolThmParameterChoice} \rho:=\frac{1}{4\const{ss_StrongSolThmFixedpointconst1}\polynomial(\rey,\per)},\quad \const{ss_StrongSolThmEps}:= \frac{1}{16\const{ss_StrongSolThmFixedpointconst1}^2\polynomial(\rey,\per)^2}. \end{align} With this choice of parameters, we further have for $(\vvel_1,\wvel_1),(\vvel_2,\wvel_2)\in\overline{B_\rho}$: \begin{align*} \norm{\fpmap(\vvel_1,\wvel_1)-\fpmap(\vvel_2,\wvel_2)} &\leq \const{ss_StrongSolThmFixedpointconst1}\polynomial(\rey,\per)\, \bp{\norm{(\vvel_1,\wvel_1)}+\norm{(\vvel_2,\wvel_2)}}\, \norm{(\vvel_1,\wvel_1)-(\vvel_2,\wvel_2)}\\ &\leq \const{ss_StrongSolThmFixedpointconst1}\polynomial(\rey,\per)\,2\rho\, \norm{(\vvel_1,\wvel_1)-(\vvel_2,\wvel_2)} \\ &\leq \half\norm{(\vvel_1,\wvel_1)-(\vvel_2,\wvel_2)}. \end{align*} Thus, $\fpmap$ is a contractive self-mapping.
By Banach's fixed point theorem, $\fpmap$ then has a unique fixed point in $\overline{B_\rho}$. Finally, we construct the pressure. By \eqref{ss_StrongSolThmNonlintermEstFinal}, $\nsnonlin{\uvel}\in\LR{q}(\grp)^3\cap\LR{r}(\grp)^3$. Recalling Lemma \ref{ss_PressureMappingLem}, the function \begin{align*} \upres:=\gradmap^{-1}\bp{\bb{\id-\hproj}\bp{f-\nsnonlin{\uvel}}} \end{align*} belongs to $\xpres{q,r}(\grp)$. Clearly, $(\uvel,\upres)$ is a solution to \eqref{ss_nsongrp}. \end{proof} \begin{lem}\label{ss_AddRegOfWeakSolLem} Let $\rey\neq 0$ and $\weakuvel\in\LRloc{1}(\grp)^3$ be a generalized solution to \eqref{ss_nsongrp}, that is, it satisfies \eqref{UniquenessClassDefDefofweaksol} for all $\Phi\in\CRcisigma(\grp)$, with $\weakuvel\in\LR{2}\bp{(0,\per);\DSRNsigma{1}{2}(\R^3)}$ and $\projcompl\weakuvel\in\LR{\infty}\bp{(0,\per);\LR{2}(\R^3)^3}$. If for some $q\in\big(1,\frac{5}{4}\big]$ \begin{align}\label{ss_AddRegOfWeakSolLemDatacond1} f\in\LR{q}(\grp)^3\cap\LR{\frac{3}{2}}(\grp)^3, \end{align} then \begin{align} &\projcompl\weakuvel\in\WSRsigmacompl{2,1}{q}(\grp).\label{ss_AddRegOfWeakSolLemAddRegProjcompl} \end{align} If for some $\tq\in\big(1,\frac{3}{2}\big]$ \begin{align}\label{ss_AddRegOfWeakSolLemDatacond2} f\in\LR{\tq}(\grp)^3\cap\LR{\frac{3}{2}}(\grp)^3, \end{align} then \begin{align} &\proj\weakuvel\in\xoseen{\tq}(\R^3).\label{ss_AddRegOfWeakSolLemAddRegProj} \end{align} \end{lem} \begin{proof}\newCCtr[c]{ss_AddRegOfWeakSolLem} We first assume \eqref{ss_AddRegOfWeakSolLemDatacond1} for some $q\in\big(1,\frac{5}{4}\big]$. Put $\weakvvel:=\proj\weakuvel$ and $\weakwvel:=\projcompl\weakuvel$. By assumption $\weakwvel\in\LR{2}(\grp)^3$ and $\grad\weakwvel\in\LR{2}(\grp)^{3\times 3}$, whence \begin{align}\label{ss_AddRegOfWeakSolLemSummabilitywgradwlow} \nsnonlin{\weakwvel}\in\LR{1}(\grp)^3.
\end{align} Employing first H\"older's inequality and then a Sobolev-type inequality, see for example \cite[Lemma II.2.2]{galdi:book1} invoked with $n=3$, $q=2$ and $r=\frac{10}{3}$, we estimate \begin{align}\label{ss_AddRegOfWeakSolLemSummabilitywgradwhighpre} \begin{aligned} \iper\int_0^\per\int_{\R^3}\snorm{\nsnonlin{\weakwvel}}^{\frac{5}{4}}\,\dx\dt &\leq \iper\int_0^\per \norm{\grad\weakwvel(\cdot,t)}_2^{\frac{5}{4}}\, \norm{\weakwvel(\cdot,t)}_{\frac{10}{3}}^{\frac{5}{4}}\,\dt\\ &\leq \Cc{ss_AddRegOfWeakSolLem}\,\iper\int_0^\per \norm{\grad\weakwvel(\cdot,t)}_2^{\frac{5}{4}}\, \Bp{\norm{\grad\weakwvel(\cdot,t)}_2^{\frac{3}{5}}\, \norm{\weakwvel(\cdot,t)}_{2}^{\frac{2}{5}}}^{\frac{5}{4}}\,\dt\\ &\leq\Cc{ss_AddRegOfWeakSolLem}\, \bp{\esssup_{t\in(0,\per)}\norm{\weakwvel(\cdot,t)}_2}^{\frac{1}{2}}\cdot \iper\int_0^\per \norm{\grad\weakwvel(\cdot,t)}_2^{2}\,\dt <\infty, \end{aligned} \end{align} whence \begin{align}\label{ss_AddRegOfWeakSolLemSummabilitywgradwhigh} \nsnonlin{\weakwvel}\in\LR{\frac{5}{4}}(\grp)^3. \end{align} We further deduce, by employing first Minkowski's integral inequality, then H\"older's inequality, and finally the Sobolev embedding $\DSRNsigma{1}{2}(\R^3)\embeds\LR{6}(\R^3)$, that \begin{align*} \begin{aligned} \Bp{\int_{\R^3}\bigg|\iper\int_0^\per \nsnonlin{\weakwvel}\,\dt\bigg|^{\frac{3}{2}}\,\dx}^{\frac{2}{3}}&\leq \iper\int_0^\per\Bp{\int_{\R^3} \snorm{\nsnonlin{\weakwvel}}^{\frac{3}{2}}\,\dx}^{\frac{2}{3}}\,\dt \\ &\leq \iper\int_0^\per \Bp{\int_{\R^3} \snorm{\weakwvel}^{6}\,\dx}^{\frac{1}{6}} \Bp{\int_{\R^3} \snorm{\grad\weakwvel}^{2}\,\dx}^{\frac{1}{2}} \,\dt\\ &\leq \Cc{ss_AddRegOfWeakSolLem}\,\iper\int_0^\per \int_{\R^3} \snorm{\grad\weakwvel}^{2}\,\dx\dt<\infty. \end{aligned} \end{align*} Consequently, we have \begin{align}\label{ss_AddRegOfWeakSolLemSummabilityprojwgradwhigh} \proj\bb{\nsnonlin{\weakwvel}}\in\LR{\frac{3}{2}}(\grp)^3.
\end{align} Recalling Lemma \ref{lt_projprojcomplinnerprodl1locfunctions}, it is easy to verify from the weak formulation \eqref{UniquenessClassDefDefofweaksol} that \begin{align*} \begin{aligned} &\int_{\R^3} \grad\weakvvel:\grad\Phi -\rey\partial_1\weakvvel\cdot\Phi + \Bp{\nsnonlin{\weakvvel} + \proj\bb{\nsnonlin{\weakwvel}}}\cdot\Phi\,\dx = \int_{\R^3} \proj f\cdot\Phi\,\dx \end{aligned} \end{align*} for all $\Phi\in\CRcisigma(\R^3)$. This means that $\weakvvel\in\DSRNsigma{1}{2}(\R^3)$ is a generalized solution to the steady-state problem \begin{align}\label{ss_AddRegOfWeakSolLemEqForvvel} \begin{pdeq} &-\Delta\weakvvel -\rey\partial_1\weakvvel + \hproj\bb{\nsnonlin{\weakvvel}} = \hproj\proj f - \hproj\proj\bb{\nsnonlin{\weakwvel}} && \tin\R^3,\\ &\Div\weakvvel =0 && \tin\R^3. \end{pdeq} \end{align} From \eqref{ss_AddRegOfWeakSolLemSummabilitywgradwlow}, \eqref{ss_AddRegOfWeakSolLemSummabilityprojwgradwhigh}, and assumption \eqref{ss_AddRegOfWeakSolLemDatacond1}, we deduce the summability \begin{align*} \proj f - \proj\bb{\nsnonlin{\weakwvel}} \in\LR{q}(\R^3)^3\cap\LR{\frac{3}{2}}(\R^3)^3 \end{align*} for the right-hand side in \eqref{ss_AddRegOfWeakSolLemEqForvvel}. Known results for the steady-state Navier-Stokes problem \eqref{ss_AddRegOfWeakSolLemEqForvvel} then imply \begin{align}\label{ss_AddRegOfWeakSolLemSummabilityvvel} \weakvvel\in\xoseen{q}(\R^3)\cap\xoseen{\frac{3}{2}}(\R^3). \end{align} More specifically, we can employ \cite[Lemma X.6.1]{GaldiBookNew}\footnote{Lemma X.6.1 is new in the latest edition of the monograph \cite{GaldiBookNew}.}, which, although formulated for a three-dimensional exterior domain, also holds for solutions to the whole-space problem \eqref{ss_AddRegOfWeakSolLemEqForvvel}. By the additional regularity for $\weakvvel$ implied by \eqref{ss_AddRegOfWeakSolLemSummabilityvvel}, it follows that $\grad\weakvvel\in\LR{2}(\R^3)^{3\times 3}$.
Since by assumption $\weakwvel\in\LR{2}(\grp)^3$, we thus have $\nsnonlinb{\weakwvel}{\weakvvel}\in\LR{1}(\grp)^3$. In addition, we can deduce as in \eqref{ss_AddRegOfWeakSolLemSummabilitywgradwhighpre} that $\nsnonlinb{\weakwvel}{\weakvvel}\in\LR{\frac{5}{4}}(\grp)^3$. Consequently, by interpolation \begin{align}\label{ss_AddRegOfWeakSolLemSummabilitywgradv} \nsnonlinb{\weakwvel}{\weakvvel}\in\LR{q}(\grp)^3. \end{align} From \eqref{ss_AddRegOfWeakSolLemSummabilityvvel} we further obtain $\weakvvel\in\LR{\frac{2q}{2-q}}(\R^3)^3$, which combined with $\grad\weakwvel\in\LR{2}(\grp)^{3\times 3}$ yields \begin{align}\label{ss_AddRegOfWeakSolLemSummabilityvgradw} \nsnonlinb{\weakvvel}{\weakwvel}\in\LR{q}(\grp)^3. \end{align} We have now derived enough summability properties for the terms appearing in \eqref{ss_nsongrp} to finalize the proof. Recalling again Lemma \ref{lt_projprojcomplinnerprodl1locfunctions}, it is easy to verify from the weak formulation \eqref{UniquenessClassDefDefofweaksol} that \begin{align}\label{ss_AddRegOfWeakSolLemWeakFormulationwvel} \begin{aligned} &\iper\int_0^\per\int_{\R^3} -\weakwvel\cdot\partial_t\Phi +\grad\weakwvel:\grad\Phi -\rey\partial_1\weakwvel\cdot\Phi\\ &\qquad + \Bp{\projcompl[\nsnonlin{\weakwvel}] + \nsnonlinb{\weakwvel}{\weakvvel} + \nsnonlinb{\weakvvel}{\weakwvel}}\cdot\Phi\,\dx\dt = \iper\int_0^\per\int_{\R^3} \projcompl f\cdot\Phi\,\dx\dt \end{aligned} \end{align} for all $\Phi\in\CRcisigma(\grp)$. The summability of $\weakwvel$ and $\grad\weakwvel$ together with the summability properties obtained for $\nsnonlin{\weakwvel}$, $\nsnonlinb{\weakwvel}{\weakvvel}$, and $\nsnonlinb{\weakvvel}{\weakwvel}$ above enables us to extend \eqref{ss_AddRegOfWeakSolLemWeakFormulationwvel} to all $\Phi\in\SR(\grp)$.
Thus the system \begin{align*} \begin{pdeq} &\partial_t\weakwvel -\Delta\weakwvel -\rey\partial_1\weakwvel = \hproj\projcompl f - \hproj\Bb{\projcompl\bb{\nsnonlin{\weakwvel}} +\nsnonlinb{\weakwvel}{\weakvvel} +\nsnonlinb{\weakvvel}{\weakwvel} } && \tin\grp,\\ &\Div\weakwvel =0 && \tin\grp \end{pdeq} \end{align*} is satisfied as an identity in $\TDR(\grp)$. From \eqref{ss_AddRegOfWeakSolLemSummabilitywgradwlow}, \eqref{ss_AddRegOfWeakSolLemSummabilitywgradwhigh}, \eqref{ss_AddRegOfWeakSolLemSummabilitywgradv}, \eqref{ss_AddRegOfWeakSolLemSummabilityvgradw}, and the assumptions on $f$, we conclude that \begin{align*} \hproj\projcompl f - \hproj\Bb{\projcompl\bb{\nsnonlin{\weakwvel}} +\nsnonlinb{\weakwvel}{\weakvvel} +\nsnonlinb{\weakvvel}{\weakwvel} } \in\LR{q}(\grp)^3. \end{align*} Consequently, Lemma \ref{lt_TPOseenMappingThmLem} combined with Lemma \ref{lt_TPOseenUniqueness} implies \eqref{ss_AddRegOfWeakSolLemAddRegProjcompl}. Finally, assume \eqref{ss_AddRegOfWeakSolLemDatacond2} for some $\tq\in\big(1,\frac{3}{2}\big]$. In view of \eqref{ss_AddRegOfWeakSolLemSummabilitywgradwlow} and \eqref{ss_AddRegOfWeakSolLemSummabilityprojwgradwhigh}, we deduce \begin{align*} \proj f - \proj\bb{\nsnonlin{\weakwvel}} \in\LR{\tq}(\R^3)^3\cap\LR{\frac{3}{2}}(\R^3)^3. \end{align*} Recalling that $\weakvvel$ solves \eqref{ss_AddRegOfWeakSolLemEqForvvel}, and utilizing once more \cite[Lemma X.6.1]{GaldiBookNew}, we conclude \eqref{ss_AddRegOfWeakSolLemAddRegProj}. \end{proof} \begin{proof}[Proof of Theorem \ref{ss_EnergyEqThm}]\newCCtr[c]{ss_EnergyEqThm} The proof relies on the summability properties of the solution $\uvel=\vvel+\wvel$ being sufficient to multiply \eqref{ss_nsongrp} with $\uvel$ itself and subsequently integrate over space and time. Due to the different summability properties of $\vvel$ and $\wvel$, it is more convenient to carry out this process for $\vvel$ and $\wvel$ separately.
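At the heart of the computation lies the standard cancellation valid for sufficiently integrable solenoidal vector fields $u$ and $\varphi$, which we recall for the reader's convenience: \begin{align*} \int_{\R^3} (\nsnonlinb{u}{\varphi})\cdot\varphi\,\dx = \frac{1}{2}\int_{\R^3} u\cdot\grad\snorm{\varphi}^2\,\dx = -\frac{1}{2}\int_{\R^3} (\Div u)\,\snorm{\varphi}^2\,\dx = 0, \end{align*} where the last equality uses $\Div u=0$.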
Applying first $\projcompl$ and then $\hproj$ to both sides in \eqref{ss_nsongrp}, we obtain \begin{align}\label{ss_EnergyEqEqForwvel} \partial_t\wvel -\Delta\wvel -\rey\partial_1\wvel = \hproj\projcompl f - \hproj\Bb{\projcompl\bb{\nsnonlin{\wvel}} +\nsnonlinb{\wvel}{\vvel} +\nsnonlinb{\vvel}{\wvel} }, \end{align} which we multiply with $\wvel$ and integrate over $\grp$. We can easily verify that the product of $\wvel$ with each term in \eqref{ss_EnergyEqEqForwvel} is integrable over $\grp$. For example, we observe that \begin{align*} \iper\int_0^\per\int_{\R^3} \snorml{\partial_t\wvel\cdot\wvel}\,\dx\dt\leq \norm{\partial_t\wvel}_{4}\norm{\wvel}_{\frac{4}{3}}\leq \norm{\wvel}_{2,1,q,r}^2. \end{align*} Similarly, one can verify for all the other terms in \eqref{ss_EnergyEqEqForwvel} that the product with $\wvel$ can be integrated over $\grp$. We thus conclude that \begin{align}\label{ss_EnergyEqTestwithw} \begin{aligned} &\iper\int_0^\per\int_{\R^3} \partial_t\wvel\cdot\wvel -\Delta\wvel\cdot\wvel -\rey\partial_1\wvel\cdot\wvel \,\dx\dt\\ &\qquad = \iper\int_0^\per\int_{\R^3} \projcompl f\cdot \wvel - \projcompl\bb{\nsnonlin{\wvel}}\cdot\wvel -(\nsnonlinb{\wvel}{\vvel})\cdot\wvel -(\nsnonlinb{\vvel}{\wvel})\cdot\wvel\,\dx\dt, \end{aligned} \end{align} where the Helmholtz projection $\hproj$ can be omitted since $\wvel$ is solenoidal. Since $\wvel=\projcompl\wvel$ we can, recalling \eqref{lt_projprojcomplinnerprodl1locfunctionsinnerprod}, also omit the projection $\projcompl$ in the first two terms on the right-hand side. Moreover, the summability properties of $\wvel$ are sufficient to integrate by parts in each term above. Consequently, we obtain \begin{align}\label{ss_EnergyEqTestwithwandelimination} \begin{aligned} \iper\int_0^\per\int_{\R^3} \grad\wvel:\grad\wvel \,\dx\dt = \iper\int_0^\per\int_{\R^3} f\cdot \wvel - (\nsnonlinb{\wvel}{\vvel})\cdot\wvel \,\dx\dt. \end{aligned} \end{align} We now repeat the procedure with $\vvel$ in the role of $\wvel$.
Applying first $\proj$ and then $\hproj$ to both sides in \eqref{ss_nsongrp}, we obtain \begin{align}\label{ss_EnergyEqEqForv} -\Delta\vvel -\rey\partial_1\vvel = \hproj\proj f - \hproj\Bb{\proj\bb{\nsnonlin{\wvel}} +\nsnonlin{\vvel} }, \end{align} which we multiply with $\vvel$ and integrate over $\R^3$. Again it should be verified that the product of the terms in \eqref{ss_EnergyEqEqForv} with $\vvel$ is integrable over $\R^3$. This, however, is standard to show. For example, in view of \eqref{ss_StrongSolThmEmbeddingLemvinfty} and the fact that $\frac{2q}{2-q}\leq \frac{q}{q-1}$ it follows that \begin{align*} \snorml{\int_{\R^3}\Delta\vvel\cdot\vvel\,\dx}\leq \norm{\Delta\vvel}_q\norm{\vvel}_{\frac{q}{q-1}} \leq \norm{\vvel}_{\xoseen{q,r}(\R^3)}^2. \end{align*} Similarly, one can verify for all the other terms in \eqref{ss_EnergyEqEqForv} that the product with $\vvel$ can be integrated over $\R^3$. We thus conclude that \begin{align}\label{ss_EnergyEqTestwithv} \begin{aligned} &\int_{\R^3} -\Delta\vvel\cdot\vvel -\rey\partial_1\vvel\cdot\vvel\,\dx = \int_{\R^3} f\cdot \vvel - (\nsnonlin{\wvel})\cdot\vvel - (\nsnonlin{\vvel})\cdot\vvel \,\dx. \end{aligned} \end{align} One may also verify that the summability properties of $\vvel$ are sufficient to integrate by parts in \eqref{ss_EnergyEqTestwithv}. We thereby obtain \begin{align}\label{ss_EnergyEqTestwithvAfterElimination} \begin{aligned} &\int_{\R^3} \grad\vvel:\grad\vvel\dx = \int_{\R^3} f\cdot \vvel + (\nsnonlinb{\wvel}{\vvel})\cdot\wvel \,\dx. \end{aligned} \end{align} Adding together \eqref{ss_EnergyEqTestwithwandelimination} and \eqref{ss_EnergyEqTestwithvAfterElimination} we deduce \begin{align*} \begin{aligned} \iper\int_0^\per\int_{\R^3} \snorm{\grad\wvel}^2+\snorm{\grad\vvel}^2\,\dx\dt = \iper\int_0^\per\int_{\R^3} f\cdot (\vvel+\wvel) \,\dx\dt.
\end{aligned} \end{align*} Since \begin{align*} \iper\int_0^\per\int_{\R^3} \grad\vvel:\grad\wvel\,\dx\dt = 0, \end{align*} we finally conclude \eqref{EnergyEqEE}. \end{proof} \begin{proof}[Proof of Theorem \ref{ss_UniquenessThm}]\newCCtr[c]{ss_UniquenessThm} Choosing $\const{ss_UniquenessThmConst}\leq\const{ss_StrongSolThmEps}$, we obtain by Theorem \ref{ss_StrongSolThm} a solution $(\uvel,\upres)$ in the class \eqref{ss_StrongSolThmSolSpace} (with $\uvel=\vvel+\wvel$). From the proof of Theorem \ref{ss_StrongSolThm}, in particular \eqref{ss_StrongSolThmEstOfL}--\eqref{ss_StrongSolThmParameterChoice}, we recall that $\uvel\in\overline{\B_{\rho}}\subset\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)$ with $\rho:=\const{ss_UniquenessThmConst}^\half$, which means that \begin{align}\label{ss_UniquenessThmBoundOnSol} \norm{(\vvel,\wvel)}_{\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)} \leq \const{ss_UniquenessThmConst}^\half. \end{align} Now recall Definition \ref{ss_UniquenessClassDef} and consider a \emph{physically reasonable weak solution} $\weakuvel$ corresponding to the same data $f$. Put $\weakvvel:=\proj\weakuvel$ and $\weakwvel:=\projcompl\weakuvel$. We shall verify that the regularity of $\weakvvel$ and $\weakwvel$ ensured by Lemma \ref{ss_AddRegOfWeakSolLem} enables us to use $\uvel=\vvel+\wvel$ as a test function in the weak formulation for $\weakuvel=\weakvvel+\weakwvel$. Observe for example that \eqref{ss_AddRegOfWeakSolLemAddRegProj} implies $\weakvvel\in\LR{\frac{2q}{2-q}}(\R^3)^3$, from which it follows, since the H\"older conjugate $\big(\frac{2q}{2-q}\big)'=\frac{2q}{3q-2}$ belongs to the interval $(q,r)$, that \begin{align}\label{ss_UniquenessThmAddSummability1} \weakvvel\cdot\partial_t\wvel\in\LR{1}(\grp). \end{align} Moreover, since by assumption $\weakwvel\in\LR{2}(\grp)^3$, we also have \begin{align}\label{xxxUNUSEDLABELxxxss_UniquenessThmAddSummability2} \weakwvel\cdot\partial_t\wvel\in\LR{1}(\grp). 
\end{align} In a similar manner, one may verify that \begin{align}\label{ss_UniquenessThmAddSummability3} \grad\weakvvel:\grad\vvel,\ \grad\weakvvel:\grad\wvel,\ \grad\weakwvel:\grad\vvel,\ \grad\weakwvel:\grad\wvel\in\LR{1}(\grp). \end{align} From \eqref{ss_AddRegOfWeakSolLemAddRegProj} and the initial regularity of $\weakvvel$, we obtain $\partial_1\weakvvel\in\LR{q}(\R^3)^3\cap\LR{2}(\R^3)^3$. Thus, since $\vvel\in\LR{\frac{2q}{2-q}}(\R^3)^3$ and the H\"older conjugate $\big(\frac{2q}{2-q}\big)'=\frac{2q}{3q-2}$ belongs to the interval $(q,2)$, we deduce \begin{align}\label{xxxUNUSEDLABELxxxss_UniquenessThmAddSummability4} \partial_1\weakvvel\cdot\vvel\in\LR{1}(\R^3). \end{align} In view of \eqref{ss_AddRegOfWeakSolLemAddRegProjcompl}, the same argument yields \begin{align}\label{xxxUNUSEDLABELxxxss_UniquenessThmAddSummability5} \partial_1\weakwvel\cdot\vvel\in\LR{1}(\grp). \end{align} It is easy to see that \begin{align}\label{xxxUNUSEDLABELxxxss_UniquenessThmAddSummability6} \partial_1\weakvvel\cdot\wvel,\ \partial_1\weakwvel\cdot\wvel\in\LR{1}(\grp). \end{align} By Lemma \ref{ss_StrongSolThmEmbeddingLem}, we have $\vvel\in\LR{\frac{2q}{2-q}}(\R^3)^3\cap\LR{\infty}(\R^3)^3$. Moreover, recalling the embedding $\DSRNsigma{1}{2}(\R^3)\embeds\LR{6}(\R^3)$, we find that $\weakvvel\in\LR{\frac{2q}{2-q}}(\R^3)^3\cap\LR{6}(\R^3)^3$. We thus see that $\vvel,\wvel,\weakvvel\in\LR{4}(\grp)^3$, from which one can deduce that \begin{align}\label{xxxUNUSEDLABELxxxss_UniquenessThmAddSummability7} (\nsnonlin{\weakvvel})\cdot\vvel,\ (\nsnonlin{\weakvvel})\cdot\wvel,\ (\nsnonlinb{\weakvvel}{\weakwvel})\cdot\vvel,\ (\nsnonlinb{\weakvvel}{\weakwvel})\cdot\wvel\in\LR{1}(\grp).
\end{align} Lemma \ref{ss_StrongSolThmEmbeddingLem} also yields $\wvel\in\LR{\infty}(\grp)^3$, whence \begin{align}\label{ss_UniquenessThmAddSummability8} (\nsnonlinb{\weakwvel}{\weakvvel})\cdot\vvel,\ (\nsnonlinb{\weakwvel}{\weakvvel})\cdot\wvel,\ (\nsnonlinb{\weakwvel}{\weakwvel})\cdot\vvel,\ (\nsnonlinb{\weakwvel}{\weakwvel})\cdot\wvel\in\LR{1}(\grp). \end{align} Finally, recalling that $\big(\frac{2q}{2-q}\big)'=\frac{2q}{3q-2}\in(q,2)$, the summability of $f$ implies \begin{align}\label{ss_UniquenessThmAddSummability9} f\cdot\vvel,\ f\cdot\wvel\in\LR{1}(\grp). \end{align} From the summability properties \eqref{ss_UniquenessThmAddSummability1}--\eqref{ss_UniquenessThmAddSummability9}, we conclude, by a standard approximation argument, that $\uvel=\vvel+\wvel$ can be used as a test function in the weak formulation for $\weakuvel=\weakvvel+\weakwvel$ and thus obtain \begin{align}\label{ss_UniquenessThmTestOfWeakEqWithStrongSol} \begin{aligned} \iper\int_0^\per\int_{\R^3} -\weakwvel\cdot\partial_t\wvel +\grad\weakuvel:\grad\uvel -\rey\partial_1\weakuvel\cdot\uvel + (\nsnonlin{\weakuvel})\cdot\uvel\,\dx\dt = \iper\int_0^\per\int_{\R^3} f\cdot\uvel\,\dx\dt. \end{aligned} \end{align} We now consider the equation \begin{align}\label{ss_UniquenessThmEqforwvel} \partial_t\uvel -\Delta\uvel -\rey\partial_1\uvel + \grad\upres + \nsnonlin{\uvel}= f \quad\tin\grp \end{align} satisfied by the strong solution. We shall multiply \eqref{ss_UniquenessThmEqforwvel} with $\weakuvel$ and integrate over $\grp$. With the aid of Lemma \ref{ss_AddRegOfWeakSolLem} and Lemma \ref{ss_StrongSolThmEmbeddingLem}, one can verify as above that the resulting integral is well-defined. We thus obtain \begin{align*} \begin{aligned} \iper\int_0^\per\int_{\R^3} \partial_t\wvel\cdot\weakwvel -\Delta\uvel\cdot\weakuvel -\rey\partial_1\uvel\cdot\weakuvel + \grad\upres\cdot\weakuvel + (\nsnonlin{\uvel})\cdot\weakuvel \,\dx\dt = \iper\int_0^\per\int_{\R^3} f\cdot\weakuvel\,\dx\dt.
\end{aligned} \end{align*} Recalling \eqref{ss_UniquenessThmAddSummability3}--\eqref{ss_UniquenessThmAddSummability8}, we see that the following integration by parts is valid \begin{align}\label{ss_UniquenessThmTestOfStrongEqWithWeakSol} \begin{aligned} &\iper\int_0^\per\int_{\R^3} \partial_t\wvel\cdot\weakwvel +\grad\uvel:\grad\weakuvel +\rey\uvel\cdot\partial_1\weakuvel - (\nsnonlinb{\uvel}{\weakuvel})\cdot\uvel \,\dx\dt =\iper\int_0^\per\int_{\R^3} f\cdot\weakuvel\,\dx\dt. \end{aligned} \end{align} Adding together \eqref{ss_UniquenessThmTestOfWeakEqWithStrongSol} and \eqref{ss_UniquenessThmTestOfStrongEqWithWeakSol}, we deduce \begin{align}\label{ss_UniquenessThmAfterAddition} \begin{aligned} 2\,\iper\int_0^\per\int_{\R^3} \grad\weakuvel:\grad\uvel\,\dx\dt &= \iper\int_0^\per\int_{\R^3} f\cdot\weakuvel\,\dx\dt + \iper\int_0^\per\int_{\R^3} f\cdot\uvel\,\dx\dt \\ &\quad +\iper\int_0^\per\int_{\R^3} \bp{\nsnonlinb{(\uvel-\weakuvel)}{\weakuvel}}\cdot\uvel \,\dx\dt. \end{aligned} \end{align} Since \begin{align*} \begin{aligned} \iper\int_0^\per\int_{\R^3} \snorml{\grad\weakuvel-\grad\uvel}^2\,\dx\dt = \iper\int_0^\per\int_{\R^3} \snorml{\grad\weakuvel}^2+\snorml{\grad\uvel}^2 \,\dx\dt -2\,\iper\int_0^\per\int_{\R^3} \grad\weakuvel:\grad\uvel\,\dx\dt, \end{aligned} \end{align*} we can utilize \eqref{ss_UniquenessThmAfterAddition} in combination with the energy equality \eqref{EnergyEqEE} satisfied by $\uvel$ due to Theorem \ref{ss_EnergyEqThm} and the energy inequality \eqref{UniquenessClassDefEnergyIneq} satisfied by $\weakuvel$ to deduce \begin{align}\label{ss_UniquenessThmFinalInEq1} \begin{aligned} \iper\int_0^\per\int_{\R^3} \snorml{\grad\weakuvel-\grad\uvel}^2\,\dx\dt \leq \iper\int_0^\per\int_{\R^3} \bp{\nsnonlinb{(\weakuvel-\uvel)}{\weakuvel}}\cdot\uvel \,\dx\dt. \end{aligned} \end{align} Recalling \eqref{ss_StrongSolThmEmbeddingLemGradv2}, we see that $\grad\uvel\in\LR{2}(\grp)^{3\times 3}$. We already observed that $\uvel,\weakvvel\in\LR{4}(\grp)^3$.
Thus, an integration by parts yields \begin{align}\label{ss_UniquenessThmNonlinReformulation1} \iper\int_0^\per\int_{\R^3} \bp{\nsnonlinb{\weakvvel}{\uvel}}\cdot\uvel \,\dx\dt = 0. \end{align} Since $\weakwvel\in\LR{2}(\grp)^3$ and $\uvel\in\LR{\infty}(\grp)$, it further follows that \begin{align}\label{ss_UniquenessThmNonlinReformulation2} \iper\int_0^\per\int_{\R^3} \bp{\nsnonlinb{\weakwvel}{\uvel}}\cdot\uvel \,\dx\dt = 0. \end{align} Adding together \eqref{ss_UniquenessThmNonlinReformulation1} and \eqref{ss_UniquenessThmNonlinReformulation2} we obtain \begin{align}\label{ss_UniquenessThmNonlinReformulation} \iper\int_0^\per\int_{\R^3} \bp{\nsnonlinb{\weakuvel}{\uvel}}\cdot\uvel \,\dx\dt = 0. \end{align} Consequently, we can rewrite \eqref{ss_UniquenessThmFinalInEq1} as \begin{align}\label{ss_UniquenessThmFinalInEq2} \begin{aligned} \iper\int_0^\per\int_{\R^3} \snorml{\grad\weakuvel-\grad\uvel}^2\,\dx\dt \leq \iper\int_0^\per\int_{\R^3} \bp{\nsnonlinb{(\weakuvel-\uvel)}{(\weakuvel-\uvel)}}\cdot\uvel \,\dx\dt. \end{aligned} \end{align} Recalling the embedding $\DSRNsigma{1}{2}(\R^3)\embeds\LR{6}(\R^3)$, we estimate \begin{align}\label{ss_UniquenessThmFinalInEqEst} \begin{aligned} &\snormL{\iper\int_0^\per\int_{\R^3} \bp{\nsnonlinb{(\weakuvel-\uvel)}{(\weakuvel-\uvel)}}\cdot\uvel \,\dx\dt}\\ &\qquad \leq \iper\int_0^\per \norm{\weakuvel(\cdot,t)-\uvel(\cdot,t)}_6\, \norm{\grad\weakuvel(\cdot,t)-\grad\uvel(\cdot,t)}_2\, \norm{\uvel(\cdot,t)}_3\,\dt\\ &\qquad \leq \esssup_{t\in(0,\per)}\norm{\uvel(\cdot,t)}_3\, \iper\int_0^\per \norm{\grad\weakuvel(\cdot,t)-\grad\uvel(\cdot,t)}_2^2\,\dt. \end{aligned} \end{align} Now we finally need the assumption $q\leq \frac{6}{5}$, which implies $\frac{2q}{2-q}\leq 3$. 
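Indeed, the function $q\mapsto\frac{2q}{2-q}$ is increasing on $(1,2)$, and for $q=\frac{6}{5}$ one computes \begin{align*} \frac{2q}{2-q} = \frac{\frac{12}{5}}{\frac{4}{5}} = 3. \end{align*}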
Consequently, from the fact that $\norm{\vvel}_{\frac{2q}{2-q}}\leq \norm{\vvel}_{\xoseen{q,r}(\R^3)}$ and, by Lemma \ref{ss_StrongSolThmEmbeddingLem}, $\norm{\vvel}_{\infty}\leq \const{ss_StrongSolThmEmbeddingLemConstvinfty}\norm{\vvel}_{\xoseen{q,r}(\R^3)}$, we obtain \begin{align}\label{ss_UniquenessThmvvelLR3est} \norm{\vvel}_{\LR{3}(\R^3)} \leq \Cc{ss_UniquenessThm} \norm{\vvel}_{\xoseen{q,r}(\R^3)}. \end{align} Since $\wvel\in\WSRsigmacompl{2,1}{q,r}(\grp)\embeds\WSR{1}{3}(\grp)^3\embeds\WSR{1}{3}\bp{(0,\per);\LR{3}(\R^3)^3}$, standard Sobolev embedding yields $\wvel\in\LR{\infty}\bp{(0,\per);\LR{3}(\R^3)^3}$ with \begin{align}\label{ss_UniquenessThmwvelLR3est} \norm{\wvel}_{\LR{\infty}((0,\per);\LR{3}(\R^3))} \leq \Cc{ss_UniquenessThm} \norm{\wvel}_{2,1,q,r}. \end{align} Combining \eqref{ss_UniquenessThmvvelLR3est} and \eqref{ss_UniquenessThmwvelLR3est}, we obtain \begin{align*} \norm{\uvel}_{\LR{\infty}((0,\per);\LR{3}(\R^3))} \leq \Cc[ss_UniquenessThmuveLinftyL3Const]{ss_UniquenessThm} \norm{(\vvel,\wvel)}_{\xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp)}. \end{align*} This estimate together with \eqref{ss_UniquenessThmBoundOnSol}, \eqref{ss_UniquenessThmFinalInEq2} and \eqref{ss_UniquenessThmFinalInEqEst} finally yields \begin{align*} \begin{aligned} \iper\int_0^\per\int_{\R^3} \snorml{\grad\weakuvel-\grad\uvel}^2\,\dx\dt \leq \const{ss_UniquenessThmuveLinftyL3Const}\, \const{ss_UniquenessThmConst}^\half\, \iper\int_0^\per\int_{\R^3} \snorml{\grad\weakuvel-\grad\uvel}^2\,\dx\dt. \end{aligned} \end{align*} We conclude that $\weakuvel=\uvel$ if $\const{ss_UniquenessThmConst} < \const{ss_UniquenessThmuveLinftyL3Const}^{-2}$. \end{proof} \begin{rem} The proof of Theorem \ref{ss_UniquenessThm} follows an idea introduced by \textsc{Galdi} in \cite{Galdi1993_LRPR}. The same method was also used in \cite{Silvestre_TPFiniteKineticEnergy12} to show a uniqueness result for the time-periodic Navier-Stokes problem in the case $\rey=0$.
\end{rem} \begin{proof}[Proof of Theorem \ref{ss_RegularityThm}]\newCCtr[c]{ss_RegularityThm} By assumption, $(\uvel,\upres)$ is a solution to \eqref{ss_nsongrp} in the class \eqref{ss_StrongSolThmSolSpace} with $\uvel=\vvel+\wvel$. Applying first $\projcompl$ and then $\hproj$ on both sides in \eqref{ss_nsongrp} we obtain \begin{align}\label{ss_RegularityThmEqForwvel} \begin{pdeq} &\partial_t\wvel -\Delta\wvel -\rey\partial_1\wvel = \hproj\projcompl f - \hproj\Bb{\projcompl\bb{\nsnonlin{\wvel}} +\nsnonlinb{\wvel}{\vvel} +\nsnonlinb{\vvel}{\wvel} } && \tin\grp,\\ &\Div\wvel =0 && \tin\grp. \end{pdeq} \end{align} We shall ``take half a derivative in time'' on both sides of \eqref{ss_RegularityThmEqForwvel}. We therefore introduce the pseudo-differential operator \begin{align*} \partial_t^\half:\SR(\grp)\ra\SR(\grp),\quad \partial_t^\half\psi:=\iFT_\grp\Bb{\big(i\perf k\big)^\half\ft{\psi}}, \end{align*} which, by duality, extends to an operator $\partial_t^\half:\TDR(\grp)\ra\TDR(\grp)$. Note that $\nsnonlin{\wvel}=\Div(\wvel\otimes\wvel)$ for a solenoidal vector field $\wvel$. We thus find that \begin{align}\label{ss_RegularityThm_pthalfmultiplierrep} \begin{aligned} \partial_t^\half\Bb{\projcompl\bb{\nsnonlin{\wvel}}}_j &= \iFT_\grp\Bb{\frac{\big(1-\projsymbol(\xi,k)\big)\big(i\perf k\big)^\half (i\xi_l)}{\snorm{\xi}^2+i\perf k}\,(\snorm{\xi}^2+i\perf k)\,\ft{\wvel_j\wvel_l}}\\ &=\iFT_\grp\Bb{\Mmultiplier_l(\xi,k)\,\FT_\grp\bb{(\partial_t-\Delta)[\wvel_j\wvel_l]}} \end{aligned} \end{align} with \begin{align*} \Mmultiplier_l:\dualgrp\ra\CNumbers,\quad \Mmultiplier_l(\xi,k):=\frac{\big(1-\projsymbol(\xi,k)\big)\big(i\perf k\big)^\half (i\xi_l)}{\snorm{\xi}^2+i\perf k}. \end{align*} Observe that the only zero of the polynomial denominator of $\Mmultiplier_l$ is $(\xi,k)=(0,0)$. When $k=0$, however, the numerator vanishes due to the term $\big(1-\projsymbol(\xi,k)\big)$. Consequently, we see that $\Mmultiplier_l\in\CRi(\dualgrp)$ and that $\Mmultiplier_l$ is bounded.
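In fact, since $\snorml{1-\projsymbol(\xi,k)}\leq 1$ and $\snorml{\snorm{\xi}^2+i\perf k}^2=\snorm{\xi}^4+(\perf k)^2\geq 2\snorm{\xi}^2\snorm{\perf k}$, we have the elementary bound \begin{align*} \snorml{\Mmultiplier_l(\xi,k)} \leq \frac{\snorm{\perf k}^{\half}\,\snorm{\xi}}{\bp{\snorm{\xi}^4+(\perf k)^2}^{\half}} \leq \frac{\snorm{\perf k}^{\half}\,\snorm{\xi}}{\sqrt{2}\,\snorm{\xi}\,\snorm{\perf k}^{\half}} = \frac{1}{\sqrt{2}} \end{align*} whenever $\xi\neq 0$ and $k\neq 0$; in the remaining cases the numerator vanishes.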
Using the same idea introduced in \cite{mrtpns} based on the transference principle of Fourier multipliers, we will show that $\Mmultiplier_l$ is an $\LR{p}(\grp)$-multiplier for all $p\in(1,\infty)$. For this purpose, we let $\chi$ be a ``cut-off'' function with \begin{align*} \chi\in\CRci(\R;\R),\quad \chi(\eta)=1\ \text{for}\ \snorm{\eta}\leq\half,\quad \chi(\eta)=0\ \text{for}\ \snorm{\eta}\geq 1, \end{align*} and define \begin{align*} \mmultiplier_l:\R^3\times\R\ra\CNumbers,\quad \mmultiplier_l(\xi,\eta):=\frac{\big(1-\chi(\iperf\eta)\big)\big(i \eta\big)^\half (i\xi_l)}{\snorm{\xi}^2+i \eta}. \end{align*} Observe that the numerator in the definition of $\mmultiplier_l$ vanishes in a neighborhood of the only zero $(\xi,\eta)=(0,0)$ of the denominator. Away from $(0,0)$, $\mmultiplier_l$ is a rational function with non-vanishing denominator. Consequently, $\mmultiplier_l$ is smooth and bounded. Moreover, as one readily verifies, $\mmultiplier_l$ satisfies \begin{align*} \sup_{\epsilon\in\set{0,1}^{4}}\sup_{(\xi,\eta)\in\R^3\times\R} \snorml{\xi_1^{\epsilon_1}\xi_2^{\epsilon_2}\xi_3^{\epsilon_3}\eta^{\epsilon_{4}} \partial_{1}^{\epsilon_1}\partial_{2}^{\epsilon_2}\partial_{3}^{\epsilon_3}\partial_\eta^{\epsilon_{4}} \mmultiplier_l(\xi,\eta)} < \infty. \end{align*} This means that $\mmultiplier_l$ satisfies the condition of the Marcinkiewicz multiplier theorem; see for example \cite[Corollary 5.2.5]{Grafakos1} or \cite[Chapter IV, \S 6]{Stein70}. Consequently, $\mmultiplier_l$ is an $\LR{p}(\R^3\times\R)$-multiplier. Next, we introduce $\Phi:\dualgrp\ra\R^3\times\R$, $\Phi(\xi,k):=(\xi,\perf k)$. Clearly, $\Phi$ is a continuous homomorphism between the topological groups $\dualgrp$ and $\R^3\times\R$, the latter being considered a topological group in the canonical way. Observe that $\Mmultiplier_l=\mmultiplier_l\circ\Phi$.
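Before applying the transference principle, let us indicate how the above condition is verified, considering for example the derivative $\xi_1\partial_{\xi_1}\mmultiplier_l$ with $l\neq 1$; the cut-off is then not differentiated, and we can estimate \begin{align*} \snorml{\xi_1\partial_{\xi_1}\mmultiplier_l(\xi,\eta)} \leq \frac{2\snorm{\eta}^{\half}\,\xi_1^2\,\snorm{\xi_l}}{\snorm{\xi}^4+\eta^2} \leq \frac{2\snorm{\eta}^{\half}\snorm{\xi}^3}{\snorm{\xi}^4+\eta^2} \leq \frac{2\bp{\frac{1}{4}\eta^2+\frac{3}{4}\snorm{\xi}^4}}{\snorm{\xi}^4+\eta^2} \leq \frac{3}{2}, \end{align*} where we used Young's inequality $\snorm{\eta}^{\half}\snorm{\xi}^3=(\eta^2)^{\frac{1}{4}}(\snorm{\xi}^4)^{\frac{3}{4}}\leq\frac{1}{4}\eta^2+\frac{3}{4}\snorm{\xi}^4$. The remaining terms are treated analogously; the terms in which a derivative falls on the cut-off are harmless, since $\chi'$ is bounded and supported in a compact annulus in $\eta$.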
Since $\mmultiplier_l$ is an $\LR{p}(\R^3\times\R)$-multiplier, it follows from the transference principle of Fourier multipliers on groups\footnote{Originally, \textsc{de Leeuw} \cite{Leeuw1965} established the transference principle between the torus group and $\R$. \textsc{Edwards} and \textsc{Gaudry} \cite[Theorem B.2.1]{EdwardsGaudryBook} generalized the theorem of \textsc{de Leeuw} to arbitrary locally compact abelian groups. We employ this general version with groups $\R^3\times\R$ and $\grp:=\R^3\times\R/\per\Z$. A proof of the theorem for this particular choice of groups can be found in \cite[Theorem 3.4.5]{habil}.}, see \cite[Theorem B.2.1]{EdwardsGaudryBook} or \cite[Theorem 3.4.5]{habil}, that $\Mmultiplier_l$ is an $\LR{p}(\grp)$-multiplier. Recalling \eqref{ss_RegularityThm_pthalfmultiplierrep}, we thus obtain \begin{align}\label{ss_RegularityThmpartialthalfEst} \forall p\in(1,\infty):\quad \normL{\partial_t^\half\Bb{\projcompl\bb{\nsnonlin{\wvel}}}}_p \leq \Cc{ss_RegularityThm}\,\norm{(\partial_t-\Delta)[\wvel\otimes\wvel]}_p. \end{align} Due to $\wvel\in\WSRsigmacompl{2,1}{q,r}(\grp)$ and the fact that, by \eqref{ss_StrongSolThmEmbeddingLemwinfty}, $\wvel\in\LR{\infty}(\grp)$, we have $\partial_t\wvel_j\wvel_l\in\LR{q}(\grp)\cap\LR{r}(\grp)$ and $\Delta\wvel_j\wvel_l\in\LR{q}(\grp)\cap\LR{r}(\grp)$. Moreover, since $\frac{r}{2}>q$ we observe that \begin{align}\label{ss_RegularityThmgradgradregularity1} &\grad\wvel_j\cdot\grad\wvel_l\in\LR{q}(\grp)\cap\LR{\frac{r}{2}}(\grp). \end{align} Computing \begin{align*} (\partial_t-\Delta)[\wvel_j\wvel_l] = \partial_t\wvel_j\wvel_l + \wvel_j\partial_t\wvel_l -(\Delta\wvel_j\wvel_l + \wvel_j\Delta\wvel_l + 2 \grad\wvel_j\cdot\grad\wvel_l), \end{align*} we conclude by \eqref{ss_RegularityThmpartialthalfEst} that \begin{align}\label{ss_RegularityThmgradpartialthalfnonlintermregularity1} \partial_t^\half\Bb{\projcompl\np{\nsnonlin{\wvel}}}\in\LR{q}(\grp)\cap\LR{\frac{r}{2}}(\grp). 
\end{align} We now recall \eqref{ss_StrongSolThmEmbeddingLemGradvinfty}, \eqref{ss_StrongSolThmEmbeddingLemvinfty}, and \eqref{ss_StrongSolThmEmbeddingLemwinfty} to deduce \begin{align*} \begin{aligned} &\partial_t\wvel_j\vvel_l,\, \Delta\wvel_j\vvel_l,\, \Delta\vvel_j\wvel_l,\, \grad\wvel_j\cdot\grad\vvel_l\in\LR{q}(\grp)\cap\LR{r}(\grp). \end{aligned} \end{align*} By the same argument as above, we obtain \begin{align*} \partial_t^\half\bb{\nsnonlinb{\wvel}{\vvel}+\nsnonlinb{\vvel}{\wvel}} \in\LR{q}(\grp)\cap\LR{r}(\grp)\subset\LR{q}(\grp)\cap\LR{\frac{r}{2}}(\grp). \end{align*} We now apply $\partial_t^\half$ to both sides in \eqref{ss_RegularityThmEqForwvel}. Clearly, all differential operators commute with $\partial_t^\half$. Recalling definition \eqref{lt_HelmholtzProjDefDef} of the Helmholtz projection in terms of a Fourier multiplier, we also see that $\partial_t^\half$ commutes with $\hproj$. Similarly, $\partial_t^\half$ commutes with $\projcompl$. Consequently, after applying $\partial_t^\half$ to both sides in \eqref{ss_RegularityThmEqForwvel}, we obtain \begin{align*} \partial_t\bb{\partial_t^\half\wvel} -\Delta\bb{\partial_t^\half\wvel} -\rey\partial_1\bb{\partial_t^\half\wvel} \in\LRsigmacompl{q}(\grp)\cap\LRsigmacompl{\frac{r}{2}}(\grp). \end{align*} Combining now Lemma \ref{lt_TPOseenMappingThmLem} and Lemma \ref{lt_TPOseenUniqueness}, we conclude \begin{align}\label{ss_RegularityThmPartailthalfRegularity} \partial_t^\half\wvel\in\WSRsigmacompl{2,1}{q}(\grp)\cap \WSRsigmacompl{2,1}{\frac{r}{2}}(\grp).
\end{align} Since \begin{align*} \begin{aligned} \partial_t\partial_j\wvel &= \iFT_\grp\Bb{\frac{\big(1-\projsymbol(\xi,k)\big)\big(i\perf k\big)^\half (i\xi_j)}{\snorm{\xi}^2+i\perf k}\, \,\FT_\grp\bb{(\partial_t-\Delta)\partial_t^\half\wvel}}, \end{aligned} \end{align*} we deduce by analyzing the multiplier \begin{align*} (\xi,k)\ra\frac{\big(1-\projsymbol(\xi,k)\big)\big(i\perf k\big)^\half (i\xi_j)}{\snorm{\xi}^2+i\perf k} \end{align*} in the same way as we analyzed $\Mmultiplier_l$ that \begin{align*} \forall p\in(1,\infty):\quad \norm{\partial_t\partial_j\wvel}_p \leq \Cc{ss_RegularityThm} \norm{(\partial_t-\Delta)\partial_t^\half\wvel}_p. \end{align*} In view of \eqref{ss_RegularityThmPartailthalfRegularity}, we thus have \begin{align}\label{ss_RegularityThmpartialtpartialjregularity1} \partial_t\partial_j\wvel \in\LR{q}(\grp)\cap\LR{\frac{r}{2}}(\grp). \end{align} Combined with the fact that $\wvel\in\WSR{2,1}{q,r}(\grp)$, it follows that $\grad\wvel\in\WSR{1}{\frac{r}{2}}(\grp)$. Since $\frac{r}{2}>4$, classical Sobolev embedding yields $\WSR{1}{\frac{r}{2}}(\grp)\embeds\LR{\infty}(\grp)$. Thus \begin{align}\label{ss_RegularityThmwvelinfty} \grad\wvel\in\LR{\infty}(\grp). \end{align} With this information, we return to \eqref{ss_RegularityThmgradgradregularity1} and conclude that in fact \begin{align*} &\grad\wvel_j\cdot\grad\wvel_l\in\LR{q}(\grp)\cap\LR{r}(\grp). \end{align*} We therefore obtain improved regularity in \eqref{ss_RegularityThmgradpartialthalfnonlintermregularity1}, namely \begin{align*} \partial_t^\half\bb{\projcompl\np{\nsnonlin{\wvel}}}\in\LR{q}(\grp)\cap\LR{r}(\grp). \end{align*} Repeating the argument leading up to \eqref{ss_RegularityThmpartialtpartialjregularity1}, we then deduce \begin{align}\label{ss_RegularityThmwpartialtpartialjwvel} \partial_t\partial_j\wvel \in\LR{q}(\grp)\cap\LR{r}(\grp). \end{align} We shall now take a full derivative in time on both sides in \eqref{ss_RegularityThmEqForwvel}.
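Since $\vvel$ is independent of time, the product rule for the convective terms simply reads \begin{align*} \partial_t\bb{\nsnonlin{\wvel}} = \nsnonlinb{\partial_t\wvel}{\wvel}+\nsnonlinb{\wvel}{\partial_t\wvel}, \qquad \partial_t\bb{\nsnonlinb{\wvel}{\vvel}+\nsnonlinb{\vvel}{\wvel}} = \nsnonlinb{\partial_t\wvel}{\vvel}+\nsnonlinb{\vvel}{\partial_t\wvel}. \end{align*}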
Concerning the terms that will then appear on the right-hand side, we observe, recalling \eqref{ss_StrongSolThmEmbeddingLemGradvinfty}, \eqref{ss_StrongSolThmEmbeddingLemvinfty}, \eqref{ss_StrongSolThmEmbeddingLemwinfty}, \eqref{ss_RegularityThmwvelinfty}, and \eqref{ss_RegularityThmwpartialtpartialjwvel}, that \begin{align*} \begin{aligned} &\nsnonlinb{\partial_t\wvel}{\wvel},\, \nsnonlinb{\wvel}{\partial_t\wvel},\, \nsnonlinb{\partial_t\wvel}{\vvel},\, \nsnonlinb{\vvel}{\partial_t\wvel}\in\LR{q}(\grp)\cap\LR{r}(\grp). \end{aligned} \end{align*} Consequently, we have \begin{align*} \partial_t\bb{\partial_t\wvel} -\Delta\bb{\partial_t\wvel} -\rey\partial_1\bb{\partial_t\wvel} \in\LRsigmacompl{q}(\grp)\cap\LRsigmacompl{r}(\grp). \end{align*} Combining again Lemma \ref{lt_TPOseenMappingThmLem} and Lemma \ref{lt_TPOseenUniqueness}, we conclude the improved regularity \begin{align}\label{ss_RegularityThmPartailtRegularity} \partial_t\wvel\in\WSRsigmacompl{2,1}{q}(\grp)\cap \WSRsigmacompl{2,1}{r}(\grp) \end{align} of the time derivative of $\wvel$. We can establish the same improved regularity of spatial derivatives of $\wvel$. For this purpose we simply observe that \begin{align*} \begin{aligned} &\nsnonlinb{\partial_j\wvel}{\wvel},\, \nsnonlinb{\wvel}{\partial_j\wvel},\, \nsnonlinb{\partial_j\wvel}{\vvel},\, \nsnonlinb{\wvel}{\partial_j\vvel},\, \nsnonlinb{\partial_j\vvel}{\wvel},\, \nsnonlinb{\vvel}{\partial_j\wvel}\in\LR{q}(\grp)\cap\LR{r}(\grp), \end{aligned} \end{align*} which implies, by applying $\partial_j$ on both sides in \eqref{ss_RegularityThmEqForwvel}, that \begin{align*} \partial_t\bb{\partial_j\wvel} -\Delta\bb{\partial_j\wvel} -\rey\partial_1\bb{\partial_j\wvel} \in\LRsigmacompl{q}(\grp)\cap\LRsigmacompl{r}(\grp).
\end{align*} Employing yet again Lemma \ref{lt_TPOseenMappingThmLem} and Lemma \ref{lt_TPOseenUniqueness}, we obtain \begin{align}\label{ss_RegularityThmPartailjRegularitywvel} \grad\wvel\in\WSRsigmacompl{2,1}{q}(\grp)\cap \WSRsigmacompl{2,1}{r}(\grp). \end{align} We now turn our attention to $\vvel$. Applying $\proj$ to both sides in \eqref{ss_nsongrp}, we deduce \begin{align}\label{ss_RegularityThmEqForvvel} \begin{pdeq} &-\Delta\vvel -\rey\partial_1\vvel = \hproj\proj f - \hproj\Bb{\proj\bb{\nsnonlin{\wvel}} +\nsnonlin{\vvel} } && \tin\R^3,\\ &\Div\vvel =0 && \tin\R^3. \end{pdeq} \end{align} Recalling \eqref{ss_StrongSolThmEmbeddingLemGradvinfty}, \eqref{ss_StrongSolThmEmbeddingLemvinfty}, \eqref{ss_StrongSolThmEmbeddingLemwinfty} and \eqref{ss_RegularityThmwvelinfty}, one readily verifies \begin{align*} \begin{aligned} &\proj\nb{\nsnonlinb{\partial_j\wvel}{\wvel}},\, \proj\nb{\nsnonlinb{\wvel}{\partial_j\wvel}},\, \nsnonlinb{\partial_j\vvel}{\vvel},\, \nsnonlinb{\vvel}{\partial_j\vvel}\in\LR{q}(\R^3)\cap\LR{r}(\R^3). \end{aligned} \end{align*} Thus, applying $\partial_j$ on both sides in \eqref{ss_RegularityThmEqForvvel} we obtain \begin{align*} -\Delta\bb{\partial_j\vvel} -\rey\partial_1\bb{\partial_j\vvel} \in \LRsigma{q}(\R^3)\cap\LRsigma{r}(\R^3). \end{align*} By Lemma \ref{lt_OseenMappingThmLem} and Lemma \ref{lt_OseenUniquenessLem}, we conclude that \begin{align}\label{ss_RegularityThmPartailjRegularityvvel} \grad\vvel\in\xoseen{q,r}(\R^3). \end{align} Summarizing \eqref{ss_RegularityThmPartailtRegularity}, \eqref{ss_RegularityThmPartailjRegularitywvel}, and \eqref{ss_RegularityThmPartailjRegularityvvel}, we have established similar regularity for the first order derivatives of $\wvel$ and $\vvel$ as we had originally for $\wvel$ and $\vvel$. 
More precisely, we have \begin{align*} \forall\,\snorm{\alpha}\leq 1,\ \snorm{\beta}+\snorm{\kappa}\leq 1:\quad (\partial_x^\alpha\vvel,\partial_x^\beta\partial_t^\kappa\wvel)\in \xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp). \end{align*} Iterating the argument above with $(\partial_x^\alpha\vvel,\partial_x^\beta\partial_t^\kappa\wvel)$ in the role of $(\vvel,\wvel)$, we obtain the same regularity for all higher order derivatives as well, that is, \begin{align}\label{ss_RegularityThmFinalvwvelregularity} &\forall\snorm{\alpha}\leq m,\ \snorm{\beta}+\snorm{\kappa}\leq m: \quad (\partial_x^\alpha\vvel,\partial_x^\beta\partial_t^\kappa\wvel)\in \xoseen{q,r}(\R^3)\times\WSRsigmacompl{2,1}{q,r}(\grp). \end{align} Concerning the pressure term $\upres$, we clearly have \begin{align}\label{ss_RegularityThmPressureEq} \grad\upres = \bp{\id-\hproj}\bb{f-\nsnonlin{\uvel}}. \end{align} From \eqref{ss_RegularityThmFinalvwvelregularity} one easily deduces \begin{align*} \forall\snorm{\beta}+\snorm{\kappa}\leq m:\quad \partial_x^\beta\partial_t^\kappa\bb{\nsnonlin{\uvel}} \in\LR{q}(\grp)^3\cap\LR{r}(\grp)^3. \end{align*} Taking derivatives in \eqref{ss_RegularityThmPressureEq} and recalling Lemma \ref{ss_PressureMappingLem}, we thus obtain \begin{align*} \forall\snorm{\beta}+\snorm{\kappa}\leq m:\quad \partial_x^\beta\partial_t^\kappa\upres\in\xpres{q,r}(\grp), \end{align*} which completes the proof of the theorem. \end{proof} We have now shown the main results for the reformulated version \eqref{ss_nsongrp} of the system \eqref{intro_nspastbodywholespace}--\eqref{intro_timeperiodicsolution} in the setting of the group $\grp$. It remains to verify that the results carry over to the original time-periodic setting in $\R^3\times\R$. For this purpose, we make two basic observations. \begin{lem}\label{quotientmapLiftingProps} Let $k\in\N_0$ and $q\in(1,\infty)$.
The quotient mapping $\quotientmap:\R^3\times\R\ra\grp$ induces, by lifting, an embedding $\WSR{k}{q}(\grp)\embeds\WSRloc{k}{q}\bp{\R^3\times\R}$ with \begin{align}\label{quotientmapLiftingProps_ExtensionOfGrpDerivatives} \forall\snorm{(\alpha,\beta)}\leq k :\quad (\partial_t^\beta\partial_x^\alpha\uvel)\circ\quotientmap = {\partial_t^\beta\partial_x^\alpha (\uvel\circ\quotientmap)}. \end{align} Similarly, lifting by $\quotientmap$ induces the embedding $\WSRsigma{2,1}{q}(\grp)\embeds\WSRloc{2,1}{q}\np{\R^3\times\R}^3$, with the relevant derivatives satisfying \eqref{quotientmapLiftingProps_ExtensionOfGrpDerivatives}, and for $r\geq q$ also $\xpres{q,r}(\grp)\embeds\WSRloc{1,0}{r}\np{\R^3\times\R}$. \end{lem} \begin{proof} Consider $\uvel\in\WSR{k}{q}(\grp)$ and let $\phi\in\CRci(\R^3\times\R)$. Let $\set{\psi_k}_{k\in\Z}\subset\CRci(\R)$ be a partition of unity subordinate to the open cover $\setc{\bp{\frac{k}{2}\per,\np{\frac{k}{2}+1}\per}}{k\in\Z}$ of $\R$. The $\per$-periodicity of $\uvel\circ\quotientmap$ then implies \begin{align*} \int_{\R^3}\int_{\R} \uvel\circ\quotientmap\cdot\partial_t^\beta\partial_x^\alpha\phi\,\dt\dx &=\int_{\R^3}\int_{\R} \uvel\circ\quotientmap\cdot\partial_t^\beta\partial_x^\alpha\Bb{\sum_{k\in\Z}\psi_k\phi}\,\dt\dx\\ &=\sum_{k\in\Z}\,\int_{\R^3}\int_0^\per \uvel\circ\bijection\cdot\partial_t^\beta\partial_x^\alpha\Bb{(\psi_k\phi)(x,t+\frac{k}{2}\per)}\,\dt\dx\\ &=\sum_{k\in\Z}\,\int_{\grp} \uvel\cdot\partial_t^\beta\partial_x^\alpha\Bb{(\psi_k\phi)(\cdot,\cdot+\frac{k}{2}\per)\circ\bijectioninv}\,\dg\\ &=(-1)^{\snorm{(\alpha,\beta)}}\sum_{k\in\Z}\,\int_{\grp} \partial_t^\beta\partial_x^\alpha\uvel\cdot\Bb{(\psi_k\phi)(\cdot,\cdot+\frac{k}{2}\per)\circ\bijectioninv}\,\dg\\ &=(-1)^{\snorm{(\alpha,\beta)}}\int_{\R^3}\int_{\R} \bp{\partial_t^\beta\partial_x^\alpha\uvel}\circ\quotientmap\cdot\phi\,\dt\dx, \end{align*} from which we deduce \eqref{quotientmapLiftingProps_ExtensionOfGrpDerivatives}. The other statements follow analogously. 
\end{proof} \begin{lem}\label{bijectionHomeomorphism} The mapping $\bijection=\quotientmap_{|\R^3\times(0,\per)}$ induces, by lifting, a homeomorphism between $\WSR{k}{q}(\grp)$ and $\WSRper{k}{q}\bp{\R^3\times(0,\per)}$, $\WSRsigmacompl{2,1}{q,r}(\grp)$ and $\WSRsigmapercompl{2,1}{q,r}\bp{\R^3\times(0,\per)}$, as well as between $\xpres{q,r}(\grp)$ and $\xpres{q,r}\bp{\R^3\times(0,\per)}$. \end{lem} \begin{proof} The spaces $\CRci(\grp)$ and $\CRciper\bp{\R^3\times[0,\per]}$ are dense in the Sobolev spaces $\WSR{k}{q}(\grp)$ and $\WSRper{k}{q}\bp{\R^3\times(0,\per)}$, respectively. By construction of the differentiable structure on $\grp$, lifting by $\bijection$ is a homeomorphism between $\bp{\CRci(\grp),\norm{\cdot}_{k,q}}$ and the space $\bp{\CRciper\bp{\R^3\times[0,\per]},\norm{\cdot}_{k,q}}$. It follows that this mapping extends to a homeomorphism between $\WSR{k}{q}(\grp)$ and $\WSRper{k}{q}\bp{\R^3\times(0,\per)}$. The other statements follow analogously. \end{proof} \begin{proof}[Proof of Theorem \ref{ExistenceAndUniquenessThm}] It is easy to verify that lifting by $\bijection$ is a homeomorphism between $\LR{q}(\grp)$ and $\LR{q}\bp{\R^3\times(0,\per)}$ with $\norm{f\circ\bijectioninv}_{q,\grp}=\per^{-\frac{1}{q}}\norm{f}_{q,\R^3\times(0,\per)}$. We thus choose $\const{ExistenceAndUniquenessThmConst}\leq \bp{\per^{-\frac{1}{q}}+\per^{-\frac{1}{r}}}^{-1}\const{ss_StrongSolThmEps}$, where $\const{ss_StrongSolThmEps}$ is the constant from Theorem \ref{ss_StrongSolThm}. Consider now a vector field $f\in\LR{q}\bp{\R^3\times(0,\per)}^3\cap\LR{r}\bp{\R^3\times(0,\per)}^3$ satisfying \eqref{intro_timeperiodicdata} and \eqref{ExistenceAndUniquenessThmDataCond}. Then $\tf:=f\circ\bijectioninv$ satisfies \eqref{ss_StrongSolThmDataCond}, whence there exists, by Theorem \ref{ss_StrongSolThm}, a solution $(\tuvel,\tupres)\in\LRloc{1}(\grp)^3\times\LRloc{1}(\grp)$ to \eqref{ss_nsongrp} in the class \eqref{ss_StrongSolThmSolSpace} (with $\tuvel=\tvvel+\twvel$).
Letting $\uvel:=\tuvel\circ\quotientmap$ and $\upres:=\tupres\circ\quotientmap$, we deduce from Lemma \ref{quotientmapLiftingProps} and Lemma \ref{bijectionHomeomorphism} that $(\uvel,\upres)$ is a solution to \eqref{intro_nspastbodywholespace}--\eqref{intro_timeperiodicsolution} in the class \eqref{ExistenceAndUniquenessThmSolSpace}. By Lemma \ref{quotientmapLiftingProps} we further see that a vector field $\weakuvel$ is a weak solution in the sense of Definition \ref{UniquenessClassDef} corresponding to $f$ if and only if $\tweakuvel:=\weakuvel\circ\bijectioninv$ is a weak solution in the sense of Definition \ref{ss_UniquenessClassDef} corresponding to $\tf$. Consequently, uniqueness of $\uvel$ in the class of physically reasonable weak solutions follows from Theorem \ref{ss_UniquenessThm}. Finally, we obtain directly from Theorem \ref{ss_EnergyEqThm} that $\uvel$ satisfies the energy equality \eqref{EnergyEqEE}. \end{proof} \begin{proof}[Proof of Theorem \ref{RegularityThm}] Follows directly from Theorem \ref{ss_RegularityThm} in combination with Lemma \ref{quotientmapLiftingProps} and Lemma \ref{bijectionHomeomorphism}. \end{proof} \begin{proof}[Proof of Corollary \ref{RegularitySmoothnessCor}] The corollary follows from Theorem \ref{RegularityThm} by a standard localization argument combined with classical Sobolev embedding. \end{proof} \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} In today's modern society, catalysts are widely used in industrial sectors such as petroleum refining, fuel cells, chemical intermediates, pharmaceuticals, emission reduction, and agrochemicals to increase the rate of desired chemical reactions\cite{zhu2015engineering,liu2017noble,cai2018alkaline,li2004room,furstner2009gold,zheng2018preface,zhang2015catalysis}. Conventional supported heterogeneous catalysts contain clusters or nanoparticles dispersed on the surface of an appropriate support (e.g., metal oxides, 2D materials, or porous metal-organic framework nanomaterials). The atom utilization and selectivity of conventional heterogeneous catalysts are quite low, as only the part of a cluster or nanoparticle with a suitable size participates in the catalytic reaction. The remaining portion does not contribute to the desired reaction and may even be involved in unwanted side reactions. The heterogeneous catalysts used in petroleum refining, new energy applications, emission reduction, and pharmaceuticals contain noble metal atoms such as Pt, Pd, Ru, Rh, Ir, Ag, and Au. These noble metals are costly and of low natural abundance, so such catalysts cannot meet the increasing demand of industry, while simply minimizing their use degrades the catalytic activity of the reaction\cite{herzing2008identification,turner2008selective}. To overcome these issues, researchers have found that the most promising way to increase the atom utilization and selectivity of catalysts is to reduce the size of nanoclusters to isolated individual atoms, resulting in a catalyst containing single atoms on the surface of a support. Single-atom catalysts (SACs) are a new class of catalysts that contain isolated individual atoms dispersed on, or coordinated with, the surface atoms of a support. 
They exhibit high catalytic activity, stability, selectivity, and 100\% atom utilization because the whole surface area of a single atom is exposed to reactants in catalytic reactions\cite{liang2015power,wang2018heterogeneous,yang2013single,liu2017catalysis,cheng2019single,wang2019single}. In 2011, Zhang and co-workers\cite{Pt/FeOx} were the first to synthesize the single-atom Pt$_{1}$/FeO$_{x}$ catalyst and to investigate, experimentally and theoretically, its catalytic activity, selectivity, and stability for CO oxidation. Since then, SACs have attracted a lot of researchers, and numerous SACs have been synthesized and developed in recent years by combining different noble atoms with different supports such as metal oxides, 2D materials, or porous metal-organic framework (MOF) nanomaterials. Examples of SACs fabricated on metal oxides, 2D materials, and MOFs are {[}Pt$_{1}$/FeO$_{x}$\cite{Pt/FeOx}, Rh/ZrO$_{2}$\cite{Rh1/ZrO2}, Pt/$\theta$-Al$_{2}$O$_{3}$\cite{Pt/Al2O3}, Ir$_{1}$/FeO$_{x}$\cite{Ir1/FeOx}, Au$_{1}$/CeO$_{2}$\cite{Au/CeO2}, Au$_{1}$/Co$_{3}$O$_{4}$\cite{Au1/Co3O4}, Au$_{1}$/FeO$_{x}$\cite{Au1/FeOx}, Pd/FeO$_{x}$\cite{Pd/FeOx}, Pd$_{1}$/TiO$_{2}$\cite{Pd/TiO2_liu2016photochemical}{]}, {[}Pt/g-C$_{3}$N$_{4}$\cite{Pt-G-C3N4}, Pt/MoS$_{2}$\cite{Pt/MOS2_li2018synergetic}, Pt/GNS\cite{ALD_Sun}, Pd$_{1}$/graphene\cite{Pd_graphene_ALD_yan}{]}, and {[}Co-SAs/N-C\cite{BMOF_yin2016single}, Fe-SAs/N-C\cite{Fe_ZIF-8_chen2017isolated}, Ni-SAs/N-C\cite{Ni-ZIF-8_zhao2017ionic}, Ru-SAs/N-C\cite{MOF_wang2017uncoordinated}{]}, respectively. SACs have emerged as a new frontier in catalysis science because of their excellent performance. 
In recent years, many researchers have reported that SACs show excellent performance in various catalytic reactions, such as CO oxidation\cite{Pt/FeOx,Ir1/FeOx_CO_liang2014theoretical,Ni1/FeOx_CO_liang2016theoretical,Au1/FeOx,Pt/Al2O3,M1/FeOx_li2014exploration,Pt/CeO2_CO_oxidation_ALD_wang}, the water\textminus gas shift (WGS) reaction\cite{Ir1/FeOx,WGS_review_flytzani2012atomically,wgs_thomas2011can,WGS_Au_flytzani2013gold,Au/CeO2_WGS_DFTsong2014mechanistic,Au-OHx/TiO2_WGSyang2013atomically,liang2020dual}, water splitting, hydrogenation reactions, carbon dioxide reduction, etc. Despite their excellent performance, SACs have some limitations and disadvantages. The stabilization of single atoms on the surface of a support is a very challenging process due to the agglomeration of single atoms, and it requires advanced synthesis techniques, which we discuss in the next section. The remainder of the paper is organized as follows. In Section \ref{sec:Synthesis-of-SAC}, we briefly discuss the advanced synthesis methods of SACs, while in Section \ref{sec:Application-of-SAC}, we describe the applications of SACs in different chemical reactions and their reaction mechanisms from theoretical aspects. Finally, in Section \ref{sec:Summary-and-Conclusions}, we summarize our review article. \section{\label{sec:Synthesis-of-SAC}Synthesis of Single-Atom Catalysts} In this section, we present the various synthesis methods for the fabrication of single-atom catalysts. The stabilization of a single atom on the surface of a metal oxide or two-dimensional material is very challenging because single atoms tend to agglomerate and form nanoparticles and clusters on the surface. This agglomeration happens because the surface free energy of nanoparticles and clusters is lower than that of isolated single atoms. 
Therefore, advanced synthesis methods such as the impregnation method, the co-precipitation method, other wet-chemical synthesis methods, the atomic layer deposition method, and metal-organic-framework-derived methods are used for fabricating single-atom catalysts; these are discussed below. \subsection{Impregnation Method} The impregnation method is the simplest and most economical method for the synthesis of single-atom or supported catalysts. In this method, a small amount of solution containing the active metal precursor is mixed with the catalyst support, and the active metal is stabilized on the surface of the support through ion-exchange and adsorption processes. Li et al.\cite{Pt-G-C3N4} synthesized Pt/g-C$_{3}$N$_{4}$ (see Fig. \ref{fig:impregnation_HAADF-STEM} A) by performing a liquid-phase reaction between graphitic carbon nitride (g-C$_{3}$N$_{4}$) and H$_{2}$PtCl$_{6}$, followed by annealing at low temperature; this catalyst exhibits high activity for H$_{2}$ evolution. They prepared four samples of the supported catalyst with different metal loadings (i.e., 0.075\%, 0.11\%, 0.16\%, and 0.38 wt\%) and found that at 0.16 wt\%, bright spots corresponding to isolated Pt atoms distributed over the surface of g-C$_{3}$N$_{4}$ can be seen in HAADF-STEM images. When the loading is increased up to 0.38 wt\%, aggregation of Pt atoms is observed on the surface. Yang et al.\cite{Pt/TiN} prepared a Pt/TiN catalyst (see Fig. \ref{fig:impregnation_HAADF-STEM} B) using the wetness impregnation method, in which a small amount (0.35 wt\%) of Pt atoms is loaded on the surface of an acid-treated TiN support; this catalyst is found to be active for the oxygen reduction reaction as well as formic acid and methanol oxidation. 
\begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{Impregnation} \par\end{centering} \caption{\label{fig:impregnation_HAADF-STEM} HAADF-STEM images of different single-atom catalysts synthesized using the impregnation method: (a) Pt/g-C$_{3}$N$_{4}$, (b) Pt/TiN, (c) Pt/LSC, (d) Pt/$\theta-$Al$_{2}$O$_{3}$. (a) reprinted/reproduced from Ref. {[}25{]} with the permission of the Wiley-VCH publishing group, copyright 2016; (b) reprinted/reproduced from Ref. {[}43{]} with the permission of the Wiley-VCH publishing group, copyright 2016; (c) reprinted/reproduced from Ref. {[}44{]} with the permission of the Nature publishing group, copyright 2016; (d) reprinted/reproduced from Ref. {[}18{]} with the permission of the American Chemical Society publishing group, copyright 2013.} \end{figure} Yang et al. observed the formation of Pt nanoparticles on the surface of the support when the loading is increased above 0.35 wt\%. Recently, Choi et al.\cite{choi2016tuning} synthesized a highly loaded (5 wt\%) Pt/S-ZTC catalyst (see Fig. \ref{fig:impregnation_HAADF-STEM} C) using the wet impregnation method, in which Pt is atomically dispersed on the surface of sulfur-doped zeolite-templated carbon (S-ZTC). The doped sulfur and the unique three-dimensional structure of ZTC stabilize the loaded Pt atoms on the support surface. They reported that Pt/S-ZTC exhibits high activity for the oxygen reduction reaction. Kwon et al.\cite{Rh1/ZrO2} studied the activation of methane for methanol production using Rh/ZrO$_{2}$ SACs, which they prepared by the wet impregnation method. Moses-DeBusk et al.\cite{Pt/Al2O3} studied the CO oxidation activity of a single Pt atom supported on $\theta$-alumina ($\theta$-Al$_{2}$O$_{3}$). They synthesized the Pt/$\theta$-Al$_{2}$O$_{3}$ SAC (see Fig. \ref{fig:impregnation_HAADF-STEM} D) by mixing alumina in an aqueous solution of chloroplatinic acid, heating at mild temperature for 30 hours, and placing the mixture on a rotovap for water evaporation. 
The resulting free-flowing powder is kept in an alumina crucible, and pyrolysis is performed by ramping the temperature at 1 $^{\circ}$C/min to 450 $^{\circ}$C and holding for 4 hours to obtain the Pt/$\theta$-Al$_{2}$O$_{3}$ SAC. The HAADF-STEM images of the single-atom catalysts Pt/g-C$_{3}$N$_{4}$\cite{Pt-G-C3N4}, Pt/TiN\cite{Pt/TiN}, Pt/LSC\cite{choi2016tuning} and Pt/$\theta-$Al$_{2}$O$_{3}$\cite{Pt/Al2O3}, with 0.16 wt\%, 0.35 wt\%, 5 wt\% and 0.18 wt\% loading, respectively, are presented in Fig. \ref{fig:impregnation_HAADF-STEM}. It is challenging to produce uniformly distributed and highly loaded SACs with this method because it depends on the ability of the support to adsorb the metal atoms, i.e., the loading and distribution depend on the number of anchoring sites present on the surface of the support. \subsection{Co-precipitation Method} Co-precipitation is a convenient, cost-effective, and less time-consuming method for the synthesis of nanoparticles. This method is slightly different from the impregnation method; here, metal atoms are incorporated into the interstitial sites of the support, not merely distributed on its surface. In this method, anionic and cationic solutions are mixed, and nucleation, growth, coarsening, and/or agglomeration processes start simultaneously. After agglomeration, three more steps follow, i.e., precipitation, filtration, and calcination, and finally the nanoparticles are obtained. Recently, Zhang's research group reported that they were the first to fabricate a SAC containing isolated Pt atoms uniformly dispersed on an iron oxide (FeO$_{x}$) support using the co-precipitation method\cite{Pt/FeOx,Pt/FeOx_No_reduction}. Two samples of Pt$_{1}$/FeO$_{x}$ (see Fig. 
\ref{fig:Coprecipitation-HAAD-STEM} A) were prepared, with 0.17 wt\% and 2.5 wt\% loading, using an aqueous solution of chloroplatinic acid (H$_{2}$PtCl$_{6}$.6H$_{2}$O) and ferric nitrate Fe(NO$_{3}$)$_{3}$.9H$_{2}$O with sodium carbonate (Na$_{2}$CO$_{3}$) as the precipitation agent at 50 $^{\circ}$C, with the pH value maintained around 8. The resulting sample was dried at 60 $^{\circ}$C for five hours and calcined at 400 $^{\circ}$C for five hours. Furthermore, the samples were reduced at 200 $^{\circ}$C for half an hour in a 10\% H$_{2}$/He flow. They also reported that at the low Pt loading of 0.17 wt\%, uniformly dispersed isolated Pt atoms on the FeO$_{x}$ support can be seen in HAADF images, whereas at 2.5 wt\%, a mixture of isolated Pt atoms, two-dimensional Pt structures, and Pt clusters is observed. This SAC shows excellent activity and stability for CO oxidation and NO reduction. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.39]{Coprecipitation} \par\end{centering} \caption{\label{fig:Coprecipitation-HAAD-STEM}HAADF-STEM images of different single-atom catalysts synthesized using the co-precipitation method: (a) Pt$_{1}$/FeO$_{x}$, (b) Ir$_{1}$/FeO$_{x}$, (c) Au$_{1}$/CeO$_{2}$, (d) Au$_{1}$/FeO$_{x}$. (a) reprinted/reproduced from Ref. {[}16{]} with the permission of the Nature publishing group, copyright 2011; (b) reprinted/reproduced from Ref. {[}19{]} with the permission of the American Chemical Society publishing group, copyright 2013; (c) reprinted/reproduced from Ref. {[}20{]} with the permission of the American Chemical Society publishing group, copyright 2015; (d) reprinted/reproduced from Ref. {[}22{]} with the permission of Tsinghua University Press and the Springer publishing group, copyright 2015.} \end{figure} In addition, Zhang and co-workers have used the co-precipitation method to synthesize a series of SACs such as Ir$_{1}$/FeO$_{x}$\cite{Ir1/FeOx} (see Fig. \ref{fig:Coprecipitation-HAAD-STEM} B), Au$_{1}$/CeO$_{2}$\cite{Au/CeO2} (see Fig. 
\ref{fig:Coprecipitation-HAAD-STEM} C), Au$_{1}$/Co$_{3}$O$_{4}$\cite{Au1/Co3O4}, Au$_{1}$/FeO$_{x}$\cite{Au1/FeOx} (see Fig. \ref{fig:Coprecipitation-HAAD-STEM} D), and Pd/FeO$_{x}$\cite{Pd/FeOx}, which exhibit excellent activity and stability for water-gas shift reactions and CO oxidation. Xing et al.\cite{Pt/TiO2_Xin} prepared single-atom photocatalysts containing isolated metal atoms (Pt, Pd, Ru and Rh) uniformly dispersed on a titanium oxide (TiO$_{2}$) support using the co-precipitation method and studied their activity and stability for the water-splitting reaction. They synthesized four samples of Pt/TiO$_{2}$ with different metal loadings, 0.2 wt\%, 0.5 wt\%, 2 wt\%, and 1Pt/TiO$_{2}$(PD) (purely photo-deposited 1 wt\% Pt nanoparticles), and found that the H$_{2}$ evolution rate for 0.2-Pt/TiO$_{2}$ is 169.6 $\mu$mol/h, which is 23, 57, and 136 times more than that of 1Pt/TiO$_{2}$(PD), 0.5-Pt/TiO$_{2}$ and 2-Pt/TiO$_{2}$, respectively. The H$_{2}$ evolution rates for Pd, Ru, and Rh nanoparticles supported on TiO$_{2}$ are 7, 7 and 13 times less than that of 0.2-Pt/TiO$_{2}$, respectively. The HAADF-STEM images of the single-atom catalysts Pt$_{1}$/FeO$_{x}$\cite{Pt/FeOx}, Ir$_{1}$/FeO$_{x}$\cite{Ir1/FeOx}, Au$_{1}$/CeO$_{2}$\cite{Au/CeO2}, and Au$_{1}$/FeO$_{x}$\cite{Au1/FeOx}, with 0.17 wt\%, 0.01 wt\%, 0.05 wt\% and 0.03 wt\% loading, respectively, are presented in Fig. \ref{fig:Coprecipitation-HAAD-STEM}. The advantages of this method are that it is simple, rapid, and energy-efficient, it allows easy control of the particle size and composition of the final product, and it does not need organic solvents. Its disadvantages are that it does not apply to uncharged species, traces of impurities may also be precipitated, it suffers from reproducibility problems, and it does not work well if the reactants have very different precipitation rates. 
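As a quick sanity check, the enhancement factors quoted above imply absolute H$_{2}$ evolution rates for the other Pt/TiO$_{2}$ samples; the arithmetic can be sketched as follows (only the rate and the factors are taken from the text, the rest is illustrative):

```python
# Implied H2 evolution rates of the Pt/TiO2 samples, derived from the
# reported rate of the best sample and the quoted enhancement factors.
best_rate = 169.6  # umol/h for 0.2-Pt/TiO2 (value quoted in the text)

# Factor by which 0.2-Pt/TiO2 outperforms each other sample.
factors = {"1Pt/TiO2(PD)": 23, "0.5-Pt/TiO2": 57, "2-Pt/TiO2": 136}

# Implied absolute rate of each sample in umol/h.
implied = {name: best_rate / f for name, f in factors.items()}

for name, rate in implied.items():
    print(f"{name}: ~{rate:.1f} umol/h")
```

This places even the best nanoparticle reference sample below 8 $\mu$mol/h, underscoring how strongly the isolated-atom dispersion at low loading dominates the photocatalytic rate.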
\subsection{Other Wet-Chemical Synthesis Methods} Impregnation and co-precipitation are the traditional wet-chemical synthesis methods, but Liu et al.\cite{Pd/TiO2_liu2016photochemical} used a unique wet-chemical synthesis method for the fabrication of a single-atom Pd$_{1}$/TiO$_{2}$ catalyst (see Fig. \ref{fig:wet-chemical-synthesis} A) with a high metal loading of up to 1.5 wt\%. They dispersed a solution of H$_{2}$PdCl$_{4}$ on the surface of the TiO$_{2}$ support, and the resulting mixture was exposed to UV rays for 10 min. After that, the irradiated sample was washed with water, and the single-atom Pd$_{1}$/TiO$_{2}$ catalyst was obtained. From transmission electron microscopy (TEM) images and extended X-ray absorption fine structure (EXAFS) spectra, it was concluded that the formation of Pd clusters or nanoparticles does not occur. This catalyst exhibits very high catalytic activity and stability for the hydrogenation of C=C and C=O bonds. Recently, Li et al.\cite{Pt/MOS2_li2018synergetic} synthesized a single-atom Pt/MoS$_{2}$ catalyst (see Fig. \ref{fig:wet-chemical-synthesis} B) by injecting a solution of K$_{2}$PtCl$_{6}$ with a syringe pump into a mixture of MoS$_{2}$ nanosheets, ethanol, and water. During the chemisorption process, Mo atoms are replaced by Pt atoms in the MoS$_{2}$ nanosheets. Pt/MoS$_{2}$ catalysts with different Pt loadings of 0.2, 1.0, 5.0, and 7.5 wt\% were synthesized by changing the concentration of K$_{2}$PtCl$_{6}$, and EXAFS spectra of all these catalysts confirmed that only isolated Pt is present on the surface of MoS$_{2}$. The researchers also investigated its catalytic activity for the conversion of CO$_{2}$ into methanol without the formation of formic acid. The HAADF-STEM images of the single-atom catalysts Pd$_{1}$/TiO$_{2}$\cite{Pd/TiO2_liu2016photochemical} and Pt$_{1}$/MoS$_{2}$\cite{Pt/MOS2_li2018synergetic}, with 1.5 wt\% and 0.2 wt\% loading, respectively, are presented in Fig. \ref{fig:wet-chemical-synthesis}. 
\begin{figure}[H] \begin{centering} \includegraphics[scale=0.35]{wet_chemical} \par\end{centering} \caption{\label{fig:wet-chemical-synthesis}HAADF-STEM images of different single-atom catalysts synthesized using other wet-chemical methods: (a) Pd$_{1}$/TiO$_{2}$, (b) Pt$_{1}$/MoS$_{2}$. (a) reprinted/reproduced from Ref. {[}24{]} with the permission of the Science publishing group, copyright 2016; (b) reprinted/reproduced from Ref. {[}26{]} with the permission of Macmillan Publishers Limited, part of Springer Nature, copyright 2018.} \end{figure} \subsection{Atomic Layer Deposition Method} Atomic layer deposition (ALD) is a subclass of chemical vapor deposition that attracts many researchers because of its ability to deposit noble metal atoms and their oxides uniformly, with a desirable thickness, on a substrate by using sequential and self-limiting surface reactions\cite{ALD_Cheng,ALD_george,ALD_neill,ALD_Lu,ALD_Liu}. Generally, two precursors are used in this method, and the deposition process involves four steps\cite{ALD_cheng_nano_energy}: (1) initially, the first precursor is inserted into the chamber and allowed to react with the substrate; (2) the reaction chamber is purged with a carrier gas; (3) the second precursor is inserted into the reaction chamber and allowed to react with the substrate carrying the first precursor; (4) finally, the reaction chamber is purged again, and the sample is obtained. By repeating these cycles, the desired thickness of deposited material can be achieved. Sun et al.\cite{ALD_Sun} synthesized heterogeneous catalysts consisting of isolated Pt atoms, Pt clusters, and Pt nanoparticles dispersed on the surface of graphene nanosheets (GNS) using the ALD method, and reported that these novel catalysts show remarkable catalytic activity for methanol oxidation, almost ten times higher than the commercial carbon-supported Pt (Pt/C) catalyst. 
For the synthesis of the Pt/GNS catalyst, (methylcyclopentadienyl)trimethylplatinum (MeCpPtMe$_{3}$, 98\% purity) and oxygen (99.9995\%) were used as precursors, and nitrogen (99.9995\%) was used as the purge and carrier gas. The HAADF-STEM images of Pt/GNS catalysts synthesized with 50, 100, and 150 ALD cycles reveal that isolated Pt atoms and small clusters (<1 nm) are present in 50ALD-Pt/GNS (see Fig. \ref{fig:ALD_HAAD_STEM} A), whereas in 100ALD-Pt/GNS and 150ALD-Pt/GNS the cluster size approaches 2 nm and 4 nm, respectively. Recently, Cheng et al.\cite{Pt/N-GNS_ALD_HER_cheng} synthesized a Pt/N-GNS SAC by the same ALD technique discussed above, in which isolated Pt atoms are uniformly dispersed on the surface of nitrogen-doped graphene nanosheets, and investigated its activity for the hydrogen evolution reaction. They reported that Pt/NGNs exhibit enhanced catalytic activity ($\approx$ 37 times more than Pt/C) and high stability. Pt loadings of 2.1 and 7.6 wt\% for 50 and 100 ALD cycles, respectively, were confirmed by inductively coupled plasma atomic emission spectroscopy. Similarly to the above, bright spots of isolated Pt atoms, as well as tiny Pt clusters, are observed in 50ALD-Pt/NGNs (see Fig. \ref{fig:ALD_HAAD_STEM} B), whereas in 100ALD-Pt/NGNs the Pt clusters become larger and the formation of nanoparticles, as well as new clusters, is observed. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{ALD} \par\end{centering} \caption{\label{fig:ALD_HAAD_STEM}HAADF-STEM images of different single-atom catalysts synthesized using the atomic layer deposition method: (a) Pt/GNS, (b) Pt/N-GNS, (c) Pt$_{1}$/CeO$_{2}$, (d) Pd/GNS. (a) reprinted/reproduced from Ref. {[}27{]} with the permission of the Nature publishing group, copyright 2013; (b) reprinted/reproduced from Ref. {[}53{]} with the permission of the Nature publishing group, copyright 2016; (c) reprinted/reproduced from Ref. 
{[}36{]} with the permission of the American Chemical Society publishing group, copyright 2016; (d) reprinted/reproduced from Ref. {[}28{]} with the permission of the American Chemical Society publishing group, copyright 2015.} \end{figure} Lu and co-workers\cite{Pd_graphene_ALD_yan} used the ALD technique for the preparation of a single-atom Pd$_{1}$/graphene catalyst (see Fig. \ref{fig:ALD_HAAD_STEM} D); palladium hexafluoroacetylacetate (Pd(hfac)$_{2}$, Sigma Aldrich, 99\%) and formalin (Aldrich, 37\% HCHO and 15\% CH$_{3}$OH in aqueous solution) were used as precursors, and N$_{2}$ (99.999\% purity) as the carrier and purge gas. The researchers explored the hydrogenation of 1,3-butadiene using the Pd$_{1}$/graphene SAC and observed excellent durability against catalytic deactivation and remarkable catalytic performance, i.e., 100\% butenes selectivity and 95\% conversion at 50 $^{\circ}$C. Wang et al.\cite{Pt/CeO2_CO_oxidation_ALD_wang} from the Lu group synthesized a single-atom Pt$_{1}$/CeO$_{2}$ catalyst (see Fig. \ref{fig:ALD_HAAD_STEM} C) and studied its activity in water-promoted CO oxidation, reporting that the contribution of water to CO$_{2}$ production using Pt$_{1}$/CeO$_{2}$ is 50\%, via a water-mediated Mars-van Krevelen (MvK) mechanism. Piernavieja-Hermida et al.\cite{Pd/Al2O3_TiO2_ALD_Piernavieja-Hermida} developed an interesting way to stabilize single Pd atoms on the surface of Al$_{2}$O$_{3}$ by depositing an ultra-thin protective coating of TiO$_{2}$. First, the Pd(hfac)$_{2}$ precursor is allowed to chemisorb on the surface of Al$_{2}$O$_{3}$ using ALD; after that, TiO$_{2}$ is deposited on the substrate using titanium tetrachloride and deionized water. The TiO$_{2}$ selectively grows on the substrate, not on the Pd(hfac)$_{2}$, because the remaining (hfac) ligands prevent its growth on Pd. 
The bulky structure of the (hfac) ligands forms a nanocavity of the same dimensions around the Pd atoms, and at last these ligands are removed using formalin (HCHO) to obtain the TiO$_{2}$-protected Pd/Al$_{2}$O$_{3}$ catalyst. They also reported that the thermal stability of this catalyst is significantly increased because of the nanocavity thin-film structure. The HAADF-STEM images of the single-atom catalysts Pt/GNS\cite{ALD_Sun}, Pt/N-GNS\cite{Pt/N-GNS_ALD_HER_cheng}, Pt$_{1}$/CeO$_{2}$\cite{Pt/CeO2_CO_oxidation_ALD_wang}, and Pd/GNS\cite{Pd_graphene_ALD_yan}, with 1.52 wt\%, 2.1 wt\%, 0.22 wt\% and 0.25 wt\% loading, respectively, are presented in Fig. \ref{fig:ALD_HAAD_STEM}. The major disadvantages of the ALD technique are that it is time-consuming and that ALD instruments and their running costs are very expensive. \subsection{Metal-Organic Framework-Derived Method} Metal-organic frameworks (MOFs)\cite{MOF_zhang2016efficient,MOF_wang2017uncoordinated,MOF_he2018zirconium} are porous compounds in which metal ions or clusters are attached to organic ligands to form 1-, 2- or 3-dimensional structures, and they can be used as precursors or as supports in the synthesis of SACs. Unique characteristics of MOFs, such as a high surface area and an ordered pore structure with uniform sizes, make them ideal substrates for the loading of single atoms. In the synthesis of SACs, MOFs are emerging as a new research frontier for the following reasons: (1) the tunable pore size enables MOFs to encapsulate metal precursors and prevent them from agglomerating; (2) the high surface area of MOFs provides a large number of anchoring sites for the dispersion of metal precursors; (3) a variety of organic ligands serve as active anchoring sites for various precursors; (4) using the pyrolysis method, various MOFs can easily be converted into N-doped carbon materials, which act as ideal substrates for the dispersion and stabilization of metal precursors. 
\begin{figure}[H] \begin{centering} \includegraphics[scale=0.4]{MOF} \par\end{centering} \caption{\label{fig:MOFs_HAAD_STEM-1}HAADF-STEM images of different single-atom catalysts synthesized using the metal-organic frameworks method: (a) Co SAs/N-C, (b) Fe SAs/N-C, (c) Ni SAs/N-C, (d) Ru SAs/N-C. (a) reprinted/reproduced from Ref. {[}29{]} with the permission of the Wiley-VCH publishing group, copyright 2016; (b) reprinted/reproduced from Ref. {[}30{]} with the permission of the Wiley-VCH publishing group, copyright 2017; (c) reprinted/reproduced from Ref. {[}31{]} with the permission of the American Chemical Society publishing group, copyright 2017; (d) reprinted/reproduced from Ref. {[}32{]} with the permission of the American Chemical Society publishing group, copyright 2017.} \end{figure} Yin et al.\cite{BMOF_yin2016single} from the Li group synthesized a single-atom Co/N-C catalyst (see Fig. \ref{fig:MOFs_HAAD_STEM-1} A) with a high metal loading of up to 4 wt\%, which consists of Co atoms dispersed on the surface of nitrogen-doped porous carbon, and investigated its activity for the oxygen reduction reaction. By performing pyrolysis of a bimetallic Zn/Co metal-organic framework (BMOF) at 800 $^{\circ}$C, the Co and Zn ions are reduced by carbonization of the organic ligands; the Zn subsequently evaporates because of its low boiling point, and the single-atom Co/N-C catalyst is obtained. Wang et al.\cite{BMOF_wang2018regulation} from the same group reported that the coordination number of the Co atom can be controlled by changing the pyrolysis temperature; for example, they fabricated three single-atom catalysts, Co-N$_{4}$, Co-N$_{3}$ and Co-N$_{2}$, by keeping the pyrolysis temperature at 800, 900 and 1000 $^{\circ}$C, respectively. Chen et al.\cite{Fe_ZIF-8_chen2017isolated}, also from the Li group, synthesized an isolated-Fe-atom catalyst supported on nitrogen-doped porous carbon (Fe SAs/N-C) (see Fig. 
\ref{fig:MOFs_HAAD_STEM-1} B) with a metal loading of up to 2.16 wt\% and reported its excellent activity for the oxygen reduction reaction compared to Pt/C and most non-precious-metal catalysts. They mixed Fe(acac)$_{3}$ and zeolitic imidazolate frameworks (ZIF-8) and used an encapsulated-precursor pyrolysis technique for the synthesis of the Fe/N-C catalyst. The molecular-scale cage structure of ZIF-8, formed by the assembly of Zn$^{2+}$ and 2-methylimidazole, traps one Fe(acac)$_{3}$ molecule. After that, pyrolysis of the resulting mixture at 900 $^{\circ}$C under Ar gas converts ZIF-8 into nitrogen-doped porous carbon, while Fe(acac)$_{3}$ is simultaneously reduced by the carbonized organic ligands, and the Fe SAs/N-C catalyst is obtained. Zhao et al.\cite{Ni-ZIF-8_zhao2017ionic} prepared isolated Ni atoms dispersed on nitrogen-doped porous carbon (Ni SAs/N-C) (see Fig. \ref{fig:MOFs_HAAD_STEM-1} C) with a metal loading of up to 1.53 wt\% and investigated its activity for the electroreduction of CO$_{2}$. A homogeneous aqueous solution of Ni(NO$_{3}$)$_{2}$ was mixed with a solution containing ZIF-8 powder dispersed in n-hexane and actively stirred for 3 hours so that the salt was completely absorbed; the resulting sample was centrifuged and dried at 65 $^{\circ}$C for 6 hours. After that, pyrolysis of the sample at 1000 $^{\circ}$C was performed in the presence of Ar gas, during which the ZIF-8 was converted into nitrogen-doped porous carbon while the Zn atoms evaporated due to their low boiling point, creating nitrogen-rich sites. These sites are occupied by Ni$^{2+}$ and act as a fence that prevents the Ni atoms from agglomerating; finally, the Ni SAs/N-C catalyst is obtained. Wang et al.\cite{MOF_wang2017uncoordinated} synthesized a Ru SAs/N-C catalyst (see Fig. \ref{fig:MOFs_HAAD_STEM-1} D), which contains single Ru atoms dispersed on nitrogen-doped porous carbon with a metal loading of 0.30 wt\%, and reported that it exhibits high catalytic activity and selectivity for the hydrogenation of quinolines. 
They used the amine-derivative MOF UiO-66-NH$_{2}$ (Zr$_{6}$O$_{4}$(OH)$_{4}$(BDC)$_{6}$-NH$_{2}$) for synthesizing the Ru SAs/N-C catalyst. First, they mixed RuCl$_{3}$, ZrCl$_{4}$ and H$_{2}$BDC-NH$_{2}$ with an aqueous solution of DMF and HAs. After that, the resulting mixture was centrifuged, washed with methanol and DMF, and then heated at 700 $^{\circ}$C for 3 hours in the presence of Ar gas; a black powder containing nitrogen-doped porous carbon (N-C) decorated with Ru and small ZrO$_{2}$ species was obtained. The small ZrO$_{2}$ species attached to the N-C were etched away by adding HF solution, and Ru SAs/N-C was formed. Wei et al.\cite{Pd_Pt_Au_nano_particles_ZIF-8_wei2018direct} synthesized a catalyst containing single Pd atoms supported on nitrogen-doped porous carbon from Pd nanoparticles by employing a top-down method, and reported its excellent catalytic activity and selectivity for the semi-hydrogenation of acetylene to ethylene. First, ZIF-8 nanocrystals were grown on the surface of the Pd nanoparticles by mixing the Pd nanoparticles in an aqueous solution of Zn(NO$_{3}$)$_{2}$ and 2-methylimidazole. After that, the resulting mixture was heated at 900 $^{\circ}$C in the presence of an inert gas for 3 hours; the Pd nanoparticles were transformed into single atoms and distributed within the substrate, while ZIF-8 was converted into nitrogen-doped porous carbon. Finally, a single-atom Pd SAs/N-C catalyst was obtained having a thermodynamically stable Pd-N$_{4}$ structure. Using the same technique, they also synthesized Pt SAs/N-C and Au SAs/N-C catalysts. Using the ZIF-8 MOF and the pyrolysis method, Yang et al.\cite{Ni-ZIF-8_nanoparticles_yang2018situ} synthesized a Ni SAs/N-C catalyst by transforming Ni nanoparticles into single Ni atoms, mostly dispersed on the surface of an N-doped porous carbon substrate, and tested its activity and selectivity for the electroreduction of CO$_{2}$. 
Recently, Zhang et al.\cite{Fe/N-C_zhang2019single} prepared an Fe$_{1}$-N-C SAC using a porphyrinic MOF (PCN-222); the catalyst contains isolated Fe atoms dispersed on the surface of a nitrogen-doped porous carbon substrate, and its reported activity for the nitrogen reduction reaction is better than that of Co$_{1}$-N-C and Ni$_{1}$-N-C. Initially, they synthesized Fe-TPPCOOMeCl (iron(III) meso-tetra(4-methoxycarbonylphenyl)porphine chloride) by dissolving TPPCOOMe and FeCl$_{2}$.4H$_{2}$O in a DMF solution and heating for 6 hours at 160 $^{\circ}$C. Then Fe-TCPP was obtained by mixing Fe-TPPCOOMeCl with THF, MeOH and KOH, and heating for 6 hours at 85 $^{\circ}$C. After that, Fe-TCPP, ZrOCl$_{2}$.8H$_{2}$O, H$_{2}$-TCPP, DMF and CF$_{3}$COOH were mixed and heated for 18 hours at 120 $^{\circ}$C to form PCN-222(Fe). At last, pyrolysis of the PCN-222(Fe) sample was performed at 800 $^{\circ}$C for 3 hours in the presence of N$_{2}$ gas, and the Fe$_{1}$-N-C catalyst was obtained. The HAADF-STEM images of the single-atom catalysts Co SAs/N-C\cite{BMOF_yin2016single}, Fe SAs/N-C\cite{Fe_ZIF-8_chen2017isolated}, Ni SAs/N-C\cite{Ni-ZIF-8_zhao2017ionic}, and Ru SAs/N-C\cite{MOF_wang2017uncoordinated}, with 4 wt\%, 2.16 wt\%, 1.53 wt\% and 0.30 wt\% loading, respectively, are presented in Fig. \ref{fig:MOFs_HAAD_STEM-1}. \section{\label{sec:Application-of-SAC}Applications of Single-Atom Catalysts} In recent years, researchers have reported the synthesis and catalytic behavior of many SACs. They found that these SACs show high catalytic activity, selectivity, and stability because of the maximum utilization of the single atoms (almost 100\% utilization) during reactions and the strong bonding between the single atoms and the anchoring sites on the support surfaces. 
Therefore, the applications of many SACs in different catalytic reactions, such as CO oxidation, the water-gas shift reaction, water splitting, the oxygen reduction reaction, methanol oxidation, C-H activation, the hydrogen evolution reaction, carbon dioxide reduction, and hydrogenation reactions, are discussed below. \subsection{CO Oxidation Reaction} In the field of catalysis science, CO oxidation is one of the most studied reactions because of its importance in protecting our environment by purifying the poisonous exhaust gases coming from motor vehicles and various industries\cite{CO_Oxidation_gardner1991comparison,CO_Oxidation_haruta1997size}. Moreover, CO oxidation is the most crucial step in the water-gas shift reaction\cite{WGS_gokhale2008mechanism,WGS_lin2013remarkable} and in fuel-cell applications for eliminating CO from reforming gas. Zhang and co-workers\cite{Pt/FeOx} were the first to investigate, experimentally and theoretically, the catalytic activity, selectivity and stability of the single-atom Pt$_{1}$/FeO$_{x}$ catalyst for CO oxidation, using relativistic density functional theory for the theoretical investigation. For the computation, they used Fe- and O$_{3}$-terminated Fe$_{2}$O$_{3}$ (011) surfaces, and after optimization found that the most likely position of the Pt atom is the 3-fold hollow site on the O$_{3}$-terminated surface, where the Pt atom is linked with three oxygen atoms. From HAADF images, they observed that a single Pt atom exactly replaces a single Fe atom located at a 3-fold hollow site of the O$_{3}$-terminated surface. Before testing the catalytic performance, the Pt$_{1}$/FeO$_{x}$ catalyst was reduced by flowing H$_{2}$/He gas for 30 min at 200 $^{\circ}$C. The oxidation of CO on the surface of Pt$_{1}$/FeO$_{x}$ follows the Langmuir-Hinshelwood (L-H) mechanism, and the step-by-step reaction mechanism is shown in Fig. \ref{fig:Co-oxidation-Pt-FeOx}. 
After prereduction by H$_{2}$, an oxygen vacancy (O$_{vac}$) near the Pt atom is created by partially reducing the stoichiometric hematite surface (step i), which provides an active site for adsorption of an O$_{2}$ molecule. A similar theoretical model was designed for computation by removing one oxygen atom connected to the Pt atom, which reduces the oxygen coordination number of the Pt atom from 3 to 2. In step ii, an O$_{2}$ molecule is adsorbed with an adsorption energy of 1.05 eV, and the optimized O-O bond length signifies that it is well activated by the Pt atom and O$_{vac}$. Next, in step iii, a CO molecule is adsorbed on the Pt$_{1}$ atom with a binding energy of 1.27 eV; one oxygen atom of the O$_{2}$ molecule comes nearer to the CO molecule and forms the first transition state (TS-1). The activation energy needed for the reaction ($CO_{ad}+O-O_{ad}\rightarrow CO_{2}+O_{ad}$) is 0.49 eV, and after release of the first CO$_{2}$ molecule, the remaining O$_{ad}$ atom restores the Pt-loaded stoichiometric hematite surface in step iv. In step v, another CO molecule is adsorbed at the Pt atom; it migrates to the O$_{ad}$ atom in step vi and forms the second transition state (TS-2). The activation energy needed for this second reaction is 0.79 eV. After release of the second CO$_{2}$ molecule, the Pt-loaded stoichiometric hematite surface is reduced again to create a new oxygen vacancy near the Pt atom, returning to the initial step i. All of the catalytic steps are exothermic, and the small activation energies for CO$_{2}$ formation at low temperature indicate that the catalytic activity of Pt$_{1}$/FeO$_{x}$ for CO oxidation is very high. \begin{figure} \begin{centering} \includegraphics[scale=0.5]{CO_oxidation_mechanism} \par\end{centering} \caption{\label{fig:Co-oxidation-Pt-FeOx}(a) Top and (b) side views of the proposed reaction pathway for CO oxidation on Pt$_{1}$/FeO$_{x}$. The calculated relative energies of the proposed reaction pathway are presented in circles.
(Reprinted/reproduced from Ref. {[}16{]} with the permission of Nature Publishing Group, copyright 2011).} \end{figure} Liang $et$ $al.$ and Qiao $et$ $al.$ investigated, experimentally as well as theoretically, the catalytic activity of Ir$_{1}$/FeO$_{x}$\cite{Ir1/FeOx_CO_liang2014theoretical} and Au$_{1}$/FeO$_{x}$\cite{Au1/FeOx} for CO oxidation, respectively. Using DFT, Liang $et$ $al.$ explored the catalytic activity of a non-noble-metal single-atom catalyst (i.e., Ni$_{1}$/FeO$_{x}$\cite{Ni1/FeOx_CO_liang2016theoretical}) and also systematically compared the catalytic activities of Pt$_{1}$/FeO$_{x}$, Ir$_{1}$/FeO$_{x}$ and Ni$_{1}$/FeO$_{x}$. The O$_{2}$ molecule adsorbs on the surfaces of these SACs in different manners: in the cases of Pt$_{1}$/FeO$_{x}$ and Ni$_{1}$/FeO$_{x}$ it adsorbs on top of the Pt and Ni atoms, respectively, whereas in the case of Ir$_{1}$/FeO$_{x}$ it adsorbs dissociatively, i.e., one O atom on top of Ir and the other O atom occupying the oxygen vacancy site. The activation energy needed for the formation of the first CO$_{2}$ (TS-1) over the Pt$_{1}$/FeO$_{x}$, Ir$_{1}$/FeO$_{x}$ and Ni$_{1}$/FeO$_{x}$ catalysts is 0.49 eV, 0.59 eV and 0.75 eV, respectively, whereas for the formation of the second CO$_{2}$ (TS-2) the activation barrier is 0.79 eV, 1.41 eV and 0.64 eV, respectively. The barrier of the rate-determining step for the Ni$_{1}$/FeO$_{x}$ catalyst (0.75 eV) is the lowest compared to Pt$_{1}$/FeO$_{x}$ (0.79 eV) and Ir$_{1}$/FeO$_{x}$ (1.41 eV), suggesting that it exhibits the highest catalytic activity for CO oxidation at room temperature. Using experimental and theoretical methods, Moses-DeBusk $et$ $al.$\cite{Pt/Al2O3} examined the catalytic activity of single Pt atoms dispersed on an inert substrate, $\theta$-Al$_{2}$O$_{3}$, for CO oxidation in the presence of stoichiometric oxygen.
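To put these barrier comparisons in perspective, a simple Arrhenius estimate translates the rate-determining barriers quoted above into approximate relative rates at room temperature. The short script below is an illustrative sketch only: it assumes identical pre-exponential factors for all three catalysts, which is a rough simplification.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_factor(ea_ev, temperature=300.0):
    """exp(-Ea / kB*T); the pre-exponential factor is assumed
    equal for all catalysts (an illustrative simplification)."""
    return math.exp(-ea_ev / (K_B * temperature))

# Rate-determining barriers reported for CO oxidation (eV)
barriers = {"Ni1/FeOx": 0.75, "Pt1/FeOx": 0.79, "Ir1/FeOx": 1.41}
rates = {name: arrhenius_factor(ea) for name, ea in barriers.items()}

# The 0.66 eV barrier difference dominates everything else at 300 K
ni_over_ir = rates["Ni1/FeOx"] / rates["Ir1/FeOx"]
print(f"Ni1/FeOx vs Ir1/FeOx relative rate at 300 K: {ni_over_ir:.2e}")
```

Even the modest 0.04 eV difference between the Ni$_{1}$ and Pt$_{1}$ barriers corresponds to roughly a factor of five in rate at 300 K under these assumptions, which illustrates why the rate-determining barrier dominates the activity comparison.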
Moses-DeBusk $et$ $al.$ reported that the proposed pathway of CO oxidation is slightly different from the conventional Langmuir-Hinshelwood mechanism, because the conventional mechanism requires at least one Pt-Pt bond. In search of non-precious and more efficient/active SACs for CO oxidation, Li $et$ $al.$\cite{M1/FeOx_li2014exploration} systematically studied the catalytic activity of various single-atom catalysts M$_{1}$/FeO$_{x}$ (M = Au, Rh, Pd, Co, Cu, Ru and Ti) by employing density functional theory. They reported that five SACs, namely Rh$_{1}$/FeO$_{x}$ and Pd$_{1}$/FeO$_{x}$ with an oxygen vacancy, Co$_{1}$/FeO$_{x}$ and Ti$_{1}$/FeO$_{x}$ without an oxygen vacancy, and Ru$_{1}$/FeO$_{x}$ with or without an oxygen vacancy, exhibit better catalytic activity than Pt$_{1}$/FeO$_{x}$. Furthermore, they also reported that the non-precious single-atom Co$_{1}$/FeO$_{x}$ and Ti$_{1}$/FeO$_{x}$ catalysts need very low activation energies for CO oxidation via the L-H mechanism. Using DFT calculations, Tang $et$ $al.$\cite{Pt/CeO2_tang2017theoretical} systematically studied the catalytic activity of single Pt atoms dispersed on the CeO$_{2}$ (111), (110) and (100) surfaces for CO oxidation via the Mars-van Krevelen mechanism. They reported that the single Pt atoms loaded on the ceria surfaces are thermodynamically stable, and the oxidation state of the Pt atom on the (111) and (100) surfaces is +4. In contrast, on the (110) surface the oxidation state is +2 due to the spontaneous formation of O$_{2}{}^{2-}$ species, which reduces the oxidation state of the Pt atom from +4 to +2, making the Pt$_{1}$@CeO$_{2}$ (110) catalyst the most stable. \subsection{Water-Gas Shift Reaction} The water-gas shift (WGS) reaction was discovered in 1780 by the Italian physicist Felice Fontana, but its importance to the industrial sector was realized much later. In this reaction, carbon monoxide and water vapor react to form carbon dioxide and hydrogen $(CO+H_{2}O\rightleftharpoons CO_{2}+H_{2})$.
WGS is a cost-effective and efficient method for the production of hydrogen. In industrial sectors, a large amount of hydrogen is needed for various processes, such as ammonia synthesis via the Haber-Bosch process, synthetic liquid fuel synthesis via the Fischer-Tropsch method, hydro-treating of petroleum products to remove CO contamination, the synthesis of nitrogenous fertilizers, the preparation of ethanol, methanol, and dimethyl ether, and the hydrogenation of hazardous wastes (PCBs and dioxins)\cite{H_app_ramachandran1998overview,WGS_aap_Ratnasamy}. Apart from this, hydrogen is considered one of the cleanest and most promising renewable energy sources for the future, because it can be stored and transported efficiently and, after burning, produces only water as a byproduct\cite{Hydrogen_pro_chen2010semiconductor,Hydrogen_Pro_levalley2014progress,Hydrogen_pro_pagliaro2010solar,Hydrogen_pro_Turner972}. Due to the high catalytic activity, selectivity, and efficiency of SACs, many researchers have investigated their catalytic properties for the WGS reaction\cite{Ir1/FeOx,WGS_review_flytzani2012atomically,wgs_thomas2011can,WGS_Au_flytzani2013gold,Au/CeO2_WGS_DFTsong2014mechanistic,Au-OHx/TiO2_WGSyang2013atomically}. Fu $et$ $al.$\cite{Au/Ceo_WGS_fu2005activity} synthesized low-content (0.2-0.9 wt\%) gold-cerium oxide catalysts and reported their high activity and stability for the WGS reaction. Yang $et$ $al.$ prepared a SAC consisting of isolated Au atoms dispersed on a titania support and reported that it exhibits excellent activity for the WGS reaction at low temperatures. They stabilized the Au atoms on the support by irradiating the titania support, suspended in an ethanol solution, with UV rays, whereby the gold atoms donate the separated electrons to $-$OH groups.
The Au atoms with surrounding extra surface $-$OH groups act as active sites for the WGS reaction, and the catalytic performance was reported to be better than that of Au/CeO$_{2}$\cite{Au/Ceo_WGS_fu2005activity,Au/Pt/ceria_WGSfu2003active}. The Flytzani-Stephanopoulos group prepared single-atom-centric Pt (Pt(II)$-$O(OH)$_{x}-$) and Au (Au$-$O(OH)$_{x}-$) sites, stabilized by sodium or potassium ions bonded through $-$O ligands, on three different supports, and examined their catalytic activity for the WGS reaction \cite{Pt(II)--O(OH)x_WGS_yang2015common,Au-O(OH)x_WGS_yang2014catalytically}. They found that the WGS reaction rate of the Pt(II)$-$O(OH)$_{x}-$ species is the same on all supports (i.e., anatase (TiO$_{2}$), a microporous K-type L-zeolite (KLTL) and mesoporous silica MCM-41 ({[}Si{]}MCM41)) for the Na-containing catalyst with 0.5 wt\% Pt loading\cite{Pt(II)--O(OH)x_WGS_yang2015common}. Similar to the finding for the single-site Pt(II)$-$O(OH)$_{x}-$ species, irrespective of the supports KLTL, {[}Si{]}MCM41, TiO$_{2}$, CeO$_{2}$, and Fe$_{2}$O$_{3}$, the reaction rate of the Au$-$O(OH)$_{x}-$ species is the same with 0.25 wt\%, 0.25 wt\%, 0.12 wt\%, 0.50 wt\% and 1.16 wt\% Au loading, respectively\cite{Au-O(OH)x_WGS_yang2014catalytically}. Lin $et$ $al.$\cite{Ir1/FeOx} synthesized a catalyst consisting of isolated Ir atoms loaded on an FeO$_{x}$ support and found that it shows remarkable performance for the WGS reaction. The catalytic activity of Ir$_{1}$/FeO$_{x}$ is higher than that of its cluster and nanoparticle counterparts, and also higher than that of Au- or Pt-based catalysts\cite{Au/Pt/ceria_WGSfu2003active}. After extensive research, they found that the single atoms are responsible for $\approx$70\% of the catalytic activity of the catalyst containing single atoms, clusters and nanoparticles. The Ir atom assists the reduction of the FeO$_{x}$ support to create oxygen vacancies, which enhances the catalytic activity of Ir$_{1}$/FeO$_{x}$.
In the literature, the WGS reaction has been found to mainly follow three reaction mechanisms, i.e., the redox, formate, and carboxyl mechanisms. Fu $et$ $al.$\cite{Au/Pt/ceria_WGSfu2003active} proposed that the nano-structured gold-ceria oxide catalyst follows the redox mechanism for the WGS reaction. In this mechanism, the CO molecule adsorbs on an Au atom and is oxidized with the help of an O atom of the ceria oxide; after that, the support is reoxidized by water, and hydrogen is released. Shido and Iwasawa\cite{Formate_WGW_SHIDO1992493,Formate_WGW_SHIDO199371} were the first to propose the formate mechanism for the WGS reaction. In this mechanism, CO and H$_{2}$O molecules adsorb on the surface of the support; next, the CO molecule interacts with a surface OH group to form a formate (HCOO) intermediate, which dissociates into a CO$_{2}$ molecule and an H atom, and finally, two H atoms recombine to form an H$_{2}$ molecule. Liu $et$ $al.$\cite{Carboxyl_PhysRevLett.94.196102} studied the catalytic activity of a gold-cluster-ceria oxide (Au$_{4-6}$/CeO$_{2}$) catalyst for the WGS reaction and found that it follows the carboxyl mechanism. In this mechanism, CO adsorbs on Au, and H$_{2}$O adsorbs dissociatively (as H and OH) on Au; the adsorbed CO$_{ad}$ interacts with OH$_{ad}$ to form a carboxyl (COOH) intermediate, after which COOH dissociates into a CO$_{2}$ molecule and an H atom, and at last, two H atoms recombine to form an H$_{2}$ molecule. Song $et$ $al.$\cite{Au/CeO2_WGS_DFTsong2014mechanistic}, by employing density functional theory, studied the reaction mechanisms of isolated and clustered Au atoms on the CeO$_{2}$ (110) surface for WGS activity, considering both the redox and carboxyl pathways.
The carboxyl mechanism is more favorable than the redox mechanism, because the redox mechanism requires higher energy for breaking the O$-$H bonds, which are directly involved in the production of H$_{2}$ and CO$_{2}$. Recently, Liang $et$ $al.$\cite{liang2020dual} studied the catalytic activity of the Ir$_{1}$/FeO$_{x}$ SAC for the WGS reaction by using theoretical and experimental methods. A schematic diagram of Ir$_{1}$/FeO$_{x}$ with an oxygen vacancy and the local atomic arrangement of the surface lattice oxygen atoms (red) around the Ir atom (blue) are presented in Fig. \ref{fig:Ir1FeOx} (a) and (b), respectively. \begin{figure} \begin{centering} \includegraphics[scale=0.4]{Ir1_FeOx_WGS} \par\end{centering} \caption{\label{fig:Ir1FeOx}(a) The top view of the SAC Ir$_{1}$/FeO$_{x}$ with an oxygen vacancy (O$_{vac}$) and (b) the local atomic arrangement of Ir$_{1}$/FeO$_{x}$$-$O$_{vac}$. The surface lattice oxygen atoms of the FeO$_{x}$ support are represented by O$_{A}$, O$_{B}$, O$_{C}$, O$_{D}$, O$_{E}$, and the oxygen vacancy (O$_{vac}$) is situated on the right side of the Ir atom. Ir atom = blue, oxygen atom = red, and Fe atom = purple. (Reprinted/reproduced from Ref. {[}42{]} with the permission of the Wiley-VCH group, copyright 2020).} \end{figure} The most favorable position for the Ir atom to stabilize on the surface of FeO$_{x}$ is the O$_{3}$-terminated surface, where the Ir atom is bonded with three oxygen atoms. The site structure of Ir$_{1}$/FeO$_{x}$ with and without an oxygen vacancy is the same, and the WGS reaction follows two different redox reaction pathways, I and II, shown in Fig. \ref{fig:WGS_reaction_pathways}. Let us consider reaction pathway I (Fig. \ref{fig:WGS_reaction_pathways} (b)): in step (i), the Ir atom is bonded with two oxygen atoms, and on the right side of the Ir atom there is an O$_{vac}$. Next, in step (ii), an H$_{2}$O molecule dissociates into H and OH, which adsorb on the O$_{D}$ atom (represented as H$_{a}$) and at the O$_{vac}$ site (represented as O$_{F}$ for the O atom and H$_{b}$ for the hydrogen), respectively.
The CO molecule (its O atom represented as O$_{G}$) is adsorbed on the Ir$_{1}$ atom in step (iii). The next step is TS1, where H$_{a}$ and H$_{b}$ directly combine to form H$_{2}$; this requires a high activation energy of 3.45 eV and is also the rate-determining step. Afterwards, the adsorbed CO molecule starts moving towards the O$_{F}$ atom in step (iv) and gradually approaches TS2. The energy barrier for the formation of CO$_{2}$ in TS2 is 1 eV. The newly formed bent CO$_{2}$ molecule, with a 140.7$^{\circ}$ angle, remains adsorbed on the Ir single atom in step (v). The bent CO$_{2}$ can be considered a virtual CO$_{2}^{-}$ anion; its adsorption energy on Ir$_{1}$/FeO$_{x}$ is 1.29 eV, and another intermediate step (vi) is required for relaxation. The bent CO$_{2}^{-}$ transforms into linear CO$_{2}$ by losing an electron in TS3, and the activation energy of TS3 is 0.59 eV. Finally, in step (vii), CO$_{2}$ desorbs from Ir$_{1}$/FeO$_{x}$, and the O$_{vac}$ is regenerated on the right side of the Ir atom. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.45]{WGS_mechanism} \par\end{centering} \caption{\label{fig:WGS_reaction_pathways}(a) The calculated relative energy diagram of the proposed reaction pathways I and II for the WGS reaction on the Ir$_{1}$/FeO$_{x}$ catalyst. The reaction steps and corresponding structures for (b) path I and (c) path II are shown. The rate-determining step with its energy barrier is demonstrated in a circle, and the green triangular region represents the active sites in the reaction. Ir atom = blue, oxygen atom = red, Fe atom = purple, C atom = pink and O atom of CO = dark green. (Reprinted/reproduced from Ref. {[}42{]} with the permission of the Wiley-VCH group, copyright 2020).} \end{figure} The redox reaction pathway II is presented in Fig. \ref{fig:WGS_reaction_pathways} (c); the reaction path up to step (iii) is the same as in pathway I.
The next step is TS1, which needs an activation energy of 1.52 eV for the adsorbed CO molecule to move slowly towards the adjacent O$_{A}$ on the left side of the Ir atom. The bent CO$_{2}$ (CO$_{2}^{-}$) molecule is formed in step (iv). Afterward, the bent CO$_{2}$ requires a small activation energy of 0.13 eV to desorb from the Ir$_{1}$/FeO$_{x}$ surface by releasing an electron in TS2. The O$_{vac}$ on the left side of the Ir atom is produced after the desorption of CO$_{2}$ in step (v). The H$_{b}$ atom of HO$_{F}$ starts moving slowly towards the Ir atom when a small activation energy of 0.28 eV is applied in TS3. In the intermediate step (vi), the H$_{a}$ and H$_{b}$ atoms are associated with the O$_{D}$ and Ir atoms, respectively. H$_{a}$ and H$_{b}$ approach each other in TS4 to form H$_{2}$ (H$^{*}{}_{a}$ + H$^{*}{}_{b}$$\rightarrow$H$^{*}{}_{2}$), and the energy barrier for this reaction is 1.42 eV. The obtained H$_{2}$ slowly migrates towards the Ir atom in step (vii). Next, the H$_{2}$ molecule desorbs from Ir$_{1}$/FeO$_{x}$ in TS5 with an energy barrier of 0.53 eV. Finally, after release of the H$_{2}$ molecule, the surface of the Ir$_{1}$/FeO$_{x}$ SAC is recovered, and an O$_{vac}$ is generated on the left side of the Ir atom. During the WGS reaction on the Ir$_{1}$/FeO$_{x}$ surface, the O$_{vac}$ thus shifts from the right side to the left side of the Ir atom. Comparing the pathways, H$_{2}$ is formed before CO$_{2}$ in pathway I, whereas in pathway II the order is reversed. The most favorable pathway for the WGS reaction on the Ir$_{1}$/FeO$_{x}$ surface is path II, as the energy barrier of its rate-determining step (1.52 eV) is much lower than that of path I (3.45 eV). Using Bader charge analysis, Liang $et$ $al.$\cite{liang2020dual} also reported the oxidation states of the Ir and Fe atoms for both pathways. In pathway I, only the Ir atom changes its oxidation state, whereas in pathway II both the Ir and Fe atoms change oxidation state.
In step (i) of pathway II, the oxidation states of the Ir and Fe$^{(a)}$ atoms are +3 and +2, respectively, whereas in the final step (viii) the oxidation state of the Ir atom decreases from +3 to +2 and the oxidation state of the Fe atom increases from +2 to +3. We can conclude that in WGS reaction pathway II, both the Ir and Fe atoms change their oxidation states. \section{\label{sec:Summary-and-Conclusions}Summary and Conclusions} In this review article, we presented recent advancements in the field of single-atom catalysis, with a focus on the various synthesis methods and their applications in various catalytic reactions, such as CO oxidation\cite{Pt/FeOx,Ir1/FeOx_CO_liang2014theoretical,Ni1/FeOx_CO_liang2016theoretical,Au1/FeOx,Pt/Al2O3,M1/FeOx_li2014exploration,Pt/CeO2_CO_oxidation_ALD_wang}, the water\textminus gas shift (WGS) reaction\cite{Ir1/FeOx,WGS_review_flytzani2012atomically,wgs_thomas2011can,WGS_Au_flytzani2013gold,Au/CeO2_WGS_DFTsong2014mechanistic,Au-OHx/TiO2_WGSyang2013atomically,liang2020dual}, etc. We also discussed the reaction mechanisms of single-atom catalysts for different catalytic reactions from a theoretical perspective using density functional theory. \section*{Acknowledgements} D.K.R. would like to thank Prof. Jun Li and Prof. Yang-Gang Wang for useful discussions and for the opportunity to write a review article on single-atom catalysts. The author also gratefully acknowledges financial support from the Southern University of Science and Technology (SUSTech) and computational resource support from the Center for Computational Science and Engineering at SUSTech.
\section{Introduction} \label{introduction} \vspace{-0.16cm} Deep metric learning (DML) aims to learn a non-linear embedding function (a.k.a. distance metric) such that the semantic similarities over samples are well captured in the feature space \citep{tadmor2016learning,sohn2016improved}. Due to its fundamental function of learning discriminative representations, DML has diverse applications, such as image retrieval \citep{song2016deep}, clustering \citep{song2017deep}, verification \citep{schroff2015facenet}, few-shot learning \citep{vinyals2016matching} and zero-shot learning \citep{bucher2016improving}. \vspace{-0.06cm} A key to DML is to design an effective and efficient loss function for supervising the learning process, and thus significant efforts have been made \citep{chopra2005learning,schroff2015facenet,sohn2016improved,song2016deep,song2017deep,law2017deep,wu2017sampling}. Some loss functions learn the embedding function from pairwise or triplet-wise relationship constraints \citep{chopra2005learning,schroff2015facenet,tadmor2016learning}. However, they are known to not only suffer from an increasing number of non-informative samples during training, but also consider only a few instances per loss computation. Therefore, informative sample mining strategies are proposed \citep{schroff2015facenet,wu2017sampling,wang2019deep}. Recently, several methods consider semantic relations among multiple examples to exploit their similarity structure \citep{sohn2016improved,song2016deep,song2017deep,law2017deep}. Consequently, these structured losses achieve better performance than pairwise and triplet-wise approaches. \vspace{-0.06cm} In this paper, we tackle the DML problem from a novel perspective. Specifically, we propose a novel loss function inspired by CCE.
CCE is well-known in classification problems owing to the fact that it has an intuitive probabilistic interpretation and achieves great performance, e.g., ImageNet classification \citep{russakovsky2015imagenet}. However, since CCE learns a decision function which predicts the class label of an input, it learns class-level centres for reference \citep{zhang2018heated,wang2017normface}. Therefore, CCE is not scalable to infinite classes and cannot generalise well when it is directly applied to DML \citep{law2017deep}. \vspace{-0.06cm} With scalability and structured information in mind, we introduce instance cross entropy (ICE) for DML. It learns an embedding function by minimising the cross entropy between a predicted instance-level matching distribution and its corresponding ground-truth. In comparison with CCE, given a query, CCE aims to maximise its \textit{matching probability with the class-level context vector} (weight vector) of its ground-truth class, whereas ICE aims at maximising its \textit{matching probability with its similar instances}. As ICE does not learn class-level context vectors, it is scalable to infinite training classes, which is an intrinsic demand of DML. Similar to \citep{sohn2016improved,song2016deep,song2017deep,law2017deep,goldberger2005neighbourhood,wu2018improving}, ICE is a structured loss as it also considers all other instances in the mini-batch of a given query. We illustrate ICE with comparison to other structured losses in Figure~\ref{fig:comparing_losses}. \vspace{-0.06cm} A common challenge of instance-based losses is that many training examples become trivial as the model improves. Therefore, we integrate seamless sample reweighting into ICE, which functions similarly to various sample mining schemes \citep{sohn2016improved,schroff2015facenet,shi2016embedding,yuan2017hard,wu2017sampling}.
Existing mining methods require either a separate time-consuming process, e.g., class mining \citep{sohn2016improved}, or distance thresholds for data pruning \citep{schroff2015facenet,shi2016embedding,yuan2017hard,wu2017sampling}. Instead, our reweighting scheme works without explicit data truncation and mining. It is motivated by the relative weight analysis between two examples. The current common practice of DML is to learn an angular embedding space by projecting all features to a unit hypersphere surface \citep{song2017deep,law2017deep,movshovitz2017no}. We identify the challenge that, without sample mining, informative training examples cannot be differentiated and emphasised properly because the relative weight between two samples is strictly bounded. We address it by sample reweighting, which rescales samples' gradients to control the differentiation degree among them. \vspace{-0.06cm} Finally, for intraclass compactness and interclass separability, most methods \citep{schroff2015facenet,song2016deep,tadmor2016learning,wu2017sampling} use distance thresholds to decrease intraclass variances and increase interclass distances. In contrast, we achieve the target from \textit{a perspective of instance-level matching probability}. \textit{Without any distance margin constraint}, ICE makes no assumptions about the boundaries between different classes. Therefore, ICE is easier to apply in applications where we have no prior knowledge about intraclass variances. \vspace{-0.06cm} Our contributions are summarised: (1) We approach DML from a novel perspective by taking in the key idea of matching probability in CCE. We introduce ICE, which is scalable to an infinite number of training classes and exploits structured information for learning supervision. (2) A seamless sample reweighting scheme is derived for ICE to address the challenge of learning an embedding subspace by projecting all features to a unit hypersphere surface.
(3) We show the superiority of ICE by comparing with state-of-the-art methods on three real-world datasets. \begin{figure}[!t] \begin{subfigure}[h]{0.46\textwidth} \vspace{-0.16cm} \includegraphics[width=1.01\textwidth]{Learned_Center_v04} \captionsetup{width=0.98\textwidth} \caption{ A query versus {\textbf{learned parametric class centroids}}. {All $T$ classes in the training set are considered.} Prior work: CCE, Heated-up \citep{zhang2018heated}, NormFace \citep{wang2017normface}. } \label{fig:Learned_Center} \end{subfigure} ~~~~~~ \begin{subfigure}[h]{0.5\textwidth} \vspace{-0.16cm} \includegraphics[width=0.96\textwidth]{Mean_Center_v04} \captionsetup{width=1.0\textwidth} \caption{ A query versus \textbf{{non-parametric class means}}. {Only classes in the mini-batch are considered.} Representative work: {TADAM} \citep{oreshkin2018tadam}, {DRPR} \citep{law2019dimensionality}, {Prototypical Networks} \citep{snell2017prototypical}. } \label{fig:Mean_Center} \end{subfigure} \vspace{0.06cm} \hdashrule{1.0\textwidth}{1pt}{1.0pt} \vspace{0.06cm} \begin{subfigure}[h]{0.46\textwidth} \includegraphics[width=0.85\textwidth]{NPairMC_v04} \captionsetup{width=1.0\linewidth} \caption{\textit{N}-pair-mc \citep{sohn2016improved}: A query versus {\textbf{one instance per class}}. A mini-batch has to be 2 examples per class rigidly. Only one instance per negative class is randomly sampled out of 2. } \label{fig:NPairMC} \end{subfigure} ~~~~~~ \begin{subfigure}[h]{0.5\textwidth} \includegraphics[width=0.98\textwidth]{NCA_v04} \captionsetup{width=1.0\linewidth} \caption{ NCA \citep{goldberger2005neighbourhood} and S-NCA \citep{wu2018improving}: A query versus \textbf{the rest instances}. 
$\hphantom{dddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddddd}$ } \label{fig:NCA} \end{subfigure} \vspace{0.06cm} \hdashrule{1.0\textwidth}{1pt}{1.0pt} \begin{subfigure}[h]{0.46\textwidth} \vspace{-0.20cm} \centering \includegraphics[width=0.86\textwidth]{ICE_v04} \captionsetup{width=1.0\textwidth} \caption{ Our ICE: A query versus \textbf{one positive and all negatives per distribution}. A query's number of matching distributions is defined by the number of its positive examples. } \label{fig:ICE} \end{subfigure} ~~~~~~ \begin{subfigure}[h]{0.5\textwidth} ~~~~~~~~ \vspace{-0.08cm} \includegraphics[width=0.86\textwidth]{icon_v04} \end{subfigure} \caption{ Our ICE and related losses. The first row shows prior work of {a query versus class centres/means} while the second row displays the work of {a query versus instances.} Note that the cross entropy computation and interpretation are different in different losses. For a mini-batch, we show two classes, i.e., circle and rectangle, with 3 examples per class except \textit{N}-pair-mc which requires 2 samples per class. The icons are at the right bottom. GT means ground-truth matching distribution. When illustrating the losses of a query versus instances in (c), (d) and (e), we index those instances with numbers for clarity, except the query. } \label{fig:comparing_losses} \vspace{-0.3cm} \end{figure} \vspace{-0.06cm} \section{Related Work} \vspace{-0.16cm} \vspace{-0.05cm} \subsection{Structured Losses by Query versus Class Centres} \vspace{-0.16cm} \textbf{Heated-up}, \textbf{NormFace}, \textbf{TADAM}, \textbf{DRPR}, \textbf{Prototypical Networks}, \textbf{Proxy-NCA}. These methods calculate the {similarities between a query and class centres (a.k.a. 
proxies or prototypes) instead of other instances} \citep{zhang2018heated,wang2017normface,oreshkin2018tadam,law2019dimensionality,snell2017prototypical,movshovitz2017no}. In Heated-up and NormFace, the class centres are learned parameters of a fully connected layer, which is similar to Center Loss \citep{wen2016discriminative}, while in TADAM, DRPR, and Prototypical Networks, a class centre is the mean over all embeddings of a class. By comparing a sample with examples other than class centres, more informative instances can contribute more in ICE. \vspace{-0.14cm} \subsection{Structured Losses by Query versus Instances} \vspace{-0.16cm} \textbf{NCA} \citep{goldberger2005neighbourhood}, \textbf{S-NCA} \citep{wu2018improving}. NCA learns similarity relationships between instances. Since the original NCA learns from the whole training data and its time complexity is quadratically proportional to the scale of the training data, S-NCA was proposed recently with linear time complexity with respect to the training data size. Instead, ICE is scalable to infinite training data by iteratively learning on randomly sampled small-scale instance matching tasks. S-NCA and NCA share the same learning objective. However, they treat the event of all similar instances being correctly recognised \textit{as a whole} by a sum accumulator. Instead, we maximise the probability of every similar sample being correctly identified \textit{individually}. Therefore, ICE's optimisation task is harder, leading to better generalisation. \vspace{-0.08cm} \textbf{\textit{N}-pair-mc} \citep{sohn2016improved}. The aim of \textit{N}-pair-mc is to {identify one positive example from $N-1$ negative examples of $N-1$ classes} (one negative example per class). In other words, only one positive and one negative instance per class are considered per loss computation by simulating CCE exactly. Instead, ICE exploits all negative examples to benefit from richer information.
When constructing mini-batches, \textit{N}-pair-mc requires expensive offline class mining and samples 2 images per class. According to \citep{sohn2016improved}, \textit{N}-pair-mc is superior to NCA. \vspace{-0.06cm} \textbf{Hyperbolic} \citep{nickel2018learning}. It aims to preserve the similarity structures among instances as well. However, it learns a hyperbolic embedding space where the distance depends only on the norms of embeddings. Instead, we learn an angular space where the similarity depends only on the angle between embeddings. Besides, Hyperbolic requires a separate sampling of semantic subtrees when the dataset is large. \vspace{-0.16cm} \subsection{Sample Mining and Weighting} \label{sec:connect_to_OSM} \vspace{-0.16cm} Mining informative examples or emphasising them are popular strategies in DML: 1) Mining non-trivial samples during training is crucial for faster convergence and better performance. Therefore, sample mining is widely studied in the literature. In pairwise or triplet-wise approaches \citep{schroff2015facenet,wu2017sampling,huang2016local,yuan2017hard}, data pairs with higher losses are emphasized during gradient backpropagation. As for structured losses, Lifted Struct \citep{song2016deep} also focuses on harder examples. Furthermore, \citep{sohn2016improved} and \citep{suh2019stochastic} propose to mine hard negative classes to construct informative input mini-batches. Proxy-NCA \citep{movshovitz2017no} addresses the sampling problem by learning class proxies. 2) Assigning higher weights to informative examples is another effective scheme \citep{wang2019ranked, wang2019multi}. Beyond, there are some other novel perspectives to address sample mining or weighting, e.g., hardness-aware example generation \citep{zheng2019hardness} and divide-and-conquer of the embedding space \citep{sanakoyeu2019divide}. \vspace{-0.06cm} Our proposed ICE has a similarity scaling factor which helps to emphasise informative examples more.
Moreover, as described in \citep{schroff2015facenet}, very hard negative pairs are likely to be outliers and it is safer to mine semi-hard ones. In ICE, the similarity scaling factor is flexible in that it controls the emphasis degree on harder samples. Therefore, a proper similarity scaling factor can help mine informative examples and alleviate the disturbance of outliers simultaneously. \textit{What makes ours different is that we do not heuristically design the mining or weighting scheme}. Instead, it is built-in and we simply scale it as demonstrated in Section~\ref{sec:implict_weighting_generalisation}. \vspace{-0.16cm} \subsection{Discussion} \vspace{-0.16cm} We remark that Prototypical Networks, Matching Networks \citep{vinyals2016matching} and NCA are also scalable and do not require distance thresholds. Therefore, they are illustrated and differentiated in Figure~\ref{fig:comparing_losses}. Matching Networks are designed specifically for one-shot learning. Similarly, \citep{triantafillou2017few} design mAP-SSVM and mAP-DLM for few-shot learning, which directly optimise the retrieval performance mAP when multiple positives exist. FastAP \citep{cakir2019deep} is similar to \citep{triantafillou2017few} and optimises the rank-based average precision. Instead, ICE processes one positive at a time. Beyond, the setting of few-shot learning is different from deep metric learning: Each mini-batch is a complete subtask and contains a support set as training data and a query set as validation data in few-shot learning. Few-shot learning applies episodic training in practice. \vspace{-0.06cm} Remarkably, TADAM formulates instances versus class centres and also has a metric scaling parameter for adjusting the impact of different class centres. Contrastively, ICE adjusts the influence of other instances. Furthermore, ours is not exactly distance metric scaling since we simply apply naive cosine similarity as the distance metric at the testing stage.
That is why we interpret it as a weighting scheme during training. \vspace{-0.16cm} \section{Instance Cross Entropy} \label{sec:minimising_ICE} \vspace{-0.16cm} {\textbf{Notation}}. $\mathbf{X} = \{(\mathbf{x}_i, y_i)\}_{i=1}^{N} = \{\{\mathbf{x}_i^c\}_{i=1}^{N_c}\}_{c=1}^C$ is an input mini-batch, where $\mathbf{x}_i \in \mathbb{R}^{h\times w\times 3}$ and $y_i \in \{1, ... , C\}$ represent the $i$-th image and the corresponding label, respectively; $\{\mathbf{x}_i^c\}_{i=1}^{N_c}$ is a set of $N_c$ images belonging to the $c$-th class, $\forall c, N_c \ge 2$. The number of classes $C$ is generally much smaller than the total number of classes $T$ in the training set ($C \ll T$). Note that $T$ is allowed to be extremely large in DML. Given a sufficient number of different mini-batches, our goal is to learn an embedding function $f$ that captures the semantic similarities among samples in the feature space. We represent the deep embeddings of $\mathbf{X}$ as $\{\{\mathbf{f}_i^c = f(\mathbf{x}_i^c) \}_{i=1}^{N_c}\}_{c=1}^C$. Given a query, `positives' and `negatives' refer to samples of the same class and different classes, respectively. \vspace{-0.16cm} \subsection{Revisiting Categorical Cross Entropy} \label{sec:CCE} \vspace{-0.12cm} CCE is widely used in a variety of tasks, especially classification problems. As demonstrated in \citep{liu2016large}, a deep classifier consists of two joint components: \textit{deep feature learning} and \textit{linear classifier learning}. The feature learning module is a transformation (i.e., the embedding function $f$) composed of convolutional and non-linear activation layers. 
The classifier learning module has one neural layer, which learns $T$ class-level context vectors such that any image has the highest compatibility (logit) with its ground-truth class context vector: \vspace{-0.1cm} \begin{equation} \label{equation:cce_prob} p({\mathbf{w}_{y_i} | \mathbf{x}_i}) = \frac{\exp(\mathbf{f}_i^\top\mathbf{w}_{y_i})}{\sum\nolimits_{k=1}^T \exp(\mathbf{f}_i^\top\mathbf{w}_k)} \text{~~~and~~~} L_{\mathrm{CCE}}(\mathbf{X};f, \mathbf{W}) = -\sum\nolimits_{i=1}^N \log p({\mathbf{w}_{y_i} | \mathbf{x}_i}), \end{equation} where $\mathbf{f}_i = f(\mathbf{x}_i) \in \mathbb{R}^d$ is a $d$-dimensional vector, $p({\mathbf{w}_{y_i} | \mathbf{x}_i})$ is the probability (softmax normalised logit) of $\mathbf{x}_i$ matching $\mathbf{w}_{y_i}$, and $\mathbf{W} = \{\mathbf{w}_k \in \mathbb{R}^{d} \}_{k=1}^T$ denotes the learned parameters of the classifier. During training, the goal is to maximise the joint probability of all instances being correctly classified. Equivalently, we minimise the negative log-likelihood, i.e., $L_{\mathrm{CCE}}(\mathbf{X};f, \mathbf{W})$. Therefore, the learning objective of CCE is: \vspace{-0.16cm} \begin{equation} \label{equation:cce_loss} \argmax_{f, \mathbf{W}} \prod\nolimits_{i=1}^N p({\mathbf{w}_{y_i} | \mathbf{x}_i}) = \argmin_{f, \mathbf{W}} L_{\mathrm{CCE}}(\mathbf{X};f, \mathbf{W}). \end{equation} \vspace{-0.28cm} \subsection{Instance Cross Entropy} \label{sec:ICE} \vspace{-0.16cm} In contrast to CCE, ICE is a loss for measuring instance matching quality (lower ICE means higher quality) and does not need class-level context vectors. We remark that an anchor may have multiple positives, which are isolated in separate matching distributions. There is a matching distribution for every anchor-positive pair versus their negatives as displayed in Figure~\ref{fig:ICE}. \vspace{-0.06cm} Let $\mathbf{f}_a^c$ be a random query; we compute its similarities with the remaining points using the dot product. 
We define the probability of the given anchor $\mathbf{x}_a^c$ matching one of its positives $\mathbf{x}_i^c (i \neq a)$ as follows: \vspace{-0.2cm} \begin{equation} \label{equation:ice_prob_pos} p(\mathbf{x}_i^c | \mathbf{x}_a^c) = \frac{\exp( {\mathbf{f}_a^c}^\top {\mathbf{f}_i^c} )} { \exp( {\mathbf{f}_a^c}^\top {\mathbf{f}_i^c} ) + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} ) }, \end{equation} \vspace{-0.3cm} where ${\mathbf{f}_a^c}^\top {\mathbf{f}_i^c}$ is the similarity between $\mathbf{x}_a^c$ and $\mathbf{x}_i^c$ in the embedding space, and $\sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} )$ is the sum of the exponentiated similarities between $\mathbf{x}_a^c$ and all its negatives. Similarly, when the positive is $\mathbf{x}_i^c$, the probability of one negative point $\mathbf{x}_j^o (o\neq c)$ matching the anchor is: \vspace{-0.2cm} \begin{equation} \label{equation:ice_prob_neg} p(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c) = \frac{ \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o}) } { \exp( {\mathbf{f}_a^c}^\top {\mathbf{f}_i^c} ) + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} ) }. \end{equation} \vspace{-0.2cm} We remark: (1) Dot product measures the similarity between two vectors; (2) Eq.~(\ref{equation:ice_prob_pos}) represents the probability of a query matching a positive while Eq.~(\ref{equation:cce_prob}) is the probability of a query matching its ground-truth class. To maximise $p(\mathbf{x}_i^c | \mathbf{x}_a^c)$ and minimise $p(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c)$ simultaneously, we minimise the Kullback-Leibler divergence \citep{kullback1951information} between the predicted and ground-truth distributions, which is equivalent to minimising their cross entropy. Since the ground-truth distribution is one-hot encoded, the cross-entropy is $ -\log p(\mathbf{x}_i^c | \mathbf{x}_a^c)$. 
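To make the two probabilities concrete, both can be read off a single softmax over the positive similarity and all negative similarities. The sketch below is a minimal numpy illustration with hypothetical helper names (not the authors' code), assuming $L_2$-normalised embeddings:

```python
import numpy as np

def matching_distribution(f_anchor, f_pos, f_negs):
    """One matching distribution per (anchor, positive) pair, cf. Eq. (3)-(4):
    a softmax over [sim(anchor, positive), sim(anchor, each negative)]."""
    logits = np.concatenate(([f_anchor @ f_pos], f_negs @ f_anchor))
    exp = np.exp(logits - logits.max())   # subtract the max for numerical stability
    probs = exp / exp.sum()
    # probs[0] = p(positive | anchor); probs[1:] = p(negative | anchor, positive)
    return probs[0], probs[1:]
```

By construction the entries of one matching distribution sum to one, which is why maximising $p(\mathbf{x}_i^c|\mathbf{x}_a^c)$ jointly suppresses all $p(\mathbf{x}_j^o|\mathbf{x}_a^c,\mathbf{x}_i^c)$.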
\vspace{-0.06cm} More generally, for a given anchor $\mathbf{x}_a^c$, there may exist multiple matching points when $N_c > 2$, i.e., $|\{\mathbf{x}_i^c\}_{i \neq a}|=N_c-1>1$. In this case, we predict one matching distribution per positive point. Our goal is to maximise the joint probability of all positive instances being correctly identified, i.e., $\label{equation:ice_prob_prod} p_{\mathbf{x}_a^c} = \prod\nolimits_{i \neq a} p(\mathbf{x}_i^c | \mathbf{x}_a^c)$. A case of two positives matching a given query is described in Figure~\ref{fig:ICE}. \vspace{-0.06cm} Within a mini-batch, each image in $\mathbf{X}$ serves as the anchor iteratively and we aim to maximise the joint probability of all queries $\{\{p_{\mathbf{x}_a^c}\}_{a=1}^{N_c} \}_{c=1}^C$. Equivalently, we can achieve this by minimising the sum of all negative log-likelihoods. Therefore, our proposed ICE on $\mathbf{X}$ is as follows: \vspace{-0.1cm} \begin{equation} \begin{aligned} \label{equation:ice_loss} L_{\mathrm{ICE}}(\mathbf{X};f) &= -\sum\nolimits_{c=1}^C \sum\nolimits_{a=1}^{N_c} \log p_{\mathbf{x}_a^c} &= -\sum\nolimits_{c=1}^C \sum\nolimits_{a=1}^{N_c} \sum\nolimits_{i \neq a} \log p(\mathbf{x}_i^c | \mathbf{x}_a^c) . \end{aligned} \vspace{-0.06cm} \end{equation} \vspace{-0.16cm} \subsection{Regularisation by $L_2$ Feature Normalisation} \label{sec:L2regularisation} \vspace{-0.16cm} Following the common practice in existing DML methods, we apply $L_2$-normalisation to feature embeddings before the inner product. Therefore, the inner product denotes the cosine similarity. The similarity between two feature vectors is determined by their norms and the angle between them. Without $L_2$ normalisation, the feature norm can be very large, making the model training unstable and difficult. With $L_2$ normalisation, \textit{all features are projected to a unit hypersphere surface}. Consequently, the semantic similarity score is merely determined by the direction of learned representations. 
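Putting Eq.~(5) and the $L_2$ constraint together, the mini-batch loss can be computed directly from the normalised similarity matrix. The following numpy sketch (illustrative only; function and variable names are ours) iterates over anchors and positives exactly as the double sum does:

```python
import numpy as np

def ice_loss(F, labels):
    """ICE over a mini-batch (Eq. 5) on L2-normalised embeddings.
    F: (N, d) raw embeddings; labels: (N,) integer class ids."""
    F = F / np.linalg.norm(F, axis=1, keepdims=True)   # the L2 constraint
    S = F @ F.T                                        # pairwise dot-product similarities
    N = len(labels)
    loss = 0.0
    for a in range(N):
        negs = [j for j in range(N) if labels[j] != labels[a]]
        neg_sum = np.exp(S[a, negs]).sum()             # shared negative term for anchor a
        for i in range(N):
            if i != a and labels[i] == labels[a]:      # one distribution per positive
                e_pos = np.exp(S[a, i])
                loss -= np.log(e_pos / (e_pos + neg_sum))
    return loss
```

As a sanity check, embeddings clustered consistently with the labels yield a lower loss than the same embeddings with mismatched labels.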
Therefore, $L_2$ normalisation can be regarded as a regulariser during training\footnote{Training without $L_2$ feature normalisation easily leads to very large feature norms and the dot products overflowing to INF.}. Note that the principle is quite different from recent hyperspherical learning methods \citep{liu2017sphereface,wang2018cosface,wang2018additive,liu2017deep,liu2018decoupled,liu2018learning}. They constrain the learned \textit{weight parameters} to a unit hypersphere surface and diversify their angles. In contrast, feature normalisation is \textit{output regularisation} and invariant to the parametrisation of the underlying neural network \citep{pereyra2017regularizing}. In summary, our learning objective is: \vspace{-0.1cm} \begin{equation} \label{equation:ice_object_regu} \argmax_{f} \prod\nolimits_{c=1}^C \prod\nolimits_{a=1}^{N_c} p_{\mathbf{x}_a^c} = \argmin_{f} L_{\mathrm{ICE}}(\mathbf{X};f) \quad \text{s.t.} \quad \forall a, c, ||\mathbf{f}_a^c||_2=1. \end{equation} \vspace{-0.2cm} The feature $L_2$-normalisation layer is implemented according to \cite{wang2017normface}. It is a differentiable layer and can be easily inserted at the output of a neural net. \vspace{-0.16cm} \subsection{Sample Reweighting of ICE} \label{sec:implict_weighting_generalisation} \vspace{-0.16cm} \textbf{Intrinsic sample weighting.} We find that ICE emphasises harder samples more from the perspective of gradient magnitude. We demonstrate this by deriving the partial derivatives of $L_{\mathrm{ICE}}(\mathbf{X};f)$ with respect to positive and negative examples. 
Given the query $\mathbf{x}_a^c$, the partial derivative with respect to any of its positive instances is derived by the chain rule: \vspace{-0.1cm} \begin{equation} \label{equation:ice_derivative_one_pos} \begin{aligned} \frac{\partial L_{\mathrm{ICE}}(\mathbf{X};f)}{\partial \mathbf{f}_i^c} &=- \frac{\mathbf{f}_a^c \cdot \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o}) } { \exp( {\mathbf{f}_a^c}^\top {\mathbf{f}_i^c} ) + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} ) } &= - \mathbf{f}_a^c \cdot (1-p(\mathbf{x}_i^c | \mathbf{x}_a^c)). \end{aligned} \end{equation} Since $||\mathbf{f}_a^c||_2 =1$, $w_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} =||\frac{\partial L_{\mathrm{ICE}}(\mathbf{X};f)}{\partial \mathbf{f}_i^c}||_2 =(1-p(\mathbf{x}_i^c | \mathbf{x}_a^c))$ can be viewed as the weight of $\mathbf{f}_i^c$ when the anchor is $\mathbf{x}_a^c$. Thus, \textit{ICE focuses more on harder positive samples}, whose $p(\mathbf{x}_i^c | \mathbf{x}_a^c)$ is lower. \vspace{-0.06cm} Similarly, the partial derivative with respect to any of its negative samples is: \begin{equation} \label{equation:ice_derivative_one_neg} \begin{aligned} \frac{\partial L_{\mathrm{ICE}}(\mathbf{X};f)}{\partial \mathbf{f}_j^o} &= \sum\nolimits_{i \neq a} \frac{\mathbf{f}_a^c \cdot \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o}) } { \exp( {\mathbf{f}_a^c}^\top {\mathbf{f}_i^c} ) + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} ) } &= \mathbf{f}_a^c \cdot \sum\nolimits_{i \neq a} p(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c) , \end{aligned} \end{equation} where $p(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c)$ is the matching probability between $\mathbf{x}_j^o$ and $\mathbf{x}_a^c$ given that the ground-truth example is $\mathbf{x}_i^c$. The weight of $\mathbf{x}_j^o$ w.r.t. 
$\mathbf{x}_a^c$ is: $w_{(\mathbf{x}_j^o;\mathbf{x}_a^c)}=||\frac{\partial L_{\mathrm{ICE}}(\mathbf{X};f)}{\partial \mathbf{f}_j^o} ||_2 = \sum\nolimits_{i \neq a} p(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c)$. Clearly, \textit{harder negative samples have higher matching probabilities and weights}. \vspace{-0.06cm} \textbf{Relative weight analysis.} In general,{ the relative weight \citep{tabachnick2007using} is more meaningful, as the absolute weight will be rescaled during training}, e.g., by linear post-processing such as multiplying the learning rate. Therefore, we analyse the relative weight between two positive points of the same anchor ($i \neq k \neq a$):\begin{equation} \label{equation:ice_relative_weight_pos} \begin{aligned} \frac{w_{(\mathbf{x}_i^c;\mathbf{x}_a^c)}}{w_{(\mathbf{x}_k^c;\mathbf{x}_a^c)}} &= \frac{1-p(\mathbf{x}_i^c | \mathbf{x}_a^c)}{1-p(\mathbf{x}_k^c | \mathbf{x}_a^c)} &=\frac{\exp( {\mathbf{f}_a^c}^\top {\mathbf{f}_k^c} ) + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} ) } {\exp( {\mathbf{f}_a^c}^\top {\mathbf{f}_i^c} ) + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} ) }. \end{aligned} \end{equation} Similarly, the relative weight between two negative points of the same anchor ($o \neq c, l \neq c$) is: \begin{equation} \label{equation:ice_relative_weight_neg} \begin{aligned} \frac{w_{(\mathbf{x}_j^o;\mathbf{x}_a^c)}} {w_{(\mathbf{x}_k^l;\mathbf{x}_a^c)}} = \frac{\sum\nolimits_{i \neq a} p(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c)} {\sum\nolimits_{i \neq a} p(\mathbf{x}_k^l | \mathbf{x}_a^c,\mathbf{x}_i^c)} = \frac{ \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o}) } { \exp({ \mathbf{f}_a^c}^\top {\mathbf{f}_k^l}) }. 
\end{aligned} \end{equation} Note that the positive relative weight in Eq.~(\ref{equation:ice_relative_weight_pos}) is \textit{only decided} by ${\mathbf{f}_a^c}^\top {\mathbf{f}_i^c}$ and ${\mathbf{f}_a^c}^\top {\mathbf{f}_k^c}$ while the negative relative weight in Eq.~(\ref{equation:ice_relative_weight_neg}) is \textit{only determined} by ${ \mathbf{f}_a^c}^\top {\mathbf{f}_j^o}$ and ${ \mathbf{f}_a^c}^\top {\mathbf{f}_k^l}$. The relative weight is merely determined by the dot product, which is in the range of $[-1, 1]$ and strictly bounded. \vspace{-0.06cm} \textbf{Non-linear scaling for controlling the relative weight.} Inspired by \citep{hinton2015distilling}, we introduce a scaling parameter to modify the absolute weight non-linearly: \begin{equation} \begin{aligned} \hat{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} &= \frac{ \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp(s \cdot { \mathbf{f}_a^c}^\top {\mathbf{f}_j^o}) } { \exp(s \cdot {\mathbf{f}_a^c}^\top {\mathbf{f}_i^c} ) + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp(s \cdot { \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} ) } &= 1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)}, \end{aligned} \end{equation} \begin{equation} \label{equation:ice_weight_one_neg} \begin{aligned} \hat{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)} &= \sum\nolimits_{i \neq a} \frac{ \exp(s \cdot { \mathbf{f}_a^c}^\top {\mathbf{f}_j^o}) } { \exp(s \cdot {\mathbf{f}_a^c}^\top {\mathbf{f}_i^c} ) + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \exp(s \cdot { \mathbf{f}_a^c}^\top {\mathbf{f}_j^o} ) } &=\sum\nolimits_{i \neq a} \hat{p}(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c) , \end{aligned} \end{equation} where $s \ge 1$ is the scaling parameter. In contrast to $p$ and $w$, $\hat{p}$ and $\hat{w}$ represent the rescaled matching probability and partial derivative weight, respectively. We remark that we scale the absolute weight non-linearly, which is an indirect way of controlling the relative weight. 
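The effect of $s$ can be checked numerically: under rescaling, the relative weight between two negatives becomes $\exp(s\cdot({\mathbf{f}_a^c}^\top\mathbf{f}_j^o - {\mathbf{f}_a^c}^\top\mathbf{f}_k^l))$, so a larger $s$ concentrates the negative weight on the hardest negatives. A minimal sketch (our own illustration, single-positive case; function name is hypothetical):

```python
import numpy as np

def rescaled_neg_weights(sim_pos, sims_neg, s):
    """Rescaled negative matching probabilities \\hat{p} for one anchor with a
    single positive: a temperature-like softmax with scaling parameter s."""
    logits = s * np.concatenate(([sim_pos], sims_neg))
    e = np.exp(logits - logits.max())     # stable softmax
    return (e / e.sum())[1:]              # weights of the negatives

# a hard negative (cosine 0.8) vs. an easy one (cosine 0.1)
w_small = rescaled_neg_weights(0.9, np.array([0.8, 0.1]), s=1)
w_large = rescaled_neg_weights(0.9, np.array([0.8, 0.1]), s=32)
```

With $s=1$ the two negatives receive comparable weight; with $s=32$ almost all of the negative weight moves to the harder one, in line with the relative-weight ratio $\exp(s\cdot(0.8-0.1))$.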
We do not modify the relative weight directly; Eq.~(\ref{equation:ice_relative_weight_pos}) and Eq.~(\ref{equation:ice_relative_weight_neg}) are only for introducing our motivation. \vspace{-0.08cm} Our objective is to maximise an anchor's matching probability with any of its positive instances competing against its negative set. Therefore, we normalise the rescaled weights based on each anchor: \begin{equation} \vspace{-0.16cm} \label{equation:final_weight_pos} \begin{aligned} \bar{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} &= \frac{1}{N} \cdot \frac{\hat{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} } { \sum\nolimits_{i \neq a} \hat{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \hat{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)} } &= \frac{1}{2N} \cdot \frac{ 1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)} } { \sum\nolimits_{i \neq a} (1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)}) } , \end{aligned} \end{equation} \begin{equation} \label{equation:final_weight_neg} \begin{aligned} \bar{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)} &= \frac{1}{N} \cdot \frac{\hat{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)}} { \sum\nolimits_{i \neq a} \hat{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} + \sum\nolimits_{o \neq c} \sum\nolimits_{j} \hat{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)} } &= \frac{1}{2N} \cdot \frac{ \sum\nolimits_{i \neq a} \hat{p}(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c) } { \sum\nolimits_{i \neq a} (1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)}) } . \end{aligned} \end{equation} Note that the denominators in Eq.~(\ref{equation:final_weight_pos}) and (\ref{equation:final_weight_neg}) are the accumulated weights of positives and negatives w.r.t. $\mathbf{x}_a^c$, respectively. \textit{Although there are many more negatives than positives, the negative set and positive set contribute equally as a whole, as indicated by $1/2$. } $N=\sum_{c=1}^{C} N_c$ is the total number of instances in $\mathbf{X}$. 
We select each instance as the anchor iteratively and treat all anchors equally, as indicated by $1/N$. \begin{algorithm}[!t] \caption{Learn by minimising ICE stochastically} \label{algorithm:ICE} \begin{algorithmic} \STATE \textbf{Batch setting}: $C$ classes, $N_c$ images from $c$-th class, batch size $N=\sum_{c=1}^{C} N_c$. \STATE \textbf{Hyper-setting}: The scaling parameter $s$ and the number of iterations $\tau$. \STATE \textbf{Input}: Initialised embedding function $f$, iteration counter $iter=0$. \STATE \textbf{Output}: Updated $f$. \FOR{$iter < \tau$} \STATE $iter= iter + 1$. \\Sample one mini-batch randomly $\mathbf{X} = \{\{\mathbf{x}_i^c\}_{i=1}^{N_c}\}_{c=1}^C$. \STATE \textbf{Step 1}: Feedforward $\mathbf{X}$ into $f$ to obtain feature representations $\{\{\mathbf{f}_i^c\}_{i=1}^{N_c}\}_{c=1}^C$. \STATE \textbf{Step 2}: Compute the similarities between an anchor and the remaining instances. Every example serves as the anchor iteratively. \FOR{$\mathbf{f}_a^c \in \{\{\mathbf{f}_i^c\}_{i=1}^{N_c}\}_{c=1}^C$} \FOR{$\mathbf{f}_i^c \in \{\mathbf{f}_i^c\}_{i \neq a}$ } \STATE Compute $p(\mathbf{x}_i^c | \mathbf{x}_a^c)$ using Eq.~(\ref{equation:ice_prob_pos}). // We do not need to compute Eq.~(\ref{equation:ice_prob_neg}). \ENDFOR \\ \ENDFOR \STATE Compute $L_{\mathrm{ICE}}(\mathbf{X};f)$ using Eq.~(\ref{equation:ice_loss}). \STATE \textbf{Step 3}: Gradient back-propagation to update the parameters of $f$ using Eq.~(\ref{equation:ice_derivative_one_pos_final}). \ENDFOR \end{algorithmic} \end{algorithm} \vspace{-0.08cm} It is worth noting that during back-propagation, the magnitudes of partial derivatives in Eq.~(\ref{equation:ice_derivative_one_pos}) and Eq.~(\ref{equation:ice_derivative_one_neg}), i.e., $w_{(\mathbf{x}_i^c;\mathbf{x}_a^c)}$ and $w_{(\mathbf{x}_j^o;\mathbf{x}_a^c)}$, are replaced by $\bar{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)}$ and $\bar{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)}$, respectively. 
The direction of each individual partial derivative is unchanged. However, since weights are rescaled non-linearly, the final partial derivative of each sample is changed to a better weighted combination of multiple partial derivatives. {Final partial derivatives} of $L_{\mathrm{ICE}}(\mathbf{X};f)$ w.r.t. positives and negatives are: \begin{equation} \label{equation:ice_derivative_one_pos_final} \begin{aligned} \frac{\partial L_{\mathrm{ICE}}(\mathbf{X};f)}{\partial \mathbf{f}_i^c} &= - \mathbf{f}_a^c \cdot \bar{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} \text{~~ and ~~} \frac{\partial L_{\mathrm{ICE}}(\mathbf{X};f)}{\partial \mathbf{f}_j^o} &= \mathbf{f}_a^c \cdot \bar{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)}. \end{aligned} \end{equation} \vspace{-0.3cm} \vspace{-0.16cm} \subsection{A Case Study and Intuitive Explanation of ICE} \vspace{-0.16cm} To make ICE clearer and more intuitive, we now analyse a naive case of ICE, where there are two samples per class in every mini-batch, i.e., $\forall c, N_c = 2$, $|\{\mathbf{x}_i^c\}_{i \neq a}|=N_c-1=1$. In this case, for each anchor (query), there is only one positive among the remaining data points. 
As a result, the weighting schemes in Eq.~(\ref{equation:final_weight_pos}) for positives and Eq.~(\ref{equation:final_weight_neg}) for negatives can be simplified: \vspace{-0.15cm} \begin{equation} \label{equation:final_weight_pos_naive} \begin{aligned} \bar{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} &= \frac{1}{2N} \cdot \frac{ 1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)} } { \sum\nolimits_{i \neq a} (1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)}) } = \frac{1}{N} \cdot \frac{1}{2} , \end{aligned} \end{equation} \vspace{-0.2cm} \begin{equation} \label{equation:final_weight_neg_naive} \begin{aligned} \bar{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)} &= \frac{1}{2N} \cdot \frac{ \sum\nolimits_{i \neq a} \hat{p}(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c) } { \sum\nolimits_{i \neq a} (1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)}) } &= \frac{1}{N} \cdot \frac{1}{2}\cdot \frac{ \hat{p}(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c) } { 1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)} }. \end{aligned} \end{equation} Firstly, we have $N$ anchors that are treated equally as indicated by $1/N$. Secondly, for each anchor, we aim to recognise its positive example correctly. However, there is a \textit{sample imbalance problem} because \textit{each anchor has only one positive and many negatives}. ICE addresses it by treating the positive set (single point) and negative set (multiple points) equally, i.e., $1/2$ in Eq.~(\ref{equation:final_weight_pos_naive}) and Eq.~(\ref{equation:final_weight_neg_naive}) \footnote{The weight sum of negatives: $\sum_{o \neq c} \sum_{j} \hat{p}(\mathbf{x}_j^o | \mathbf{x}_a^c,\mathbf{x}_i^c) = 1-\hat{p}{(\mathbf{x}_i^c|\mathbf{x}_a^c)} \Rightarrow \sum_{o \neq c} \sum_{j} \bar{w}_{(\mathbf{x}_j^o;\mathbf{x}_a^c)} = \bar{w}_{(\mathbf{x}_i^c;\mathbf{x}_a^c)} = 1/(2N) $. }. Finally, as there are many negative samples, we aim to focus more on informative ones, i.e., harder negative instances with higher matching probabilities with a given anchor. 
The non-linear transformation can help control the relative weight between two negative points. The weighting scheme shares the same principle as the popular temperature-based categorical cross entropy \citep{hinton2015distilling,oreshkin2018tadam}. The key is that we should consider not only focusing on harder examples, but also the emphasis degree. \vspace{-0.16cm} \subsection{Complexity Analysis} \label{sec:complexity_analysis} \vspace{-0.16cm} Algorithm~\ref{algorithm:ICE} summarises the learning process with ICE. As presented there, the input data format of ICE is the same as CCE, i.e., images and their corresponding labels. In contrast to other methods which require rigid input formats \citep{schroff2015facenet,sohn2016improved}, e.g., triplets and n-pair tuplets, ICE is much more flexible. We iteratively select one image as the anchor. For each anchor, we aim to maximise its matching probabilities with its positive samples against its negative examples. Therefore, \textit{the computational complexity over one mini-batch is} $O(N^2)$, being the same as recent online metric learning approaches \citep{song2016deep,wang2019deep}. Note that in FaceNet \citep{schroff2015facenet} and $N$-pair-mc \citep{sohn2016improved}, expensive sample mining and class mining are applied, respectively. \vspace{-0.1cm} \section{Experiments} \vspace{-0.1cm} \begin{table} \parbox{.55\textwidth}{ \captionsetup{width=.55\textwidth} \caption{ A summary of three fine-grained datasets. Training and test classes are disjoint. `\#' refers to the number of each item. There are only 5.3 images per class on average in SOP. 
} \label{table:datasets} \vspace{-0.35cm} \begin{center} \begin{small} \setlength{\tabcolsep}{2.8pt} \begin{tabular}{lccc} \toprule Datasets & CARS196 & CUB-200-2011 & SOP \\ \midrule Context & Cars & Birds & Products \\ \#Total classes & 196 & 200 & 22,634 \\ \#Total images & 16,185 & 11,788 & 120,053 \\ \#Training classes & 98 & 100 & 11,318 \\ \#Training images & 8,054 & 5,864 & 59,551\\ \#Test classes & 98 & 100 & 11,316 \\ \#Test images & 8,131 & 5,924 & 60,502 \\ \bottomrule \end{tabular} \end{small} \end{center} } \hfill \parbox{.41\textwidth}{ \caption{ The results of different reweighting parameters $s$ on SOP in terms of Recall@$K$ (\%). There are 90 classes and 2 images per class in a mini-batch, i.e., the batch size is 180. } \label{table:s} \vspace{-0.42cm} \begin{center} \begin{small} \setlength{\tabcolsep}{6pt} \begin{tabular}{lccc} \toprule Reweighting & R@1 & R@10 & R@100 \\ \midrule $s=1$ & 42.0 & 58.1 & 74.1 \\ $s=16$ & 71.0 & 85.6 & 93.8 \\ $s=32$ & 73.6 & 87.5 & 94.7 \\ $s=48$ & {76.9} & 89.7 & {95.5} \\ $s=64$ & \textbf{77.3} & \textbf{90.0} & \textbf{95.6} \\ $s=80$ & 75.4 & 88.7 & 94.9 \\ \bottomrule \end{tabular} \end{small} \end{center} } \vspace{-0.3cm} \end{table} \vspace{-0.1cm} \subsection{Implementation Details and Evaluation Settings} \vspace{-0.1cm} For data augmentation and preprocessing, we follow \citep{song2016deep,song2017deep}. In detail, we first resize the input images to $256 \times 256$ and then crop them to $227 \times 227$. We use random cropping and horizontal mirroring for data augmentation during training. To fairly compare with the results reported in \citep{song2017deep}, we use a single centre crop without horizontal flipping in the test phase. For the embedding size, we set it to 512 on all datasets following \citep{sohn2016improved,law2017deep,wang2019ranked}. 
To compare fairly with \citep{song2017deep,law2017deep,movshovitz2017no}, we choose GoogLeNet V2 (with batch normalisation) \citep{ioffe2015batch} as the backbone architecture initialised by the publicly available pretrained model on ImageNet \citep{russakovsky2015imagenet}. We simply change the original 1000-neuron fully connected layers followed by softmax normalisation and CCE to 512-neuron fully connected layers followed by the proposed ICE. For faster convergence, we randomly initialise the new layers and optimise them with a 10 times larger learning rate than the other layers as in \citep{song2016deep}. \vspace{-0.06cm} We implement our algorithm in the Caffe framework \citep{jia2014caffe}. The source code will be available soon. \vspace{-0.06cm} \textbf{Datasets.} Following the evaluation protocol in \citep{song2016deep,song2017deep}, we test our proposed method on three popular fine-grained datasets including CARS196 \citep{krause20133d}, CUB-200-2011 \citep{wah2011caltech} and SOP \citep{song2016deep}. A summary of the datasets is given in Table~\ref{table:datasets}. We also keep the same train/test splits. We remark that to test the generalisation and transfer capability of the learned deep metric, the training and test classes are disjoint. \vspace{-0.06cm} \textbf{Evaluation protocol.} We evaluate the learned representations on the image retrieval task in terms of Recall@$K$ performance \citep{song2016deep}. Given a query, its $K$ nearest neighbours are retrieved from the database. Its retrieval score is one if there is an image of the same class in the $K$ nearest neighbours and zero otherwise. Recall@$K$ is the average score of all queries. \vspace{-0.06cm} \textbf{Training settings.} All the experiments are run on a single PC equipped with a Tesla V100 GPU with 32GB RAM. For optimisation, we use stochastic gradient descent (SGD) with a weight decay of $10^{-5}$ and a momentum of 0.8. The base learning rate is set to $10^{-3}$. 
The training converges at $20k$ iterations on SOP while $4k$ iterations on CARS196 and CUB-200-2011. As for the hyper-parameters, we study their impacts in Sec. \ref{sec:ablation_study} and supplementary material. The mini-batch size is 60 for small datasets CARS196 and CUB-200-2011 while 180 for the large benchmark SOP. Additionally, we set $C=6, N_c=10$ on CARS196 and CUB-200-2011 while $C=90, N_c=2$ on SOP. The design reasons are: 1) SOP has only 5.3 images per class on average. Therefore $N_c$ cannot be very large; 2) It helps to simulate the global structure of deep embeddings, where the database is large and only a few matching instances exist. \vspace{-0.06cm} The analysis of batch content, batch size and embedding size is presented in the supplementary material. \begin{table*}[!t] \caption{Comparison with the state-of-the-art methods on CARS196, CUB-200-2011 and SOP in terms of Recall@$K$ (\%). All the compared methods use GoogLeNet V2 as the backbone architecture. `--' means the results which are not reported in the original paper. The best results in the first block using single embedding are bolded. 
} \vspace{-0.2cm} \label{table:SOTA} \begin{center} \begin{small} \setlength{\tabcolsep}{6.0pt} \begin{tabular}{lcccc|cccc|ccc} \toprule & \multicolumn{4}{c|}{CARS196} & \multicolumn{4}{c|}{CUB-200-2011} & \multicolumn{3}{c}{SOP} \\ \cmidrule(r){2-12} $K$ & 1 & 2 & 4 & 8 & 1 & 2 & 4 & 8 & 1 & 10 & 100 \\ \midrule Without fine-tuning & 35.6 & 47.3 & 59.4 & 72.2 & 40.1 & 53.2 & 66.0 & 76.6 & 43.7 & 60.8 & 76.5 \\ Fine-tuned with CCE & 48.8 & 58.5 & 71.0 & 78.4 & 46.0 & 58.0 & 69.3 & 78.3 & 51.7 & 69.8 & 85.3 \\ Triplet Semihard & 51.5 & 63.8 & 73.5 & 82.4 & 42.6 & 55.0 & 66.4 & 77.2 & 66.7 & 82.4 & 91.9 \\ Lifted Struct & 53.0 & 65.7 & 76.0 & 84.3 & 43.6 & 56.6 & 68.6 & 79.6 & 62.5 & 80.8 & 91.9\\ N-pair-mc & 53.9 & 66.8 & 77.8 & 86.4 & 45.4 & 58.4 & 69.5 & 79.5 & 66.4 & 83.2 & 93.0 \\ Struct Clust & 58.1 & 70.6 & 80.3 & 87.8 & 48.2 & 61.4 & 71.8 & 81.9 & 67.0 & 83.7 & 93.2 \\ Spectral Clust & 73.1 & 82.2 & 89.0 & 93.0 & 53.2 & 66.1 & 76.7 & 85.3 & 67.6 & 83.7 & 93.3 \\ Proxy NCA & {73.2} & 82.4 & 86.4 & 88.7 & 49.2 & 61.9 & 67.9 & 72.4 & 73.7 & -- & -- \\ RLL & 74.0 & 83.6 & 90.1& 94.1& 57.4 &\textbf{69.7}& {79.2}& \textbf{86.9} & 76.1& 89.1& 95.4 \\ ICE & \textbf{77.0} & \textbf{85.3} & \textbf{91.3} & \textbf{94.8} & \textbf{58.3} & {69.5} & \textbf{{79.4}} & {86.7} & \textbf{77.3} & \textbf{90.0} & \textbf{95.6} \\ \midrule RLL-(L,M,H) & 82.1 & 89.3 & {93.7} & {96.7} & 61.3 & {72.7} & {82.7} & {89.4} & 79.8 & 91.3 & 96.3 \\ ICE-(L, M, H) & {82.8} & {89.5} & {93.7} & {96.4} & {61.4} & {73.2} & {82.5} & {89.2} & {80.1} & {91.8} & {96.6} \\ \bottomrule \end{tabular} \end{small} \end{center} \end{table*} \vspace{-0.16cm} \subsection{Quantitative Results} \vspace{-0.16cm} \textbf{Remarks.} For a fair comparison, we remark that the methods group \citep{ustinova2016learning, harwood2017smart,wang2017deep,duan2018deep,lin2018deep,suh2019stochastic,zheng2019hardness} using GoogLeNet V1 \citep{szegedy2015going} and another group \citep{wu2017sampling,cakir2019deep, 
sanakoyeu2019divide} using ResNet-50 \citep{he2016deep} are not benchmarked. Besides, ensemble models \citep{yuan2017hard,opitz2017bier,kim2018attention,xuan2018deep} are not considered. HTL~\citep{ge2018deep} also uses GoogLeNet V2, but it constructs a hierarchical similarity tree {over the whole training set} and updates the tree every epoch, thus being highly unscalable and expensive in terms of both computation and memory. That is why HTL achieves better performance on small datasets but performs worse than ours on the large dataset SOP. Finally, there are some other orthogonal deep metric learning research topics that are worth studying together in the future, e.g., a robust distance metric \citep{yuan2019signal} and metric learning with continuous labels \citep{kim2019deep}. In GoogLeNet V2, there are three fully connected layers of different depth. We refer to them by their depth: L for the low-level layer (inception-3c/output), M for the mid-level layer (inception-4e/output) and H for the high-level layer (inception-5b/output). By default, we use only `H'. We also report the results of their combination (L, M, H) for reference following RLL \citep{wang2019ranked}. \textbf{Competitors.} All the compared baselines, Triplet Semihard \citep{schroff2015facenet}, Lifted Struct \citep{song2016deep}, $N$-pair-mc \citep{sohn2016improved}, Struct Clust \citep{song2017deep}, Spectral Clust \citep{law2017deep}, Proxy-NCA \citep{movshovitz2017no}, RLL \citep{wang2019ranked} and our ICE are trained and evaluated using the same settings: (1) GoogLeNet V2 serves as the backbone network; (2) All models are initialised with the same pretrained model on ImageNet; (3) All apply the same data augmentation during training and use a centre-cropped image during testing. The results of some baselines \citep{schroff2015facenet,song2016deep,sohn2016improved} are from \citep{song2017deep}, which means they are reimplemented there for a fair comparison. 
In addition, the results of vanilla GoogLeNet V2 pretrained on ImageNet without fine-tuning and with fine-tuning via minimising CCE are reported in \citep{law2017deep}, which can be regarded as the most basic baselines. Among these baselines, Proxy NCA is not scalable as class-level proxies are learned during training. Struct Clust and Spectral Clust are clustering-motivated methods which explicitly aim to optimise the clustering quality. We highlight that the clustering performance measure Normalised Mutual Information (NMI) \citep{schutze2008introduction} is not a good assessment for SOP \citep{law2017deep} because SOP has a large number of classes but only 5.3 images per class on average. Therefore, we only report and compare Recall@$K$ performance. \vspace{-0.06cm} \textbf{Results.} Table~\ref{table:SOTA} compares the results of our ICE and those of the state-of-the-art DML losses. ICE achieves the best Recall@1 performance on all benchmarks. We observe that only RLL achieves comparable performance in a few terms. However, RLL is more complex since it has three hyper-parameters in total: one weight scaling parameter and two distance margins for positives and negatives, respectively. In addition, its perspective is different since it processes the positive set together, similarly to \citep{triantafillou2017few,wang2019multi}. We note that \citep{wang2019multi} is also complex in designing weighting schemes and contains four control hyper-parameters. However, our Recall@$1$ on SOP is 77.3\%, which is only 0.9\% lower than the 78.2\% of \citep{wang2019multi}. It is also worth mentioning that among these approaches, except fine-tuned models with CCE, only our method has a clear probability interpretation and aims to maximise the joint instance-level matching probability. As observed, apart from being unscalable, CCE's performance is much worse than that of the state-of-the-art methods.
Therefore, ICE can be regarded as a successful exploration of softmax regression for learning deep representations in DML. The t-SNE visualisation \citep{van2014accelerating} of learned embeddings is available in the supplementary material. \vspace{-0.16cm} \subsection{Analysis of Sample Reweighting in ICE} \label{sec:ablation_study} \vspace{-0.16cm} We empirically study the impact of the weight scaling parameter $s$, which is the only hyper-parameter of ICE. It functions similarly to the popular sample mining or example weighting \citep{wang2019ranked,wang2019deep,wang2019multi} widely applied in the baselines in Table~\ref{table:SOTA}. Generally, different values of $s$ correspond to different degrees of emphasis on difficult examples. When $s$ is larger, more difficult instances are assigned relatively higher weights. \vspace{-0.06cm} In general, small datasets are more sensitive to minor changes of hyper-settings and much easier to overfit. Therefore, the experiments are conducted on the large dataset SOP. The results are shown in Table~\ref{table:s}. Note that when $s$ is too small, e.g., $s=1$, we observe that the training does not converge, which demonstrates the necessity of weighting/mining samples. The most significant observation is that focusing on difficult samples is better, but the degree of emphasis should be properly controlled. When $s$ increases from 16 to 64, the performance grows gradually. However, when $s = 80$, we observe that the performance drops considerably. That may be because extremely hard samples, e.g., outliers, are emphasised when $s$ is too large. \vspace{-0.16cm} \section{Conclusion} \vspace{-0.16cm} In this paper, we propose a novel instance-level softmax regression framework, named instance cross entropy, for deep metric learning. Firstly, the proposed ICE has a clear probability interpretation and exploits structured semantic similarity information among multiple instances.
Secondly, ICE is scalable to infinitely many classes, which is required by DML. Thirdly, ICE has only one weight scaling hyper-parameter, which serves to mine informative examples and can be easily selected via cross-validation. Finally, distance thresholds are not applied to achieve intraclass compactness and interclass separability. This indicates that ICE makes no assumptions about intraclass variances and the boundaries between different classes. Therefore, ICE is generally applicable. \bibliographystyle{iclr2020_conference}
\section{Introduction} Recently, fifth-generation (5G) wireless communication systems have appeared in numerous countries, satisfying the demands of both industries and end users. However, today, researchers have focused on the concepts and applications of future wireless communication, that is, sixth-generation (6G) wireless communication. As the roadmap of 6G becomes actualized, 6G usage cases, such as streaming services and mobile augmented and virtual reality (AR/VR), require data rates at a superior level, exceeding one terabit per second (Tbit/s). In addition, an increasingly large number of such devices are being connected to the network, and they are communicating with one another. {Such interactions are expected to exceed the connectivity of $10^7$~connections per~$\textrm{km}^2$~\cite{T6G}, thereby surpassing the capacity of 5G (up from $10^6$)~\cite{T6G}.} Moreover, severe problems arise from the scarcity of licensed spectrum resources. {This problem cannot be resolved by existing radio-frequency (RF) communication systems, whose licensed spectrum is almost saturated. Even if we find available RF spectrum and utilize it for mobile access networks, it is difficult to satisfy the target data rate ($\approx$Tbits/s) of 6G because of the relatively low bandwidth~\cite{T6G, UAVFSO}. An emerging solution is the free-space optical (FSO) communication system. FSO communication is a strong candidate for future wireless networks owing to several features, including high capacity and license-free characteristics, that can achieve the targets of 6G by utilizing hundreds of GHz or even THz of bandwidth~\cite{QD2}.} For example, accompanied by unmanned aerial vehicles (UAVs), FSO communication can be used as a non-terrestrial wireless backhaul system. Generally, costly wired communication infrastructure (e.g., optical fiber) is necessary to provide reliable network services to rural areas.
Many researchers believe that this problem can be solved by deploying FSO-communication-enabled UAVs to provide wireless backhaul connections~\cite{UAVFSO}. Wireless backhauls can be easily developed by combining the mobility of UAVs with the high data rate and massive-connectivity properties of FSO links. Moreover, such backhaul connections can be expanded and reinstalled while adhering to the standards of the 6G network~\cite{T6G, UAVFSO}. One of the most significant technical challenges of the FSO communication link is that the channel characteristics--atmospheric turbulence and scintillation--can cause the signal to fluctuate severely~\cite{SCI}. To mitigate the effect of this phenomenon, researchers have investigated various methods to construct robust real-time transmission architectures with high bit rates. Most previous studies have developed high-level technologies for high-rate FSO signal transmission. {These technologies have included 3D video transmission with an adaptive demodulation scheme and a robust channel coding strategy, and a robust UAV-trajectory optimization strategy for the UAV-to-ground FSO signal transmission scenario under turbulent, correlated FSO channels~\cite{VTOWC2, JLT2}. They also demonstrated end-to-end transmission of FSO signals with various modulation schemes and link distances of tens or hundreds of meters~\cite{32Q, 11m}.} However, these works solely focused on numerical simulation or short-distance-link prototyping, which cannot establish the actual feasibility of long-range FSO links, widely assumed in various scenarios (e.g., non-terrestrial cellular systems) for 6G wireless networks~\cite{UAVFSO, QD2}. In this study, we demonstrate a real-time FPGA-based high-resolution video signal transmission via the FSO channel emulator to verify the feasibility of the long-range FSO communication link for 6G.
This transmission setup models the channel characteristics of the FSO channel, including turbulence, scintillation, and power attenuation. We processed the video signal via an electrical-to-optical (E/O) converter, optical-to-electrical (O/E) converter, and FPGA module. The solar background noise critically affects the FSO transceiver and the link quality. To minimize this effect, we applied a spatial selective filtering method that adaptively reduced the field-of-view (FoV) of the receiver and suppressed the background noise generated by sunlight. We applied the proposed sampling-based pointing, acquisition, and tracking (PAT) techniques to improve the signal-to-noise ratio (SNR) and thus enhance the accuracy of the FSO signal transmission. In the main video transmission testbed, we demonstrate that the proposed communication scheme transmits and processes the video signal, even under harsh channel conditions, such as high turbulence or wind speed. {This study marks the world's first integration of an FSO channel emulator and an FPGA-based module for evaluating an FSO channel's feasibility. As such, our work represents a significant contribution toward the development of 6G mobile networks (e.g., deployment of high-altitude platform (HAP)-assisted backhaul networks) with feasible and long-distance FSO links. We also consider the proposed system a unified open platform and expect it to open opportunities for developing advanced link-enhancement technologies (e.g., satellite networks in 6G).} {The main structure of this article is as follows: 1) in Section II, we explain why the proposed long-distance FSO link demonstration is needed, including why the vulnerability of the FSO link must be combatted, based on previous studies and their limitations.
We end the section by summarizing the mechanism and the contribution of the proposed long-distance FSO link testbed.} 2) In Section III, we present the proposed FSO link prototype, which integrates the FSO channel emulator and the FPGA-based software-defined-radio (SDR) platform and models the long-distance (up to 20~km) FSO link. The proposed testbed includes various techniques to enhance the robustness of the FSO link. These include the space-selective filtering technique, which suppresses the solar background noise, and the sampling-based PAT technique with SNR enhancement. 3) In Section III, we analyze the signal-transmission result using the proposed platform. 4) In Section IV, we present some concluding remarks. \begin{figure*}[t] \centering \includegraphics[width=1.82\columnwidth]{Fig/setup2.png} \caption{{A prototype structure of video signal transmission according to parameter variation related to channel environments and block diagram of proposed feasibility validation of long-distance FSO links.}} \label{video} \end{figure*} \section{Long-Range FSO Communication System for 6G: Challenges and Solution} {In this section, we explain the challenges confronted in developing the proposed FSO link testbed, including the limitations of previous studies related to the feasibility of a long-range FSO network. Subsequently, we summarize the contribution of the proposed testbed in comparison with previous works. \subsection{Motivation for the Proposed Real-Time FSO Link Demonstration} \subsubsection{Encountered Challenges of the Long-Distance FSO Link Utilization} {To utilize FSO communication, which has a dominant position for obtaining high data rates for 6G wireless networks, as illustrated in the left part of Fig.~\ref{6GFSO}, we must overcome the vulnerability against atmospheric turbulence and other losses, such as penetration, pointing, or propagation losses.
These effects become significantly stronger as the distance between the transmitter and receiver increases. For instance, an FSO-communication-based wireless backhaul system with the assistance of UAVs, considered a novel service strategy for 6G and illustrated on the right side of Fig.~\ref{6GFSO}, assumes a ground-to-UAV link distance of up to 20~km~\cite{UAVFSO}. Therefore, researchers have focused on modeling the turbulence of the FSO signal and developing transceiver techniques to enhance the link quality under various scenarios and reflect this in the link margin strategy~\cite{PAT3, VTOWC2, SCI}.} \subsubsection{Recent Works and Limitations} {Owing to its potential in data rate and connectivity, several studies have been conducted to evaluate the actual performance and feasibility of an FSO link. Concurrently, researchers have been striving to develop related technologies for robust FSO communication, including its performance analysis by simulation. In~\cite{VTOWC2}, the authors analyzed the performance of 3D video transmission of an $N$-orbital-angular-momentum-shift-keying ($N$-OAM-SK) FSO system by applying deep-learning techniques. In~\cite{JLT2}, the authors assumed a UAV-mounted FSO communication system and maximized the flight time of the UAV by optimizing the trajectory based on various atmospheric environments and the channel and rate characteristics of FSO links. Both studies considered FSO link distances of up to approximately 1~km. In~\cite{PAT3}, the authors developed an RF lens-antenna-based PAT technique based on an accurate angle-of-arrival (AoA) estimation with a fast steering mirror (FSM) for hybrid RF/FSO communication systems with a maximum link distance of 3~km.} {To measure the actual feasibility of FSO communications, research continues on how to develop an end-to-end FSO link prototype under various transmission and channel scenarios.
The authors in~\cite{32Q} demonstrated an end-to-end link testbed with a distance of 100~m and applied an FSO signal with a 25 GHz data rate and 32-quadrature amplitude modulation (QAM), which is sent over 12 wavelength-division-multiplexing (WDM) channels. The authors in~\cite{11m} presented an end-to-end demonstration of transmitting 4- and 64-QAM signals over an FSO channel under non-uniform turbulence, with a maximum distance of 500~m. Recently, the authors in~\cite{RISE} demonstrated a coherent 100 Gbps FSO link with a link distance of only 40 m by applying dual-polarization quadrature phase shift keying (DP-QPSK).} {As indicated above, existing research works have focused on either overcoming the transmission environments or the link-quality enhancement strategies based on simulation and have assumed a relatively short FSO link of up to a few hundred meters for hardware validation. Nevertheless, formalizing the challenges and opportunities pertaining to a feasible long-distance FSO link is still considered an open problem. Indeed, it is difficult to measure the exact feasibility or properties of the FSO link for large distances of up to 20~km. This distance has also restricted researchers to analytical expressions and computer simulations.
{This is the world's first prototype that evaluates the feasibility of the long-distance FSO link by these two essential platforms for FSO and RF communications. Moreover, the novel sunlight-noise mitigation and tracking algorithms are jointly included in the testbed for link-quality enhancement, which contributes to the robustness of the FSO signal against the long 20~km transmission distance. Therefore, it can solve the vulnerability issues of long-distance FSO links that occur under atmospheric conditions and transceiver misalignment. By combining these factors, we successfully transmitted an ultra-high-definition (UHD: 3840 $\times$ 2160) video signal over a 20~km FSO link under both clear and hazy atmospheric conditions, owing to the proposed hybrid FSO/RF platform and outage mitigation techniques. Accordingly, our demonstration significantly contributes to the reliable usage of long-range FSO communication in future wireless networks, including non-terrestrial FSO backhaul networks with link distances on the scale of tens of kilometers, as described in Section~I and Fig.~\ref{6GFSO}.} \begin{figure*}[t] \centering \includegraphics[width=1.37\columnwidth]{Fig/solarber.png} \caption{Block diagram and testbed of the proposed spatial selective filtering structure and the resulting eye diagram and BER. Using the proposed adaptive noise suppressing technique, we can significantly lower the noise level and BER.} \label{select} \end{figure*} \section{System Architecture of FSO Link Prototype}\label{sec3} In this section, we present the proposed FSO link prototype, including the setup of the FSO channel model, hardware and data, and various techniques for increasing the robustness of the FSO link over distances of up to 20~km. We then analyze the signal transmission results by varying the channel conditions of the experiment.
\subsection{Path-loss Model of the FSO Communication Link} It is widely known that FSO signals undergo atmospheric attenuation, the degree of which depends on the scintillation and the size of the scattering particles. Pointing inaccuracy shifts the center of the beam away from the air terminal so that the maximum intensity is not received, and optical loss occurs owing to less-than-perfect FSO transceiver elements~\cite{PAT3}. Consequently, the received power ${P_{\mathrm{R}}}$ is determined by $L_{\mathrm{l}}$, $L_{\mathrm{p}}$ and $L_{\mathrm{o}}$, which indicate atmospheric attenuation, pointing, and optical loss, respectively~\cite{OCS2, UAVFSO}. \subsubsection{Atmospheric Attenuation: Scintillation} For $L_{\mathrm{l}}$, we must consider the scintillation and scattering effects, which are key to the vulnerability of the long-distance FSO link. We employed the scintillation (turbulence) model $L_{\mathrm{sci}}$ for atmospheric channel modeling~\cite{SCI, Wind1}. {It is a function of the refractive index structure parameter $C_n^2$ modeled by the Hufnagel-Valley (H-V) model~\cite{SCI}, which is a function of altitude $h$ in meters and wind speed $v$ in m/s.} \subsubsection{Atmospheric Attenuation: Scattering} We consider the Mie scattering model to express the scattering loss $L_{\mathrm{sca}}$~\cite{Mie2} in dB. This is induced by particles having a size similar to that of the wavelength $\lambda$. As fog, rain, and clouds can cause a scattering effect~\cite{UAVFSO}, a scattering model must be considered for each scenario. For fog and rain, we employed the Kruse model to model the effects~\cite{SCI}. This is a function of the scattering coefficient $\beta_{\textrm{sca}}$, which is a function of $\lambda$, visibility range $V$, and rainfall rate $R_{\mathrm{r}}$ in mm/h. {In the proposed testbed, ${V =10}$~km and ${V =3}$~km are chosen to represent clear weather and hazy weather, respectively.
These values are chosen to verify the transmission results based on two separate Kruse models corresponding to $V\lessgtr6$~km~\cite{UAVFSO, Mie2}.} For cloudy conditions, we used the model in~\cite{cloudconf} to express the scattering loss by clouds. Therefore, the total loss $L_{\mathrm{l}}$ in decibels is given by $L_{\mathrm{l}}=\left(\sum_{\textrm{attenuation factors}} L_{\textrm{sca}} \right)+ L_{\textrm{sci}}.$ \subsubsection{Pointing and Optical Loss} We set the pointing $\left(L_{\mathrm{p}}\right)$ and optical $\left(L_{\mathrm{o}}\right)$ losses equal to 2~dB, resulting from the misalignment and optical efficiency of the transceiver, respectively~\cite{UAVFSO}. {The pointing loss is determined by the pointing loss factor of each transceiver element, which depends on the transceiver aperture, wavelength $\lambda$, and pointing error angle~\cite{SCI}. We use the values of our hardware setup (transceiver aperture and wavelength) together with the proposed sampling-based PAT algorithm in Section III.C, which keeps the misalignment low. The optical loss can be modeled as $L_{\mathrm{o}}=10\log_{10} (\eta_t \eta_r)$ for the optical efficiencies of the FSO transmitter ($\eta_t$) and receiver ($\eta_r$).
The typical value of $\eta_t\eta_r$ is in the range of $[0.2, 0.7]$~\cite{UAVFSO, 32Q}, which depends on the optical components, and a value $0.65$ is used in our demonstration, which gives $L_{\mathrm{o}} \approx 2$~dB.} \begin{table}[t] \centering \caption{Parameter setup for video signal transmission through the FSO communication channel} \begin{tabular}{|c |c|} \hline \textbf{Parameter} & \textbf{Value (hazy, clear weather)} \\ \hline {Wind speed (height: 0~km) $(v)$} & model in~\cite{Wind1} (6, 1~m/s) \\\hline Visibility $(V)$ & 3, 10~km\\ \hline Fog layer thickness $\left(d_{\mathrm{fog}}\right)$ & 50, 0~m\\ \hline Rain layer thickness $\left(d_{\mathrm{rain}}\right)$ & 1, 0~km\\ \hline Cloud attenuation $\left(L_{\mathrm{sca, cloud}}\right)$ & model in~\cite{cloudconf}\\ \hline Wavelength $\left(\lambda\right)$ & 1550~nm\\ \hline Link distance $\left(\ell\right)$ & 20~km \\ \hline Bit rate of video signal & 35~Mbps \\ \hline Resolution and frame rate & UHD 60~fps (H.264)\\ \hline \end{tabular} \label{param} \end{table} \subsection{Hardware and Data Setup} The prototype structure used to evaluate the feasibility of the FSO link is illustrated in Fig.~\ref{video} and the detailed parameters are listed in Table~\ref{param}. For video signal transmission, we transmitted a one-minute clip of the 2008 animated short \textit{Big Buck Bunny} with 4K UHD resolution. {Since the transmission is demonstrated in an indoor environment, to accurately reproduce signal transmission in an actual outdoor scenario with a 20~km link distance, we used a Mach-Zehnder modulator (MZM), which is widely applicable in high-speed optical systems owing to its ease of fabrication and flexibility of pulse repetition~\cite{11m, MZM2}. We employed the MZM in the proposed system to model the 2D atmospheric channel and perform error correction, including both line-of-sight (LoS) and pointing errors.
Here, using the given atmospheric parameters, we generate the corresponding atmospheric channel with a distance of 20~km by the MZM, which is completed by the following procedure: we divide the transmitted FSO signal into two paths, determine the intensity and phase of the signal by applying driving voltages to each path, which causes a variation of the optical path length, and merge them~\cite{MZM2}.} The channel emulator was connected between the transceiver FPGA modules. The transmitter and receiver included a laser diode (LD) with an E/O converter and a photodiode (PD) with an O/E converter, respectively, to ensure long-term video signal transmission. {After receiving an oversampled data stream using a digital sampling oscilloscope (DSO) on the receiver side, we derived the bit-error-rate (BER) performance by using the averages and standard deviations of the received ones and zeros~\cite{32Q}. The diodes were connected to an FPGA-based PXIe SDR platform for real-time video signal processing. This involves the encoding and modulation of the bitstream of the video signal. The SDR platform at each transceiver side comprises a PXIe chassis (PXIe-1082) and FPGA controller modules (NI-7976, 5791, PXIe-8880, 8374, and 2953R for the transmitter and receiver sides). \begin{figure*}[t] \centering \includegraphics[width=1.37\columnwidth]{Fig/pat.png} \caption{Proposed multiple-sampling-based PAT technique with SNR improvement and 3D acquisition, and the misalignment result along the x-axis. By enhancing SNR through a sampling gain of $\sqrt{m}$, we can achieve far less tracking error compared to the size of the QD.} \label{patx} \end{figure*} \subsection{Proposed Techniques for Link-Quality Enhancement} Since we consider a link distance of up to 20~km, the vulnerability becomes even greater. Hence, we applied several link-quality-enhancing strategies to ensure the reliability of video signal transmission.
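To put the scale of this vulnerability in perspective, the Kruse scattering term of Section III.A can be evaluated for the two visibility settings used in the testbed. The following is a simplified numerical sketch using the standard Kruse coefficients; the function is illustrative and not part of the emulator software.

```python
def kruse_attenuation_db_per_km(visibility_km, wavelength_nm=1550.0):
    """Specific attenuation (dB/km) from the standard Kruse scattering model.

    Two branches around V = 6 km, matching the clear (V = 10 km) and
    hazy (V = 3 km) settings of the testbed.
    """
    if visibility_km > 6.0:
        q = 1.3                                    # 6 km < V < 50 km
    else:
        q = 0.585 * visibility_km ** (1.0 / 3.0)   # V < 6 km
    beta = (3.91 / visibility_km) * (wavelength_nm / 550.0) ** (-q)
    return 4.343 * beta  # convert the 1/km coefficient to dB/km

# Over the 20 km link, clear weather costs roughly 9 dB of scattering
# loss, while hazy weather costs roughly 47 dB:
clear_db = 20.0 * kruse_attenuation_db_per_km(10.0)
hazy_db = 20.0 * kruse_attenuation_db_per_km(3.0)
```

Even before scintillation, pointing, and optical losses are added, the gap between the two weather settings spans tens of decibels, which motivates the link-quality-enhancing strategies described in this subsection.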
\subsubsection{Suppressing Solar Background Noise by Spatial Selective Filtering} In this section, we present the testbed structure for suppressing the solar background noise. The FSO signal is known to be vulnerable to solar noise~\cite{QD2}, where SNR degradation is approximately 5 dB, resulting in more than 50 dB when sunlight becomes direct~\cite{OCS2}. Hence, in the proposed testbed, we suppress this solar background noise compared to the signal power using the proposed spatial selective filtering technique. {We assume that the amount of solar noise is given by (20) in~\cite{QD2}, where the amount of noise is proportional to the receiver FoV, and hence the amount of noise proportionally decays by the selected portion of the receiver aperture.} {Here, we spatially divide the given area into specific partitions (e.g., 2$\times$2, 3$\times$3, $\cdots$) and filter the area where the signal intensity is greater than that of the others. This leads to the result that the signal power of the selected region is, assuming a $2\times2$ division for example, greater than the average signal power $\frac{P_s}{4}$ for the total signal power of aperture $P_s$. Meanwhile, the noise power of the selected area is suppressed to $\frac{P_n}{4}$ for the total noise power through the total aperture $P_n$. Hence, the optical SNR becomes greater than $\frac{P_{\mathrm{s}}}{P_{\mathrm{n}}}$, which is the SNR of the conventional method without spatial filtering.} To measure the performance of the proposed spatial selection technique, we performed a point-to-point test and realized the algorithm. We split the transmitted FSO beam with a beam splitter, determined the amount of reduction of the receiver FoV using our high frame-per-second (fps) image sensor, and fed this back to the receiver.
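The SNR argument above can be made concrete with a short sketch. We assume, purely for illustration, that a known fraction of the beam power falls into the selected region; the spatially uniform solar background is what gets divided by the number of partitions.

```python
def optical_snr_gain(signal_fraction, partitions=4):
    """SNR improvement factor from selecting one of `partitions` equal regions.

    `signal_fraction` is the fraction of the total signal power P_s captured
    by the selected region (above 1/partitions when the beam is concentrated
    there).  The solar background is spatially uniform, so the selected
    region only sees P_n / partitions of the total noise power P_n.
    """
    # (signal_fraction * P_s) / (P_n / partitions) relative to P_s / P_n:
    return signal_fraction * partitions

# Illustration: if 80% of the beam power lands in one quadrant of a 2x2
# division (an assumed figure, not a measurement), the optical SNR improves
# by a factor of 3.2; a uniform beam (25% per quadrant) gives no gain.
gain = optical_snr_gain(0.8, partitions=4)
```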
Using the proposed spatial selective filtering method (shown in Fig.~\ref{select}), we can reduce the BER from an order of $10^{-3}$ to $10^{-4}$, which positively contributes to the accuracy of the long-distance video signal~transmission. \subsubsection{Link-Quality Enhancement by SNR-Improving PAT Technique} It is widely known that when the FSO transmitter and receiver are misaligned, the link quality is highly degraded, and the outage of the FSO link is increased~\cite{PAT3}. Therefore, we apply the proposed multiple-sampling-based PAT technique to our testbed, as illustrated in Fig.~\ref{patx}. To improve the SNR performance, we measured the $m$ received signals $Y_i=HX_i+n_i$ for the static channel $H$, transmitted signals $\left\{X_i\right\}_{i=1}^m$, and noise samples $\left\{n_i\right\}_{i=1}^m$ for each sampling and acquisition time with the control of the quadrant-detector (QD) receiver, and measured the concatenated optical SNR $\mathrm{SNR}_{\mathrm{c}}$ for $m$ samples. {Here, the QD receiver detects optical signals from its photoreceiver divided into quarters, estimating the beam displacement by comparing the received signal power of each quadrant $V_1, \cdots, V_4$~\cite{QD2}.} The concatenated SNR is given by $\mathrm{SNR}_{\mathrm{c}}=\frac{H\cdot\left(X_1+\cdots+X_m\right)}{\sqrt{n_1^2 +\cdots+n_m^2}}\approx \frac{mH\cdot X}{n\sqrt{m}}=\sqrt{m}\, \frac{H\cdot X}{n}$, which implies that we can obtain an SNR gain of $\sqrt{m}$ for $m$ samplings compared to a single-signal-reception scenario.
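The $\sqrt{m}$ gain can be checked with a small Monte Carlo sketch (illustrative code, not part of the FPGA implementation; the channel gain and noise level are arbitrary normalized values):

```python
import math
import random

def concatenated_snr(m, snr_single, trials=2000, seed=0):
    """Monte Carlo check of the sqrt(m) sampling gain.

    Sums m received samples Y_i = H*X + n_i (static channel, i.i.d.
    Gaussian noise) and returns the amplitude SNR of the sum.
    """
    rng = random.Random(seed)
    hx = 1.0                  # H * X, held fixed during acquisition
    sigma = hx / snr_single   # noise std chosen to set the single-sample SNR
    acc = 0.0
    for _ in range(trials):
        noise_sum = sum(rng.gauss(0.0, sigma) for _ in range(m))
        acc += noise_sum ** 2
    noise_rms = math.sqrt(acc / trials)  # approaches sigma * sqrt(m)
    return (m * hx) / noise_rms          # approaches sqrt(m) * snr_single

# For m = 10 the ratio comes out near sqrt(10), i.e., roughly 3.16 times
# the single-sample SNR, matching the analysis above.
ratio = concatenated_snr(m=10, snr_single=1.0)
```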
{Moreover, by our precise control of the QD receiver with an angle resolution on the sub-$\mu$rad scale and a compensation frequency of up to 1 kHz, the misalignment along the x-axis is reduced to 0.19 mm for ten samplings ($m=10$), which is negligible compared to the size of the QD, on the order of~1~mm~\cite{QD2}.} \subsection{Video Signal Transmission under Various Channel Conditions} By implementing the proposed testbed using the various link-quality-enhancing techniques described above, we conducted video signal transmission through the emulated FSO channel. {We assumed a four-level pulse-amplitude-modulated (PAM-4) signal for video signal transmission, which achieves lower computational complexity and power consumption for long-distance FSO signal transmission than other higher-order modulation schemes (e.g., $n$-QAM or orthogonal-frequency-division-multiplexing (OFDM)-QAM) that require a complex system architecture and enormous transmit power.} The upper part of Fig.~\ref{snap} illustrates the verification procedure for data reliability by measuring the BER. As shown in the figure, by transmitting a 4-Gbps PAM-4 signal under a weak turbulence channel with a 20-km link distance, our proposed technique can achieve a BER on the order of $10^{-4}$. This guarantees the reliability of data transmitted through the 2D channel model. The lower part of Fig.~\ref{snap} shows snapshots of the video transmission results under the channel conditions. In an environment with low turbulence and slow wind speed, the original clip can be transmitted with almost zero distortion. {Even under harsh conditions of high turbulence and fast wind speed, the clip can be sent with negligible distortion.} Therefore, we can conclude that, with our prototype of a realistic FSO channel model, researchers can confirm the feasibility of the long-distance FSO link under several channel conditions.
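The BER figures reported here are derived, as described in Section III.B, from the averages and standard deviations of the received ones and zeros; this corresponds to the standard Gaussian Q-factor approximation. A minimal sketch follows, with illustrative level statistics rather than measured testbed values:

```python
import math

def q_factor_ber(mu1, mu0, sigma1, sigma0):
    """BER estimate from the level statistics of received ones and zeros.

    Assumes Gaussian noise on both levels (standard Q-factor method):
    Q = (mu1 - mu0) / (sigma1 + sigma0), BER = 0.5 * erfc(Q / sqrt(2)).
    """
    q = (mu1 - mu0) / (sigma1 + sigma0)
    return 0.5 * math.erfc(q / math.sqrt(2.0))

# Illustrative numbers: normalized levels 1 and 0 with equal spread 0.14
# give a BER on the order of 10^-4, the order reported for the 20 km link.
ber = q_factor_ber(mu1=1.0, mu0=0.0, sigma1=0.14, sigma0=0.14)
```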
\begin{figure*}[t] \centering \includegraphics[width=1.25\columnwidth]{Fig/results.png} \caption{{Data-reliability validation process under $\ell=20~\mathrm{km}$ scenario and snapshots of video transmission in clear (low turbulence, 1~m/s wind speed, ...) and hazy (high turbulence, 6~m/s wind speed, ...) weathers. The denoted wind speed is based on the height of 0~km~\cite{Wind1}.}} \label{snap} \end{figure*} \section{Conclusion} In this article, we highlighted the utility of long-range FSO communication links for 6G networks by investigating their high-data-rate, unlicensed, and narrow-beam properties. We also emphasize that the vulnerability of the FSO communication link depends heavily on atmospheric conditions, the harshness of which increases as the link distance increases. To measure the link-level feasibility of the long-distance FSO link, we developed the first technique to model a real FSO channel with an MZM emulator and a point-to-point transceiver realized by FPGA modules. We considered the FSO channel with both low- and high-turbulence scenarios and adopted various link-quality enhancement strategies, including solar-noise suppression by selective filtering and SNR-improving PAT. To verify the performance of the strategies, we independently conducted a point-to-point FSO signal transmission experiment and demonstrated how our techniques lowered the BER and misalignment. For the main feasibility measurement, we demonstrated that even a video signal with UHD 60 fps could be received with little or no distortion using the proposed SDR-platform-based prototype. We believe that through the description of the challenges and opportunities and the proposed demonstration in this article, we have conveyed a promising insight into the benefits of implementing long-distance FSO links in future wireless networks.
Furthermore, we believe that more accurate video transmission techniques, which hold up in practice, can be modeled and tested using the proposed testbed. \section*{Acknowledgment} This work was supported by the Institute for Information \& communications Technology Promotion (IITP) grant funded by the Korean government (MSIP) (No. 2019-0-00685, Free space optical communication based vertical mobile network). \bibliographystyle{IEEEtran} \bibliography{sample} \textbf{Hong-Bae Jeon} is currently pursuing the Ph.D. degree in the School of Integrated Technology, Yonsei University, Korea. \textbf{Soo-Min Kim} is currently pursuing the Ph.D. degree in the School of Integrated Technology, Yonsei University, Korea. \textbf{Hyung-Joo Moon} is currently pursuing the Ph.D. degree in the School of Integrated Technology, Yonsei University, Korea. \textbf{Do-Hoon Kwon} is currently pursuing the Ph.D. degree in the School of Electrical and Electronic Engineering, Yonsei University, Korea. \textbf{Joon-Woo Lee} is currently pursuing the Ph.D. degree in the School of Electrical and Electronic Engineering, Yonsei University, Korea. \textbf{Jong-Moon Chung} is a Professor in the School of Electrical and Electronic Engineering, Yonsei University, Korea. \textbf{Sang-Kook Han} is a Professor in the School of Electrical and Electronic Engineering, Yonsei University, Korea. \textbf{Chan-Byoung Chae} is an Underwood Distinguished Professor at Yonsei University, Korea. His research interest includes emerging technologies for 6G and molecular communications. \textbf{Mohamed-Slim Alouini} is a Distinguished Professor of Electrical Engineering at King Abdullah University of Science and Technology (KAUST), Saudi Arabia. His current research interests include the modeling, design, and performance analysis of wireless communication systems. \end{document}
\section{Introduction} Factorization homology is a homology theory on manifolds with coefficients in suitable $\mathrm{E}_n$-algebras. The main results of this paper are: \begin{enumerate} \item We construct a $\Gtop$-enriched category $\mathrm{Mfld}^{\mathrm{fr}_{V}}_{n}$. Its objects are $V$-framed $G$-manifolds of dimension $n$. The endomorphism operad of the object $V$ is equivalent to the little $V$-disk operad of Guillou--May \cite{GM17}, thus it is an $\mathrm{E}_V$-operad. \item With this category, we define the equivariant factorization homology $\displaystyle\int_MA$ by a monadic bar construction. \item We prove the nonabelian Poincar\'e duality theorem using a geometrically defined scanning map, which establishes a weak $G$-equivalence $\displaystyle\int_MA \simeq \mathrm{Map}_*(M^+, \mathbf{B}^VA)$. \end{enumerate} Here, $M$ is a $V$-framed manifold, and $M^+$ is its one-point compactification. The coefficient $A$ is an $\mathrm{E}_V$-algebra and $\mathbf{B}^VA$ is the $V$-fold deloop of $A$. The approach in this paper follows the non-equivariant treatment in \cite{Miller}. It is a global generalization of the delooping machines of \cite{MayGILS,GM17}. The nonabelian Poincar\'e duality theorem gives a simplicial filtration on the mapping space $\mathrm{Map}_*(M^+, \mathbf{B}^VA)$, thus offering a calculational tool. There are other approaches of a different flavor to equivariant factorization homology, developed in \cite{Horev,Weelinck}. In joint work with Horev and Klang, we give an alternative proof of the nonabelian Poincar\'e duality theorem in \cite{HKZ} in Horev's context, together with an application to Thom $G$-spectra. \subsection{Factorization homology: history and equivariant} The language of factorization homology has been used to formulate and solve questions in many areas of mathematics. 
Among others, there are homological stability results in \cite{MK,Knudsen}, a reconstruction of the cyclotomic trace in \cite{AMGR} and the study of quantum field theory in \cite{BZBJ,CG}. Non-equivariantly, factorization homology has multiple origins. The most well-known approach started in Beilinson--Drinfeld's study of an algebraic geometry approach to conformal field theory \cite{BD}, under the name of chiral homology. Lurie \cite[5.5]{HA} and Ayala--Francis \cite{AF15} introduced and extensively studied the algebraic topology analogue, named factorization homology. This route relies heavily on $\infty$-categorical foundations. An alternative geometric model is Salvatore's configuration spaces with summable labels~\cite{Salvatore}. This construction is close to the geometric intuition, but not homotopical. Yet another model, using the bar construction and developed by Andrade~\cite{Andrade}, Miller~\cite{Miller} and Kupers--Miller~\cite{MK}, is homotopically well-behaved while staying close to the geometric intuition of configuration spaces. We take the third approach in this paper. To give an idea of the concept, we start with the non-equivariant story. Classically, the Dold--Thom theorem states that the symmetric product is a homology theory. For a based CW-complex $M$ with base point $*$, the symmetric product of $M$ is $\mathrm{Symm}(M) = \big( \coprod_{k \geq 0} M^k/\Sigma_k \big)/\sim, $ where $\sim$ is the base-point identification $(m_1, \cdots, m_k, *) \sim (m_1, \cdots, m_k)$. The Dold--Thom theorem states that when $M$ is connected, there are natural isomorphisms $\pi_*(\mathrm{Symm}(M)) \cong \widetilde{H}_*(M, \bZ).$ Factorization homology, viewed as a functor on manifolds, generalizes the homology theory of topological spaces. It uses the manifold structure to work with coefficients in the noncommutative setting. 
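A quick sanity check on the Dold--Thom statement above: taking $M$ to be a sphere identifies the symmetric product as an Eilenberg--MacLane space.

```latex
% Worked example of Dold--Thom for M = S^n with n >= 1:
\pi_k(\mathrm{Symm}(S^n)) \cong \widetilde{H}_k(S^n; \bZ) \cong
  \begin{cases}
    \bZ, & k = n;\\
    0,   & k \neq n;
  \end{cases}
% so Symm(S^n) is a K(\bZ, n), recovering the classical computation
% of the infinite symmetric product of a sphere.
```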
Essentially, $\displaystyle\int_MA$ is the configuration space on $M$ with summable labels in an $\mathrm{E}_n$-algebra $A$; the local Euclidean chart offers the way to sum the labels. Rigorously, \cite{Andrade, MK} define the factorization homology on $M$ to be: \begin{equation} \label{eq:intro-def1} \int_M A = \mathrm{B}(\mathrm{D}_M, \mathrm{D}_n, A), \end{equation} where $\mathrm{D}_n$ is the reduced monad associated to the little $n$-disks operad and $\mathrm{D}_M$ is the functor associated to embeddings of disks in $M$. This bar construction definition is a concrete point-set level model of the $\infty$-categorical definition \cite{HA,AF15}. We can construct a topological category $\mathrm{Mfld}^{\mathrm{fr}}_n$ of framed smooth $n$-dimensional manifolds and framed embeddings. It is a symmetric monoidal category under taking disjoint union. Let $\mathrm{Disk}^{\mathrm{fr}}_n$ be the full subcategory spanned by objects equivalent to $\sqcup_k \bR^n$ for some $k \geq 0$. An $\mathrm{E}_n$-algebra $A$ is just a symmetric monoidal topological functor out of $\mathrm{Disk}^{\mathrm{fr}}_n$. The factorization homology is the derived symmetric monoidal topological left Kan extension of $A$ along the inclusion: \begin{equation} \label{eq:intro-def2} \begin{tikzcd} \mathrm{Disk}_n^{\mathrm{fr}} \ar[r,"A"] \ar[d,hook] & (\mathrm{Top},\times) \\ \mathrm{Mfld}_{n}^{\mathrm{fr}} \ar[ur, dotted, "{\int_-A}"'] & \end{tikzcd} \end{equation} Horel \cite[7.7]{Horel} shows the equivalence of \autoref{eq:intro-def1} and \autoref{eq:intro-def2}. We could also view factorization homology as a functor of the algebra. This gives a geometric interpretation of some classical invariants of structured rings and a way to produce more. For example, $\mathrm{THH}$ of an associative ring is equivalent to the factorization homology on $S^1$. We will not make use of this perspective in this paper. 
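To make the bar construction in \autoref{eq:intro-def1} concrete, its simplicial levels can be written out as follows. This is only a sketch of the standard two-sided monadic bar construction; the names $\mu_M$, $\mu$ and $\theta$ for the right-module action, the monad multiplication and the algebra structure map are ours.

```latex
% Simplicial object whose geometric realization is \int_M A:
\mathrm{B}_q(\mathrm{D}_M, \mathrm{D}_n, A) = \mathrm{D}_M \mathrm{D}_n^{\,q} A,
\qquad q \geq 0,
% with face maps (for 1 <= i <= q-1)
d_0 = \mu_M \mathrm{D}_n^{\,q-1} A, \qquad
d_i = \mathrm{D}_M \mathrm{D}_n^{\,i-1} \mu \,\mathrm{D}_n^{\,q-i-1} A, \qquad
d_q = \mathrm{D}_M \mathrm{D}_n^{\,q-1} \theta,
% where \mu_M : D_M D_n -> D_M is the module action,
% \mu : D_n D_n -> D_n the monad multiplication, and
% \theta : D_n A -> A the algebra action; degeneracies insert the monad unit.
```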
We point out one technicality of this bar construction that takes some effort equivariantly, namely, how to give the morphism space of $\mathrm{Mfld}^{\mathrm{fr}}_{n}$. On the one hand, from the definition of the little $V$-disks operad, we want to include only the ``rectilinear'' embeddings as framed embeddings; on the other hand, we lose control of any rectilinearity once we throw the little disks into the wild category of framed manifolds. The solution is to allow all embeddings but add in path data to correct the homotopy type. This idea goes back to Steiner \cite{Steiner}, who used paths to construct an especially useful $\mathrm{E}_n$-operad. Andrade~\cite{Andrade} and Kupers--Miller \cite{MK} used paths in the framing space to define framed embedding spaces so that they do not see the unwanted rotations. An equivariant version of $\mathrm{E}_n$-algebras is given by Guillou--May's little $V$-disks operad $\mathscr{D}_V$ and $\mathrm{E}_V$-algebras in \cite{GM17}. The $\mathrm{E}_V$-algebras give the correct coefficient input of equivariant factorization homology on $V$-framed manifolds. In \autoref{sec:tangential}, we construct the category $\mathrm{Mfld}^{\mathrm{fr}_V}_n$ of $V$-framed smooth $G$-manifolds of dimension $n$. A $V$-framing of $M$ is a trivialization $\phi_M: \mathrm{T}M \cong M \times V$ of its tangent bundle. We put the $V$-framing into a general framework of tangential structures $\theta: B \to B_GO(n)$ and define the $\theta$-framed embedding space of $\theta$-framed manifolds using paths in the framing space. In \autoref{chap:homspace}, we compare this approach to an alternative one that does not make explicit use of the framing space. In \autoref{sec:eFH}, we use the $\Gtop$-enriched category $\mathrm{Mfld}^{\mathrm{fr}_V}_n$ to build $V$-framed factorization homology by a monadic bar construction. 
The ingredients to set up the bar construction are the $V$-framed little disks operad $\mathscr{D}^{\mathrm{fr}_V}_V$, the monad $\mathrm{D}^{\mathrm{fr}_V}_V$ and the functor $\mathrm{D}_{\myM}^{\mathrm{fr}_V}$, which is a right module over $\mathrm{D}^{\mathrm{fr}_V}_V$ (\autoref{defn:Dm}). \begin{defn}(\autoref{defn:factoriaztion}) The equivariant factorization homology is: \begin{equation*} \int_M A = \mathbf{B}(\mathrm{D}_{\myM}^{\mathrm{fr}_V}, \mathrm{D}^{\mathrm{fr}_V}_V, A). \end{equation*} \end{defn} In \autoref{sec:embeddingspace}, we study the homotopy type of the defined embedding space in $\mathrm{Mfld}^{\mathrm{fr}_V}_n$ and show that $\mathrm{Emb}^{\mathrm{fr}_V}(\coprod_k V, M)$, the $V$-framed embedding space, has the same homotopy type as $\conf{M}$, the ordered configuration space of $k$ points in $M$: \begin{thm}(\autoref{cor:conf}\autoref{item:corconf1}) \label{thm:intro1} Evaluating embeddings at the origin gives a $(G \times \Sigma_k)$-homotopy equivalence: \begin{equation*} ev_0: \mathrm{Emb}^{\mathrm{fr}_V}(\coprod_k V, M) \overset{\simeq}{ \to } \conf{M}. \end{equation*} \end{thm} \noindent In particular, the $V$-framed little disks operad is equivalent to the Guillou--May little $V$-disks operad (\autoref{cor:compareDV}), so it is an $\mathrm{E}_V$-operad. \subsection{Nonabelian Poincar\'e duality theorem} Our main theorem is: \begin{thm}(\autoref{thm:NPDV}) \label{thm:intro-thm2} Let $M$ be a $V$-framed manifold and $A$ be a $G$-connected $\mathrm{D}^{\mathrm{fr}_V}_V$-algebra in $\Gtop$. There is a weak $G$-equivalence: \begin{equation*} \int_M A \simeq \mathrm{Map}_*(M^+, \mathbf{B}^VA). \end{equation*} \end{thm} The proof of \autoref{thm:intro-thm2} is inspired by \cite{Miller}. There are two main ingredients: the recognition principle in \cite{MayGILS,GM17} for the local result, and the scanning map that has been studied non-equivariantly in \cite{McDuff75, BM88, MT14}. 
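To see why \autoref{thm:intro-thm2} is a duality statement, it helps to check the local case $M = V$, where $M^+ = V^+$ is the representation sphere. The sketch below uses only facts quoted above: factorization homology over a single disk returns the algebra, and the recognition principle of \cite{MayGILS,GM17} provides the last equivalence.

```latex
% Local case of nonabelian Poincare duality (sketch):
\int_V A \simeq A,
\qquad
\mathrm{Map}_*(V^+, \mathbf{B}^V A) = \Omega^V \mathbf{B}^V A \simeq A,
% the second equivalence being the equivariant recognition principle,
% which applies since A is assumed G-connected.
```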
In \autoref{sec:scanning}, we construct the scanning map, a natural transformation of right $\mathrm{D}^{\mathrm{fr}_{V}}_{V}$-functors: \begin{equation*} s: \mathrm{D}^{\mathrm{fr}_V}_{\myM}(-) \to \mathrm{Map}_c(M,\Sigma^V-), \end{equation*} and compare it to the scanning maps in the literature in \autoref{chap:appendix-scanning}. In \autoref{sec:NPD} to \autoref{sec:dimension}, we prove \autoref{thm:intro-thm2}. \subsection{Notations} \begin{itemize} \item $\Gtop$ is the $\mathrm{Top}$-enriched category of $G$-spaces and $G$-equivariant maps. \item $\topG$ is the $\Gtop$-enriched category of $G$-spaces and non-equivariant maps where $G$ acts by conjugation on the mapping space. \end{itemize} \newpage We note the following facts: \begin{enumerate} \item $\Gtop$ is the underlying $\mathrm{Top}$-enriched category of $\topG$: \begin{equation*} \Gtop(X,Y) \cong \Gtop(\mathrm{pt},\topG(X,Y)). \end{equation*} \item $\Gtop$ is a closed Cartesian monoidal category. The internal hom $G$-space is given by the morphism space in $\topG$. \end{enumerate} For orthogonal $G$-representations $V$ and $W$, we use the following notations for the mapping spaces, all of which are $G$-subspaces of $\topG(V,W)$: \begin{itemize} \item $\mathrm{Hom}(V,W)$ for linear maps; \item $\mathrm{Iso}(V,W)$ for linear isomorphisms of vector spaces; \item $\mathrm{O}(V,W)$ for linear isometries; \item $\mathrm{O}(V)$ for $\mathrm{O}(V,V)$. \end{itemize} For a space $X$ and $b \in X$, \begin{itemize} \item $P_bX$ is the path space of $X$ at the base point $b$; \item $\Omega_bX$ is the loop space of $X$ at the base point $b$; \item $\moore_bX$ is the Moore loop space of $X$ at the base point $b$, defined to be \begin{equation*} \moore_bX = \{(l,\alpha) \in \bR_{\geq 0} \times X^{\bR_{\geq 0}} | \alpha(0) = b, \ \ \alpha(t)=b \text{ for }t \geq l\}. 
\end{equation*} \end{itemize} For a space $X$, a vector space $V$ and a map $\phi: V \to X$, \begin{itemize} \item $\Omega_{\phi}X$ is $\Omega_{\phi(0)}X$; $\moore_{\phi}X$ is $\moore_{\phi(0)}X$. \end{itemize} For based spaces $X,Y$ and an unbased space $M$, \begin{itemize} \item $\mathrm{Map}_{*}(Y,X)$ is the space of based maps; \item $\mathrm{Map}_c(M,X) = \{f \in \mathrm{Map}(M,X) | \overline{f^{-1}(X \setminus *)} \text{ is compact}\}$ is the space of compactly supported maps. \end{itemize} For a space $M$ and a fiber bundle $E \to M$, \begin{itemize} \item $\conf{M}$ is the ordered configuration space of $k$ points in $M$. \item $\overconf{E}$ is the ordered configuration space of $k$ points in $E$ whose images are $k$ distinct points in $M$. \end{itemize} \subsection{Acknowledgement} This paper is the main part of my thesis. I would like to express my deepest gratitude to my advisor Peter May, who raises me up from the kindergarten of mathematics. I am indebted to Inbar Klang, whose work motivates my thesis, and to Alexander Kupers and Jeremy Miller, whose work leads to the approach in my research. I would like to thank Haynes Miller for helpful conversations and Shmuel Weinberger for being my committee member. \section{Preliminary} \label{part:preliminary} \subsection{$\Lambda$-sequences and operads} \label{sec:Lambda} To streamline the monadic bar construction in the main body, we study unital operads, reduced monads and bar constructions in more detail, using an elementary categorical framework of $\Lambda$-objects, in a separate paper with May and Zhang \cite{MZZ}. This subsection is a summary of the relevant content, and readers familiar with operads may skip it. Let $\Lambda$ be the category of based finite sets $\mathbf{n} = \{0,1,2,\cdots,n\}$ with base point $0$ and based injections. The morphisms of $\Lambda$ are generated by permutations and the ordered injections $s_i^{k} : \mathbf{k-1} \to \mathbf{k}$ that skip $i$ for $1 \leq i \leq k$. 
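As a small example of the generators just listed, consider the based injection $f: \mathbf{2} \to \mathbf{3}$ with $f(1) = 3$ and $f(2) = 1$; it factors as a permutation composed with an ordered injection.

```latex
% s_2^3 : \mathbf{2} -> \mathbf{3} skips 2, so s_2^3(1) = 1 and s_2^3(2) = 3.
% Post-composing with the transposition (1 3) of \mathbf{3} gives
f = (1\;3) \circ s_2^3 :
\quad 1 \mapsto 1 \mapsto 3,
\qquad 2 \mapsto 3 \mapsto 1.
% Every morphism of \Lambda decomposes this way into the listed generators.
```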
It is a symmetric monoidal category with wedge sum as the symmetric monoidal product. Let $(\mathscr{V},\otimes,\mathcal{I})$ be a bicomplete symmetric monoidal category with initial object $\varnothing$ and terminal object $*$. Let $\mathscr{V}_{\mathcal{I}}$ be the category of objects under the unit. Later we will mostly be concerned with $(\Gtop, \times, \mathrm{pt})$, which is Cartesian monoidal, so ${\Gtop}_{\mathrm{pt}} = \Gtop_{*}$ is the category of pointed $G$-spaces. \begin{defn} A $\Lambda$-sequence in $\mathscr{V}$ is a functor $\mathscr{E}: \Lambda^{op} \to \mathscr{V}$. It is called unital if $\mathscr{E}(\mathbf{0}) = \mathcal{I}$. The category of all $\Lambda$-sequences in $\mathscr{V}$ is denoted $\Lambda^{op}[\mathscr{V}]$, where morphisms are natural transformations of functors. The category of all unital $\Lambda$-sequences in $\mathscr{V}$ is denoted $\Lambda^{op}_{\mathcal{I}}[\mathscr{V}]$, where morphisms are natural transformations of functors that are the identity at level zero. \end{defn} The category $\Lambda^{op}[ \mathscr{V}]$ admits a symmetric monoidal structure $(\Lambda^{op}[\mathscr{V}], \Day, \mathscr{I}_0)$. It is the Day convolution of functors on the closed symmetric monoidal category $\Lambda^{op}$. The unit is given by \begin{equation*} \mathscr{I}_{0}(n) = \begin{cases} \mathcal{I}, & n=0;\\ \varnothing, & n>0. \end{cases} \end{equation*} We write $\Lambda^{op}[\mathscr{V}]_{\mathscr{I}_0}$ for the category of objects under the unit $\mathscr{I}_0$. The symmetric monoidal product $\Day$ on $\Lambda^{op}[\mathscr{V}]$ induces a symmetric monoidal product on $\Lambda^{op}[\mathscr{V}]_{\mathscr{I}_0}$ and its subcategory $\Lambda^{op}_{\mathcal{I}}[\mathscr{V}]$, which we still denote by $\Day$. 
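For concreteness, here is a sketch of the Day convolution in this setting, written as a coend, together with a verification that $\mathscr{I}_0$ is the unit; the coend formula is the standard one for presheaves on a monoidal category, stated here without proof.

```latex
% Day convolution of Lambda-sequences, as a coend over pairs of objects:
(\mathscr{D} \Day \mathscr{E})(\mathbf{n})
  = \int^{\mathbf{p}, \mathbf{q} \in \Lambda}
      \Lambda(\mathbf{n}, \mathbf{p} \vee \mathbf{q})
      \cdot \mathscr{D}(\mathbf{p}) \otimes \mathscr{E}(\mathbf{q}).
% Unit check: I_0 is concentrated in level 0, and 0 wedge q = q, so
(\mathscr{I}_0 \Day \mathscr{E})(\mathbf{n})
  \cong \int^{\mathbf{q} \in \Lambda}
      \Lambda(\mathbf{n}, \mathbf{q}) \cdot \mathscr{E}(\mathbf{q})
  \cong \mathscr{E}(\mathbf{n}),
% the last isomorphism by the co-Yoneda lemma.
```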
\begin{rem} To clarify a possible confusion with notation, note that $\mathscr{E} \in \Lambda^{op}_{\mathcal{I}}[\mathscr{V}]$ is a unital $\Lambda$-sequence with $\mathscr{E}(\mathbf{0}) = \mathcal{I}$, while $\mathscr{F} \in \Lambda^{op}[\mathscr{V}]_{\mathscr{I}_0}$ comes with a specified map $\mathcal{I} \to \mathscr{F}(\mathbf{0})$. $\mathscr{F}$ is called a \emph{unitary} $\Lambda$-sequence in \cite{MZZ}. \end{rem} Both categories highlighted in the remark above admit a (nonsymmetric) monoidal product $\odot$ in addition to $\Day$. It is analogous to Kelly's circle product on symmetric sequences \cite{Kelly05}. The unit for $\odot$ is given by $$ \mathscr{I}_{1}(n) = \begin{cases} \mathcal{I}, & n=0,1;\\ \varnothing, & n>1; \end{cases} $$ where the only non-trivial morphism $\mathscr{I}_1(1) \to \mathscr{I}_1(0)$ is the identity. For a brief definition of $\odot$, see \autoref{defn:LambdaObjectCircle}~\autoref{item:associate2}; for a detailed definition, see \cite{MZZ}. For a $\Lambda$-sequence $\mathscr{E}$, the spaces $\mathscr{E}(\mathbf{k})$ admit $\Sigma_k$-actions, so $\mathscr{E}$ has an underlying symmetric sequence. Though not relevant to this paper, it is surprising that the Day convolution of $\Lambda$-sequences agrees with the Day convolution of symmetric sequences: \begin{thm}(\cite[Theorem 3.3]{MZZ}) For $\mathscr{D}, \mathscr{E} \in \Lambda^{op}[\mathscr{V}]$, there is an isomorphism of symmetric sequences $\mathscr{D} \Day_{\Sigma} \mathscr{E} \to \mathscr{D} \Day_{\Lambda} \mathscr{E}$. \end{thm} \noindent Of course, Kelly's circle product on symmetric sequences does not agree with the circle product on $\Lambda$-sequences. An operad in $\mathscr{V}$, as defined in \cite{May84}, gives an example of a symmetric sequence in $\mathscr{V}$. If the operad is unital, meaning that the $0$-space of the operad is the unit, it has the structure of a $\Lambda$-sequence in $\mathscr{V}$. 
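To spell out the $\Lambda$-structure on a unital operad mentioned above: the contravariant action of the ordered injections plugs the unit into one input slot. A sketch in the Cartesian case $\mathscr{V} = \mathrm{Top}$, with $\gamma$ denoting the operad composition and $1 \in \mathscr{C}(1)$ the identity element:

```latex
% For a unital operad C with C(0) = pt, the injection s_i^k acts on c in C(k) by
(s_i^k)^{*}(c)
  = \gamma\big(c;\, \underbrace{1, \dots, 1}_{i-1},\; *,\;
                    \underbrace{1, \dots, 1}_{k-i}\big)
  \in \mathscr{C}(k-1),
% i.e. the i-th input is filled by the unique point of C(0);
% the permutations of Lambda act through the symmetric group actions.
```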
In fact, we have the unital variant of Kelly's observation \cite{Kelly05}: \begin{thm}(\cite[Theorem 0.10]{MZZ}) \label{thm:operad} A unital operad in $\mathscr{V}$ is a monoid in the monoidal category $(\Lambda^{op}_{\mathcal{I}}[\mathscr{V}], \odot, \mathscr{I}_1)$. \end{thm} When $\mathscr{V} = \mathrm{Top}$ or $\mathscr{V} = \Gtop$, a unital operad is also called a reduced operad in \cite{May84}. \medskip We give a construction which will be used in the definition of equivariant factorization homology: the associated functor of a unital $\Lambda$-sequence. This construction specializes to the reduced monad associated to a reduced operad of \cite{May84} when $\mathscr{V}$ is Cartesian monoidal; it also appears in the definition of the circle product $\odot$. \begin{con} \label{cons:covariantFunctorXdot} Let $(\mathscr{W},\otimes,\mathcal{J})$ be a symmetric monoidal category and $X \in \mathscr{W}_{\mathcal{J}}$ be an object under the unit. Define $X^{*}: \Lambda \to \mathscr{W}$ to be the covariant functor that sends $\mathbf{n}$ to $X^{\otimes n}$. On morphisms, it sends the permutations to permutations of the $X$'s and sends the injection $s_i^k: \mathbf{k-1} \to \mathbf{k}$ for $1 \leq i \leq k$ to the map \begin{equation*} \begin{tikzcd} (s_i^k)_{*}: X^{\otimes {k-1}} \cong X^{\otimes {i-1}} \otimes \mathcal{J} \otimes X^{\otimes {k-i}} \arrow{rr}{\mathrm{id}^{i-1} \otimes \eta \otimes \mathrm{id}^{k-i}} & & X^{\otimes k}, \end{tikzcd} \end{equation*} where $\eta: \mathcal{J} \to X$ is the unit map of $X$. By convention, $X^{\otimes 0} = \mathcal{J}$. \end{con} This defines a functor $(-)^{*}: \mathscr{W}_{\mathcal{J}} \to \mathrm{Fun}^{\otimes}(\Lambda, \mathscr{W})$. Here, $\mathrm{Fun}^{\otimes}(\Lambda, \mathscr{W})$ is the category of strong symmetric monoidal functors from $\Lambda$ to $\mathscr{W}$. 
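A minimal example of \autoref{cons:covariantFunctorXdot}: take $\mathscr{W} = (\mathrm{Top}, \times, \mathrm{pt})$ and $X$ a based space, so the unit map picks out the base point $*$.

```latex
% Here X^*(n) = X^n, permutations permute coordinates, and the ordered
% injection s_i^k inserts the base point into the i-th slot:
(s_i^k)_{*}(x_1, \dots, x_{k-1})
  = (x_1, \dots, x_{i-1},\, *,\, x_i, \dots, x_{k-1}).
% This is the covariant functor underlying the base point identifications
% in the symmetric product and in the reduced monad construction.
```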
\begin{rem} The functor $(-)^{*}$ defined above is in fact an equivalence, with an inverse given by the forgetful functor $ \mathrm{Fun}^{\otimes}(\Lambda, \mathscr{W}) \to \mathscr{W}_{\mathcal{J}}$ that sends $\mathcal{X}$ to $\mathcal{X}(\mathbf{1})$. \end{rem} Assume that $(\mathscr{W},\otimes,\mathcal{J})$ is a cocomplete symmetric monoidal category tensored over $\mathscr{V}$. Then one can form the categorical tensor product over $\Lambda$ of the contravariant functor $\mathscr{E}$ and the covariant functor $X^{*}$. \begin{con}\label{defn:functor} Let $\mathscr{E} \in \Lambda^{op}[\mathscr{V}]_{\mathscr{I}_0}$ be a unitary $\Lambda$-sequence. The functor \begin{equation*} \mathrm{E}: \mathscr{W}_{\mathcal{J}} \to \mathscr{W}_{\mathcal{J}} \end{equation*} associated to $\mathscr{E}$ is defined to be \begin{equation*} \mathrm{E}(X) = \mathscr{E} \otimes_{\Lambda} X^{*} = \coprod_{k \geq 0} \mathscr{E}(k) \otimes X^{\otimes k}/\approx, \end{equation*} where $(\alpha^{*} f, \mathbf{x}) \approx (f, \alpha_{*} \mathbf{x})$ for all $f \in \mathscr{E}(m)$, $\mathbf{x} \in X^{\otimes n}$ and $\alpha \in \Lambda(\mathbf{n}, \mathbf{m})$. The unit map of $\mathrm{E}(X)$ is given by $\mathcal{J} \cong \mathcal{I} \otimes \mathcal{J} \to \mathscr{E}(0) \otimes X^{\otimes 0} \to \mathrm{E}(X)$. \end{con} \begin{rem} \label{rmk:equi-relation} It is sometimes useful to take the quotient in two steps and use the following alternative formula for $\mathrm{E}$: \begin{equation*} \mathrm{E}(X) = \coprod_{k \geq 0} \mathscr{E}(k) \otimes_{\Sigma_k} X^{\otimes k}/ \sim, \end{equation*} where $[(s_i^{k})^{*} f, \mathbf{x}] \sim [f, (s^{k}_{i})_{*} \mathbf{x}]$ for all $f \in \mathscr{E}(k)$, $\mathbf{x} \in X^{\otimes k-1}$. We will use $\approx$ or $\sim$ for the equivalence relation to make clear which formula we are using, and we refer to $\sim$ as the base point identification. \end{rem} \begin{con} We focus on the following context of \autoref{defn:functor}. 
\label{defn:LambdaObjectCircle} \begin{enumerate} \item \label{item:associate1} Let $\mathscr{W} = \mathscr{V}$. The associated functor is $\mathrm{E}: \mathscr{V}_{\mathcal{I}} \to \mathscr{V}_{\mathcal{I}}$. In particular, taking $\mathscr{V} = \Gtop$, one gets for a reduced $G$-operad $\mathscr{C} \in \Lambda^{op}_{*}[\Gtop]$ the \emph{reduced monad} \begin{equation*} \mathrm{C}: \Gtop_{*} \to \Gtop_{*}. \end{equation*} \item \label{item:associate2} Let $\mathscr{W} = (\Lambda^{op}[\mathscr{V}], \Day, \mathscr{I}_{0})$ via the Day monoidal structure. Then $\mathscr{W}$ is tensored over $\mathscr{V}$ in the obvious way by levelwise tensoring. One gets the \emph{circle product} for $\mathscr{E} \in \Lambda^{op}[\mathscr{V}]_{\mathscr{I}_0}$ and $\mathscr{F} \in \Lambda^{op}[\mathscr{V}]_{\mathscr{I}_0}$ by: \begin{equation*} \mathscr{E} \odot \mathscr{F} := \mathscr{E} \otimes_{\Lambda} \mathscr{F}^{*} \in \Lambda^{op}[\mathscr{V}]_{\mathscr{I}_0}. \end{equation*} \end{enumerate} \end{con} These two cases are further related: the $0$-th level functor \begin{equation*} \imath_0:\mathscr{V} \to\Lambda^{op}[\mathscr{V}] , \ (\imath_0X)(n) = \begin{cases} X, & n=0;\\ \varnothing, & n>0; \end{cases} \end{equation*} gives an inclusion of a full symmetric monoidal subcategory, so we have \begin{equation} \label{eq:monad-and-circle} \imath_0 (\mathrm{E}X) \cong \imath_0( \mathscr{E} \otimes_{\Lambda} X^{*}) \cong \mathscr{E} \otimes_{\Lambda} (\imath_0(X)^{*}) \cong \mathscr{E} \odot \imath_0 X. \end{equation} In words, the reduced monad construction is what happens at the 0-space of the circle product. Using this, one can show \begin{prop}(\cite[Proposition 6.2]{MZZ}) \label{prop:functorOfCircle} Let $\mathrm{E},\mathrm{F}: \mathscr{V}_{\mathcal{I}} \to \mathscr{V}_{\mathcal{I}}$ be the functors associated to $\mathscr{E}$ and $\mathscr{F}$. Then the functor associated to $\mathscr{E} \odot \mathscr{F}$ is $\mathrm{E} \circ \mathrm{F}$. 
\end{prop} A monad is a monoid in the functor category. Using the associativity of the circle product and \autoref{eq:monad-and-circle}, it is easy to prove that when $\mathscr{C}$ is an operad, the associated functor $\mathrm{C}$ in \autoref{defn:functor} is a monad. The following construction gives examples of monoids and modules in $(\Lambda^{op}_{\mathcal{I}}[\mathscr{V}], \odot)$: \begin{con}(\cite[Section 8]{MZZ}) \label{prop:endoperad} Suppose that we have a $\mathscr{V}$-enriched symmetric monoidal category $(\mathscr{W}, \otimes, \mathcal{I}_{\mathscr{W}})$ such that $\ul{\mathscr{W}}(\mathcal{I}_{\mathscr{W}}, Y) \cong \mathcal{I}_{\mathscr{V}}$ for all objects $Y$ of $\mathscr{W}$. Then we can construct a $\Lambda^{op}_{\mathcal{I}_\mathscr{V}}[\mathscr{V}]$-enriched category $\mathcal{H}_{\mathscr{W}}$. The objects are the same as those of $\mathscr{W}$, while the enrichment is given by \begin{equation*} \ul{\mathcal{H}_{\mathscr{W}}}(X,Y) = \ul{\mathscr{W}}(X^{\otimes *},Y). \end{equation*} The definition of the composition in $\mathcal{H}_{\mathscr{W}}$ is similar to the structure maps of an endomorphism operad. So, for any objects $X, Y, Z$ of $\mathscr{W}$, $\ul{\mathcal{H}_{\mathscr{W}}}(Y,Y)$ is a monoid in $(\Lambda^{op}_{\mathcal{I}}[\mathscr{V}], \odot)$, $\ul{\mathcal{H}_{\mathscr{W}}}(X,Y)$ is a left module over it, and $\ul{\mathcal{H}_{\mathscr{W}}}(Y,Z)$ is a right module. In light of \autoref{thm:operad}, $\ul{\mathcal{H}_{\mathscr{W}}}(Y,Y)$ is a unital operad, the endomorphism operad. The assumption $\ul{\mathscr{W}}(\mathcal{I}_{\mathscr{W}}, Y) \cong \mathcal{I}_{\mathscr{V}}$ is automatically satisfied if $\mathscr{W}$ is coCartesian monoidal. 
\end{con} We will use that the circle product is strong symmetric monoidal in the first variable: \begin{prop}(\cite[Proposition 4.7]{MZZ}) \label{prop:circledistribute} For any $\mathscr{E} \in \Lambda^{op}[\mathscr{V}]_{\mathscr{I}_0}$, the functor $ - \odot \mathscr{E}$ on $(\Lambda^{op}[\mathscr{V}]_{\mathscr{I}_{0}}, \Day ,\mathscr{I}_{0})$ is strong symmetric monoidal. That is, the circle product distributes over the Day convolution: for any $\mathscr{D}, \mathscr{D}' \in \Lambda^{op}[\mathscr{V}]_{\mathscr{I}_{0}}$, we have \begin{equation*} (\mathscr{D} \Day \mathscr{D}') \odot \mathscr{E} \cong (\mathscr{D} \odot \mathscr{E}) \Day (\mathscr{D}' \odot \mathscr{E}). \end{equation*} \end{prop} \subsection{Equivariant bundles} \label{sec:equiBundle} In this paper, we characterize the $V$-framing of a $G$-manifold and the space of $V$-framed maps using equivariant bundles. This approach has the advantage of being very concrete. In this subsection, we list some preliminary results for the reader's reference, with a focus on vector bundles. The proofs, as well as a clarification of different notions of equivariant fiber bundles, can be found in the companion paper \cite{ZouBundle}. Let $G$ and $\Pi$ be compact Lie groups, where $G$ is the ambient action group and $\Pi$ is the structure group. \begin{defn} \label{defn:Gvector} A $G$-$n$-vector bundle is a map $p:E \to B$ such that the following statements hold: \begin{enumerate} \item The map $p$ is a non-equivariant $n$-dimensional vector bundle; \item Both $E$ and $B$ are $G$-spaces and $p$ is $G$-equivariant; \item \label{item:Gvector3} The $G$-action is linear on fibers. 
\end{enumerate} \end{defn} \begin{defn} \label{defn:Gprincipal} A principal $G$-$\Pi$-bundle is a map ${p:P \to B}$ such that the following statements hold: \begin{enumerate} \item The map $p$ is a non-equivariant principal $\Pi$-bundle; \item Both $P$ and $B$ are $G$-spaces and $p$ is $G$-equivariant; \item The actions of $G$ and $\Pi$ commute on $P$. \end{enumerate} \end{defn} \begin{rem} This is called a principal $(G,\Pi)$-bundle in \cite[IV1]{LMS86}. \end{rem} As in the non-equivariant case, we write the $\Pi$-action on the right of a principal $G$-$\Pi$-bundle $P$; for the convenience of forming diagonal actions, we consider $P$ to have a left $\Pi$-action, that is, $\ele \in \Pi$ acts on $z \in P$ by $\ele z = z\ele^{-1}$. \begin{thm} \label{thm:G-structure-1} There is an equivalence of categories between \{$G$-$n$-vector bundles over $B$\} and \{principal $G$-$O(n)$-bundles over $B$\}. \end{thm} To deal with more general cases than $G$-vector bundles, for example, Atiyah's Real vector bundles, tom~Dieck \cite{TD} introduced a complex conjugation action of $C_2$ on the structure group $U(n)$. Lashof--May \cite{LM86} had the idea to further introduce a total group that is the extension of the structure group $\Pi$ by $G$. Tom~Dieck's work became a special case of a split extension, or equivalently a semidirect product. \begin{defn}(\cite{LM86}) \label{defn:GGammaprincipal} Let $1 \to \Pi \to \Gamma \to G \to 1$ be an extension of compact Lie groups. A principal $(\Pi;\Gamma)$-bundle is a map $p:P \to B$ such that the following statements hold: \begin{enumerate} \item The map $p$ is a non-equivariant principal $\Pi$-bundle; \item The space $P$ is a $\Gamma$-space; $B$ is a $G$-space. Viewing $B$ as a $\Gamma$-space by pulling back the action, the map $p$ is $\Gamma$-equivariant. 
\end{enumerate} A morphism between two principal $(\Pi;\Gamma)$-bundles $p_1: P_1 \to B_1$ and $p_2: P_2 \to B_2$ is a pair of maps $(\bar{f},f)$ fitting in the commutative diagram \begin{equation*} \begin{tikzcd} P_1 \ar[r,"{\bar{f}}"] \ar[d,"{p_1}"'] & P_2 \ar[d,"{p_2}"] \\ B_1 \ar[r,"f"] & B_2 \end{tikzcd} \end{equation*} such that $f$ is $G$-equivariant and $\bar{f}$ is $\Gamma$-equivariant. \end{defn} Taking the extension to be $\Gamma = \Pi \times G$, we recover the principal $G$-$\Pi$-bundles of \autoref{defn:Gprincipal}. In this case we have two names for the same object. This could be confusing, but since ``principal $G$-$\Pi$-bundle'' reads more naturally than ``principal $(\Pi; \Pi \times G)$-bundle'', we will keep both names. There is also a structure theorem identifying the category of equivariant principal bundles of \autoref{defn:GGammaprincipal} with suitable equivariant fiber bundles: \begin{thm}(\cite[IV1]{LMS86}) \label{thm:G-structure-2} For any $\Pi$-effective $\Gamma$-space $F$ and $G$-space $B$, there is an equivalence of categories between \{$G$-fiber bundles with structure group $\Pi$, total group $\Gamma$ and fiber $F$ over $B$\} and \{principal $(\Pi;\Gamma)$-bundles over $B$\}. \end{thm} There are two subtleties here: one is that the fiber $F$ needs to have a preassigned $\Gamma$-action; the other is how to define the structure group of a fiber bundle. We skip the details, and the interested reader may refer to the original reference or \cite{ZouBundle} for explanations. \subsubsection{$V$-trivial bundles} A $G$-vector bundle $E \to B$ is $V$-trivial for an $n$-dimensional $G$-representation $V$ if there is a $G$-vector bundle isomorphism $E \cong B \times V$. Such an isomorphism is a $V$-framing of the bundle. This is analogous to the case of non-equivariant vector bundles, except that equivariance adds in the complexity of a representation $V$ that is part of the data. 
However, the representation $V$ in the equivariant trivialization of a fixed vector bundle may not be unique. Let $\mathrm{Iso}(V,W)$ be the space of linear isomorphisms $V \to W$ with the conjugation $G$-action for $G$-representations $V$ and $W$. \begin{lem} \label{lem:trivialbundle} For a $G$-space $B$, there exists a $G$-vector bundle isomorphism $B \times V \cong B \times W$ if and only if there exists a $G$-map $f: B \to \mathrm{Iso}(V,W)$. \end{lem} \begin{cor} \label{cor:trivialbundle} If $B$ has a $G$-fixed point, then $B \times V \cong B \times W$ only when $V \cong W$. \end{cor} \begin{exmp}[Counterexample] Let $G=C_2$ and let $\sigma$ be the sign representation. The unit sphere, $S(2\sigma)$, is $S^1$ with the 180-degree rotation action. As $C_2$-vector bundles, \begin{equation*} S(2\sigma) \times \mathbb{R}^2 \cong S(2\sigma) \times 2\sigma. \end{equation*} \end{exmp} \begin{exmp}[Counterexample, due to Gus Longerman] Take $G$ to be any compact Lie group and $V$ and $W$ to be any two representations of $G$ of the same dimension. Then $G \times V \cong G \times W$. \end{exmp} \subsubsection{$V$-framing bundles} Equivariantly, $G$-representations serve the role of vector spaces, and there can be more than one of them in each dimension. So it is natural to consider the $V$-framing bundle for an orthogonal $n$-dimensional representation $V$. \begin{defn} \label{defn:frv} Let $p:E \to B$ be a $G$-$n$-vector bundle. Let $\mathrm{Fr}_V(E)$ be the space of admissible maps with the $G$-action $ g(\psi) = g \psi \rho(g)^{-1}.$ \end{defn} \noindent In other words, $\mathrm{Fr}_V(E)$ has the same underlying space as $\mathrm{Fr}_{\bR^n}(E)$, but we think of admissible maps as mapping out of $V$ instead of $\bR^n$. One would expect that $\mathrm{Fr}_V(E)$ is some principal bundle in the sense of \autoref{defn:GGammaprincipal}. Although this is true, it does not capture the complete data. Let $V$ be given by $\rho: G \to O(n)$. 
We write $O(V)$ for the group $O(n)$ with the data $G \to \mathrm{Aut}(O(n))$ given by $g(\ele) = \rho(g) \ele \rho(g)^{-1}$ for $g \in G$ and $\ele \in O(n)$, so it is clear what $O(V) \rtimes G$ means. This convention coincides with the conjugation $G$-action on $O(V)$ thought of as a mapping space. \begin{prop} \label{prop:frv} $\mathrm{Fr}_V(E)$ is a principal $(O(V);O(V) \rtimes G)$-bundle and we have an isomorphism of $G$-vector bundles: \begin{equation*} E \cong (\mathrm{Fr}_V(E) \times V)/O(n). \end{equation*} \end{prop} Note that we have an isomorphism \begin{equation} \label{eq:2} \begin{array}{ccc} O(V) \rtimes G & \cong & O(n) \times G \\ (\ele,g) & \leftrightarrow & (\ele\rho(g), g). \end{array} \end{equation} So $\mathrm{Fr}_V(E)$ and $\mathrm{Fr}_{\bR^n}(E)$ are the same even as principal $(\Pi;\Gamma)$-bundles, where \begin{equation*} (\Pi;\Gamma) = (O(V); O(V) \rtimes G) \cong (O(n), O(n) \times G). \end{equation*} The $G$-actions tell them apart, which we explain from two perspectives. When $1 \to \Pi \to \Gamma \to G \to 1$ is a split extension, inclusion into the second coordinate gives a canonical inclusion $G \to \Gamma$, which gives a $G$-action on the total space of a principal $(\Pi;\Gamma)$-bundle. These are the $G$-actions on $\mathrm{Fr}_{\bR^n}(E)$ and $\mathrm{Fr}_V(E)$. The isomorphism \autoref{eq:2} is not compatible with the splittings, resulting in the different $G$-actions. In fact, $G$ in the second line of \autoref{eq:19} includes as the graph subgroup $\Lambda_{\rho} = \{(\rho(g), g) | g \in G\} \subgroup O(n) \times G$. 
\begin{equation} \label{eq:19} \begin{tikzcd} 1 \ar[r] & O(n) \ar[d,equal] \ar[r] & O(n) \times G \ar[d, "\cong", "\autoref{eq:2}"'] \ar[r] & G \ar[d,equal] \ar[r] \ar[l, shift right, dotted, "{(e,g) \mapsfrom g}"'] & 1 \\ 1 \ar[r] & O(V) \ar[r] & O(V) \rtimes G \ar[r] & G \ar[r] \ar[l, shift left, dotted,"{(e,g)\mapsfrom g}"]& 1 \end{tikzcd} \end{equation} The second perspective to see the difference of $\mathrm{Fr}_V(E)$ and $\mathrm{Fr}_{\bR^n}(E)$ is via the different $G$-actions on the fiber $\bR^n$ to recover $E$ in \autoref{prop:frv}. Recall that the fiber of an equivariant fiber bundle should have an action of the extended structure group $\Gamma$ (See \autoref{thm:G-structure-2}); for split extensions this is equivalent to specifying a $G$-action. To recover $E$ from $\mathrm{Fr}_V(E)$, the fiber with the appropriate $G$-action is exactly the representation $V$ thought of as a $G$-space. \subsubsection{Fixed points} Let $H \subgroup G$ be a subgroup and $\mathrm{Rep}(H,\Pi)$ be the set: \begin{equation*} \mathrm{Rep}(H,\Pi) = \{ \text{group homomorphism }\rho: H \to \Pi\}/ \Pi \text{-conjugation}. \end{equation*} A group homomorphism $\rho: H \to \Pi$ gives a subgroup $\Lambda_{\rho} \subgroup (\Pi \times G)$ via its graph: $$\Lambda_{\rho} = \{(\rho(h) ,h)| h \in H\}.$$ Denote the centralizer of the image of $\rho$ in $\Pi$ by $\mathrm{Z}_{\Pi}(\rho)$. We have \begin{equation*} \mathrm{Z}_{\Pi}(\rho) = \Pi \cap \mathrm{Z}_{\Pi \times G}(\Lambda_{\rho}) = \{\ele \in \Pi| \ele \rho(h) = \rho(h) \ele \text{ for all } h \in H\}. \end{equation*} \cite[Theorem 12]{LM86} gives complete information on the fixed-point spaces of a principal $(\Pi;\Gamma)$-bundle. We focus on the special case of the trivial extension $\Gamma = \Pi \times G$ when a principal $(\Pi; \Gamma)$-bundle is just a principal $G$-$\Pi$-bundle. Take $p: P \to B$ to be such a principal $G$-$\Pi$-bundle. 
Then Lashof--May's quoted theorem associates to each component $B_0 \subset B^H$ a conjugacy class $[\rho] \in \mathrm{Rep}(H,\Pi)$: \begin{thm} \label{thm:LM} $\{\rho: H \to \Pi | \big(p^{-1}(B_0)\big)^{\Lambda_{\rho}} \not= \varnothing \}$ forms a single conjugacy class of representations. Furthermore, the (non-equivariant) principal $\Pi$-bundle $p^{-1}(B_0) \to B_0$ has a reduction of the structure group from $\Pi$ to the closed subgroup $Z_{\Pi}(\rho) \subgroup \Pi$. \end{thm} Note that a bundle morphism preserves the associated representation $[\rho]$. Also, $ P^{\Lambda_{\rho}} \to p(P^{\Lambda_{\rho}})$ is a principal $Z_{\Pi}(\rho)$-bundle for a fixed representation $\rho$. \subsubsection{Equivariant classifying spaces} The universal principal $(\Pi;\Gamma)$-bundle can be recognized by the following property: \begin{thm}(\cite[Theorem 9]{LM86}) \label{thm:universalbundle} A principal $(\Pi; \Gamma)$-bundle $p:E \to B$ is universal if and only if \begin{equation*} E^{\Lambda} \simeq *, \text{ for all subgroups } \Lambda \subset \Gamma \text{ such that } \Lambda \cap \Pi = \{e\}. \end{equation*} \end{thm} When $\Gamma = \Pi \times G$, such a subgroup $\Lambda$ comes in the form of $\Lambda_{\rho}$ as defined above. \begin{notn} \label{notn:universal} The universal principal $G$-$O(n)$-bundle is denoted $E_GO(n) \to B_GO(n)$; the universal principal $(\Pi; \Gamma)$-bundle is denoted $E(\Pi;\Gamma) \to B(\Pi;\Gamma)$. We denote the universal $G$-$n$-vector bundle by $\universal \to B_GO(n)$, where $$\universal = E_GO(n) \times_{O(n)} \bR^n.$$ \end{notn} From information about the fixed-point spaces and \autoref{thm:universalbundle}, one gets the $G$-homotopy type of the universal base: \begin{thm}(\cite[Theorem 2.17]{L82}) \label{thm:BGO(n)} \begin{align*} (B_GO(n))^{G} & \simeq \coprod_{[\rho] \in \mathrm{Rep}(G,O(n))}B\mathrm{Z}_{O(n)}(\rho); \\ & \simeq \coprod_{[V] \in \mathrm{Rep}(G,O(n))}B (O(V)^G).
\end{align*} \end{thm} \begin{exmp} Take $H=G=C_2$ and $\Pi=O(2)$. Then $$\mathrm{Rep}(C_2,O(2)) = \{\mathrm{id}, \text{ rotation}, \text{ reflection}\}.$$ For $\rho=\mathrm{id}$ or $\rho=\text{rotation}$, $Z_{\Pi}(\rho)=O(2)$. For $\rho=\text{reflection}$, $Z_{\Pi}(\rho) \cong \bZ/2 \times \bZ/2$. So \begin{equation*} (B_{C_2}O(2))^{C_2} \simeq BO(2) \sqcup BO(2) \sqcup B(\bZ/2 \times \bZ/2). \end{equation*} \end{exmp} One can make explicit the classifying maps of $V$-trivial bundles as follows. A $G$-map $\theta: \mathrm{pt} \to B_GO(n)$ gives the following data: it lands in one of the $G$-fixed components of $B_GO(n)$, indexed by a representation class $[V]$; its image is a $G$-fixed point $b \in B_GO(n)$. \begin{prop} \label{rmk:classifyingV} The pullback of the universal bundle is $\theta^{*}\universal \cong V$. \end{prop} The loop space of $B_GO(n)$ at the base point $b$, $\Omega_b B_GO(n)$, is a $G$-space with the pointwise $G$-action on the loops. Via concatenation of loops, it is an $A_{\infty}$-algebra in $G$-spaces. Using the Moore loop space $\moore_b B_GO(n)$, whose definition we omit here, we can strictify $\Omega_bB_GO(n)$ to a $G$-monoid. \begin{defn} A $G$-monoid is a monoid in $G$-spaces, that is, an underlying monoid such that the multiplication is $G$-equivariant. A morphism of $G$-monoids is an equivalence if it is a weak $G$-equivalence. \end{defn} \begin{thm} \label{cor:monoidmap} Let $O(V)$ be the space of isometric self-maps of $V$ with $G$ acting by conjugation. \begin{enumerate} \item There is a $G$-homotopy equivalence $\Omega_b B_GO(n) \simeq O(V)$; \item There is an equivalence of $G$-monoids $\moore_b B_GO(n) \simeq O(V)$. \end{enumerate} \end{thm} \autoref{cor:monoidmap} is used later in \autoref{thm:autoV} for understanding the automorphism space of a framed disk $V$.
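As a sanity check (ours, not part of the source argument), the centralizer computation for the reflection class in the example above can be verified directly; the representative $r = \mathrm{diag}(1,-1)$ is our choice.

```latex
% Assumed representative: \rho sends the generator of C_2 to r = diag(1,-1).
% Every element of O(2) is a rotation R_\theta or a reflection R_\theta r, and
% conjugation by r reverses the rotation angle: r R_\theta r = R_{-\theta}.
% So R_\theta commutes with r iff R_\theta = R_{-\theta}, i.e. \theta \in \{0,\pi\}:
\begin{equation*}
  \mathrm{Z}_{O(2)}(\rho)
  = \{I,\ R_{\pi},\ r,\ R_{\pi}r\}
  = \left\{ \pm
    \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\ \pm
    \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
  \right\}
  \cong \bZ/2 \times \bZ/2.
\end{equation*}
% Since R_\pi = -I is central, this group is elementary abelian, as claimed.
```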
\begin{rem} \label{rem:OV} Explicitly, the equivalence of $G$-monoids is given by a zigzag \begin{equation} \label{eq:OV} \begin{tikzcd} \moore_b B_GO(n) & (\widetilde{\moore}_bE_GO(n))/O(n) \ar[l,"\xi"',"\simeq"] \ar[r,"\psi","\simeq"'] & O(V). \end{tikzcd} \end{equation} Here, let $p$ denote the universal principal $G$-$O(n)$-bundle $E_GO(n) \to B_GO(n)$; we define \begin{equation*} \widetilde{\moore}_bE_GO(n)= \{(l,\alpha) | l \in \bR_{\geq 0}, \alpha : \bR_{\geq 0} \to E_GO(n), p(\alpha(0)) = p(\alpha(t)) = b \text{ for }t \geq l\}, \end{equation*} so that $(\widetilde{\moore}_bE_GO(n))/O(n)$ consists of classes $[l, \alpha]$, where the equivalence relation is \begin{equation*} (l,\alpha) \sim (l,\beta) \text{ if there is } \ele \in O(n) \text{ such that } \alpha(t) = \beta(t)\ele \text{ for all }t \geq 0. \end{equation*} While $\widetilde{\moore}_bE_GO(n) $ does not have the structure of a $G$-monoid, $(\widetilde{\moore}_bE_GO(n))/O(n)$ does. Fix a base point $z \in p^{-1}(b) \subset E_GO(n)$. The maps are given by \begin{align*} \xi([l,\alpha]) & = (l, p \circ \alpha) \in \moore_b B_GO(n);\\ \psi([l,\alpha]) & \in O(V) \text{ is determined by } \alpha(0)\psi([l,\alpha])=\alpha(l). \qedhere \end{align*} \end{rem} We conclude this section with results on the morphism spaces of equivariant principal bundles. Let $1 \to \Pi \to \Gamma \to G \to 1$ be an extension of groups. Let $$p_{\Pi;\Gamma}: E(\Pi;\Gamma) \to B(\Pi;\Gamma)$$ be the universal principal $(\Pi;\Gamma)$-bundle and let $p: P \to B$ be any principal $(\Pi;\Gamma)$-bundle. Let $\mathrm{Hom}(P,E(\Pi;\Gamma))$ be the space of (non-equivariant) principal $\Pi$-bundle morphisms. Since $\mathrm{Hom}(P,E(\Pi;\Gamma)) \cong \mathrm{Map}_{\Pi}(P,E(\Pi;\Gamma))$, the conjugation $\Gamma$-action on $\mathrm{Map}(P,E(\Pi;\Gamma))$ descends to give a $G$-action on $\mathrm{Hom}(P,E(\Pi;\Gamma))$. One can prove: \begin{lem} \label{lem:contractible} $\mathrm{Hom}(P,E(\Pi;\Gamma))$ is $G$-contractible.
\end{lem} \autoref{lem:contractible} leads to the following result. Let $p : P \to B$ be any principal $G$-$O(n)$-bundle. Restricting a bundle map to its map of base spaces gives \begin{equation} \label{eq:universal} \pi: \mathrm{Hom}(P, E_GO(n)) \to \mathrm{Map}_p(B,B_GO(n)). \end{equation} Here, $\mathrm{Map}_p(B,B_GO(n))$ is the (non-equivariant) component of the classifying map of $p$ in $\mathrm{Map}(B,B_GO(n))$; $G$ acts by conjugation on both sides of \autoref{eq:universal}. Note that $G$ acts on $\mathrm{Aut}_BP$ by conjugation, so we can form $\Gamma = \mathrm{Aut}_{B}P \rtimes G$. \begin{thm} \label{thm:MapToUniversal} The map \autoref{eq:universal} is a universal principal $(\mathrm{Aut}_{B}P; \Gamma)$-bundle. \end{thm} \section{Tangential structures and factorization homology} \label{chap:FH} \subsection{Equivariant tangential structures} \label{sec:tangential} Fix an integer $n$ and a finite group $G$. A tangential structure is a $G$-map $\theta: B \to B_G O(n)$. Here, $B_GO(n)$ is the classifying space as in \autoref{notn:universal}. A morphism of two tangential structures is a $G$-map over $B_GO(n)$. All tangential structures form a category $\mathcal{TS}$, which is simply the over category $\Gtop/_{B_GO(n)}$. In this subsection we fix a tangential structure $\theta$ and construct two categories. The first one is $\mathrm{Vec}^{\theta}$, the category of $n$-dimensional $\theta$-framed bundles with $\theta$-framed bundle maps as morphisms. The second one is $\mathrm{Mfld}^{\theta}$, the category of smooth $n$-dimensional $\theta$-framed manifolds and $\theta$-framed embeddings. The category $\mathrm{Mfld}^{\theta}$ is a subcategory of $\mathrm{Vec}^{\theta}$; both $\mathrm{Mfld}^{\theta}$ and $\mathrm{Vec}^{\theta}$ are enriched over $\Gtop$. If we let $\theta$ vary, both constructions define covariant functors from $\mathcal{TS}$ to categories. Denote by $\universal$ the universal $G$-$n$-vector bundle over $B_GO(n)$. 
Pulling back along the tangential structure $\theta: B \to B_G O(n)$ gives a bundle $\theta^{*}\universal$ over $B$. This is meant to be the universal $\theta$-framed vector bundle. For an $n$-dimensional smooth $G$-manifold $M$, the tangent bundle of $M$ is a $G$-$n$-vector bundle. It is classified by a $G$-map up to $G$-homotopy: \begin{equation*} \tau: M \to B_G O(n). \end{equation*} \begin{defn} A $\theta$-framing on a $G$-$n$-vector bundle $E \to M$ is a $G$-$n$-vector bundle map $\phi_E: E \to \theta^{*}\universal$. A $\theta$-framing on a smooth $G$-manifold $M$ is a $\theta$-framing $\phi_M$ on its tangent bundle. We abuse notation and refer to the map on the base spaces as $\phi_M$ as well. \end{defn} A bundle has a $\theta$-framing if and only if its classifying map $\tau: M \to B_GO(n)$ has a factorization up to $G$-homotopy through $B$; see diagram~\autoref{eq:theta-framing}. However, a factorization $\tau_B: M \to B$ does not uniquely determine a $\theta$-framing $\phi_E: E \to \theta^{*}(\universal)$. Indeed, a bundle map $\phi_E: E \to \theta^{*}(\universal)$ is the same data as a map $\tau_B: M \to B$ on the base plus a homotopy between the two classifying maps from $M$ to $B_GO(n)$. For a detailed proof, see \autoref{cor:HomVSMapoverB} with \autoref{defn:mapOverB}. \begin{equation} \label{eq:theta-framing} \begin{tikzcd} & B \ar[d,"\theta"] \\ M \ar[r,"\tau"'] \ar[ur,dotted, "\tau_B","h^{\curvearrowright}"'] & B_GO(n)\\ \end{tikzcd} \end{equation} \begin{exmp} When $B$ is a point, a tangential structure $\theta: \mathrm{pt} \to B_GO(n)$ picks out in its image a $G$-fixed component of $B_GO(n)$. This component is indexed by some $n$-dimensional $G$-representation $V$ up to isomorphism. Then $\theta^{*}\universal \cong V$ as a $G$-vector bundle over $\mathrm{pt}$ (\autoref{rmk:classifyingV}). We denote this tangential structure by $\mathrm{fr}_V: \mathrm{pt} \to B_GO(n)$ and call it a $V$-framing.
A $V$-framing on a vector bundle $E \to M$ is just an equivariant trivialization $E \cong M \times V$. We emphasize that the $V$-framing tangential structure is not only a space $B= \mathrm{pt}$ but also a map $\mathrm{fr}_V$. \end{exmp} \begin{defn} \label{defn:theta-bundle-map} Given two $\theta$-framed bundles $E_1,E_2$ with framings $\phi_1, \phi_2$, the space of $\theta$-framed bundle maps between them is defined as: \begin{equation} \mathrm{Hom}^{\theta}(E_1, E_2) := \mathrm{hofib}\big( \mathrm{Hom}(E_1,E_2) \overset{\phi_2 \circ -}{ \longrightarrow } \mathrm{Hom}(E_1, \theta^{*}\universal)\big), \end{equation} where $\mathrm{Hom}(E_1, \theta^{*}\universal)$ is based at $\phi_1$. Explicitly, a $\theta$-framed bundle map is a bundle map $f$ and a homotopy connecting the two resulting $\theta$-framings $\phi_1$ and $\phi_2f$ of $E_1$: \begin{equation*} \mathrm{Hom}^{\theta}(E_1,E_2) = \{(f, \alpha) \in \mathrm{Hom}(E_1,E_2) \times \mathrm{Hom}(E_1,\theta^{*}\universal)^I | \alpha(0) = \phi_1, \alpha(1) = \phi_2 f\}. \end{equation*} The unit in $\mathrm{Hom}^{\theta}(E,E)$ is given by $(\mathrm{id}_E, \phi_{\mathrm{const}})$; the composition of two $\theta$-bundle maps is defined as: \begin{equation*} \begin{array}{ccc} \mathrm{Hom}^{\theta}(E_2,E_3) \times \mathrm{Hom}^{\theta}(E_1, E_2) & \to & \mathrm{Hom}^{\theta}(E_1,E_3);\\ \big((g,\beta),(f,\alpha)\big) & \mapsto & (g \circ f, \lambda), \end{array} \end{equation*} \begin{equation*} \text{ where }\lambda(t) = \left\{ \begin{array}{l@{\quad \text{ when}\quad}l} \alpha(2t),& 0 \le t \le 1/2; \\ \beta(2t-1) \circ f,& 1/2 < t \le 1. \end{array}\right. \end{equation*} \end{defn} As defined above, the composition is unital and associative only up to homotopy. One can modify $\mathrm{Hom}^{\theta}(E_1, E_2)$ using Moore paths in the homotopy to make the composition strictly unital and associative; see \cite[Definition 17]{MK} or \autoref{defn:mapOverB} for a construction in the same spirit. 
We omit the details here and assume we have built a category $\mathrm{Vec}^{\theta}$ of $\theta$-framed bundles and $\theta$-framed bundle maps. As such, an element of $\mathrm{Hom}^{\theta}(E_1,E_2)$ carries a third datum, the length of the Moore path, which is a locally constant function on $\mathrm{Hom}^{\theta}(E_1,E_2)$, but for brevity we sometimes do not write it. In the definition of $\mathrm{Hom}^{\theta}(E_1,E_2)$, everything is taken non-equivariantly. The spaces $\mathrm{Hom}(E_1,E_2)$ and $\mathrm{Hom}(E_1, \theta^{*}\universal)$ have $G$-actions by conjugation. Since $\phi_1$ and $\phi_2$ are $G$-maps, the homotopy fiber $\mathrm{Hom}^{\theta}(E_1,E_2)$ inherits the conjugation $G$-action. \begin{defn} \label{def:embedding} The space of $\theta$-framed embeddings between two $\theta$-framed manifolds is defined as the pullback displayed in the following diagram of $G$-spaces: \begin{equation} \label{eq:emb-space1} \begin{tikzcd} \mathrm{Emb}^{\theta}(M,N) \ar[r] \ar[d] & \mathrm{Hom}^{\theta}(\mathrm{T}M,\mathrm{T}N) \ar[d] \\ \mathrm{Emb}(M,N) \ar[r, "d"] & \mathrm{Hom}(\mathrm{T}M,\mathrm{T}N) \end{tikzcd} \end{equation} Here, $\mathrm{Emb}(M,N)$ is the space of smooth embeddings and the map $d$ takes an embedding to its derivative. \end{defn} \begin{rem} \label{rem:emb-data} Most of the time, we drop the Moore-path-length data and write an element of $\mathrm{Emb}^{\theta}(M,N)$ as a pair $\bar{f}=(f,\alpha)$ of an embedding and a homotopy, with $f \in \mathrm{Emb}(M,N)$ and $\alpha: [0,1] \to \mathrm{Hom}(\mathrm{T}M, \mathrm{T}N)$ satisfying $\alpha(0) = \phi_M$ and $\alpha(1)= \phi_N \circ df$. There is a functor $\mathrm{Mfld}^\theta \to \mathrm{Mfld}$ given by forgetting the tangential structure. It sends $\bar{f} \in \mathrm{Emb}^{\theta}(M, N)$ to $f \in \mathrm{Emb}(M,N)$. \end{rem} Let $\sqcup$ be the disjoint union of $\theta$-framed vector bundles or manifolds and $\varnothing$ be the empty bundle or manifold.
Both $(\mathrm{Vec}^{\theta}, \sqcup, \varnothing)$ and $(\mathrm{Mfld}^{\theta}, \sqcup, \varnothing)$ are $\Gtop$-enriched symmetric monoidal categories. In both categories, $\varnothing$ is the initial object. In $\mathrm{Vec}^{\theta}$, $\sqcup$ is the coproduct, but not in $\mathrm{Mfld}^{\theta}$. \begin{rem} We need the length of the Moore path to be locally constant, as introduced in \cite[Definition 17]{MK}, as opposed to constant, for the enrichment to work. Namely, the map \begin{equation*} \mathrm{Hom}^{\theta}(E_1, E'_1) \times \mathrm{Hom}^{\theta}(E_2, E'_2) \to \mathrm{Hom}^{\theta}(E_1 \sqcup E_2 , E'_1\sqcup E'_2) \end{equation*} is given by first post-composing with the obvious $\theta$-framed map $E'_i \to E'_1 \sqcup E'_2$ for $i=1,2$, then using a homeomorphism, as follows: \begin{align*} \mathrm{Hom}^{\theta}(E_1, E'_1) \times \mathrm{Hom}^{\theta}(E_2, E'_2) & \to \mathrm{Hom}^{\theta}(E_1, E'_1 \sqcup E'_2) \times \mathrm{Hom}^{\theta}(E_2, E'_1 \sqcup E'_2)\\ & \cong \mathrm{Hom}^{\theta}(E_1 \sqcup E_2, E'_1 \sqcup E'_2) \end{align*} If the length of the Moore path were constant, the displayed homeomorphism would only be a homotopy equivalence, as the length of a Moore path can be different on the two parts. \end{rem} \medskip To set up factorization homology in \autoref{sec:eFH}, we fix an $n$-dimensional orthogonal $G$-representation $V$; in addition, we suppose that $V$ is $\theta$-framed and fix a $\theta$-framing $$\phi: \mathrm{T}V \to \theta^{*}\universal$$ on $V$. Since $\mathrm{T}V \cong V$ as $G$-vector bundles, the space of $\theta$-framings on $V$ is \begin{equation} \label{eq:22} \mathrm{Hom}(\mathrm{T}V, \theta^{*}\universal)^G \simeq \mathrm{Hom}(V,\theta^{*}\universal)^G = \mathrm{Hom}(\bR^n,\theta^{*}\universal)^{\Lambda_{\rho}} \simeq (\theta^{*}E_GO(n))^{\Lambda_\rho}, \end{equation} where $\Lambda_{\rho} = \{(\rho(g),g) \in O(n) \times G| g \in G\}$ and $\rho: G \to O(n)$ is a matrix representation of $V$.
By \autoref{thm:LM}, $$(\theta^{*}E_GO(n))^{\Lambda_\rho} \cong \theta^{*}(E_GO(n))^{\Lambda_\rho}.$$ So the spaces in \autoref{eq:22} are non-empty, that is, a $\theta$-framing on $V$ exists, if and only if the intersection of $\theta(B)$ and the $V$-indexed component of $(B_GO(n))^G$ as introduced in \autoref{thm:BGO(n)} is non-empty. \medskip We also describe the change of tangential structures, which is not further studied in this paper. Let $q$ be a morphism from $\theta_1: B_1 \to B_GO(n)$ to $\theta_2: B_2 \to B_GO(n)$, equivalently, a $G$-map $q: B_1 \to B_2$ satisfying $\theta_2q=\theta_1$. Then a $\theta_1$-framed vector bundle $E \to B$ with $\phi_E: E \to \theta_1^{*}\universal $ is $\theta_2$-framed by $$ E \to \theta_1^{*}\universal = q^{*}\theta_2^{*}\universal \to \theta_2^{*}\universal.$$ The morphism $q$ also induces a map on framed morphisms. So we have a functor \begin{equation*} q_{*}:\mathrm{Vec}^{\theta_1} \to \mathrm{Vec}^{\theta_2}, \text{ and similarly } q_{*}: \mathrm{Mfld}^{\theta_1} \to \mathrm{Mfld}^{\theta_2}. \end{equation*} \subsection{Equivariant factorization homology} \label{sec:eFH} In this subsection, we use the $\Lambda$-sequence machinery in \autoref{sec:Lambda} and the $\Gtop$-enriched category $\mathrm{Mfld}^{\theta}$ developed in \autoref{sec:tangential} to define equivariant factorization homology as a bar construction. Recall from \autoref{sec:tangential} that we have fixed an $n$-dimensional orthogonal $G$-representation $V$ and a $\theta$-framing $\phi: \mathrm{T}V \to \theta^{*}\universal$ on the $G$-manifold $V$.
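As an illustration of the existence criterion (our sanity check, in the special case $\theta = \mathrm{fr}_V$ of a $V$-framing):

```latex
% Our illustration: for \theta = \mathrm{fr}_V, the base B is a single point
% landing in the V-indexed component of (B_GO(n))^G, so \theta(B) meets that
% component and the criterion is satisfied. Concretely, \mathrm{fr}_V^{*}\universal
% \cong V, and
\begin{equation*}
  \mathrm{Hom}(\mathrm{T}V, \mathrm{fr}_V^{*}\universal)^G
  \simeq \mathrm{Hom}(V, V)^G \ni \mathrm{id}_V,
\end{equation*}
% so the space of framings is non-empty, recovering the canonical V-framing
% \mathrm{T}V \cong V \times V on V itself.
```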
\begin{defn} \label{defn:Dm} For a $\theta$-framed manifold $M$, we define the $\Lambda$-sequence $\mathscr{D}_{\myM}^{\theta} $ to be $$\mathscr{D}_{\myM}^{\theta} = \mathrm{Emb}^{\theta}({}^{*}V,M) \in \Lambda^{op}_*(\Gtop).$$ Here, ${}^{*}V $ is the symmetric monoidal functor $(\Lambda, \wg, \mathbf{0}) \to (\mathrm{Mfld}^{\theta}, \sqcup, \varnothing)$ that sends $\mathbf{1}$ to $(V,\phi)$ and sends $\mathbf{0} \to \mathbf{1}$ to the unique map $\varnothing \hookrightarrow V$. \end{defn} Explicitly, on objects, we have \begin{align*} \mathscr{D}_{\myM}^{\theta}: \Lambda^{op} & \to \Gtop, \\ \mathbf{k} & \mapsto \mathrm{Emb}^{\theta}(\coprod_k V, M); \end{align*} on morphisms, $\Sigma_k = \Lambda(\mathbf{k}, \mathbf{k})$ acts by permuting the copies of $V$, and $s_i^k: \mathbf{k-1} \to \mathbf{k}$ induces $(s_i^k)^{*}: \mathscr{D}_{\myM}^{\theta}(k) \to \mathscr{D}_{\myM}^{\theta}(k-1)$ by forgetting the $i$-th $V$ in the embeddings for $1 \leq i \leq k$. Plugging in $V$ in the second variable, we have $\mathscr{D}_V^{\theta}$. Using \autoref{defn:LambdaObjectCircle}, we get associated functors of $ \mathscr{D}^{\theta}_{\myM}$ and $\mathscr{D}^{\theta}_{V}$, which we denote by \begin{align*} \mathrm{D}^{\theta}_{\myM},\mathrm{D}^{\theta}_{V}&: \Gtop_{*} \to \Gtop_{*}; \\ \mathrm{D}^{\theta}_{\myM}(X) & = \coprod_{k \geq 0} \mathscr{D}_{\myM}^{\theta}(k) \times_{\Sigma_k} X^{\times k}/ \sim. \end{align*} These $\Lambda$-sequences satisfy certain structures coming from the composition of morphisms in $\mathrm{Mfld}^{\theta}$. This is best described using the Kelly monoidal structure $(\Lambda^{op}_{*}(\Gtop), \odot)$ as defined in \autoref{defn:LambdaObjectCircle}. Taking $\mathscr{V}= \Gtop$ and $(\mathscr{W},\otimes) = (\mathrm{Mfld}^{\theta},\sqcup)$ in \autoref{prop:endoperad}, we can identify \begin{equation*} \mathscr{D}_{\myM}^{\theta} = \ul{\mathcal{H}_{\mathscr{W}}}(V, M).
\end{equation*} Consequently, $\mathscr{D}^{\theta}_V = \ul{\mathcal{H}_{\mathscr{W}}}(V, V)$ is a monoid in $(\Lambda^{op}_{*}(\Gtop), \odot)$ and $\mathscr{D}_{\myM}^{\theta}$ is a right module over it. Translating by \autoref{thm:operad}, $\mathscr{D}^{\theta}_V$ is a reduced operad in $(\Gtop , \times)$. This operad is close to the little $V$-disk operad $\mathscr{D}_V$ except that it also allows $\theta$-framed automorphisms of the embedded $V$-disks. We remark that, in light of \autoref{thm:autoV}, we expect something like an equivalence of $G$-operads: $\mathscr{D}^{\theta}_V \simeq \mathscr{D}_V \rtimes (\moore_{\phi} B)$. This is formulated and proved in \cite[Appendix B]{ZouThesis}. By \autoref{prop:functorOfCircle}, the right module map $\mathscr{D}^{\theta}_{\myM} \odot \mathscr{D}^{\theta}_V \to \mathscr{D}^{\theta}_{\myM} $ of $\Lambda$-sequences yields a natural transformation $\mathrm{D}^{\theta}_{\myM} \circ \mathrm{D}^{\theta}_{V} \to \mathrm{D}^{\theta}_{\myM}$; the monoid structure maps ${\mathscr{I}_1 \to \mathscr{D}^{\theta}_V}$ and ${\mathscr{D}^{\theta}_V \odot \mathscr{D}^{\theta}_V \to \mathscr{D}^{\theta}_V}$ yield natural transformations ${\mathrm{id} \to \mathrm{D}^{\theta}_{V}}$ and $\mathrm{D}^{\theta}_{V} \circ \mathrm{D}^{\theta}_{V} \to \mathrm{D}^{\theta}_{V}$. The following is a standard definition from \cite{May84}: \begin{defn} Let $\mathscr{C}$ be a reduced operad in $(\Gtop,\times)$ and $\mathrm{C}$ be the associated reduced monad. An object $A \in \Gtop_{*}$ is a $\mathscr{C}$-algebra if there is a map $\gamma: \mathrm{C}A \to A$ such that the following diagrams commute, where the unlabeled maps are the unit and multiplication map of the monad $\mathrm{C}$: \begin{equation*} \begin{tikzcd} CCA \ar[r,"C\gamma"] \ar[d] & CA \ar[d,"\gamma"] \\ CA \ar[r,"\gamma"] & A \end{tikzcd}; \quad \begin{tikzcd} A \ar[r] \ar[rd,equal] & CA \ar[d,"\gamma"]\\ & A \end{tikzcd}.
\end{equation*} \end{defn} In what follows, let $A$ be a $\mathscr{D}_V^{\theta}$-algebra in $\Gtop_{*}$. We have a simplicial $G$-space, whose $q$-th level is \begin{equation*} \mathbf{B}_q(\mathrm{D}_{\myM}^{\theta},\mathrm{D}_V^{\theta},A) = \mathrm{D}_{\myM}^{\theta}(\mathrm{D}_V^{\theta})^qA. \end{equation*} The face maps are induced by the above-given structure maps \begin{equation*} \mathrm{D}^{\theta}_M \mathrm{D}^{\theta}_V \to \mathrm{D}^{\theta}_M, \ \ \mathrm{D}^{\theta}_V \mathrm{D}^{\theta}_V \to \mathrm{D}^{\theta}_V \text{ and } \gamma: \mathrm{D}^{\theta}_VA \to A. \end{equation*} The degeneracy maps are induced by $\mathrm{id} \to \mathrm{D}^{\theta}_V$. We have the following definition after the idea of \cite[IX.1.5]{Andrade}: \begin{defn} \label{defn:factoriaztion} The factorization homology of $M$ with coefficients in $A$ is \begin{equation*} \int_M^{\theta}A : = \mathbf{B}(\mathrm{D}_{\myM}^{\theta},\mathrm{D}_V^{\theta} ,A). \end{equation*} \end{defn} \begin{notn} Since we are not comparing tangential structures in this paper, we drop the $\theta$ in the notation and write $\displaystyle\int_M^{\theta} A $ as $\displaystyle\int_M A$. \end{notn} \medskip The category of algebras $\mathscr{D}_V^{\theta}[\Gtop_{*}]$ has a transferred model structure via the forgetful functor $\mathscr{D}_V^{\theta}[\Gtop_{*}] \to \Gtop_{*}$ (\cite[3.2, 4.1]{BergMoer}), so that a map of algebras is a weak equivalence exactly when it is an underlying weak equivalence. \begin{prop} The functor $\displaystyle\int_M -: \mathscr{D}_V^{\theta}[\Gtop_{*}] \to \Gtop_{*}$ is homotopical. \end{prop} \begin{proof} The proof is a formal argument assembling results from the literature, and is deferred. We show that the bar construction is Reedy cofibrant in \autoref{lem:Reedy}. The claim then follows since geometric realization preserves levelwise weak equivalences between Reedy cofibrant simplicial $G$-spaces, as quoted in \autoref{thm:Reedy}.
\end{proof} We have the following properties of the factorization homology. \begin{prop} \label{prop:FHonV} \begin{equation*} \int_VA \simeq A. \end{equation*} \end{prop} \begin{proof} This follows from the extra degeneracy argument of \cite[Proposition 9.8]{MayGILS}. The extra degeneracy coming from the unit map of the first $\mathrm{D}_V^{\theta}$ establishes $A$ as a retract of $\mathbf{B}(\mathrm{D}_V^{\theta},\mathrm{D}_V^{\theta},A)$, which is just $\displaystyle\int_VA.$ \end{proof} \begin{prop} \label{prop:FHonUnion} For $\theta$-framed manifolds $M$ and $N$, \begin{equation*} \int_{M \sqcup N}A \simeq \int_MA \times \int_NA. \end{equation*} \end{prop} \begin{proof} Without loss of generality, we may assume that both $M$ and $N$ are connected. Then \begin{align*} \mathscr{D}_{M \sqcup N}^{\theta}(k) & \cong \mathrm{Emb}^{\theta}(\sqcup_k V, M \sqcup N) \\ &\cong \coprod_{i=0}^k\big(\mathrm{Emb}^{\theta}(\sqcup_i V, M ) \times \mathrm{Emb}^{\theta}(\sqcup_{k-i} V, N)\big) \times_{\Sigma_i \times \Sigma_{k-i}} \Sigma_k \\ & \cong \coprod_{i=0}^k \big(\mathscr{D}_M^{\theta}(i) \times \mathscr{D}_{N}^{\theta}(k-i)\big) \times_{\Sigma_i \times \Sigma_{k-i}} \Sigma_k \end{align*} This is the formula of the Day convolution of $\mathscr{D}_{M}^{\theta}$ and $\mathscr{D}_{N}^{\theta}$. So we have \begin{equation} \label{eq:5} \mathscr{D}_{M \sqcup N}^{\theta} \cong \mathscr{D}_{M}^{\theta} \boxtimes \mathscr{D}_{N}^{\theta}. \end{equation} We drop the $\theta$ in the rest of the proof. By \autoref{eq:5} and iterated use of \autoref{prop:circledistribute}, there is an isomorphism in $\Lambda^{op}_*(\Gtop)$ for each $q$: \begin{equation} \label{eq:1} \mathbf{B}_q(\mathscr{D}_{M \sqcup N},\mathscr{D}_{V},\imath_0(A)) \cong \mathbf{B}_q(\mathscr{D}_M, \mathscr{D}_V, \imath_0(A)) \boxtimes \mathbf{B}_q(\mathscr{D}_{N}, \mathscr{D}_V, \imath_0(A)). 
\end{equation} Iterated use of \autoref{eq:monad-and-circle} identifies \begin{equation*} \imath_0(\mathbf{B}_{q}(\mathrm{D}_{\myM},\mathrm{D}_{V},A)) \cong \mathbf{B}_q(\mathscr{D}_{\myM}, \mathscr{D}_V, \imath_0(A)), \end{equation*} so evaluating on the 0-th level of \autoref{eq:1} gives an isomorphism of simplicial $G$-spaces: \begin{equation*} \mathbf{B}_{*}(\mathrm{D}_{M \sqcup N},\mathrm{D}_{V}, A) \cong \mathbf{B}_*(\mathrm{D}_{M}, \mathrm{D}_V, A) \times \mathbf{B}_*(\mathrm{D}_{N}, \mathrm{D}_V, A). \end{equation*} The claim follows from passing to geometric realization and commuting the geometric realization with finite products. \end{proof} \subsection{Relation to configuration spaces} \label{sec:embeddingspace} Now we restrict our attention to the $V$-framed case for an orthogonal $n$-dimensional $G$-representation $V$. We give $V$ the canonical $V$-framing $\mathrm{T}V \cong V \times V$ and let $M$ be a $G$-manifold of dimension $n$. When $M$ is $V$-framed, we denote the $V$-framing by $\phi_M: \mathrm{T}M \to V$. In this subsection, we first prove that a smooth embedding of $\sqcup_kV$ into $M$ is determined by its images and derivatives at the origin up to a contractible choice of homotopy (\autoref{lem:derivative}). The proof of the non-equivariant version can be found in Andrade's thesis \cite[V4.5]{Andrade}. Then we proceed to prove that the $V$-framed embedding space of $\sqcup_kV$ into $M$ as defined in \autoref{eq:emb-space1} is homotopically the same as a choice of the center points (\autoref{cor:conf}). To formulate the result, we first define the suitable equivariant configuration space related to a manifold, which will be ``the space of points and derivatives''. We use $\conf{E}$ to denote the ordered configuration space of $k$ distinct points in $E$, topologized as a subspace of $E^k$. When $E$ is a $G$-space, $\conf{E}$ has a $G$-action by acting pointwise; this action commutes with the $\Sigma_k$-action by permuting the points.
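To illustrate how the two actions interact (our example, not taken from the source), take $G = C_2$ and $E = \sigma$, the sign representation on $\bR$, with $k = 2$:

```latex
% Our example: the generator g of C_2 sends a configuration (x_1, x_2),
% x_1 \neq x_2, to (-x_1, -x_2). A C_2-fixed configuration would need
% x_i = -x_i, i.e. x_1 = x_2 = 0, contradicting distinctness, so
\begin{equation*}
  \big(\mathscr{F}_{\sigma}(2)\big)^{C_2} = \varnothing,
\end{equation*}
% while the configuration (x, -x) with x \neq 0 IS fixed by g combined with
% the transposition in \Sigma_2, i.e. by a graph subgroup of C_2 \times \Sigma_2.
% Such mixed symmetries are what the equivariant structure records.
```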
\begin{defn} For a fiber bundle $p: E \to M$, define $\overconf{E}$ to be the space of configurations of $k$ ordered distinct points in $E$ with distinct images in $M$. $\overconf{E}$ is a subspace of $\conf{E}$ and inherits a free $\Sigma_k$-action. When $p$ is a $G$-fiber bundle, $\overconf{E}$ is a $G$-space. \end{defn} \begin{exmp} When $k=1$, $\mathscr{F}_{E \downarrow M}(1) \cong \mathscr{F}_E(1)$. \end{exmp} \begin{exmp} \label{exmp:frame-manifold-PConf} When $E = M \times F$ is a trivial bundle over $M$ with fiber $F$, \begin{equation*} \overconf{E} \cong \conf{M} \times F^k . \end{equation*} \end{exmp} In general, we have the following pullback diagram: \begin{equation*} \begin{tikzcd} \overconf{E} \ar[d] \ar[r,hook] & E^k \ar[d,"p^k"] \\ \conf{M} \ar[r,hook] & M^k. \end{tikzcd} \end{equation*} Now, we take $E = \mathrm{Fr}_V(\mathrm{T}M)$. Recall that $\mathrm{Fr}_V(\mathrm{T}M) = \mathrm{Hom}(V, \mathrm{T}M)$ is a $G$-bundle over $M$. For an embedding $\sqcup_kV \to M$, we take its derivative and evaluate at $0 \in V$. We get $k$ points in $\mathrm{Fr}_V(\mathrm{T}M)$ with distinct images in $M$. In other words, the composition \begin{equation*} \mathrm{Emb}(\coprod_k V, M) \overset{d}{ \to } \mathrm{Hom}(\coprod_k \mathrm{T}V, \mathrm{T} M) \overset{ev_0}{ \to } \mathrm{Hom}(\coprod_k V, \mathrm{T}M) = \mathrm{Fr}_V(\mathrm{T}M)^{k} \end{equation*} factors as \begin{equation} \label{equ:derivative} \mathrm{Emb}(\coprod_k V, M) \overset{d_0}{ \to } \overconf{\mathrm{Fr}_V(\mathrm{T}M)} \hookrightarrow \mathrm{Fr}_V(\mathrm{T}M)^{k} . \end{equation} \begin{prop} \label{lem:derivative} The map $d_0$ in \autoref{equ:derivative} is a $G$-Hurewicz fibration and a $(G \times \Sigma_k)$-homotopy equivalence.
\end{prop} \begin{proof} It suffices to prove the case $k =1$, that is, for \begin{equation*} d_0: \mathrm{Emb}(V,M) \to \mathrm{Fr}_V(\mathrm{T}M), \end{equation*} since the general case will follow from the pullback diagram: \begin{equation*} \begin{tikzcd} \mathrm{Emb}(\coprod_k V, M) \ar[d,"d_0"] \ar[r,hook] & \mathrm{Emb}(V, M)^k \ar[d,"{(d_0)^k}"] \\ \overconf{\mathrm{Fr}_V(\mathrm{T}M)} \ar[r,hook] & \mathrm{Fr}_V(\mathrm{T}M)^k. \end{tikzcd} \end{equation*} We show that $d_0$ is a $G$-Hurewicz fibration by finding an equivariant local trivialization. Fix an $H$-fixed point $x \in \mathrm{Fr}_V(\mathrm{T}M)$ and let $d_0^{-1}(x)$ be the fiber at $x$. Our goal is to find an $H$-invariant neighborhood $\bar{U}$ of $x$ in $\mathrm{Fr}_V(\mathrm{T}M)$ and an $H$-equivariant homeomorphism $$\bar{U} \times d_0^{-1}(x) \cong d_0^{-1}(\bar{U}) \subset \mathrm{Emb}(V,M).$$ First, we find the small neighborhood $\bar{U}$. Let $x_0$ be the image of $x$ under the projection $\mathrm{Fr}_V(\mathrm{T}M) \to M$; then $x_0$ is also $H$-fixed. Consequently, $W = \mathrm{T}_{x_{0}}M$ is an $H$-representation. Using the exponential map, there is a local chart for $M$ that is $H$-homeomorphic to $W$ with $0 \in W$ mapping to $x_0$. We will refer to this local chart as $W$. On the chart, $\mathrm{Fr}_V(\mathrm{T}M)$ is homeomorphic to $W \times \mathrm{Hom}(V,W)$, and we may identify $x$ with $(0, A) \in W \times \mathrm{Hom}(V,W)$ for some $H$-fixed $A$. For simplicity, we put a metric on $W$ to make it an orthogonal representation. Choose an $\epsilon$-ball $U_1 \subset W$ and a small enough $H$-invariant neighborhood $U_{2} \subset \mathrm{Hom}(V,W)$ of $A$, and set $\bar{U} = U_1 \times U_2$.
Second, we construct an $H$-equivariant local trivialization of $d_0$ on $\bar{U}$, \begin{equation*} \begin{array}{cccc} \bar{\phi}:& \bar{U} \times d_0^{-1}(x) & \to & \mathrm{Emb}(V,M), \\ & (y,f) & \mapsto & \phi(y) \circ f \end{array} \end{equation*} by utilizing a yet-to-be-constructed map $\phi: \bar{U} \to \mathrm{Diff}(M)$. The map $\phi$ needs to satisfy the following properties: \begin{enumerate} \item $\phi$ is $H$-equivariant; \item $\phi(x) = \mathrm{id}$; \item \label{item:phi2} For any $y \in \bar{U}$, $d(\phi(y)) \circ x=y$. (Recall that $x,y \in \mathrm{Fr}_{V}(\mathrm{T}M) = \mathrm{Hom}(V, \mathrm{T}M)$ and $d(\phi(y)): \mathrm{T}M \to \mathrm{T}M$ is the derivative of $\phi(y)$.) \end{enumerate} For any $\chi \in \mathrm{Diff}(M)$ and $g \in \mathrm{Emb}(V,M)$, $d_0(\chi \circ g) = d_{g(0)}(\chi) \circ d_0(g)$. We can check that $d_0(\phi(y)\circ f) = y$ and that for any $g \in \mathrm{Emb}(V,M)$ with $d_0(g) = y$, $d_0(\phi(y)^{-1}\circ g ) =x$. So, the map $\phi(y) \circ -$ translates $d_0^{-1}(x)$, the fiber over $x$, to $d_0^{-1}(y)$, the fiber over $y$. This shows $\bar{\phi}$ is an $H$-equivariant homeomorphism to $d_0^{-1}(\bar{U})$. Third, we describe only the idea of the construction of $\phi$, as it is a bit technical. Noticing that the requirement \autoref{item:phi2} is local, we can construct $\phi_0:\bar{U} \to \mathrm{Diff}(W)$ on the local chart $W$ satisfying all the requirements using linear maps. Then we need to modify these diffeomorphisms of $W$ equivariantly without changing them on the $\epsilon$-ball $U_1$, so that they become compactly supported and still satisfy all the requirements. Finally, we extend the modified $\phi_0$ by identity to get $\phi$, diffeomorphisms of $M$. The technical part is the modification for $\phi_0$. 
It can be done by (1) taking an $H$-invariant polytope $P$ containing $U_1$, (2) taking a large enough multiple $m$ such that $mP$ contains the image of all $\phi_0(\bar{U})(U_1)$, (3) setting $\phi_0(y)$ to be $\mathrm{id}$ outside of $mP$, (4) extending by a piecewise linear function between $P$ and $mP$, and (5) smoothing it. It is because of this step that we have to choose a small enough neighborhood $U_2$, but this is good enough for our purpose. To show $d_0$ is a $G$-homotopy equivalence, one can construct a section of $d_0$ by the exponential map: \begin{equation*} \sigma: \mathrm{Fr}_V(\mathrm{T}M) \to \mathrm{Emb}(V,M). \end{equation*} Since there is a (contractible) choice of the radius at each point for the exponential map to be a homeomorphism, $\sigma$ is defined only up to homotopy. Using blowing-up-at-origin techniques, the section can be shown to indeed give a deformation retract of $d_0$. For later use, we note that the section exists up to homotopy for general $k$ as well: \begin{equation} \label{equ:sectionexp} \sigma: \overconf{\mathrm{Fr}_V(\mathrm{T}M)} \to \mathrm{Emb}(\coprod_k V, M). \qedhere \end{equation} \end{proof} Now we are ready to establish the desired equivalence between the $V$-framed embedding spaces from $V$ to $M$ and the configuration spaces of $M$. Moreover, we show that this equivalence is compatible over $\mathrm{Emb}(\coprod_k V, M)$ in part \autoref{item:corconf2}. This will be used in later sections to compare different scanning maps. \begin{lem} \label{item:corconf0} For a $V$-framed manifold $M$, the projection \begin{equation*} \overconf{\mathrm{Fr}_V(\mathrm{T}M)} \to \conf{M} \end{equation*} is a trivial bundle with fiber $(\mathrm{Hom}(V,V))^k$. We call the section that selects $(\mathrm{id}_{V})^k$ in each fiber the zero section $z$. \end{lem} \begin{proof} Regarding $V$ as a bundle over a point, we may identify $\mathrm{Fr}_V(V) = \mathrm{Hom}(V,V)$.
Since $M$ is $V$-framed, $\mathrm{Fr}_V(\mathrm{T}M) \cong \mathrm{Fr}_V(M \times V) \cong M \times \mathrm{Fr}_V(V)$ as equivariant bundles. The claim follows from \autoref{exmp:frame-manifold-PConf}. \end{proof} We can restrict the exponential map \autoref{equ:sectionexp} to the zero section in \autoref{item:corconf0} to get \begin{equation} \label{equ:sectionexp-at0} \sigma_0: \conf{M} \to \mathrm{Emb}(\coprod_k V, M). \end{equation} \begin{cor} \label{cor:conf} For a $V$-framed manifold $M$, we have: \begin{enumerate} \item \label{item:corconf1} Evaluating the embeddings at $0$ gives a $(G \times \Sigma_k)$-homotopy equivalence: \begin{equation*} ev_0:\mathscr{D}^{\mathrm{fr}_{V}}_{\myM}(k) \equiv \mathrm{Emb}^{\mathrm{fr}_V}(\coprod_k V, M) \to \conf{M}. \end{equation*} \item \label{item:corconf2} The maps $ev_0$ and $\sigma_0$ in \autoref{equ:sectionexp-at0} fit into the following $(G \times \Sigma_k)$-homotopy commutative diagram: \begin{equation*} \begin{tikzcd} \mathrm{Emb}^{\mathrm{fr}_V}(\coprod_k V, M) \ar[r] \ar[d,"ev_0"'] & \mathrm{Emb}(\coprod_k V, M) \\ \conf{M} \ar[ur,"\sigma_0"']& \end{tikzcd} \end{equation*} \end{enumerate} \end{cor} \begin{proof} \autoref{item:corconf1} By Definitions~\ref{def:embedding} and \ref{defn:Dm}, $\mathrm{Emb}^{\mathrm{fr}_V}(\coprod_k V, M)$ is the homotopy fiber of the composite: \begin{equation*} D: \mathrm{Emb}(\coprod_k V, M) \overset{d}{ \to } \mathrm{Hom}(\coprod_k \mathrm{T}V, \mathrm{T} M) \overset{(\phi_M)_{*}}{ \to } \mathrm{Hom}(\coprod_k \mathrm{T}V, V). \end{equation*} We would like to restrict the composite to $\{0\} \sqcup \cdots \sqcup \{0\} \subset V\sqcup \cdots \sqcup V$.
Since \begin{equation*} \mathrm{Hom}(\coprod_k \mathrm{T}V, \mathrm{T} M) \cong \prod_k \mathrm{Hom}(\mathrm{T}V, \mathrm{T} M) \end{equation*} and $i_0: V \to \mathrm{T}V$ is a $G$-homotopy equivalence of $G$-vector bundles, \begin{equation*} ev_0: \mathrm{Hom}(\coprod_k \mathrm{T}V, \mathrm{T} M) \overset{(i_0)^{*}}{ \to } \prod_k \mathrm{Hom}(V,\mathrm{T} M) \cong (\mathrm{Fr}_V(\mathrm{T}M))^k \end{equation*} is a $(G \times \Sigma_k)$-homotopy equivalence. So in the following commutative diagram, the vertical maps are all $(G \times \Sigma_k)$-homotopy equivalences: \begin{equation*} \begin{tikzcd} \mathrm{Emb}(\coprod_k V, M)\ar[r,"d"] \ar[d,"d_0"',"{\simeq \text{ by \autoref{lem:derivative}}}"] & \mathrm{Hom}(\coprod_k \mathrm{T}V, \mathrm{T} M) \ar[d,"ev_0"',"\simeq"] \ar[r,"(\phi_M)_{*}"] & \mathrm{Hom}(\coprod_k \mathrm{T}V, V) \ar[d,"ev_0"',"\simeq"] \\ \overconf{\mathrm{Fr}_V(\mathrm{T}M)} \ar[r,hook] \ar[d,"{\cong \text{ by \autoref{item:corconf0}}}"] & \mathrm{Fr}_V(\mathrm{T}M)^{k} \ar[r,"(\phi_M)_{*}"] & \mathrm{Fr}_V(V)^{k} \ar[d,equal]\\ \conf{M} \times \mathrm{Fr}_V(V)^{k} \ar[rr,"proj_2"] & & \mathrm{Fr}_V(V)^{k}. \end{tikzcd} \end{equation*} We focus on the top composition $D$ and the bottom map $proj_2$. The map $ev_0$ between their codomains is a based map. Indeed, the base point of ${\mathrm{Hom}(\coprod_k \mathrm{T}V, V)}$ is from the $V$-framing of $\coprod_kV$ and is $(G \times \Sigma_k)$-fixed. It is mapped to $\mathrm{id}^k$, the base point of $\mathrm{Fr}_V(V)^{k}$. Consequently, there is a $(G \times \Sigma_k)$-homotopy equivalence between the homotopy fibers of these two maps. \begin{equation} \label{eq:14} \mathrm{Emb}^{\mathrm{fr}_V}(\coprod_k V, M) = \mathrm{hofib}(D) \overset{\simeq}{ \to } \mathrm{hofib}(proj_2). 
\end{equation} The desired map $ev_0$ is the composite of \autoref{eq:14} and the following map: \begin{equation*} X: \mathrm{hofib}(proj_2) \to \conf{M} \times \mathrm{Fr}_V(V)^{k} \overset{proj_1}{\to} \conf{M}. \end{equation*} It suffices to show that $X$ is a $(G \times \Sigma_k)$-equivalence. Indeed, $X$ is the comparison of the homotopy fiber and the actual fiber of $proj_2$. Temporarily write $F = \conf{M}$ and $B = \mathrm{Fr}_V(V)^{k}$ with the $(G \times \Sigma_k)$-fixed base point $b$. Then the map $X$ is projection to $F$: \begin{equation*} \mathrm{hofib}(proj_2) \cong P_bB \times F \to F. \end{equation*} The claim follows from the fact that $P_bB$ is $(G \times \Sigma_k)$-contractible. \autoref{item:corconf2} We have the following $(G \times \Sigma_k)$-homotopy commutative solid diagram, where $z$ is the zero section in \autoref{item:corconf0}: \begin{equation*} \begin{tikzcd} \mathrm{Emb}^{\mathrm{fr}_V}(\coprod_k V, M) \ar[r] \ar[d,"ev_0"] & \mathrm{Emb}(\coprod_k V, M) \ar[d,"d_0"] \\ \conf{M} \ar[r, "z"] \ar[ur,"\sigma_0", dotted]& \overconf{\mathrm{Fr}_V(\mathrm{T}M)}. \end{tikzcd} \end{equation*} The commutativity can be seen easily and is actually an extension of the big commutative diagram in part \autoref{item:corconf1} to (homotopy) fibers. As $\sigma_0 = \sigma \circ z$ and $\sigma$ is a $(G \times \Sigma_k)$-homotopy inverse of $d_0$ by \autoref{lem:derivative}, the diagram with the dotted arrow is homotopy commutative. \end{proof} \begin{rem} \label{rem:levelwise-equi-functor} Part \autoref{item:corconf1} of \autoref{cor:conf} gives a levelwise equivalence of objects in $\Lambda^{op}_*(\Gtop)$: \begin{equation*} ev_0: \mathscr{D}^{\mathrm{fr}_{V}}_{\myM} \to \mathscr{F}_M. \end{equation*} \end{rem} We conclude this subsection by comparing $\mathscr{D}_V^{\mathrm{fr}_V}$ to $\mathscr{D}_V$.
For background, the little $V$-disks operad $\mathscr{D}_V$ is a well-studied notion introduced for recognizing $V$-fold loop spaces; see \cite[1.1]{GM17}. It is an equivariant generalization of the little $n$-disks operad. Roughly speaking, $\mathscr{D}_V (k)$ is the space of non-equivariant embeddings of $k$ copies of the open unit disk $\mathrm{D}(V)$ into $\mathrm{D}(V)$, each of which takes only the form $\mathbf{v} \mapsto a\mathbf{v}+\mathbf{b}$ for some $0<a \leq 1$ and $\mathbf{b} \in \mathrm{D}(V)$, called rectilinear. In particular, the spaces are the same as those of the little $n$-disks operad, and so are the structure maps. The $G$-action on $\mathscr{D}_{V}(k)$ is by conjugation. It is well-defined and commutes with the $\Sigma_k$-action, and the structure maps are $G$-equivariant. \begin{prop} \label{cor:compareDV} There is an equivalence of $G$-operads $\beta: \mathscr{D}_V \to \mathscr{D}_V^{\mathrm{fr}_V}$. \end{prop} \begin{proof} To construct the map of operads $\beta$, we first define $\beta(1): \mathscr{D}_V(1) \to \mathscr{D}_V^{\mathrm{fr}_V}(1)$. Given $e \in \mathscr{D}_V(1)$, we must produce $\beta(1)(e) = (f , l, \alpha) \in \mathscr{D}_V^{\mathrm{fr}_V}(1)$. Explicitly, \begin{equation*} e: \mathrm{D}(V) \to \mathrm{D}(V) \text{ is } e( \mathbf{v}) = a \mathbf{v} + \mathbf{b} \text{ for some } 0< a \leq 1 \text{ and } \mathbf{b} \in \mathrm{D}(V). \end{equation*} Define \begin{equation*} \begin{array}[h]{rcl} f: V \to V & \text{ to be } & f( \mathbf{v}) = a \mathbf{v} + \mathbf{b}; \\ l \in \bR_{\geq 0} & \text{ to be } & l = -\ln(a); \\ \alpha: \bR_{\geq 0} \to \mathrm{Hom}(\mathrm{T}V, V) & \text{ to be } & \alpha(t) = \begin{cases} \mathfrak{c}_{\exp(-t)\mathrm{I}} & \text{ for } t \leq l; \\ \mathfrak{c}_{a \mathrm{I}} & \text{ for } t > l.
\end{cases} \end{array} \end{equation*} For $\alpha$, $\mathrm{Hom}(\mathrm{T}V, V) \cong \mathrm{Map}(V, O(V))$, $\mathrm{I}$ is the unit element of $O(V)$ and $\mathfrak{c}$ is the constant map to the indicated element. It can be checked that $\beta(1)$ as defined is a map of $G$-monoids. Restricting $\beta(1)^k: \mathscr{D}_V(1)^k \to \mathscr{D}_V^{\mathrm{fr}_V}(1)^k$ to the subspace $\mathscr{D}_V(k) \subset \mathscr{D}_V(1)^k$, we get $\beta(k): \mathscr{D}_V(k) \to \mathscr{D}_V^{\mathrm{fr}_V}(k)$. Then $\beta$ is automatically a map of $G$-operads because $\mathscr{D}_V$ and $\mathscr{D}_V^{\mathrm{fr}_V}$ are suboperads of $\mathscr{D}_V(1)^-$ and $(\mathscr{D}_V^{\mathrm{fr}_V})^-$. The composite $\mathrm{ev}_0 \circ \beta: \mathscr{D}_V \to \mathscr{D}_V^{\mathrm{fr}_V} \to \mathscr{F}_V$ is a levelwise homotopy equivalence by \cite[Lemma 1.2]{GM17}. We have shown $\mathrm{ev}_0$ is a levelwise equivalence (\autoref{rem:levelwise-equi-functor}). So $\beta$ is also a levelwise homotopy equivalence. \end{proof} \section{Nonabelian Poincar\'e Duality for $V$-framed manifolds} \label{chap:NPD} Configuration spaces have scanning maps out of them. It turns out that equivariantly the scanning map is an equivalence on $G$-connected labels $X$. Since the factorization homology is built up simplicially by the configuration spaces, we can upgrade the scanning equivalence to what is known as the nonabelian Poincar\'e duality theorem. \subsection{Scanning map for $V$-framed manifolds} \label{sec:scanning} In this subsection we construct the scanning map, a natural transformation of right $\mathrm{D}^{\mathrm{fr}_{V}}_{V}$-functors: \begin{equation} \label{map:scanning} s: \mathrm{D}^{\mathrm{fr}_V}_{\myM}(-) \to \mathrm{Map}_{c}(M,\Sigma^V-). 
\end{equation} In \autoref{chap:appendix-scanning}, we compare our scanning map to the various existing constructions in the literature and utilize known results about equivariant scanning maps to give \autoref{thm:scanning-equi}, a key input to the nonabelian Poincar\'e duality theorem in \autoref{sec:NPD}. Assuming for a moment that the scanning map \autoref{map:scanning} has been constructed, the right $\mathrm{D}^{\mathrm{fr}_V}_V$-functor structure for $\mathrm{Map}_c(M,\Sigma^V-)$ is as follows: the scanning map for $M=V$ gives a map of monads $s: \mathrm{D}^{\mathrm{fr}_{V}}_V \to \Omega^V\Sigma^{V}$. It induces a natural map \begin{equation*} \Sigma^V\mathrm{D}^{\mathrm{fr}_V}_V \overset{\Sigma^Vs}{\longrightarrow } \Sigma^V\Omega^V\Sigma^V \overset{\text{ counit }}{ \longrightarrow } \Sigma^V. \end{equation*} Now we construct the scanning map. For any $G$-space $X$, recall that \begin{equation*} \mathrm{D}^{\mathrm{fr}_V}_{\myM} (X) = \coprod_{k \geq 0} \mathscr{D}_{\myM}^{\mathrm{fr}_{V}}(k) \times_{\Sigma_k} X^{k}/\sim, \end{equation*} where $\sim$ is the base point identification. Take an element $$P = [\bar{f}_1, \cdots, \bar{f}_k, x_1, \cdots, x_k] \in \mathscr{D}_{\myM}^{\mathrm{fr}_{V}}(k) \times_{\Sigma_k} X^{k}.$$ Here, each $\bar{f}_i = (f_i, \alpha_{i})$ consists of an embedding $f_i: V \to M$ and a homotopy $\alpha_i $ of two bundle maps $\mathrm{T}V \to V$, see \autoref{def:embedding}. We use only the embeddings $f_i$ to define an element $s_{X}(P) \in \mathrm{Map}_c(M,\Sigma^VX)$: \begin{equation} \label{eq:defnsX} s_{X}(P)(m) = \begin{cases} f_{i}^{-1}(m) \sm x_{i} & \text{ when $m \in M$ is in the image of some $f_i$;} \\ * & \text{ otherwise. } \end{cases} \end{equation} Notice that if $x_i$ is the base point, $f_i^{-1}(m) \sm x_i$ is the base point regardless of what $f_i$ is.
So passing to the quotient, \autoref{eq:defnsX} yields a well-defined map \begin{equation} \label{eq:sX} s_{X}: \mathrm{D}_{\myM}^{\mathrm{fr}_V}(X) \to \mathrm{Map}_{c}(M, \Sigma^VX). \end{equation} In particular, taking $X=S^0$, we get \begin{equation} \label{eq:9} s_{S^0}: \coprod_{k \geq 0} \mathscr{D}_{\myM}^{\mathrm{fr}_{V}}(k)/\Sigma_k \to \mathrm{Map}_{c}(M, S^V), \end{equation} and $s_X$ is simply a labeled version of it. A more categorical construction of the scanning map $s_X$, as the composition of the Pontryagin--Thom collapse map and a ``folding'' map $\vee_kS^V \times X^k \to \Sigma^VX$, is given in \cite[Section 9]{MZZ}. We use the following results of Rourke--Sanderson \cite{RS00}, which are proved using equivariant transversality. To translate from their context to ours, see \autoref{cor:scanning-equi} and \autoref{thm:RS}. \begin{thm} \label{thm:scanning-equi} The scanning map $s_{X}: \mathrm{D}^{\mathrm{fr}_V}_{\myM} X \to \mathrm{Map}_c(M, \Sigma^V X)$ is: \begin{enumerate} \item \label{item:scanning1} a weak $G$-equivalence if $X$ is $G$-connected, \item \label{item:scanning2} or a weak group completion if $V \cong W \oplus 1$ and $M \cong N \times \mathbb{R}$. Here, $W$ is an $(n-1)$-dimensional $G$-representation and $N$ is a $W$-framed compact manifold, so that $N \times \bR$ is $V$-framed. \end{enumerate} \end{thm} \subsection{Nonabelian Poincar\'e duality theorem} \label{sec:NPD} We have seen that the scanning map is an equivalence for $G$-connected labels $X$. Since the factorization homology is built up simplicially by the configuration spaces, we can upgrade the scanning equivalence to what is known as the nonabelian Poincar\'e duality theorem (NPD). The proof in this subsection follows the non-equivariant treatment by Miller \cite{Miller}. Let $A$ be a $\mathrm{D}^{\mathrm{fr}_V}_V$-algebra in $\Gtop$ throughout this subsection.
Assume that $A$ is non-degenerately based, meaning that the structure map $\mathscr{D}^{\mathrm{fr}_V}_V(0) = \mathrm{pt} \to A$ gives a non-degenerate base point of $A$. This is a mild assumption for homotopical purposes. We use the following $V$-fold delooping model of $A$. \begin{defn} \label{defn:BV} The $V$-fold delooping of $A$, denoted as $\mathrm{B}^VA$, is the monadic two-sided bar construction $\mathrm{B}(\Sigma^V, \mathrm{D}^{\mathrm{fr}_{V}}_V, A)$. \end{defn} \noindent Here, $\mathrm{B}_q(\Sigma^V, \mathrm{D}^{\mathrm{fr}_{V}}_V, A) = \Sigma^V(\mathrm{D}^{\mathrm{fr}_{V}}_V)^qA$. The first face map $\Sigma^V\mathrm{D}^{\mathrm{fr}_{V}}_V \to \Sigma^V$ is induced by the scanning map of monads $\mathrm{D}^{\mathrm{fr}_V}_V \to \Omega^V\Sigma^V$. The last face map $\mathrm{D}^{\mathrm{fr}_{V}}_V A \to A$ is the structure map of the algebra. The middle face maps and degeneracy maps are induced by the structure map $\mathrm{D}^{\mathrm{fr}_{V}}_V \mathrm{D}^{\mathrm{fr}_{V}}_V \to \mathrm{D}^{\mathrm{fr}_{V}}_V$ and the unit $\mathrm{Id} \to \mathrm{D}^{\mathrm{fr}_{V}}_V$ of the monad. \begin{rem} There is an equivalence of $G$-operads $\mathscr{D}_V \to \mathscr{D}^{\mathrm{fr}_V}_V $ from the little $V$-disk operad to the little $V$-framed disk operad. So a $ \mathrm{D}^{\mathrm{fr}_{V}}_V$-algebra restricts to a $\mathrm{D}_V$-algebra and there is an equivalence from the Guillou--May delooping \cite{GM17} to our delooping: $\mathrm{B}(\Sigma^V, \mathrm{D}_V, A) \to \mathrm{B}(\Sigma^V, \mathrm{D}^{\mathrm{fr}_{V}}_V, A)$. \end{rem} \begin{thm}[NPD] \label{thm:NPDV} Let $M$ be a $V$-framed manifold and $A$ be a $\mathrm{D}^{\mathrm{fr}_V}_V$-algebra in $\Gtop$. Then there is a $G$-map, which is a weak $G$-equivalence if $A$ is $G$-connected: \begin{equation*} \int_M A \equiv |B_{\bullet}(\mathrm{D}^{\mathrm{fr}_V}_{\myM} , \mathrm{D}^{\mathrm{fr}_V}_V, A)| \to \mathrm{Map}_*(M^+, \mathrm{B}^VA), \end{equation*} where $M^+$ is the one-point compactification of $M$.
\end{thm} \begin{proof} We will sketch the proof, assuming some lemmas that are proven in the remainder of this subsection. First, from \autoref{map:scanning}, we have a scanning map for each $q \geq 0$: \begin{equation*} \mathrm{D}^{\mathrm{fr}_V}_{\myM} (\mathrm{D}^{\mathrm{fr}_V}_V)^q A \to \mathrm{Map}_c (M, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{q} A). \end{equation*} They assemble to a simplicial scanning map, which is a levelwise weak $G$-equivalence as shown in \autoref{cor:simplicialScanning}: \begin{equation} \label{eq:18} \mathrm{B}(s, \mathrm{id}, \mathrm{id}): \mathrm{B}_{\bullet}(\mathrm{D}^{\mathrm{fr}_V}_{\myM} , \mathrm{D}^{\mathrm{fr}_V}_V, A) \to \mathrm{Map}_c (M, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A). \end{equation} One can identify the space of compactly supported maps with the space of based maps out of the one-point compactification: \begin{equation*} \mathrm{Map}_c (M, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A) \overset{\sim}{ \to } \mathrm{Map}_* (M^+, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A). \end{equation*} By the cofibrancy arguments in \autoref{thm:Reedy} and \autoref{lem:Reedy}, this map induces a weak $G$-equivalence on the geometric realization: \begin{equation*} \mathrm{B}(\mathrm{D}^{\mathrm{fr}_V}_{\myM} , \mathrm{D}^{\mathrm{fr}_V}_V, A) \to |\mathrm{Map}_* (M^+, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A)|. \end{equation*} Next, we change the order of the mapping space and the geometric realization. There is a natural map: \begin{equation*} | \mathrm{Map}_{*} (M^{+}, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A)| \to \mathrm{Map}_{*} (M^{+}, |\Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A|). \end{equation*} \autoref{thm:HM}, taking $X = M^+$ and $K_{\bullet} = \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A$, gives a sufficient connectivity condition for it to be a weak $G$-equivalence.
This connectivity condition is then checked in \autoref{lem:NPDV}. Finally, $|\Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A| = \mathrm{B}^VA$ by \autoref{defn:BV}. This finishes the proof of the theorem. \end{proof} \begin{rem} If we take $M=V$ in the theorem and use \autoref{prop:FHonV}, we get that $A \simeq \Omega^V\mathrm{B}^VA$ for a $G$-connected $\mathrm{E}_V$-algebra $A$. This recovers \cite[Theorem 1.14]{GM17} and justifies the definition of $\mathrm{B}^{V}A$. \end{rem} \subsection{Connectedness} \begin{defn} A $G$-space $X$ is $G$-connected if $X^H$ is connected for all subgroups $H\subgroup G$. \end{defn} To show that the scanning map is an equivalence in each simplicial level, we need: \begin{lem} \label{lem:Gconnected} If $X$ is $G$-connected, then $\mathrm{D}^{\mathrm{fr}_V}_V X$ is also $G$-connected. \end{lem} \begin{proof} By \autoref{cor:conf}, $\mathrm{D}^{\mathrm{fr}_V}_V X$ is $G$-homotopy equivalent to $F_VX$. It suffices to show that $F_VX$ is $G$-connected. Fix any subgroup $H\subgroup G$; we must show that $(F_VX)^H$ is connected. This is the space of $H$-equivariant unordered configurations on $V$ with based labels in $X$. Intuitively, the claim holds because the space of labels $X$ is $G$-connected, so that one can always move the labels of a configuration to the base point. Nevertheless, we give a proof here by carefully writing down the fixed points of $F_VX$ in terms of the fixed points of $\mathscr{F}_V(k)$ and $X$. We have: \begin{align*} (F_VX)^{H} & = (\coprod_{k \geq 0} F_V (k) \times_{\Sigma_k} X^{k} /\sim)^{H} = \coprod_{k \geq 0} (F_V (k) \times_{\Sigma_k} X^{k})^{H}/\sim_{H} \end{align*} Here, $\sim$ is the equivalence relation in \autoref{rmk:equi-relation} and $\sim_{H}$ is $\sim$ restricted to $H$-fixed points. Explicitly, both forget a point of the configuration whenever the corresponding label is the base point of $X$.
Notice that taking $H$-fixed points will not commute with $\approx$ in \autoref{defn:functor}, but commutes with $\sim$. This is because the $H$-action preserves the filtration and $\sim$ only identifies elements of different filtrations. Since the $\Sigma_k$-action is free on $F_V(k) \times X^k$ and commutes with the $G$-action, we have a principal $G$-$\Sigma_k$-bundle \begin{equation*} F_V(k) \times X^k \to F_V(k) \times_{\Sigma_k} X^k. \end{equation*} To get $H$-fixed points on the base space, we need to consider the $\Lambda_{\alpha}$-fixed points on the total space for all the subgroups $\Lambda_{\alpha} \subgroup G \times \Sigma_k$ that are the graphs of some group homomorphisms $\alpha: H \to \Sigma_k$. More precisely, by \autoref{thm:LM}, we have \begin{equation*} (F_V (k) \times_{\Sigma_k} X^{k})^{H} = \coprod_{ [\alpha: H \to \Sigma_k]} \Big((F_V(k) \times X^k)^{\Lambda_{\alpha}} /Z_{\Sigma_k}(\alpha)\Big). \end{equation*} Here, the coproduct is taken over $\Sigma_k$-conjugacy classes of group homomorphisms and $Z_{\Sigma_k}(\alpha)$ is the centralizer of the image of $\alpha$ in $\Sigma_k$. We would like to make the expression coordinate-free for $k$. A homomorphism $\alpha$ can be identified with an $H$-action on the set $\{1, \cdots, k\}$. For an $H$-set $S$, write $X^S = \mathrm{Map}(S,X)$ and $F_V(S) = \mathrm{Emb}(S, V)$. Then \begin{equation*} (F_V(k) \times X^k)^{\Lambda_{\alpha}} = ( F_V(S) \times X^{S})^H \text{ and } Z_{\Sigma_k}(\alpha)= \mathrm{Aut}_H(S). \end{equation*} So we have: \begin{equation*} (F_V (k) \times_{\Sigma_k} X^{k})^{H} = \coprod_{[S]: \text{iso classes of $H$-sets}, |S|=k} \Big( (F_V(S) \times X^{S})^H/\mathrm{Aut}_H(S) \Big). \end{equation*} If we take care of the base point identification, we end up with: \begin{equation} \label{eq:10} (F_VX)^H = \bigg( \coprod_{[S]: \text{iso classes of finite $H$-sets}} (F_V(S) \times X^S)^H / \mathrm{Aut}_H(S) \bigg) /\sim_H.
\end{equation} Suppose that the $H$-set $S$ breaks into orbits as $S = \amalg_i r_{i}(H/K_i)$ for $i=1,\cdots, s$, where the $K_{i}$'s are in distinct conjugacy classes of subgroups of $H$ and $r_i > 0$. Then each coproduct component is explicitly \begin{align*} (F_V(S) \times X^{S})^H/\mathrm{Aut}_H S & = (\mathrm{Emb}_{H}(S, V) \times \mathrm{Map}_H(S, X))/\mathrm{Aut}_H S \\ & = (\mathrm{Emb}_{H}(\amalg_{i} r_i (H/K_{i}), V) \times \prod_{i} (X^{K_i})^{r_{i}})/ \prod_i (W_H(K_i) \wr \Sigma_{r_i}). \end{align*} Since the $X^{K_i}$ are all connected, so are the spaces $\prod_{i} (X^{K_i})^{r_{i}}$. Each contains the base point of the labels $*=\prod_i\prod_{r_i} * \to \prod_i (X^{K_i})^{r_{i}}$. So after the gluing $\sim_{H}$, each component in \autoref{eq:10} is in the same component as the base point of $F_VX$. Thus $(F_VX)^{H}$ is connected. \end{proof} \begin{cor} \label{cor:simplicialScanning} The map $B_{\bullet}(\mathrm{D}^{\mathrm{fr}_V}_{\myM} , \mathrm{D}^{\mathrm{fr}_V}_V, A) \to \mathrm{Map}_c (M, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{\bullet} A)$ in \autoref{eq:18} is a levelwise weak $G$-equivalence of simplicial $G$-spaces if $A$ is $G$-connected. \end{cor} \begin{proof} This is a consequence of \autoref{thm:scanning-equi} and \autoref{lem:Gconnected}. \end{proof} For geometric realization, we have: \begin{thm}[Theorem 1.10 of \cite{MMO}] \label{thm:Reedy} A levelwise weak $G$-equivalence between Reedy cofibrant simplicial objects realizes to a weak $G$-equivalence. \end{thm} \subsection{Cofibrancy} We take care of the cofibrancy issues in this part, following details in \cite{MayGILS}. We first show that some functors preserve $G$-cofibrations. A reader willing to take this as a black box may skip directly to \autoref{lem:Reedysi}. The NDR data give a hands-on way to handle cofibrations.
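To fix ideas before the formal definition, here is a minimal example of such data (the choice of $(h,u)$ below is ours, for illustration only). With trivial $G$-action, the pair $(I, \{0\})$ is an NDR pair via
\begin{equation*}
u(t) = t, \qquad h_s(t) = (1-s)t,
\end{equation*}
since $u^{-1}(0) = \{0\}$, $h_0 = \mathrm{id}$, every $h_s$ fixes $0$, and $h_1$ carries all of $u^{-1}[0,1) = [0,1)$ into $\{0\}$; this exhibits the inclusion $\{0\} \to I$ as a cofibration.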
\begin{defn}[Definition A.1 of \cite{MayGILS}] \label{defn:NDR} A pair $(X,A)$ of $G$-spaces with $A \subset X$ is an NDR pair if there exists a $G$-invariant map $u : X \to I = [0,1]$ such that $A = u^{-1}(0)$ and a homotopy given by a map $h: I \to \mathrm{Map}_G(X,X)$ satisfying \begin{itemize} \item $h_0(x) = x $ for all $x \in X$; \item $h_t(a) = a$ for all $t \in I$ and $a \in A$; \item $h_1(x) \in A$ for all $x \in u^{-1}[0,1)$. \end{itemize} The pair $(h,u)$ is said to be a representation of $(X,A)$ as an NDR pair. A pair $(X,A)$ of based $G$-spaces is an NDR pair if it is an NDR pair of $G$-spaces with the $h_t$ being based maps for all $t \in I$. \end{defn} Such a pair gives a $G$-cofibration $A \to X$. The function $u$ gives an open neighborhood $U$ of $A$ by taking $U=u^{-1}[0,1)$. The function $h$ restricts on $I \times U$ to a neighborhood deformation retract of $A$ in $X$. We refer to $u$ as the neighborhood data and $h$ as the retract data. We have the following ``\emph{ad hoc} definition'' for a functor $F$ to preserve NDR pairs in a functorial way: \begin{defn}[Definition A.7 of \cite{MayGILS}] \label{defn:admissible} A functor $F: \Gtop \to \Gtop$ is admissible if for any representation $(h,u)$ of $(X,A)$ as an NDR pair, there exists a representation $(Fh,Fu)$ of $(FX,FA)$ as an NDR pair such that: \begin{enumerate}[(i)] \item \label{item:Fh} The map $Fh: I \to \mathrm{Map}_G(FX, FX)$ is determined by $(Fh)_t = F(h_{t})$. \item \label{item:robust} The map $Fu: FX \to [0,1]$ satisfies the following property: for any map $g:X \to X$ such that $ug(x)<1$ whenever $x \in X$ and $u(x)<1$, we have $(Fu)(Fg)(y)<1$ whenever $ y \in FX$ and $(Fu)(y)<1$. \end{enumerate} Similarly for functors $F: \Gtop_{*} \to \Gtop_{*}$. \end{defn} In plain words, the retract data $Fh$ for $(FX,FA)$ are dictated by applying the functor $F$ to $h$, but there is some room in choosing the neighborhood data $Fu$. Denote the open neighborhood of $FA$ in $FX$ by $U'=(Fu)^{-1}[0,1)$.
Condition \autoref{item:robust} says that $U'$ is a ``robust open neighborhood'' in the sense that a map of pairs $g: (X,U) \to (X,U)$ induces a map $Fg: (FX,U') \to (FX,U')$. \begin{rem} \label{rem:admissible} Suppose that $F$ sends inclusions to inclusions and that we have $(Fh,Fu)$ satisfying \autoref{item:Fh} and \autoref{item:robust}. \begin{itemize} \item In order for $(Fh,Fu)$ to be a representation of $(FX, FA)$ as an NDR pair, we only need to check \begin{equation*} (Fu)^{-1}(0) = FA, \ (Fu)^{-1}[0,1) \subset (Fh_1)^{-1}(FA). \end{equation*} \item Since we have $U \subset h_1^{-1}(A)$, we get $FU \subset F(h_1^{-1}A) \subset (Fh_1)^{-1}(FA)$. That is, the neighborhood $FU$ of $FA$ retracts to $FA$, but it may not be open. \end{itemize} \end{rem} Admissible functors obviously preserve cofibrations. The elaboration of the NDR data gives a way to easily verify that a functor is admissible, at least in the following cases: \begin{lem} \label{lem:NDR} Any functor $F$ associated to $\mathscr{F} \in \Lambda^{op}_{*}(\Gtop)$ is admissible. In particular, both $\mathrm{D}^{\mathrm{fr}_{V}}_V$ and $\mathrm{D}^{\mathrm{fr}_{V}}_{\myM}$ are admissible. The functors $\mathrm{Map}_c (M,-)$ and $\mathrm{Map}_{*}(M^+,-)$ are admissible. The functor $\Sigma^V$ sends NDR pairs to NDR pairs. \end{lem} \begin{proof} To show $F$ is admissible, it suffices to find the neighborhood data $Fu$ in each case. Let $\mathscr{F} \in \Lambda^{op}_{*}(\Gtop)$ be a unital $\Lambda$-sequence. The functor $F$ associated to $\mathscr{F}$ as defined in \autoref{defn:functor} sends $X \in \Gtop_{*}$ to $FX = \big(\sqcup_k\mathscr{F}(k) \times_{\Sigma_k} X^k\big)/\sim$. Define $Fu(c,x_1, \cdots, x_j) = \max_{i = 1, \cdots, j}u(x_{i})$ for $c \in \mathscr{F}(k)$ and $x_i \in X$. This is well-defined and $G$-equivariant. We check that $Fu$ satisfies \autoref{defn:admissible}. 
For \autoref{item:robust}, suppose we have $g: X \to X$ and $y= (c,x_1, \cdots, x_j) \in FX$ with $Fu(y) = \max_{i = 1, \cdots, j}u(x_{i}) <1.$ Then $$(Fu)(Fg)(y) = \max_{i = 1, \cdots, j}u(gx_{i}) <1.$$ To check the conditions in \autoref{rem:admissible}, we have $Fu(c,x_1, \cdots, x_j)=0$ if and only if $u(x_{i})=0$ for all $i$. This gives $(Fu)^{-1}(0) = FA$; $Fu(c,x_1, \cdots, x_j)<1$ if and only if $u(x_{i})<1$ for all $i$. This gives $(Fu)^{-1}[0,1) \subset FU \subset (Fh_1)^{-1}(FA)$. For $F = \mathrm{Map}_c (M,-)$, let $Fu(f) = \max_{m \in M} u(f(m))$ for $f \in \mathrm{Map}_c (M,X)$. This is well-defined since $f$ is compactly supported. $Fu$ is $G$-equivariant since $u$ is. We check that $Fu$ satisfies \autoref{defn:admissible}. For \autoref{item:robust}, suppose we have $g: X \to X$ and $f \in \mathrm{Map}_c(M,X)$ with $Fu(f) = \max_{m \in M} u(f(m)) <1$. Then $(Fu)(Fg)(f) = \max_{m \in M} u(gf(m)) <1$. For the conditions in \autoref{rem:admissible}, $Fu(f)=0$ if and only if $u(f(m))=0$ for all $m \in M$. This gives $(Fu)^{-1}(0) = \mathrm{Map}_c(M,A) = FA$; $Fu(f)<1$ if and only if $u(f(m))<1$ for all $m \in M$. This gives $(Fu)^{-1}[0,1) \subset FU \subset (Fh_1)^{-1}(FA)$. The same argument works for $F = \mathrm{Map}_{*}(M^+, -)$. The functor $F = \Sigma^V$ cannot be admissible in the sense of \autoref{defn:admissible}, because for the pair $(X,A) = (S^1, \mathrm{pt})$ and any NDR representation $(h,u)$ of it, $$(Fh_1)^{-1}(FA) = \Sigma^V(h_1^{-1}A)$$ does not contain an open neighborhood of the base point of $\Sigma^VX$, which leaves no room for $U'$ to exist. Nevertheless, using the fact that $(S^V, \infty)$ is an NDR pair, $(\Sigma^VX, \Sigma^VA)$ is still an NDR pair by a based version of \cite[Lemma A.3]{MayGILS}. \end{proof} \begin{defn}[Lemma 1.9 of \cite{MMO}] \label{lem:Reedysi} A simplicial $G$-space $X_{\bullet}$ is Reedy cofibrant if all degeneracy operators $s_i$ are $G$-cofibrations.
\end{defn} The following lemma shows that monadic bar constructions are Reedy cofibrant. \begin{lem}[adaptation of Proposition A.10 of \cite{MayGILS}] \label{lem:GILSA10} Let $\mathscr{C}$ be a reduced operad in $G$-spaces such that the unit map $\eta: \mathrm{pt} \to \mathscr{C}(1)$ gives a non-degenerate base point. Let $C$ be the reduced monad associated to $\mathscr{C}$. Let $A$ be a $C$-algebra in $\Gtop_{*}$ and $F: \Gtop_{*} \to \Gtop_{*}$ be a right-$C$-module functor. Suppose that $F$ sends NDR pairs to NDR pairs. Then $B_{\bullet}(F,C,A)$ is Reedy cofibrant. \end{lem} \begin{proof} We need to show that for any $n \geq 0$ and $0 \leq i \leq n$, the degeneracy map $${s^i_n = FC^i\eta_{C^{n-i}A}: FC^nA \to FC^{n+1}A}$$ is a $G$-cofibration. Write $X = C^{n-i}A$. By \autoref{lem:NDR}, $C$ sends NDR pairs to NDR pairs. Starting from the NDR pair $(A, \mathrm{pt})$ and applying this functor $(n-i)$ times, we get an NDR pair $(C^{n-i}A, \mathrm{pt}) = (X, \mathrm{pt})$. Together with the assumption that $\mathscr{C}(1)$ is non-degenerately based, we can show that $(CX, X)$ is an NDR pair, where $X$ is identified with its image under $\eta_X: X \to CX$ (see the proof of \cite[A.10]{MayGILS}). Applying $C$ another $i$ times and then $F$, we get the NDR pair $\big(FC^{i+1}X, FC^iX\big)=\big(FC^{n+1}A, FC^nA\big)$. Thus $s^i_n = FC^i\eta_X$ is a $G$-cofibration. \end{proof} \begin{cor} \label{lem:Reedy} Let $M,V,A$ be as in \autoref{thm:NPDV}. Then the following are Reedy cofibrant simplicial $G$-spaces: \begin{equation*} \mathrm{B}_{\bullet}(\mathrm{D}^{\mathrm{fr}_V}_{\myM} , \mathrm{D}^{\mathrm{fr}_V}_V, A), \ \mathrm{Map}_c (M, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V)^{\bullet} A) \text{ and }\mathrm{Map}_{*} (M^+, \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V)^{\bullet} A).
\end{equation*} \end{cor} \begin{proof} In \autoref{lem:GILSA10}, we take $C = \mathrm{D}^{\mathrm{fr}_{V}}_V $ and respectively $F = \mathrm{D}^{\mathrm{fr}_{V}}_{\myM}$, $ F = \mathrm{Map}_c (M, \Sigma^V -)$ or $ F = \mathrm{Map}_* (M^+, \Sigma^V -)$. By \autoref{lem:NDR}, each $F$ does send NDR pairs to NDR pairs. \end{proof} \subsection{Dimension} \label{sec:dimension} We start with an introduction to $G$-CW complexes and equivariant dimensions following \cite[I.3]{MayAlaska}. A $G$-CW complex $X$ is a union of $G$-spaces $X^n$ obtained by inductively gluing cells $G/K \times D^n$ for subgroups $K \subgroup G$ via $G$-maps along their boundaries $G/K \times S^{n-1}$ to the previous skeleton $X^{n-1}$. Conventionally, $X^{-1} = \varnothing$. We shall look at functions from the conjugacy classes of subgroups of $G$ to $\bZ_{\geq -1}$ and typically denote such a function by $\nu$. We say that a $G$-CW complex $X$ has dimension $\leq \nu$ if its cells of orbit type $G/H$ all have dimensions $\leq \nu(H)$, and that a $G$-space $X$ is $\nu$-connected if $X^H$ is $\nu(H)$-connected for all subgroups $H\subgroup G$, that is, $\pi_k(X^H) = 0$ for $k \leq \nu(H)$. We allow $\nu(H)=-1$ for the case $X^H=\varnothing$. For the purpose of induction in this paper, we use the following \emph{ad hoc} definition: \begin{defn} A based $G$-CW complex is a union of $G$-spaces $X^n$ obtained by inductively gluing cells to $X^{-1} = \mathrm{pt}$. We refer to the base point as $*$. And we do NOT count the point in $X^{-1}$ as a cell for a based $G$-CW complex, excluding it from counting the dimension as well. This is not the same as a based $G$-CW complex in \cite[Page 18]{MayAlaska}, where the base point is put in the 0-skeleton $X^0$. \end{defn} Fix a subgroup $H \subgroup G$. 
We have the double coset formula \begin{equation} \label{eq:27} G/K \cong \coprod_{1 \leq i \leq |H\backslash G/K|} H/K_i \text{ as $H$-sets,} \end{equation} where each $K_i = H \cap g_iKg_i^{-1}$ for some element $g_i \in G$. So a (based) $G$-CW structure on $X$ restricts to a (based) $H$-CW structure on the $H$-space $\mathrm{Res}^G_HX$. A function $\nu$ from the conjugacy classes of subgroups of $G$ to $\bZ_{\geq -1}$ induces a function from the conjugacy classes of subgroups of $H$ to $\bZ_{\geq -1}$, which we still call $\nu$. However, for $X$ of dimension $\leq \nu$, $\mathrm{Res}^G_H X$ may not be of dimension $\leq \nu$, as we see in \autoref{eq:27} that an $H/K_i$-cell can come from a $G/K$-cell for a larger group $K$. For a function $\nu$, we define the function $d_{\nu}$ to be $$d_{\nu}(K) = \max\limits_{K \subgroup L} \nu(L).$$ Then $\mathrm{Res}^G_H X$ is of dimension $\leq d_{\nu}$. \begin{rem} \label{rem:dim} More specifically, one can define the dimension of a (based) $G$-CW complex $X$ to be the minimum $\nu$ such that $X$ is of dimension $\leq \nu$. Suppose $X$ has dimension $\nu$. Then from \autoref{eq:27}, we get: \begin{enumerate}[(i)] \item The (based) $H$-CW complex $\mathrm{Res}^G_H X$ has dimension $\mu$, where \begin{equation*} \mu(K) = \max\limits_{\substack{K \subgroup L \\ K = L \cap H}} \nu(L). \end{equation*} We have $\mu \leq d_{\nu}$, and it can be strictly less. (For a trivial example, take $H = G$.) \item The (based) CW-complex $X^H$ has dimension $\mu(H) = d_{\nu}(H)$. (In the based case, we also exclude the base point from counting the dimension of $X^H$.) \end{enumerate} \end{rem} We define the dimension of a representation $V$ to be $\mathrm{dim}(V)(H)= \mathrm{dim}(V^H)$ for $H$ representing a conjugacy class of subgroups of $G$. Note that $d_{\mathrm{dim}(V)} = \mathrm{dim}(V)$. \medskip The goal of this subsection is to give a sufficient condition for the following map \autoref{eq:MapToSset} to be a weak $G$-equivalence.
Let $X$ be a finite based $G$-CW complex and $K_{\bullet}$ be a simplicial $G$-space. Then the levelwise evaluation is a $G$-map \begin{equation*} |\mathrm{Map}_{*}(X, K_{\bullet})| \sm X \cong |\mathrm{Map}_{*}(X, K_{\bullet}) \sm X| \to |K_{\bullet}|, \end{equation*} whose adjoint gives a $G$-map \begin{equation} \label{eq:MapToSset} |\mathrm{Map}_{*}(X, K_{\bullet})| \to \mathrm{Map}_{*}(X, |K_{\bullet}|). \end{equation} Non-equivariantly, it is one of the key steps in May's recognition principle \cite{MayGILS} to realize that \autoref{eq:MapToSset} is a weak equivalence when the dimension of $X$ is small compared to the connectivity of $K_{\bullet}$. May proved this using quasi-fibrations, a concept that goes back to Dold--Thom. Equivariantly, one has a similar result (see \autoref{thm:HM}). It is due to Hauschild and written down by Costenoble--Waner \cite{CW91}. \begin{defn} A map $p: Y \to W$ of spaces is a quasi-fibration if $p$ is onto and it induces an isomorphism on homotopy groups $\pi_{*}(Y,p^{-1}(w), y) \rightarrow \pi_{*}(W,w)$ for all $w \in W$ and $y \in p^{-1}(w)$. In other words, there is a long exact sequence on homotopy groups of the sequence $p^{-1}(w) \to Y \to W$ for any $w \in W$. \end{defn} Usually, the geometric realization of a levelwise fibration is not a fibration. The following theorem gives conditions under which it is a quasi-fibration, which is good enough for handling the homotopy groups. \begin{thm}(\cite[Theorem 12.7]{MayGILS}) \label{thm:quasifib} Let $p: E_{\bullet} \to B_{\bullet}$ be a levelwise Hurewicz fibration of pointed simplicial spaces such that $B_{\bullet}$ is Reedy cofibrant and $B_n$ is connected for all $n$. Set $F_{\bullet} = p^{-1}(*)$. Then the realization $|E_{\bullet}| \to |B_{\bullet}|$ is a quasi-fibration with fiber $|F_{\bullet}|$. \end{thm} We need the following: \begin{thm} \label{thm:HM} Let $G$ be a finite group.
If $X$ is a finite based $G$-CW complex of dimension $\leq \nu$ and $K_{\bullet}$ is a simplicial $G$-space such that for any $n$, $K_n$ is $d_\nu$-connected, then the natural map \autoref{eq:MapToSset} \begin{equation*} |\mathrm{Map}_{*}(X, K_{\bullet})| \to \mathrm{Map}_{*}(X, |K_{\bullet}|) \end{equation*} is a weak $G$-equivalence. \end{thm} \begin{proof} Let $* = X^{-1} \subset X^0 \subset X^1 \subset \cdots \subset X^{d_\nu(e)} = X$ be the $G$-CW skeleton of $X$. We use induction on $k$ to show that \begin{enumerate}[(i)] \item \label{item:induct2}$\mathrm{Map}_{*}(X^k, K_n)^H$ is connected for all $n$ and $H \subgroup G$. \item \label{item:induct1} $|\mathrm{Map}_{*}(X^k, K_{\bullet})|^H \to \mathrm{Map}_{*}(X^k, |K_{\bullet}|)^H$ is a weak equivalence for all $H \subgroup G$; \end{enumerate} The base case $k=-1$ is obvious. For the inductive case, take the cofiber sequence $$X^{k} \to X^{k+1} \to X^{k+1}/X^{k}$$ and map it into $K_{\bullet}$. We then apply \autoref{eq:MapToSset} and get the following commutative diagram: \begin{equation} \label{equ:quasifib} \begin{tikzcd} \vert \mathrm{Map}_{*}(X^{k+1}/X^k, K_{\bullet})\vert ^H \ar[d] \ar[r] & \vert \mathrm{Map}_{*}(X^{k+1}, K_{\bullet})\vert ^H \ar[d] \ar[r] & \vert \mathrm{Map}_{*}(X^k, K_{\bullet})\vert ^H \ar[d] \\ \mathrm{Map}_{*}(X^{k+1}/X^k, |K_{\bullet}|)^H \ar[r] & \mathrm{Map}_{*}(X^{k+1}, |K_{\bullet}|)^H \ar[r] &\mathrm{Map}_{*}(X^k, |K_{\bullet}|)^H \end{tikzcd} \end{equation} Since maps out of a cofiber sequence form a fiber sequence, we have a fiber sequence in the second row and a realization of the following levelwise fiber sequence in the first row: \begin{equation} \label{eq:quasifib2} \begin{tikzcd} \mathrm{Map}_{*}(X^{k+1}/X^k, K_{\bullet}) ^H \ar[r] & \mathrm{Map}_{*}(X^{k+1}, K_{\bullet}) ^H \ar[r] & \mathrm{Map}_{*}(X^k, K_{\bullet}) ^H \end{tikzcd} \end{equation} By the inductive hypothesis~\autoref{item:induct2} and \autoref{thm:quasifib}, it realizes to a quasi-fibration. 
We first show the inductive case of \autoref{item:induct2}. Suppose that we have $$X^{k+1}/X^k = \vee_i (G/K_i)_+ \sm S^{k+1},$$ where $\{K_i\}_i$ is a finite sequence of subgroups of $G$. This implies $\nu(K_i) \geq k+1$. From \autoref{eq:27}, we have $X^{k+1}/X^k \cong \vee_i \vee_j (H/K_{i,j})_+ \sm S^{k+1}$ as a space with $H$-action, where each $K_{i,j}$ is $G$-conjugate to a subgroup of $K_i$. Since $d_{\nu}(K_{i,j}) \geq \nu(K_i)$, we have $d_{\nu}(K_{i,j}) \geq k+1$ and the following space is connected by assumption: \begin{equation*} \mathrm{Map}_{*}(X^{k+1}/X^k, K_{n})^H = \prod_{i,j} \mathrm{Map}_{*}(S^{k+1}, K_n^{K_{i,j}}). \end{equation*} This space is the fiber in \autoref{eq:quasifib2}. The connectedness of the base space by the inductive hypothesis~\autoref{item:induct2} implies that of the total space. We next show the inductive case of \autoref{item:induct1}. Commuting geometric realization with finite products and fixed points, the left vertical map of \autoref{equ:quasifib} is a product of maps \begin{equation*} |\mathrm{Map}_{*}(S^{k+1}, K_{\bullet}^{K_{i,j}})| \to \mathrm{Map}_{*}(S^{k+1}, |K_{\bullet}^{K_{i,j}}|). \end{equation*} These maps are weak equivalences by \cite[Theorem 12.3]{MayGILS}. By the inductive hypothesis~\autoref{item:induct1}, the right vertical map is a weak equivalence. Comparing the long exact sequences of homotopy groups, this implies that the middle vertical map is also a weak equivalence. \end{proof} \begin{rem} Non-equivariantly, supposing that $\mathrm{dim}(X)=m$, Miller \cite[Cor 2.22]{Miller} observed that the theorem is also true if $K_n$ is only $(m-1)$-connected for all $n$, since the only thing that fails in the proof is \autoref{item:induct2} for $k=m$. Equivariantly, one needs \autoref{item:induct2} to hold for $k<d_\nu(e)$. So an equivariant stingy man can only relax the assumption to $K_n^H$ being $\min\{d_\nu(H), d_\nu(e)-1\}$-connected for all $n$ and $H$.
\end{rem} Just as a remark, the unbased version of \autoref{thm:HM} is the following: \begin{thm}(\cite[Lemma 5.4]{CW91}) \label{thm:CW} Let $G$ be a finite group. If $Y$ is a finite (unbased) $G$-CW complex and $K_{\bullet}$ is a simplicial $G$-space such that for any $n$, $K_n$ is $\mathrm{dim}(Y)$-connected, then the natural map \begin{equation*} |\mathrm{Map}(Y, K_{\bullet})| \to \mathrm{Map}(Y, |K_{\bullet}|) \end{equation*} is a weak $G$-equivalence. \end{thm} \noindent \autoref{thm:CW} is a consequence of \autoref{thm:HM} by taking $X = Y \sqcup \{*\}$ and using \autoref{rem:dim}. Note that by adopting the strange convention of the dimension of a based $G$-CW complex, the dimension of $Y$ is the same as $X$. On the other hand, we have the cofiber sequence $S^0 \to X_+ \to X$ for a based $G$-CW complex $X$ as well as the identification of $\mathrm{Map}_{*}(X_+, K_{\bullet})$ with $\mathrm{Map}(X, K_{\bullet})$. If $K_{\bullet}$ is $G$-connected, we can use the quasi-fibration technique and take $Y = X$ in \autoref{thm:CW} to deduce \autoref{thm:HM}. But there are also cases to apply \autoref{thm:HM} where $K_{\bullet}$ is not required to be $G$-connected, for example, when $X = (G/H)_+ \sm S^n$ for $H \neq G$. So \autoref{thm:HM} is slightly finer than \autoref{thm:CW}. \medskip Finally, we prepare the following results for the application of \autoref{thm:HM} in the setting of nonabelian Poincar\'e duality \autoref{thm:NPDV}. We need $G$-CW structures on $G$-manifolds $M$, which exist by work of Illman: \begin{thm}[Theorem 3.6 of \cite{Illman}] \label{lem:Illman} For a smooth $G$-manifold $M$ and a closed smooth $G$-submanifold $N$, there exists a smooth $G$-equivariant triangulation of $(M,N)$. \end{thm} \begin{lem} \label{lem:NPDV} Let $M$ be a $V$-framed manifold and $A$ be a $G$-connected space, then \begin{enumerate} \item\label{item:NPDV1} $M^{+}$ has the homotopy type of a $G$-CW complex of dimension $\leq \mathrm{dim}(V)$. 
\item\label{item:NPDV2} $K_{n} = \Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{n} A$ is $\mathrm{dim}(V)$-connected. \end{enumerate} \end{lem} \begin{proof} \autoref{item:NPDV1} Since $M$ is $V$-framed, the exponential maps give local coordinate charts of $M^H$ as a (possibly empty) manifold of dimension $\mathrm{dim}(V^H)$. If $M$ is compact, we take $W = M$; otherwise, we take a compact manifold $W$ with boundary such that $M$ is diffeomorphic to the interior of $W$. By \autoref{lem:Illman}, $(W,\partial W)$ has a $G$-equivariant triangulation. This gives a relative $G$-CW structure on $(W,\partial W)$ with relative cells of type $G/H$ of dimension $\leq \mathrm{dim}(V^{H})$. The quotient $W/\partial W$ gives the desired $G$-CW model for $M^+$. \autoref{item:NPDV2} For any subgroup $H\subgroup G$, we have $K_n^H = (\Sigma^V (\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{n} A)^H = \Sigma^{V^{H}} ((\mathrm{D}^{\mathrm{fr}_{V}}_V) ^{n} A)^{H}$. By \autoref{lem:Gconnected}, $((\mathrm{D}^{\mathrm{fr}_{V}}_V)^{n} A)^{H}$ is connected. So $K_n^H$ is $\mathrm{dim}(V^H)$-connected. Thus, $K_n$ is $\mathrm{dim}(V)$-connected. \end{proof}
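As a concrete illustration of the double coset formula \autoref{eq:27}, consider the (purely illustrative) choice $G = S_3$ and $H = K = \{e, (12)\}$, so that $|H\backslash G/K| = 2$. The coset $eK$ is fixed by $H$, while left multiplication by $(12)$ swaps the two remaining cosets, giving
\begin{equation*}
G/K \cong H/K_1 \sqcup H/K_2 \ \text{ as $H$-sets}, \qquad K_1 = H \cap eKe^{-1} = H, \quad K_2 = H \cap (13)K(13)^{-1} = \{e\},
\end{equation*}
consistent with $|G/K| = 1 + 2 = 3$. In particular, the free $H$-orbit $H/K_2$ arises from a $G$-cell with the strictly larger isotropy group $K$, which is exactly why $\mathrm{Res}^G_H X$ is bounded in dimension only by $d_{\nu}$ rather than by $\nu$.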
\section*{Appendix} Diverse, multi-modal behaviors generated by our models in different environments are best experienced and understood in a video. We invite you to visit \url{https://notmahi.github.io/bet} to see BeT{} models in action. \section{Environment and Dataset Details} \label{sec:appendix_environments_datasets} \input{appendix_sections/bet_env_datasets} \section{Implementation Details and Hyperparameters} \label{sec:appendix_implementation} \subsection{Baselines} \input{appendix_sections/bet_baselines} \subsection{Algorithm Details} \input{appendix_sections/bet_compute_details} \subsection{Pseudocode} \input{appendix_sections/bet_pseudocode} \subsection{Architecture and Implementation} \label{sec:appendix_arch} For our implementation, we used the MinGPT \cite{mingpt} repository almost as-is. We modified the input token conversion layer to a linear projection layer to handle our continuous, instead of discrete, inputs. Apart from that, we followed the MinGPT architecture quite closely, with successive attention layers and a configurable number of attention heads and embedding dimensions. Between the layers, we used dropout regularization as in \cite{mingpt}. For the smallest tasks, like point-mass environments, we used models with approximately $10^4$ parameters, which went up to around $10^6$ for the Kitchen environments. \section{Ablation studies} \label{sec:appendix_ablation} \input{appendix_sections/bet_ablations} \newpage \subsection{Ablating historical context} \label{sec:app:historical_context} One of the reasons we used transformer-based generative networks in our work is our hypothesis that having historical context helps our model perform better behavioral cloning. Our experiments are performed by using the same model and simply providing sequences of length one at training and test time. As we can see in Sec.~\ref{sec:ablations}, having some historical context helps our model learn much better.
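As a minimal NumPy sketch of the continuous-input modification from Sec.~\ref{sec:appendix_arch} (the dimensions and variable names here are illustrative, not taken from our implementation), the discrete token-embedding lookup is replaced by a learned linear projection of the observation vector:

```python
import numpy as np

rng = np.random.default_rng(0)
obs_dim, n_embd, seq_len, vocab_size = 10, 32, 8, 100

# Standard GPT input: an embedding lookup table indexed by discrete token ids.
embedding_table = rng.normal(size=(vocab_size, n_embd))
token_ids = rng.integers(0, vocab_size, size=seq_len)
discrete_tokens = embedding_table[token_ids]    # (seq_len, n_embd)

# Continuous-input variant: a linear projection of each observation vector.
W_proj = rng.normal(size=(obs_dim, n_embd))
obs_seq = rng.normal(size=(seq_len, obs_dim))
continuous_tokens = obs_seq @ W_proj            # (seq_len, n_embd)

# Either way, the transformer trunk receives a (seq_len, n_embd) token array.
assert discrete_tokens.shape == continuous_tokens.shape == (seq_len, n_embd)
```

The rest of the trunk is unchanged: both variants hand the transformer the same shaped sequence of embedding vectors.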
\subsection{Ablating the number of discrete bin centers, k} \label{sec:app:ablation:num_bins} Since BeT{} is trained with a sum of focal loss for the binning head and MSE loss for the offset head, the number of cluster centers presents a trade-off in the architecture. Concretely, as the number of bins goes up, the log-likelihood loss goes up but the MSE loss goes down. In Sec.~\ref{sec:ablations}, we showed that using only one bin ($k=1$) decreases the performance level of BeT{}. In this section, we present the plot of the variation in performance as the value of $k$ changes. \begin{figure}[H] \centering \includegraphics[width=\linewidth]{figs/k-sweep.png} \caption{Ablating the number of discrete bin centers $k$ for BeT{}. Reward is normalized with respect to the best performing model.} \label{fig:k_sweep} \vspace{-5pt} \end{figure} \subsection{Ablating the core model in the architecture} \label{sec:app:ablation:transformer} To ablate the core MinGPT transformer model in the architecture, we run three ablations, where we replace it respectively with a fully-connected multi-layer perceptron (MLP) network, a temporal convolution network, and an LSTM-based recurrent neural network. \paragraph{Multi-Layer Perceptrons:} Since MLP networks are generally not capable of taking historical context into consideration, we instead stack the last $t$ frames of observations to pass into the MLP network. Near the beginning of a trajectory, the stack of observations is zero-padded to $t$ frames. For the intermediate layers in the MLP, we keep the same width and number of layers as the corresponding MinGPT. \paragraph{Temporal Convolution:} Convolutions over the sequence length have been used in numerous prior works~\cite{oord2016wavenet,kalchbrenner2016neural,dauphin2017language,gehring2017convolutional,bai2018empirical} for sequence modeling. As a baseline, we implement such a temporal convolutional network to replace our MinGPT-based trunk.
We perform a temporal convolution over the same period of history that is provided to our transformer models. We found that the performance of the temporal convolution models is consistently lower than that of our MinGPT-based models. However, temporal convolutional networks are easier to fit on our data compared to RNNs. \paragraph{LSTM-based RNN:} Recurrent neural networks (RNNs) were the previous state-of-the-art for sequence modeling before transformer-based models. In this work, we compare against a Long Short-Term Memory (LSTM) \cite{gers1999learning} based RNN instead of a transformer-based trunk. We find that even with sufficient model capacity, the RNN-based model took significantly longer than our MinGPT model to fit the same dataset. Moreover, the quality of fit was worse, both at training and test time. Finally, in open-ended rollouts, this performance downgrade is reflected in a far lower success rate for completing tasks in the environment (Table~\ref{tab:ablation-table}). \section{Environments} \label{sec:appendix_environments} \paragraph{Gym Experiments} We derive all environments used in the experiments in this paper from OpenAI Gym \citep{brockman2016openai} MuJoCo tasks. Namely, we use the HalfCheetah-v3 and Hopper-v3 environments for 2d locomotion tasks and the Swimmer-v3 and Ant-v3 environments for the 3d locomotion tasks (see Figure~2 for the agent morphologies). Since we aim to train primitives, we want policies that perform well regardless of the global states of the agent (global position, etc.), depending only on local states (joint angles, etc.). Thus, we train our agents and each of our baselines with a maximum episode length of $100$ ($200$ for Swimmer only), while we test them with a maximum episode length of $500$ for the static or block environments and $200$ for the broken leg environments. As our projection function $\sigma$, we measured the $x$ velocity of the agent in 2d environments, and the $(x, y)$ velocity of the agent in 3d environments.
We made $\sigma(s)$ available to the intrinsic reward calculation functions of both our methods and the baselines. \paragraph{Block Experiments} For our block experiment set, we implemented the blocks as immovable spheres of radius $3$ at a distance $10$ from the origin. We dynamically added $40$ blocks at environment creation, and deleted them with the MuJoCo interface available in Gym. The blocks were all added before the agent took the first step in the environment, and removed over the agent's lifetime as described in Section~4.3. The blocks were always removed counter-clockwise, following the trajectory of $(\cos \frac{2 \pi t}{T}, \sin \frac{2 \pi t}{T})$ over $t \in [0, T]$, where $t$ is the current timestep and $T$ is the total number of training timesteps. \paragraph{Broken Leg Experiments} For our broken leg experiment set, we implemented a broken leg as a leg where no actions have any effect. We switch which leg is broken every 1M steps, and train all skills for a total of 10M steps in both Off-DADS and BeT{}. \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{iclr2022/figs/disk_broken_legs_appendix.pdf} \caption{Skills learned by BeT{}, evaluated with each of the legs broken. The legs are numbered such that the final leg is numbered \#4.} \label{fig:disk_all_broken} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=\linewidth]{iclr2022/figs/dads_broken_legs_appendix.png} \caption{Skills learned by Off-DADS at the end of 10M steps, evaluated with each of the legs broken. The legs are numbered such that the final leg is numbered \#4.} \label{fig:dads_all_broken} \end{figure} \paragraph{Hierarchical Experiments} For the hierarchical environments, we use the Ant-v3 environment in a goal-conditioned manner. The goals are sampled uniformly from $[-15, 15]^2$, and the hierarchical agent can take $100$ steps to get as close to the goal as possible.
At each of its steps, the hierarchical agent chooses a skill, which is then executed for 10 timesteps in the underlying environment. So, in total, the agent has $1000$ timesteps to reach the goal. At every timestep, the agent is given a dense reward of $-\|x - g\|_2$, where $x$ is the current location of the agent, and $g$ is the location of the goal. On each step, the hierarchical agent gets the sum of the goal-conditioned rewards over the 10 timesteps in the underlying environment. All the hierarchical agents were trained with the \texttt{stable-baselines3} package \citep{stable-baselines3}. We used their default PPO agent for all the downstream sets of skills, and trained the hierarchical agent for a total of $500\,000$ environment steps. \section{Further Mathematical Details} \label{sec:appendix_math} \subsection{Expansion and Derivation of Objectives} The objective function ${\mathcal{F}} (\theta)$, as defined in Equation~1, is the source from which we derive our incremental objective function, reproduced here. \begin{equation*} {\mathcal{F}}(\theta) = I(S;Z) + \mathcal{H}(A|S,Z) \end{equation*} We can expand the first term in Equation~1 as \begin{align*} I(S;Z) &\equiv \mathcal{H}(S) - \mathcal{H}(S\mid Z) \end{align*} by definition of mutual information.
Now, once we assume $Z$ is a discrete variable, the second part of this equation becomes \begin{align*} \mathcal H(S\mid Z) &\equiv \sum_{z_m} p(z_m) \mathcal H(S \mid Z = z_m) \\ &= \expect_{z_m \sim p(z_m)} \left [ \mathcal H(S \mid Z = z_m) \right ] \end{align*} Thus we have \begin{align*} I(S; Z) &= \mathcal{H}(S) - \expect_{z_m \sim p(z_m)} \left [ \mathcal H(S \mid Z = z_m) \right ] \\ &= \expect_{z_m \sim p(z_m)} \left [ \mathcal{H}(S) - \mathcal H(S \mid Z = z_m) \right ] \end{align*} But the term inside the expectation is the definition of information gain (not to be confused with KL divergence), defined by \begin{align*} IG(S; Z = z_m) &\equiv \mathcal{H}(S) - \mathcal H(S \mid Z = z_m) \end{align*} Thus, we arrive at \begin{align*} I(S; Z) &= \expect_{z_m \sim p(z_m)} \left [ IG(S; Z = z_m) \right ] \end{align*} Similarly, by definition of conditional entropy, we can expand the second term of Equation~1: \begin{align*} \mathcal H(A\mid S, Z) &\equiv \sum_{z_m} p(z_m) \mathcal H(A \mid S, Z = z_m) \\ &= \expect_{z_m \sim p(z_m)} \left [ \mathcal H(A \mid S, Z = z_m) \right ] \end{align*} Thus, we can convert Equation~1 into \begin{align*} {\mathcal{F}}(\theta) &= I(S;Z) + \mathcal{H}(A|S,Z) \\ &= \expect_{z_m \sim p(z_m)} \left [ IG(S; Z = z_m) \right ] + \expect_{z_m \sim p(z_m)} \left [ \mathcal H(A \mid S, Z = z_m) \right ] \\ &= \expect_{z_m \sim p(z_m)} \left [ IG(S; Z = z_m) + \mathcal H(A \mid S, Z = z_m) \right ] \end{align*} If we assume a uniform prior over our skills, which is another assumption made by \cite{diversity}, and also assume we are trying to learn $M$ skills in total, we can further expand Equation~1 into: \begin{align*} {\mathcal{F}}(\theta) &= \frac{1}{M} \sum _{m = 1}^M \left [ IG(S; Z = z_m) + \mathcal H(A \mid S, Z = z_m) \right ] \end{align*} Ignoring the number of skills term (which is constant over a single skill's learning period) gives us exactly Equation~3, which was: \begin{align*} {\mathcal{F}}(\theta) &:=
\sum _{m = 1}^M \left [ IG(S; Z = z_m) + \mathcal H(A \mid S, Z = z_m) \right ] \end{align*} Now, under our framework, we assume that skills $1, 2, \cdots, M-1$ have been learned and fixed, and we are formulating an objective for the $M$th skill. As a result, we can ignore the associated Information Gain and action distribution entropy terms from skills $1, 2, \cdots, M-1$, and simplify our objective to be: \begin{align*} {\mathcal{F}}(\theta) &:= IG(S; Z = z_M) + \mathcal H(A \mid S, Z = z_M) \\ &= \mathcal H(S) - \mathcal H(S \mid Z = z_M) + \mathcal H(A \mid S, Z = z_M) \end{align*} which is exactly the same objective we defined in Equation~5. \subsection{Point-based Nearest Neighbor Entropy Estimation} In our work, we use an alternative approach, first shown by~\citet{singh2003nearest}, to estimate the entropy of a set of points. This method gives us a non-parametric Nearest Neighbor (NN) based entropy estimator: \begin{align*} \hat{\mathbb{H}}_{k,{\bm{X}}}(p) &= -\frac{1}{N}\sum_{i=1}^N\ln\frac{k\Gamma(q/2+1)}{N \pi^{q/2} R_{i,k,{\bm{X}}}^q } + C_k, \end{align*} where $\Gamma$ is the gamma function, $C_k=\ln k -\frac{\Gamma'(k)}{\Gamma(k)}$ is the bias correction term, and $R_{i,k,{\bm{X}}}=\|{\bm{x}}_i - \mathrm{NN}_{k,{\bm{X}}}({\bm{x}}_i)\|$ is the Euclidean distance between ${\bm{x}}_i$ and its $k^{\text{th}}$ nearest neighbor from the dataset ${\bm{X}}$, defined as $\mathrm{NN}_{k,{\bm{X}}}({\bm{x}}_i)$. The term inside the sum can be simplified as \begin{align*} \ln\frac{k\Gamma(q/2+1)}{N \pi^{q/2} R_{i,k,{\bm{X}}}^q } &= \ln\frac{k\Gamma(q/2+1)}{N \pi^{q/2} } - \ln R_{i,k,{\bm{X}}}^q \\&= \ln\frac{k\Gamma(q/2+1)}{N \pi^{q/2} } - q\ln R_{i,k,{\bm{X}}}\\ &= \ln\frac{k\Gamma(q/2+1)}{N \pi^{q/2} } - q\ln \| {\bm{x}}_i - \mathrm{NN}_{k,{\bm{X}}}({\bm{x}}_i)\|. \end{align*} Here, $\ln\dfrac{k\Gamma(q/2+1)}{N \pi^{q/2} }$ is a constant term independent of ${\bm{x}}_i$.
If we ignore this constant term and the bias-correction term $C_k$, we get \begin{align*} \label{eqn:entropy} \hat{\mathbb{H}}_{k,{\bm{X}}}(p) &\propto \sum_{i=1}^N \ln \| {\bm{x}}_i - \mathrm{NN}_{k,{\bm{X}}}({\bm{x}}_i)\|, \end{align*} which is the formulation we use in this work. This estimator is shown to be asymptotically unbiased and consistent in~\citet{singh2003nearest}. \subsection{Hausdorff Distance}\label{app:hausdorff} In our work, to compare between two algorithms learning skills on the same environment, we used a metric based on the Hausdorff distance. The Hausdorff distance, also known as the Hausdorff metric or the Pompeiu–Hausdorff distance, is a metric that measures the distance between two subsets of a metric space. Informally, we think of two sets in a metric space as close in the Hausdorff distance if every point of either set is close to some point of the other set. The Hausdorff distance is the longest distance one can be forced to travel when a point in one of the two sets is chosen adversarially, from which one then has to travel to the other set. Put simply, it is the greatest of all the distances from a point in one set to the nearest point in the other. Mathematically, given two subsets $A$ and $B$ of a metric space $(M, d)$, we define the Hausdorff distance $d_H{(A, B)}$ as: \[d_H(A, B) = \max\left \{ \sup_{a\in A} d(a, B), \sup_{b\in B} d(b, A)\right \}\] where $\sup$ represents the supremum, $d(x, Y) = \inf_{y\in Y}d(x, y)$ is the distance between a point and a set, and $\inf$ represents the infimum. Given a set of skills, we calculate the diversity of one skill over all other skills by calculating the Hausdorff distance between that skill's trajectory-end $(x, y)$ locations and the terminal $(x, y)$ locations of all other trajectories. Intuitively, a skill has a high Hausdorff distance if the end states it generates are far away from other skills' endpoints.
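For finite sets of trajectory endpoints, which is the only case we need, $d_H$ reduces to maxima and minima over a pairwise distance matrix. A minimal NumPy sketch, with purely illustrative point sets:

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite point sets A and B."""
    # D[i, j] is the Euclidean distance between A[i] and B[j].
    D = np.linalg.norm(A[:, None] - B[None], axis=-1)
    # sup_a d(a, B) and sup_b d(b, A); take the larger of the two.
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 3.0]])
assert hausdorff(A, B) == hausdorff(B, A) == 3.0
```

Note that the metric is symmetric by construction, since it takes the maximum of the two directed distances.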
Similarly, a high average Hausdorff distance for skills from an algorithm means that the algorithm's generated skills on average have a high distance from each other, which is a desirable property for an algorithm that needs to generate diverse skills. \section{Reinforcement learning} \label{sec:appendix_rl} In our continuous-control RL setting, an agent receives a state observation $s_t \in \mathcal{S}$ from the environment and applies an action $a_t \in \mathcal{A}$ according to a policy $\pi$. In our setting, where the policy is stochastic, the policy returns a distribution $\pi(s_t)$, and we sample a concrete action $a_t \sim \pi(s_t)$. The environment returns a reward $r_t$ for every action. The goal of the agent is to maximize the expected cumulative discounted reward $\mathbb{E}_{s_{0:T},a_{0:T-1},r_{0:T-1}}\left[\sum_{t=0}^{T-1} \gamma^t r_t\right]$ for discount factor $\gamma$ and horizon length $T$. On-policy RL~\citep{schulman2015trust,kakade2002natural,williams1992simple} optimizes $\pi$ by iterating between data collection and policy updates. It hence requires new on-policy data every iteration, which is expensive to obtain. On the other hand, off-policy reinforcement learning retains past experiences in a replay buffer and is able to re-use past samples. Thus, in practice, off-policy algorithms have been found to achieve better sample efficiency \citep{lillicrap2015continuous,haarnoja2018soft}. For our experiments we use SAC~\citep{haarnoja2018soft} as our base RL optimizer due to its implicit maximization of action distribution entropy, its sample efficiency, and fair comparisons with baselines that also build on top of SAC. However, we note that our framework is compatible with any standard off-policy RL algorithm that maximizes the entropy of the action distribution $\pi(\cdot)$ either implicitly or explicitly.
\paragraph{Soft Actor-Critic} The Soft Actor-Critic (SAC)~\citep{haarnoja2018soft} is an off-policy model-free RL algorithm that instantiates an actor-critic framework by learning a state-action value function $Q_\theta$, a stochastic policy $\pi_\theta$ and a temperature $\alpha$ over a discounted infinite-horizon MDP $({\mathcal{X}}, {\mathcal{A}}, P, R, \gamma, d_0)$ by optimizing a $\gamma$-discounted maximum-entropy objective~\citep{ziebart2008maximum}. With a slight abuse of notation, we denote the learnable parameters of both the actor and the critic by $\theta$. SAC parametrizes the actor policy $\pi_\theta({\bm{a}}_t|{\bm{x}}_t)$ via a $\mathrm{tanh}$-Gaussian defined as ${\bm{a}}_t = \mathrm{tanh}(\mu_\theta({\bm{x}}_t)+ \sigma_\theta({\bm{x}}_t) \epsilon)$, where $\epsilon \sim {\mathcal{N}}(0, 1)$, and $\mu_\theta$ and $\sigma_\theta$ are the parametric mean and standard deviation. SAC's critic $Q_\theta({\bm{x}}_t, {\bm{a}}_t)$ is parametrized as an MLP neural network. The policy evaluation step learns the critic $Q_\theta({\bm{x}}_t, {\bm{a}}_t)$ network by optimizing the one-step soft Bellman residual: \begin{align*} {\mathcal{L}}_Q({\mathcal{D}}) &= \mathbb{E}_{\substack{( {\bm{x}}_t,{\bm{a}}_t, {\bm{x}}_{t+1}) \sim {\mathcal{D}} \\ {\bm{a}}_{t+1} \sim \pi(\cdot|{\bm{x}}_{t+1})}}[(Q_\theta({\bm{x}}_t, {\bm{a}}_t) - y_t)^2]\text{ and}\\ y_t &= R({\bm{x}}_t, {\bm{a}}_t) + \gamma [Q_{\theta'}({\bm{x}}_{t+1}, {\bm{a}}_{t+1}) - \alpha \log \pi_\theta({\bm{a}}_{t+1}|{\bm{x}}_{t+1})], \end{align*} where ${\mathcal{D}}$ is a replay buffer of transitions, and $\theta'$ is an exponential moving average of $\theta$ as done in~\citep{lillicrap2015continuous}. SAC uses clipped double-Q learning~\citep{van2016deep, fujimoto2018addressing}, which we omit from our notation for simplicity but employ in practice.
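To make the target $y_t$ above concrete, here is a minimal NumPy sketch of the critic target with the clipped double-Q minimum written out explicitly (all batch quantities here are random placeholders, not values from our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, alpha, batch = 0.99, 0.2, 4

r = rng.normal(size=batch)      # rewards R(x_t, a_t)
q1 = rng.normal(size=batch)     # target critic 1 at (x_{t+1}, a_{t+1})
q2 = rng.normal(size=batch)     # target critic 2 (clipped double-Q pair)
logp = rng.normal(size=batch)   # log pi_theta(a_{t+1} | x_{t+1})

# Soft Bellman target: y_t = r_t + gamma * (min(Q1, Q2) - alpha * log pi).
y = r + gamma * (np.minimum(q1, q2) - alpha * logp)

# The critic loss is the mean squared residual against this target.
q_pred = rng.normal(size=batch)
critic_loss = np.mean((q_pred - y) ** 2)
assert y.shape == (batch,) and critic_loss >= 0.0
```

The $-\alpha \log \pi$ term is what injects the entropy bonus into the value target; taking the elementwise minimum of the two target critics is the clipped double-Q correction.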
The policy improvement step then fits the actor $\pi_\theta({\bm{a}}_t|{\bm{x}}_t)$ network by optimizing the following objective: \begin{align*} {\mathcal{L}}_\pi({\mathcal{D}}) &= \mathbb{E}_{{\bm{x}}_t \sim {\mathcal{D}}}[ D_{\mathrm{KL}}(\pi_\theta(\cdot|{\bm{x}}_t) || \exp\{\frac{1}{\alpha}Q_\theta({\bm{x}}_t, \cdot)\})]. \end{align*} Finally, the temperature $\alpha$ is learned with the loss: \begin{align*} {\mathcal{L}}_\alpha({\mathcal{D}}) &= \mathbb{E}_{\substack{{\bm{x}}_t \sim {\mathcal{D}} \\ {\bm{a}}_t \sim \pi_\theta(\cdot|{\bm{x}}_t)}}[-\alpha \log \pi_\theta({\bm{a}}_t|{\bm{x}}_t) - \alpha \bar{\mathcal{H}}], \end{align*} where $\bar{\mathcal{H}} \in \mathbb{R}$ is the target entropy hyper-parameter that the policy tries to match, which in practice is set to $\bar{\mathcal{H}}=-|{\mathcal{A}}|$. The overall optimization objective of SAC is: \begin{align*} \mathcal{L}_\mathrm{SAC}({\mathcal{D}}) &= {\mathcal{L}}_\pi({\mathcal{D}}) + {\mathcal{L}}_Q({\mathcal{D}}) + {\mathcal{L}}_\alpha({\mathcal{D}}). \end{align*} \section{Discussions} \label{sec:limit} In this work, we introduce Behavior Transformers (BeT{}), which uses a transformer-decoder based backbone with a discrete action mode predictor coupled with a continuous action offset corrector to model continuous action sequences from open-ended, multi-modal demonstrations. While BeT{} shows promise, the truly exciting use of it would be to learn diverse behavior from human demonstrations or interactions in the real world. In parallel, extracting a particular, unimodal behavior policy from BeT{} during online interactions, either by distilling the model or by generating the right `prompts' \citep{reynolds2021prompt}, would make BeT{} tremendously useful as a prior for online Reinforcement Learning. \section{Introduction} Creating agents that can behave intelligently in complex environments has been a longstanding problem in machine learning.
Although Reinforcement Learning~(RL) has made significant advances in behavior learning, its success comes at the cost of high sample complexity~\cite{mnih2015human,duan2016benchmarking,akkaya2019solving}. Without priors on how to behave, state-of-the-art RL methods require online interactions on the order of 1-10M `reward-labeled' samples for benchmark control tasks~\cite{yarats2021mastering}. This is in stark contrast to vision and language tasks, where pretrained models and data-driven priors are the norm~\cite{devlin2018bert,brown2020language,byol,bardes2021vicreg}, which allows for efficient downstream task solving. \begin{figure} \centering \includegraphics[width=1.0\linewidth]{figs/intro_pdf.pdf} \caption{Unconditional rollouts from BeT{} models trained on multi-modal demonstrations in the CARLA, Block push, and Franka Kitchen environments. Due to the multi-modal architecture of BeT{}, successive rollouts in the same environment can achieve different goals, or the same goals in different ways. } \label{fig:intro_rollouts} \end{figure} So how do we learn behavioral priors from pre-collected data? One option is offline RL~\cite{levine2020offline}, where offline datasets coupled with conservative policy optimization can learn task-specific behaviors. However, such methods have yet to tackle domains where task-specific reward labels are not present. Without explicit reward labels, imitation learning, particularly behavior cloning, is a more fitting option~\cite{pomerleau1989alvinn,bojarski2016end,torabi2018behavioral}. Here, given behavior data $\mathcal{D} \equiv \{s_t,a_t\}$, behavior models can be trained to predict actions $f_{\theta}(s_t) \rightarrow a_t$ through supervised learning. When demonstration data is plentiful, such approaches have found impressive success in a variety of domains from self-driving~\cite{pomerleau1989alvinn,codevilla2019exploring} to robotic manipulation~\cite{zhang2018deep,pari2021surprising}.
Importantly, it requires neither online interactions nor reward labels to optimize behaviors. However, state-of-the-art behavior cloning methods often make a fundamental assumption -- that the data is drawn from a unimodal expert solving a single task. This assumption is often baked into the architecture design, such as using a Gaussian prior. On the other hand, natural pre-collected data is sub-optimal, noisy, and contains multiple modes of behavior, all entangled in a single dataset. This distributionally multi-modal experience is most prominent in human demonstrations. Not only do we perform a large variety of behaviors every day, but our personal biases also result in significant multi-modality even for the same behavior~\cite{grauman2021ego4d,lynch2020learning}. This raises an important question: How do we train models that can ``clone'' multi-modal behavior data? In this work, we present Behavior Transformers (BeT{}), a new method for learning behaviors from rich, distributionally multi-modal data. BeT{} is based on three key insights. First, we leverage the context-based multi-token prediction ability of transformer-based sequence models~\cite{vaswani2017attention} to predict multi-modal actions. Second, since transformer-based sequence models are naturally suited to predicting discrete classes, we cluster continuous actions into discrete bins using k-means~\cite{macqueen1967some}. This allows us to model high-dimensional, continuous multi-modal action distributions as categorical distributions without learning complicated generative models~\cite{kingma2013auto,dinh2016density}. Third, to ensure that the actions sampled from BeT{} are useful for online rollouts, we concurrently learn a residual action corrector to produce continuous actions for a specific sampled action bin.
We experimentally evaluate BeT{} on five datasets ranging from simple diagnostic toy datasets to complex datasets that include simulated robotic pushing~\cite{florence2021implicit}, sequential task solving in kitchen environments~\cite{gupta2019relay}, and self-driving with visual observations in CARLA~\cite{carla}. The two main findings from these experiments can be summarized as: \begin{enumerate}[leftmargin=0.05\textwidth] \item On multi-modal datasets, BeT{} achieves significantly higher performance during online rollouts compared to prior behavior modeling methods. \item Rather than collapsing or latching onto one mode, BeT{} is able to cover the major modes present in the training behavior datasets. Unconditional rollouts from this model can be seen in Fig.~\ref{fig:intro_rollouts}. \end{enumerate} All of our datasets, code, and trained models will be made publicly available. \section{Behavior Transformers} Given a dataset of continuous observation and action pairs $\mathcal D \equiv \{(o, a)\} \subset \mathcal O \times \mathcal A$ that contains behaviors we are interested in, our goal is to learn a behavior policy $\pi : \mathcal O \mapsto \mathcal A$ that models this data without any online interactions with the environment or reward labels. This setup follows the Behavior Cloning formulation, where policies are trained to model demonstrations from expert rollouts. Often, such policies are chosen from a hypothesis class parametrized by a parameter set $\theta$. Following this convention, our objective is to find the parameter $\theta$ that maximizes the probability of the observed data: \[\theta^* := \argmax_\theta \prod_t \mathbb P(a_t \mid o_t; \theta).\] When the model class is restricted to unimodal isotropic Gaussians, this maximum likelihood estimation problem leads to minimizing the Mean Squared Error (MSE), $\sum_t \| a_t - \pi(o_t; \theta) \|^2$.
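To see concretely why the Gaussian restriction is limiting, consider a toy sketch (ours, not the paper's code) in which demonstrations from the same observation are bimodal. The MSE-optimal constant prediction is the empirical mean of the targets, an action no demonstrator ever took:

```python
import numpy as np

# Bimodal expert actions observed at the same state: half go +1, half go -1.
actions = np.array([1.0, -1.0, 1.0, -1.0])

# Under the MSE objective, the best constant prediction is the empirical mean,
# which averages the two modes instead of picking one.
best_a = actions.mean()  # 0.0

# The mean beats either mode under MSE, even though it never appears in the data.
mse = lambda a: np.mean((actions - a) ** 2)
```

Here `mse(0.0)` is $1$ while `mse(1.0)` and `mse(-1.0)` are both $2$, so a unimodal Gaussian MLE is pushed toward the averaged action.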
\paragraph{Limitations of traditional MSE-based BC:} \begin{wrapfigure}{r}{0.6\textwidth} \vspace{-10pt} \centering \includegraphics[width=0.58\textwidth]{figs/whats_multimodal_tiny.pdf} \caption{Comparison between a regular MSE-based BC model and a BeT{} model that can capture multi-modal distributions. The MSE-BC model outputs the $0$ action to minimize MSE.} \label{fig:multipath_intro} \vspace{-5pt} \end{wrapfigure} While MSE-based BC has been able to solve a variety of tasks \cite{bojarski2016end,torabi2018behavioral}, it assumes that the data distribution is unimodal. Clean data from an expert demonstrator solving a particular task in a particular way satisfies this assumption, but pre-collected intelligent behavior often may not~\cite{lynch2020learning, gupta2019relay}. While more recent behavior generation models have sought to address this problem, they often require complex generative models \citep{singh2020parrot}, an exponential number of bins for actions \citep{mandi2021towards}, complicated training schemes \cite{spirl}, or time-consuming test-time optimization \citep{florence2021implicit}. An experimental analysis of some of these prior works is presented in Section~\ref{sec:results}. \begin{figure}[t!] \centering \includegraphics[width=\linewidth]{figs/bet_arch_semibold_2.pdf} \caption{Architecture of Behavior Transformer. (A) The continuous action binning using the k-means algorithm that lets BeT{} split every action into a discrete bin and a continuous offset, and later combine them into one full action. (B) Training BeT{} using demonstrations offline; each ground truth action provides a ground truth bin and residual action, which is used to train the minGPT trunk with its binning and action offset heads.
(C) Rollouts from BeT{} at test time, where it first chooses a bin and then picks the corresponding offset to reconstruct a continuous action.} \label{fig:arch} \end{figure} \paragraph{Overview of Behavior Transformers (BeT{}):} We address two critical assumptions in regular BC. First, we relax the assumption that the behavior we are cloning is purely Markovian, and instead model $P(a_t \mid o_t, o_{t-1}, \cdots, o_{t-h+1})$ for some horizon $h$. Second, instead of assuming that actions are generated by a unimodal action distribution, we model our action distribution as a mixture of Gaussians. However, unlike previous Mixture Density Network (MDN)-style efforts, whose limitations have been explored in \citet{florence2021implicit}, we do not explicitly predict mode centers, which significantly improves our modeling capacity. To operationalize these two features in a single behavior model, we make use of transformers since (a) they are effective in utilizing prior observational history, and (b) their architecture is naturally suited to outputting multi-modal tokens. \subsection{Action discretization for distribution learning} Although transformers have become standard as a backbone for sequence-to-sequence models~\cite{devlin2018bert,brown2020language}, they are designed to process discrete tokens and not continuous values. In fact, modeling multi-modal distributions of high-dimensional continuous variables in a tractable manner is in itself a challenging problem, especially if we want the trained behavior model to cover the modes present in the dataset. To address this, we propose a new factoring of the action prediction task by dividing each action into two parts: a categorical variable denoting an `action center', and a corresponding `residual action'. To this end, given the actions in our dataset, we first optimize for a set of $k$ action centers, $\{A_1, A_2, \cdots, A_k\} \subset \mathcal A$.
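As a concrete sketch of this factorization (our own illustrative code, with hypothetical helper names), plain k-means supplies the action centers, and the encode/decode maps split each action into a bin index plus a residual and recombine them losslessly:

```python
import numpy as np

def fit_action_centers(actions, k, iters=50, seed=0):
    """Lloyd's k-means over an (N, dim_A) action dataset."""
    rng = np.random.default_rng(seed)
    centers = actions[rng.choice(len(actions), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each action to its nearest center, then recompute the means.
        assign = np.argmin(
            np.linalg.norm(actions[:, None, :] - centers[None, :, :], axis=-1), axis=1
        )
        for j in range(k):
            if np.any(assign == j):
                centers[j] = actions[assign == j].mean(axis=0)
    return centers

def encode_action(a, centers):
    """Split a continuous action into (bin index, residual offset)."""
    j = int(np.argmin(np.linalg.norm(centers - a, axis=-1)))
    return j, a - centers[j]

def decode_action(j, offset, centers):
    """Deterministic reconstruction: a = A_j + <a>."""
    return centers[j] + offset
```

By construction, `decode_action(*encode_action(a, centers), centers)` recovers `a`, so the discrete bin loses no information as long as the offset is predicted well.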
We then decompose each action into two parts: a categorical variable representing the closest action bin, $\lfloor a \rfloor := \argmin_{i} \|a - A_i\|_2$, and a continuous residual action $\langle a \rangle := a - A_{\lfloor a \rfloor}$. If we are given the set of action centers $\{A_i\}_{i=1}^k$, an action bin index $\lfloor a \rfloor$ and the residual action $\langle a \rangle$, we can deterministically reconstruct the true action $a := A_{\lfloor a \rfloor} + \langle a \rangle$. Once learned, this k-means-based encoder and decoder for the action factorization are fixed for the rest of the training and testing phases. The action factorization procedure is illustrated in Fig.~\ref{fig:arch}~(A). \subsection{Attention-based behavior mode learning} Once we have the clustering-based autoencoder learned from the actions in the dataset, we model our demonstration trajectories with BeT{}. We use a transformer decoder model, namely minGPT~\cite{brown2020language}, with minor modifications, as our backbone. The transformer $\mathcal{T}$ takes in a sequence of continuous observations $(o_i, o_{i+1}, \cdots, o_{i+h-1})$ and learns a sequence-to-sequence model mapping each observation to a categorical distribution over $k$ discrete action bins. The predicted probability sequence is then compared with the ground truth labels, $(\lfloor a_i \rfloor, \lfloor a_{i+1} \rfloor, \lfloor a_{i+2} \rfloor, \cdots, \lfloor a_{i+h-1} \rfloor)$. We use a negative log-likelihood-based Focal loss~\cite{lin2017focal} between the predicted categorical distribution probabilities and the ground truth labels to train the transformer head. Focal loss is a simple modification of the standard cross-entropy loss.
While the standard cross-entropy loss for binary classification can be written as $\mathcal L_{ce} (p_t) = -\log (p_t)$, Focal loss multiplies it by a modulating factor $(1-p_t)^\gamma$, giving \[\mathcal L_{focal} (p_t) = -(1-p_t)^\gamma\log (p_t).\] This loss has the interesting property that its gradient is steeper for smaller values of $p_t$ and flatter for larger values of $p_t$. Thus, it penalizes and changes the model more for making errors in the low-probability classes, while being more lenient about errors in the high-probability classes. The model is illustrated in Fig.~\ref{fig:arch}~(B). \subsection{Action correction: from coarse to finer-grained predictions} Using a transformer allows us to model multi-modal actions. However, discretizing the continuous action space in any way invariably causes loss of fidelity~\cite{janner2021offline}. Discretization error may cause online rollouts of the behavior policy to go out of distribution from the original dataset~\cite{dagger2010}, which can in turn cause critical failures. To predict the complete continuous action, we add an extra head to the transformer decoder that offsets the discretized action centers based on the observations. For each observation $o_i$ in the sequence, the head produces a $k \times \text{dim}(A)$ matrix with $k$ proposed residual action vectors, $\left (\langle \hat a^{(j)}_i \rangle \right )_{j=1}^k = (\langle \hat a^{(1)}_i \rangle, \langle \hat a^{(2)}_i \rangle, \langle\hat a^{(3)}_i \rangle, \cdots, \langle\hat a^{(k)}_i \rangle)$, where the residual actions correspond to bin centers $A_1, A_2, A_3, \cdots, A_k$. These residual actions are trained with a loss akin to the \textit{masked multi-task loss}~\cite{girshick2015fast} from object detection.
In our case, if the ground truth action is $\mathbf{a}$, the loss is: \[\text{MT-Loss}\left (\mathbf{a}, \left (\langle \hat a^{(j)}_i \rangle\right )_{j=1}^k\right ) = \sum_{j=1}^k \mathbb I [\lfloor \mathbf{a} \rfloor = j] \cdot \| \langle \mathbf{a} \rangle - \langle \hat a^{(j)}_i \rangle \|_2^2,\] where $\mathbb I[\cdot]$ denotes the Iverson bracket, ensuring that the offset head of BeT{} only incurs loss from the ground truth class of action $\mathbf{a}$. This mechanism prevents the model from trying to fit the ground truth action using the offset at every index. \subsection{Test-time sampling from BeT{}} During test time, at timestep $t$ we input the latest $h$ observations $(o_t, o_{t-1}, \cdots, o_{t-h+1})$ to the transformer, combining the present observation $o_t$ with the $h-1$ previous observations. Our trained minGPT model gives us an $h \times 1 \times k$ tensor of bin-center probabilities and an $h \times k \times \text{dim}(A)$ offset tensor. To sample an action at timestep $t$, we first sample an action center according to the predicted bin-center probabilities on the $t$\textsuperscript{th} index. Once we have chosen an action center $A_{t,j}$, we add the corresponding residual action $\langle \hat a_t^{(j)} \rangle$ to it to recover a predicted continuous action $\hat{\mathbf{a}}_t = A_{t,j} + \langle \hat a_t^{(j)} \rangle$. This sampling procedure is illustrated in Fig.~\ref{fig:arch}~(C). \section{Related Work} This paper builds upon a rich literature in imitation learning, offline learning, generative models, and transformer architectures. The most relevant ones to our work are discussed here. \paragraph{Learning from offline data:} Since \citet{pomerleau1988alvinn} showed the possibility of driving an autonomous vehicle using offline data and a neural network, learning behavior from offline data has been a continuous topic of research for scalable behavior learning \citep{argall2009survey, billard2008survey, schaal1999imitation}.
The approaches can be divided into two broad classes: Offline RL \citep{fujimoto2018addressing, kumar2019stabilizing, kumar2020conservative, wu2019behavior, levine2020offline, fu2020d4rl}, focusing on learning from datasets of mixed quality that also have reward labels; and imitation learning \citep{osa2018algorithmic, peng2018deepmimic, peng2021amp, ho2016generative}, focusing on learning behavior from a dataset of expert behavior without reward labels. BeT{} falls under the second category, as it is a behavior cloning model. Behavior cloning is a form of imitation learning that tries to model the action of the expert given the observation, and it is often used in real-world applications \citep{zhang2018deep, zhu2018reinforcement, rahmatizadeh2018vision, florence2019self, zeng2020transporter}. As behavior cloning algorithms generally solve a fully supervised learning problem, they tend to be faster and simpler than reinforcement learning or offline RL algorithms, and in some cases show competitive results \cite{fu2020d4rl, gulcehre2020rl}. \paragraph{Generative models for behavior learning:} One approach for imitation learning is Inverse Reinforcement Learning or IRL \citep{russell1998learning, ng2000algorithms}, where given expert demonstrations, a model tries to construct the reward function. This reward function is then used to generate desirable behavior. GAIL \citep{ho2016generative}, an IRL algorithm, connects generative adversarial models with imitation learning to construct a model that can generate expert-like behavior. Under this IRL framework, previous works have tried to predict multi-modal, multi-human trajectories \citep{lee2016predicting, ivanovic2018generative}. Similarly, other works have tried Gaussian Processes \citep{rasmussen2010gaussian} for creating dynamical models for human motion \citep{wang2007gaussian}.
Another class of algorithms learns a generative action decoder~\citep{pertsch2020accelerating, lynch2020learning, singh2020parrot} from interaction data to make downstream reinforcement learning faster and easier, which inspired BeT{}'s action factorization. Finally, a class of algorithms, most notably~\citep{liu2020energy, florence2021implicit, kostrikov2021offline, nachum2021provable}, do not directly learn a generative model but instead learn energy-based models. These energy-based models can then be sampled to generate desired behavior. Since \citet{florence2021implicit} is a BC model capable of multi-modality, we compare against it as a baseline in Sec.~\ref{sec:exp}. \paragraph{Transformers for control:} With the stellar success of transformer models \citep{vaswani2017attention} in natural language processing \citep{devlin2018bert, brown2020language} and computer vision \citep{dosovitskiy2020image}, there has been significant interest in using transformer models to learn behavior and control. Among those, \citep{chen2021dt, janner2021offline} apply them to Reinforcement Learning and Offline Reinforcement Learning, respectively, while \citep{clever2021assistive, dasari2020transformers, mandi2021towards} use them for imitation learning. \citep{dasari2020transformers, mandi2021towards} use transformers mostly to summarize historical visual context, while \citep{clever2021assistive} relies on their long-term extrapolation abilities to collect human-in-the-loop demonstrations more efficiently. BeT{} is inspired by both of these use cases, as we use a transformer to summarize historical context while leveraging its generative abilities. \paragraph{Datasets for distributionally multi-modal data:} Similar to computer vision \citep{deng2009imagenet, lin2014microsoft, liu2018large} and natural language processing \citep{bowman2015large,rajpurkar2016squad}, there has been a recent interest in collecting behavior datasets that may aid in downstream behavior learning.
Some of them are labeled with agent goals or rewards for downstream tasks \citep{mandlekar2018roboturk,fu2020d4rl, robomimic2021}, while others are more open-ended \citep{gupta2019relay,lynch2020learning,young2021playful} and come without reward or task labels. In our work, we focus on the latter class. The lack of goal or reward labels in the second category implies that there is more multi-modality in the action distributions compared to the action distributions of goal- or reward-conditioned datasets, which is why much of the work on learning from multi-modal datasets tries to learn a goal-conditioned model~\citep{gupta2019relay,lynch2020learning,dasari2020transformers}. Finally, the lack of labelling requirements means that the unlabelled datasets are cheaper to obtain, which should help BeT{} scale further in the future. \section{Experiments} \label{sec:exp} \label{sec:results} We now study the empirical performance of BeT{} on a variety of behavior learning tasks. Our experiments are designed to answer the following questions: (a) Is BeT{} able to imitate multi-modal demonstrations? (b) How well does BeT{} capture the modes present in behavior data? (c) How important are the individual components of BeT{}? \subsection{Environments and datasets} We experiment with five broad environments. While full descriptions of these environments, dataset creation procedures, and overall statistics are in Appendix~\ref{sec:appendix_environments_datasets}, a brief description of them is as follows. \begin{enumerate}[label=(\alph*),leftmargin=*] \item \textbf{Point mass environment \#1:} Our first set of experiments in Fig.~\ref{fig:multipath_intro}, used to get a qualitative understanding of BeT{}, were performed in a simple Pointmass environment with a 2D observation and action space with two hundred demonstrations. The pre-collected demonstrations start at a fixed point, and then make their way to another point while avoiding a block in the middle.
The two primary modes in this dataset are taking a left turn versus a right turn. \item \textbf{Point mass environment \#2:} The setup is similar to the previous environment, with the exception of one straight-line mode and two complicated, prolonged `Z'-shaped modes of demonstration (Fig.~\ref{fig:multipath_time}). \item \textbf{CARLA self-driving environment:} CARLA~\cite{carla} uses the Unreal Engine to provide a simulated driving environment in a visually realistic landscape. The agent action space is 2D (accelerate/brake and left/right steer), while the observation space is a $(224,224,3)$-dimensional RGB image from the car. A hundred total demonstrations drive around a building block in two distinct modes. This environment highlights the challenge of behavior learning from high-dimensional observations, as shown in Fig.~\ref{fig:intro_rollouts} (a). For visual observations with BeT{}, we use a frozen ResNet-18~\cite{he2016deep} pretrained on ImageNet~\cite{deng2009imagenet} as an encoder. \item \textbf{Multi-modal block-pushing environment:} For more complicated interaction data, we use the multi-modal block-pushing environment from Implicit Behavioral Cloning (IBC)~\cite{florence2021implicit}, where an XArm robot needs to push two blocks into two squares in any order. The blocks and target squares are colored red and green. The positions of the blocks are randomized at episode start. We collect 1,000 demonstrations using a deterministic controller with two independent axes of multi-modality: (a) it starts by reaching either the red or the green block, with 50\% probability each, and (b) it pushes the blocks to the (red, green) or (green, red) squares, each with 50\% probability. \item \textbf{Franka kitchen environment:} To highlight the complexity of performing long sequences of actions, we use the Relay Kitchen Environment \cite{gupta2019relay}, where a Franka robot manipulates a virtual kitchen environment.
We use the relay policy learning dataset with 566 demonstrations collected by human participants wearing VR headsets. The participants completed a sequence of four object-interaction tasks in each episode~\cite{gupta2019relay}. There are a total of seven interactable objects in the kitchen: a microwave, a kettle, a slide cabinet, a hinge cabinet, a light switch, and two burner knobs. This dataset contains two different kinds of multi-modality: one from the inherent noise in human demonstrations, and another from the demonstrators' intent. \end{enumerate} \subsection{Baseline behavior learning methods} \looseness=-1 While a full description of our baselines is in Appendix~\ref{sec:app:baselines}, a brief description of them follows: \begin{enumerate}[label=(\alph*),leftmargin=*] \item \textbf{Multi-layer Perceptron with MSE (RBC):} We use MLP networks trained with MSE loss as our first baseline, since this is the standard way of performing behavioral cloning for a new task~\cite{torabi2018behavioral}. A comparison with transformer-based behavior cloning is discussed in Section~\ref{sec:ablations}. \item \textbf{Nearest neighbor (NN):} Nearest neighbor based algorithms are easy to implement, and have recently been shown to have strong performance on complicated behavioral cloning tasks~\cite{arunachalam2022dexterous}. \item \textbf{Locally Weighted Regression (LWR):} This non-parametric approach provides better regularization compared to NN and is a strong alternative to parametric BC~\cite{atkeson1997locally,pari2021surprising}. \item \textbf{Variational auto-encoders (VAE):} Inspired by SPiRL~\cite{spirl}, where behavioral priors are learned through a VAE~\cite{kingma2013auto}, we compare with continuous actions generated from the VAE and the prior.
\item \textbf{Normalizing Flow (Flow):} Inspired by PARROT~\cite{singh2020parrot}, where state-conditioned action priors are learned through a Flow model~\cite{dinh2016density}, we compare with actions generated from the Flow model. \item \textbf{Implicit Behavioral Cloning (IBC):} Instead of modeling the conditional distribution $P(a \mid o)$, IBC models the joint probability distribution $P(a, o)$ using energy-based models~\cite{florence2021implicit}. While IBC is slower than explicit BC models because of its sampling requirements, it has been shown to learn well on multi-modal data and to outperform earlier work such as MDNs~\cite{bishop1994mixture}. \end{enumerate} \subsection{Is BeT{} able to imitate multi-modal demonstrations?} The first question we ask is whether BeT{} can actually clone behaviors given a mixed dataset of unlabeled, multi-modal behaviors. To examine this, we look at the performance of our model in the CARLA, Block push, and Kitchen environments compared with our baselines in Table~\ref{tab:perf-table}. \input{tables/perf_table} We see that BeT{} outperforms all other methods in all environments except CARLA, where it is narrowly outperformed by LWR. Since the models are all behavioral cloning algorithms, they share the failure mode of failing once the observations go out of distribution (OOD). However, they vary in their tolerance. For example, BeT{} shines in the Block push environment, where alongside extreme environment randomness and multi-modality, the models also have to learn significant long-term behaviors and commit to a single mode over a long period. While all baselines can somewhat successfully reach one block, they fail to complete the long-horizon, multi-modal task of pushing two blocks into two different bins. On the other hand, we observe that BeT{}'s primary failure mode is not realizing that a block has not completely entered the target yet, while other methods either go OOD quickly or keep switching between modes.
We also observe that BeT{} performs well even in complex observation and action spaces. In the CARLA environment, the model takes in visual observations, while in the Franka Kitchen environment, the action space corresponds to a 9-DOF torque-controlled robot. BeT{} handles both cases with the same ease as it does environments with lower-dimensional observation or action spaces. \subsection{Does BeT{} capture the modes present in behavior data?} \begin{figure} \centering \includegraphics[width=\linewidth]{figs/multimodal_colorbar_flipped.pdf} \caption{Distribution of the most frequent task sequences completed in the Kitchen environment. Each task is colored differently, and frequency is shown out of 1,000 unconditional rollouts from the models.} \label{fig:multimodal_kitchen_tasks} \end{figure} Next, we examine the question of whether, given a dataset where multi-modal behavior exists, our model learns behavior that is also multi-modal. Here, we are interested in seeing the variance of the model's behavior over different rollouts. In each of our environments, the demonstrations contain different types of multi-modality. As a result, we show a comprehensive analysis of the multi-modality seen in our agents' behaviors. \input{tables/multimodality_table} We see in Table~\ref{tab:multimodal-table} that in CARLA and Block push, BeT{} covers all the modes of the demonstration data, even in the few cases where it does not perfectly match the demonstrated task probabilities. For the Kitchen environment, we see in Fig.~\ref{fig:multimodal_kitchen_tasks} that BeT{} visits certain strings of tasks more frequently than in the original demonstrations. However, compared to other strong baselines, BeT{} generates longer task strings more often while maintaining diversity and not collapsing to a single mode.
\subsection{How important are the individual components of BeT{}?} \label{sec:ablations} There are four key differences between the BeT{} architecture and standard BC: (a) binning actions into discrete clusters, (b) using offsets to faithfully reconstruct actions later, (c) learning sequentially to use historical context, and (d) using an attention-based minGPT trunk. In this section, we discuss the impacts they have on BeT{} performance. \input{tables/ablation_table} \paragraph{Impact of discrete binning:} Intuitively, having discrete options for bin centers is what enables BeT{} to express multi-modal behavior even when starting from an identical starting state. Indeed, if there is no binning, we see from Table~\ref{tab:ablation-table} that the performance of BeT{} drops significantly. More tellingly, in the Franka Kitchen environment, the model only ever completed a subsequence of the (kettle, top/bottom burner, light switch, slide cabinet) tasks after 100 random rollouts. This result shows us that having discrete bins helps BeT{} achieve multi-modality. To find the best value for the number of bins, $k$, we fit a Bayesian GMM~\citep{roberts1998bayesian} to our action dataset and use the estimated number of clusters as a guide. We show a plot of $k$ vs. performance on the tasks in Appendix~\ref{sec:app:ablation:num_bins}. \paragraph{Necessity of action offsets:} An important feature of BeT{} is the residual action offset that corrects the discrete actions coming from the bins. While the bin centers may be quite expressive, Table~\ref{tab:ablation-table} shows that the inability to correct them causes a performance degradation. Interestingly, the largest degradation comes in the Kitchen environment, which also has the highest-dimensional action space. Intuitively, we can understand how in higher dimensions the loss of fidelity from discretizing would be higher, and the relative performance loss across the three environments supports that hypothesis.
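The Bayesian-GMM guide for choosing $k$ can be sketched with scikit-learn's \texttt{BayesianGaussianMixture} (a stand-in we assume here, not necessarily the paper's exact fitting procedure): fit with a generous component budget and count the components that retain non-negligible posterior weight.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Toy 1-D "action dataset" with three well-separated modes (illustrative only).
rng = np.random.default_rng(0)
actions = np.concatenate(
    [rng.normal(m, 0.05, 200) for m in (-1.0, 0.0, 1.0)]
)[:, None]

# Fit with a generous component budget; a small Dirichlet concentration prior
# encourages the posterior to prune components it does not need.
bgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior=1e-2,
    max_iter=500,
    random_state=0,
).fit(actions)

# Components with non-negligible posterior weight guide the choice of k.
k_guide = int(np.sum(bgmm.weights_ > 0.02))
```

The threshold ($0.02$ here) is a heuristic of ours; in practice one would inspect the weight profile rather than rely on a single cutoff.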
\paragraph{Importance of historical context:} \begin{figure} \centering \includegraphics[width=\linewidth]{figs/importance_of_time.pdf} \caption{Comparison between an RBC model and two BeT{} models, trained with and without historical context on a dataset with three distinct modes. BeT{} with history is better able to capture the context-dependent behavior in the demonstrations.} \label{fig:multipath_time} \vspace{-15pt} \end{figure} While RL algorithms traditionally assume environments are Markovian, human behavior in an open-ended environment is rarely so. Thus, using historical context helps BeT{} perform well. We show a simple experiment in Fig.~\ref{fig:multipath_time} on the second point mass environment. Here, training and evaluating with some historical context allows BeT{} to follow the demonstrations better. We observe the same in the CARLA, Block push, and Kitchen environments, where training with some historical context raises performance across the board, as seen in Table~\ref{tab:ablation-table}. Since transformer-based models are generally able to learn from long-term historical context, we believe BeT{} should also be able to model real-world long-term behavior patterns. \paragraph{Importance of transformer architecture:} Despite transformers' success in other fields of machine learning, it is natural to wonder whether the tasks BeT{} solves here really require one. We ablated BeT{} by replacing the minGPT trunk with MLP-, temporal convolution-, and LSTM-based models, and found that they have lower performance while also being difficult to train stably. We gave the MLP equal historical context by concatenating the last few frames. See Table~\ref{tab:ablation-table} for results and Appendix~\ref{sec:app:ablation:transformer} for further details. \paragraph{Computation considerations:} While transformers in usual contexts are large models, we downscale them for our application in BeT{} (see Appendix~\ref{sec:appendix_arch}).
Our models contain on the order of $10^4$--$10^6$ parameters, and even with a small batch size on a single desktop GPU they train within an hour on our largest dataset (Block push). In contrast, on the same task, our strongest baseline IBC takes about 14 hours. In the same environment, 100 evaluation rollouts take about 165 seconds with BeT{}, as opposed to 1770 seconds with IBC.
\section{Introduction} The extreme difficulties which arise if one tries to draw physically important conclusions from the basic assumptions of Einstein's theory are mainly due to the non-linearity of the field equations. Moreover, the fact that the spacetime topology is not given a priori, and the impossibility of integrating tensors over finite regions, cause difficulties unknown in other branches of mathematical physics. Actually, in this respect gravity is not so different from other fields: the electromagnetic field, the scalar field, etc., by themselves obey linear equations in a given spacetime, but they form a non-linear system when their mutual interactions are taken into account. The distinctive feature of the gravitational field is that it is self-interacting (as is the Yang-Mills field): it is non-linear even in the absence of other fields. This is because it defines the spacetime over which it propagates \citep{Hawking}. Linearized gravity is any approximation to General Relativity obtained by substituting $g_{\mu\nu}=g_{\mu\nu}^{(0)}+h_{\mu\nu}$ (where $g_{\mu\nu}^{(0)}$ is any curved background spacetime) into Einstein's equation and retaining only the terms linear in $h_{\mu\nu}$ \citep{Wald}. The weakness of the gravitational field means, in the context of general relativity, that the spacetime is nearly flat. Small gravitational perturbations in Minkowski space can be treated in the simplest linearized version of General Relativity, \begin{equation} \label{linear} g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}, \end{equation} as describing a theory of a symmetric tensor field $h_{\mu\nu}$ propagating on a flat background spacetime. This theory is Lorentz invariant in the sense of Special Relativity. If one wants to obtain a solution of the non-linear equations, it is necessary to employ an iterative method on approximate linear equations whose solutions can be shown to converge in a certain neighbourhood of the initial surface.
It should be possible to avoid some of these difficulties of non-linearity by working in the spacetimes that shall be described in this paper. The proposal is that some metric fields can be separated into parts carrying the dynamical information and parts characterizing the coordinate system. In this proposal, the coordinate-system terms will have a tensorial structure limited to first order only: the tensors that describe $g_{\mu\nu}$ behave linearly. Naturally, there is a price that must be paid for the linear tensors: the dynamic terms that carry the gravitational strength have an exponential structure. The principal idea comes from the basic principle that one should interpret (\ref{linear}) as a separation between purely mathematical and physical terms in the metric field tensor. In spite of the fact that $\eta_{\mu\nu}$ plays a key role as the empty, flat background spacetime of the Standard Model in the description of fundamental interactions, this background metric tensor $\eta_{\mu\nu}$ is an object wholly mathematical and entirely geometrical, while $h_{\mu\nu}$ contains the physical information. The strength of gravity is tied up in the components of $h_{\mu\nu}$. The proposal of this paper is a working hypothesis to untie the strength of gravity from geometrical tensors. This proposal is valid for a family of metric field tensors $g_{\mu\nu}$, and some basic examples, such as the Newtonian limit and gravitational plane waves of low amplitude, are described. This paper is outlined as follows: in Sec. II, we present the basic mathematical concepts of the (quasi-)idempotent tensors that compose the structure of the metric fields approached in this work. In Sec. III, we propose how to link the strength of gravity with the tensors from Sec. II, and then define a family of exponential metrics. In Sec. IV, we present some examples of these exponential metrics: the Yilmaz metric, a circularly polarized wave and rotating bodies. In Sec.
V, we present exponential metrics (the `adjoint metric field') that are non-physical, but which help us to compute the Christoffel symbols, and consequently the curvature tensor, the Ricci tensor and the determinant of the metric field. In Sec. VI, we verify the Newtonian limit and also obtain gravitational waves. In Sec. VII, we present a general conclusion. We assume spacetime $({\cal M},{\bf g})$ to be a $C^{\infty}$ four-dimensional, globally hyperbolic, pseudo-Riemannian manifold ${\cal M}$ with Lorentzian metric tensor ${\bf g}$ (whose components are $g_{\mu\nu}$) associated with the line element $$ds^2=g_{\mu\nu}(x)dx^{\mu}dx^{\nu},$$ assumed to have signature $(+---)$ \citep{Landau}. Lower case Greek indices refer to coordinates on ${\cal M}$ and take the values $0,1,2,3.$ The relation between the metric field $g_{\mu\nu}$ and the material contents of spacetime is expressed by Einstein's field equation, \begin{equation} \label{field equation} R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R=\frac{8\pi G}{c^4}T_{\mu\nu}, \end{equation} $T_{\mu\nu}$ being the stress-energy-momentum tensor, $R_{\mu\nu}$ the contracted curvature tensor (Ricci tensor) and $R$ its trace. In an empty region of spacetime we have $R_{\mu\nu}=0$; such a region is called a vacuum field. \section{Pure Mathematical terms of spacetime geometry} The Minkowski flat spacetime $(\mathbb{R}^4,\bm\eta)$, where the components of $\bm\eta$ are $\eta_{\mu\nu}=\mbox{diag}(1,-1,-1,-1)$, is the simplest empty spacetime and it is in fact the spacetime of Special Relativity. One can obtain spaces locally identical to $(\mathbb{R}^4,\bm\eta)$ but with different (large scale) topology properties by identifying points in $\mathbb{R}^4$ which are equivalent under a discrete isometry without a fixed point. The existence of local Lorentz frames is merely the statement that any curved space has the Minkowski flat space `tangent' to it at any point. The Minkowski spacetime is the universal covering space for all such derived spaces \citep{Hawking}.
In this sense it is reasonable to choose $(\mathbb{R}^4,\bm\eta)$ as the background spacetime and as an important building block of some spacetimes. In fact, the most straightforward approach to linear gravitation is realized in Minkowski spacetime. The conformal structure of Minkowski space is what one would regard as the `normal' behavior of a spacetime at infinity. The metric tensor $\bm\eta$ of the background Minkowski spacetime, in any coordinates, is an object wholly mathematical and entirely geometrical. No strength of gravity is linked with the mathematical structure of $\bm\eta$. We therefore choose this metric tensor as a principal descriptive piece of the spacetimes built below, and it raises and lowers indices in the same way as in Special Relativity, with $\eta_{\mu\nu}\eta^{\nu\alpha}={\delta_{\mu}}^{\alpha}$. The aim of this paper is to obtain spacetimes $({\cal M},{\bf g})$ whose non-linearity is milder, at least in a tensorial descriptive way. To this end, we define a symmetric tensor $\bm\Upsilon$, which like $\bm\eta$ is an object wholly mathematical and entirely geometrical. This tensor $\bm\Upsilon$, which will be a piece of the metric ${\bf g}$, can have non-static terms of spacetime in its components; in this approach, however, it carries no gravitational strength. The components $\Upsilon_{\mu\nu}$ of the tensor $\bm\Upsilon$ are raised and lowered by $\eta^{\mu\nu}$ and $\eta_{\mu\nu}$, \begin{eqnarray} \eta^{\mu\nu}\Upsilon_{\nu\alpha}= {\Upsilon^{\mu}}_{\alpha}, \cr \Upsilon_{\mu\nu}\eta^{\nu\alpha}= {\Upsilon_{\mu}}^{\alpha},\cr \eta^{\mu\nu}\Upsilon_{\nu\alpha}\eta^{\alpha\beta}=\Upsilon^{\mu\beta}. \end{eqnarray} In this context we adopt the point of view that $\bm\Upsilon$ is a tensor on a background Minkowski spacetime, similar to the deviation $h_{\mu\nu}$ in the linearized version of general relativity (\ref{linear}).
But instead of the infinitesimal condition $|h_{\mu\nu}|\ll 1$, it is accepted that the magnitude of $\Upsilon_{\mu\nu}$ can be equal to the magnitude of the empty flat spacetime metric ($|\Upsilon_{\mu\nu}|\approx |\eta_{\mu\nu}| $). Moreover, we impose an important mathematical relationship among the $\Upsilon_{\mu\nu}$ themselves, \begin{eqnarray} \label{ID} \Upsilon_{\mu\nu}\Upsilon^{\nu\rho}=-2{\Upsilon_{\mu}}^{\rho}. \end{eqnarray} The above equation is an important ingredient in shaping the spacetimes described below, and it will improve linearity in the tensorial sense. Contracting the indices of the tensor $\bm\Upsilon$ with itself shows that the $\Upsilon_{\mu\nu}$ are (quasi-)idempotent elements ($\bm\Upsilon\cdot \bm\Upsilon \propto \bm\Upsilon$, up to the factor $-2$ in the operation), and equation (\ref{ID}) provides at least a linear tensorial template for some metric tensor fields. It is possible to raise or lower the indices of $\Upsilon_{\mu\nu}$ by operating with the tensor itself, \begin{eqnarray} \Upsilon^{\rho\mu}\Upsilon_{\mu\nu}\Upsilon^{\nu\sigma}=4\Upsilon^{\rho\sigma}. \end{eqnarray} One can also verify the expression ${\Upsilon_{\mu}}^{\nu}{\Upsilon_{\nu\rho}}=-2\Upsilon_{\mu\rho}$. The trace of $\bm\Upsilon$ is obtained by setting $\rho=\mu$ in the expression (\ref{ID}), \begin{eqnarray} \label{ID traco} \Upsilon_{\mu\nu}\Upsilon^{\nu\mu}=-2{\Upsilon_{\mu}}^{\mu}, \end{eqnarray} and differentiating this trace relation (\ref{ID traco}), since $\eta_{\mu\nu}=\mbox{diag}(1,-1,-1,-1)$, gives \begin{eqnarray} \label{derivada traco1} -2\partial_{\alpha}{\Upsilon_{\mu}}^{\mu} = 2 \Upsilon_{\mu\nu}\partial_{\alpha}\Upsilon^{\mu\nu}, \end{eqnarray} thus, \begin{eqnarray} \label{derivada traco2} \Upsilon_{\mu\nu}\partial_{\alpha}\Upsilon^{\mu\nu}=\Upsilon^{\mu\nu}\partial_{\alpha}\Upsilon_{\mu\nu}=-\partial_{\alpha}{\Upsilon_{\mu}}^{\mu}.
\end{eqnarray} Only these two tensors, $\bm\eta $ from the background Minkowski spacetime and $\bm\Upsilon$, will form the mathematical and geometrical basis of the physical spacetimes described below. \section{Strength of gravity terms tied in spacetime geometry} A dimensionless parameter $\Phi$ characterizing the strength of gravity at a spacetime point $\wp$ with coordinates $(t,{\bf x})=(x^{\alpha})$ due to a gravitating source is the ratio of the potential energy, $m\varphi_{N}$ (due to this source), to the inertial mass-energy $mc^2$ of a test body at $\wp$, i.e., \begin{equation} \Phi(x^{\alpha})=\frac{\varphi_{N}(x^{\alpha})}{c^2}. \end{equation} Here $\varphi_{N}(x^{\alpha})$ is the gravitational potential. For a point source with mass $M$ in Newtonian gravity, \begin{equation} \Phi(x^{\alpha})=-\frac{GM}{c^2r}, \end{equation} where $r$ is the distance to the source. So, for a nearly Newtonian system, we can use the Newtonian potential for $\varphi_{N}$. To construct spacetimes from the basis $\bm\eta $ of the background Minkowski spacetime and $\bm\Upsilon$ defined in the previous section, it is proposed to tie the strength of gravity $\Phi$ to these tensors. While the tensor $\bm\Upsilon$ can be a function of the coordinates $(t,{\bf x})=(x^{\alpha})$, the strength of gravity $\Phi$ will be a function of the coordinates and also of the Newtonian gravitational constant $G$ and of a mass parameter $M$. Thus, I propose: \begin{equation} {\bf g}(\Phi,x)=e^{\Phi}{\bm\eta}+\sinh(\Phi){\bm\Upsilon},\nonumber \end{equation} {with components} \begin{equation} \label{metricphi} g_{\mu\nu}=e^{\Phi}\eta_{\mu\nu}+\sinh({\Phi})\Upsilon_{\mu\nu}. \end{equation} The inverse tensor is just \begin{eqnarray} \label{metricphiinverse} g^{\mu\nu}=e^{-\Phi}\eta^{\mu\nu}-\sinh({\Phi})\Upsilon^{\mu\nu}.
\end{eqnarray} If the tensor $\bm\Upsilon $ is diagonal, then the inverse tensor of ${\bf g}$ is \begin{equation} {\bf g}^{-1}(\Phi,x)={\bf g}(-\Phi,x)=e^{-\Phi}{\bm\eta}-\sinh(\Phi){\bm\Upsilon}. \end{equation} From the above definitions it follows that $g_{\mu\nu}g^{\nu\alpha}={\delta_{\mu}}^{\alpha}$; in fact \begin{scriptsize} \begin{eqnarray} g_{\mu\nu}g^{\nu\alpha}=\left(e^{\Phi}\eta_{\mu\nu}+\sinh(\Phi)\Upsilon_{\mu\nu}\right)\left(e^{-\Phi}\eta^{\nu\alpha}-\sinh(\Phi)\Upsilon^{\nu\alpha}\right)\nonumber \end{eqnarray} \end{scriptsize} where the right-hand side is \begin{scriptsize} \begin{eqnarray} {\delta_{\mu}}^{\alpha} +\sinh(\Phi)\left(e^{-\Phi}\Upsilon_{\mu\nu}\eta^{\nu\alpha}-e^{\Phi}\eta_{\mu\nu}\Upsilon^{\nu\alpha}-\sinh(\Phi)\Upsilon_{\mu\nu}\Upsilon^{\nu\alpha}\right)\nonumber \end{eqnarray} \end{scriptsize} that is \begin{scriptsize} \begin{eqnarray} {\delta_{\mu}}^{\alpha}+\sinh(\Phi)\left[-(e^{\Phi}-e^{-\Phi}){\Upsilon_{\mu}}^{\alpha}-\sinh(\Phi)\Upsilon_{\mu\nu}\Upsilon^{\nu\alpha}\right]&=&\cr {\delta_{\mu}}^{\alpha}-\sinh^2(\Phi)\left(2{\Upsilon_{\mu}}^{\alpha}+\Upsilon_{\mu\nu}\Upsilon^{\nu\alpha}\right) \end{eqnarray} \end{scriptsize} Using the expression (\ref{ID}) in the second term in the parentheses ($\Upsilon_{\mu\nu}\Upsilon^{\nu\alpha}=-2{\Upsilon_{\mu}}^{\alpha}$) results in \begin{eqnarray} g_{\mu\nu}g^{\nu\alpha}&=&{\delta_{\mu}}^{\alpha}-\sinh^2(\Phi)\left[{2\Upsilon_{\mu}}^{\alpha}+(-2){\Upsilon_{\mu}}^{\alpha}\right], \end{eqnarray} which proves the identity $g_{\mu\nu}g^{\nu\alpha}={\delta_{\mu}}^{\alpha}$. Because of equation (\ref{ID}), no quadratic terms in $\bm\Upsilon$ survive, and consequently the non-linearity of the metric field is less severe.
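As a sanity check, the (quasi-)idempotent relation (\ref{ID}) and the inverse-metric identity $g_{\mu\nu}g^{\nu\alpha}={\delta_{\mu}}^{\alpha}$ can be verified numerically. The NumPy sketch below, which is not part of the derivation, uses the simple diagonal choice $\Upsilon_{\mu\nu}=\mbox{diag}(0,2,2,2)$ (the same tensor used in the Yilmaz example of the next section) and an arbitrary value of $\Phi$:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, eta is its own inverse
ups_low = np.diag([0.0, 2.0, 2.0, 2.0])  # Upsilon_{mu nu}, diagonal example
ups_up = eta @ ups_low @ eta             # Upsilon^{mu nu}
ups_mixed = ups_low @ eta                # Upsilon_mu^nu

# the (quasi-)idempotent relation (ID): Ups_{mu nu} Ups^{nu rho} = -2 Ups_mu^rho
assert np.allclose(ups_low @ ups_up, -2 * ups_mixed)

# g_{mu nu} g^{nu alpha} = delta_mu^alpha for the exponential metric
phi = 0.3
g = np.exp(phi) * eta + np.sinh(phi) * ups_low
g_inv = np.exp(-phi) * eta - np.sinh(phi) * ups_up
assert np.allclose(g @ g_inv, np.eye(4))
```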
If one considers a spacetime $({\cal M},{\bf g})$ with ${\bf g}$ given by definition (\ref{metricphi}), one can observe that the first term $\gamma_{\mu\nu}=e^{\Phi}\eta_{\mu\nu}$ is a conformally flat background spacetime (where the Weyl tensor vanishes). In the previous section we accepted that $|\Upsilon_{\mu\nu}|\approx |\eta_{\mu\nu}|$, but now, because of the factor $\sinh{(\Phi)}$ (which can be small) multiplying the tensor $\bm\Upsilon$, the second term can be understood as a (small) disturbance ${\cal S}_{\mu\nu}=\sinh(\Phi)\Upsilon_{\mu\nu}$, such that \begin{equation} g_{\mu\nu}=\gamma_{\mu\nu}+{\cal S}_{\mu\nu}. \end{equation} If one is dealing with Einstein's vacuum equation, the (small) disturbance that represents the gravitational wave can be separated from the conformally flat background spacetime $\gamma_{\mu\nu}$ \citep{Birrell}. \section{Some Examples} \subsection{Yilmaz Metric} As a first application of the metric field defined in the previous sections, let us take ${\Upsilon}_{\mu\nu} $ to be: \begin{small} \begin{eqnarray} \label{weak} {\Upsilon}_{\mu\nu}= \begin{pmatrix}0&0&0&0\cr 0&2&0&0 \cr 0&0&2&0 \cr 0&0&0&2 \end{pmatrix},\hspace{0.1cm}{{\Upsilon}_{\mu}}^{\nu}= \begin{pmatrix}0&0&0&0\cr 0&-2&0&0 \cr 0&0&-2&0 \cr 0&0&0&-2 \end{pmatrix}\nonumber \end{eqnarray} \end{small} and \begin{small} \begin{eqnarray} {\Upsilon}^{\mu\nu}= \begin{pmatrix}0&0&0&0\cr 0&2&0&0 \cr 0&0&2&0 \cr 0&0&0&2 \end{pmatrix} \end{eqnarray} \end{small} so that ${\Upsilon}_{\mu\nu}{\Upsilon}^{\nu\alpha}=-2{{\Upsilon}_{\mu}}^{\alpha}$, with trace given by ${\Upsilon}_{\mu\nu}{\Upsilon}^{\nu\mu}=-2{{\Upsilon}_{\mu}}^{\mu}=-2{\bf Tr} ({\bm\Upsilon})$, where ${\bf Tr} ({\bm\Upsilon})=-6$.
Now we can display the metric tensor (\ref{metricphi}), \begin{scriptsize} \begin{eqnarray} \label{metricphi2} g_{\mu\nu}&=&e^{\Phi}\begin{pmatrix}1&0&0&0\cr 0&-1&0&0 \cr 0&0&-1&0 \cr 0&0&0&-1 \end{pmatrix}+\sinh(\Phi)\begin{pmatrix}0&0&0&0\cr 0&2&0&0 \cr 0&0&2&0 \cr 0&0&0&2 \end{pmatrix}\cr g_{\mu\nu} &=& \begin{pmatrix} e^{\Phi}&0&0&0\cr 0&-e^{-\Phi}&0&0 \cr 0&0&-e^{-\Phi}&0 \cr 0&0&0&-e^{-\Phi} \end{pmatrix}, \end{eqnarray} \end{scriptsize} since $-e^{\Phi}+2\sinh(\Phi)=-e^{-\Phi}$. Then the line element is \begin{equation} \label{Yilmaz} ds^2=e^{\Phi}c^2dt^2 - e^{-\Phi}(dx^2+dy^2+dz^2). \end{equation} This metric field (\ref{metricphi2},\ref{Yilmaz}) was proposed by Yilmaz \citep{Yilmaz1,Yilmaz2,Yilmaz3,Yilmaz4,Yilmaz5,Yilmaz6,Yilmaz7}. In the case of a mass singularity, $\Phi=-\dfrac{2GM}{c^2r}$ with $|\Phi|\ll 1$, we have the far-field metric, \begin{small} \begin{equation} \label{metricphi3} ds^2=(1-\frac{2GM}{c^2r})c^2dt^2-(1+\frac{2GM}{c^2r})(dx^2+dy^2+dz^2). \end{equation} \end{small} This is to be contrasted with the Schwarzschild (in General Relativity, GR) line element in isotropic coordinates \citep{Landau}, \begin{scriptsize} \begin{equation} \label{isotropic} ds^2=\left(\frac{1+\Phi/4}{1-\Phi/4}\right)^2c^2dt^2 - \left(1-\frac{\Phi}{4}\right)^4(dr^2+ r^2d\theta^2+r^2\sin^2\theta d\phi^2), \end{equation} \end{scriptsize} and if we compare expansions with the line element of Yilmaz theory (\ref{Yilmaz}),\\ \indent Yilmaz: \begin{scriptsize} \begin{eqnarray} g_{00}&=&1+\Phi+\frac{\Phi^2}{2}+\frac{\Phi^3}{6}+\cdots \hspace{0.5cm} g_{11}=-1+\Phi-\frac{\Phi^2}{2}+\cdots\nonumber \end{eqnarray} \end{scriptsize} \indent GR: \begin{scriptsize} \begin{eqnarray} g_{00}&=&1+\Phi+\frac{\Phi^2}{2}+\frac{3\Phi^3}{16}+\cdots \hspace{0.5cm} g_{11}=-1+\Phi-\frac{3\Phi^2}{8}-\cdots\,,\nonumber \end{eqnarray} \end{scriptsize} the $g_{11}$ coefficients differ only from the second order terms onwards, while $g_{00}$ differs only from the third order.
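The order at which the two expansions separate can be confirmed numerically: for a small $\Phi$, the exact $g_{00}$ components of (\ref{Yilmaz}) and (\ref{isotropic}) agree to within $O(\Phi^3)$ and the $g_{11}$ components to within $O(\Phi^2)$. A short check (not part of the derivation, with an illustrative value of $\Phi$):

```python
import numpy as np

phi = 1e-2  # weak-field strength of gravity (illustrative value)

# time-time and radial components of the Yilmaz and GR isotropic line elements
g00_yilmaz = np.exp(phi)
g00_gr = ((1 + phi / 4) / (1 - phi / 4)) ** 2
g11_yilmaz = -np.exp(-phi)
g11_gr = -(1 - phi / 4) ** 4

# g00 agrees through second order, g11 only through first order
assert abs(g00_yilmaz - g00_gr) < phi**3
assert abs(g11_yilmaz - g11_gr) < phi**2
```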
Both Yilmaz and GR give observationally indistinguishable predictions for the red-shift, light bending and perihelion advance, but the Yilmaz metric does not admit black holes. Citing this property and assuming that Yilmaz theory is correct, Clapp \citep{Clapp} has suggested that a significant component of quasar red-shift may be gravitational. Robertson \citep{Robertson1, Robertson2} has suggested that some neutron stars and black hole candidates may be `Yilmaz stars'. Robertson argues that a neutron star with mass $\sim 10M_{\odot}$ is possible for the Yilmaz metric, while an object of nuclear density with mass greater than $\sim 2.8M_{\odot}$ should be a black hole for the Schwarzschild metric. Ibison has tested Yilmaz theory by working out the corresponding Friedmann equations generated by assuming the Friedmann-Robertson-Walker cosmological metrics \citep{Ibison}. There is a series of claims and counter-claims involving Fackerell \citep{Fackerell,Yilmaz8,Alley1}, and also Misner and Wyss \citep{Wyss,Misner,Alley2}, about Yilmaz theory. At the present time both the Yilmaz and Schwarzschild solutions give results in agreement with observation \citep{Rosen}. However, it may be possible in the future, with the LISA mission \citep{NASA,Baker}, to distinguish between Yilmaz and Schwarzschild. \subsection{Circularly Polarized Wave} Gravitational waves are one of the most important predictions of General Relativity.
Now we can try a gravitational wave solution in the $z$ direction, \begin{scriptsize} \begin{eqnarray} \label{wavephi} {\Upsilon}_{\mu\nu}&=& \begin{pmatrix}0&0&0&0\cr 0&1+\cos{\zeta}&\sin{\zeta}&0 \cr 0&\sin{\zeta}&1-\cos{\zeta}&0 \cr 0&0&0&2 \end{pmatrix},\cr {{\Upsilon}_{\mu}}^{\nu}&=&{\Upsilon}_{\mu\alpha}\eta^{\alpha\nu}= \begin{pmatrix}0&0&0&0\cr 0&-1-\cos{\zeta}&-\sin{\zeta}&0 \cr 0&-\sin{\zeta}&-1+\cos{\zeta}&0 \cr 0&0&0&-2 \end{pmatrix} \end{eqnarray} \end{scriptsize} and \begin{scriptsize} \begin{equation} \Upsilon^{\mu\nu}=\Upsilon_{\alpha\beta}\,\eta^{\alpha\mu}\eta^{\beta\nu}= \begin{pmatrix}0&0&0&0\cr 0&1+\cos{\zeta}&\sin{\zeta}&0 \cr 0&\sin{\zeta}&1-\cos{\zeta}&0 \cr 0&0&0&2 \end{pmatrix} \end{equation} \end{scriptsize} so ${\Upsilon}_{\mu\nu}{\Upsilon}^{\nu\alpha}=-2{{\Upsilon}_{\mu}}^{\alpha}$ is verified for the tensor above, and ${\bf Tr} ({\bm\Upsilon})=-4$. Then, if one chooses $\zeta=\omega t-kz$, the metric field $g_{\mu\nu}=e^\Phi\eta_{\mu\nu}+\sinh(\Phi)\Upsilon_{\mu\nu}$ is a solution for a gravitational plane wave, \begin{eqnarray} g_{\mu\nu}&=& \begin{pmatrix}e^{\Phi}&0&0&0\cr 0&-\cosh(\Phi)&0&0 \cr 0&0&-\cosh(\Phi)&0 \cr 0&0&0&-e^{\Phi} \end{pmatrix}\cr &+&\sinh(\Phi) \begin{pmatrix}0&0&0&0\cr 0&\cos{\zeta}&\sin{\zeta}&0 \cr 0&\sin{\zeta}&-\cos{\zeta}&0 \cr 0&0&0&2 \end{pmatrix}. \end{eqnarray} Here the first term can be taken as the (asymptotically flat) background spacetime, and the second term is the disturbance in this background, in other words the circularly polarized radiation $h_{\mu\nu}^{TT}$ with amplitude $\sinh(\Phi)$. The gravitational wave polarization is important from astrophysical and cosmological viewpoints. A binary system of two stars in circular orbit one around the other is expected to emit circularly polarized waves in the direction perpendicular to the plane of the orbit \citep{Schutz}.
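That the wave-form tensor (\ref{wavephi}) satisfies the algebra (\ref{ID}) for any phase $\zeta$ can also be checked numerically; the NumPy sketch below (a verification aid, not part of the derivation) evaluates the relation at one spacetime point:

```python
import numpy as np

zeta = 0.8  # phase omega*t - k*z at one spacetime point (illustrative value)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# wave-form Upsilon_{mu nu} from (wavephi)
ups = np.zeros((4, 4))
ups[1:3, 1:3] = [[1 + np.cos(zeta), np.sin(zeta)],
                 [np.sin(zeta), 1 - np.cos(zeta)]]
ups[3, 3] = 2.0

ups_up = eta @ ups @ eta      # Upsilon^{mu nu}
# relation (ID): Ups_{mu nu} Ups^{nu alpha} = -2 Ups_mu^alpha, and Tr(Upsilon) = -4
assert np.allclose(ups @ ups_up, -2 * ups @ eta)
assert np.isclose(np.trace(ups @ eta), -4.0)
```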
Moreover, the Big Bang left behind an echo in the electromagnetic spectrum, the cosmic microwave background, but the Big Bang most likely also left cosmological gravitational waves that it will be possible to observe with the help of LISA \citep{NASA,Baker}. Since cosmological gravitational waves propagate without significant interaction after they are produced, once detected they should provide a powerful tool for studying the early Universe at the time of gravitational wave generation \citep{Buonanno}. Various mechanisms for cosmological gravitational wave generation have been proposed, and many of these state that the cosmological gravitational waves are circularly polarized. T. Kahniashvili et al.\ \citep{Kahniashvili} argued that helical turbulence produced during a first-order phase transition generated circularly polarized cosmological gravitational waves. Other physicists have argued that the parity violation due to the gravitational Chern-Simons term in superstring theory can produce primordial gravitational waves with circular polarization \citep{Lue,Choi,Alexander,Satoh,Saito}. If we assume a large distance from the source, we can obtain a plane-wave solution with $|\Phi|\ll 1$ so that \begin{scriptsize} \begin{eqnarray} \label{wavephi2} g_{\mu\nu}&=&(1+\Phi)\eta_{\mu\nu}+\Phi\Upsilon_{\mu\nu}\cr &=& \begin{pmatrix}1+\Phi&0&0&0\cr 0&-1&0&0 \cr 0&0&-1&0 \cr 0&0&0&-1+\Phi \end{pmatrix}\cr & &+\Phi\begin{pmatrix}0&0&0&0\cr 0&\cos{\zeta}&\sin{\zeta}&0 \cr 0&\sin{\zeta}&-\cos{\zeta}&0 \cr 0&0&0&0 \end{pmatrix}\nonumber \end{eqnarray} \end{scriptsize} where the second term is the circularly polarized radiation $h_{\mu\nu}^{TT}$. Because of the far distance from the source, $g_{00}$ and $g_{33}$ appear as static terms, weakly perturbed in these coordinates. \subsection{Rotating Bodies} The Kerr metric is important astrophysically since it is a good approximation to the metric of a rotating star at large distances.
It is possible to obtain a kind of Kerr metric from $g_{\mu\nu}=e^{\Phi}\eta_{\mu\nu}+\sinh(\Phi)\Upsilon_{\mu\nu}$, where in coordinates $(t,x,y,z)$ the tensor $\Upsilon_{\mu\nu}$ is: \begin{tiny} \begin{equation} \label{UKerr} (-2)\begin{pmatrix} \cosh^2\Lambda & -\sinh\Lambda\cosh\Lambda \sin{\phi} & \sinh\Lambda\cosh\Lambda \cos{\phi} & 0\cr -\sinh\Lambda\cosh\Lambda \sin{\phi} & \sinh^2\Lambda\sin^2\phi & -\sinh^2\Lambda\cos\phi \sin{\phi} & 0\cr \sinh\Lambda\cosh\Lambda \cos{\phi}& -\sinh^2\Lambda\cos{\phi}\sin{\phi} & \sinh^2\Lambda\cos^2{\phi} & 0\cr 0 & 0 & 0 & 0 \end{pmatrix} \end{equation} \end{tiny} satisfying $\Upsilon_{\mu\nu}\Upsilon^{\nu\rho}=-2{\Upsilon_{\mu}}^{\rho}$ with ${\bf Tr}({\bm\Upsilon})=-2$. In this example we choose $\Upsilon_{33}=0$, but if the choice were $\Upsilon_{33}=2$, the above tensor would still satisfy the algebra (\ref{ID}). One can change from the coordinates $(t,x,y,z)$ to the Boyer-Lindquist coordinates $(t,r,\theta,\phi)$, with the spatial part as flat space in ellipsoidal coordinates, \begin{eqnarray} \label{Elipse achatada} t&=&t,\cr x&=&\sqrt{r^2+a^2}\sin\theta\cos\phi,\cr y&=&\sqrt{r^2+a^2}\sin\theta\sin\phi,\cr z&=&r\cos\theta. \end{eqnarray} The Minkowski metric tensor related to these coordinates is: \begin{scriptsize} \begin{equation} \label{eta elipsi} \eta_{\mu\nu}= \begin {pmatrix} 1&0&0&0\\\noalign{\medskip}0&-{\frac {{r} ^{2}+{a}^{2} \cos ^2 \theta}{{r}^{2}+ {a}^{2}}}&0&0\\\noalign{\medskip}0&0&-{r}^{2}-{a}^{2}\cos^2\theta &0\\\noalign{\medskip}0&0&0&- \left( {r}^{2}+{a}^{2} \right) \sin^2 \theta \end{pmatrix}, \end{equation} \end{scriptsize} and we are assuming that the angle $\phi$ in the tensor $\Upsilon_{\mu\nu}$ of (\ref{UKerr}) can be the same angle as in the transformations (\ref{Elipse achatada}).
These coordinate transformations turn the tensor (\ref{UKerr}) into: \begin{tiny} \begin{equation} \label{UKerr2} \Upsilon_{\mu\nu}= (-2)\begin{pmatrix} \cosh^2 \Lambda &0&0&\sinh \Lambda \cosh \Lambda R\sin \theta \\\noalign{\medskip}0&0&0&0\\\noalign{\medskip}0&0&0&0 \\\noalign{\medskip}\sinh \Lambda \cosh \Lambda R\sin \theta &0&0& \sinh^2\Lambda R^2 \sin^2\theta \end{pmatrix} \end{equation} \end{tiny} where $R=\sqrt {{r}^{2}+{a}^{2}}$. In the appendices it is verified in detail that the above tensor obeys the algebra (\ref{ID}) in the background Minkowski spacetime (\ref{eta elipsi}). Now we can choose a particular solution for this tensor by choosing the geometric terms $\sinh\Lambda$ and $\cosh\Lambda$: \begin{equation} \label{definicao shL} \sinh\Lambda=-\frac{a\sin\theta}{\rho} \hspace{0.5cm}\mbox{and} \hspace{0.5cm}\cosh\Lambda=\frac{\sqrt{r^2+a^2}}{\rho} , \end{equation} with \begin{equation} \rho^2=r^2+a^2\cos^2\theta \end{equation} satisfying $\cosh^2\Lambda -\sinh^2\Lambda=1$.
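The detailed verification is left to the appendices, but the algebra can also be checked numerically. The sketch below uses the observation that (\ref{UKerr2}) can be written as the outer product $\Upsilon_{\mu\nu}=-2\,v_{\mu}v_{\nu}$ with $v=(\cosh\Lambda,0,0,\sinh\Lambda\, R\sin\theta)$, together with the choices (\ref{definicao shL}) at illustrative values of $r$, $a$ and $\theta$:

```python
import numpy as np

r, a, theta = 2.0, 0.5, 0.7          # illustrative coordinate and spin values
R = np.sqrt(r**2 + a**2)
rho = np.sqrt(r**2 + a**2 * np.cos(theta)**2)
sh, ch = -a * np.sin(theta) / rho, R / rho   # sinh(Lambda), cosh(Lambda)
assert np.isclose(ch**2 - sh**2, 1.0)

# Minkowski metric in ellipsoidal Boyer-Lindquist coordinates
eta = np.diag([1.0,
               -(r**2 + a**2 * np.cos(theta)**2) / (r**2 + a**2),
               -(r**2 + a**2 * np.cos(theta)**2),
               -(r**2 + a**2) * np.sin(theta)**2])
eta_inv = np.linalg.inv(eta)

# Upsilon_{mu nu} = -2 v_mu v_nu reproduces the entries of (UKerr2)
v = np.array([ch, 0.0, 0.0, sh * R * np.sin(theta)])
ups = -2.0 * np.outer(v, v)

# the algebra (ID) in matrix form: Ups eta^{-1} Ups = -2 Ups, and Tr(Upsilon) = -2
assert np.allclose(ups @ eta_inv @ ups, -2 * ups)
assert np.isclose(np.trace(ups @ eta_inv), -2.0)
```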
The physical terms that contain the strength of gravity, $e^{\Phi}$ and $\sinh{\Phi}$, can be taken with \footnote{In this paragraph we assume $G=1$ and $c=1$.} \begin{equation} \label{Phi RB} \Phi=\frac{Mr}{(r^2+a^2)}\ll 1, \end{equation} so that: \begin{equation} e^{\Phi}\approx 1+\Phi =1+ \frac{Mr}{(r^2+a^2)} \nonumber \end{equation} {and} \begin{equation} \sinh(\Phi)\approx \Phi= \frac{Mr}{(r^2+a^2)}.\nonumber \end{equation} We can compute each term of the metric field $g_{\mu\nu}$ (for more details see the appendices), \begin{small} \begin{eqnarray} g_{00}&=&\frac{\Delta-a^2\sin^2\theta}{\rho^2}+\frac{Mr}{(r^2+a^2)},\cr g_{03}&=&g_{30}=\frac{2Mra\sin^2{\theta}}{\rho^2},\cr g_{11}&=& -\frac{\rho^2}{\Delta}+ \frac{Mr\rho^2}{\Delta(r^2+a^2)},\cr g_{22}&=&-\rho^2-\frac{Mr\rho^2}{r^2+a^2},\cr g_{33}&=&\frac{-\sin^2\theta\left[\left(r^2+a^2\right)^2-\Delta a^2\sin^2\theta\right]}{\rho^2} - Mr\sin^2\theta,\nonumber \end{eqnarray} \end{small} where we use the definition \begin{equation} \Delta=r^2-2Mr+a^2. \end{equation} The metric tensor $g_{\mu\nu}$ in matrix form is: \begin{small} \begin{equation} \begin{pmatrix}\frac{\Delta-a^2\sin^2\theta}{\rho^2}& 0 & 0 & \frac{2Mra\sin^2{\theta}}{\rho^2}\cr 0&-\frac{\rho^2}{\Delta}& 0 & 0 \cr 0 & 0 & -\rho^2 & 0 \cr \frac{2Mra\sin^2{\theta}}{\rho^2}& 0 & 0 & \frac{-\sin^2\theta\left[\left(r^2+a^2\right)^2-\Delta a^2\sin^2\theta\right]}{\rho^2} \end{pmatrix}\nonumber \end{equation} \begin{equation} + \begin{pmatrix} \frac{Mr}{(r^2+a^2)}& 0 & 0 &0 \cr 0 & \frac{Mr\rho^2}{\Delta(r^2+a^2)}& 0 & 0 \cr 0 & 0 & -\frac{Mr\rho^2}{r^2+a^2} & 0 \cr 0 & 0 & 0 & - Mr\sin^2\theta \end{pmatrix}. \end{equation} \end{small} The first matrix above is just the exact Kerr solution, while the second matrix can be seen as a deviation or deformation from the exact solution.
Then, it makes sense to see this deviation as an approximate solution of the form \begin{equation} g_{\mu\nu}=g_{\mu\nu}^{(0)}+h_{\mu\nu}, \end{equation} where $g_{\mu\nu}^{(0)}$ is some known exact solution (in this case the Kerr solution) and $h_{\mu\nu}$ is the perturbation. For Einstein's vacuum equation it is possible to obtain explicitly the vacuum perturbation equations from an arbitrary exact solution \citep{Wald}. Gravitational wave observations of extreme-mass-ratio inspirals (EMRIs) by LISA will provide unique evidence for the identity of the supermassive objects in galactic nuclei. It is commonly assumed that these objects are indeed Kerr black holes. K. Glampedakis and S. Babak argue that, from the observed signal, LISA will have the potential to prove (or disprove) this assumption by extracting the first few multipole moments of the spacetime outside these objects. The possibility of discovering a non-Kerr object should be taken into account when constructing waveform templates for LISA's data analysis tools. They provide a prescription for building a `quasi-Kerr' metric, that is, a metric that slightly deviates from Kerr, and present results on how this deviation impacts orbital motion and the emitted waveform \citep{Glampedakis}. \subsection{Deformed Schwarzschild spacetime} Another example can be given in spherical coordinates $(t,r,\theta,\phi)$, where the components of the Minkowski metric tensor are:\\ $\eta_{\mu\nu}=\mbox{diag}(1,-1,-r^2,-r^2\sin^2\theta)$,\\ with the respective inverse\\ $\eta^{\mu\nu}=\mbox{diag}(1,-1,-\frac{1}{r^2},-\frac{1}{r^2\sin^2\theta})$.
As we have seen before, let us describe a simpler (quasi-)idempotent tensor in spherical coordinates given by \begin{equation} \label{deformed S} \Upsilon_{\mu\nu}= \begin{pmatrix} 0&0&0&0\cr 0&2&0&0\cr 0&0&{r}^{2}&-{r}^{2}\sin\theta\cr 0&0&-{r}^{2}\sin\theta &{r}^{2} \sin^2 \theta \end{pmatrix} \end{equation} with \begin{equation} {\Upsilon_{\mu}}^{\nu}=\Upsilon_{\mu\alpha}\eta^{\alpha\nu}= \begin{pmatrix} 0&0&0&0\cr 0&-2&0&0\cr 0&0&-1&\frac{1}{\sin\theta}\cr 0&0&\sin\theta &-1 \end{pmatrix} \end{equation} {and} \begin{equation} {\Upsilon^{\mu}}_{\nu}=\eta^{\mu\alpha}\Upsilon_{\alpha\nu}= \begin{pmatrix} 0&0&0&0\cr 0&-2&0&0\cr 0&0&-1&{\sin\theta}\cr 0&0&\frac{1}{\sin\theta} &-1 \end{pmatrix}. \end{equation} Then ${\bf Tr}(\bm\Upsilon)={\Upsilon^{\mu}}_{\mu}=-4$. One can show that $\Upsilon^{\mu\nu}$ given by $\eta^{\mu\alpha}\Upsilon_{\alpha\beta}\eta^{\beta\nu}$ is, \begin{equation} \Upsilon^{\mu\nu}= \begin{pmatrix} 0&0&0&0\cr 0&2&0&0\cr 0&0&\frac{1}{r^2}&-{\frac {1}{{r}^{2}\sin \theta}}\cr 0&0&-{\frac {1}{{r}^{2}\sin \theta }}&{\frac {1}{{r}^{2} \sin^2\theta}} \end{pmatrix} \end{equation} that satisfies the algebraic relation (\ref{ID}), \begin{small} \begin{eqnarray} \Upsilon_{\mu\alpha}\Upsilon^{\alpha\nu}&=& \begin{pmatrix} 0&0&0&0\cr 0&4&0&0\cr 0&0&2&-\frac{2}{\sin \theta}\cr 0&0&-2\,\sin \theta &2 \end{pmatrix}\cr &=&-2 \begin{pmatrix} 0&0&0&0\cr 0&-2&0&0\cr 0&0&-1&\frac{1}{\sin \theta}\cr 0&0&\sin \theta &-1 \end {pmatrix}=-2{\Upsilon_{\mu}}^{\nu}. 
\end{eqnarray} \end{small} From expression (\ref{metricphi}) we have the metric field $g_{\mu\nu}=e^{\Phi}\eta_{\mu\nu}+\sinh\Phi\Upsilon_{\mu\nu}$; it follows that \begin{eqnarray} g_{00}&=&e^{\Phi},\cr g_{rr}&=&-\frac{1}{e^{\Phi}},\cr g_{\theta\theta}&=&-r^2\cosh\Phi,\cr g_{\phi\phi}&=&-r^2\sin^2\theta\cosh\Phi,\cr g_{\theta\phi}&=&g_{\phi\theta}=-r^2\sin\theta\sinh\Phi, \end{eqnarray} and in the case that $\Phi=-\frac{2GM}{c^2r}$ with $|\Phi|\ll 1$, the line element expanded to first order in the strength of gravity is \begin{scriptsize} \begin{eqnarray} ds^2 &=& \underbrace{\left(1-\frac{2GM}{c^2r}\right)c^2dt^2-\frac{dr^2}{\left(1-\frac{2GM}{c^2r}\right)}-r^2d\theta^2-r^2\sin^2\theta d\phi^2}_{\mbox{Schwarzschild spacetime}}\cr &+& 4\left(\frac{GM}{c^2}\right) r \sin\theta d\theta d\phi. \end{eqnarray} \end{scriptsize} This metric field is asymptotically flat: the components approach those of Minkowski spacetime in spherical coordinates. If $g_{\theta\phi}$ vanished, we would obtain the Schwarzschild spacetime. However, the terms $\Upsilon_{\theta\phi}= \Upsilon_{\phi\theta}= -r^2\sin\theta$ in the symmetric tensor of equation (\ref{deformed S}) are necessary for this tensor to obey the algebraic relation (\ref{ID}). Nevertheless, this raises a question: what kind of gravitating source should distort a spherically symmetric spacetime? The main purpose of this section is to illustrate some (quasi-)idempotent tensors $\bm\Upsilon$ that satisfy the algebraic relation (\ref{ID}). There are many works with discussions about the Yilmaz metric field and its sources. However, this paper does not discuss the physics of the gravitating sources of the circularly polarized wave, the rotating bodies and the deformed Schwarzschild spacetime proposed here. In forthcoming work, it will be necessary to analyse principally the gravitating source that distorts the static Schwarzschild solution. These examples suggest that ${\bf Tr}({\bm\Upsilon})\in\mathbb{Z}$ is a constant.
In what follows it will be necessary to calculate the derivative of ${\bf Tr}({\bm\Upsilon})$ in various situations; thus, throughout this paper, we shall assume that \begin{equation} \label{Dtraca} \partial_\alpha{\bf Tr}({\bm\Upsilon})=0. \end{equation} \section{Adjoint Metric Field, Christoffel Symbols and Determinant} \subsection{Adjoint Metric Field} One can see that the spacetime (\ref{metricphi}) is asymptotically flat. The Minkowski spacetime is the universal covering space for all such derived spacetimes, and in this sense it is possible to restore Minkowski spacetime in (\ref{metricphi}) by turning off the gravitational strength in the metric field; in other words, if $\Phi=0$ then $g_{\mu\nu}=\eta_{\mu\nu}$. However, one can construct a kind of spacetime with hyperbolic cosine instead of hyperbolic sine that is not asymptotically flat, \begin{equation} \label{pseudometricphi} \breve{g}_{\mu\nu}=e^{\Phi}\eta_{\mu\nu}+\cosh(\Phi)\Upsilon_{\mu\nu}, \end{equation} with respective inverse, \begin{equation} \label{pseudometricphiinverse} \breve{g}^{\mu\nu}=e^{-\Phi}\eta^{\mu\nu}+\cosh(\Phi)\Upsilon^{\mu\nu}, \end{equation} and with $\breve{g}_{\mu\nu}\breve{g}^{\nu\alpha}={\delta_{\mu}}^{\alpha}$, since $\Upsilon_{\mu\nu}\Upsilon^{\nu\alpha}=-2{\Upsilon_{\mu}}^{\alpha}$ from equation (\ref{ID}). A map between the metric fields (\ref{metricphi}) and (\ref{pseudometricphi}) is obtained through the partial derivative in $\Phi$: \begin{equation} \breve{g}_{\mu\nu}=\frac{\partial g_{\mu\nu}}{\partial \Phi} \hspace{1cm}\mbox{and}\hspace{1cm}\breve{g}^{\mu\nu}= -\frac{\partial g^{\mu\nu}}{\partial \Phi}. \end{equation} One also has $g_{\mu\nu}=\dfrac{\partial^2g_{\mu\nu}}{\partial\Phi^2}$ and $g^{\mu\nu}=\dfrac{\partial^2g^{\mu\nu}}{\partial\Phi^2}$. This `adjoint metric field' (\ref{pseudometricphi}) helps us to simplify the Christoffel symbols.
But first, it is necessary to establish some relationships between ${\bf g}$ and $\breve{\bf g}$, \begin{scriptsize} \begin{eqnarray} \label{gg0} g_{\mu\nu}\breve{g}^{\nu\alpha}&=&(e^{\Phi}\eta_{\mu\nu}+\sinh(\Phi)\Upsilon_{\mu\nu})(e^{-\Phi}\eta^{\nu\alpha}+\cosh(\Phi)\Upsilon^{\nu\alpha})\cr &=&{\delta_{\mu}}^{\alpha}+[e^{\Phi}\cosh(\Phi)+e^{-\Phi}\sinh(\Phi)]{\Upsilon_{\mu}}^{\alpha}\cr &+&\sinh(\Phi)\cosh(\Phi)\Upsilon_{\mu\nu}\Upsilon^{\nu\alpha}. \end{eqnarray} \end{scriptsize} Using $\Upsilon_{\mu\nu}\Upsilon^{\nu\alpha}=-2{\Upsilon_{\mu}}^{\alpha}$ and the hyperbolic identities $e^{\Phi}=\cosh(\Phi)+\sinh(\Phi)$ and $e^{-\Phi}=\cosh(\Phi)-\sinh(\Phi)$, we have: \begin{equation} \label{gg1} g_{\mu\nu}\breve{g}^{\nu\alpha}={\delta_{\mu}}^{\alpha}+{\Upsilon_{\mu}}^{\alpha}, \end{equation} and, \begin{equation} \label{gg2} g^{\mu\nu}\breve{g}_{\nu\alpha}={\delta^{\mu}}_{\alpha}+ {\Upsilon^{\mu}}_{\alpha}. \end{equation} It is important to observe that \begin{equation} g_{\mu\nu}\breve{g}^{\mu\nu}=g^{\mu\nu}\breve{g}_{\mu\nu}={\bf Tr(\bm{\eta})}+ {\bf Tr(\bm{\Upsilon})}. \end{equation} Further we have: \begin{equation} \label{gUspsilon1} g_{\mu\nu}\Upsilon^{\nu\alpha}=(e^{\Phi}\eta_{\mu\nu} +\sinh(\Phi)\Upsilon_{\mu\nu})\Upsilon^{\nu\alpha}=e^{-\Phi}{\Upsilon_\mu}^{\alpha}, \end{equation} \begin{equation} \label{gUspsilon2} g^{\mu\nu}\Upsilon_{\nu\alpha}=e^{\Phi}{\Upsilon^\mu}_{\alpha}, \end{equation} \begin{equation} \label{gUspsilon3} \breve{g}_{\mu\nu}\Upsilon^{\nu\alpha}=-e^{-\Phi}{\Upsilon_\mu}^{\alpha}, \end{equation} \begin{equation} \label{gUspsilon4} \breve{g}^{\mu\nu}\Upsilon_{\nu\alpha}=-e^{\Phi}{\Upsilon^\mu}_{\alpha}. \end{equation} Nevertheless, for this `adjoint metric field' it is not possible to recover Minkowski spacetime when the gravitational strength is turned off: if $\Phi=0$ then $\breve{g}_{\mu\nu}=\eta_{\mu\nu}+\Upsilon_{\mu\nu}$, so unless all components $\Upsilon_{\mu\nu}$ vanish one cannot obtain Minkowski spacetime.
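These relations hold for any $\bm\Upsilon$ obeying (\ref{ID}). As an illustration, the sketch below (assuming SymPy, and using the diagonal weak-field tensor $\Upsilon_{\mu\nu}={\rm diag}(0,2,2,2)$ of the Yilmaz case, for which ${\bf Tr}(\bm\Upsilon)=-6$) verifies the inverses, the map $\breve{g}_{\mu\nu}=\partial g_{\mu\nu}/\partial\Phi$, and equations (\ref{gg1}) and (\ref{gUspsilon1})--(\ref{gUspsilon4}):

```python
import sympy as sp

Phi = sp.symbols('Phi', real=True)

eta = sp.diag(1, -1, -1, -1)          # eta_{mu nu} = eta^{mu nu} in (t,x,y,z)
U_dd = sp.diag(0, 2, 2, 2)            # weak-field Upsilon_{mu nu} (Yilmaz case)
U_uu = eta * U_dd * eta               # Upsilon^{mu nu}
U_ud = eta * U_dd                     # Upsilon^mu_nu
U_du = U_dd * eta                     # Upsilon_mu^nu

g_dd = sp.exp(Phi) * eta + sp.sinh(Phi) * U_dd
g_uu = sp.exp(-Phi) * eta - sp.sinh(Phi) * U_uu
gb_dd = sp.exp(Phi) * eta + sp.cosh(Phi) * U_dd    # adjoint metric field
gb_uu = sp.exp(-Phi) * eta + sp.cosh(Phi) * U_uu

I4 = sp.eye(4)
# rewrite hyperbolic functions in exponentials so cancellations are exact
is_zero = lambda m: m.applyfunc(lambda e: sp.expand(e.rewrite(sp.exp))) == sp.zeros(4, 4)

assert is_zero(g_dd * g_uu - I4)                    # g^{mu nu} is the inverse
assert is_zero(gb_dd * gb_uu - I4)                  # adjoint inverse
assert is_zero(gb_dd - sp.diff(g_dd, Phi))          # breve g = dg/dPhi
assert is_zero(g_dd * gb_uu - (I4 + U_du))          # eq. (gg1)
assert is_zero(g_dd * U_uu - sp.exp(-Phi) * U_du)   # eq. (gUspsilon1)
assert is_zero(g_uu * U_dd - sp.exp(Phi) * U_ud)    # eq. (gUspsilon2)
assert is_zero(gb_dd * U_uu + sp.exp(-Phi) * U_du)  # eq. (gUspsilon3)
assert is_zero(gb_uu * U_dd + sp.exp(Phi) * U_ud)   # eq. (gUspsilon4)
```

The check exploits only the algebraic relation (\ref{ID}); substituting any other tensor that satisfies it leaves the assertions unchanged.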
Moreover, with $\Upsilon_{\mu\nu}\neq 0$ the spacetime $({\cal M},\breve{\bf g})$ has a signature different from that of physical spacetimes. For example, if one chooses $\Upsilon_{\mu\nu}$ from equation (\ref{weak}), the corresponding line element is \begin{equation} d\breve{s}^2=e^{\Phi}c^2dt^2 + e^{-\Phi}(dx^2+dy^2+dz^2), \end{equation} which has signature $(++++)$. \subsection{Christoffel Symbols} We take the manifold $\cal M$ to be endowed with the Levi-Civita connection, whose connection coefficients $\bm \Gamma$ are given by \begin{equation} \Gamma^{\beta}_{\mu\nu}=\frac{1}{2}g^{\alpha\beta}(\partial_{\mu}g_{\alpha\nu}+\partial_{\nu}g_{\alpha\mu}-\partial_{\alpha}g_{\mu\nu}). \end{equation} We have \begin{equation} \label{pg1} \partial_{\alpha}g_{\mu\nu}=(e^{\Phi}\eta_{\mu\nu} +\cosh(\Phi)\Upsilon_{\mu\nu})\partial_{\alpha}\Phi+\sinh(\Phi)\partial_{\alpha}\Upsilon_{\mu\nu}, \end{equation} which in terms of (\ref{pseudometricphi}) is \begin{equation} \partial_{\alpha}g_{\mu\nu}=\breve{g}_{\mu\nu}\partial_{\alpha}\Phi+\sinh(\Phi)\partial_{\alpha}\Upsilon_{\mu\nu}, \end{equation} from which we obtain that the Christoffel symbols are \begin{scriptsize} \begin{eqnarray} \Gamma^{\beta}_{\mu\nu}&=&\frac{1}{2}g^{\alpha\beta}\Big[\breve{g}_{\alpha\nu}\partial_{\mu}\Phi+\breve{g}_{\alpha\mu}\partial_{\nu}\Phi-\breve{g}_{\mu\nu}\partial_{\alpha}\Phi\cr &+&\sinh(\Phi)(\partial_{\mu}\Upsilon_{\alpha\nu}+\partial_{\nu}\Upsilon_{\alpha\mu}-\partial_{\alpha}\Upsilon_{\mu\nu})\Big]\cr &=&\frac{1}{2}\left[({\delta^{\beta}}_{\nu}+{\Upsilon^{\beta}}_{\nu})\partial_{\mu}\Phi+({\delta^{\beta}}_{\mu}+{\Upsilon^{\beta}}_{\mu})\partial_{\nu}\Phi-g^{\alpha\beta}\breve{g}_{\mu\nu}\partial_{\alpha}\Phi\right] \cr &+&\frac{1}{2}\sinh(\Phi)g^{\alpha\beta}(\partial_{\mu}\Upsilon_{\alpha\nu}+\partial_{\nu}\Upsilon_{\alpha\mu}-\partial_{\alpha}\Upsilon_{\mu\nu}).
\end{eqnarray} \end{scriptsize} Let us consider Christoffel symbols as a combination of two terms: the first is dependent of $\partial_{\alpha}\Phi$ and the second is dependent of $\partial_{\alpha}\bm{\Upsilon}$, \begin{equation} \label{Cristoffel1A} \Gamma^{\beta}_{\mu\nu}=\Gamma^{\beta(1)}_{\mu\nu}+\Gamma^{\beta(2)}_{\mu\nu} \end{equation} where, \begin{scriptsize} \begin{equation} \label{Cristoffel2} \Gamma^{\beta(1)}_{\mu\nu}=\frac{1}{2}\left({\delta^{\beta}}_{\nu}{\delta^{\alpha}}_{\mu}+{\delta^{\beta}}_{\mu}{\delta^{\alpha}}_{\nu}+{\delta^{\alpha}}_{\mu}{\Upsilon^{\beta}}_{\nu}+{\delta^{\alpha}}_{\nu}{\Upsilon^{\beta}}_{\mu}-g^{\alpha\beta}\breve{g}_{\mu\nu}\right)\partial_{\alpha}\Phi \end{equation} \end{scriptsize} and \begin{equation} \label{Cristoffel3} \Gamma^{\beta(2)}_{\mu\nu}=\frac{1}{2}\sinh(\Phi)g^{\alpha\beta}(\partial_{\mu}\Upsilon_{\alpha\nu}+\partial_{\nu}\Upsilon_{\alpha\mu}-\partial_{\alpha}\Upsilon_{\mu\nu}). \end{equation} Since $\bm\Gamma$ is not a tensor, it can not have intrinsic geometrical meaning as measures of how much a manifold is curved. Below we shall compute intrinsic objects as Ricci tensor and scalar curvature. 
\subsection{Determinant $\bm g$} From the identity \citep{Schutz} \begin{equation} \label{ID determinante} \Gamma^{\nu}_{\mu\nu}=\partial_{\mu}(\ln\,\sqrt{-g}), \end{equation} where $g=\det(g_{\mu\nu})$, it is possible to obtain the determinant $g$ by contracting the indices in (\ref{Cristoffel1A}) with $\beta=\nu$ and using equation (\ref{gg2}): \begin{small} \begin{eqnarray} \label{Cristoffel ID} \Gamma^{\nu}_{\mu\nu}&=&\frac{1}{2}\Big[{\bf Tr}(\bm{\eta})\partial_{\mu}\Phi+ \partial_{\mu}\Phi+{\bf Tr}(\bm{\Upsilon})\partial_{\mu}\Phi +{\Upsilon^{\alpha}}_{\mu}\partial_{\alpha}\Phi\cr & &- ({\delta^{\alpha}}_{\mu}+{\Upsilon^{\alpha}}_{\mu})\partial_{\alpha}\Phi\Big] \cr & &+ \frac{1}{2}\sinh(\Phi)g^{\alpha\nu}(\partial_{\mu}\Upsilon_{\alpha\nu}+\partial_{\nu}\Upsilon_{\alpha\mu}-\partial_{\alpha}\Upsilon_{\mu\nu})\cr &=& \frac{1}{2}\left[{\bf Tr}(\bm{\eta})+{\bf Tr}(\bm{\Upsilon})\right]\partial_{\mu}\Phi+\frac{1}{2}\sinh(\Phi)g^{\alpha\nu}\partial_{\mu}\Upsilon_{\alpha\nu}.\nonumber \end{eqnarray} \end{small} We note that the second term, containing $g^{\alpha\nu}\partial_{\mu}\Upsilon_{\alpha\nu}$, vanishes, \begin{small} \begin{eqnarray} g^{\alpha\nu}\partial_{\mu}\Upsilon_{\alpha\nu}&=&e^{-\Phi}\eta^{\alpha\nu}\partial_{\mu}\Upsilon_{\alpha\nu}-\sinh(\Phi)\Upsilon^{\alpha\nu}\partial_{\mu}\Upsilon_{\alpha\nu}\cr &=&e^{-\Phi}\partial_{\mu}{\bf Tr}({\bm\Upsilon})-\sinh(\Phi)[-\partial_{\mu}{\bf Tr}({\bm\Upsilon})]=0,\nonumber \end{eqnarray} \end{small} where we used (\ref{derivada traco2}) and (\ref{Dtraca}); thus, \begin{eqnarray} \label{Cristoffel ID2} \Gamma^{\nu}_{\mu\nu}&=&\frac{1}{2}\left[{\bf Tr}(\bm{\eta})+{\bf Tr}(\bm{\Upsilon})\right]\partial_{\mu}\Phi=\partial_{\mu}(\ln\,\sqrt{-g}), \end{eqnarray} and we finally find \begin{equation} \label{ID determinante3} \sqrt{-g}=\exp\left\{\frac{\Phi}{2}\left[{\bf Tr}(\bm{\eta})+{\bf Tr}(\bm{\Upsilon})\right]\right\}. \end{equation} The same result follows from the identity $\ln(\det{g_{\mu\nu}})={\bf Tr}(\ln g_{\mu\nu})$.
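Formula (\ref{ID determinante3}) can be checked against a direct computation of $\det(g_{\mu\nu})$; a sketch (assuming SymPy, with the weak-field $\Upsilon_{\mu\nu}={\rm diag}(0,2,2,2)$ of the Yilmaz case, ${\bf Tr}(\bm\Upsilon)=-6$):

```python
import sympy as sp

Phi = sp.symbols('Phi', real=True)
eta = sp.diag(1, -1, -1, -1)
U = sp.diag(0, 2, 2, 2)               # Yilmaz case: Tr(Upsilon) = -6

g = (sp.exp(Phi) * eta + sp.sinh(Phi) * U).applyfunc(
    lambda e: sp.expand(e.rewrite(sp.exp)))
detg = sp.simplify(g.det())

# direct determinant: g = -exp(-2 Phi)
assert sp.simplify(detg + sp.exp(-2 * Phi)) == 0

# formula (ID determinante3): sqrt(-g) = exp{ (Phi/2)[Tr(eta) + Tr(Upsilon)] }
sqrt_minus_g = sp.exp(Phi * sp.Rational(4 + (-6), 2))
assert sp.simplify(detg + sqrt_minus_g**2) == 0
```

The same comparison reproduces the other entries of the table below once the corresponding $\bm\Upsilon$ is substituted.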
Alternatively, one may use the identity $dg=g\,\,g^{\mu\nu}dg_{\mu\nu}$ to compute $\sqrt{-g}$. One can verify the examples of the above section in coordinates $(t,x,y,z)$: \begin{table}[h] \begin{scriptsize} \begin{tabular}{@{}ll} Yilmaz metric: & $g=-\exp\left\{{\Phi}\left[4+(-6)\right]\right\}=-e^{-2\Phi}$ \\ circularly polarized wave: & $ g=-\exp\left\{{\Phi}\left[4+(-4)\right]\right\}=-1$\\ rotating bodies: & $ g=-\exp\left\{{\Phi}\left[4+(-2)\right]\right\}=-e^{2\Phi}$ \end{tabular} \end{scriptsize} \end{table} The natural volume element of the manifold $\cal M$ with metric tensor (\ref{metricphi}), $dv =\sqrt{-g}d^4x$, is invariant under coordinate transformations. An action with Lagrangian $\cal L$ in the spacetime $({\cal M},{\bf g})$ is given by \begin{equation} \label{action1} S= \int_{\cal M}{\cal L}\,\, \sqrt{-g}\,\,\,d^4x, \end{equation} and then we have \begin{equation} \label{action2} S= \int_{\cal M}{\cal L}\,\, \exp\left\{\frac{\Phi}{2}\left[{\bf Tr}(\bm{\eta})+{\bf Tr}(\bm{\Upsilon})\right]\right\}\,d^4x. \end{equation} \section{Newtonian Limit and Gravitational Waves} \subsection{Newtonian Limit} In the Newtonian limit one assumes that velocities are small, $\left|\dfrac{v}{c}\right|\ll1$, that gravitational potentials are near their Minkowski values,\footnote{Observe that $g_{\mu\nu}=e^{\Phi}\eta_{\mu\nu}+\sinh(\Phi)\Upsilon_{\mu\nu}\approx (1+\Phi)\eta_{\mu\nu}+\Phi\Upsilon_{\mu\nu}= \eta_{\mu\nu} +\Phi(\eta_{\mu\nu}+\Upsilon_{\mu\nu})=\eta_{\mu\nu}+h_{\mu\nu}$ and \\ $g^{\mu\nu}\approx \eta^{\mu\nu}-\Phi(\eta^{\mu\nu}+\Upsilon^{\mu\nu})= \eta^{\mu\nu}-h^{\mu\nu}$.} and that pressures or other mechanical stresses are negligible compared to the energy densities, $|P|\ll \rho c^2$.
The description of Einstein's field equation (\ref{field equation}) will use the Christoffel symbols from equation (\ref{Cristoffel1A}), which has two terms: $\Gamma^{\beta(1)}_{\mu\nu}$, dependent on $\partial_{\alpha}\Phi$, and $\Gamma^{\beta(2)}_{\mu\nu}$, dependent on $\partial_{\alpha}\bm\Upsilon$. In this section we shall verify the Einstein tensor $G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R$ only in the case that $\partial_{\alpha}\bm\Upsilon=0$, so that $\Gamma^{\beta(2)}_{\mu\nu}=0$; then, in any coordinate system, the Ricci tensor is given by \begin{equation} \label{TensorRicci1} R_{\mu\nu}^{(1)}=\partial_{\alpha}\Gamma^{\alpha(1)}_{\mu\nu} -\Gamma^{\alpha(1)}_{\mu\beta}\Gamma^{\beta(1)}_{\nu\alpha} - \partial_{\nu}\Gamma^{\alpha(1)}_{\mu\alpha} +\Gamma^{\alpha(1)}_{\alpha\beta}\Gamma^{\beta(1)}_{\mu\nu}. \end{equation} Since $\partial_{\alpha}\bm\Upsilon=0$, the first term of $R_{\mu\nu}^{(1)}$ is given by \begin{small} \begin{eqnarray} \label{Newton 1} \partial_{\alpha}\Gamma^{\alpha(1)}_{\mu\nu}&=& \frac{1}{2}\left(\breve{g}_{\mu\nu}\breve{g}^{\alpha\beta}-g_{\mu\nu}g^{\alpha\beta}\right)\partial_{\alpha}\Phi\partial_{\beta}\Phi \cr & &+\frac{1}{2}\big(2\partial_{\mu}\partial_{\nu}\Phi+{\Upsilon^{\beta}}_{\nu}\partial_{\mu}\partial_{\beta}\Phi+ {\Upsilon^{\beta}}_{\mu}\partial_{\nu}\partial_{\beta}\Phi\cr & &-\breve{g}_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}\Phi\big), \end{eqnarray} \end{small} and the second term of the Ricci tensor is: \begin{small} \begin{eqnarray} \Gamma^{\alpha(1)}_{\mu\beta}\Gamma^{\beta(1)}_{\nu\alpha}&=&\frac{3}{2}\,\,\partial_{\mu}\Phi\,\,\partial_{\nu}\Phi+\frac{1}{2}(\partial_{\mu}\Phi{\Upsilon^{\gamma}}_{\nu}+\partial_{\nu}\Phi{\Upsilon^{\gamma}}_{\mu})\partial_{\gamma}\Phi\cr & &-\frac{1}{2}g_{\mu\nu}g^{\lambda\gamma}\partial_{\gamma}\Phi\,\,\partial_{\lambda}\Phi +\frac{1}{2}{\Upsilon^{\lambda}}_{\mu}{\Upsilon^{\gamma}}_{\nu}\partial_{\gamma}\Phi\,\,\partial_{\lambda}\Phi.\nonumber \end{eqnarray}
\end{small} With (\ref{Cristoffel ID2}) we can obtain the third term, \begin{equation} \label{Newton 3} \partial_{\nu}\Gamma^{\alpha(1)}_{\mu\alpha}=\frac{1}{2}[{\bf Tr}({\bm \eta})+{\bf Tr}({\bm \Upsilon})]\partial_{\mu}\partial_{\nu}\Phi, \end{equation} and the fourth is \begin{eqnarray} \Gamma^{\alpha(1)}_{\beta\alpha}\Gamma^{\beta(1)}_{\mu\nu}=\frac{1}{4}[{\bf Tr}({\bm \eta})+{\bf Tr} ({\bm \Upsilon})]\big(2\partial_{\mu}\Phi\partial_{\nu}\Phi\cr +\partial_{\mu}\Phi{\Upsilon^{\beta}}_{\nu}\partial_{\beta}\Phi+ \partial_{\nu}\Phi{\Upsilon^{\beta}}_{\mu}\partial_{\beta}\Phi -\breve{g}_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\Phi\partial_{\beta}\Phi \big). \end{eqnarray} For the Newtonian limit we must retain only the terms of first order in the strength of gravity $\Phi$; in other words, we take $\partial\Phi\partial\Phi\ll\partial\partial\Phi$ and $\Phi\partial\partial\Phi\ll\partial\partial\Phi$. With this assumption only the second term of (\ref{Newton 1}) and the term (\ref{Newton 3}) survive, so that \begin{eqnarray} \label{TensorRicci2} R_{\mu\nu}^{(1)}=\frac{1}{2}\big(2\partial_{\mu}\partial_{\nu}\Phi+{\Upsilon^{\beta}}_{\nu}\partial_{\mu}\partial_{\beta}\Phi +{\Upsilon^{\beta}}_{\mu}\partial_{\nu}\partial_{\beta}\Phi\cr -\breve{g}_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}\Phi\big)- \frac{1}{2}[{\bf Tr}({\bm \eta})+{\bf Tr}({\bm \Upsilon})]\partial_{\mu}\partial_{\nu}\Phi. \end{eqnarray} One can choose the simplest $\bm\Upsilon$, namely that of (\ref{weak}) with ${\bf Tr}({\bm \Upsilon})=-6$, for which the Ricci tensor becomes \begin{scriptsize} \begin{equation} R_{\mu\nu}^{(1)}=2\partial_{\mu}\partial_{\nu}\Phi+\frac{1}{2}{\Upsilon^{\beta}}_{\nu}\partial_{\mu}\partial_{\beta}\Phi+\frac{1}{2}{\Upsilon^{\beta}}_{\mu}\partial_{\nu}\partial_{\beta}\Phi-\frac{1}{2}\breve{g}_{\mu\nu}g^{\alpha\beta}\partial_{\alpha}\partial_{\beta}\Phi.
\end{equation} \end{scriptsize} The scalar curvature is obtained by further contracting indices, $R^{(1)}=g^{\mu\nu}R_{\mu\nu}^{(1)}$: \begin{eqnarray} R^{(1)}= 3g^{\mu\nu}\partial_{\mu}\partial_{\nu}\Phi+g^{\mu\nu}{\Upsilon^{\beta}}_{\nu}\partial_{\mu}\partial_{\beta}\Phi. \end{eqnarray} With $g^{\mu\nu}=(1-\Phi)\eta^{\mu\nu}-\Phi\Upsilon^{\mu\nu}$ to first order in $\Phi$, we find that the scalar curvature is \begin{equation} R^{(1)}=3\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}\Phi+\eta^{\mu\nu}{\Upsilon^{\beta}}_{\nu}\partial_{\mu}\partial_{\beta}\Phi. \end{equation} Since the Newtonian limit is taken in the background Minkowski spacetime, we find that \begin{equation} \label{weak R} R^{(1)}=3\Box\Phi+\Upsilon^{\beta\mu}\partial_{\mu}\partial_{\beta}\Phi. \end{equation} From the field equations (\ref{field equation}) we have \begin{equation} \label{field equation2} R=-\frac{8\pi G}{c^4}T, \end{equation} and in situations where Newtonian theory is applicable, $T\approx{\rho}{c^2}$. Now we can combine equations (\ref{field equation2}) and (\ref{weak R}) to obtain \begin{equation} \label{weak R2} 3\Box\Phi+\Upsilon^{\beta\mu}\partial_{\mu}\partial_{\beta}\Phi=-\frac{8\pi G}{c^2}\rho. \end{equation} We assume that velocities are small, so that \begin{equation} \Box=\dfrac{\partial^2}{c^2\partial t^2}-\nabla^2\approx -\nabla^2. \end{equation} In particular, from (\ref{weak}) we have the spatial components $\Upsilon_{ij}= 2\delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta in $\mathbb{R}^3$. The field equation (\ref{weak R2}) results in \begin{equation} \label{weak R3} -3\nabla^2\Phi+2\delta^{ij}\partial_{i}\partial_{j}\Phi=-\frac{8\pi G}{c^2}\rho, \end{equation} or \begin{equation} \label{weak R4} \nabla^2\Phi=\frac{8\pi G}{c^2}\rho. \end{equation} We can introduce $\Phi=\dfrac{2{\varphi}_{N}}{c^2}$, where ${\varphi}_{N}$ is the Newtonian potential.
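The reduction from (\ref{weak R3}) to (\ref{weak R4}), and the harmonicity of the point-source potential used in the sequel, can be checked with a short symbolic computation (a sketch assuming SymPy; $G$, $M$ and $c$ are kept symbolic):

```python
import sympy as sp

x, y, z, G, M, c = sp.symbols('x y z G M c', positive=True)
lap = lambda f: sum(sp.diff(f, v, 2) for v in (x, y, z))

# 2 delta^{ij} d_i d_j equals 2 nabla^2, so -3 nabla^2 + 2 nabla^2 = -nabla^2
F = sp.Function('F')(x, y, z)
assert sp.simplify((-3 * lap(F) + 2 * lap(F)) + lap(F)) == 0

# Phi = -2GM/(c^2 r) is harmonic away from the origin, consistent with (weak R4)
r = sp.sqrt(x**2 + y**2 + z**2)
Phi = -2 * G * M / (c**2 * r)
assert sp.simplify(lap(Phi)) == 0
```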
Thus we obtain Poisson's equation of Newtonian gravitation, \begin{equation} \label{weak R5} \nabla^2\varphi_{N}=4\pi G\rho; \end{equation} in this way, for a point source with mass $M$ we have \begin{equation} \label{potencial de newton} \varphi_{N}=-\frac{GM}{r}, \end{equation} and consequently $\Phi=-\dfrac{2GM}{c^2r}$. In the discussion that follows (\ref{TensorRicci2}) we assumed that $\Upsilon_{\mu\nu}$ is given by (\ref{weak}); it results that the line element is given by (\ref{metricphi3}) at the first order in $\Phi$. \subsection{Gravitational Waves} Gravitational waves are one of the most important physical phenomena associated with the presence of strong and dynamic gravitational fields. Though such gravitational radiation has not yet been detected directly, there is strong indirect evidence for its existence from the famous binary pulsar PSR 1913+16 \citep{Taylor}, discovered in 1974 by R.A. Hulse and J.H. Taylor \citep{Hulse}, a discovery for which they were awarded the 1993 Nobel Prize. Here, for a description of plane gravitational waves, we can assume that $\Phi$ is constant, so $\partial_{\alpha}\Phi=0$, implying $\Gamma^{\beta(1)}_{\mu\nu}=0$ in (\ref{Cristoffel1A}); thus we have $\Gamma^{\beta}_{\mu\nu}=\Gamma^{\beta(2)}_{\mu\nu}$ given by equation (\ref{Cristoffel3}): \begin{equation} \label{Cristoffel5} \Gamma^{\beta(2)}_{\mu\nu}=\frac{1}{2}\sinh(\Phi)g^{\alpha\beta}(\partial_{\mu}\Upsilon_{\alpha\nu}+\partial_{\nu}\Upsilon_{\alpha\mu}-\partial_{\alpha}\Upsilon_{\mu\nu}). \end{equation} One can obtain the field equation by writing the Ricci tensor in terms of the Christoffel symbols above, \begin{equation} \label{TensorRicci2a} R_{\mu\nu}^{(2)}=\partial_{\alpha}\Gamma^{\alpha(2)}_{\mu\nu} -\Gamma^{\alpha(2)}_{\mu\beta}\Gamma^{\beta(2)}_{\nu\alpha} - \partial_{\nu}\Gamma^{\alpha(2)}_{\mu\alpha} +\Gamma^{\alpha(2)}_{\alpha\beta}\Gamma^{\beta(2)}_{\mu\nu}.
\end{equation} From equation (\ref{Cristoffel ID2}) we have \\$\Gamma^{\alpha}_{\alpha\nu}=\dfrac{1}{2}[{\bf Tr}({\bm \eta}) +{\bf Tr}({\bm\Upsilon})]\partial_{\nu}\Phi=0$, so that \begin{equation} \label{TensorRicci2b} R_{\mu\nu}^{(2)}=\partial_{\alpha}\Gamma^{\alpha(2)}_{\mu\nu} -\Gamma^{\alpha(2)}_{\mu\beta}\Gamma^{\beta(2)}_{\nu\alpha}. \end{equation} As shown in the appendix, the above Ricci tensor is given by \begin{small} \begin{eqnarray} \label{TensorRicci2c} R_{\mu\nu}^{(2)}=\frac{1}{2}\sinh^2(\Phi)\,\partial_\alpha \Upsilon^{\alpha\beta}(\partial_\mu\Upsilon_{\beta\nu}+\partial_\nu\Upsilon_{\beta\mu}-\partial_{\beta}\Upsilon_{\mu\nu})& &\cr +\frac{1}{2}\sinh(\Phi)\, g^{\alpha\beta}(\partial_\alpha\partial_\mu\Upsilon_{\beta\nu} +\partial_\alpha\partial_\nu\Upsilon_{\beta\mu}-\partial_\alpha\partial_{\beta}\Upsilon_{\mu\nu})& &\cr -\frac{1}{4}\sinh^2(\Phi)[g^{\alpha\gamma}g^{\beta\lambda}\partial_\mu\Upsilon_{\gamma\beta}\partial_\nu\Upsilon_{\alpha\lambda}& &\cr +2(g^{\alpha\gamma}g^{\beta\lambda}-g^{\alpha\beta}g^{\gamma\lambda})\partial_\beta\Upsilon_{\gamma\mu}\partial_\alpha\Upsilon_{\nu\lambda}].& & \end{eqnarray} \end{small} The study of gravitational waves essentially involves the weak-field approximation of Einstein's equation; from these results one can obtain the wave equation for low amplitudes, $\Phi\ll 1$.
Keeping only the first order in the gravitational potential, we find \begin{equation} \label{TensorRicci2d} R_{\mu\nu}^{(2)}=\frac{1}{2}\sinh(\Phi)\, g^{\alpha\beta}(\partial_\alpha\partial_\mu\Upsilon_{\beta\nu}+\partial_\alpha\partial_\nu\Upsilon_{\beta\mu}-\partial_\alpha\partial_{\beta}\Upsilon_{\mu\nu}), \end{equation} and with $\sinh(\Phi)\, g^{\alpha\beta}\approx \Phi[(1-\Phi)\eta^{\alpha\beta}-\Phi\Upsilon^{\alpha\beta}]=\Phi\eta^{\alpha\beta}$ one obtains the Ricci tensor in the Minkowski background spacetime, which yields \begin{equation} \label{TensorRicci2e} R_{\mu\nu}^{(2)}=\frac{1}{2}{\Phi}\,(\partial^\beta\partial_\mu\Upsilon_{\beta\nu}+\partial^\beta\partial_\nu\Upsilon_{\beta\mu}-\Box\Upsilon_{\mu\nu}). \end{equation} The gauge choice $\partial^\beta\Upsilon_{\beta\nu}=0 $ simplifies the Ricci tensor to \begin{equation} \label{TensorRicci2f} R_{\mu\nu}^{(2)}=-\frac{1}{2}{\Phi}\,\Box\Upsilon_{\mu\nu}. \end{equation} One should observe that the scalar curvature \begin{equation} R^{(2)}=g^{\mu\nu}R_{\mu\nu}^{(2)}\nonumber \end{equation} vanishes to first order in $\Phi$, because \begin{equation} R^{(2)}=-\dfrac{1}{2}{\Phi}\,\eta^{\mu\nu}\Box\Upsilon_{\mu\nu}=-\dfrac{1}{2}{\Phi}\,\Box{\bf Tr}({\bm \Upsilon})=0, \nonumber \end{equation} since the trace of the tensor $\bm\Upsilon$ is a constant number, in accordance with equation (\ref{Dtraca}). Then, from the field equation (\ref{field equation}), the gravitational wave equation in the vacuum is simply \begin{equation} \label{wave1} \Box\Upsilon_{\mu\nu}=0. \end{equation} It is necessary that the tensor $\Upsilon_{\mu\nu}$ satisfies both the condition (\ref{ID}) and the above wave equation. Its shape should be, for example, that of the tensor (\ref{wavephi}) with $\zeta=\kappa_{\mu}x^{\mu}=\omega t-kz$, a solution for gravitational plane waves with circular polarization traveling in the $z$ direction, with amplitude $\Phi$.
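Since the explicit tensor (\ref{wavephi}) is not reproduced in this section, the sketch below uses a hypothetical circularly polarized ansatz of our own (a rotating projector in the $x$-$y$ plane plus a constant $z$-$z$ part), built so that it obeys the algebraic relation (\ref{ID}), has ${\bf Tr}(\bm\Upsilon)=-4$, satisfies the gauge condition $\partial^\beta\Upsilon_{\beta\nu}=0$, and solves the wave equation (\ref{wave1}) when $\omega=ck$. It is meant only to illustrate the structure of such solutions, not to reproduce equation (\ref{wavephi}); the check assumes SymPy:

```python
import sympy as sp

t, z, k, c = sp.symbols('t z k c', positive=True)
w = c * k                          # dispersion relation omega = c k
zeta = w * t - k * z
s, co = sp.sin(zeta), sp.cos(zeta)

# hypothetical ansatz (illustrative only, not eq. (wavephi))
U_dd = sp.Matrix([
    [0, 0,          0,          0],
    [0, 2 * co**2,  2 * s * co, 0],
    [0, 2 * s * co, 2 * s**2,   0],
    [0, 0,          0,          2],
])
eta = sp.diag(1, -1, -1, -1)
M = U_dd * eta                     # Upsilon_mu^nu

# algebraic relation (ID) and constant trace
assert sp.simplify(M * M + 2 * M) == sp.zeros(4, 4)
assert sp.simplify(M.trace()) == -4

# wave equation: Box = (1/c^2) d_t^2 - d_z^2 (no x,y dependence here)
box = lambda f: sp.diff(f, t, 2) / c**2 - sp.diff(f, z, 2)
assert U_dd.applyfunc(lambda e: sp.simplify(box(e))) == sp.zeros(4, 4)

# gauge d^beta Upsilon_{beta nu} = 0: only the t and z derivatives can act
for nu in range(4):
    assert sp.simplify(sp.diff(U_dd[0, nu], t) / c - sp.diff(U_dd[3, nu], z)) == 0
```

Note that ${\bf Tr}(\bm\Upsilon)=-4$ is exactly the value required by the table of determinants above for the circularly polarized wave, $g=-1$.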
\section{Conclusion} This paper deals with a tensorial structure that assumes a (quasi-)idempotent feature in order to improve at least the linear tensorial template of some metric tensor fields. It is clear that Einstein's field equations are non-linear; however, with this (quasi-)idempotent tensorial structure, without quadratic tensorial values, the non-linearity becomes more moderate, although there is a price to pay: the part that carries the dynamical information, the strength of gravity, is tied to the tensorial structure by exponential functions. In this approach the metric field can be characterized by a conformally flat background spacetime affected by a disturbance. We have worked out some examples of this tensorial structure that result in exponential metric fields; the main exponential metric obtained in this paper, which has been extensively explored in the literature, is the Yilmaz exponential metric \citep{Yilmaz1,Yilmaz2,Yilmaz3,Yilmaz4,Yilmaz5,Yilmaz6,Yilmaz7,Clapp,Robertson1,Robertson2,Ibison}. H. Yilmaz has argued that in his theory the gravitational field can be quantized via Feynman's method \citep{Yilmaz9,Alley}. Further, it has been found that the quantized theory is finite. Incidentally, in the exponential metric fields approached in this work, just as in the Yilmaz theory, there are no black holes in the sense of event horizons, but there can be stellar collapse \citep{Robertson1,Robertson2}; moreover, there are no point singularities. Interesting results obtained in this work from exponential metric fields are: a circularly polarized wave; rotating bodies, whose metric to first order is a deformation of the Kerr metric; and a deformed static spherically symmetric spacetime. Many discussions around massive stellar objects have suggested, for example, that their metric should deviate slightly from Kerr.
The possibility of discovering a non-Kerr object should be taken into account when constructing waveform templates for LISA's data analysis tools \citep{Glampedakis,White}. Technological development is ripe enough that in the years to come we might be able to test second-order relativistic gravity effects, which may lead to answers to some important questions of gravity. In this work, we have obtained a simple and general expression for the volume element of a manifold in coordinates $(t,x,y,z)$, given in terms of the strength of gravity and of the traces of the tensors $\bm \eta$ and $\bm \Upsilon$. It is possible that the analysis of any Lagrangian of a field interacting with gravity will thereby become easier. An interesting observation concerns the spacetime of the circularly polarized plane wave: in this spacetime the volume element $\sqrt{-g}\,\,d^4x$ is the same as that of Minkowski spacetime, so in this sense the gravitational radiation obtained from the exponential metric field does not modify the volume element of the background Minkowski spacetime in which the plane wave travels. Moreover, the Newtonian limit was proposed and verified as a solution of Einstein's equation, since we can assume that the trace of the stress-energy tensor is $T\approx \rho c^2$. Another important solution of Einstein's equation analysed in this paper was the plane gravitational wave in empty space, since we considered a vanishing stress-energy tensor to first order in $\Phi$. Both solutions of Einstein's equations, the Newtonian limit and the plane gravitational wave propagating in the vacuum, are cases in which the strength of gravity is small, $\Phi\ll 1$.
We have analysed the Newtonian limit in the case that $\partial_{\alpha}\bm\Upsilon=0$, and analysed the plane gravitational wave considering the strength of gravity as a constant term; thus we had two independent Ricci tensors: $R_{\mu\nu}^{(1)}$, with $\partial_{\alpha}\bm\Upsilon=0$ (for the Newtonian limit), and $R_{\mu\nu}^{(2)}$, with $\partial_{\alpha}\Phi=0$ (for the plane gravitational wave). In a forthcoming work, an analysis of Einstein's equations with both $\partial_{\alpha}\bm\Upsilon$ and $\partial_{\alpha}\Phi$ non-vanishing will be considered. A discussion is still missing about quantities of physical interest in the solutions of Einstein's equations which describe the exterior and interior gravitational field. Yilmaz has argued for the existence of a matter part on the right-hand side of the field equations corresponding to the field energy in the exterior. This paper lacks a discussion about the interior and the exterior field energies described by a total stress-energy tensor. An analysis of the total stress-energy tensor will be the object of a forthcoming study, where the physical consequences of the deformation terms in the Kerr and Schwarzschild solutions could be analysed. We know that the dark energy and dark matter problems are challenges to modern astrophysics and cosmology; as a typical example, we could mention the galactic rotation curves of spiral galaxies, which probably indicate the possible failure of both Newtonian gravity and General Relativity on galactic and intergalactic scales. To explain astrophysical and cosmological problems with arguments against dark energy and dark matter, many works have been devoted to the possibility that the Einstein-Hilbert Lagrangian, linear in the Ricci scalar R, should be generalized.
In this sense, the choice of a generic function $f(R)$ can be derived by matching the data under the requirement that no exotic ingredient has to be added \citep{Allemandi,Barrow,Capozziello1,Capozziello2,Capozziello3,Carroll1,Carroll2,Faraoni,Flanagan,Koivisto,Nojiri1,Nojiri2,Nojiri3,Sotiriou}. This class of theories, when linearized, exhibits other polarization modes for the gravitational waves, of which two correspond to the massless graviton, together with others such as massive scalar and ghost modes in $f(R)$ gravity \citep{Bellucci,Bogdanos}. In this way, analyses at any order of $f(R)$ gravity with the `exponential metrics' proposed in the present work could give a positive contribution to the debate on astrophysical and cosmological questions. \acknowledgments It is a pleasure to acknowledge many stimulating and helpful discussions with my colleagues and friends Andr\'e L. A. Penna and Caio M. M. Polito.
\section{INTRODUCTION} A monochromatic spectral line will be broadened after single Compton scattering on thermal electrons in an optically thin, hot plasma. The emergent spectrum will depend on the angle between the direction, $\bg{\Omega}$, from which the photons are supplied and the observer's direction, $\bg{\Omega^\prime}$. A classical problem then arises of finding the redistribution function, $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$, which gives the probability for the incident photon $(\nu,\bg{\Omega})$ to be scattered in the direction $\bg{\Omega^\prime}$ with a frequency of $\nu^\prime$ \citep{dirac25}. In the case in which the incident radiation is isotropic and monochromatic, the spectrum resulting from single Compton scattering can be found by integrating $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ over the scattering angle ($\mu_{\rm s}=\bg{\Omega}\bg{\Omega^\prime}$): \begin{equation} P(\nu\rightarrow\nu^\prime)=\int K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})\,d\bg{\Omega^\prime}= 2\pi\int K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})\,d\mu_{\rm s} . \label{k_int} \end{equation} Spectra described by equation (\ref{k_int}) may in particular form when spherical symmetry is present in the system. Consider, for example, an isotropic source of monochromatic radiation that is either located at the center of a spherical cluster of galaxies with hot intergalactic gas or distributed spherically symmetrically across the cluster. The frequency distribution of photons that have experienced a single scattering within the cluster will then be precisely $P(\nu\rightarrow\nu^\prime)$ if the entire cluster is probed at the same time. If instead the spectrum is collected from a part of the cluster, then one must take into account the angular distribution of incident photons and make use of the direction-dependent $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ function. 
Single-scattering line profiles, which can provide important information on the conditions (particularly the temperature) of the plasma in the source, can originate in various astrophysical environments. For the X-ray spectral band, these include intracluster gas, the coronae of accretion disks around black holes in binary stellar systems and active galactic nuclei (AGNs), and plasma streams outflowing from the neutron star during super-Eddington X-ray bursts in bursters. High-quality spectroscopic X-ray observations necessary for the detection and measurement of such lines will soon become possible with the satellites {\sl Chandra}, {\sl XMM} (both already in orbit), and {\sl Spectrum-X-Gamma}, and in a more distant outlook, with {\sl Constellation-X} and {\sl XEUS}. This was one of the primary motives for us in initiating the current study, in which we calculate analytically the functions $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ (eq. [\ref{k_set}]) and $P(\nu\rightarrow\nu^\prime)$ (eqs. [\ref{p_set}] and [\ref{p_rn}]) in the mildly relativistic limit. \subsection{Integral Kinetic Equation} Apart from the single-scattering problem outlined above, $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ and $P(\nu\rightarrow\nu^\prime)$ can be used as the kernels of the corresponding integrodifferential kinetic equations describing the Comptonization of photons on Maxwellian electrons. 
In the anisotropic problem, the kinetic equation in the case of an infinite homogeneous medium can be written as \begin{eqnarray} \frac{\partial I(\nu,\bg{\Omega},\tau)}{\partial\tau}=-\int I(\nu,\bg{\Omega},\tau) K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime}) [1+n(\nu^\prime,\bg{\Omega^\prime},\tau)] d\nu^{\prime} d\bg{\Omega^\prime} \nonumber\\ +\int \frac{\nu}{\nu^\prime}I(\nu^\prime,\bg{\Omega^\prime},\tau) K(\nu^\prime,\bg{\Omega^\prime}\rightarrow\nu,\bg{\Omega}) [1+n(\nu,\bg{\Omega},\tau)] d\nu^\prime d\bg{\Omega^\prime}, \label{kinetic_k} \end{eqnarray} where $I(\nu,\bg{\Omega})$ is the specific intensity of the radiation, $n=c^2 I/(2h\nu^3)$ is the occupation number in photon phase space, and $\tau=\sigma_{\rm T} N_{\rm e} ct$ is the dimensionless time ($\sigma_{\rm T}$ is the Thomson scattering cross section and $N_{\rm e}$ is the number density of electrons). It is easy to write down kinetic equations similar to equation (\ref{kinetic_k}) for any of the standard problems of radiation transfer. In the isotropic case, the kinetic equation becomes \begin{equation} \frac{\partial I(\nu,\tau)}{\partial\tau}=-\int I(\nu,\tau) P(\nu \rightarrow \nu^\prime)[1+n(\nu^\prime,\tau)] d\nu^{\prime}+ \int\frac{\nu}{\nu^\prime} I(\nu^\prime,\tau) P(\nu^\prime \rightarrow \nu)[1+n(\nu,\tau)] d\nu^\prime. \label{kinetic} \end{equation} The integrodifferential equation (\ref{kinetic}) can in general be solved numerically if its kernel, $P(\nu\rightarrow\nu^\prime)$, is known. Alternatively, one can treat Comptonization problems using Monte Carlo methods \citep[see, e.g., the review by][]{pozetal83}. In the nonrelativistic limit, i.e. when the typical photon energy, $h\nu$, and the plasma temperature, $kT_{\rm e}$, are both negligibly small compared to the electron rest energy, $m_{\rm e} c^2$, the variation in intensity at a given frequency is largely governed by transitions in a narrow interval of a continuum spectrum near this frequency. 
If the initial distribution of the photons in frequency is smooth enough (in a problem with a small number of scatterings) or the formation of a spectrum by multiple scatterings is studied, it is possible to perform a Fokker-Planck-type expansion of the integral equation (\ref{kinetic}), thereby reducing it to a much simpler differential equation that describes the diffusion and flow of the photons in frequency space: \begin{eqnarray} \frac{\partial n(\nu)}{\partial \tau}=\frac{1}{\nu^2}\frac{\partial} {\partial\nu}\left\{-\nu^2n\langle\Delta\nu\rangle(1+n)+ \frac{1}{2}\left[-\nu^2n\langle (\Delta\nu)^2\rangle\frac{\partial n} {\partial\nu}+(1+n)\frac{\partial}{\partial\nu} \nu^2n\langle(\Delta\nu)^2\rangle\right] \right. \nonumber\\ \left. +\frac{1}{6}\left[-\nu^2n\langle(\Delta\nu)^3\rangle \frac{\partial^2 n}{\partial\nu^2}+\frac{\partial n}{\partial\nu} \frac{\partial}{\partial\nu}\nu^2n\langle(\Delta \nu)^3\rangle-(1+n)\frac{\partial^2}{\partial\nu^2} \nu^2n\langle(\Delta\nu)^3\rangle\right] \right. \nonumber\\ \left. +\frac{1}{24}\left[-\nu^2n\langle (\Delta\nu)^4\rangle\frac{\partial^3n} {\partial\nu^3}+\frac{\partial^2n}{\partial\nu^2} \frac{\partial}{\partial\nu}\nu^2n\langle(\Delta\nu)^4\rangle -\frac{\partial n}{\partial\nu}\frac{\partial^2}{\partial\nu^2} \nu^2n\langle(\Delta\nu)^4\rangle \right.\right. \nonumber\\ \left.\left. +(1+n)\frac{\partial^3}{\partial\nu^3}\nu^2n\langle(\Delta\nu)^4\rangle \right]+...\right\}. \label{fokker} \end{eqnarray} The moments of the kernel that enter this equation are found from the formula \begin{equation} \langle(\Delta\nu)^n\rangle=\int P(\nu\rightarrow\nu^\prime) (\nu^\prime-\nu)^n d\nu^\prime. \label{mom1} \end{equation} Substituting the first two moments, $\langle\Delta\nu\rangle$ and $\langle(\Delta\nu)^2\rangle$, calculated to an accuracy of $kT_{\rm e}/m_{\rm e} c^2$ and $h\nu/m_{\rm e} c^2$ (using the kernel given by eq. 
[\ref{p_k}] below) into the corresponding terms of equation (\ref{fokker}) leads to the famous \cite{kompaneets57} equation: \begin{eqnarray} \frac{\partial n(\nu)}{\partial\tau}=\frac{h}{m_{\rm e} c^2}\frac{1}{\nu^2} \frac{\partial}{\partial \nu}\nu^4\left(n+n^2+\frac{kT_{\rm e}}{h} \frac{\partial n}{\partial\nu} \right). \label{komp} \end{eqnarray} The Kompaneets equation is valid in the nonrelativistic limit ($h\nu$, $kT_{\rm e}\ll m_{\rm e} c^2$). The last parenthesized term in equation (\ref{komp}) describes the frequency diffusion of photons due to the Doppler effect and the transfer of energy from the electrons to the radiation; the first term describes the downward photon flow along the frequency axis due to Compton recoil, and the second term, which is also due to recoil, accounts for induced Compton scattering. In studying the interaction between energetic photons and hot electrons ($h\nu,\,\,kT_{\rm e}\gtrsim 0.01 m_{\rm e} c^2$), relativistic corrections to the kernel and the Kompaneets equation become important. In order to find the main corrections (of the order of $h\nu/m_{\rm e} c^2$ and $kT_{\rm e}/m_{\rm e} c^2$), it proves necessary to take into account in equation (\ref{fokker}) all terms that depend on the first four moments of the kernel. The resultant generalization of the Kompaneets equation was found by \cite{itoetal98} and \cite{chalas98} when these authors examined the distortion of the spectrum of the cosmic microwave background (CMB) during interaction with the hot intergalactic gas in clusters of galaxies. \cite{rephaeli95} pointed out the important role of corrections of the order of $kT_{\rm e}/m_{\rm e} c^2$ in this phenomenon. The kernel $P(\nu\rightarrow \nu^\prime)$ was not given in explicit form in the derivation of \cite{itoetal98} and \cite{chalas98}.
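A quick way to see that equation (\ref{komp}) is thermodynamically consistent is to evaluate its flux bracket, $n+n^2+(kT_{\rm e}/h)\,\partial n/\partial\nu$, for the Planck occupation number $n=[\exp(h\nu/kT_{\rm e})-1]^{-1}$: the bracket vanishes identically, so a blackbody spectrum at the electron temperature is stationary. A minimal numerical check (ours, not from the original derivation):

```python
import math

def flux_bracket(x):
    """The bracket n + n^2 + (kT_e/h) dn/dnu of the Kompaneets equation,
    evaluated for the Planck occupation n = 1/(exp(x) - 1), written in
    terms of x = h*nu/(k*T_e), so that (kT_e/h) d/dnu becomes d/dx."""
    n = 1.0 / (math.exp(x) - 1.0)
    dn_dx = -math.exp(x) / (math.exp(x) - 1.0) ** 2
    return n + n * n + dn_dx

for x in (0.1, 1.0, 3.0, 10.0):
    print(x, flux_bracket(x))  # ~0 at every x: a Planck spectrum is stationary
```

The same cancellation holds for a Bose-Einstein occupation with nonzero chemical potential, which is why such distributions are also stationary solutions of the Kompaneets equation.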
Instead, the Fokker-Planck operator was applied to the kinetic equation written in its most general form, which involves the amplitude for a transition from an initial state with given 4-momenta of the photon and electron into the final state with the corresponding 4-momenta. Earlier, \cite{rosetal78} and \cite{illetal79} added to the Kompaneets equation a dispersion term associated with Compton recoil and a term that takes into account, in a first approximation, the transition from the Thomson cross section to the Klein-Nishina one. Their equation, which describes the scattering of energetic photons ($h\nu\sim 0.1m_{\rm e} c^2$) by sufficiently cold electrons ($kT_{\rm e}\ll h\nu$) much better than the Kompaneets equation does, is a particular case of the more general formula of \cite{itoetal98} and \cite{chalas98}. Despite the attractiveness of employing the Fokker-Planck approximation in treating Comptonization problems, its scope is limited. For example, in considering the effect of Comptonization from a small number of scatterings on the profiles of narrow spectral lines, the initial radiation spectrum cannot be represented as a finite Taylor series in terms of the frequency variation, and, therefore, Fokker-Planck-type equations are not applicable. For this kind of problem, it is necessary to make use of the integral kinetic equations (\ref{kinetic_k}) or (\ref{kinetic}), which requires knowledge of their kernels, $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ and $P(\nu\rightarrow\nu^\prime)$. \subsection{The Kernel} \cite{dirac25} has given an approximate expression for the direction-dependent kernel, $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$. The Doppler shift was taken into account to within the first order in $v/c$, but Compton recoil was totally neglected.
Integration of Dirac's $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ function over the scattering angle results in the zero-order approximation for the isotropic kernel $P(\nu\rightarrow\nu^\prime)$ \citep[][see also Weymann 1970]{hummih67}. Being symmetric in frequency variation [$P(\nu\rightarrow \nu+\Delta\nu)=P(\nu\rightarrow \nu-\Delta\nu)$], this kernel does not take into account the average Doppler increment in the photon energy: $\langle \Delta\nu/\nu\rangle=4kT_{\rm e}/m_{\rm e} c^2$. With the zero-order approximation for the kernel, it is not possible to describe many important astrophysical phenomena, such as: (a) distortion of the spectrum of the CMB in the direction of galaxy clusters \citep[][see the recent review by Birkinshaw 1999]{sunzel72}, (b) the $y$ \citep{zelsun69} and Bose-Einstein $\mu$ \citep{sunzel70} distortions of the CMB spectrum resulting from energy release in the early universe (see the reviews by Danese \& de Zotti 1977; Sunyaev \& Zeldovich 1980; the book by Peebles 1993; and the strict constraints on these distortions placed by the Far-Infrared Absolute Spectrophotometer on board COBE, Fixsen et al. 1996), and (c) the formation of hard power-law tails in the emission spectra of the famous X-ray source Cygnus X--1 \citep{suntru79} and other stellar-mass black hole candidates, AGNs, and quasars; and in the spectra of accreting neutron stars \citep[e.g.,][] {shaetal76,suntit80,pozetal83,tanshi96,pousve96,naretal98,zdzetal98}. Babuel-Peyrissac \& Rouvillois (1970, hereafter BR70) have obtained a more accurate expression for the $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ kernel, by taking into account not only the average Doppler and recoil frequency shifts (the latter is $\langle \Delta\nu/\nu\rangle=-h\nu/m_{\rm e} c^2$), but also, in the first order, Klein-Nishina corrections to the scattering cross section.
BR70 also attempted to find relativistic corrections due to high electron velocities ($kT_{\rm e}\sim 0.1m_{\rm e} c^2$), but did not include all relevant terms (as we show in the present paper), and therefore their expression is valid for nonrelativistic electrons only. The allowance for Compton recoil is also made in the $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ kernel written down by \cite{zeletal72}. Integration of the BR70 kernel over the scattering angle ignoring recoil ($h\nu\ll kT_{\rm e}\ll m_{\rm e} c^2$) leads to the first-order approximation for the $P(\nu\rightarrow\nu^\prime)$ kernel \citep{sunyaev80}. The formula of \cite{sunyaev80}, as opposed to the approximation of \cite{hummih67}, accounts for the asymmetry of the scattered line profile and enables us to describe (in the nonrelativistic limit) the above-mentioned phenomena related to the transfer of energy from hot electrons to radiation. Using this expression, one can derive the diffusion part of the Kompaneets operator (the last term of eq. [\ref{komp}]). The remaining two (flow) terms can be found by applying the thermodynamic principle that the desired equation must obey: a Planckian distribution of photons must remain unchanged during the interaction with electrons of the same temperature. This is the method followed by the authors of the original derivation of the Kompaneets equation for finding not only the term describing the downward motion of the photons along the frequency axis, but also the term describing the effect of induced scattering (report No. 336 of the Chemical Physics Institute of USSR Academy of Sciences 1950; Kompaneets 1957; Ya.B. Zeldovich 1968, private communication). \cite{keretal86} have derived a semianalytical relativistic formula for the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel that is valid for arbitrary values of photon energy and electron temperature.
With this formula, the calculation of the kernel is reduced to computing a single integral over the electron Lorentz factor. \cite{keretal86} further succeeded in expanding their basic expression in powers of $kT_{\rm e}/m_{\rm e} c^2$, and thus derived an algebraic formula for $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ that is applicable in the low-temperature limit $kT_{\rm e}\lesssim 0.1m_{\rm e} c^2$, with no limitation imposed on the photon energy. Other previous relativistic studies of the Compton scattering kernel also need to be mentioned. \cite{ahaato81} were the first to derive the exact relativistic formula for the direction-dependent kernel, $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$, for an isotropic ensemble of monoenergetic electrons. \cite{nagpou94} (see also references therein) derived the exact relativistic expression for the isotropic kernel, $P(\nu\rightarrow\nu^\prime)$, again for an ensemble of monoenergetic electrons. These formulae make it possible to solve efficiently very general Comptonization problems, i.e., with no constraints set on the parameters. However, their application always implies the need to perform one or two numerical integrations, usually over a specified distribution of electron energies (e.g., Maxwellian), and sometimes over the scattering angle. We also note that the scattering kernel has been the subject of a number of studies that employed numerical methods \citep[e.g.,][]{pomraning73,illetal79,pozetal79,loeetal91,molbir99}. In the present paper, we have obtained algebraic approximate expressions for the kernels $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ (eq. [\ref{k_set}]) and $P(\nu\rightarrow\nu^\prime)$ (eqs. [\ref{p_set}] and [\ref{p_rn}]), which take into account (1) relativistic effects due to high electron velocities and (2) quantum effects, namely, Compton recoil and Klein-Nishina corrections.
The formula for the angle-dependent kernel, $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$, is a very good approximation in the parameter range $kT_{\rm e}\lesssim 25$~keV and $h\nu\lesssim 50$~keV. Moreover, in the case of nonrelativistic electrons, this formula without the temperature-correction terms (eq. [\ref{k_nr}]) gives the exact result for arbitrary photon energies, including $h\nu\gg m_{\rm e} c^2$. We derived our expression in the same manner as BR70 derived theirs, but our expression is more accurate, since it includes the relativistic corrections consistently. Our expression for $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ can be compared with the corresponding formula derived by \cite{keretal86} (their eq. [41]). Both formulae are fully algebraic and valid in roughly the same electron temperature range ($kT_{\rm e}/m_{\rm e} c^2\lesssim 0.1$). The expression of \cite{keretal86} is more general than our equation (\ref{k_set}). It holds for any $h\nu$ in all its temperature terms, while in the case of our formula, this statement is true for the main, nonrelativistic temperature term only. Therefore, our expression should be derivable from the formula of \cite{keretal86}. The referee of the present paper informed us that both formulae indeed yield similar numerical results for the same parameter values. On the other hand, our expression is significantly simpler in structure and, more importantly, when $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ is written in this form, it can be integrated analytically (under some additional constraints on the parameters) over the scattering angle or the emergent photon energy. The first of these integrations leads to the algebraic formula for the $P(\nu\rightarrow\nu^\prime)$ kernel, while the second leads to an expression for the angular scattering function.
We stress that it is this unique integrability that makes equation (\ref{k_set}) of interest and useful for further applications. We derived the expression for $P(\nu\rightarrow\nu^\prime)$ by integrating $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ over the scattering angle under the assumption $h\nu(h\nu/m_{\rm e} c^2)\ll kT_{\rm e}$. In this limit, it turns out to be possible to carry the term describing the effect of Compton recoil out of an exponential factor that enters the expression for $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ and implement the subsequent integration analytically. As a result, the range of applicability of our approximation for the isotropic kernel is narrower than that of our formula for the angle-dependent kernel: $h\nu\lesssim 50$~keV, $h\nu(h\nu/m_{\rm e} c^2)\lesssim kT_{\rm e}\lesssim 25$~keV; in this range of parameter values, the accuracy of the approximation is better than 98 per cent. The latter constraint means that the average recoil-induced frequency shift must be less than the typical Doppler broadening. The opposite case, $h\nu(h\nu/m_{\rm e} c^2)\gtrsim kT_{\rm e}$, corresponds to a situation in which the recoil effect is predominant. In this case, the single-scattering line profile is double-peaked, which is related to the Rayleigh scattering phase function \citep[e.g.,][]{pozetal79,pozetal83}. It is also worth noting that when the temperature of the matter is sufficiently low, scattering of X-rays on neutral hydrogen and helium may become more important than Compton scattering on free electrons \citep[see a discussion of the recoil profile arising in this problem in][]{sunchu96}. The paper is organized as follows. In \S 2 we report our analytical results. For the observational astrophysicist, reading this part of the paper should be sufficient for finding all the information necessary for application of the results. 
The main results are the formulae for the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ and $P(\nu\rightarrow\nu^\prime)$ kernels --- equations (\ref{k_set}) and (\ref{k_nr}), and equations (\ref{p_set}), (\ref{p_rn}), (\ref{p_k}), and (\ref{p_k_rn}), respectively. We thoroughly examine the properties of the kernels and determine the parameter ranges of applicability of the different approximations. In \S 2 we also: (1) discuss the properties of the angular function for Compton scattering in a hot plasma that results from our $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ kernel; (2) show that the Fokker-Planck expansion (to fourth order) of the kinetic equation (\ref{kinetic}) with the kernel equation (\ref{p_set}) leads to the generalized Kompaneets equation \citep{itoetal98,chalas98}, equation (\ref{komp_gen}); (3) derive mildly relativistic formulae, equations (\ref{k_ind}) and (\ref{p_ind}), for the kernels, $K^{\rm ind}(\nu,\bg{\Omega};\nu^\prime,\bg{\Omega^\prime})$ and $P^{\rm ind}(\nu;\nu^\prime)$, that correspond to problems in which a decisive role is played by induced Compton scattering; and (4) give the result of the convolution of a spectrum described by the step function with $P(\nu\rightarrow\nu^\prime)$ as an example of application of the kernel to Comptonization problems. The subsequent sections of the paper provide the calculation details. The derivation procedure for the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel is described in \S 3. The angular scattering function is calculated directly [to a better accuracy than $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$] in \S 4. The formula for the $P(\nu\rightarrow \nu^\prime)$ kernel is derived by integrating $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ over the scattering angle in \S 5.
An alternative (fully independent) method for deriving the $P(\nu\rightarrow \nu^\prime)$ kernel in the low-frequency case ($h\nu\ll kT_{\rm e}$) is presented in \S 6. \section{RESULTS} \subsection{The $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ Kernel} BR70 have made an attempt to take into account relativistic effects associated with high electron velocities for the kernel $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$. We have examined the derivation of these authors and found that not all relevant correction terms were included by them. In particular, the relativistic corrections to the Maxwellian velocity distribution function were neglected. The BR70 kernel is therefore of better accuracy [correct up to terms of order $(kT_{\rm e}/m_{\rm e} c^2)^{1/2}h\nu/m_{\rm e} c^2$] with respect to photon energy than with respect to electron temperature. As a result, this kernel is strictly valid only for nonrelativistic electrons. In \S 3, we revise the derivation of BR70 and obtain the following approximate formula for $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$, which is correct up to terms of order $(kT_{\rm e}/m_{\rm e} c^2)^{3/2}$, $(kT_{\rm e}/m_{\rm e} c^2)^{1/2}h\nu/m_{\rm e} c^2$, and $(h\nu/m_{\rm e} c^2)^2$: \begin{mathletters} \begin{eqnarray} K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime}) =\nu^{-1}\frac{3}{32\pi}\sqrt{\frac{2}{\pi}}\eta^{-1/2}\,\frac {\nu^\prime}{g}\left\{1+\mu_{\rm s}^2+\left(\frac{1}{8}-\mu_{\rm s}-\frac{63}{8}\mu_{\rm s}^2+5\mu_{\rm s} ^3\right)\eta-\frac{\mu_{\rm s}(1+\mu_{\rm s})}{2}\epsilon^2 \right. \nonumber\\ \left.
-\frac{3(1+\mu_{\rm s}^2)}{32(1-\mu_{\rm s})^2}\frac{\epsilon^4}{\eta}+\mu_{\rm s}(1-\mu_{\rm s}^2) \epsilon\frac{h\nu}{m_{\rm e} c^2}+\frac{1+\mu_{\rm s}^2}{8(1-\mu_{\rm s})}\frac{\epsilon^3} {\eta}\frac{h\nu}{m_{\rm e} c^2}+(1-\mu_{\rm s})^2\frac{h^2\nu\nu^\prime}{m_{\rm e}^2c^4}\right\} \exp{\left[-\frac{\epsilon^2}{4(1-\mu_{\rm s})\eta}\right]}, \label{k} \end{eqnarray} where \begin{equation} g=|\nu\bg{\Omega}-\nu^\prime\bg{\Omega^\prime}| =(\nu^2-2\nu\nu^\prime\mu_{\rm s}+\nu^{\prime 2})^{1/2}, \label{g} \end{equation} \begin{equation} \epsilon=\frac{[2(1-\mu_{\rm s})]^{1/2}}{g}\left[\nu^\prime-\nu+ \frac{h\nu\nu^\prime}{m_{\rm e} c^2}(1-\mu_{\rm s})\right], \label{epsilon} \end{equation} \begin{equation} \eta=\frac{kT_{\rm e}}{m_{\rm e} c^2}, \label{eta} \end{equation} \label{k_set} \end{mathletters} and $\mu_{\rm s}=\bg{\Omega}\bg{\Omega^\prime}$. Note that equation (\ref{k_set}) gives the probability of a scattering event per unit dimensionless time, $\tau=\sigma_{\rm T} N_{\rm e} ct$. Let us look at the sequence of terms within the braces in equation (\ref{k}). The main term, $1+\mu_{\rm s}^2$, is just the Rayleigh angular scattering function. The last term, which is proportional to $(h\nu/m_{\rm e} c^2)^2$, describes the second-order Klein-Nishina correction to the scattering cross section. The remaining five terms become important when the electron velocities are high. These temperature-correction terms are either incorrect or absent in the kernel of BR70. The term of order $(h\nu/m_{\rm e} c^2)^2$ was not given by BR70 either. Equation (\ref{k_set}) is a good approximation (which will be supported below by a direct comparison with results of numerical calculations) to the kernel if both the photon energy and electron temperature are moderately relativistic. Furthermore, this formula without the temperature-correction terms (see the resulting eq. 
[\ref{k_nr}] below) describes the scattering of photons of arbitrary energy (including $h\nu\gg m_{\rm e} c^2$) on nonrelativistic electrons ($kT_{\rm e}\ll m_{\rm e} c^2$) exactly. The kernel $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ must obey the detailed balance principle (including induced effects), i.e., ensure conservation of a blackbody spectrum, $B_{\nu}=2h\nu^3/c^2[\exp{(h\nu/kT_{\rm e})}-1]^{-1}$, in thermodynamic equilibrium: \begin{equation} K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime}) \left[1+\frac{c^2B_{\nu}(\nu^\prime)}{2h\nu^{\prime3}} \right]\frac{B_{\nu}(\nu)}{h\nu}=K(\nu^\prime,\bg{\Omega^\prime}\rightarrow\nu, \bg{\Omega})\left[1+\frac{c^2B_{\nu}(\nu)}{2h\nu^3} \right]\frac{B_{\nu}(\nu^\prime)}{h\nu^\prime}. \label{k_balance1} \end{equation} Equation (\ref{k_balance1}) reduces to \begin{equation} K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})= \left(\frac{\nu^\prime}{\nu}\right)^2\exp{\left[\frac{h(\nu-\nu^\prime)}{kT_{\rm e}} \right]}K(\nu^\prime,\bg{\Omega^\prime}\rightarrow\nu,\bg{\Omega}). \label{k_balance2} \end{equation} It is easily verified that our expression (\ref{k_set}) does satisfy eq. (\ref{k_balance2}). \subsubsection{$K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ for Nonrelativistic Electrons and Photons of Arbitrary Energy} If the electrons are nonrelativistic ($\eta\ll 1$), equation (\ref{k_set}) simplifies to \begin{eqnarray} K_{\rm nr}(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime}) =\frac{3}{32\pi}\sqrt{\frac{2}{\pi}}\eta^{-1/2}\,\frac {\nu^\prime}{\nu g}\left[1+\mu_{\rm s}^2+(1-\mu_{\rm s})^2\frac{h^2\nu\nu^\prime} {m_{\rm e}^2c^4}\right] \nonumber\\ \exp{\left\{-\frac{1}{2\eta g^2}\left[\nu^\prime-\nu +\frac{h\nu\nu^\prime}{m_{\rm e} c^2}(1-\mu_{\rm s})\right]^2\right\}}, \label{k_nr} \end{eqnarray} where $g$ and $\eta$ are given by equations (\ref{g}) and (\ref{eta}), respectively.
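As an independent consistency check of equation (\ref{k_nr}) (our illustration, with arbitrary parameter values and frequencies in units of $\nu$), one can integrate $2\pi K_{\rm nr}$ numerically over the scattered frequency $\nu^\prime$ and compare the result with the Klein-Nishina cross section for scattering on an electron at rest; for $\eta\ll 1$ the two agree closely.

```python
import math

def K_nr(x, a, eta, mu):
    """Eq. (k_nr) with nu = 1, x = nu'/nu, a = h*nu/(m_e c^2),
    eta = k*T_e/(m_e c^2), mu = cosine of the scattering angle."""
    g2 = 1.0 - 2.0 * x * mu + x * x
    pre = (3.0 / (32.0 * math.pi)) * math.sqrt(2.0 / math.pi) / math.sqrt(eta)
    kn2 = 1.0 + mu * mu + (1.0 - mu) ** 2 * a * a * x  # 2nd-order Klein-Nishina term
    shift = (x - 1.0) + a * x * (1.0 - mu)             # Doppler + recoil shift
    return pre * x / math.sqrt(g2) * kn2 * math.exp(-shift ** 2 / (2.0 * eta * g2))

def klein_nishina(a, mu):
    """Klein-Nishina d(sigma)/d(mu) in units of sigma_T, electron at rest."""
    q = 1.0 + (1.0 - mu) * a
    return 0.375 / q ** 2 * (1.0 + mu * mu + (1.0 - mu) ** 2 * a * a / q)

a, eta, mu = 0.5, 1.0e-3, 0.3
x0 = 1.0 / (1.0 + a * (1.0 - mu))   # peak of the quasi-Gaussian profile
lo, hi, n = x0 - 0.25, x0 + 0.25, 40000
h = (hi - lo) / n
s = 0.5 * (K_nr(lo, a, eta, mu) + K_nr(hi, a, eta, mu))
for i in range(1, n):
    s += K_nr(lo + i * h, a, eta, mu)
dsigma = 2.0 * math.pi * h * s       # frequency-integrated kernel
print(dsigma, klein_nishina(a, mu))  # agree to ~0.1% at this eta
```

The quadrature window is centered on the quasi-Gaussian peak at $x_0=[1+a(1-\mu_{\rm s})]^{-1}$, i.e., the frequency set by the mean Compton recoil; the residual disagreement enters only through terms of order $\eta$, consistent with the exactness of equation (\ref{k_nr}) for nonrelativistic electrons.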
Since equation (\ref{k_nr}) fully takes into account the Klein-Nishina scattering cross section, it holds true for photons of arbitrary energy when the electrons are nonrelativistic. If we are interested in the case in which the photons are of sufficiently low energy, $h\nu\lesssim 0.1 m_{\rm e} c^2$, the second-order Klein-Nishina correction term in equation (\ref{k_nr}) becomes small and can be omitted. The result is \begin{equation} K_{\rm nr}(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime}) =\frac{3}{32\pi}\sqrt{\frac{2}{\pi}}\eta^{-1/2}\,\frac {\nu^\prime}{\nu g}(1+\mu_{\rm s}^2) \exp{\left\{-\frac{1}{2\eta g^2}\left[\nu^\prime-\nu +\frac{h\nu\nu^\prime}{m_{\rm e} c^2}(1-\mu_{\rm s} )\right]^2\right\}}. \label{k_bab} \end{equation} Equation (\ref{k_bab}) includes the first-order Klein-Nishina correction and takes into account Compton recoil; both effects are essentially described in the exponential factor (see the derivation in \S 3). It also accounts for (to first order) the asymmetry of the scattered profile due to the Doppler effect (the preexponential factor $\nu^\prime/\nu$ in eq. [\ref{k_bab}] is important here), and therefore allows one to derive the Kompaneets differential equation. It is worth noting that the simplified derivation of the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel described in an appendix to the BR70 paper makes it possible to obtain the exponential factor in equation (\ref{k_nr}), but not the preexponential factor $\nu^\prime/\nu$, and therefore does not lead to the Kompaneets equation and does not describe the energy transfer from the electrons to the photons by scattering. \subsubsection{Angular Scattering Function} By integrating the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel over the photon frequency upon scattering, one can determine how many photons are scattered in a unit time through a given angle, i.e., the angular function for scattering (defined by eq. 
[\ref{ang_gen}] in \S 4): \begin{equation} \frac{d\sigma}{d\mu_{\rm s}}=2\pi\int K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})\,d\nu^\prime. \label{ang_int} \end{equation} We have verified, through numerical computation, that the angular function that corresponds to the $K_{\rm nr}$ kernel given by equation (\ref{k_nr}) is exactly the Klein-Nishina formula, describing the cross section for Compton scattering on an electron at rest: \begin{equation} \left(\frac{d\sigma}{d\mu_{\rm s}}\right)_{\rm nr}=\frac{3}{8}\left[1+ (1-\mu_{\rm s})\frac{h\nu}{m_{\rm e} c^2}\right]^{-2}\left\{1+\mu_{\rm s}^2+(1-\mu_{\rm s})^2 \left[1+\frac{h\nu}{m_{\rm e} c^2}(1-\mu_{\rm s})\right]^{-1}\left(\frac{h\nu}{m_{\rm e} c^2} \right)^2\right\}. \label{ang_kn} \end{equation} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[24 250 570 590]{fig1.ps} \caption{ Angular function (in polar coordinates) for Compton scattering on thermal electrons, in various regimes set by the values of parameters: photon energy, $E=h\nu$, and electron temperature, $T_{\rm e}$. (a) Low-temperature case: $kT_{\rm e}\ll h\nu$, $kT_{\rm e}\ll m_{\rm e} c^2$. {\sl Solid lines:} Angular functions corresponding to the $K_{\rm nr}$ angle-dependent kernel given by eq. (\ref{k_nr}). These patterns are described by the Klein-Nishina eq. (\ref{ang_kn}). {\sl Dotted line:} Rayleigh angular function (also in the other panels). (b) Low frequency case: $h\nu\ll kT_{\rm e}$, $h\nu\ll m_{\rm e} c^2$. {\sl Solid lines:} Results of Monte Carlo simulations. For the mildly relativistic example, $kT_{\rm e}=25$~keV, also shown are the angular function calculated from eq. (\ref{ang1}), which results from the $K$ kernel (eq. [\ref{k_set}]) ({\sl dashed line}), and a more accurate approximation described by eq. (\ref{ang2}) ({\sl dash-dotted line}). (c) and (d): Both $h\nu$ and $kT_{\rm e}$ are mildly relativistic. The Monte Carlo results ({\sl solid lines}) are compared with the results of eq. 
(\ref{ang1}) ({\sl dashed lines}) and eq. (\ref{ang2}) ({\sl dash-dotted lines}). } \end{figure*} Two examples of the angular function that corresponds to the $K_{\rm nr}$ kernel and is described by equation (\ref{ang_kn}) are shown in Figure~1a: one for the case $h\nu\ll m_{\rm e} c^2$ and the other for $h\nu\gg m_{\rm e} c^2$. We see the well-known Klein-Nishina pattern, namely, that more photons are scattered forward ($\mu_{\rm s}=1$) than backward ($\mu_{\rm s}=-1$). This angular function corresponds to the case of nonrelativistic electrons ($kT_{\rm e}\ll m_{\rm e} c^2$). If the electrons are mildly relativistic, the $K_{\rm nr}$ kernel becomes inaccurate, and the temperature corrections included in the $K$ kernel (eq. [\ref{k_set}]) become important. Analytical integration of this kernel over the scattering angle is possible if an assumption is made that $h\nu(h\nu/m_{\rm e} c^2)\ll kT_{\rm e}\ll m_{\rm e} c^2$. In this limit, $K$ can be written in the form given by equation (\ref{k_exp_set}) of \S 5, which leads to \begin{eqnarray} \frac{d\sigma}{d\mu_{\rm s}}=\frac{3}{8}\left\{1+\mu_{\rm s}^2+\left[-2(1-\mu_{\rm s}) (1+\mu_{\rm s}^2)\frac{h\nu}{m_{\rm e} c^2}+(1-\mu_{\rm s})^2(4+3\mu_{\rm s}^2) \left(\frac{h\nu}{m_{\rm e} c^2}\right)^2...\right] \right. \nonumber\\ \left. +2(1-2\mu_{\rm s}-3\mu_{\rm s}^2+2\mu_{\rm s}^3)\frac{kT_{\rm e}}{m_{\rm e} c^2} +O((h\nu/m_{\rm e} c^2)(kT_{\rm e}/m_{\rm e} c^2),(kT_{\rm e}/m_{\rm e} c^2)^2)\right\}. \label{ang1} \end{eqnarray} The expression in square brackets in equation (\ref{ang1}) is an expansion series in powers of $h\nu/m_{\rm e} c^2$ resulting from equation (\ref{ang_kn}) (only two leading terms are presented). More interesting is the correction term of order $kT_{\rm e}/m_{\rm e} c^2$. The origin of this term is related to Doppler aberration and has nothing to do with quantum effects. The terms that are given in implicit form in equation (\ref{ang1}), i.e.
$O(...)$, indicate the order of inaccuracy of our approximation for the kernel. Although equation (\ref{ang1}) was obtained under the assumption that $h\nu(h\nu/m_{\rm e} c^2)\ll kT_{\rm e}\ll m_{\rm e} c^2$, it holds true for arbitrary proportions of $h\nu$ and $kT_{\rm e}$, which we have verified by a direct numerical calculation of the integral (\ref{ang_int}). It is also possible to directly calculate the angular function, not using the $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ kernel. The corresponding derivation procedure, which is similar to but simpler than that for $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$, is described in \S 4. The result, which is more accurate than equation (\ref{ang1}), is \begin{eqnarray} \frac{d\sigma}{d\mu_{\rm s}}=\frac{3}{8}\left[1+\mu_{\rm s}^2-2(1-\mu_{\rm s}) (1+\mu_{\rm s}^2) \frac{h\nu}{m_{\rm e} c^2}+2(1-2\mu_{\rm s}-3\mu_{\rm s}^2+2\mu_{\rm s}^3) \frac{kT_{\rm e}}{m_{\rm e} c^2} \right. \nonumber\\ \left. +(1-\mu_{\rm s})^2(4+3\mu_{\rm s}^2)\left(\frac{h\nu}{m_{\rm e} c^2}\right)^2+ (1-\mu_{\rm s})(-7+14\mu_{\rm s}+9\mu_{\rm s}^2-10\mu_{\rm s}^3)\frac{h\nu}{m_{\rm e} c^2}\frac{kT_{\rm e}} {m_{\rm e} c^2} \right. \nonumber\\ \left. +(-7+22\mu_{\rm s}+9\mu_{\rm s}^2-38\mu_{\rm s}^3+20\mu_{\rm s}^4)\left(\frac{kT_{\rm e}}{m_{\rm e} c^2}\right)^2 +...\right]. \label{ang2} \end{eqnarray} We see that the first-order correction terms, $O(h\nu/m_{\rm e} c^2)$ and $O(kT_{\rm e}/m_{\rm e} c^2)$, and the second-order term proportional to $(h\nu/m_{\rm e} c^2)^2$ are the same as in equation (\ref{ang1}). The last two terms in equation (\ref{ang2}), $O((h\nu/m_{\rm e} c^2)(kT_{\rm e}/m_{\rm e} c^2))$ and $O((kT_{\rm e}/m_{\rm e} c^2)^2)$, belong to the next-order approximation (with respect to $K$ given by eq. [\ref{k_set}]) to the kernel. 
Integration of equation (\ref{ang2}) over all scattering angles leads to the well-known expression that describes the total cross section for Compton scattering on Maxwellian electrons [defined as $\sigma=(\lambda N_{\rm e})^{-1}$, where $\lambda$ is the photon mean free path] in the mildly relativistic limit \citep[e.g.,][]{pozetal83,sheetal88}: \begin{equation} \sigma=\sigma_{\rm T}\left[1-2\frac{h\nu}{m_{\rm e} c^2}-5\frac{h\nu}{m_{\rm e} c^2} \frac{kT_{\rm e}}{m_{\rm e} c^2}+\frac{26}{5}\left(\frac{h\nu}{m_{\rm e} c^2} \right)^2\right]. \label{sigma} \end{equation} Note that the pure temperature terms, $O((kT_{\rm e}/m_{\rm e} c^2)^n)$, in equation (\ref{ang2}) give no contribution to $\sigma$. This is a well-known fact, which means that the total cross section for low-energy photons in a hot plasma is equal to the Thomson cross section. At the same time, the angular function (eq. [\ref{ang2}]) is different from the Rayleigh function. In Figures~1b--1d, several examples of the angular function for Compton scattering on hot electrons are presented; the results of Monte Carlo simulations are compared with the results of the calculation by the approximate formulae (\ref{ang1}) and (\ref{ang2}). Figure~1b demonstrates scattering of low-frequency photons: $h\nu\ll kT_{\rm e}$, $h\nu\ll m_{\rm e} c^2$. The angular function in this case is totally different from the Klein-Nishina one --- compare with Figure~1a. Let us first consider the mildly relativistic temperature range $kT_{\rm e}\lesssim 25$~keV, within which equation (\ref{ang1}) (with the terms containing $h\nu/m_{\rm e} c^2$ vanishing) is a good approximation. Scattering is somewhat suppressed (compared to the Rayleigh angular function, which corresponds to the $K_{\rm nr}$ kernel in this case) both in the forward and backward directions. 
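Since equation (\ref{ang2}) is a polynomial in $\mu_{\rm s}$, the statements above are easy to verify numerically (an illustrative sketch of ours, with arbitrary parameter values): integrating it over all scattering angles reproduces equation (\ref{sigma}), and the pure temperature terms integrate to zero.

```python
def dsigma_dmu(mu, a, eta):
    """Eq. (ang2) in units of sigma_T; a = h*nu/(m_e c^2), eta = kT_e/(m_e c^2)."""
    return 0.375 * (
        1 + mu**2
        - 2 * (1 - mu) * (1 + mu**2) * a
        + 2 * (1 - 2*mu - 3*mu**2 + 2*mu**3) * eta
        + (1 - mu)**2 * (4 + 3*mu**2) * a**2
        + (1 - mu) * (-7 + 14*mu + 9*mu**2 - 10*mu**3) * a * eta
        + (-7 + 22*mu + 9*mu**2 - 38*mu**3 + 20*mu**4) * eta**2
    )

def sigma_total(a, eta, n=20001):
    # Simpson's rule over mu in [-1, 1]; the integrand is a low-degree
    # polynomial, so the quadrature is accurate to rounding
    h = 2.0 / (n - 1)
    s = 0.0
    for i in range(n):
        mu = -1.0 + i * h
        wgt = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        s += wgt * dsigma_dmu(mu, a, eta)
    return s * h / 3.0

a, eta = 0.05, 0.03
approx = 1 - 2*a - 5*a*eta + 26/5 * a**2   # eq. (sigma), in units of sigma_T
print(sigma_total(a, eta), approx)         # the two agree
print(sigma_total(0.0, 0.1))               # ~1: pure temperature terms drop out
```

Setting $h\nu=0$ confirms that a hot plasma presents exactly the Thomson cross section to low-frequency radiation, even though the angular distribution differs from the Rayleigh one.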
There is, however, a noticeable enhancement in the number of photons scattered through intermediate angles, between $69^\circ$ and $138^\circ$; these values are found by equating the correction term of order $kT_{\rm e}/m_{\rm e} c^2$ in equation (\ref{ang1}) to zero. The temperature-relativistic correction to the Rayleigh angular function reaches a maximum of $12(\eta/0.05)$\% at an angle of $105^\circ$. The relativistic reduction of the angular function is maximal, $10(\eta/0.05)$\%, for the two extreme values of the scattering angle, $0$ and $\pi$. It is evident from Figure~1b that the approximation represented by equation (\ref{ang2}) is more accurate than that given by equation (\ref{ang1}). The inclusion of the correction term of order $(kT_{\rm e}/m_{\rm e} c^2)^2$ is particularly important for very large scattering angles (close to $\pi$). Equation (\ref{ang2}) proves to be a good approximation to the actual scattering angular function in the range $kT_{\rm e}\lesssim 35$~keV. As the temperature becomes significantly relativistic (see the patterns for $kT_{\rm e}=0.5 m_{\rm e} c^2$ and $5 m_{\rm e} c^2$ in Fig.~1b), the scattering angular function changes further and becomes totally unlike the Rayleigh angular function. Only the results of Monte Carlo simulations are shown for these cases, because the convergence of the expansion series in powers of $kT_{\rm e}/m_{\rm e} c^2$ for the angular function becomes poor when $kT_{\rm e}$ exceeds $\sim 40$~keV. Scattering in the forward direction is now heavily suppressed (i.e., the plasma effectively screens itself from the radiation incident from outside), while more and more photons are scattered through angles between $\pi/2$ and $\pi$. In particular, at temperatures $kT_{\rm e}\gtrsim 0.5 m_{\rm e} c^2$, the number of photons scattered through an angle of $\pi$ is higher than in the case $kT_{\rm e}=0$.
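The angular window quoted above follows directly from equation (\ref{ang1}): the $O(kT_{\rm e}/m_{\rm e} c^2)$ correction vanishes where $1-2\mu_{\rm s}-3\mu_{\rm s}^2+2\mu_{\rm s}^3=0$. A minimal numerical sketch (Python; the helper names are ours) recovers the quoted angles and correction amplitudes:

```python
import math

def t_corr(mu):
    """Polynomial multiplying 2*kT_e/(m_e c^2) in eq. (ang1)."""
    return 1 - 2*mu - 3*mu**2 + 2*mu**3

def bisect(f, a, b, n=100):
    for _ in range(n):
        m = 0.5*(a + b)
        a, b = (m, b) if f(a)*f(m) > 0 else (a, m)
    return 0.5*(a + b)

mu_hi = bisect(t_corr, 0.0, 1.0)          # upper edge of the enhancement window
mu_lo = bisect(t_corr, -1.0, 0.0)         # lower edge
angles = (math.degrees(math.acos(mu_hi)), math.degrees(math.acos(mu_lo)))

# relative correction to the Rayleigh function, 2*t_corr(mu)*eta/(1 + mu^2)
eta = 0.05
rel = lambda mu: 2*t_corr(mu)*eta/(1 + mu**2)
max_rel = max(rel(-1.0 + 2e-4*i) for i in range(10001))
```

The two roots give $69^\circ$ and $138^\circ$, the maximum enhancement is $\approx 12$\% for $\eta=0.05$, and the reduction at $\mu_{\rm s}=\pm 1$ is exactly $2\eta=10$\%.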
In the limit $kT_{\rm e}\gg m_{\rm e} c^2$, the angular function is described by the law $d\sigma/d\mu_{\rm s}=(1-\mu_{\rm s})/2$ \citep{sazsun00}. Figures~1c and 1d show two examples of the angular scattering function when both $h\nu$ and $kT_{\rm e}$ are mildly relativistic. The approximations described by equations (\ref{ang1}) and (\ref{ang2}) work well for photon energies $h\nu\lesssim 50$~keV within the temperature ranges quoted above. The phenomenon of anisotropic (backward) Compton scattering by a hot plasma has been previously mentioned in astrophysical literature \citep{ghietal91,titarchuk94,pousve96,gieetal99}. In particular, \cite{haardt93} has performed a semianalytical calculation (using Monte Carlo simulations) of the angular scattering function, as well as other related quantities, and obtained results for the case of low-frequency radiation ($h\nu\ll m_{\rm e} c^2$) that are similar to those depicted in Figure~1b. \subsubsection{The Moments of the Kernel} Using equation (\ref{k_exp_set}) of \S 5 it is possible to calculate the moments of the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel, in a similar way as we derived above the angular function. Here we define the moments as \begin{equation} \langle(\Delta\nu)^n\rangle_{\mu_{\rm s}}=2\pi\int K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime}) (\nu^\prime-\nu)^n d\nu^\prime, \label{mom2} \end{equation} so that the integration of $\langle(\Delta\nu)^n\rangle_{\mu_{\rm s}}$ over $\mu_{\rm s}$ gives the moments of the $P(\nu\rightarrow\nu^\prime)$ kernel for the isotropic problem, which were defined in equation (\ref{mom1}). The result for the first four moments is \begin{eqnarray} \langle\Delta\nu\rangle_{\mu_{\rm s}} &=& \frac{3\nu}{8}\left[(1-\mu_{\rm s})(1+\mu_{\rm s}^2)\left(4\frac{kT_{\rm e}}{m_{\rm e} c^2} -\frac{h\nu}{m_{\rm e} c^2}\right)+2(7-21\mu_{\rm s}+5\mu_{\rm s}^2+19\mu_{\rm s}^3-10\mu_{\rm s}^4) \left(\frac{kT_{\rm e}}{m_{\rm e} c^2}\right)^2 \right. 
\nonumber\\ & &\left.+\,\frac{-37+81\mu_{\rm s}-65\mu_{\rm s}^2+41\mu_{\rm s}^3-20\mu_{\rm s}^4}{2}\frac{h\nu} {m_{\rm e} c^2}\frac{kT_{\rm e}}{m_{\rm e} c^2}+3(1-\mu_{\rm s})^2(1+\mu_{\rm s}^2) \left(\frac{h\nu}{m_{\rm e} c^2}\right)^2 \right], \nonumber\\ \langle(\Delta\nu)^2\rangle_{\mu_{\rm s}} &=& \frac{3\nu^2}{8}\left[2(1-\mu_{\rm s})(1+\mu_{\rm s}^2)\frac{kT_{\rm e}}{m_{\rm e} c^2} +(37-81\mu_{\rm s}+65\mu_{\rm s}^2-41\mu_{\rm s}^3+20\mu_{\rm s}^4)\left(\frac{kT_{\rm e}}{m_{\rm e} c^2}\right)^2 \right. \nonumber\\ & &\left.-18(1-\mu_{\rm s})^2(1+\mu_{\rm s}^2)\frac{h\nu}{m_{\rm e} c^2}\frac{kT_{\rm e}}{m_{\rm e} c^2} +(1-\mu_{\rm s})^2(1+\mu_{\rm s}^2)\left(\frac{h\nu}{m_{\rm e} c^2}\right)^2 \right], \nonumber\\ \langle(\Delta\nu)^3\rangle_{\mu_{\rm s}} &=& \frac{3\nu^3}{8}(1-\mu_{\rm s})^2(1+\mu_{\rm s}^2) \left[36\left(\frac{kT_{\rm e}}{m_{\rm e} c^2}\right)^2-6\frac{h\nu}{m_{\rm e} c^2}\frac{kT_{\rm e}} {m_{\rm e} c^2}\right], \nonumber\\ \langle(\Delta\nu)^4\rangle_{\mu_{\rm s}} &=& \frac{3\nu^4}{8}12(1-\mu_{\rm s})^2(1+\mu_{\rm s}^2) \left(\frac{kT_{\rm e}}{m_{\rm e} c^2} \right)^2. \label{k_moments} \end{eqnarray} The moments of higher degrees turn out to be at least of order $\eta^3$. The importance of including relativistic corrections in the kernel becomes clear from examining the terms proportional to $\eta^2$ and $\eta h\nu/m_{\rm e} c^2$ in the expressions for the first two moments, which become comparable to the main terms already at moderate $\eta, h\nu/m_{\rm e} c^2\sim 0.02$ for large scattering angles. The first and second moments describe correspondingly the average frequency increment by scattering and the broadening of the scattered profile. It should be noted that the accuracy, to within $O(\eta^2,\eta h\nu/m_{\rm e} c^2,(h\nu/m_{\rm e} c^2)^2)$, of the derived moments is better than the accuracy, to within $O(\eta^{3/2}, \eta^{1/2}h\nu/m_{\rm e} c^2)$, of the kernel itself. 
The reason for this is that $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ is multiplied by a small quantity, the frequency shift ($\sim\nu\eta^{1/2}$) raised to a certain power, when the moments are calculated. \subsubsection{Comparison of the Analytical Formulae for the Kernel with Numerical Results} We performed a series of Monte Carlo simulations (a modification of the code described by Pozdnyakov et al. 1983 was used) to evaluate the accuracy of our analytical expressions for the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel. Figures~2--7 show examples of spectra that may form through single scattering (with a given angle) of a monochromatic line on thermal electrons in various scattering regimes that are set by the values of the parameters: $kT_{\rm e}$, $h\nu$, $\mu_{\rm s}$. The numerical results are compared with the analytical kernels $K$ (mildly relativistic) and $K_{\rm nr}$ (non-relativistic). As inferred from Figures~2--7 and further from the entire set of results of our numerical calculations, the mildly relativistic equation (\ref{k_set}) is a very good approximation for electron temperatures $kT_{\rm e}\lesssim 25$~keV and photon energies $h\nu\lesssim 50$~keV. In this range of parameter values, the accuracy is better than 98\%, except in the far wings of the scattered profile. The latter can be roughly defined as the regions where $|\epsilon|\gtrsim 0.5(1-\mu_{\rm s})^{1/2}$ (this size should be compared with the characteristic width of the line, which is much smaller: $[(1-\mu_{\rm s})\eta]^{1/2}$). For very large scattering angles, $\mu_{\rm s}\lesssim -0.8$, relativistic corrections are particularly important. In this angular range, the accuracy quoted above is achieved in a narrower temperature range, $kT_{\rm e}\lesssim 15$~keV.
\begin{figure*}[b] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig2.ps} \caption{ Photon number spectra resulting from the single scattering of a monochromatic line of energy, $h\nu=6.7$~keV, on hot electrons with a scattering angle of $\pi/2$, for a number of values of electron temperature. The results of Monte Carlo simulations ({\sl solid lines}) are compared with various approximations for the angle-dependent kernel: $K_{\rm nr}$ (eq. [\ref{k_nr}], {\sl dotted lines}) and $K$ (eq. [\ref{k_set}], {\sl dashed lines}). Note the increasing influence of relativistic corrections (included in the $K$ kernel) on the spectrum as the electron temperature becomes higher. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig3.ps} \caption{ Same as Fig.~2, but for a fixed, mildly relativistic electron temperature, $kT_{\rm e}=25$~keV, and a varying scattering angle. Note that increasing the scattering angle for a given temperature has a similar broadening effect on the spectrum as increasing the temperature with the scattering angle fixed (compare with Fig.~2). } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig4.ps} \caption{ Spectra resulting from the single scattering of a monochromatic line of energy, $h\nu=6.7$~keV, on low-temperature electrons, $T_{\rm e}=10^4$~K, for a number of scattering angles. The corrections relevant for relativistic electrons are not important in this case. The presented spectra were calculated from eq. (\ref{k_nr}) for the $K_{\rm nr}$ kernel, which gives the exact result in the case of nonrelativistic electrons and leads to the same results as Monte Carlo simulations. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig5.ps} \caption{ Same as Fig.~4, but for high-energy photons, $h\nu=1238$~keV, and electrons of $kT_{\rm e}=1$~keV. The $K_{\rm nr}$ kernel given by eq. 
(\ref{k_nr}) is still very accurate at this electron temperature and leads to practically identical results with those of Monte Carlo simulations. Note that the total number of scattered photons is drastically decreased on going from $\mu_{\rm s}=0.5$ to $\mu_{\rm s}=-1$. This is due to the increasing effect of Klein-Nishina corrections on the scattering cross section (compare with the nonrelativistic case in Fig.~4). } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig6.ps} \caption{ Same as Fig.~5 ($h\nu=1238$~keV), but for a fixed scattering angle, $\mu_{\rm s}=0$, and a varying, nonrelativistic electron temperature. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig7.ps} \caption{ Spectra forming through single scattering of monochromatic radiation, $h\nu=50$~keV, on mildly relativistic thermal electrons, $kT_{\rm e}=25$~keV, for a set of scattering angles. The results of Monte Carlo simulations ({\sl solid lines}) are compared with the different approximations for the angle-dependent kernel: $K_{\rm nr}$ (eq. [\ref{k_nr}], {\sl dotted lines}) and $K$ (eq. [\ref{k_set}], {\sl dashed lines}). Temperature relativistic corrections, which are included in the $K$ kernel but not in the $K_{\rm nr}$ kernel, are of importance in this case. } \end{figure*} One can safely use the $K_{\rm nr}$ kernel when $kT_{\rm e}\lesssim 5$~keV. The photon energy can take arbitrary values in this case because equation (\ref{k_nr}) is exact for nonrelativistic electrons, as we pointed out before. This is demonstrated by Figures~5--7, which illustrate the scattering of a line with $h\nu=1238$~keV (which corresponds to one of the strongest $\gamma$ lines produced through radioactive decay of $^{56}$Co during supernova explosions) by thermal electrons, for a number of values of the electron temperature and scattering angle. 
Note that even for a temperature of $T_{\rm e}=10^4$~K, the Doppler width of the scattered line is $\sim 1$~keV for a scattering angle of $\pi/2$. The Ge cryogenic detectors of the {\sl International Gamma-Ray Astrophysical Observatory} ({\sl INTEGRAL}), scheduled for launch in 2002, will be capable of measuring such broadening. The line broadening arising from the Doppler effect depends on two parameters: the electron temperature, $kT_{\rm e}$, and the scattering angle, $\mu_{\rm s}$. The width of the profile as calculated from equation (\ref{k_nr}) is strictly proportional to $[(1-\mu_{\rm s})\eta]^{1/2}$. Therefore, in the nonrelativistic limit, one can only determine this combination of the two parameters, rather than $kT_{\rm e}$ and $\mu_{\rm s}$ separately, from a measurement of line broadening. This point is illustrated in Figures~2 and 3. One can see that identical profiles can be obtained by varying either the temperature or the scattering angle. However, if the electrons are mildly relativistic, the correction terms in equation (\ref{k_set}) become important, with their own dependence on $\mu_{\rm s}$. This property makes it possible, in principle, to find both $kT_{\rm e}$ and $\mu_{\rm s}$ from a measurement of a scattered line. However, in many situations, it would be easier to determine $\mu_{\rm s}$ by measuring the recoil-induced shift of the line (i.e., the position of the line on the frequency axis, see Figs.~2--7), which is proportional to $(1-\mu_{\rm s})h\nu/m_{\rm e} c^2$. \subsection{The $P(\nu\rightarrow \nu^\prime)$ Kernel for the Isotropic Problem} In the preceding paragraph we presented the analytical formula for the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel, which can be used for describing spectra forming as a result of single Compton scattering of a monochromatic line, with a given angle between the direction from which photons are supplied ($\bg{\Omega}$) and the observer's direction ($\bg{\Omega^\prime}$).
However, in most astrophysical situations we are presented with some angular distribution of incident radiation. In such cases, one needs to convolve the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel with this initial distribution in order to determine the emergent spectrum. If the initial distribution is isotropic, we arrive at the kinetic equation (\ref{kinetic}), with the kernel derivable by the integration given in equation (\ref{k_int}). This integral can be performed analytically for the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel given by equation (\ref{k_set}) if we assume that $h\nu(h\nu/m_{\rm e} c^2)\ll kT_{\rm e}$, so that the Doppler broadening prevails over the recoil shift. The details of the integration procedure are given in \S 5. The essence of this calculation is to write the argument of the exponential in equation (\ref{k_set}) as a trinomial. The two terms dependent on the photon energy ($h\nu$) then turn out to be small with respect to the main term describing the Doppler broadening (because of our assumption that $h\nu\sim kT_{\rm e}$), and therefore can be carried out of the exponential. The subsequent integration of $K$ over the scattering angle becomes straightforward. The result is \begin{mathletters} \begin{eqnarray} P(\nu\rightarrow\nu^\prime)=\nu^{-1}\sqrt{\frac{2}{\pi}}\eta^{-1/2} \left\{\left[1+\sqrt{2}\delta\left(1-\frac{h\nu}{kT_{\rm e}}\right)\eta^{1/2} -4\delta^2\frac{h\nu}{m_{\rm e} c^2} \right.\right. \nonumber\\ \left.\left. +2\sqrt{2}\delta^3\left(-2+\frac{1}{3}\left(\frac{h\nu} {kT_{\rm e}}\right)^2\right)\frac{h\nu}{m_{\rm e} c^2}\eta^{1/2}\right]p_0 +\left[1+\sqrt{2}\delta\left(1-\frac{h\nu}{kT_{\rm e}}\right)\eta^{1/2}\right]p_t \right. \nonumber\\ \left. 
+\left[1+\sqrt{2}\delta\left(3-\frac{h\nu}{kT_{\rm e}}\right)\eta^{1/2} \right]p_r \right\}, \label{p} \end{eqnarray} where \begin{eqnarray} p_0 &=& \left(\frac{11}{20}+\frac{4}{5}\delta^2+\frac{2}{5}\delta^4\right)F +|\delta|\left(-\frac{3}{2}-2\delta^2-\frac{4}{5}\delta^4\right)G, \nonumber\\ p_t &=& \left[\left(-\frac{1091}{1120}-\frac{507}{560}\delta^2 +\frac{57}{35}\delta^4+\frac{68}{35}\delta^6\right)F+|\delta|\left(\frac{9}{4} +\delta^2-\frac{26}{5}\delta^4-\frac{136}{35}\delta^6\right)G\right] \eta, \nonumber\\ p_r &=& \left[\left(-\frac{23}{280}+\frac{26}{35}\delta^2+\frac{34}{35} \delta^4+\frac{16}{35}\delta^6\right)F+|\delta|\left(-2\delta^2-\frac{12}{5} \delta^4-\frac{32}{35}\delta^6\right)G\right] \left(\frac{h\nu}{m_{\rm e} c^2}\right)^2\eta^{-1}, \nonumber\\ F &=& \exp{(-\delta^2)},\,\,\,G=\int_{|\delta|}^{\infty}\exp{(-t^2)}\,dt=0.5 \pi^{1/2}Erfc(|\delta|),\,\,\,\delta=(2\eta)^{-1/2} \frac{\nu^\prime-\nu} {\nu^\prime+\nu}. \label{pp} \end{eqnarray} \label{p_set} \end{mathletters} The kernel given by equation (\ref{p_set}) is a series in powers of $\eta^{1/2}$ (given that $h\nu\sim kT_{\rm e}$), written up to the third order. In the case of scattering of low-frequency radiation in a hot plasma ($h\nu\ll kT_{\rm e}$), equation (\ref{p_set}) simplifies significantly: \begin{eqnarray} P_T=\nu^{-1}\sqrt{\frac{2}{\pi}}\eta^{-1/2}\left[1+\sqrt{2}\delta\eta^{1/2} \right](p_0+p_t). \label{pt} \end{eqnarray} Equation (\ref{pt}), which totally neglects Compton recoil, can be obtained independently from the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel, using a simple calculation procedure that employs a transition to the rest frame of the scattering electron (\S 6). As a matter of fact, we first obtained equation (\ref{pt}), and this formula allowed us to check (by applying the detailed balance principle; see below) some of the terms dependent on $h\nu/m_{\rm e} c^2$ in the more general equation (\ref{p_set}). 
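Equations (\ref{p})--(\ref{pp}) are straightforward to program. The sketch below (Python; the function and variable names are ours) implements the kernel and can be used, e.g., for the detailed balance check mentioned above, as well as for a numerical verification of the normalization and first moment quoted later in this section (eqs. [\ref{p_norm}] and [\ref{p_moments}]):

```python
import math

SQRT2 = math.sqrt(2.0)

def P_kernel(nu, nup, eta, x0):
    """Isotropic kernel P(nu -> nup) of eqs. (p)-(pp).
    eta = k*T_e/(m_e c^2); x0*nu = h*nu/(m_e c^2), so x0 fixes the energy scale.
    Valid for h*nu*(h*nu/(m_e c^2)) << k*T_e."""
    x = x0*nu                                   # h*nu/(m_e c^2)
    r = math.sqrt(eta)
    d = (nup - nu)/((nup + nu)*SQRT2*r)         # delta of eq. (pp)
    F = math.exp(-d*d)
    G = 0.5*math.sqrt(math.pi)*math.erfc(abs(d))
    p0 = (11/20 + 4/5*d**2 + 2/5*d**4)*F - abs(d)*(3/2 + 2*d**2 + 4/5*d**4)*G
    pt = ((-1091/1120 - 507/560*d**2 + 57/35*d**4 + 68/35*d**6)*F
          + abs(d)*(9/4 + d**2 - 26/5*d**4 - 136/35*d**6)*G)*eta
    pr = ((-23/280 + 26/35*d**2 + 34/35*d**4 + 16/35*d**6)*F
          - abs(d)*(2*d**2 + 12/5*d**4 + 32/35*d**6)*G)*x**2/eta
    c0 = (1 + SQRT2*d*(1 - x/eta)*r - 4*d**2*x
          + 2*SQRT2*d**3*(-2 + (x/eta)**2/3)*x*r)
    ct = 1 + SQRT2*d*(1 - x/eta)*r
    cr = 1 + SQRT2*d*(3 - x/eta)*r
    return math.sqrt(2.0/math.pi)/(nu*r)*(c0*p0 + ct*pt + cr*pr)
```

This is an illustrative transcription of the truncated series, not a substitute for the exact kernel; it inherits the validity range of equation (\ref{p_set}).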
We have verified that the expression (\ref{p_set}) obeys the detailed balance principle, satisfying (in all orders up to $\eta^{3/2}$) the equation \begin{equation} P(\nu\rightarrow\nu^\prime)= \left(\frac{\nu^\prime}{\nu}\right)^2\exp{\left[\frac{h(\nu-\nu^\prime)} {kT_{\rm e}}\right]}P(\nu^\prime\rightarrow\nu), \label{p_balance} \end{equation} which is the analog of the corresponding equation for the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel (eq. [\ref{k_balance2}]). \subsubsection{The Kernel Leading to the Kompaneets Equation} The expression (\ref{p_set}) is the sum of four leading terms of the series in powers of $\eta^{1/2}$ for the $P(\nu\rightarrow\nu^\prime)$ kernel. By retaining a smaller number of terms in this series, one can build up cruder approximations to the kernel. Below, we shall consider two such approximations. The least accurate approximation uses the first term in the series given in equation (\ref{p}): \begin{equation} P_0(\nu\rightarrow\nu^\prime)=\nu^{-1}\sqrt{\frac{2}{\pi}}\eta^{-1/2}p_0, \label{p_0} \end{equation} with $p_0$ given by equation (\ref{pp}). The expression (\ref{p_0}) is equivalent to the formula obtained by \cite{hummih67}. $P_0$ is symmetric in frequency shift: $p_0(-\delta)=p_0(\delta)$. Therefore, it only describes the Doppler (random-walk) broadening, not accounting for the average increase in the photon energy by the Doppler effect. A more accurate approximation can be obtained by summing up two leading terms in the series given in equation (\ref{p}): \begin{equation} P_K(\nu\rightarrow\nu^\prime)=\nu^{-1}\sqrt{\frac{2}{\pi}}\eta^{-1/2}\left[1+ \sqrt{2}\delta\left(1-\frac{h\nu}{kT_{\rm e}}\right)\eta^{1/2}\right]p_0. \label{p_k} \end{equation} The $P_K$ kernel is necessary and sufficient for deriving the Kompaneets equation (\ref{komp}). For this reason, we refer to it as the ``Kompaneets equation kernel''. 
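The two cruder kernels are correspondingly simpler to code (a sketch under the same conventions; the names are ours). The tests confirm the symmetry $p_0(-\delta)=p_0(\delta)$ noted above and, numerically, the normalizations and first moment quoted below in equations (\ref{p_0_norm}), (\ref{p_k_norm}), and (\ref{p_k_moments}):

```python
import math

def p0_of(d):
    """Symmetric profile function p_0(delta) of eq. (pp)."""
    F = math.exp(-d*d)
    G = 0.5*math.sqrt(math.pi)*math.erfc(abs(d))
    return (11/20 + 4/5*d**2 + 2/5*d**4)*F - abs(d)*(3/2 + 2*d**2 + 4/5*d**4)*G

def delta_of(nu, nup, eta):
    return (nup - nu)/((nup + nu)*math.sqrt(2.0*eta))

def P0(nu, nup, eta):
    """Zero-order kernel, eq. (p_0)."""
    return math.sqrt(2.0/math.pi)/(nu*math.sqrt(eta))*p0_of(delta_of(nu, nup, eta))

def PK(nu, nup, eta, x):
    """Kompaneets-equation kernel, eq. (p_k); x = h*nu/(m_e c^2)."""
    d = delta_of(nu, nup, eta)
    return P0(nu, nup, eta)*(1.0 + math.sqrt(2.0)*d*(1.0 - x/eta)*math.sqrt(eta))
```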
When $h\nu\ll kT_{\rm e}$, equation (\ref{p_k}) is equivalent to the formula obtained by \cite{sunyaev80}. The $P_K$ kernel already accounts for the asymmetry of the scattered line and the corresponding photon heating. The Kompaneets equation kernel also takes into account (to first order) Compton recoil. \subsubsection{The Moments and Normalization of the Kernel} Important information on the properties of the $P(\nu\rightarrow\nu^\prime)$ kernel is provided by its moments, which were defined in equation (\ref{mom1}). The required integration is readily performed for the kernel given by equation (\ref{p_set}) after we express the differential $d\nu^\prime$ appearing in equation (\ref{mom1}) through $d\delta$, \begin{equation} d\nu^\prime=2\sqrt{2}\nu\eta^{1/2}(1+2\sqrt{2}\eta^{1/2}\delta+6\delta^2\eta +8\sqrt{2}\delta^3\eta^{3/2}+...)\,d\delta. \end{equation} The quantity $\delta$ was introduced in equation (\ref{pp}). The subsequent integration over $\delta$ should be carried out in the limits $-\infty$ to $\infty$. 
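This expansion can be checked by inverting the definition of $\delta$ in equation (\ref{pp}), which gives $\nu^\prime=\nu\,[1+(2\eta)^{1/2}\delta]/[1-(2\eta)^{1/2}\delta]$; differentiating and expanding reproduces the series above, with the first neglected term of order $\eta^2$ (a sketch, Python; function names are ours):

```python
import math

def jac_exact(d, eta):
    """d(nu')/d(delta) in units of nu, from nu' = nu*(1+s*d)/(1-s*d), s = (2*eta)^(1/2)."""
    s = math.sqrt(2.0*eta)
    return 2.0*s/(1.0 - s*d)**2

def jac_series(d, eta):
    """The expansion quoted in the text, truncated after the eta^(3/2) term."""
    r = math.sqrt(eta)
    return 2.0*math.sqrt(2.0)*r*(1.0 + 2.0*math.sqrt(2.0)*r*d
                                 + 6.0*d*d*eta + 8.0*math.sqrt(2.0)*d**3*eta*r)

def rel_err(eta, d=0.5):
    return abs(jac_exact(d, eta) - jac_series(d, eta))/jac_exact(d, eta)
```

The truncation error indeed scales as $\eta^2$: reducing $\eta$ by a factor of 10 reduces the relative error by a factor of $\sim$100.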
Collecting terms up to $O(\eta^2,\eta h\nu/m_{\rm e} c^2,(h\nu/m_{\rm e} c^2)^2)$, we obtain \begin{eqnarray} \langle\Delta\nu\rangle &=& \nu\left[4\frac{kT_{\rm e}}{m_{\rm e} c^2} -\frac{h\nu}{m_{\rm e} c^2}+10\left(\frac{kT_{\rm e}}{m_{\rm e} c^2}\right)^2-\frac{47}{2} \frac{h\nu}{m_{\rm e} c^2}\frac{kT_{\rm e}}{m_{\rm e} c^2} +\frac{21}{5}\left(\frac{h\nu}{m_{\rm e} c^2} \right)^2\right], \nonumber\\ \langle(\Delta\nu)^2\rangle &=& \nu^2\left[2\frac{kT_{\rm e}}{m_{\rm e} c^2}+47 \left(\frac{kT_{\rm e}}{m_{\rm e} c^2}\right)^2-\frac{126}{5}\frac{h\nu}{m_{\rm e} c^2} \frac{kT_{\rm e}}{m_{\rm e} c^2}+\frac{7}{5}\left(\frac{h\nu}{m_{\rm e} c^2}\right)^2\right], \nonumber\\ \langle(\Delta\nu)^3\rangle &=& \nu^3\left[\frac{252}{5}\left(\frac{kT_{\rm e}} {m_{\rm e} c^2}\right)^2-\frac{42}{5}\frac{h\nu}{m_{\rm e} c^2}\frac{kT_{\rm e}}{m_{\rm e} c^2} \right], \nonumber\\ \langle(\Delta\nu)^4\rangle &=& \nu^4\left[\frac{84}{5}\left(\frac{kT_{\rm e}} {m_{\rm e} c^2}\right)^2\right]. \label{p_moments} \end{eqnarray} The moments of higher degrees turn out to be at least of order $\eta^3$. The values for the moments above reproduce the results of \cite{itoetal98,chalas98} (more general expressions for the first two moments are also available; see Shestakov et al. 1988; \S 4.3 in Nagirner \& Poutanen 1994). It is important to mention that the equations (\ref{p_moments}) are valid for arbitrary values of the $h\nu/kT_{\rm e}$ ratio, including the case $kT_{\rm e}=0$, in contrast to the kernel equation (\ref{p_set}) itself, which holds in the limit $h\nu(h\nu/m_{\rm e} c^2)\lesssim kT_{\rm e}$ only (see the next paragraph for a discussion of the scope of the analytical approximations). The moments (\ref{p_moments}) can also be obtained by integrating over the scattering angle the moments of the angle-dependent $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ kernel (eq. [\ref{k_moments}]). 
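The last statement is easy to verify: integrating the first two of equations (\ref{k_moments}) over $\mu_{\rm s}$ reproduces the corresponding entries of equations (\ref{p_moments}) term by term, and all the angle-dependent moments vanish for strictly forward scattering ($\mu_{\rm s}=1$), as they must, since such scattering leaves the photon frequency unchanged. A numerical sketch (Python; helper names are ours):

```python
def m1_density(mu, x, th):
    """d<Delta nu>/d mu_s: first of eqs. (k_moments), with nu = 1.
    x = h*nu/(m_e c^2), th = k*T_e/(m_e c^2)."""
    return (3.0/8.0)*((1 - mu)*(1 + mu**2)*(4*th - x)
        + 2*(7 - 21*mu + 5*mu**2 + 19*mu**3 - 10*mu**4)*th**2
        + 0.5*(-37 + 81*mu - 65*mu**2 + 41*mu**3 - 20*mu**4)*x*th
        + 3*(1 - mu)**2*(1 + mu**2)*x**2)

def m2_density(mu, x, th):
    """Second of eqs. (k_moments), with nu = 1."""
    return (3.0/8.0)*(2*(1 - mu)*(1 + mu**2)*th
        + (37 - 81*mu + 65*mu**2 - 41*mu**3 + 20*mu**4)*th**2
        - 18*(1 - mu)**2*(1 + mu**2)*x*th
        + (1 - mu)**2*(1 + mu**2)*x**2)

def integrate(f, n=4000):
    """Midpoint integration over mu_s in [-1, 1]."""
    h = 2.0/n
    return sum(f(-1.0 + (i + 0.5)*h) for i in range(n))*h
```

The third and fourth moments can be checked in the same way.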
The normalization of the kernel equation (\ref{p_set}) can be calculated similarly to its moments: \begin{equation} \int P\,d\nu^\prime=1-2\frac{h\nu}{m_{\rm e} c^2}+\left[-\frac{53}{10}\left(\frac {kT_{\rm e}}{m_{\rm e} c^2}\right)^2-\frac{44}{5}\frac{h\nu}{m_{\rm e} c^2}\frac{kT_{\rm e}}{m_{\rm e} c^2} +\frac{63}{20}\left(\frac{h\nu}{m_{\rm e} c^2}\right)^2\right]. \label{p_norm} \end{equation} This expression should be compared with the known expansion series for the total cross section, equation (\ref{sigma}). One can see that the terms in square brackets in equation (\ref{p_norm}) describe the inaccuracy of the approximation in equation (\ref{p_set}), whereas the term $-2h\nu/m_{\rm e} c^2$ corresponds to the actual first-order Klein-Nishina correction to the cross section (see eq. [\ref{sigma}]). Note that equation (\ref{p_norm}) does not describe the second-order Klein-Nishina correction to the scattering cross section, although our expression for the angle-dependent kernel $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ (eq. [\ref{k_set}]) does contain the corresponding term. This term was omitted when we integrated $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ over the scattering angle (in \S 5), because in the current limit ($h\nu\sim kT_{\rm e}$), inclusion of this term in the formula for $P(\nu\rightarrow\nu^\prime)$ (eq. [\ref{p_set}]) would be inconsistent with the absence of some (unknown) terms of the same order, such as $O(\eta^2)$ or $O((h\nu/m_{\rm e} c^2)^3\eta^{-1})$, in this formula. For the zero-order kernel, $P_0$ (eq. [\ref{p_0}]), the first two moments are \begin{eqnarray} \langle\Delta\nu\rangle_0 &=& 0, \nonumber\\ \langle(\Delta\nu)^2\rangle_0 &=& \nu^2\frac{kT_{\rm e}}{m_{\rm e} c^2}, \label{p_0_moments} \end{eqnarray} and the normalization is \begin{equation} \int P_0\,d\nu^\prime=1+\frac{3}{2}\frac{kT_{\rm e}}{m_{\rm e} c^2}. 
\label{p_0_norm} \end{equation} The corresponding relations for the Kompaneets equation kernel (\ref{p_k}) are \begin{eqnarray} \langle\Delta\nu\rangle_k &=& \nu\left(4\frac{kT_{\rm e}}{m_{\rm e} c^2} -\frac{h\nu}{m_{\rm e} c^2}\right), \nonumber\\ \langle(\Delta\nu)^2\rangle_k &=& 2\nu^2\frac{kT_{\rm e}}{m_{\rm e} c^2}, \label{p_k_moments} \end{eqnarray} and \begin{equation} \int P_K\,d\nu^\prime=1+\left(\frac{5}{2}\frac{kT_{\rm e}}{m_{\rm e} c^2}-\frac{h\nu}{m_{\rm e} c^2} \right). \label{p_k_norm} \end{equation} The terms in parentheses in equation (\ref{p_k_norm}) describe the inaccuracy of the $P_K$ kernel. In particular, the Klein-Nishina reduction of the integral cross section is not covered by this approximation. The $P_K$ kernel also neglects altogether the frequency diffusion of photons due to Compton recoil (as a result of the recoil-induced frequency shift depending on the scattering angle). The term of order $(h\nu/m_{\rm e} c^2)^2$ in the expansion series for $\langle (\Delta\nu)^2\rangle$, which is present in equation (\ref{p_moments}) and absent from equation (\ref{p_k_moments}), is responsible for this. The Kompaneets equation (\ref{komp}), which is a Fokker-Planck equation with its coefficients governed by the moments (\ref{p_k_moments}), does not include the corresponding dispersion term, the importance of which for the case $h\nu\gg kT_{\rm e}$ was pointed out by \citet{rosetal78,illetal79}. Having calculated the normalization (\ref{p_norm}) for the $P$ kernel and knowing the exact result for the total scattering cross section (eq. [\ref{sigma}]), we can try to crudely take into account the terms of order $\eta^2$ for $P$ without calculating them explicitly.
To this end, we have to renormalize the kernel as \begin{equation} P^\prime=\left[1+\frac{53}{10}\left(\frac{kT_{\rm e}}{m_{\rm e} c^2}\right)^2 +\frac{19}{5}\frac{h\nu}{m_{\rm e} c^2}\frac{kT_{\rm e}}{m_{\rm e} c^2} +\frac{41}{20}\left(\frac{h\nu}{m_{\rm e} c^2} \right)^2\right]P, \label{p_rn} \end{equation} where $P$ is given by equation (\ref{p_set}). As a result, the normalization of the modified kernel $P^\prime$ is precisely the bracketed expression in equation (\ref{sigma}). In the case $h\nu\ll kT_{\rm e}$, the kernel given by equation (\ref{pt}) can be renormalized similarly: \begin{equation} P_T^\prime=\left[1+\frac{53}{10}\left(\frac{kT_{\rm e}}{m_{\rm e} c^2} \right)^2\right]P_T. \label{pt_rn} \end{equation} Finally, we may introduce the renormalized kernels $P_0^\prime$ and $P_K^\prime$: \begin{equation} P_0^\prime=\left(1-\frac{3}{2}\frac{kT_{\rm e}}{m_{\rm e} c^2}\right)P_0, \label{p_0_rn} \end{equation} \begin{equation} P_K^\prime=\left(1-\frac{5}{2}\frac{kT_{\rm e}}{m_{\rm e} c^2}-\frac{h\nu}{m_{\rm e} c^2} \right)P_K. \label{p_k_rn} \end{equation} The assumed normalizations of $P_0^\prime$ and $P_K^\prime$ are 1 and $1-2h\nu/m_{\rm e} c^2$, respectively. We should stress that the renormalized kernels are still approximations of the same order of uncertainty as the original kernels, but these kernels turn out to be more accurate than the original ones, as follows from a comparison with results of numerical calculations, which we present in the next paragraph. Note also that the moments are not affected by the renormalization procedure. \subsubsection{Comparison of the Analytical Approximations for the Kernel with Numerical Results} \paragraph{The case of $\bf h\bg{\nu}\ll k T_{\rm\bf e}\ll m_{\rm\bf e} c^2$.} In this case, the profile of a Compton-scattered monochromatic line forms through the Doppler mechanism alone. The exact kernel can be computed numerically, using equation (\ref{p_f}) of \S 6 or by means of Monte Carlo simulations.
We employ both methods in our analysis. Figures~8 and 9 compare, for two values of the electron temperature ($kT_{\rm e}=10$~keV and 25~keV), the accurate spectra with the corresponding analytical dependences as calculated in different approximations for the kernel: $P_T$, $P_0$, and $P_K$ (eqs. [\ref{pt},\ref{p_0},\ref{p_k}]). As expected, the asymmetry of the line (domination of the right wing over the left) increases as the temperature grows. One can see that the zero approximation \citep{hummih67}, $P_0$, which is symmetric in frequency shift, matches the line profile poorly. Therefore, we recommend its usage be restricted to the range $kT_{\rm e}\lesssim 500$~eV, where $P_0$ is accurate to better than 98\%, except in the far wings of the line. The latter can be roughly defined as the regions $|\nu^\prime-\nu|/(\nu^\prime+\nu)\gtrsim 1/4$. \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig8.ps} \caption{ (a) Spectrum that forms through single scattering of isotropic monochromatic low-frequency ($h\nu\ll kT_{\rm e}$) radiation on weakly relativistic thermal electrons, $kT_{\rm e}=10$~keV. The solid line shows the result of an accurate numerical calculation from eq. (\ref{p_f}). Also shown are the approximations given by: eq. (\ref{p_0}), the zero-order kernel $P_0$ ({\sl dotted line}); eq. (\ref{p_k}), the Kompaneets equation kernel $P_K$ ({\sl dashed line}); and eq. (\ref{pt}), the mildly relativistic kernel $P_T$ ({\sl dash-dotted line}). (b) Ratio of the approximate spectra shown in (a) to the accurate spectrum. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig9.ps} \caption{ Same as Fig.~8, but for a higher temperature, $kT_{\rm e}=25$~keV. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig10.ps} \caption{ Same as Fig.~9, but the renormalized kernels $P_0^\prime$ (eq. [\ref{p_0_rn}]), $P_K^\prime$ (eq. [\ref{p_k_rn}]), and $P_T^\prime$ (eq. [\ref{pt_rn}]), are used. 
One can see that the agreement between the approximations and the exact kernel is better than in Fig.~9. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig11.ps} \caption{ (a) Spectrum that forms through single scattering of isotropic monochromatic radiation of high energy, $h\nu=50$~keV, on weakly relativistic thermal electrons, $kT_{\rm e}=10$~keV. The result of a Monte Carlo simulation ({\sl solid line}) is compared with the different approximations for the kernel: $P_0^\prime$ (eq. [\ref{p_0_rn}], {\sl dotted line}), $P_K^\prime$ (eq. [\ref{p_k_rn}], {\sl dashed line}), and $P^\prime$ (eq. [\ref{p_rn}], {\sl dash-dotted line}). (b) Ratio of the approximate spectra shown in (a) to the accurate (Monte Carlo) spectrum. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig12.ps} \caption{ Same as Fig.~11 ($h\nu=50$~keV), but for a higher electron temperature, $kT_{\rm e}=25$~keV. } \end{figure*} The Kompaneets equation kernel, $P_K$, which was derived for the considered case ($h\nu\ll kT_{\rm e}$) by \cite{sunyaev80}, works well when $kT_{\rm e}\lesssim 5$~keV. At higher temperatures, it becomes important to take into account the relativistic correction terms, $(1+\sqrt{2}\delta\eta^{1/2}) p_t$, in equation (\ref{pt}), i.e., to use our most accurate approximation, $P_T$. At $kT_{\rm e}=25$~keV, the \citet{sunyaev80} kernel overestimates the flux at the line peak by 10\% (see Fig.~9). The flux turns out to be even more overestimated in the wings. The $P_T$ kernel is accurate to better than 98\% at this temperature. At $kT_{\rm e}\sim 50$~keV, the electrons are already strongly relativistic: $\langle (v/c)^2\rangle^{1/2}\sim \eta^{1/2}\sim 0.5$, and the $\eta^{1/2}$ series in equation (\ref{pt}) converges poorly, particularly in the wings of the line. At higher temperatures, one must use the exact kernel (which can be calculated from eq. [\ref{p_f}]) and treat the problem numerically.
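The $\sim$10\% figure can be estimated analytically. At the line center ($\delta=0$, $h\nu\ll kT_{\rm e}$) only the $F$-terms of equation (\ref{pp}) survive, so the ratio of the $P_K$ and $P_T$ peak values is $p_0(0)/[p_0(0)+p_t(0)]$, with $p_0(0)=11/20$ and $p_t(0)=-(1091/1120)\eta$; here we take $P_T$ as the reference, since it is accurate to $\sim$2\% at this temperature (a sketch, Python):

```python
# Peak ratio of the Kompaneets-equation kernel P_K to the mildly relativistic
# kernel P_T at the line center (delta = 0, h*nu << k*T_e).
eta = 25.0/511.0                      # kT_e = 25 keV, eta = kT_e/(m_e c^2)
p0_peak = 11.0/20.0                   # p_0(0), from eq. (pp)
pt_peak = -(1091.0/1120.0)*eta        # p_t(0), from eq. (pp)
peak_ratio = p0_peak/(p0_peak + pt_peak)
```

The result is $\approx 1.09$, i.e., an overestimate of roughly 10\%, consistent with Figure~9.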
Figure~10 is the same as Figure~9, but presents the results for the modified kernels, $P_T^\prime$, $P_0^\prime$, and $P_K^\prime$ (eqs. [\ref{pt_rn}]--[\ref{p_k_rn}]). It is seen that the renormalization has indeed appreciably improved the accuracy of the approximations. \paragraph{Inclusion of quantum effects: $\bf h\bg{\nu}\neq 0$, $\bf k T_{\rm\bf e}\ll m_{\rm\bf e} c^2$.} The assumption $h\nu(h\nu/m_{\rm e} c^2)\ll kT_{\rm e}$, which we used when deriving equation (\ref{p_set}), means that the recoil-induced downward shift in the photon frequency must be small compared to the Doppler line broadening. Using Monte Carlo simulations, we have established that $P$ and $P_K$ (eqs. [\ref{p_set}] and [\ref{p_k}]) remain good approximations up to $h\nu(h\nu/m_{\rm e} c^2)\sim kT_{\rm e}$, i.e., when the Doppler and recoil effects become comparable. Naturally, the zero-order kernel, $P_0$, is a very poor approximation in this case, because it totally neglects Compton recoil. This is demonstrated by Figures~11 and 12, which compare accurate (Monte Carlo) single-scattering spectra with the corresponding profiles computed from the renormalized kernels $P^\prime$, $P_0^\prime$, and $P_K^\prime$ (eqs. [\ref{p_rn}], [\ref{p_0_rn}], and [\ref{p_k_rn}]). The results of our simulations imply that the temperature limits quoted above for the case $h\nu\ll kT_{\rm e}$ remain valid upon inclusion of quantum effects. We finally conclude that the $P$ kernel can be used in the following range of parameter values: $h\nu(h\nu/m_{\rm e} c^2)\lesssim kT_{\rm e}\lesssim 25$~keV, $h\nu\lesssim 50$~keV. The corresponding limits for the $P_K$ kernel are $h\nu(h\nu/m_{\rm e} c^2)\lesssim kT_{\rm e}\lesssim 5$~keV, $h\nu\lesssim 50$~keV. At $h\nu(h\nu/m_{\rm e} c^2)\gtrsim kT_{\rm e}$, recoil becomes more important than the Doppler effect. The single-scattered profile becomes double-peaked \citep[see, e.g., the results of Monte Carlo simulations of][]{pozetal83}. 
This case requires a special analytical treatment that is beyond the scope of this paper. Here we may only point out the principal mathematical difficulty that makes it impossible to use our current method for calculating the $P(\nu\rightarrow\nu^\prime)$ kernel in this limit: the exponential entering the expression for $K$ (eq. [\ref{k_set}]) cannot be expanded in powers of $\eta^{1/2}$, as was done for the case $h\nu(h\nu/m_{\rm e} c^2)\lesssim kT_{\rm e}$. \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig13.ps} \caption{ Spectra resulting from the single scattering of isotropic monochromatic radiation of energy $h\nu=6.7$~keV on low-temperature ($h\nu> 4 kT_{\rm e}$) thermal electrons, for different values of $kT_{\rm e}$. In this case, the Compton-recoil shift is larger than the Doppler shift. The results of Monte Carlo simulations ({\sl solid lines}) are compared with the results of the calculation by the approximate eq. (\ref{p_rn}) for the mildly relativistic kernel $P^\prime$ ({\sl dashed lines}). For the case $kT_{\rm e}=0$ (cold electrons), only the Monte Carlo result ({\sl double-peaked profile}) is shown, since our approximation for the kernel is not valid in this case. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig14.ps} \caption{ Same as Fig.~13 ($h\nu=6.7$~keV), but for high-temperature electrons, $h\nu< 4 kT_{\rm e}$. In this case, the Doppler shift is larger than the recoil shift. } \end{figure*} \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig15.ps} \caption{ Spectra resulting from the single scattering of isotropic monochromatic radiation on weakly relativistic electrons, $kT_{\rm e}=10$~keV, for different photon energies. The results of Monte Carlo simulations ({\sl solid lines}) are compared with the results of the calculation by the approximate eq. (\ref{p_rn}) for the mildly relativistic kernel $P^\prime$ ({\sl dashed lines}).
One can observe how the effect of Compton recoil on the spectrum increases as the photon energy becomes higher. The case of the 122~keV nuclear line produced by $^{57}$Co is beyond the scope of our analytical approximation for the isotropic kernel. } \end{figure*} Numerous examples of spectra formed by single Compton scattering, which may well be encountered in various astrophysical situations, and which are well approximated by our analytical kernel equation (\ref{p_set}), are presented in Figures~13--15. \subsubsection{Properties of the Single-Scattering Profile} The line profile formed by single scattering of isotropic radiation on thermal electrons (see Figs.~8--15), which is approximated by equation (\ref{p_set}), is unique in its properties. It therefore appears interesting to examine its characteristic features in some detail (following Pozdnyakov et al. 1983). First, let us compare the line profile as calculated from equation (\ref{p_set}) with the usual Gaussian profile that, e.g., may result from Doppler broadening of an emission line in the presence of thermal or turbulent motions of ions. To simplify this comparison, let us assume that $h\nu\ll kT_{\rm e}$. For a given plasma temperature, it is natural to adopt $\Delta\nu_D=\nu(2\eta)^{1/2}$ for the Doppler shift\footnote{Note that the width used, $\Delta\nu_D$, is $(M/m_{\rm e})^{1/2}=43(M/m_{\rm p})^{1/2}$ times the thermal width of lines of an ion of mass $M$.}, i.e. $N\sim \exp{[-(\nu^\prime-\nu)^2/\Delta\nu_D^2]}$. The mean (rms) frequency shift, $\sqrt{\langle(\Delta\nu)^2\rangle}$, is $\nu\eta^{1/2}$ for the Doppler profile. The corresponding value for the Compton-scattered line is $\nu[2\eta(1+23.5\eta)]^{1/2}$ (as results from the value of the second moment given by eq. [\ref{p_moments}]). The width of the single-scattering profile at half-maximum (FWHM) is approximately equal to $2\nu[\ln{2}\eta]^{1/2}=1.66\nu\eta^{1/2}$.
The corresponding value for the Doppler profile is $2\nu[2\ln{2}\eta]^{1/2}$, i.e., $\sqrt{2}$ times larger, which is opposite to the situation with the rms shift. Thus, in the case of the line formed by Compton scattering, relatively few photons appear in the upper part of the profile (above half-maximum), and an accordingly large fraction of the scattered radiation emerges in the wings of the line. It is also worth noting that the Doppler profile is symmetric, while the profile due to Compton scattering is not. Now let us consider the peak of the single-scattering profile, a detail which makes it so peculiar. In the vicinity of the maximum ($|\nu^\prime-\nu|\ll\nu\eta^{1/2}$), the spectrum is well approximated by the following expression, which results from equation (\ref{p_set}): \begin{eqnarray} P(\nu\rightarrow\nu^\prime)_{+,-}=\nu^{-1}\frac{11}{20}\sqrt{\frac{2}{\pi}} \eta^{-1/2} \left\{\left[1+\left(-\frac{1091}{616}-\frac{23}{154}\left(\frac{h\nu} {kT_{\rm e}}\right)^2\right)\eta\right] \right. \nonumber\\ \left. +\frac{\nu^\prime/\nu-1}{\eta^{1/2}}\left[\mp\frac{15}{22}\sqrt{\frac{\pi}{2}} +\frac{1}{2}\left(1-\frac{h\nu}{kT_{\rm e}}\right)\eta^{1/2}+...\right] +...\right\}, \label{peak} \end{eqnarray} where the plus and minus signs correspond to the right and left wings, respectively. We see that the spectrum has a cusp at $\nu^\prime=\nu$ (a break in the derivative occurs there). Near the cusp, on both of its sides, the spectrum can be approximated as a power law, the slopes in the right and left wings [the coefficient of $(\nu^\prime/\nu-1)$ in eq.
(\ref{peak})] being significantly different: \begin{eqnarray} \alpha_{+} &=& -\left[\frac{d\ln{P}}{d\ln{(\nu^\prime/\nu)}}\right]_{\nu^ \prime=\nu+0}= \frac{15}{22}\sqrt{\frac{\pi}{2}}\eta^{-1/2} -\frac{1}{2}+\frac{1}{2}\frac{h\nu}{kT_{\rm e}}, \nonumber\\ \alpha_{-} &=& ~~\left[\frac{d\ln{P}}{d\ln{(\nu^\prime/\nu)}}\right]_{\nu^ \prime=\nu-0}=\frac{15}{22}\sqrt{\frac{\pi}{2}}\eta^{-1/2} +\frac{1}{2}-\frac{1}{2}\frac{h\nu}{kT_{\rm e}},~~~~~~ \alpha_{-}-\alpha_{+}=1-\frac{h\nu}{kT_{\rm e}} \label{alphas} \end{eqnarray} (in Pozdnyakov et al. 1983, $\alpha_{-}-\alpha_{+}=3$ when $h\nu=0$, because they considered the energy spectrum that is the product of $\nu^\prime/\nu$ and the photon spectrum considered here). It is interesting that when $h\nu=kT_{\rm e}$, the line profile in the vicinity of the cusp is symmetric (in logarithmic coordinates) about $\nu^\prime=\nu$ ($\alpha_{+}=\alpha_{-}$). Let us give a few further examples. If $h\nu=0$ and $\eta=$~0.001, 0.01, and 0.1, the slopes run: $\alpha_{+,-}=27.0\mp0.5$, $8.5\mp0.5$, and $2.7\mp0.5$. For $h\nu/m_{\rm e} c^2=0.1$ and $\eta=0.02$ (the case of Figure~11): $\alpha_{+}=8.0$, $\alpha_{-}=4.1$, and $\alpha_{-}-\alpha_{+}=-3.9$ (for $h\nu=0$ and $\eta=0.02$, we would have $\alpha_{+}=5.5$, $\alpha_{-}=6.6$, and $\alpha_{-}-\alpha_{+}=1.0$). The asymmetry of the single-scattering profile can be demonstrated by comparing the fractions of the scattered radiation contained in the right ($\nu^\prime>\nu$) and left ($\nu^\prime<\nu$) wings of the line. 
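The slope formulas in equation (\ref{alphas}) are straightforward to evaluate; the following sketch (Python; the function and variable names are ours, chosen purely for illustration) reproduces the $h\nu=0$ numerical examples quoted above:

```python
import math

def slopes(eta, x):
    """Power-law slopes alpha_{+,-} of the single-scattering profile
    near the cusp (eq. [alphas]); eta = kT_e/(m_e c^2), x = h*nu/kT_e."""
    a = (15.0 / 22.0) * math.sqrt(math.pi / 2.0) / math.sqrt(eta)
    alpha_plus = a - 0.5 + 0.5 * x    # right wing, nu' > nu
    alpha_minus = a + 0.5 - 0.5 * x   # left wing,  nu' < nu
    return alpha_plus, alpha_minus

# The h*nu = 0 examples quoted in the text (27.0 -/+ 0.5, etc.):
for eta in (0.001, 0.01, 0.1):
    ap, am = slopes(eta, 0.0)
    print("eta = %.3f:  alpha+ = %.1f,  alpha- = %.1f" % (eta, ap, am))
```

Note that the difference $\alpha_{-}-\alpha_{+}=1-h\nu/kT_{\rm e}$ is independent of $\eta$, as the last of equations (\ref{alphas}) states.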
Using the renormalized kernel given by equations (\ref{p_rn}) and (\ref{p_set}), we find in terms of number of photons ($N=\int P^\prime\,d\nu^\prime$) and total energy ($W=\int h\nu^\prime P^\prime\,d\nu^\prime$), \begin{eqnarray} N_{+,-}=\frac{1}{2}\pm \eta^{1/2}\sqrt{\frac{2}{\pi}}\left[\frac{69}{70}-\frac{23}{70} \frac{h\nu}{kT_{\rm e}}\right]-\frac{h\nu}{m_{\rm e} c^2} \pm\eta^{3/2}\sqrt{\frac{2}{\pi}}\left[-\frac{1577}{1680}-\frac{4061}{1680} \frac{h\nu}{kT_{\rm e}} \right. \nonumber\\ \left. +\frac{43}{84}\left(\frac{h\nu}{kT_{\rm e}}\right)^2 +\frac{43}{1260}\left(\frac{h\nu}{kT_{\rm e}}\right)^3\right]+ \eta^2\left[-\frac{5}{2}\frac {h\nu}{kT_{\rm e}}+\frac{13}{5}\left(\frac{h\nu}{kT_{\rm e}}\right)^2\right], \label{n_mp} \end{eqnarray} \begin{eqnarray} W_{+,-}=\left\{\frac{1}{2}\pm \eta^{1/2}\sqrt{\frac{2}{\pi}}\left[\frac{23}{14}-\frac{23}{70}\frac{h\nu} {kT_{\rm e}}\right]+\eta\left[2-\frac{3}{2}\frac{h\nu}{kT_{\rm e}}\right] \pm\eta^{3/2}\sqrt{\frac{2}{\pi}}\left[\frac{187}{48}-\frac{521}{80}\frac {h\nu}{kT_{\rm e}} \right.\right. \nonumber\\ \left.\left. +\frac{43}{6}\left(\frac{h\nu}{kT_{\rm e}}\right)^2 +\frac{43}{1260}\left(\frac{h\nu}{kT_{\rm e}}\right)^3\right]+ \eta^2\left[5-\frac{57}{4}\frac{h\nu}{kT_{\rm e}}+\frac{47}{10} \left(\frac{h\nu}{kT_{\rm e}}\right)^2\right]\right\}h\nu. \label{w_mp} \end{eqnarray} The renormalization is important here because it enables us to obtain the terms $O(\eta^2)$, $O(\eta h\nu/m_{\rm e} c^2)$, and $O((h\nu/m_{\rm e} c^2)^2)$ in equations (\ref{n_mp}) and (\ref{w_mp}). This procedure is strictly correct for the following reason. The terms of even orders in $\eta^{1/2}$ in the power series for the $P(\nu\rightarrow\nu^\prime)$ kernel (see eq. [\ref{p_set}]), i.e. $p_0$, $p_t$, $p_r$ and analogous (unknown) terms of higher orders, are even (symmetric) functions of the frequency shift $\nu^\prime-\nu$.
Therefore, if we know (and we indeed do) the contribution of such a term, say $O(\eta^2)$, to the normalization of the accurate kernel, we immediately know that the contribution of this term to both $N_{+}$ and $N_{-}$ is equal to half this value. $W_{+,-}$ are then determined as $\int h(\nu+\nu^\prime-\nu)P^\prime\,d\nu^\prime= N_{+,-}h\nu+\int h(\nu^\prime-\nu)P^\prime\,d\nu^\prime$. The last integral is accurate to within $\eta^2$, $\eta h\nu/m_{\rm e} c^2$, and $(h\nu/m_{\rm e} c^2)^2$ because of the presence of the small factor $\nu^\prime-\nu$. An interesting conclusion can be drawn from both equation (\ref{w_mp}) and the above discussion: the left wing contributes to the total accumulation of energy by the photons exactly as much as the right wing. \subsubsection{Single Scattering of a Step-Function Spectrum} Although the present paper is mainly devoted to the case of single Compton scattering of narrow spectral lines, our basic formulae --- equation (\ref{k_set}) for the direction-dependent problem and equation (\ref{p_set}) for the isotropic problem --- describe the kernels of the corresponding integral kinetic equations (eqs. [\ref{kinetic_k}] and [\ref{kinetic}]) appearing in the general problem of Comptonization in thermal plasmas. Let us present a simple example of using the $P(\nu\rightarrow \nu^\prime)$ kernel. Consider the scattering of an isotropic photon distribution described by the step function, i.e., $dN_0/d\nu=1$ if $\nu\le\nu_0$ and $dN_0/d\nu=0$ if $\nu>\nu_0$, in an optically thin (so that multiple scatterings are not important), hot plasma. The spectrum of the scattered photons can be found by convolving the initial frequency distribution with the kernel: \begin{equation} \frac{dN(\nu)}{d\nu}=\tau\int\frac{dN_0(\nu^\prime)}{d\nu} P(\nu^\prime\rightarrow\nu)\,d\nu^\prime, \label{step_gen} \end{equation} where $\tau\ll 1$ is the Thomson optical depth of the scattering medium. 
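As an illustration of how the convolution (\ref{step_gen}) is used in practice, here is a minimal numerical sketch (Python; names are ours). Since the full kernel $P^\prime$ is lengthy, a normalized Gaussian of Doppler width is substituted for it below, purely to keep the example short; this Gaussian stand-in is an assumption of the sketch, not equation (\ref{p_set}):

```python
import math

def kernel_gauss(nu_in, nu_out, eta):
    """Stand-in for P(nu_in -> nu_out): a normalized Gaussian of Doppler
    width nu_in*(2*eta)**0.5 (an illustrative assumption, not eq. [p_set])."""
    w = nu_in * math.sqrt(2.0 * eta)
    return math.exp(-((nu_out - nu_in) / w) ** 2) / (w * math.sqrt(math.pi))

def scattered_step(nu, nu0, eta, tau, n=2000):
    """dN/dnu of the scattered component (eq. [step_gen]) for the
    right-side step spectrum dN0/dnu = 1 (nu' <= nu0), 0 otherwise."""
    lo, hi = 0.2 * nu0, nu0        # where dN0/dnu = 1 (lower cutoff is arbitrary)
    h = (hi - lo) / n
    s = 0.5 * (kernel_gauss(lo, nu, eta) + kernel_gauss(hi, nu, eta))
    for i in range(1, n):
        s += kernel_gauss(lo + i * h, nu, eta)
    return tau * s * h             # trapezoidal rule

# Right at the step edge roughly half of the kernel mass lies on each
# side of nu0, so dN/dnu there is close to tau/2:
print(scattered_step(1.0, 1.0, 0.01, 0.1))
```

Well above $\nu_0$ the result falls off steeply, as expected for the scattered wing.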
Figure~16 presents examples of spectra of the scattered component as calculated from equation (\ref{step_gen}) using different analytical approximations for the kernel: $P_0^\prime$ (eq. [\ref{p_0_rn}]), $P_K^\prime$ (eq. [\ref{p_k_rn}]), and $P^\prime$ (eq. [\ref{p_rn}]). The radiation is assumed to be low-frequency ($h\nu_0\ll kT_{\rm e}$). For one case, $kT_{\rm e}=1$~keV, a spectrum formed at non-negligible photon energy ($h\nu_0=7.1$~keV) is also shown. The integral in equation (\ref{step_gen}) was evaluated numerically. Note that the actual observable spectrum will be the sum of the unscattered and scattered components, and will depend on $\tau$, i.e., $(dN/d\nu)_{\rm total}=(1-\tau)dN_0/d\nu+\tau dN/d\nu$. \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig16.ps} \caption{ Single scattering of a spectrum described by the step function --- $dN_0/dE=1$ if $E\le E_0$ and $dN_0/dE=0$ if $E>E_0$ --- on mildly relativistic thermal electrons, considered for various electron temperatures, assuming that the photon energy is negligible ($E_0\ll kT_{\rm e}$). The spectra resulting from the convolution (eq. [\ref{step_gen}]) of the step function with the following approximations for the isotropic kernel are shown: the zero-order kernel $P_0^\prime$ (eq. [\ref{p_0_rn}], {\sl dotted lines}), the Kompaneets equation kernel $P_K^\prime$ (eq. [\ref{p_k_rn}], {\sl dashed lines}), and the mildly relativistic kernel $P^\prime$ (eq. [\ref{p_rn}], {\sl dash-dotted lines}). For the case of $kT_{\rm e}=1$~keV, also shown is the spectrum (corresponding to the $P^\prime$ kernel) for a nonnegligible photon energy, $E_0=7.1$~keV ({\sl solid line}). Note that the actual observable spectrum will be the sum of the nonscattered and scattered components, i.e., $(dN/dE)_{\rm total}=(1-\tau)dN_0/dE+\tau dN/dE$, where $\tau\ll 1$ is the Thomson optical depth of the scattering medium. } \end{figure*} The spectra shown in Figure~16 allow an analytical description.
We present here only the result of the convolution of the step function with the Kompaneets equation kernel, $P_K$: \begin{mathletters} \begin{equation} \frac{dN}{d\nu}=\left\{\begin{array}{ll} (f_0+f_1)\tau, & \nu > \nu_0\\ (1-f_0+f_1)\tau, & \nu\le\nu_0, \end{array}\right.\ \label{step} \end{equation} where \begin{eqnarray} f_0=\frac{1}{\sqrt{\pi}}\left[\left(-\frac{6}{5}\delta_0 -\frac{13}{15}\delta_0^3-\frac{4}{15}\delta_0^5\right)F+\left(1+3\delta_0^2 +2\delta_0^4+\frac{8}{15}\delta_0^6\right)G\right], \nonumber\\ f_1=\sqrt{\frac{2}{\pi}}\left[\left(\frac{23}{70}-\frac{27}{35}\delta_0^2 -\frac{24}{35}\delta_0^4-\frac{8}{35}\delta_0^6\right)F+\left(2\delta_0^3 +\frac{8}{5}\delta_0^5+\frac{16}{35}\delta_0^7\right)G\right]\eta^{1/2} \left(1-\frac{h\nu_0}{kT_{\rm e}}\right), \nonumber\\ F=\exp{(-\delta_0^2)},\,\,\,G=\int_{\delta_0}^{\infty}\exp{(-t^2)}\,dt=0.5 \pi^{1/2}Erfc(\delta_0),\,\,\,\delta_0=(2\eta)^{-1/2}\frac{|\nu-\nu_0|} {\nu_0+\nu}. \end{eqnarray} \label{step_set} \end{mathletters} The main term in equation (\ref{step}) --- $f_0\tau$ (if $\nu>\nu_0$) or $(1-f_0)\tau$ (if $\nu\le\nu_0$) --- results from the zero-order kernel, $P_0$ \citep{hummih67}. The spectrum of the scattered component at frequencies $\nu>\nu_0$ has a quasi-power-law shape, with a slope that is approximately equal to the slope of the right wing of the kernel itself (see eq. [\ref{alphas}]): \begin{equation} \alpha=\frac{11}{10}\sqrt{\frac{2}{\pi}}\eta^{-1/2}-\frac{253}{175\pi}\left( 1-\frac{h\nu_0}{kT_{\rm e}}\right). \label{step_slope} \end{equation} As is the case with the kernel itself, the first-order temperature correction causes the spectrum of the scattered component to be flatter than results from the zero-order approximation. Compton recoil has an opposite effect on the slope and may cause a significant steepening of the spectrum when $h\nu_0\gg kT_{\rm e}$. 
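The functions $f_0$ and $f_1$ of equation (\ref{step_set}) are easy to evaluate directly; the following sketch (Python; function names are ours) verifies that $f_0=1/2$ exactly at the step ($\delta_0=0$) and that $f_0\rightarrow 0$ far above it:

```python
import math

def _FG(d):
    """F = exp(-d^2) and G = 0.5*sqrt(pi)*erfc(d) of eq. [step_set]."""
    return math.exp(-d * d), 0.5 * math.sqrt(math.pi) * math.erfc(d)

def f0(d):
    """f_0(delta_0) of eq. [step_set]."""
    F, G = _FG(d)
    return ((-1.2 * d - 13.0 / 15.0 * d**3 - 4.0 / 15.0 * d**5) * F
            + (1.0 + 3.0 * d**2 + 2.0 * d**4 + 8.0 / 15.0 * d**6) * G
            ) / math.sqrt(math.pi)

def f1(d, eta, x0):
    """f_1(delta_0) of eq. [step_set]; x0 = h*nu_0/kT_e."""
    F, G = _FG(d)
    return math.sqrt(2.0 / math.pi) * (
        (23.0 / 70.0 - 27.0 / 35.0 * d**2 - 24.0 / 35.0 * d**4
         - 8.0 / 35.0 * d**6) * F
        + (2.0 * d**3 + 1.6 * d**5 + 16.0 / 35.0 * d**7) * G
    ) * math.sqrt(eta) * (1.0 - x0)

# At the step (delta_0 = 0): f_0 = 1/2, so dN/dnu jumps between about
# tau/2 and (1 - 1/2)*tau; far from the step f_0 vanishes.
print(f0(0.0), f0(1.0), f0(5.0))
```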
It is clear from Figure~16 and equation (\ref{step_slope}) that in the low-frequency case, it is possible to determine the temperature of the scattering plasma by measuring the slope of the spectrum of the scattered component. Let us also give an expression for the total number of photons that have been redistributed from the frequency region $\nu\le\nu_0$ into the region $\nu>\nu_0$: \begin{equation} N=\left[\frac{23}{35}\sqrt{\frac{2}{\pi}}+\left(1-\frac{h\nu_0}{m_{\rm e} c^2}\right) \eta^{1/2}\right]\nu_0\eta^{1/2}\tau. \end{equation} We can also imagine a situation in which a spectrum described by the left-side step function: $dN_0/d\nu=1$ if $\nu\ge\nu_0$ and $dN_0/d\nu=0$ if $\nu<\nu_0$, is being scattered. In this case, we arrive at a formula that is similar to equation (\ref{step_set}): \begin{equation} \frac{dN}{d\nu}=\left\{\begin{array}{ll} (f_0-f_1)\tau, & \nu < \nu_0\\ (1-f_0-f_1)\tau, & \nu\ge\nu_0. \end{array}\right.\ \label{step_left} \end{equation}\ We see that here the asymmetry of the kernel has the opposite effect on the scattered spectrum as compared with the previous situation, in which the right-side step function was considered (compare the signs of the term $f_1$ in eqs. [\ref{step}] and [\ref{step_left}]), namely, the temperature correction causes the spectrum to steepen, while the recoil correction makes the slope flatter. \subsection{Fokker-Planck Expansion of the Integral Kinetic Equation} We can carry out a Fokker-Planck expansion (eq. [\ref{fokker}]) of the kinetic equation (\ref{kinetic}) with the $P(\nu\rightarrow\nu^\prime)$ kernel derived in this paper (eq. [\ref{p_set}]). The coefficients in equation (\ref{fokker}) depend on the moments of the kernel, which are given by equation (\ref{p_moments}). 
As a result, we obtain the generalized (for the mildly relativistic case) Kompaneets equation: \begin{eqnarray} \frac{\partial n(\nu)}{\partial\tau}=\frac{h}{m_{\rm e} c^2}\frac{1}{\nu^2} \frac{\partial}{\partial \nu}\nu^4\left\{n(1+n)+\frac{kT_{\rm e}}{h} \frac{\partial n}{\partial\nu}+\frac{7}{10}\frac{h\nu^2}{m_{\rm e} c^2} \frac{\partial n}{\partial\nu}+\frac{kT_{\rm e}}{m_{\rm e} c^2}\left[\frac{5}{2}\left( n(1+n)+\frac{kT_{\rm e}}{h}\frac{\partial n}{\partial\nu}\right) \right.\right. \nonumber\\ \left.\left. +\frac{21}{5}\nu \frac{\partial}{\partial\nu}\left(n(1+n)+\frac{kT_{\rm e}}{h} \frac{\partial n} {\partial\nu}\right) +\frac{7}{10}\nu^2\left(-2\left(\frac{\partial n}{\partial\nu}\right)^2+2(1+2n) \frac{\partial^2 n}{\partial\nu^2}+\frac{kT_{\rm e}}{h}\frac{\partial^3 n}{\partial \nu^3}\right)\right]\right\}. \label{komp_gen} \end{eqnarray} This equation was earlier obtained, in a different way, by \cite{itoetal98} and \cite{chalas98}. Substituting the Planckian distribution that corresponds to the temperature of the electrons, $n=(e^{h\nu/kT_{\rm e}}-1)^{-1}$, into equation (\ref{komp_gen}) yields $\partial n(\nu)/\partial\tau=0$. This test confirms once more that equation (\ref{p_set}) for the kernel consistently takes into account necessary corrections. One can also make sure that equation (\ref{komp_gen}) without the terms responsible for induced scattering does not modify a Wien spectrum, $n=e^{-h\nu/kT_{\rm e}}$. The fact that equation (\ref{komp_gen}) conserves the total number of photons follows directly from its divergent form. Thus, as expected, the basic properties of the Kompaneets equation are retained in equation (\ref{komp_gen}). Equation (\ref{komp_gen}) can be simplified in the case $h\nu\ll kT_{\rm e}$. 
Omitting the terms related to quantum effects and induced scattering, we find \begin{equation} \frac{\partial n(\nu)}{\partial\tau}=\frac{kT_{\rm e}}{m_{\rm e} c^2}\frac{1} {\nu^2}\frac{\partial}{\partial \nu}\nu^4\left[\frac{\partial n}{\partial\nu} +\frac{kT_{\rm e}}{m_{\rm e} c^2}\left(\frac{5}{2}\frac{\partial n}{\partial\nu} +\frac{21}{5}\nu\frac{\partial^2 n}{\partial\nu^2}+\frac{7}{10}\nu^2 \frac{\partial^3 n}{\partial\nu^3}\right)\right]. \label{komp_gen_a} \end{equation} This equation allows one to derive the first-order relativistic corrections to the effect of distortion of the CMB spectrum in clusters of galaxies with hot gas \citep[][see another way of finding these analytical corrections in Sazonov \& Sunyaev 1998]{itoetal98,chalas98}. In another limit, $h\nu(h\nu/m_{\rm e} c^2)\gg kT_{\rm e}$, when the Doppler effect is not important, one obtains (ignoring induced-scattering terms) \begin{equation} \frac{\partial n(\nu)}{\partial\tau}=\frac{h}{m_{\rm e} c^2}\frac{1}{\nu^2} \frac{\partial}{\partial \nu}\nu^4\left(n+\frac{7}{10}\frac{h\nu^2}{m_{\rm e} c^2} \frac{\partial n}{\partial\nu}\right). \label{komp_gen_b} \end{equation} The second parenthesized term in this equation, which describes frequency diffusion of photons, was added to the Kompaneets equation by \cite{rosetal78} and \cite{illetal79}. This correction becomes especially important when studying the scattering of hard radiation on cold electrons in an optically thick medium. Such a situation takes place, for example, during a supernova explosion; an analytical solution to the corresponding diffusion problem was derived and employed in a calculation of the evolution of the X-ray spectrum of Supernova 1987A by \cite{gresun87}. 
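The equilibrium property of equation (\ref{komp_gen}) noted earlier --- that substituting the Planckian occupation number yields $\partial n/\partial\tau=0$ --- can also be checked numerically. In the variable $x=h\nu/kT_{\rm e}$ the classical Kompaneets bracket is $n(1+n)+dn/dx$, and once it vanishes identically the relativistic-correction terms of equation (\ref{komp_gen}) collapse (up to an overall factor) to the combination $n'-2n'^2+2(1+2n)n''+n'''$, primes denoting $d/dx$. A sketch using central finite differences (Python; names are ours):

```python
import math

def n_pl(x):
    """Planck occupation number; x = h*nu/kT_e."""
    return 1.0 / math.expm1(x)

def deriv(f, x, k, h=1e-3):
    """k-th derivative (k = 1, 2, 3) by central differences."""
    if k == 1:
        return (f(x + h) - f(x - h)) / (2.0 * h)
    if k == 2:
        return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2
    return (f(x + 2*h) - 2.0 * f(x + h) + 2.0 * f(x - h) - f(x - 2*h)) / (2.0 * h**3)

for x in (0.5, 1.0, 3.0):
    n = n_pl(x)
    n1, n2, n3 = (deriv(n_pl, x, k) for k in (1, 2, 3))
    komp = n * (1.0 + n) + n1                                 # classical bracket
    rel = n1 - 2.0 * n1**2 + 2.0 * (1.0 + 2.0 * n) * n2 + n3  # correction bracket
    print("x = %.1f:  %.1e  %.1e" % (x, komp, rel))           # both consistent with 0
```

Both brackets vanish to within the finite-difference truncation error, confirming that a Planck spectrum at the electron temperature is stationary under equation (\ref{komp_gen}).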
In the intermediate case of $h\nu\gg kT_{\rm e}$, $h\nu(h\nu/m_{\rm e} c^2)\ll kT_{\rm e}$, the dispersion term $(kT_{\rm e}/m_{\rm e} c^2)\nu^{-2}\partial/ \partial\nu(\nu^4\partial n/\partial\nu)$ of the Kompaneets equation, which describes the diffusion of the photons due to the Doppler effect, must be added to equation (\ref{komp_gen_b}). \subsection{Kinetic Equation for Problems with a Decisive Role of Induced Compton Scattering} If the conditions $n\gg 1$ and $n\gg kT_{\rm e}/h\nu$ are both satisfied, then equation (\ref{komp_gen}) will simplify to (only induced-scattering terms have been retained) \begin{equation} \frac{\partial n(\nu)}{\partial\tau}=\frac{h}{m_{\rm e} c^2}\frac{1}{\nu^2} \frac{\partial}{\partial \nu}\nu^4\left\{n^2+\frac{kT_{\rm e}}{m_{\rm e} c^2} \left[\frac{5}{2}n^2+\frac{42}{5}\nu n\frac{\partial n} {\partial\nu}+\frac{14}{5}\nu^2 n\frac{\partial^2 n}{\partial\nu^2}-\frac{7}{5}\nu^2\left(\frac{\partial n}{\partial \nu}\right)^2\right]\right\}. \label{komp_gen_c} \end{equation} It is interesting that only correction terms that are proportional to $kT_{\rm e}/m_{\rm e} c^2$ appear in this equation. There is no term proportional to $h\nu/m_{\rm e} c^2$, although such a term is present in the diffusion equation describing the spontaneous scattering process, equation (\ref{komp_gen_b}). This is a result of the joint operation of Compton recoil and Klein-Nishina corrections, which both contribute to the first two moments of the kernel (eq. [\ref{p_moments}]). In the past, many phenomena caused by induced Compton scattering were investigated in the nonrelativistic approximation, using only the main term in equation (\ref{komp_gen_c}). 
One such phenomenon is distortions in the low-frequency radiation spectra of radio sources \citep{sunyaev71}, which become large if $kT_b=0.5I_{\nu}\lambda^2\gg m_{\rm e} c^2/\tau(1+\tau)$, where $I_{\nu}$ is the intensity of quasi-isotropic radiation at a wavelength $\lambda$ and $\tau$ is the Thomson optical depth of the scattering cloud. Particularly interesting is the case of bright extragalactic radio sources, for which $kT_b\gg m_{\rm e} c^2/\tau$ even though $\tau\ll 1$, because $T_b\sim 10^{11}$--$10^{13}$~K. Other phenomena include plasma heating \citep{peyraud68,zellev70,levsun71,vinpus72,blasch76,illkom77} and induced light-pressure force \citep{levetal72} in the vicinity of astrophysical objects emitting low-frequency radiation with high $T_b$. Obviously, relativistic corrections, described by equation (\ref{komp_gen_c}), could play an important role in such problems. In particular, these terms (although small) should play the role of viscosity for such phenomena as the formation of shock waves in the photon spectrum during Bose-condensation of photons \citep{zelsun72}. Induced Compton scattering may also lead to essentially anisotropic effects, such as narrowing or spreading (depending on the spectrum of the radiation) of a radiation beam traversing a plasma \citep{goletal75,zelsun76}. It may also play a major role in the interaction of beams of maser radiation having narrow spectra ($\Delta\nu<\Delta\nu_D$, where $\Delta\nu_D=\nu[2(1-\mu_{\rm s})kT_{\rm e}/m_{\rm e} c^2]^{1/2}$ is the Doppler shift) (Zeldovich et al. 1972).
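The criterion for strong induced-Compton distortions quoted above is trivial to evaluate; a small sketch (Python; the constant is $m_{\rm e} c^2/k\approx 5.93\times10^9$~K, and the example values are illustrative only):

```python
# Criterion for strong induced-Compton spectral distortions:
# k*T_b >> m_e c^2 / [tau*(1 + tau)]
MEC2_OVER_K = 5.93e9  # m_e c^2 / k in kelvin

def induced_compton_strong(T_b, tau):
    """True if k*T_b exceeds m_e c^2/[tau*(1+tau)] for brightness
    temperature T_b [K] and Thomson optical depth tau."""
    return T_b > MEC2_OVER_K / (tau * (1.0 + tau))

print(induced_compton_strong(1e12, 0.1))  # bright extragalactic radio source
print(induced_compton_strong(1e4, 0.1))   # low-T_b source: effect negligible
```

The first example satisfies the condition even at $\tau=0.1$, illustrating why induced scattering matters for sources with $T_b\sim 10^{11}$--$10^{13}$~K despite $\tau\ll 1$.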
Let us write (for an infinite homogeneous medium) the integral kinetic equation that arises in such problems, \begin{eqnarray} \frac{\partial n(\nu,\bg{\Omega},\tau)}{\partial\tau} &=& n(\nu,\bg{\Omega},\tau) \int n(\nu^\prime,\bg{\Omega^\prime},\tau)\left[\left(\frac{\nu^\prime}{\nu} \right)^2 K(\nu^\prime,\bg{\Omega^\prime}\rightarrow\nu,\bg{\Omega})- K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})\right] d\nu^\prime d\bg{\Omega^\prime} \nonumber\\ &=& n(\nu,\bg{\Omega},\tau)\int n(\nu^\prime,\bg{\Omega^\prime},\tau) K^{\rm ind}(\nu,\bg{\Omega};\nu^\prime,\bg{\Omega^\prime})\,d\nu^\prime d\bg{\Omega^\prime}, \label{kinetic_ind} \end{eqnarray} where $n(\nu)$ is the occupation number in photon phase space, and we have introduced the new kernel $K^{\rm ind}(\nu,\bg{\Omega};\nu^\prime,\bg{\Omega^\prime})$. In the limit $h\nu\ll m_ec^2$, $h\nu\ll kT_{\rm e}$, which always takes place in the case of compact sources of low-frequency radiation, a particularly simple expression for this kernel can be given. Indeed, we can expand the exponential in the expression (\ref{k}) according to equation (\ref{q}) of \S 5, and then, taking the second term in the resulting series out of the exponential, get \begin{eqnarray} K^{\rm ind}(\nu,\bg{\Omega};\nu^\prime,\bg{\Omega^\prime}) &=& \frac{3}{16\pi}\sqrt{\frac{2}{\pi}}\eta^{-3/2}\frac{h\nu^{\prime 2} (\nu^\prime-\nu)}{m_{\rm e} c^2 g^3}(1-\mu_{\rm s}) \left[1+\mu_{\rm s}^2+\left(\frac{1}{8}-\mu_{\rm s}-\frac{63}{8}\mu_{\rm s}^2+5\mu_{\rm s}^3\right)\eta \right. \nonumber\\ &&\left. -\mu_{\rm s}(1-\mu_{\rm s}^2)\left(\frac{\nu^\prime-\nu}{g}\right)^2 -\frac{3(1+\mu_{\rm s}^2)}{8\eta}\left(\frac{\nu^\prime-\nu}{g}\right)^4\right] \exp{\left[-\frac{(\nu^\prime-\nu)^2}{2g^2\eta}\right]}, \nonumber\\ g &=& |\nu\bg{\Omega}-\nu^\prime\bg{\Omega^\prime}| =(\nu^2-2\nu\nu^\prime\mu_{\rm s}+\nu^{\prime 2})^{1/2}. 
\label{k_ind} \end{eqnarray} Equation (\ref{k_ind}), like the formula in equation (\ref{k_set}) for the $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ kernel, can be used when the plasma is mildly relativistic, $kT_{\rm e}\lesssim 0.1m_{\rm e} c^2$. In the case of nonrelativistic electrons, $kT_{\rm e}\ll m_{\rm e} c^2$, the correction terms in equation (\ref{k_ind}) can be neglected, and $\nu^\prime$ can be replaced by $\nu$ everywhere except in the difference $\nu^\prime-\nu$, which results in the following formula \begin{equation} K^{\rm ind}_{\rm nr}(\nu,\bg{\Omega};\nu^\prime,\bg{\Omega^\prime}) =\frac{3}{32}(\pi\eta)^{-3/2}\nu^{-1}\frac{h(\nu^\prime-\nu)}{m_{\rm e} c^2} (1-\mu_{\rm s})^{-1/2}(1+\mu_{\rm s}^2)\exp{\left[-\frac{(\nu^\prime-\nu)^2}{4\nu^2(1-\mu_{\rm s}) \eta} \right]}, \label{k_ind_nr} \end{equation} which was earlier obtained by \cite{zeletal72}. Integrating the kernel (eq. [\ref{k_ind}]) over all scattering angles gives the corresponding kernel for the isotropic problem. This kernel is, however, more easily deduced from equation (\ref{p_set}): \begin{equation} P^{\rm ind}(\nu;\nu^\prime)=\left(\frac{\nu^\prime}{\nu}\right)^2 P(\nu^\prime\rightarrow\nu)-P(\nu\rightarrow\nu^\prime) =\sqrt{\frac{2}{\pi}}\eta^{-3/2} \frac{h(\nu^2+\nu^{\prime 2})(\nu^\prime-\nu)}{m_{\rm e} c^2\nu^2(\nu+\nu^\prime)} (p_0+p_t), \label{p_ind} \end{equation} where $p_0$ and $p_t$ are given by equation (\ref{pp}). The kernel given in equation (\ref{p_ind}) is applicable in the limit $h\nu\ll kT_{\rm e}\lesssim 0.1m_{\rm e} c^2$ and makes it possible to obtain the differential equation (\ref{komp_gen_c}). \section{DERIVATION OF THE $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ KERNEL} Consider a photon of frequency $\nu$ that propagates in the direction $\bg{\Omega}$.
We calculate the probability (per unit dimensionless time, $\tau$) that the photon will be scattered by a Maxwellian distribution of electrons into a solid-angle interval $d\bg{\Omega^\prime}$, with the emergent photon frequency falling in an interval $d\nu^\prime$. Our primary goal is to derive a formula that would be a good approximation for situations in which both the electrons and photons are mildly relativistic, i.e., when $\eta=kT_{\rm e}/m_{\rm e} c^2, h\nu/m_{\rm e} c^2\sim 0.1$. Therefore, we make an initial assumption that $\eta,\, h\nu/m_{\rm e} c^2\ll 1$. The final formula will contain correction terms of order up to $\eta^{3/2}$, $\eta^{1/2} h\nu/m_{\rm e} c^2$, and $(h\nu/m_{\rm e} c^2)^2$. The term of order $(h\nu/m_{\rm e} c^2)^2$ originates directly from the Klein-Nishina formula for the scattering cross section. We retain this term in our final expression on purpose (although it causes the formula to be of a slightly better accuracy in terms of photon energy than in terms of electron temperature), because this expression without the temperature-correction terms then becomes the {\sl exact formula} for the case of scattering of photons of arbitrary (including $h\nu\gg m_{\rm e} c^2$) energy on nonrelativistic electrons ($\eta\ll 1$). Let us introduce the following system of reference: ${\bf Ox}$ points along $\bg{\Omega}$, ${\bf Oy}$ is in the ($\bg{\Omega}$, $\bg{\Omega^\prime}$) plane, ${\bf Oz}$ is normal to this scattering plane. There are two basic equations. The first is the energy-conservation relation \citep[e.g.,][]{pozetal83}: \begin{equation} \frac{\nu^\prime}{\nu}=\frac{1-\bg{\Omega\beta}}{1-\bg{\Omega^\prime\beta} +(h\nu/\gamma m_{\rm e} c^2)(1-\cos{\alpha})}, \label{en_cons} \end{equation} where $c\bg{\beta}$ is the electron velocity and $\cos{\alpha}=\bg{\Omega}\bg{\Omega^\prime}$.
The second equation describes the differential cross section for Compton scattering \citep{jauroh76,beretal82}: \begin{equation} \frac{d\sigma}{d\bg{\Omega^\prime}}=\frac{3\sigma_{\rm T}} {16\pi\gamma^2}\frac{X} {(1-\bg{\Omega\beta})^2}\left(\frac{\nu^\prime}{\nu}\right)^2, \label{klein} \end{equation} where \begin{equation} X=2-\frac{2(1-\cos{\alpha})}{\gamma^2(1-\bg{\Omega\beta}) (1-\bg{\Omega^\prime\beta})}+\frac{(1-\cos{\alpha})^2}{\gamma^4(1- \bg{\Omega\beta})^2(1-\bg{\Omega^\prime\beta})^2}+\frac{\nu^\prime}{\nu} \left(\frac{h\nu}{m_{\rm e} c^2}\right)^2\frac{(1-\cos{\alpha})^2} {\gamma^2(1-\bg{\Omega\beta})(1-\bg{\Omega^\prime\beta})}. \label{x} \end{equation} Introducing the components of the electron velocity ($\beta_x$, $\beta_y$, $\beta_z$), we find \begin{eqnarray} \bg{\Omega\beta}=\beta_x,\,\,\,\bg{\Omega^\prime\beta}=\beta_x \cos{\alpha}+\beta_y\sin{\alpha}. \label{omegab} \end{eqnarray} Equation (\ref{en_cons}) imposes a link between the different $\bg{\beta}$ components for given $\nu^\prime/\nu$ and $\alpha$: \begin{equation} \beta_y=\frac{1}{\sin{\alpha}}\left[\beta_x\left(\frac{\nu}{\nu^\prime} -\cos{\alpha}\right)+1-\frac{\nu}{\nu^\prime}+\frac{h\nu}{\gamma m_{\rm e} c^2}(1-\cos{\alpha})\right], \label{beta_y} \end{equation} with \begin{equation} \gamma^2=\frac{1}{1-(\beta_x^2+\beta_y^2+\beta_z^2)}.\ \label{gamma} \end{equation} In order to calculate $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$, we ought to carry out the following integration over electron velocities: \begin{equation} K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})\equiv\frac{dP} {d\tau\,d\Omega^\prime\,d\nu^\prime}=\frac{1}{\sigma_{\rm T}}\int \frac{d\sigma}{d\bg{\Omega^\prime}} (1-\bg{\Omega\beta}) f_{\beta}(\beta_x,\beta_y,\beta_z)\left|\frac{\partial \beta_y}{\partial \nu^\prime}\right|\,d\beta_x d\beta_z. 
\label{k_gen} \end{equation} Here, the factor $(1-\bg{\Omega\beta})$ allows for the relative velocity of the photon and electron along the direction of the former's motion \citep{lanlif75}, and $f_{\beta}(\beta_x,\beta_y,\beta_z)$ is the electron velocity distribution function. In equation (\ref{k_gen}), one of the velocity components (say, $\beta_y$) must be expressed through the other two. The interval, $d\nu^\prime$, of the photon frequency after scattering can then be related to the corresponding interval of $\beta_y$; hence the appearance of the factor $|\partial \beta_y/\partial \nu^\prime|$ in equation (\ref{k_gen}). For $f_{\beta}$ we substitute the relativistic Maxwellian distribution function, \begin{equation} f_{\beta}=(2\pi\eta)^{-3/2}\left(1+\frac{15}{8}\eta\right)^{-1} \left(1+\frac{5}{2}\beta^2-\frac{3}{8}\frac{\beta^4}{\eta}\right) \exp{\left(-\frac{\beta^2}{2\eta}\right)}, \label{fm} \end{equation} where we have retained only correction terms of order $\eta$. We can proceed with the integration in equation (\ref{k_gen}) by expanding the integrand in powers of $\beta$. A correct account of the temperature terms of order $\beta^2$ and $\beta^3$ necessitates inclusion of the corresponding terms in equation (\ref{fm}) for the velocity distribution (contrary to the statement made in BR70). We also note that the terms of order $\beta^3$, $\beta h\nu/m_{\rm e} c^2$, and $(h\nu/m_{\rm e} c^2)^2$, which we keep throughout, were neglected altogether in the derivation of BR70. As follows from equation (\ref{beta_y}), the derivative $\partial \beta_y/\partial \nu^\prime$ to the first order is \begin{equation} \frac{\partial\beta_y}{\partial\nu^\prime}=\frac{\nu}{\nu^{\prime 2} \sin{\alpha}}(1-\beta_x). \label{deriv1} \end{equation} The last bracketed term in the expression (\ref{beta_y}) gives rise to additional terms of order $\beta h\nu/m_{\rm e} c^2$ due to the presence of the factor $1/\gamma$.
Using equation (\ref{gamma}) we find \begin{equation} \frac{\partial(1/\gamma)}{\partial\nu^\prime}\approx -\beta_y\frac{\partial\beta_y}{\partial\nu^\prime}\approx -\beta_y\frac{\nu}{\nu^{\prime 2}\sin{\alpha}}, \label{deriv2} \end{equation} which finally yields \begin{equation} \frac{\partial\beta_y}{\partial\nu^\prime}=\frac{\nu}{\nu^{\prime 2} \sin{\alpha}}\left\{1-\beta_x-\frac{h\nu}{m_{\rm e} c^2}\frac{1-\cos{\alpha}} {\sin^2{\alpha}}\left[\beta_x\left(\frac{\nu} {\nu^\prime}-\cos{\alpha}\right)+1-\frac{\nu}{\nu^\prime}+\frac{h\nu}{m_{\rm e} c^2} (1-\cos{\alpha})\right] \right\}. \label{deriv3} \end{equation} Note that the main Klein-Nishina correction to the scattering cross section, which is of the order of $h\nu/m_{\rm e} c^2$, is contained in the factor $(\nu^\prime/\nu)^2$ in equation (\ref{klein}), rather than in the $X$ function (eq. [\ref{x}]), which describes Doppler aberration and the Klein-Nishina correction of order $(h\nu/m_{\rm e} c^2)^2$. The reciprocal factor, $\nu/\nu^{\prime 2}$, enters equation (\ref{deriv3}), so upon multiplication of $d\sigma/d\bg{\Omega^\prime}$ and $|\partial \beta_y/\partial \nu^\prime|$ in the integrand of equation (\ref{k_gen}), the presence of the $O(h\nu/m_{\rm e} c^2)$ correction in $K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})$ is not explicit. Proceeding further with expansions, we obtain \begin{eqnarray} \frac{1}{\sigma_{\rm T}}\frac{d\sigma}{d\bg{\Omega^\prime}} (1-\bg{\Omega\beta})\left|\frac{\partial \beta_y}{\partial \nu^\prime}\right|=\frac{3}{16\pi}\frac{1}{\nu\sin{\alpha}} \left\{1+\mu_{\rm s}^2+2s(-\mu_{\rm s}+\mu_{\rm s}^2)+2\beta_x(-\mu_{\rm s}+\mu_{\rm s}^2) \right. \nonumber\\ \left. +s^2(1-4\mu_{\rm s}+3\mu_{\rm s}^2)+2s\beta_x(1-3\mu_{\rm s}+2\mu_{\rm s}^2) +\beta_x^2(1-4\mu_{\rm s}+3\mu_{\rm s}^2)+\beta^2(-1+2\mu_{\rm s}-3\mu_{\rm s}^2) \right. \nonumber\\ \left. 
+2s^3(1-3\mu_{\rm s}+2\mu_{\rm s}^2)+2s^2\beta_x(2-5\mu_{\rm s}+3\mu_{\rm s}^2) +2s\beta_x^2(2-5\mu_{\rm s}+3\mu_{\rm s}^2)+2\beta_x^3(1-3\mu_{\rm s}+2\mu_{\rm s}^2) \right. \nonumber\\ \left. +2\beta^2 s(-1+4\mu_{\rm s}-3\mu_{\rm s}^2)+2\beta^2 \beta_x(-1+4\mu_{\rm s}-3\mu_{\rm s}^2) +\frac{h\nu}{m_{\rm e} c^2}\left(1-\frac{\nu}{\nu^\prime}+\frac{h\nu}{m_{\rm e} c^2} (1-\mu_{\rm s})\right)\frac{(1+\mu_{\rm s}^2)}{1+\mu_{\rm s}} \right. \nonumber\\ \left. -\frac{h\nu}{m_{\rm e} c^2}\beta_x\frac{(1-\mu_{\rm s})(1+\mu_{\rm s}^2)}{1+\mu_{\rm s}} +\frac{\nu^\prime}{\nu}\left(\frac{h\nu}{m_{\rm e} c^2}\right)^2(1-\mu_{\rm s})^2\right\}, \label{ind} \end{eqnarray} where $\mu_{\rm s}=\cos{\alpha}$, \begin{equation} s=\bg{\Omega^\prime\beta}= \beta_x\frac{\nu}{\nu^\prime}+1-\frac{\nu}{\nu^\prime}+\frac{h\nu}{m_{\rm e} c^2} (1-\mu_{\rm s}), \label{s} \end{equation} \begin{equation} \beta^2\approx\tilde{\beta^2}= \beta_x^2+\beta_z^2+\frac{1}{1-\mu_{\rm s}^2}\left[\beta_x\left(\frac{\nu} {\nu^\prime}-\mu_{\rm s}\right)+1-\frac{\nu}{\nu^\prime}+\frac{h\nu}{m_{\rm e} c^2} (1-\mu_{\rm s})\right]^2. \label{beta2} \end{equation} Here $\tilde{\beta^2}$ must be substituted for $\beta^2$ in equation (\ref{ind}) and in the terms preceding the exponential in equation (\ref{fm}). Note that in the situation of scattering of nonrelativistic electrons ($\eta\ll 1$), equation (\ref{ind}) reduces to a simple formula: \begin{equation} \frac{1}{\sigma_{\rm T}}\frac{d\sigma}{d\bg{\Omega^\prime}} (1-\bg{\Omega\beta})\left|\frac{\partial \beta_y}{\partial \nu^\prime}\right|=\frac{3}{16\pi}\frac{1}{\nu\sin{\alpha}} \left[1+\mu_{\rm s}^2+\frac{\nu^\prime}{\nu}\left(\frac{h\nu}{m_{\rm e} c^2}\right)^2 (1-\mu_{\rm s})^2\right], \label{ind_nr} \end{equation} which is valid for arbitrary values of photon energy, including $h\nu\gg m_{\rm e} c^2$. Our final formula will consequently be exact in the limit $\eta\ll 1$, as we claimed at the beginning of this section.
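The claim that equation (\ref{ind_nr}) holds for arbitrary photon energies rests on the rest-frame cross section being exact Klein-Nishina, which is easy to spot-check: for $\beta=0$, equations (\ref{en_cons}), (\ref{klein}), and (\ref{x}) reduce to the Klein-Nishina formula, and its integral over solid angle must reproduce the standard total Klein-Nishina cross section. A sketch ($\sigma_{\rm T}\equiv 1$; the closed-form total cross section below is the textbook expression, not something derived in this paper):

```python
import math
from scipy.integrate import quad

def dsigma_rest(mus, eps):
    """d sigma / d Omega' (units of sigma_T) for an electron at rest.
    mus = cos(alpha), eps = h nu/(m_e c^2); eqs. (klein), (x) with beta = 0."""
    r = 1.0 / (1.0 + eps * (1.0 - mus))          # nu'/nu from eq. (en_cons)
    X = (2.0 - 2.0 * (1.0 - mus) + (1.0 - mus)**2
         + r * (eps * (1.0 - mus))**2)
    return 3.0 / (16.0 * math.pi) * X * r**2

def sigma_total_num(eps):
    """Numerical integral over solid angle."""
    val, _ = quad(lambda m: 2.0 * math.pi * dsigma_rest(m, eps), -1.0, 1.0)
    return val

def sigma_kn(x):
    """Standard closed-form Klein-Nishina total cross section, x = h nu/(m_e c^2)."""
    return 0.75 * ((1.0 + x) / x**3
                   * (2.0 * x * (1.0 + x) / (1.0 + 2.0 * x)
                      - math.log(1.0 + 2.0 * x))
                   + math.log(1.0 + 2.0 * x) / (2.0 * x)
                   - (1.0 + 3.0 * x) / (1.0 + 2.0 * x)**2)
```

For $h\nu\rightarrow 0$ the angular part reduces to the Thomson value $(3/16\pi)(1+\mu_{\rm s}^2)$ and the total cross section to $\sigma_{\rm T}$.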
Having made this remark, let us return to considering our main situation of interest, i.e., with mildly relativistic electrons and photons. The accuracy of equation (\ref{beta2}) turns out to be insufficient for describing the exponential factor of the distribution function (eq. [\ref{fm}]), in which $\beta^2$ is divided by $\eta$. In this factor, $\beta^2$ must be given accurately to the fifth order, which requires the inclusion of the factor $1/\gamma$ in equation (\ref{beta_y}) (similarly to the situation with $\partial\beta_y/\partial\nu^\prime$ above). We consequently get \begin{equation} \exp\left(-\frac{\beta^2}{2\eta}\right)=\exp\left(-\frac{\tilde{\beta^2}+ \Delta\beta^2}{2\eta}\right)\approx \left(1-\frac{\Delta\beta^2}{2\eta}\right)\exp\left(-\frac{\tilde{\beta^2}} {2\eta}\right), \end{equation} where \begin{equation} \Delta\beta^2=-\frac{1-\cos{\alpha}}{\sin{\alpha}}\frac{h\nu}{m_{\rm e} c^2}\beta_y \tilde{\beta^2}, \end{equation} with $\beta_y$ and $\tilde{\beta^2}$ given by equations (\ref{beta_y}) and (\ref{beta2}), respectively. Now, having completed all the necessary preparations, we can implement the integration in equation (\ref{k_gen}). Integrals of the following type then appear: \[ \int \beta_x^k\beta_z^l\exp{\left\{-\frac{1}{2\eta}[\beta_x^2+\beta_z^2+ (a\beta_x+b)^2]\right\}}\,d\beta_x\,d\beta_z, \] where $a$ and $b$ are constants set by equation (\ref{beta2}). Such integrals are readily done (see, e.g., BR70). It is natural to present the final result in the form of a series in terms of the quantity \begin{equation} \Delta=\nu^\prime-\nu+\nu^\prime\frac{h\nu}{m_{\rm e} c^2}(1-\cos{\alpha}). \label{delta} \end{equation} Indeed, consider the situation with $kT_{\rm e}=0$, when all scattered photons undergo the same decrement in frequency, owing to Compton recoil: $\nu/\nu^\prime=1+(1-\cos{\alpha})h\nu/m_{\rm e} c^2$, as follows from equation (\ref{en_cons}). This shift corresponds exactly to $\Delta=0$.
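Returning briefly to the Gaussian integrals quoted above: for $k=l=0$, completing the square in $\beta_x$ gives the elementary closed form $2\pi\eta\,(1+a^2)^{-1/2}\exp[-b^2/2\eta(1+a^2)]$, and higher moments follow from the fact that $\beta_x$ is then Gaussian with mean $-ab/(1+a^2)$ and variance $\eta/(1+a^2)$. A numerical sketch (with arbitrary test values of $a$, $b$, $\eta$):

```python
import math
from scipy.integrate import dblquad

eta, a, b = 0.05, 0.7, 0.12

def w(bx, bz):
    """The Gaussian weight exp{-[bx^2 + bz^2 + (a*bx + b)^2]/(2 eta)}."""
    return math.exp(-(bx**2 + bz**2 + (a * bx + b)**2) / (2.0 * eta))

# Completing the square: (1+a^2)(bx + ab/(1+a^2))^2 + b^2/(1+a^2) + bz^2
I00 = (2.0 * math.pi * eta / math.sqrt(1.0 + a**2)
       * math.exp(-b**2 / (2.0 * eta * (1.0 + a**2))))
m = -a * b / (1.0 + a**2)     # mean of bx under the weight
s2 = eta / (1.0 + a**2)       # variance of bx under the weight
I20 = I00 * (s2 + m**2)       # the k = 2, l = 0 integral

inf = math.inf
I00_num, _ = dblquad(lambda bz, bx: w(bx, bz),
                     -inf, inf, lambda x: -inf, lambda x: inf)
I20_num, _ = dblquad(lambda bz, bx: bx**2 * w(bx, bz),
                     -inf, inf, lambda x: -inf, lambda x: inf)
```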
If we now allow a nonvanishing electron temperature ($kT_{\rm e}\neq 0$), the scattered profile will be Doppler-broadened near the recoil-shifted peak of the line. The term $\Delta$ will then measure the frequency variation relative to this peak and therefore will always be of the order of $\nu\eta^{1/2}$, which ensures that the final expression will be convergent regardless of the proportion of $h\nu$ and $kT_{\rm e}$ (for comparison, we can consider a similar quantity $\nu^\prime-\nu$, which is $\sim\nu\eta^{1/2}$ if the Doppler effect is dominant but $\sim -h\nu^2/m_{\rm e} c^2$ if recoil is more important). Moreover, note that $\Delta$ enters as an entity in all the major expressions we obtained above (see eqs. [\ref{beta_y}], [\ref{deriv3}]--[\ref{beta2}]). After lengthy calculations, we finally arrive at equation (\ref{k_set}) given in \S 2. \section{CALCULATION OF THE ANGULAR SCATTERING FUNCTION} The angular function, $d\sigma/d\mu_{\rm s}$, for Compton scattering on Maxwellian electrons can be directly calculated using the same formalism that we employed to derive the $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ kernel in the preceding section. In fact, the derivation of $d\sigma/d\mu_{\rm s}$ is less tedious than that of $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$. For this reason, we here calculate the angular function to a better accuracy than that of our final formula for the kernel (eq. [\ref{k_set}]). Namely, we aim to obtain an expression that contains correction terms of the order of $(kT_{\rm e}/m_{\rm e} c^2)^2$ and $(h\nu/m_{\rm e} c^2)(kT_{\rm e}/m_{\rm e} c^2)$ to the Rayleigh angular function. The term of order $(h\nu/m_{\rm e} c^2)^2$ will also be found, but this term already follows from equation (\ref{k_nr}) for the $K_{\rm nr}$ kernel, which, as we recall, is accurate for arbitrary photon energies in the case of nonrelativistic electrons ($kT_{\rm e}\ll m_{\rm e} c^2$).
In order to find the angular function, the integration over the electron velocity space needs to be done: \begin{equation} \frac{d\sigma}{d\mu_{\rm s}}=\frac{2\pi}{\sigma_{\rm T}}\int \frac{d\sigma}{d\bg{\Omega^\prime}}(1-\bg{\Omega\beta}) f_{\beta}(\beta_x,\beta_y,\beta_z)\,d\beta_x d\beta_y d\beta_z. \label{ang_gen} \end{equation} Here the differential scattering cross section, $d\sigma/d\bg{\Omega^\prime}$, for given $(\beta_x,\beta_y,\beta_z)$ and $h\nu/m_{\rm e} c^2$ is given by equations (\ref{en_cons}) and (\ref{klein}), and the relativistic Maxwellian distribution function is represented by the series given in equation (\ref{fm}). The principal difference (which simplifies the calculation) of equation (\ref{ang_gen}) from the similar equation (\ref{k_gen}) is that $\beta_y$ is now a free parameter, like the other components of the electron velocity. Next, the quantity $(1-\bg{\Omega\beta})d\sigma/d\bg{\Omega^\prime}$ needs to be expanded in powers of $\beta_x$, $\beta_y$, $\beta_z$ (to fourth order) and $h\nu/m_{\rm e} c^2$ (to second order), as we similarly did (to a worse accuracy) in the previous section (see eq. [\ref{ind}]). The resultant expression is rather cumbersome, so we do not give it here. For the $f_{\beta}$ distribution function, the approximation of equation (\ref{fm}) is sufficient, because the next order terms, $O(\eta^2,\eta\beta^2, \beta^4,\beta^6/\eta,\beta^8/\eta^2)$, in the series $f_{\beta}$ cancel upon integration. The integration in equation (\ref{ang_gen}) reduces to the calculation of standard integrals \begin{equation} \int \beta_x^k\beta_y^l\beta_z^m \exp{\left\{-\frac{\beta_x^2+\beta_y^2 +\beta_z^2}{2\eta}\right\}}\,d\beta_x d\beta_y d\beta_z, \end{equation} where $k$, $l$, and $m$ are even numbers [odd terms with respect to one of the $(\beta_x,\beta_y,\beta_z)$ components vanish upon integration]. The final result is equation (\ref{ang2}).
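These standard integrals factorize into one-dimensional Gaussian moments, $\int_{-\infty}^{\infty}x^k e^{-x^2/2\eta}\,dx=\sqrt{2\pi\eta}\,(k-1)!!\,\eta^{k/2}$ for even $k$ (and zero for odd $k$), so the three-dimensional integral is a product of three such factors. A quick numerical verification of the one-dimensional building block:

```python
import math
from scipy.integrate import quad

def moment_1d(k, eta):
    """Numerical Gaussian moment: integral of x^k exp(-x^2/(2 eta)) over the real line."""
    val, _ = quad(lambda x: x**k * math.exp(-x**2 / (2.0 * eta)),
                  -math.inf, math.inf)
    return val

def moment_closed(k, eta):
    """sqrt(2 pi eta) * (k-1)!! * eta^(k/2) for even k; zero for odd k."""
    if k % 2:
        return 0.0
    df = 1
    for i in range(k - 1, 0, -2):   # double factorial (k-1)!!
        df *= i
    return math.sqrt(2.0 * math.pi * eta) * df * eta ** (k // 2)
```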
\section{DERIVATION OF THE $P(\nu\rightarrow\nu^\prime)$ KERNEL} Here we demonstrate how to perform the integral in equation (\ref{k_int}) with $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ given by equation (\ref{k_set}) when $h\nu(h\nu/m_{\rm e} c^2)\ll kT_{\rm e}$. Write the exponential factor entering equation (\ref{k}) in the form of a polynomial: \begin{equation} \frac{\epsilon^2}{4(1-\mu_{\rm s})\eta}=\frac{(\nu^\prime-\nu)^2}{2g^2\eta} +\frac{h\nu\nu^\prime}{m_{\rm e} c^2}\frac{(\nu^\prime-\nu)(1-\mu_{\rm s})}{g^2\eta} +\left(\frac{h\nu\nu^\prime}{m_{\rm e} c^2}\right)^2\frac{(1-\mu_{\rm s})^2}{2g^2\eta} \label{q} \end{equation} where \begin{equation} g^2=2\nu\nu^\prime(1-\mu_{\rm s})+(\nu^\prime-\nu)^2. \label{g2} \end{equation} One can see from equation (\ref{g2}) that $g^2\rightarrow 0$ at $\nu^\prime=\nu$ when $\mu_{\rm s}\rightarrow 1$. The factor $g^2$ enters the denominator of the first term of the polynomial in equation (\ref{q}), which describes the Doppler broadening. Knowing this property, which means that $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ becomes a $\delta$-function in the limit $\mu_{\rm s}\rightarrow 1$, one can immediately predict that the $P(\nu\rightarrow\nu^\prime)$ kernel resulting from the integration in equation (\ref{k_int}) over $\mu_{\rm s}$ will have a cusp (a point where a break in the derivative occurs) at $\nu^\prime=\nu$. Such a cusp is indeed present in our final expression for $P(\nu\rightarrow\nu^\prime)$, as we shall see below. There is no singularity at $\mu_{\rm s}=1$ in the second and third terms of the polynomial (\ref{q}), which describe the frequency variation due to Compton recoil. This suggests that the cusp mentioned above will remain at the same position, $\nu^\prime=\nu$, regardless of the initial photon energy ($h\nu$), which indeed proves to be the case \citep[see the results of Monte Carlo simulations in Fig.~1 of the review by][]{pozetal83}.
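Note that the decomposition (\ref{q}) is an algebraic identity: multiplying through by $4(1-\mu_{\rm s})\eta$ and using the definition of $\Delta$ in equation (\ref{delta}) shows that its right-hand side equals $\Delta^2/(2g^2\eta)$, i.e. $\epsilon^2=2(1-\mu_{\rm s})\Delta^2/g^2$. This is straightforward to confirm numerically (a sketch; here \verb|eps| stands for $h\nu/m_{\rm e} c^2$, and the frequency unit is arbitrary):

```python
def rhs_q(nu, nup, mus, eps, eta):
    """Right-hand side of eq. (q); eps = h nu/(m_e c^2)."""
    g2 = 2.0 * nu * nup * (1.0 - mus) + (nup - nu)**2
    r = eps * nup                  # the combination h nu nu'/(m_e c^2) = eps * nu'
    return ((nup - nu)**2 / (2.0 * g2 * eta)
            + r * (nup - nu) * (1.0 - mus) / (g2 * eta)
            + r**2 * (1.0 - mus)**2 / (2.0 * g2 * eta))

def lhs_q(nu, nup, mus, eps, eta):
    """epsilon^2 / (4 (1-mus) eta), with epsilon^2 = 2 (1-mus) Delta^2 / g^2
    and Delta taken from eq. (delta)."""
    g2 = 2.0 * nu * nup * (1.0 - mus) + (nup - nu)**2
    delta = nup - nu + eps * nup * (1.0 - mus)
    eps2 = 2.0 * (1.0 - mus) * delta**2 / g2
    return eps2 / (4.0 * (1.0 - mus) * eta)
```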
By assumption ($h\nu\sim kT_{\rm e}$), the main contribution to the photon frequency increment by scattering comes from the Doppler effect. This implies that $\nu^\prime-\nu$ is typically $\sim \nu\eta^{1/2}$. The second and third terms in equation (\ref{q}) are thus infinitesimal of the order of $\eta^{1/2}$ and $\eta$, respectively. One can therefore take these terms out of the exponential, which will make possible analytical integration in equation (\ref{k_int}). It is convenient to present our final result for $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ in terms of the quantity \begin{equation} \delta=(2\eta)^{-1/2}\frac{\nu^\prime-\nu}{\nu+\nu^\prime}. \label{delta_t} \end{equation} This new variable, describing the relative frequency shift, is similar to $\epsilon$, in powers of which the original equation (\ref{k_set}) is written, when $h\nu\rightarrow 0$ (see the main term of eq. [\ref{epsilon2}] below). Moreover, the combination $(\nu^\prime-\nu)/(\nu+\nu^\prime)$ arises in a natural way if one first calculates the analog of the $P(\nu\rightarrow\nu^\prime)$ kernel for a monoenergetic isotropic electron distribution and then convolves this quantity with the Maxwellian distribution function (as we do in \S 6). Indeed, for a given electron speed, $\beta c$, the photon frequency after scattering can take values in the range $|\nu^\prime-\nu|/(\nu+\nu^\prime)\le \beta$. From this fact it becomes immediately clear that $P(\nu\rightarrow\nu^\prime)$ must (and indeed does) possess a certain symmetry if expressed in terms of $\delta$ given by equation (\ref{delta_t}). After a few intermediate steps (see eq. 
[\ref{k_set}]), which are \begin{eqnarray} \frac{\epsilon^2}{\eta} &=& 8\delta^2+4\sqrt{2}(1-\mu_{\rm s})\delta\frac{h\nu} {m_{\rm e} c^2}\eta^{-1/2}-\frac{16(1+\mu_{\rm s})}{1-\mu_{\rm s}}\delta^4\eta+8(1-\mu_{\rm s})\delta^2 \frac{h\nu}{m_{\rm e} c^2}+(1-\mu_{\rm s})^2\left(\frac{h\nu}{m_{\rm e} c^2}\right)^2\eta^{-1} \nonumber\\ &&-8\sqrt{2}(1+\mu_{\rm s}) \delta^3\frac{h\nu}{m_{\rm e} c^2}\eta^{1/2}+2\sqrt{2}(1-\mu_{\rm s})^2\delta \left(\frac{h\nu}{m_{\rm e} c^2}\right)^2\eta^{-1/2}+...\,, \label{epsilon2} \end{eqnarray} \begin{equation} \frac{\nu^\prime}{g}=[2(1-\mu_{\rm s})]^{-1/2}\left(1+\sqrt{2}\delta\eta^{1/2} -\frac{1+\mu_{\rm s}}{1-\mu_{\rm s}}\delta^2\eta-\frac{\sqrt{2}(1+ \mu_{\rm s})}{1-\mu_{\rm s}}\delta^3 \eta^{3/2}\right)+...\,, \end{equation} \begin{mathletters} we finally obtain \begin{eqnarray} K(\nu,\bg{\Omega}\rightarrow\nu^\prime,\bg{\Omega^\prime})=\nu^{-1}\frac{3} {32\pi}[\pi(1-\mu_{\rm s})\eta]^{-1/2}\exp{\left(-\frac{2\delta^2}{1-\mu_{\rm s}}\right)} \nonumber\\ \times\left\{\left[1+\sqrt{2}\delta\left(1-\frac{h\nu}{kT_{\rm e}}\right)\eta^{1/2} -4\delta^2\frac{h\nu}{m_{\rm e} c^2}+2\sqrt{2}\delta^3\left(-2+\frac{1}{3} \left(\frac{h\nu}{kT_{\rm e}}\right)^2\right)\frac{h\nu}{m_{\rm e} c^2} \eta^{1/2}\right]k_0 \right. \nonumber\\ \left. +\left[1+\sqrt{2}\delta\left(1-\frac{h\nu}{kT_{\rm e}}\right)\eta^{1/2}\right]k_t +\left[1+\sqrt{2}\delta\left(3-\frac{h\nu}{kT_{\rm e}}\right)\eta^{1/2}\right]k_r \right\}, \label{k_exp} \end{eqnarray} where \begin{eqnarray} k_0 &=& 1+\mu_{\rm s}^2, \nonumber\\ k_t &=& \left[\frac{1}{8}-\mu_{\rm s}-\frac{63}{8}\mu_{\rm s}^2+5\mu_{\rm s}^3+\frac {-1-5\mu_{\rm s}-\mu_{\rm s}^2+3\mu_{\rm s}^3}{1-\mu_{\rm s}}\delta^2 +\frac{2(-1+2\mu_{\rm s}-\mu_{\rm s}^2+2\mu_{\rm s}^3)}{(1-\mu_{\rm s})^2}\delta^4\right]\eta, \nonumber\\ k_r &=& (1+\mu_{\rm s}^2)\left(-\frac{1-\mu_{\rm s}}{4}+\delta^2\right) \left(\frac {h\nu}{m_{\rm e} c^2}\right)^2\eta^{-1}. 
\end{eqnarray}\ \label{k_exp_set} \end{mathletters} Note that we have omitted in equation (\ref{k_exp_set}) the second-order, $O((h\nu/m_{\rm e} c^2)^2)$, Klein-Nishina correction term, which was present in the original expression in equation (\ref{k_set}). This is because in the limit we are currently working with ($h\nu\sim kT_{\rm e}$), inclusion of this term would be inconsistent with the absence of (unknown) terms of the same order, such as $O(\eta^2)$ or $O((h\nu/m_{\rm e} c^2)^3\eta^{-1})$, in the series given in equation (\ref{k_exp}). The term $K(\nu,\bg{\Omega}\rightarrow \nu^\prime,\bg{\Omega^\prime})$ in the form (\ref{k_exp_set}) is easily integrated over the scattering angle using the change of variables ($\mu_{\rm s}\rightarrow t$) $2\delta^2/(1-\mu_{\rm s})=t^2$, which results in the final equation (\ref{p_set}) for the $P(\nu\rightarrow\nu^\prime)$ kernel for the isotropic problem. \section{DIRECT CALCULATION OF THE $P(\nu\rightarrow\nu^\prime)$ KERNEL IN THE CASE OF $h\nu\ll kT_{\rm e}$} Consider an isotropic field of electromagnetic radiation of frequency $\tilde{\nu}$. Its spectrum (number of photons per unit solid angle, unit frequency interval, and unit detector area) is \begin{equation} \frac{d N(\nu)}{d\Omega\,d\nu}=\frac{\delta(\nu-\tilde{\nu})}{4\pi}. \end{equation} Consider scattering of the radiation on an electron moving at a speed of $v=\beta c$. The energy of the photons is assumed to be low enough ($h\tilde{\nu}\ll m_{\rm e} v^2$) that Compton recoil can be ignored. We consider electrons that are not too relativistic: $(h\tilde{\nu}/m_{\rm e} c^2)\gamma\ll 1$, where $\gamma=(1-\beta^2)^{-1/2}$; thus Klein-Nishina relativistic corrections are not significant. For the moment, we ignore induced scattering. Consider the situation in the electron rest frame.
In this frame, the spectral intensity of the incident radiation is direction dependent: \begin{equation} \frac{d N_0(\mu_0,\nu_0)}{d\Omega_0\,d\nu_0}=\left(\frac{\nu_0}{\nu}\right)^2 \frac{d N(\nu)}{d\Omega\,d\nu}, \end{equation} where $\mu$ is the cosine of the angle between the velocity of the electron and the direction of propagation of the photon. Quantities that are measured in the electron rest frame (with subscript ``0'') are related to the corresponding quantities measured in the laboratory frame via the Lorentz transformations: \begin{equation} \mu_0=\frac{\mu-\beta}{1-\beta\mu},\,\,\nu_0=\frac{\nu}{\gamma(1+\beta\mu_0)}. \label{lorentz} \end{equation} This leads to \begin{equation} \frac{dN_0(\mu_0,\nu_0)}{d\Omega_0\,d\nu_0}=\frac{1}{4\pi\gamma^3(1+\beta \mu_0)^3}\delta\left(\nu_0-\frac{\tilde{\nu}}{\gamma(1+\beta\mu_0)}\right). \end{equation} Under the given assumptions, the scattering can be treated in the Thomson limit (the photon frequency does not change) in the electron rest frame. Therefore, the number of photons scattered into an interval $d\Omega_0$ of solid angle in a unit time is \begin{eqnarray} \frac{d N_0(\mu_0,\nu_0)}{dt\,d\Omega_0\,d\nu_0}=\frac{3\sigma_{\rm T} c}{16} \int_{-1}^{1}d\mu_0^\prime (3+3\mu_0^2\mu_0^{\prime 2}-\mu_0^2-\mu_0^{\prime 2})\frac{d N(\mu_0^\prime,\nu_0)}{d\Omega_0^\prime\,d\nu_0} \nonumber\\ =\frac{3\sigma_{\rm T}\nu_0}{64\pi\gamma\beta\tilde{\nu}^2}\left[3+ \frac{3\mu_0^2}{\beta^2}\left(1-\frac{\tilde{\nu}}{\gamma\nu_0}\right)^2 -\mu_0^2-\frac{1}{\beta^2}\left(1-\frac{\tilde{\nu}}{\gamma\nu_0}\right)^2 \right]. 
\end{eqnarray} The reverse transition to the laboratory frame can be performed using equation (\ref{lorentz}) by the formula \begin{eqnarray} \frac{d N(\mu,\nu)} {dt\,d\Omega\,d\nu}=\frac{dt_0}{dt}\frac{d\mu_0} {d\mu}\frac{d\nu_0}{d\nu}\frac{d N_0(\mu_0,\nu_0)}{d t_0\,d\Omega_0\,d\nu_0}= \frac{1}{\gamma^2(1-\beta\mu)}\frac{d N_0(\mu_0,\nu_0)} {dt_0\,d\Omega_0\,d\nu_0} \nonumber\\ =\frac{3\sigma_{\rm T} cu}{64\tilde{\nu}\pi\gamma^2\beta} \left[3+\frac{3}{\beta^2}\left(\frac{\mu-\beta}{1-\beta\mu}\right)^2 \left(1-\frac{1}{\gamma^2(1-\beta\mu)u}\right)^2- \left(\frac{\mu-\beta}{1-\beta\mu}\right)^2-\frac{1}{\beta^2} \left(1-\frac{1}{\gamma^2(1-\beta\mu)u}\right)^2\right], \label{p_beta1} \end{eqnarray} where $u=\nu/\tilde{\nu}$. The derivation given above is analogous to the one in \citet{sazsun98}. The difference is that the latter paper considered the scattering of a Planckian spectrum\footnote{A formula similar to equation (\ref{p_beta1}) made it possible to find relativistic corrections to the amplitude of CMB distortions in the direction of clusters of galaxies. These corrections are of the order of $(kT_{\rm e}/m_{\rm e} c^2)^2$, $(V/c)^2$, and $(V_r/c)\times (kT_{\rm e}/m_{\rm e} c^2)$, where $V$ and $V_r$ are the cluster peculiar velocity and its component along the line of sight, respectively.}. For a given $\mu$, the photon frequency ratio, $u$, ranges over $[(1-\beta)/(1-\beta\mu), (1+\beta)/(1-\beta\mu)]$. Integrating equation (\ref{p_beta1}) over the solid angle of emergence of the scattered photon yields the spectrum that forms as a result of scattering of monochromatic radiation on electrons moving at a speed of $\beta$. For a given $u$, the integration limits are \begin{equation} \left\{ \begin{array}{rcll} -1 & \le \mu \le & (u-1+\beta)/(\beta u),\,\,\, & u<1; \\ (u-1-\beta)/(\beta u) & \le \mu \le & 1,\,\,\, & u>1.\\ \end{array} \right.
\end{equation} The result is \begin{eqnarray} \left(\frac{dN(\nu)}{dt\,d\nu}\right)_{+,-}=\frac{3\sigma_{\rm T}c} {32\tilde{\nu}u\gamma^2\beta^6}\left\{\mp(u-1)\left(\frac{u^2+6u+1} {\gamma^4}+4u\right) \right. \nonumber\\ \left. +2u(u+1)\left[2\beta\left(\frac{3}{\gamma^2}+\beta^4\right) +\frac{3-\beta^2}{\gamma^2} \left(\ln{\frac{1-\beta}{1+\beta}}\pm \ln{u}\right)\right]\right\}, \label{p_beta2} \end{eqnarray} where the subscript plus sign corresponds to the values $u>1$, and the minus sign to $u<1$. Equation (\ref{p_beta2}) was earlier derived by \citet{faretal97}. The derivation of these authors differs from the one above in the order of integrations: they first considered the scattering of a beam of photons by a beam of electrons with a given angle between the two, and then implemented the integration over this angle. Let us now make a transition from a single electron to an ensemble of electrons with density $N_{\rm e}$ and introduce a quantity, related to the result of equation (\ref{p_beta2}), that gives the probability of a scattering event calculated per unit dimensionless time $\tau=\sigma_{\rm T} N_{\rm e} ct$, defined as \begin{equation} P(\tilde{\nu}\rightarrow\nu,\beta)=\frac{1}{\sigma_{\rm T} c}\frac{d N(\nu)} {dt\,d\nu}. \label{p_u_beta} \end{equation} Equation (\ref{p_u_beta}) describes the kernel of the integral kinetic equation for the problem of Comptonization on monoenergetic electrons. Let us mention its basic properties: (1) $P(\nu\rightarrow\tilde{\nu},\beta)=(\tilde{\nu}/\nu)^2 P(\tilde{\nu}\rightarrow\nu,\beta) $, (2) $\int_{\nu_{min}}^{\nu_{max}}P\,d\nu=1$, and (3) $\int_{\nu_{min}}^{\nu_{max}}(\nu/\tilde{\nu}-1)P\,d\nu=4(\gamma^2-1)/3$, where $\nu_{min}=\tilde{\nu}(1-\beta)/(1+\beta)$, $\nu_{max}=\tilde{\nu}(1+\beta)/(1-\beta)$ are the minimal and maximal possible values of the photon frequency after scattering. Property 1 ensures that the detailed balance principle is obeyed (see also eq. [\ref{p_balance}] for the case of Maxwellian electrons).
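These properties provide a stringent numerical test of equation (\ref{p_beta2}): the kernel must integrate to unity, and the mean relative frequency increment $\langle u-1\rangle$ must reproduce the standard Thomson-limit energy-transfer rate $4(\gamma^2-1)/3$. A sketch in terms of $u=\nu/\tilde{\nu}$, so that $P\,d\nu=p(u)\,du$ (function names are ours):

```python
import math
from scipy.integrate import quad

def p_kernel(u, beta):
    """p(u) = nu~ * P(nu~ -> nu, beta), u = nu/nu~; eqs. (p_beta2), (p_u_beta)."""
    g = 1.0 / math.sqrt(1.0 - beta**2)      # Lorentz factor gamma
    sign = 1.0 if u > 1.0 else -1.0         # '+' branch for u > 1, '-' for u < 1
    t1 = -sign * (u - 1.0) * ((u**2 + 6.0 * u + 1.0) / g**4 + 4.0 * u)
    t2 = 2.0 * u * (u + 1.0) * (2.0 * beta * (3.0 / g**2 + beta**4)
         + (3.0 - beta**2) / g**2 * (math.log((1.0 - beta) / (1.0 + beta))
                                     + sign * math.log(u)))
    return 3.0 / (32.0 * u * g**2 * beta**6) * (t1 + t2)

def moments(beta):
    """Photon-number norm and mean frequency gain <u - 1>."""
    umin = (1.0 - beta) / (1.0 + beta)
    umax = (1.0 + beta) / (1.0 - beta)
    ranges = ((umin, 1.0), (1.0, umax))     # split at the cusp u = 1
    norm = sum(quad(lambda u: p_kernel(u, beta), a, b)[0] for a, b in ranges)
    gain = sum(quad(lambda u: (u - 1.0) * p_kernel(u, beta), a, b)[0]
               for a, b in ranges)
    return norm, gain
```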
In property 2, photon conservation manifests itself. Property 3 is a well-known relation \citep[see, e.g., eq. 2.33 in the review by][]{pozetal83}, which describes the rate at which the electrons transfer energy to the photons. There exists a more general expression than equation (\ref{p_beta2}) (which was obtained in the Thomson limit) for the $P(\tilde{\nu}\rightarrow\nu,\beta)$ kernel (see eqs. [8.1.8] and [8.1.12] in Nagirner \& Poutanen 1994, and references therein), which is valid for arbitrary photon and electron energies. In Figure~17, we present examples of spectra described by equations (\ref{p_beta2}) and (\ref{p_u_beta}). Their characteristic feature is the presence of a cusp at $\nu/\tilde{\nu}=1$. Note that the right wing of the line contains more photons ($\approx 1/2+69\beta/140$) than the left one ($\approx 1/2-69\beta/140$), the asymmetry becoming more pronounced as the electrons get more relativistic. Such spectra will build up when isotropic monochromatic radiation is scattered off an optically thin cloud of electrons that are moving isotropically at the same speed. Note that equation (\ref{p_beta2}) remains valid for ultrarelativistic ($\gamma\gg 1$) electrons \citep[the case that interested][]{faretal97}. \begin{figure*}[tb] \epsfxsize=16.5cm \epsffile[-110 190 685 700]{fig17.ps} \caption{ Spectra resulting from the single scattering of isotropic monochromatic radiation on an ensemble of electrons moving isotropically at a given speed, as calculated from formula (\ref{p_beta2}) for different values of the electron velocity ($v=\beta c$). Compton recoil is ignored, which corresponds to the limit $h\nu\ll m_{\rm e} v^2$. } \end{figure*} In the nonrelativistic limit ($\beta \ll 1$), the frequency is changed upon scattering by a small amount: $|u-1|/(u+1) \ll 1$.
This allows us, by expanding equation (\ref{p_beta2}) in powers of $\beta$ and \begin{equation} \xi=\frac{u-1}{u+1}, \label{Delta} \end{equation} to derive the formula \begin{eqnarray} P(\tilde{\nu}\rightarrow\nu,\beta)=\frac{1}{\tilde{\nu}\beta}\left[\left( \frac{11}{20}-\frac{73}{140}\beta^2+O(\beta^4)\right)+\frac{|\xi|}{\beta} \left(-\frac{3}{4}+\frac{3}{4}\beta^2+O(\beta^4)\right) \right. \nonumber\\ \left. +\left(\frac{\xi}{\beta}\right)^2\left(\frac{11}{20}\beta^2+O(\beta^4)\right) +\left(\frac{|\xi|}{\beta}\right)^3 \left(\frac{1}{2}-\frac{7}{4}\beta^2+O(\beta^4)\right) +\left(\frac{|\xi|}{\beta}\right)^5\left(-\frac{3}{10}+\frac{17}{10}\beta^2 +O(\beta^4)\right) \right. \nonumber\\ \left. +\left(\frac{|\xi|}{\beta}\right)^7\left(-\frac{51}{70}\beta^2+O(\beta^4) \right)\right](1+\xi), \label{p_beta3} \end{eqnarray} which approximates well the scattered photon distribution. The series given in equation (\ref{p_beta3}) ensures photon conservation to an accuracy of $\int P\,d\nu=1+O(\beta^4)$, or, putting the same expression in explicit form, $\int P\,d\nu=1+19\beta^4/300+...\,$. The spectrum that forms as a result of the single scattering of isotropic monochromatic radiation on a group of electrons with a given isotropic distribution of velocities, $f_{\beta}$, is described in general by the formula \begin{equation} P(\tilde{\nu}\rightarrow\nu)=\int P(\tilde{\nu}\rightarrow\nu,\beta) f_{\beta}\,\beta^2\,d\beta, \label{p_f} \end{equation} where $P(\tilde{\nu}\rightarrow\nu,\beta)$ is governed by equations (\ref{p_beta2}) and (\ref{p_u_beta}). In the case of a thermal plasma, one must insert the relativistic Maxwellian distribution function into equation (\ref{p_f}). The result can then be evaluated numerically. In the present study, we are interested in the mildly relativistic case, $\eta=kT_{\rm e}/m_{\rm e} c^2\lesssim 0.1$. In this limit, we can derive $P(\tilde{\nu}\rightarrow\nu)$ analytically by making use of the approximate equation (\ref{p_beta3}). 
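Before doing so, the expansion (\ref{p_beta3}) itself can be checked numerically: programming the series and integrating over the allowed frequency range must reproduce photon conservation to the stated accuracy, $\int P\,d\nu=1+19\beta^4/300+\ldots$ A sketch in the variable $u=\nu/\tilde{\nu}$:

```python
import math
from scipy.integrate import quad

def p_series(u, beta):
    """nu~ * P(nu~ -> nu, beta) from the expansion of eq. (p_beta3); u = nu/nu~."""
    xi = (u - 1.0) / (u + 1.0)
    t = abs(xi) / beta
    b2 = beta**2
    bracket = ((11.0 / 20.0 - 73.0 / 140.0 * b2)
               + t * (-0.75 + 0.75 * b2)
               + t**2 * (11.0 / 20.0 * b2)
               + t**3 * (0.5 - 1.75 * b2)
               + t**5 * (-0.3 + 1.7 * b2)
               + t**7 * (-51.0 / 70.0 * b2))
    return bracket * (1.0 + xi) / beta

def norm_series(beta):
    """Photon-number integral of the series over the allowed range |xi| <= beta."""
    umin = (1.0 - beta) / (1.0 + beta)
    umax = (1.0 + beta) / (1.0 - beta)
    val, _ = quad(lambda u: p_series(u, beta), umin, umax, points=[1.0])
    return val
```

The $O(\beta^0)$ and $O(\beta^2)$ contributions to the norm cancel analytically, leaving the quoted $19\beta^4/300$ residual plus higher-order terms.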
To this end, the $f_{\beta}$ function must be expanded in terms of $\beta$. In order to obtain the result with the accuracy we need, it is enough to retain only relativistic correction terms of order $\eta$ in this series, i.e., to substitute equation (\ref{fm}) of \S 3 for $f_{\beta}$. The integral in equation (\ref{p_f}) is to be performed in the range of values $\beta\ge \beta_m=|\xi|=|(u-1)/(u+1)|$. Upon elementary calculations, we arrive at equation (\ref{pt}) (with the transition from $\tilde{\nu}$, $\nu$ to $\nu$, $\nu^\prime$), which was our aim. \begin{acknowledgements} This work was supported in part by the Russian Foundation for Basic Research through grants 97-02-16264, 00-15-96649, and 00-02-16681. \end{acknowledgements}
\section*{Acknowledgement} This research was financially supported by the Victorian Higher Education State Investment Fund (VHESIF) Smarter, Faster, Biopharma and Food Manufacturing and CSL Innovation Ltd. \section{Introduction} In the era of the digital revolution, computers have been increasingly used to digitize and automate manufacturing processes \citep{glassey1994artificial,graefe1999new,lasi2014industry,frank2019industry}. In biologics manufacturing, efforts have been made to improve bioprocesses via advanced data analytics such as artificial intelligence and machine learning \citep{udugama2020role,gargalo2020towards,gargalo2020towards2}. However, there is still a long way to go to achieve full automation or digital twins in biomanufacturing, due to the high uncertainty of bioprocesses that involve living organisms and large, complex molecules \citep{sokolov2021hybrid}. To achieve a digital twin, one of the main challenges is building an accurate simulation model to mimic the complex dynamics of the underlying biosystem. This is of vital importance to improving the efficiency of bioprocesses, so as to satisfy the increasing demand for bioproducts. Having an accurate simulation model can help identify the optimal operating conditions for process control and design the best feed strategy for fed-batch cell culture, which would otherwise rely on expensive wet lab experiments, which are also limited by the relatively low throughput of cell culture technologies \citep{bradford2018dynamic,duran2020multivariate}. Existing bioprocess modeling techniques can be roughly classified into three categories: mechanistic, data-driven, and hybrid \citep{solle2017between,del2019comparison}. \textit{Mechanistic} methods are physics-based models (e.g. differential equations) developed based on time-dependent mass balances of participating components in the biosystem \citep{kyriakopoulos2018kinetic}.
Example mechanistic methods include the unstructured model developed by \citet{jang2000unstructured} to simulate the production of monoclonal antibodies in batch and fed-batch culture and the system of equations developed by \citet{del2017kinetic} to simulate and predict biomass growth and lutein production. Developing kinetic models requires deep understanding of underlying process mechanisms and significant biological knowledge. The knowledge learned via kinetic methods can typically be transferred between bioprocesses with similar underlying physical laws. However, the production of recombinant protein in mammalian cells cannot be fully described mechanistically based on our current biological knowledge and ability to measure cellular processes, so all mechanistic models of such processes involve assumptions that may affect their usefulness. In contrast, \textit{data-driven} methods are statistical or machine learning models that aim to automatically learn the underlying process mechanisms from experimental data \citep{glassey1994artificial}. Typical examples include the reinforcement learning approach proposed by \citet{petsagkourakis2020reinforcement} for batch bioprocess optimization, the Artificial Neural Network (ANN) presented by \citet{garcia2016artificial} to predict the growth of the microalga \textit{Karlodinium veneficum}, and the ANN used by \citet{del2017efficient} to model the rate of change of the dynamic biosystem. Data-driven methods are simple and easy to develop but typically require a large amount of high-quality data to train. In addition, predictions from data-driven models may be unreliable due to their black-box nature, and they have limited use for conditions outside the training dataset. \textit{Hybrid} methods, which combine the merits of both mechanistic and data-driven methods, have gained growing interest in recent years \citep{narayanan2019new,tsopanoglou2021moving,sokolov2021hybrid,merkelbach2022hybridml}.
In general, hybrid methods make use of the mechanistic framework to improve model robustness while using data-driven techniques to improve model accuracy \citep{narayanan2019new}. For example, data-driven models can be used to estimate unknown parts of mechanistic models or to reduce the errors made by mechanistic models. Mechanistic models can be used to generate a large amount of data, which can then be used to improve the training of data-driven models \citep{solle2017between}. Although many efforts have been made, the existing techniques are still insufficient to accurately capture the complex dynamics of bioprocesses \citep{sokolov2021hybrid}. Due to the biological variance of living cells \citep{fraser2001biological} and calibration or measurement errors, repeated wet lab experiments under the same conditions may lead to different system dynamics and product yields. Such a high level of uncertainty in bioprocesses makes system dynamics hard to predict, posing a significant challenge for existing techniques \citep{papathanasiou2020engineering}. Another challenge for bioprocess modeling is that acquiring data from wet lab experiments is very expensive (in cost of reagents and operators, as well as time), and hence the amount of data available is typically insufficient to train an accurate and robust model. To address these challenges, we propose to use a statistical machine learning approach, the Multi-Fidelity Gaussian Process (MFGP) \citep{Kennedy2000}, for bioprocess modeling. A Gaussian Process (GP) is a data-driven approach that automatically learns a mapping from an input vector to an output, therefore not requiring deep knowledge of the underlying process mechanisms to build the model \citep{o1978curve,di2008biomass,bradford2018dynamic,deringer2021gaussian,petsagkourakis2021safe,del2021real}. A GP can treat the uncertainty naturally inherent in a bioprocess as Gaussian noise and provide an uncertainty estimate along with the prediction.
Multi-fidelity GP is a more advanced learning model that can make use of multiple sources of information with different levels of fidelity to build a prediction model \citep{Kennedy2000,Gratiet2014,Perdikaris2017, Peherstorfer2018}. Hence, it is particularly suitable for bioprocess modelling, for which the amount of high-fidelity data is typically small. We apply the MFGP approach to model bioprocesses in which the amount of data is small, and demonstrate the efficacy of MFGP using two case studies: (1) bioreactor scale-up and (2) knowledge transfer across cell lines. For bioreactor scale-up, we use data collected from smaller-scale and larger-scale bioreactors as low-fidelity and high-fidelity data respectively. We show that using multiple sources of data can facilitate the development of a more robust and accurate model for larger-scale bioreactors than only using the data from larger-scale bioreactors. In the second case study, we treat the data collected from different cell lines as different levels of fidelity, and show that the knowledge in one cell line can be successfully transferred to another cell line via the MFGP approach. The contributions of this paper are summarized as follows: \begin{enumerate}[label=(\alph*)] \item We propose to use the MFGP approach for bioprocess modeling with a small amount of data. We show that the MFGP approach can potentially lead to an improved prediction model, especially when the amount of high-fidelity data is insufficient. \item We apply the MFGP approach to solve two important tasks in bioprocess modeling, bioreactor scale-up and knowledge transfer across cell lines. The performance of the MFGP approach is evaluated on real-world datasets and compared against three other baseline methods. \item We consider two typical MFGP methods that can capture linear and nonlinear correlations between data at different fidelity levels, respectively. 
The strengths and weaknesses of the different methods are thoroughly analysed on the two bioprocess modeling tasks. \end{enumerate} \section{Multi-fidelity Gaussian Process} \subsection{Gaussian Process} A GP is a stochastic process consisting of random variables $\{ f(x)\ |\ x\in S \}$ indexed by the index set $S=\{x\ |\ x\in \mathbb{R}^d\}$ such that any finite subset indexed by $\{x_1,\ \ldots,\ x_n\}\subseteq S$ is multivariate Gaussian distributed with mean $[m(x_1), \ldots, m(x_n)]^\top$ and covariance $k(x_i,x_j)=E\big{(}(f(x_i)-m(x_i))(f(x_j)-m(x_j))\big{)}$, where $m(x)=E(f(x))$ \citep{Rasmussen2018}: \begin{equation} f \sim GP(m(x),\ k(x_i,x_j)). \end{equation} Gaussian processes can be applied to regression analysis, where $f$ represents the latent function fit to observations $y(x_i)$ that are related through the observation model $y(x_i) = f(x_i) + \epsilon$, with $\epsilon \sim iid\ N(0,\sigma^2)$ representing measurement noise. The kernel function $k(x_i,x_j)$ is expressed in terms of learnable hyperparameters, typically under a stationarity assumption, and encodes the general shape of the function. A common choice is the radial basis function (RBF) kernel, with length scale $\lambda$ as a hyperparameter, defined as: \begin{equation} k(x_i,x_j) = \exp \frac{-||x_i-x_j||^2}{2\lambda^2}. \end{equation} Without loss of generality it can be assumed that $m(x)=0$, which has a negligible effect on the predictive distribution, particularly when the training data is standardised. 
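As a concrete illustration of the RBF kernel above, the following minimal numpy sketch (our own illustration, not part of the original study; the array names are hypothetical) evaluates $k(x_i,x_j)$ on a small set of inputs:

```python
import numpy as np

def rbf_kernel(Xi, Xj, length_scale=1.0):
    # k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 * lambda^2))
    sq_dists = (np.sum(Xi**2, axis=1)[:, None]
                + np.sum(Xj**2, axis=1)[None, :]
                - 2.0 * Xi @ Xj.T)
    return np.exp(-sq_dists / (2.0 * length_scale**2))

X = np.linspace(0.0, 1.0, 5).reshape(-1, 1)
K = rbf_kernel(X, X)
# K is a valid covariance matrix: symmetric, with k(x, x) = 1 on the diagonal.
```

Because the kernel is stationary, $K$ depends only on pairwise distances; shrinking the length scale makes the sampled functions wigglier.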
The predictive distribution $f(x_*) | D, \theta, x_*$ of the latent function conditioned on test input $x_*$, training data $D$ and hyperparameters $\theta$ is Gaussian distributed with mean $E(f(x_*))$ and variance $V(f(x_*))$ given by: \begin{align} E(f(x_*)) &= k_*^T(K+\sigma^2 I)^{-1}y,\\ V(f(x_*)) &= k(x_*, x_*) - k_*^T(K+\sigma^2 I)^{-1}k_*, \end{align} where $k_*$ is a column vector of covariances between $f(x_*)$ and the training observations, and $K$ is a kernel matrix with entries $K_{ij} = k(x_i, x_j)$. An example of the predictive distribution of a GP is shown in Figure~\ref{fig:illustration}. Model selection can be done by choosing a hyperparameter setting $\hat \theta$ that maximises the marginal log likelihood (MLL) $p(Y|\theta, X)$, the log probability of the observed response $Y$ being a realisation of the model given the corresponding input $X$; this procedure, known as maximum likelihood type-II (ML-II) estimation, is a frequentist approximation to a fully Bayesian treatment: \begin{align} \log p(Y|\theta, X) &= -\frac{1}{2}(y^T(K+\sigma^2 I)^{-1}y+ \log |K+\sigma^2 I| + n \log 2\pi),\\ \hat\theta &= \text{argmax}_{\theta} \log p(Y|X,\ \theta). \end{align} The MLL penalises model misfit $y^T(K+\sigma^2 I)^{-1}y$ and model complexity $\log |K+\sigma^2 I|$, trading off goodness of fit against overfitting. \subsection{Multi-fidelity Gaussian Process} It is assumed that fidelity levels $\{X_i, Y_i\}_{1\leq i \leq L}$ are available in the training dataset that describe the highest fidelity level $Y_L$ with varying degrees of fidelity, where $X_i$ is the input that generated the response $Y_i$. Fidelity level $i<j$ implies observations of fidelity $i$ are a lower fidelity estimate of $Y_L$ compared to those of level $j$, forming a hierarchy. 
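The posterior equations above can be implemented in a few lines. The sketch below is our own illustration on hypothetical 1-D data (not the study's dataset): it computes $E(f(x_*))$ and $V(f(x_*))$ for a zero-mean GP with an RBF kernel and fixed hyperparameters:

```python
import numpy as np

def gp_posterior(X, y, X_star, length_scale=0.5, noise_var=1e-2):
    """Zero-mean GP posterior: E(f*) = k_*^T (K + sigma^2 I)^{-1} y,
    V(f*) = k(x*, x*) - k_*^T (K + sigma^2 I)^{-1} k_*."""
    def k(A, B):
        d = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
             - 2.0 * A @ B.T)
        return np.exp(-d / (2.0 * length_scale**2))

    K = k(X, X) + noise_var * np.eye(len(X))   # K + sigma^2 I
    Ks = k(X, X_star)                          # covariances with test points
    mean = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(k(X_star, X_star) - Ks.T @ np.linalg.solve(K, Ks))
    return mean, var

X_train = np.array([[0.0], [0.5], [1.0]])
y_train = np.sin(np.pi * X_train).ravel()      # toy observations
mean, var = gp_posterior(X_train, y_train, X_train)
# At the training inputs, the posterior mean stays close to y and the
# posterior variance is small; both grow away from the data.
```

In practice, $\lambda$ and $\sigma^2$ would be chosen by maximising the MLL rather than fixed as here.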
It is assumed that observations $y_i \in Y_t$ for corresponding input $x_i\in X_t$ are related to an underlying latent function $f_t$ through the observation model $y_i = f_t(x_i) + \epsilon$, with $\epsilon \sim iid\ N(0, \sigma^2)$ representing obscuring noise. The fidelity levels can be related through an autoregressive function $f$ that expresses level $t$ in terms of level $t-1$ for $t\geq 2$, which exploits the correlation between levels for their prediction: \begin{equation}\label{eq:0} \begin{aligned} f_t(x) &= f(f_{t-1}(x), x). \end{aligned} \end{equation} An example of the MFGP approach is shown in Figure~\ref{fig:illustration}. \begin{figure}[!t] \centering \resizebox{\textwidth}{!}{ \includegraphics[scale=0.5]{figures/GP_example.png} \includegraphics[scale=0.5]{figures/MFGP_example.png} } \caption{An illustration of the GP and MFGP models. In the left figure, only six data points from the high fidelity function are available to train a GP model, of which the posterior mean and two standard deviation band are shown. In the right figure, in addition to the six high-fidelity data points, twelve data points from the low fidelity function are available. An MFGP model is trained using both the low-fidelity and high-fidelity data, with the posterior mean and two standard deviation band shown.} \label{fig:illustration} \end{figure} \subsubsection{Linear Autoregressive Gaussian Process (LARGP)} A family of multi-fidelity models known as multi-fidelity Gaussian process models exists in the literature; the models differ in the choice of $f$, with Gaussian processes assigned as priors to the equation terms, resulting in a non-parametric, uncertainty-propagating Bayesian statistical model that is well suited to small-data regimes. 
The seminal work of \citet{Kennedy2000} assumes fidelity level $t$ is some linear model of the lower fidelity $t-1$ in terms of a hyperparameter scaling factor $\rho_t$ plus some error correction $\delta_t(x)$ that is assigned a GP prior: \begin{equation}\label{eq:1} \begin{aligned} f_1(x)& = \delta_1(x),\\ f_t(x) &= \rho_t f_{t-1}(x) + \delta_t(x), \text{ for $t\geq 2$}. \end{aligned} \end{equation} By assuming independence between $\delta_t(x)$ and lower fidelity levels $(f_{m}(x))_{m < t}$, a closed form solution can be derived. Due to this assumption, given $f_{t-1}(x)$ nothing more can be learned from lower fidelity levels about $f_t(x)$ (Markov property). \citet{Gratiet2014} propose a computationally efficient recursive method for computing the posterior, involving sequentially fitting independent multi-fidelity models to fidelity level pairs $(t-1,t)$ beginning with the lowest fidelity level under a nested data assumption $X_i \subseteq X_j$ for fidelity levels $i<j$. The key difference in their solution is to replace $f_{t-1}(x)$ in Equation~\eqref{eq:1} with its posterior $f_{t-1}^*(x)$ conditioned on all lower fidelity levels. As they prove, the resulting predictive distribution is identical to the coupled model proposed by \citet{Kennedy2000}. 
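The recursive scheme of \citet{Gratiet2014} can be sketched with off-the-shelf GP regression. The toy example below is our own illustration (the functions, design, and least-squares estimate of $\rho$ are hypothetical, not the paper's data or exact algorithm): fit a GP to the low-fidelity level, estimate the scaling $\rho_t$, then fit a second GP to the residual correction $\delta_t(x)$:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Hypothetical 1-D fidelity levels: the high fidelity is a scaled low fidelity
# plus a smooth correction, mirroring f_t = rho_t * f_{t-1} + delta_t.
f_low = lambda x: np.sin(2.0 * np.pi * x)
f_high = lambda x: 1.5 * f_low(x) + 0.5 * x

X_low = np.linspace(0.0, 1.0, 12).reshape(-1, 1)
X_high = X_low[::2]                      # nested design: X_high is a subset of X_low
y_low = f_low(X_low).ravel()
y_high = f_high(X_high).ravel()

# Step 1: fit a GP to the lowest fidelity level.
gp_low = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-4)).fit(X_low, y_low)

# Step 2: estimate rho by least squares on the nested points, then fit a GP
# to the residual correction delta(x) = y_high - rho * (low-fidelity mean).
mu_low = gp_low.predict(X_high)
rho = np.linalg.lstsq(mu_low[:, None], y_high, rcond=None)[0][0]
gp_delta = GaussianProcessRegressor(RBF(0.2) + WhiteKernel(1e-4)).fit(
    X_high, y_high - rho * mu_low)

def predict_high(X_star):
    """Recursive posterior mean: rho * E(f_low(x*)) + E(delta(x*))."""
    return rho * gp_low.predict(X_star) + gp_delta.predict(X_star)
```

The key structural point survives the simplifications: each level is fit sequentially, so only matrices of each level's own size are inverted.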
In this model each fidelity level can be modelled with GP regression over the observations for the corresponding fidelity level and the posterior of the lower fidelity level $f_{t-1}^*(x)$, where the predictive distribution for test input $x_*$ is Gaussian distributed with mean $E(f_t (x_*))$ and variance $V(f_t (x_*))$ given by: \begin{align}\label{eq:3} E(f_t (x_*)) &= \rho_t E(f_{t-1} (x_*))+\mu_t+ k_*^T(K_t+\sigma_t^2 I)^{-1}\large{(}\normalsize y_t -\rho_tE(f_{t-1} (X_t))-\mu_t\large{)},\\ V(f_t (x_*)) &= \rho_t^2 V(f_{t-1} (x_*))+k(x_*, x_*)- k_*^T(K_t+\sigma_t^2 I)^{-1}k_*, \end{align} where $\mu_t$ is the mean of $\delta_t$, $K_t$ is the kernel matrix of the response $f_t(x)$ defined in terms of kernel hyperparameters $\theta_t$, $k_*$ is a column vector of covariances between the test point and training points, and $E(f_{t-1}(X_t))$ is the lower-fidelity posterior mean evaluated at the training inputs $X_t$. The time complexity is significantly reduced to $O(\sum_i n_i^3)$ due to the inversion of square matrices of size $n_i$ for the sequential inference of each fidelity level $i$ having $n_i$ observations. In the following sections, this recursive model is applied to bioprocess modelling and is denoted LARGP. \subsubsection{Nonlinear Autoregressive Gaussian Process (NARGP)} \citet{Perdikaris2017} replace the restrictive linear function that relates adjacent fidelities in Equation~\eqref{eq:1} with an expressive GP $F_t$ that can capture nonlinear relationships: \begin{equation}\label{eq:4} \begin{aligned} f_1(x) &= \delta_{1}(x), \\ f_t(x) &= F_{t}(f_{t-1}(x),x), \; \text{ for $t \geq 2$}, \end{aligned} \end{equation} where $\delta_1$ is a GP over the input $x$. Since $F_t$ is a composition of GPs, known as a Deep Gaussian Process (DGP), the posterior is no longer Gaussian \citep{Damianou2012}. To arrive at a solution, \citet{Perdikaris2017} replace $f_{t-1}(x)$ with its posterior $f_{t-1}^*(x)$, which assumes $f_t$ is independent of all lower fidelity levels given $f_{t-1}$. 
The chosen kernel is: \begin{equation}\label{eq:5} \begin{aligned} K_1 &= K_b^1(x_i, x_j), \\ K_t &= K_d^t(x_i, x_j)K_f^t(f_{t-1}(x_i),f_{t-1}(x_j)) + K_b^t(x_i, x_j), \; \text{ for $t \geq 2$}, \end{aligned} \end{equation} where RBF kernels $K_f$ and $K_d$ describe the interaction between the input $x$ and the lower fidelity $f_{t-1}$, and RBF kernel $K_b$ represents the covariance of the bias. The time complexity is similar to that of the recursive solution to LARGP proposed by \citet{Gratiet2014}. \section{Computational Results} We use simulation experiments to evaluate the efficacy of the proposed methods for bioprocess modeling. We first evaluate whether GP can successfully model the uncertainty associated with an upstream fed-batch cell culture problem in Section~\ref{sec::GPvsMLP}, and then evaluate the efficacy of MFGP for bioprocess scale-up and knowledge transfer across cell lines in Sections~\ref{sec::scale-up} and \ref{sec::cell lines}, respectively. \subsection{GP for bioprocess modeling with uncertainty}\label{sec::GPvsMLP} We train a GP model for the regression task described in Section~\ref{sec::MFGP4scale-up}, where the input to an ML model is the feed strategy for a bioreactor (i.e., the composition of feed media and daily feed volumes, which is referred to as the feed pattern), and the output is the predicted product concentration, cell viability, VCD, or TCD. We use the data set generated from bioreactors of 0.25L volume, which consists of 24 data points. We compare GP against a popular ANN model, the Multi-Layer Perceptron (MLP) \citep{murtagh1991multilayer}, for this regression task. As the dataset is quite small, we only use one hidden layer with four neurons for the MLP. Other parameter settings for the MLP are consistent with the defaults of the scikit-learn library \citep{scikit-learn}. For visualization, we use principal component analysis \citep{wold1987principal} to reduce the dimension of the inputs to one. 
The learned GP and MLP models are shown in Figure~\ref{fig:gp}, where each dot represents an independent experiment in a bioreactor. We can observe that there exists uncertainty in upstream fed-batch cell culture, in the sense that similar feed media and feed patterns may yield different outputs. GP provides a natural way to capture the uncertainty associated with the processes using Gaussian noise. The GP posterior mean represents the most likely output predicted by the trained GP model, and the two standard deviation band indicates that the trained GP model is 95\% confident that the output is within this band. From Figure~\ref{fig:gp}, we can see that GP successfully captures the data points within its posterior two standard deviation band, with only a few outliers. As expected, the two standard deviation band is large in areas that lack data observations or where the degree of noise in the training data is high. The MLP model, in contrast, can only provide a single prediction for a given input, and hence it cannot model the uncertainty in this bioprocess. In addition, the response predicted by the MLP model indicates significant overfitting. We also compare the prediction errors made by GP and MLP on each of the 24 data points in a ``leave one out'' fashion. Specifically, for each data observation (e.g., TMP01), we use the other 23 data points to train a GP or MLP model and test the trained model on that data observation. The experimental results are presented in Table~\ref{tab::GPresults}. Again the true values of the data points mostly lie within the GP posterior two standard deviation band (95\% confidence band), demonstrating the efficacy of GP for modeling bioprocesses with uncertainty. Comparing the GP posterior mean against MLP, we can observe that GP achieves a significantly smaller root mean square error (RMSE) than MLP on cell viability and VCD, while they both perform similarly on product concentration and TCD. 
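The ``leave one out'' protocol above can be sketched as follows. This is our own illustration on synthetic stand-in data (the real feed-pattern dataset is not reproduced here, and the model settings are simplified): each of the 24 points is predicted by a model trained on the other 23:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical stand-in for the 24 bioreactor runs: feed features -> noisy response.
X = rng.uniform(size=(24, 3))
y = X @ np.array([1.0, -0.5, 0.3]) + 0.05 * rng.standard_normal(24)

def loo_rmse(make_model):
    preds = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # train on the other 23 points
        preds[i] = make_model().fit(X[mask], y[mask]).predict(X[[i]])[0]
    return float(np.sqrt(np.mean((preds - y) ** 2)))

rmse_gp = loo_rmse(lambda: GaussianProcessRegressor(alpha=1e-2, normalize_y=True))
rmse_mlp = loo_rmse(lambda: MLPRegressor((4,), max_iter=2000, random_state=0))
```

The `alpha` term plays the role of the Gaussian observation noise $\sigma^2$, and the single four-neuron hidden layer mirrors the MLP configuration used in the study.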
\section{Bioprocess Scale-up}\label{sec::MFGP4scale-up} Bioprocess scale-up plays an important role in biomanufacturing, as it underlies the transition from laboratory-scale early process development to a large-scale production environment \citep{lindskog2018upstream,richelle2020analysis}. This often involves a change in the fermentation conditions, ranging from the geometries of the bioreactors to the operating speed or flow rates of substrate supply and additions of reagents. Although small-scale data is easier to acquire, with lower experimentation cost and faster turnaround time, such data may not accurately represent the production-scale environment. As bioprocesses involve living cells which adapt and react differently to their environment, scale-up can significantly change the cell culture behaviour, resulting in changes in productivity, yield and product quality. Consequently, the bioprocesses do not simply scale according to volumetric changes, but may require a modified model to represent the new biodynamics at the larger scale \citep{xia2015advances}. These differences in data fidelity pose a challenge in using lab-scale results as training data to model bioprocesses across different scales. Existing studies have used multivariate data analysis for bioprocess scale-up \citep{mercier2013multivariate,tescione2015application}. Such a simple statistical approach is often inadequate to address the challenges inherent in bioprocess modeling. In this paper, we propose to use a more advanced statistical machine learning approach, MFGP, which naturally treats the data from different scales with different levels of fidelity. The MFGP approach has the potential to automatically learn complex relationships between the data generated from different scales, leading to a more robust machine learning model, even if the amount of high-fidelity data is small. 
Hence, MFGP is very suitable for bioprocess scale-up, as the amount of available data from large-scale bioreactors is typically small due to high data acquisition cost. \subsection{Dataset} As a case study, we use the data generated from 40 fed-batch bioreactors with a culture period of 14 days. This includes 24 data points from Ambr® 250 bioreactors at the 0.25L scale and 16 data points from bioreactors at the 5L scale. The same CHO cell line was fermented across all bioreactors under different feeding strategies. For each strategy, we recorded the concentration of twenty amino acid components in the feed medium. The target variable is the end-point (at day 14) measurement of recombinant protein concentration, which is normalized into a range of 0 to 1. The distribution of the 40 data points is shown in Figure~\ref{fig:pca_scale} via Principal Component Analysis (PCA). Our goal is to model the effects of the feed medium on the recombinant protein concentration, which can further be used to optimise the feeding strategy at different scales. We build an MFGP model for the prediction task for 5L bioreactors, treating the 0.25L mini-bioreactor data as the low-fidelity level and the 5L-scale bioreactor data as the high-fidelity level. 
\begin{figure}[!t] \centering \begin{tikzpicture} \begin{axis} [box plot width=0.20em, xlabel = \scriptsize First Principal Component of Feed Medium, ylabel = \scriptsize Normalized Product Concentration, height=0.50\textwidth,width=0.95\textwidth, grid style={line width=.1pt, draw=gray!10},major grid style={line width=.2pt,draw=gray!30}, xmajorgrids=true, ymajorgrids=true, major tick length=0.05cm, minor tick length=0.0cm, legend style={at={(0.65,0.80)},anchor=west,font=\scriptsize,draw=none}] \addplot[only marks, color=brickred, mark=*, mark size = 2.0] table[x index=0, y index=1] {\PCAScaleHigh}; \addlegendentry{\scriptsize Data from 5L Bioreactors} \addplot[only marks, color=frenchblue, mark=diamond*, mark size = 2.0] table[x index=0, y index=1] {\PCAScaleLow}; \addlegendentry{\scriptsize Data from 0.25L Bioreactors}; \end{axis} \end{tikzpicture} \caption{The distribution of data points from bioreactors with different scales. The x axis is the first principal component of the feed medium and the y axis is the normalized recombinant protein concentration.} \label{fig:pca_scale} \end{figure} \subsection{Feature Selection}\label{subsec::fs_scale} In this dataset, the number of high-fidelity data points is less than the number of features (i.e. amino acids). Using all features to train a machine learning model likely leads to overfitting. We therefore identify a subset of important features that are most relevant to the target (i.e. recombinant protein concentration), and use the selected subset of features to train GP models. To do so, we use a widely used information-theoretic feature selection method, Minimum-Redundancy Maximum-Relevance (MRMR) \citep{peng2005feature}, which uses (conditional) mutual information to measure the relevance between features and the target. 
As the features and target in this dataset are continuous variables, we use the Minimum Description Length method \citep{fayyad1993multi} to evenly divide the continuous values into five bins, following \citep{sun2020revisiting}. Note that the discretized dataset is only used for the feature selection procedure, whilst the original continuous dataset is used in the subsequent prediction tasks. The MRMR method essentially provides a ranking of the features based on their relevance to the target. To determine an appropriate subset size, we vary the number of selected features ($N_f$) from 1 to 20, and train a separate LARGP model using the selected features. For each model, fifteen high-fidelity data points are used in training and one data point is left for testing. The root mean square errors (RMSE) averaged across 30 independent runs (with different random seeds) are presented in Figure~\ref{fig:feature_selection_scale}. The results show that feature selection is beneficial in the sense that using a subset of selected features to train the LARGP model can result in a smaller RMSE than using all the features (i.e. $N_f=20$). The smallest RMSE is achieved when using one or eight features. In the following, we will select eight features to train all the models. 
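A greedy MRMR-style ranking can be sketched as below. Note this is a simplified illustration using continuous mutual-information estimates on synthetic data (the study itself first discretises the variables with the MDL method before applying MRMR); the feature counts and signal here are hypothetical:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_rank(X, y, n_select, seed=0):
    """Greedy MRMR (mutual-information-difference variant): at each step pick
    the feature maximising relevance(f; y) minus its mean redundancy with the
    features already selected."""
    relevance = mutual_info_regression(X, y, random_state=seed)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        scores = []
        for f in remaining:
            if selected:
                redundancy = mutual_info_regression(
                    X[:, selected], X[:, f], random_state=seed).mean()
            else:
                redundancy = 0.0
            scores.append(relevance[f] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X = rng.uniform(size=(40, 6))                           # toy feature matrix
y = 2.0 * X[:, 0] + X[:, 3] + 0.01 * rng.standard_normal(40)
ranked = mrmr_rank(X, y, n_select=3)                    # ranking of feature indices
```

As in the study, the ranking can then be truncated at whichever $N_f$ minimises the downstream model's validation RMSE.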
\begin{figure}[!t] \centering \begin{tikzpicture} \begin{axis} [box plot width=0.20em, xlabel = \scriptsize Number of features selected ($N_f$), ylabel = \scriptsize RMSE of LARGP, height=0.50\textwidth,width=0.95\textwidth, grid style={line width=.1pt, draw=gray!10},major grid style={line width=.2pt,draw=gray!30}, xmajorgrids=true, ymajorgrids=true, major tick length=0.05cm, minor tick length=0.0cm, legend style={at={(0.05,0.10)},anchor=west,font=\scriptsize,draw=none}] \addplot[color=brickred, mark=*, mark size = 2.0, line width=0.50mm, dashed] table[x index=0, y index=1] {\FeatSelScale}; \end{axis} \end{tikzpicture} \caption{The RMSE generated by LARGP when using different numbers of features selected by MRMR for bioprocess scale-up.} \label{fig:feature_selection_scale} \end{figure} \subsection{Comparison between MFGP Models}\label{subsec::mgfp_comp_scale} We consider two MFGP models, LARGP and NARGP, for the prediction task. It is important to note that the MFGP models require a nested structure of the training data, which means that for every high-fidelity data point there must exist a corresponding low-fidelity data point with the same feature vector (feed medium). However, this assumption might be violated as can be seen in Figure~\ref{fig:pca_scale}. To address this, we train a GP model on the low-fidelity data and use the trained model to sample new data points to satisfy the nested data assumption. 
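The imputation step just described can be sketched as follows; the inputs here are synthetic placeholders (our own example, not the bioreactor data). A GP fitted to the low-fidelity level supplies low-fidelity responses at the high-fidelity inputs, so that every high-fidelity point gains a low-fidelity counterpart:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
# Hypothetical non-nested design: low- and high-fidelity inputs differ.
X_low = rng.uniform(size=(12, 2))
y_low = np.sin(X_low.sum(axis=1))
X_high = rng.uniform(size=(5, 2))

# Fit a GP to the low-fidelity data, then predict low-fidelity responses at
# the high-fidelity inputs so each high-fidelity point has a low-fidelity match.
gp_low = GaussianProcessRegressor(alpha=1e-4, normalize_y=True).fit(X_low, y_low)
y_low_at_high = gp_low.predict(X_high)

X_low_nested = np.vstack([X_low, X_high])
y_low_nested = np.concatenate([y_low, y_low_at_high])
# Every row of X_high now also appears in X_low_nested (nested-data assumption).
```

Using the GP posterior mean is the simplest choice here; sampling from the posterior instead would also propagate the low-fidelity uncertainty into the imputed points.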
\begin{figure*}[!t] \centering \begin{tikzpicture} \begin{axis}[ybar, symbolic x coords={6, 10, 14}, xtick=data, ytick=\empty, axis y line = none, axis x line* = bottom, enlarge x limits = 0.2, nodes near coords={\pgfmathprintnumber[fixed zerofill, precision=2]{\pgfplotspointmeta}}, height=0.50\textwidth,width=1.0\textwidth, bar width=20pt, xticklabels={\scriptsize $N_t=6$, \scriptsize $N_t=10$,\scriptsize $N_t=14$}, major tick length=0.0cm, minor tick length=0.0cm, legend style={at={(0.7,0.95)},anchor=north,legend columns=0, font=\scriptsize,draw=none}] \addplot[color=brickred, fill=brickred] table[x=Category, y=GP-Low] {\MFGPScale};\addlegendentry{\scriptsize GP-Low} \addplot[color=frenchblue, fill=frenchblue] table[x=Category, y=GP-high] {\MFGPScale};\addlegendentry{\scriptsize GP-High} \addplot[color=darkcyan, fill=darkcyan] table[x=Category, y=GP-vol] {\MFGPScale};\addlegendentry{\scriptsize GP-Vol} \addplot[color=carrotorange, fill=carrotorange] table[x=Category, y=AR1] {\MFGPScale};\addlegendentry{\scriptsize LARGP} \addplot[color=skymagenta, fill=skymagenta] table[x=Category, y=NARGP] {\MFGPScale};\addlegendentry{\scriptsize NARGP} \end{axis} \end{tikzpicture} \caption{The RMSE generated by the machine learning models for bioprocess scale-up. $N_t$ is the number of high-fidelity data points used in training with the remaining data points used for testing. The RMSE is averaged across 30 independent runs with different random seeds.} \label{fig:mfgp-scale} \end{figure*} The results generated by LARGP and NARGP are presented in Figure~\ref{fig:mfgp-scale}. The linear autoregressive model, LARGP, performs much better than the non-linear autoregressive model, NARGP, on this dataset. The RMSE generated by the NARGP model is the same as that of GP-High, which is a GP model trained on high-fidelity data only. 
This indicates that the NARGP model fails to learn any correlation between high-fidelity and low-fidelity data, possibly due to the high uncertainty inherent in the dataset, as can be seen in Figure~\ref{fig:pca_scale}. Hence, we will mainly consider LARGP for bioprocess scale-up in the following. \subsection{Comparison to Baselines} We compare the LARGP model to three baselines: 1) GP-Low, trained on low-fidelity data only, 2) GP-High, trained on high-fidelity data only, and 3) GP-Vol, trained on both low-fidelity and high-fidelity data with the volume of the bioreactors as an additional feature. The RMSE generated by each method is shown in Figure~\ref{fig:mfgp-scale}. The LARGP model generally achieves the lowest RMSE for the task of bioprocess scale-up compared to all other methods. This demonstrates its efficacy in learning the correlation between data from different fidelities and its effective use of multiple sources of information to build a better predictive model. GP-Vol is essentially another linear autoregressive model, also learning the correlation between low-fidelity and high-fidelity data. However, it focuses on predicting both low-fidelity and high-fidelity data, in contrast to LARGP, which aims to predict high-fidelity data only. \subsection{Effects of the Number of Training Data Points} There are 16 high-fidelity data points in our dataset in total. Here, we vary the number of high-fidelity data points used to train the GP models ($N_t = $ 6, 10, or 14), and the results are shown in Figure~\ref{fig:mfgp-scale}. As the number of high-fidelity training data points increases, the RMSE generated by all models (except GP-Low) decreases. The GP-Low model is trained on low-fidelity data only, thus its performance is not improved by increasing the number of high-fidelity training data points. When $N_t = 6$, the RMSE generated by GP-Low is much smaller than that of GP-High, indicating that the low-fidelity data is related to the high-fidelity data. 
As the number of high-fidelity training data points increases, the difference between LARGP, GP-High, and GP-Vol decreases. This suggests that the methodology of utilising multiple sources of data to build a predictive model is most beneficial when the number of high-fidelity training data points is insufficient. In other words, if the number of high-fidelity training data points is sufficient, we can simply use the GP-High model. \section{Knowledge Transfer across Cell Lines}\label{sec::MFGP4cell-lines} Similar to scale, the cell line is another crucial process parameter that alters cell culture behaviour. A cell line is a population of transformed cells that can divide indefinitely while maintaining the stability of certain phenotypes and functions \citep{lindskog2018host}. Different cell lines are developed for each biological product through a process of cell line development. A desirable cell line should be highly adaptive to the fermentation conditions, such that it can retain a stably high growth rate and productivity \citep{CASTAN2018131}. Knowledge transfer across cell lines is highly desirable when developing a new bioproduct, as it can potentially harness the historical experimental data of different cell lines to reduce the high cost of generating new data \citep{hutter2021knowledge,rogers2022transfer}. However, the underlying relationship between different cell lines is often complex and difficult for data-driven methods to learn. Here, we propose that the MFGP approach is a powerful technique that can automatically learn the relationship between different cell lines from data, providing a natural approach for knowledge transfer across cell lines. \subsection{Dataset} As a case study, we focus on Chinese Hamster Ovary (CHO) cell lines, which are primarily used in the biopharmaceutical manufacturing industry to produce recombinant monoclonal antibodies (mAb). 
Different recombinant protein products often require different CHO cell lines, for example, CHO-K1, CHO-RD, or CHO.1F8 \citep{ZHANG2013127}. For this case study, we use the dataset presented in \citep{gangadharan2021data}, which was extracted from AstraZeneca upstream process development and production databases. For each cell culture, 22 throughput parameters (e.g., Glutamine and Glutamate) were recorded for 17 days. The task is to predict the mAb concentration at the endpoint day using the throughput parameters at the midpoint day \citep{gangadharan2021data}. The values of each parameter were normalized to the range between 0 and 1, and the categorical parameters were anonymised using alphabet letters. There are 30 data points from cell line A and 9 data points from cell line H, and the data distribution is visualised in Figure~\ref{fig:pca_cell} via PCA. Our aim is to build an MFGP model for knowledge transfer between cell lines A and H. Specifically, we use the data from cell line A as low-fidelity data and that from cell line H as high-fidelity data. 
\begin{figure}[!t] \centering \begin{tikzpicture} \begin{axis} [box plot width=0.20em, xlabel = \scriptsize First Principal Component of Throughput Parameters at the Midpoint Day, ylabel = \scriptsize Normalized mAb Concentration, height=0.50\textwidth,width=0.95\textwidth, grid style={line width=.1pt, draw=gray!10},major grid style={line width=.2pt,draw=gray!30}, xmajorgrids=true, ymajorgrids=true, major tick length=0.05cm, minor tick length=0.0cm, legend style={at={(0.05,0.80)},anchor=west,font=\scriptsize,draw=none}] \addplot[only marks, color=brickred, mark=*, mark size = 2.0] table[x index=0, y index=1] {\PCACellHigh}; \addlegendentry{\scriptsize Data from Cell Line H} \addplot[only marks, color=frenchblue, mark=diamond*, mark size = 2.0] table[x index=0, y index=1] {\PCACellLow}; \addlegendentry{\scriptsize Data from Cell Line A}; \end{axis} \end{tikzpicture} \caption{The distribution of data points from different cell lines. The x axis is the first principal component of the throughput parameters at the midpoint day and the y axis is the normalized mAb concentration.} \label{fig:pca_cell} \end{figure} \subsection{Feature Selection} \begin{figure}[!t] \centering \begin{tikzpicture} \begin{axis} [box plot width=0.20em, xlabel = \scriptsize Number of features selected ($N_f$), ylabel = \scriptsize RMSE of LARGP, height=0.50\textwidth,width=0.95\textwidth, grid style={line width=.1pt, draw=gray!10}, major grid style={line width=.2pt,draw=gray!30}, xmajorgrids=true, ymajorgrids=true, major tick length=0.05cm, minor tick length=0.0cm, legend style={at={(0.05,0.10)},anchor=west,font=\scriptsize,draw=none}] \addplot[color=brickred, mark=*, mark size = 2.0, line width=0.50mm, dashed] table[x index=0, y index=1] {\FeatSel}; \end{axis} \end{tikzpicture} \caption{The RMSE generated by LARGP when using different numbers of features selected by MRMR for knowledge transfer across cell lines.} \label{fig:feature_selection} \end{figure} We first use the MRMR method to select 
a subset of relevant features for this prediction task, similar to that in Section~\ref{subsec::fs_scale}. The RMSE generated by LARGP when using different numbers of selected features is shown in Figure~\ref{fig:feature_selection}. We can observe that using a subset of selected features to train the LARGP model can lead to a smaller RMSE compared to using all the features (i.e. $N_f=22$). The smallest RMSE is generated when using six features to train the LARGP model. However, as the number of high-fidelity data points in this dataset is only nine, using six features to train a machine learning model can still lead to overfitting, especially for the GP model trained only on the high-fidelity data. Hence, in our subsequent study, we will select three features, for which the RMSE generated by LARGP is also low. \subsection{Comparison between MFGP Models} We compare the performance of LARGP and NARGP on the task of knowledge transfer across cell lines. The nested data assumption is satisfied by using the same method as in Section~\ref{subsec::mgfp_comp_scale}. The RMSEs generated by both models are presented in Figure~\ref{fig:mfgp-cell}. Interestingly, the non-linear autoregressive model (NARGP) performs better than the linear autoregressive model (LARGP) on this dataset, suggesting that the correlation between the data from the two cell lines is non-linear. Hence, we will mainly consider NARGP for knowledge transfer across cell lines in our subsequent study. 
\begin{figure*}[!t] \centering \begin{tikzpicture} \begin{axis}[ybar, symbolic x coords={3, 5, 7}, xtick=data, ytick=\empty, axis y line = none, axis x line* = bottom, enlarge x limits = 0.2, nodes near coords, nodes near coords style={/pgf/number format/fixed, /pgf/number format/precision=3}, height=0.50\textwidth,width=1.0\textwidth, bar width=20pt, xticklabels={\scriptsize $N_t=3$, \scriptsize $N_t=5$,\scriptsize $N_t=7$}, major tick length=0.0cm, minor tick length=0.0cm, legend style={at={(0.40,1.05)},anchor=north,legend columns=0, font=\scriptsize,draw=none}] \addplot[color=brickred, fill=brickred] table[x=Category, y=GP-Low]{\MFGPCell};\addlegendentry{\scriptsize GP-Low} \addplot[color=frenchblue, fill=frenchblue] table[x=Category, y=GP-High] {\MFGPCell};\addlegendentry{\scriptsize GP-High} \addplot[color=darkcyan, fill=darkcyan] table[x=Category, y=GP-Cat] {\MFGPCell};\addlegendentry{\scriptsize GP-Cat} \addplot[color=carrotorange, fill=carrotorange] table[x=Category, y=AR1] {\MFGPCell};\addlegendentry{\scriptsize LARGP} \addplot[color=skymagenta, fill=skymagenta] table[x=Category, y=NARGP] {\MFGPCell};\addlegendentry{\scriptsize NARGP} \end{axis} \end{tikzpicture} \caption{The RMSE generated by the machine learning models for knowledge transfer between cell lines. $N_t$ is the number of high-fidelity data points used in training with the remaining data points used for testing. The RMSE is averaged across 30 independent runs with different random seeds.} \label{fig:mfgp-cell} \end{figure*} \subsection{Comparison to Baselines} We compare the NARGP model to three baselines: 1) GP-Low, trained on low-fidelity data only, 2) GP-High, trained on high-fidelity data only, and 3) GP-Cat, trained on both low-fidelity and high-fidelity data with the category of cell lines (i.e., 1 and 2) as an additional feature. 
Note that similar to LARGP, the GP-Cat model is equivalent to a linear autoregressive model that is able to learn any linear correlation between the low-fidelity and high-fidelity data. However, the GP-Cat model aims to minimise the prediction error for both low-fidelity and high-fidelity data, whilst the LARGP model aims to minimise the prediction error for high-fidelity data only. The RMSEs generated by all the models are presented in Figure~\ref{fig:mfgp-cell}. The GP-Low model performs much worse than the other models, suggesting that the low-fidelity data is significantly different from the high-fidelity data in this dataset and the knowledge about cell line A cannot be directly transferred to cell line H. Interestingly, the two linear autoregressive models, LARGP and GP-Cat, are generally outperformed by the GP-High model. This again indicates that the correlation between the low-fidelity and high-fidelity data in this dataset is nonlinear, requiring a more sophisticated model such as NARGP. As expected, the NARGP model achieves the lowest RMSE for this prediction task as compared to all the other models. \subsection{Effects of the Number of Training Data Points} We investigate the effect of the number of training data points on the performance of the machine learning models. To do so, we vary the number of high-fidelity data points used in training ($N_t = $ 3, 5, or 7), with the remaining high-fidelity data points used for testing. The results in Figure~\ref{fig:mfgp-cell} show that the performance of all models (except GP-Low) improves when more high-fidelity training data points are available. The performance of GP-Low does not improve because it is trained on low-fidelity data only. The advantage of the NARGP model is more pronounced when a smaller amount of high-fidelity training data is available.
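A minimal sketch of the GP-Cat construction, on synthetic data, may help make the distinction concrete; the kernel choice, noise level, and category encoding (1 and 2) are illustrative assumptions rather than the exact setup used in the experiments.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def fit_gp_cat(X_lo, y_lo, X_hi, y_hi):
    """Fit one GP over both cell lines, with the cell-line category
    appended as an extra feature (1 = low fidelity, 2 = high fidelity)."""
    X = np.vstack([np.c_[X_lo, np.ones(len(X_lo))],
                   np.c_[X_hi, 2.0 * np.ones(len(X_hi))]])
    y = np.concatenate([y_lo, y_hi])
    # A single GP fits BOTH fidelity levels jointly, unlike LARGP,
    # which targets the high-fidelity level only.
    return GaussianProcessRegressor(RBF([0.2, 1.0]), alpha=1e-4).fit(X, y)

# Synthetic example: the two levels differ by a constant shift
X_lo = np.linspace(0, 1, 20)[:, None]
X_hi = np.linspace(0, 1, 6)[:, None]
y_lo = np.sin(4 * X_lo).ravel()
y_hi = np.sin(4 * X_hi).ravel() + 1.0
model = fit_gp_cat(X_lo, y_lo, X_hi, y_hi)

# Predictions for the high-fidelity cell line use category value 2
pred_hi = model.predict(np.c_[X_hi, 2.0 * np.ones(len(X_hi))])
```

The category feature lets the GP share structure between the two cell lines while still distinguishing them at prediction time.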
\section{Conclusion} In this article, we have proposed to use a statistical machine learning approach named Multi-Fidelity Gaussian Process (MFGP) for process modelling in biomanufacturing. We have shown that this MFGP technique provides a powerful tool to deal with the small data challenge in bioprocess modelling, by making use of multiple sources of data with different levels of fidelity. We have successfully applied MFGP for bioreactor scale-up and knowledge transfer across cell lines, representing two important problems in biomanufacturing. For bioreactor scale-up, we demonstrated that using the data collected from small bioreactors as low-fidelity data can yield an improved model for the prediction task for large bioreactors. Similarly, for knowledge transfer across cell lines, we treated the data generated from different cell lines as different levels of fidelity, and showed that the pattern underlying one cell line (i.e., the one treated as low-fidelity data) can be successfully transferred to another (i.e., the one treated as high-fidelity). We showed that the MFGP technique can achieve a higher accuracy for these two prediction tasks as compared to three baseline methods, especially when the amount of training data is small. We have considered two MFGP models in this paper, a linear autoregressive model (LARGP) and a nonlinear autoregressive model (NARGP). The results showed that 1) for bioprocess scale-up, the LARGP model performs better as the corresponding dataset is too noisy for the NARGP model to learn any complex correlation between data from different scales; and 2) for knowledge transfer across cell lines, the NARGP model performs better as the correlation between the cell lines is nonlinear in the dataset considered. In future work, it will be interesting to consider more MFGP models (e.g. 
\citep{cutajar2019deep}) and develop a method that can automatically select a well-performing MFGP model based on the characteristics of the dataset under consideration. This may be achieved by using algorithm selection techniques \citep{munoz2015algorithm} and instance space analysis \citep{andres2022bifidelity}. There may exist other opportunities for leveraging MFGP in bioprocess modeling. For example, we can use simulation data generated from a mechanistic model \citep{kyriakopoulos2018kinetic} as low-fidelity data and real data from bioreactors as high-fidelity data. This can potentially lead to a novel and effective hybrid model \citep{tsopanoglou2021moving,sokolov2021hybrid}, suitable for tackling the challenge of small data in bioprocess modeling. \section{Introduction} In the era of digital revolution, computers have been increasingly used to digitize and automate manufacturing processes \citep{glassey1994artificial,graefe1999new,lasi2014industry,frank2019industry}. In biologics manufacturing, efforts have been made to improve bioprocesses via advanced data analytics such as artificial intelligence and machine learning \citep{udugama2020role,gargalo2020towards,gargalo2020towards2}. However, there is still a long way to go to achieve full automation or digital twins in biomanufacturing, due to the high uncertainty of bioprocesses that involve living organisms and large, complex molecules \citep{sokolov2021hybrid}. To achieve a digital twin, one of the main challenges is building an accurate simulation model to mimic the complex dynamics of the underlying biosystem. This is of vital importance to improving the efficiency of bioprocesses, so as to satisfy the increasing demand for bioproducts.
Having an accurate simulation model can help identify the optimal operating conditions for process control and design the best feed strategy for fed-batch cell culture, which would otherwise rely on expensive wet lab experiments that are also limited by the relatively low throughput of cell culture technologies \citep{bradford2018dynamic,duran2020multivariate}. Existing bioprocess modeling techniques can be roughly classified into three categories: mechanistic, data-driven, and hybrid \citep{solle2017between,del2019comparison}. \textit{Mechanistic} methods are physics-based models (e.g. differential equations) developed based on time-dependent mass balances of participating components in the biosystem \citep{kyriakopoulos2018kinetic}. Example mechanistic methods include the unstructured model developed by \citet{jang2000unstructured} to simulate the production of monoclonal antibodies in batch and fed-batch culture and the system of equations developed by \citet{del2017kinetic} to simulate and predict biomass growth and lutein production. Developing kinetic models requires a deep understanding of the underlying process mechanisms and significant biological knowledge. The knowledge learned via kinetic methods can typically be transferred between bioprocesses with similar underlying physical laws. However, the production of recombinant protein in mammalian cells cannot be fully described mechanistically based on our current biological knowledge and ability to measure cellular processes, so all mechanistic models of such processes involve assumptions that may impact their usefulness. In contrast, \textit{data-driven} methods are statistical or machine learning models that aim to automatically learn the underlying process mechanisms from experimental data \citep{glassey1994artificial}.
Typical examples include the reinforcement learning approach proposed by \citet{petsagkourakis2020reinforcement} for batch bioprocess optimization, the Artificial Neural Network (ANN) presented by \citet{garcia2016artificial} to predict the growth of the microalga \textit{Karlodinium veneficum}, and the ANN used in \citep{del2017efficient} to model the rate of change of the dynamic biosystem. Data-driven methods are simple and easy to develop but typically require a large amount of high-quality data to train. In addition, predictions from data-driven models may be unreliable due to their black-box nature, and they have limited use for conditions outside the training dataset. \textit{Hybrid} methods, which combine the merits of both mechanistic and data-driven methods, have gained growing interest in recent years \citep{narayanan2019new,tsopanoglou2021moving,sokolov2021hybrid,merkelbach2022hybridml}. In general, hybrid methods make use of the mechanistic framework to improve model robustness while using data-driven techniques to improve model accuracy \citep{narayanan2019new}. For example, data-driven models can be used to estimate unknown parts of mechanistic models or to reduce the errors made by mechanistic models. Mechanistic models can be used to generate a large amount of data, which can then be used to improve the training of data-driven models \citep{solle2017between}. Although many efforts have been made, the existing techniques are still insufficient to accurately capture the complex dynamics of bioprocesses \citep{sokolov2021hybrid}. Due to the biological variance of living cells \citep{fraser2001biological} and calibration or measurement errors, repeated wet lab experiments under the same conditions may lead to different system dynamics and product yield. Such a high level of uncertainty in bioprocesses makes system dynamics hard to predict, posing a significant challenge for existing techniques \citep{papathanasiou2020engineering}. 
Another challenge for bioprocess modeling is that acquiring data from wet lab experiments is very expensive (in the cost of reagents and operators, as well as time), and hence the amount of data available is typically insufficient to train an accurate and robust model. To address these challenges, we propose to use a statistical machine learning approach, Multi-Fidelity Gaussian Process (MFGP) \citep{Kennedy2000}, for bioprocess modeling. Gaussian Process (GP) is a data-driven approach that automatically learns a mapping from an input vector to an output, and therefore does not require deep knowledge of the underlying process mechanisms to build the model \citep{o1978curve,di2008biomass,bradford2018dynamic,deringer2021gaussian,petsagkourakis2021safe,del2021real}. GP can model the uncertainty naturally inherent in a bioprocess as Gaussian noise and provide an uncertainty estimate along with the prediction. Multi-fidelity GP is a more advanced learning model that can make use of multiple sources of information with different levels of fidelity to build a prediction model \citep{Kennedy2000,Gratiet2014,Perdikaris2017, Peherstorfer2018}. Hence, it is particularly suitable for bioprocess modelling, for which the amount of high-fidelity data is typically small. We apply the MFGP approach to model bioprocesses in which the amount of data is small, and demonstrate the efficacy of MFGP using two case studies: (1) bioreactor scale-up and (2) knowledge transfer across cell lines. For bioreactor scale-up, we use data collected from smaller-scale and larger-scale bioreactors as low-fidelity and high-fidelity data respectively. We show that using multiple sources of data can facilitate the development of a more robust and accurate model for larger-scale bioreactors than using the data from larger-scale bioreactors alone.
In the second case study, we treat the data collected from different cell lines as different levels of fidelity, and show that the knowledge learned from one cell line can be successfully transferred to another cell line via the MFGP approach. The contributions of this paper are summarized as follows: \begin{enumerate}[label=(\alph*)] \item We propose to use the MFGP approach for bioprocess modeling with a small amount of data. We show that the MFGP approach can potentially lead to an improved prediction model, especially when the amount of high-fidelity data is insufficient. \item We apply the MFGP approach to solve two important tasks in bioprocess modeling, bioreactor scale-up and knowledge transfer across cell lines. The performance of the MFGP approach is evaluated on real-world datasets and compared against three other baseline methods. \item We consider two typical MFGP methods, which capture the linear and nonlinear correlations between different fidelity levels, respectively. The strengths and weaknesses of the two methods are thoroughly analysed on the two bioprocess modeling tasks. \end{enumerate} \section{Computational Results} We use simulation experiments to evaluate the efficacy of the proposed methods for bioprocess modeling. We first evaluate whether GP can successfully model the uncertainty associated with an upstream fed-batch cell culture problem in Section~\ref{sec::GPvsMLP}, and then evaluate the efficacy of MFGP for bioprocess scale-up and knowledge transfer across cell lines in Sections~\ref{sec::scale-up} and \ref{sec::cell lines}, respectively.
\subsection{GP for bioprocess modeling with uncertainty}\label{sec::GPvsMLP} We train a GP model for the regression task described in Section~\ref{sec::MFGP4scale-up}, where the input to an ML model is the feed strategy for a bioreactor (i.e., the composition of feed media and daily feed volumes, which is referred to as the feed pattern), and the output is the predicted product concentration, cell viability, VCD, or TCD. We use the data set generated from bioreactors of 0.25L volume, which consists of 24 data points. We compare GP against a popular ANN model, Multi-Layer Perceptron (MLP) \citep{murtagh1991multilayer}, for this regression task. As the dataset is quite small, we only use one hidden layer with four neurons for MLP. Other parameter settings for MLP are consistent with the defaults of the scikit-learn library \citep{scikit-learn}. For visualization, we use principal component analysis \citep{wold1987principal} to reduce the dimension of the inputs to one. The learned GP and MLP models are shown in Figure~\ref{fig:gp}, where each dot represents an independent experiment in a bioreactor. We can observe that there exists uncertainty in upstream fed-batch cell culture, in the sense that similar feed media and feed patterns may yield different outputs. GP provides a natural way to capture the uncertainty associated with the processes using Gaussian noise. The GP posterior mean represents the most likely output predicted by the trained GP model, and the two standard deviation band indicates that the trained GP model is 95\% confident that the output is within this band. From Figure~\ref{fig:gp}, we can see that GP successfully captures the data points within its posterior two standard deviation band, with only a few outliers. As expected, the two standard deviation band is wide in areas that lack data observations or where the degree of noise in the training data is high.
The MLP model, in contrast, can only provide a single prediction for a given input, and hence it cannot model uncertainty in this bioprocess. In addition, the response predicted by the MLP model indicates significant overfitting. We also compare the prediction errors made by GP and MLP on each of the 24 data points in a ``leave one out'' fashion. Specifically, for each data observation (e.g., TMP01), we use the other 23 data points to train a GP or MLP model and test the trained model on that data observation. The experimental results are presented in Table~\ref{tab::GPresults}. Again the true values of the data points mostly lie within the GP posterior two standard deviation band (95\% confidence band), demonstrating the efficacy of GP for modeling bioprocesses with uncertainty. Comparing the GP posterior mean against MLP, we can observe that GP achieves a significantly smaller root mean square error (RMSE) than MLP on cell viability and VCD, while they both perform similarly on product concentration and TCD. \section{Multi-fidelity Gaussian Process} \subsection{Gaussian Process} A GP is a stochastic process consisting of random variables $\{ f(x)\ |\ x\in S \}$ indexed by the index set $S=\{x\ |\ x\in \mathbb{R}^d\}$ such that any finite subset indexed by $\{x_1,\ \ldots,\ x_n\}\subseteq S$ is multivariate Gaussian distributed with mean $[m(x_1), \ldots, m(x_n)]^\top$ and covariance $k(x_i,x_j)=E\big{(}(f(x_i)-m(x_i))(f(x_j)-m(x_j))\big{)}$ where $m(x)=E(f(x))$ \citep{Rasmussen2018}: \begin{equation} f \sim GP(m(x),\ k(x_i,x_j)). \end{equation} Gaussian processes can be applied to regression analysis, where $f$ represents the latent function to fit to observations $y(x_i)$ that are related through the observation model $y(x_i) = f(x_i) + \epsilon$ with $\epsilon \sim iid\ N(0,\sigma^2)$ representing measurement noise.
The kernel function $k(x_i,x_j)$ is expressed in terms of learnable hyperparameters, typically under a stationarity assumption, and encodes the general shape of the function. A common choice of kernel function is the radial basis function (RBF) kernel, with length scale $\lambda$ as a hyperparameter, defined as: \begin{equation} k(x_i,x_j) = \exp \frac{-||x_i-x_j||^2}{2\lambda^2}. \end{equation} Without loss of generality it can be assumed that $m(x)=0$, which has a negligible effect on the predictive distribution, particularly when the training data is standardised. The predictive distribution $f(x_*) | D, \theta, x_*$ of the latent function conditioned on test input $x_*$, training data $D$ and hyperparameters $\theta$ is Gaussian distributed with mean $E(f(x_*))$ and variance $V(f(x_*))$ given by: \begin{align} E(f(x_*)) &= k_*^T(K+\sigma I)^{-1}y,\\ V(f(x_*)) &= k(x_*, x_*) - k_*^T(K+\sigma I)^{-1}k_*, \end{align} where $k_*$ is a column vector of covariances between $f(x_*)$ and the training observations, and $K$ is a kernel matrix with entries $K_{ij} = k(x_i, x_j)$. An example of the predictive distribution of GP is shown in Figure~\ref{fig:illustration}. Model selection can be done by choosing a hyperparameter setting $\hat \theta$ that maximises the marginal log likelihood (MLL) $p(Y|\theta, X)$, which is the log probability of the observed response $Y$ being a realisation of the model given the corresponding input $X$, in a procedure known as maximum likelihood type-II (ML-II) estimation, a frequentist approximation to a fully Bayesian treatment: \begin{align} \log p(Y|\theta, X) &= -\frac{1}{2}(y^T(K+\sigma I)^{-1}y+ \log |K+\sigma I| + n \log 2\pi),\\ \hat\theta &= \text{argmax}_{\theta} \log p(Y|X,\ \theta). \end{align} The MLL penalises model misfit $y^T(K+\sigma I)^{-1}y$ and model complexity $\log |K+\sigma I|$, trading off goodness of fit against overfitting.
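The predictive equations and the MLL above translate directly into a few lines of NumPy; the length scale and noise values below are illustrative assumptions rather than fitted hyperparameters.

```python
import numpy as np

def rbf(A, B, lam=0.3):
    # k(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 lam^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * lam ** 2))

def gp_posterior(X, y, Xs, lam=0.3, sigma=1e-6):
    # E(f(x_*)) = k_*^T (K + sigma I)^{-1} y
    # V(f(x_*)) = k(x_*, x_*) - k_*^T (K + sigma I)^{-1} k_*
    Kn = rbf(X, X, lam) + sigma * np.eye(len(X))
    ks = rbf(X, Xs, lam)
    mean = ks.T @ np.linalg.solve(Kn, y)
    var = np.diag(rbf(Xs, Xs, lam)) - np.sum(ks * np.linalg.solve(Kn, ks), axis=0)
    return mean, var

def log_marginal_likelihood(X, y, lam=0.3, sigma=1e-6):
    # log p(y | theta, X) = -1/2 (y^T (K+sigma I)^{-1} y + log|K+sigma I| + n log 2 pi)
    Kn = rbf(X, X, lam) + sigma * np.eye(len(X))
    return -0.5 * (y @ np.linalg.solve(Kn, y)
                   + np.linalg.slogdet(Kn)[1] + len(y) * np.log(2.0 * np.pi))
```

In practice $\lambda$ and $\sigma$ would be chosen by maximising the MLL over a grid or by gradient ascent, as in the ML-II procedure above.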
\subsection{Multi-fidelity Gaussian Process} It is assumed that fidelity levels $\{X_i, Y_i\}_{1\leq i \leq L}$ are available in the training dataset that describe the highest fidelity level $Y_L$ with varying degrees of fidelity, where $X_i$ is the input that generated the response $Y_i$. For fidelity levels $i<j$, observations at level $i$ are a lower-fidelity estimate of $Y_L$ than those at level $j$, forming a hierarchy. It is assumed that observations $y_i \in Y_t$ for corresponding input $x_i\in X_t$ are related to an underlying latent function $f_t$ through the observation model $y_i = f_t(x_i) + \epsilon$ with $\epsilon \sim iid\ N(0, \sigma^2)$ representing observation noise. The fidelity levels can be related through an autoregressive function $f$ that expresses level $t$ in terms of level $t-1$ for $t\geq 2$, which exploits the correlation between levels for their prediction: \begin{equation}\label{eq:0} \begin{aligned} f_t(x) &= f(f_{t-1}(x), x). \end{aligned} \end{equation} An example of the MFGP approach is shown in Figure~\ref{fig:illustration}. \begin{figure}[!t] \centering \resizebox{\textwidth}{!}{ \includegraphics[scale=0.5]{figures/GP_example.png} \includegraphics[scale=0.5]{figures/MFGP_example.png} } \caption{An illustration of the GP and MFGP models. In the left figure, only six data points from the high fidelity function are available to train a GP model, of which the posterior mean and two standard deviation band are shown. In the right figure, in addition to the six high-fidelity data points, twelve data points from the low fidelity function are available.
An MFGP model is trained using both the low-fidelity and high-fidelity data, with the posterior mean and two standard deviation band shown.} \label{fig:illustration} \end{figure} \subsubsection{Linear Autoregressive Gaussian Process (LARGP)} A family of multi-fidelity models, known as multi-fidelity Gaussian process models, exists in the literature; the models differ in the choice of $f$, and Gaussian processes are assigned as priors to the equation terms, resulting in a non-parametric, uncertainty-propagating Bayesian statistical model that is ideal for small-data regimes. The seminal work of \citet{Kennedy2000} assumes fidelity level $t$ is a linear model of the lower fidelity level $t-1$ in terms of a hyperparameter scaling factor $\rho_t$ plus some error correction $\delta_t(x)$ that is assigned a GP prior: \begin{equation}\label{eq:1} \begin{aligned} f_1(x)& = \delta_1(x),\\ f_t(x) &= \rho_t f_{t-1}(x) + \delta_t(x), \text{ for $t\geq 2$}. \end{aligned} \end{equation} By assuming independence between $\delta_t(x)$ and the lower fidelity levels $(f_{m}(x))_{m < t}$, a closed-form solution can be derived. Due to this assumption, given $f_{t-1}(x)$ nothing more can be learned from lower fidelity levels about $f_t(x)$ (Markov property). \citet{Gratiet2014} propose a computationally efficient recursive method for computing the posterior, involving sequentially fitting independent multi-fidelity models to fidelity level pairs $(t-1,t)$, beginning with the lowest fidelity level, under a nested data assumption $X_j \subseteq X_i$ for fidelity levels $i<j$. The key difference in their solution is to replace $f_{t-1}(x)$ in Equation~\eqref{eq:1} with its posterior $f_{t-1}^*(x)$ conditioned on all lower fidelity levels. As they prove, the resulting predictive distribution is identical to that of the coupled model proposed by \citet{Kennedy2000}.
In this model, each fidelity level can be modelled with GP regression over the observations for the corresponding fidelity level and the posterior of the lower fidelity level $f_{t-1}^*(x)$, where the predictive distribution for test input $x_*$ is Gaussian distributed with mean $E(f_t (x_*))$ and variance $V(f_t (x_*))$ given by: \begin{align}\label{eq:3} E(f_t (x_*)) &= \rho_t E(f_{t-1} (x_*))+\mu_t+ k_*^T(K_t+\sigma_t I)^{-1}\large{(}\normalsize y_t -\rho_t E(f_{t-1} (X_t))-\mu_t\large{)},\\ V(f_t (x_*)) &= \rho_t^2 V(f_{t-1} (x_*))+k(x_*, x_*)- k_*^T(K_t+\sigma_t I)^{-1}k_*, \end{align} where $\mu_t$ is the mean of $\delta_t$, $K_t$ is the kernel matrix of the response $f_t(x)$ defined in terms of kernel hyperparameters $\theta_t$, $k_*$ is a column vector of covariances between the test point and the training points, and $E(f_{t-1} (X_t))$ is the posterior mean of the lower fidelity level evaluated at the training inputs $X_t$. The time complexity is significantly reduced to $O(\sum_i n_i^3)$ due to the inversion of square matrices of size $n_i$ for the sequential inference of each fidelity level $i$ having $n_i$ observations. In the following sections, this recursive model, denoted LARGP, is used in the application to bioprocess modelling. \subsubsection{Nonlinear Autoregressive Gaussian Process (NARGP)} \citet{Perdikaris2017} replace the restrictive linear function that relates adjacent fidelities in Equation~\eqref{eq:1} with an expressive GP $F_t$ that can capture nonlinear relationships: \begin{equation}\label{eq:4} \begin{aligned} f_1(x) &= \delta_{1}(x), \\ f_t(x) &= F_{t}(f_{t-1}(x),x), \; \text{ for $t \geq 2$}, \end{aligned} \end{equation} where $\delta_1$ is a GP over the input $x$. Since $f_t$ is then a composition of GPs, known as a Deep Gaussian Process (DGP), the posterior is no longer Gaussian \citep{Damianou2012}. To arrive at a solution, \citet{Perdikaris2017} replace $f_{t-1}(x)$ with its posterior $f_{t-1}^*(x)$, which assumes $f_t$ is independent of all lower fidelity levels given $f_{t-1}$.
The chosen kernel is: \begin{equation}\label{eq:5} \begin{aligned} K_1 &= K_b^1(x_i, x_j), \\ K_t &= K_d^t(x_i, x_j)K_f^t(f_{t-1}(x_i),f_{t-1}(x_j)) + K_b^t(x_i, x_j), \; \text{ for $t \geq 2$}, \end{aligned} \end{equation} where RBF kernels $K_f$ and $K_d$ describe the interaction between the input $x$ and the lower fidelity $f_{t-1}$, and RBF kernel $K_b$ represents the covariance of the bias. The time complexity is similar to that of the recursive solution to LARGP proposed by \citet{Gratiet2014}. \section*{Acknowledgement} This research was financially supported by the Victorian Higher Education State Investment Fund (VHESIF) Smarter, Faster, Biopharma and Food Manufacturing and CSL Innovation Ltd. \section{Bioprocess Scale-up}\label{sec::MFGP4scale-up} Bioprocess scale-up plays an important role in biomanufacturing, as it underlies the transition from laboratory-scale early process development to a large-scale production environment \citep{lindskog2018upstream,richelle2020analysis}. This often involves a change in the fermentation conditions, ranging from the geometries of the bioreactors to the operating speed or flow rates of substrate supply and additions of reagents. Although small-scale data is easier to acquire, with lower experimentation cost and faster turnaround time, such data may not accurately represent the production-scale environment. As bioprocesses involve living cells which adapt and react differently to their environment, scale-up can significantly change the cell culture behaviour, resulting in changes in productivity, yield and product quality. Consequently, the bioprocesses do not simply scale according to volumetric changes, but may require a modified model to represent the new biodynamics at the larger scale \citep{xia2015advances}. These differences in data fidelity pose a challenge in using lab-scale results as training data to model bioprocesses across different scales.
Existing studies have used multivariate data analysis for bioprocess scale-up \citep{mercier2013multivariate,tescione2015application}. Such a simple statistical approach is often inadequate to address the challenges inherent in bioprocess modeling. In this paper, we propose to use a more advanced statistical machine learning approach, MFGP, which naturally treats the data from different scales as different levels of fidelity. The MFGP approach has the potential to automatically learn complex relationships between the data generated from different scales, leading to a more robust machine learning model, even if the amount of high-fidelity data is small. Hence, MFGP is very suitable for bioprocess scale-up, as the amount of available data from large-scale bioreactors is typically small due to high data acquisition cost. \subsection{Dataset} As a case study, we use the data generated from 40 fed-batch bioreactors with a culture period of 14 days. This includes 24 data points from Ambr® 250 bioreactors at the 0.25L scale and 16 data points from bioreactors at the 5L scale. The same CHO cell line was fermented across all bioreactors under different feeding strategies. For each strategy, we recorded the concentration of twenty amino acid components in the feed medium. The target variable is the end-point (at day 14) measurement of recombinant protein concentration, which is normalized into a range of 0 to 1. The distribution of the 40 data points is shown in Figure~\ref{fig:pca_scale} via Principal Component Analysis (PCA). Our goal is to model the effects of the feed medium on the recombinant protein concentration, which can further be used to optimise the feeding strategy at different scales. We build an MFGP model for the prediction task for 5L bioreactors, treating the 0.25L mini-bioreactor data as the low-fidelity level and the 5L-scale bioreactor data as the high-fidelity level.
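As a sketch of how such a model can be assembled for two fidelity levels, the recursive LARGP scheme is outlined below on synthetic data (the bioreactor dataset is not reproduced here). scikit-learn's GaussianProcessRegressor stands in for the GP fits, and estimating the scaling factor $\rho$ by least squares is a simplification of the joint hyperparameter optimisation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic nested data: high fidelity = 2 * low fidelity + linear correction
f_lo = lambda x: np.sin(8 * x)
f_hi = lambda x: 2.0 * np.sin(8 * x) + (x - 0.5)

X_lo = np.linspace(0, 1, 20)[:, None]
X_hi = X_lo[::4]                       # nested: every high input has a low input
y_lo, y_hi = f_lo(X_lo).ravel(), f_hi(X_hi).ravel()

# Step 1: fit a GP to the low-fidelity level
gp_lo = GaussianProcessRegressor(RBF(0.1), alpha=1e-6).fit(X_lo, y_lo)

# Step 2: estimate rho by least squares, then fit a GP to the residual delta_t
mu_lo = gp_lo.predict(X_hi)
rho = (mu_lo @ y_hi) / (mu_lo @ mu_lo)
gp_delta = GaussianProcessRegressor(RBF(0.3), alpha=1e-6).fit(X_hi, y_hi - rho * mu_lo)

def predict_hi(Xs):
    # f_t(x) = rho * f_{t-1}(x) + delta_t(x), with f_{t-1} replaced by its posterior
    return rho * gp_lo.predict(Xs) + gp_delta.predict(Xs)
```

The abundant low-fidelity points pin down the oscillatory shape, so the few high-fidelity points only need to identify the scale factor and a smooth correction.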
\begin{figure}[!t] \centering \begin{tikzpicture} \begin{axis} [box plot width=0.20em, xlabel = \scriptsize First Principal Component of Feed Medium, ylabel = \scriptsize Normalized Product Concentration, height=0.50\textwidth,width=0.95\textwidth, grid style={line width=.1pt, draw=gray!10},major grid style={line width=.2pt,draw=gray!30}, xmajorgrids=true, ymajorgrids=true, major tick length=0.05cm, minor tick length=0.0cm, legend style={at={(0.65,0.80)},anchor=west,font=\scriptsize,draw=none}] \addplot[only marks, color=brickred, mark=*, mark size = 2.0] table[x index=0, y index=1] {\PCAScaleHigh}; \addlegendentry{\scriptsize Data from 5L Bioreactors} \addplot[only marks, color=frenchblue, mark=diamond*, mark size = 2.0] table[x index=0, y index=1] {\PCAScaleLow}; \addlegendentry{\scriptsize Data from 0.25L Bioreactors}; \end{axis} \end{tikzpicture} \caption{The distribution of data points from bioreactors with different scales. The x axis is the first principal component of the feed medium and the y axis is the normalized recombinant protein concentration.} \label{fig:pca_scale} \end{figure} \subsection{Feature Selection}\label{subsec::fs_scale} In this dataset, the number of high-fidelity data points is less than the number of features (i.e. amino acids). Using all features to train a machine learning model would likely lead to overfitting. We therefore identify a subset of important features that are most relevant to the target (i.e. recombinant protein concentration), and use the selected subset of features to train GP models. To do so, we use a widely used information-theoretic feature selection method, Minimum-Redundancy Maximum-Relevance (MRMR) \citep{peng2005feature}, which uses (conditional) mutual information to measure the relevance between features and the target.
As the features and target in this dataset are continuous variables, we use the Minimum Description Length method \citep{fayyad1993multi} to evenly divide the continuous values into five bins, following \citep{sun2020revisiting}. Note that the discretized dataset is only used for the feature selection procedure, whilst the original continuous dataset is used in the subsequent prediction tasks. The MRMR method essentially provides a ranking of the features based on their relevance to the target. To determine an appropriate subset size, we vary the number of selected features ($N_f$) from 1 to 20, and train a separate LARGP model using the selected features. For each model, fifteen high-fidelity data points are used in training and one data point is left for testing. The root mean square errors (RMSE) averaged across 30 independent runs (with different random seeds) are presented in Figure~\ref{fig:feature_selection_scale}. The results show that feature selection is beneficial in the sense that using a subset of selected features to train the LARGP model can result in a smaller RMSE than using all the features (i.e. $N_f=20$). The smallest RMSE is achieved when using one or eight features. In the following, we will select eight features to train all the models. 
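As a rough sketch of how such a ranking can be computed, the greedy relevance-minus-redundancy criterion is outlined below on synthetic data. It uses scikit-learn's k-NN mutual information estimator on the continuous variables directly, instead of the MDL discretisation described above, so it is a stand-in for the exact procedure rather than a reimplementation.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def mrmr_rank(X, y, n_select, seed=0):
    """Greedy max-relevance min-redundancy feature ranking.

    Relevance: mutual information between a feature and the target.
    Redundancy: mean mutual information with already-selected features.
    """
    relevance = mutual_info_regression(X, y, random_state=seed)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        scores = {}
        for j in remaining:
            if selected:
                redundancy = mutual_info_regression(
                    X[:, selected], X[:, j], random_state=seed).mean()
            else:
                redundancy = 0.0
            scores[j] = relevance[j] - redundancy  # MRMR criterion
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected
```

On data where one informative feature has a near-duplicate, the redundancy term discourages selecting both copies, which is the behaviour that distinguishes MRMR from a plain relevance ranking.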
\begin{figure}[!t] \centering \begin{tikzpicture} \begin{axis} [box plot width=0.20em, xlabel = \scriptsize Number of features selected ($N_f$), ylabel = \scriptsize RMSE of LARGP, height=0.50\textwidth,width=0.95\textwidth, grid style={line width=.1pt, draw=gray!10},major grid style={line width=.2pt,draw=gray!30}, xmajorgrids=true, ymajorgrids=true, major tick length=0.05cm, minor tick length=0.0cm, legend style={at={(0.05,0.10)},anchor=west,font=\scriptsize,draw=none}] \addplot[color=brickred, mark=*, mark size = 2.0, line width=0.50mm, dashed] table[x index=0, y index=1] {\FeatSelScale}; \end{axis} \end{tikzpicture} \caption{The RMSE generated by LARGP when using different numbers of features selected by MRMR for bioprocess scale-up.} \label{fig:feature_selection_scale} \end{figure} \subsection{Comparison between MFGP Models}\label{subsec::mgfp_comp_scale} We consider two MFGP models, LARGP and NARGP, for the prediction task. It is important to note that the MFGP models require a nested data structure in the training data, which means that for every high-fidelity data point, there must exist a corresponding low-fidelity data point with the same feature vector (feed medium). However, this assumption might be violated, as can be seen in Figure~\ref{fig:pca_scale}. To address this, we train a GP model on the low-fidelity data and use the trained model to sample new data points to satisfy the nested data assumption.
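The imputation step just described can be sketched as follows: a GP with an RBF kernel is fitted to the low-fidelity data, and its posterior mean is evaluated at the high-fidelity inputs to create the missing low-fidelity counterparts. The kernel hyperparameters and the synthetic data below are our own illustrative assumptions, not the values used in the study.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Squared-exponential kernel matrix between two sets of inputs.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(X_tr, y_tr, X_te, noise=1e-3):
    # Posterior mean of a zero-mean GP regression model.
    K = rbf(X_tr, X_tr) + noise * np.eye(len(X_tr))
    return rbf(X_te, X_tr) @ np.linalg.solve(K, y_tr)

# Toy setting: plentiful low-fidelity runs, a few high-fidelity inputs
# without matching low-fidelity observations.
rng = np.random.default_rng(1)
X_low = rng.uniform(-1, 1, (40, 3))
y_low = np.sin(X_low.sum(axis=1)) + 0.05 * rng.standard_normal(40)
X_high = rng.uniform(-1, 1, (8, 3))

# Impute low-fidelity values at the high-fidelity inputs so that every
# high-fidelity point has a low-fidelity counterpart (nested data).
y_low_at_high = gp_posterior_mean(X_low, y_low, X_high)
```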
\begin{figure*}[!t] \centering \begin{tikzpicture} \begin{axis}[ybar, symbolic x coords={6, 10, 14}, xtick=data, ytick=\empty, axis y line = none, axis x line* = bottom, enlarge x limits = 0.2, nodes near coords={\pgfmathprintnumber[fixed zerofill, precision=2]{\pgfplotspointmeta}}, height=0.50\textwidth,width=1.0\textwidth, bar width=20pt, xticklabels={\scriptsize $N_t=6$, \scriptsize $N_t=10$,\scriptsize $N_t=14$}, major tick length=0.0cm, minor tick length=0.0cm, legend style={at={(0.7,0.95)},anchor=north,legend columns=0, font=\scriptsize,draw=none}] \addplot[color=brickred, fill=brickred] table[x=Category, y=GP-Low] {\MFGPScale};\addlegendentry{\scriptsize GP-Low} \addplot[color=frenchblue, fill=frenchblue] table[x=Category, y=GP-high] {\MFGPScale};\addlegendentry{\scriptsize GP-High} \addplot[color=darkcyan, fill=darkcyan] table[x=Category, y=GP-vol] {\MFGPScale};\addlegendentry{\scriptsize GP-Vol} \addplot[color=carrotorange, fill=carrotorange] table[x=Category, y=AR1] {\MFGPScale};\addlegendentry{\scriptsize LARGP} \addplot[color=skymagenta, fill=skymagenta] table[x=Category, y=NARGP] {\MFGPScale};\addlegendentry{\scriptsize NARGP} \end{axis} \end{tikzpicture} \caption{The RMSE generated by the machine learning models for bioprocess scale-up. $N_t$ is the number of high-fidelity data points used in training with the remaining data points used for testing. The RMSE is averaged across 30 independent runs with different random seeds.} \label{fig:mfgp-scale} \end{figure*} The results generated by LARGP and NARGP are presented in Figure~\ref{fig:mfgp-scale}. The linear autoregressive model, LARGP, performs much better than the non-linear autoregressive model, NARGP, on this dataset. The RMSE generated by the NARGP model is the same as that of GP-High, which is a GP model trained on high-fidelity data only. 
This indicates that the NARGP model fails to learn any correlation between high-fidelity and low-fidelity data, possibly due to the high uncertainty inherent in the dataset, as can be seen in Figure~\ref{fig:pca_scale}. Hence, we will mainly consider LARGP for bioprocess scale-up in the following. \subsection{Comparison to Baselines} We compare the LARGP model to three baselines: 1) GP-Low, trained on low-fidelity data only, 2) GP-High, trained on high-fidelity data only, and 3) GP-Vol, trained on both low-fidelity and high-fidelity data with the volume of bioreactors as an additional feature. The RMSE generated by each method is shown in Figure~\ref{fig:mfgp-scale}. The LARGP model generally achieves the lowest RMSE for the task of bioprocess scale-up compared to all other methods. This demonstrates its efficacy in learning the correlation between data from different fidelities and its smart use of multiple sources of information to build a better predictive model. GP-Vol is essentially another linear autoregressive model, also learning the correlation between low-fidelity and high-fidelity data. However, it focuses on predicting for both low-fidelity and high-fidelity data, in contrast to LARGP, which aims to predict for high-fidelity data only. \subsection{Effects of the Number of Training Data Points} There are 16 high-fidelity data points in total in our dataset. Here, we vary the number of high-fidelity data points used to train the GP models ($N_t = $ 6, 10, or 14), and the results are shown in Figure~\ref{fig:mfgp-scale}. As the number of high-fidelity training data points increases, the RMSE generated by all models (except GP-Low) decreases. The GP-Low model is trained on low-fidelity data only, so its performance does not improve when the number of high-fidelity training data points increases. When $N_t = 6$, the RMSE generated by GP-Low is much smaller than that of GP-High, indicating that the low-fidelity data is related to the high-fidelity data.
As the number of high-fidelity training data points increases, the difference between LARGP, GP-High, and GP-Vol decreases. This suggests that the methodology of utilising multiple sources of data to build a predictive model is more significant when the number of high-fidelity training data points is insufficient. In other words, if the number of high-fidelity training data points is sufficient, we can simply use the GP-High model. \section{Knowledge Transfer across Cell Lines}\label{sec::MFGP4cell-lines} Similar to scale, cell line is another crucial process parameter that alters cell culture behaviour. A cell line is a population of transformed cells that can divide indefinitely while maintaining stability of certain phenotypes and functions \citep{lindskog2018host}. Different cell lines are developed for each biological product through a process of cell line development. A desirable cell line should be highly adaptive to the fermentation conditions, such that it can retain a stable and high growth rate and productivity \citep{CASTAN2018131}. Knowledge transfer across cell lines is highly desirable when developing a new bioproduct, as it can potentially harness the historical experimental data of different cell lines to reduce the high cost of generating new data \citep{hutter2021knowledge,rogers2022transfer}. However, the underlying relationship between different cell lines is often complex and difficult for data-driven methods to learn. Here, we propose that the MFGP approach is a powerful technique that can automatically learn the relationship between different cell lines from data and provide a natural approach for knowledge transfer across cell lines. \subsection{Dataset} As a case study, we focus on Chinese Hamster Ovary (CHO) cell lines, which are primarily used in the biopharmaceutical manufacturing industry to produce recombinant monoclonal antibodies (mAb).
Different recombinant protein products often require different CHO cell lines, for example, CHO-K1, CHO-RD, or CHO.1F8 \citep{ZHANG2013127}. For this case study, we use the dataset presented in \citep{gangadharan2021data}, which was extracted from AstraZeneca upstream process development and production databases. For each cell culture, the 22 throughput parameters (e.g., Glutamine and Glutamate) were recorded for 17 days. The task was to predict the mAb concentration at the endpoint day using the throughput parameters at the midpoint day \citep{gangadharan2021data}. The values of each parameter were normalized to the range between 0 and 1, and the categorical parameters were anonymised using letters of the alphabet. There are 30 data points from cell line A and 9 data points from cell line H, and the data distribution is visualised in Figure~\ref{fig:pca_cell} via PCA. Our aim is to build an MFGP model for knowledge transfer between cell lines A and H. Specifically, we use the data from cell line A as low-fidelity data and that from cell line H as high-fidelity data.
\begin{figure}[!t] \centering \begin{tikzpicture} \begin{axis} [box plot width=0.20em, xlabel = \scriptsize First Principal Component of Throughput Parameters at the Midpoint Day, ylabel = \scriptsize Normalized mAb Concentration, height=0.50\textwidth,width=0.95\textwidth, grid style={line width=.1pt, draw=gray!10},major grid style={line width=.2pt,draw=gray!30}, xmajorgrids=true, ymajorgrids=true, major tick length=0.05cm, minor tick length=0.0cm, legend style={at={(0.05,0.80)},anchor=west,font=\scriptsize,draw=none}] \addplot[only marks, color=brickred, mark=*, mark size = 2.0] table[x index=0, y index=1] {\PCACellHigh}; \addlegendentry{\scriptsize Data from Cell Line H} \addplot[only marks, color=frenchblue, mark=diamond*, mark size = 2.0] table[x index=0, y index=1] {\PCACellLow}; \addlegendentry{\scriptsize Data from Cell Line A}; \end{axis} \end{tikzpicture} \caption{The distribution of data points from different cell lines. The x axis is the first principal component of the throughput parameters at the midpoint day and the y axis is the normalized mAb concentration.} \label{fig:pca_cell} \end{figure} \subsection{Feature Selection} \begin{figure}[!t] \centering \begin{tikzpicture} \begin{axis} [box plot width=0.20em, xlabel = \scriptsize Number of features selected ($N_f$), ylabel = \scriptsize RMSE of LARGP, height=0.50\textwidth,width=0.95\textwidth, grid style={line width=.1pt, draw=gray!10}, major grid style={line width=.2pt,draw=gray!30}, xmajorgrids=true, ymajorgrids=true, major tick length=0.05cm, minor tick length=0.0cm, legend style={at={(0.05,0.10)},anchor=west,font=\scriptsize,draw=none}] \addplot[color=brickred, mark=*, mark size = 2.0, line width=0.50mm, dashed] table[x index=0, y index=1] {\FeatSel}; \end{axis} \end{tikzpicture} \caption{The RMSE generated by LARGP when using different number of features selected by MRMR for knowledge transfer across cell lines.} \label{fig:feature_selection} \end{figure} We first use the MRMR method to select 
a subset of relevant features for this prediction task, similar to that in Section~\ref{subsec::fs_scale}. The RMSE generated by LARGP when using different numbers of selected features is shown in Figure~\ref{fig:feature_selection}. We can observe that using a subset of selected features to train the LARGP model can lead to a smaller RMSE as compared to using all the features (i.e. $N_f=22$). The smallest RMSE is generated when using six features to train the LARGP model. However, as the number of high-fidelity data points in this dataset is only nine, using six features to train a machine learning model can still lead to overfitting, especially for the GP model trained only on the high-fidelity data. Hence, in our subsequent study, we will select three features, for which the RMSE generated by LARGP is also low. \subsection{Comparison between MFGP Models} We compare the performance of LARGP and NARGP on the task of knowledge transfer across cell lines. The nested data assumption is satisfied by using the same method as in Section~\ref{subsec::mgfp_comp_scale}. The RMSEs generated by both models are presented in Figure~\ref{fig:mfgp-cell}. Interestingly, the non-linear autoregressive model (NARGP) performs better than the linear autoregressive model (LARGP) on this dataset, suggesting that the correlation between the data from the cell lines is non-linear. Hence, we will mainly consider NARGP for knowledge transfer across cell lines in our subsequent study.
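The structural difference between the two autoregressive models can be made concrete with a small sketch (our own illustrative code on synthetic data, not the paper's implementation). LARGP posits a linear fidelity link $y_{high}(x) = \rho\, y_{low}(x) + \delta(x)$ with a scalar $\rho$ and an independent residual process $\delta$, whereas NARGP feeds the low-fidelity prediction into a second GP as an extra input, so the link between fidelities may be an arbitrary nonlinear function.

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_mean(X_tr, y_tr, X_te, noise=1e-3):
    # Posterior mean of a zero-mean GP with an RBF kernel.
    K = rbf(X_tr, X_tr) + noise * np.eye(len(X_tr))
    return rbf(X_te, X_tr) @ np.linalg.solve(K, y_tr)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (12, 3))                 # nested inputs (synthetic)
y_low = np.sin(X.sum(axis=1))                   # low-fidelity observations
y_high = np.tanh(2.0 * y_low) + 0.1 * X[:, 0]   # nonlinear link to high fidelity

# LARGP-style: scalar rho by least squares; the residual would then be
# modelled by an independent delta-GP.
rho = float(y_low @ y_high / (y_low @ y_low))
residual = y_high - rho * y_low

# NARGP-style: augment the inputs with the stage-1 (low-fidelity GP)
# prediction and fit a second GP, learning g(x, y_low(x)) nonlinearly.
Z = np.hstack([X, y_low[:, None]])
X_new = rng.uniform(-1, 1, (5, 3))
mu_low_new = gp_mean(X, y_low, X_new)           # stage-1 prediction at new inputs
Z_new = np.hstack([X_new, mu_low_new[:, None]])
y_pred = gp_mean(Z, y_high, Z_new)
```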
\begin{figure*}[!t] \centering \begin{tikzpicture} \begin{axis}[ybar, symbolic x coords={3, 5, 7}, xtick=data, ytick=\empty, axis y line = none, axis x line* = bottom, enlarge x limits = 0.2, nodes near coords, nodes near coords style={/pgf/number format/fixed, /pgf/number format/precision=3}, height=0.50\textwidth,width=1.0\textwidth, bar width=20pt, xticklabels={\scriptsize $N_t=3$, \scriptsize $N_t=5$,\scriptsize $N_t=7$}, major tick length=0.0cm, minor tick length=0.0cm, legend style={at={(0.40,1.05)},anchor=north,legend columns=0, font=\scriptsize,draw=none}] \addplot[color=brickred, fill=brickred] table[x=Category, y=GP-Low]{\MFGPCell};\addlegendentry{\scriptsize GP-Low} \addplot[color=frenchblue, fill=frenchblue] table[x=Category, y=GP-High] {\MFGPCell};\addlegendentry{\scriptsize GP-High} \addplot[color=darkcyan, fill=darkcyan] table[x=Category, y=GP-Cat] {\MFGPCell};\addlegendentry{\scriptsize GP-Cat} \addplot[color=carrotorange, fill=carrotorange] table[x=Category, y=AR1] {\MFGPCell};\addlegendentry{\scriptsize LARGP} \addplot[color=skymagenta, fill=skymagenta] table[x=Category, y=NARGP] {\MFGPCell};\addlegendentry{\scriptsize NARGP} \end{axis} \end{tikzpicture} \caption{The RMSE generated by the machine learning models for knowledge transfer between cell lines. $N_t$ is the number of high-fidelity data points used in training with the remaining data points used for testing. The RMSE is averaged across 30 independent runs with different random seeds.} \label{fig:mfgp-cell} \end{figure*} \subsection{Comparison to Baselines} We compare the NARGP model to three baselines: 1) GP-Low, trained on low-fidelity data only, 2) GP-High, trained on high-fidelity data only, and 3) GP-Cat, trained on both low-fidelity and high-fidelity data with the category of cell lines (i.e., 1 and 2) as an additional feature. 
Note that similar to LARGP, the GP-Cat model is equivalent to a linear autoregressive model that is able to learn any linear correlation between the low-fidelity and high-fidelity data. However, the GP-Cat model aims to minimise the prediction error for both low-fidelity and high-fidelity data, whilst the LARGP model aims to minimise the prediction error for high-fidelity data only. The RMSEs generated by all the models are presented in Figure~\ref{fig:mfgp-cell}. The GP-Low model performs much worse than other models, suggesting that the low-fidelity data is significantly different from high-fidelity data in this dataset and the knowledge about cell line A cannot be directly transferred to cell line H. Interestingly, the two linear autoregressive models, LARGP and GP-Cat, are generally outperformed by the GP-High model. This again indicates that the correlation between low-fidelity and high-fidelity data in this dataset is nonlinear, requiring a more sophisticated model such as NARGP. As expected, the NARGP model achieves the lowest RMSE for this prediction task as compared to all other models. \subsection{Effects of the Number of Training Data Points} We investigate the effect of the number of training data points on the performance of the machine learning models. To do so, we vary the number of high-fidelity data points used in training ($N_t = $ 3, 5, or 7), with the remaining high-fidelity data points for testing. The results in Figure~\ref{fig:mfgp-cell} show that the performance of all models improves (except GP-Low) when more high-fidelity training data points are available. The performance of GP-Low does not improve as it is trained on low-fidelity data. The efficacy of the NARGP model is more significant when a smaller amount of high-fidelity training data is available. 
\section{Conclusion} In this article, we have proposed to use a statistical machine learning approach named Multi-Fidelity Gaussian Process (MFGP) for process modelling in biomanufacturing. We have shown that this MFGP technique provides a powerful tool to deal with the small data challenge in bioprocess modelling, by making use of multiple sources of data with different levels of fidelity. We have successfully applied MFGP for bioreactor scale-up and knowledge transfer across cell lines, representing two important problems in biomanufacturing. For bioreactor scale-up, we demonstrated that using the data collected from small bioreactors as low-fidelity data can yield an improved model for the prediction task for large bioreactors. Similarly, for knowledge transfer across cell lines, we treated the data generated from different cell lines as different levels of fidelity, and showed that the pattern underlying one cell line (i.e., the one treated as low-fidelity data) can be successfully transferred to another (i.e., the one treated as high-fidelity). We showed that the MFGP technique can achieve a higher accuracy for these two prediction tasks as compared to three baseline methods, especially when the amount of training data is small. We have considered two MFGP models in this paper, a linear autoregressive model (LARGP) and a nonlinear autoregressive model (NARGP). The results showed that 1) for bioprocess scale-up, the LARGP model performs better as the corresponding dataset is too noisy for the NARGP model to learn any complex correlation between data from different scales; and 2) for knowledge transfer across cell lines, the NARGP model performs better as the correlation between the cell lines is nonlinear in the dataset considered. In future work, it will be interesting to consider more MFGP models (e.g. 
\citep{cutajar2019deep}) and develop a method that can automatically select a well-performing MFGP model based on the characteristics of the dataset under consideration. This may be achieved by using algorithm selection techniques \citep{munoz2015algorithm} and instance space analysis \citep{andres2022bifidelity}. There may exist other opportunities for leveraging MFGP for bioprocess modeling. For example, we can use simulation data generated from a mechanistic model \citep{kyriakopoulos2018kinetic} as low-fidelity data and real data from bioreactors as high-fidelity data. This can potentially lead to a novel and effective hybrid model \citep{tsopanoglou2021moving,sokolov2021hybrid}, suitable for tackling the challenge of small data in bioprocess modeling.
\section{Introduction} \vspace{-2mm} Statistical dependency measures the correlation of random variables or factors in models, which is often an important concern in various scientific domains including statistics~\cite{granger1994using,jiang2015nonparametric}, robotics~\cite{julian2014mutual,charrow2015information}, bioinformatics~\cite{lachmann2016aracne,zea2016mitos}, and machine learning~\cite{chen2016infogan,alemi2016deep,hjelm2018learning}. In recent deep learning studies, statistical dependency has increasingly served as a learning objective or regularizer for neural network training, and has achieved improvement in terms of model robustness~\cite{zhu2020learning}, generalizability~\cite{alemi2016deep}, interpretability~\cite{chen2016infogan,cheng2020improving}, \textit{etc}. Among statistical dependency measurements, mutual information (MI) is commonly used in machine learning. Given two random variables ${\bm{x}}, {\bm{y}}$, the mutual information is defined as: \begin{equation}\label{eq:mi-definition} \mathcal{I}({\bm{x}}; {\bm{y}}) = \mathbb{E}_{p({\bm{x}}, {\bm{y}})} \Big[\log \frac{p({\bm{x}}, {\bm{y}})}{p({\bm{x}}) p({\bm{y}})} \Big]. \end{equation} Recently, mutual information has led to significant improvements when applied as a training criterion in learning tasks, such as conditional generation~\cite{chen2016infogan}, domain adaptation~\cite{gholami2020unsupervised}, representation learning~\cite{chen2020simple}, and fairness~\cite{song2019learning}. However, MI can only handle the statistical dependency between two variables. When considering optimization of correlation among multiple variables, MI requires a computation for each pair of variables, which leads to a quadratic increase in computation cost.
To address this problem, total correlation (TC) has been proposed by extending MI to multi-variable cases: \begin{equation}\label{eq:tc-definition} \mathcal{TC}({\bm{X}}) = \mathcal{TC}({\bm{x}}_1,{\bm{x}}_2,\dots,{\bm{x}}_n) = \mathbb{E}_{p({\bm{x}}_1, {\bm{x}}_2, \dots, {\bm{x}}_n)} \Big[\log \frac{p({\bm{x}}_1, {\bm{x}}_2, \dots, {\bm{x}}_n)}{p({\bm{x}}_1) p({\bm{x}}_2)\dots p({\bm{x}}_n) }\Big]. \end{equation} TC has also proven effective in enhancing machine learning models in many tasks, such as independent component analysis~\cite{cardoso2003dependence} and disentangled representation learning~\cite{chen2018isolating,locatello2019fairness,kim2018disentangling}. However, TC suffers from the same numerical problem as MI: the exact values of TC are difficult to calculate without the closed-form distributions $p({\bm{x}}_i)$ and with only samples accessible. Previous works on disentangled representation learning~\cite{chen2018isolating,gao2019auto} avoid the estimation problem by assuming that both the latent priors and the inference posteriors follow multi-variate Gaussian distributions. Poole {\em et al.}~\cite{poole2019variational} proposed an upper bound of TC by further introducing another variable ${\bm{y}}$. Under the strong assumption that all ${\bm{x}}_i$ are conditionally independent given ${\bm{y}}$, that is, $p({\bm{X}}|{\bm{y}}) = \prod_{i=1}^n p({\bm{x}}_i|{\bm{y}})$, Poole {\em et al.}~\cite{poole2019variational} concluded that $\mathcal{TC}({\bm{X}}) = \sum_{i=1}^n \mathcal{I}({\bm{x}}_i; {\bm{y}}) - \mathcal{I}({\bm{X}}; {\bm{y}}) $. All the aforementioned methods require additional assumptions on the distributions, which limits their application scenarios. In this paper, we propose two TC estimation strategies based on mutual information variational bounds. More specifically, we decompose TC into the summation of MI terms along two different calculation paths: the tree-like path and the line-like path.
Then the TC values are approximated by applying MI estimation to each decomposed term. In our experiments, we test the performance of the proposed TC estimators under multivariate Gaussian simulations. \vspace{-2mm} \section{Method} \vspace{-2mm} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figs/recusive.png} \caption{Two calculation paths of total correlation. \textbf{Left} (Tree-like calculation path): Divide the current variables into two subgroups with similar sizes. Calculate the MI between the subgroups and recursively calculate the TC of both subgroups. $\lceil n/2\rceil$ is the smallest integer not less than $n/2$. \textbf{Right} (Line-like calculation path): Calculate the MI between the current group of variables and the next variable, and then add the next variable into the current group. } \label{fig:calculation_path} \vspace{-3mm} \end{figure} \begin{algorithm}[b] \begin{algorithmic} \STATE \textbf{Prerequisite:} MI estimation method $\hat{\mathcal{I}}$, samples $\{{\bm{X}}^{(i)} \}_{i=1}^M =\{({\bm{x}}^{(i)}_1, {\bm{x}}^{(i)}_2, \dots, {\bm{x}}^{(i)}_n )\}_{i=1}^M$ \STATE \textbf{Function} TC$_\text{Tree}$-estimate(${\bm{X}}_{i:j}$)\textbf{:} \IF {$j-i\leq0$} \RETURN 0 \ELSE \STATE $m =\lfloor {(i+j)}/{2} \rfloor $ \RETURN TC$_\text{Tree}$-estimate(${\bm{X}}_{i:m}$) + TC$_\text{Tree}$-estimate(${\bm{X}}_{m+1:j}$) + $\hat{\mathcal{I}}({\bm{X}}_{i:m};{\bm{X}}_{m+1:j})$ \ENDIF \end{algorithmic} \caption{Tree-like TC estimation algorithm} \label{alg:learning-algorithm} \end{algorithm} With the definitions of total correlation (TC) and mutual information (MI) in \eqref{eq:tc-definition} and \eqref{eq:mi-definition}, we find a connection between TC and MI, summarized in Theorem~\ref{thm:general_connection}. \begin{theorem}\label{thm:general_connection} Suppose $\mathcal{A} = \{i_1, i_2, \dots, i_m\} \subseteq \{1,2,\dots,n\}$ is an index subset. $\bar{\mathcal{A}}= \{j:j\notin\mathcal{A}\}$ is the complementary set of $\mathcal{A}$.
Denote ${\bm{X}}_{\mathcal{A}} = ({\bm{x}}_{i_1}, {\bm{x}}_{i_2}, \dots, {\bm{x}}_{i_m})$ as the selected variables from ${\bm{X}}$ with the indexes $\mathcal{A}$. Then we have $\mathcal{TC}({\bm{X}}) = \mathcal{TC}({\bm{X}}_{\mathcal{A}}) + \mathcal{TC}({\bm{X}}_{\bar{\mathcal{A}}}) + \mathcal{I}({\bm{X}}_{\mathcal{A}}; {\bm{X}}_{\bar{\mathcal{A}}})$. \end{theorem} \begin{corollary} Given a variable group ${\bm{X}}$ and another variable ${\bm{y}}$, $ \mathcal{TC}({\bm{X}} \cup \{{\bm{y}}\}) = \mathcal{TC}({\bm{X}}) + \mathcal{I}({\bm{X}}; {\bm{y}}).$ \end{corollary} \begin{corollary}\label{thm:line-like} Given ${\bm{X}} = ({\bm{x}}_1, {\bm{x}}_2, \dots, {\bm{x}}_n)$, we have $\mathcal{TC}({\bm{X}}) = \sum_{i=1}^{n-1} \mathcal{I}({\bm{X}}_{1:i} ; {\bm{x}}_{i+1}).$ \end{corollary} Theorem~\ref{thm:general_connection} provides the insight that the TC of a group of variables ${\bm{X}}$ can be decomposed into the TC of two subgroups ${\bm{X}}_{\mathcal{A}}$ and ${\bm{X}}_{\bar{\mathcal{A}}}$ plus the MI between the two subgroups. Therefore, we can recursively represent the TC with MI terms. More specifically, we propose two schemes with different structures to calculate TC with different MI terms (as shown in Figure~\ref{fig:calculation_path}). Let ${\bm{X}}_{i:j} = ({\bm{x}}_i, {\bm{x}}_{i+1}, \dots, {\bm{x}}_{j})$ denote the subset of variables with indexes from $i$ to $j$. Based on Theorem~\ref{thm:general_connection}, we propose two recursive TC calculation schemes: (1) \textbf{Line-like}: $\mathcal{TC}({\bm{X}}_{1:i+1}) = \mathcal{TC}({\bm{X}}_{1:i}) + \mathcal{I}({\bm{X}}_{1:i}; {\bm{x}}_{i+1})$; (2) \textbf{Tree-like}: $\mathcal{TC}({\bm{X}}_{i:j}) = \mathcal{TC}({\bm{X}}_{i:\lfloor {(i+j)}/{2} \rfloor}) + \mathcal{TC}({\bm{X}}_{\lfloor {(i+j)}/{2} \rfloor+1:j}) + \mathcal{I}({\bm{X}}_{i:\lfloor {(i+j)}/{2} \rfloor};{\bm{X}}_{\lfloor {(i+j)}/{2} \rfloor+1:j}) $, where $\lfloor t\rfloor$ denotes the largest integer not exceeding $t$.
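Both decompositions can be verified numerically in the Gaussian case, where TC and block-wise MI have closed forms: $\mathcal{TC} = \frac{1}{2}\log\frac{\prod_i \Sigma_{ii}}{\det\bm\Sigma}$ and, for two blocks $A, B$, $\mathcal{I}(A;B) = \frac{1}{2}\log\frac{\det\bm\Sigma_{AA}\det\bm\Sigma_{BB}}{\det\bm\Sigma_{AB,AB}}$. The following sketch (our own check, not code from the paper) confirms that both the line-like and the tree-like sums recover the exact TC of a random 4-dimensional Gaussian:

```python
import numpy as np

def gaussian_tc(S):
    # Closed-form TC of N(0, S): 0.5 * (sum of log-variances - log det).
    return 0.5 * (np.log(np.diag(S)).sum() - np.linalg.slogdet(S)[1])

def gaussian_mi(S, idx_a, idx_b):
    # MI between two blocks of a joint Gaussian with covariance S.
    la = np.linalg.slogdet(S[np.ix_(idx_a, idx_a)])[1]
    lb = np.linalg.slogdet(S[np.ix_(idx_b, idx_b)])[1]
    lab = np.linalg.slogdet(S[np.ix_(idx_a + idx_b, idx_a + idx_b)])[1]
    return 0.5 * (la + lb - lab)

# Random 4-dimensional correlation matrix (unit diagonal).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
S = A @ A.T
d = np.sqrt(np.diag(S))
S = S / d[:, None] / d[None, :]

# Line-like: TC(X) = sum_i I(X_{1:i}; x_{i+1}).
tc_line = sum(gaussian_mi(S, list(range(i + 1)), [i + 1]) for i in range(3))
# Tree-like: TC(X) = TC(x1,x2) + TC(x3,x4) + I((x1,x2); (x3,x4)).
tc_tree = (gaussian_tc(S[:2, :2]) + gaussian_tc(S[2:, 2:])
           + gaussian_mi(S, [0, 1], [2, 3]))
```

Both sums agree with the direct closed-form TC to floating-point precision, as the decompositions are exact identities (the log-determinant terms telescope).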
The line-like dynamic calculates the MI between a subgroup and a single variable, which leads to the representation of TC as the summation in Corollary~\ref{thm:line-like}. The tree-like dynamic divides the variables into balanced subgroups, so that the MI between the two subgroups is always calculated between variable blocks of similar dimensions. Since the tree-like estimation is hard to summarize in a single equation, we describe it in Algorithm~1. With the total correlation decomposed into a summation of MI terms, we can derive total correlation estimators based on existing mutual information variational bounds. \vspace{-2mm} \section{Experiments} \vspace{-2mm} We derive our TC estimators based on four MI bounds (MINE~\cite{belghazi2018mutual}, NWJ~\cite{nguyen2010estimating}, InfoNCE~\cite{oord2018representation}, and CLUB~\cite{cheng2020club}) as TC-MINE, TC-NWJ, TC-InfoNCE, and TC-CLUB. The detailed description and implementation of the four MI estimators are shown in the Supplementary Material. We then test the TC estimators with both tree-like and line-like strategies on simulations. The simulation data are drawn from four-dimensional Gaussian distributions $({\bm{x}}_1, {\bm{x}}_2, {\bm{x}}_3, {\bm{x}}_4) \sim\mathcal{N}(\bm{0}, \bm{\Sigma})$, where $\bm{\Sigma}$ is a covariance matrix with all diagonal elements equal to $1$. Under this Gaussian assumption, the true TC value can be calculated as $\mathcal{TC}({\bm{x}}_1,{\bm{x}}_2, {\bm{x}}_3, {\bm{x}}_4) = -\frac{1}{2} \log \text{Det}(\bm{\Sigma})$, where $\text{Det}(\bm{\Sigma})$ is the determinant of $\bm \Sigma$. Therefore, we can adjust the correlation coefficients in $\bm \Sigma$ to set the ground-truth TC values in the range $\{2.0,4.0,6.0,8.0,10.0\}$. For each true TC value, we sample data batches 4000 times, with batch size equal to 64, for the training of the variational TC estimators.
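One simple way to hit a prescribed ground-truth TC value is to use an equicorrelated $\bm\Sigma$ (unit variances, common off-diagonal coefficient $\rho$), for which $\det\bm\Sigma = (1-\rho)^{n-1}(1+(n-1)\rho)$, and to bisect on $\rho$; since TC is increasing in $\rho$ on $[0,1)$, plain bisection suffices. This is a sketch under our own parameterisation — the paper does not specify how $\bm\Sigma$ was constructed.

```python
import numpy as np

def equicorr_tc(rho, n=4):
    # TC of an n-dim Gaussian with unit variances and common correlation rho.
    S = np.full((n, n), rho)
    np.fill_diagonal(S, 1.0)
    return -0.5 * np.linalg.slogdet(S)[1]

def rho_for_tc(target, n=4, lo=0.0, hi=0.9999, iters=80):
    # Bisect for the correlation coefficient giving a desired true TC.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if equicorr_tc(mid, n) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rho = rho_for_tc(4.0)  # correlation giving ground-truth TC = 4.0
```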
\begin{figure}[t] \centering \includegraphics[width=0.9\textwidth]{figs/tc_est_3d.pdf} \vspace{-2mm} \caption{Simulation performance of TC \textbf{Line-like} estimators with different MI bounds.} \label{fig:results_line-like} \includegraphics[width=0.9\textwidth]{figs/tc_est_4d.pdf} \vspace{-2mm} \caption{Simulation performance of TC \textbf{Tree-like} estimators with different MI bounds.} \label{fig:results_tree-like} \vspace{-3mm} \end{figure} In Figures~\ref{fig:results_line-like} and~\ref{fig:results_tree-like}, we report the performance of our TC estimators with different MI bounds at each training step. In each figure, the true TC value is shown as a step function with a black line. The estimated values at different steps are displayed as shaded blue curves. The dark blue curves show the local averages of the estimated TC, with a bandwidth equal to 200. Under both the tree-like and line-like calculation paths, TC-MINE, TC-NWJ, and TC-InfoNCE remain lower bounds of the true TC values, reflecting the fact that MINE, NWJ, and InfoNCE are lower bounds of mutual information. CLUB is an MI upper bound, and TC-CLUB correspondingly behaves as an upper bound of total correlation. The upper bound method TC-CLUB achieves better performance with line-like calculation. This is because CLUB requires a variational approximation $q_\theta({\bm{v}}|{\bm{u}})$ when estimating $\mathcal{I}({\bm{v}};{\bm{u}})$. When we use the line-like calculation path, ${\bm{v}} = {\bm{x}}_{i+1} $ is always a single variable, and ${\bm{u}} = {\bm{X}}_{1:i}$ is the concatenation of $({\bm{x}}_1,\dots,{\bm{x}}_i)$. The neural network $q_\theta({\bm{v}}| {\bm{u}})$ performs better when its output ${\bm{v}}$ has a fixed low dimension. In contrast, the lower bound methods show better estimation with tree-like calculation than with line-like calculation, because all listed lower bound methods treat ${\bm{v}}$ and ${\bm{u}}$ symmetrically when estimating $\mathcal{I}({\bm{v}};{\bm{u}})$.
With the tree-like strategy, the MI estimators are always provided with two blocks of similar dimensions, which facilitates the learning of the lower bound MI estimators. The bias and variance of the TC estimators are shown in the Supplementary Material. \vspace{-2mm} \section{Discussion} \vspace{-2mm} We have derived the line-like and tree-like calculation strategies to decompose the total correlation into a summation of mutual information terms. By estimating the mutual information terms with MI bounds, we introduced several TC estimators. The tree-like and line-like calculation strategies can each bring advantages to TC estimation, depending on the underlying MI estimation process. The proposed TC estimators can be further applied as a learning criterion in many deep learning tasks, such as disentangled representation learning, ensemble learning, and model distillation.
\section{Introduction} \par A metric space $(X,d)$ is a CAT(0) space if it is geodesically connected and if every geodesic triangle in $X$ is at least as thin as its comparison triangle in the Euclidean plane. For other equivalent definitions and basic properties, we refer the reader to standard texts such as \cite{Bridson}. Complete CAT(0) spaces are often called Hadamard spaces. Let $x,y\in X$. We write $\lambda x\oplus (1-\lambda) y$ for the unique point $z$ in the geodesic segment joining $x$ to $y$ such that \begin{eqnarray*}\label{oplus} d(z, x)= (1-\lambda)d(x, y)\ \ \ \mbox{and}\ \ \ d(z, y)=\lambda d(x, y). \end{eqnarray*} We also denote by $[x, y]$ the geodesic segment joining $x$ to $y$, that is, $[x, y]=\{\lambda x\oplus (1-\lambda) y : \lambda\in [0, 1]\}$. A subset $C$ of a CAT(0) space is convex if $[x, y]\subseteq C$ for all $x, y\in C$. \par Berg and Nikolaev in \cite{Berg1} have introduced the concept of \emph{quasilinearization}. Let us formally denote a pair $(a,b)\in X\times X$ by $\overrightarrow{ab}$ and call it a vector. Then quasilinearization is the map $\langle \cdot, \cdot \rangle : (X\times X)\times(X\times X)\to \mathbb{R}$ defined by \begin{eqnarray}\label{qasilin} \langle\overrightarrow{ab},\overrightarrow{cd}\rangle = \frac{1}{2}\left(d^2(a,d)+d^2(b,c)-d^2(a,c)-d^2(b,d)\right), \ \ \ \ (a,b,c,d\in X). \end{eqnarray} We say that $X$ satisfies the Cauchy-Schwarz inequality if \begin{eqnarray}\label{Cha-Sch} \langle\overrightarrow{ab},\overrightarrow{cd}\rangle\leq d(a,b) d(c,d) \end{eqnarray} for all $a,b,c,d\in X$. For instance, in a Hilbert space a direct expansion of (\ref{qasilin}) gives $\langle\overrightarrow{ab},\overrightarrow{cd}\rangle=\langle b-a, d-c\rangle$, so (\ref{Cha-Sch}) reduces to the classical Cauchy-Schwarz inequality. It is known \cite[Corollary 3]{Berg1} that a geodesically connected metric space is a CAT(0) space if and only if it satisfies the Cauchy-Schwarz inequality. \par We need the following lemma in the sequel.
\begin{lem}\cite[Lemma 2.5]{Dhompongsa} A geodesic space $X$ is a CAT(0) space if and only if the following inequality \begin{eqnarray}\label{para ine} d^2(\lambda x\oplus (1-\lambda) y, z)\leq \lambda d^2(x,z)+(1-\lambda)d^2(y,z)-\lambda(1-\lambda)d^2(x,y) \end{eqnarray} is satisfied for all $x,y,z\in X$ and $\lambda\in [0,1]$. \end{lem} \section{Main results} Let $C$ be a nonempty complete convex subset of a CAT(0) space $X$. It is known \cite[Proposition 2.4]{Bridson} that for any $x\in X$ there exists a unique point $x_0\in C$ such that \begin{eqnarray} \nonumber d(x,x_0)= \min_{ y\in C} d(x, y). \end{eqnarray} The mapping $P_C : X \rightarrow C$ defined by $P_Cx= x_0$ is called the \emph{metric projection} from $X$ onto $C$. \par We need the following useful lemma to prove our main result. \begin{lem}\label{coffi quasi lem} (For a general case see \cite[Lemma 4.1.1]{Dehghan}) Let $X$ be a CAT(0) space, $x,y\in X$, $\lambda\in [0,1]$ and $z= \lambda x\oplus (1-\lambda)y$. Then, \begin{eqnarray}\label{coffi quasi} \langle\overrightarrow{zy},\overrightarrow{zw}\rangle \leq \lambda \langle\overrightarrow{xy},\overrightarrow{zw}\rangle \end{eqnarray} for all $w\in X$. \end{lem} \begin{proof} Using (\ref{oplus}) and (\ref{para ine}), we have \begin{eqnarray} \nonumber 2(\langle\overrightarrow{zy},\overrightarrow{zw}\rangle-\lambda \langle\overrightarrow{xy},\overrightarrow{zw}\rangle)&=& d^2(z,w) + d^2(y,z) - d^2(y,w)\\ \nonumber && -\lambda ( d^2(x,w) + d^2(y,z) - d^2(x,z) - d^2(y,w) )\\ \nonumber&\leq& \lambda d^2(x,w) + (1-\lambda) d^2(y,w) -\lambda(1- \lambda) d^2(x,y)+ d^2(y,z) \\ \nonumber && - d^2(y,w) -\lambda ( d^2(x,w) + d^2(y,z) - d^2(x,z) - d^2(y,w) )\\ \nonumber &=& (1-\lambda) d^2(y,z) + \lambda d^2(x,z) -\lambda(1- \lambda) d^2(x,y)\\ \nonumber &=& \lambda^2 (1-\lambda) d^2(y,x) + \lambda (1-\lambda)^2 d^2(x,y) -\lambda(1- \lambda) d^2(x,y)\\ \nonumber &=& 0, \end{eqnarray} which is the desired inequality. 
\end{proof} \begin{thm}\label{proj} Let $C$ be a nonempty convex subset of a CAT(0) space $X$, $x\in X$ and $u\in C$. Then $u=P_Cx$ if and only if \begin{eqnarray}\label{charac} \langle\overrightarrow{xu},\overrightarrow{uy}\rangle\geq0 \end{eqnarray} for all $y\in C$. \end{thm} \begin{proof} Suppose that $\langle\overrightarrow{xu},\overrightarrow{uy}\rangle\geq0$ for all $y\in C$. If $d(x,u)=0$, then the assertion is clear. Otherwise, we have \begin{eqnarray} \nonumber \langle\overrightarrow{xu},\overrightarrow{xy}\rangle -\langle\overrightarrow{xu},\overrightarrow{xu}\rangle= \langle\overrightarrow{xu},\overrightarrow{uy}\rangle\geq0. \end{eqnarray} This together with the Cauchy-Schwarz inequality implies that \begin{eqnarray} \nonumber d^2(x,u)=\langle\overrightarrow{xu},\overrightarrow{xu}\rangle \leq \langle\overrightarrow{xu},\overrightarrow{xy}\rangle \leq d(x,u) d(x,y) . \end{eqnarray} That is, $d(x,u)\leq d(x,y)$ for all $y\in C$, and so $u=P_Cx$. \par Conversely, let $u=P_Cx$. Since $C$ is convex, $z=\lambda y\oplus(1-\lambda)u\in C$ for all $y\in C$ and $\lambda\in(0,1)$. Thus, $d(x,u)\leq d(x,z)$. Using (\ref{qasilin}) we have \begin{eqnarray}\label{char ineq1} \langle\overrightarrow{xz},\overrightarrow{uz}\rangle \geq \frac{1}{2}d^2(x,z)-\frac{1}{2}d^2(x,u)\geq 0. \end{eqnarray} On the other hand, by Lemma \ref{coffi quasi lem}, we have $\langle\overrightarrow{xz},\overrightarrow{uz}\rangle\leq \lambda\langle\overrightarrow{xz},\overrightarrow{uy}\rangle$. This together with (\ref{char ineq1}) implies that \begin{eqnarray} \nonumber \langle\overrightarrow{xz},\overrightarrow{uy}\rangle \geq 0. \end{eqnarray} Since the function $d(\cdot , x): X \to \mathbb{R}$ is continuous for all $x\in X$, letting $\lambda\to 0^+$ gives $\langle\overrightarrow{xu},\overrightarrow{uy}\rangle\geq0$. This completes the proof. \end{proof} \begin{thm} Let $C$ be a nonempty subset of a CAT(0) space $X$ and $x\in X\setminus C$.
Then $P_Cx\subset \partial C$, where $ P_Cx=\{z\in C: d(x,z)= \inf_{ y\in C} d(x, y)\}$ and $\partial C$ is the boundary of $C$. \end{thm} \begin{proof} Let $u\in P_Cx$ and suppose that $u\not\in \partial C$. Then there exists an $\varepsilon> 0$ such that $B(u,\varepsilon)\subset C$, where $B(u,\varepsilon)$ denotes the open ball with center $u$ and radius $\varepsilon$. For each $n\geq 1$, let $z_n=\frac{1}{n} x\oplus (1-\frac{1}{n})u$. We know that \begin{eqnarray} \nonumber d(z_n,u)= \frac{1}{n}d(x,u). \end{eqnarray} Hence, for sufficiently large $N\geq 1$, $d(z_N,u)<\varepsilon$. Thus $z_N\in B(u,\varepsilon)\subset C$. On the other hand, \begin{eqnarray} \nonumber d(z_N,x)=\left(1- \frac{1}{N}\right)d(x,u)<d(x,u)= d(x,C), \end{eqnarray} which contradicts the fact that $u\in P_Cx$. Therefore, $u\in \partial C$. \end{proof} \par A self-mapping $T$ of $C\subseteq X$ is said to be \begin{itemize} \item[(i)] \emph{nonexpansive} if $d( Tx, Ty)\leq d(x,y)$, \item[(ii)] \emph{firmly nonexpansive} if $\langle\overrightarrow{xy},\overrightarrow{Tx Ty}\rangle\geq d^2(Tx, Ty)$, \item[(iii)] \emph{monotone} if $\langle\overrightarrow{xy},\overrightarrow{Tx Ty}\rangle\geq 0$, \end{itemize} for all $x,y\in C$. It is clear that every firmly nonexpansive mapping is monotone. Also, it follows from the Cauchy-Schwarz inequality that every firmly nonexpansive mapping is nonexpansive. \begin{pro}\label{firmly nonex} Let $C$ be a nonempty closed convex subset of a Hadamard space $X$. Then the metric projection $P_C : X \rightarrow C\subseteq X$ is firmly nonexpansive, and so it is monotone and nonexpansive. \end{pro} \begin{proof} Let $x,y\in X$. Since $P_Cx, P_Cy\in C$, it follows from Theorem \ref{proj} that \begin{eqnarray} \nonumber \langle\overrightarrow{xP_Cx},\overrightarrow{P_Cx P_Cy}\rangle\geq 0\ \ \ \ \mbox{and}\ \ \ \langle\overrightarrow{yP_Cy},\overrightarrow{P_Cy P_Cx}\rangle\geq0.
\end{eqnarray} Therefore, \begin{eqnarray}\label{P_C firmly nonexp} \nonumber \langle\overrightarrow{xy},\overrightarrow{P_Cx P_Cy}\rangle&=&\langle\overrightarrow{xP_Cx},\overrightarrow{P_Cx P_Cy}\rangle+\langle\overrightarrow{P_Cx P_Cy},\overrightarrow{P_Cx P_Cy}\rangle+\langle\overrightarrow{yP_Cy},\overrightarrow{P_Cy P_Cx}\rangle\\ \nonumber &\geq&\langle\overrightarrow{P_Cx P_Cy},\overrightarrow{P_Cx P_Cy}\rangle\\ \nonumber &=&d^2(P_Cx, P_Cy), \end{eqnarray} which completes the proof. \end{proof}
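In the Euclidean plane, which is a CAT(0) space, the quasilinearization (\ref{qasilin}) reduces to the ordinary inner product $(b-a)\cdot(d-c)$, so the characterization of the projection in Theorem \ref{proj} and the firm nonexpansiveness in Proposition \ref{firmly nonex} can be sanity-checked numerically. A sketch, assuming NumPy; the convex set $C=[0,1]^2$ and the sample counts are arbitrary choices made for illustration:

```python
import numpy as np

def quasi(a, b, c, d):
    """Quasilinearization <ab, cd> = (1/2)(d(a,d)^2 + d(b,c)^2 - d(a,c)^2 - d(b,d)^2)."""
    sq = lambda u, v: float(np.dot(u - v, u - v))
    return 0.5 * (sq(a, d) + sq(b, c) - sq(a, c) - sq(b, d))

def P(x):
    """Metric projection onto the closed convex set C = [0, 1]^2."""
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(1)
for _ in range(500):
    x = rng.normal(size=2) * 3.0
    z = rng.normal(size=2) * 3.0
    y = rng.random(2)                      # an arbitrary point of C
    u = P(x)
    # In R^2 the quasilinearization is the ordinary inner product (b-a).(d-c).
    assert np.isclose(quasi(x, z, P(x), P(z)), float(np.dot(z - x, P(z) - P(x))))
    # Characterization of the projection: <xu, uy> >= 0 for all y in C.
    assert quasi(x, u, u, y) >= -1e-9
    # Firm nonexpansiveness: <xz, P_Cx P_Cz> >= d(P_Cx, P_Cz)^2.
    assert quasi(x, z, P(x), P(z)) >= float(np.dot(P(x) - P(z), P(x) - P(z))) - 1e-9
```

Of course, this only illustrates the flat model case; the content of the results above is that the same inequalities hold in an arbitrary CAT(0) space.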
\section{Introduction} Exel has recently introduced a new kind of crossed product for an endomorphism $\alpha$ of a $C^*$-algebra $B$ \cite{E}. The crucial ingredient in his construction is a \emph{transfer operator}, which is a positive linear map $L:B\to B$ satisfying $L(\alpha(a)b)=aL(b)$. In the motivating example, $B=C(X)$, where $X$ is a compact Hausdorff space, $\alpha$ is the endomorphism $\alpha:f\mapsto f\circ\sigma$ associated to a covering map $\sigma:X\to X$, and $L$ is defined by \begin{equation}\label{defL} L(f)(x)=\frac{1}{|\sigma^{-1}(\{x\})|}\sum_{\sigma(y)=x}f(y). \end{equation} Exel's crossed product $B\rtimes_{\alpha,L}\mathbb{N}$ can be constructed in several ways, but here we view it as the Cuntz-Pimsner algebra $\mathcal{O}(M_L)$ of a right-Hilbert $B$-bimodule $M_L$ constructed from $L$, as discussed in \cite{BR} (see also \S\ref{subsecExel} below). We became interested in this circle of ideas when we noticed that the bimodule $M_L$ associated to the covering map $\sigma:z\mapsto z^N$ of the unit circle $\mathbb{T}$ plays a key role in work of Packer and Rieffel on projective multi-resolution analyses \cite{P}--\cite{PR2}. The module elements $m\in M_L$ such that $\langle m,m\rangle$ is the identity of $C(X)$ are precisely the quadrature mirror filters arising in signal processing and wavelet theory, and orthonormal bases for $M_L$ are what engineers call ``filter banks with perfect reconstruction'' (as observed and exploited in \cite{LR2} and \cite{IM}, for example). We then noticed further, using results from \cite{EV}, that the associated crossed product $C(\mathbb{T})\rtimes_{\alpha_N,L}\mathbb{N}$, where $\alpha_N$ is the endomorphism of $C(\mathbb{T})$ given by $\sigma$, is simple, and accordingly computed its $K$-theory, finding that $K_0=\mathbb{Z}\oplus(\mathbb{Z}/(N-1)\mathbb{Z})$ and $K_1=\mathbb{Z}$.
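For the covering $\sigma:z\mapsto z^N$ of $\mathbb{T}$, the defining property $L(\alpha(a)b)=aL(b)$ of the transfer operator \eqref{defL} is easy to verify numerically, since $\alpha(f)=f\circ\sigma$ is constant on each fibre $\sigma^{-1}(\{x\})$. A minimal sketch, assuming NumPy; the value of $N$ and the test functions are arbitrary choices:

```python
import numpy as np

N = 4  # an arbitrary degree for the covering sigma: z -> z^N

def f(z):
    # an arbitrary continuous test function on the circle |z| = 1
    return z**2 + 3.0 * np.conj(z)

def g(z):
    return np.exp(z.real) + 1j * z.imag

def fibre(x):
    """The N preimages of x under sigma(z) = z^N."""
    theta = np.angle(x)
    return np.exp(1j * (theta + 2.0 * np.pi * np.arange(N)) / N)

def L(h, x):
    """Transfer operator: average h over the fibre sigma^{-1}({x})."""
    return h(fibre(x)).mean()

rng = np.random.default_rng(2)
for theta in rng.random(100) * 2.0 * np.pi:
    x = np.exp(1j * theta)
    # alpha(f) = f o sigma, so (alpha(f)g)(y) = f(y^N) g(y) on the fibre of x.
    lhs = L(lambda y: f(y**N) * g(y), x)
    rhs = f(x) * L(g, x)
    assert np.isclose(lhs, rhs)
```

The identity holds exactly because $f(y^N)=f(x)$ for every $y$ in the fibre of $x$, so the factor $f(x)$ pulls out of the average.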
But then we saw this $K$-theory occurring elsewhere, and we gradually realised that the $C^*$-algebra $C(\mathbb{T})\rtimes_{\alpha_N,L}\mathbb{N}$ had already been studied by many authors under other guises. (An almost certainly incomplete list includes \cite[Example~3]{Deaconu}, \cite[Example~4.1]{KW}, \cite[Appendix~A]{K4} and \cite[Theorem~2.1]{Y}.) Multiplication by $N$, however, is just one of many dilations of interest in wavelet theory (see, for example, \cite{S}). Here we consider the covering maps $\sigma_A$ of $\mathbb{T}^d=\mathbb{R}^d/\mathbb{Z}^d$ induced by integer matrices $A$ whose eigenvalues $\lambda$ satisfy $|\lambda|>1$, and the crossed products of the associated systems $(C(\mathbb{T}^d),\alpha_A,L)$, where $\alpha_A$ is the endomorphism of $C(\mathbb{T}^d)$ given by $\sigma_A$. We show, using results from \cite{EV} and \cite{K4}, that the crossed products $C(\mathbb{T}^d)\rtimes_{\alpha_A,L}\mathbb{N}$ are simple and purely infinite, and hence by the Kirchberg-Phillips theorem are classified by their $K$-theory. The computation of the $K$-theory groups of $C(\mathbb{T}^d)\rtimes_{\alpha_A,L}\mathbb{N}$ therefore has a special significance, and one of the main goals of this paper is to perform precisely this calculation. Since $C(\mathbb{T}^d)\rtimes_{\alpha_A,L}\mathbb{N}$ is a Cuntz-Pimsner algebra, one should in principle be able to compute its $K$-theory using the exact sequence of \cite[Theorem~4.8]{Pim}, but in practice we were not able to compute some of the homomorphisms in that sequence. So we have argued directly from the six-term exact sequence associated to the Toeplitz algebra of the bimodule $M_L$, and we hope that our computation will be of independent interest. Our computation is based on a six-term exact sequence which is valid for any system $(B,\alpha,L)$ for which the bimodule $M_L$ is free as a right Hilbert $B$-module.
Using an orthonormal basis for $M_L$, we build a homomorphism $\Omega:B\to M_N(B)$ which has the property that $\Omega\circ \alpha(a)$ is the diagonal matrix $a1_N$ with $N$ copies of $a$ down the diagonal, and which we view as a $K$-theoretic left inverse for $\alpha$. When the bimodule is obtained from an integral matrix $A$, as above, this map is closely associated to the classical adjoint of $A$. We then show that there is an exact sequence \begin{equation*} \xymatrix{ K_0(B)\ar[r]^{\operatorname{id}-\Omega_*}&K_0(B)\ar[r]^{j_{B*}\quad}&K_0(\mathcal{O}(M_L))\ar[d]\\ K_1(\mathcal{O}(M_L))\ar[u]&K_1(B)\ar[l]_{\quad \ j_{B*}}&K_1(B)\ar[l]_{\operatorname{id}-\Omega_*} } \end{equation*} in which $j_B$ is the canonical embedding of $B$ in the Cuntz-Pimsner algebra $\mathcal{O}(M_L)$. When $(B,\alpha,L)=(C(\mathbb{T}^d),\alpha_A,L)$, we know from \cite{PR1} that $C(\mathbb{T}^d)_L$ is free, so this exact sequence applies; since we also know from \cite{J} that $K_*(C(\mathbb{T}^d))=K^*(\mathbb{T}^d)$ is isomorphic to the exterior ring generated by a copy of $\mathbb{Z}^d$ in $K^1(\mathbb{T}^d)$, we can in this case compute $\Omega_*$, and derive explicit formulas for $K_i(\mathcal{O}(M_L))$. \section{Crossed products by endomorphisms}\label{secdefcp} \subsection{Cuntz-Pimsner algebras} A \emph{right-Hilbert bimodule} over a $C^*$-algebra $B$, also known as a \emph{correspondence}, is a right Hilbert $B$-module $M$ with a left action of $B$ implemented by a homomorphism $\phi$ of $B$ into the $C^*$-algebra $\mathcal{L}(M)$ of adjointable operators on $M$. 
In this paper $B$ is always unital, the bimodule $M$ is always \emph{essential} in the sense that $1\cdot m=m$ for $m\in M$, and the bimodule has a \emph{finite Parseval frame} or \emph{quasi-basis}: a finite subset $\{m_j:0\leq j<N\}$ for which we have the \emph{reconstruction formula} \begin{equation}\label{rp*} m=\sum_{j =0}^{N-1} m_j\cdot\langle m_j,m\rangle\ \text{ for every $m\in M$.} \end{equation} The reconstruction formula implies that \begin{equation}\label{leftactfr} \phi(a)=\sum_{j=0}^{N-1}\Theta_{a\cdot m_j,m_j}\ \text{ for every $a\in B$,} \end{equation} and hence that the homomorphism $\phi$ takes values in the algebra $\mathcal{K}(M)$ of compact operators. The obvious examples of Parseval frames are orthonormal bases: \begin{lemma}\label{onisframe} Suppose that $\{m_j:0\leq j<N\}$ are vectors in a right-Hilbert bimodule $M$ over a unital $C^*$-algebra $B$. If the $m_j$ generate $M$ as a Hilbert $B$-module and satisfy $\langle m_j\,,\,m_k\rangle=\delta_{j,k}1_B$, then $\{m_j:0\leq j<N\}$ is a finite Parseval frame for $M$, and $m\mapsto (\langle m_j\,,\,m\rangle)_j$ is an isomorphism of $M$ onto $B^N$. \end{lemma} \begin{proof} A quick calculation gives the reconstruction formula for $m$ of the form $m_k\cdot b$, and then linearity and continuity give it for arbitrary $m$. For the last assertion, check that $(b_0,\cdots,b_{N-1})\mapsto \sum_jm_j\cdot b_j$ is an inverse. \end{proof} \begin{remark} If $P\in \mathcal{L}(M)$ is a projection and $\{n_j\}$ is an orthonormal basis for $M$, then $\{Pn_j\}$ is a Parseval frame for $P(M)$, and Frank and Larson have shown that every Parseval frame $\{m_j\}$ has this form because $m\mapsto (\langle m_j\,,\,m\rangle)_j$ is an isomorphism of $M$ onto a complemented submodule of $B^N$ \cite[Theorem~5.8]{FR}. However, many interesting bimodules have Parseval frames but are not obviously presented as direct summands of free modules. 
For example, for a bimodule of the form $C(X)_L$, one can construct a Parseval frame directly using a partition of unity (see, for example, \cite[Proposition~8.2]{EV}). \end{remark} A \emph{Toeplitz representation} of a right-Hilbert bimodule $M$ in a $C^*$-algebra $C$ consists of a linear map $\psi:M\to C$ and a homomorphism $\pi:B\to C$ satisfying $\psi(m)^*\psi(n)=\pi(\langle m\,,\,n\rangle )$ and $\psi(\phi(a)m)=\pi(a)\psi(m)$; we then also have $\psi(m\cdot a)=\psi(m)\pi(a)$. The \emph{Toeplitz algebra} $\mathcal{T}(M)$ is generated by a universal Toeplitz representation $(i_M,i_B)$ of $M$ (either by theorem \cite{Pim} or by definition \cite{FR}). The following lemma is implicit in the proof of \cite[Corollary~3.3]{BR}. \begin{lemma}\label{lem-annoying} Suppose $M$ is an essential right-Hilbert bimodule over a unital $C^*$-algebra $B$ and $(\psi, \pi)$ is a Toeplitz representation of $M$ on a Hilbert space $\mathcal{H}$. Then the subspace $\pi(1)\mathcal{H}$ is reducing for $(\psi, \pi)$, and \[ (\psi, \pi)=(\psi_{\pi(1)\mathcal{H}}\oplus 0,\pi_{\pi(1)\mathcal{H}}\oplus 0). \] \end{lemma} \begin{proof} It is standard that $\pi=\pi_{\pi(1)\mathcal{H}}\oplus 0$, and each $\psi(m)=\psi(1\cdot m)=\pi(1)\psi(m)$ has range in $\pi(1)\mathcal{H}$, so it suffices to show that $h\perp\pi(1)\mathcal{H}$ implies $\psi(m)h=0$. Suppose $h\perp\pi(1)\mathcal{H}$. Then {$\pi(\langle m\,,\, m\rangle)h\in\pi(1)\mathcal{H}$, so that} \[ \|\psi(m)h\|^2=(\psi(m)h\,|\,\psi(m)h)=(\psi(m)^*\psi(m)h\,|\,h)=(\pi(\langle m\,,\,m\rangle)h\,|\,h)=0.\qedhere \] \end{proof} \begin{remark}\label{convunital} Lemma~\ref{lem-annoying} implies that the Toeplitz algebra $\mathcal{T}(M)$ is universal for Toeplitz representations $(\psi,\pi)$ in which $\pi$ is unital, and we shall assume from now on that in all Toeplitz representations $(\psi,\pi)$, $\pi$ is unital. 
\end{remark} For every Toeplitz representation $(\psi,\pi)$ of $M$, there is a unique representation $(\psi,\pi)^{(1)}$ of the algebra $\mathcal{K}(M)$ of compact operators on $M$ such that \[ (\psi,\pi)^{(1)}(\Theta_{m,n})=\psi(m)\psi(n)^*\ \text{ for $m,n\in M$} \] (see, for example, \cite[Proposition~1.6]{FR}). When\footnote{As is always the case here; when the left action on the bimodule $M$ contains non-compact operators, there are several competing definitions of $\mathcal{O}(M)$.} $\phi:B\to \mathcal{L}(M)$ has range in $\mathcal{K}(M)$, we say that $(\psi,\pi)$ is \emph{Cuntz-Pimsner covariant} if $\pi =(\psi,\pi)^{(1)}\circ\phi$, and the \emph{Cuntz-Pimsner algebra} $\mathcal{O}(M)$ is the quotient of $\mathcal{T}(M)$ which is universal for Cuntz-Pimsner covariant representations. The algebra $\mathcal{O}(M)$ is generated by a canonical Cuntz-Pimsner covariant representation $(j_M,j_B)$. Now we investigate what this all means when $M$ has an orthonormal basis. {Compare with \cite[Section~8]{E2} and \cite[Proposition~7.1]{EV} which use quasi-bases.} \begin{lemma}\label{Cuntzfam} Suppose that $M$ is an essential right-Hilbert bimodule over a unital $C^*$-algebra $B$, and that $\{m_j:0\leq j<N\}$ is an orthonormal basis for $M$. Let $(\psi,\pi)$ be a Toeplitz representation of $M$. Then: \smallskip \textnormal{(1)} $\{\psi(m_j):0\leq j<N\}$ is a Toeplitz-Cuntz family of isometries such that $\sum_{j=0}^{N-1}\psi(m_j)\psi(m_j)^*$ commutes with every $\pi(a)$; and \smallskip \textnormal{(2)} $(\psi,\pi)$ is Cuntz-Pimsner covariant if and only if $\{\psi(m_j):0\leq j<N\}$ is a Cuntz family. \end{lemma} \begin{proof} (1) The relations $\psi(m_j)^*\psi(m_j)=\pi(\langle m_j\,,\,m_j\rangle)=\pi(1)$ and our convention that $\pi(1)=1$ (see Remark~\ref{convunital}) imply that the $\psi(m_j)$ are isometries. 
Next, we fix $a\in B$, let $q:=\sum_{j=0}^{N-1}\psi(m_j)\psi(m_j)^*$, and compute using the reconstruction formula \eqref{rp*}: \begin{align}\label{compq} q\pi(a)q&=\sum_{j,k=0}^{N-1}\psi(m_j)\psi(m_j)^*\pi(a)\psi(m_k)\psi(m_k)^*\\ &=\sum_{j,k=0}^{N-1}\psi(m_j)\pi(\langle m_j\,,\, a\cdot m_k\rangle)\psi(m_k)^*\notag\\ &=\sum_{k=0}^{N-1}\Big(\sum_{j=0}^{N-1}\psi(m_j\cdot \langle m_j\,,\, a\cdot m_k\rangle)\Big)\psi(m_k)^*\notag\\ &=\sum_{k=0}^{N-1} \psi( a\cdot m_k)\psi(m_k)^*\notag\\ &=\pi(a)q\notag. \end{align} Taking $a=1$ in \eqref{compq} shows that $q^2=q$, and since $q$ is self-adjoint it is a projection. Since each $\psi(m_j)$ is an isometry, each $\psi(m_j)\psi(m_j)^*$ is a projection, and since their sum is a projection, their ranges must be mutually orthogonal. Thus $\{\psi(m_j)\}$ is a Toeplitz-Cuntz family. Next we use \eqref{compq} again to see that $q\pi(a)=(\pi(a^*)q)^*=(q\pi(a^*)q)^*=q\pi(a)q=\pi(a)q$, and we have proved (1). \smallskip\noindent (2) Suppose that $(\psi,\pi)$ is Cuntz-Pimsner covariant. Plugging the formula \eqref{leftactfr} for $a=1$ into $(\psi,\pi)^{(1)}(\phi(1))=\pi(1)=1$ shows that $\sum_j\psi(m_j)\psi(m_j)^*=1$, so $\{\psi(m_j)\}$ is a Cuntz family. On the other hand, if $\{\psi(m_j)\}$ is a Cuntz family, then we can deduce from \eqref{leftactfr} that \[ (\psi,\pi)^{(1)}(\phi(a))=\sum_{j=0}^{N-1}\psi(m_j)\psi(a^*\cdot m_j)^*=\sum_{j=0}^{N-1}\psi(m_j)\psi(m_j)^*\pi(a)=\pi(a), \] and $(\psi,\pi)$ is Cuntz-Pimsner covariant. \end{proof} \subsection{Exel systems and crossed products}\label{subsecExel} Let $\alpha$ be an endomorphism of a unital $C^*$-algebra $B$. A \emph{transfer operator} $L$ for $(B,\alpha)$ is a positive linear map $L:B\to B$ such that $L(\alpha(a)b)=aL(b)$ for all $a,b\in B$. We call the triple $(B,\alpha, L)$ an \emph{Exel system}. Given an Exel system $(B,\alpha, L)$, we construct a right-Hilbert $B$-module $M_L$ over $B$ as in \cite{E} and \cite{BR}. Let $B_L$ be a copy of the underlying vector space of $B$. 
Define a right action of $a\in B$ on $m\in B_L$ by $m\cdot a=m\alpha(a)$, and a $B$-valued pairing on $B_L$ by \[ \langle m\,,\,n\rangle=L(m^*n)\quad\text{for $m,n\in B_L$}. \] Modding out by $\{m:\langle m\,,\,m\rangle=0\}$ and completing yields a right Hilbert $B$-module $M_L$. The action of $B$ by left multiplication on $B_L$ extends to an action of $B$ by adjointable operators on $M_L$, which is implemented by a unital homomorphism $\phi:B\to \mathcal{L}(M_L)$ and thus makes $M_L$ into a right-Hilbert bimodule over $B$. Exel's crossed product is constructed in two stages. First he forms a Toeplitz algebra $\mathcal{T}(B,\alpha,L)$, which is isomorphic to $\mathcal{T}(M_L)$ (see \cite[Corollary~3.2]{BR}). Then the crossed product $B\rtimes_{\alpha,L}\mathbb{N}$ is the quotient of $\mathcal{T}(M_L)$ by the ideal generated by the elements \[ i_B(a)-(i_{M_L},i_B)^{(1)}(\phi(a))\ \text{ for $a\in K_\alpha:=\phi^{-1}(\mathcal{K}(M_L)) \cap\overline{B\alpha(B)B}$} \] (see \cite[Lemma~3.7]{BR}). When $M_L$ has a finite Parseval frame and the projection $\alpha(1)$ is full, we have $\phi^{-1}(\mathcal{K}(M_L))=B=K_\alpha$, and $B\rtimes_{\alpha,L}\mathbb{N}$ is the Cuntz-Pimsner algebra $\mathcal{O}(M_L)$. For us, the main examples of Exel systems come from surjective endomorphisms $\sigma$ of a compact group $K$ with finite kernel: the corresponding Exel system $(C(K),\alpha ,L)$ has $\alpha(f)=f\circ\sigma$ and $L$ defined by averaging over the fibres of $\sigma$, as in \eqref{defL}. The next lemma is a mild generalisation of \cite[Proposition~1]{PR1}. \begin{lemma}\label{ex-compactgroup} Suppose that $\sigma:K\to K$ is a surjective endomorphism of a compact abelian group $K$ with $N:=|\ker\sigma|<\infty$, and $(C(K),\alpha,L)$ is the corresponding Exel system. Then the norm on $C(K)_L$ defined by the inner product is equivalent to the usual sup-norm, and $C(K)_L$ is complete. It has an orthonormal basis $\{m_j:0\leq j<N\}$.
\end{lemma} \begin{proof} The assertions about the norm and the completeness are proved in \cite[Lemma~3.3]{LR2}, for example. Since $\gamma\mapsto \gamma|_{\ker\sigma}$ is surjective and $|(\ker\sigma)^\wedge|=|\ker\sigma|=N$, we can find a subset $\{\gamma_i:0\leq i<N\}$ of $\widehat K$ such that $\{\gamma_i|_{\ker\sigma}\}$ is all of $(\ker\sigma)^\wedge$. Then \begin{align*} \langle \gamma_i,\gamma_j\rangle_L(k)&=\frac{1}{N}\sum_{\sigma(l)=k}\overline{\gamma_i(l)}\gamma_j(l)\\ &=\frac{1}{N}\sum_{\zeta\in\ker \sigma}\overline{\gamma_i(\zeta l_0)}\gamma_j(\zeta l_0)\text{ for any fixed $l_0$ such that $\sigma(l_0)=k$}\\ &=\frac{1}{N}\overline{\gamma_i(l_0)}\gamma_j(l_0)\sum_{\zeta\in\ker \sigma}\big(\overline{\gamma_i}\gamma_j\big)(\zeta). \end{align*} If $i\not= j$, then $(\gamma_i^{-1}\gamma_j)|_{\ker\sigma}$ is a nontrivial character of $\ker \sigma$, and its range is a nontrivial subgroup of $\mathbb{T}$, so the sum vanishes. If $i=j$, then the sum is $N$. So $\{\gamma_j\}$ is orthonormal. We still need to see that $\{\gamma_j\}$ generates $C(K)_L$ as a Hilbert module. The Stone-Weierstrass theorem implies that the characters of $K$ span a dense $*$-subalgebra of $C(K)$, and hence by the equivalence of the norms, they also span a dense subspace of $C(K)_L$. So it suffices to show that each $\gamma\in \widehat K$ is in the submodule generated by $\{\gamma_j\}$. Since $(\ker\sigma)^\wedge=\{\gamma_j\}$, there exists $j$ such that $\gamma|_{\ker\sigma}=\gamma_j$. Then $\gamma_j^{-1}\gamma$ vanishes on $\ker \sigma$, and there is a character $\chi$ such that $\gamma_j^{-1}\gamma=\chi\circ\sigma$. This equation unravels as {$\gamma=\gamma_j(\chi\circ\sigma)=\gamma_j\alpha(\chi)=\gamma_j\cdot\chi$}, so it implies that $\gamma$ belongs to the submodule generated by $\{\gamma_j\}$. 
\end{proof} \begin{example} Suppose that $A\in M_d(\mathbb{Z})$ is an integer matrix with $|\det A|>1$, and $\sigma_A$ is the endomorphism of $\mathbb{T}^d$ given by $\sigma_A(e^{2\pi ix})=e^{2\pi iAx}$ for $x\in \mathbb{R}^d$. Then $\sigma_A$ is surjective (because $A:\mathbb{R}^d\to \mathbb{R}^d$ is), and $\ker \sigma_A$ has $N:=|\det A|$ elements. A function $m\in C(\mathbb{T}^d)_L$ such that $\langle m\,,\,m\rangle(z)=1$ for all $z$ is called a \emph{quadrature mirror filter} for dilation by $A$, and an orthonormal basis for $C(\mathbb{T}^d)_L$ is a \emph{filter bank}. Lemma~\ref{ex-compactgroup} says that for every $A$, filter banks exist. \end{example} \begin{remark} Although the dilation matrices $A$ are of great relevance to wavelets, the filter banks we constructed in the proof of Lemma~\ref{ex-compactgroup} are not the kind which are useful for the construction of wavelets. There one wants the first filter $m_0$ to be low-pass, which means roughly that $m_0(1)=N^{1/2}$, $m_0$ is smooth near $1$, and $m_0$ does not vanish on a sufficiently large neighbourhood of $1$; for the basis in Lemma~\ref{ex-compactgroup}, we have $|m_0(z)|=1$ for all $z$, and $m_0$ is \emph{all-pass}. The \emph{matrix completion problem} considered in \cite{PR1} asks whether, given a low-pass filter $m_0$, one can find a filter bank $\{m_j\}$ which includes the given $m_0$. This amounts to asking that the submodule $m_0^\perp:=\{m\in C(\mathbb{T}^d)_L:\langle m\,,\,m_0\rangle=0\}$ is free. In \cite[\S4]{PR1}, Packer and Rieffel show by example that it need not be free if $|\det A|>2$ and $d>4$. Of course, since $m_0^\perp$ is a direct summand of a free module, it always has a Parseval frame. 
\end{remark} When $\alpha$ is the endomorphism of $C(K)$ coming from a surjective endomorphism $\sigma$ of $K$, we know from Lemma~\ref{ex-compactgroup} that $M_L=C(K)_L$ admits an orthonormal basis, and the associated endomorphism $\alpha:f\mapsto f\circ\sigma$ is unital, so $\alpha(1)=1$ is certainly full. Thus for the systems of interest to us, Exel's crossed product $B\rtimes_{\alpha,L}\mathbb{N}$ is isomorphic to the Cuntz-Pimsner algebra $\mathcal{O}(M_L)$. We will use this identification without comment. \section{The six-term exact sequence} We assume throughout this section that $(B,\alpha,L)$ is an Exel system and that $\{m_j:0\leq j<N\}$ is a Parseval frame for $M_L$. We write $Q$ for the quotient map from $\mathcal{T}(M_L)$ to $\mathcal{O}(M_L)$, and $(\psi,\pi)$ for the universal Toeplitz covariant representation of $M_L$ in $\mathcal{T}(M_L)$. To construct our exact sequence for $K_*(\mathcal{O}(M_L))$, we analyse the six-term exact sequence \begin{equation}\label{eq-les2} \xymatrix{ K_0(\ker Q)\ar[r]^{\iota_*}&K_0(\mathcal{T}(M_L))\ar[r]^{Q_*}&K_0(\mathcal{O}(M_L))\ar[d]_{\delta_0}\\ K_1(\mathcal{O}(M_L))\ar[u]_{\delta_1}&K_1(\mathcal{T}(M_L))\ar[l]_{Q_*}&K_1(\ker Q).\ar[l]_{\iota_*} } \end{equation} We begin by recalling from \cite[Theorem~4.4]{Pim} that the homomorphism $\pi:B\to \mathcal{T}(M_L)$ induces an isomorphism of $K_i(B)$ onto $K_i(\mathcal{T}(M_L))$, so we can replace $K_i(\mathcal{T}(M_L))$ with $K_i(B)$ provided we can identify the maps. Next we introduce our ``$K$-theoretic left inverse'' for $\alpha$, and then we will work towards showing that $B$ is a full corner in $\ker Q$, so that we can replace $K_i(\ker Q)$ with $K_i(B)$. Part (3) of the next result will not be used in this section; it is included here because it shows how $\Omega$ relates to $\alpha$, and gives a hint of why we view it as a ``$K$-theoretic left inverse'' for $\alpha$.
\begin{lemma}\label{lem-Omega} Define $\Omega: B\to M_N(B)$ by $\Omega(a)=(\langle m_j\,,\, a\cdot m_k\rangle)_{j,k}$. Then \smallskip \textnormal{(1)} $\Omega$ is a homomorphism of $C^*$-algebras; \smallskip \textnormal{(2)} $\Omega$ is unital if and only if $\{m_j:0\leq j<N\}$ is an orthonormal basis; \smallskip \textnormal{(3)} if $B$ is commutative and $\{m_j:0\leq j<N\}$ is orthonormal, then $\Omega(\alpha(a))$ is the diagonal matrix $a1_N$ with {diagonal entries $a$.} \end{lemma} \begin{proof} For (1), we let $a,b\in B$ and compute: first \begin{align*} (\Omega(a)\Omega(b))_{j,k} &=\sum_{l=0}^{N-1}\langle m_j\,,\, a\cdot m_l\rangle \langle m_l\,,\, b\cdot m_k\rangle\\ &=\Big\langle m_j\,,\, a\cdot \Big(\sum_{l=0}^{N-1}m_l\cdot \langle m_l\,,\, b\cdot m_k\rangle\Big)\Big\rangle \\ &=\langle m_j\,,\, a\cdot (b\cdot m_k)\rangle\\ &=\Omega(ab)_{j,k}, \end{align*} and then \[ \Omega(a^*)=(\langle m_j\,,\, a^*\cdot m_k\rangle)_{j,k}=(\langle a\cdot m_j\,,\, m_k\rangle)_{j,k}=(\langle m_k\,,\, a\cdot m_j\rangle^*)_{j,k}=\Omega(a)^*. \] Part (2) is easy. For (3), we let $q_L:B_L\to M_L$ be the quotient map, and consider $m=q_L(b)\in q_L(B_L)$. Then commutativity of $B$ gives \[m\cdot a=q_L(b\cdot a)=q_L(b\alpha(a))=q_L(\alpha(a)b)=\alpha(a)\cdot q_L(b)=\alpha(a)\cdot m, \] and this formula extends to $m\in M_L$ by continuity. Thus \begin{align*} \Omega(\alpha(a))_{j,k} =\langle m_j\,,\,\alpha(a)\cdot m_k\rangle=\langle m_j\,,\,m_k\cdot a\rangle =\langle m_j\,,\,m_k\rangle a=\delta_{j,k}a, \end{align*} as required. \end{proof} To describe $\ker Q$, we need some standard notation. We write $M_L^{\otimes k}$ for the $k$-fold internal tensor product $M_L\otimes_B \cdots\otimes_B M_L$, which is itself a right-Hilbert bimodule over $B$.
There is a Toeplitz representation $(\psi^{\otimes k},\pi)$ of $M_L^{\otimes k}$ in $\mathcal{T}(M_L)$ such that $\psi^{\otimes k}(\xi)=\prod_{i=1}^k\psi(\xi_i)$ for elementary tensors $\xi=\xi_1\otimes\cdots\otimes\xi_k$ in $M_L^{\otimes k}$ (see \cite[Proposition~1.8]{FR}, for example). By convention, we set $M_L^{\otimes 0}:=B$ and $\psi^{\otimes 0}:=\pi$. Then from \cite[Lemma~2.4]{FR} we have \begin{equation}\label{spanTo} \mathcal{T}(M_L)=\overline{\sp}\{\psi^{\otimes k}(\xi)\psi^{\otimes l}(\eta)^*:k,l\geq 0, \xi\in M_L^{\otimes k}, \eta\in M_L^{\otimes l}\}. \end{equation} We also recall from Lemma~\ref{Cuntzfam}(1) that the element $q:=\sum_{j=0}^{N-1}\psi(m_j)\psi(m_j)^*$ of $\mathcal{T}(M_L)$ is a projection which commutes with every $\pi(a)$. \begin{lemma}\label{lem-kerQ} With the preceding notation, we have \begin{enumerate} \item $1-q=1-\sum_{j=0}^{N-1}\psi(m_j)\psi(m_j)^*$ is a full projection in $\ker Q$; \smallskip \item $(1-q)\psi^{\otimes k}(\xi)=0$ for all $\xi\in M_L^{\otimes k}$ with $k\geq 1$; and \smallskip \item $\ker Q=\overline{\sp}\{\psi^{\otimes k}(\xi)(1-q)\psi^{\otimes l}(\eta)^*:k,l\geq 0, \xi\in M_L^{\otimes k}, \eta\in M_L^{\otimes l}\}$. \end{enumerate} \end{lemma} \begin{proof} (1) The reconstruction formula implies that $\phi(a)=\sum_{j=0}^{N-1}\Theta_{a\cdot m_j,m_j}$, and so \begin{equation}\label{formfor^(1)} (\psi,\pi)^{(1)}(\phi(a))=\sum_{j=0}^{N-1}\psi(a\cdot m_j)\psi(m_j)^*=\pi(a)q. \end{equation} This implies in particular that \[ Q(1-q)=Q(\pi(1)-\pi(1)q)=Q\big(\pi(1)-(\psi,\pi)^{(1)}(\phi(1))\big)=0, \] so $1-q$ belongs to $\ker Q$. Since $\ker Q$ is by definition the ideal in $\mathcal{T}(M_L)$ generated by the elements $\pi(a)-(\psi,\pi)^{(1)}(\phi(a))$ for $a\in B$, \eqref{formfor^(1)} also implies that $\ker Q$ is generated by the elements $\pi(a)(1-q)$, and hence by the single element $1-q$. This says precisely that the projection $1-q$ is full. (2) First we consider $m\in M_L^{\otimes 1}=M_L$.
The reconstruction formula gives \[ q\psi(m) =\sum_{j=0}^{N-1}\psi(m_j)\psi(m_j)^*\psi(m) =\sum_{j=0}^{N-1}\psi(m_j\cdot \langle m_j\,,\, m\rangle )=\psi(m), \] so $(1-q)\psi(m)=0$. Now for $k>1$ and for an elementary tensor $\xi=\xi_1\otimes\cdots \otimes\xi_k$, we have $(1-q)\psi^{\otimes k}(\xi)=(1-q)\big(\prod_{i=1}^k\psi(\xi_i)\big)=0$, and the result extends to arbitrary $\xi \in M^{\otimes k}$ by linearity and continuity. (3) In view of part (2), we can deduce from \eqref{spanTo} that $\ker Q=\mathcal{T}(M_L)(1-q)\mathcal{T}(M_L)$ is spanned by the elements of the form \[\psi^{\otimes k}(\xi)\pi(a)^*(1-q)\pi(b)\psi^{\otimes l}(\eta)^*= \psi^{\otimes k}(\xi\cdot a)(1-q)\psi^{\otimes l}(\eta\cdot b^*)^*\] for $\xi\in M^{\otimes k}$, $\eta\in M^{\otimes l}$ and $a,b\in B$, which gives (3). \end{proof} \begin{lemma}\label{lem-rho} There is a homomorphism $\rho:B\to\ker Q$ such that $\rho(a)=\pi(a)(1-q)$, and $\rho$ is an isomorphism of $B$ onto $(1-q)\ker Q(1-q)$. \end{lemma} \begin{proof} Lemma~\ref{lem-kerQ} says that $\pi(a)(1-q)$ belongs to $\ker Q$ and Lemma~\ref{Cuntzfam}(1) says that $q$ commutes with every $\pi(a)$, so there is a homomorphism $\rho:B\to (1-q)\ker Q(1-q)\subset\ker Q$ such that $\rho(a)=\pi(a)(1-q)$. From parts (2) and (3) of Lemma~\ref{lem-kerQ} we get: \begin{align*} (1-q)\ker Q(1-q)&=\overline{\sp}\{ (1-q)\psi^{\otimes k}(\xi)(1-q)\psi^{\otimes l}(\eta)^*(1-q): k,l\geq 0 \}\\ &=\overline{\sp}\{ (1-q)\pi(a)(1-q)\pi(b)(1-q): a,b\in B \}\\ &=\overline{\sp}\{ (1-q)\pi(ab): a,b\in B \}, \end{align*} which is precisely the range of $\rho$. So $\rho$ is surjective. To see that $\rho$ is injective we choose a faithful representation $\pi_0:B\to B(\mathcal{H})$ and consider the Fock representation $(\psi_F,\pi_F)$ of $M_L$ induced from $\pi_0$, as described in \cite[Example~1.4]{FR}.
The underlying space of this Fock representation is $F(M_L)\otimes_B\mathcal{H}:=\bigoplus_{k\geq 0}(M_L^{\otimes k}\otimes_B\mathcal{H})$; $B$ acts diagonally on the left, and $M_L$ acts by creation operators. The crucial point for us is that each $\psi_F(m)^*$ is an annihilation operator which vanishes on the subspace $B\otimes_B\mathcal{H}=M_L^{\otimes 0}\otimes_B\mathcal{H}$ of $F(M_L)\otimes_B\mathcal{H}$. Now suppose that $a\in B$. Then \[ \psi_F\times\pi_F(\rho(a))= \psi_F\times\pi_F(\pi(a)(1-q))=\pi_F(a)\Big(1-\sum_{j=0}^{N-1}\psi_F(m_j)\psi_F(m_j)^*\Big). \] Since $\psi_F(m_j)^*$ vanishes on $B\otimes_B \mathcal{H}$, we have \begin{align*} \rho(a)=0&\Longrightarrow \pi_F(a)\Big(1-\sum_{j=0}^{N-1}\psi_F(m_j)\psi_F(m_j)^*\Big)(1\otimes_B h)=0\ \text{ for all $h\in \mathcal{H}$}\\ &\Longrightarrow \pi_F(a)(1\otimes_B h)=0\ \text{ for all $h\in \mathcal{H}$}\\ &\Longrightarrow a\otimes_B h=0\ \text{ for all $h\in \mathcal{H}$}\\ &\Longrightarrow \pi_0(a)h=0\ \text{ for all $h\in \mathcal{H}$,} \end{align*} which implies that $a=0$ because $\pi_0$ is faithful. \end{proof} Lemma~\ref{lem-rho} implies that we can replace $K_i(\ker Q)$ in \eqref{eq-les2} by $K_i(B)$, as claimed. Now we need to check what this replacement does to the map $\iota_*$. \begin{prop}\label{prop-square} The following diagram commutes for $i=0$: \begin{equation}\label{diagram**} \xymatrix{ K_i(B)\ar[r]^{\operatorname{id}-\Omega_*}\ar[d]_{\rho_*} &K_i(B)\ar[d]_{\pi_*}\\ K_i(\ker Q)\ar[r]_{\iota_*}&K_i(\mathcal{T}(M_L)) } \end{equation} If $\{m_j:0\leq j<N\}$ is orthonormal then the diagram also commutes for $i=1$. \end{prop} Since $\Omega:B\to M_N(B)$, the $\Omega_*$ in the diagram is really the composition of $\Omega_*:K_i(B)\to K_i(M_N(B))$ with the isomorphism $K_i(M_N(B))\to K_i(B)$; the latter is induced by the map which views an element in $M_r(M_N(B))$ as an element of $M_{rN}(B)$. The proof needs two standard lemmas.
The first says, loosely, that if we rewrite an $r\times r$ matrix of $N\times N$ blocks as an $N\times N$ matrix of $r\times r$ blocks, then the resulting $rN\times rN$ matrices are unitarily equivalent. We agree that this can't be a surprise to anyone, and we apologise for failing to come up with more elegant notation. \begin{lemma}\label{tocome} Suppose that $B$ is a $C^*$-algebra, $r\geq 1$ and $N\geq 2$ are integers, and \[ \{b_{j,s;k,t}:0\leq j,k<N\text{ and }0\leq s,t<r\} \] is a subset of $B$. For $m,n$ satisfying $0\leq m,n\leq rN-1$, we define \begin{align*} c_{m,n}&=b_{j,s;k,t}\text{ where $m=sN+j$ and $n=tN+k$, and}\\ d_{m,n}&=b_{j,s;k,t}\text{ where $m=jr+s$ and $n=kr+t$.} \end{align*} Then there is a scalar unitary permutation matrix $U$ such that the matrices $C:=(c_{m,n})$ and $D:=(d_{m,n})$ are related by $C=UDU^*$. \end{lemma} \begin{proof} For $0\leq p,q\leq rN-1$, we define \[ u_{p,q}=\begin{cases} 1&\text{if there exist $k$, $t$ such that $p=tN+k$ and $q=kr+t$}\\ 0&\text{otherwise.} \end{cases} \] Each row and column contains exactly one $1$, so $U:=(u_{p,q})$ is a scalar permutation matrix, and we can verify that both $(CU)_{m,q}$ and $(UD)_{m,q}$ are equal to $b_{j,s;k,t}$ where $m=sN+j$ and $q=kr+t$, so $CU=UD$. \end{proof} \begin{lemma}\label{lem-isometrytrick} Suppose that $S$ is an isometry in a unital $C^*$-algebra $B$. Then \[ U:=\begin{pmatrix} S&1-SS^*\\ 0&S^* \end{pmatrix} \] is a unitary element of $M_2(B)$ and its class in $K_1(B)$ is the identity. \end{lemma} \begin{proof} A straightforward calculation shows that $U$ is unitary. Let $\mathcal{T}=C^*(v)$ be the Toeplitz algebra, generated by a universal isometry $v$. By Coburn's Theorem \cite{Coburn} there is a homomorphism $\pi_S:\mathcal{T}\to B$ such that $\pi_S(v)=S$.
Since $K_1(\mathcal{T})=0$ (see, for example, \cite[Remark~11.2.2]{WO}), \[ \left[\begin{pmatrix} v&1-vv^*\\ 0&v^* \end{pmatrix}\right]=[1] \text{\ in $K_1(\mathcal{T})$,} \] and hence \[ \left[\begin{pmatrix} S&1-SS^*\\ 0&S^* \end{pmatrix}\right] =(\pi_S)_*\left( \left[\begin{pmatrix} v&1-vv^*\\ 0&v^* \end{pmatrix}\right] \right)=(\pi_S)_*([1])=[1] \text{\ in $K_1(B)$.}\qedhere \] \end{proof} \begin{proof}[Proof of Proposition~\ref{prop-square}] We start with $i=0$. Let $a=(a_{s,t})$ be a projection in $M_r(B)$. For a homomorphism $\phi$, we write $\phi_r$ for the induced homomorphism on $r\times r$ matrices. Then we have \begin{align*} \rho_*([a])&=[(\rho(a_{s,t}))]=\big[\big(\pi(a_{s,t})(1-q)\big)\big]=\big[\pi_r(a)((1-q)1_r)\big]\\&=[\pi_r(a)]-[\pi_r(a)(q1_r)],\quad\text{and}\\ \pi_*\circ(\operatorname{id}-\Omega_*)([a])&=[\pi_r(a)]-\pi_{*}\circ\Omega_*([a]), \end{align*} so it suffices to show that $[\pi_r(a)(q1_r)]=\pi_{*}\circ\Omega_*([a])$ in $K_0(\mathcal{T}(M_L))$. The class $\pi_{*}\circ\Omega_*([a])$ appears as the class of the $r\times r$ block matrix $\pi_{rN}(\Omega_r(a))$ whose $(s,t)$ entry is the $N\times N$ block $\big(\pi(\langle m_j\,,\, a_{s,t}\cdot m_k\rangle)\big)_{j,k}$. In other words, with $b_{j,s;k,t}=\pi(\langle m_j\,,\, a_{s,t}\cdot m_k\rangle)$, {the matrix} $\pi_{rN}(\Omega_r(a))$ is the matrix $C=(c_{m,n})$ in Lemma~\ref{tocome}. We now consider the matrix $T$ in $M_N(M_r(\mathcal{T}(M_L)))$ defined by \begin{equation} \label{eq-T}T=\begin{pmatrix} \psi(m_0)1_r&\cdots&\psi(m_{N-1})1_r\\ 0_r&\cdots&0_r\\ \vdots&\cdots&\vdots \end{pmatrix}. \end{equation} Computations show that $TT^*=(q1_r)\oplus 0_{r(N-1)}$, and since $\pi_r(a)$ is a projection which commutes with $q1_r$, we deduce that $(\pi_r(a)\oplus 0_{r(N-1)})T$ is a partial isometry which implements a Murray-von Neumann equivalence between $T^*(\pi_r(a)\oplus 0_{r(N-1)})T$ and $\big(\pi_r(a)\oplus 0_{r(N-1)}\big)TT^*=(\pi_r(a)(q1_r))\oplus 0_{r(N-1)}$.
Thus we have \[ [\pi_r(a)(q1_r)]=\big[\pi_r(a)(q1_r)\oplus 0_{r(N-1)}\big]=\big[T^*(\pi_r(a)\oplus 0_{r(N-1)})T\big]. \] Another computation shows that the $(j,k)$ entry of $T^*(\pi_r(a)\oplus 0_{r(N-1)})T$ is the $r\times r$ matrix $\big(\pi(\langle m_j\,,\, a_{s,t}\cdot m_k\rangle)\big)_{s,t}$. Thus with the same choice of $b_{j,s;k,t}=\pi(\langle m_j\,,\, a_{s,t}\cdot m_k\rangle)$, $T^*(\pi_r(a)\oplus 0_{r(N-1)})T$ is the matrix $D=(d_{m,n})$ in Lemma~\ref{tocome}. Since unitarily equivalent projections have the same class in $K_0$, we can therefore deduce from Lemma~\ref{tocome} that \begin{equation}\label{changeshape} [\pi_r(a)(q1_r)]=\big[T^*(\pi_r(a)\oplus 0_{r(N-1)})T\big]=\big[\pi_{rN}(\Omega_r(a))\big]=\pi_*\circ\Omega_*([a]). \end{equation} Thus Diagram~\ref{diagram**} commutes when $i=0$. Now consider $i=1$, where we assume in addition that $\{m_j\}$ is orthonormal. Let $u$ be a unitary in $M_r(B)$. To compute $\rho_*:K_1(B)\to K_1(\ker Q)$ we observe that $\rho$ is the composition of a unital isomorphism of $B$ onto $(1-q)\ker Q (1-q)$, which takes $[u]$ to $[\rho_r(u)]=[\pi_r(u)((1-q)1_r)]$, with the inclusion of $(1-q)\ker Q (1-q)$ as a full corner in the non-unital algebra $\ker Q$, which takes $[\pi_r(u)((1-q)1_r)]$ to $[\pi_r(u)((1-q)1_r)+q1_r]\in K_1((\ker Q)^+)=K_1(\ker Q)$. On the other hand, \[ \pi_*\circ(\operatorname{id}-\Omega_*)([u])=[\pi_r(u)]-[\pi_{rN}\circ\Omega_r(u)]. \] So we need to show that \begin{equation}\label{eq-strategy} [(\pi_r(u)((1-q)1_r)+ q1_r)\oplus 1_{r(N-1)}]=[\pi_r(u)\oplus 1_{r(N-1)}]-[\pi_{rN}\circ\Omega_r(u)] \end{equation} in $K_1(\mathcal{T}(M_L))$. To this end, we note that the left-hand side of \eqref{eq-strategy} is unchanged by pre- or post-multiplying by any invertible matrix $C\in M_{2rN}(\mathcal{T}(M_L))$ whose $K_1$ class is $1$.
In particular, we can do this when $C$ is: \begin{itemize} \item a unitary of the form \[ C=\begin{pmatrix} S&1-SS^*\\ 0&S^* \end{pmatrix}\] where $S\in M_{rN}(\mathcal{T}(M_L))$ is an isometry (see Lemma~\ref{lem-isometrytrick}); \item an upper- or lower-triangular matrix of the form \[ C=\begin{pmatrix} 1&A\\ 0&1 \end{pmatrix}\quad\text{or}\quad C=\begin{pmatrix} 1&0\\ A&1 \end{pmatrix} \] (which are connected to $1_{2rN}$ via $t\mapsto\left( \begin{smallmatrix} 1&tA\\ 0&1 \end{smallmatrix}\right)$ and its transpose); \item any constant invertible matrix $C$ in $M_{2rN}(\mathbb{C})$ (because $GL_{2rN}(\mathbb{C})$ is connected); this implies that we can perform row and column operations without changing the class in $K_1$. \end{itemize} Since $\{m_j\}$ is an orthonormal basis, the matrix $T$ defined at \eqref{eq-T} is an isometry in $M_{rN}(\mathcal{T}(M_L))$. Thus \begin{align*} \big[\big(\pi_r(u)&((1-q)1_r)+ q1_r\big)\oplus 1_{r(N-1)}\big]\\ &=\left[ \begin{pmatrix}(\pi_r(u)((1-q)1_r)+ q1_r)\oplus 1_{r(N-1)}&0_{rN} \\0_{rN}& 1_{rN} \end{pmatrix}\right] \left[\begin{pmatrix} T&1_{rN}-TT^*\\ 0_{rN}&T^* \end{pmatrix}\right]\\ &=\left[\begin{pmatrix}(\pi_r(u)((1-q)1_r)+ q1_r)\oplus 1_{r(N-1)}&0_{rN} \\0_{rN}& 1_{rN} \end{pmatrix}\right] \left[\begin{pmatrix} T&(1-q)1_r\oplus 1_{r(N-1)} \\ 0_{rN}&T^* \end{pmatrix}\right]\\ &= \left[ \begin{pmatrix} {\big((\pi_r(u)((1-q)1_r)+ q1_r)\oplus 1_{r(N-1)}\big) T} & \pi_r(u)((1-q)1_r)\oplus 1_{r(N-1)} \\0_{rN}&T^* \end{pmatrix} \right],\\ \intertext{{which, since $(1-q)\psi(m_i)=0$ by Lemma~\ref{lem-kerQ}(2), is }} &=\left[\begin{pmatrix} T & \pi_r(u)((1-q)1_r)\oplus 1_{r(N-1)} \\ 0_{rN} &T^* \end{pmatrix}\right]\\ &=\left[\begin{pmatrix} T & \pi_r(u)((1-q)1_r)\oplus 1_{r(N-1)}\\ 0_{rN}&T^* \end{pmatrix}\right]\left[ \begin{pmatrix}1_{rN}&T^*\big(\pi_r(u)\oplus 1_{r(N-1)} \big) \\0_{rN}&1_{rN}\end{pmatrix} \right]\\ &=\left[\begin{pmatrix}T&\pi_r(u)\oplus 1_{r(N-1)}\\0_{rN}& T^* \end{pmatrix} \right]\\ \intertext{since 
$TT^*=q1_r\oplus 0_{r(N-1)}$ and $(q1_r)\pi_r(u)=\pi_r(u)(q1_r)$. By an elementary column operation this is} &=\left[\begin{pmatrix}\pi_r(u)\oplus 1_{r(N-1)}&T\\ T^*&0_{rN} \end{pmatrix} \right]\\ &=\left[\begin{pmatrix}\pi_r(u)\oplus 1_{r(N-1)}&T\\ T^*& 0_{rN}\end{pmatrix} \right]\left[\begin{pmatrix}1_{rN}&-\big(\pi_r(u^{-1})\oplus 1_{r(N-1)}\big)T\\ 0_{rN}& 1_{rN}\end{pmatrix} \right]\\ &=\left[\begin{pmatrix} \pi_r(u)\oplus 1_{r(N-1)}&0_{rN}\\ T^*&-T^*\big(\pi_r(u^{-1})\oplus 1_{r(N-1)} \big)T \end{pmatrix} \right]\\ &=\left[ \begin{pmatrix}1_{rN}&0_{rN}\\-T^*\big(\pi_r(u^{-1})\oplus 1_{r(N-1)}\big) & 1_{rN}\end{pmatrix} \right] \left[\begin{pmatrix} \pi_r(u)\oplus 1_{r(N-1)}&0_{rN}\\ T^*&-T^*\big(\pi_r(u^{-1})\oplus 1_{r(N-1)} \big)T \end{pmatrix} \right]\\ &=\left[\begin{pmatrix} \pi_r(u)\oplus 1_{r(N-1)}&0_{rN}\\0_{rN}&-T^*\big(\pi_r(u^{-1})\oplus 1_{r(N-1)} \big)T \end{pmatrix} \right]\left[\begin{pmatrix}1_{rN}&0_{rN}\\0_{rN}&-1_{rN} \end{pmatrix}\right]\\ &=[\pi_r(u)\oplus 1_{r(N-1)}]+[T^*\big(\pi_r(u^{-1})\oplus 1_{r(N-1)} \big)T]. \end{align*} Now we recall from the argument in the second paragraph (see \eqref{changeshape}) that \[ \big[T^*\big(\pi_r(u^{-1})\oplus 1_{r(N-1)} \big)T\big]=\big[\pi_{rN}(\Omega_r(u^{-1}))\big]=-[\pi_{rN}\circ\Omega_r(u)], \] and we see that we have proved what we wanted. \end{proof} \begin{thm}\label{thm-stes} Let $(B,\alpha,L)$ be an Exel system with $B$ unital and separable, and suppose that $M_L$ has an orthonormal basis $\{m_j\}_{j=0}^{N-1}$. Let $(j_{M_L},j_B)$ be the canonical Cuntz-Pimsner covariant representation of $M_L$ in $\mathcal{O}(M_L)$.
Then there is an exact sequence \begin{equation}\label{eq-les} \xymatrix{ K_0(B)\ar[r]^{\operatorname{id}-\Omega_*}&K_0(B)\ar[r]^{j_{B*}\quad}&K_0(\mathcal{O}(M_L))\ar[d]^{\rho_*^{-1}\circ\delta_0}\\ K_1(\mathcal{O}(M_L))\ar[u]^{\rho_*^{-1}\circ\delta_1}&K_1(B)\ar[l]_{\quad \ j_{B*}}&K_1(B).\ar[l]_{\operatorname{id}-\Omega_*} } \end{equation} \end{thm} \begin{proof} The canonical representation $(j_{M_L},j_B)$ is the composition of the universal Toeplitz representation $(\psi,\pi)$ of $M_L$ in $\mathcal{T}(M_L)$ with the quotient map $Q$, and in particular $j_B=Q\circ\pi$. Since $B$ is separable, \cite[Theorem~4.4]{P} says that the homomorphism $\pi:B\to\mathcal{T}(M_L)$ induces an isomorphism $\pi_*:K_i(B)\to K_i(\mathcal{T}(M_L))$, and since $\rho:B\to \ker Q$ is an isomorphism onto a full corner, $\rho_*$ is an isomorphism. So splicing the commutative diagram of Proposition~\ref{prop-square} into \eqref{eq-les2} gives the result. \end{proof} \section{Endomorphisms arising from dilation matrices} Throughout this section, $d$ is an integer $\geq 2$ and $A\in M_d(\mathbb{Z})$ is an integer dilation matrix, by which we mean that all the complex eigenvalues $\lambda$ of $A$ satisfy $|\lambda|>1$. We consider the surjective endomorphism $\sigma_A$ of $\mathbb{T}^d$ defined by $\sigma_A(e^{2\pi ix})=e^{2\pi iAx}$ for $x\in\mathbb{R}^d$, which has $|\ker \sigma_A|=|\det A|$, and the associated Exel system $(C(\mathbb{T}^d),\alpha_A, L)$, where $\alpha_A$ is the endomorphism of $C(\mathbb{T}^d)$ given by $\alpha_A(f)=f\circ\sigma_A$. We start by showing that $C(\mathbb{T}^d)\rtimes_{\alpha_A,L}\mathbb{N}=\mathcal{O}(M_L)$ is simple and purely infinite. We deduce simplicity from results of Exel and Vershik \cite{EV} on crossed products by endomorphisms, and pure infiniteness from results of Katsura \cite{K4} on the $C^*$-algebras of topological graphs.
So we need to note that the map $f\mapsto N^{1/2}f$ is an isomorphism of the bimodule $M_L=C(\mathbb{T}^d)_L$ onto the bimodule of the topological graph $E$ with $E^0=\mathbb{T}^d$, $E^1=\mathbb{T}^d$, $r=\operatorname{id}$ and $s=\sigma_A$, and hence the crossed product $C(\mathbb{T}^d)\rtimes_{\alpha, L}\mathbb{N}=\mathcal{O}(M_L)$ can also be viewed as the $C^*$-algebra $C^*(E)$ studied in \cite{K1, K4}. We need the following lemma on the operator norms of $A^{-n}$ acting on $\mathbb{R}^d$. \begin{lemma}\label{lem-dilation} We have $\|A^{-n}\|\to 0$ as $n\to\infty$. \end{lemma} \begin{proof} For a real matrix $B$, the operator norms of $B$ in $B(\mathbb{R}^d)$ and $B(\mathbb{C}^d)$ coincide (the $C^*$-identities imply that both are equal to the square root of the largest eigenvalue of $B^TB$). So we may as well work over $\mathbb{C}$, and then there exists $P\in GL_d(\mathbb{C})$ such that $P^{-1}A^{-1}P$ is in Jordan canonical form. Thus $P^{-1}A^{-1}P$ has the form $D+N$ where $D$ is diagonal, $N$ is nilpotent with $N^{d}=0$, and $D$ and $N$ commute. The entries of $D$ are the reciprocals of the eigenvalues of $A$, so \[ \|D\|=\max\{|\lambda^{-1}|: \text{ $\lambda$ is an eigenvalue of $A$}\}< 1, \] and $\|N\|\leq 1$ because $N$ is a truncated shift. Since $\|A^{-n}\|\leq\|P\|\|P^{-1}\|\|(D+N)^n\|$, it suffices to show that $\|(D+N)^n\|\to 0$ as $n\to\infty$. Since $D$ and $N$ commute and $N^{d}=0$, for $n\geq d$ the binomial theorem gives \[ \|(D+N)^n\|=\left\| \sum_{k=0}^{d-1}\binom{n}{k}D^{n-k}N^k\right\|\leq\|D\|^{n-d+1}\sum_{k=0}^{d-1}\binom{n}{k}\|D^{d-1-k}\|\|N\|^k, \] and since $\|D^{d-1-k}\|\leq \|D\|^{d-1-k}\leq 1$ for $0\leq k\leq d-1$, we have \[ \|(D+N)^n\|\leq\|D\|^{-d+1}\|D\|^{n}\sum_{k=0}^{d-1}\binom{n}{k}=\|D\|^{-d+1}\|D\|^{n}f(n) \] where $f$ is a polynomial of degree $d-1$. But $\|D\|^nf(n)=\exp(n\ln\|D\|)f(n)\to 0$ as $n\to\infty$ because $\ln\|D\|<0$, and the lemma follows.
\end{proof} \begin{prop}\label{prop-simple} The Cuntz-Pimsner algebra $\mathcal{O}(M_L)$ is simple and purely infinite. \end{prop} \begin{proof} We show that $\mathcal{O}(M_L)$ is simple using \cite[Theorem~11.2]{EV}, which says that $C(\mathbb{T}^d)\rtimes_{\alpha,L}\mathbb{N}$ is simple if and only if $\sigma_A$ is irreducible. We recall from \cite[\S11]{EV} that $x,y\in \mathbb{T}^d$ are \emph{trajectory-equivalent}, written $x\sim y$, if there are $n,m\in\mathbb{N}$ such that $\sigma_A^n(x) = \sigma_A^m(y)$, and a subset $Y\subseteq \mathbb{T}^d$ is \emph{invariant} if $x\sim y \in Y$ implies that $x\in Y$; $\sigma_A$ is \emph{irreducible} if the only closed invariant sets are $\emptyset$ and $\mathbb{T}^d$. Let $Y$ be a non-empty closed invariant subset of $\mathbb{T}^d$, and pick a point $e^{2\pi i y}\in Y$. We need to show that $Y=\mathbb{T}^d$. Fix $e^{2\pi i z}\in \mathbb{T}^d$. Since the unit cube in $\mathbb{R}^d$ has diameter $\sqrt{d}$, {for every $n\in \mathbb{N}$} we can find $k_n\in \mathbb{Z}^d$ such that $|A^nz-(y+k_n)|\leq \sqrt{d}$. Then $x_n:=A^{-n}(y+k_n)$ has $\sigma_A^n(e^{2\pi ix_n})=e^{2\pi iA^nx_n}=e^{2\pi iy}\in Y$, and invariance implies that $e^{2\pi ix_n}\in Y$ also. Lemma~\ref{lem-dilation} implies that \[ |z-x_n|\leq\|A^{-n}\||A^nz-(y+k_n)|\leq\|A^{-n}\|\sqrt{d}\to 0\text{\ as $n\to\infty$}, \] so $x_n\to z$ in $\mathbb{R}^d$ and $e^{2\pi i x_n}\to e^{2\pi i z}$. Since $Y$ is closed, this implies that $e^{2\pi i z}\in Y$, as required. Thus $\sigma_A$ is irreducible, and $\mathcal{O}(M_L)$ is simple. To show that $\mathcal{O}(M_L)$ is purely infinite we realise $\mathcal{O}(M_L)=C(\mathbb{T}^d)\rtimes_{\alpha, L}\mathbb{N}$ as $C^*(E)$ with $E=(\mathbb{T}^d, \mathbb{T}^d,\operatorname{id}, \sigma_A)$. Since $C^*(E)=\mathcal{O}(M_L)$ is simple, $E$ is minimal by \cite[Proposition~1.11]{K4}. 
So by \cite[Theorem~A]{K4} it suffices to prove that $E$ is contracting at some vertex $v_0\in E^0$ in the sense of Definition~2.3 of \cite{K4}; we will show that $E$ is contracting at $v=(1,1,\dots,1)$. First, we need to see that the positive orbit $\{z:\sigma_A^n(z)=v\text{ for some }n\in\mathbb{N}\}$ of $v$ is dense in $E^0=\mathbb{T}^d$. The positive orbit of $v$ contains all points of the form $e^{2\pi i A^{-n}k}$ for $n\in \mathbb{N}$ and $k\in \mathbb{Z}^d$, and it follows from our proof of the irreducibility of $\sigma_A$ above (with $y=0$) that this positive orbit is dense in $E^0$. Second, we fix a neighbourhood $V$ of $v$; we need to show that $V$ contains a contracting open set $W$ (see \cite[Definition~2.3]{K4}). For this, it suffices to find an open neighbourhood $W$ of $v$ such that $W\subset V$ and $\overline{W}\subsetneq \sigma_A^k(W)$ for some $k\geq 1$. By Lemma~\ref{lem-dilation} we can choose $k$ such that $\|A^{-k}\|<1$. Then for every $\epsilon>0$ and every $x$ in the closed ball $\overline{B(0,\epsilon)}$ in $\mathbb{R}^d$, we have $|A^{-k}x|<\epsilon$, so $x=A^k(A^{-k}x)$ belongs to $A^k(B(0,\epsilon))$. Thus $\overline{B(0,\epsilon)}\subset A^k(B(0,\epsilon))$. The inequality $1=\|A^kA^{-k}\|\leq \|A^k\|\,\|A^{-k}\|$ implies that $\|A^k\|>1$, so for every $\epsilon>0$ there exists $y\in B(0,\epsilon)$ such that $|A^ky|>\epsilon$, and $\overline{B(0,\epsilon)}\subsetneq A^k(B(0,\epsilon))$. If $\epsilon$ is small enough to ensure that $x\mapsto e^{2\pi ix}$ is one-to-one on $A^k(B(0,\epsilon))$, then $W:=\{e^{2\pi i x}:x\in B(0,\epsilon)\}$ satisfies $\overline{W}\subsetneq \sigma_A^k(W)$, and by taking $\epsilon$ smaller still we can ensure that $W\subset V$. Thus $E$ is contracting, and the result follows from \cite[Theorem~A]{K4}. \end{proof} We now want to calculate the $K$-theory of $C(\mathbb{T}^d)\rtimes_{\alpha_A,L}\mathbb{N}=\mathcal{O}(M_L)$, and we aim to use Theorem~\ref{thm-stes}. To do this, we need descriptions of $K_*(C(\mathbb{T}^d))$ and the map $\Omega_*$.
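The norm decay in Lemma~\ref{lem-dilation} is easy to test numerically. The sketch below (a quick check, assuming only numpy) uses the dilation matrix $A=\left(\begin{smallmatrix}0&1\\2&0\end{smallmatrix}\right)$ that reappears in the examples at the end of this section; note that $\|A^{-1}\|=1$, so the decay only sets in at higher powers.

```python
import numpy as np

# A is a dilation matrix: its eigenvalues are +/- sqrt(2), both of modulus > 1.
A = np.array([[0.0, 1.0], [2.0, 0.0]])
Ainv = np.linalg.inv(A)

# Operator norms ||A^{-n}|| for n = 1, ..., 20 (2-norm = largest singular value).
norms = [np.linalg.norm(np.linalg.matrix_power(Ainv, n), 2) for n in range(1, 21)]

# ||A^{-1}|| = 1, yet ||A^{-n}|| -> 0: here A^2 = 2I, so A^{-2} = (1/2)I and
# the norms halve every two steps.
```

This also illustrates why the lemma needs the Jordan-form argument rather than just $\|A^{-1}\|<1$: the one-step norm need not be small even when every eigenvalue of $A$ has modulus greater than $1$.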
\begin{lemma}\label{lem-omega*} Suppose that $(B,\alpha,L)$ is an Exel system with $B$ commutative, that $M_L$ admits an orthonormal basis $\{m_j:0\leq j<N\}$, and that $\Omega:B\to M_N(B)$ is the homomorphism described in Lemma~\ref{lem-Omega}. Then $\Omega_*\circ\alpha_*$ is multiplication by $N$ on both $K_0(B)$ and $K_1(B)$. \end{lemma} \begin{proof} We know from Lemma~\ref{lem-Omega}(3) that $\Omega\circ \alpha(a)=a1_N$. If $b\in M_r(B)$, then $(\Omega\circ\alpha)_r(b)$ is the $r\times r$ matrix with entries $b_{s,t}1_N\in M_N(B)$. If we view $(\Omega\circ\alpha)_r(b)$ as an element of $M_N(M_r(B))$, as in Lemma~\ref{tocome}, it becomes $b\oplus b\oplus\cdots \oplus b$. Whether $b$ is a projection or a unitary, $[b\oplus\cdots \oplus b]=N[b]$. Thus by Lemma~\ref{tocome}, we have \[ \Omega_*\circ\alpha_*([b])=(\Omega\circ\alpha)_*([b])=[(\Omega\circ\alpha)_r(b)]=[b\oplus\cdots \oplus b]=N[b].\qedhere \] \end{proof} Ji proved in \cite{J} that the Chern character is a $\mathbb{Z}/2$-graded ring isomorphism of $K_*(C(\mathbb{T}^d))=K^*(\mathbb{T}^d)$ onto the integral cohomology ring \[ \textstyle{H^*(\mathbb{T}^d,\mathbb{Z}):=\bigoplus_{k\geq 0} H^k(\mathbb{T}^d,\mathbb{Z})=\bigoplus_{k=0}^d H^k(\mathbb{T}^d,\mathbb{Z}),} \] which in turn is isomorphic as a $\mathbb{Z}$-graded ring to the exterior algebra $\bigwedge^*\mathbb{Z}^d$. Thus the ring $H^*(\mathbb{T}^d,\mathbb{Z})$ is generated by $H^1(\mathbb{T}^d,\mathbb{Z})$, which is isomorphic to the group of homotopy classes of continuous functions from $\mathbb{T}^d$ to $\mathbb{T}$, and is the free abelian group generated by the coordinate functions $u_k:z=(z_1,\cdots, z_d)\mapsto z_k$. Since the homomorphism $\alpha_*$ is induced by a continuous map $\sigma_A:\mathbb{T}^d\to \mathbb{T}^d$, the corresponding ring homomorphism on $H^*(\mathbb{T}^d,\mathbb{Z})$ is the map $\sigma_A^*$, which respects the $\mathbb{Z}$-grading.
Thus we can compute $\alpha_*$ on $\bigwedge^*\mathbb{Z}^d$ by working out what $\sigma_A^*$ does on $H^1(\mathbb{T}^d,\mathbb{Z})$ using the basis $\{e_k:=[u_k]:1\leq k\leq d\}$, and then taking exterior powers. Once we know what $\alpha_*$ is, we can use the formula for $\Omega_*\circ\alpha_*$ in Lemma~\ref{lem-omega*} to work out what $\Omega_*$ is. \begin{lemma}\label{lem-alpha*1} With respect to the basis $\{[u_k]\}$, $\alpha_*:\operatorname{span}\{[u_k]\}\to \operatorname{span}\{[u_k]\}$ is multiplication by the transpose $A^T$ of $A$. \end{lemma} \begin{proof} We have $\alpha_*([u_k])=[\alpha(u_k)]=[u_k\circ \sigma_A]$. Since \begin{align*} u_k\circ\sigma_A(e^{2\pi ix})&=u_k(e^{2\pi iAx}) =e^{2\pi i\sum_j a_{k,j}x_j}=\prod_j e^{2\pi ia_{k,j}x_j}\\ &=\prod_j (e^{2\pi ix_j})^{a_{k,j}}=\prod_ju_j(e^{2\pi ix})^{a_{k,j}}, \end{align*} we have $u_k\circ\sigma_A=\prod_j u_j^{a_{k,j}}$. Hence $[u_k\circ \sigma_A]=\sum_j a_{k,j}[u_j]$. \end{proof} Since the $0$-graded component is isomorphic to $H^0(\mathbb{T}^d,\mathbb{Z})$, the free abelian group generated by the connected components, the action of $\alpha_*$ on the $0$-component $\bigwedge^0\mathbb{Z}^d=\mathbb{Z}$ is the identity map. For $n=1$, Lemma~\ref{lem-alpha*1} implies that $\alpha_*=A^T$. For $n> 1$, we use the basis \[ \mathcal{E}_n=\big\{e_J=e_{j_1}\wedge\dots\wedge e_{j_n}:J\subset\{1,\dots,d\}, |J|=n,J=\{j_1<j_2<\dots <j_n\}\big\} \] for $\bigwedge^n\mathbb{Z}^d$. For $e_K\in \mathcal{E}_n$, we write $K'=\{1,\dots, d\}\setminus K$. With $K$ and $K'$ listed in increasing order as $K=\{k_1<\dots<k_n\}$ and $K'=\{k_{n+1}<\dots<k_d\}$, we let $\tau_K$ be the permutation $i\mapsto k_i$ for $1\leq i\leq d$. For subsets $K,J$ of the same size, we write $A_{K,J}$ for the submatrix of $A$ whose entries belong to the rows in $K$ and the columns in $J$.
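The identity $u_k\circ\sigma_A=\prod_j u_j^{a_{k,j}}$ underlying Lemma~\ref{lem-alpha*1} can be sanity-checked numerically. Here is a minimal sketch (assuming numpy; the matrix $A$ below is an arbitrary choice for illustration):

```python
import numpy as np

A = np.array([[2, 1], [1, 1]])   # any integer matrix will do for this identity
rng = np.random.default_rng(0)

for _ in range(100):
    x = rng.random(2)                          # a point e^{2 pi i x} of T^2
    z = np.exp(2j * np.pi * x)                 # its coordinates z_1, z_2
    for k in range(2):
        lhs = np.exp(2j * np.pi * (A @ x)[k])  # u_k(sigma_A(e^{2 pi i x}))
        rhs = np.prod(z ** A[k])               # prod_j u_j(e^{2 pi i x})^{a_{k,j}}
        assert abs(lhs - rhs) < 1e-9
```

On homotopy classes this is exactly $[u_k\circ\sigma_A]=\sum_j a_{k,j}[u_j]$, so the matrix of $\alpha_*$ on $H^1$ is $A^T$.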
The following Lemma is essentially Lemma~1 of \cite[Chapter~5]{N}; we have included a short proof because the conventions of \cite{N} are different (matrices act on the right of vector spaces, for example). \begin{lemma} Let $1\leq n\leq d$. The matrix $C_n$ of $\alpha_*|\colon\bigwedge^n \mathbb{Z}^d\to \bigwedge^n \mathbb{Z}^d$ with respect to the basis $\mathcal{E}_n$ has $(J,K)$ entry $\det A_{K,J}$. \end{lemma} \begin{proof} Fix $e_K\in\mathcal{E}_n$ with $K=\{k_1<\dots <k_n\}$. Then \begin{align*} (\bigwedge{}^n A^T)(e_K)&=(\bigwedge{}^n A^T)(e_{k_1}\wedge\dots\wedge e_{k_n})\\ &=A^Te_{k_1}\wedge\dots\wedge A^T e_{k_n}\\ &=\sum_{m_1,\dots, m_n=1}^d a_{k_1,m_1}\dots a_{k_n, m_n}(e_{m_1}\wedge\dots\wedge e_{m_n})\\ &= \sum_{e_J\in\mathcal{E}_n}\sum_{\{m_1,\dots,m_n\}=J}a_{k_1,m_1}\dots a_{k_n, m_n}(e_{m_1}\wedge\dots\wedge e_{m_n})\\ &=\sum_{e_J\in\mathcal{E}_n}\sum_{\sigma\in S_n} a_{k_1,\sigma(j_1)}\dots a_{k_n, \sigma(j_n)}(e_{\sigma(j_1)}\wedge\dots\wedge e_{\sigma(j_n)})\\ &=\sum_{e_J\in\mathcal{E}_n}\sum_{\sigma\in S_n}(-1)^{\deg\sigma} a_{k_1,\sigma(j_1)}\dots a_{k_n, \sigma(j_n)}(e_{j_1}\wedge\dots\wedge e_{j_n})\\ &=\sum_{e_J\in\mathcal{E}_n}(\det A_{K,J})e_J.\qedhere \end{align*} \end{proof} We are now ready to compute the matrix $B_n$ of $\Omega_*$ on $\bigwedge^n\mathbb{Z}^d$ with respect to the same basis {$\mathcal{E}_n$}. The answer must, of course, be an integer matrix. But Lemma~\ref{lem-omega*} implies that $C_n$ is invertible as a real matrix, and hence if we can find matrices $B_n$ such that $B_nC_n=N1$, then uniqueness of the real inverse tells us that $B_n$ is the matrix of $\Omega_*$.
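For readers who want to experiment, the minor description of $C_n$ and the relation $B_nC_n=|\det A|\,1$ of Proposition~\ref{lem-helper2} below are easy to check by machine. The following sketch (assuming numpy; indices are $0$-based, the sample $3\times3$ matrix is our own choice, and the sign $(-1)^{\deg\tau_K}$ is computed via the product formula from the remark after Lemma~\ref{lem-helper1}) verifies the relation for a dilation matrix with $\det A=9>1$:

```python
import numpy as np
from itertools import combinations

def minor(A, rows, cols):
    return np.linalg.det(A[np.ix_(list(rows), list(cols))])

def sign_tau(K):
    # (-1)^{deg tau_K} = prod_i (-1)^{k_i - i}; the parities agree in 0-based indexing
    return np.prod([(-1) ** (k - i) for i, k in enumerate(K)])

def C_and_B(A, n):
    """C_n has (J,K) entry det A_{K,J}; B_n (for det A > 1) has (K,L) entry
    (-1)^{deg(tau_K tau_L)} det A_{K',L'}."""
    d = A.shape[0]
    idx = list(combinations(range(d), n))
    comp = {K: tuple(sorted(set(range(d)) - set(K))) for K in idx}
    C = np.array([[minor(A, K, J) for K in idx] for J in idx])
    B = np.array([[sign_tau(K) * sign_tau(L) * minor(A, comp[K], comp[L])
                   for L in idx] for K in idx])
    return C, B

A = np.array([[2.0, 1.0, 0.0], [0.0, 2.0, 1.0], [1.0, 0.0, 2.0]])  # det A = 9
for n in (1, 2):
    C, B = C_and_B(A, n)
    assert np.allclose(B @ C, 9 * np.eye(len(C)))
    assert np.allclose(C @ B, 9 * np.eye(len(C)))
```

For $n=1$ this reduces to the classical adjugate identity $B_1=(\det A)(A^T)^{-1}$ of Proposition~\ref{B1Bd-1}; for $n=2$ it is the generalized Laplace expansion by complementary minors of Lemma~\ref{lem-helper1}.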
\begin{prop}\label{lem-helper2} Let $B_0=|\det A|$, $B_d=\operatorname{sign}(\det A)$, and \[ B_n=\begin{cases} \Big( (-1)^{\deg(\tau_K\tau_L)} \det(A_{K',L'})\Big)_{K,L}&\text{if $\det A>1$;}\\ -\Big( (-1)^{\deg(\tau_K\tau_L)} \det(A_{K',L'})\Big)_{K,L}&\text{if $\det A<-1$.} \end{cases} \] \smallskip \textnormal{(1)} Then $B_nC_n=|\det A| 1$ where $1$ is the $\binom{d}{n}\times \binom{d}{n}$ identity matrix. \smallskip \textnormal{(2)} We have $1-B_0= 1-|\det A|<0$, $\det (1- B_n)\neq 0$ for $1\leq n<d$, and \[1-B_d=\begin{cases} 0&\text{if $\det A>1$}\\ 2&\text{if $\det A<-1$.} \end{cases}\] \end{prop} {For the proof of Proposition~\ref{lem-helper2} we need the following lemma; its first part appears as equation (5.3.7) in \cite{N}, for example.} \begin{lemma}\label{lem-helper1} Fix $n$ satisfying $1\leq n\leq d-1$. \smallskip \textnormal{(1)} If $e_J\in \mathcal{E}_n$, then \[ \det A=\sum_{e_K\in\mathcal{E}_n} (-1)^{\deg(\tau_K\tau_J)}\det(A_{K,J})\det(A_{K',J'}). \] \textnormal{(2)} If $e_J,e_L\in \mathcal{E}_n$ and $L\neq J$, then \[ \sum_{e_K\in\mathcal{E}_n}(-1)^{\deg(\tau_K\tau_J)}\det(A_{K,J})\det(A_{K',L'})=0. \] \end{lemma} \begin{proof} \noindent (1) Fix $e_J\in\mathcal{E}_n$. We have \begin{align} \det A &=\sum_{\sigma\in S_d}(-1)^{\deg\sigma}a_{\sigma(1),1}\dots a_{\sigma(d),d}\notag\\ &=(-1)^{\deg\tau_J}\sum_{\sigma\in S_d}(-1)^{\deg\sigma}a_{\sigma(1),j_1}\dots a_{\sigma(d),j_d}\notag\\ \intertext{which, by reordering the sum according to the image of $I_n:=\{1,\dots,n\}$ under $\sigma$, is } &=(-1)^{\deg\tau_J}\sum_{e_K\in\mathcal{E}_n}\sum_{\{\sigma:\sigma(I_n)=K\}}(-1)^{\deg\sigma}a_{\sigma(1),j_1}\dots a_{\sigma(d),j_d}.\label{eq-headache} \end{align} Note that for fixed $\sigma\in S_d$ such that $\sigma(I_n)=K$ we have \[ \sigma=(\sigma_K\times\sigma_{K'})\circ \tau_K \] where $\sigma_K(k_i):=\sigma(i)$ for $1\leq i\leq n$ and $\sigma_{K'}(k_l):=\sigma(l)$ for $n<l\leq d$.
So \begin{align*} \eqref{eq-headache} &= (-1)^{\deg\tau_J}\sum_{e_K\in\mathcal{E}_n}\sum_{\{\sigma:\sigma(I_n)=K\}}(-1)^{\deg\tau_K}(-1)^ {\deg(\sigma_K\times\sigma_{K'})}a_{\sigma_K(k_1),j_1}\dots a_{\sigma_K(k_n),j_n}\cdot\\ &\hskip7cm\cdot a_{\sigma_{K'}(k_{n+1}),j_{n+1}}\dots a_{\sigma_{K'}(k_{d}),j_{d}}\\ &= (-1)^{\deg\tau_J}\sum_{e_K\in\mathcal{E}_n}(-1)^{\deg\tau_K}\sum_{\alpha\in S_K,\beta\in S_{K'}} (-1)^{\deg\alpha} a_{\alpha(k_1),j_1}\dots a_{\alpha(k_n),j_n}\cdot\\ &\hskip7cm \cdot (-1)^{\deg\beta} a_{\beta(k_{n+1}),j_{n+1}}\dots a_{\beta(k_{d}),j_{d}}\\ &= \sum_{e_K\in\mathcal{E}_n}(-1)^{\deg(\tau_K\tau_J)}\det(A_{K,J})\det(A_{K',J'}). \end{align*} (2) If $L\neq J$ then $L'\neq J'$ and $L'\cap J\neq \emptyset$. Consider the matrix $D$ whose entries are those of $A$ except that the $L\setminus J$ columns of $D$ have been replaced by copies of the $J\setminus L$ columns of $A$. Thus $\det D=0$. Note that $A_{K,J}$ and $D_{K,L}$ have the same columns up to permutation, so $\det(A_{K,J})=\pm\det(D_{K,L})$. For every $K$ we have $D_{K',L'}=A_{K',L'}$, so using (1) we get \begin{align*} \sum_{e_K\in\mathcal{E}_n}&(-1)^{\deg(\tau_K\tau_J)}\det(A_{K,J})\det(A_{K',L'})\\ &=\pm\sum_{e_K\in\mathcal{E}_n}(-1)^{\deg(\tau_K\tau_J)}\det(D_{K,L})\det(D_{K',L'})=\det D=0.\qedhere \end{align*} \end{proof} \begin{remark} In \cite[page~92]{N}, it is observed that the coefficient $(-1)^{\deg(\tau_K\tau_J)}$ can be realised as the product $\prod_{i=1}^n(-1)^{j_i+k_i}$. To see this, first observe that $(-1)^{\deg(\tau_J)}=\prod_{i=1}^n(-1)^{j_i-i}$ (because $j_n-n$, for example, is the number of transpositions required to move $j_n$ to its correct place in $J'$ without changing the ordering of $J'$), and then $(-1)^{\deg(\tau_K\tau_J)}=\prod_{i=1}^n(-1)^{(j_i-i)+(k_i-i)}$. \end{remark} \begin{proof}[Proof of Proposition~\ref{lem-helper2}] Say $\det A>1$. 
Then the $(J,L)$ entry of $C_n B_n$ is \[ \sum_{e_K\in\mathcal{E}_n}\det (A_{K,J})(-1)^{\deg(\tau_K\tau_L)}\det(A_{K',L'}) \] which, by Lemma~\ref{lem-helper1}, equals $\delta_{J,L}\det A$. If $\det A<-1$ the same calculation gives $-\delta_{J,L}\det A=\delta_{J,L}|\det A|$. Thus $C_n B_n=|\det A|1=B_nC_n$. This gives (1). \smallskip The statements in (2) about $B_0$ and $B_d$ are immediate, so we suppose $1\leq n\leq d-1$. To compute $\det(1-B_n)$ we work over $\mathbb{C}$, and choose a basis for $\mathbb{C}^d$ such that $A$ is upper-triangular. We claim that if $J=\{j_1<\dots<j_n\}>K=\{k_1<\dots<k_n\}$ in the lexicographical order, then $\det (A_{J,K})=0$. If $J>K$ then there exists $m$ such that $j_i=k_i$ for $i<m$ and $j_m>k_m$. Since $A$ is upper-triangular, $j_m>k_m$ implies $a_{j_m,k_m}=0$. Moreover, $j_n>\dots>j_{m+1}>j_m>k_m$, so $A_{J,K}$ has the form \[A_{J,K}=\begin{pmatrix} U&*\\ 0&V \end{pmatrix} \] where $U$ is an $(m-1)\times (m-1)$ upper-triangular matrix, and $V$ is a square matrix with the first column consisting of zeros. Thus $\det (A_{J,K})=0$, as claimed. So if we order $\mathcal{E}_n$ with the lexicographic order, then the matrix $(\det(A_{K,J}))_{J,K}$ of $\alpha_*|=\bigwedge^n A^T$ is lower-triangular. Hence its inverse $|\det A|^{-1}B_n$ is also lower-triangular, and so is $B_n$. The diagonal entries of $B_n$ are $\pm\det(A_{K',K'})=\pm\prod_{k\in K'}a_{k,k}$; since each $a_{k,k}$ is an eigenvalue of $A$, we have $|a_{k,k}|>1$, and each diagonal entry of $B_n$ has absolute value greater than $1$. Since $B_n$ is lower-triangular, it follows that $\det(1- B_n)\neq 0$. \end{proof} \begin{thm}\label{thm-example} Let $A\in M_d(\mathbb{Z})$ be a dilation matrix with $d\geq 1$, and define $B_n$ as in Proposition~\ref{lem-helper2}. Let $M_L$ be the bimodule for the Exel system $(C(\mathbb{T}^d),\alpha_A,L)$, so that $C(\mathbb{T}^d)\rtimes_{\alpha_A,L}\mathbb{N}=\mathcal{O}(M_L)$.
\smallskip \textnormal{(1)} If $\det A>1$ and $d$ is odd, then \begin{align*} K_0(\mathcal{O}(M_L))&= {\textstyle\big(\bigoplus_{\textnormal{$n$ even, $n<d$}} \operatorname{coker}(1-B_n)\big)\oplus\mathbb{Z}},\ \text{and}\\ K_1(\mathcal{O}(M_L))&= {\textstyle \bigoplus_{\textnormal{$n$ odd, $n\leq d$}} \operatorname{coker}(1-B_n).} \end{align*} If $\det A>1$ and $d$ is even, then \begin{align*} K_0(\mathcal{O}(M_L))&= {\textstyle\bigoplus_{\textnormal{$n$ even, $n\leq d$}} \operatorname{coker}(1-B_n)},\ \text{and}\\ K_1(\mathcal{O}(M_L))&= \big({\textstyle \bigoplus_{\textnormal{$n$ odd, $n< d$}} \operatorname{coker}(1-B_n)\big)\oplus\mathbb{Z}.} \end{align*} \textnormal{(2)} If $\det A<-1$, then \begin{align*} K_0(\mathcal{O}(M_L))&= {\textstyle\bigoplus_{\textnormal{$n$ even, $n\leq d$}} \operatorname{coker}(1-B_n)},\ \text{and}\\ K_1(\mathcal{O}(M_L))&= {\textstyle \bigoplus_{\textnormal{$n$ odd, $n\leq d$}} \operatorname{coker}(1-B_n).} \end{align*} \end{thm} \begin{proof} We identify \[\textstyle{K_1(C(\mathbb{T}^d))\cong\bigoplus_{\textnormal{$n$ odd, $n\leq d$}}\bigwedge^n\mathbb{Z}^d\quad\text{and}\quad K_0(C(\mathbb{T}^d))\cong\bigoplus_{\textnormal{$n$ even, $n\leq d$}}\bigwedge^n\mathbb{Z}^d.}\] \smallskip {Suppose that $\det A>1$. By Lemma~\ref{lem-omega*}, $(\Omega\circ\alpha)_*$ is multiplication by $|\det A|$, and by Proposition~\ref{lem-helper2}(1) the matrix $C_n$ of $\alpha_*|$ has inverse $|\det A|^{-1}B_n$; it follows that the map} $\operatorname{id}-\Omega_*$ appearing in Diagram~\ref{eq-les} is \[\textstyle{\bigoplus_{\textnormal{even $n\leq d$}}(1-B_n)\quad\text{and}\quad \bigoplus_{\textnormal{odd\ } n\leq d}(1-B_n)}\] on $K_0(C(\mathbb{T}^d))$ and $K_1(C(\mathbb{T}^d))$, respectively. By Proposition~\ref{lem-helper2}(2), each $1-B_n$ with $n<d$ is injective, and $1-B_d=0$. Suppose that $d$ is odd. 
Then $\bigoplus_{\textnormal{even $n\leq d$}}(1-B_n)$ is injective and \[\ker \big(\textstyle{\bigoplus_{\textnormal{odd $n\leq d$}}}(1-B_n)\big)=\ker(1-B_d)=\mathbb{Z}.\] Thus Diagram~\ref{eq-les} gives \[K_1(\mathcal{O}(M_L))\cong \textstyle{\bigoplus_{\textnormal{$n$ odd, $n\leq d$}}} \operatorname{coker}(1-B_n)\] and an exact sequence \begin{equation*} \xymatrix{ 0\ar[r]&\textstyle{\bigoplus_{\textnormal{$n$ even, $n< d$}}}\operatorname{coker}(1-B_n) \ar[r]&K_0(\mathcal{O}(M_L))\ar[r]&\mathbb{Z}\ar[r]&0. } \end{equation*} Since $\mathbb{Z}$ is free this sequence splits, and the formula for $K_0$ follows. The proof for even $d$ is similar. For part (2), we just note that Proposition~\ref{lem-helper2}(2) implies that \[{\textstyle\bigoplus_{\textnormal{$n$ even, $n\leq d$}}}(1-B_n)\quad\text{and}\quad {\textstyle\bigoplus_{\textnormal{$n$ odd, $n\leq d$}}}(1-B_n)\] are injective, and the result follows. \end{proof} For small $d$ we can identify the $B_n$ in more familiar terms. Both $B_0$ and $B_d$ are just numbers (or rather, multiplication by those numbers on $\mathbb{Z}$). Next we have: \begin{prop}\label{B1Bd-1} For every $d$ we have $B_1=|\det A|(A^T)^{-1}$. If we list the basis for $\bigwedge^{d-1}\mathbb{Z}^d$ as $f_k:=e_{\{1,\cdots,d\}\setminus\{k\}}$, then $B_{d-1}$ is the matrix with $(k,l)$ entry $(-1)^{k+l}a_{k,l}$ (if $\det A>0$) or $(-1)^{k+l+1}a_{k,l}$ (if $\det A<0$). \end{prop} \begin{proof} For each singleton set $\{k\}$, the permutation $\tau_{\{k\}}$ is the cycle which pulls $k$ to the front and moves the elements $1,\cdots,k-1$ to the right, which has degree $k-1$. The complements $\{k\}'$ are the sets $\hat k:=\{1,\cdots,d\}\setminus\{k\}$, and the number \[ (-1)^{\deg(\tau_K\tau_L)} \det A_{\{k\}',\{l\}'}=(-1)^{\deg \tau_K+\deg\tau_L}\det A_{\hat k,\hat l}=(-1)^{k+l}\det A_{\hat k,\hat l} \] is the $(l,k)$ entry in $(\det A)A^{-1}$, and the $(k,l)$ entry in $(\det A)(A^T)^{-1}$.
The extra minus sign in the formula for $B_1$ when $\det A<0$ shows that $B_1$ is $(|\det A|)(A^T)^{-1}$. The $(k,l)$ entry in the matrix of $B_{d-1}$ with respect to the basis $\{f_k\}$ is the $(\hat k,\hat l)$ entry in the matrix with respect to the basis $\mathcal{E}_{d-1}$. For $K=\hat k$, $\tau_K$ is the cycle which moves $k$ to the back and the last $d-k$ terms one forward, which has degree $d-k+1$. Since $A_{(\hat k)',(\hat l)'}$ is the $1\times 1$ matrix with entry $a_{k,l}$, we have \[ (-1)^{\deg(\tau_K\tau_L)} \det A_{(\hat k)',(\hat l)'}=(-1)^{(d-k+1)+(d-l+1)}a_{k,l}=(-1)^{2(d+1)-(k+l)}a_{k,l}=(-1)^{k+l}a_{k,l}. \] This immediately gives the result for $\det A>0$, and for $\det A<0$, the extra minus sign in the formula for $B_{d-1}$ means we need to replace $(-1)^{k+l}$ by $(-1)^{k+l+1}$. \end{proof} We can now sum up our results for small $d$: Corollary~\ref{cord=1} is well-known, as we observed in the introduction, but Corollary~\ref{calcd=2} was a bit of a surprise. \begin{cor}\label{cord=1} Suppose $N$ is a non-zero integer, and consider the Exel system $(C(\mathbb{T}),\alpha_N,L)$ associated to the covering map $z\mapsto z^N$. \smallskip \textnormal{(1)} If $N>1$, then $K_0(C(\mathbb{T})\rtimes_{\alpha_N,L}\mathbb{N})=(\mathbb{Z}/(N-1)\mathbb{Z})\oplus\mathbb{Z}$ and $K_1(C(\mathbb{T})\rtimes_{\alpha_N,L}\mathbb{N})=\mathbb{Z}$. \smallskip \textnormal{(2)} If $N<-1$, then $K_0(C(\mathbb{T})\rtimes_{\alpha_N,L}\mathbb{N})=\mathbb{Z}/(|N|-1)\mathbb{Z}$ and $K_1(C(\mathbb{T})\rtimes_{\alpha_N,L}\mathbb{N})=\mathbb{Z}/2\mathbb{Z}$. \end{cor} \begin{cor}\label{calcd=2} Suppose that $A=(a_{ij})\in M_2(\mathbb{Z})$ is a dilation matrix.
Then \[ K_0(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})= \begin{cases} \mathbb{Z}/(|\det A|-1)\mathbb{Z}\oplus\mathbb{Z}&\text{if $\det A>1$}\\ (\mathbb{Z}/(|\det A|-1)\mathbb{Z})\oplus (\mathbb{Z}/2\mathbb{Z})&\text{if $\det A<-1$,} \end{cases} \] and \[ K_1(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})= \begin{cases} \mathbb{Z}\oplus\operatorname{coker}\bigg(\begin{matrix}1-a_{11}&a_{12}\\a_{21}&1-a_{22}\end{matrix}\bigg)&\text{if $\det A>1$}\\ \operatorname{coker}\bigg(\begin{matrix}1+a_{11}&-a_{12}\\-a_{21}&1+a_{22}\end{matrix}\bigg)&\text{if $\det A<-1$.} \end{cases} \] \end{cor} \begin{proof} The statement about $K_0$ follows immediately from Theorem~\ref{thm-example}. For $K_1$, we use the description of $B_1=B_{2-1}$ in Proposition~\ref{B1Bd-1}. (If we had used the description of $B_1$ as $|\det A|(A^T)^{-1}$, we would have got a different matrix, because we would then be calculating it with respect to the basis $\{e_1,e_2\}$ rather than $\{f_1,f_2\}=\{e_2,e_1\}$. However, the two matrices are conjugate in $M_2(\mathbb{Z})$, and hence have isomorphic cokernels.) \end{proof} We now look at the implications of these results for some concrete examples of dilation matrices. The first two were used in \cite{P2} to provide examples of projective multi-resolution analyses. \begin{examples} (1) The matrix $A=\big(\begin{smallmatrix}0&1\\2&0\end{smallmatrix}\big)$ has $\det A=-2<-1$. So $K_1(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})$ is the cokernel of {$\big(\begin{smallmatrix}1&-1\\-2&1\end{smallmatrix}\big)$}; since this matrix has determinant $-1$, it is invertible over $\mathbb{Z}$, and we have \[ K_0(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})=\mathbb{Z}/2\mathbb{Z} \text{ and } K_1(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})=0. \] These $K$-groups are the same as those of $\mathcal{O}_3$, but the class of the identity is different. 
To see the last statement, note that the class $[1]$ of the identity in $K_0(C(\mathbb{T}^2))$ is the image of $1\in \mathbb{Z}=\bigwedge^0\mathbb{Z}^2$, and when $|\det A|=2$, $1-B_0$ is invertible, so $[1]$ belongs to the range of $\operatorname{id}-\Omega_*$. Thus the class of the identity $1_{C(\mathbb{T}^2)\rtimes\mathbb{N}}=j_{C(\mathbb{T}^2)}(1)$ in $K_0(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})$ is $0$. For $\mathcal{O}_3$, on the other hand, $[1]$ is the generator of $K_0(\mathcal{O}_3)$. \smallskip (2) The matrix $A=\big(\begin{smallmatrix}1&1\\-1&1\end{smallmatrix}\big)$ has $\det A=2>1$. So Corollary~\ref{calcd=2} implies that \[ K_0(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})=\mathbb{Z} \text{ and } K_1(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})=\mathbb{Z}. \] \smallskip (3) The matrix $A=\big(\begin{smallmatrix}2&1\\-1&2\end{smallmatrix}\big)$ has $\det A=5>1$. Thus \[ K_0(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})=(\mathbb{Z}/4\mathbb{Z})\oplus\mathbb{Z} \text{ and } K_1(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})=\mathbb{Z}\oplus (\mathbb{Z}/2\mathbb{Z}). \] \smallskip (4) The matrix $A=\big(\begin{smallmatrix}2&-1\\1&-3\end{smallmatrix}\big)$ has determinant $-5$, and \[ K_0(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})=(\mathbb{Z}/4\mathbb{Z})\oplus (\mathbb{Z}/2\mathbb{Z})\text{ and } K_1(C(\mathbb{T}^2)\rtimes_{\alpha_A,L}\mathbb{N})=\mathbb{Z}/5\mathbb{Z}. \] \end{examples} No, we don't see any obvious pattern either.
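For the record, these cokernels can be checked by hand by reducing to Smith normal form. For example (4), our computation for the matrix $\big(\begin{smallmatrix}1+a_{11}&-a_{12}\\-a_{21}&1+a_{22}\end{smallmatrix}\big)=\big(\begin{smallmatrix}3&1\\-1&-2\end{smallmatrix}\big)$ from Corollary~\ref{calcd=2} runs:

```latex
\begin{pmatrix}3&1\\-1&-2\end{pmatrix}
\xrightarrow{\,C_1\leftrightarrow C_2\,}
\begin{pmatrix}1&3\\-2&-1\end{pmatrix}
\xrightarrow{\,C_2\mapsto C_2-3C_1\,}
\begin{pmatrix}1&0\\-2&5\end{pmatrix}
\xrightarrow{\,R_2\mapsto R_2+2R_1\,}
\begin{pmatrix}1&0\\0&5\end{pmatrix}
```

Since row and column operations over $\mathbb{Z}$ do not change the cokernel, $\operatorname{coker}\big(\begin{smallmatrix}3&1\\-1&-2\end{smallmatrix}\big)\cong\mathbb{Z}/1\mathbb{Z}\oplus\mathbb{Z}/5\mathbb{Z}\cong\mathbb{Z}/5\mathbb{Z}$, matching the $K_1$ group in example (4).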
\section{Introduction} Faced with various policy-violating activities ranging from disinformation~\cite{resende2019mis} to harassment~\cite{hua2020characterizing,matias2015reporting,duggan2014online} and abuse~\cite{bursztein2019rethinking,revenge}, social media companies increasingly rely on automated algorithms to detect deleterious content. One widely used approach is to check that user content is not too similar to known-bad content. For example, to detect child sexual abuse imagery~\cite{bursztein2019rethinking}, some platforms utilize similarity hashing approaches like PhotoDNA~\cite{photodna} or PDQHash~\cite{pdqhash}. These approaches map user-shared images into unique representations that encode perceptual structure, enabling quick comparisons against a database of hash values. Such approaches could be helpful for combating other forms of bad content, such as the viral spread of visual misinformation on end-to-end encrypted messaging services~\cite{resende2019mis}. For example, they could augment other efforts to provide users with important context about shared content~\cite{googlefact,whatsappsearch}. Currently deployed approaches rely on sending user content or a similarity hash of the content to a moderation service. This risks user privacy. As we detail in the body, the service can easily match a submitted similarity hash against known images to learn the content of a user's image with overwhelming confidence. Privacy can be improved utilizing cryptographic two-party computation (2PC)~\cite{yao1986generate,kulshrestha2021identifying} techniques to only reveal matching content to the moderation service and nothing more. The recent CSAM image detection system proposed by Apple~\cite{apple_csam} goes one step further and notifies the platform only when the number of matching images surpasses a certain threshold. 
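For intuition, the hash-database comparison these systems perform reduces to a Hamming-distance test against a threshold. A minimal sketch (our illustration: hashes are modeled as 256-bit integers, and the threshold value is illustrative, not a deployed setting):

```python
def hamming(a, b):
    """Number of differing bits between two equal-length hash values."""
    return bin(a ^ b).count("1")

def matches_database(h, db_hashes, threshold=31):
    """True if h is within the Hamming threshold of any known-bad hash.
    (PDQHash-style 256-bit hashes; the threshold here is illustrative.)"""
    return any(hamming(h, bad) <= threshold for bad in db_hashes)
```

It is exactly this cheap comparison that lets a service holding the database match a submitted similarity hash against known images with high confidence, which is why sending the hash itself already risks user privacy.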
Automated notification of platforms necessarily raises concerns about privacy and accountability (e.g., how to ensure the system is not used for privacy-invasive search for benign images). An alternative approach is to have only the client learn the output of similarity checks, to enable client-side notifications, warning or otherwise informing users. This may not be suitable for all classes of abuse content, such as CSAM, where the recipient may be adversarial, but could be useful for other abuse categories (misinformation, harassment, etc.). However, the scale of databases makes it prohibitive either to send the known-bad hashes to the client or, should hashes be sensitive, to apply 2PC techniques that ensure as little as possible about the database leaks to clients. For an example of the latter, Kulshrestha and Mayer's~\cite{kulshrestha2021identifying} private approximate membership computation (PAMC) protocol achieves state-of-the-art performance, but nevertheless requires about~$27$ seconds to perform a similarity check against a database with one million images. The protocol also has an average false negative rate of almost $17\%$ for slightly transformed images, meaning many similar images may be erroneously marked as dissimilar. In this work, we target client-side detection, in order to warn users against abusive content. To this end, we explore the question of how to scale privacy-preserving image similarity protocols, while preserving correctness of the similarity testing. We introduce and formalize the concept of similarity-based bucketization (SBB). The idea is to reveal a small amount of structured information in a message to a database-holding server, so that it can determine a bucket of possibly relevant database entries. Ideally the bucket ends up being only a small fraction of the full database, enabling use of a subsequent similarity testing protocol on the bucket to perform the final similarity check.
We explore instantiating the testing protocol in a variety of ways. The key technical challenge facing SBB is balancing the competing goals of minimizing bucket size (efficiency) with leaking as little information as possible (privacy). For example, one could modify a standard similarity hash, say PDQHash, to provide only very coarse comparisons. But as we will show, this still leaks a lot of information to the server, allowing high-confidence attacks that can associate the coarse hash to the specific content of a client request. More broadly we need a framework for navigating this tension. We propose such a framework. It formalizes information leakage using a game-based definition. To be specific, an adversarial server attempts to learn, from an SBB message generated for some image drawn from an adversary-known distribution, a predicate about the underlying image. As an important running example, we use a ``matching predicate'' that checks if the underlying image has the same perceptual hash value as that of a known target image. Unlike in more traditional cryptographic definitions (e.g.,~\cite{goldwasser1984probabilistic}), we do not require the adversarial server to have negligible success (which would preclude efficiency) and instead offer a range of measures including accuracy improvement over baseline guessing, adversarial precision, and adversarial area under the receiver operating characteristic curve~(AUC). Indeed, there is no one-size-fits-all approach to measuring privacy damage, and our framework allows one to more broadly assess risks. We offer a concrete SBB mechanism that increases adversarial uncertainty compared to naive approaches. It converts any similarity hash that uses Hamming distance to a privacy-preserving coarse embedding; we focus on PDQHash because it is widely supported. We combine techniques from locality-sensitive hashing~\cite{gionis1999similarity} with lightweight noise mechanisms. 
The ultimate algorithm is conveniently simple: apply a standard PDQHash to an image, choose a designated number~$d$ of bit indices randomly, flip each selected bit with probability~$\gamma$, and then send the resulting~$d$ bits and their indices to the server. An image in the server's database is included in a bucket if~$k$ or fewer of the relevant~$d$ bits of its PDQHash mismatch those sent by the client. Using real-world social media data, we empirically assess correctness, efficiency and privacy under various definitions. We explore various settings of $d$, $\gamma$, and $k$, and show that it is possible to ensure average bucket sizes of~$9.3\%$ of the database, while ensuring that: (1)~similar images are included in the bucket at least 95\% of the time, and (2)~an optimal adversary for the matching predicate achieves less than 50\% precision, signifying low confidence in matching attacks. We caution that these empirical results are dataset-dependent, and may not generalize to every potential use case. Instead they can be interpreted as a proof-of-concept that SBB works in a realistic scenario. We then combine our SBB mechanism with various similarity protocols, with different privacy guarantees for the server's content. For the expedient approach of downloading the bucket of server PDQHash values and performing comparisons on the client side, SBB provides a speedup of $29\times$ or more. A full similarity check requires less than $0.5$ seconds for a database of $2^{23}$ images. We also explore using SBB to speed up an ad hoc similarity protocol based on secure sketches~\cite{dodis2004fuzzy}, as well as 2PC protocols implemented in the EMP~\cite{emp-toolkit} and CrypTen~\cite{crypten2020} frameworks. Our experiments indicate that SBB can provide speed-ups of $601\times$, $97\times$, and $67\times$, respectively, and often enables use of 2PC that would fail otherwise due to the size of the database.
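In simplified form, the client and server steps of this mechanism can be sketched as follows (our own sketch, not the paper's reference implementation; hashes are modeled as 256-bit Python integers and the function names are ours):

```python
import random

HASH_BITS = 256  # length of a PDQHash

def client_embed(pdq_hash, d, gamma, rng=random):
    """Client step: reveal d randomly chosen bit positions of the
    256-bit hash, flipping each revealed bit with probability gamma."""
    idxs = rng.sample(range(HASH_BITS), d)
    bits = []
    for i in idxs:
        b = (pdq_hash >> i) & 1
        if rng.random() < gamma:
            b ^= 1  # noise: flip with bias gamma
        bits.append(b)
    return idxs, bits

def server_bucket(idxs, bits, db_hashes, k):
    """Server step: keep every database hash that mismatches the
    client's d revealed bits in at most k positions."""
    bucket = []
    for h in db_hashes:
        mismatches = sum(((h >> i) & 1) != b for i, b in zip(idxs, bits))
        if mismatches <= k:
            bucket.append(h)
    return bucket
```

Setting $\gamma=0$ and $k=0$ degenerates to exact subsampling of the hash; the parameter settings evaluated in the body trade bucket size against privacy.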
We conclude by discussing various limitations of our results, and open questions that future work should answer before deployment in practice. Nevertheless, we expect that our SBB approach will be useful in a variety of contexts. Encrypted messaging apps could use it to help warn users about malicious content, with significantly better privacy than approaches that send plaintext data to third-party servers~\cite{googlefact,whatsappsearch}. In another setting, social media platforms that currently query their users' plaintext data to third-party services to help identify abuse (e.g.,~\cite{threatexchange,joint}) could use our techniques to improve privacy for their users. To facilitate future research, our prototype implementation is publicly available.\footnote{\url{https://github.com/vegetable68/sbb}} \section{Background and Existing Approaches} \label{sec:overview} In this section, we provide some background about a key motivating setting: providing client-side detection of bad content in end-to-end (E2E) encrypted messaging. That said, our approaches are more general and we discuss other deployment scenarios in \apref{ap:mod}. \paragraph{Content detection and end-to-end encryption.} Content moderation aims to mitigate abuse on social media platforms, and can include content removal, content warnings, blocking users, and more. Most moderation approaches rely on detecting objectionable content, particularly at scale where automated techniques seem to be requisite. Social media companies often maintain large databases of known adversarial content~\cite{bursztein2019rethinking,photodna} and compare a client message with items in the databases to see if the message is sufficiently similar to some piece of adversarial content. However, this approach requires the client to reveal plaintext message content, which stands in tension with privacy-preserving technologies like E2E encryption. 
On the other hand, leaving contents unmoderated on the platform is unsatisfactory given the harms caused by abusive content such as misinformation, child sexual abuse material (CSAM), harassment, and more. Governments\footnote{\url{https://www.justice.gov/opa/press-release/file/1207081}} and non-governmental organizations\footnote{\url{https://www.missingkids.org/e2ee}} have for many years emphasized the need for technical innovations that could enable law enforcement access to encrypted data, while minimizing risks of privacy violation~\cite{rozenshtein_child,group_moving,eu_report}. However, security experts have repeatedly expressed concern that such `backdoor' access would fundamentally break the privacy of E2E encryption~\cite{cdt_new,crodker_dont,portnoy_why,muffet_what} or, if it provided content blocking functionality, enable problematic censorship~\cite{portnoy_why}. In this work, we target mitigations that allow privacy-preserving client-side detection of content similar to known bad content. We focus on images, as discussed below. Our protocol is agnostic to how client software uses this detection capability, but we believe that client software should be designed to empower users with information and the ability to make their own decisions about content. Our techniques may be useful, for example, to mitigate the increasing use of E2E encrypted messaging for harmful disinformation campaigns~\cite{resende2019mis,gursky2021countering}. A widely discussed approach is to warn users against known disinformation. Recent research~\cite{kaiser2021adapting} has shown that when carefully designed, such warnings are effective in guiding user behaviors to avoid disinformation. Our work provides a technical solution for the client-side warning mechanism. To be specific, the proposed system queries whether a client's received content is similar to known disinformation and returns the answer only to the client. 
Such a design avoids both outright censorship and notifying platform operators that a particular client received a particular piece of content. This solution would enable the kinds of user-initiated known content detection approaches that have been suggested recently~\cite{mayer_content,callas_thoughts}, and could help complement existing anti-abuse techniques that do not consider content, such as those used in WhatsApp~\cite{whatsapp_stop}. But warning-style approaches that inform and empower users may not be suitable for threats like CSAM, where the recipients of messages can themselves be bad actors. Here client software would seemingly have to limit user choice, automatically blocking detected content and/or notifying some authority about it. Recent designs for CSAM mitigations include the Kulshrestha-Mayer protocol~\cite{kulshrestha2021identifying} (when used to notify the platform) and the CSAM detection proposal by Apple~\cite{apple_csam}. Cryptographers have, in turn, raised the alarm that, while efforts to combat CSAM are laudable, these platform-notifying systems represent a potential E2E encryption backdoor that is subject to misuse by platform operators or governments~\cite{mckinney_apple,green_apple} and that future work is needed to make such systems transparent and accountable. Our work is different, as we target client-side notification and not platform notification. Another concern is that even client-side notification ends up a stepping stone towards riskier backdoor/censorship mechanisms, because once the former is deployed it will be easier to deploy, or justify deploying, the latter. Client-side functionality at least provides the opportunity for activists and others to detect changes to client-side software and understand their effects, adding some transparency and accountability. 
At the same time, arguments for, or against, various anti-abuse mechanisms would do well to delineate between approaches that empower users to understand and control their online experience (warnings, the ability to select users/content to block) and that disempower users (client-side or platform-side automatic censorship). We believe our techniques will be useful for the former, without intrinsically promoting the latter. \paragraph{Client-side similarity testing and privacy.} \label{sec:deployment} As mentioned above, we focus on private image similarity testing services. These allow a client, who receives some value $w$ on an E2E encrypted platform, to submit a request to a service provider holding a database $\altmathcal{B}$; the response indicates to the client whether $w$ is similar to any item in $\altmathcal{B}$. As the database $\altmathcal{B}$ may be quite large, we need scalable solutions. The service provider could be the messaging platform, or a third party service. In the case when the provider is a third party service, the protocol runs between the client and the testing service, without involvement of platform servers. A key concern will be the privacy risk imposed on clients by a testing service. Our threat model consists of an adversary in control of the service's servers, who wants to learn information about a client's image~$w$ by inspecting messages sent to the service in the course of similarity testing. This is often referred to as a semi-honest adversary, though our approaches will meaningfully resist some types of malicious adversaries that deviate from the prescribed protocol. In terms of privacy threat, we primarily focus on what we call a matching attack, in which the adversary wishes to accurately check whether $w$ matches some adversarially chosen image (see \secref{sec:privacy-goal} for a formalization). 
A matching attack enables, for example, adversarial service operators to monitor whether clients received any image on an adversarially chosen watchlist. In this initial work we primarily focus on the risks against a single query from the client, and explicitly do not consider adversaries that just want to recover partial plaintext information, such as whether an image contains a person. While we believe our results also improve privacy for such attacks, we do not offer evidence either way and future work will be needed to explore such threats. We also do not consider misbehaving servers that seek to undermine correctness, e.g., by modifying $\altmathcal{B}$ to force clients to erroneously flag innocuous images. How to build accountability mechanisms for this setting is an interesting open question. In \apref{ap:repeated} we simulate scenarios in which adversaries can take advantage of known correlations between queried images, and we propose potential mitigations in \secref{sec:limitations}. Nevertheless, several challenges already face the development of a service that prevents accurate matching attacks in our setting. First, while prior work has established practical protocols for private set membership~\cite{thomas2019protecting,li2019protocols}, these only provide exact equality checks: even small manipulations such as image re-sampling, minor cropping, or format conversion make exact matching schemes fail. Second, the database $\altmathcal{B}$ can be arbitrarily large and may require constant updates. For instance, the published dataset from Twitter of activities of accounts associated with the Russian Internet Research Agency consists of 2 million images in total~\cite{twitter_ira}. \paragraph{Existing approaches.} We review deployed systems and suggested designs for image similarity testing. \textbf{\emph{Plaintext services.}} Most current deployments have the client upload their image to a third party service.
A prominent example is the PhotoDNA service. After a client submits an image to the service, it immediately hashes the image using a proprietary algorithm~\cite{photodna}. Importantly, the hash can be compared to other hashes of images in a way that measures similarity of the original images. Such hashes are often called similarity hashes~\cite{oliva2001modeling,chum2008near} or perceptual hashes~\cite{zauner2010implementation}. (We show examples later.) The original image that was sent to the service is deleted after hashing. This plaintext design has various benefits, including simplicity for clients and the ability to hide the details of the hashing used. The latter is important in contexts where malicious users attempt to modify an image $w \in \altmathcal{B}$ in the service's bad list to create an image $w'$ that will not be found as similar to any image in $\altmathcal{B}$ (including $w$)~\cite{xiao2019seeing}. Another example of a plaintext service is WhatsApp's in-app reverse search function to combat visual misinformation~\cite{whatsappsearch}, rolled out in June 2020. This feature allows users to submit their images to Google reverse image search for the source or context of a specific image. In this case, the user needs to reveal their image to both Google and WhatsApp, sacrificing user privacy. \textbf{\emph{Hashing-based services.}} For privacy-aware clients, revealing plaintext images represents a significant privacy risk. An alternative approach is to use a public hashing algorithm, have the client first hash their image, and submit only the resulting representation to the similarity checking service. While this requires making the hashing algorithm available to clients (and, potentially, adversarial users), it improves privacy because the original images are not revealed to the service. It also improves performance: hashes can be compact (e.g., 256 bits) and compared against a large database $\altmathcal{B}$ in sublinear time~\cite{norouzi2012fast}. 
This approach is used by Facebook's ThreatExchange~\cite{threatexchange} service that allows organizations to share hashes of images across trust boundaries. They use a custom similarity hash called PDQHash~\cite{pdqhash}. Sharing hashes, however, still has privacy risk. For example, although the lossy process of PDQHash generation makes recovering the exact input impossible in general, revealing the hash allows inferring whether a queried value is similar to another image. An adversary at the service provider's side may brute-force search a database of images to find ones close to the queried value. \textbf{\emph{Cryptography-based services.}} An alternative approach that preserves privacy is to employ a secure 2PC protocol~\cite{yao1986generate} between the client and service. Existing 2PC protocols for similarity matching (e.g.,~\cite{jarrous2013secure,asharov2018privacy,chen2020sanns}) can, in the best case, ensure that no information about the client's image is leaked to the server and that nothing about $\altmathcal{B}$ (beyond whether it contains a similar image) leaks to the client. However, existing 2PC protocols do not efficiently scale to large databases~$\altmathcal{B}$. Recent work by Kulshrestha and Mayer~\cite{kulshrestha2021identifying} proposed private approximate membership computation~(PAMC) to allow similarity testing of images encoded as PDQHashes. The protocol begins by splitting the database $\altmathcal{B}$ into buckets. Using private information retrieval, a client retrieves a bucket from the server with the bucket identifier generated from the PDQHash of their image. The chosen bucket is not disclosed to the server. The two parties then perform a private similarity test to determine whether the client PDQHash has sufficiently small Hamming distance to any image in the bucket. 
The protocol is still rather expensive, with their initial experiments requiring $37.2$ seconds for a one-time set up and $27.5$ seconds for a query for a block list of size $2^{20}$. These times exclude network delays (measurements were performed with client and server on the same workstation). While a step closer to practicality, this remains prohibitive particularly since we expect that performance in deployment would be worse for lightweight client hardware such as mobile phones. Concurrent work by Apple~\cite{apple_csam} proposed a framework that encodes images from a user's cloud storage into perceptual hashes. The perceptual hashing algorithm maps similar images into identical hashes with high probability. The framework then performs private set intersection between the encoded hashes and a database of known CSAM images. The private contents are revealed to the platform only when the number of matches exceeds a certain threshold. Whether such a framework, designed for CSAM detection, is fit for client-side detection remains a question for future work to explore. In summary, none of the three existing design approaches for image similarity testing --- revealing images as client requests, using similarity representations like PDQHash as client requests, and employing secure 2PC protocols --- provides a satisfying solution. The first two designs do not provide sufficient privacy, while 2PC designs are currently not sufficiently efficient. Thus we need a new approach to similarity testing. \section{Similarity-Based Bucketization} \label{sec:sbb} To enable efficient, privacy-preserving client-side similarity testing, we take inspiration from previous work that used bucketization to support efficient private set membership testing~\cite{thomas2019protecting,li2019protocols}. Those techniques will, however, not work for image similarity testing. We therefore introduce a new two-step framework, as shown in \figref{fig:sbb-overview}.
It first enables scaling by utilizing what we call similarity-based bucketization (SBB) to gather a subset $B \subseteq \altmathcal{B}$, called a bucket, of possibly relevant images. The second step is to perform a similarity testing protocol over the bucket; we explore how SBB can provide scaling improvements for several different similarity testing protocols. In this section we introduce coarse embeddings, which allow crude similarity comparisons, rather than the granular ones that regular similarity hashes provide. A summary of the notation we use throughout this paper appears in \tabref{tab:notation}. For simplicity, we refer to the similarity testing server as the server. \paragraph{Formalizing SBB.} We formalize embeddings first. A similarity embedding method is a function $\altmathcal{F}\smidge\colon\smidge \altmathcal{W}\rightarrow\{0,1\}^\ell$ for the space of images~$\altmathcal{W}$ and where $\ell$ is a parameterizable length. Some embeddings map images to $\mathbb{R}^\ell$ (or a suitably granular approximation of it), but we focus on bit strings unless explicitly mentioned otherwise. Associated with $\altmathcal{F}$ is a distance measure $\Delta\smidge\colon\smidge \{0,1\}^\ell \times \{0,1\}^\ell \to \mathbb{Z}$. We focus on $\Delta$ being Hamming distance. Most often one sets an application-dependent threshold $T$ as the definition of similarity, and builds $\altmathcal{F}$ so that matches with distance values smaller than $T$ indicate the images depicted are perceptually similar. An example $\altmathcal{F}$ is the aforementioned PDQHash~\cite{pdqhash}. PDQHash was designed to capture the ``syntactic'' similarity between images. Syntactic similarity captures if two images are essentially the same, e.g., the same image but of different quality, or rotated slightly. This is different from semantic similarity, which focuses on whether images share the same features, e.g., the same person. 
Algorithms designed for syntactic similarity also include PhotoDNA~\cite{photodna} and pHash~\cite{zauner2010implementation}. PDQHash first converts a given image $w$ from RGB to luminance, then uses two-pass Jarosz filters to compute a weighted average of $64\times64$ subblocks of the luminance image. Given the $64\times64$ downsample, the algorithm computes a two-dimensional discrete cosine transform~(DCT), and keeps only the first $16$ slots in both X and Y directions. After that, each entry of the $16\times16$ DCT output is transformed into a binary bit after being compared to the median, with $1$ indicating a value larger than the median and $0$ otherwise. \begin{table}[t] \centering \footnotesize \begin{tabular}[t]{lp{0.35\textwidth}} \toprule Symbol & Description\\ \midrule $w$ / $\altmathcal{W}$ & an image / set of all images \\ $\altmathcal{B}$ & set of images held by server\\ $\mathcal{D}$ & distribution from which images are sampled\\ \midrule $\ell$ & length of similarity embedding \\ $\altmathcal{F}$ & similarity embedding method \\ $v$ / $\altmathcal{Y}$ & result of similarity embedding / set of all such results\\ $\Delta$ & distance function between two similarity embeddings\\ $T$ & distance threshold for similarity matching using $\Delta$\\ $\mathcal{D}_{\altmathcal{F}}$ & distribution of similarity embeddings induced by $\altmathcal{F}$ \\ \midrule $\textnormal{Emb}$ & coarse embedding algorithm\\ $\textnormal{Sim}$ & coarse embedding similarity algorithm\\ $p$ & an output of $\textnormal{Emb}$\\ $d$ & length of $p$\\ $B$ & candidate bucket generated from $\textnormal{Sim}$\\ \midrule $\gamma$ & parameter of $\mathbb{E}_{LSH}$, flipping bias\\ $k$ & coarse threshold of $\textnormal{Sim}$\\ \bottomrule \end{tabular} \vspace{-0.3cm} \caption{\label{tab:notation} Notation frequently used in this paper.} \end{table} \textbf{Coarse embedding schemes.} To allow bucketization via similarity, we define a coarse embedding scheme $\mathbb{E} =
(\textnormal{Emb},\textnormal{Sim},(\altmathcal{W},\Delta_\imagespace))$, as a pair of algorithms and an associated metric space. The (possibly randomized) embedding algorithm $\textnormal{Emb}(w)$ takes as input a value $w \in \altmathcal{W}$ and outputs a value $p \in \{0,1\}^d$. Here $d$ is a configurable parameter. We call $p$ the embedding of $w$, or simply the embedding when $w$ is clear from context. The deterministic algorithm $\textnormal{Sim}(p,w')$ takes as input $p \in \{0,1\}^d$ and $w' \in \altmathcal{W}$ and outputs a bit. An output of one indicates that the embedding of $w'$ is similar to $p$, the embedding of $w$. It will be convenient to abuse notation by letting $\textnormal{Sim}(p,\altmathcal{B})$ be defined to output the set $\{w' \;|\; w' \in \altmathcal{B} \land \textnormal{Sim}(p,w') = 1\}$. One idea for a coarse embedding scheme would be to simply use $\altmathcal{F}$ directly, but with smaller $\ell$ and smaller $T$. To be specific, using PDQHash as an example, a coarse PDQHash scheme $\mathbb{E}_{cPDQ} = (\textnormal{Emb}_{cPDQ},\textnormal{Sim}_{cPDQ},(\altmathcal{W},\Delta_\imagespace))$ can be implemented as follows: $\textnormal{Emb}_{cPDQ}(w)$ computes the hash of $w$ on the first $4\times4$ slots of the DCT output, rather than the full $16\times16$ output, producing a $16$-bit binary string. The $16$-bit value can then provide a much cruder similarity comparison. Then $\textnormal{Sim}_{cPDQ}(p,\altmathcal{B})$ iterates over all $w' \in \altmathcal{B}$, hashes them, and returns those with distance smaller than a coarse threshold~$k$ as a bucket $B$. Unfortunately this scheme doesn't meet our privacy goals, as we will explore in detail in \secref{sec:expr}. \textbf{Correctness and compression efficiency.} We define $\Delta_\imagespace$ via an existing similarity embedding $\altmathcal{F}$, i.e., $\Delta_\imagespace(w,w') = \Delta(\altmathcal{F}(w),\altmathcal{F}(w'))$.
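To make the $\mathbb{E}_{cPDQ}$ idea concrete, the following toy Python sketch mirrors its structure: a two-dimensional DCT, truncation to the low-frequency $4\times4$ corner, and median binarization into a $16$-bit embedding, plus a bucket-building $\textnormal{Sim}$. It is an illustration only: PDQHash's luminance conversion, Jarosz filtering, and $64\times64$ downsampling are omitted, and the naive DCT is quartic-time.

```python
import math
import statistics

def dct2(block):
    """Naive two-dimensional DCT-II of an n x n matrix (O(n^4); toy only)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            out[u][v] = sum(
                block[x][y]
                * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                for x in range(n) for y in range(n))
    return out

def coarse_hash(block, keep=4):
    """Keep the low-frequency keep x keep DCT corner and binarize each
    coefficient against the median, yielding a keep*keep-bit embedding."""
    d = dct2(block)
    coeffs = [d[u][v] for u in range(keep) for v in range(keep)]
    med = statistics.median(coeffs)
    return tuple(int(c > med) for c in coeffs)

def sim_coarse(p, images, k):
    """Bucket the images whose coarse hash is within Hamming distance k of p."""
    def ham(a, b):
        return sum(x != y for x, y in zip(a, b))
    return [w for w in images if ham(p, coarse_hash(w)) <= k]

img = [[(3 * x + 5 * y) % 17 for y in range(8)] for x in range(8)]
p = coarse_hash(img)
assert len(p) == 16
assert sim_coarse(p, [img], k=0) == [img]  # an exact duplicate is bucketed
```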
We say that a coarse embedding scheme is $(T,\epsilon, \mathcal{D})$-correct if, for an image $w$ sampled from $\mathcal{D}$, a distribution over $\altmathcal{W}$, and for any $w'$ such that $\Delta_\imagespace(w,w') < T$, we have that $\Prob{\textnormal{Sim}(\textnormal{Emb}(w), w')=1} \geq 1 - \epsilon$, where the probability is taken over the random coins used by $\textnormal{Emb}$ and the choice of $w$ from $\mathcal{D}$. A trivial coarse embedding scheme is to just use $\altmathcal{F}$ itself, which would be $(T,0, \mathcal{D})$-correct for any $T$ and $\mathcal{D}$. But as mentioned, doing this will not provide the desired privacy. Another type of trivial coarse embedding scheme is to have~$\textnormal{Sim}$ always output one. Then $\textnormal{Emb}$ could output a fixed constant value regardless of input, meaning nothing leaks about $w$. This would also be $(T,0,\mathcal{D})$-correct for arbitrary $T$ and any given $\mathcal{D}$, but won't be useful because, in our SBB application, the bucket would end up being the entire set $\altmathcal{B}$. Hence, we define a compression efficiency metric as follows. We say a coarse embedding scheme is $(\altmathcal{B},\alpha,\mathcal{D})$-compressing if, for a distribution $\mathcal{D}$ over $\altmathcal{W}$, a subset $\altmathcal{B} \subseteq \altmathcal{W}$, and $w$ drawn from $\mathcal{D}$, we have that $\Ex{\frac{|B|}{|\altmathcal{B}|}} \leq \alpha$ where $B = \{w' | w' \in \altmathcal{B}, \textnormal{Sim}(\textnormal{Emb}(w),w')=1\}$ and the probability space is over the choice of $w$ from $\mathcal{D}$ and the coins used by $\textnormal{Emb}$. This measures the average ratio of bucket size to dataset size. \paragraph{LSH-based coarse embedding.} \label{sec:algo-detail} We propose a coarse embedding scheme that is based on locality sensitive hashing~(LSH)~\cite{gionis1999similarity}. An LSH function family allows approximate nearest neighbor search with high-dimensional data.
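For Hamming space, an LSH family can be as simple as bit-position projections; the sketch below anticipates the scheme formalized next, sampling $d$ positions without replacement and flipping each sampled bit with probability $\gamma$. The function names are ours; the parameters follow the paper's notation.

```python
import random

def emb_lsh(v, d, gamma, rng=random):
    """Emb: sample d bit positions of the hash v (a 0/1 list) without
    replacement, then flip each sampled bit with probability gamma."""
    idx = rng.sample(range(len(v)), d)
    p = [v[i] ^ (rng.random() < gamma) for i in idx]
    return idx, p

def sim_lsh(idx, p, hashes, k):
    """Sim: bucket the hashes whose bits at positions idx lie within
    Hamming distance k of the received noisy bits p."""
    return [v for v in hashes
            if sum(p[j] != v[i] for j, i in enumerate(idx)) <= k]

rng = random.Random(7)
v = [rng.randint(0, 1) for _ in range(256)]  # stand-in for F(w)
idx, p = emb_lsh(v, d=16, gamma=0.0, rng=rng)
# With no noise, the true hash always lands in its own bucket:
assert sim_lsh(idx, p, [v], k=0) == [v]
```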
Formally, the scheme $\mathbb{E}_{LSH}=(\textnormal{Emb}_{LSH},\textnormal{Sim}_{LSH},(\altmathcal{W},\Delta_\imagespace))$ is defined as follows~(see also \figref{fig:sbb-protocol}). Let $\altmathcal{I}$ be a family of hash functions that maps points from a high-dimensional input space $\mathbb{I}$ into a hash universe $U$ of lower dimension. When $\mathbb{I}=\{0,1\}^{\ell}$ and $\Delta$ is Hamming distance, the construction of an LSH function family is intuitive. For an $\ell$-bit string $v$, we denote the individual bits as $v_1,\ldots,v_\ell$. An indexing function is a map $I_j\smidge\colon\smidge\{0,1\}^\ell\to\{0,1\}$ defined by $I_j(v)=v_j$, and we let $\altmathcal{I}$ be the set of all $\ell$ indexing functions, which is the LSH function family. In our context, we randomize the selection of LSH functions for every individual query, and add noise to ensure privacy. $\textnormal{Emb}_{LSH}$ takes an image $w$ as input, and computes its similarity embedding via $v \gets \altmathcal{F}(w)$. In our implementation, we use PDQHash for $\altmathcal{F}$. Our protocol works with other types of embedding functions that use Hamming distance as a metric, such as pHash~\cite{zauner2010implementation}. We sample $d$ bits from $v$ by sampling $d$ LSH functions without replacement, and flip each sampled bit with probability~$\gamma$. The resulting noisy embedding and the sampled indices are shared with the server. The server performs $\textnormal{Sim}_{LSH}$ by comparing the received bits to the corresponding bits of $\altmathcal{F}(w_i)$ for each $w_i \in \altmathcal{B}$, adding $w_i$ to the bucket $B$ if sufficiently many of these bits match. To formalize this we abuse notation slightly. We denote $I$ as a map $\{0,1\}^\ell \to \{0,1\}^d$, a combination of $d$ functions sampled uniformly from $\altmathcal{I}$ without replacement. Similarly, one can easily encode an indexing function as a set of indexes; we treat $I$ both as a function and its encoding.
We let $\textnormal{Flip}_\gamma$ be the randomized algorithm that takes as input a bit string $p$ and outputs $\tilde{p}$ of the same length, setting $\tilde{p}_i = p_i$ with probability $1 - \gamma$ and $\tilde{p}_i = \lnot p_i$ with probability $\gamma$. The full algorithms for $\mathbb{E}_{LSH}$ are shown in \figref{fig:sbb-protocol}. We use a threshold $k$ for the Hamming distance over the randomly selected indexes. We formally analyze correctness of $\mathbb{E}_{LSH}$ in \apref{sec:correctness}. Different choices of the parameters of $\mathbb{E}_{LSH}$, i.e., the embedding length $d$, flipping bias $\gamma$, and coarse threshold $k$, result in different combinations of privacy loss, correctness, and bucket compression rate. We explore this trade-off in \secref{sec:expr}. \begin{figure}[t] \centering \fpage{.45}{ \hpagess{.35}{.48}{ \underline{$\textnormal{Emb}_{LSH}(w)$}\\[1pt] $v \gets \altmathcal{F}(w)$\\ $I {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \altmathcal{I}$\\ $p {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \textnormal{Flip}_\gamma(I(v))$\\ Return $(I,p)$ }{ \underline{$\textnormal{Sim}_{LSH}((I,p),\altmathcal{B})$}\\[1pt] $B \gets \{\}$\\ For $w \in \altmathcal{B}$:\\ \hspace*{1em} If $\Delta(p,I(\altmathcal{F}(w))) \le k$ then\\ \hspace*{1em}\myInd $B \gets B \cup \{w\}$\\ Return $B$ } } \vspace{-0.3cm} \caption{Coarse embedding scheme $\mathbb{E}_{LSH}$.} \vspace{-0.3cm} \label{fig:sbb-protocol} \end{figure} One limitation of $\mathbb{E}_{LSH}$ arises should an adversary be able to collect many queries that it knows are for the same image. Eventually it will see all bit locations, and even have enough samples to average out the noise (e.g., via a majority vote for each bit location). We discuss this further in \secref{sec:limitations}. \paragraph{Similarity protocols.} \label{sec:similarity_protocol} A coarse embedding scheme will not suffice to perform a full similarity check.
Instead, we compose such a scheme with a similarity protocol to perform SBB, where the server uses the resulting bucket $B {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \textnormal{Sim}(\textnormal{Emb}(w),\altmathcal{B})$. The composition achieves privacy for $\altmathcal{B}$ related to that of the similarity protocol, and correctness proportional to the product of the coarse embedding's and the protocol's correctness. We discuss some examples and their properties here. These examples ensure perfect correctness; hence the correctness of the composition depends solely on that of the coarse embedding. In \secref{sec:end-to-end}, we show that for various similarity protocols, both runtime and bandwidth costs are substantially reduced when combined with SBB. \textbf{\emph{Similarity embedding retrieval.}} A pragmatic similarity protocol has the server send to the client the similarity embeddings of all the elements in the bucket, i.e., send $v_1,\ldots,v_{|B|}$ where $v_i = \altmathcal{F}(w_i)$ for each $w_i \in B$. The client can then compute $\altmathcal{F}(w)$ and compare against each $v_i$. This approach reduces confidentiality for the server's dataset, since clients now learn all the similarity embeddings in $\altmathcal{B}$ that fall into the bucket. It may also reduce resistance to evasion attacks, but in contexts where client privacy is paramount this simple protocol already improves on existing approaches. \textbf{\emph{Secure-sketch similarity protocol.}} We can improve server confidentiality via a secure-sketch-based~\cite{dodis2004fuzzy} similarity protocol. The protocol ensures that the client can only learn the similarity hashes that are close to a client-known value. If images in the database have sufficiently high min-entropy, then the secure sketch ensures that the client cannot learn them.
This assumption may not always hold (most obviously in the case that the client has a similar image), in which case confidentiality falls back to that achieved by similarity embedding retrieval. We defer details and formalization to \apref{sec:sssp}. \textbf{\emph{2PC similarity protocols.}} Finally, one may compose SBB with an appropriate 2PC for similarity comparisons. Such an approach provides better confidentiality for $\altmathcal{B}$, but at the cost of larger bandwidth and execution time. We experiment with two frameworks: CrypTen~\cite{crypten2020} and EMP~\cite{emp-toolkit}. CrypTen is a secret-sharing-based semi-honest MPC framework for Python that is geared toward machine learning applications. CrypTen currently relies on a trusted third party for certain operations, including generating Beaver multiplication triples~\cite{beaver1991efficient}. Generation of Beaver triples using Paillier~\cite{paillier1999public} is actively under development. EMP is a circuit-garbling-based generic semi-honest 2PC framework that is implemented in C++. Both frameworks above target semi-honest security. One could also compose SBB with a maliciously secure 2PC protocol, with the caveat that a malicious server is not bound to correctly execute the SBB $\textnormal{Sim}$ algorithm and so could deviate by adding arbitrary values to the bucket. In our context, such an attack can anyway be performed by just modifying $\altmathcal{B}$ in the first place, but this could be relevant in future work, particularly as it relates to accountability mechanisms that monitor for changes to~$\altmathcal{B}$. \section{Privacy of Coarse Embeddings} \label{sec:privacy-goal} In this section, we detail our framework for reasoning about privacy threats against coarse embeddings. Our framework is designed to analyze the adversary's confidence in assessing whether a predicate holds, given one or more client requests as input.
Here we only consider client privacy; privacy of the server's dataset can be achieved by composing SBB with a suitable similarity protocol (see \secref{sec:similarity_protocol}). \paragraph{Proposed security measures.} We consider settings where an adversary receives the embedding(s) of one or more images, and wants to infer some predicate over the images. Let $\altmathcal{W}^q$ be the Cartesian product of $q$ copies of $\altmathcal{W}$. We denote tuples of images in bold, $\mathbf{\image} \in \altmathcal{W}^q$ and $\mathbf{\image}[i] \in \altmathcal{W}$ for $i \in [1,q]$. Let $\textnormal{Emb}(\mathbf{\image})$ be the result of running $\textnormal{Emb}$ independently on each component of $\mathbf{\image}$, denoted as $\mathbf{\embimage}$. That is, $\mathbf{\embimage} {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \textnormal{Emb}(\mathbf{\image})$ is shorthand for $\mathbf{\embimage}[i] {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \textnormal{Emb}(\mathbf{\image}[i])$ for $i \in [1,q]$. To start, consider a distribution $\mathcal{D}$ over $\altmathcal{W}^q$ and a predicate $f\smidge\colon\smidge \altmathcal{W}^q\rightarrow\{\textsf{false},\mathsf{true}\}$. We want to understand the ability of an adversary to infer $f(\mathbf{\image})$ when given $\textnormal{Emb}(\mathbf{\image})$ for $\mathbf{\image}$ drawn from $\altmathcal{W}^q$ according to $\mathcal{D}$. As an example, let $q=1$ and have~$f$ indicate whether a client image has the same perceptual hash value as another image chosen by the adversary. We'd like to have a guarantee that revealing $\textnormal{Emb}(\mathbf{\image})$ doesn't allow inferring that the images are similar with high confidence. We refer to a tuple $\pi = (\mathcal{D},\altmathcal{W}^q,f)$ as a privacy setting.
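To make the $q=1$ matching example concrete: for a bit-sampling embedding with flip bias $\gamma$, Bayes' rule shows how sharply an adversary's confidence can rise. The sketch below is purely illustrative and rests on a strong simplifying assumption of ours (non-matching hashes look uniformly random on the sampled positions); the numbers are hypothetical.

```python
def match_posterior(mismatches, d, gamma, prior):
    """Posterior probability that the client's hash equals the adversary's,
    after observing `mismatches` disagreements among d sampled bits, each
    flipped independently with probability gamma. Illustrative assumption:
    a non-matching hash is uniform on the sampled positions."""
    like_match = (gamma ** mismatches) * ((1 - gamma) ** (d - mismatches))
    like_other = 0.5 ** d  # each bit agrees with probability 1/2
    num = prior * like_match
    return num / (num + (1 - prior) * like_other)

# With gamma = 0.25, d = 16 and zero observed mismatches, a 0.1% prior
# is lifted to roughly 40% -- a large confidence gain:
post = match_posterior(0, 16, 0.25, prior=1e-3)
assert 0.35 < post < 0.45
```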
We provide three measures of adversarial success: accuracy, precision, and area under the receiver-operator curve (AUC), thereby adapting traditional measures of efficacy for prediction tasks to our adversarial setting. \textbf{Accuracy.} Let $\advA_{\textrm{acc}}$ be a randomized algorithm, called an accuracy adversary. We define a probabilistic experiment that tasks $\advA_{\textrm{acc}}$ with inferring $f(\mathbf{\image})$ given $\textnormal{Emb}(\mathbf{\image})$ for $\mathbf{\image}$ drawn according to $\mathcal{D}$. This probability space is over the coins used to sample $\mathbf{\image}$, to run $\textnormal{Emb}$ a total of~$q$ times, and to run $\advA_{\textrm{acc}}$. We let ``$\advA_{\textrm{acc}}(\textnormal{Emb}(\mathbf{\image})) = f(\mathbf{\image})$'' be the event that $\advA_{\textrm{acc}}$ outputs the correct value of the predicate. We write this as a pseudocode game $\textnormal{PRED}_{\textnormal{Emb},\pi}$ shown in \figref{fig:prec-security-game}, where the returned value captures the event that $\advA_{\textrm{acc}}$ succeeds. For skewed distributions, the trivial adversary that ignores its input and simply predicts the most likely predicate value may achieve high accuracy. We therefore define the advantage of $\advA_{\textrm{acc}}$ as the improvement over that trivial approach: \begin{align*} \epsilon_{\mathrm{acc}} &= \frac{\Prob{\advA_{\textrm{acc}}(\textnormal{Emb}(\mathbf{\image})) = f(\mathbf{\image})} - \epsilon_{base}}{1- \epsilon_{base}}\;, \end{align*} where $\epsilon_{base} = \max(\Prob{f(\mathbf{\image})=\mathsf{true}}, \Prob{f(\mathbf{\image})=\textsf{false}})$. \textbf{Precision.} For adversaries that are mainly interested in inferring positive instances, $f(\mathbf{\image})=\mathsf{true}$, accuracy may appear misleading in cases with high skew, i.e., when $f(\mathbf{\image}) = \textsf{false}$ happens almost always~\cite{manning1999foundations}. In our running example, we expect that in practice most images handled by clients will be distinct from the adversary-chosen one.
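The normalization above measures how much of the headroom beyond the trivial majority-class predictor the adversary captures; a small worked illustration with hypothetical numbers:

```python
def acc_advantage(acc, base):
    """Accuracy advantage: improvement of raw accuracy `acc` over the
    trivial majority-class baseline `base`, rescaled to [0, 1]."""
    return (acc - base) / (1 - base)

# When 99% of instances are negative, a predictor at 99% raw accuracy is
# no better than always answering "false":
assert acc_advantage(0.99, 0.99) == 0.0
# Raw accuracy 99.5% over the same base rate captures half the headroom:
assert abs(acc_advantage(0.995, 0.99) - 0.5) < 1e-9
```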
\begin{figure}[t] \centering \hfpagess{.14}{.14}{ \underline{$\textnormal{PRED}_{\textnormal{Emb},\pi}$}\\[1pt] $\mathbf{\image} \getdist{\mathcal{D}} \altmathcal{W}^q$\\ $\mathbf{\embimage} {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \textnormal{Emb}(\mathbf{\image})$\\ $b {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \altmathcal{A}(\mathbf{\embimage})$\\ Return $(b = f(\mathbf{\image}))$ }{ \underline{$\textnormal{AUC}_{\textnormal{Emb},\pi}$}\\[1pt] for $i \in \{\mathsf{true},\textsf{false}\}$\\ \hspace*{1em} $\mathbf{\image}_i \getdist{\mathcal{D}_i} \altmathcal{W}^q$\\ \hspace*{1em} $\mathbf{\embimage}_i {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \textnormal{Emb}(\mathbf{\image}_i)$\\ \hspace*{1em} $r_i {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \advA_{\textrm{auc}}(\mathbf{\embimage}_i)$\\ Return $r_{\mathsf{true}} > r_{\textsf{false}}$ } \vspace{-0.3cm} \caption{Pseudocode games for measuring embedding privacy. Here $f$ represents a predicate.} \label{fig:prec-security-game} \vspace{-0.3cm} \end{figure} We therefore also provide two other security measures. First, we measure the precision of a non-trivial adversary in inferring $f(\mathbf{\image})$. By non-trivial, we mean that the adversary must predict $f(\mathbf{\image})=\mathsf{true}$ with nonzero probability. We use the same probability space as in the previous definition. To emphasize that the best adversary for achieving high precision may differ from the best one for maximizing accuracy improvement, we use $\advA_{\textrm{pre}}$ to denote the adversary when considering precision. We want to measure the probability that $\advA_{\textrm{pre}}$ succeeds, conditioned on $\advA_{\textrm{pre}}$ outputting $\mathsf{true}$. We denote this by $\CondProb{f(\mathbf{\image}) = \mathsf{true}}{\advA_{\textrm{pre}}(\textnormal{Emb}(\mathbf{\image}))=\mathsf{true}}$.
To prevent $\advA_{\textrm{pre}}$ from using the trivial strategy of predicting all events as negative, we define an associated notion of recall $r$ as \begin{newmath} r = \CondProb{\advA_{\textrm{pre}}(\textnormal{Emb}(\mathbf{\image}))=\mathsf{true}} {f(\mathbf{\image}) = \mathsf{true}}\;. \end{newmath} We will restrict attention to adversaries $\advA_{\textrm{pre}}$ for which $r$ exceeds some threshold, e.g., $r > 0\%$. We let \begin{newmath} \epsilonPrecR{r>\rho} = \CondProb{f(\mathbf{\image}) = \mathsf{true}}{\advA_{\textrm{pre}}(\textnormal{Emb}(\mathbf{\image}))=\mathsf{true}} \end{newmath} denote the precision advantage for some adversary $\advA_{\textrm{pre}}$ that achieves $r > \rho$, with the exception of $\rho=100\%$, where the restriction is set as $r=100\%$. \textbf{AUC.} Precision captures the adversary's confidence in predicting the positive class, i.e., the likelihood of $f(\mathbf{\image})$ being $\mathsf{true}$ when the adversary predicts it to be true. However, it does not capture the adversary's confidence regarding predicting the negative class. We therefore finally formalize a notion of AUC, where recall that AUC is the area under the receiver-operator curve, a popular measure of classifier efficacy. At a high level, AUC-ROC indicates the classifier's capability in differentiating positive classes from negative ones. For a setting $\pi = (\mathcal{D},\altmathcal{W}^q,f)$, let $\mathcal{D}_i$ be the distribution $\mathcal{D}$ over $\altmathcal{W}^q$ conditioned on $f(\mathbf{\image}) = i$ for $i \in \{\mathsf{true},\textsf{false}\}$.
Then for an adversary $\advA_{\textrm{auc}}$ that outputs a real value in $[0,1]$ we measure the probability that $\advA_{\textrm{auc}}(\textnormal{Emb}(\mathbf{\image}_{\mathsf{true}})) > \advA_{\textrm{auc}}(\textnormal{Emb}(\mathbf{\image}_{\textsf{false}}))$ where $\mathbf{\image}_i$ is drawn from $\altmathcal{W}^q$ according to $\mathcal{D}_i$. The probability is over the independent choices of $\mathbf{\image}_{\mathsf{true}}$ and $\mathbf{\image}_{\textsf{false}}$, as well as the coins used by the $2q$ executions of $\textnormal{Emb}$ and two executions of $\advA_{\textrm{auc}}$. We provide a pseudocode game $\textnormal{AUC}_{\textnormal{Emb},\pi}$ describing this probability space in \figref{fig:prec-security-game}. Then we define the advantage of an AUC adversary $\advA_{\textrm{auc}}$ by \begin{newmath} \epsilon_{\mathrm{auc}} = 2\cdot\Prob{\advA_{\textrm{auc}}(\textnormal{Emb}(\mathbf{\image}_{\mathsf{true}})) > \advA_{\textrm{auc}}(\textnormal{Emb}(\mathbf{\image}_{\textsf{false}}))} - 1 \;. \end{newmath} This formulation uses a well-known fact~\cite{cortes2004auc,agarwal2005generalization} about AUC: it is equal to the probability that a scoring algorithm (in our case, the adversary) ranks positive-class instances higher than negative-class instances. For simplicity, we ignore ties ($\advA_{\textrm{auc}}$ outputting the same value in each case). Without loss of generality, we can assume that the AUC adversary $\advA_{\textrm{auc}}$ wins the game with probability greater than or equal to 0.5, and so the normalization maps $\epsilon_{\mathrm{auc}}$ to the range $[0,1]$. (This corresponds to the classic Gini coefficient.) \textbf{Possible predicates.} We focus on the matching predicate in our analyses. An adversary chooses an image $w_{adv}$, and wishes to determine whether the client request $\textnormal{Emb}(w_{c})$ corresponds to an image that is very similar to $w_{adv}$, i.e., $\altmathcal{F}(w_c)=\altmathcal{F}(w_{adv})$.
If the adversary can confidently assert that there is a match, they can recover the content of the submitted request. Such an attack is trivial in hashing-based similarity testing services, where clients are required to submit similarity hashes as their requests. That said, our framework can be used to analyze other predicates. For example, an adversary may want to infer if $q$ different client requests all correspond to similar content. We leave such analyses to future work. \textbf{Discussion.} We make a few comments about our security goals. We have omitted placing computational limits on adversaries, which would be useful in cases where embedding schemes rely on cryptographic tools --- our mechanisms do not. A computational treatment is a straightforward extension to our framework. The security games underlying our measures are conservative, and in particular we assume that the adversary has perfect knowledge of the distribution $\mathcal{D}$ as well as $\mathcal{D}$'s support, which is unlikely in practice. While we do not explicitly model side information that an adversary might have about a client's image, it is possible to include it indirectly in this framework, for example, by changing the distribution or modifying the privacy predicate. \paragraph{Bayes optimal adversaries.} \label{sec:optimal} To allow simulations that evaluate privacy, we focus on adversaries that maximize advantage. Recall that we assume that the adversary knows the distribution $\mathcal{D}$ from which the clients are sampling images for their requests.
Upon receiving client-submitted requests $\mathbf{\embimage}=\textnormal{Emb}(\mathbf{\image})$, the Bayes optimal adversary computes the \textbf{exact} likelihood of the predicate being $\mathsf{true}$, namely $\CondProb{f(\mathbf{\image})=\mathsf{true}}{\textnormal{Emb}(\mathbf{\image})=\mathbf{\embimage}}$, where the probability is over the choice of $\mathbf{\image}$ sampled from $\altmathcal{W}^q$ and the coins used by the executions of $\textnormal{Emb}$. The Bayes optimal adversary for the precision metric, $\advA_{\textrm{pre}}$, chooses a threshold $T_{adv}$, such that $\advA_{\textrm{pre}}(\mathbf{\embimage})=\mathsf{true}$ if and only if $\CondProb{f(\mathbf{\image})=\mathsf{true}}{\textnormal{Emb}(\mathbf{\image})=\mathbf{\embimage}} > T_{adv}$. The adversary may choose $T_{adv}$ to maximize $\epsilonPrecR{r>\rho}$. A similar strategy can be used by $\advA_{\textrm{acc}}$. However, when $f(\mathbf{\image})=\mathsf{true}$ is especially rare, the adversary may achieve larger $\epsilon_{\mathrm{acc}}$ by predicting all instances as the majority class, $f(\mathbf{\image})=\textsf{false}$; in that case the resulting $\epsilon_{\mathrm{acc}}$ is zero. Note that in our simulations we consider all possible threshold values for the sampled dataset, and report on the one that provides the best success rate. A real attacker would have to pick a threshold a priori, meaning our analyses are conservative. The Bayes optimal AUC adversary $\advA_{\textrm{auc}}$ doesn't have to choose a threshold~$T_{adv}$. The adversary is given two scenarios to rank, $\mathbf{\image}_{\mathsf{true}}$ and $\mathbf{\image}_{\textsf{false}}$, where $f(\mathbf{\image}_{\mathsf{true}})=\mathsf{true}$ and $f(\mathbf{\image}_{\textsf{false}})=\textsf{false}$.
The adversary wins the game when they correctly rank the $\mathsf{true}$ scenario over the $\textsf{false}$ one, i.e., when $\advA_{\textrm{auc}}(\mathbf{\embimage}_{\mathsf{true}}) > \advA_{\textrm{auc}}(\mathbf{\embimage}_{\textsf{false}})$. As $\mathbf{\embimage}=\textnormal{Emb}(\mathbf{\image})$ is the only information that the adversary gains from our SBB protocol, the optimal strategy is to use $\Prob{f(\mathbf{\image})=\mathsf{true}\,|\,\textnormal{Emb}(\mathbf{\image})=\mathbf{\embimage}}$ as $\advA_{\textrm{auc}}(\mathbf{\embimage})$. \section{Balancing Security, Correctness, Efficiency} \label{sec:expr} In this section, we demonstrate how to balance security, correctness and compression efficiency of SBB when using the LSH-based coarse embedding scheme $\mathbb{E}_{LSH}$. We do so via simulations using real-world image sharing data collected from social media sites. Using our framework, we evaluate the security of $\mathbb{E}_{LSH}$ with varying parameter settings. We then fix the security requirement and explore the trade-off between correctness and compression efficiency. \subsection{Experimental Setup} \label{sec:data} \paragraph{Data collection.} Recall our deployment scenario in \secref{sec:deployment}: an ideal dataset should represent the image sharing behaviors among users on an end-to-end encrypted messaging platform. However, data on one-to-one shares among users of any private messaging platform is, by definition, private. Hence, we sought a public dataset that may act as a stand-in for our experiments. As the dataset is publicly available, our experiment did not require review from our IRB office. On Twitter, users may retweet the content that they want to share with their audience. We consider retweets on public Twitter as a proxy for user sharing in a private social network. Furthermore, the problem of misinformation, which motivated our study, is prevalent on Twitter~\cite{hindman2018disinformation}.
We were able to use a dataset from previous work~\cite{hua2020characterizing,hua2020towards} that contains Twitter interactions with a group of US political candidate accounts between September 13, 2018 and January~6, 2019. The dataset includes 1,190,355 tweets with image URLs; however, by the time we downloaded the images, only 485\,K were successfully retrieved. We encode the retrieved images with PDQHash. There are 256,049 unique PDQHash values in total. We collected the total number of retweets of each tweet in November 2019; the data was available for $13\%$ of the tweets. The total number of postings and retweets of the images we retrieved adds up to 1.2 million. We simulate a workload for similarity testing as follows. Any tweet and retweet with a valid image is considered as a client request to our system. Hence, this experimental set has 1.2 million client requests with 256\,K unique PDQHash values. \begin{figure} \begin{adjustbox}{width=0.48\textwidth} \begin{tikzpicture} \begin{axis}[ xbar stacked, bar width=9pt, y=13pt, ymin=-0.2, ymax=3.2, xmin=0, xmax=1, enlarge y limits=0.15, legend style={at={(1.135,1)}, anchor=north,legend columns=1, name=legend,font=\footnotesize}, xlabel={Percentage of Similarity Embeddings}, xlabel style={font=\footnotesize}, ylabel style={font=\footnotesize}, yticklabel style={font=\footnotesize}, xticklabel style={font=\footnotesize}, ytick={0, 1, 2, 3}, yticklabels={{$T=0$}, {$T=32$}, {$T=64$}, {$T=70$}}, xticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%}, ] \addplot[fill=clr2_1!25, draw=white, xbar] plot coordinates {(0.0996,0) (0.0835,1) (0.0805,2) (0.08,3)}; \addlegendentry{$1$} \addplot[fill=clr2_1!50, draw=white, xbar] plot coordinates {(0.8401,0) (0.7480,1) (0.7008,2) (0.6901,3)}; \addlegendentry{$(1, 10]$} \addplot[fill=clr2_1!75,draw=white, xbar] plot coordinates {(0.0577,0) (0.14,1) (0.1707,2) (0.1777,3)}; \addlegendentry{$(10, 100]$} \addplot[fill=clr2_1!100, draw=white, xbar] plot
coordinates {(0.0026,0) (0.0285,1) (0.048,2) (0.0523,3)}; \addlegendentry{$> 100$} \end{axis} \end{tikzpicture} \end{adjustbox} \vspace{-0.8cm} \caption{\label{fig:neighborhood} $T$-Neighborhood size distribution for different $T$.} \vspace{-0.4cm} \end{figure} \textbf{Dataset statistics.} Users on social media share similar images frequently. The $T$-neighborhood size of an image~$w$ is the number of images that are $T$-similar to it. Two images are $T$-similar if and only if their similarity embeddings have a Hamming distance smaller than $T$. We choose the values of $T$ according to the recommendations from the white paper on PDQHash~\cite{pdqhash}, where 32 and 70 were specified as the lower and upper bounds of recommended similarity thresholds. We also include $T=0$ and $T=64$ for comparison. Figure~\ref{fig:neighborhood} shows the distribution of $T$-neighborhood sizes~(shades of color) of the images in client requests, with different~$T$~(in different rows). The lightest shades~(left) are requests with neighborhood size of one, i.e., the neighborhood only contains the single image. The following darker shades are the images with neighborhood size in the ranges $(1, 10]$, $(10, 100]$, and $(100,\infty)$. The bottom row shows the neighborhood size distribution with $T=0$. In our dataset, for most of the client requests~($84\%$), the image's similarity embedding is shared by more than one but at most $10$ images. The distributions of neighborhood size are mostly similar to each other, especially for $T=64$ and $T=70$. Naturally, with a larger threshold, there are more requests containing images with a larger neighborhood size. For example, only $2.85\%$ of all request images have a neighborhood size larger than $100$ with $T=32$~(third row from top, darkest shade), while $5.23\%$ of the requests satisfy the same condition when $T=70$~(first row from top, darkest shade).
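The $T$-neighborhood statistic can be computed naively in $O(n^2)$ time over integer-encoded hashes. A sketch of ours (the paper does not specify its pipeline), reading the $T=0$ row as counting exact hash collisions:

```python
from collections import Counter

def neighborhood_sizes(hashes, T):
    """For each hash, count the hashes in the list (itself included) at
    Hamming distance smaller than T. T = 0 is specialized to exact
    collisions, matching how the T = 0 row of the figure is read."""
    if T == 0:
        counts = Counter(hashes)
        return [counts[v] for v in hashes]
    return [sum(bin(v ^ u).count("1") < T for u in hashes) for v in hashes]

hashes = [0b0000, 0b0000, 0b0001, 0b1111]
assert neighborhood_sizes(hashes, 0) == [2, 2, 1, 1]
assert neighborhood_sizes(hashes, 2) == [3, 3, 3, 1]
```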
\paragraph{Implementation.} We compare the privacy of SBB when using $\mathbb{E}_{LSH}$ with different embedding lengths~$d$ to the baseline method $\mathbb{E}_{cPDQ}$. We focus on the security guarantees against the matching attack and explain our implementation details. \begin{figure*} \centering \begin{adjustbox}{width=\textwidth} \begin{tikzpicture} \begin{axis}[ name=plot1, legend columns = 1, ticklabel style = {font=\footnotesize}, ylabel={Adversarial advantage}, ylabel style={font=\footnotesize, at={(axis description cs:0.08,.5)},anchor=south}, xlabel style={font=\footnotesize}, xlabel={Coarse embedding length $d$}, xtick = {8,9,10,11,12,13,14,15,16,17.5}, xticklabels = {$8$,$9$,$10$,$11$,$12$,$13$,$14$,$15$,$16$}, xticklabel style={align=center}, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%}, height=4cm, width=0.45\textwidth, scaled y ticks=false, legend style={/tikz/every even column/.append style={column sep=0.5cm}, at={(1.01,0.48)},anchor=south west, font=\footnotesize}, ] \addplot[clr3_1, only marks,error bars/.cd, y dir=both, y explicit] coordinates { (8.0000, 0.9981) += (0, 0.0002) -= (0, 0.0002) (9.0000, 0.9991) += (0, 0.0001) -= (0, 0.0001) (10.0000, 0.9995) += (0, 0.0001) -= (0, 0.0001) (11.0000, 0.9998) += (0, 0.0001) -= (0, 0.0001) (12.0000, 0.9999) += (0, 0.0000) -= (0, 0.0000) (13.0000, 0.9999) += (0, 0.0000) -= (0, 0.0000) (14.0000, 1.0000) += (0, 0.0000) -= (0, 0.0000) (15.0000, 1.0000) += (0, 0.0000) -= (0, 0.0000) (16.0000, 1.0000) += (0, 0.0000) -= (0, 0.0000) }; \addplot[clr3_2, only marks,error bars/.cd, y dir=both, y explicit,] coordinates { (8.0000, 0.3426) += (0, 0.0192) -= (0, 0.0192) (9.0000, 0.5306) += (0, 0.0232) -= (0, 0.0232) (10.0000, 0.6651) += (0, 0.0238) -= (0, 0.0238) (11.0000, 0.8103) += (0, 0.0333) -= (0, 0.0333) (12.0000, 0.8864) += (0, 0.0192) -= (0, 0.0192) (13.0000, 0.9443) += (0, 0.0241) -= (0, 0.0241) (14.0000, 0.9706) += (0, 0.0133) -= (0, 0.0133) (15.0000, 0.9907) += (0, 0.0050) -= (0, 
0.0050) (16.0000, 0.9930) += (0, 0.0046) -= (0, 0.0046) }; \addplot[clr3_3, only marks,error bars/.cd, y dir=both, y explicit,] coordinates { (8.0000, 0.0000) += (0, 0.0000) -= (0, 0.0000) (9.0000, 0.1186) += (0, 0.0715) -= (0, 0.0715) (10.0000, 0.4931) += (0, 0.0544) -= (0, 0.0544) (11.0000, 0.7620) += (0, 0.0544) -= (0, 0.0544) (12.0000, 0.8709) += (0, 0.0247) -= (0, 0.0247) (13.0000, 0.9397) += (0, 0.0291) -= (0, 0.0291) (14.0000, 0.9694) += (0, 0.0145) -= (0, 0.0145) (15.0000, 0.9905) += (0, 0.0051) -= (0, 0.0051) (16.0000, 0.9929) += (0, 0.0047) -= (0, 0.0047) }; \addplot[clr3_1, only marks, mark=diamond*,mark options={scale=2}] coordinates {(17.5, 0.964756)}; \addplot[clr3_2, only marks, mark=diamond*,mark options={scale=2}] coordinates {(17.5, 0.965956)}; \addplot[clr3_3, only marks,mark=diamond*,mark options={scale=2}] coordinates {(17.5, 0.999980)}; \addplot[red, line width=1pt] coordinates { (17, 0.85) (18, 0.85)}; \addplot[red, line width=1pt] coordinates { (17, 0.85) (17, 1.15)}; \addplot[red, line width=1pt] coordinates { (18, 0.85) (18, 1.15)}; \addplot[red, line width=1pt] coordinates { (17, 1.15) (18, 1.15)}; \legend{$\epsilon_{\mathrm{auc}}$, $\epsilonPrecR{r=100\%}$, $\epsilon_{\mathrm{acc}}$} \node[ anchor=north west, align=left, font=\footnotesize ] at (axis cs:16.5,.85) {$\mathbb{E}_{cPDQ}$\\$d=16$}; \end{axis} \begin{axis}[ name=plot2, at=(plot1.right of south east), anchor=left of south west, legend columns = 1, xshift=0.3cm, ticklabel style = {font=\footnotesize}, legend style={nodes={scale=.95, font=\footnotesize}}, xmin=-0.05, xmax=0.4, ymin=-0.1, ymax=0.75, xticklabel style={align=center}, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%}, ylabel style={font=\footnotesize, at={(axis description cs:0.1,.5)},anchor=south}, xlabel style={font=\footnotesize}, ylabel={$\epsilonPrecR{r > \rho}$}, xlabel={Flipping bias $\gamma$},
height=4cm, width=0.45\textwidth, scaled y ticks=false, legend style={/tikz/every even column/.append style={column sep=0.25cm}, at={(1.18,1)},anchor=north, font=\footnotesize} ] \addplot[clr5_1,mark=*,line width=1pt,error bars/.cd, y dir=both, y explicit,] coordinates { (0.0000, 0.5306) += (0, 0.0232) -= (0, 0.0232) (0.0500, 0.4049) += (0, 0.0271) -= (0, 0.0271) (0.1000, 0.2846) += (0, 0.0141) -= (0, 0.0141) (0.1500, 0.1939) += (0, 0.0060) -= (0, 0.0060) (0.2000, 0.1227) += (0, 0.0052) -= (0, 0.0052) (0.2500, 0.0718) += (0, 0.0052) -= (0, 0.0052) (0.3000, 0.0387) += (0, 0.0031) -= (0, 0.0031) (0.3500, 0.0220) += (0, 0.0014) -= (0, 0.0014) }; \addplot[clr5_2,mark=*,line width=1pt,error bars/.cd, y dir=both, y explicit,] coordinates { (0.0000, 0.5306) += (0, 0.0232) -= (0, 0.0232) (0.0500, 0.4049) += (0, 0.0271) -= (0, 0.0271) (0.1000, 0.2846) += (0, 0.0141) -= (0, 0.0141) (0.1500, 0.1270) += (0, 0.0043) -= (0, 0.0043) (0.2000, 0.0531) += (0, 0.0016) -= (0, 0.0016) (0.2500, 0.0320) += (0, 0.0017) -= (0, 0.0017) (0.3000, 0.0139) += (0, 0.0007) -= (0, 0.0007) (0.3500, 0.0083) += (0, 0.0002) -= (0, 0.0002) }; \addplot[clr5_3,mark=*,line width=1pt,error bars/.cd, y dir=both, y explicit,] coordinates { (0.0000, 0.5306) += (0, 0.0232) -= (0, 0.0232) (0.0500, 0.4049) += (0, 0.0271) -= (0, 0.0271) (0.1000, 0.1218) += (0, 0.0054) -= (0, 0.0054) (0.1500, 0.0630) += (0, 0.0018) -= (0, 0.0018) (0.2000, 0.0282) += (0, 0.0018) -= (0, 0.0018) (0.2500, 0.0152) += (0, 0.0002) -= (0, 0.0002) (0.3000, 0.0086) += (0, 0.0004) -= (0, 0.0004) (0.3500, 0.0053) += (0, 0.0001) -= (0, 0.0001) }; \addplot[clr5_4,mark=*,line width=1pt,error bars/.cd, y dir=both, y explicit,] coordinates { (0.0000, 0.5306) += (0, 0.0232) -= (0, 0.0232) (0.0500, 0.1438) += (0, 0.0091) -= (0, 0.0091) (0.1000, 0.0736) += (0, 0.0028) -= (0, 0.0028) (0.1500, 0.0251) += (0, 0.0010) -= (0, 0.0010) (0.2000, 0.0137) += (0, 0.0007) -= (0, 0.0007) (0.2500, 0.0077) += (0, 0.0002) -= (0, 0.0002) (0.3000, 0.0053) += (0, 
0.0002) -= (0, 0.0002) (0.3500, 0.0036) += (0, 0.0001) -= (0, 0.0001) }; \addplot[clr5_5,mark=*,line width=1pt,error bars/.cd, y dir=both, y explicit,] coordinates { (0.0000, 0.5306) += (0, 0.0232) -= (0, 0.0232) (0.0500, 0.0067) += (0, 0.0010) -= (0, 0.0010) (0.1000, 0.0031) += (0, 0.0003) -= (0, 0.0003) (0.1500, 0.0023) += (0, 0.0002) -= (0, 0.0002) (0.2000, 0.0021) += (0, 0.0001) -= (0, 0.0001) (0.2500, 0.0021) += (0, 0.0000) -= (0, 0.0000) (0.3000, 0.0020) += (0, 0.0000) -= (0, 0.0000) (0.3500, 0.0018) += (0, 0.0005) -= (0, 0.0005) }; \addplot [black, no markers, line width=1pt,dashed] coordinates {(-0.5,0.5) (0.6,0.5)}; \node[ anchor=north west, align=left, font=\footnotesize ] at (axis cs:0.27,0.67) {$\epsilonPrecR{r > \rho}=50\%$}; \legend{{$r > 0\%$}, {$r > 25\%$}, {$r > 50\%$}, {$r > 75\%$}, {$r = 100\%$}} \end{axis} \end{tikzpicture} \end{adjustbox} \vspace{-0.8cm} \caption{\label{fig:privacy} \textbf{Left}: Simulation results of the three security metrics, evaluated on (1) $\mathbb{E}_{LSH}$ with embedding length $d$ from $8$ to $16$ (round markers), $\gamma=0$ and (2) $\mathbb{E}_{cPDQ}$ (diamond markers in red box). \textbf{Right}: The conditioned precision metric $\epsilonPrecR{r>\rho}$ of the matching attack, at different recall thresholds, with $d=9$ and varying $\gamma$. Error bars in both plots represent the $95\%$ confidence interval.} \end{figure*} We formally define the matching attack as $\pi_{match}^{w_{adv}} = (\mathcal{D},\altmathcal{W},f_{match}^{w_{adv}})$, where $\mathcal{D}$ is the distribution over $\altmathcal{W}$ from which the client requests are sampled and $w_{adv}$ is an image chosen by the adversary.
For any client-submitted request with an image $w$, the adversary wishes to learn the value of $f_{match}^{w_{adv}}(w)$. We have $f_{match}^{w_{adv}}(w)=\mathsf{true}$ if and only if the corresponding PDQHashes of the two images are the same, i.e., $\altmathcal{F}(w)=\altmathcal{F}(w_{adv})$. When trying to match the client image to $w_{adv}$, an adversary who receives the client request through a bucketization protocol is able to filter out images that are not in the bucket. When the client image is in fact similar to $w_{adv}$, it should most likely be included in the same bucket by definition of correctness. In this case, a $w_{adv}$ that is shared more frequently than any other image in the same bucket may boost the adversary's confidence in asserting that the client image is a similar match. Hence, having a more popular image as $w_{adv}$ increases the adversarial advantage. In the following experiments, we use the most popular image as $w_{adv}$, which appeared in $0.2\%$ of all requests. To evaluate the privacy guarantees provided by $\mathbb{E}_{LSH}$ and $\mathbb{E}_{cPDQ}$, we iterate over all requests in our dataset and simulate the client, server, and adversary behavior to compute the security metrics $\epsilon_{\mathrm{acc}}$, $\epsilon_{\mathrm{auc}}$, and $\epsilonPrecR{cond}$. \textbf{Coarse PDQHash embedding scheme~($\mathbb{E}_{cPDQ}$).} Recall the algorithm of $\mathbb{E}_{cPDQ}$ described in \secref{sec:sbb}. The embedding algorithm $\textnormal{Emb}_{cPDQ}$ takes an image~$w$ as input and follows the PDQHash algorithm, but with modified parameter settings, to generate a $16$-bit coarse PDQHash. This allows coarse-grained similarity comparison. When receiving a request $p=\textnormal{Emb}(w)$, the Bayes optimal adversary follows the strategy described in \secref{sec:optimal}. To be specific, the adversary computes the likelihood of the predicate being true, $\Prob{w=w_{adv}\,|\,\textnormal{Emb}_{cPDQ}(w)=p}$, to make a binary prediction.
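As a concrete illustration of this decision rule, the following Python sketch computes the posterior for a deterministic coarse embedding over a toy distribution and thresholds it. The function name, the four-image distribution, the one-bit embedding, and the $0.5$ threshold are all our illustrative assumptions, not the actual simulation code.

```python
def bayes_optimal_match(emb, dist, w_adv, p, threshold=0.5):
    """Bayes-optimal prediction for a deterministic coarse embedding:
    compute Pr[w = w_adv | emb(w) = p] and compare it to a threshold."""
    # Total probability mass of images whose embedding collides with p.
    bucket_mass = sum(prob for w, prob in dist.items() if emb(w) == p)
    if bucket_mass == 0.0:
        return False, 0.0
    posterior = dist.get(w_adv, 0.0) / bucket_mass if emb(w_adv) == p else 0.0
    return posterior >= threshold, posterior

# Toy distribution over four "images" (integers) and a 1-bit embedding.
dist = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
emb = lambda w: w % 2                     # stand-in for Emb_cPDQ
decision, post = bayes_optimal_match(emb, dist, w_adv=0, p=0)
# post = 0.4 / (0.4 + 0.2) = 2/3, so the adversary predicts a match
```

The popularity effect discussed above shows up directly here: the larger the mass `dist` places on `w_adv` relative to the rest of its bucket, the higher the posterior.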
\textbf{LSH-based embedding scheme~($\mathbb{E}_{LSH}$).} Recall that in \figref{fig:sbb-protocol}, the LSH-based embedding scheme $\mathbb{E}_{LSH}$ consists of two algorithms, $\textnormal{Emb}_{LSH}$ and $\textnormal{Sim}_{LSH}$. $\textnormal{Emb}_{LSH}$ takes an image~$w$ as input and outputs the selected indexing function and the resulting coarse embedding. To be specific, $\textnormal{Emb}_{LSH}(w)$ randomly selects a length-$d$ indexing function $I$ from $\altmathcal{I}$, and computes $\tilde{p}=\textnormal{Flip}_{\gamma}(I(\altmathcal{F}(w)))$. The function has two parameters: the length $d$ of the indexing function and the bias~$\gamma$ with which each selected bit is flipped. Note that $d$ is also the length of the coarse embedding. $\textnormal{Sim}_{LSH}$ takes the output of $\textnormal{Emb}_{LSH}$ and a dataset $\altmathcal{B}$ as input, and outputs a candidate bucket. The function has one parameter $k$, a coarse threshold used to choose items for the candidate bucket. We analyse the security guarantee of $\mathbb{E}_{LSH}$ against the Bayes optimal adversary under the setting of a matching attack $\pi_{match}^{w_{adv}}$. When receiving a request $p$, as in $p=\textnormal{Emb}(w)$, the Bayes optimal adversary wants to predict whether $f_{match}^{w_{adv}}(w)=\mathsf{true}$. The adversary bases their prediction on $\Prob{\altmathcal{F}(w)=\altmathcal{F}(w_{adv})|\textnormal{Emb}_{LSH}(w)=p}$, and computes this probability using all the information revealed to them: the indexing function~$I$, the resulting coarse embedding~$\tilde{p}$, and the flipping bias~$\gamma$. \begin{restatable}{theorem}{bayesOpt}\label{thm:bayes-opt} Let $\Delta_I(v, p) = \Delta(I(v), p)$ for a similarity hash $v \in \{0,1\}^\ell$. Consider a fixed image $w_{adv} \in \altmathcal{W}$ and a sampled image $w {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \mathcal{D}$.
Let $v_{adv} = \altmathcal{F}(w_{adv})$ and $v = \altmathcal{F}(w)$. Furthermore, consider the distribution $\dist_\simhash$ that $\altmathcal{F}$ induces on $\{0,1\}^\ell$, where $$\dist_\simhash(x) = \Pr[\altmathcal{F}(w) = x] = \sum\limits_{\substack{w' \in \altmathcal{W},\\ \altmathcal{F}(w') = x}} \mathcal{D}(w')$$ for all $x \in \{0,1\}^\ell$. We have that \begin{equation*} \begin{multlined} \Pr[v = v_{adv} \mid \textnormal{Emb}(w) = (I, p)] \\ = \dfrac{\gamma^{\Delta_I(v_{adv}, p)} \cdot (1 - \gamma)^{d - \Delta_I(v_{adv}, p)} \cdot \dist_\simhash(v_{adv})} {\sum\limits_{v' \in \{0,1\}^\ell} \gamma^{\Delta_I(v', p)} \cdot (1 - \gamma)^{d - \Delta_I(v', p)}\cdot \dist_\simhash(v')} \end{multlined} \end{equation*} where the probability is over the coins used by $\textnormal{Emb}$ and the choice of $w$ sampled from $\mathcal{D}$. \end{restatable} We prove this theorem in \apref{ap:bayes}. When analysing the security guarantee of $\mathbb{E}_{LSH}$, we vary the parameter settings of $d$ and $\gamma$ only, as~$k$ has no impact on the adversarial advantage. For any choice of~$d$ and~$\gamma$, we randomly choose an indexing function $I$, and then execute the protocol for all requests in the dataset. We fix $I$ in the simulation because this information is revealed to the adversary, and the adversary computes the likelihood conditioned on the indexing function. We first repeat this process at least $10$ times. For parameter settings that result in large fluctuations in the results, we repeat for another $10$ iterations. We were able to obtain a stable estimate for all experiments after at most $20$ iterations.
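A direct transcription of the theorem's formula into Python may help make the computation concrete. Here `prior` stands in for the induced hash distribution, `I` is the list of $d$ selected bit positions, and `p` is the received noisy coarse embedding; the toy 4-bit hashes and parameter values are our assumptions, not the actual simulation code.

```python
def flip_posterior(v_adv, p, I, dist, gamma):
    """Pr[F(w) = v_adv | Emb(w) = (I, p)] per the theorem: the flip-noise
    likelihood gamma^Delta * (1-gamma)^(d-Delta), weighted by the prior
    over similarity hashes and normalised over all candidate hashes."""
    d = len(I)

    def delta(v):
        # Hamming distance between the projection I(v) and the request p.
        return sum(v[i] != b for i, b in zip(I, p))

    def likelihood(v):
        dlt = delta(v)
        return (gamma ** dlt) * ((1 - gamma) ** (d - dlt))

    denom = sum(likelihood(v) * q for v, q in dist.items())
    return likelihood(v_adv) * dist.get(v_adv, 0.0) / denom

# Toy prior over 4-bit hashes, index positions I = [0, 2], request p = (0, 0).
prior = {(0, 0, 0, 0): 0.5, (1, 1, 1, 1): 0.5}
post = flip_posterior((0, 0, 0, 0), (0, 0), [0, 2], prior, gamma=0.1)
# post = 0.405 / 0.41, roughly 0.988: the un-flipped candidate dominates
```

With $\gamma=0$ the likelihood collapses to an indicator of exact agreement on the selected bits, which is why, as discussed below, all true positives receive the same score in the noiseless case.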
\subsection{Privacy of SBB} \label{sec:expr-attack} \paragraph{Using different security metrics.} To obtain a broad understanding of the information leakage from our protocol, we present all three security metrics $\epsilon_{\mathrm{auc}}$, $\epsilonPrecR{r>\rho}$, and $\epsilon_{\mathrm{acc}}$ in Figure~\ref{fig:privacy}~(left). To be specific, we show the results of using (1) $\mathbb{E}_{LSH}$ without added noise (i.e., $\gamma=0$), but with varying embedding length $d$, and (2) $\mathbb{E}_{cPDQ}$, a baseline method that embeds a client request into a $16$-bit coarse PDQHash. The Y-axis denotes the value of the security metrics, ranging from $0$ to $100\%$. The X-axis denotes the different methods used. From left to right, we list $\mathbb{E}_{LSH}$ with $d$ ranging from $8$ to $16$ (with round markers). In the red box, to the very right, the diamond markers represent the security metrics for $\mathbb{E}_{cPDQ}$. A naive baseline of using the plaintext similarity embedding~(as mentioned in \secref{sec:overview}) achieves $100\%$ for all security metrics~(not included in the figure). In our setting, accuracy measures the adversary's performance in predicting the correct class; precision measures the adversary's confidence in predicting the positive class; AUC measures the adversary's ability to differentiate negative classes from positive ones. Our dataset is highly skewed, with more negative predicates than positive ones: only $0.2\%$ of all requests trigger positive matches. This property may lead to a biased view when measured by certain metrics. \textbf{\emph{Accuracy.}} In datasets with a skewed distribution, a trivial algorithm that always outputs the majority class, i.e., $f_{match}^{w_{adv}}(w)=\textsf{false}$, may achieve higher accuracy than any meaningful algorithm that tries to differentiate positive cases from negative cases.
In fact, in some cases, when experimenting with $\mathbb{E}_{LSH}$ with $d=8$, we have $\epsilon_{\mathrm{acc}}=0$~(\figref{fig:privacy} on the left, green markers). This indicates that the adversarial advantage as measured by $\epsilon_{\mathrm{acc}}$ was no better than the performance of the trivial algorithm, and hence was counted as zero. Meanwhile, other metrics~($\epsilon_{\mathrm{auc}}$ and $\epsilonPrecR{r=100\%}$) show that the adversarial advantage is non-zero, e.g., $\epsilonPrecR{r=100\%}=37\%$ for $d=8$. Both $\mathbb{E}_{cPDQ}$ and $\mathbb{E}_{LSH}$ allow the adversary to achieve a perfect accuracy improvement in the matching attack when the coarse embeddings are of similar length. However, applying $\mathbb{E}_{LSH}$ with a smaller coarse embedding length, e.g., $d=10$, decreases the accuracy improvement to $45\%$. \textbf{\emph{Precision.}} When there's no noise added in $\mathbb{E}_{LSH}$~($\gamma=0$), $\epsilonPrecR{r>\rho}$ remains the same regardless of the recall threshold $\rho$. We will expand on this point later. With the same value of precision, a larger recall specifies more true positive predicates that the adversary may correctly classify with high confidence, and hence larger privacy damage. When using $\mathbb{E}_{LSH}$, decreasing the length of coarse embeddings~($d < 12$) decreases adversarial precision, improving security. However, with embeddings of similar length, $\mathbb{E}_{LSH}$ and $\mathbb{E}_{cPDQ}$ both behave poorly~($d=15,16$ compared with $\mathbb{E}_{cPDQ}$). \textbf{\emph{AUC.}} Regardless of the embedding scheme, $\epsilon_{\mathrm{auc}}$ is almost $100\%$. The reason is that there are disproportionately many images for which the predicate evaluates as negative and that can be easily differentiated from positive ones.
Hence, most images with different PDQHashes from $w_{adv}$ are assigned to different buckets than $w_{adv}$. Therefore, when measured by the adversary's confidence in differentiating negative cases from positive ones, the embedding schemes behave poorly, as most of the negative cases can be distinguished correctly. Note that the definition of AUC is in direct conflict with the utility of an SBB scheme, which allows efficiently ruling out images that are not similar to a given client input. None of the metrics ensures a lower bound for another. While testing on all metrics of adversarial advantage offers a better understanding, it is more practical to focus on a specific security metric that suits the application context. We focus on increasing the adversary's uncertainty in classifying a positive match. This aligns with previous work in the machine learning community that recommends the precision-recall metric over both AUC and accuracy when evaluating prediction algorithms on highly imbalanced datasets, especially when correctly predicting the positive class is valued more~\cite{saito2015precision,raghavan1989critical,davis2006relationship,manning1999foundations}. Hence, $\epsilonPrecR{r > \rho}$ fits our purpose the best, and we focus on it in the following discussion. \paragraph{Adding noise.} We now evaluate the security of $\mathbb{E}_{LSH}$ with added noise, i.e., with $\gamma > 0$. We fix $d=9$ and present results for $\epsilonPrecR{r>\rho}$ in \figref{fig:privacy}~(right) for different values of $\rho$. The results are similar for other values of $d$, though the exact value of $\epsilonPrecR{r>\rho}$ differs.
The Y-axis denotes $\epsilonPrecR{r>\rho}$ and the X-axis denotes increasing values of $\gamma$. Precision metrics conditioned on different recall thresholds have different meanings. For example, there are 2,533 requests in total that contain images similar to $w_{adv}$. The metric $\epsilonPrecR{r>0\%}$ measures the adversary's confidence in catching at least one true positive case, while $\epsilonPrecR{r=100\%}$ measures the adversary's confidence in classifying all 2,533 cases correctly. Naturally, we have $\epsilonPrecR{r>0\%} \geq \epsilonPrecR{r=100\%}$. On the other hand, $\epsilonPrecR{r=100\%}$ indicates a greater advantage for the adversary when it has the same value as $\epsilonPrecR{r>0\%}$. When no noise is added~($\gamma=0$), the adversary computes the same likelihood for all true positive cases when following the strategy described in \secref{sec:optimal}. Hence $\epsilonPrecR{r>\rho}$ remains the same regardless of the recall threshold $\rho$. With increasing $\gamma>0$, it gets harder for the adversary to have high precision while maintaining large recall~(green, purple and orange lines). Even a small value for~$\gamma$ improves the privacy of the client request. For example, when $\gamma = 0.05$ and $d=9$, the likelihood that at least one value in a client request gets changed is $1-(1-\gamma)^d=37\%$. That means that in the majority of queries, none of the client request bits will be flipped; nevertheless, the possibility that they could have been flipped affects adversarial precision.
For example, this results in a drop from $53\%$~(leftmost orange node in \figref{fig:privacy}, right chart) to $40\%$~(second green node from left) for $\epsilonPrecR{r > \rho}$ for $\rho \leq 50\%$. There's a more drastic drop when $\rho$ is larger. When the adversary has to identify more than $50\%$ of the true positives, they have to lower the threshold $T_{adv}$ for predicting a positive answer, which leads to a larger likelihood of being impacted by the uncertainty introduced by the flipping bias. Nevertheless, when $\gamma \geq 0.05$, $\epsilonPrecR{r>\rho}$ is smaller than $50\%$~(noted by the horizontal line) for all recall thresholds. This indicates that for any given query that $\advA_{\textrm{pre}}$ predicts as $\mathsf{true}$, the adversarial success rate is lower than that of randomly flipping a coin. In summary, these results show that using the coarse PDQHash scheme $\mathbb{E}_{cPDQ}$ fails to provide privacy for clients. The naive solution of revealing the plaintext similarity embeddings to the server also provides no privacy. Different security metrics demonstrate different aspects of adversarial advantage. We focus on $\epsilonPrecR{r>\rho}$ as its definition fits our privacy goal the best. For our purpose, we consider $\epsilonPrecR{r > 0\%} < 50\%$ as our security goal, i.e., the majority of an adversary's positive guesses~(given that there's at least one) are wrong. Given our empirical analysis, we suggest that a reasonable choice of parameters is embedding length $d=9$ and flipping bias $\gamma=0.05$, but caution that the privacy performance may vary in practice should the image distribution be very different (see \secref{sec:limitations}).
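The back-of-the-envelope figure used above, $1-(1-\gamma)^d \approx 37\%$ for $\gamma=0.05$ and $d=9$, can be checked directly; this is plain arithmetic, not part of the protocol.

```python
def prob_any_flip(gamma, d):
    """Probability that at least one of the d independently perturbed
    coarse-embedding bits is flipped, given flipping bias gamma."""
    return 1 - (1 - gamma) ** d

# Parameters suggested in the text: d = 9, gamma = 0.05.
p = prob_any_flip(0.05, 9)   # ~0.3698, i.e. roughly 37% of requests perturbed
```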
\subsection{Correctness and compression efficiency} \label{sec:correctness-expr} We show the tradeoff between correctness and bucket compression rate under the security parameters suggested above ($d=9$, $\gamma=0.05$). We vary the value of the coarse threshold $k$, which specifies the bucket being sent from the server side when performing $\textnormal{Sim}_{LSH}$. Formally, we refer to the notion of correctness as $\epsilon$ in the definition of $(T, \epsilon, \mathcal{D})$-correct, bucket compression rate as $\alpha$ in that of $(\altmathcal{B},\alpha,\mathcal{D})$-compressing~(see \secref{sec:sbb}). We use the dataset as~$\mathcal{D}$, which we refer to as $\mathcal{D}_{twitter}$ and all the possible values of PDQHash in the dataset as~$\altmathcal{B}$. We randomly select 2 million requests with replacement and perform the protocol, and take the average value of the correctness and compression rate from all iterations. \begin{figure} \centering \begin{adjustbox}{width=0.45\textwidth} \begin{tikzpicture} \begin{axis}[ name=plot1, legend columns = 1, ticklabel style = {font=\footnotesize}, xmin=-0.05, xmax=1.05, ymin=0.83, ymax=1.02, yticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%}, xticklabel={\pgfmathparse{\tick*100}\pgfmathprintnumber{\pgfmathresult}\%}, ylabel = {Correctness $\epsilon$}, xlabel = {Compression efficiency $\alpha$}, ylabel style={font=\footnotesize, at={(axis description cs:0.08,.5)},anchor=south}, xlabel style = {font=\footnotesize}, height=3.8cm, width=0.45\textwidth, scaled y ticks=false, legend style={/tikz/every even column/.append style={column sep=0.38cm}, at={(1.02,0.25)},anchor=south west, font=\footnotesize} ] \addplot[clr5_1, only marks,error bars/.cd, y dir=both, x dir=both, y explicit, x explicit] coordinates { (0.0210, 0.9291) += (0.0000, 0.0004) -= (0.0000, 0.0004) (0.0932, 0.9917) += (0.0000, 0.0001) -= (0.0000, 0.0001) (0.2573, 0.9994) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.5002, 1.0000) += (0.0000, 
0.0000) -= (0.0000, 0.0000) (0.7430, 1.0000) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.9070, 1.0000) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.9791, 1.0000) += (0.0000, 0.0000) -= (0.0000, 0.0000) }; \addplot[clr5_2, only marks,error bars/.cd, y dir=both, y explicit, x dir=both, x explicit] coordinates { (0.0210, 0.9052) += (0.0000, 0.0004) -= (0.0000, 0.0004) (0.0932, 0.9837) += (0.0000, 0.0002) -= (0.0000, 0.0002) (0.2573, 0.9977) += (0.0000, 0.0001) -= (0.0000, 0.0001) (0.5002, 0.9997) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.7430, 1.0000) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.9070, 1.0000) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.9791, 1.0000) += (0.0000, 0.0000) -= (0.0000, 0.0000) }; \addplot[clr5_3, only marks,error bars/.cd, y dir=both, y explicit,] coordinates { (0.0022, 0.5563) += (0.0000, 0.0006) -= (0.0000, 0.0006) (0.0210, 0.8622) += (0.0000, 0.0004) -= (0.0000, 0.0004) (0.0932, 0.9586) += (0.0000, 0.0002) -= (0.0000, 0.0002) (0.2573, 0.9878) += (0.0000, 0.0001) -= (0.0000, 0.0001) (0.5002, 0.9970) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.7430, 0.9994) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.9070, 0.9999) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.9791, 1.0000) += (0.0000, 0.0000) -= (0.0000, 0.0000) }; \addplot[clr5_4, only marks,error bars/.cd, y dir=both, y explicit,] coordinates { (0.0022, 0.5475) += (0.0000, 0.0006) -= (0.0000, 0.0006) (0.0210, 0.8509) += (0.0000, 0.0004) -= (0.0000, 0.0004) (0.0932, 0.9502) += (0.0000, 0.0002) -= (0.0000, 0.0002) (0.2573, 0.9836) += (0.0000, 0.0001) -= (0.0000, 0.0001) (0.5002, 0.9955) += (0.0000, 0.0001) -= (0.0000, 0.0001) (0.7430, 0.9991) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.9070, 0.9999) += (0.0000, 0.0000) -= (0.0000, 0.0000) (0.9791, 1.0000) += (0.0000, 0.0000) -= (0.0000, 0.0000) }; \addplot [black, no markers, line width=1pt,dashed] coordinates {(-0.1,0.95) (1.1,0.95)}; \legend{{$T=0$},{$T=32$},{$T=64$},{$T=70$}} \end{axis} \end{tikzpicture} \end{adjustbox} \vspace{-0.4cm}
\caption{\label{fig:accuracy} Correctness~(with varying similarity threshold $T$) and compression rate tradeoff. The dashed line marks $95\%$.} \vspace{-0.4cm} \end{figure} In Figure~\ref{fig:accuracy}, we present the tradeoff between correctness and compression efficiency. We plot the correctness $\epsilon$ (Y-axis) as the average compression rate $\alpha$ varies (X-axis). The dashed horizontal line represents $95\%$. We experiment with different definitions of similarity, i.e., with different values of $T$, denoted by different colors. Each node represents a parameter setup with the resulting correctness and compression rate. For example, the second blue node from the left in~\figref{fig:accuracy} represents that with $k=3$, the resulting embedding scheme is $(32, 98\%, \mathcal{D}_{twitter})$-correct and $(\altmathcal{B}, 9.3\%, \mathcal{D}_{twitter})$-compressing. Note that our notion of correctness is stricter than the false negative rate defined in the prior work~\cite{kulshrestha2021identifying} for the Kulshrestha-Mayer protocol, as the prior work measures the rate of transformed images that are not mapped to the original one, conditioned on the PDQHash of the transformed image staying within normalized Hamming distance $0.1$ of the original. The protocol~\cite{kulshrestha2021identifying} reported an average false negative rate of $16.8\%$ on a different dataset. Using our definition, this would be estimated as $(25.6, 83.2\%, \mathcal{D}_{KM})$-correct. In conclusion, these experiments suggest that for $\mathcal{D}_{twitter}$, one can achieve $\epsilonPrecR{r > 0\%} < 50\%$, over $95\%$ correctness for the investigated values of $T$, and a $9.3\%$ compression rate using $\mathbb{E}_{LSH}$ with $d=9$, $\gamma=0.05$ and $k=3$. Hence, the resulting scheme achieves an almost order-of-magnitude reduction in the amount of data input to a second-stage similarity protocol.
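To make the role of the coarse threshold $k$ concrete, a minimal sketch of the server-side bucket selection might look as follows. The names and the toy 4-bit hashes are our illustrative assumptions; the real $\textnormal{Sim}_{LSH}$ operates on 256-bit PDQHashes, and the size of the returned bucket relative to the whole blocklist is what the compression rate $\alpha$ measures.

```python
def sim_lsh(I, p_tilde, blocklist, k):
    """Candidate-bucket selection: keep each blocklist hash whose
    projection onto the client's indexing function I lies within
    Hamming distance k of the received coarse embedding p_tilde."""
    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))
    return [v for v in blocklist if hamming([v[i] for i in I], p_tilde) <= k]

# Toy 4-bit hashes, a 2-bit indexing function, and threshold k = 1.
blocklist = [(0, 0, 0, 0), (1, 1, 1, 1), (0, 1, 0, 1)]
bucket = sim_lsh(I=[0, 1], p_tilde=[0, 0], blocklist=blocklist, k=1)
# bucket keeps (0,0,0,0) and (0,1,0,1); (1,1,1,1) is ruled out
```

A larger $k$ admits more of the blocklist into the bucket, trading compression for correctness, which is exactly the tradeoff swept in the figure above.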
\subsection{End-to-end Simulation} \label{sec:end-to-end} We now perform end-to-end simulation on varying sizes of blocklists $\altmathcal{B}$ to demonstrate the improvement of execution time and bandwidth for different similarity protocols combined with SBB. For the experiments, we use parameters suggested in \secref{sec:correctness-expr}, i.e. $d=9$, $\gamma=0.05$ and $k=3$. \begin{table*} \centering \fontsize{7}{9}\selectfont \begin{tabular}[t]{rrrrrrrrr} \toprule \multirow{2}{*}{$|\altmathcal{B}|$} & \multicolumn{2}{c}{Execution Time~(s)} & \multicolumn{2}{c}{Total Bandwidth~(MiB)} & \multicolumn{2}{c}{Execution Time~(s)} & \multicolumn{2}{c}{Total Bandwidth~(MiB)} \\ & \multicolumn{1}{c}{No SBB} & \multicolumn{1}{c}{SBB} & \multicolumn{1}{c}{No SBB} & \multicolumn{1}{c}{SBB} &\multicolumn{1}{c}{No SBB} & \multicolumn{1}{c}{SBB} & \multicolumn{1}{c}{No SBB} &\multicolumn{1}{c}{SBB}\\ \midrule &\multicolumn{4}{c}{\textbf{Similarity Embedding Retrieval}} & \multicolumn{4}{c}{\textbf{Secure Sketch}}\\ \midrule $2^{18}$&$0.76$ $(0.00)$&$0.02$ $(0.00)$&$18.41$ $(0.00)$&$0.21$ $(0.01)$ & $1664.98$ $(\phantom{3}870.78)$&$2.77$ $(\phantom{1}0.17)$&$78.81$ $(0.36)$&$0.89$ $(0.05)$\\ $2^{19}$&$1.55$ $(0.01)$&$0.03$ $(0.01)$&$36.84$ $(0.02)$&$0.46$ $(0.15)$ & $5702.44$ $(3049.44)$&$6.03$ $(\phantom{1}0.41)$&$158.66$ $(2.39)$&$1.90$ $(0.14)$\\ $2^{20}$&$3.09$ $(0.01)$&$0.06$ $(0.02)$&$73.65$ $(0.00)$&$0.83$ $(0.06)$ &\multicolumn{1}{c}{--} & $12.09$ $(\phantom{1}0.99)$&\multicolumn{1}{c}{--}&$3.56$ $(0.26)$\\ $2^{21}$&$6.17$ $(0.01)$&$0.10$ $(0.01)$&$147.31$ $(0.01)$&$1.69$ $(0.12)$ &\multicolumn{1}{c}{--} & $26.70$ $(\phantom{1}2.40)$&\multicolumn{1}{c}{--}&$7.18$ $(0.55)$\\ $2^{22}$&$12.41$ $(0.07)$&$0.43$ $(0.99)$&$294.61$ $(0.03)$&$3.33$ $(0.25)$ & \multicolumn{1}{c}{--} & $61.99$ $(\phantom{1}6.63)$&\multicolumn{1}{c}{--}&$14.35$ $(1.17)$\\ $2^{23}$& \multicolumn{1}{c}{--} &$0.40$ $(0.02)$& \multicolumn{1}{c}{--} &$6.70$ $(0.41)$ & \multicolumn{1}{c}{--} & $157.57$ 
$(14.41)$&\multicolumn{1}{c}{--}&$28.40$ $(1.68)$\\ \midrule &\multicolumn{4}{c}{\textbf{CrypTen}} & \multicolumn{4}{c}{\textbf{EMP}}\\ \midrule $2^{13}$&$17.85$ $(0.18)$&$1.18$ $(0.31)$&$941.42$ $(0.37)$&$11.26$ $(\phantom{22}1.64)$ & $13.14$ $(0.06)$&$0.21$ $(0.01)$ & $1520.63$ $(0.44)$ &$17.14$ $(\phantom{33}1.82)$ \\ $2^{14}$&$36.65$ $(0.44)$&$1.33$ $(0.30)$&$1882.82$ $(0.87)$&$21.40$ $(\phantom{22}2.14)$& $26.93$ $(0.98)$&$0.36$ $(0.03)$ & $3040.57$ $(1.39)$ &$34.52$ $(\phantom{33}4.12)$ \\ $2^{15}$&$71.76$ $(0.07)$&$1.55$ $(0.06)$&$3766.55$ $(0.81)$&$42.43$ $(\phantom{22}4.06)$& $54.61$ $(1.26)$&$0.67$ $(0.03)$ & $6079.48$ $(2.67)$ &$70.45$ $(\phantom{33}3.68)$ \\ $2^{16}$&$147.10$ $(0.66)$&$2.19$ $(0.08)$&$7534.29$ $(0.90)$&$83.77$ $(\phantom{22}6.96)$ &$117.52$ $(8.11)$&$1.21$ $(0.14)$ & $12158.00$ $(0.88)$ &$133.42$ $(\phantom{3}16.20)$ \\ $2^{17}$&\multicolumn{1}{c}{--} &$3.55$ $(0.40)$&\multicolumn{1}{c}{--} &$167.70$ $(\phantom{2}14.69)$&\multicolumn{1}{c}{--} & $2.42$ $(0.13)$ & \multicolumn{1}{c}{--} &$275.52$ $(\phantom{3}15.32)$ \\ $2^{18}$&\multicolumn{1}{c}{--} &$6.45$ $(0.40)$&\multicolumn{1}{c}{--} &$347.53$ $(\phantom{2}22.65)$&\multicolumn{1}{c}{--} & $4.79$ $(0.28)$ & \multicolumn{1}{c}{--} &$551.26$ $(\phantom{3}32.51)$ \\ $2^{19}$&\multicolumn{1}{c}{--}&$13.77$ $(1.22)$&\multicolumn{1}{c}{--}&$695.30$ $(\phantom{2}45.49)$&\multicolumn{1}{c}{--}&$9.63$ $(0.52)$ & \multicolumn{1}{c}{--} &$1097.53$ $(\phantom{3}59.78)$ \\ $2^{20}$&\multicolumn{1}{c}{--}&$27.89$ $(1.88)$&\multicolumn{1}{c}{--}&$1386.84$ $(107.27)$& \multicolumn{1}{c}{--}&$20.24$ $(1.26)$ & \multicolumn{1}{c}{--} &$2299.33$ $(143.63)$ \\ \midrule \end{tabular} \vspace{-0.3cm} \caption{\label{tab:end-to-end} Average time and bandwidth of similarity protocols without and with SBB for four different similarity testing protocols. Dashes (--) indicate when execution failed due to poor scaling. 
Numbers in parentheses are standard deviations.} \end{table*} \textbf{Datasets.} To form varying sizes of blocklist $\altmathcal{B}$, we randomly sample images from the datasets that were used in prior work~\cite{kulshrestha2021identifying}.\footnote{These are unfortunately not suitable for the privacy simulations of previous sections; they don't include information about sharing frequency.} The details of the datasets are listed in \tabref{tab:end-to-end-dataset}. In particular, for each experiment, we generate $\altmathcal{B}$ of the requisite size by uniformly selecting images (without replacement) from the union of the COCO, T4SA, and Webvision~2.0 datasets. For client requests, we sample half of the requests from the IMDB-WIKI dataset to simulate requests that don't match any image in the blocklist, and the rest from the generated~$\altmathcal{B}$ to simulate client requests that return a match. For each set of experiments with a randomly generated $\altmathcal{B}$ tested with similarity embedding retrieval (the server simply sends the embeddings of bucket entries to the client), we provide measurements over $40$ iterations. With secure sketch, we use $20$ iterations, and with 2PC protocols, we use $10$ iterations, because these take significantly longer to run. \textbf{Implementation.} We use an AWS EC2 p2.xlarge\footnote{\url{https://aws.amazon.com/ec2/instance-types/p2/}} instance with 61\,GiB of memory and 200\,GiB disk storage for the server-side computation. The instance is initialized with the deep learning AMI provided by Amazon. An AWS EC2 t2.small\footnote{\url{https://aws.amazon.com/ec2/instance-types/t2/}} instance with 2\,GiB of memory and 64\,GiB storage in the same region acted as a client. The measured bandwidth between the two instances was 1\,Gbits/sec in both directions and the network latency was $0.9$\,ms. The server-side implementation uses Python and is parallelized using the GPU. The client-side implementation uses Go.
For the secure sketch protocol, we use an oblivious pseudorandom function~(OPRF), implemented by the circl Go library from CloudFlare~\cite{circl}. For the 2PC protocols our setup is identical except that we use an AWS EC2 t2.large instance with 8\,GiB of RAM as the client to be able to handle the 2PC frameworks. The bandwidth was measured to be 1.01\,Gbits/sec from server to client, 721\,Mbits/sec from client to server. The observed network latency was $0.3$\,ms. For both the CrypTen and EMP frameworks, the functionality computed checks whether there exists a hash among the server's inputs with Hamming distance less than $T$ to the client-provided hash. The server's input is the entire $\altmathcal{B}$ in the non-bucketized setting, and the generated SBB bucket in the bucketized setting. We further XOR the output of this comparison with a randomly-generated client-provided bit so that only the client learns the result. For CrypTen, for simplicity we configured the trusted third party for Beaver triple generation to run on the same EC2 instance as the client (so-called trusted first-party mode). This is not a secure configuration, but it lower-bounds the cost of a secure one (moving Beaver triple generation to a separate server would only add overhead). Experimental results for CrypTen should therefore be read as lower bounds on the execution time of secure deployments. Note that our 2PC prototypes are not optimized, and absolute timings would be improved using custom protocols for our setting such as PAMC~\cite{kulshrestha2021identifying}. However, it is unclear if PAMC can be combined with SBB since it requires generating all buckets at setup time. \textbf{Results.} We present the average total execution time, average total bandwidth, and the speedup provided by SBB for varying $|\altmathcal{B}|$ in \tabref{tab:end-to-end}.
The execution time and total bandwidth do not vary much between client requests that match images in $\altmathcal{B}$ and those that do not. Many of the similarity testing protocols do not scale well, and we denote by dashes in the table experiments that failed to complete. Typically this was due to the client instance running out of memory. In all these cases, SBB enabled executions to complete with the available resources. Our results show that SBB drastically improves the similarity protocol's performance, both in terms of execution time and total bandwidth. For similarity embedding retrieval, SBB provides a $29\times$ speedup~($|\altmathcal{B}|=2^{22}$) in execution time. For large-scale datasets~($|\altmathcal{B}|\leq 2^{23}$), similarity embedding retrieval with SBB returns the answer in real time, under $0.5$ seconds. For the secure sketch protocol, the improvement is even larger: the speedup is at least $601\times$ for $|\altmathcal{B}|=2^{18}$. For 2PC protocols, the improvement provided by SBB grows as $\altmathcal{B}$ becomes bigger: when $|\altmathcal{B}|=2^{16}$, the speedup in execution time is $67\times$ and $97\times$ for CrypTen and EMP, respectively. For $|\altmathcal{B}|=2^{20}$, EMP with SBB takes less time than the Kulshrestha-Mayer protocol~($20.24$\,s vs.\ $27.5$\,s), although it requires larger bandwidth. \section{Related Work} \label{sec:related} \textbf{Secure proximity search.} Secure proximity search based on general multi-party computation has been used in many applications, including privacy-preserving facial recognition~\cite{erkin2009privacy,sadeghi2009efficient}, biometric authentication~\cite{barni2010privacy,evans2011efficient}, querying sensitive health data~\cite{asharov2018privacy}, and more. However, these works don't scale sufficiently for our use case.
More scalable solutions include fuzzy extractors~\cite{dodis2004fuzzy,juels1999fuzzy}, which derive secrets from noisy readings such as biometrics by giving the server a small piece of client information that does not leak the secret. However, the security guarantee is based on the assumption that the distribution of the secret~(for example, fingerprints) has enough min-entropy, which is not necessarily true for image distributions. \noindent\textbf{Private information retrieval.} Chor et al.~\cite{chor1995private} introduced the concept of private information retrieval, a type of protocol that enables a client to retrieve an item from a database such that the identity of the item is not revealed to the server. The protocol requires clients to supply the index of the data item that they are querying. This is a poor fit for our use case, in which the client does not know in advance which (if any) item matches its query. A closer fit for similarity lookup is content-based privacy-preserving retrieval~\cite{shashank2008private,xia2016privacy,lu2009enabling}. Previous works~\cite{xia2016privacy,lu2009enabling} in this area assume three parties: a data owner, a client who queries the service, and the server hosting the service. The data owner encodes images or other types of multimedia into feature vectors that are further encoded with searchable encryption schemes and used as indices. The server is made oblivious to the actual content of indices and the content of corresponding data items. The client queries the server by generating indices from their images and receives the data items as answers. The threat model does not cover the case when the data owner colludes with the server, and hence cannot be directly applied to our scenario, where the data owner is the server.
\noindent \textbf{Secure k nearest neighbors search.} Another possible solution is to use secure k nearest neighbors search~(k-NNS) to look for similar items at the server side without revealing client information. Most of the k-NNS solutions require a linear scan of the database that is queried against. Recent work~\cite{chen2020sanns} proposed a sub-linear clustering-based algorithm, yet the solution requires significant preprocessing time for each client. Similar to our work, Riazi et al.~\cite{riazi2016sub} used locality sensitive hashing to encode client queries for efficient k-NNS. Different from our approach, they preserved user privacy by converting the LSH encodings to secure bits. However, the notion of security is different in their work; in particular, an adversary can estimate the similarity of two given data points based on the encodings. This makes the protocol vulnerable to the matching attack, which we address in our work. \noindent \textbf{Location based services.} We draw comparisons between our bucketization setting and that of location-based services (LBSs). Andrés et al. \cite{geoind} propose a privacy-preserving mechanism in which mobile clients send their noise-perturbed locations to a server in order to obtain recommendations for nearby restaurants. One may view the noise-perturbed location as a coarse embedding and the server-provided list of restaurants as a similarity bucket. Similar to our coarse embedding scheme, the mechanism of Andrés et al. suffers from privacy loss when applied repeatedly to the same user input. These connections suggest that our framework could have applications to reasoning about LBS privacy. Conversely, insights from location privacy may serve as inspiration for improved SBB mechanisms.
\noindent \textbf{Privacy measures.} Prior work has proposed measuring privacy using an adversary's expected error when making inferences based on a posterior distribution on user inputs \cite{quantifyingLocPriv, isGeoIndWhat, optLocPriv}. Recent work has explored the Bayes security measure \cite{bayesMeasure}, which is similar to $\epsilon_{\mathrm{acc}}$, but involves a security game in which the adversary attempts to recover a secret input as opposed to guessing a predicate on the secret input. Local differential privacy \cite{dwork2008differential} has also proven to be a popular worst-case privacy measure, but often incurs high correctness penalties. Although similar, these metrics cannot be directly applied to our scenario, nor can they replace $\epsilon_{\mathrm{auc}}$ and $\epsilonPrecR{\epsilon_{\mathrm{recall}} > \rho}$. \section{Limitations} \label{sec:limitations} Our work naturally suffers from several limitations that should be explored further before deployments are considered. Most notably, use of an SBB mechanism fundamentally must leak some information to the server, trading off client privacy for efficiency. In some contexts, leaking even a single bit of information about user content would be detrimental, in which case our techniques are insufficiently private. We speculate that leaking some information about client images is, however, fundamental to achieving practical performance in deployment for large~$\altmathcal{B}$. How to provide a formal treatment establishing that scaling requires some leakage, and what that means for moderation mechanisms, remain open questions. Second, our empirical analyses focus on matching attacks for a single query, which excludes some other potential threats. In particular, it does not address adversarially-known correlations between multiple images queried by one or more clients.
A simple example, mentioned in \secref{sec:sbb}, is an `averaging' attack against our LSH-based coarse embedding in which the adversary obtains a large number of embeddings all for the same image~$w$. Then the adversary can average out the per-bit noise and recover the granular embedding $\altmathcal{F}(w)$. We discuss simulation results for this scenario in \apref{ap:repeated}. The results indicate that, similar to privacy-preserving mechanisms for location-based services~\cite{geoind,quantifyingLocPriv, isGeoIndWhat,bayesMeasure}, repeated queries on the same content drastically weaken the privacy guarantee of SBB: an adversary that sees multiple SBB outputs that it knows are for the same image can obtain near-perfect matching attack precision for almost all recall thresholds. To limit risk here, client software might cache images it has recently queried. The client would not query the similarity service if a new image is too close to a prior image, and instead just reuse the cached result for the latter. Caching may not be feasible in all cases, and doesn't speak to cross-user sharing of images, which may be inferrable from traffic analysis should the adversary have access both to the similarity service and the messaging platform. Another approach would be to somehow ensure that the same noise is added to the same image, regardless of which client is sending it. This could possibly be done by having some clients share a secret key, and using it to apply a pseudorandom function to the image (or its PDQHash) to derive the coins needed for the random choices underlying our coarse embedding. Here an adversary's $\epsilonPrecR{\epsilon_{\mathrm{recall}} > 0\%}$ advantage remains at $50\%$ regardless of the number of repeated queries. But this doesn't account for other potentially adversarially-known correlations across images (e.g., they are almost identical), and may be fragile in the face of malicious client software.
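The averaging attack described above is straightforward to simulate. The following minimal sketch uses illustrative parameters and, for simplicity, noises the full hash rather than modeling per-query index selection; it shows per-bit majority voting recovering $\altmathcal{F}(w)$:

```python
import random

random.seed(1)
ell, gamma, q = 256, 0.05, 25   # hash length, flip bias, repeated queries (illustrative)

v = [random.randint(0, 1) for _ in range(ell)]   # the granular embedding F(w)

# The adversary sees q independently noised copies of v and majority-votes
# on each bit position.
votes = [0] * ell
for _ in range(q):
    for i in range(ell):
        votes[i] += v[i] ^ (random.random() < gamma)

recovered = [1 if 2 * c > q else 0 for c in votes]
errors = sum(r != b for r, b in zip(recovered, v))
# With gamma = 0.05 and q = 25, majority voting recovers essentially every bit.
```

Each bit is wrong only if a majority of its $q$ noisy copies were flipped, which for small $\gamma$ happens with negligible probability.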
Moreover, sending the same SBB embedding for the same image would seem to increase susceptibility to linking attacks in which an adversary infers when two or more queries correspond to the same image. We are unsure which scenario bears more risk in practice. We leave the exploration of these mitigations to future work. A related limitation is the exclusive use of empiricism for evaluation. While we focus on Bayes-optimal adversaries, it would be preferable to couple empiricism with analyses providing bounds on adversarial success. While our definitional framework provides the basis for proving bounds on, e.g., precision for particular data distributions, we do not yet have such proofs, and obtaining them appears challenging. We emphasize that such results cannot fully replace empirical work, because even formal results would necessarily make assumptions about data that must be empirically validated. Nevertheless, we consider the empirical results presented in this initial work as a proof-of-concept of the SBB framework and encourage future work to further examine the theoretical bounds for this approach. Finally, the public perceptual hash algorithms that SBB relies on increase the risk of evasion attacks that seek to modify images just enough to avoid detection. This risk seems particularly acute when using a similarity testing protocol that sends a bucket of PDQHash values to the client, as the adversary could extract these values from a client to inform their attacks. Allowing users to report misinformation images, so that the database can be updated frequently, may mitigate this risk. \section{Conclusion and Future Directions} \label{sec:conclusion} In this paper, in order to allow efficient privacy-preserving similarity testing, we defined the framework of similarity-based bucketization and formalized a set of privacy goals that are important to this application. We consider the information that the adversary wants to infer from a client input as the answer to a prediction task.
An adversary's advantage is measured by their uncertainty regarding the prediction, using metrics that are widely applied in machine learning. Towards a realistic prototype for SBB, we focus on image similarity testing. Driven by the privacy formalization, we ran simulations on real-world social media data and analyzed the SBB protocol's security against a ``matching attack''. The attack refers to the scenario where an adversary tries to infer if a client input is similar to an adversary-chosen image. Using our framework, deployments can tune the performance/privacy trade-off depending on the application context. We then test SBB's performance when composed with four similarity protocols with varying levels of privacy guarantees for the server content. We show that the composition with SBB significantly reduces the protocol latency and required bandwidth. While further research is needed to address various open questions and limitations of our results, we nevertheless believe that SBB represents a promising approach to scaling private similarity testing in practice. \section{A Secure Sketch Similarity Protocol} \label{sec:sssp} Here we provide further details regarding our secure-sketch-based protocol. A secure sketch~(SS) is a cryptographic mechanism originally suggested for use with biometrics or other fuzzy secrets. Formally, an SS is a pair of algorithms $(SS,Rec)$. The first is randomized, takes as input (in our context) a similarity representation $v=\altmathcal{F}(w)$ computed from image $w$, and outputs a bit string $z$, called a sketch. The recovery algorithm $Rec$ takes as input a sketch $z$ and a value $v'$ and outputs a corrected value~$v''$. Informally, correctness requires that $v'' = v$ if $\Delta(v,v') \le T$. For our purpose, we use secure sketches that work on the output space of the similarity embedding $\altmathcal{F}$, with $\Delta$ being Hamming distance.
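To make the $(SS,Rec)$ interface concrete, here is a minimal Python sketch of the standard code-offset construction over Hamming space, with a simple repetition code standing in for the error-correcting code (our implementation uses a Reed-Muller code); all names and parameters are illustrative:

```python
import secrets

R = 5  # repetition factor; majority decoding corrects up to (R - 1) // 2
       # bit errors per block (illustrative stand-in for a stronger code)

def encode(msg_bits):
    """Repetition-code encoder: repeat each message bit R times."""
    return [b for b in msg_bits for _ in range(R)]

def decode_to_codeword(bits):
    """Map a bit string to the nearest codeword via per-block majority vote."""
    msg = [1 if 2 * sum(bits[i:i + R]) > R else 0
           for i in range(0, len(bits), R)]
    return encode(msg)

def SS(v):
    """Code-offset sketch: z = v XOR c for a freshly sampled random codeword c."""
    c = encode([secrets.randbits(1) for _ in range(len(v) // R)])
    return [vi ^ ci for vi, ci in zip(v, c)]

def Rec(z, v_prime):
    """Recover v from the sketch z and any v' close to v in Hamming distance."""
    # v' XOR z = (v' XOR v) XOR c is a noisy codeword; decode it back to c,
    # then v = z XOR c.
    c = decode_to_codeword([a ^ b for a, b in zip(v_prime, z)])
    return [zi ^ ci for zi, ci in zip(z, c)]
```

With repetition factor $R$, recovery is guaranteed whenever each length-$R$ block of $v \oplus v'$ contains at most $(R-1)/2$ errors; stronger codes tolerate a global Hamming bound $T$, and the security argument follows the code-offset analysis of Dodis et al.~\cite{dodis2004fuzzy}.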
To realize our secure-sketch similarity protocol~(SSSP), we use an oblivious pseudorandom function~(OPRF), implemented by the circl Go library from CloudFlare~\cite{circl}. It suffices to have a PRF $F_K(X)$ whose output is a member of a group, and for which $F_K(X)^{K'} = F_{K\cdot K'}(X)$ for all $K,K',X$, and where the keys form a group with operation `$\cdot$'. At a high level, the server computes for each image $w' \in B$ a secure sketch $z$ (instantiated with the Reed-Muller code~\cite{reed1953class,muller1954application}) together with a precomputed OPRF output; the client recovers candidate similarity hashes from the sketches and uses the OPRF to check for matches without revealing $\altmathcal{F}(w)$ to the server. We detail the protocol below, after giving the necessary background. The secure sketch protocol was implemented as a Go app at both the server side and client side. \paragraph{Background on secure sketches.} Our description of secure sketches follows~\cite{dodis2004fuzzy}. Let $V$ be a random variable. The min-entropy of $V$ measures the probability of correctly guessing a draw of~$V$ in a single guess. Formally it is defined as \begin{newmath} \mathbf{\mathsf{H}}_\infty(V) = -\log \max_{v} \Prob{V = v} \;, \end{newmath} where the maximization is over all values $v$ in the support of~$V$. We can also define the conditional min-entropy of a variable $V$ conditioned on another (usually correlated) value~$Z$.
Formally, we define it as \begin{newmath} \condminentropy{V}{Z} = -\log \sum_{z} \max_v \CondProb{V = v}{Z = z}\cdot\Prob{Z=z} \end{newmath} where the summation is over all values $z$ in the support of $Z$ and the maximization over all values $v$ in the support of $V$. In words, this is the negative log of the adversary's expected probability of guessing $V$ after seeing $Z$. An $(\altmathcal{V},\mu,\mu',T)$-secure sketch is a pair of algorithms $(SS,Rec)$ for which: \begin{newitemize} \item The sketching algorithm $SS$ takes as input $v \in \altmathcal{V}$ and outputs a string $z \in \{0,1\}^*$. \item The recovery algorithm $Rec$ takes as input a string $z$ and an image similarity representation $v'$, and outputs a corrected value $v''$. Correctness mandates that if $\Delta(v,v') \le T$ then $v'' = v$. Otherwise no guarantees about $v''$ are made. \item The following security guarantee holds. For any distribution $\mathcal{D}$ over $\altmathcal{V}$ with min-entropy at least $\mu$, it holds that $\condminentropy{V}{SS(V)} \ge \mu'$, where $V$ is a random variable drawn according to $\mathcal{D}$ and $SS(V)$ is the resulting sketch. \end{newitemize} For our context, the security guarantee means that if images have high min-entropy, then they remain unpredictable even given their sketches. \paragraph{Oblivious PRFs.} In addition to a secure sketch, our protocol also relies on one other cryptographic object, an oblivious PRF (OPRF)~\cite{jarecki2009efficient,jarecki2014round}. An OPRF allows one party with a message $p$ to obtain the output of a PRF $F_K(p)$ where $K$ is held by the other party. We particularly use an OPRF construction from~\cite{jarecki2014round}, which is based in part on earlier protocols due to Ford and Kaliski~\cite{ford2000server}.
The OPRF uses a cryptographic group (such as a suitable elliptic curve group) $\mathbb{G}$ of order~$p$ with generator $g$, a hash function $H\smidge\colon\smidge\{0,1\}^*\rightarrow\mathbb{G}$, and a hash function $H'\smidge\colon\smidge\{0,1\}^*\times\mathbb{G}\rightarrow\{0,1\}^n$ for some parameter~$n$ (e.g., $n = 256$ if one uses SHA-256 for $H'$). The party holding input $p$ begins by choosing a random integer $\tau{\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \mathbb{Z}_p$, computes $X \gets H(p)^\tau$ and sends the result to the party holding the secret key $K$ (chosen uniformly from~$\mathbb{Z}_p$). Upon receiving $X$, that party computes $Y \gets X^K$ and sends $Y$ back. Finally, the initiator computes $Z \gets Y^{\tau^{-1}}$ and outputs $H'(p,Z)$. Thus ultimately the PRF is defined as $F_K(p) = H'(p,H(p)^K)$. We note that this scheme can be extended to allow verification that the server consistently uses the same PRF key $K$. We refer the reader to~\cite{jarecki2014round} for details. \paragraph{The protocol.} Our protocol involves a client whose input is an image similarity representation $v \in \altmathcal{V}$ and a server whose input is a set of images $\altmathcal{B} \subseteq \altmathcal{W}$. The protocol is parameterized by a similarity embedding function $\altmathcal{F}$ and distance threshold $T$. We assume a $(\altmathcal{V},\mu,\mu',T)$-secure sketch $(SS,Rec)$ that works for the distance measure $\Delta$ associated with $\altmathcal{F}$. We also use the OPRF protocol described above. A diagram of the protocol appears in \figref{fig:sssp}. Our secure-sketch similarity protocol (SSSP) starts by having the server compute a secure sketch for each image $w' \in B$, as well as the OPRF output for each such $w'$. The resulting values $z_1,\ldots,z_{|B|}$ and $F_K(\altmathcal{F}(w_1')),\ldots,F_K(\altmathcal{F}(w_{|B|}'))$ are sent to the client.
The client chooses a random OPRF blinding key~$\tau$ and computes $Y_i \gets F_{\tau}(Rec(z_i,v))$ for each sketch in the bucket. It sends the resulting $Y_1,\ldots,Y_{|B|}$ to the server. (If there are collisions in recovered similarity hashes, the client replaces the repeat OPRF output with a random value.) The server computes and returns to the client $Y_1^K,\ldots,Y_{|B|}^K$. Finally, the client can unblind the OPRF values by raising them each to $\tau^{-1}$, and then looks for values that match one of $F_K(\altmathcal{F}(w_1')),\ldots,F_K(\altmathcal{F}(w_{|B|}'))$. \begin{figure}[t] \center {\footnotesize \begin{tikzpicture}[yscale=-1,node distance=0.5cm] \coordinate(topleft) at (0.0,0.0); \coordinate(clientcoord) at (1.0,0.10); \draw (topleft) rectangle ++(8.5,3.6); \node(client)[align=center,minimum width=3,anchor=north] at (clientcoord) {\textbf{Client}($v$)}; \node(server)[align=center,minimum width=3,anchor=north] at ($(clientcoord)+(5.8,0)$) {\textbf{Server}($\altmathcal{B},K$)}; \node(clientline) [align=left,minimum width=3,anchor=north west] at (0.1,0.4) { $\tau{\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \mathbb{Z}_p$ }; \node(serverline)[align=left,minimum width=3, anchor=north] at ($(server)+(0,.20)$) { For $w_i\in B$:\\ \hspace*{1em} $z_i {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} SS(\altmathcal{F}(w_i))$\\ \hspace*{1em} $Y_i \gets F_K(\altmathcal{F}(w_i))$ }; \draw[thick,<-] (3,1.10) -- node [text width=3cm,midway,above,align=center] {$z_1,\ldots,z_{|B|}$\\ $Y_1,\ldots,Y_{|B|}$ } ++(2,0); \node(clientline) [align=left,minimum width=3,anchor=north west] at (0.1,0.85) { For $i\in[1,|B|]$:\\ \hspace*{1em} $p_i \gets Rec(z_i,v)$\\ \hspace*{1em} $X_i \gets H(p_i)^\tau$\\ }; \draw[thick,->] (3,2) -- node [text width=3cm,midway,above,align=center] {$X_1,\ldots,X_{|B|}$ } ++(2,0); \node(serverline2)[align=left,minimum width=3,anchor=north] at ($(server)+(-0.5,1.4)$) { For $i\in[1,|B|]$:\\ \hspace*{1em} $Y'_i \gets X_i^K$
}; \draw[thick,<-] (3,2.9) -- node [text width=3cm,midway,above,align=center] {$Y'_1,\ldots,Y'_{|B|}$ } ++(2,0); \node(clientline2) [align=left,minimum width=3,anchor=north west] at (0.1,2.4) { For $i\in[1,|B|]$:\\ \hspace*{1em} $Y'_i \gets {{{Y'_i}^{\tau^{-1}}}}$\\ }; \end{tikzpicture} \vspace{-0.7cm} } \caption{\label{fig:sssp}$F_{(\cdot)}(\cdot)$ is a PRF. $SS(\cdot)$ and $Rec(\cdot, \cdot)$ are from the secure sketch protocol.} \end{figure} \section{Acknowledgements} We thank Lucas Dixon, Nirvan Tyagi, and Paul Grubbs for useful discussions and feedback about this work. We thank Laurens van der Maaten and Brian Knott for answering our questions regarding CrypTen, and Xiao Wang for answering questions about EMP. This research was supported in part by NSF award \#1704527. \section{Correctness bound of $\mathbb{E}_{LSH}$} \label{sec:correctness} \begin{theorem} For any $w, w' \in \altmathcal{W}$ with $\Delta_\imagespace(w,w')<T$, when applying $\mathbb{E}_{LSH}$ with embedding length $d$, flipping bias $\gamma$ and coarse threshold $k$, for any $\beta > 1$ and $k > d\frac{T + \beta \ell \gamma}{\ell}$, we have \begin{equation*} \begin{multlined} \prob[\textnormal{Sim}_{LSH}(\textnormal{Emb}_{LSH}(w),w') = \mathsf{true}]\\ \indent > (1- e^{-2\ell(\beta-1)^2\gamma^2}) \cdot(1-e^{-2d(\frac{k}{d}-\frac{T + \beta \ell \gamma}{\ell})^2})\;. \end{multlined} \end{equation*} \end{theorem} \begin{proof} $\textnormal{Emb}_{LSH}$ works as follows: First, the algorithm samples an index function family $I$ as a combination of $d$ functions sampled uniformly from $\altmathcal{I}$ without replacement. The function $I$ is then applied on the similarity embedding of $w$, hence we have $p=I(\altmathcal{F}(w))$. Next, a randomized algorithm $\textnormal{Flip}_\gamma$ takes as input a bit string $p$ and outputs $\tilde{p}$ of the same length, setting $\tilde{p}_i = p_i$ with probability $1 - \gamma$ and $\tilde{p}_i = \lnot p_i$ with probability $\gamma$.
A threshold $k$ defined with the embedding scheme is used as the coarse threshold for the Hamming distance over the randomly selected indexes. Swapping the order of applying $\textnormal{Flip}_{\gamma}$ and $I$ doesn't change the output of $\textnormal{Emb}(w)$. Hence, we apply $\textnormal{Flip}_\gamma$ on $\altmathcal{F}(w)$ first. Let $v=\altmathcal{F}(w)$ and $\tilde{v}=\textnormal{Flip}_\gamma(v)$. We define $X=\sum_{i=1}^\ell X_i$, where the random variable $X_i=1$ if and only if $\tilde{v}_i$ is set to $\lnot v_i$. Since each index is flipped independently with probability $\gamma$ as in a Bernoulli trial, we have that $X$ is sampled from a binomial distribution $X \sim B(\ell, \gamma)$, where $\e[X]=\ell\gamma$. Thus according to Hoeffding's inequality, for any $\delta > \ell\gamma$, $\prob[X \geq \delta] \leq e^{-2\ell(\frac{\delta}{\ell}-\gamma)^2}\;$. For any $w,w' \in \altmathcal{W}$ with $\Delta_\imagespace(w,w') < T$, let $v=\altmathcal{F}(w)$, $\tilde{v}=\textnormal{Flip}_\gamma(v)$ and $v'=\altmathcal{F}(w')$. We have $\Delta(\tilde{v},v') \leq X+\Delta(v,v')$, $X$ as defined previously. Hence, in the case of $\Delta(v,v') < T$, for any $\beta > 1$, \begin{align*} \prob[\Delta(\tilde{v}, v') \geq T + \beta \ell\gamma] &\leq \prob[X \geq T - \Delta(v, v') + \beta \ell \gamma]\\ &\leq e^{-2\ell(\frac{T-\Delta(v,v')}{\ell} + (\beta -1 )\gamma)^2}\\ &\leq e^{-2\ell(\beta -1)^2\gamma^2} \end{align*} For a randomly chosen $I$ from $\altmathcal{I}$, we define a random variable $Y$ as the number of drawn indexing functions $I$ such that $\tilde{v}[I] \neq v'[I]$. Note that $Y=\Delta(I(\tilde{v}), I(v'))$, and it is drawn from a hypergeometric distribution. Thus, with Hoeffding's inequality, we have $\prob[Y \geq k] \leq e^{-2d(\frac{k}{d}-\frac{\Delta(\tilde{v}, v')}{\ell})^2}\;$.
In the case where $k > d \cdot \frac{T+ \beta \ell \gamma}{\ell}$, $\prob[\Delta(I(\tilde{v}), I(v')) \geq k \mid \Delta(\tilde{v}, v') < T + \beta \ell\gamma] \leq e^{-2d(\frac{k}{d}-\frac{T + \beta \ell \gamma}{\ell})^2}\;$. Hence, we have \begin{align*} \prob[\textnormal{Sim}_{LSH}&(\textnormal{Emb}_{LSH}(w),w') = \mathsf{true}]\\ &= \prob[\Delta(I(\tilde{v}), I(v')) < k] \\ &\geq \prob[\Delta(\tilde{v}, v') < T + \beta \ell\gamma]\\ &\qquad\cdot \prob[\Delta(I(\tilde{v}), I(v')) < k \mid \Delta(\tilde{v}, v') < T + \beta \ell\gamma]\\ &> (1- e^{-2\ell(\beta-1)^2\gamma^2})(1-e^{-2d(\frac{k}{d}-\frac{T + \beta \ell \gamma}{\ell})^2})\;. \end{align*} \end{proof} \section{SBB to Share Moderation Efforts} \label{ap:mod} The SBB mechanism can also be used by social media sites to share moderation efforts across trust boundaries. Small social media sites have fewer moderation resources and hence often rely on databases of known bad content, such as the ThreatExchange service~\cite{threatexchange} provided by Facebook; SBB allows them to query such databases while preserving the privacy of their users. In this section, we review the deployment scenario of centralized moderators of social media sites using a third-party database to improve their moderation efforts. Unlike the deployment scenario where individual users use SBB, the centralized moderator is able to deduplicate the content it receives on the forum, reducing the repetition rate of popular images in the queries. Using a dataset of images that Matatov et al.~\cite{matatovdejavu} collected from the imageboard 4chan, a social media site known for inappropriate content that could use our service, we conduct experimental analyses to find suitable parameters of SBB under this setting. Our results show that while preserving client privacy and ensuring correctness, SBB compresses the database to $2.1\%$ of its original size.
Our simulation dataset consists of $1.2$ million posts with images collected from 4chan's politically incorrect board (/pol/) between July 10, 2018 and October 31, 2019, with $750$ thousand unique PDQHashes. The repetition rate of popular images is much lower compared to the deployment scenario of individual users, as can be observed in \figref{fig:overview-ap}~(upper-left). Most of the images~($78\%$) don't share the same PDQHash value with others~(first row from bottom, lightest shade). Each post with an image corresponds to a query for that image to the SBB protocol. As in \secref{sec:expr}, we examine the security of SBB against the matching attack on the most popular image, which appears in $0.02\%$ of all posts. When comparing the baseline of using $\mathbb{E}_{cPDQ}$ with $\mathbb{E}_{LSH}$, \figref{fig:overview-ap} (upper-right) shows a similar trend to \figref{fig:privacy}~(left). First, we notice that the accuracy metric cannot accurately describe the privacy leakage, while $\epsilon_{\mathrm{auc}}$ remains $100\%$ in all experiments. Second, across all metrics, $\mathbb{E}_{cPDQ}$ allows the adversary to have perfect improvement when performing the matching attack, while $\mathbb{E}_{LSH}$ offers better security guarantees. In particular, when measured by $\epsilonPrecR{\epsilon_{\mathrm{recall}} = 100\%}$, $\mathbb{E}_{LSH}$ performs better than $\mathbb{E}_{cPDQ}$ even when the coarse embeddings are of similar length. The adversary achieves $89.5\%$ precision upon receiving a $16$-bit LSH-based coarse embedding as a request, as compared to almost $96.6\%$ precision when we apply $\mathbb{E}_{cPDQ}$. In \figref{fig:overview-ap}~(lower-left), we fix the coarse embedding length at $d=12$ and present the results for $\epsilonPrecR{\epsilon_{\mathrm{recall}} > \rho}$ with varying $\gamma$.
As in \figref{fig:privacy}~(right), for any $\gamma$, $\epsilonPrecR{\epsilon_{\mathrm{recall}} > \rho}$ is smaller than $50\%$~(noted by the horizontal line) for all recall thresholds, satisfying our security requirement as defined in \secref{sec:expr}. In \figref{fig:overview-ap}~(lower-right), we fix $d=12$ and $\gamma=0.05$, and vary the value of the coarse threshold $k$. As in the previous experiment, we use the 4chan dataset as $\mathcal{D}$, and all the possible values of PDQHash in the dataset as $\altmathcal{B}$. The results are aggregated from $2$M requests randomly selected from the dataset with replacement. The parameter settings are less conservative in this scenario, allowing better efficiency. For example, when $T=32$, a sufficiently correct embedding scheme with $\epsilon \geq 95\%$~(noted by the dashed horizontal line) produces a smaller bucket~($2.1\%$) in \figref{fig:overview-ap}~(lower-right) than that~($9.3\%$) in \figref{fig:accuracy}~(left). The distribution of images shared by clients may differ depending on the deployment context. Such differences impact the adversarial advantage when performing matching attacks. Our framework allows simulation on real-world social media datasets, in order to find good parameters for $\mathbb{E}_{LSH}$ when navigating the privacy/efficiency tradeoff. \section{Simulating Repeated Queries} \label{ap:repeated} We consider the following attack scenario in this section. An adversary who receives $q$ embeddings and is able to infer that the embeddings correspond to the same image aims to guess if the image has the same PDQHash value as an adversary-chosen image $w_{adv}$. The embeddings are generated with independent coins. We first generalize from Theorem~\ref{thm:bayes-opt} to compute the probability used by a Bayes optimal adversary. \begin{theorem} Let $\Delta_I(v, p) = \Delta(I(v), p)$ for a similarity hash $v \in \{0,1\}^\ell$.
Consider a fixed image $w_{adv} \in \altmathcal{W}$. Suppose we sample an image $w$ and generate $q$ embeddings for this image as follows: $$w {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \mathcal{D}, (I_1, p_1) {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \textnormal{Emb}(w), \ldots, (I_q, p_q) {\;{\leftarrow{\hspace*{-3pt}\raisebox{.75pt}{$\scriptscriptstyle\$$}}}\;} \textnormal{Emb}(w)$$ Let $v_{adv} = \altmathcal{F}(w_{adv})$ and $v = \altmathcal{F}(w)$. Furthermore, consider the distribution $\dist_\simhash$ that $\altmathcal{F}$ induces on $\{0,1\}^\ell$, where $$\dist_\simhash(x) = \Pr[\altmathcal{F}(w) = x] = \sum\limits_{\substack{w' \in \altmathcal{W},\\ \altmathcal{F}(w') = x}} \mathcal{D}(w')$$ for all $x \in \{0,1\}^\ell$. We have that \begin{equation*} \begin{multlined} \Pr[v = v_{adv} \mid (I_1, p_1), \ldots, (I_q, p_q)] \\ = \dfrac{\gamma^{\delta(v_{adv})}\cdot (1 - \gamma)^{q \cdot d - \delta(v_{adv})} \cdot \dist_\simhash(v_{adv})}{\sum\limits_{v' \in \{0,1\}^\ell} \gamma^{\delta(v')}\cdot (1 - \gamma)^{q \cdot d - \delta(v')} \cdot \dist_\simhash(v')} \end{multlined} \end{equation*} where $\delta(v) = \sum\limits_{j \in [q]} \Delta_{I_j}(v, p_j)$ and the probability is over the sampling of $w$ and the coins used in the invocations of $\textnormal{Emb}$. \end{theorem} \begin{proof} The proof follows that of Theorem~\ref{thm:bayes-opt}, except that we now consider independent bit flips over multiple embeddings corresponding to the same image $w$; hence the sum in the exponents of $\gamma$ and $1 - \gamma$. \end{proof} We experiment with the deployment scenario in which individual users use our services, and follow the experimental setup of \secref{sec:data} with the same dataset.
\figref{fig:repeated-ap} presents the adversary advantage, as measured by the precision metric $\epsilon_{\mathrm{prec}\mid\epsilon_{\mathrm{recall}}>\rho}$~(Y axes), as the repetition frequency $q$~(X axes) increases. The left figure shows the drastically increasing adversary advantage when embeddings are randomized. Even with low repetition rates, e.g., $q=2$, $\epsilon_{\mathrm{prec}\mid\epsilon_{\mathrm{recall}}>0\%}$ is almost $100\%$. This is because for two randomized queries of the same image, the amount of information leaked to the adversary is around $2d$ bits when the probability of picking the same indexing function is low. Hence the privacy loss for $d=9$ and $q=2$ would be similar to that for $d=18$, which is detrimental. However, when the embeddings are derandomized, i.e., when the indexing functions are fixed for repeated queries, the adversary advantage is at most around $50\%$, as shown in the right figure, even for $\epsilon_{\mathrm{prec}\mid\epsilon_{\mathrm{recall}}>0\%}$ and $q=5$. This is because the adversary does not receive much extra information from the repeated queries. Nevertheless, $\epsilon_{\mathrm{prec}\mid\epsilon_{\mathrm{recall}}=100\%}$ keeps increasing with $q$, as the adversary becomes more certain when predicting more cases of $w$ as positive. \section{Probability Computed by Bayes Optimal Adversary} \label{ap:bayes} We prove Theorem~\ref{thm:bayes-opt} in this section and recall it below.
\bayesOpt* \begin{proof} By Bayes' theorem, \begin{equation*} \begin{multlined} \Pr[v = v_{adv} \mid \textnormal{Emb}(w) = (I, p)] \\ = \dfrac{\Pr[\textnormal{Emb}(w) = (I, p) \mid v = v_{adv}] \cdot \Pr[v = v_{adv}]}{\Pr[\textnormal{Emb}(w) = (I, p)]} \end{multlined} \end{equation*} Observe that $\Pr[v = v_{adv}] = \dist_\simhash(v_{adv})$ and that \begin{align*} \Pr&[\textnormal{Emb}(w) = (I, p) \mid v = v_{adv}] \\ &= \Pr[\textnormal{Emb}(w) = (I, p) \mid I, v = v_{adv}] \Pr[I]\\ &= \gamma^{\Delta_I(v_{adv}, p)} \cdot (1 - \gamma)^{d - \Delta_I(v_{adv}, p)} \cdot \Pr[I] \end{align*} The last equality holds by the independence of the $\gamma$-biased coin flips, with $(1 - \gamma)$ corresponding to the bits in common and $\gamma$ corresponding to the differing bits. Now, note that $$\Pr[\textnormal{Emb}(w) = (I, p)] = \Pr[\textnormal{Emb}(w) = (I, p) \mid I] \Pr[I]\;.$$ Observe that \begin{align*} \Pr&[\textnormal{Emb}(w) = (I, p) \mid I] \\ &= \sum\limits_{v' \in \{0,1\}^\ell} \Pr[\textnormal{Emb}(w) = (I, p) \mid I, \altmathcal{F}(w) = v'] \cdot \Pr[\altmathcal{F}(w) = v']\\ &= \sum\limits_{v' \in \{0,1\}^\ell} \gamma^{\Delta_I(v', p)} \cdot (1 - \gamma)^{d - \Delta_I(v', p)}\cdot \dist_\simhash(v') \end{align*} Plugging back into our original expression and cancelling the $\Pr[I]$ terms completes the proof of the theorem. \end{proof}
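The closed-form posteriors in the theorems above can be checked numerically. Below is a minimal Python sketch, assuming a toy $4$-bit hash space with a uniform distribution $\dist_\simhash$ and illustrative parameters ($d=2$, $\gamma=0.1$, $q=3$); the function names and parameter values are ours, not the paper's implementation.

```python
import random
from itertools import product

def emb(v, d, gamma, rng):
    """Sample an embedding (I, p): pick d bit positions of v (the indexing
    function I) and flip each selected bit independently with probability gamma."""
    I = rng.sample(range(len(v)), d)
    p = tuple(v[i] ^ (rng.random() < gamma) for i in I)
    return I, p

def delta(I, v, p):
    """Delta_I(v, p): Hamming distance between the selected bits of v and p."""
    return sum(v[i] != b for i, b in zip(I, p))

def posterior(v_adv, embs, D, gamma, d):
    """Pr[v = v_adv | (I_1, p_1), ..., (I_q, p_q)] via the closed form above."""
    q = len(embs)
    def weight(v):
        dl = sum(delta(I, v, p) for I, p in embs)
        return gamma ** dl * (1 - gamma) ** (q * d - dl) * D[v]
    return weight(v_adv) / sum(weight(v) for v in D)

# Toy setup: uniform distribution over all 4-bit hashes, q = 3 embeddings.
rng = random.Random(1)
D = {v: 1 / 16 for v in product((0, 1), repeat=4)}
v_true = (1, 0, 1, 1)
embs = [emb(v_true, d=2, gamma=0.1, rng=rng) for _ in range(3)]
posts = {v: posterior(v, embs, D, gamma=0.1, d=2) for v in D}
```

The posteriors sum to one by construction, and sharpen around the true hash as $q$ grows, matching the intuition behind the derandomization discussion in the previous section.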
\section{Introduction} \label{sec1} Dislocations are of fundamental importance in the physics of semiconductors, both from a mechanical and from an electronic point of view. They are the carriers of plasticity in crystals and act as trapping and scattering centers for electronic carriers. A wealth of experimental information is available about the properties of dislocations in tetrahedrally bonded semiconductors. \cite{hirsch,duesbery,alexan,gotts,farber,nikit,kolar} In Si, the predominant slip systems are the 60$^{\circ}$ and the screw dislocations oriented along $\left[110\right]$ directions in a $\{111\}$ slip plane. Both are known to occur in the glide configuration and to dissociate into pairs of partial dislocations bounding a ribbon of intrinsic stacking fault.\cite{hirsch,duesbery,alexan} Dissociation lowers the strain energy and is made energetically favorable by the low energy of the stacking fault in this material. (Evidence indicates that the above is also true in the case of germanium and for III-V and II-VI semiconductors.\cite{hirsch,duesbery}) The resulting 90$^{\circ}$ and 30$^{\circ}$ partials are believed to undergo core reconstruction, which eliminates the unsaturated bonds, thus restoring the fourfold coordination of the atoms at the cores. This picture is consistent with the low density of dangling bonds, as suggested by EPR measurements.\cite{hirsch,duesbery} Dislocation motion occurs by nucleation and propagation of kinks along the dislocation line. Due to thermal fluctuations or the action of an applied stress, double kinks can be nucleated along the dislocation line. When these reach a critical separation, dissociation occurs and the individual kinks propagate in opposite directions, thus generating a displacement of a dislocation segment.\cite{h&l} A detailed understanding of the atomic-scale structure of the kinks and the barriers associated with their motion is thus of the greatest importance. 
In semiconductors, according to the theoretical model proposed by Hirth and Lothe (HL),\cite{h&l} double-kink nucleation and the high Peierls potential barrier to kink motion control the rate of dislocation propagation. This is to be contrasted with the case of metals, where kinks experience a very low barrier to motion, and the rate is controlled by nucleation only. The HL model is often used in the interpretation of dislocation mobility experiments, although the occurrence of such high Peierls barriers has been questioned by some authors.\cite{maeda} Furthermore, an alternative theoretical model has been proposed in which dislocation motion is controlled by the pinning of kinks by obstacles distributed along the dislocation line.\cite{obst1,obst2} Recent experimental evidence suggests that the barriers are indeed high, but experiments cannot clearly decide between these two theoretical models.\cite{gotts,farber,nikit,kolar} A complete microscopic picture is still lacking. Related issues, such as the dependence of dislocation mobility on doping and the photoplastic effect in semiconductors,\cite{hirsch} would also profit from a better understanding of dislocation structure at the atomic level. On the computational side, large-scale problems of this nature have been mostly studied by using classical interatomic potentials. Such studies are not always reliable, since these potentials are often unable to reproduce effects of intrinsic quantum-mechanical nature such as bond reconstruction and Peierls or Jahn-Teller symmetry breaking. For example, while the Stillinger-Weber~\cite{stwb} potential has been used to study the core reconstruction and kinks of the 30$^{\circ}$ partial,\cite{bulatov} it fails to reproduce the spontaneous symmetry-breaking core reconstruction of the 90$^{\circ}$ partial from the symmetric ``quasi-fivefold'' reconstruction. \cite{bigger,nbv} A proper quantum-mechanical description of the electronic structure is clearly needed. 
One is thus led to consider tight-binding (TB) and {\it ab initio} methods. Recent {\it ab initio} and TB theoretical works have concentrated on such issues as the core reconstruction of the 90$^{\circ}$ partial,\cite{bigger,hansen} and the elastic interaction between dislocations of a dipole in the shuffle~\cite{arias} and glide sets.\cite{hansen} Using a relatively small supercell, one first-principles study has determined a kink mobility barrier in the 30$^{\circ}$ partial,\cite{huang} but only one kink species was studied, out of a very rich variety characteristic of this system. As will become clear from the conclusions of the present work and from Ref.~\onlinecite{bulatov}, the formation and migration energies of other kinks are needed for a proper comparison with the experimental results. An important recent development is our prediction,\cite{bnv} on the basis of classical-potential, tight-binding, and {\it ab initio} calculations, that the reconstruction of the 90$^{\circ}$ partial in Si is not the above-mentioned symmetry-breaking structure, as had generally been accepted in the theoretical literature. \cite{bigger,nbv,hansen,chel84,markl83,markl94,jones80,jones93,% heggie83,heggie93,oberg} Instead, we proposed a double-period (DP) reconstruction whose core structure is reminiscent of that of a double-kink of minimal length that would form on the core of the original single-period (SP) structure.\cite{bnv} Cluster calculations on kinks and solitons~\cite{foot-sol} in the SP reconstruction of the 90$^{\circ}$ partial have also been reported.\cite{jones80,heggie83,heggie93,oberg} These calculations have identified many of the basic types of defects in this system, but must be taken at a semi-quantitative level, since they suffer from the lack of coupling of the defect local strain fields with the lattice elastic fields. 
To address properly the issues related to dislocation mobility, a comprehensive study of dislocation kink structure and dynamics would require the use of very large supercells, for which the application of {\it ab initio} techniques is still computationally prohibitive. In view of this, the natural choice is the application of more efficient quantum-mechanics based methods to study the electronic and structural excitations in the dislocation cores. In this work, we employ the tight-binding-total-energy (TBTE) Hamiltonian of Kwon and collaborators~\cite{kwon} to carry out a detailed atomistic study of the atomic structure of both the 30$^{\circ}$ and the 90$^{\circ}$ partial dislocations in Si. To make these calculations tractable, we use the linear-scaling or ``${\cal{O}}(N)$'' method of Li, Nunes and Vanderbilt~\cite{lnv} to solve for the electronic-structure contribution to energies and forces, enabling us to treat system sizes up to $10^3$ atoms easily on a workstation platform. Our work addresses some of the fundamental issues associated with these two systems. More specifically, we address the ground-state structural properties of the dislocation cores and of defects in the core, such as kinks and reconstruction defects (RD), as well as energy barriers to motion of the various defects. In this work, when considering the 90$^{\circ}$ partial, we will discuss only the SP reconstruction. 
Despite the fact that this is not the correct ground state for this dislocation in Si, we hope that understanding this somewhat simpler system will help us in the study of the myriad of defects in the more complicated ground-state DP reconstruction, to which the former is related.\cite{bnv} Moreover, we should keep in mind that the 90$^\circ$ partial is equally important in other materials, such as germanium (Ge), diamond (C), and the III-V and II-VI semiconductors.\cite{hirsch,duesbery,alexan} Preliminary calculations,\cite{bnv} using a Keating model,\cite{keating} indicate that in C the SP reconstruction is more likely to be lower in energy, while Ge, like Si, would prefer the DP structure. More accurate calculations are needed to reveal which of the two reconstructions would be favored in each case. Therefore, the study of the SP structure is still important from a theoretical point of view. The paper is organized as follows. The next section contains the technical details of the calculations we performed. In Sec.~\ref{sec3}, we discuss our results for the core reconstruction and related defects in the 30$^\circ$ partial dislocation. Our main results for the SP reconstruction of the 90$^\circ$ partial are described in Sec.~\ref{sec4}. Finally, in Sec.~\ref{sec5} we summarize the main conclusions and results, and compare our kink energies and barriers with the available experimental results. In particular, we will argue that our results appear to be consistent with the HL theory of dislocation glide. \section{Technical details of the calculations} \label{sec2} We use the TBTE parameters of Kwon {\it et al.},\cite{kwon} which describe well the acoustic phonon modes and elastic constants of Si, thus being adequate to describe the strain fields associated with the dislocation cores and related defects. 
Owing to its good transferability between different crystal structures, ranging from diamond to FCC, this Hamiltonian is also expected to give a good description of the coordination defects in the present study. The ${\cal{O}}(N)$ method of Li {\it et al.}~\cite{lnv} is used to solve for the electronic structure contribution to the energies and forces. We initially impose a real-space cutoff of $R_c=$ 6.2 \AA\ on the range of the density matrix, the value used in the tests presented in Ref.~\onlinecite{lnv} for the ${\cal{O}}(N)$ method. In a second stage, we improve the convergence of our results by further relaxing the ionic positions and the electronic structure with a larger cutoff value of $R_c =$ 7.3 \AA. The numerical minimization of the ${\cal{O}}(N)$ functional was carried out by the conjugate-gradient algorithm, with the internal line minimization performed exactly. To obtain the right number of electrons, the chemical potential is adjusted iteratively in each case. Usually, this procedure has to be repeated only at the initial steps of the structural relaxation procedure, after which the chemical potential converges to the appropriate value and remains constant. Ground-state structures were computed by allowing all atomic coordinates to relax fully (average forces less than 0.1 meV/\AA). Our supercells are chosen with the dislocation direction (corresponding to a $\left[1{\overline 1}0\right]$ crystalline direction) lying on the $y$-axis. The glide plane (which contains a stacking fault) is normal to the $\left[111\right]$ direction and coincides with the $xy$ plane of the cell. (Fig.~\ref{core-30} shows the glide plane of the 30$^\circ$ partial dislocation, with the crystalline directions indicated.) The $z$ direction of the cell is thus parallel to the $\left[111\right]$ direction. Each supercell contains two dislocations having opposite Burgers vectors (a dislocation dipole), which allows us to use periodic boundary conditions.
Supercell vectors are chosen so as to arrange the dislocations in a quadrupole configuration, as suggested in Ref.~\onlinecite{bigger}, to avoid the spurious shear strains associated with the minimal dipole cell. To ensure the convergence of our calculations with respect to supercell size, we used three different cells, containing 216, 264, and 308 atoms, respectively, for the simulation of the reconstructed core of the 30$^\circ$ partial dislocation. Each cell corresponds to a slab of atoms at a 60$^\circ$ angle with respect to the dislocation direction, including twice the lattice period in that direction, to allow for the period-doubling reconstruction of the 30$^\circ$ partial. The two parameters characterizing the geometry of each cell are the separation between the two dislocations in the glide plane (i.e., the width of the stacking fault) within a given unit cell, and the distance between the periodic-image dipoles along the $z$ direction. In our calculations, these distances are, respectively, 15.0~\AA\ and 18.8~\AA\ for the 216-atom cell, 18.3~\AA\ and 18.8~\AA\ for the 264-atom cell, and 18.3~\AA\ and 21.9~\AA\ for the 308-atom cell. The supercells for the computation of defect energies are obtained by repeating the core slabs several times along the dislocation direction. The defect energies we quote are referred to the corresponding supercell containing defect-free (but reconstructed and fully relaxed) dislocations. For the kinks in the 30$^\circ$ partial, each of the core slabs was repeated three times (two and a half times for the RD, and three and a half times for the kink-RD complexes) along the dislocation direction, so that the defect-defect separation along the line was 19.2~\AA\ or larger, depending on the type of defect. Below, we describe the procedure we used for the computation of defect energy barriers.
Because of the higher computational demands involved, in this case we employed only the smaller cells (three times the 216-atom slab for kinks and two and a half times the same slab for the RD). Table I in the next section illustrates the convergence of our results with respect to dislocation separation. As a further check, we also computed the energies of the core and of one of the kinks (the LK kink, as described below) with an even larger slab, consisting of 600 atoms for the reconstructed core (1800 atoms for the defect). In this case, dislocation distances are 24.9~\AA\ in the $xy$ plane and 31.4~\AA\ in the $z$ direction. The change in defect energy with respect to the 308-atom slab was only $\sim$0.02~eV. To test the effect of defect interaction, this kink was studied with a larger kink-kink separation (with the smallest slab repeated eight times), which produced a change of only $\sim$0.01~eV in the energy. Therefore, we consider our calculations to be converged within 0.03~eV with respect to core-core and defect-defect interactions. To estimate the error involved in our choice of cutoff for the density matrix, kink and core energies were computed using a larger cutoff ($R_c=$ 8.1 \AA). The kink formation energy changed by only $\sim$0.06~eV, which justifies our choice of cutoff. From these results we can also estimate that our defect energy barriers are converged within 0.3 eV. The supercells used for the study of the 90$^\circ$ partial are as described in Refs.~\onlinecite{nbv,bnv}. In this case, despite the fact that we are only interested in the qualitative nature of our results, our values are well converged, with dislocation distances on the order of 26.6~\AA, and defect-defect separations of at least $\sim$13.4~\AA. (As in the case of the 30$^\circ$ partial, barriers are computed using smaller cell sizes, corresponding to a dislocation separation of 13.3~\AA.)
Barrier energies were calculated by identifying the 3$N$-dimensional configuration-space vector ${\bf R}_{12}={\bf R}_2-{\bf R}_1$ pointing from one equilibrium position ${\bf R}_1$ of the defect to a neighboring position ${\bf R}_2$, and defining a reaction coordinate $Q = {\bf R} \cdot {\bf R}_{12}$ measuring the progress from ${\bf R}_1$ to ${\bf R}_2$. Then, for a series of values of this coordinate, we computed the energy with this coordinate fixed and all others fully relaxed. This approach is efficient in simple cases, but we find that it often fails to converge to the saddle-point configuration when the reaction path~\cite{foot-path} makes sharp angles with respect to ${\bf R}_{12}$. In these cases, we can usually find two configurations, ${\bf R'_1}$ and ${\bf R'_2}$, near the saddle-point, with nearly the same value of $Q$ but with opposite forces along ${\bf R}_{12}$. By exploring the space spanned by ${\bf R}_{12}$ and $\left({\bf R'_2} - {\bf R'_1}\right)$ while allowing all other coordinates to relax, we were able to determine the energy barriers with good accuracy for all cases studied (average forces less than 1.0 meV/\AA). \section{The 30$^\circ$ partial dislocation} \label{sec3} \subsection{Core reconstruction} \label{sbsec3.1} \begin{figure} \epsfxsize=2.8 truein \centerline{\epsfbox{fig1.ps}\quad} \vskip 0.20truein \caption{(a) Unreconstructed core of the 30$^\circ$ partial dislocation, viewed from above the $(111)$ slip plane. Shaded region indicates stacking fault. Black (white) atoms lie below (above) the slip plane. (b) Same view of the double-period reconstructed structure. Crystalline directions are also shown.} \label{core-30} \end{figure} In Fig.~\ref{core-30}(a), a top view of the atomic structure of the unreconstructed 30$^{\circ}$ partial in the glide plane is shown. The shaded area represents the stacking fault, and the dislocation line is indicated by the boundary between shaded and unshaded areas. 
The crystalline directions are also displayed. Atoms shown as white (black) are above (below) the glide plane; each atom is bonded to another either above or below it, and these are not shown in the picture. Thus, fourfold-coordinated atoms have three of their bonds in the plane of the figure. The atoms at the core of the defect are threefold coordinated, with a dangling bond lying nearly parallel to the dislocation line. In Fig.~\ref{core-30}(b) we show a reconstruction in which the fourfold coordination of the atoms at the core is restored by atoms bonding in pairs along the line, leading to a doubling of the period in that direction. This reconstruction is well accepted as being the ground state of the 30$^{\circ}$ partial, and has been discussed theoretically by other authors.\cite{markl83,northrup,chel82,bulatov} In Ref.~\onlinecite{bulatov}, it was found to be 0.21 eV/\AA\ lower in energy than the unreconstructed structure, using a Stillinger-Weber potential. We find a higher value of 0.36 eV/\AA\ for the reconstruction energy. \begin{table} \caption{Formation energy of defects in the 30$^\circ$ partial dislocation, in eV. Defect energies are referred to a defect-free dislocation core. TB results for three supercell sizes are shown. For the PSD, supercells contain 5/6 of the number of atoms shown. For the LK, in the third column we indicate in parentheses the formation energy computed with a 1800-atom cell. The fourth column contains the Keating energy computed for the largest cell (with ``Keating + 0.4 eV'' numbers in parentheses).
} \begin{tabular}{lcccc} &648 atoms &792 atoms &924 atoms &Keating \\ \hline PSD &1.35 &1.32 &1.33 \\ LK &0.52 &0.37 &0.35 (0.33) &-0.06 (0.34) \\ LK$'$ &0.97 &0.81 &0.76 &$\;$0.44 (0.84) \\ RK &0.93 &1.20 &1.24 &$\;$1.00 (1.40) \\ RK$'$ &1.64 &1.84 &1.85 &$\;$1.30 (1.70) \\ \end{tabular} \end{table} A look at the distribution of bond lengths for this structure shows that the reconstruction is indeed strong, with maximum bond-length deviations of only 3.0\% (maximum and minimum bond lengths are 2.423 \AA\ and 2.308 \AA, respectively) with respect to Si bulk values (2.351 \AA). The core energy is mostly due to the strain associated with bond-angle distortions at the core of the defect, with bond angles ranging between $\sim$90$^\circ$ and $\sim$126$^\circ$ ($109.5^\circ$ is the bulk value). No mid-gap levels are expected for this structure, in accordance with the EPR evidence.\cite{hirsch,duesbery,alexan} A rich variety of core defects is associated with this reconstruction, including kinks and RDs, and complexes of these basic types. A very extensive study of these defects is found in Ref.~\onlinecite{bulatov}, including structural features and energetics under a Stillinger-Weber potential. To a large extent, our study of this specific dislocation relies on this previous study, adding to it the benefits of a quantum-mechanical treatment of the electronic structure. More specifically, the defects considered in this work are the ones identified in Ref.~\onlinecite{bulatov}. As we proceed, it will be seen that some of our results differ qualitatively from those in Ref.~\onlinecite{bulatov}, and also that we find a better agreement with the experimental results. \subsection{Phase switching defect (PSD)} \label{sbsec3.2} Fig.~\ref{PSD}(a) shows a RD associated with the core of the 30$^\circ$ partial. 
We shall refer to this defect as a phase switching defect (PSD).\cite{foot-sol} The existence of such defects has been hinted at since the realization that the core of the partials might undergo reconstruction.\cite{hirsch,jones80} We computed the energy of a fully relaxed PSD by repeating the atom slabs five times along the $\langle 110\rangle$ direction, and introducing one PSD in each dislocation line. Our value for the PSD formation energy is 1.32 eV (see Table I), which is somewhat higher than the value of 0.81 eV obtained in Ref.~\onlinecite{bulatov}. We believe our result to be more reliable, given the quantum-mechanical nature of our approach, in particular for a defect containing a dangling bond. To a first approximation this defect can be understood as a $p$ dangling-bond defect, which indicates that a formation energy on the order of 1 eV (roughly the bond-breaking energy in bulk Si) is to be expected. The exact value is determined by the relaxation of the atoms surrounding the defect. \begin{figure} \epsfxsize=2.8 truein \centerline{\epsfbox{fig2.ps}} \vskip 0.20truein \caption{(a) Core structure of a phase-switching defect (PSD), which is a reconstruction defect in the core of the 30$^\circ$ partial dislocation. The phase of the reconstructed bond along the dislocation line is switched, going through the defect. (b) Saddle-point configuration for the propagation of a PSD along the core.} \label{PSD} \end{figure} We also computed the migration barrier for the propagation of the PSD along the dislocation direction. The relaxed structure of the barrier (saddle-point) configuration is shown in Fig.~\ref{PSD}(b). In this case, the symmetry between adjacent positions of the defect along the line indicates that the saddle-point configuration is at the halfway position. It was somewhat surprising to find that even in this case, we had to resort to a two-dimensional reaction coordinate as described in Sec.~\ref{sec2}.
Our saddle-point configuration, with an energy barrier of 0.3 eV, is very similar to that in Ref.~\onlinecite{bulatov}. It can be seen that the atom at the center becomes fivefold coordinated, which leads to a smooth process of bond substitution as the PSD propagates to the right. This explains the low energy barrier involved in this process. \subsection{Kinks} \label{sbsec3.3} The period doubling of the reconstructed core gives rise to a multiplicity of kinks in this system. Two distinct families of such defects appear, depending on whether the dislocation ``kinks'' to the left (Fig.~\ref{LK}) or to the right (Fig.~\ref{RK}). The period doubling of the core introduces a choice of phase of the core reconstruction both ahead of, and behind, the kink. Of the four configurations generated in this way, two of them (those necessarily containing a coordination defect) will be classified as PSD-kink complexes, and will be considered in Sec.~\ref{sbsec3.4}. The remaining two configurations will be classified as ``pure'' kinks and are considered here. The two left kinks LK and LK$'$ are shown in Figs.~\ref{LK}(a-b), while the two right kinks RK and RK$'$ are shown in Fig.~\ref{RK}(a-b). \begin{figure} \epsfxsize=2.8 truein \centerline{\epsfbox{fig3.ps}} \vskip 0.20truein \caption{Core structure of the left kinks in the 30$^\circ$ partial, and associated transition state. Kink notation is explained in the text. (a) LK kink. (b) LK$'$ kink. (c) Transition state for the LK $\rightarrow$ LK$'$ transformation.} \label{LK} \end{figure} The energies for each type of kink were computed using the TBTE Hamiltonian, as well as with a classical Keating model,\cite{keating} with the parameters proposed in Ref.~\onlinecite{qianchadi}, in order to look at the local-strain contributions to the energy of each defect. In Table I, we show the TBTE results for each of the three slabs described in Sec.~\ref{sec2}, along with the Keating results for the 924-atom cell.
For one of the kinks, the energy computed with an 1800-atom slab is also shown in parentheses; note the convergence of these results with respect to cell size. Next, we discuss the results for each kink family in more detail. \begin{figure} \epsfxsize=2.8 truein \centerline{\epsfbox{fig4.ps}} \vskip 0.20truein \caption{Core structure and transition state of right kinks in the 30$^\circ$ partial. Kink notation is explained in the text. (a) RK kink. (b) RK$'$ kink. (c) Transition state for the RK $\rightarrow$ RK$'$ transformation.} \label{RK} \end{figure} \subsubsection{Left kinks} The left kinks LK and LK$'$ are shown in Fig.~\ref{LK}, together with the saddle-point configuration for the LK $\rightarrow$ LK$'$ translational motion. The energies, as given in Table I, show that reconstruction produces low-energy kinks in this case, as compared to the energy of the unreconstructed PSD defect. At first sight, the formation energy for these reconstructed defects is expected to be mostly associated with the local strain at the kink cores. The Keating-model results can give us an estimate of these local-strain effects. The LK is found to add no additional strain beyond that imposed by the core reconstruction itself, as can be seen by its slightly negative energy. On the other hand, the LK$'$ kink is found to have a Keating formation energy of 0.44 eV. It is interesting to note that the {\it relative} Keating energy of the two left kinks is in good agreement with the TB results. This is actually true for all four kink types, as can be seen by comparing differences of the Keating energies in the fourth column of Table I. In the last column, we add a constant shift of 0.4 eV to each Keating energy. It is evident that the ``Keating + 0.4 eV'' results are reasonably close to the TB ones. In view of this, we conclude that a roughly constant band-structure energy of 0.4 eV is associated with each kink.
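The ``Keating + 0.4 eV'' comparison can be reproduced directly from the numbers in Table I. The short sketch below (values copied from the 924-atom TB column and the Keating column of the table) simply quantifies how constant the shift is; it is a check on the stated conclusion, not part of the calculation itself.

```python
# Kink formation energies (eV) from Table I: TB (924-atom cell) vs. Keating model
tb      = {"LK": 0.35, "LK'": 0.76, "RK": 1.24, "RK'": 1.85}
keating = {"LK": -0.06, "LK'": 0.44, "RK": 1.00, "RK'": 1.30}

shift = 0.4  # roughly constant band-structure energy per kink (from the text)
shifted  = {k: round(e + shift, 2) for k, e in keating.items()}   # "Keating + 0.4 eV"
residual = {k: round(tb[k] - shifted[k], 2) for k in tb}          # TB minus shifted
```

With these numbers the shifted Keating energies track the TB values to within about 0.16 eV, consistent with the conclusion that a roughly constant band-structure contribution accompanies each kink.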
For the saddle-point configuration in Fig.~\ref{LK}(c) we computed an energy barrier of 1.52 eV. This result is in very good agreement with experimental estimates. In our concluding section, we will discuss more extensively the significance of our results in light of the available experimental evidence. Here, we note that such a high barrier can be understood by the presence of severe bond-bending and stretching distortions at the core of the defect, along with the presence of malcoordinated atoms. Bond angles as small as 50.4$^\circ$ are found, as well as bonds stretching to 2.80 \AA. \subsubsection{Right kinks} Shown in Fig.~\ref{RK} are the two kinks of the right family, RK and RK$'$, together with the saddle-point corresponding to the RK $\rightarrow$ RK$'$ reaction. Despite the fact that both kinks are fully reconstructed, the formation energies of 1.24 eV for RK and 1.85 eV for RK$'$ are surprisingly high. Again, note the agreement between the Keating values and the TB ones, after adding a constant shift of 0.4 eV to the former. No single structural feature of the right kinks could be traced in order to explain the unexpected formation energies. The minimum and maximum distortions of bond lengths and angles do not vary drastically among the four kink types. To help us better understand these results, we observe that the Keating energies can be decomposed in an atom-by-atom basis. Bond bending energies, associated with changes in the angle between two bonds, are assigned to the vertex atom, and half of the bond stretching energy of a given bond is assigned to each of the two participating atoms. To examine the nature of the strain fields associated with each kink type, using our largest cells (924 atoms), we looked at these atomic energies integrated over shells of atoms defined by their distance from the core of the defect. (Since our supercells contain two cores and thus two defects, we always choose the shortest distance to a defect.) 
The integrated energies are then defined by \begin{equation} E^d(R)= \sum_{R_i \leq R} \left[ E^d_i(R_i)-E^c_i(R_i) \right]\;, \end{equation} where the Keating energy $E^c_i(R_i)$ of each atom in a corresponding kink-free supercell (containing only the dislocation dipole) is subtracted, and we sum over all atoms within a distance $R$ from the kink. The results are shown in Fig.~\ref{keating}. We see that the kink energies are determined by the medium-range behavior of the associated strains. At short range ($R<3.0$~\AA) the LK kink is actually the highest in energy. As we advance away from the core of the kinks, the energies only approach their final relative values at a distance of about $R=10.0$~\AA. \begin{figure} \epsfxsize=3.3 truein \centerline{\epsfbox{fig5.ps}} \vskip 0.20truein \caption{Keating energy for 30$^\circ$-partial kinks. Energy $E(R)$ is the sum over all atoms within a distance $R$ from the dislocation core. Corresponding core energy is subtracted to yield defect energies.} \label{keating} \end{figure} As in the case of the left kinks, a look at the saddle-point configuration shows that the rather high migration barrier of 2.03 eV for the right kinks is associated with the drastic bond distortions and malcoordination of atoms at the core. Note that this barrier is substantially higher than the 1.52 eV value we obtained for the left kinks, leading to a physical picture of ``fast'' and ``slow'' plasticity carriers for the 30$^\circ$ partial dislocation. In our concluding section, we discuss this point further. Here, it is worth pausing to compare our results with those in Ref.~\onlinecite{bulatov}. Individual kink energies are not obtained in their work, since in all their calculations the supercells contained a double kink (two kinks, one of each family). Therefore, we cannot compare our kink energies directly with their results.
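The shell-integrated energy defined above is straightforward to reproduce numerically. The sketch below is a minimal illustration; the per-atom distances and Keating energies are hypothetical placeholders, not data from this work:

```python
import numpy as np

def integrated_energy(dist, e_defect, e_ref, R):
    """E^d(R): sum of per-atom Keating energy differences (kink cell
    minus kink-free reference cell) over all atoms within distance R
    of the kink core."""
    mask = dist <= R
    return float(np.sum(e_defect[mask] - e_ref[mask]))

# Hypothetical per-atom data: distances from the kink core (Angstrom)
# and Keating energies (eV) in the defect and reference supercells.
dist = np.array([1.2, 2.5, 4.0, 9.5, 12.0])
e_defect = np.array([0.50, 0.30, 0.20, 0.10, 0.05])
e_ref = np.array([0.10, 0.10, 0.10, 0.05, 0.05])

# Cumulative profile E^d(R), as plotted against R in Fig. 5.
profile = [integrated_energy(dist, e_defect, e_ref, R)
           for R in (3.0, 10.0, 15.0)]
```

Plotting such a profile against $R$ shows directly whether a defect energy is set by short-range or medium-range strain, which is the comparison made in Fig.~\ref{keating}.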
In their procedure, only the relative energies of kinks within each kink family are computed, under the assumption that the LK and RK have the same energy. A first aspect to be pointed out is that this assumed degeneracy between the LK and RK kinks is in sharp disagreement with our findings. In agreement with our work, they find an energy difference of $\sim$0.4~eV between the two left kinks. On the other hand, while our results indicate that the two right kinks also differ by $\sim$0.4~eV, they find these two kinks to be almost degenerate, with energies differing by only 0.07~eV. Our kink migration barriers are substantially higher, despite the fact that the associated saddle-point configurations seem to be very similar to those identified in their work. Below, our results will be seen to compare more favorably with experimental estimates of the kink barriers. \subsection{PSD-kink complexes} \label{sbsec3.4} Kinks and PSDs can be considered as the fundamental types of excitations in the dislocation cores. Important structural features and modes of dislocation dynamics can also be associated with the complexes formed by these basic defect types. Moreover, since RDs such as the PSD are malcoordinated (thus acting as weak links in the reconstructed core), they may act as preferential sites for the nucleation of double kinks, as suggested by Heggie and Jones.\cite{hirsch,heggie83} Possibly, a PSD-kink complex could result from such a nucleation process, as the double kink expands and eventually dissociates into single kinks. Therefore, it is important to understand the structure and energetics of these complexes. \begin{table} \caption{Formation energy of defect complexes in the 30$^\circ$ partial dislocation, in eV. Two different states are considered for each complex (notation is explained in the text).
Binding energy for the largest cell is indicated in the last column.} \begin{tabular}{lcccc} &756 atoms &924 atoms &1078 atoms &Binding energy \\ \hline LC(0) &1.11 &0.97 &0.88 &0.80 \\ LC(1) &1.78 &1.66 &1.58 &$\;$-\\ RC(0) &1.90 &2.09 &2.15 &0.42 \\ RC(1) &2.43 &2.55 &2.64 &$\;$- \\ \end{tabular} \end{table} Here, we consider the energetics of the PSD-kink complexes. The important questions concern whether or not these complexes form bound states, as well as the associated binding energies and migration barriers. We considered each of the PSD-kink complexes in two configurations, as shown in Fig.~\ref{cmplx}. The left complex (LC = LK + PSD) is shown in the state of closest approach, LC(0), Fig.~\ref{cmplx}(a), in which the two constituents overlap and cannot be distinguished; and in an extended state, LC(1), Fig.~\ref{cmplx}(b), in which the PSD and the kink have been separated to adjacent positions. The corresponding right-complex cases RC(0) and RC(1) are shown in Fig.~\ref{cmplx}(c) and (d), respectively. In Table II we show our results for the energies of these four configurations, where it can be seen that the PSD binds strongly with both the left and the right kinks, in agreement with Ref.~\onlinecite{bulatov}. Contrary to what is found in Ref.~\onlinecite{bulatov}, our results indicate the LC to be more strongly bound than the RC. From the binding energies and the energies of these more extended configurations, we obtain a lower bound of 0.80 eV (LC) and 0.49 eV (RC) for the dissociation barrier of these bound states. Below, these results will be shown to be in sharp contrast with those for kink-RD complexes in the SP reconstruction of the 90$^\circ$ partial dislocation, which are found to be unstable. Finally, we note that the energy of the LC is lower than that of the PSD, making the former the more likely site for unpaired electrons in the core of the 30$^\circ$ partial.
\begin{figure} \epsfxsize=3.3 truein \centerline{\epsfbox{fig6.ps}} \vskip 0.20truein \caption{Core structure of kink-PSD complexes in the 30$^\circ$ partial. In each case, two states of the complex are considered, as explained in the text. (a) LC(0) = LK + PSD at zero separation. (b) LC(1) = LK + PSD one lattice period apart. (c) RC(0) = RK + PSD at zero separation. (d) RC(1) = RK + PSD one lattice period apart.} \label{cmplx} \end{figure} \section{The 90$^\circ$ partial dislocation} \label{sec4} \subsection{Core reconstruction} \label{sbsec4.1} Considerable theoretical effort has been devoted to the study of the 90$^\circ$ partial dislocation. \cite{bigger,nbv,hansen,chel84,markl83,markl94,jones80,jones93,% heggie83,heggie93,oberg} Basically, two types of core reconstruction have been considered. These are the symmetric quasi-fivefold (QF) and the symmetry-breaking SP reconstructions shown in Fig.~\ref{core-90}(a) and (b), respectively. Both preserve the original periodicity of the lattice along the dislocation direction. The latter structure has been found to be lower in energy. It was thus commonly assumed to be the ground state in Si and other semiconductors, and the bulk of studies of core excitations has relied upon this assumption. Recently, we proposed an alternative solution for the ground-state in Si,\cite{bnv} where a period-doubling symmetry-breaking structure, seen in Fig.~\ref{core-90}(c), is shown to have lower energy than the SP one. As a consequence, the study of core excitations and the related issue of dislocation mobility have to be re-addressed. We are currently undertaking this task, and the results will be published elsewhere. \begin{figure} \epsfxsize=3.3 truein \centerline{\epsfbox{fig7.ps}} \vskip 0.20truein \caption{Models for core reconstruction of the 90$^\circ$ partial dislocation. (a) Symmetric QF reconstruction. (b) Symmetry-breaking SP structure. (c) Ground state symmetry-breaking DP structure. 
(d) Reconstruction defect or DSD in the SP core.} \label{core-90} \end{figure} Nevertheless, we note that the DP structure is closely related to the SP one, being obtained by inserting alternating kinks in the core of the latter.\cite{bnv} Therefore, understanding the defect structure of the simpler SP core may prove useful to the study of the rather large number of core defects of the DP reconstruction. In this section, we summarize our main results for the core and related defects of the SP structure. The SP core has two degenerate ground states, depending on the direction of the symmetry-breaking bonds. By convention, we denote the configuration in Fig.~\ref{core-90}(b) as the ``right'' reconstruction, from which we can obtain the ``left'' state by applying the broken mirror operations [the ones that are unbroken in the QF core in Fig.~\ref{core-90}(a)]. We find the SP core to be 0.18 eV/\AA\ lower in energy than the QF one. This result is in good agreement with previous TB~\cite{hansen} (0.18 eV) and LDA~\cite{bigger} (0.23 eV) works. In Ref.~\onlinecite{bigger}, it was found that symmetry breaking occurs spontaneously, a result that is confirmed by our model. In our calculations, the reconstructed bonds are stretched 3.0\% with respect to the perfect crystal values (2.5\% in Ref.~\onlinecite{bigger}), and the minimum and maximum bond angles are 97$^{\circ}$ and 135$^{\circ}$, respectively (96$^{\circ}$ and 138$^{\circ}$ in Ref.~\onlinecite{bigger}). Core defects are considered next. \subsection{Direction switching defect (DSD)} \label{sbsec4.2} Symmetry breaking in the SP core gives rise to an RD in which the direction of the bonds is switched, as shown in Fig.~\ref{core-90}(d). We shall refer to this defect as a direction switching defect (DSD).\cite{foot-sol} Note that, like the 30$^\circ$-partial PSD, this defect contains a dangling bond, which explains its formation energy of 1.45 eV.
Our result is in reasonable agreement with the 1.2 eV value obtained in the cluster calculations of Ref.~\onlinecite{heggie93}. For the DSD motion, we computed an energy barrier of only 0.04 eV for the propagation between two adjacent equilibrium positions. Given such a small barrier, the DSD is expected to be extremely mobile even at low temperatures. As a test, we performed a molecular dynamics simulation on a supercell having a pair of DSD defects, initially separated by 9.6 \AA, on an otherwise defect-free partial dislocation. Remarkably, at a temperature of only 50~K, recombination of the pair took place after only 1.3 ps. Unlike PSDs in the 30$^\circ$ partial, such highly mobile DSDs do not bind strongly with kinks to form DSD-kink complexes, as explained below. \subsection{Kinks} It would be possible to define left (LK) and right (RK) kinks in the case of the 90$^\circ$ partial, just as for the 30$^\circ$ partial. However, in the 90$^\circ$ case, each LK is directly related to a corresponding RK by application of a mirror symmetry. (This was not true for the 30$^\circ$ partial, where the mirror symmetry was absent from the outset.) Thus, for the 90$^\circ$ partial, we shall restrict the discussion to right kinks only. Moreover, we will now use the notation `L' and `R' in a completely different way, namely, to denote the direction of the core reconstruction on either side of the kink. Referring to Fig.~\ref{defect-90}(a), the reconstruction will be said to tilt to the `left' and to the `right' on the left and right sides of the kink, respectively. Hence, we call this a left-right (LR) kink, the notation following accordingly for the other defects. \begin{figure} \epsfxsize=2.8 truein \centerline{\epsfbox{fig8.ps}} \vskip 0.20truein \caption{Core structure of kinks and DSD-kink complexes in the SP core. See text for notation. (a) LR kink. (b) RL kink. (c) LL complex = LR + DSD. 
(d) RR complex = LR(RL) + DSD.} \label{defect-90} \end{figure} We compute the sum of the energies of the LR and RL kinks shown in Fig.~\ref{defect-90}(a) and (b), to be 0.24 eV only. The RL and LR kinks are structurally quite similar; they would be related by a two-fold rotation axis normal to the plane of Fig.~\ref{defect-90}, if it were not for the fact that a stacking fault exists on one side but not the other. Thus we expect the energies of the two kinks to be similar, and assign the average energy of 0.12 eV to each. The rather low formation energy can be seen as another indication of the DP core structure, since even individual kinks add little strain over that imposed by the SP core itself. In the formation of the DP core, this additional strain is more than compensated for by the attraction between the LR and RL kinks. In the present context, this result only shows that our previous results for kink energies in Ref.~\onlinecite{nbv} were severely under-converged with respect to the dislocation interaction in the cell. We also computed an energy barrier of 1.62 eV for the motion of the LR and RL kinks. As is the case for reconstructed kinks in the 30$^\circ$ partial, such large energy barriers are associated with the existence of malcoordinated atoms and severe bond distortions at the core of the kink. \subsection{DSD-kink complexes} There are two additional kink-type defects associated with the SP reconstruction of the core. These are the RR and the LL defects, shown in Figs.~\ref{defect-90}(c) and (d). We prefer to regard these as complexes of a LR or a RL kink together with a DSD. Two LL complexes are possible (only one is shown in Fig.~\ref{defect-90}), and they share the same ``quasi-symmetry'' that the LR and RL kinks do, differing only by the position of the fivefold and dangling-bond-containing rings with respect to the stacking fault. 
In contrast with complexes in the 30$^\circ$ partial, these complexes appear to be either unstable or marginally stable against the emission of a DSD, as discussed in Ref.~\onlinecite{nbv}. The dissociation barrier, if present, is basically the DSD migration barrier, which indicates that these complexes should dissociate very easily at moderate temperatures. This was confirmed by a simulation performed at 300~K, with a supercell containing a pair of RR complexes in each dislocation, separated by a distance of 34.6\AA. On the time scale of 1 ps, one of the kink complexes undergoes the DSD-emission reaction RR $\rightarrow$ RL + DSD, with the DSD propagating rather easily towards the other RR complex, where a DSD + RR $\rightarrow$ LR process takes place. Overall, a dislocation containing a pair of RR complexes relaxes into one containing alternating RL and LR kinks, by means of DSD emission (absorption) and propagation. \section{Comparison with experimental results} \label{sec5} In Table III we summarize our results for the formation energies and migration barriers of kinks in the 90$^\circ$ and 30$^\circ$ partial dislocations. For the 30$^\circ$ partial, of the two equilibrium states of each kink [(LK,LK$'$) and (RK,RK$'$)], one is to be regarded as an intermediate metastable state in the propagation of the kink, given the substantial difference in formation energy between the two states. Only the state with the lower formation energy will determine the kink concentration in each case (this lower formation energy is the number included in Table III). For comparison, results from Ref.~\onlinecite{bulatov} are also included, as are the ranges of experimental results for both quantities, obtained from different techniques.\cite{gotts,farber,nikit,kolar} We observe that, for the 30$^\circ$ partial, our values are in excellent agreement with the experimental ones. \begin{table} \caption{ Formation energy and migration barriers of dislocation kinks in Si, in eV. 
The range of available experimental estimates is included. For comparison, results from Ref.~\protect\onlinecite{bulatov} are also included. } \begin{tabular}{lccc} Dislocation &Kink type &Formation energy &Migration barrier\\ \hline 30$^\circ$ &LK &0.35 (0.82\tablenote{From Ref.~\protect\onlinecite{bulatov}}) &1.53 (0.82\tablenotemark[1]) \\ 30$^\circ$ &RK &1.24 (0.82\tablenotemark[1]) &2.10 (0.74\tablenotemark[1]) \\ 90$^\circ$ &LR &0.12 &1.62 \\ 90$^\circ$ &RL &0.12 &1.62 \\ \hline Experiments & &0.4-0.7 &1.2-1.8 \end{tabular} \end{table} The interpretation of these experiments is done according to the theory of Hirth and Lothe. In this theory, the dislocation velocity is given by \begin{equation} v_d \propto 2 \times \exp \left[-{1 \over kT} \left( U_k + W_m\right) \right]\;, \end{equation} where $U_k$ is the kink formation energy and $W_m$ is the kink migration barrier. This equation is written under the assumption that the two kinks that result from the nucleation of a stable double kink (a kink-antikink pair) are equivalent. This assumption does not hold for the 30$^\circ$ partial, where the left and right kinks are intrinsically different. The more general form \begin{eqnarray} v_d &\propto& \exp \left[-{1 \over 2kT} \left(U_{LK} + U_{RK} \right) \right] \nonumber \\ &\times& \left[ \exp \left( -{W^{LK}_m\over kT} \right) + \exp \left( -{W^{RK}_m \over kT} \right) \right]\;, \end{eqnarray} must be used. We note that the quantity of interest in the first activated term is the average formation energy of the two kink species. The second term arises from the kink velocities, so the sum of the two kink mobilities appears in the generalized form. In the 30$^\circ$ partial this term is dominated by the velocity of the left kinks (fast carriers), given the much higher migration barrier of the right kinks (slow carriers).
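The dominance of the fast carriers in the migration term can be checked with a few lines of arithmetic. The sketch below evaluates the generalized Hirth-Lothe rate (up to its overall prefactor) using the TB barriers of Table III; the temperature is an illustrative choice, not a parameter taken from this work:

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def hl_rate(u_lk, u_rk, w_lk, w_rk, temperature):
    """Generalized Hirth-Lothe dislocation velocity, up to a prefactor:
    double-kink nucleation term times the sum of the two kink mobilities."""
    kt = K_B * temperature
    nucleation = math.exp(-(u_lk + u_rk) / (2.0 * kt))
    migration = math.exp(-w_lk / kt) + math.exp(-w_rk / kt)
    return nucleation * migration

# Barriers for the 30-degree partial (Table III); illustrative T = 900 K.
T = 900.0
lk_term = math.exp(-1.53 / (K_B * T))   # left-kink (fast) mobility factor
rk_term = math.exp(-2.10 / (K_B * T))   # right-kink (slow) mobility factor
ratio = lk_term / rk_term               # left kinks dominate the sum
```

At 900 K the left-kink exponential exceeds the right-kink one by roughly three orders of magnitude, which is why the left kinks act as the fast carriers in this picture.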
We should point out that the average formation energy of the kink-antikink pairs in Table III falls within the range of the experimental numbers for the 30$^\circ$ partial. As we mentioned in the introduction, another theory of dislocation glide has been proposed,\cite{obst1,obst2} in which the motion is controlled by the pinning of kinks by strong obstacles along the dislocation line, and the kink migration barriers are not rate controlling. Despite the fact that our work does not address such pinning mechanisms, and thus cannot clearly decide between these two theories, our results are certainly consistent with the HL interpretation. Strictly speaking, our comparison is only valid for the 30$^\circ$ partial, since we did not consider the true ground state for the 90$^\circ$ partial. In the latter case, the excellent agreement we obtain for the kink barriers appears to be fortuitous. Nevertheless, our results are qualitatively consistent with the experimental images in Ref.~\onlinecite{kolar}, which show a higher concentration of kinks in the 90$^\circ$ partial. In Table III, we see that kink energies are lower in this dislocation, as compared to the 30$^\circ$ partial. Obviously, this is only plausible to the extent that this general trend of lower kink energies carries over to the ground-state DP structure of the 90$^\circ$ partial. \section{Conclusions} \label{sec6} In this work, an extensive study of the core reconstruction and structural excitations in the cores of both the 30$^\circ$ partial and the 90$^\circ$ partial (in its SP reconstruction) in Si was presented. For both partials, we find the core to undergo strong bond reconstruction, restoring the fourfold coordination of the core atoms. The reconstructed bonds are stretched by only $\sim$3\% with respect to bulk values, and the core energies are mostly associated with the bond-angle distortions present in the reconstructed cores.
In the case of the SP structure of the 90$^\circ$ partial, the RD (or DSD) is associated with a switch of direction of the reconstructed bonds, and is found to be highly mobile. Kink-DSD complexes are found to be only marginally stable against emission of a DSD, a reaction that is observed to proceed rather quickly in our simulations at room temperature. The LR and RL kinks have very low formation energy, indicating that they introduce little additional strain on the SP core, a result which is consistent with the lower energy of the DP core, as proposed in Ref.~\onlinecite{bnv}. For the 30$^\circ$ partial, two kink species (RK and LK) are identified, and the RK's are found to have higher formation energies than the LK ones. This is explained by the medium-range behavior of the associated strains. The RD (or PSD) is related to a phase switching of the core reconstruction, and binds strongly with kinks to form PSD-kink complexes. These are the more likely sites for unpaired electrons in the 30$^\circ$ partial core. The results for this particular dislocation can be directly compared with experiment, and we find good agreement between our calculated values for the kink formation energies and migration barriers and the experimental results. \acknowledgments Partial support was provided by the DoD Software Initiative. J.B. and D.V. acknowledge support from NSF Grants DMR-91-15342 and DMR-96-13648.
\section{INTRODUCTION} \label{intro} Gravitational lensing is an important probe for the mass distribution of lensing galaxies as well as for the internal structure of lensed sources \citep[e.g.,][]{koc06}. Quadruply lensed quasars are particularly useful in this regard, since the positions and flux ratios of the lensed images provide strong constraints on lens models and far more observational information than doubly imaged systems. For instance, recently revealed quads with ``anomalous flux ratios'', namely flux ratios which are unexplained in smooth-lens models, suggest the presence of numerous substructures in the lensing galaxy \citep[e.g.,][]{mao98,met01,chi02,dal02,sch02,kee03}. Whether these substructures correspond to cold dark matter (CDM) subhalos (milli-lensing), stellar populations (micro-lensing), or differential reddening by dust remains an issue, and depends on the lens system \citep[e.g.,][]{chi05}. It is of particular interest to assess whether there is a high level of substructure or subhalos in galaxy-sized halos, as predicted in CDM models \citep{kly99,moo99}. A technique to distinguish the origin of anomalous flux ratios was proposed by \citet{mou03}, utilizing spectroscopic information of lensed quasars. If we take into account the finite (i.e., not point-like) source size of a quasar's heart, which consists of a small continuum emitting region (CR) with typical size $R_{\rm S}$ of $\sim 10^{-4}$~pc, a broad line region (BLR) with $R_{\rm S}\sim 1$~pc, and a narrow line region (NLR) with $R_{\rm S} \ga 100$~pc, then the lensing magnification will be a function of these source sizes and the mass of any substructure inside its Einstein angle, $M_{\rm E}$ (i.e., either CDM subhalos with a mass $M_{\rm E} \sim10^{7-9} M_\odot$ or stars with $M_{\rm E} \sim 1 M_\odot$).
If CDM subhalos are responsible for anomalous flux ratios, then they can magnify both the CR and BLR because their Einstein radii (projected onto the distance of the quasar), $r_{\rm E} \sim 0.01 (M_{\rm E}/M_\odot)^{1/2} h^{-1/2}$~pc (where $h=H_0/100$ km~s$^{-1}$~Mpc$^{-1}$), are larger than the sizes of the CR and BLR. The NLR may be magnified as well, depending on the upper mass of the CDM subhalos. On the other hand, stars would be unable to magnify emission lines from either the BLR or NLR, while still magnifying the CR. As a consequence, depending on the nature of substructure, the fluxes of emission lines in each of the lensed images that show anomalous (continuum) flux ratios can be either magnified or unchanged. Observations of multiply lensed objects with an integral field spectrograph (IFS) are useful in this study, since they provide simultaneous information among quasar images both spatially and spectrally \citep[e.g.,][]{met04,way05}. The IFS is superior to conventional slit spectroscopic techniques in the following aspects: (1) Slit spectroscopic observations would require separate observations for each pair of components. This not only requires more observing time but also introduces significant errors and uncertainties in relative flux measurements, since exposures for each pair are carried out under different observational conditions. (2) IFS allows us to optimize the relative flux measurements {\it after} the observations, in terms of determining the size and center positions of the apertures used for photometry. In this paper, we report on our spectroscopic mapping of the quadruply imaged quasar 1RXS J1131$-$1231, which shows anomalous flux ratios, using the IFS mode of the Kyoto tridimensional spectrograph II \citep[Kyoto 3DII:][]{sug00b,sug02,sug04} mounted on the Subaru 8.2-m telescope.
This lens system is nearly unique among known gravitationally lensed systems in that it brings together several rare properties, including quadruplicity, a bright optical Einstein ring, a small redshift, and high amplification \citep[hereafter S03]{slu03}. \citetalias{slu03} measured redshifts of 0.658 and 0.295 for the quasar and the lensing galaxy, respectively. The lens shows three roughly co-linear, highly magnified images, A, B, and C. This configuration emerges if the quasar is close to and inside a cusp point of the astroid caustic for an elliptical lens (see also Figure~\ref{oiiiextensionmodel} shown in \S 4). In such a lens system associated with a cusp singularity, there exists a universal relation between the image fluxes, $(F_B+F_C)/F_A=1$ \citep[e.g.,][]{mao98}, whereas the observed flux ratios violate this relation significantly [$(F_B+F_C)/F_A \simeq 2.1$ in the $V$ band and $2.2$ in the $R$ band \citepalias{slu03}]. The flux ratios appear to be changing with time, as a result of microlensing effects on some of the images. Time delay effects between the images do not play a role in the flux ratio anomaly, because simple smooth-lens models predict only a day or a fraction of a day for the time delays between images A, B, and C \citep{slu06,cla06}. Several different lens models with smooth mass distributions have been constructed to attempt to reproduce the image positions, flux ratios, and time delays of this lens system. But it appears that one requires a higher-order lens structure in addition to a simple ellipsoidal lens, in the form of an octupole component or an unidentified satellite \citep[][hereafter M06]{cla06,mor06}. Armed with the IFS mode of Kyoto 3DII, we measured the emission-line fluxes of both the BLR H$\beta$ and the NLR [OIII]$\lambda\lambda$4959,5007 for images A, B, and C simultaneously.
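As a quick arithmetic check, the cusp-relation diagnostic can be evaluated directly; the sketch below simply contrasts the smooth-lens expectation with the observed band ratios quoted above (the example fluxes are placeholders):

```python
def cusp_ratio(f_a, f_b, f_c):
    """Cusp-relation diagnostic (F_B + F_C)/F_A; close to 1 for a
    source just inside a cusp of a smooth elliptical lens."""
    return (f_b + f_c) / f_a

# Smooth-lens expectation (placeholder fluxes satisfying the relation)
# versus the observed values quoted in the text (S03).
smooth = cusp_ratio(2.0, 1.0, 1.0)      # equals 1 by construction
observed_V, observed_R = 2.1, 2.2       # V and R bands
violation_V = observed_V / smooth       # more than a factor of two
```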
The measurements of the line fluxes are more reliable than those of the continuum because there are no contributions from the lensing galaxy, the redshift of which differs from the quasar's. The H$\beta$ and [OIII] lines are very close in wavelength, so that the effect of differential reddening between them is almost totally negligible. The unique fine sampling of $\sim 0^{\prime\prime}.1$, together with the simple shape of the point spread function and Subaru's excellent image quality, provides the most suitable method for measuring the relative line fluxes. This paper is organized as follows. \S 2 describes the observations and method for data reduction. Spectra of images A, B, and C and flux ratios between each image pair are shown in \S 3. \S 4 is devoted to discussion and conclusions. \section{OBSERVATIONS AND REDUCTION} \label{obs} We observed 1RXS J1131-1231 on 2005 February 8 with the IFS mode of the visitor instrument Kyoto 3DII, mounted on the Subaru. We used the atmospheric dispersion corrector installed at the Subaru Cassegrain focus \citep{iye04}. This enabled us to obtain the atmospheric-dispersion-free data cube without having to apply a correction during the data reduction \citep[e.g.,][]{sug05}. The IFS mode uses an array of $37 \times 37$ lenslets, enabling us to obtain spectra of $\sim 10^3$ spatial elements. The spectral range from $7300 \AA$ to $9150 \AA$ was observed in each of two one-hour exposures. With the spatial sampling of $0^{\prime\prime}.096$ lenslet$^{-1}$ \citep{sug04}, the field of view of $\sim 3^{\prime\prime}$ covered the three brighter quasar images. The target was located at the same position of the lenslet array during the two exposures. Since the positional difference derived from the reconstructed data cubes was small, within a few tenths of a lenslet (i.e., a few $\times 0^{\prime\prime}.01$), we combined them without any offset between them. 
The spatial resolution was stable during the observations, and was determined to be $0^{\prime\prime}.5$-$0^{\prime\prime}.6$ from a strong ``point''-like H$\beta$ emission-line distribution in the quasar image A (see \S~\ref{spectabc}). We used halogen lamp spectral frames for the flat fielding. These frames were obtained immediately before and after each of the target frames, with the same optical setting as the target frames. Since our calibration optics exactly simulates the telescope optics, i.e., since it provides the same micropupil images as the telescope does, it properly corrects differences in the lenslet-to-lenslet response (and efficiency). The uniformity of a flat-fielded reconstructed image is $1.5\%$ (1 $\sigma$). The wavelength calibration was carried out using He and Ne lamps, whose spectra were also taken immediately before and after each of the target frames. We measured that the calibration lamp line positions changed only by 0.1 pixel ($\sim 0.3 \AA$) along the spectral direction during the whole sequence of observations. This corresponds to a systematic uncertainty in the wavelength calibration. Moreover, the technique of micropupil spectroscopy \citep{sug06} provides us with an accuracy in the relative wavelength calibration of $\sim 0.2 \AA$, which has been determined within each of the calibration lamp frames. The full width at half maximum of the instrumental line profile was measured as $7 \AA$, which corresponds to 260 km s$^{-1}$. In order to improve signal-to-noise ratios in the target spectra, we smoothed them in wavelength by convolution with a gaussian profile, so that the resultant velocity resolution was 390 km s$^{-1}$. The sky subtraction was carried out using simultaneous sky spectra. This is particularly important at longer wavelengths, where the sky emission lines are strong.
For an accurate sky subtraction the Kyoto 3DII has two separate fields of view: one for the target and the other, smaller one for the sky, which is located $33^{\prime\prime}$ away from the target \citep{sug06}. The averaged sky spectrum was subtracted from each target lenslet spectrum (see \S~\ref{spectabc}). The IFS data of the spectroscopic standard star HD93521 were used for the instrument response curve correction as well as for absolute flux density calibration. The extinction curve at Mauna Kea (CFHT Observatory Manual, http://www.cfht.hawaii.edu/Instruments/ObservatoryManual/) was used to correct for the effects of airmass in both the target object and standard star spectra. The airmasses were 1.38 and 1.22 for the target frames and 1.23 for the standard star frames. The atmospheric conditions were checked using a guide star count log and were determined to have been non-photometric. We roughly estimate the uncertainty of the absolute flux density calibration to be about $\pm 20\%$. \section{SPECTRA OF QUASAR IMAGES A, B, AND C} \label{spectabc} Figure~\ref{lineimage} shows line images and a line-free continuum image that have been obtained from the IFS data. We find an extended component of the [OIII] emission connecting quasar images A and C. Such an extended component is not seen between quasar images A and B. The spatial distribution of the line-free continuum emission is similar to that of the H$\beta$ emission rather than the [OIII] emission. Despite the severe contamination from the host galaxy (\S~\ref{compab}), the continuum is much fainter in image C than in image B. Figure~\ref{abcab} (upper) shows the spectra of quasar images A, B, and C, each extracted with an 8-lenslet aperture, i.e., a circular aperture with a diameter of $0^{\prime\prime}.77$. An aperture of this size includes $\sim 60$ \% of the total flux of a point source. The aperture centers have been determined from the H$\beta$ image shown in Figure~\ref{lineimage}.
The relative positions of these centers are consistent, to within several milliarcseconds, with peak positions obtained by \citetalias{mor06} using the {\it Hubble Space Telescope}. As an example, the lower panel in Figure~\ref{abcab} shows the spectrum of image A both with and without sky subtraction in order to demonstrate the effectiveness of the sky subtraction as described in \S~\ref{obs}. \subsection{Comparison Between A and B} \label{compab} If all the emission from the quasar is magnified in the same manner within each quasar image, we should be able to completely subtract the spectrum of one quasar image with the spectrum of another quasar image by scaling the latter with an appropriate factor. If this is not the case, e.g. when the NLR line is magnified only by a macrolens while the BLR line is further magnified by a milli/micro-lens, it should be impossible to subtract these two lines completely with a single multiplicative factor. Using such spectral fittings, we derive the flux ratios in the BLR and in the NLR below. We do not attempt to derive the flux ratios in the continuum, which is severely contaminated by the quasar host galaxy. Figure~\ref{aboiiihbbroad} (upper) shows a comparison between quasar images A \& B. The spectrum of image A has been fit, using the least squares method, in the [OIII] and the line-free continuum emission wavelength regions according to the following equation: $f_A(\lambda) = b_0 \times f_B(\lambda) - b_1 \times (\lambda/8000 \AA)^{b_2}$, where $f_A(\lambda)$ and $f_B(\lambda)$ are the flux densities of images A and B, respectively. The fitting parameter $b_0$ determines the scaling factor, while $b_1$ and $b_2$ are used to remove the smooth continuum. The spectrum has been fit with $b_0=1.63$. Similarly, we have fit the spectrum of image A in the broad H$\beta$ and the continuum wavelength regions (Figure~\ref{aboiiihbbroad} (lower)). 
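Because the fitting function $f_A(\lambda) = b_0 f_B(\lambda) - b_1 (\lambda/8000\,\AA)^{b_2}$ is linear in $(b_0, b_1)$ once the exponent $b_2$ is fixed, the least-squares fit can be carried out by ordinary linear algebra over a grid of trial exponents. The sketch below illustrates this on synthetic spectra; the wavelength grid, line shape, and exponent grid are illustrative assumptions, not the actual data or fitting windows of this work:

```python
import numpy as np

def fit_scale(lam, f_a, f_b, b2_grid):
    """Fit f_A = b0*f_B - b1*(lam/8000)**b2: solve linear least squares
    in (b0, b1) for each trial exponent b2, keeping the best residual."""
    best = None
    for b2 in b2_grid:
        X = np.column_stack([f_b, -(lam / 8000.0) ** b2])
        coef, *_ = np.linalg.lstsq(X, f_a, rcond=None)
        rss = float(np.sum((f_a - X @ coef) ** 2))
        if best is None or rss < best[0]:
            best = (rss, coef[0], coef[1], b2)
    _, b0, b1, b2 = best
    return b0, b1, b2

# Synthetic test: an emission line on a smooth continuum in image B,
# scaled by 1.63 with a smooth continuum offset to mimic image A.
lam = np.linspace(7300.0, 9150.0, 400)
f_b = 1.0 + 3.0 * np.exp(-0.5 * ((lam - 8270.0) / 40.0) ** 2)
f_a = 1.63 * f_b - 0.2 * (lam / 8000.0)
b0, b1, b2 = fit_scale(lam, f_a, f_b, np.linspace(-3.0, 3.0, 61))
```

With noiseless synthetic data the fit recovers the input scaling $b_0 = 1.63$, which is the quantity of interest in the line-ratio comparisons.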
The broad H$\beta$ in image A seems to have only slightly weaker emission in the bluer wing, compared with that in the scaled image B. The best fit $b_0$ is 1.74. We therefore conclude that the flux ratios A$/$B in the [OIII] and broad H$\beta$ emission lines are $1.63^{+0.04}_{-0.02}$ and $1.74^{+0.07}_{-0.12}$, respectively, where the uncertainties are derived as discussed in \S~\ref{uncertainty}. In order to compare these values with the ratio predicted by macrolens models, we have constructed a smooth-lens model with a singular isothermal ellipsoid plus an external shear \citep{kas93,kor94}, based on the quasar image positions obtained by \citetalias{mor06}. A slight offset of the halo center with respect to the lensing galaxy center has been allowed. This model predicts an A$/$B ratio of 1.66. The smooth-lens models constructed by \citetalias{slu03} and by \citet[][hereafter BPR06]{bla06}, with a singular isothermal sphere plus an external shear, also predict similar ratios of 1.70 - 1.75. We therefore conclude that the measured ratios are close to those predicted by smooth-lens models. The offset required for the spectral fitting, i.e. the non-zero $b_1$, is likely caused primarily by contamination from the quasar host galaxy continuum. The contamination from the lensing galaxy is small since it is located outside of our field of view: $\sim 2^{\prime\prime}.1$ \citepalias[e.g.,][]{mor06} down from image A in Figure~\ref{lineimage}. Figure~\ref{lineimage} (right) suggests that the contribution of the lensing galaxy is less than $6 \times 10^{-18}$ erg cm$^{-2}$ s$^{-1}$ $\AA^{-1}$, and is actually much smaller, since we do not detect any signature of it even at positions closer to its center. \subsection{Comparison Between B and C} \label{compbc} We have compared the spectra of images B and C using the same method as in \S~\ref{compab}.
When we scale the spectra according to the [OIII] line emission (Figure~\ref{bcoiiihbbroad} (upper)), we find differences in the H$\beta$. The H$\beta$ profile also differs between these two spectra. Figure~\ref{bcoiiihbbroad} (lower) shows the same comparison, but with the C spectrum scaled according to the broad H$\beta$ emission. We find differences in the [OIII] lines and in the narrow line (NL) H$\beta$. The redshift of the NL H$\beta$ matches that of the [OIII] line ($z_{heliocentric}=0.6542 \pm 0.0001 (1 \sigma)$, where the uncertainty includes the accuracy of the [OIII] line peak determination as well as the wavelength calibration uncertainties). The ``NL-subtracted'' spectrum in Figure~\ref{bcoiiihbbroad} (upper) suggests that the broad line (BL) H$\beta$ is more symmetric than the original BL$+$NL H$\beta$, and is redshifted with respect to the NL H$\beta$ ($z_{heliocentric}=0.6581 \pm 0.0001$ for the BL H$\beta$). We have measured an asymmetry parameter, $A$, of the H$\beta$ emission for both the original and the residual spectrum of image B: $A=(HWHM_{Red} - HWHM_{Blue})/FWHM$, where $HWHM_{Red}$ and $HWHM_{Blue}$ represent the half-widths at half-maximum flux density of the red and blue wings of the profile, respectively \citep{wil93}. Our measurements yield $A=0.32$ for the original BL$+$NL H$\beta$ and $A=0.08$ for the residual BL H$\beta$. The difference in the H$\beta$ profile between images B and C is caused by different fractions of the NL contribution. The above comparisons lead us to conclude that the flux ratio between images B and C largely depends on where the lines originate: the BLR or the NLR. The flux ratio C$/$B in the [OIII] line is $1.19^{+0.03}_{-0.12}$, whereas it is $0.46^{+0.02}_{-0.03}$ in the BL H$\beta$ line. As before, uncertainties are derived as in \S~\ref{uncertainty}.
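The asymmetry parameter defined above is straightforward to evaluate on an oversampled line profile. The following sketch is our own illustrative implementation of this definition; the synthetic profiles and names are assumptions, not the authors' data. A symmetric Gaussian returns $A \approx 0$, while a red-skewed profile returns $A > 0$:

```python
import numpy as np

def asymmetry(wave, flux):
    """A = (HWHM_red - HWHM_blue) / FWHM, with FWHM = HWHM_red + HWHM_blue,
    measured at half of the peak flux density on a finely sampled profile."""
    peak = np.argmax(flux)
    half = 0.5 * flux[peak]
    above = np.where(flux >= half)[0]      # indices above half maximum
    blue, red = above[0], above[-1]        # half-max crossings (grid-limited)
    hwhm_blue = wave[peak] - wave[blue]
    hwhm_red = wave[red] - wave[peak]
    return (hwhm_red - hwhm_blue) / (hwhm_red + hwhm_blue)

# Synthetic profiles for illustration: a symmetric Gaussian, and a
# profile with a broader red wing (sigma 14) than blue wing (sigma 8).
w = np.linspace(-50.0, 50.0, 5001)
sym = np.exp(-0.5 * (w / 10.0) ** 2)
skew = np.where(w < 0,
                np.exp(-0.5 * (w / 8.0) ** 2),
                np.exp(-0.5 * (w / 14.0) ** 2))
```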
Compared to the predictions of the smooth-lens models, the measured ratio is larger by a factor of only $\sim 1.2$ in [OIII], but smaller by a factor of $\sim 2.1$ in the BL H$\beta$. Table~\ref{tbl-ifsspect} summarizes the flux ratios obtained between the quasar images A, B, and C, together with the ratios expected from the smooth-lens models. In order to derive the H$\beta$ and [OIII] flux ratios between quasar images, we have used the simple scaling and fitting methods above, without using any template for the FeII line emission. This is because we have found that the observed FeII line ratios differ slightly from the well-established I~Zw~1 template \citep{ver04} and that the FeII is better subtracted by the scaling between the quasar images themselves. Figure~\ref{psfmoffat} demonstrates in another way the suitability of our scaling method. The H$\beta$ residual image has been created by subtracting three point sources, with the H$\beta$ flux ratios derived above, from the H$\beta$ image shown in Figure~\ref{lineimage}. The residuals have an rms of only $3 \%$ of the peak value of quasar image A. \subsection{Uncertainty Estimates} \label{uncertainty} The uncertainties of the measured flux ratios in Table~\ref{tbl-ifsspect} have been estimated as follows. First, we have tried an alternative spectrum fitting procedure that differs from the procedure used in Figures~\ref{aboiiihbbroad} and \ref{bcoiiihbbroad}. In this new fitting procedure, we fit the line emission after removing the continuum emission from each image spectrum, rather than fitting the line and continuum emission simultaneously. This altered the flux ratios slightly, by 0.00--0.05. Secondly, we have estimated the mutual flux contamination between images due to point-like components, by using the model of three point sources shown in Figure~\ref{psfmoffat}. In the $0^{\prime\prime}.77$ aperture, the contaminations in broad H$\beta$ for images A, B, and C are 1 \%, 3 \%, and 5 \%, respectively.
The effects on the A$/$B and C$/$B ratios are $-2 \ \%$ and $+2 \ \%$. In the same aperture, the mutual contaminations between the ``point-like'' [OIII] components (see \S~\ref{discon}) are estimated as 2 \%, 3 \%, and 2 \%, respectively. The effects on the A$/$B and C$/$B ratios are both $-1 \%$. Thirdly, we have derived the flux ratios with the simultaneous (line $+$ continuum) fitting procedure, but using successively smaller apertures, down to the size of a 4-lenslet ($0^{\prime\prime}.38$ diameter). While the C$/$B ratio in [OIII] decreases by up to 0.07 as the aperture size is reduced, the other ratios vary by no more than $\pm 0.02$. These differences should in practice include various effects: mutual contamination, contributions from the spatially extended components, readout and photon noise, and flat fielding uncertainties. Lastly, we have considered the uncertainty of the A$/$B ratio in the broad H$\beta$ emission due to the slight difference in its wing profile between images A and B (\S~\ref{compab}). When, during the fitting, we reduce the weighting of each blue wing data point by half, the A$/$B ratio is 1.76. When we reduce the weighting of each red wing data point by half, on the other hand, the A$/$B ratio is 1.69. We have taken all the above into consideration when determining the uncertainties of the measured flux ratios in Table~\ref{tbl-ifsspect}. The effects of uncertainties of the aperture center positions are negligible because the positional uncertainties are smaller than one tenth of a lenslet. \section{DISCUSSION AND CONCLUSIONS} \label{discon} Anomalous flux ratios in lensed images can be induced by substructure in a host lens. As a reference for investigating such substructure lensing, we estimate the angular size of the radius of a line-emitting region, $\theta_{\rm S}$, and of an Einstein radius for a lens substructure, $\theta_{\rm E}$, based on the set of cosmological parameters $\Omega=0.3$, $\Lambda=0.7$, and $h=0.7$.
A source image with radius $R_{\rm S}$ at the redshift 0.658 of the quasar has an angular size of $\theta_{\rm S} \simeq 1.4 \times 10^{-4} (R_{\rm S}/1 {\rm pc})$ arcsec. For comparison, a lens having a mass $M_{\rm E}$ inside an Einstein angle yields $\theta_{\rm E} \simeq 6.7 \times 10^{-7} (M_{\rm E}/0.1 M_\odot)^{1/2}$ arcsec for the lens redshift of 0.295. Thus, the dimensionless ratio $\xi \equiv \theta_{\rm S} / \theta_{\rm E}$, which characterizes the degree of the lensing effect \citepalias{bla06}, is given as $\xi \simeq 200 (R_{\rm S}/1 {\rm pc})(M_{\rm E}/0.1 M_\odot)^{-1/2}$. \citet{wyi02} showed, based on detailed lensing simulations, that $\theta_{\rm E}$ must be at least an order of magnitude smaller than $\theta_{\rm S}$ for there to be no additional magnification. We therefore require $\xi \ga 10$ for an image to suffer no additional magnification by a lens substructure; otherwise the image should be subject to a flux anomaly. As presented in Table~\ref{tbl-ifsspect}, the flux ratios for the [OIII] line emission, which originates from the NLR with $R_{\rm S} \ga 100$ pc, are basically in agreement with those of a standard smooth lens consisting of an elliptical lens and an external shear. Also, the [OIII] flux ratios appear to be inconsistent with \citetalias{mor06}'s lens model, which posits a massive satellite with $\sim 5 \times 10^{10} M_\odot$ near image A in order to reproduce their time delay measurements. Such a massive lens perturber would affect the [OIII] flux significantly. The basic agreement of the [OIII] flux ratios with the smooth-lens model predictions suggests not only the validity of the smooth-lens models but also the absence of millilensing effects. A star with subsolar mass is unable to magnify the NLR because $\xi \sim 2 \times 10^4$ for $R_{\rm S}=100$ pc and $M_{\rm E}=0.1$ $M_\odot$. A more massive substructure such as a CDM subhalo, even if it exists near the light path to the NLR, is tightly constrained to have $M_{\rm E} < 10^5$ $M_\odot$.
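The order-of-magnitude estimates in this paragraph follow directly from the quoted scaling relations. A small sketch (the helper names are ours; the prefactors are those given in the text):

```python
import math

# Scaling relations quoted in the text (angles in arcsec):
#   theta_S ~ 1.4e-4 * (R_S / 1 pc)             source angular radius at z = 0.658
#   theta_E ~ 6.7e-7 * (M_E / 0.1 Msun)**(1/2)  Einstein angle at lens z = 0.295
#   xi = theta_S / theta_E ~ 200 * (R_S / 1 pc) * (M_E / 0.1 Msun)**(-1/2)
def theta_source(R_S_pc):
    return 1.4e-4 * R_S_pc

def theta_einstein(M_E_in_01Msun):
    return 6.7e-7 * math.sqrt(M_E_in_01Msun)

def xi(R_S_pc, M_E_in_01Msun):
    return theta_source(R_S_pc) / theta_einstein(M_E_in_01Msun)

# NLR (R_S ~ 100 pc) versus a 0.1 Msun star: xi ~ 2e4, i.e. far above
# the xi ~ 10 threshold, so the narrow lines escape microlensing.
xi_nlr = xi(100.0, 1.0)
# BLR (R_S ~ 0.01 pc): xi drops to a few, so microlensing is possible.
xi_blr = xi(0.01, 1.0)
```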
It is worth noting that the observed flux ratio C$/$B in the [OIII] line emission slightly differs from the model predictions, at about the 20 \% level, while the flux ratio A$/$B is well reproduced. This slight inconsistency with a smooth-lens model might be caused by a substructure with an extended, instead of point-like, mass distribution located at one of the lensed images. \citet{ino05} showed that for a substructure with a singular isothermal density distribution, the flux ratio can be changed by about 20~\% from a smooth-lens prediction even if $\xi \ga 10$, provided the substructure is centered exactly at the source image. Alternatively, and perhaps more likely, the discrepancy may arise from an asymmetric structure or light distribution intrinsic to the NLR \citep[e.g.,][]{schmitt03}. In contrast, a smooth-lens model assumes a uniform and circularly-symmetric NLR as a source image. In fact, this effect can partially be seen as an extension of the [OIII] emission between images A and C, as well as a slight extension of image B in the direction opposite to images A and C (Figure~\ref{lineimage}). This indicates that we have spatially resolved the NLR itself. Figure~\ref{oiiiextension} emphasizes these extensions in the subtraction of three ``point'' source models from the [OIII] line image. The total [OIII] flux of the extended emission between images A and C is about half that of the point-like component of image A at our resolution. The fractional contributions from the extended emission components are 16 \% -- 20 \% even in the $0^{\prime\prime}.77$ apertures of the quasar images. This causes the slight aperture dependence of the [OIII] flux ratios between images (particularly the C$/$B ratio) described in \S~\ref{uncertainty}. We have found, based on the lens modeling, that asymmetric configurations of lensed images can be caused by asymmetric structure in the NLR in the north-south direction, i.e.
if the north side of the NLR is more luminous or more spatially extended than the south side. An example of such models is shown in Figure~\ref{oiiiextensionmodel}. This asymmetry in the lensed images of the NLR would account for the slight difference between the measured flux ratios inside a finite aperture and the smooth-lens predictions based on a uniform and circularly-symmetric source. In contrast to the [OIII] line emission, the broad-line H$\beta$ emission, which originates from the BLR, shows a significant difference from the model prediction for the flux ratio C$/$B, i.e. C$/$B$=0.46$ compared with the predicted ratio of $0.91$ - $1.00$, while the flux ratio A$/$B is well reproduced by a smooth lens. This anomaly in the flux ratio C$/$B of the H$\beta$ line emission is caused neither by (i) dust extinction nor by (ii) time delay effects on the lensed images, as argued below. (i) There are no effects of dust extinction on the H$\beta$ line emission, because they are not seen in the [OIII]$\lambda \lambda$ 4959,5007 lines, whose wavelengths are close to that of H$\beta$. (ii) The time delay between lensed images B and C is a small fraction of a day, as estimated by our smooth-lens model as well as by \citetalias{slu03} and \citetalias{bla06}. This is too short for the BLR to change its line flux by a factor of two, because reverberation mapping \citep[e.g.,][]{pet93} indicates the sizes of BLRs in AGNs to be $10^{1-2}$ light days. For these reasons, the most likely explanation for the anomalous flux ratio C$/$B in the H$\beta$ line emission is micro/milli-lensing of image C. Microlensing of image C would be of particular importance, because there is a finite probability of de-amplification even though it is located at a minimum in the arrival time surface (Schechter \& Wambsganss 2002; see also Keeton 2003 for the case of millilensing).
We also note that simultaneous amplification of both images A and B by similar amounts is another possibility, but it appears to be unlikely because of the lack of such time variation in their continuum emission \citepalias{mor06}. In addition to the micro/milli-lensing of image C, microlensing of image A appears to have occurred, such that its continuum region (CR) is microlensed while its BLR is only partially affected. \citetalias{mor06} have shown that the brightness of image A in optical bands has been increasing over recent years, probably because of recovery from a de-amplified stage due to microlensing. Image A was once fainter than image B \citepalias{slu03}. Using Table 1 of \citetalias{mor06}, we have found that the $R$-band flux ratios at the time of our observations (HJD$=$2453411) read A$/$B$\simeq 1.2$ and C$/$B$\simeq 0.5$, implying that the CR of image A is microlensed. As shown in \S~\ref{compab}, the asymmetric profile of the H$\beta$ line for image A, i.e., the weaker blue wing compared with the red wing, may also indicate partial microlensing of the BLR; such an asymmetric profile can be caused by microlensing of the part of the BLR that is rotating away from us. Thus, the Einstein radius of a substructure (most probably a star) near image A must be small compared with the size of the BLR. In contrast, a substructure located near image C would have to be more massive than that near image A, because it would have to affect the emission from both the CR and the BLR of image C, since the flux ratios C$/$B in both regions ($\sim 0.5$) are almost the same. The likely radius of the BLR, $R_{\rm BLR}$, is estimated as about $0.01$ - $0.05$ pc, using the relation between $R_{\rm BLR}$ and the intrinsic luminosity of H$\beta$, $L({\rm H}\beta)$, obtained by \citet{kas05}.
For $L({\rm H}\beta)$, we estimate $L({\rm H}\beta) \sim (1.8$-$4.3) \times 10^{42}$ erg~s$^{-1}$ as derived from the observed H$\beta$ flux (which is corrected for the limited aperture size and includes the uncertainty of the absolute flux calibration) after taking into account (i) the dust extinction in the lensing galaxy and (ii) the lensing magnification predicted by lens models, as detailed below. (i) Our estimate of dust extinction in the lensing galaxy, which is an elliptical galaxy as deduced from the presence of typical absorption lines \citepalias{slu03}, is based on the work by \citet{eli06}. Using their sample of mostly early-type lenses, they derived a mean differential extinction for the most extinguished image pair of each lens of $A_V \sim 0.56$ mag ($R_V \sim 2.8$). We thus adopt $A_V \sim 0$ - $0.56$ mag as a possible range of the absolute extinction, which corresponds to $A_{{\rm 0.62}\mu{\rm m}} \sim 0$ - $0.48$ mag in the rest frame of the lensing galaxy. (ii) For the lensing magnification, we adopt the predictions of smooth-lens models, yielding 12.4 (this work) to 14.7 \citep{slu03} for the magnification factor of image B. Admittedly, $L({\rm H}\beta)$ estimated in this manner is subject to some systematics, and the derived $R_{\rm BLR}$ also includes uncertainties associated with an intrinsic scatter in the $R_{\rm BLR}$ vs. $L({\rm H}\beta)$ relation \citep{kas05}. Nonetheless, the range of the estimated $R_{\rm BLR}$ of 1RXS J1131-1231, $0.01$ - $0.05$ pc, seems to be comparable to that of a luminous Seyfert 1 rather than to the typical size in luminous quasars ($\sim 0.1$ pc, taken from Fig.~2 of \citealt{kas05}). This is actually consistent with the work by \citetalias{slu03} who, based on the estimation of its unlensed absolute magnitude, suggested that the source object in the 1RXS J1131-1231 system is a Seyfert~1.
Based on our estimate of $R_{\rm BLR} = 0.01$ - $0.05$ pc, the condition $\xi \la 10$ for the substructure lensing of image C's H$\beta$ line emission requires $M_{\rm E} \ga 0.1$ $M_\odot$ for the mass of a substructure near image C. On the other hand, for a substructure near image A, $M_{\rm E}$ must be as small as 0.01 - 0.1 $M_\odot$. If the mass were larger, the H$\beta$ flux of image A would differ significantly from the smooth-lens predictions, which is not the case (see Table~\ref{tbl-ifsspect}). In summary, we have observed the quadruply lensed quasar 1RXS J1131-1231 with the IFS mode of the Kyoto 3DII. The simultaneous observation of the three brighter quasar images at high spatial resolution has enabled us to measure accurate flux ratios between them in the H$\beta$ and [OIII] lines, and to clarify the cause of the anomalous flux ratios also seen in the continuum emission. We have found that the flux ratios in the NLR [OIII] line can be explained by smooth-lens models, with only a slight deviation from the model predictions. The deviation is likely caused by asymmetric structure in the NLR. The absence of microlensing and/or millilensing effects on the [OIII] line emission sets a tight limit of $M_{\rm E} < 10^5 M_\odot$ along the light path to the NLR. The BLR H$\beta$ shows a C$/$B line flux ratio that is smaller than the smooth-lens model predictions. This is most likely caused by micro/milli-lensing of image C. The slight difference of the broad H$\beta$ line profile in image A from those in the other images suggests the presence of a small microlensing effect on image A. Our results demonstrate that IFS observations of gravitational lens systems are useful for understanding the mass distribution of lensing galaxies and the internal structure of lensed sources. \acknowledgments We thank the staff at Subaru Telescope for their help during the observing run. We acknowledge partial financial support from NAOJ, including Subaru Telescope.
We also thank S. Ozaki, H. Ohtani, T. Hayashi, T. Ishigaki, M. Ishii, M. Sasaki, Y. Okita, T. Minezaki, and R. I. Davies for discussions, and the referee for helpful comments. This work is supported by the Grant-in-Aid for the 21st Century COE ``Center for Diversity and Universality in Physics'' from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan. Facilities: \facility{Subaru(Kyoto 3DII)}.
\section{Introduction}In~\cite{VX} we initiated a study of character sheaves for ${\mathbb Z}/m{\mathbb Z}$-graded Lie algebras. The invariant theory of graded Lie algebras was studied by Vinberg in~\cite{V} where the classical graded Lie algebras were divided into types I, II and III. In~\cite{VX,VX2} we focus on type I classical graded Lie algebras (and some exceptional cases), which is the most complicated type in the following sense. There are families of cuspidal character sheaves, with full support, associated to irreducible representations of various Hecke algebras of complex reflections groups at roots of unity. In this note we study character sheaves in the cases of gradings arising from inner automorphisms of special linear groups, which we refer to as type AI, and Vinberg's type II classical graded Lie algebras. The latter can be viewed as the simplest type, in the sense that we do not expect the existence of cuspidal character sheaves. For type III, we expect that cuspidal character sheaves are rare and further that they all have nilpotent support, as in Lusztig's generalised Springer correspondence~\cite{L1}. We will deal with type III and make connection (in all types) to the Lusztig-Yun work~\cite{LY} in future publications. Let us briefly recall the set-up. Let $G$ be a reductive algebraic group and $\theta:G\to G$ an order $m$ automorphism of $G$. We have a decomposition ${\mathfrak g}=\oplus_{i\in{\mathbb Z}/m{\mathbb Z}}{\mathfrak g}_i$ of ${\mathfrak g}=\operatorname{Lie}G$ into eigenspaces of $d\theta$, where $d\theta|_{{\mathfrak g}_i}=\zeta_m^i$ and we write $\zeta_k=e^{2\pi\mathbf{i}/k}$ for $k\in{\mathbb Z}_{+}$. Let $K=(G^\theta)^0$. By Vinberg~\cite{V}, the invariant theory of $K$-action on ${\mathfrak g}_1$ is analogous to that of the adjoint action of $G$ on ${\mathfrak g}$. 
In particular, there exists a Cartan subspace ${\mathfrak a}\subset{\mathfrak g}_1$ consisting of semisimple elements such that ${\mathbb C}[{\mathfrak g}_1]^K\cong{\mathbb C}[{\mathfrak a}]^{W_{\mathfrak a}}$, where $W_{\mathfrak a}\cong N_K({\mathfrak a})/Z_K({\mathfrak a})$ is the little Weyl group. The group $W_{\mathfrak a}$ is in general a complex reflection group. We are interested in describing explicitly the simple $K$-equivariant perverse sheaves on ${\mathfrak g}_1$ that are Fourier transforms of simple $K$-equivariant perverse sheaves on ${\mathcal N}_{-1}={\mathfrak g}_{-1}\cap{\mathcal N}$, where ${\mathcal N}$ is the nilpotent cone of ${\mathfrak g}$. Here Fourier transform refers to the functor ${\mathfrak{F}}:\operatorname{Perv}_K({\mathfrak g}_{-1})\to\operatorname{Perv}_K({\mathfrak g}_{1})$, where we have identified ${\mathfrak g}_1$ with ${\mathfrak g}_{-1}^*$. We call these perverse sheaves character sheaves for graded Lie algebras, and write $\operatorname{Char}_K({\mathfrak g}_1)$ for them. Let ${\mathcal A}_K({\mathcal N}_{-1})$ denote the set of simple $K$-equivariant perverse sheaves on ${\mathcal N}_{-1}$, called nilpotent orbital complexes. We have $\operatorname{Char}_K({\mathfrak g}_1)={\mathfrak{F}}({\mathcal A}_K({\mathcal N}_{-1}))$ by definition. A character sheaf is called cuspidal if it does not arise as a direct summand (up to degree shift) from parabolic induction of character sheaves in $\operatorname{Char}_{(L^\theta)^0}({\mathfrak{l}}_1)$ for any $\theta$-stable Levi subgroup $L$ contained in a $\theta$-stable parabolic subgroup. We write $\operatorname{Char}_K^{cusp}({\mathfrak g}_1)$ for them. We also write $\operatorname{Char}_K^{{\mathrm n}}({\mathfrak g}_1)$ (resp. $\operatorname{Char}_K^{{\mathrm f}}({\mathfrak g}_1)$) for the set of nilpotent support (resp. full support) character sheaves. 
In this note we describe the character sheaves explicitly (Theorem~\ref{cs-sl} for type AI and Theorem~\ref{cs} for type II) under the assumption that the conjectural description (Conjecture~\ref{conj-nilp} and Conjecture~\ref{conj-biorbital}) of the set $\operatorname{Char}_K^{{\mathrm n}}({\mathfrak g}_1)$ of nilpotent support character sheaves holds. We also give a conjectural description of cuspidal character sheaves for type AI (Conjecture~\ref{conj-cusp}). Recall that we expect that there are no cuspidal character sheaves in type II. In Section~\ref{sec-pre} we recall the graded Lie algebras considered here and write down the quiver description of the $K$-action on ${\mathfrak g}_1$ following~\cite{V,Y}. We also recall some braid group representations that will be used to describe the character sheaves. In Section~\ref{sec-nilp} we parametrize the nilpotent orbits and determine their equivariant fundamental groups. In particular, we determine the distinguished nilpotent orbits. We also write down the total number of nilpotent orbits (Lemma~\ref{lem-nb1}) and that of distinguished orbits (Lemma~\ref{lem-nb2}). We discuss the dual strata which give rise to supports of character sheaves. In Section~\ref{sec-cs-sl} we deal with inner automorphisms of special linear groups. As in~\cite{VX3} we make use of a generalised nearby cycle construction and parabolic induction to produce character sheaves. We note that the character sheaves with trivial central character have been described in~\cite{L}. In Section~\ref{cs-typeII} we discuss type II. In particular, we expect that all nilpotent support character sheaves arise from parabolic induction of $\theta$-stable Borel subgroups. {\bf Acknowledgement.} I thank Pavel Etingof, Oscar Kivinen, Ivan Losev, George Lusztig, Emily Norton, Peng Shan, Cheng-Chiang Tsai, Kari Vilonen, and Zhiwei Yun for helpful discussions. Special thanks are due to Dennis Stanton for teaching me the counting arguments. 
\section{Preliminaries}\label{sec-pre} \subsection{Classical graded Lie algebras}\label{ssec-gla} We recall the explicit description of gradings arising from inner automorphisms of special linear groups and type II classical graded Lie algebras, following Vinberg \cite{V}. Let $\theta:G\to G$ be an order $m$ automorphism. Recall $K=(G^\theta)^0$ and $\zeta_k=e^{2\pi\mathbf{i}/k}$ for $k\in{\mathbb Z}_{+}$. {\bf Type AI.} Let $G=SL_V$, where $V$ is a complex vector space of dimension $N$. We can assume that $\theta(g)=\gamma g\gamma^{-1}$, where $\gamma\in G$ and $\gamma^m=1$. {\bf Type AII.} Let $G=SL_V$, where $V$ is a complex vector space of dimension $2n$ equipped with a non-degenerate bilinear form $(-,-)$ defined by $(v,w)=v^tAw$, $v,w\in V$. We can assume that $ \theta(g)=A^{-1}(g^t)^{-1}A$, $g\in G$. We have $\theta^2(g)=\gamma g\gamma^{-1}$, where $\gamma=A^{-1}A^t$. Then $m=2m_0$ for some odd $m_0\in{\mathbb Z}_{+}$ and we can assume that $\gamma^{m_0}=-1$. {\bf Type CII (resp. DII).} Let $G=Sp_V$ (resp. $SO_V$), where $V$ is a complex vector space of dimension $2n$ equipped with a non-degenerate symplectic (resp. symmetric bilinear) form $(\,,\,)$. We can assume that $ \theta(g)=\gamma g\gamma^{-1},\,\gamma\in Sp_V \text{ (resp. $O_V$)}. $ We have that $m$ is even and $\gamma^{m}=1$ (resp. $-1$). In each case we have $V=\oplus V_\lambda$, where $V_\lambda:=\{v\in V\mid \gamma v=\lambda v\}$, $\lambda\in{\mathbb C}$. Moreover, \begin{eqnarray*} &{\mathfrak g}_k=\{x\in{\mathfrak g}\mid xV_\lambda\subset V_{\zeta_{m_0}^k\lambda},\ (xv,w)+\zeta_{m}^k(v,xw)=0,\ \forall v,w\in V\}&\text{AII} \\ &{\mathfrak g}_k=\{x\in{\mathfrak g}\mid xV_\lambda\subset V_{\zeta_{m}^k\lambda}\}&\text{AI,\,CDII}\\ &W_{\mathfrak a}=G_{m_0,1,r} \text{ (resp. $G_{m,1,r}$)}&\text{AII (resp. AI, CDII)}. \end{eqnarray*} Let \begin{eqnarray*} l=(m_0-1)/2\text{ (resp. $l=m/2$)}&&\text{AII (resp. CDII)}\\ \text{ and $M_i=V_{\xi_{m}^{-i}}$ (resp. $V_{\xi_{m}^{1-2i}}$, resp. 
$V_{\xi_{2m}^{1-2i}}$)}&&\text{AI, CII (resp. AII, resp. DII)}. \end{eqnarray*} We have \begin{equation*} r=\operatorname{dim}{\mathfrak a}=\operatorname{min}\{\operatorname{dim}M_i\}\text{ (resp. $\min\{[\frac{\operatorname{dim}M_i}{2}]\}$)}\ \ \text{ AI (resp. ACDII)}. \end{equation*} We have the following description of the pair $(K,{\mathfrak g}_1)$ using quivers, following~\cite{Y} (see also~\cite{YY}). We write $\mathbf{d}=(d_i)$, $d_i=\operatorname{dim}M_i$, for the dimension vector of the quiver, and $|\mathbf{d}|=\sum d_i$. {\bf Type AI.} We can identify ${\mathfrak g}_1$ with the representations of the following cyclic quiver \begin{equation*} \xymatrix@R=2mm{&M_2\ar[dl]&\cdots\ar[l]&M_{k}\ar[l]&\\M_1\ar[dr]&&&&M_{k+1}\ar[ul]\\&M_m\ar[r]&\cdots\ar[r]&M_{k+2}\ar[ur]&} \end{equation*} We have \begin{eqnarray*} K\cong\prod_{j=1}^m GL_{M_j},\ {\mathfrak g}_{1}\cong\bigoplus_{i=1}^m \operatorname{Hom}(M_i,M_{i-1}). \end{eqnarray*} {\bf Type AII.} We have $(M_i,M_j)=0$ unless $i+j\equiv 1\mod m_0$, and $(,)|_{M_{l+1}}$ is a non-degenerate symplectic form. In particular, $d_{l+1}$ is even. We can identify ${\mathfrak g}_1$ with the set of representations of the following cyclic quiver \begin{equation*} \xymatrix@R=2mm{&M_1\ar[dd]_{x_1}&M_2\ar[l]_{x_2}&\cdots\ar[l]&M_{l}\ar[l]_{x_{l}}\\&&&&&M_{l+1}\ar[ul]_-{x_{l+1}}\\&M_{2l+1}\ar[r]_{x_2^*}&M_{2l}\ar[r]&\cdots\ar[r]_{x_{l}^*}&M_{l+2}\ar[ur]_{x_{l+1}^*}} \end{equation*} such that $(x_iv_i,w_{m_0+2-i})+\zeta_m(v_i,x_i^*w_{m_0+2-i})=0$, $i\in[2,l+1]$, and $(x_1v_{1},w_{1})=-(x_1w_{1},v_{1})$, for all $v_j,w_j\in M_j$. We have \begin{equation*} \begin{gathered} K\cong \prod_{j=1}^lGL_{M_j}\times Sp_{M_{l+1}},\ {\mathfrak g}_1\cong\bigoplus_{i=1}^l\operatorname{Hom}(M_{i+1},M_{i})\oplus\wedge^2(M_{1}^*). \end{gathered} \end{equation*} {\bf Type CII.} We have $( M_i,M_j)=0$ unless $i+j\equiv 0\mod m$, $(\,,\,)|_{M_l}$ and $(\,,\,)|_{M_m}$ are both non-degenerate. In particular both $d_l$ and $d_m$ are even. 
We can identify ${\mathfrak g}_1$ with the set of representations of the following cyclic quiver \begin{equation*} \xymatrix@R=2mm{&M_1\ar[dl]_{x_1}&M_2\ar[l]_{x_2}&\cdots\ar[l]&M_{l-1}\ar[l]_{x_{l-1}}\\M_{2l}\ar[dr]_{x_1^*}&&&&&M_l\ar[ul]_-{x_l}\\&M_{2l-1}\ar[r]_{x_2^*}&M_{2l-2}\ar[r]&\cdots\ar[r]_{x_{l-1}^*}&M_{l+1}\ar[ur]_{x_l^*}} \end{equation*} such that $(x_iv_i,w_{m+1-i})+(v_i,x_i^*w_{m+1-i})=0$, $i\in[1,l]$, for all $v_j,w_j\in M_j$. We have \begin{equation*} \begin{gathered} K\cong \prod_{j=1}^{l-1}GL_{M_j}\times Sp_{M_l}\times Sp_{M_m},\ {\mathfrak g}_1\cong\bigoplus_{j=1}^l\operatorname{Hom}(M_j,M_{j-1}). \end{gathered} \end{equation*} {\bf Type DII.} We have $(M_i,M_j)=0$ unless $i+j\equiv 1\mod m$. We can identify ${\mathfrak g}_1$ with the set of representations of the following cyclic quiver \begin{equation*} \xymatrix@R=4.5mm{&M_1\ar[d]_{x_1}&M_2\ar[l]_{x_2}&\cdots\ar[l]&M_l\ar[l]_{x_l}\\&M_{2l}\ar[r]_{x_2^*}&M_{2l-1}\ar[r]&\cdots\ar[r]_{x_l^*}&M_{l+1}\ar[u]_{x_{l+1}}} \end{equation*} such that $(x_iv_i,w_{m+2-i})+(v_i,x_i^*w_{m+2-i})=0$, $i\in[2,l]$, and $(x_1v_{1},w_{1})=-(x_1w_{1},v_{1})$, and $(x_{l+1}v_{l+1},w_{l+1})=-(x_{l+1}w_{l+1},v_{l+1})$, for all $v_j,w_j\in M_j$. We have \begin{equation*} \begin{gathered} K\cong \prod_{j=1}^lGL_{M_j},\ \ {\mathfrak g}_1\cong\bigoplus_{j=2}^{l}\operatorname{Hom}(M_j,M_{j-1})\oplus \wedge^2(M_1^*)\oplus \wedge^2(M_l). \end{gathered} \end{equation*} \subsection{Hecke algebras associated to complex reflection groups}\label{Hecke}We make use of the notations in~\cite[\S7.1]{VX}. Let $\iota|m$. Let ${\mathcal H}^\iota_{{G_{m,1,k}}}$ denote the Hecke algebra associated to the complex reflection group $G_{m,1,k}$ defined as the quotient of the group algebra ${\mathbb C}[B_{G_{m,1,k}}]$ by the ideal generated by the elements (see~\cite{BMR}) \begin{equation*} \sigma_{H_{s_{ij}^{(p)}}}^2-1,\,1\leq i<j\leq k,\,0\leq p\leq m-1,\ \ (\sigma_{H_{\tau_i}}^\iota-1)^{m/\iota},\,1\leq i\leq k. 
\end{equation*} By~\cite[Section 3]{M}, the irreducible representations of ${\mathcal H}_{G_{m,1,k}}^\iota$ are parametrized by $\iota$-partitions of $k$, that is, $\iota$-tuples of partitions $(\nu^1,\ldots,\nu^\iota)$ such that $\sum_{i=1}^\iota|\nu^i|=k$. We write ${\mathcal P}_\iota(k)$ for the set of $\iota$-partitions of $k$, and write ${\mathcal P}(k)={\mathcal P}_1(k)$. For each $\tau\in{\mathcal P}_\iota(k)$, let $L_\tau$ denote the irreducible representation of ${\mathcal H}_{G_{m,1,k}}^\iota$ corresponding to $\tau$. We will also write $L_\tau$ for the irreducible representation of $B_{G_{m,1,k}}$ obtained by pulling back $L_\tau$ via ${\mathbb C}[B_{G_{m,1,k}}]\to{\mathcal H}^\iota_{G_{m,1,k}}$. \section{Nilpotent orbits and component groups}\label{sec-nilp} In this section we describe the parametrization of $K$-orbits in ${\mathcal N}_1$ (or equivalently, ${\mathcal N}_{-1}$) and the component groups $A_K(x):=Z_K(x)/Z_K(x)^0$, $x\in{\mathcal N}_1$ (or ${\mathcal N}_{-1}$). \subsection{Young $(k,\pm)$-diagrams}Fix a positive integer $k$. We say that a Young diagram is a $(k,{+})$-diagram (resp. $(k,-)$-diagram) if the rows of the Young diagram are filled with consecutive {\em decreasing} (resp. {\em increasing}) numbers from $\{1,\ldots,k\}$, where $k$ is identified with $0$. Two Young $(k,\pm)$-diagrams are regarded as equivalent if they can be obtained from each other by interchanging rows. We write \begin{equation}\label{yda} \lambda^{\pm}=(\lambda_1)_1^{p_1^1}\cdots(\lambda_1)^{p_1^{k}}_k\cdots (\lambda_s)_1^{p_s^1}\cdots(\lambda_s)^{p_s^{k}}_{k} \end{equation} for the Young $(k,\pm)$-diagram such that $\lambda=(\lambda_1)^{p_1^1+\ldots +p_1^k}\cdots (\lambda_s)^{p_s^1+\ldots +p_s^k}$ is the underlying partition with $\lambda_1>\lambda_2>\cdots>\lambda_s>0$, and there are $p_i^j$ rows of length $\lambda_i$ that start with a beginning box $j$.
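To fix the conventions, consider a small illustrative example (of our own, consistent with the definitions just given): for $k=3$, the $(3,+)$-diagram $\lambda^{+}=(3)_1(2)_2$ in the notation of~\eqref{yda} has $s=2$, $\lambda_1=3$, $\lambda_2=2$, $p_1^1=p_2^2=1$, and all other $p_i^j=0$. Its rows, filled with consecutive decreasing labels from $\{1,2,3\}$ (with $3$ identified with $0$), are

```latex
\begin{equation*}
\lambda^{+}=(3)_1(2)_2:\qquad
\begin{array}{|c|c|c|}
\hline
1 & 3 & 2 \\
\hline
\end{array}
\qquad
\begin{array}{|c|c|}
\hline
2 & 1 \\
\hline
\end{array}
\end{equation*}
```

so the underlying partition is $\lambda=(3,2)$, and the diagram contains two boxes labelled $1$, two labelled $2$, and one labelled $3$.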
We write $|\lambda^{\pm}|=\sum_{i=1}^s(\sum_{j=1}^kp_i^j)\lambda_i$ and \begin{equation*} \mathbf{d}(\lambda^\pm)=(d_1(\lambda^\pm),\ldots,d_k(\lambda^\pm)) \end{equation*} where $d_i(\lambda^\pm)$ denotes the number of boxes filled with $i$ in $\lambda^\pm$. We will make use of Young $(k,+)$ (resp. $(k,-)$)-diagrams to parametrize nilpotent orbits in ${\mathcal N}_1$ (resp. ${\mathcal N}_{-1}$). Fix a dimension vector $\mathbf{d}=(d_1,\ldots,d_k)$. Let us write $\Sigma_{k,\mathbf{d}}^{\pm}$ for the set of Young $(k,\pm)$-diagrams consisting of $\lambda^\pm$ such that $\mathbf{d}(\lambda^\pm)=\mathbf{d}$. Let $\mathbf{1}_k=(1,\ldots,1)\in{\mathbb N}^k$. We will often omit the superscript $\pm$ when it is clear from the context. \subsection{Nilpotent orbits and component groups of centralisers}The nilpotent $K$-orbits in ${\mathcal N}_{\pm 1}$ are parametrized as follows. {\bf Type AI.} The set $\Sigma_{m,\mathbf{d}}^{\pm}$. {\bf Type AII.} The subset $\Sigma_{m_0,{\mathbf d}}^{A,\pm}\subset \Sigma_{m_0,{\mathbf d}}^{\pm}$ consisting of Young $(m_0,\pm)$-diagrams of the form~\eqref{yda} such that \begin{eqnarray}\label{orbit-1} &p_i^a=p_i^b\text{ if }a+b\equiv\lambda_i\nmod m_0,\ p_i^a\text{ is even if }2a\equiv\lambda_i\nmod m_0,\ i\in[1,s]. \end{eqnarray} {\bf Type CII.} The subset $\Sigma_{m,{\mathbf d}}^{C,\pm}\subset \Sigma_{m,{\mathbf d}}^{\pm}$ consisting of Young $(m,\pm)$-diagrams of the form~\eqref{yda} such that \begin{eqnarray}\label{orbit-2} &p_i^a=p_i^b\text{ if }a+b\equiv\lambda_i-1\nmod m,\ p_i^a\text{ is even if }2a\equiv\lambda_i-1\nmod m,\ i\in[1,s]. \end{eqnarray} {\bf Type DII.} The subset $\Sigma_{m,{\mathbf d}}^{D,\pm}\subset \Sigma_{m,{\mathbf d}}^{\pm}$ consisting of Young $(m,\pm)$-diagrams of the form~\eqref{yda} such that \begin{eqnarray}\label{orbit-3} &p_i^a=p_i^b\text{ if }a+b\equiv\lambda_i\nmod m,\ p_i^a\text{ is even if }2a\equiv\lambda_i\nmod m,\ i\in[1,s].
\end{eqnarray} Given a Young diagram $\lambda$, we write ${\mathcal O}_\lambda$ for the corresponding nilpotent $K$-orbit. Let $x\in{\mathcal O}_\lambda$. We have \begin{eqnarray} \label{comp-1}&A_K(x)={\mathbb Z}/d_\lambda{\mathbb Z},\ d_\lambda:=\operatorname{gcd}(\lambda_1,\ldots,\lambda_s)&\text{type AI}\\ \label{comp-2}&A_K(x)=1&\text{type ACDII}. \end{eqnarray} The above statement for type AI is well-known. In the remainder of this subsection, we explain the parametrization of nilpotent orbits in type II and prove~\eqref{comp-2}. These facts are probably known to experts. For completeness and to fix notation, we include them here. For $x\in{\mathcal O}_\lambda$, let $\phi_x=(x,y,h)$ be a normal $\mathfrak{sl}_2$-triple associated to $x$, where $h\in{\mathfrak g}_0$. We work with ${\mathcal N}_1$. The case of ${\mathcal N}_{-1}$ is entirely similar. Let $x\in{\mathcal N}_1$. Suppose that $x$ has a Jordan block $v_a\xrightarrow{x} v_{a-1}\xrightarrow{x}\cdots\xrightarrow{x} v_{a-p+1}\xrightarrow{x}0$ of size $p$, where $v_a\in M_{a}$. Let $W_p=\operatorname{span}\{v_{a-i},i\in[0,p-1]\}$. We show that $(\,,\,)|_{W_p}=0$. We have $(x^{p-1}v_a,x^jv_a)=(-1)^{p-1}c_p(v_a,x^{p-1+j}v_a)=0$ if $j\geq 1$, where $c_p=\zeta_m^{p-1}$ in type A, and $c_p=1$ in type CD. It remains to show that $(x^{p-1}v_a,v_a)=0$.\\ (Type AII) Suppose first that $p=2k$. We have $ (x^{2k-1}v_a,v_a)=(-\xi_m)^{k-1}(x^kv_a,x^{k-1}v_a). $ If $2a-2k\not\equiv0\,\nmod m_0$, then $(x^kv_a,x^{k-1}v_a)=0$. If $2a-2k\equiv0\,\nmod m_0$, then $x^{k-1}v_a\in M_1$ and $(x^kv_a,x^{k-1}v_a)=-(x^kv_a,x^{k-1}v_a)$ implies that $(x^kv_a,x^{k-1}v_a)=0$. Suppose next that $p=2k+1$. We have $ (x^{2k}v_a,v_a)=(-\xi_m)^{k}(x^kv_a,x^{k}v_a).$ If $2a-2k\not\equiv1\,\nmod m_0$, then $(x^kv_a,x^{k}v_a)=0$; if $2a-2k\equiv1\,\nmod m_0$, then $x^kv_a\in M_{l+1}$ and $(x^kv_a,x^{k}v_a)=0$ as $(\,,\,)|_{M_{l+1}}$ is symplectic.\\ (Type CII) If $p$ is even, since $2a-p+1\not\equiv0\,\nmod m$, $ ( x^{p-1}v_a,v_a)=0.
$ If $p$ is odd, then $( x^{p-1}v_a,v_a)=(-1)^{\frac{p-1}{2}}( x^{\frac{p-1}{2}}v_a,x^{\frac{p-1}{2}}v_a)=0$.\\ (Type DII) If $p$ is even, then $(x^{p-1}v_a,v_a)=(-1)^{p-1}(v_a,x^{p-1}v_a)$ implies that $(x^{p-1}v_a,v_a)=0$. If $p$ is odd, $(x^{p-1}v_a,v_a)=(-1)^{(p-1)/2}(x^{(p-1)/2}v_a,x^{(p-1)/2}v_a)=0$, since $2a-p+1\not\equiv1\,\nmod m$. It follows that all Jordan blocks appear in pairs. More precisely, if $x$ has a Jordan block $v_a\xrightarrow{x} v_{a-1}\xrightarrow{x}\cdots\xrightarrow{x} v_{a-p+1}\xrightarrow{x}0$ of size $p$, where $v_a\in M_{a}$, then $x$ has another Jordan block $v_b\xrightarrow{x} v_{b-1}\xrightarrow{x}\cdots\xrightarrow{x} v_{b-p+1}\xrightarrow{x}0$ of size $p$, where $v_b\in M_b$ and $a+b-p\equiv 0\,\nmod m_0$ (resp. $a+b-p+1\equiv0\,\nmod m$, $a+b-p\equiv0\,\nmod m$) in type AII (resp. CII, DII). Thus~\eqref{orbit-1},~\eqref{orbit-2} and~\eqref{orbit-3} hold. Let $x\in{\mathcal O}_\lambda$, $\lambda\in\Sigma_{m_0}^A$. We have $G^{\phi_x}\cong S(\prod_{j=1}^sGL_{\sum_{i=1}^{m_0}p_j^i})$. Note that $(x^{k-1}v,w)=-(x^{k-1}w,v)$ if $v,w\in M_a$ and $2a\equiv k\,\nmod m_0$. It follows that $\theta|_{G^{\phi_x}}$ is of type II. Let $x\in{\mathcal O}_\lambda$, $\lambda\in\Sigma_m^C$. We have $G^{\phi_x}\cong (\prod_{\lambda_j\text{ odd }}Sp_{\sum_{i=1}^mp_j^i})\times(\prod_{\lambda_j\text{ even}}O_{\sum_{i=1}^mp_j^i})$. Since $2a\not\equiv\lambda_i-1\,\nmod m$ for even $\lambda_i$, $\theta|_{O_{\sum_{i=1}^mp_j^i}}$ is of type II. Let $x\in{\mathcal O}_\lambda$, $\lambda\in\Sigma_m^D$. We have $G^{\phi_x}\cong (\prod_{\lambda_j\text{ even}}Sp_{\sum_{i=1}^mp_j^i})\times(\prod_{\lambda_j\text{ odd}}O_{\sum_{i=1}^mp_j^i})$. Since $2a\not\equiv\lambda_i\,\nmod m$ for odd $\lambda_i$, $\theta|_{O_{\sum_{i=1}^mp_j^i}}$ is of type II. We conclude that $A_K(x)\cong K^{\phi_x}/(K^{\phi_x})^0=1$. \subsection{Number of nilpotent orbits in type II}Recall $m_0=2l+1$ and $m=2l$.
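Each of the counts in this subsection is a weighted sum over ordinary partitions, so the closed product formulas below can be checked numerically for small $n$. The following Python sketch (all function names are our own, for illustration only) compares $\sum_{\lambda\in{\mathcal P}(n)}\prod_i{k_i+l\choose l}$ with the coefficients of $\prod_{k\geq 1}(1-x^k)^{-(l+1)}$, which is the type AII identity of Lemma~\ref{lem-nb1}.

```python
from math import comb, prod

def partitions(n, max_part=None):
    """Yield partitions of n as lists of (part, multiplicity) pairs."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield []
        return
    for p in range(min(n, max_part), 0, -1):
        for mult in range(n // p, 0, -1):
            for rest in partitions(n - p * mult, p - 1):
                yield [(p, mult)] + rest

def weighted_count(n, l):
    """Sum over partitions of n of prod_i C(k_i + l, l) -- the type AII weight."""
    return sum(prod(comb(mult + l, l) for _, mult in lam)
               for lam in partitions(n))

def series_coeffs(l, nmax):
    """Coefficients of prod_{k>=1} (1 - x^k)^(-(l+1)) up to x^nmax."""
    c = [1] + [0] * nmax
    for k in range(1, nmax + 1):
        for _ in range(l + 1):        # multiply by 1/(1-x^k), l+1 times
            for i in range(k, nmax + 1):
                c[i] += c[i - k]
    return c
```

For $l=0$ both sides reduce to the partition numbers $1,1,2,3,5,7,\ldots$; the same scheme, with the weights $\omega_\lambda^C$, $\omega_\lambda^D$ in place of $\omega_\lambda^A$, checks the other two formulas.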
Let \begin{equation*} \Sigma_{m_0,2n}^A=\bigsqcup_{|{\mathbf d}|=2n}\Sigma_{m_0,{\mathbf d}}^A,\ \ \Sigma_{m,2n}^C=\bigsqcup_{|{\mathbf d}|=2n}\Sigma_{m,{\mathbf d}}^C,\ \ \Sigma_{m,2n}^D=\bigsqcup_{|{\mathbf d}|=2n}\Sigma_{m,{\mathbf d}}^D. \end{equation*} \begin{lemma}\label{lem-nb1}We have \begin{eqnarray*} &&\sum_n|\Sigma_{m_0,2n}^A|x^n=\prod_{k\geq 1}\frac{1}{(1-x^{k})^{l+1}},\ \ \sum_{n\geq 0}|\Sigma_{m,2n}^C|x^n=\prod_{k\geq 1}\frac{1+x^k}{(1-x^{k})^{l}}\\ &&\sum_{n\geq 0}|\Sigma_{m,2n}^D|x^n=\prod_{k\geq 1}\frac{1}{(1-x^{k})^{l+1}(1+x^k)}. \end{eqnarray*} \end{lemma} \begin{proof} We can count $|\Sigma_{m_0,2n}^A|$ as follows. For a partition $\lambda=(\lambda_1)^{k_1}\cdots(\lambda_s)^{k_s}$ of $n$, we associate weight $\omega_\lambda^A=\prod_{i=1}^s{k_i+l\choose l}$. Then we have $|\Sigma_{m_0,2n}^A|=\sum_{\lambda\in{\mathcal P}(n)}\omega_\lambda^A$. Note that $\sum_{k=0}^\infty{k+l\choose l}t^k=\frac{1}{(1-t)^{l+1}}$. Thus \begin{equation*} \sum_n |\Sigma_{m_0,2n}^A|x^n=\prod_{k\geq 1}\sum_{j=0}^\infty x^{kj}{j+l\choose l}=\prod_{k\geq 1}\frac{1}{(1-x^k)^{l+1}}. \end{equation*} Similarly, for a partition $\lambda=(\lambda_1)^{k_1}\cdots(\lambda_s)^{k_s}$ of $n$, we associate weight $$\omega_\lambda^C=\prod_{\substack{i=1,\ldots,s\\\lambda_i\text{ odd}}}{k_i+l\choose l}\prod_{\substack{i=1,\ldots,s\\\lambda_i\text{ even}}}{k_i+l-1\choose l-1}\text{ (resp. $\omega_\lambda^D=\prod_{\substack{i=1,\ldots,s\\\lambda_i\text{ even}}}{k_i+l\choose l}\prod_{\substack{i=1,\ldots,s\\\lambda_i\text{ odd}}}{k_i+l-1\choose l-1}$)}.$$ We have $|\Sigma_{m,2n}^C|=\sum_{\lambda\in{\mathcal P}(n)}\omega_\lambda^C$ (resp. $|\Sigma_{m,2n}^D|=\sum_{\lambda\in{\mathcal P}(n)}\omega_\lambda^D$). 
Thus \begin{eqnarray*} \sum_n |\Sigma_{m,2n}^C|x^n&=&\prod_{\substack{k\geq 1\\k\text{ odd}}}\sum_{j=0}^\infty x^{kj}{j+l\choose l}\prod_{\substack{k\geq 1\\k\text{ even}}}\sum_{j=0}^\infty x^{kj}{j+l-1\choose l-1}\\ &=&\prod_{\substack{k\geq 1\\k\text{ odd}}}\frac{1}{(1-x^k)^{l+1}}\prod_{\substack{k\geq 1\\k\text{ even}}}\frac{1}{(1-x^k)^l}=\prod_{k\geq 1}\frac{1+x^k}{(1-x^k)^l}\\ \sum_n |\Sigma_{m,2n}^D|x^n&=&\prod_{\substack{k\geq 1\\k\text{ even}}}\frac{1}{(1-x^k)^{l+1}}\prod_{\substack{k\geq 1\\k\text{ odd}}}\frac{1}{(1-x^k)^l}=\prod_{k\geq 1}\frac{1}{(1-x^k)^{l+1}(1+x^k)}. \end{eqnarray*} \end{proof} \subsection{Distinguished nilpotent orbits}\label{ssec-dist} Let $x\in{\mathcal N}_{-1}$. We say that $x$ is distinguished if ${\mathfrak g}_1^x:=\{y\in{\mathfrak g}_1\mid[x,y]=0\}$ consists of nilpotent elements, that is, ${\mathfrak g}_1^x\subset{\mathcal N}_1$. The $K$-orbit of $x$ is called a distinguished orbit. Similarly we define distinguished nilpotent orbits in ${\mathcal N}_1$. In what follows we describe the distinguished nilpotent orbits and define some subsets of these orbits in type AI. {\bf Type AI.} Let $a\in{\mathbb Z}_+$ and $d:=\operatorname{gcd}(a,m)$. Let ${}^{0}_a\Sigma_{m,{\mathbf d}}\subset\Sigma_{m,{\mathbf d}}$ denote the subset consisting of Young diagrams $\lambda$ such that $a|d_\lambda$ (see~\eqref{comp-1} for the definition of $d_\lambda$) and such that \begin{equation}\label{eqn-biorb} \prod_{k=0}^{m/d-1} p_j^{i+kd}=0\text{ for each }j\in[1,s] \text{ and each }i\in[1,d]. \end{equation} We write $\displaystyle{{}^{0}_a\Sigma_{m,N}=\bigsqcup_{|{\mathbf d}|=N}{}^{0}_a\Sigma_{m,{\mathbf d}}}$. Note that ${}^{0}_a\Sigma_{m,N}\neq\emptyset$ only if $d<m$. {\bf Type ACDII.} Let ${}^0\Sigma_{m_0,{\mathbf d}}^A\subset\Sigma_{m_0,{\mathbf d}}^A$ (resp. ${}^0\Sigma_{m,{\mathbf d}}^C\subset\Sigma_{m,{\mathbf d}}^C$, ${}^0\Sigma_{m,{\mathbf d}}^D\subset\Sigma_{m,{\mathbf d}}^D$) denote the subset consisting of Young $m_0$-diagrams (resp.
$m$-diagrams) such that \begin{eqnarray*} \min\{p_i^j,j\in[1,m_0]\}\leq 1\text{ (resp. $\min\{p_i^j,j\in[1,m]\}\leq 1$)},\ i\in[1,s]. \end{eqnarray*} Similarly, we define ${}^0\Sigma_{m_0,2n}^A\subset\Sigma_{m_0,2n}^A$ (resp. ${}^0\Sigma_{m,2n}^C\subset\Sigma_{m,2n}^C$, ${}^0\Sigma_{m,2n}^D\subset\Sigma_{m,2n}^D$). The sets of distinguished orbits are \begin{eqnarray*} \{{\mathcal O}_\lambda\mid\lambda\in{}^{0}_1\Sigma_{m,{\mathbf d}}\}\text{ AI},\ \ \{{\mathcal O}_\lambda\mid\lambda\in{}^0\Sigma_{m_0,{\mathbf d}}^A\text{ (resp. ${}^0\Sigma_{m,{\mathbf d}}^C,{}^0\Sigma_{m,{\mathbf d}}^D$)}\}\text{ AII (resp. CII, DII)}. \end{eqnarray*} \begin{remark} The set ${}^{0}_1\Sigma_{m,N}$ coincides with the set of {\em aperiodic} orbits in~\cite{L}. \end{remark} \begin{lemma}\label{lem-nb2}{\rm (i)} We have \begin{equation*} \sum_{N\geq 0} |{}^{0}_a\Sigma_{m,N}|x^{N/a}=\frac{\prod_{k\geq 1}(1-x^{\frac{m}{d}k})^d}{\prod_{k\geq 1}(1-x^{k})^m}. \end{equation*} {\rm (ii)} We have \begin{eqnarray*} &&\sum_{n\geq 0}|{}^0\Sigma_{m_0,2n}^A|x^n=\prod_{k\geq 1}\frac{(1-x^{(2l+1)k})}{(1-x^k)^{l+1}},\ \ \sum_{n\geq 0}|{}^0\Sigma_{m,2n}^C|x^n=\prod_{k\geq 1}\frac{(1-x^{2lk})(1+x^k)}{(1-x^k)^{l}}\\ &&\sum_{n\geq 0}|{}^0\Sigma_{m,2n}^D|x^n=\prod_{k\geq 1}\frac{(1-x^{2lk})}{(1-x^k)^{l+1}(1+x^k)}. \end{eqnarray*} \end{lemma} \begin{proof} We have $|{}^{0}_a\Sigma_{m,N}|=\sum_{\lambda\in{\mathcal P}(N/a)}\omega_\lambda$, where for $\lambda=(\lambda_1)^{k_1}\cdots(\lambda_s)^{k_s}$, $\omega_\lambda=\prod_{i=1}^s\omega_{k_i}$, and $\omega_k$ equals the number of degree $k$ monomials in variables $z_j^i$, $j=1,\ldots,m/d$, $i=1,\ldots,d$ such that for each $i\in[1,d]$, at least one of $z_j^i$ has degree 0. Since $\omega_k$ equals the coefficient of $t^k$ in $\frac{(1-t^{m/d})^{d}}{(1-t)^m}$, the same argument as in the proof of Lemma~\ref{lem-nb1} proves (i).
Similarly for $\lambda=(\lambda_1)^{k_1}\cdots(\lambda_s)^{k_s}$, let $\omega_\lambda^\Delta=\prod_{i=1}^s\omega_{k_i}^{\Delta}$, $\Delta=A,C,D$, where $\omega_k^A$ equals the coefficient of $t^k$ in $ \frac{1-t^{2l+1}}{(1-t)^{l+1}}$, $\omega_{2k+1}^C$ (resp. $\omega_{2k}^D$) equals the coefficient of $t^{2k+1}$ (resp. $t^{2k}$) in $ \frac{1-t^{2l}}{(1-t)^{l+1}}$, $\omega_{2k}^C$ (resp. $\omega_{2k+1}^D$) equals the coefficient of $t^{2k}$ (resp. $t^{2k+1}$) in $ \frac{1-t^{2l}}{(1-t)^{l}}$. We have $|{}^0\Sigma_{m_0,2n}^A|=\sum_{\mu\in{\mathcal P}(n)}\omega_\mu^A$ and $|{}^0\Sigma_{m,2n}^\Delta|=\sum_{\mu\in{\mathcal P}(n)}\omega_\mu^\Delta$, $\Delta=C,D$. The same argument as in the proof of Lemma~\ref{lem-nb1} proves (ii). \end{proof} \subsection{Dual strata}We write $\underline{{\mathcal N}_{\pm 1}}$ for the set of $K$-orbits in ${\mathcal N}_{\pm 1}$. Let ${\mathcal O}\in\underline{{\mathcal N}_{-1}}$. We define the dual stratum $\widecheck{\mathcal O}\subset{\mathfrak g}_1$ entirely similarly to~\cite[\S3.1]{VX4}. Then the supports of the character sheaves are of the form $\overline{\widecheck{\mathcal O}}$ for those ${\mathcal O}$ lying in a subset of $\underline{{\mathcal N}_{-1}}$, which we will describe later. Let ${\mathcal O}_\lambda\subset{\mathcal N}_{-1}$ be a distinguished nilpotent orbit. Then $\widecheck{\mathcal O}_\lambda$ is a distinguished nilpotent orbit in ${\mathcal N}_1$. We define $\widecheck\lambda$ to be the Young diagram such that \begin{equation}\label{checkmu} \widecheck{\mathcal O}_\lambda\cong{\mathcal O}_{\widecheck\lambda}. \end{equation} Suppose that we are in type AI. Let $\lambda$ be as in~\eqref{yda}. For each $i\in[1,s]$, we define $l_i=\min\{p_i^j,j\in[1,m]\}$. Suppose that $l_i>0$ for some $i$. We define $$\mu_\lambda=(\lambda_1)^{p^1_1-l_1}_1\cdots(\lambda_1)^{p_1^m-l_1}_m\cdots(\lambda_s)^{p_s^1-l_s}_1\cdots (\lambda_s)^{p_s^m-l_s}_m.$$ Let $x\in\widecheck{\mathcal O}_{\lambda}$.
Then we have \begin{equation}\label{eqn-comp} A_K(x)\cong{\mathbb Z}/\check d_\lambda{\mathbb Z},\ \check d_{\lambda}=\begin{cases}\operatorname{gcd}\big(\{m\lambda_i\mid l_i>0\}\cup\{d_{\widecheck\mu_\lambda}\}\big)&\text{ if $\mu_\lambda\neq\emptyset$} \\ md_\lambda&\text{ if $\mu_\lambda=\emptyset$}.\end{cases} \end{equation} \section{Character sheaves: inner automorphisms of $SL_N$}\label{sec-cs-sl} In this section we describe the character sheaves in the case of inner automorphisms of $G=SL_N$, that is, type AI in Section~\ref{ssec-gla}. We follow the approach in~\cite{VX3,VX4}. We will also make use of the notations there. \subsection{Central characters}Let $Z(G)$ denote the center of $G$. We have $Z(G)^\theta\cong{\mathbb Z}/N{\mathbb Z}$. Recall that we have \begin{equation*} {\mathcal A}_K({\mathcal N}_{\pm1})=\bigsqcup_{\kappa:Z(G)^\theta\to{\mathbb C}^*}{\mathcal A}_K({\mathcal N}_{\pm1})_\kappa,\ \ \operatorname{Char}_K({\mathfrak g}_1)=\bigsqcup_{\kappa:Z(G)^\theta\to{\mathbb C}^*}\operatorname{Char}_K({\mathfrak g}_1)_\kappa \end{equation*} where the subscript $\kappa$ indicates the action of $Z(G)^\theta$. For a positive integer $k$, we define ${\mathcal A}_K({\mathcal N}_{\pm1})_k$ and $\operatorname{Char}_K({\mathfrak g}_1)_k$ entirely similarly to~\cite[\S2.2]{VX3}, that is, the sheaves where $Z(G)$ acts via an order $k$ character $\psi\in(\widehat{{\mathbb Z}/N{\mathbb Z}})_k$. As mentioned in the introduction, the set $\operatorname{Char}_K({\mathfrak g}_1)_1$ has been determined in~\cite{L}. \subsection{Supports of character sheaves} Let $a\in{\mathbb Z}_+$ and $d:=\operatorname{gcd}(a,m)$. Recall the set ${}^0_a\Sigma_{m,{\mathbf d}}$ defined in Section~\ref{ssec-dist}.
We define the set $\underline{{\mathcal N}_{-1}}^{cs,a}$ to be the subset of $\underline{{\mathcal N}_{-1}}$ consisting of the following nilpotent orbits \begin{eqnarray*} &{\mathcal O}_{a,\mu}:={\mathcal O}_{(\frac{a}{d})^{l}_1\cdots(\frac{a}{d})^{l}_m\sqcup\widecheck\mu},\,0\leq l\leq \frac{Nd}{ma},\, \mu\in{}^{0}_{a}\Sigma_{m,{\mathbf d}-\frac{al}{d}\mathbf{1}_m}&\text{ if $d<m$}\\ &{\mathcal O}_{a,\emptyset}:={\mathcal O}_{(\frac{a}{m})^{l}_1\cdots(\frac{a}{m})^{l}_m},\,l=\frac{N}{a},&\text{ if $d=m$}, \end{eqnarray*} where $\widecheck\mu$ is defined in~\eqref{checkmu}. Making use of~\eqref{eqn-comp} and an entirely similar argument to that in~\cite{VX4}, we conclude that \begin{eqnarray*} &&\pi_1^K(\widecheck{\mathcal O}_{a,\mu})\cong{\mathbb Z}/\check d_{a,\mu}{\mathbb Z}\times B_{G_{m,1,l}},\ l=\frac{d(N-|\mu|)}{ma},\,\ \check d_{a,\mu}=\begin{cases}\operatorname{gcd}(ma/d,d_{\mu})&\text{ if $\mu\neq\emptyset$}\\ ma/d&\text{ if $\mu=\emptyset$}\end{cases}, \end{eqnarray*} where by convention $B_{G_{m,1,0}}=\{1\}$. Note that $a|\check d_{a,\mu}$. \subsection{Nilpotent support character sheaves}To describe the character sheaves, we begin by giving a conjectural description of the set $\operatorname{Char}^{\mathrm n}_K({\mathfrak g}_1)$ of nilpotent support character sheaves. Given an orbit ${\mathcal O}_\lambda\subset{\mathcal N}_{\pm1}$ and $\psi\in\widehat{{\mathbb Z}/d_\lambda{\mathbb Z}}$, we write ${\mathcal E}_{\psi}$ for the irreducible $K$-equivariant local system on ${\mathcal O}_\lambda$ given by $\psi$. Let $\operatorname{Char}^{\mathrm n}_K({\mathfrak g}_1)_a=\operatorname{Char}^{\mathrm n}_K({\mathfrak g}_1)\cap\operatorname{Char}_K({\mathfrak g}_1)_a$. \begin{conjecture}\label{conj-nilp}We have \begin{eqnarray} \label{biorbital}&& \operatorname{Char}^{\mathrm n}_K({\mathfrak g}_1)_a=\{\operatorname{IC}({\mathcal O}_{\mu},{\mathcal E}_{\psi_a})\mid\mu\in{}^0_a\Sigma_{m,{\mathbf d}},\,\psi_a\in(\widehat{{\mathbb Z}/d_\mu{\mathbb Z}})_a\}.
\end{eqnarray} \end{conjecture} \begin{remark} The above conjecture holds when $m=2$ by~\cite{VX3}. Moreover,~\eqref{biorbital} holds for $a=1$ by~\cite{L} or by~\cite{H}. As pointed out in~\cite{L}, $\operatorname{Char}^{\mathrm n}_K({\mathfrak g}_1)_1$ is exactly the set of perverse sheaves that give rise to canonical bases for an affine Lie algebra of type A. \end{remark} We give some evidence for the above conjecture. Suppose that $m\nmid N$ and ${\mathfrak g}_1$ contains a regular nilpotent element of ${\mathfrak g}$. Let ${\mathcal O}_{reg}\subset{\mathcal N}_1$ be the unique $K$-orbit containing a regular nilpotent and let $\psi_N\in(\widehat{{\mathbb Z}/N{\mathbb Z}})_N$. \begin{lemma}\label{lem-rn} Suppose that $m\nmid N$. Then $\operatorname{IC}({\mathcal O}_{reg},{\mathcal E}_{\psi_N})$ is a cuspidal character sheaf. \end{lemma} \begin{proof} Since $m\nmid N$, in view of~\eqref{eqn-comp}, the only $\widecheck{\mathcal O}_\lambda$ that affords a central character of order $N$ is ${\mathcal O}_{reg}$. Moreover, $\operatorname{Char}_{L^\theta}({\mathfrak{l}}_1)_N=\emptyset$ for any $\theta$-stable Levi subgroup $L$ contained in a $\theta$-stable parabolic subgroup. The lemma follows from central character considerations. \end{proof} \begin{corollary} Suppose that $\operatorname{gcd}(a,m)=d<m$. Then $\operatorname{IC}({\mathcal O}_{a^t_i},{\mathcal E}_{\psi_a})$, $\psi_a\in(\widehat{{\mathbb Z}/a{\mathbb Z}})_a$, is a nilpotent support character sheaf. Moreover, it is cuspidal if and only if $t=1$. \end{corollary} \begin{proof} Suppose that $a=k_0m+a_0$, where $a_0\in(0,m)$. We can assume that $i=a_0$. We have $d_i=(k_0+1)t$, $i\in[1,a_0]$, $d_i=k_0t$, $i\in[a_0+1,m]$. Let $L$ be a $\theta$-stable Levi subgroup contained in a $\theta$-stable parabolic subgroup such that $L\cong S(GL_a^{\times t})$ and ${\mathcal O}_{{\mathfrak{l}}_1}:={\mathcal O}_{a_{a_0}}^{\boxtimes t}\subset {\mathfrak{l}}_1$. Such an $L$ exists.
We have $\pi_1^{L^\theta}({\mathcal O}_{{\mathfrak{l}}_1})\cong{\mathbb Z}/a{\mathbb Z}$. Let $\psi_a\in(\widehat{{\mathbb Z}/a{\mathbb Z}})_a$. Then $\operatorname{IC}({\mathcal O}_{{\mathfrak{l}}_1},{\mathcal E}_{\psi_a})\in\operatorname{Char}_{L^\theta}({\mathfrak{l}}_1)$ by Lemma~\ref{lem-rn}. We show that the only nilpotent orbit ${\mathcal O}$ such that ${\mathcal O}\cap({\mathcal O}_{{\mathfrak{l}}_1}\oplus({\mathfrak{n}}_P)_1)\neq\emptyset$ and such that $\widehat{\pi_1^{K}({\mathcal O})}_a\neq\emptyset$ is ${\mathcal O}_{a^t_{a_0}}$. It then follows that \begin{equation*} \operatorname{Ind}_{{\mathfrak{l}}_1\subset{\mathfrak{p}}_1}^{{\mathfrak g}_1}\operatorname{IC}({\mathcal O}_{{\mathfrak{l}}_1},{\mathcal E}_{\psi_a})=\operatorname{IC}({\mathcal O}_{a^t_i},{\mathcal E}_{\psi_a})\oplus\cdots \end{equation*} (up to shift) and the corollary follows. Let ${\mathcal O}_\lambda$ be an orbit such that $\widehat{\pi_1^{K}({\mathcal O})}_a\neq\emptyset$. Then $\lambda=(a\mu_1)^{m_1}\cdots(a\mu_s)^{m_s}$. We have $a\mu_i=k_0\mu_im+a_0\mu_i$. Suppose that $a_0\mu_i=k_im+a_i$, $a_i\in[0,m)$. Then we have \begin{eqnarray*} &&(k_0+1)t=d_1(\lambda)\leq\sum_{i\in[1,s],a_i\neq 0}(k_0\mu_i+k_i+1)m_i+\sum_{i\in[1,s],a_i= 0}(k_0\mu_i+k_i)m_i\\ &&\Rightarrow\sum_{i\in[1,s],a_i\neq 0}\mu_im_i+\sum_{i\in[1,s],a_i= 0}\mu_im_i=t\leq \sum_{i\in[1,s],a_i\neq 0}(k_i+1)m_i+\sum_{i\in[1,s],a_i= 0}k_im_i. \end{eqnarray*} Note that $\mu_i\geq k_i+1$. Thus we get $a_i>0$ and $\mu_i=k_i+1$ for all $i$. If $\mu_i\geq 2$ (or equivalently $k_i\geq 1$) for some $i$, then $a_i<a_0$. It follows that \begin{eqnarray*} &&k_0t+t=d_{a_0}(\lambda)\leq\sum_{i\in[1,s],\mu_i\geq 2}(k_0\mu_i+\mu_i-1)m_i+\sum_{i\in[1,s],\mu_i=1}(k_0\mu_i+\mu_i)m_i\\ &&=k_0t+\sum_{i\in[1,s],\mu_i\geq 2}(\mu_i-1)m_i+\sum_{i\in[1,s],\mu_i=1}\mu_im_i. \end{eqnarray*} This holds only if $\mu_i=1$ for all $i$. We then conclude that $\lambda=a^t_{a_0}$.
\end{proof} \begin{remark}\label{rmk-nilp}Suppose that $ \mu=(a\mu_1)^{p_1^1}_1\cdots(a\mu_1)^{p_1^m}_m\cdots (a\mu_s)^{p_s^1}_1\cdots(a\mu_s)^{p_s^m}_m\in{}^0_a\Sigma. $ We expect that $\operatorname{IC}({\mathcal O}_\mu,{\mathcal E}_{\psi_a})$ can be obtained by applying parabolic induction to character sheaves of the form $\operatorname{IC}({\boxtimes{\mathcal O}_{a_i^{p_i^j}}},{\mathcal E}_{\psi_a})$ on $\theta$-stable Levi subgroups of the form $S(\prod_{i,j}{GL_{ap_i^j}}^{\times \mu_i})$. It will then follow that the sheaves in $\operatorname{Char}^{\mathrm n}_K({\mathfrak g}_1)_a$ can be obtained by parabolic induction from the character sheaf supported on a regular nilpotent orbit in $\operatorname{Char}_{L^\theta}({\mathfrak{l}}_1)$ for a $\theta$-stable Levi subgroup of the form $S(GL_a^{\times N/a})$. \end{remark} \subsection{Character sheaves}Let $\tau\in{\mathcal P}_d(l)$. Recall the irreducible representation $L_\tau$ of $B_{G_{m,1,l}}$ (see Section~\ref{Hecke}). Let ${\mathcal T}_{\psi_a,\tau}$ denote the $K$-equivariant local system on $\widecheck{\mathcal O}_{a,\mu}$ corresponding to the representation $\psi_a\boxtimes L_\tau$ of $\pi_1^K(\widecheck{\mathcal O}_{a,\mu})$ for $\psi_a\in(\widehat{{\mathbb Z}/\check d_{a,\mu}{\mathbb Z}})_a$. Assume that Conjecture~\ref{conj-nilp} holds. We have the following explicit description of the character sheaves. \begin{theorem}\label{cs-sl} {\rm (i)} Suppose that $m|a$. We have \begin{equation*} \operatorname{Char}_K({\mathfrak g}_1)_a=\{\operatorname{IC}(\widecheck{\mathcal O}_{a,\emptyset},{\mathcal T}_{\psi_{a},\tau})\mid \psi_a\in(\widehat{{\mathbb Z}/a{\mathbb Z}})_a,\,\tau\in{\mathcal P}_m(N/a)\}. \end{equation*} {\rm (ii)} Suppose that $\operatorname{gcd}(a,m)=d<m$ and Conjecture~\ref{conj-nilp} holds. 
We have \begin{eqnarray*} \operatorname{Char}_K({\mathfrak g}_1)_a&=&\{\operatorname{IC}(\widecheck{\mathcal O}_{a,\mu},{\mathcal T}_{\psi_{a},\tau})\mid{\mathcal O}_{a,\mu}\in\underline{{\mathcal N}_{-1}}^{cs,a},\,\psi_a\in(\widehat{{\mathbb Z}/\check d_{a,\mu}{\mathbb Z}})_a,\,\tau\in{\mathcal P}_d(l)\}. \end{eqnarray*} \end{theorem} To prove the theorem, we begin by defining a bijection between ${\mathcal A}_K({\mathcal N}_{-1})_a$ and the set of sheaves in Theorem~\ref{cs-sl}, for each $a$. Let ${\mathcal O}_\lambda$ be an orbit such that $a|d_\lambda$. Then we have \begin{equation*} \lambda=(a\mu_1)^{p_1^1}_1\cdots(a\mu_1)^{p_1^m}_m\cdots (a\mu_t)^{p_t^1}_1\cdots(a\mu_t)^{p_t^m}_m. \end{equation*} For $i\in[1,d]$, $k\in[1,t]$, let \begin{eqnarray*} l_k^i=\operatorname{min}\{p_k^{i+jd},\,j\in[0,m/d-1]\}, q_k^{jd+i}=p_k^{jd+i}-l_k^i,\,j\in[0,m/d-1]. \end{eqnarray*} Let \begin{equation*} \begin{gathered} \nu^i=(\mu_1)^{l_1^i}(\mu_2)^{l_2^i}\cdots(\mu_t)^{l_t^i},\,i\in[1,d];\ \ \mu_\lambda=(a\mu_1)^{q_1^1}_1\cdots(a\mu_1)^{q_1^m}_m\cdots (a\mu_t)^{q_t^1}_1\cdots(a\mu_t)^{q_t^m}_m. \end{gathered} \end{equation*} Then $\tau_\lambda:=(\nu^1,\ldots,\nu^d)\in{\mathcal P}_d(l_\lambda)$, $l_\lambda=\sum_{i=1}^d|\nu^i|$ and $\mu_\lambda\in{}^0_a\Sigma_{m,N-ml_\lambda a/d}$. Note that $\mu_\lambda=\emptyset$ when $d=m$. The maps \begin{equation*} \begin{gathered} \operatorname{IC}({\mathcal O}_\lambda,{\mathcal E}_{\psi_a})\mapsto\operatorname{IC}(\widecheck{\mathcal O}_{a,\emptyset},{\mathcal T}_{\psi_a,\tau_\lambda}),\,\psi_a\in(\widehat{{\mathbb Z}/d_\lambda{\mathbb Z}})_a\text{ when $d=m$}\\ \operatorname{IC}({\mathcal O}_\lambda,{\mathcal E}_{\psi_a})\mapsto\operatorname{IC}(\widecheck{\mathcal O}_{a,\mu_\lambda},{\mathcal T}_{\psi_a,\tau_\lambda})\text{ when $d<m$} \end{gathered} \end{equation*} define the desired bijections. It remains to show that the sheaves in Theorem~\ref{cs-sl} are indeed character sheaves. This is done in the next two subsections.
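The combinatorial core of this bijection, the passage from $\lambda$ to the pair $(\tau_\lambda,\mu_\lambda)$, is a finite computation. Here is a Python sketch (the data format and the function name are our own choices, for illustration only): a diagram is encoded as a map from each part length $a\mu_k$ to the list of multiplicities $(p_k^1,\ldots,p_k^m)$ indexed by the beginning box.

```python
from math import gcd

def bijection_data(lam, m, a):
    """Extract (tau_lambda, mu_lambda) from a diagram lambda with a | d_lambda.

    `lam` maps each part length (a multiple of a) to the list
    [p^1, ..., p^m] of row multiplicities indexed by the beginning box.
    Returns the d-tuple of partitions (nu^1, ..., nu^d) together with
    the residual multiplicity data of mu_lambda (fully stripped parts
    are dropped).
    """
    d = gcd(a, m)
    nus = [[] for _ in range(d)]
    mu = {}
    for part, p in lam.items():
        assert part % a == 0 and len(p) == m
        q = list(p)
        for i in range(d):
            # l_k^i: minimum of p over the residue class i+1, i+1+d, ... mod m
            l_ki = min(p[i + j * d] for j in range(m // d))
            nus[i].extend([part // a] * l_ki)   # nu^i acquires mu_k with mult l_k^i
            for j in range(m // d):
                q[i + j * d] -= l_ki
        if any(q):
            mu[part] = q
    return [sorted(nu, reverse=True) for nu in nus], mu
```

For $m=4$, $a=2$ and $\lambda=2^2_1\,2_2\,2_3\,2_4\,4_1\,4_2$ this yields $\tau_\lambda=((1),(1))$ and $\mu_\lambda=2_1\,4_1\,4_2$; by construction each residue class of beginning boxes in $\mu_\lambda$ attains multiplicity $0$, so $\mu_\lambda$ satisfies~\eqref{eqn-biorb}.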
\subsection{Nearby cycle sheaves}Suppose that ${\mathbf d}=N/m\mathbf{1}_{m}$. In this case the representation $(K,{\mathfrak g}_1)$ is a stable polar representation, that is, we have $\overline{{\mathfrak g}_1^{rs}}={\mathfrak g}_1$. We have $ I=Z_K({\mathfrak a})/Z_K({\mathfrak a})^0\cong{\mathbb Z}/m{\mathbb Z}.$ We apply the nearby cycle sheaf construction in~\cite{GVX} and make use of notations there. Let $\chi\in\hat I_\iota$, $\iota|m$, and $P_\chi$ the corresponding nearby cycle sheaf. Let $s\in W_{\mathfrak a}\cong G_{m,1,r}$ be a reflection of the form $s_{ij}^{(k)}$. Then we have $ Z_G({\mathfrak a}_s)_{\operatorname{der}}\cong {SL_2^{\times m}} $ (product of $m$ copies of $SL_2$) and $\theta|_{Z_G({\mathfrak a}_s)_{\operatorname{der}}}$ permutes the $m$ factors of $SL_2$. Let $t\in W_{\mathfrak a}$ be a reflection of the form $\tau_k$. Then we have $ Z_G({\mathfrak a}_t)_{\operatorname{der}}\cong SL_m $ and $\theta|_{Z_G({\mathfrak a}_t)_{\operatorname{der}}}$ is (GIT) stable of rank $1$. Applying~\cite{GVX} and the (GIT) stable rank 1 calculation in~\cite[\S6.1]{VX}, we conclude that \begin{equation*} {\mathfrak{F}}(P_\chi)\cong\operatorname{IC}({\mathfrak g}_1^{rs},{\mathcal{M}}_\chi),\ M_\chi\cong {\mathbb C}_{\chi}\boxtimes{\mathcal H}^{\iota}_{G_{m,1,r}} \end{equation*} where we write $M_\chi$ for the representation of $\pi_1^K({\mathfrak g}_1^{rs})\cong{\mathbb Z}/m{\mathbb Z}\times B_{G_{m,1,r}}$ that gives rise to the $K$-equivariant local system ${\mathcal{M}}_\chi$ on ${\mathfrak g}_1^{rs}$. It follows that \begin{equation*} \{\operatorname{IC}({\mathfrak g}_1^{rs},{\mathcal T}_{\psi_\iota,\tau})\mid\psi_\iota\in\widehat{({\mathbb Z}/m{\mathbb Z})}_{\iota},\tau\in{\mathcal P}_\iota(N/m)\}\subset\operatorname{Char}_K({\mathfrak g}_1)_\iota. \end{equation*} Next, we apply the generalised nearby cycle construction in~\cite{VX3} to produce character sheaves supported on $\widecheck{\mathcal O}:=\widecheck{\mathcal O}_{k^l_1\cdots k^l_m}$.
Let $\chi\in\widehat{{\mathbb Z}/km{\mathbb Z}}$. The following proposition can be derived using a similar argument to the proof of~\cite[(6.1)]{VX3}. \begin{proposition}We have \begin{eqnarray*} &&{\mathfrak{F}} P_{\widecheck{\mathcal O},\chi}\cong\operatorname{IC}(\widecheck{\mathcal O},{\mathcal H}_{G_{m,1,l}}^{d}\otimes {\mathbb C}_\chi)\text{ if $\chi\in(\widehat{{\mathbb Z}/km{\mathbb Z}})_{kd}$, $d|m$ and $\operatorname{gcd}(k,m/d)=1$}. \end{eqnarray*} \end{proposition} Let $a\in{\mathbb Z}_+$ and $\operatorname{gcd}(m,a)=d$. We write $k=a/d$ and $l=N/mk$. It follows from the above proposition that \begin{equation}\label{char-nby} \left\{\operatorname{IC}(\widecheck{\mathcal O}_{a,\emptyset},{\mathcal T}_{\psi_{a},\tau})\mid \,\psi_a\in(\widehat{{\mathbb Z}/mk{\mathbb Z}})_a,\,\tau\in{\mathcal P}_d(l)\right\}\subset\operatorname{Char}_K({\mathfrak g}_1)_a. \end{equation} Thus part (i) of Theorem~\ref{cs-sl} follows. Now assume that $d<m$. We prove part (ii) of Theorem~\ref{cs-sl} by showing that the sheaves there can be obtained from parabolic induction. To that end, let $\{e_i^j,j\in[1,d_i]\}$ be a basis of $M_i$. Let $P$ be the $\theta$-stable parabolic subgroup that stabilises the flag $0\subset V_1\subset V$, where $V_1=\operatorname{span}\{e_i^j,\,i\in[1,m],\,j\in[1,lk]\}$. Let $L\cong S(GL_{lkm}\times GL_{N-lkm})$ be the natural $\theta$-stable Levi subgroup. Consider the stratum $\widecheck{\mathcal O}_{{\mathfrak{l}}_1}:=\widecheck{\mathcal O}_{k^l_1\cdots k^l_m}\boxtimes{\mathcal O}_{\mu}\subset{\mathfrak{l}}_1$, where $\mu\in{}^0_a\Sigma$. We have $\pi_1^{L^\theta}({\mathfrak{l}}_1)\cong{\mathbb Z}/\check d_{a,\mu}{\mathbb Z}\times B_{G_{m,1,l}}$. Let $\psi_a\in(\widehat{{\mathbb Z}/\check d_{a,\mu}{\mathbb Z}})_a$ and $\tau\in {\mathcal P}_d(l)$.
Let ${\mathcal{L}}_{\psi_a,\tau}$ denote the $L^\theta$-equivariant local system on $\widecheck{\mathcal O}_{{\mathfrak{l}}_1}$ given by the irreducible representation $\psi_a\boxtimes L_\tau$ of $\pi_1^{L^\theta}({\mathfrak{l}}_1)$. Then $\operatorname{IC}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1},{\mathcal{L}}_{\psi_a,\tau})\in\operatorname{Char}_{L^\theta}({\mathfrak{l}}_1)_a$ by~\eqref{char-nby} and the assumption that Conjecture~\ref{conj-nilp} holds. Consider the map $\pi:K\times^{P_K}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1}+({\mathfrak{n}}_P)_1)\to{\mathfrak g}_1$. One checks that $\dim K/P_K=\operatorname{dim}({\mathfrak{n}}_P)_1=Nkl-mk^2l^2$, $\operatorname{dim}\widecheck{\mathcal O}_{{\mathfrak{l}}_1}=l(mk^2l-k+1)+\sum_{i=1}^md_i^2-2Nkl+mk^2l^2-c_\mu$ and $\operatorname{dim}\widecheck{\mathcal O}_{a,\mu}=\sum_{i=1}^md_i^2-c_\mu-lk+l$, where $c_\mu$ denotes the dimension of the centraliser of $x_\mu$ in ${G'}^\theta$ for $\theta|_{G'=GL_{N-lkm}}$. Moreover, the fiber $\pi^{-1}(x)$ consists of a single element for $x\in\widecheck{\mathcal O}_{a,\mu}$. It follows that the image of $\pi$ equals the closure of $\widecheck{\mathcal O}_{a,\mu}$. An entirely similar argument to that in~\cite{VX4} proves that \begin{equation}\label{eqn-ind1} \operatorname{Ind}_{{\mathfrak{l}}_1\subset{\mathfrak p}_1}^{{\mathfrak g}_1}\operatorname{IC}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1},{\mathcal{L}}_{\psi_a,\tau})=\operatorname{IC}(\widecheck{\mathcal O}_{a,\mu},{\mathcal T}_{\psi_{a},\tau})\oplus\cdots \end{equation} It follows that $\operatorname{IC}(\widecheck{\mathcal O}_{a,\mu},{\mathcal T}_{\psi_{a},\tau})\in\operatorname{Char}_K({\mathfrak g}_1)_a$. This completes the proof of Theorem~\ref{cs-sl}. \subsection{Cuspidal character sheaves} In this subsection we give a conjectural description of the set of cuspidal character sheaves. \begin{conjecture}\label{conj-cusp} {\rm (i)} Suppose that $m\nmid N$.
Then $\operatorname{Char}^{cusp}_K({\mathfrak g}_1)\neq \emptyset$ if and only if the grading is $N$-regular, that is, ${\mathfrak g}_1$ contains a regular nilpotent element of ${\mathfrak g}$. In the latter case, we have \begin{equation*} \operatorname{Char}^{cusp}_K({\mathfrak g}_1)=\{\operatorname{IC}({\mathcal O}_{reg},{\mathcal E}_{\psi_N})\mid{\mathcal O}_{reg}\subset{\mathcal N}_1\cap{\mathfrak g}^{reg},\,\psi_N\in\widehat{({\mathbb Z}/N{\mathbb Z})}_N\}. \end{equation*} {\rm (ii)} Suppose that $m|N$. Then $\operatorname{Char}^{cusp}_K({\mathfrak g}_1)\neq \emptyset$ only if ${\mathbf d}=N/m\mathbf{1}_m$. In the latter case we have \begin{eqnarray*} &&\operatorname{Char}^{cusp}_K({\mathfrak g}_1)=\bigsqcup_{\substack{d|m\\\operatorname{gcd}(N/m,m/d)=1}}\operatorname{Char}^{cusp}_K({\mathfrak g}_1)_{\frac{dN}{m}}\\ &&\operatorname{Char}^{cusp}_K({\mathfrak g}_1)_{\frac{dN}{m}} =\{\operatorname{IC}\left(\widecheck{\mathcal O}_{(\frac{N}{m})_1\cdots(\frac{N}{m})_m},{\mathcal T}_{\psi_{\frac{dN}{m}},\tau}\right)\mid\psi_{\frac{dN}{m}}\in(\widehat{{\mathbb Z}/N{\mathbb Z}})_{\frac{dN}{m}},\ \tau\in{\mathcal P}_d(1),\ \,i\in[1,d]\}. \end{eqnarray*} \end{conjecture} \begin{remark} Note that when $m=N$ in (ii) we are in the (GIT) stable grading case. In that case the conjecture is compatible with~\cite{VX} where all cuspidal sheaves are expected to have full support. \end{remark} As evidence for the above conjecture, we show that the character sheaves in~\eqref{char-nby} can be obtained by parabolic induction from a $\theta$-stable Levi subgroup of the form $S(GL_{mk}^{\times l})$ contained in a $\theta$-stable parabolic subgroup $P$. We have $d_i=lk$, $i\in[1,m]$. Let $e_i^j,\,j\in[1,lk]$ be a basis of $M_i$, $i\in[1,m]$. Let $P$ be the $\theta$-stable parabolic subgroup of $G$ that stabilises the flag $0\subset V_1\subset V_2\subset\cdots\subset V_{l-1}\subset V_l=V$, where $V_s=\operatorname{span}\{e_i^j,\,i\in[1,m],\,j\in[1,sk]\}$.
Let $W_s=\operatorname{span}\{e_i^j,\,i\in[1,m],\,j\in[(s-1)k+1,sk]\}$, $s\in[1,l]$, and let $L\cong S(\prod_{s=1}^lGL_{W_s})$ be the $\theta$-stable Levi subgroup of $P$. Consider the stratum $\widecheck{\mathcal O}_{{\mathfrak{l}}_1}:=(\widecheck{{\mathcal O}}_{k_1k_2\cdots k_m})^{\boxtimes l}$. We have $\pi_1^{L^\theta}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1})\cong{\mathbb Z}/mk{\mathbb Z}\times B_{G_{m,1,1}}^{\times l}$. Consider the $L^\theta$-equivariant local system ${\mathcal{L}}_{\psi_a,\tau_1,\ldots,\tau_l}$ on $\widecheck{\mathcal O}_{{\mathfrak{l}}_1}$ given by the irreducible representation $\psi_a\boxtimes L_{\tau_1}\boxtimes L_{\tau_2}\boxtimes\cdots\boxtimes L_{\tau_l}$, where $\psi_a\in(\widehat{{\mathbb Z}/mk{\mathbb Z}})_a$ and $\tau_i\in{\mathcal P}_d(1)$. Then $\operatorname{IC}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1},{\mathcal{L}}_{\psi_a,\tau_1,\ldots,\tau_l})\in\operatorname{Char}_{L^\theta}({\mathfrak{l}}_1)_a$. Consider the map $\pi:K\times^{P_K}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1}+({\mathfrak{n}}_P)_1)\to{\mathfrak g}_1$. One checks that $\dim K/P_K=\dim({\mathfrak{n}}_P)_1=mk^2l(l-1)/2$, $\dim\widecheck{\mathcal O}_{{\mathfrak{l}}_1}=l(mk^2-k+1)$, and $\dim\widecheck{\mathcal O}_{a,\emptyset}=l(mk^2l-k+1)$. Moreover, the fiber $\pi^{-1}(x)\cong S_l$, the symmetric group on $l$ letters, for $x\in\widecheck{\mathcal O}_{a,\emptyset}$. It follows that the image of $\pi$ equals the closure of $\widecheck{\mathcal O}_{a,\emptyset}$. 
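Indeed, the dimension counts above are consistent with the finiteness of the generic fibers: \begin{eqnarray*} \dim K/P_K+\dim({\mathfrak{n}}_P)_1+\dim\widecheck{\mathcal O}_{{\mathfrak{l}}_1}&=&mk^2l(l-1)+l(mk^2-k+1)\\ &=&l(mk^2l-k+1)=\dim\widecheck{\mathcal O}_{a,\emptyset}, \end{eqnarray*} so that $\dim K\times^{P_K}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1}+({\mathfrak{n}}_P)_1)=\dim\widecheck{\mathcal O}_{a,\emptyset}$, as required.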
An entirely similar argument to that in~\cite{VX4} proves that \begin{equation}\label{eqn-ind2} \bigoplus_{(\tau_i)\in{\mathcal P}_d(1)^l}\operatorname{Ind}_{{\mathfrak{l}}_1\subset{\mathfrak p}_1}^{{\mathfrak g}_1}\operatorname{IC}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1},{\mathcal{L}}_{\psi_a,\tau_1,\ldots,\tau_l})=\bigoplus_{\tau\in{\mathcal P}_d(l)}\operatorname{IC}(\widecheck{\mathcal O}_{a,\emptyset},{\mathcal T}_{\psi_{a},\tau})\oplus\cdots\text{ (up to shift).} \end{equation} \begin{remark}If the expectation in Remark~\ref{rmk-nilp} holds, then Conjecture~\ref{conj-cusp} follows from~\eqref{eqn-ind1} and~\eqref{eqn-ind2}. \end{remark} \section{Character sheaves: type II}\label{cs-typeII} In this section we describe the character sheaves in the case of type II graded Lie algebras, see Section~\ref{ssec-gla}. As in the previous section, we follow the approach in~\cite{VX3,VX4} and make use of the notation there. \subsection{Supports of character sheaves} Recall the set of distinguished orbits defined in Section~\ref{ssec-dist}. Consider the following set $\underline{{\mathcal N}_{-1}}^{cs}$ of nilpotent $K$-orbits in ${\mathcal N}_{-1}$: \begin{eqnarray*} {\mathcal O}_{k,\mu}:={\mathcal O}_{1^{2k}_11^{2k}_2\cdots 1^{2k}_{m_0}\sqcup\mu},\ \mu\in{}^0\Sigma_{m_0,{\mathbf d}-2k\mathbf{1}_{m_0}}^A&\text{ AII}\\ {\mathcal O}_{k,\mu}:={\mathcal O}_{1^{2k}_11^{2k}_2\cdots 1^{2k}_{m}\sqcup\mu},\ \mu\in{}^0\Sigma_{m,{\mathbf d}-2k\mathbf{1}_m}^C\text{ (resp. ${}^0\Sigma_{m,{\mathbf d}-2k\mathbf{1}_m}^D$)}&\text{ CII (resp. DII)}. \end{eqnarray*} One checks that \begin{eqnarray*} \pi_1^K(\widecheck{\mathcal O}_{k,\mu})\cong B_{G_{m_0,1,k}} \text{ (resp. $B_{G_{m,1,k}}$) in type AII (resp. CDII)}. \end{eqnarray*} \subsection{Nilpotent support character sheaves} Let us write ${}^0\Sigma$ for ${}^0\Sigma_{m_0,{\mathbf d}}^A$, ${}^0\Sigma_{m,{\mathbf d}}^{C}$ or ${}^0\Sigma_{m,{\mathbf d}}^{D}$. 
\begin{conjecture}\label{conj-biorbital} We have \begin{equation}\label{cs-nilp2} \operatorname{Char}_K^{{\mathrm n}}({\mathfrak g}_1)=\{\operatorname{IC}({\mathcal O}_{\mu},{\mathbb C})\mid \mu\in{}^0\Sigma\}. \end{equation} \end{conjecture} Note that by~\cite[\S3.8]{L}, we have $\operatorname{Char}^{{\mathrm n}}_K({\mathfrak g}_1)\subset\{\operatorname{IC}({\mathcal O}_\mu,{\mathbb C})\mid\mu\in{}^0\Sigma\}$. Let $B$ be a $\theta$-stable Borel subgroup. By~\cite[3.2(c)]{L}, we have ${\mathfrak{b}}_i=({\mathfrak{n}}_B)_i$ for $i\neq 0$ in type CDII. One can check that this holds as well in type AII. Let $\pi_B:K\times^{B_K}{\mathfrak{b}}_1\to{\mathfrak g}_1$ be the natural map. We have $\operatorname{Im}\pi_B=\overline{{\mathcal O}_B}$ for some nilpotent orbit ${\mathcal O}_B\subset{\mathcal N}_1$. As in~\cite{VX4}, we call the orbits ${\mathcal O}_B$ Richardson orbits attached to $\theta$-stable Borel subgroups. \begin{remark}\label{rmk-nil2} We expect that all the sheaves in~\eqref{cs-nilp2} can be obtained from parabolic induction $\operatorname{Ind}_{{\mathfrak{t}}_1=\{0\}\subset{\mathfrak{b}}_1}^{{\mathfrak g}_1}\delta$ of the skyscraper sheaf $\delta$ on ${\mathfrak{t}}_1=\{0\}$, where ${\mathfrak{t}}=\operatorname{Lie}T$ and $T$ is a $\theta$-stable maximal torus contained in a $\theta$-stable Borel subgroup $B$. It is likely that every ${\mathcal O}_{\mu}$, $\mu\in{}^0\Sigma$, is a Richardson orbit attached to some $\theta$-stable Borel subgroup, which would then imply the expectation. \end{remark} \subsection{Character sheaves} We give an explicit description of the character sheaves assuming that Conjecture~\ref{conj-biorbital} holds. Let $\rho\in{\mathcal P}(k)$. Recall the irreducible representation $L_\rho$ of $B_{G_{m,1,k}}$. Let us write ${\mathcal{L}}_\rho$ for the $K$-equivariant local system on $\widecheck{\mathcal O}_{k,\mu}$ given by the irreducible representation $L_\rho$ of $\pi_1^{K}(\widecheck{\mathcal O}_{k,\mu})$. 
\begin{theorem}\label{cs}Suppose that Conjecture~\ref{conj-biorbital} holds. We have \begin{equation}\label{eqn-csII} \operatorname{Char}_K({\mathfrak g}_1)=\{\operatorname{IC}(\widecheck{\mathcal O}_{k,\mu},{\mathcal{L}}_\rho)\mid {\mathcal O}_{k,\mu}\in \underline{{\mathcal N}_{-1}}^{cs},\,\rho\in{\mathcal P}(k)\}. \end{equation} \end{theorem} Recall that $r=\min_i d_i$. Let $\mu_0=1^{k_1}_1\cdots 1^{k_{m_0}}_{m_0}$ (resp. $1^{k_1}_1\cdots 1^{k_m}_m$), where $k_i=d_i-2r$. \begin{corollary}We have \begin{equation*} \operatorname{Char}_K^{{\mathrm f}}({\mathfrak g}_1)=\{\operatorname{IC}(\widecheck{\mathcal O}_{r,\mu_0},{\mathcal{L}}_\rho)\mid \rho\in{\mathcal P}(r)\}. \end{equation*} \end{corollary} \subsection{Proof of Theorem~\ref{cs}} Let us write $I_{m_0}^A=\{1,\ldots,m_0\}$, $I_m^C=I_m^D=\{1,\ldots,m\}$. Let $\lambda$ be as in~\eqref{orbit-1} (resp.~\eqref{orbit-2},~\eqref{orbit-3}). We define \begin{equation*} l_i=\min\{[\frac{p_i^a}{2}],\,a\in I_{m_0}^A\text{ (resp. $I_m^C$, $I_m^D$)}\},\ q_i^a=p_i^a-2l_i,\,a\in I_{m_0}^A\text{ (resp. $I_m^C$, $I_m^D$)},\ i\in[1,s]. \end{equation*} Define \begin{eqnarray*} &\nu=(\lambda_1)^{l_1}\cdots(\lambda_s)^{l_s},\ \mu=(\lambda_1)_1^{q_1^1}\cdots(\lambda_1)^{q_1^{m_0}}_{m_0}\cdots (\lambda_s)_1^{q_s^1}\cdots(\lambda_s)^{q_s^{m_0}}_{m_0}\\ &\text{(resp. $\mu=(\lambda_1)_1^{q_1^1}\cdots(\lambda_1)^{q_1^{m}}_{m}\cdots (\lambda_s)_1^{q_s^1}\cdots(\lambda_s)^{q_s^{m}}_{m}$).} \end{eqnarray*} The map \begin{equation*} \operatorname{IC}({\mathcal O}_\lambda,{\mathbb C})\mapsto\operatorname{IC}(\widecheck{\mathcal O}_{|\nu|,\mu},{\mathcal{L}}_\nu) \end{equation*} defines a bijection between the set ${\mathcal A}_K({\mathcal N}_{-1})$ of simple $K$-equivariant perverse sheaves on ${\mathcal N}_{-1}$ and the set of sheaves in Theorem~\ref{cs}. Thus, to prove Theorem~\ref{cs}, it suffices to show that the sheaves in~\eqref{eqn-csII} are indeed character sheaves. We prove this for type AII. 
The other cases are entirely similar and we omit the details. We have $d_i=d_{m_0+1-i}$ and $d_{l+1}$ is even. Let $\{e_i^j,\,j\in[1,d_i]\}$ be a basis of $M_i$, $i\in[1,m_0]$, such that $(e_i^j,e_s^t)=\delta_{i+s,m_0+1}\delta_{j,t}$, $i\in[1,l]$, and $(e_{l+1}^j,e_{l+1}^t)=\delta_{j+t,d_{l+1}+1}$, $j\in[1,r]$. We first consider the case when $d_i=2r$ for all $i$. We show that \begin{equation}\label{ind1} \{\operatorname{IC}({\mathfrak g}_1^{rs},{\mathcal{L}}_\rho)\mid\rho\in{\mathcal P}(r)\}\subset\operatorname{Char}_{K}({\mathfrak g}_1). \end{equation} Let $P$ be the $\theta$-stable parabolic subgroup that stabilises the flag $0\subset W_{m_0r}=W_{m_0r}^{\perp}\subset V$, where $W_{m_0r}=\operatorname{span}\{e_i^j,\,i\in[1,m_0],\,j\in[1,r]\}$. Let $L$ be the $\theta$-stable Levi subgroup contained in $P$ such that $L\cong S(GL_{W_{m_0r}}\times GL_{U_{m_0r}})$, where $U_{m_0r}=\operatorname{span}\{e_i^j,\,i\in[1,m_0],\,j\in[r+1,2r]\}$. Then $(L^\theta,{\mathfrak{l}}_1)$ can be identified with $(GL_r^{\times m_0},\operatorname{Hom}({\mathbb C}^r,{\mathbb C}^r)^{\oplus m_0})$, that is, with the pair arising from the order $m_0$ inner automorphism of $GL_{m_0r}$ such that $d_i=r$, $i\in[1,m_0]$. Let $\tau\in{\mathcal P}(r)$. Consider the map $\pi:K\times^{P_K}{\mathfrak{p}}_1\to{\mathfrak g}_1$. One checks readily that $\pi$ is surjective and $\pi^{-1}(x)\cong{\mathbb P}_1^r$. Consider the sheaf $\operatorname{IC}({\mathfrak{l}}_1^{rs},{\mathcal T}_{\psi_1,\tau})\in\operatorname{Char}_{L^\theta}({\mathfrak{l}}_1)$. An entirely similar argument as before shows that $\operatorname{Ind}_{{\mathfrak{l}}_1\subset{\mathfrak{p}}_1}^{{\mathfrak g}_1}\operatorname{IC}({\mathfrak{l}}_1^{rs},{\mathcal T}_{\psi_1,\tau})\cong\operatorname{IC}({\mathfrak g}_1^{rs},{\mathcal{L}}_\tau)\oplus\cdots$ (up to shift). This proves~\eqref{ind1}. We now return to the general case and assume that Conjecture~\ref{conj-biorbital} holds. 
Let $P$ be the $\theta$-stable parabolic subgroup that stabilises the flag $0\subset W_{m_0k}\subset W_{m_0k}^{\perp}\subset V$, where $W_{m_0k}=\operatorname{span}\{e_i^j,\,i\in[1,l+1],\,j\in[1,k],\,e_i^j,\,i\in[l+2,m_0],\,j\in[k+1,2k]\}$. Let $L$ be the $\theta$-stable Levi subgroup contained in $P$ such that $L\cong S(GL_{W_{m_0k}}\times GL_{U_{m_0k}}\times GL_{E_{2n-2m_0k}})$, where $U_{m_0k}=\operatorname{span}\{e_i^j,\,i\in[1,l],\,j\in[k+1,2k],\,e_i^j,\,i\in[l+2,m_0],\,j\in[1,k],\,e_{l+1}^j,\,j\in[a_{l+1}+1-k,a_{l+1}]\}$ and $E_{2n-2m_0k}=\operatorname{span}\{e_i^j,\,i\in[1,l]\cup[l+2,m_0],\,j\in[2k+1,d_i],\,e_{l+1}^j,\,j\in[k+1,d_{l+1}-k]\}$. Let $\mu\in{}^0\Sigma_{2n-m_0k}$. Consider the stratum $\widecheck{\mathcal O}_{{\mathfrak{l}}_1}:=\widecheck{\mathcal O}_{1^{2k}_1\cdots 1^{2k}_{m_0}}\times{\mathcal O}_\mu$. We have $\pi_1^{L^\theta}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1})\cong B_{G_{m_0,1,k}}$. Let $\tau\in{\mathcal P}(k)$ and let ${\mathcal F}_\tau$ denote the $L^\theta$-equivariant local system on $\widecheck{\mathcal O}_{{\mathfrak{l}}_1}$ given by the irreducible representation $L_\tau$ of $B_{G_{m_0,1,k}}$. Then $\operatorname{IC}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1},{\mathcal F}_\tau)\in\operatorname{Char}_{L^\theta}({\mathfrak{l}}_1)$ by~\eqref{ind1} and Conjecture~\ref{conj-biorbital}. Consider the map $\pi:K\times^{P_K}(\overline{\widecheck{\mathcal O}_{{\mathfrak{l}}_1}}+({\mathfrak{n}}_P)_1)\to{\mathfrak g}_1$. Let $\widecheck{\mathcal O}=\widecheck{\mathcal O}_{1^{2k}_1\cdots 1^{2k}_{m_0}\sqcup\mu}$. Let $k_i=d_i-2k$. We have $\dim\widecheck{\mathcal O}-\dim\widecheck{\mathcal O}_{{\mathfrak{l}}_1}=m_0k^2+4k\sum_{i=1}^lk_i+2kk_{l+1}-k$, $\dim K/P_K=lk^2+\frac{k^2+k}{2}+2k\sum_{i=1}^lk_i+kk_{l+1}$, and $\dim({\mathfrak{n}}_P)_1=lk^2+\frac{k^2-k}{2}+2k\sum_{i=1}^lk_i+kk_{l+1}$. Moreover, for $x\in\widecheck{\mathcal O}$, $\pi^{-1}(x)\cong{\mathbb P}_1^k$. 
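These counts are consistent with the fiber computation: writing $m_0=2l+1$, one has \begin{eqnarray*} \dim K/P_K+\dim({\mathfrak{n}}_P)_1&=&2lk^2+k^2+4k\sum_{i=1}^lk_i+2kk_{l+1}\\ &=&\bigl(\dim\widecheck{\mathcal O}-\dim\widecheck{\mathcal O}_{{\mathfrak{l}}_1}\bigr)+k, \end{eqnarray*} so that $\dim K\times^{P_K}(\overline{\widecheck{\mathcal O}_{{\mathfrak{l}}_1}}+({\mathfrak{n}}_P)_1)=\dim\widecheck{\mathcal O}+k$, in agreement with the $k$-dimensional fibers ${\mathbb P}_1^k$ over $\widecheck{\mathcal O}$.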
We conclude that $\operatorname{Im}\pi=\overline{\widecheck{\mathcal O}}$ and \begin{equation}\label{eqn-ind3} \operatorname{Ind}_{{\mathfrak{l}}_1\subset{\mathfrak{p}}_1}^{{\mathfrak g}_1}\operatorname{IC}(\widecheck{\mathcal O}_{{\mathfrak{l}}_1},{\mathcal F}_\tau)\cong\operatorname{IC}(\widecheck{\mathcal O}_{k,\mu},{\mathcal{L}}_\tau)\oplus\cdots\text{ (up to shift).} \end{equation} This completes the proof of Theorem~\ref{cs} for type AII. \begin{remark}If the expectation in Remark~\ref{rmk-nil2} holds, then together with~\eqref{eqn-ind3}, it implies that there are no cuspidal character sheaves in type II.\end{remark}